Bootstrapping is a procedure for estimating the distribution of an estimator by resampling (often with replacement) one's data or a model estimated from the data.[1] Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates.[2][3] This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods.[1]

Bootstrapping estimates the properties of an estimand (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution function of the observed data. In the case where a set of observations can be assumed to be from an independent and identically distributed population, this can be implemented by constructing a number of resamples with replacement of the observed data set, each of equal size to the observed data set.

A key result in Efron's seminal paper that introduced the bootstrap[4] is the favorable performance of bootstrap methods using sampling with replacement compared to prior methods like the jackknife that sample without replacement. However, since its introduction, numerous variants on the bootstrap have been proposed, including methods that sample without replacement or that create bootstrap samples larger or smaller than the original data.

The bootstrap may also be used for constructing hypothesis tests.[5] It is often used as an alternative to statistical inference based on the assumption of a parametric model when that assumption is in doubt, or where parametric inference is impossible or requires complicated formulas for the calculation of standard errors.

The bootstrap[a] was first described by Bradley Efron in "Bootstrap methods: another look at the jackknife" (1979),[4] inspired by earlier work on the jackknife.[6][7][8] Improved estimates of the variance were developed later.[9][10] A Bayesian extension was developed in 1981.[11] The bias-corrected and accelerated ($BC_a$) bootstrap was developed by Efron in 1987,[12] and the approximate bootstrap confidence interval (ABC, or approximate $BC_a$) procedure in 1992.[13]

The basic idea of bootstrapping is that inference about a population from sample data (sample → population) can be modeled by resampling the sample data and performing inference about a sample from resampled data (resampled → sample).[14] As the population is unknown, the true error in a sample statistic against its population value is unknown. In bootstrap resamples, the 'population' is in fact the sample, and this is known; hence the quality of inference of the 'true' sample from resampled data (resampled → sample) is measurable.

More formally, the bootstrap works by treating inference of the true probability distribution J, given the original data, as being analogous to inference of the empirical distribution Ĵ, given the resampled data. The accuracy of inferences regarding Ĵ using the resampled data can be assessed because we know Ĵ. If Ĵ is a reasonable approximation to J, then the quality of inference on J can in turn be inferred.

As an example, assume we are interested in the average (or mean) height of people worldwide. We cannot measure all the people in the global population, so instead we sample only a tiny part of it and measure that. Assume the sample is of size N; that is, we measure the heights of N individuals. From that single sample, only one estimate of the mean can be obtained.
In order to reason about the population, we need some sense of the variability of the mean that we have computed. The simplest bootstrap method involves taking the original data set of heights and, using a computer, sampling from it to form a new sample (called a 'resample' or bootstrap sample) that is also of size N. The bootstrap sample is taken from the original by using sampling with replacement (e.g. we might 'resample' 5 times from [1,2,3,4,5] and get [2,5,4,4,1]), so, assuming N is sufficiently large, for all practical purposes there is virtually zero probability that it will be identical to the original "real" sample. This process is repeated a large number of times (typically 1,000 or 10,000 times), and for each of these bootstrap samples, we compute its mean (each of these is called a "bootstrap estimate"). We can now create a histogram of bootstrap means. This histogram provides an estimate of the shape of the distribution of the sample mean, from which we can answer questions about how much the mean varies across samples. (The method here, described for the mean, can be applied to almost any other statistic or estimator; a minimal sketch of the procedure appears below.)

A great advantage of the bootstrap is its simplicity. It is a straightforward way to derive estimates of standard errors and confidence intervals for complex estimators of the distribution, such as percentile points, proportions, odds ratios, and correlation coefficients. However, despite its simplicity, bootstrapping can be applied to complex sampling designs (e.g. for a population divided into s strata with $n_s$ observations per stratum, one example of which is a dose-response experiment, where bootstrapping can be applied for each stratum).[15] The bootstrap is also an appropriate way to control and check the stability of the results. Although for most problems it is impossible to know the true confidence interval, the bootstrap is asymptotically more accurate than the standard intervals obtained using sample variance and assumptions of normality.[16] Bootstrapping is also a convenient method that avoids the cost of repeating the experiment to get other groups of sample data.

Bootstrapping depends heavily on the estimator used and, though simple, naive use of bootstrapping will not always yield asymptotically valid results and can lead to inconsistency.[17] Although bootstrapping is (under some conditions) asymptotically consistent, it does not provide general finite-sample guarantees. The result may depend on the representativeness of the sample. The apparent simplicity may conceal the fact that important assumptions are being made when undertaking the bootstrap analysis (e.g. independence of samples or a large enough sample size), where these would be more formally stated in other approaches. Also, bootstrapping can be time-consuming, and there is not much available software for bootstrapping, as it is difficult to automate using traditional statistical computer packages.[15]

Scholars have recommended more bootstrap samples as available computing power has increased. If the results may have substantial real-world consequences, then one should use as many samples as is reasonable, given available computing power and time. Increasing the number of samples cannot increase the amount of information in the original data; it can only reduce the effects of random sampling errors which can arise from the bootstrap procedure itself.
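As a concrete illustration, the following is a minimal sketch of the procedure just described, written in Python with NumPy; the sample of heights is simulated, and the function name is ours, not part of any standard package.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_means(data, n_resamples=10_000):
    """Return bootstrap estimates of the mean from n_resamples resamples."""
    data = np.asarray(data)
    n = len(data)
    # Each resample is drawn with replacement and has the same size as the data.
    resamples = rng.choice(data, size=(n_resamples, n), replace=True)
    return resamples.mean(axis=1)

heights = rng.normal(170, 10, size=50)        # toy "observed" sample
boot = bootstrap_means(heights)

se = boot.std(ddof=1)                          # bootstrap standard error
ci = np.percentile(boot, [2.5, 97.5])          # 95% percentile interval
print(f"standard error ≈ {se:.3f}, 95% CI ≈ {ci}")
```

Because each resample is drawn independently, the number of resamples trades computing time against Monte Carlo error, which connects to the sample-count recommendations discussed next.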
Moreover, there is evidence that numbers of samples greater than 100 lead to negligible improvements in the estimation of standard errors.[18] In fact, according to the original developer of the bootstrapping method, even setting the number of samples at 50 is likely to lead to fairly good standard error estimates.[19]

Adèr et al. recommend the bootstrap procedure in a number of situations.[20] However, Athreya has shown[21] that if one performs a naive bootstrap on the sample mean when the underlying population lacks a finite variance (for example, a power law distribution), then the bootstrap distribution will not converge to the same limit as the sample mean. As a result, confidence intervals based on a Monte Carlo simulation of the bootstrap could be misleading. Athreya states that "Unless one is reasonably sure that the underlying distribution is not heavy tailed, one should hesitate to use the naive bootstrap".

In univariate problems, it is usually acceptable to resample the individual observations with replacement ("case resampling" below), unlike subsampling, in which resampling is without replacement and is valid under much weaker conditions than the bootstrap. In small samples, a parametric bootstrap approach might be preferred. For other problems, a smooth bootstrap will likely be preferred. For regression problems, various other alternatives are available.[2]

The bootstrap is generally useful for estimating the distribution of a statistic (e.g. mean, variance) without using normality assumptions (as required, e.g., for a z-statistic or a t-statistic). In particular, the bootstrap is useful when there is no analytical form or an asymptotic theory (e.g., an applicable central limit theorem) to help estimate the distribution of the statistic of interest. This is because bootstrap methods can apply to most random quantities, e.g., the ratio of variance and mean. There are at least two ways of performing case resampling.

Consider a coin-flipping experiment. We flip the coin and record whether it lands heads or tails. Let X = $x_1, x_2, \ldots, x_{10}$ be 10 observations from the experiment, with $x_i = 1$ if the i-th flip lands heads, and 0 otherwise. By invoking the assumption that the average of the coin flips is normally distributed, we can use the t-statistic to estimate the distribution of the sample mean. Such a normality assumption can be justified either as an approximation of the distribution of each individual coin flip or as an approximation of the distribution of the average of a large number of coin flips. The former is a poor approximation because the true distribution of the coin flips is Bernoulli instead of normal. The latter is a valid approximation in infinitely large samples due to the central limit theorem.

However, if we are not ready to make such a justification, then we can use the bootstrap instead. Using case resampling, we can derive the distribution of $\bar{x}$. We first resample the data to obtain a bootstrap resample. An example of the first resample might look like this: $X_1^* = x_2, x_1, x_{10}, x_{10}, x_3, x_4, x_6, x_7, x_1, x_9$. There are some duplicates, since a bootstrap resample comes from sampling with replacement from the data. Also, the number of data points in a bootstrap resample is equal to the number of data points in our original observations. Then we compute the mean of this resample and obtain the first bootstrap mean: $\mu_1^*$. We repeat this process to obtain the second resample $X_2^*$ and compute the second bootstrap mean $\mu_2^*$.
If we repeat this 100 times, then we have $\mu_1^*, \mu_2^*, \ldots, \mu_{100}^*$. This represents an empirical bootstrap distribution of the sample mean. From this empirical distribution, one can derive a bootstrap confidence interval for the purpose of hypothesis testing.

In regression problems, case resampling refers to the simple scheme of resampling individual cases – often rows of a data set. For regression problems, as long as the data set is fairly large, this simple scheme is often acceptable.[citation needed] However, the method is open to criticism.[citation needed][15] In regression problems, the explanatory variables are often fixed, or at least observed with more control than the response variable. Also, the range of the explanatory variables defines the information available from them. Therefore, to resample cases means that each bootstrap sample will lose some information. As such, alternative bootstrap procedures should be considered.

Bootstrapping can be interpreted in a Bayesian framework using a scheme that creates new data sets through reweighting the initial data. Given a set of $N$ data points, the weighting assigned to data point $i$ in a new data set $\mathcal{D}^J$ is $w_i^J = x_i^J - x_{i-1}^J$, where $\mathbf{x}^J$ is a low-to-high ordered list of $N-1$ uniformly distributed random numbers on $[0,1]$, preceded by 0 and succeeded by 1. The distributions of a parameter inferred from considering many such data sets $\mathcal{D}^J$ are then interpretable as posterior distributions on that parameter.[23]

Under the smooth bootstrap scheme, a small amount of (usually normally distributed) zero-centered random noise is added onto each resampled observation. This is equivalent to sampling from a kernel density estimate of the data. Assume K to be a symmetric kernel density function with unit variance. The standard kernel estimator $\hat{f}_h(x)$ of $f(x)$ is $$\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h}\right),$$ where $h$ is the smoothing parameter. The corresponding distribution function estimator $\hat{F}_h(x)$ is $$\hat{F}_h(x) = \int_{-\infty}^{x} \hat{f}_h(t)\,dt.$$

In the parametric bootstrap, based on the assumption that the original data set is a realization of a random sample from a distribution of a specific parametric type, a parametric model with parameter θ is fitted to the data, often by maximum likelihood, and samples of random numbers are drawn from this fitted model. Usually the sample drawn has the same sample size as the original data. Then the estimate of the original function F can be written as $\hat{F} = F_{\hat{\theta}}$. This sampling process is repeated many times as for other bootstrap methods. Considering the centered sample mean in this case, the random sample original distribution function $F_\theta$ is replaced by a bootstrap random sample with function $F_{\hat{\theta}}$, and the probability distribution of $\bar{X_n} - \mu_\theta$ is approximated by that of $\bar{X}_n^* - \mu^*$, where $\mu^* = \mu_{\hat{\theta}}$, the expectation corresponding to $F_{\hat{\theta}}$.[25] The use of a parametric model at the sampling stage of the bootstrap methodology leads to procedures which are different from those obtained by applying basic statistical theory to inference for the same model.
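The following is a minimal sketch of a parametric bootstrap under the assumption of a normal model; the data are simulated and the normal choice is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=40)   # toy observed sample

# Fit a parametric (normal) model by maximum likelihood.
mu_hat, sigma_hat = data.mean(), data.std()      # MLEs under a normal model

# Draw bootstrap samples from the fitted model F_theta-hat rather than
# from the empirical distribution of the data.
n_resamples = 10_000
boot = rng.normal(mu_hat, sigma_hat, size=(n_resamples, len(data)))

# Distribution of the centered sample mean under the fitted model.
centered_means = boot.mean(axis=1) - mu_hat
print("parametric-bootstrap SE of the mean:", centered_means.std(ddof=1))
```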
Another approach to bootstrapping in regression problems is to resample residuals. The method proceeds as follows.

1. Fit the model and retain the fitted values $\hat{y}_i$ and the residuals $\hat{\varepsilon}_i = y_i - \hat{y}_i$.
2. For each pair $(x_i, y_i)$, in which $x_i$ is the (possibly multivariate) explanatory variable, add a randomly resampled residual $\hat{\varepsilon}_j$ to the fitted value $\hat{y}_i$, creating the synthetic response $y_i^* = \hat{y}_i + \hat{\varepsilon}_j$.
3. Refit the model using the synthetic responses $y_i^*$ and retain the quantities of interest.
4. Repeat steps 2 and 3 a large number of times.

This scheme has the advantage that it retains the information in the explanatory variables. However, a question arises as to which residuals to resample. Raw residuals are one option; another is studentized residuals (in linear regression). Although there are arguments in favor of using studentized residuals, in practice it often makes little difference, and it is easy to compare the results of both schemes. When data are temporally correlated, straightforward bootstrapping destroys the inherent correlations; block methods, described further below, are then more appropriate.

The Gaussian process regression bootstrap uses Gaussian process regression (GPR) to fit a probabilistic model from which replicates may then be drawn. GPR is a Bayesian non-linear regression method. A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian (normal) distribution. A GP is defined by a mean function and a covariance function, which specify the mean vectors and covariance matrices for each finite collection of the random variables.[26]

Regression model: $y(x) = f(x) + \varepsilon$, where $\varepsilon$ is observation noise with variance $\sigma^2$.

Gaussian process prior: for any finite collection of variables $x_1, \ldots, x_n$, the function outputs $f(x_1), \ldots, f(x_n)$ are jointly distributed according to a multivariate Gaussian with mean $m = [m(x_1), \ldots, m(x_n)]^\intercal$ and covariance matrix $(K)_{ij} = k(x_i, x_j)$. Assume $f(x) \sim \mathcal{GP}(m, k)$. Then $y(x) \sim \mathcal{GP}(m, l)$, where $l(x_i, x_j) = k(x_i, x_j) + \sigma^2 \delta(x_i, x_j)$, and $\delta(x_i, x_j)$ is the standard Kronecker delta function.[26]

Gaussian process posterior: according to the GP prior, the observed outputs satisfy $[y_1, \ldots, y_r]^\intercal \sim \mathcal{N}(m_0, K_0)$, where $m_0 = [m(x_1), \ldots, m(x_r)]^\intercal$ and $(K_0)_{ij} = k(x_i, x_j) + \sigma^2 \delta(x_i, x_j)$. Let $x_1^*, \ldots, x_s^*$ be another finite collection of variables; the function outputs at these points are jointly Gaussian with the observed outputs, with $m_* = [m(x_1^*), \ldots, m(x_s^*)]^\intercal$, $(K_{**})_{ij} = k(x_i^*, x_j^*)$, and $(K_*)_{ij} = k(x_i, x_j^*)$. According to the equations above, the outputs y are also jointly distributed according to a multivariate Gaussian. Thus, $$[f(x_1^*), \ldots, f(x_s^*)]^\intercal \mid y \sim \mathcal{N}(m_{\text{post}}, K_{\text{post}}),$$ where $y = [y_1, \ldots, y_r]^\intercal$, $m_{\text{post}} = m_* + K_*^\intercal (K_0 + \sigma^2 I_r)^{-1}(y - m_0)$, $K_{\text{post}} = K_{**} - K_*^\intercal (K_0 + \sigma^2 I_r)^{-1} K_*$, and $I_r$ is the $r \times r$ identity matrix.[26]

The wild bootstrap, proposed originally by Wu (1986),[27] is suited when the model exhibits heteroskedasticity. The idea is, as with the residual bootstrap, to leave the regressors at their sample values, but to resample the response variable based on the residual values. That is, for each replicate, one computes a new $y$ based on $$y_i^* = \hat{y}_i + \hat{\varepsilon}_i v_i,$$ so the residuals are randomly multiplied by a random variable $v_i$ with mean 0 and variance 1. For most distributions of $v_i$ (but not Mammen's), this method assumes that the 'true' residual distribution is symmetric, and it can offer advantages over simple residual sampling for smaller sample sizes.
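A minimal sketch of the wild bootstrap for a simple linear model follows, using Rademacher-distributed $v_i$ (one common choice, discussed next); the data and model are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy heteroskedastic data: noise variance grows with x.
n = 100
x = np.linspace(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=0.2 * (1 + x), size=n)

# Fit the original model and keep fitted values and residuals.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
resid = y - fitted

# Wild bootstrap: regressors stay fixed; each residual is multiplied by an
# independent v_i with mean 0 and variance 1 (Rademacher draws used here).
n_resamples = 5_000
betas = np.empty((n_resamples, 2))
for b in range(n_resamples):
    v = rng.choice([-1.0, 1.0], size=n)          # Rademacher weights
    y_star = fitted + resid * v                  # y*_i = yhat_i + eps_i * v_i
    betas[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

print("wild-bootstrap SEs for (intercept, slope):", betas.std(axis=0, ddof=1))
```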
Different forms are used for the random variable $v_i$, such as the standard normal distribution, the Rademacher distribution (the values ±1, each with probability 1/2), or Mammen's two-point distribution ($v_i = -(\sqrt{5}-1)/2$ with probability $(\sqrt{5}+1)/(2\sqrt{5})$ and $v_i = (\sqrt{5}+1)/2$ otherwise).

The block bootstrap is used when the data, or the errors in a model, are correlated. In this case, simple case or residual resampling will fail, as it is not able to replicate the correlation in the data. The block bootstrap tries to replicate the correlation by resampling inside blocks of data (see Blocking (statistics)). The block bootstrap has been used mainly with data correlated in time (i.e. time series) but can also be used with data correlated in space, or among groups (so-called cluster data).

In the (simple) block bootstrap, the variable of interest is split into non-overlapping blocks. In the moving block bootstrap, introduced by Künsch (1989),[29] the data is split into n − b + 1 overlapping blocks of length b: observations 1 to b form block 1, observations 2 to b + 1 form block 2, etc. From these n − b + 1 blocks, n/b blocks are drawn at random with replacement, and aligning these n/b blocks in the order they were picked gives the bootstrap observations (a sketch appears below). This bootstrap works with dependent data; however, the bootstrapped observations will no longer be stationary by construction. It was shown that varying the block length randomly can avoid this problem.[30] This method is known as the stationary bootstrap. Other related modifications of the moving block bootstrap are the Markovian bootstrap and a stationary bootstrap method that matches subsequent blocks based on standard deviation matching.

Vinod (2006)[31] presents a method that bootstraps time series data using maximum entropy principles satisfying the ergodic theorem with mean-preserving and mass-preserving constraints. There is an R package, meboot,[32] that utilizes the method, which has applications in econometrics and computer science.

Cluster data describes data where many observations per unit are observed. This could be observing many firms in many states or observing students in many classes. In such cases, the correlation structure is simplified, and one usually makes the assumption that data are correlated within a group/cluster but independent between groups/clusters. The structure of the block bootstrap is easily obtained (where the block just corresponds to the group), and usually only the groups are resampled, while the observations within the groups are left unchanged. Cameron et al. (2008) discuss this for clustered errors in linear regression.[33]

The bootstrap is a powerful technique, although it may require substantial computing resources in both time and memory. Some techniques have been developed to reduce this burden. They can generally be combined with many of the different types of bootstrap schemes and various choices of statistics. Most bootstrap methods are embarrassingly parallel algorithms: the statistic of interest for each bootstrap sample does not depend on other bootstrap samples. Such computations can therefore be performed on separate CPUs or compute nodes, with the results from the separate nodes eventually aggregated for final analysis.
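Below is a minimal sketch of the moving block bootstrap on a simulated autocorrelated series; the block length and the AR(1) example are illustrative choices, and the function name is ours.

```python
import numpy as np

rng = np.random.default_rng(3)

def moving_block_bootstrap(series, block_length, n_resamples=1_000):
    """Resample a time series with the moving block bootstrap."""
    series = np.asarray(series)
    n = len(series)
    b = block_length
    # All n - b + 1 overlapping blocks of length b.
    blocks = np.array([series[i:i + b] for i in range(n - b + 1)])
    k = n // b                                   # blocks needed per resample
    out = np.empty((n_resamples, k * b))
    for r in range(n_resamples):
        picks = rng.integers(0, len(blocks), size=k)
        out[r] = np.concatenate(blocks[picks])   # align blocks in drawn order
    return out

# Toy AR(1) series with serial correlation.
n = 200
eps = rng.normal(size=n)
y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + eps[t]

boot = moving_block_bootstrap(y, block_length=10)
print("bootstrap SE of the mean under dependence:", boot.mean(axis=1).std(ddof=1))
```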
The nonparametric bootstrap samples items from a list of size n with counts drawn from a multinomial distribution. If $W_i$ denotes the number of times element i is included in a given bootstrap sample, then each $W_i$ is distributed as a binomial distribution with n trials and mean 1, but $W_i$ is not independent of $W_j$ for $i \neq j$.

The Poisson bootstrap instead draws samples assuming all $W_i$'s are independently and identically distributed as Poisson variables with mean 1. The rationale is that the limit of the binomial distribution is Poisson: $$\lim_{n \to \infty} \operatorname{Binomial}(n, 1/n) = \operatorname{Poisson}(1).$$

The Poisson bootstrap had been proposed by Hanley and MacGibbon as potentially useful for non-statisticians using software like SAS and SPSS, which lacked the bootstrap packages of the R and S-Plus programming languages.[34] The same authors report that for large enough n, the results are relatively similar to the nonparametric bootstrap estimates, but they go on to note that the Poisson bootstrap has seen minimal use in applications. Another proposed advantage of the Poisson bootstrap is that the independence of the $W_i$ makes the method easier to apply for large datasets that must be processed as streams (a sketch appears at the end of this section).[35]

A way to improve on the Poisson bootstrap, termed the "sequential bootstrap", is to take the first samples so that the proportion of unique values is ≈0.632 of the original sample size n. This provides a distribution with main empirical characteristics being within a distance of $O(n^{3/4})$.[36] Empirical investigation has shown this method can yield good results.[37] This is related to the reduced bootstrap method.[38]

For massive data sets, it is often computationally prohibitive to hold all the sample data in memory and resample from the sample data. The Bag of Little Bootstraps (BLB)[39] provides a method of pre-aggregating data before bootstrapping to reduce computational constraints. This works by partitioning the data set into $b$ equal-sized buckets and aggregating the data within each bucket. This pre-aggregated data set becomes the new sample data over which to draw samples with replacement. This method is similar to the block bootstrap, but the motivations and definitions of the blocks are very different. Under certain assumptions, the sample distribution should approximate the full bootstrapped scenario. One constraint is the number of buckets $b = n^\gamma$, where $\gamma \in [0.5, 1]$, and the authors recommend usage of $b = n^{0.7}$ as a general solution.
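As an illustration of the streaming use case mentioned above for the Poisson bootstrap, the following is a minimal one-pass sketch under the assumption that the statistic of interest is the mean; the data stream is simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

def poisson_bootstrap_means(stream, n_resamples=1_000):
    """One-pass Poisson bootstrap of the mean over a data stream."""
    sums = np.zeros(n_resamples)
    counts = np.zeros(n_resamples)
    for x in stream:                             # each item is seen exactly once
        w = rng.poisson(1.0, size=n_resamples)   # i.i.d. Poisson(1) weights
        sums += w * x
        counts += w
    return sums / counts

data = rng.exponential(scale=3.0, size=10_000)   # toy "stream"
boot_means = poisson_bootstrap_means(data)
print("Poisson-bootstrap SE of the mean:", boot_means.std(ddof=1))
```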
The bootstrap distribution of a point estimator of a population parameter has been used to produce a bootstrapped confidence interval for the parameter's true value if the parameter can be written as a function of the population's distribution.

Population parameters are estimated with many point estimators. Popular families of point estimators include mean-unbiased minimum-variance estimators, median-unbiased estimators, Bayesian estimators (for example, the posterior distribution's mode, median, or mean), and maximum-likelihood estimators. A Bayesian point estimator and a maximum-likelihood estimator have good performance when the sample size is infinite, according to asymptotic theory. For practical problems with finite samples, other estimators may be preferable. Asymptotic theory suggests techniques that often improve the performance of bootstrapped estimators; the bootstrapping of a maximum-likelihood estimator may often be improved using transformations related to pivotal quantities.[40]

The bootstrap distribution of a parameter estimator is often used to calculate confidence intervals for its population parameter.[2] A variety of methods for constructing the confidence intervals have been proposed, although there is disagreement about which method is best. The survey of bootstrap confidence interval methods by DiCiccio and Efron and the consequent discussion list several desired properties of confidence intervals, which generally are not all simultaneously met. There are several methods for constructing confidence intervals from the bootstrap distribution of a real parameter.

Efron and Tibshirani[2] suggest an algorithm for comparing the means of two independent samples: let $x_1, \ldots, x_n$ be a random sample from distribution F with sample mean $\bar{x}$ and sample variance $\sigma_x^2$, and let $y_1, \ldots, y_m$ be another, independent random sample from distribution G with mean $\bar{y}$ and variance $\sigma_y^2$.

In 1878, Simon Newcomb took observations on the speed of light.[46] The data set contains two outliers, which greatly influence the sample mean. (The sample mean need not be a consistent estimator for any population mean, because no mean need exist for a heavy-tailed distribution.) A well-defined and robust statistic for the central tendency is the sample median, which is consistent and median-unbiased for the population median.

We can reduce the discreteness of the bootstrap distribution by adding a small amount of random noise to each bootstrap sample. A conventional choice is to add noise with a standard deviation of $\sigma/\sqrt{n}$ for a sample size n; this noise is often drawn from a Student-t distribution with $n-1$ degrees of freedom.[47] This results in an approximately unbiased estimator for the variance of the sample mean.[48] This means that samples taken from the bootstrap distribution will have a variance which is, on average, equal to the variance of the total population. The bootstrap distribution of the sample median has only a small number of values, whereas the smoothed bootstrap distribution has a richer support. However, note that whether the smoothed or standard bootstrap procedure is favorable is case-by-case and is shown to depend on both the underlying distribution function and on the quantity being estimated.[49] In this example, the bootstrapped 95% (percentile) confidence interval for the population median is (26, 28.5), which is close to the interval (25.98, 28.46) for the smoothed bootstrap.

The bootstrap is distinguished from related resampling procedures such as the jackknife. Bootstrap aggregating (bagging) is a meta-algorithm based on averaging model predictions obtained from models trained on multiple bootstrap samples.

In situations where an obvious statistic can be devised to measure a required characteristic using only a small number, r, of data items, a corresponding statistic based on the entire sample can be formulated. Given an r-sample statistic, one can create an n-sample statistic by something similar to bootstrapping (taking the average of the statistic over all subsamples of size r). This procedure is known to have certain good properties, and the result is a U-statistic. The sample mean and sample variance are of this form, for r = 1 and r = 2.
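As an illustration of this construction, the following sketch computes the sample variance (r = 2) as a U-statistic by averaging a two-sample kernel over all pairs; the kernel is the standard one for the variance, but the function name is ours.

```python
import numpy as np
from itertools import combinations

def u_statistic(data, kernel, r):
    """Average an r-sample kernel over all size-r subsamples (a U-statistic)."""
    vals = [kernel(*subset) for subset in combinations(data, r)]
    return np.mean(vals)

rng = np.random.default_rng(5)
data = rng.normal(size=30)

# The kernel (x - y)^2 / 2 is unbiased for the variance, so averaging it
# over all pairs reproduces the unbiased sample variance (r = 2).
var_u = u_statistic(data, lambda x, y: 0.5 * (x - y) ** 2, r=2)
print(var_u, np.var(data, ddof=1))   # the two values agree
```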
The bootstrap has, under certain conditions, desirable asymptotic properties. The asymptotic properties most often described are weak convergence/consistency of the sample paths of the bootstrap empirical process and the validity of confidence intervals derived from the bootstrap. This section describes the convergence of the empirical bootstrap; it summarizes more complete descriptions of stochastic convergence in van der Vaart and Wellner[50] and Kosorok.[51]

The bootstrap defines a stochastic process, a collection of random variables indexed by some set $T$, where $T$ is typically the real line ($\mathbb{R}$) or a family of functions. Processes of interest are those with bounded sample paths, i.e., sample paths in L-infinity ($\ell^\infty(T)$), the set of all uniformly bounded functions from $T$ to $\mathbb{R}$. When equipped with the uniform distance, $\ell^\infty(T)$ is a metric space, and when $T = \mathbb{R}$, two subspaces of $\ell^\infty(T)$ are of particular interest: $C[0,1]$, the space of all continuous functions from $T$ to the unit interval [0,1], and $D[0,1]$, the space of all cadlag functions from $T$ to [0,1]. This is because $C[0,1]$ contains the distribution functions for all continuous random variables, and $D[0,1]$ contains the distribution functions for all random variables. Statements about the consistency of the bootstrap are statements about the convergence of the sample paths of the bootstrap process as random elements of the metric space $\ell^\infty(T)$ or some subspace thereof, especially $C[0,1]$ or $D[0,1]$.

Horowitz in a recent review[1] defines consistency as: the bootstrap estimator $G_n(\cdot, F_n)$ is consistent [for a statistic $T_n$] if, for each $F_0$, $\sup_\tau |G_n(\tau, F_n) - G_\infty(\tau, F_0)|$ converges in probability to 0 as $n \to \infty$, where $F_n$ is the distribution of the statistic of interest in the original sample, $F_0$ is the true but unknown distribution of the statistic, $G_\infty(\tau, F_0)$ is the asymptotic distribution function of $T_n$, and $\tau$ is the indexing variable in the distribution function, i.e., $P(T_n \leq \tau) = G_n(\tau, F_0)$. This is sometimes more specifically called consistency relative to the Kolmogorov–Smirnov distance.[52]

Horowitz goes on to recommend using a theorem from Mammen[53] that provides easier-to-check necessary and sufficient conditions for consistency for statistics of a certain common form. In particular, let $\{X_i : i = 1, \ldots, n\}$ be the random sample. If $$T_n = \frac{\sum_{i=1}^{n} g_n(X_i) - t_n}{\sigma_n}$$ for sequences of numbers $t_n$ and $\sigma_n$, then the bootstrap estimate of the cumulative distribution function estimates the empirical cumulative distribution function if and only if $T_n$ converges in distribution to the standard normal distribution.

Convergence in (outer) probability as described above is also called weak consistency. It can also be shown, with slightly stronger assumptions, that the bootstrap is strongly consistent, where convergence in (outer) probability is replaced by convergence (outer) almost surely. When only one type of consistency is described, it is typically weak consistency.
This is adequate for most statistical applications since it implies confidence bands derived from the bootstrap are asymptotically valid.[51]

In simpler cases, it is possible to use the central limit theorem directly to show the consistency of the bootstrap procedure for estimating the distribution of the sample mean. Specifically, consider $X_{n1}, \ldots, X_{nn}$ independent identically distributed random variables with $\mathbb{E}[X_{n1}] = \mu$ and $\operatorname{Var}[X_{n1}] = \sigma^2 < \infty$ for each $n \geq 1$. Let $\bar{X}_n = n^{-1}(X_{n1} + \cdots + X_{nn})$. In addition, for each $n \geq 1$, conditional on $X_{n1}, \ldots, X_{nn}$, let $X_{n1}^*, \ldots, X_{nn}^*$ be independent random variables with distribution equal to the empirical distribution of $X_{n1}, \ldots, X_{nn}$. This is the sequence of bootstrap samples. Then it can be shown that $$\sup_{\tau \in \mathbb{R}} \left| P^*\!\left( \frac{\sqrt{n}\,(\bar{X}_n^* - \bar{X}_n)}{\hat{\sigma}_n} \leq \tau \right) - P\!\left( \frac{\sqrt{n}\,(\bar{X}_n - \mu)}{\sigma} \leq \tau \right) \right| \to 0 \text{ in probability as } n \to \infty,$$ where $P^*$ represents probability conditional on $X_{n1}, \ldots, X_{nn}$, $n \geq 1$, $\bar{X}_n^* = n^{-1}(X_{n1}^* + \cdots + X_{nn}^*)$, and $\hat{\sigma}_n^2 = n^{-1} \sum_{i=1}^{n} (X_{ni} - \bar{X}_n)^2$. To see this, note that $(X_{ni}^* - \bar{X}_n)/\sqrt{n}\hat{\sigma}_n$ satisfies the Lindeberg condition, so the CLT holds.[54]

The Glivenko–Cantelli theorem provides theoretical background for the bootstrap method.

Finite populations and drawing without replacement require adaptations of the bootstrap due to the violation of the i.i.d. assumption. One example is the "population bootstrap".[55]
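The convergence displayed above can be checked numerically; the sketch below (assuming SciPy is available) compares the bootstrap distribution of the studentized mean with the standard normal for a simulated exponential sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Monte Carlo check: the bootstrap distribution of
# sqrt(n) * (mean* - mean) / sigma-hat should approach N(0, 1).
n = 500
x = rng.exponential(scale=2.0, size=n)    # any finite-variance population
sigma_hat = x.std()                        # sqrt of n^{-1} sum (x_i - xbar)^2

n_resamples = 20_000
resamples = rng.choice(x, size=(n_resamples, n), replace=True)
t_star = np.sqrt(n) * (resamples.mean(axis=1) - x.mean()) / sigma_hat

# Kolmogorov–Smirnov distance to the standard normal; small for large n.
print(stats.kstest(t_star, "norm"))
```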
https://en.wikipedia.org/wiki/Bootstrapping_(statistics)
In artificial intelligence, eager learning is a learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed to lazy learning, where generalization beyond the training data is delayed until a query is made to the system.[1] The main advantage gained in employing an eager learning method, such as an artificial neural network, is that the target function will be approximated globally during training, thus requiring much less space than a lazy learning system. Eager learning systems also deal much better with noise in the training data. Eager learning is an example of offline learning, in which post-training queries to the system have no effect on the system itself, and thus the same query to the system will always produce the same result. The main disadvantage of eager learning is that it is generally unable to provide good local approximations in the target function.[2]
https://en.wikipedia.org/wiki/Eager_learning
In mathematics, in the area of complex analysis, Nachbin's theorem (named after Leopoldo Nachbin) is a result used to establish bounds on the growth rates for analytic functions. In particular, Nachbin's theorem may be used to give the domain of convergence of the generalized Borel transform, also called Nachbin summation.

This article provides a brief review of growth rates, including the idea of a function of exponential type. Classification of growth rates based on type helps provide a finer tool than big O or Landau notation, since a number of theorems about the analytic structure of the bounded function and its integral transforms can be stated.

A function $f(z)$ defined on the complex plane is said to be of exponential type if there exist constants $M$ and $\alpha$ such that $$|f(re^{i\theta})| \leq M e^{\alpha r}$$ in the limit of $r \to \infty$. Here, the complex variable $z$ was written as $z = re^{i\theta}$ to emphasize that the limit must hold in all directions $\theta$. Letting $\alpha$ stand for the infimum of all such $\alpha$, one then says that the function $f$ is of exponential type $\alpha$.

For example, let $f(z) = \sin(\pi z)$. Then one says that $\sin(\pi z)$ is of exponential type $\pi$, since $\pi$ is the smallest number that bounds the growth of $\sin(\pi z)$ along the imaginary axis. So, for this example, Carlson's theorem cannot apply, as it requires functions of exponential type less than $\pi$.

Additional function types may be defined for other bounding functions besides the exponential function. In general, a function $\Psi(t)$ is a comparison function if it has a series $$\Psi(t) = \sum_{n=0}^{\infty} \Psi_n t^n$$ with $\Psi_n > 0$ for all $n$, and $$\lim_{n \to \infty} \frac{\Psi_{n+1}}{\Psi_n} = 0.$$ Comparison functions are necessarily entire, which follows from the ratio test. If $\Psi(t)$ is such a comparison function, one then says that $f$ is of $\Psi$-type if there exist constants $M$ and $\tau$ such that $$|f(re^{i\theta})| \leq M \Psi(\tau r)$$ as $r \to \infty$. If $\tau$ is the infimum of all such $\tau$, one says that $f$ is of $\Psi$-type $\tau$.

Nachbin's theorem states that a function $f(z)$ with the series $$f(z) = \sum_{n=0}^{\infty} a_n z^n$$ is of $\Psi$-type $\tau$ if and only if $$\limsup_{n \to \infty} \left| \frac{a_n}{\Psi_n} \right|^{1/n} = \tau.$$ This is naturally connected to the root test and can be considered a relative of the Cauchy–Hadamard theorem.

Nachbin's theorem has immediate applications in Cauchy theorem-like situations and for integral transforms. For example, the generalized Borel transform is given by $$F(w) = \sum_{n=0}^{\infty} \frac{a_n}{\Psi_n} \frac{1}{w^{n+1}}.$$ If $f$ is of $\Psi$-type $\tau$, then the exterior of the domain of convergence of $F(w)$, and all of its singular points, are contained within the disk $$|w| \leq \tau.$$ Furthermore, one has $$f(z) = \frac{1}{2\pi i} \oint_\gamma \Psi(zw)\, F(w)\, dw,$$ where the contour of integration $\gamma$ encircles the disk $|w| \leq \tau$. This generalizes the usual Borel transform for functions of exponential type, where $\Psi(t) = e^t$; a worked example for the exponential case appears below. The integral form for the generalized Borel transform follows as well.
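As a sanity check of the statements above (not part of the original article), consider the exponential case $\Psi(t) = e^t$, so that $\Psi_n = 1/n!$, applied to $f(z) = e^{\beta z}$ with $\beta > 0$. The coefficients give

$$a_n = \frac{\beta^n}{n!}, \qquad \limsup_{n\to\infty}\left|\frac{a_n}{\Psi_n}\right|^{1/n} = \limsup_{n\to\infty}\left(\beta^n\right)^{1/n} = \beta,$$

so Nachbin's criterion recovers the exponential type $\tau = \beta$. The generalized Borel transform is then a geometric series,

$$F(w) = \sum_{n=0}^{\infty} \frac{\beta^n}{w^{n+1}} = \frac{1}{w - \beta}, \qquad |w| > \beta,$$

whose only singular point $w = \beta$ indeed lies in the disk $|w| \leq \tau$, and the contour integral

$$\frac{1}{2\pi i} \oint_\gamma e^{zw}\, \frac{dw}{w - \beta} = e^{\beta z} = f(z)$$

recovers $f$ by the residue theorem.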
Let $\alpha(t)$ be a function whose first derivative is bounded on the interval $[0, \infty)$ and that satisfies the defining equation relating it to the comparison function $\Psi$, where $d\alpha(t) = \alpha'(t)\,dt$. Then the integral form of the generalized Borel transform can be written in terms of $\alpha$. The ordinary Borel transform is regained by setting $\alpha(t) = -e^{-t}$. Note that the integral form of the Borel transform is the Laplace transform.

Nachbin summation can be used to sum divergent series that Borel summation does not, for instance to asymptotically solve integral equations of the form $$g(s) = s \int_0^\infty K(st)\, f(t)\, dt,$$ where $g(s) = \sum_{n=0}^{\infty} a_n s^{-n}$, $f(t)$ may or may not be of exponential type, and the kernel $K(u)$ has a Mellin transform. The solution can be obtained using Nachbin summation as $$f(x) = \sum_{n=0}^{\infty} \frac{a_n}{M(n+1)} x^n,$$ with the $a_n$ from $g(s)$ and with $M(n)$ the Mellin transform of $K(u)$. An example of this is the Gram series $$\pi(x) \approx 1 + \sum_{n=1}^{\infty} \frac{\log^n(x)}{n \cdot n!\, \zeta(n+1)}.$$ In some cases, as an extra condition, we require $\int_0^\infty K(t)\, t^n\, dt$ to be finite and nonzero for $n = 0, 1, 2, 3, \ldots$

Collections of functions of exponential type $\tau$ can form a complete uniform space, namely a Fréchet space, with the topology induced by the countable family of norms $$\|f\|_n = \sup_{z} |f(z)| \exp\!\left[-\left(\tau + \frac{1}{n}\right)|z|\right].$$
https://en.wikipedia.org/wiki/Nachbin%27s_theorem
In database technologies, a rollback is an operation which returns the database to some previous state. Rollbacks are important for database integrity, because they mean that the database can be restored to a clean copy even after erroneous operations are performed.[1] They are crucial for recovering from database server crashes; by rolling back any transaction which was active at the time of the crash, the database is restored to a consistent state. The rollback feature is usually implemented with a transaction log, but can also be implemented via multiversion concurrency control.

A cascading rollback occurs in database systems when a transaction (T1) causes a failure and a rollback must be performed. Other transactions dependent on T1's actions must also be rolled back due to T1's failure, thus causing a cascading effect: one transaction's failure causes many to fail. Practical database recovery techniques guarantee cascadeless rollback, so a cascading rollback is not a desirable result.

In SQL (Structured Query Language), ROLLBACK is a command that causes all data changes since the last START TRANSACTION or BEGIN to be discarded by the relational database management system (RDBMS), so that the state of the data is "rolled back" to the way it was before those changes were made.[2] A ROLLBACK statement will also release any existing savepoints that may be in use. In most SQL dialects, ROLLBACKs are connection specific: if two connections are made to the same database, a ROLLBACK made in one connection will not affect any other connections. This is vital for proper concurrency. A minimal example appears below.

Rollbacks are not exclusive to databases: any stateful distributed system may use rollback operations to maintain consistency. Examples of distributed systems that can support rollbacks include message queues and workflow management systems. More generally, any operation that resets a system to its previous state before another operation or series of operations can be viewed as a rollback.
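The following is a minimal sketch of this behavior using Python's built-in sqlite3 module; the table and the insufficient-funds check are invented for illustration.

```python
import sqlite3

# Demonstration of transaction rollback using Python's built-in sqlite3 module.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()

try:
    conn.execute("UPDATE accounts SET balance = balance - 150 "
                 "WHERE name = 'alice'")
    (balance,) = conn.execute(
        "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
    if balance < 0:
        raise ValueError("insufficient funds")
    conn.commit()
except ValueError:
    conn.rollback()   # discard all changes since the last commit

(balance,) = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
print(balance)        # 100: the erroneous update was rolled back
```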
https://en.wikipedia.org/wiki/Rollback_(data_management)
In statistics, the uncertainty coefficient, also called proficiency, entropy coefficient or Theil's U, is a measure of nominal association. It was first introduced by Henri Theil[citation needed] and is based on the concept of information entropy.

Suppose we have samples of two discrete random variables, X and Y. By constructing the joint distribution, $P_{X,Y}(x, y)$, from which we can calculate the conditional distributions, $P_{X|Y}(x|y) = P_{X,Y}(x, y)/P_Y(y)$ and $P_{Y|X}(y|x) = P_{X,Y}(x, y)/P_X(x)$, and calculating the various entropies, we can determine the degree of association between the two variables. The entropy of a single distribution is given as[1] $$H(X) = -\sum_x P_X(x) \log P_X(x),$$ while the conditional entropy is given as[1] $$H(X|Y) = -\sum_{x,y} P_{X,Y}(x, y) \log P_{X|Y}(x|y).$$ The uncertainty coefficient[2] or proficiency[3] is defined as $$U(X|Y) = \frac{H(X) - H(X|Y)}{H(X)} = \frac{I(X;Y)}{H(X)}$$ and tells us: given Y, what fraction of the bits of X can we predict? In this case we can think of X as containing the total information, and of Y as allowing one to predict part of such information.

The above expression makes clear that the uncertainty coefficient is a normalised mutual information I(X;Y). In particular, the uncertainty coefficient ranges in [0, 1], as $I(X;Y) \leq H(X)$ and both $I(X;Y)$ and $H(X)$ are positive or null. Note that the value of U (but not H!) is independent of the base of the log, since all logarithms are proportional.

The uncertainty coefficient is useful for measuring the validity of a statistical classification algorithm and has the advantage over simpler accuracy measures such as precision and recall in that it is not affected by the relative fractions of the different classes, i.e., P(x).[4] It also has the unique property that it won't penalize an algorithm for predicting the wrong classes, so long as it does so consistently (i.e., it simply rearranges the classes). This is useful in evaluating clustering algorithms, since cluster labels typically have no particular ordering.[3]

The uncertainty coefficient is not symmetric with respect to the roles of X and Y. The roles can be reversed, and a symmetrical measure can thus be defined as a weighted average between the two:[2] $$U(X, Y) = \frac{H(X)\,U(X|Y) + H(Y)\,U(Y|X)}{H(X) + H(Y)} = 2\,\frac{I(X;Y)}{H(X) + H(Y)}.$$ Although normally applied to discrete variables, the uncertainty coefficient can be extended to continuous variables[1] using density estimation.[citation needed]
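A small sketch of this computation from a contingency table follows; the table of counts is invented, and the helper names are ours.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (zero terms contribute zero)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def theils_u(joint):
    """Uncertainty coefficient U(X|Y) from a joint contingency table.

    joint[i, j] holds counts for X = i, Y = j.
    """
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)
    h_x = entropy(p_x)
    # Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y).
    i_xy = h_x + entropy(p_y) - entropy(p_xy.ravel())
    return i_xy / h_x

# Toy contingency table of counts; knowing Y clearly helps predict X here.
counts = np.array([[30.0, 5.0],
                   [4.0, 40.0]])
print(theils_u(counts))   # a value in [0, 1]
```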
https://en.wikipedia.org/wiki/Uncertainty_coefficient
An Internet Experiment Note (IEN) is a sequentially numbered document in a series of technical publications issued by the participants of the early development work groups that created the precursors of the modern Internet.

After DARPA began the Internet program in earnest in 1977, the project members were in need of communication and documentation of their work in order to realize the concepts laid out by Bob Kahn and Vint Cerf some years before. The Request for Comments (RFC) series was considered the province of the ARPANET project and the Network Working Group (NWG), which defined the network protocols used on it. Thus, the members of the Internet project decided on publishing their own series of documents, Internet Experiment Notes, which were modeled after the RFCs.[1][2]

Jon Postel became the editor of the new series, in addition to his existing role of administering the long-standing RFC series. Between March 1977 and September 1982, 206 IENs were published. After that, with the plan to terminate support of the Network Control Protocol (NCP) on the ARPANET and switch to TCP/IP, the production of IENs was discontinued, and all further publication was conducted within the existing RFC system.[3][2]

The second, third and fourth versions of TCP, including the split into TCP/IP, were developed during the IEN work.[4][5][6] The "Final Report" of the "TCP Project" mentions some of the people involved, including groups from Stanford University, University College London, USC-ISI, MIT, BBN, and NDRE, among others.[7] Key networking principles, such as the robustness principle, were defined during the IEN work.[8]
https://en.wikipedia.org/wiki/Internet_Experiment_Note
Ridge regression (also known as Tikhonov regularization, named for Andrey Tikhonov) is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated.[1] It has been used in many fields including econometrics, chemistry, and engineering.[2] It is a method of regularization of ill-posed problems.[a] It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters.[3] In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff).[4]

The theory was first introduced by Hoerl and Kennard in 1970 in their Technometrics papers "Ridge regressions: biased estimation of nonorthogonal problems" and "Ridge regressions: applications in nonorthogonal problems".[5][6][1] Ridge regression was developed as a possible solution to the imprecision of least squares estimators when linear regression models have some multicollinear (highly correlated) independent variables, by creating a ridge regression estimator (RR). This provides a more precise estimate of the ridge parameters, as its variance and mean square estimator are often smaller than the least squares estimators previously derived.[7][2]

In the simplest case, the problem of a near-singular moment matrix $\mathbf{X}^\mathsf{T}\mathbf{X}$ is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by $$\hat{\boldsymbol{\beta}}_R = \left(\mathbf{X}^\mathsf{T}\mathbf{X} + \lambda \mathbf{I}\right)^{-1} \mathbf{X}^\mathsf{T}\mathbf{y},$$ where $\mathbf{y}$ is the regressand, $\mathbf{X}$ is the design matrix, $\mathbf{I}$ is the identity matrix, and the ridge parameter $\lambda \geq 0$ serves as the constant shifting the diagonals of the moment matrix.[8] A sketch of this estimator appears below.

It can be shown that this estimator is the solution to the least squares problem subject to the constraint $\boldsymbol{\beta}^\mathsf{T}\boldsymbol{\beta} = c$, which can be expressed as a Lagrangian minimization: $$\hat{\boldsymbol{\beta}}_R = \operatorname{argmin}_{\boldsymbol{\beta}}\, \left(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\right)^\mathsf{T}\left(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\right) + \lambda\left(\boldsymbol{\beta}^\mathsf{T}\boldsymbol{\beta} - c\right),$$ which shows that $\lambda$ is nothing but the Lagrange multiplier of the constraint.[9] In fact, there is a one-to-one relationship between $c$ and $\lambda$, and since, in practice, we do not know $c$, we define $\lambda$ heuristically or find it via additional data-fitting strategies; see Determination of the Tikhonov factor. Note that, when $\lambda = 0$, in which case the constraint is non-binding, the ridge estimator reduces to ordinary least squares. A more general approach to Tikhonov regularization is discussed below.
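A minimal sketch of the simple ridge estimator follows, on a deliberately collinear toy design; the function name and data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

def ridge(X, y, lam):
    """Simple ridge estimator (X'X + lam I)^{-1} X'y via a linear solve."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Toy design with two nearly collinear columns.
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # highly correlated with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(size=n)

print("OLS  (lam=0):", ridge(X, y, 0.0))   # unstable under collinearity
print("ridge(lam=1):", ridge(X, y, 1.0))   # shrunk, stabler coefficients
```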
Tikhonov regularization was invented independently in many different contexts. It became widely known through its application to integral equations in the works of Andrey Tikhonov[10][11][12][13][14] and David L. Phillips.[15] Some authors use the term Tikhonov–Phillips regularization. The finite-dimensional case was expounded by Arthur E. Hoerl, who took a statistical approach,[16] and by Manus Foster, who interpreted this method as a Wiener–Kolmogorov (Kriging) filter.[17] Following Hoerl, it is known in the statistical literature as ridge regression,[18] named after ridge analysis ("ridge" refers to the path from the constrained maximum).[19]

Suppose that for a known real matrix $A$ and vector $\mathbf{b}$, we wish to find a vector $\mathbf{x}$ such that $$A\mathbf{x} = \mathbf{b},$$ where $\mathbf{x}$ and $\mathbf{b}$ may be of different sizes and $A$ may be non-square. The standard approach is ordinary least squares linear regression.[clarification needed] However, if no $\mathbf{x}$ satisfies the equation or more than one $\mathbf{x}$ does (that is, the solution is not unique), the problem is said to be ill posed. In such cases, ordinary least squares estimation leads to an overdetermined, or more often an underdetermined, system of equations. Most real-world phenomena have the effect of low-pass filters[clarification needed] in the forward direction where $A$ maps $\mathbf{x}$ to $\mathbf{b}$. Therefore, in solving the inverse problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues/singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of $\mathbf{x}$ that is in the null space of $A$, rather than allowing for a model to be used as a prior for $\mathbf{x}$.

Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as $$\left\|A\mathbf{x} - \mathbf{b}\right\|_2^2,$$ where $\|\cdot\|_2$ is the Euclidean norm. In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization: $$\left\|A\mathbf{x} - \mathbf{b}\right\|_2^2 + \left\|\Gamma\mathbf{x}\right\|_2^2 = \left\|\begin{pmatrix}A\\\Gamma\end{pmatrix}\mathbf{x} - \begin{pmatrix}\mathbf{b}\\\mathbf{0}\end{pmatrix}\right\|_2^2$$ for some suitably chosen Tikhonov matrix $\Gamma$. In many cases, this matrix is chosen as a scalar multiple of the identity matrix ($\Gamma = \alpha I$), giving preference to solutions with smaller norms; this is known as L2 regularization.[20] In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a direct numerical solution.
An explicit solution, denoted by $\hat{\mathbf{x}}$, is given by $$\hat{\mathbf{x}} = \left(A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma\right)^{-1} A^\mathsf{T}\mathbf{b} = \left(\begin{pmatrix}A\\\Gamma\end{pmatrix}^\mathsf{T}\begin{pmatrix}A\\\Gamma\end{pmatrix}\right)^{-1}\begin{pmatrix}A\\\Gamma\end{pmatrix}^\mathsf{T}\begin{pmatrix}\mathbf{b}\\\mathbf{0}\end{pmatrix}.$$ The effect of regularization may be varied by the scale of matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least-squares solution, provided that $(A^\mathsf{T}A)^{-1}$ exists. Note that in the case of a complex matrix $A$, as usual the transpose $A^\mathsf{T}$ has to be replaced by the Hermitian transpose $A^\mathsf{H}$.

L2 regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines,[21] and matrix factorization.[22]

Since Tikhonov regularization simply adds a quadratic term to the objective function in optimization problems, it is possible to do so after the unregularised optimisation has taken place. E.g., if the above problem with $\Gamma = 0$ yields the solution $\hat{\mathbf{x}}_0$, the solution in the presence of $\Gamma \neq 0$ can be expressed as $$\hat{\mathbf{x}} = B\hat{\mathbf{x}}_0,$$ with the "regularisation matrix" $B = \left(A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma\right)^{-1} A^\mathsf{T}A$ (a numerical check of this identity appears below). If the parameter fit comes with a covariance matrix of the estimated parameter uncertainties $V_0$, then the regularisation matrix is $$B = \left(V_0^{-1} + \Gamma^\mathsf{T}\Gamma\right)^{-1} V_0^{-1},$$ and the regularised result has a new covariance $$V = B V_0 B^\mathsf{T}.$$ In the context of arbitrary likelihood fits, this is valid as long as the quadratic approximation of the likelihood function is valid. This means that, as long as the perturbation from the unregularised result is small, one can regularise any result that is presented as a best-fit point with a covariance matrix. No detailed knowledge of the underlying likelihood function is needed.[23]
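A quick numerical sanity check (not from the article) of the post-hoc identity $\hat{\mathbf{x}} = B\hat{\mathbf{x}}_0$ follows; the random problem dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

# Check that post-hoc regularisation x_hat = B @ x_hat0 matches the
# direct Tikhonov solution.
m, p, alpha = 50, 4, 0.8
A = rng.normal(size=(m, p))
b = rng.normal(size=m)
Gamma = alpha * np.eye(p)

x_hat0 = np.linalg.solve(A.T @ A, A.T @ b)                # unregularised
x_direct = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)

B = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ A)   # regularisation matrix
x_posthoc = B @ x_hat0

print(np.allclose(x_direct, x_posthoc))                   # True
```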
For general multivariate normal distributions for $\mathbf{x}$ and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an $\mathbf{x}$ to minimize $$\left\|A\mathbf{x} - \mathbf{b}\right\|_P^2 + \left\|\mathbf{x} - \mathbf{x}_0\right\|_Q^2,$$ where we have used $\left\|\mathbf{x}\right\|_Q^2$ to stand for the weighted norm squared $\mathbf{x}^\mathsf{T} Q \mathbf{x}$ (compare with the Mahalanobis distance). In the Bayesian interpretation, $P$ is the inverse covariance matrix of $\mathbf{b}$, $\mathbf{x}_0$ is the expected value of $\mathbf{x}$, and $Q$ is the inverse covariance matrix of $\mathbf{x}$. The Tikhonov matrix is then given as a factorization of the matrix $Q = \Gamma^\mathsf{T}\Gamma$ (e.g. the Cholesky factorization) and is considered a whitening filter.

This generalized problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula $$\mathbf{x}^* = \left(A^\mathsf{T}PA + Q\right)^{-1}\left(A^\mathsf{T}P\mathbf{b} + Q\mathbf{x}_0\right),$$ or equivalently, when $Q$ is not a null matrix: $$\mathbf{x}^* = \mathbf{x}_0 + \left(A^\mathsf{T}PA + Q\right)^{-1}\left(A^\mathsf{T}P\left(\mathbf{b} - A\mathbf{x}_0\right)\right).$$

In some situations, one can avoid using the transpose $A^\mathsf{T}$, as proposed by Mikhail Lavrentyev.[24] For example, if $A$ is symmetric positive definite, i.e. $A = A^\mathsf{T} > 0$, so is its inverse $A^{-1}$, which can thus be used to set up the weighted norm squared $\left\|\mathbf{x}\right\|_P^2 = \mathbf{x}^\mathsf{T} A^{-1} \mathbf{x}$ in the generalized Tikhonov regularization, leading to minimizing $$\left\|A\mathbf{x} - \mathbf{b}\right\|_{A^{-1}}^2 + \left\|\mathbf{x} - \mathbf{x}_0\right\|_Q^2$$ or, equivalently up to a constant term, $$\mathbf{x}^\mathsf{T}\left(A + Q\right)\mathbf{x} - 2\,\mathbf{x}^\mathsf{T}\left(\mathbf{b} + Q\mathbf{x}_0\right).$$ This minimization problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula $$\mathbf{x}^* = \left(A + Q\right)^{-1}\left(\mathbf{b} + Q\mathbf{x}_0\right),$$ which is nothing but the solution of the generalized Tikhonov problem where $A = A^\mathsf{T} = P^{-1}$. The Lavrentyev regularization, if applicable, is advantageous to the original Tikhonov regularization, since the Lavrentyev matrix $A + Q$ can be better conditioned, i.e., have a smaller condition number, compared to the Tikhonov matrix $A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma$.

Typically discrete linear ill-conditioned problems result from discretization of integral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpret $A$ as a compact operator on Hilbert spaces, and $x$ and $b$ as elements in the domain and range of $A$. The operator $A^*A + \Gamma^\mathsf{T}\Gamma$ is then a self-adjoint bounded invertible operator.

With $\Gamma = \alpha I$, this least-squares solution can be analyzed in a special way using the singular-value decomposition. Given the singular value decomposition $$A = U \Sigma V^\mathsf{T}$$ with singular values $\sigma_i$, the Tikhonov regularized solution can be expressed as $$\hat{x} = V D U^\mathsf{T} b,$$ where $D$ has diagonal values $$D_{ii} = \frac{\sigma_i}{\sigma_i^2 + \alpha^2}$$ and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem.
For the generalized case, a similar representation can be derived using ageneralized singular-value decomposition.[25] Finally, it is related to theWiener filter:x^=∑i=1qfiuiTbσivi,{\displaystyle {\hat {x}}=\sum _{i=1}^{q}f_{i}{\frac {u_{i}^{\mathsf {T}}b}{\sigma _{i}}}v_{i},}where the Wiener weights arefi=σi2σi2+α2{\displaystyle f_{i}={\frac {\sigma _{i}^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}}andq{\displaystyle q}is therankofA{\displaystyle A}. The optimal regularization parameterα{\displaystyle \alpha }is usually unknown and often in practical problems is determined by anad hocmethod. A possible approach relies on the Bayesian interpretation described below. Other approaches include thediscrepancy principle,cross-validation,L-curve method,[26]restricted maximum likelihoodandunbiased predictive risk estimator.Grace Wahbaproved that the optimal parameter, in the sense ofleave-one-out cross-validationminimizes[27][28]G=RSSτ2=‖Xβ^−y‖2[Tr⁡(I−X(XTX+α2I)−1XT)]2,{\displaystyle G={\frac {\operatorname {RSS} }{\tau ^{2}}}={\frac {\left\|X{\hat {\beta }}-y\right\|^{2}}{\left[\operatorname {Tr} \left(I-X\left(X^{\mathsf {T}}X+\alpha ^{2}I\right)^{-1}X^{\mathsf {T}}\right)\right]^{2}}},}whereRSS{\displaystyle \operatorname {RSS} }is theresidual sum of squares, andτ{\displaystyle \tau }is theeffective number of degrees of freedom. Using the previous SVD decomposition, we can simplify the above expression:RSS=‖y−∑i=1q(ui′b)ui‖2+‖∑i=1qα2σi2+α2(ui′b)ui‖2,{\displaystyle \operatorname {RSS} =\left\|y-\sum _{i=1}^{q}(u_{i}'b)u_{i}\right\|^{2}+\left\|\sum _{i=1}^{q}{\frac {\alpha ^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}(u_{i}'b)u_{i}\right\|^{2},}RSS=RSS0+‖∑i=1qα2σi2+α2(ui′b)ui‖2,{\displaystyle \operatorname {RSS} =\operatorname {RSS} _{0}+\left\|\sum _{i=1}^{q}{\frac {\alpha ^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}(u_{i}'b)u_{i}\right\|^{2},}andτ=m−∑i=1qσi2σi2+α2=m−q+∑i=1qα2σi2+α2.{\displaystyle \tau =m-\sum _{i=1}^{q}{\frac {\sigma _{i}^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}=m-q+\sum _{i=1}^{q}{\frac {\alpha ^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}.} The probabilistic formulation of aninverse problemintroduces (when all uncertainties are Gaussian) a covariance matrixCM{\displaystyle C_{M}}representing thea prioriuncertainties on the model parameters, and a covariance matrixCD{\displaystyle C_{D}}representing the uncertainties on the observed parameters.[29]In the special case when these two matrices are diagonal and isotropic,CM=σM2I{\displaystyle C_{M}=\sigma _{M}^{2}I}andCD=σD2I{\displaystyle C_{D}=\sigma _{D}^{2}I}, and, in this case, the equations of inverse theory reduce to the equations above, withα=σD/σM{\displaystyle \alpha ={\sigma _{D}}/{\sigma _{M}}}.[30][31] Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrixΓ{\displaystyle \Gamma }seems rather arbitrary, the process can be justified from aBayesian point of view.[32]Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, theprior probabilitydistribution ofx{\displaystyle x}is sometimes taken to be amultivariate normal distribution.[33]For simplicity here, the following assumptions are made: the means are zero; their components are independent; the components have the samestandard deviationσx{\displaystyle \sigma _{x}}. The data are also subject to errors, and the errors inb{\displaystyle b}are also assumed to beindependentwith zero mean and standard deviationσb{\displaystyle \sigma _{b}}. 
Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of x, according to Bayes' theorem.[34] If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimum-variance linear unbiased estimator.[35]
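As a sketch of the parameter-selection discussion above, the following example (NumPy assumed, synthetic data) scans a grid of α values and picks the minimizer of Wahba's generalized cross-validation function G, using the SVD simplifications of RSS and τ given earlier; the problem data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 10
X = rng.normal(size=(m, n))
y = X @ rng.normal(size=n) + 0.5 * rng.normal(size=m)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
ub = U.T @ y                      # the projections u_i^T y
rss0 = y @ y - ub @ ub            # unregularized residual ||y - sum (u_i^T y) u_i||^2

def gcv(alpha):
    # RSS added by regularization, and effective degrees of freedom tau,
    # using the SVD simplifications from the text.
    f = alpha**2 / (s**2 + alpha**2)
    rss = rss0 + np.sum((f * ub) ** 2)
    tau = m - np.sum(s**2 / (s**2 + alpha**2))
    return rss / tau**2

alphas = np.logspace(-3, 2, 200)
best = alphas[np.argmin([gcv(a) for a in alphas])]
print(f"GCV-selected alpha ~ {best:.4g}")
```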
https://en.wikipedia.org/wiki/Tikhonov_regularization
In linguistics, a bound morpheme is a morpheme (the elementary unit of morphosyntax) that can appear only as part of a larger expression, while a free morpheme (or unbound morpheme) is one that can stand alone.[1] A bound morpheme is a type of bound form, and a free morpheme is a type of free form.[2] A form is a free form if it can occur in isolation as a complete utterance, e.g. Johnny is running, or Johnny, or running (this can occur as the answer to a question such as What is he doing?).[3] A form that cannot occur in isolation is a bound form, e.g. -y, is, and -ing (in Johnny is running). Non-occurrence in isolation is given as the primary criterion for boundness in most linguistics textbooks.[4] Affixes are bound by definition.[5] English language affixes are almost exclusively prefixes or suffixes: pre- in "precaution" and -ment in "shipment". Affixes may be inflectional, indicating how a certain word relates to other words in a larger phrase, or derivational, changing either the part of speech or the actual meaning of a word.[6] Most roots in English are free morphemes (e.g. examin- in examination, which can occur in isolation: examine), but others are bound (e.g. bio- in biology). Words like chairman that contain two free morphemes (chair and man) are referred to as compound words.[7] Cranberry morphemes are a special form of bound morpheme whose independent meaning has been displaced and serves only to distinguish one word from another, as in cranberry, in which the free morpheme berry is preceded by the bound morpheme cran-, meaning "crane" from the earlier name for the berry, "crane berry".[8] An empty morpheme is a special type of bound morpheme with no inherent meaning. Empty morphemes change the phonetics of a word but offer no semantic value to the word as a whole.[9] Examples: Words can be formed purely from bound morphemes, as in English permit, ultimately from Latin per "through" + mittō "I send", where per- and -mit are bound morphemes in English. However, such words are often thought of as simply a single morpheme. (The bound prefix per- in permit is distinct from the free-standing preposition per, as in "twice per day", which is a separate, free morpheme.) A similar example is given in Chinese; most of its morphemes are monosyllabic and identified with a Chinese character because of the largely morphosyllabic script, but disyllabic words exist that cannot be analyzed into independent morphemes, such as 蝴蝶 húdié 'butterfly'. Then, the individual syllables and corresponding characters are used only in that word, and while they can be interpreted as bound morphemes 蝴 hú- and 蝶 -dié, it is more commonly considered a single disyllabic morpheme. See polysyllabic Chinese morphemes for further discussion. Linguists usually distinguish between productive and unproductive forms when speaking about morphemes. For example, the morpheme ten- in tenant was originally derived from the Latin word tenere, "to hold", and the same basic meaning is seen in such words as "tenable" and "intention". But as ten- is not used in English to form new words, most linguists would not consider it to be a morpheme at all. A language with a very low morpheme-to-word ratio is an isolating language. Because such a language uses few bound morphemes, it expresses most grammatical relationships by word order or helper words, so it is an analytic language. In contrast, a language that uses a substantial number of bound morphemes to express grammatical relationships is a synthetic language.
https://en.wikipedia.org/wiki/Bound_morpheme
Consensus-based assessmentexpands on the common practice ofconsensus decision-makingand the theoretical observation that expertise can be closely approximated by large numbers of novices or journeymen. It creates a method for determiningmeasurement standardsfor very ambiguous domains of knowledge, such asemotional intelligence, politics, religion, values and culture in general. From this perspective, the shared knowledge that forms cultural consensus can be assessed in much the same way as expertise or general intelligence. Consensus-based assessment is based on a simple finding: that samples of individuals with differing competence (e.g., experts and apprentices) rate relevant scenarios, usingLikert scales, with similar mean ratings. Thus, from the perspective of a CBA framework, cultural standards for scoring keys can be derived from the population that is being assessed. Peter Legree and Joseph Psotka, working together over the past decades, proposed thatpsychometricgcould be measured unobtrusively through survey-like scales requiring judgments. This could either use the deviation score for each person from the group or expert mean; or aPearson correlationbetween their judgments and the group mean. The two techniques are perfectly correlated. Legree and Psotka subsequently created scales that requested individuals to estimate word frequency; judge binary probabilities of good continuation; identify knowledge implications; and approximate employment distributions. The items were carefully identified to avoid objective referents, and therefore the scales required respondents to provide judgments that were scored against broadly developed, consensual standards. Performance on this judgment battery correlated approximately 0.80 with conventional measures of psychometricg. The response keys were consensually derived. Unlike mathematics or physics questions, the selection of items, scenarios, and options to assess psychometricgwere guided roughly by a theory that emphasized complex judgment, but the explicit keys were unknown until the assessments had been made: they were determined by the average of everyone's responses, using deviation scores, correlations, or factor scores. One way to understand the connection between expertise and consensus is to consider that for many performance domains, expertise largely reflects knowledge derived from experience. Since novices tend to have fewer experiences, their opinions err in various inconsistent directions. However, as experience is acquired, the opinions of journeymen through to experts become more consistent. According to this view, errors are random. Ratings data collected from large samples of respondents of varying expertise can thus be used to approximate the average ratings a substantial number of experts would provide were many experts available. Because the standard deviation of a mean will approach zero as the number of observations becomes very large, estimates based on groups of varying competence will provide converging estimates of the best performance standards. The means of these groups’ responses can be used to create effective scoringrubrics, or measurement standards to evaluate performance. This approach is particularly relevant to scoring subjective areas of knowledge that are scaled using Likert response scales, and the approach has been applied to develop scoring standards for several domains where experts are scarce. 
In practice, analyses have demonstrated high levels of convergence between expert and CBA standards with values quantifying those standards highly correlated (PearsonRs ranging from .72 to .95), and with scores based on those standards also highly correlated (Rs ranging from .88 to .99) provided the sample size of both groups is large (Legree, Psotka, Tremble & Bourne, 2005). This convergence between CBA and expert referenced scores and the associated validity data indicate that CBA and expert based scoring can be used interchangeably, provided that the ratings data are collected using large samples of experts and novices or journeymen. CBA is often computed by using the PearsonRcorrelation of each person'sLikert scalejudgments across a set of items against the mean of all people's judgments on those same items. The correlation is then a measure of that person's proximity to the consensus. It is also sometimes computed as a standardized deviation score from the consensus means of the groups. These two procedures are mathematically isomorphic. If culture is considered to be shared knowledge; and the mean of the group's ratings on a focused domain of knowledge is considered a measure of the cultural consensus in that domain; then both procedures assess CBA as a measure of an individual person's cultural understanding. However, it may be that the consensus is not evenly distributed over all subordinate items about a topic. Perhaps the knowledge content of the items is distributed over domains with differing consensus. For instance, conservatives who are libertarians may feel differently about invasion of privacy than conservatives who feel strongly about law and order. In fact, standardfactor analysisbrings this issue to the fore. In either centroid orprincipal components analysis(PCA) the first factor scores are created by multiplying each rating by the correlation of the factor (usually the mean of all standardized ratings for each person) against each item's ratings. This multiplication weights each item by the correlation of the pattern of individual differences on each item (the component scores). If consensus is unevenly distributed over these items, some items may be more focused on the overall issues of the common factor. If an item correlates highly with the pattern of overall individual differences, then it is weighted more strongly in the overall factor scores. This weighting implicitly also weights the CBA score, since it is those items that share a common CBA pattern of consensus that are weighted more in factor analysis. The transposed orQ methodologyfactor analysis, created byWilliam Stephenson (psychologist)brings this relationship out explicitly. CBA scores are statistically isomorphic to the component scores in PCA for a Q factor analysis. They are the loading of each person's responses on the mean of all people's responses. So, Q factor analysis may provide a superior CBA measure, if it can be used first to select the people who represent the dominant dimension, over items that best represent a subordinate attribute dimension of a domain (such as liberalism in a political domain). Factor analysis can then provide the CBA of individuals along that particular axis of the domain. In practice, when items are not easily created and arrayed to provide a highly reliable scale, the Q factor analysis is not necessary, since the original factor analysis should also select those items that have a common consensus. 
So, for instance, in a scale of items for political attitudes, the items may ask about attitudes toward big government; law and order; economic issues; labor issues; or libertarian issues. Which of these items most strongly bear on the political attitudes of the groups polled may be difficult to determine a priori. However, since factor analysis is a symmetric computation on the matrix of items and people, the original factor analysis of items, (when these are Likert scales) selects not just those items that are in a similar domain, but more generally, those items that have a similar consensus. The added advantage of this factor analytic technique is that items are automatically arranged along a factor so that the highest Likert ratings are also the highest CBA standard scores. Once selected, that factor determines the CBA (component) scores. The most common critique of CBA standards is to question how an average could possibly be a maximal standard. This critique argues that CBA is unsuitable for maximum-performance tests of psychological attributes, especially intelligence. Even so, CBA techniques are routinely employed in various measures of non-traditional intelligences (e.g., practical, emotional, social, etc.). Detailed critiques are presented in Gottfredson (2003) and MacCann, Roberts, Matthews, & Zeidner (2004) as well as elsewhere in the scientific literature.
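A minimal sketch of the scoring procedures described above, assuming NumPy and an invented matrix of Likert ratings: each respondent's CBA score is the Pearson correlation of their ratings with the group mean, and the first principal component of the centered ratings supplies the item weighting discussed in connection with factor analysis. The generative model for the synthetic data is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: 200 respondents rate 30 scenarios on a 1-7 Likert scale.
consensus = rng.uniform(1, 7, size=30)            # latent cultural standard
skill = rng.uniform(0.2, 1.0, size=(200, 1))      # mixing weight toward consensus
noise = rng.uniform(1, 7, size=(200, 30))
ratings = np.clip(skill * consensus + (1 - skill) * noise, 1, 7)

# CBA score: Pearson correlation of each person's ratings with the group mean.
group_mean = ratings.mean(axis=0)
zr = (ratings - ratings.mean(axis=1, keepdims=True)) / ratings.std(axis=1, keepdims=True)
zm = (group_mean - group_mean.mean()) / group_mean.std()
cba_scores = zr @ zm / ratings.shape[1]

# Item weights from the first principal component of the centered ratings:
# items sharing the dominant consensus pattern receive larger weights.
_, _, Vt = np.linalg.svd(ratings - group_mean, full_matrices=False)
item_weights = Vt[0]

print(cba_scores[:5])
print(item_weights[:5])
```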
https://en.wikipedia.org/wiki/Consensus_based_assessment
Rough fuzzy hybridization is a method of hybrid intelligent system or soft computing, in which fuzzy set theory is used for linguistic representation of patterns, leading to a fuzzy granulation of the feature space. Rough set theory is used to obtain dependency rules which model informative regions in the granulated feature space.
https://en.wikipedia.org/wiki/Rough_fuzzy_hybridization
Knowledge representation (KR) aims to model information in a structured manner to formally represent it as knowledge in knowledge-based systems. Knowledge representation and reasoning (KRR, KR&R, or KR²) extends this aim to understanding, reasoning over, and interpreting knowledge. KRR is widely used in the field of artificial intelligence (AI) with the goal of representing information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or having a natural-language dialog. KR incorporates findings from psychology[1] about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. KRR also incorporates findings from logic to automate various kinds of reasoning. Traditional KRR focuses more on the declarative representation of knowledge. Related knowledge representation formalisms mainly include vocabularies, thesauri, semantic networks, axiom systems, frames, rules, logic programs, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, model generators, and classifiers. In a broader sense, parameterized models in machine learning—including neural network architectures such as convolutional neural networks and transformers—can also be regarded as a family of knowledge representation formalisms. The question of which formalism is most appropriate for knowledge-based systems has long been a subject of extensive debate. For instance, Frank van Harmelen et al. discussed the suitability of logic as a knowledge representation formalism and reviewed arguments presented by anti-logicists.[2] Paul Smolensky criticized the limitations of symbolic formalisms and explored the possibilities of integrating them with connectionist approaches.[3] More recently, Heng Zhang et al. have demonstrated that all universal (or equally expressive and natural) knowledge representation formalisms are recursively isomorphic.[4] This finding indicates a theoretical equivalence among mainstream knowledge representation formalisms with respect to their capacity for supporting artificial general intelligence (AGI). They further argue that while diverse technical approaches may draw insights from one another via recursive isomorphisms, the fundamental challenges remain inherently shared. The earliest work in computerized knowledge representation was focused on general problem-solvers such as the General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon in 1959 and the Advice Taker proposed by John McCarthy also in 1959. GPS featured data structures for planning and decomposition. The system would begin with a goal. It would then decompose that goal into sub-goals and then set out to construct strategies that could accomplish each subgoal. The Advice Taker, on the other hand, proposed the use of the predicate calculus to implement common sense reasoning. Many of the early approaches to knowledge representation in Artificial Intelligence (AI) used graph representations and semantic networks, similar to knowledge graphs today. In such approaches, problem solving was a form of graph traversal[5] or path-finding, as in the A* search algorithm. Typical applications included robot plan-formation and game-playing. Other researchers focused on developing automated theorem-provers for first-order logic, motivated by the use of mathematical logic to formalise mathematics and to automate the proof of mathematical theorems.
A major step in this direction was the development of theresolution methodbyJohn Alan Robinson. In the meanwhile, John McCarthy andPat Hayesdeveloped thesituation calculusas a logical representation of common sense knowledge about the laws of cause and effect.Cordell Green, in turn, showed how to do robot plan-formation by applying resolution to the situation calculus. He also showed how to use resolution forquestion-answeringandautomatic programming.[6] In contrast, researchers at Massachusetts Institute of Technology (MIT) rejected the resolution uniform proof procedure paradigm and advocated the procedural embedding of knowledge instead.[7]The resulting conflict between the use of logical representations and the use of procedural representations was resolved in the early 1970s with the development oflogic programmingandProlog, usingSLD resolutionto treatHorn clausesas goal-reduction procedures. The early development of logic programming was largely a European phenomenon. In North America, AI researchers such asEd FeigenbaumandFrederick Hayes-Rothadvocated the representation of domain-specific knowledge rather than general-purpose reasoning.[8] These efforts led to thecognitive revolutionin psychology and to the phase of AI focused on knowledge representation that resulted inexpert systemsin the 1970s and 80s,production systems,frame languages, etc. Rather than general problem solvers, AI changed its focus to expert systems that could match human competence on a specific task, such as medical diagnosis.[9] Expert systems gave us the terminology still in use today where AI systems are divided into aknowledge base, which includes facts and rules about a problem domain, and aninference engine, which applies the knowledge in theknowledge baseto answer questions and solve problems in the domain. In these early systems the facts in the knowledge base tended to be a fairly flat structure, essentially assertions about the values of variables used by the rules.[10] Meanwhile,Marvin Minskydeveloped the concept offramein the mid-1970s.[11]A frame is similar to an object class: It is an abstract description of a category describing things in the world, problems, and potential solutions. Frames were originally used on systems geared toward human interaction, e.g.understanding natural languageand the social settings in which various default expectations such as ordering food in a restaurant narrow the search space and allow the system to choose appropriate responses to dynamic situations. It was not long before the frame communities and the rule-based researchers realized that there was a synergy between their approaches. Frames were good for representing the real world, described as classes, subclasses, slots (data values) with various constraints on possible values. Rules were good for representing and utilizing complex logic such as the process to make a medical diagnosis. Integrated systems were developed that combined frames and rules. One of the most powerful and well known was the 1983Knowledge Engineering Environment(KEE) fromIntellicorp. KEE had a complete rule engine withforwardandbackward chaining. It also had a complete frame-based knowledge base with triggers, slots (data values), inheritance, and message passing. 
Although message passing originated in the object-oriented community rather than AI it was quickly embraced by AI researchers as well in environments such as KEE and in the operating systems for Lisp machines fromSymbolics,Xerox, andTexas Instruments.[12] The integration of frames, rules, and object-oriented programming was significantly driven by commercial ventures such as KEE and Symbolics spun off from various research projects. At the same time, there was another strain of research that was less commercially focused and was driven by mathematical logic and automated theorem proving.[citation needed]One of the most influential languages in this research was theKL-ONElanguage of the mid-'80s. KL-ONE was aframe languagethat had a rigorous semantics, formal definitions for concepts such as anIs-A relation.[13]KL-ONE and languages that were influenced by it such asLoomhad an automated reasoning engine that was based on formal logic rather than on IF-THEN rules. This reasoner is called the classifier. A classifier can analyze a set of declarations and infer new assertions, for example, redefine a class to be a subclass or superclass of some other class that wasn't formally specified. In this way the classifier can function as an inference engine, deducing new facts from an existing knowledge base. The classifier can also provide consistency checking on a knowledge base (which in the case of KL-ONE languages is also referred to as an Ontology).[14] Another area of knowledge representation research was the problem ofcommon-sense reasoning. One of the first realizations learned from trying to make software that can function with human natural language was that humans regularly draw on an extensive foundation of knowledge about the real world that we simply take for granted but that is not at all obvious to an artificial agent, such as basic principles of common-sense physics, causality, intentions, etc. An example is theframe problem, that in an event driven logic there need to be axioms that state things maintain position from one moment to the next unless they are moved by some external force. In order to make a true artificial intelligence agent that canconverse with humans using natural languageand can process basic statements and questions about the world, it is essential to represent this kind of knowledge.[15]In addition to McCarthy and Hayes' situation calculus, one of the most ambitious programs to tackle this problem was Doug Lenat'sCycproject. Cyc established its own Frame language and had large numbers of analysts document various areas of common-sense reasoning in that language. The knowledge recorded in Cyc included common-sense models of time, causality, physics, intentions, and many others.[16] The starting point for knowledge representation is theknowledge representation hypothesisfirst formalized byBrian C. Smithin 1985:[17] Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge. One of the most active areas of knowledge representation research is theSemantic Web.[citation needed]The Semantic Web seeks to add a layer of semantics (meaning) on top of the current Internet. 
Rather than indexing web sites and pages via keywords, the Semantic Web creates largeontologiesof concepts. Searching for a concept will be more effective than traditional text only searches. Frame languages and automatic classification play a big part in the vision for the future Semantic Web. The automatic classification gives developers technology to provide order on a constantly evolving network of knowledge. Defining ontologies that are static and incapable of evolving on the fly would be very limiting for Internet-based systems. The classifier technology provides the ability to deal with the dynamic environment of the Internet. Recent projects funded primarily by theDefense Advanced Research Projects Agency(DARPA) have integrated frame languages and classifiers with markup languages based on XML. TheResource Description Framework(RDF) provides the basic capability to define classes, subclasses, and properties of objects. TheWeb Ontology Language(OWL) provides additional levels of semantics and enables integration with classification engines.[18][19] Knowledge-representation is a field of artificial intelligence that focuses on designing computer representations that capture information about the world that can be used for solving complex problems. The justification for knowledge representation is that conventionalprocedural codeis not the best formalism to use to solve complex problems. Knowledge representation makes complex software easier to define and maintain than procedural code and can be used inexpert systems. For example, talking to experts in terms of business rules rather than code lessens the semantic gap between users and developers and makes development of complex systems more practical. Knowledge representation goes hand in hand withautomated reasoningbecause one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually allknowledge representation languageshave a reasoning or inference engine as part of the system.[20] A key trade-off in the design of knowledge representation formalisms is that between expressivity and tractability.[21]First Order Logic(FOL), with its high expressive power and ability to formalise much of mathematics, is a standard for comparing the expressibility of knowledge representation languages. Arguably, FOL has two drawbacks as a knowledge representation formalism in its own right, namely ease of use and efficiency of implementation. Firstly, because of its high expressive power, FOL allows many ways of expressing the same information, and this can make it hard for users to formalise or even to understand knowledge expressed in complex, mathematically-oriented ways. Secondly, because of its complex proof procedures, it can be difficult for users to understand complex proofs and explanations, and it can be hard for implementations to be efficient. As a consequence, unrestricted FOL can be intimidating for many software developers. One of the key discoveries of AI research in the 1970s was that languages that do not have the full expressive power of FOL can still provide close to the same expressive power of FOL, but can be easier for both the average developer and for the computer to understand. 
Many of the early AI knowledge representation formalisms, from databases to semantic nets to production systems, can be viewed as making various design decisions about how to balance expressive power with naturalness of expression and efficiency.[22] In particular, this balancing act was a driving motivation for the development of IF-THEN rules in rule-based expert systems. A similar balancing act was also a motivation for the development of logic programming (LP) and the logic programming language Prolog. Logic programs have a rule-based syntax, which is easily confused with the IF-THEN syntax of production rules. But logic programs have a well-defined logical semantics, whereas production systems do not. The earliest form of logic programming was based on the Horn clause subset of FOL. But later extensions of LP included the negation as failure inference rule, which turns LP into a non-monotonic logic for default reasoning. The resulting extended semantics of LP is a variation of the standard semantics of Horn clauses and FOL, and is a form of database semantics,[23] which includes the unique name assumption and a form of closed world assumption. These assumptions are much harder to state and reason with explicitly using the standard semantics of FOL. In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles to analyze a knowledge representation framework: a representation serves as (i) a surrogate for the things it stands for in the world, (ii) a set of ontological commitments, (iii) a fragmentary theory of intelligent reasoning, (iv) a medium for pragmatically efficient computation, and (v) a medium of human expression.[24] Knowledge representation and reasoning are a key enabling technology for the Semantic Web. Languages based on the Frame model with automatic classification provide a layer of semantics on top of the existing Internet. Rather than searching via text strings as is typical today, it will be possible to define logical queries and find pages that map to those queries.[18] The automated reasoning component in these systems is an engine known as the classifier. Classifiers focus on the subsumption relations in a knowledge base rather than rules. A classifier can infer new classes and dynamically change the ontology as new information becomes available. This capability is ideal for the ever-changing and evolving information space of the Internet.[25] The Semantic Web integrates concepts from knowledge representation and reasoning with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capabilities to define knowledge-based objects on the Internet with basic features such as Is-A relations and object properties. The Web Ontology Language (OWL) adds additional semantics and integrates with automatic classification reasoners.[19] In 1985, Ron Brachman categorized the core issues for knowledge representation as follows:[26] In the early years of knowledge-based systems the knowledge-bases were fairly small. The knowledge-bases that were meant to actually solve real problems rather than do proof of concept demonstrations needed to focus on well defined problems. So for example, not just medical diagnosis as a whole topic, but medical diagnosis of certain kinds of diseases. As knowledge-based technology scaled up, the need for larger knowledge bases and for modular knowledge bases that could communicate and integrate with each other became apparent. This gave rise to the discipline of ontology engineering, designing and building large knowledge bases that could be used by multiple projects. One of the leading research projects in this area was the Cyc project. Cyc was an attempt to build a huge encyclopedic knowledge base that would contain not just expert knowledge but common-sense knowledge.
In designing an artificial intelligence agent, it was soon realized that representing common-sense knowledge, knowledge that humans simply take for granted, was essential to make an AI that could interact with humans using natural language. Cyc was meant to address this problem. The language they defined was known asCycL. After CycL, a number ofontology languageshave been developed. Most aredeclarative languages, and are eitherframe languages, or are based onfirst-order logic. Modularity—the ability to define boundaries around specific domains and problem spaces—is essential for these languages because as stated byTom Gruber, "Every ontology is a treaty–a social agreement among people with common motive in sharing." There are always many competing and differing views that make any general-purpose ontology impossible. A general-purpose ontology would have to be applicable in any domain and different areas of knowledge need to be unified.[30] There is a long history of work attempting to build ontologies for a variety of task domains, e.g., an ontology for liquids,[31]thelumped element modelwidely used in representing electronic circuits (e.g.[32]), as well as ontologies for time, belief, and even programming itself. Each of these offers a way to see some part of the world. The lumped element model, for instance, suggests that we think of circuits in terms of components with connections between them, with signals flowing instantaneously along the connections. This is a useful view, but not the only possible one. A different ontology arises if we need to attend to the electrodynamics in the device: Here signals propagate at finite speed and an object (like a resistor) that was previously viewed as a single component with an I/O behavior may now have to be thought of as an extended medium through which an electromagnetic wave flows. Ontologies can of course be written down in a wide variety of languages and notations (e.g., logic, LISP, etc.); the essential information is not the form of that language but the content, i.e., the set of concepts offered as a way of thinking about the world. Simply put, the important part is notions like connections and components, not the choice between writing them as predicates or LISP constructs. The commitment made selecting one or another ontology can produce a sharply different view of the task at hand. Consider the difference that arises in selecting the lumped element view of a circuit rather than the electrodynamic view of the same device. As a second example, medical diagnosis viewed in terms of rules (e.g.,MYCIN) looks substantially different from the same task viewed in terms of frames (e.g., INTERNIST). Where MYCIN sees the medical world as made up of empirical associations connecting symptom to disease, INTERNIST sees a set of prototypes, in particular prototypical diseases, to be matched against the case at hand.
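The rule-based view exemplified by MYCIN can be made concrete with a toy forward-chaining engine. This is an illustrative sketch only: the rules and facts are invented, and real production systems add certainty factors, conflict resolution, and indexed retrieval on top of this basic loop:

```python
# A toy forward-chaining engine over propositional Horn rules, in the
# IF-THEN spirit of production systems. Rules and facts are hypothetical.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "unvaccinated"}, "order_igm_test"),
]
facts = {"has_fever", "has_rash", "unvaccinated"}

changed = True
while changed:                        # iterate until a fixed point is reached
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)           # fire the rule, asserting its conclusion
            changed = True

print(facts)  # now includes 'suspect_measles' and 'order_igm_test'
```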
https://en.wikipedia.org/wiki/Knowledge_representation
In mathematics, concentration of measure (about a median) is a principle that is applied in measure theory, probability and combinatorics, and has consequences for other fields such as Banach space theory. Informally, it states that "A random variable that depends in a Lipschitz way on many independent variables (but not too much on any of them) is essentially constant".[1] The concentration of measure phenomenon was put forth in the early 1970s by Vitali Milman in his works on the local theory of Banach spaces, extending an idea going back to the work of Paul Lévy.[2][3] It was further developed in the works of Milman and Gromov, Maurey, Pisier, Schechtman, Talagrand, Ledoux, and others. Let (X, d) be a metric space with a measure \mu on the Borel sets with \mu(X) = 1. Let \alpha(\epsilon) = \sup\{\mu(X \setminus A_{\epsilon}) : \mu(A) \geq 1/2\}, where A_{\epsilon} = \{x \in X : d(x, A) < \epsilon\} is the \epsilon-extension (also called \epsilon-fattening in the context of the Hausdorff distance) of a set A. The function \alpha(\cdot) is called the concentration rate of the space X. The following equivalent definition has many applications: \alpha(\epsilon) = \sup \mu\{|F - M| \geq \epsilon\}, where the supremum is over all 1-Lipschitz functions F : X \to \mathbb{R}, and the median (or Levy mean) M = \mathrm{Med}\,F is defined by the inequalities \mu\{F \geq M\} \geq 1/2 and \mu\{F \leq M\} \geq 1/2. Informally, the space X exhibits a concentration phenomenon if \alpha(\epsilon) decays very fast as \epsilon grows. More formally, a family of metric measure spaces (X_n, d_n, \mu_n) is called a Lévy family if the corresponding concentration rates \alpha_n satisfy \alpha_n(\epsilon) \to 0 as n \to \infty for every \epsilon > 0, and a normal Lévy family if \alpha_n(\epsilon) \leq C e^{-c n \epsilon^2} for some constants c, C > 0. For examples see below. The first example goes back to Paul Lévy. According to the spherical isoperimetric inequality, among all subsets A of the sphere S^n with prescribed spherical measure \sigma_n(A), the spherical cap \{x \in S^n : \mathrm{dist}(x, x_0) \leq R\}, for suitable R, has the smallest \epsilon-extension A_{\epsilon} (for any \epsilon > 0). Applying this to sets of measure \sigma_n(A) = 1/2 (where \sigma_n(S^n) = 1), one can deduce the following concentration inequality: \sigma_n(A_{\epsilon}) \geq 1 - C e^{-c n \epsilon^2}, where C, c are universal constants. Therefore (S^n)_n meets the definition above of a normal Lévy family. Vitali Milman applied this fact to several problems in the local theory of Banach spaces, in particular, to give a new proof of Dvoretzky's theorem. All classical statistical physics is based on the concentration of measure phenomena: The fundamental idea ('theorem') about equivalence of ensembles in thermodynamic limit (Gibbs, 1902[4] and Einstein, 1902-1904[5][6][7]) is exactly the thin shell concentration theorem. For each mechanical system consider the phase space equipped by the invariant Liouville measure (the phase volume) and conserving energy E. The microcanonical ensemble is just an invariant distribution over the surface of constant energy E obtained by Gibbs as the limit of distributions in phase space with constant density in thin layers between the surfaces of states with energy E and with energy E + ΔE.
Thecanonical ensembleis given by the probability density in the phase space (with respect to the phase volume)ρ=eF−EkT,{\displaystyle \rho =e^{\frac {F-E}{kT}},}where quantities F=const and T=const are defined by the conditions of probability normalisation and the given expectation of energyE. When the number of particles is large, then the difference between average values of the macroscopic variables for the canonical and microcanonical ensembles tends to zero, and theirfluctuationsare explicitly evaluated. These results are proven rigorously under some regularity conditions on the energy functionEbyKhinchin(1943).[8]The simplest particular case whenEis a sum of squares was well-known in detail beforeKhinchinand Lévy and even before Gibbs and Einstein. This is theMaxwell–Boltzmann distributionof the particle energy in ideal gas. The microcanonical ensemble is very natural from the naïve physical point of view: this is just a natural equidistribution on the isoenergetic hypersurface. The canonical ensemble is very useful because of an important property: if a system consists of two non-interacting subsystems, i.e. if the energyEis the sum,E=E1(X1)+E2(X2){\displaystyle E=E_{1}(X_{1})+E_{2}(X_{2})}, whereX1,X2{\displaystyle X_{1},X_{2}}are the states of the subsystems, then the equilibrium states of subsystems are independent, the equilibrium distribution of the system is the product of equilibrium distributions of the subsystems with the same T. The equivalence of these ensembles is the cornerstone of the mechanical foundations of thermodynamics.
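Returning to the spherical example above, the concentration inequality is easy to observe numerically. The following sketch (NumPy assumed) samples uniform points on the sphere S^(n-1) in R^n by normalizing Gaussian vectors and estimates the probability that the 1-Lipschitz function x ↦ x₁, whose median is 0, deviates by more than ε; the probability collapses as n grows, as the e^{-cnε²} bound predicts:

```python
import numpy as np

rng = np.random.default_rng(4)
eps = 0.2

for n in (10, 100, 1000):
    # Uniform samples on S^(n-1): normalized standard Gaussian vectors.
    g = rng.normal(size=(20_000, n))
    x = g / np.linalg.norm(g, axis=1, keepdims=True)
    f = x[:, 0]                      # first coordinate, a 1-Lipschitz function
    print(n, np.mean(np.abs(f) > eps))   # fraction deviating; ~0 for large n
```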
https://en.wikipedia.org/wiki/Concentration_of_measure
multiOTP is an open source PHP class, a command line tool, and a web interface that can be used to provide an operating-system-independent, strong authentication system. multiOTP is OATH-certified since version 4.1.0 and is developed under the LGPL license. Starting with version 4.3.2.5, multiOTP open source is also available as a virtual appliance—as a standard OVA file, a customized OVA file with open-vm-tools, and also as a virtual machine downloadable file that can run on Microsoft's Hyper-V, a common native hypervisor in Windows computers. A QR code is generated automatically when printing the user-configuration page. Spyware, viruses and other hacking technologies or bugs (such as Heartbleed) are regularly used to steal passwords. If a strong two-factor authentication system is used, a stolen password cannot be stored and used later, because each one-time password is valid for only one authentication session and will fail if tried a second time.[1] multiOTP is a PHP class library. The class can be used with any PHP application using PHP version 5.3.0 or higher. The multiOTP library is provided as an all-in-one self-contained file that requires no other includes. If the strong authentication needs to be done from a hardware device instead of an Internet application, a request will go through a RADIUS server which will call the multiOTP command line tool. The implementation is light enough to work on limited computers, such as the Raspberry Pi. For Windows, the multiOTP library is provided with a pre-configured RADIUS server (freeradius) which can be installed as a service. A pre-configured web service (based on mongoose) can also be installed as a service and is needed if one wants to use the multiOTP library in a client/server configuration. Under Linux, the readme.txt file provided with the library indicates what should be done in order to configure the RADIUS server and the web service. All necessary files and instructions are also provided to make a strong authentication device using a Raspberry Pi nano-computer. Since version 4.3.2.5, a ready-to-use virtual appliance is provided in standard OVA format, with open-vm-tools integrated, and also in Hyper-V format. The client can strongly authenticate on an application or a device using different methods. multiOTP is Initiative For Open Authentication (OATH) certified for HOTP and TOTP, and supports a number of further algorithms and RFCs. The multiOTP class provides strong authentication functionality and can be used in a range of strong authentication situations, and several free projects use the library.
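As an illustration of the one-time password algorithms for which multiOTP is OATH-certified, here is a minimal HOTP/TOTP sketch in Python using only the standard library. It follows RFC 4226 (HOTP) and RFC 6238 (TOTP) but is not multiOTP's own code, which is PHP; the base32 secret shown is an arbitrary example:

```python
import base64, hashlib, hmac, struct, time

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    # TOTP is HOTP applied to the number of elapsed time steps (RFC 6238).
    return hotp(secret_b32, int(time.time()) // period)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret; prints a 6-digit code
```

Because each code is derived from the current 30-second time step, a captured code is useless once the step has passed, which is the property the passage above describes.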
https://en.wikipedia.org/wiki/MultiOTP
In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a stochastic process with independent, stationary increments: it represents the motion of a point whose successive displacements are random, in which displacements in pairwise disjoint time intervals are independent, and displacements in different time intervals of the same length have identical probability distributions. A Lévy process may thus be viewed as the continuous-time analog of a random walk. The most well known examples of Lévy processes are the Wiener process, often called the Brownian motion process, and the Poisson process. Further important examples include the Gamma process, the Pascal process, and the Meixner process. Aside from Brownian motion with drift, all other proper (that is, not deterministic) Lévy processes have discontinuous paths. All Lévy processes are additive processes.[1] A Lévy process is a stochastic process X = \{X_t : t \geq 0\} that satisfies the following properties: 1. X_0 = 0 almost surely; 2. Independence of increments: for any 0 \leq t_1 < t_2 < \cdots < t_n < \infty, the increments X_{t_2} - X_{t_1}, \dots, X_{t_n} - X_{t_{n-1}} are mutually independent; 3. Stationary increments: for any s < t, the increment X_t - X_s is equal in distribution to X_{t-s}; 4. Continuity in probability: for any \epsilon > 0 and t \geq 0, \lim_{h \to 0} P(|X_{t+h} - X_t| > \epsilon) = 0. If X is a Lévy process then one may construct a version of X such that t \mapsto X_t is almost surely right-continuous with left limits. A continuous-time stochastic process assigns a random variable X_t to each point t ≥ 0 in time. In effect it is a random function of t. The increments of such a process are the differences X_s − X_t between its values at different times t < s. To call the increments of a process independent means that increments X_s − X_t and X_u − X_v are independent random variables whenever the two time intervals do not overlap and, more generally, any finite number of increments assigned to pairwise non-overlapping time intervals are mutually (not just pairwise) independent. To call the increments stationary means that the probability distribution of any increment X_t − X_s depends only on the length t − s of the time interval; increments on equally long time intervals are identically distributed. If X is a Wiener process, the probability distribution of X_t − X_s is normal with expected value 0 and variance t − s. If X is a Poisson process, the probability distribution of X_t − X_s is a Poisson distribution with expected value λ(t − s), where λ > 0 is the "intensity" or "rate" of the process. If X is a Cauchy process, the probability distribution of X_t − X_s is a Cauchy distribution with density f(x; t) = \frac{1}{\pi}\left[\frac{\gamma}{x^2 + \gamma^2}\right], where \gamma = t − s. The distribution of a Lévy process has the property of infinite divisibility: given any integer n, the law of a Lévy process at time t can be represented as the law of the sum of n independent random variables, which are precisely the increments of the Lévy process over time intervals of length t/n, which are independent and identically distributed by assumptions 2 and 3 above. Conversely, for each infinitely divisible probability distribution F, there is a Lévy process X such that the law of X_1 is given by F.
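Because a Lévy process is determined by its i.i.d. stationary increments, sample paths can be simulated on a time grid simply by cumulatively summing increment draws. A sketch (NumPy assumed; all parameters are arbitrary) covering the three examples above:

```python
import numpy as np

rng = np.random.default_rng(5)
T, steps = 1.0, 1000
dt = T / steps

# Wiener process: increments are N(0, dt).
wiener = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=steps))

# Poisson process with rate lambda = 3: increments are Poisson(3 * dt).
poisson = np.cumsum(rng.poisson(3.0 * dt, size=steps))

# Cauchy process: increments are Cauchy with scale gamma = dt
# (scaling a standard Cauchy variable by dt gives scale dt).
cauchy = np.cumsum(dt * rng.standard_cauchy(size=steps))

print(wiener[-1], poisson[-1], cauchy[-1])   # values of each path at time T
```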
In any Lévy process with finite moments, the nth moment \mu_n(t) = E(X_t^n) is a polynomial function of t; these functions satisfy a binomial identity: \mu_n(t+s) = \sum_{k=0}^{n} \binom{n}{k} \mu_k(t)\,\mu_{n-k}(s). The distribution of a Lévy process is characterized by its characteristic function, which is given by the Lévy–Khintchine formula (general for all infinitely divisible distributions):[2] If X = (X_t)_{t \geq 0} is a Lévy process, then its characteristic function \varphi_X(\theta) is given by \varphi_X(\theta)(t) := E\left[e^{i\theta X_t}\right] = \exp\left(t\left(ia\theta - \tfrac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}\setminus\{0\}} \left(e^{i\theta x} - 1 - i\theta x\,\mathbf{1}_{|x|<1}\right)\Pi(dx)\right)\right), where a \in \mathbb{R}, \sigma \geq 0, and \Pi is a σ-finite measure called the Lévy measure of X, satisfying the property \int_{\mathbb{R}\setminus\{0\}} \min(1, x^2)\,\Pi(dx) < \infty. In the above, \mathbf{1} is the indicator function. Because characteristic functions uniquely determine their underlying probability distributions, each Lévy process is uniquely determined by the "Lévy–Khintchine triplet" (a, \sigma^2, \Pi). The terms of this triplet suggest that a Lévy process can be seen as having three independent components: a linear drift, a Brownian motion, and a Lévy jump process, as described below. This immediately gives that the only (nondeterministic) continuous Lévy process is a Brownian motion with drift; similarly, every Lévy process is a semimartingale.[3] Because the characteristic functions of independent random variables multiply, the Lévy–Khintchine theorem suggests that every Lévy process is the sum of Brownian motion with drift and another independent random variable, a Lévy jump process. The Lévy–Itô decomposition describes the latter as a (stochastic) sum of independent Poisson random variables. Let \nu = \frac{\Pi|_{\mathbb{R}\setminus(-1,1)}}{\Pi(\mathbb{R}\setminus(-1,1))} — that is, the restriction of \Pi to \mathbb{R}\setminus(-1,1), normalized to be a probability measure; similarly, let \mu = \Pi|_{(-1,1)\setminus\{0\}} (but do not rescale). Then \varphi_X(\theta)(t) = \exp\left(t\left(ia\theta - \tfrac{1}{2}\sigma^2\theta^2\right)\right)\exp\left(t\,\Pi(\mathbb{R}\setminus(-1,1))\int_{\mathbb{R}}\left(e^{i\theta x} - 1\right)\nu(dx)\right)\exp\left(t\int_{(-1,1)}\left(e^{i\theta x} - 1 - i\theta x\right)\mu(dx)\right). The former is the characteristic function of a compound Poisson process with intensity \Pi(\mathbb{R}\setminus(-1,1)) and child distribution \nu. The latter is that of a compensated generalized Poisson process (CGPP): a process with countably many jump discontinuities on every interval a.s., but such that those discontinuities are of magnitude less than 1. If \int_{\mathbb{R}} |x|\,\mu(dx) < \infty, then the CGPP is a pure jump process.[4][5] Therefore in terms of processes one may decompose X in the following way: X_t = \sigma B_t + at + Y_t + Z_t, \quad t \geq 0, where Y is the compound Poisson process with jumps larger than 1 in absolute value and Z_t is the aforementioned compensated generalized Poisson process, which is also a zero-mean martingale. A Lévy random field is a multi-dimensional generalization of Lévy process.[6][7] Still more general are decomposable processes.[8]
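In the pure compound Poisson case (no Brownian part, no small-jump compensation) the Lévy–Khintchine formula reduces to E[e^{iθX_t}] = exp(tλ(E[e^{iθJ}] − 1)), where J is the jump distribution, and this can be checked by Monte Carlo. A sketch with invented parameters, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
lam, t, theta = 2.0, 1.0, 0.7          # intensity, horizon, test frequency

# Simulate X_t = sum of N jumps, with N ~ Poisson(lam * t), jumps J ~ N(1, 0.25).
n_paths = 50_000
N = rng.poisson(lam * t, size=n_paths)
X = np.array([rng.normal(1.0, 0.5, size=k).sum() for k in N])

empirical = np.exp(1j * theta * X).mean()

# E[e^{i theta J}] for Gaussian J with mean 1 and std 0.5.
phi_jump = np.exp(1j * theta * 1.0 - 0.5 * (0.5 * theta) ** 2)
theoretical = np.exp(t * lam * (phi_jump - 1))

print(abs(empirical - theoretical))    # small, up to Monte Carlo error
```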
https://en.wikipedia.org/wiki/L%C3%A9vy_process
In cryptography, Triple DES (3DES or TDES), officially the Triple Data Encryption Algorithm (TDEA or Triple DEA), is a symmetric-key block cipher which applies the DES cipher algorithm three times to each data block. The 56-bit key of the Data Encryption Standard (DES) is no longer considered adequate in the face of modern cryptanalytic techniques and supercomputing power; Triple DES increases the effective security to 112 bits. A CVE released in 2016, CVE-2016-2183, disclosed a major security vulnerability in the DES and 3DES encryption algorithms. This CVE, combined with the inadequate key size of 3DES, led to NIST deprecating 3DES in 2019 and disallowing all uses (except processing already encrypted data) by the end of 2023.[1] It has been replaced with the more secure, more robust AES. While US government and industry standards abbreviate the algorithm's name as TDES (Triple DES) and TDEA (Triple Data Encryption Algorithm),[2] RFC 1851 referred to it as 3DES from the time it first promulgated the idea, and this namesake has since come into wide use by most vendors, users, and cryptographers.[3][4][5][6] In 1978, a triple encryption method using DES with two 56-bit keys was proposed by Walter Tuchman; in 1981, Merkle and Hellman proposed a more secure triple-key version of 3DES with 112 bits of security.[7] The Triple Data Encryption Algorithm is variously defined in several standards documents. The original DES cipher's key size of 56 bits was considered generally sufficient when it was designed, but the availability of increasing computational power made brute-force attacks feasible. Triple DES provides a relatively simple method of increasing the key size of DES to protect against such attacks, without the need to design a completely new block cipher algorithm. A naive approach to increase the strength of a block encryption algorithm with a short key length (like DES) would be to use two keys (K1, K2) instead of one, and encrypt each block twice: E_{K2}(E_{K1}(plaintext)). If the original key length is n bits, one would hope this scheme provides security equivalent to using a key 2n bits long. Unfortunately, this approach is vulnerable to the meet-in-the-middle attack: given a known plaintext pair (x, y) such that y = E_{K2}(E_{K1}(x)), one can recover the key pair (K1, K2) in 2^{n+1} steps, instead of the 2^{2n} steps one would expect from an ideally secure algorithm with 2n bits of key. Therefore, Triple DES uses a "key bundle" that comprises three DES keys, K1, K2 and K3, each of 56 bits (excluding parity bits). The encryption algorithm is: ciphertext = E_{K3}(D_{K2}(E_{K1}(plaintext))). That is, encrypt with K1, decrypt with K2, then encrypt with K3. Decryption is the reverse: plaintext = D_{K1}(E_{K2}(D_{K3}(ciphertext))). That is, decrypt with K3, encrypt with K2, then decrypt with K1. Each triple encryption encrypts one block of 64 bits of data. In each case, the middle operation is the reverse of the first and last. This improves the strength of the algorithm when using keying option 2 and provides backward compatibility with DES with keying option 3. The standards define three keying options. Keying option 1: all three keys are independent. This is the strongest, with 3 × 56 = 168 independent key bits. It is still vulnerable to the meet-in-the-middle attack, but the attack requires 2^{2×56} steps.
Keying option 2: K1 and K2 are independent, and K3 = K1. This provides a shorter key length of 56 × 2 = 112 bits and a reasonable compromise between DES and keying option 1, with the same caveat as above.[18] This is an improvement over "double DES", which only requires 2^{56} steps to attack. NIST disallowed this option in 2015.[16] Keying option 3: all three keys are identical, K1 = K2 = K3. This is backward-compatible with DES, since two of the operations cancel out. ISO/IEC 18033-3 never allowed this option, and NIST no longer allows K1 = K2 or K2 = K3.[16][13] Each DES key is 8 odd-parity bytes, with 56 bits of key and 8 bits of error-detection.[9] A key bundle requires 24 bytes for option 1, 16 for option 2, or 8 for option 3. NIST (and the current TCG specifications version 2.0 of approved algorithms for Trusted Platform Module) also disallows using any one of the 64 following 64-bit values in any keys (note that 32 of them are the binary complement of the 32 others; and that 32 of these keys are also the reverse permutation of bytes of the 32 others), listed here in hexadecimal (in each byte, the least significant bit is an odd-parity generated bit, which is discarded when forming the effectively 56-bit key): With these restrictions on allowed keys, Triple DES was reapproved with keying options 1 and 2 only. Generally, the three keys are generated by taking 24 bytes from a strong random generator, and only keying option 1 should be used (option 2 needs only 16 random bytes, but strong random generators are hard to assert and it is considered best practice to use only option 1). As with all block ciphers, encryption and decryption of multiple blocks of data may be performed using a variety of modes of operation, which can generally be defined independently of the block cipher algorithm. However, ANS X9.52 specifies directly, and NIST SP 800-67 specifies via SP 800-38A,[19] that some modes shall only be used with certain constraints on them that do not necessarily apply to general specifications of those modes. For example, ANS X9.52 specifies that for cipher block chaining, the initialization vector shall be different each time, whereas ISO/IEC 10116[20] does not. FIPS PUB 46-3 and ISO/IEC 18033-3 define only the single-block algorithm, and do not place any restrictions on the modes of operation for multiple blocks. In general, Triple DES with three independent keys (keying option 1) has a key length of 168 bits (three 56-bit DES keys), but due to the meet-in-the-middle attack, the effective security it provides is only 112 bits.[16] Keying option 2 reduces the effective key size to 112 bits (because the third key is the same as the first). However, this option is susceptible to certain chosen-plaintext or known-plaintext attacks,[21][22] and thus it is designated by NIST to have only 80 bits of security.[16] This can be considered insecure; as a consequence, Triple DES's planned deprecation was announced by NIST in 2017.[23] The short block size of 64 bits makes 3DES vulnerable to block collision attacks if it is used to encrypt large amounts of data with the same key. The Sweet32 attack shows how this can be exploited in TLS and OpenVPN.[24] A practical Sweet32 attack on 3DES-based cipher-suites in TLS required 2^{36.6} blocks (785 GB) for a full attack, but researchers were lucky to get a collision just after around 2^{20} blocks, which took only 25 minutes. The security of TDEA is affected by the number of blocks processed with one key bundle. One key bundle shall not be used to apply cryptographic protection (e.g., encrypt) to more than 2^{20} 64-bit data blocks.
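The EDE construction and the keying options above can be illustrated by composing single-DES operations. This is a sketch assuming the third-party PyCryptodome package (Crypto.Cipher.DES); the keys are arbitrary examples, ECB on a single block is used purely for illustration, and nothing here should be used in practice given 3DES's deprecation:

```python
# EDE composition of Triple DES from single-DES operations (PyCryptodome,
# pip install pycryptodome). Single-block ECB only, for illustration.
from Crypto.Cipher import DES

K1, K2, K3 = b"8bytekey", b"anotherk", b"thirdkey"   # example 8-byte keys
block = b"64bitblk"                                  # one 64-bit block

def ede_encrypt(p: bytes) -> bytes:
    x = DES.new(K1, DES.MODE_ECB).encrypt(p)
    x = DES.new(K2, DES.MODE_ECB).decrypt(x)         # the middle step decrypts
    return DES.new(K3, DES.MODE_ECB).encrypt(x)

def ede_decrypt(c: bytes) -> bytes:
    x = DES.new(K3, DES.MODE_ECB).decrypt(c)
    x = DES.new(K2, DES.MODE_ECB).encrypt(x)
    return DES.new(K1, DES.MODE_ECB).decrypt(x)

assert ede_decrypt(ede_encrypt(block)) == block

# Backward compatibility: with K1 == K2 the first two steps cancel,
# so EDE reduces to single DES under K3.
K2 = K1
assert ede_encrypt(block) == DES.new(K3, DES.MODE_ECB).encrypt(block)
```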
As with all block ciphers, encryption and decryption of multiple blocks of data may be performed using a variety of modes of operation, which can generally be defined independently of the block cipher algorithm. However, ANS X9.52 specifies directly, and NIST SP 800-67 specifies via SP 800-38A,[19] that some modes shall only be used with certain constraints on them that do not necessarily apply to general specifications of those modes. For example, ANS X9.52 specifies that for cipher block chaining, the initialization vector shall be different each time, whereas ISO/IEC 10116[20] does not. FIPS PUB 46-3 and ISO/IEC 18033-3 define only the single-block algorithm, and do not place any restrictions on the modes of operation for multiple blocks.

In general, Triple DES with three independent keys (keying option 1) has a key length of 168 bits (three 56-bit DES keys), but due to the meet-in-the-middle attack, the effective security it provides is only 112 bits.[16] Keying option 2 reduces the effective key size to 112 bits (because the third key is the same as the first). However, this option is susceptible to certain chosen-plaintext or known-plaintext attacks,[21][22] and thus it is designated by NIST to have only 80 bits of security.[16] This can be considered insecure; as a consequence, Triple DES's planned deprecation was announced by NIST in 2017.[23]

The short block size of 64 bits makes 3DES vulnerable to block collision attacks if it is used to encrypt large amounts of data with the same key. The Sweet32 attack shows how this can be exploited in TLS and OpenVPN.[24] A practical Sweet32 attack on 3DES-based cipher suites in TLS required 2^36.6 blocks (785 GB) for a full attack, but the researchers were lucky to get a collision after only around 2^20 blocks, which took just 25 minutes. The security of TDEA is affected by the number of blocks processed with one key bundle: one key bundle shall not be used to apply cryptographic protection (e.g., encrypt) to more than 2^20 64-bit data blocks.

OpenSSL does not include 3DES by default since version 1.1.0 (August 2016) and considers it a "weak cipher".[25]

As of 2008, the electronic payment industry uses Triple DES and continues to develop and promulgate standards based upon it, such as EMV.[26]

Earlier versions of Microsoft OneNote,[27] Microsoft Outlook 2007[28] and Microsoft System Center Configuration Manager 2012[29] use Triple DES to password-protect user content and system data. However, in December 2018, Microsoft announced the retirement of 3DES throughout their Office 365 service.[30]

Firefox and Mozilla Thunderbird use Triple DES in CBC mode to encrypt website authentication login credentials when using a master password.[31]

A number of cryptography libraries support Triple DES. Some omit it from recent default builds but may still support decryption in order to handle existing data.
https://en.wikipedia.org/wiki/Triple_DES
In mathematics, an operator or transform is a function from one space of functions to another. Operators occur commonly in engineering, physics and mathematics. Many are integral operators and differential operators.

In the following, L is an operator which takes a function y ∈ F to another function L[y] ∈ G. Here, F and G are some unspecified function spaces, such as Hardy space, Lp space, Sobolev space, or, more vaguely, the space of holomorphic functions.
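As a concrete, numerical illustration of a map from functions to functions, the sketch below builds a central-difference differentiation operator; the name L and the step size h are arbitrary choices for this example, not anything fixed by the definition above.

import math

def L(y, h=1e-6):
    # L takes a function y and returns a new function approximating y'.
    return lambda x: (y(x + h) - y(x - h)) / (2 * h)

dy = L(math.sin)   # dy is itself a function
print(dy(0.0))     # approximately cos(0) = 1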
https://en.wikipedia.org/wiki/List_of_mathematic_operators
No true Scotsman, or appeal to purity, is an informal fallacy in which one modifies a prior claim in response to a counterexample by asserting that the counterexample is excluded by definition.[1][2][3] Rather than admitting error or providing evidence to disprove the counterexample, the original claim is changed by using a non-substantive modifier such as "true", "pure", "genuine", "authentic", "real", or other similar terms.[4][2]

Philosopher Bradley Dowden explains the fallacy as an "ad hoc rescue" of a refuted generalization attempt.[1] The following is a simplified rendition of the fallacy:[5]

Person A: "No Scotsman puts sugar on his porridge."
Person B: "But my uncle Angus is a Scotsman and he puts sugar on his porridge."
Person A: "But no true Scotsman puts sugar on his porridge."

The "no true Scotsman" fallacy is committed when the arguer satisfies certain conditions.[3][4][6]

An appeal to purity is commonly associated with protecting a preferred group. Scottish national pride may be at stake if someone regularly considered to be Scottish commits a heinous crime. To protect people of Scottish heritage from a possible accusation of guilt by association, one may use this fallacy to deny that the group is associated with this undesirable member or action. "No true Scotsman would do something so undesirable"; i.e., the people who would do such a thing are tautologically (definitionally) excluded from being part of our group, such that they cannot serve as a counterexample to the group's good nature.[4]

The description of the fallacy in this form is attributed to the British philosopher Antony Flew, who wrote, in his 1966 book God & Philosophy:

In this ungracious move a brash generalization, such as No Scotsmen put sugar on their porridge, when faced with falsifying facts, is transformed while you wait into an impotent tautology: if ostensible Scotsmen put sugar on their porridge, then this is by itself sufficient to prove them not true Scotsmen.

In his 1975 book Thinking About Thinking, Flew wrote:[4]

Imagine some Scottish chauvinist settled down one Sunday morning with his customary copy of The News of the World. He reads the story under the headline, "Sidcup Sex Maniac Strikes Again". Our reader is, as he confidently expected, agreeably shocked: "No Scot would do such a thing!" Yet the very next Sunday he finds in that same favourite source a report of the even more scandalous on-goings of Mr Angus McSporran in Aberdeen. This clearly constitutes a counter example, which definitively falsifies the universal proposition originally put forward. ('Falsifies' here is, of course, simply the opposite of 'verifies'; and it therefore means 'shows to be false'.) Allowing that this is indeed such a counter example, he ought to withdraw; retreating perhaps to a rather weaker claim about most or some. But even an imaginary Scot is, like the rest of us, human; and none of us always does what we ought to do. So what he is in fact saying is: "No true Scotsman would do such a thing!"

David P. Goldman, writing under his pseudonym "Spengler", compared distinguishing between "mature" democracies, which never start wars, and "emerging democracies", which may start them, with the "no true Scotsman" fallacy.
Spengler alleges that political scientists have attempted to save the "US academic dogma" that democracies never start wars against other democracies from counterexamples by declaring any democracy which does indeed start a war against another democracy to be flawed, thus maintaining that no true and mature democracy starts a war against a fellow democracy.[5]

Cognitive psychologist Steven Pinker has suggested that phrases like "no true Christian ever kills, no true communist state is repressive and no true Trump supporter endorses violence" exemplify the fallacy.[7]
https://en.wikipedia.org/wiki/No_true_Scotsman
Words per minute, commonly abbreviated as WPM (sometimes lowercased as wpm), is a measure of words processed in a minute, often used as a measurement of the speed of typing, reading or Morse code sending and receiving.

Since words vary in length, for the purpose of measurement of text entry the definition of each "word" is often standardized to be five characters or keystrokes long in English,[1] including spaces and punctuation. For example, under such a method applied to plain English text, the phrase "I run" counts as one word, but "rhinoceros" and "let's talk" would both count as two.

Karat et al. found in one study of average computer users in 1999 that the average rate for transcription was 32.5 words per minute, and 19.0 words per minute for composition.[2] In the same study, when the group was divided into "fast", "moderate", and "slow" groups, the average speeds were 40 wpm, 35 wpm, and 23 wpm, respectively. With the onset of the era of desktop computers and smartphones, fast typing skills became much more widespread. As of 2019, the average typing speed on a mobile phone was 36.2 wpm with 2.3% uncorrected errors; there were significant correlations with age, level of English proficiency, and number of fingers used to type.[3] Some typists have sustained speeds over 200 wpm for a 15-second typing test with simple English words.[4]

Typically, professional typists type at speeds of 43 to 80 wpm, while some positions can require 80 to 95 (usually the minimum required for dispatch positions and other time-sensitive typing jobs), and some advanced typists work at speeds above 120 wpm.[5] Two-finger typists, sometimes also referred to as "hunt and peck" typists, commonly reach sustained speeds of about 37 wpm for memorized text and 27 wpm when copying text, but in bursts may be able to reach much higher speeds.[6] From the 1920s through the 1970s, typing speed (along with shorthand speed) was an important secretarial qualification, and typing contests were popular and often publicized by typewriter companies as promotional tools.

Stenotype keyboards enable the trained user to input text as fast as 360 wpm at very high accuracy for an extended period, which is sufficient for real-time activities such as court reporting or closed captioning. While training dropout rates are very high (in some cases only 10% or even fewer graduate), stenotype students are usually able to reach speeds of 100–120 wpm within six months, which is faster than most alphanumeric typists. Guinness World Records gives 360 wpm with 97.23% accuracy as the highest achieved speed using a stenotype.[7]
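Under the five-characters-per-word convention described above, a typing score reduces to simple arithmetic. A minimal sketch (the function name is ours):

def words_per_minute(text: str, seconds: float) -> float:
    # Five keystrokes (including spaces and punctuation) = one standard word.
    return (len(text) / 5) * 60.0 / seconds

print(words_per_minute("I run", 1.2))   # "I run" is 5 characters, i.e. 1 word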
The numeric entry or 10-key speed is a measure of one's ability to manipulate the numeric keypad found on most modern separate computer keyboards. It is used to measure speed for jobs such as data entry of number information on items such as remittance advices, bills, or checks, as deposited to lock boxes. It is measured in keystrokes per hour (KPH). Many jobs require a certain KPH, often 8,000 or 10,000.[8]

For an adult population (age range 18–64) the average speed of copying by hand is 68 letters per minute (approximately 13 wpm), with the range from a minimum of 26 to a maximum of 113 letters per minute (approximately 5 to 20 wpm).[9] A study of police interview records showed that the highest speed fell in the range of 120–155 characters per minute, the highest possible limit being 190 characters per minute.[10] According to various studies, the speed of handwriting of 3rd- to 7th-graders varies from 25 to 94 letters per minute.[11] Using stenography (shorthand) methods, this rate increases greatly. Handwriting speeds up to 350 words per minute have been achieved in shorthand competitions.[12]

Words per minute is a common metric for assessing reading speed and is often used in the context of remedial skills evaluation, as well as in the context of speed reading, where it is a controversial measure of reading performance. A word in this context is the same as in the context of speech.

Research done in 2012[13] measured the speed at which subjects read a text aloud, and found the typical range of speeds across 17 different languages to be 184±29 wpm, or 863±234 characters per minute. However, the number of wpm varied between languages, even for languages that use the Latin or Cyrillic alphabets: as low as 161±18 for Finnish and as high as 228±30 for English. This was because different languages have different average word lengths (longer words in languages such as Finnish, and shorter words in English). However, the number of characters per minute tends to be around 1000 for all the tested languages. For the tested Asian languages that use particular writing systems (Arabic, Hebrew, Chinese, Japanese) these numbers are lower.

Scientific studies have demonstrated that reading (defined here as capturing and decoding all the words on every page) faster than 900 wpm is not feasible given the limits set by the anatomy of the eye.[14]

While proofreading materials, people are able to read English at 200 wpm on paper, and 180 wpm on a monitor.[15] (Those numbers, from Ziefle, 1998, are for studies that used monitors prior to 1992; see Noyes & Garland 2008 for a modern view of equivalence.)

Audiobooks are recommended to be 150–160 words per minute, which is the range at which people comfortably hear and vocalize words.[16] Slide presentations tend to be closer to 100–125 wpm for a comfortable pace,[17] auctioneers can speak at about 250 wpm,[18] and the fastest speaking policy debaters speak from 350[19] to over 500 words per minute.[20] Internet speech calculators show that various things influence words per minute, including nervousness.[18]

An example of an agglutinative language, Turkish has an average rate of speech reported to be about 220 syllables per minute. When the time spent on the silent parts of speech is removed, the so-called average articulation rate reaches 310 syllables per minute.[21] The average number of syllables per (written) word has been measured as 2.6.[22][23] For comparison, Flesch has suggested that conversational English for consumers aims at 1.5 syllables per word,[24] although these measures are dependent on corpus.
John Moschitta Jr. was listed in Guinness World Records, for a time, as the world's fastest speaker, being able to talk at 586 wpm.[25] He has since been surpassed by Steve Woodmore, who achieved a rate of 637 wpm.[26]

In the realm of American Sign Language, the American Sign Language University (ASLU) specifies a cutoff proficiency for students who clock a signing speed of 110–130 wpm.[27]

Morse code uses variable-length sequences of short- and long-duration signals (dits and dahs, colloquially called dots and dashes) to represent source information;[28] e.g., the sequences for the letter "K" and the numeral "2" are respectively (▄▄▄ ▄ ▄▄▄) and (▄ ▄ ▄▄▄ ▄▄▄ ▄▄▄). This variability complicates the measurement of Morse code speed in words per minute. Using telegram messages, the average English word length is about five characters, each averaging 5.124 dot durations or baud. Spacing between words should also be considered, being seven dot durations in the USA and five in British territories, so the average British telegraph word was 30.67 dot times.[29] The baud rate of Morse code is thus 50⁄60 × the words-per-minute rate.

It is standard practice to use two different such standard words to measure Morse code speeds in words per minute. The standard words are "PARIS" and "CODEX": in Morse code, "PARIS" has a dot duration of 50, while "CODEX" has 60.

Although many countries no longer require it for licensing, Morse is still widely used by amateur radio ("ham") operators. Experienced hams routinely send Morse at 20 words per minute, using manually operated hand telegraph keys; enthusiasts such as members of The CW Operators' Club routinely send and receive Morse code at speeds up to 60 wpm. The upper limit for Morse operators attempting to write down Morse code received by ear using paper and pencil is roughly 20 wpm. Many skilled Morse code operators can receive Morse code by ear mentally, without writing down the information, at speeds up to 70 wpm.[30] To write down Morse code information manually at speeds higher than 20 wpm, it is usual for operators to use a typewriter or computer keyboard to enable higher-speed copying.

In the United States, a commercial radiotelegraph operator's license is still issued, although there is almost no demand for it, since for long-distance communication ships now use the satellite-based Global Maritime Distress and Safety System. Besides a written examination, proficiency at receiving Morse at 20 wpm plain language and 16 wpm in code groups must be demonstrated.[31]

High-speed telegraphy contests are still held. The fastest Morse code operator was Theodore Roosevelt McElroy, copying at 75.6 wpm using a typewriter at the 1939 world championship.[32]
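The PARIS convention above fixes element timing: at W wpm there are 50·W dot units per 60 seconds, so one dot lasts 60/(50·W) = 1.2/W seconds. A small sketch:

def dot_duration_seconds(wpm: float, units_per_word: int = 50) -> float:
    # "PARIS" spans 50 dot units (including spacing); "CODEX" spans 60.
    return 60.0 / (units_per_word * wpm)

print(dot_duration_seconds(20))       # 0.06 s per dot at 20 wpm (PARIS)
print(dot_duration_seconds(20, 60))   # the same speed measured with CODEX words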
https://en.wikipedia.org/wiki/Words_per_minute
In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates the sum of two or more unknowns, with coefficients, to a constant. An exponential Diophantine equation is one in which unknowns can appear in exponents.

Diophantine problems have fewer equations than unknowns and involve finding integers that solve all equations simultaneously. Because such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry.

The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.

While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations, beyond the case of linear and quadratic equations, was an achievement of the twentieth century. In typical examples of Diophantine equations, w, x, y, and z are the unknowns and the other letters are given constants.

The simplest linear Diophantine equation takes the form ax + by = c, where a, b and c are given integers. The solutions are described by the following theorem: the equation has a solution if and only if c is a multiple of the greatest common divisor d of a and b; moreover, if (x, y) is a solution, then the other solutions have the form (x + kv, y − ku), where k is an arbitrary integer, and u and v are the quotients of a and b, respectively, by d.

Proof: If d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that ae + bf = d. If c is a multiple of d, then c = dh for some integer h, and (eh, fh) is a solution. On the other hand, for every pair of integers x and y, the greatest common divisor d of a and b divides ax + by. Thus, if the equation has a solution, then c must be a multiple of d. If a = ud and b = vd, then for every solution (x, y), we have

a(x + kv) + b(y − ku) = ax + by + k(av − bu) = ax + by + k(udv − vdu) = ax + by,

showing that (x + kv, y − ku) is another solution. Finally, given two solutions such that ax1 + by1 = ax2 + by2 = c, one deduces that u(x2 − x1) + v(y2 − y1) = 0. As u and v are coprime, Euclid's lemma shows that v divides x2 − x1, and thus that there exists an integer k such that

x2 − x1 = kv,  y2 − y1 = −ku.

Therefore, x2 = x1 + kv and y2 = y1 − ku, which completes the proof.
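The theorem and its proof translate directly into an algorithm: the extended Euclidean algorithm supplies the Bézout coefficients e and f, and scaling them by h = c/d yields a particular solution. A minimal sketch (the names follow the proof above):

def extended_gcd(a: int, b: int):
    # Returns (d, e, f) with a*e + b*f = d = gcd(a, b).
    if b == 0:
        return a, 1, 0
    d, e, f = extended_gcd(b, a % b)
    return d, f, e - (a // b) * f

def solve_linear_diophantine(a: int, b: int, c: int):
    d, e, f = extended_gcd(a, b)
    if c % d != 0:
        return None                 # no integer solutions exist
    h = c // d
    return e * h, f * h             # one particular solution (x, y)

x, y = solve_linear_diophantine(6, 10, 14)   # gcd(6, 10) = 2 divides 14
assert 6 * x + 10 * y == 14
# All solutions: (x + k*(10 // 2), y - k*(6 // 2)) for any integer k.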
The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let n1, …, nk be k pairwise coprime integers greater than one, a1, …, ak be k arbitrary integers, and N be the product n1 ⋯ nk. The Chinese remainder theorem asserts that the following linear Diophantine system has exactly one solution (x, x1, …, xk) such that 0 ≤ x < N, and that the other solutions are obtained by adding to x a multiple of N:

x = a1 + n1 x1
⋮
x = ak + nk xk

More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field.

Using matrix notation, every system of linear Diophantine equations may be written AX = C, where A is an m × n matrix of integers, X is an n × 1 column matrix of unknowns and C is an m × 1 column matrix of integers. The computation of the Smith normal form of A provides two unimodular matrices (that is, matrices that are invertible over the integers and have ±1 as determinant) U and V, of respective dimensions m × m and n × n, such that the matrix B = [b_{i,j}] = UAV is such that b_{i,i} is not zero for i not greater than some integer k, and all the other entries are zero. The system to be solved may thus be rewritten as B(V⁻¹X) = UC. Calling y_i the entries of V⁻¹X and d_i those of D = UC, this leads to the system

b_{i,i} y_i = d_i,  1 ≤ i ≤ k
0 y_i = d_i,  k < i ≤ n.

This system is equivalent to the given one in the following sense: a column matrix of integers x is a solution of the given system if and only if x = Vy for some column matrix of integers y such that By = D. It follows that the system has a solution if and only if b_{i,i} divides d_i for i ≤ k and d_i = 0 for i > k. If this condition is fulfilled, the solutions of the given system are

V (d1/b_{1,1}, …, dk/b_{k,k}, h_{k+1}, …, h_n)^T,

where h_{k+1}, …, h_n are arbitrary integers.

Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form "is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form."[6]

Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that also include inequations. Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations.[7]

A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. A typical such equation is the equation of Fermat's Last Theorem,

x^d + y^d = z^d.

As a homogeneous polynomial in n indeterminates defines a hypersurface in the projective space of dimension n − 1, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface. Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent to testing whether a rational number is the dth power of another rational number). A witness of the difficulty of the problem is Fermat's Last Theorem (for d > 2, there is no integer solution of the above equation), which needed more than three centuries of mathematicians' efforts before being solved. For degrees higher than three, most known results are theorems asserting that there are no solutions (for example, Fermat's Last Theorem) or that the number of solutions is finite (for example, Faltings's theorem).
For degree three, there are general solving methods which work on almost all equations that are encountered in practice, but no algorithm is known that works for every cubic equation.[8]

Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced.

For proving that there is no solution, one may reduce the equation modulo p. For example, the Diophantine equation

x² + y² = 3z²

does not have any solution other than the trivial solution (0, 0, 0). In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. Thus the equality may be obtained only if x, y, and z are all even, and are thus not coprime. Thus the only solution is the trivial solution (0, 0, 0). This shows that there is no rational point on a circle of radius √3 centered at the origin.

More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if one exists.
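The mod-4 argument above can be checked empirically; this small brute-force search over a box confirms that only the trivial solution appears:

solutions = [(x, y, z)
             for x in range(-20, 21)
             for y in range(-20, 21)
             for z in range(-20, 21)
             if x * x + y * y == 3 * z * z]
print(solutions)   # [(0, 0, 0)] -- only the trivial solution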
If a non-trivial integer solution is known, one may produce all other solutions in the following way. Let

Q(x1, …, xn) = 0

be a homogeneous Diophantine equation, where Q(x1, …, xn) is a quadratic form (that is, a homogeneous polynomial of degree 2) with integer coefficients. The trivial solution is the solution where all x_i are zero. If (a1, …, an) is a non-trivial integer solution of this equation, then (a1, …, an) are the homogeneous coordinates of a rational point of the hypersurface defined by Q. Conversely, if (p1/q, …, pn/q) are homogeneous coordinates of a rational point of this hypersurface, where q, p1, …, pn are integers, then (p1, …, pn) is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all the sequences of the form

(k p1/d, …, k pn/d),

where k is any integer and d is the greatest common divisor of the p_i.

It follows that solving the Diophantine equation Q(x1, …, xn) = 0 is completely reduced to finding the rational points of the corresponding projective hypersurface.

Let now A = (a1, …, an) be an integer solution of the equation Q(x1, …, xn) = 0. As Q is a polynomial of degree two, a line passing through A crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through A; the rational points are those that are obtained from rational lines, that is, those that correspond to rational values of the parameters.

More precisely, one may proceed as follows. By permuting the indices, one may suppose, without loss of generality, that a_n ≠ 0. Then one may pass to the affine case by considering the affine hypersurface defined by

q(x1, …, x_{n−1}) = Q(x1, …, x_{n−1}, 1),

which has the rational point

R = (r1, …, r_{n−1}), with r_i = a_i/a_n.

If this rational point is a singular point, that is, if all partial derivatives are zero at R, then all lines passing through R are contained in the hypersurface, and one has a cone. Translating R to the origin (the change of variables x_i = r_i + t_i) does not change the rational points, and transforms q into a homogeneous polynomial in n − 1 variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables.

If the polynomial q is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat and contains rational singular points. This case is thus a special instance of the preceding case.

In the general case, consider the parametric equation of a line passing through R:

x_i = r_i + t_i (x1 − r1),  i = 2, …, n − 1.

Substituting this in q, one gets a polynomial of degree two in x1 that is zero for x1 = r1. It is thus divisible by x1 − r1. The quotient is linear in x1 and may be solved for expressing x1 as a quotient of two polynomials of degree at most two in t2, …, t_{n−1}, with integer coefficients. Substituting this in the expressions for x2, …, x_{n−1}, one gets, for i = 1, …, n − 1,

x_i = f_i(t2, …, t_{n−1}) / f_n(t2, …, t_{n−1}),

where f1, …, fn are polynomials of degree at most two with integer coefficients.

Then, one can return to the homogeneous case. Let, for i = 1, …, n, F_i(t1, …, t_{n−1}) be the homogenization of f_i. These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by Q. A point of the projective hypersurface defined by Q is rational if and only if it may be obtained from rational values of t1, …, t_{n−1}. As F1, …, Fn are homogeneous polynomials, the point is not changed if all the t_i are multiplied by the same rational number. Thus, one may suppose that t1, …, t_{n−1} are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences (x1, …, xn) where, for i = 1, …, n,

x_i = k F_i(t1, …, t_{n−1}) / d,

where k is an integer, t1, …, t_{n−1} are coprime integers, and d is the greatest common divisor of the n integers F_i(t1, …, t_{n−1}).

One could hope that the coprimality of the t_i could imply that d = 1. Unfortunately, this is not the case, as shown in the next section.

The equation

x² + y² = z²

is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples.

For retrieving exactly Euclid's formula, we start from the solution (−1, 0, 1), corresponding to the point (−1, 0) of the unit circle. A line passing through this point may be parameterized by its slope:

y = t(x + 1).

Putting this in the circle equation x² + y² = 1, one gets

x² − 1 + t²(x + 1)² = 0.

Dividing by x + 1 results in

x − 1 + t²(x + 1) = 0,

which is easy to solve in x:

x = (1 − t²)/(1 + t²).

It follows that

y = t(x + 1) = 2t/(1 + t²).

Homogenizing as described above, one gets all solutions as

x = k(s² − t²)/d,  y = k(2st)/d,  z = k(s² + t²)/d,

where k is any integer, s and t are coprime integers, and d is the greatest common divisor of the three numerators. In fact, d = 2 if s and t are both odd, and d = 1 if one is odd and the other is even.
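The parameterization just derived is easy to turn into a generator; a sketch (function name ours):

from math import gcd

def triple(s: int, t: int, k: int = 1):
    # Numerators from the derivation above, divided by their gcd d.
    nums = (s * s - t * t, 2 * s * t, s * s + t * t)
    d = gcd(gcd(nums[0], nums[1]), nums[2])
    return tuple(k * n // d for n in nums)

print(triple(2, 1))   # (3, 4, 5)
print(triple(3, 1))   # s and t both odd, so d = 2, giving (4, 3, 5)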
The primitive triples are the solutions where k = 1 and s > t > 0. This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that x, y, and z are all positive, and does not distinguish between two triples that differ by the exchange of x and y.

The questions asked in Diophantine analysis include: Are there any solutions? Are there any solutions beyond some that are easily found by inspection? Are there finitely or infinitely many solutions? Can all solutions be found in theory? Can one in practice compute a full list of solutions? These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treat them as puzzles.

As an example, consider the following puzzle. The given information is that a father's age is 1 less than twice that of his son, and that the digits AB making up the father's age are reversed in the son's age (i.e. BA). This leads to the equation 10A + B = 2(10B + A) − 1, thus 19B − 8A = 1. Inspection gives the result A = 7, B = 3, and thus AB equals 73 years and BA equals 37 years. One may easily show that there is no other solution with A and B positive integers less than 10.
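A brute-force confirmation of the age puzzle above, checking 19B − 8A = 1 over all pairs of positive digits:

hits = [(a, b) for a in range(1, 10) for b in range(1, 10) if 19 * b - 8 * a == 1]
print(hits)   # [(7, 3)]: the father is 73 and the son is 37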
Many well-known puzzles in the field of recreational mathematics lead to Diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem and the monkey and the coconuts.

In 1637, Pierre de Fermat scribbled on the margin of his copy of Arithmetica: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers." Stated in more modern language, "The equation a^n + b^n = c^n has no solutions for any n higher than 2." Following this, he wrote: "I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain." Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles.

In 1657, Fermat attempted to solve the Diophantine equation 61x² + 1 = y² (solved by Brahmagupta over 1000 years earlier). The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is x = 226153980, y = 1766319049 (see Chakravala method).

In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist.

Diophantine geometry is the application of techniques from algebraic geometry to equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations, which is a vector in a prescribed field K, when K is not algebraically closed.

The oldest general method for solving a Diophantine equation, or for proving that there is no solution, is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle, which uses modular arithmetic modulo all prime numbers to look for solutions. Despite many improvements, these methods cannot solve most Diophantine equations.

The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist.

During the 20th century, a new approach was deeply explored, consisting of using algebraic geometry. A Diophantine equation can be viewed as the equation of a hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates. This approach led eventually to the proof by Andrew Wiles in 1994 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations.

An example of an infinite Diophantine equation is

n = a² + 2b² + 3c² + 4d² + 5e² + ⋯,

which can be expressed as "How many ways can a given integer n be written as the sum of a square plus twice a square plus thrice a square and so on?" The number of ways this can be done for each n forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite-dimensional lattices. This equation always has a solution for any positive n.[9] Compare this to

n = a² + 4b² + 9c² + 16d² + 25e² + ⋯,

which does not always have a solution for positive n.

If a Diophantine equation has one or more additional variables occurring as exponents, it is an exponential Diophantine equation. A general theory for such equations is not available; particular cases such as Catalan's conjecture and Fermat's Last Theorem have been tackled. However, the majority are solved via ad hoc methods such as Størmer's theorem or even trial and error.
https://en.wikipedia.org/wiki/Diophantine_equation
In logic, the corresponding conditional of an argument (or derivation) is a material conditional whose antecedent is the conjunction of the argument's (or derivation's) premises and whose consequent is the argument's conclusion. An argument is valid if and only if its corresponding conditional is a logical truth. It follows that an argument is valid if and only if the negation of its corresponding conditional is a contradiction. Therefore, the construction of a corresponding conditional provides a useful technique for determining the validity of an argument.

Consider the argument A:

Either it is hot or it is cold.
It is not hot.
Therefore it is cold.

This argument is of the form:

Either P or Q.
Not P.
Therefore Q.

or (using standard symbols of propositional calculus):

P ∨ Q
¬P
____________
Q

The corresponding conditional C is:

IF ((P or Q) and not P) THEN Q

or (using standard symbols):

((P ∨ Q) ∧ ¬P) → Q

and the argument A is valid just in case the corresponding conditional C is a logical truth.

If C is a logical truth then ¬C entails falsity (the False). Thus, any argument is valid if and only if the denial of its corresponding conditional leads to a contradiction. If we construct a truth table for C, we will find that it comes out T (true) on every row (and, of course, if we construct a truth table for the negation of C, it will come out F (false) in every row). These results confirm the validity of the argument A.

Some arguments need first-order predicate logic to reveal their forms, and they cannot be tested properly by truth-table methods. Consider the argument A1:

Some mortals are not Greeks.
Some Greeks are not men.
Not every man is a logician.
Therefore some mortals are not logicians.

To test this argument for validity, construct the corresponding conditional C1 (you will need first-order predicate logic), negate it, and see if you can derive a contradiction from it. If you succeed, then the argument is valid.

Instead of attempting to derive the conclusion from the premises, proceed as follows. To test the validity of an argument: (a) translate, as necessary, each premise and the conclusion into sentential or predicate logic sentences; (b) construct from these the negation of the corresponding conditional; (c) see if from it a contradiction can be derived (or, if feasible, construct a truth table for it and see if it comes out false on every row). Alternatively, construct a truth tree and see if every branch is closed. Success proves the validity of the original argument.

In case of difficulty in trying to derive a contradiction, one should proceed as follows. From the negation of the corresponding conditional, derive a theorem in conjunctive normal form in the methodical fashions described in textbooks. If, and only if, the original argument was valid will the theorem in conjunctive normal form be a contradiction, and if it is, then that it is will be apparent.
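For a propositional argument the truth-table test is mechanical; this sketch checks the corresponding conditional C = ((P ∨ Q) ∧ ¬P) → Q on all four rows:

from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

rows = [implies((p or q) and not p, q)
        for p, q in product([True, False], repeat=2)]
print(rows)        # [True, True, True, True]: C comes out true on every row
print(all(rows))   # hence the argument A is valid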
https://en.wikipedia.org/wiki/Corresponding_conditional
In computer science, constrained clustering is a class of semi-supervised learning algorithms. Typically, constrained clustering incorporates either a set of must-link constraints, cannot-link constraints, or both, with a data clustering algorithm.[1] A cluster in which the members conform to all must-link and cannot-link constraints is called a chunklet.

Both must-link and cannot-link constraints define a relationship between two data instances: a must-link constraint requires the two instances to be placed in the same cluster, while a cannot-link constraint forbids it. Together, the sets of these constraints act as a guide with which a constrained clustering algorithm will attempt to find chunklets (clusters in the dataset which satisfy the specified constraints). Some constrained clustering algorithms will abort if no clustering exists which satisfies the specified constraints. Others will try to minimize the amount of constraint violation should it be impossible to find a clustering which satisfies the constraints. Constraints can also be used to guide the selection of a clustering model among several possible solutions.[2] Examples of constrained clustering algorithms include COP-KMeans.
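A minimal sketch of how the two constraint types interact with a candidate clustering: `labels` maps each data instance to a cluster, and the function (names ours) counts violated constraints, with 0 meaning the clustering respects all of them. Algorithms that tolerate violations would minimize a quantity like this.

def count_violations(labels, must_link, cannot_link):
    # A must-link pair is violated when its members land in different clusters;
    # a cannot-link pair is violated when they land in the same cluster.
    v = sum(1 for a, b in must_link if labels[a] != labels[b])
    v += sum(1 for a, b in cannot_link if labels[a] == labels[b])
    return v

labels = {"x1": 0, "x2": 0, "x3": 1}
print(count_violations(labels,
                       must_link=[("x1", "x2")],
                       cannot_link=[("x2", "x3")]))   # 0: a valid chunklet assignment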
https://en.wikipedia.org/wiki/Constrained_clustering
In re Boucher (case citation: No. 2:06-mJ-91, 2009 WL 424718) is a federal criminal case in Vermont, which was the first to directly address the question of whether investigators can compel a suspect to reveal their encryption passphrase or password, despite the U.S. Constitution's Fifth Amendment protection against self-incrimination. A magistrate judge held that producing the passphrase would constitute self-incrimination. In its submission on appeal to the District Court, the Government stated that it did not seek the password for the encrypted hard drive, but only sought to force Boucher to produce the contents of his encrypted hard drive in an unencrypted format by opening the drive before the grand jury. A District Court judge agreed with the government, holding that, given Boucher's initial cooperation in showing some of the content of his computer to border agents, producing the complete contents would not constitute self-incrimination. In late 2009, Boucher finally gave up his password and investigators found numerous images and videos depicting sexual abuse of children. In January 2010, Boucher was sentenced to 3 years in prison and deported.[1]

On 17 December 2006, the laptop computer of defendant Sebastien D. Boucher (born in 1977)[2][3] was inspected when he crossed the border from Canada into the United States at Derby Line, Vermont. The laptop was powered up when the border was crossed, which allowed its contents to be browsed. Images containing child pornography were allegedly seen by Immigration and Customs Enforcement (ICE) border agents, who seized the laptop, questioned Boucher and then arrested him on a complaint charging him with transportation of child pornography in violation of 18 U.S.C. 2252A(a)(1). The laptop was subsequently powered down.

When the laptop was switched on and booted on 29 December 2006, it was not possible to access its entire storage capability, because the laptop had been protected by PGP Disk encryption.[4] As a result, investigators working for the US government were unable to view the contents of drive "Z:", which allegedly contained the illegal content. A grand jury then subpoenaed the defendant to provide the password to the encryption key protecting the data.

On November 29, 2007, U.S. Magistrate Judge Jerome Niedermeier of the United States District Court for the District of Vermont stated, "Compelling Boucher to enter the password forces him to produce evidence that could be used to incriminate him."[4] Accordingly, Niedermeier quashed the subpoena. On January 2, 2008, the United States appealed the magistrate's opinion to the District Court in a sealed motion (court docket, case #: 2:06-mJ-00091-wks-jjn-1).[5] The appeal was heard by U.S. District Judge William K. Sessions.[6] Oral arguments were scheduled for April 30, 2008.[7]

On February 19, 2009, Judge Sessions reversed the magistrate's ruling and directed Boucher "to provide an unencrypted version of the Z drive viewed by the ICE agent":

Boucher accessed the Z drive of his laptop at the ICE agent's request. The ICE agent viewed the contents of some of the Z drive's files, and ascertained that they may consist of images or videos of child pornography. The Government thus knows of the existence and location of the Z drive and its files. Again providing access to the unencrypted Z drive "adds little or nothing to the sum total of the Government's information" about the existence and location of files that may contain incriminating information. Fisher, 425 U.S. at 411.
Boucher's act of producing an unencrypted version of the Z drive likewise is not necessary to authenticate it. He has already admitted to possession of the computer, and provided the Government with access to the Z drive. The Government has submitted that it can link Boucher with the files on his computer without making use of his production of an unencrypted version of the Z drive, and that it will not use his act of production as evidence of authentication.[8]
https://en.wikipedia.org/wiki/In_re_Boucher
Real-time bidding (RTB) is a means by which advertising inventory is bought and sold on a per-impression basis, via instantaneous programmatic auction, similar to financial markets. With real-time bidding, online advertising buyers bid on an impression and, if the bid is won, the buyer's ad is instantly displayed on the publisher's site.[2] Real-time bidding lets advertisers manage and optimize ads from multiple ad networks, allowing them to create and launch advertising campaigns, prioritize networks, and allocate percentages of unsold inventory, known as backfill.[3]

Real-time bidding is distinguishable from static auctions in that it is a per-impression way of bidding, whereas static auctions cover groups of up to several thousand impressions.[4] RTB is promoted as being more effective than static auctions for both advertisers and publishers in terms of advertising inventory sold, though the results vary by execution and local conditions. RTB replaced the traditional model. Research suggested that RTB digital advertising spend would reach $23.5 billion in the United States in 2018, compared to $6.3 billion spent in 2014.[5]

RTB requires the collection, accumulation and dissemination of data about users and their activities, for operating the bidding process, profiling users to "enrich" bid requests, and operating ancillary functions such as fraud detection. As a consequence, RTB has led to a range of privacy concerns,[6][7] and has attracted attention from data protection authorities (DPAs).[8] According to a report by the UK's DPA, the ICO, companies involved in RTB "were collecting and trading information such as race, sexuality, health status or political affiliation" without consent from affected users.[9] Simon McDougall of the ICO reported, in June 2019, that "sharing people's data with potentially hundreds of companies, without properly assessing and addressing the risk of these counterparties, raises questions around the security and retention of this data."[10]

In 2019, 12 NGOs complained about RTB to a range of regulators in the European Union,[11] leading to a decision in February 2022 in which the Belgian Data Protection Authority found a range of illegality in aspects of a system used to authorise much of RTB in the EU under the GDPR, the Transparency and Consent Framework produced by the Interactive Advertising Bureau Europe.[12] The Dutch DPA has since indicated that websites and other actors in the Netherlands should cease using RTB to profile users.[13] The Belgian DPA's decision has been described as "an atomic bomb",[14] with some academic commentators arguing that RTB would require fundamental restructuring in order for a system such as the TCF to be able to authorise it under the decision.[15]

Since RTB works through machine-to-machine communication, it has been gamed by malicious actors aiming to extract money from the programmatic commerce of online advertising by monetizing fake news websites[16] and other forms of made-for-advertising websites that extract rents via ad fraud.[1]

A typical transaction begins with a user visiting a website. This triggers a bid request that can include various pieces of data such as the user's demographic information, browsing history, location, and the page being loaded. The request goes from the publisher to an ad exchange, which submits it and the accompanying data to multiple advertisers, who automatically submit bids in real time to place their ads. Advertisers bid on each ad impression as it is served.
The impression goes to the highest bidder, and their ad is served on the page.[citation needed]

The bidding happens autonomously, and advertisers set maximum bids and budgets for an advertising campaign. The criteria for bidding on particular types of consumers can be very complex, taking into account everything from very detailed behavioural profiles to conversion data.[citation needed] Probabilistic models can be used to determine the probability of a click or a conversion given the user's history data (also known as the user journey). This probability can be used to determine the size of the bid for the respective advertising slot.[17]

Demand-side platforms (DSPs) give buyers direct RTB access to multiple sources of inventory. They typically streamline ad operations with applications that simplify workflow and reporting. DSPs are directed at advertisers. The technology that powers an ad exchange can also provide the foundation for a DSP, allowing for synergy between advertising campaigns.[4] The primary distinction between an ad network and a DSP is that DSPs have the technology to determine the value of an individual impression in real time (less than 100 milliseconds), based on what is known about a user's history.[18]

Large publishers often manage multiple advertising networks and use supply-side platforms (SSPs) to manage advertising yield. Supply-side platforms utilize data generated from impression-level bidding to help tailor advertising campaigns. Applications to manage ad operations are also often bundled into SSPs. SSP technology is adapted from ad exchange technology.[4]

An individual's browser history is more difficult to determine on mobile devices.[18] This is due to technical limitations that continue to make the type of targeting and tracking available on the desktop essentially impossible on smartphones and tablets. The lack of a universal cookie alternative for mobile web browsing also limits the growth and feasibility of programmatic ad buying. Mobile real-time bidding also lacks universal standards.[19]
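The autonomous bidding logic described above can be sketched in a few lines. Everything here is an illustrative assumption rather than a fixed standard: each bidder's offer is its expected value for the impression (predicted click probability times value per click), capped by the campaign's maximum bid, and the clearing rule shown is a second-price one.

def expected_value_bid(p_click: float, value_per_click: float, max_bid: float) -> float:
    # Bid the impression's expected value, capped by the campaign budget rule.
    return min(p_click * value_per_click, max_bid)

bids = {
    "advertiser_a": expected_value_bid(0.004, 2.00, 0.01),
    "advertiser_b": expected_value_bid(0.002, 3.00, 0.01),
}
winner = max(bids, key=bids.get)
clearing_price = sorted(bids.values())[-2]   # second-highest bid
print(winner, clearing_price)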
https://en.wikipedia.org/wiki/Real-time_bidding
The Common Weakness Enumeration (CWE) is a category system for hardware and software weaknesses and vulnerabilities. It is sustained by a community project with the goals of understanding flaws in software and hardware and creating automated tools that can be used to identify, fix, and prevent those flaws.[1] The project is sponsored by the office of the U.S. Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and is operated by The MITRE Corporation,[2] with support from US-CERT and the National Cyber Security Division of the U.S. Department of Homeland Security.[3][4]

The first release of the list and associated classification taxonomy was in 2006.[5] Version 4.15 of the CWE standard was released in July 2024.[6]

CWE has over 600 categories, including classes for buffer overflows, path/directory tree traversal errors, race conditions, cross-site scripting, hard-coded passwords, and insecure random numbers.[7]

The Common Weakness Enumeration (CWE) Compatibility program allows a service or a product to be reviewed and registered as officially "CWE-Compatible" and "CWE-Effective". The program assists organizations in selecting the right software tools and learning about possible weaknesses and their possible impact. In order to obtain CWE-Compatible status, a product or a service must meet 4 of the 6 requirements defined by the program. As of September 2019, 56 organizations develop and maintain products and services that have achieved CWE-Compatible status.[9]

Some researchers think that ambiguities in CWE can be avoided or reduced.[10]

As of April 16, 2024, the CWE Compatibility Program has been discontinued.[11]
https://en.wikipedia.org/wiki/Common_Weakness_Enumeration
In cryptography, collision resistance is a property of cryptographic hash functions: a hash function H is collision-resistant if it is hard to find two inputs that hash to the same output; that is, two inputs a and b where a ≠ b but H(a) = H(b).[1]:136 The pigeonhole principle means that any hash function with more inputs than outputs will necessarily have such collisions;[1]:136 the harder they are to find, the more cryptographically secure the hash function is.

The "birthday paradox" places an upper bound on collision resistance: if a hash function produces N bits of output, an attacker who computes only 2^(N/2) (that is, √(2^N)) hash operations on random input is likely to find two matching outputs. If there is an easier method than brute-force attack, it is typically considered a flaw in the hash function.[2]

Cryptographic hash functions are usually designed to be collision resistant. However, many hash functions that were once thought to be collision resistant were later broken. MD5 and SHA-1 in particular both have published techniques more efficient than brute force for finding collisions.[3][4] However, some hash functions have a proof that finding collisions is at least as difficult as some hard mathematical problem (such as integer factorization or discrete logarithm). Those functions are called provably secure.[2]

A family of functions {h_k : {0, 1}^m(k) → {0, 1}^l(k)} generated by some algorithm G is a family of collision-resistant hash functions if m(k) > l(k) for any k, i.e., h_k compresses the input string, and every h_k can be computed within polynomial time given k, but for any probabilistic polynomial algorithm A, we have

Pr[k ← G(1^n); (x1, x2) ← A(k) : x1 ≠ x2 and h_k(x1) = h_k(x2)] ≤ negl(n),

where negl(·) denotes some negligible function, and n is the security parameter.[5]

There are two different types of collision resistance. A hash function has weak collision resistance when, given a hashing function H and an input x, no other input x′ can be found such that H(x) = H(x′). In other words, given an x, it is not feasible to find another x′ such that the hashing function creates a collision. A hash function has strong collision resistance when, given a hashing function H, no arbitrary pair x and x′ can be found where H(x) = H(x′). In other words, no two inputs at all can be found for which the hashing function creates a collision.

Collision resistance is desirable for several reasons.
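The birthday bound above is easy to observe on a deliberately weakened hash. This sketch truncates SHA-256 to N = 32 bits, so a collision is expected after roughly 2^(N/2) = 65536 random inputs, far fewer than the 2^32 needed to hit one chosen output:

import hashlib
import os

def h32(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()[:4]   # keep only 32 output bits

seen = {}
tries = 0
while True:
    x = os.urandom(16)
    tries += 1
    d = h32(x)
    if d in seen and seen[d] != x:
        print(f"collision after {tries} inputs")   # typically tens of thousands
        break
    seen[d] = x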
https://en.wikipedia.org/wiki/Collision_resistance
In cryptography, a universal one-way hash function (UOWHF, often pronounced "woof") is a type of universal hash function of particular importance to cryptography. UOWHFs are proposed as an alternative to collision-resistant hash functions (CRHFs). CRHFs have a strong collision-resistance property: it is hard, given randomly chosen hash function parameters, to find any collision of the hash function. In contrast, UOWHFs require only that it be hard to find a collision where one preimage is chosen independently of the hash function parameters. The primitive was suggested by Moni Naor and Moti Yung, and is also known as "target collision resistant" hash functions; it was employed to construct general digital signature schemes without trapdoor functions, and also within chosen-ciphertext secure public key encryption schemes. The UOWHF family contains a finite number of hash functions, each having the same probability of being used.

The security property of a UOWHF is as follows. Let A be an algorithm that operates in two phases: first, before the hash function parameters are known, A commits to an input x; then, once random parameters k are drawn, A attempts to find a second input x′ ≠ x such that h_k(x′) = h_k(x). Then for all polynomial-time A, the probability that A succeeds is negligible.

UOWHFs are thought to be less computationally expensive than CRHFs, and are most often used for efficiency purposes in schemes where the choice of the hash function happens at some stage of execution, rather than beforehand. For instance, the Cramer–Shoup cryptosystem uses a UOWHF as part of the validity check in its ciphertexts.
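The two-phase game just described can be sketched with a toy keyed family; the family h_k(x) = SHA-256(k ∥ x) truncated to 16 bits is an illustrative assumption, made deliberately weak so the game completes quickly:

import hashlib
import os

def h(k: bytes, x: bytes) -> bytes:
    # Toy keyed family with a 16-bit output (weak on purpose).
    return hashlib.sha256(k + x).digest()[:2]

x = b"target chosen before the key"   # phase 1: commit to x independently of k
k = os.urandom(16)                    # only now are the parameters drawn

# Phase 2: the adversary searches for a second preimage under h_k.
target = h(k, x)
for i in range(1 << 20):
    cand = i.to_bytes(3, "big")
    if cand != x and h(k, cand) == target:
        print("second preimage found:", cand.hex())
        break

With a realistic output length the phase-2 search becomes infeasible; the point of the definition is that the adversary cannot exploit the parameters when choosing the target, unlike in the full collision-resistance game.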
https://en.wikipedia.org/wiki/Universal_one-way_hash_function
CPU shielding is a practice where, on a multiprocessor system or on a CPU with multiple cores, real-time tasks can run on one CPU or core while non-real-time tasks run on another. The operating system must be able to set a CPU affinity for both processes and interrupts. In Linux, in order to shield CPUs from individual interrupts being serviced on them, you have to make sure that the relevant kernel configuration parameter is set.
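A minimal sketch of the process side of this on Linux, using Python's wrapper around the sched_setaffinity system call. The CPU numbering is an assumption: here CPU 0 is imagined as the shielded real-time core, and the machine is assumed to have at least four CPUs.

import os

# pid 0 means "this process"; restrict it to CPUs 1-3, keeping it off CPU 0.
os.sched_setaffinity(0, {1, 2, 3})
print(os.sched_getaffinity(0))   # the set of CPUs this process may now use

Interrupt affinity is set separately (per-IRQ, via the kernel's IRQ affinity interfaces) rather than through the process scheduler.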
https://en.wikipedia.org/wiki/CPU_shielding
Exception(s), The Exception(s), or exceptional may refer to:
https://en.wikipedia.org/wiki/Exception_(disambiguation)
In real-time computer graphics, geometry instancing is the practice of rendering multiple copies of the same mesh in a scene at once. This technique is primarily used for objects such as trees, grass, or buildings, which can be represented as repeated geometry without appearing unduly repetitive, but it may also be used for characters. Although vertex data is duplicated across all instanced meshes, each instance may have other differentiating parameters (such as color, or skeletal animation pose) changed in order to reduce the appearance of repetition.

Starting in Direct3D version 9, Microsoft included support for geometry instancing. This method improves the potential runtime performance of rendering instanced geometry by explicitly allowing multiple copies of a mesh to be rendered sequentially, specifying the differentiating parameters for each in a separate stream. The same functionality is available in Vulkan core and in the OpenGL core in versions 3.1 and up, and may be accessed in some earlier implementations using the EXT_draw_instanced extension.

Geometry instancing in Houdini, Maya or other 3D packages usually involves mapping a static or pre-animated object or geometry to particles or arbitrary points in space, which can then be rendered by almost any offline renderer. Geometry instancing in offline rendering is useful for creating things like swarms of insects, in which each one can be detailed but still behaves in a realistic way that does not have to be determined by the animator. Most packages allow variation of the material or material parameters on a per-instance basis, which helps ensure that instances do not appear to be exact copies of each other. In Houdini, many object-level attributes (such as scale) can also be varied on a per-instance basis. Because instancing geometry in most 3D packages only references the original object, file sizes are kept very small, and changing the original changes all of the instances.

In many offline renderers, such as Pixar's PhotoRealistic RenderMan, instancing is achieved by using delayed-load render procedurals to load geometry only when the bucket containing the instance is actually being rendered. This means that the geometry for all the instances does not have to be in memory at once.
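A CPU-side illustration of the idea (not a GPU implementation): one shared vertex list plus a small per-instance stream of differentiating parameters, here an offset and a color. A real renderer would upload both arrays and issue a single instanced draw call instead of the loop below.

mesh = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]   # shared triangle, stored once

instances = [                                  # per-instance parameter stream
    {"offset": (0.0, 0.0), "color": "green"},
    {"offset": (2.0, 0.0), "color": "darkgreen"},
    {"offset": (4.0, 0.5), "color": "olive"},
]

for inst in instances:                         # conceptually, one instanced draw
    ox, oy = inst["offset"]
    placed = [(x + ox, y + oy) for x, y in mesh]
    print(inst["color"], placed)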
https://en.wikipedia.org/wiki/Geometry_instancing
A computer virus[1] is a type of malware that, when executed, replicates itself by modifying other computer programs and inserting its own code into those programs.[2][3] If this replication succeeds, the affected areas are then said to be "infected" with a computer virus, a metaphor derived from biological viruses.[4]

Computer viruses generally require a host program.[5] The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. By contrast, a computer worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks.[6][7]

Virus writers use social engineering deceptions and exploit detailed knowledge of security vulnerabilities to initially infect systems and to spread the virus. Viruses use complex anti-detection/stealth strategies to evade antivirus software.[8] Motives for creating viruses can include seeking profit (e.g., with ransomware), desire to send a political message, personal amusement, to demonstrate that a vulnerability exists in software, for sabotage and denial of service, or simply a wish to explore cybersecurity issues, artificial life and evolutionary algorithms.[9]

As of 2013, computer viruses caused billions of dollars' worth of economic damage each year.[10] In response, an industry of antivirus software has cropped up, selling or freely distributing virus protection to users of various operating systems.[11]

The first academic work on the theory of self-replicating computer programs was done in 1949 by John von Neumann, who gave lectures at the University of Illinois about the "Theory and Organization of Complicated Automata". The work of von Neumann was later published as the "Theory of self-reproducing automata". In his essay von Neumann described how a computer program could be designed to reproduce itself.[12] Von Neumann's design for a self-reproducing computer program is considered the world's first computer virus, and he is considered to be the theoretical "father" of computer virology.[13]

In 1972, Veith Risak, directly building on von Neumann's work on self-replication, published his article "Selbstreproduzierende Automaten mit minimaler Informationsübertragung" (Self-reproducing automata with minimal information exchange).[14] The article describes a fully functional virus written in the assembler programming language for a SIEMENS 4004/35 computer system. In 1980, Jürgen Kraus wrote his Diplom thesis "Selbstreproduktion bei Programmen" (Self-reproduction of programs) at the University of Dortmund.[15] In his work Kraus postulated that computer programs can behave in a way similar to biological viruses.

The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s.[16] Creeper was an experimental self-replicating program written by Bob Thomas at BBN Technologies in 1971.[17] Creeper used the ARPANET to infect DEC PDP-10 computers running the TENEX operating system.[18] Creeper gained access via the ARPANET and copied itself to the remote system, where the message "I'M THE CREEPER. CATCH ME IF YOU CAN!"
In 1982, a program called "Elk Cloner" was the first personal computer virus to appear "in the wild"—that is, outside the single computer or computer lab where it was created.[21] Written in 1981 by Richard Skrenta, a ninth grader at Mount Lebanon High School near Pittsburgh, it attached itself to the Apple DOS 3.3 operating system and spread via floppy disk.[21] On its 50th use the Elk Cloner virus would be activated, infecting the personal computer and displaying a short poem beginning "Elk Cloner: The program with a personality."

In 1984, Fred Cohen from the University of Southern California wrote his paper "Computer Viruses – Theory and Experiments".[22] It was the first paper to explicitly call a self-reproducing program a "virus", a term introduced by Cohen's mentor Leonard Adleman.[23] In 1987, Cohen published a demonstration that there is no algorithm that can perfectly detect all possible viruses.[24] Cohen's theoretical compression virus[25] was an example of a virus which was not malicious software (malware), but was putatively benevolent (well-intentioned). However, antivirus professionals do not accept the concept of "benevolent viruses", as any desired function can be implemented without involving a virus (automatic compression, for instance, is available under Windows at the choice of the user). Any virus will by definition make unauthorised changes to a computer, which is undesirable even if no damage is done or intended. The first page of Dr Solomon's Virus Encyclopaedia explains the undesirability of viruses, even those that do nothing but reproduce.[26][27]

An article describing "useful virus functionalities" was published by J. B. Gunn under the title "Use of virus functions to provide a virtual APL interpreter under user control" in 1984.[28] The first IBM PC compatible virus in the "wild" was a boot sector virus dubbed (c)Brain,[29] created in 1986 and released in 1987 by Amjad Farooq Alvi and Basit Farooq Alvi in Lahore, Pakistan, reportedly to deter unauthorized copying of the software they had written.[30]

The first virus to specifically target Microsoft Windows, WinVir, was discovered in April 1992, two years after the release of Windows 3.0.[31] The virus did not contain any Windows API calls, instead relying on DOS interrupts. A few years later, in February 1996, Australian hackers from the virus-writing crew VLAD created the Bizatch virus (also known as the "Boza" virus), which was the first known virus to specifically target Windows 95.[32] This virus attacked the new portable executable (PE) files introduced in Windows 95.[33] In late 1997 the encrypted, memory-resident stealth virus Win32.Cabanas was released—the first known virus to target Windows NT (it was also able to infect Windows 3.0 and Windows 9x hosts).[34]

Even home computers were affected by viruses. The first one to appear on the Amiga was a boot sector virus called SCA virus, which was detected in November 1987.[35] By 1988, one sysop reportedly found that viruses infected 15% of the software available for download on his BBS.[36]

A computer virus generally contains three parts: the infection mechanism, which finds and infects new files; the payload, which is the malicious code to execute; and the trigger, which determines when to activate the payload.[37]

Virus phases are the life cycle of the computer virus, described by analogy to biology.
This life cycle can be divided into four phases: the dormant phase, in which the virus is idle; the propagation phase, in which it copies itself into other programs or disk areas; the triggering phase, in which some event or condition activates it; and the execution phase, in which the payload is released.

Computer viruses infect a variety of different subsystems on their host computers and software.[45] One manner of classifying viruses is to analyze whether they reside in binary executables (such as .EXE or .COM files), data files (such as Microsoft Word documents or PDF files), or in the boot sector of the host's hard drive (or some combination of all of these).[46][47]

A memory-resident virus (or simply "resident virus") installs itself as part of the operating system when executed, after which it remains in RAM from the time the computer is booted up to when it is shut down. Resident viruses overwrite interrupt handling code or other functions, and when the operating system attempts to access the target file or disk sector, the virus code intercepts the request and redirects the control flow to the replication module, infecting the target. In contrast, a non-memory-resident virus (or "non-resident virus"), when executed, scans the disk for targets, infects them, and then exits (i.e. it does not remain in memory after it is done executing).[48]

Many common applications, such as Microsoft Outlook and Microsoft Word, allow macro programs to be embedded in documents or emails, so that the programs may be run automatically when the document is opened. A macro virus (or "document virus") is a virus that is written in a macro language and embedded into these documents, so that when users open the file, the virus code is executed and can infect the user's computer. This is one of the reasons it is dangerous to open unexpected or suspicious attachments in e-mails.[49][50] While not opening attachments in e-mails from unknown persons or organizations can help to reduce the likelihood of contracting a virus, in some cases the virus is designed so that the e-mail appears to be from a reputable organization (e.g., a major bank or credit card company).

Boot sector viruses specifically target the boot sector and/or the Master Boot Record[51] (MBR) of the host's hard disk drive, solid-state drive, or removable storage media (flash drives, floppy disks, etc.).[52] The most common way boot sector viruses are transmitted is via physical media: an infected floppy disk or USB flash drive connected to a computer transfers data when its volume boot record (VBR) is read, and then modifies or replaces the existing boot code. The next time the user tries to start the desktop, the virus will immediately load and run as part of the master boot record.[53]

Email viruses are viruses that intentionally, rather than accidentally, use the email system to spread. While virus-infected files may be accidentally sent as email attachments, email viruses are aware of email system functions. They generally target a specific type of email system (Microsoft Outlook is the most commonly used), harvest email addresses from various sources, and may append copies of themselves to all email sent, or may generate email messages containing copies of themselves as attachments.[54]

To avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially on the DOS platform, make sure that the "last modified" date of a host file stays the same when the file is infected by the virus. This approach does not fool antivirus software, however, especially software that maintains and dates cyclic redundancy checks on file changes.[55] Some viruses can infect files without increasing their sizes or damaging the files. They accomplish this by overwriting unused areas of executable files. These are called cavity viruses.
For example, the CIH virus, or Chernobyl virus, infects Portable Executable files. Because those files contain many empty gaps, the virus, which was 1 KB in length, did not add to the size of the file.[56] Some viruses try to avoid detection by killing the tasks associated with antivirus software before the software can detect them (for example, Conficker). A virus may also hide its presence using a rootkit, by not showing itself on the list of system processes or by disguising itself within a trusted process.[57] In the 2010s, as computers and operating systems grow larger and more complex, old hiding techniques need to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permissions for every kind of file access.[citation needed] In addition, only a small fraction of known viruses actually cause real incidents, primarily because many viruses remain below the theoretical epidemic threshold.[58]

While some kinds of antivirus software employ various techniques to counter stealth mechanisms, once an infection occurs any recourse to "clean" the system is unreliable. In Microsoft Windows operating systems, the NTFS file system is proprietary, which leaves antivirus software little alternative but to send "read" requests to the Windows files that handle such requests. Some viruses trick antivirus software by intercepting its requests to the operating system. A virus can hide by intercepting the request to read the infected file, handling the request itself, and returning an uninfected version of the file to the antivirus software. The interception can occur by code injection into the actual operating system files that would handle the read request. Thus, antivirus software attempting to detect the virus will either not be permitted to read the infected file, or the "read" request will be served with the uninfected version of the same file.[59]

The only reliable method to avoid "stealth" viruses is to boot from a medium that is known to be "clean". Security software can then be used to check the dormant operating system files. Most security software relies on virus signatures, or employs heuristics.[60][61] Security software may also use a database of file "hashes" for Windows OS files, so that it can identify altered files and request Windows installation media to replace them with authentic versions. In older versions of Windows, the cryptographic hash functions of Windows OS files stored in Windows—to allow file integrity/authenticity to be checked—could be overwritten so that the System File Checker would report that altered system files are authentic; using file hashes to scan for altered files would therefore not always guarantee finding an infection.[62]

Most modern antivirus programs try to find virus patterns inside ordinary programs by scanning them for so-called virus signatures.[63] Different antivirus programs employ different search methods when identifying viruses. If a virus scanner finds such a pattern in a file, it will perform other checks to make sure that it has found the virus, and not merely a coincidental sequence in an innocent file, before it notifies the user that the file is infected. The user can then delete, or (in some cases) "clean" or "heal", the infected file. Some viruses employ techniques that make detection by means of signatures difficult, though probably not impossible: these viruses modify their code on each infection, so that each infected file contains a different variant of the virus.[citation needed]
One method of evading signature detection is to use simple encryption to encipher (encode) the body of the virus, leaving only the encryption module and a static cryptographic key in cleartext, which does not change from one infection to the next.[64] In this case, the virus consists of a small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for each infected file, the only part of the virus that remains constant is the decrypting module, which would (for example) be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it can still detect the decrypting module, which still makes indirect detection of the virus possible. Since these would be symmetric keys, stored on the infected host, it is entirely possible to decrypt the final virus, but this is probably not required, since self-modifying code is such a rarity that finding some may be reason enough for virus scanners to at least "flag" the file as suspicious.[citation needed] An old but compact method is the use of arithmetic operations like addition or subtraction, and of logical operations such as XORing,[65] where each byte in a virus is XORed with a constant, so that the exclusive-or operation has only to be repeated for decryption. It is suspicious for code to modify itself, so the code that performs the encryption/decryption may itself be part of the signature in many virus definitions.[citation needed] A simpler, older approach did not use a key: the encryption consisted only of operations with no parameters, like incrementing and decrementing, bitwise rotation, arithmetic negation, and logical NOT.[65] Some viruses, called polymorphic viruses, will employ a means of encryption inside an executable in which the virus is encrypted under certain events, such as the virus scanner being disabled for updates or the computer being rebooted.[66] This is called cryptovirology.
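The XOR scheme described above is easy to demonstrate. A minimal sketch, with illustrative byte strings rather than real virus code, shows why a single routine serves as both encryptor and decryptor, and why the unchanging routine itself remains a usable signature:

```python
def xor_transform(data: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte constant. XOR is its own inverse,
    so the identical routine both enciphers and deciphers."""
    return bytes(b ^ key for b in data)

body = b"illustrative payload bytes"   # stand-in, not real virus code
key = 0x5A                             # the constant left in cleartext
enciphered = xor_transform(body, key)
assert xor_transform(enciphered, key) == body  # one more pass restores it
```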
Polymorphic code was the first technique that posed a serious threat to virus scanners. Just like regular encrypted viruses, a polymorphic virus infects files with an encrypted copy of itself, which is decoded by a decryption module. In the case of polymorphic viruses, however, this decryption module is also modified on each infection. A well-written polymorphic virus therefore has no parts which remain identical between infections, making it very difficult to detect directly using "signatures".[67][68] Antivirus software can detect it by decrypting the viruses using an emulator, or by statistical pattern analysis of the encrypted virus body. To enable polymorphic code, the virus has to have a polymorphic engine (also called a "mutating engine" or "mutation engine") somewhere in its encrypted body. See polymorphic code for technical detail on how such engines operate.[69]

Some viruses employ polymorphic code in a way that constrains the mutation rate of the virus significantly. For example, a virus can be programmed to mutate only slightly over time, or it can be programmed to refrain from mutating when it infects a file on a computer that already contains copies of the virus. The advantage of using such slow polymorphic code is that it makes it more difficult for antivirus professionals and investigators to obtain representative samples of the virus, because "bait" files that are infected in one run will typically contain identical or similar samples of the virus. This makes it more likely that detection by the virus scanner will be unreliable, and that some instances of the virus may be able to avoid detection.

To avoid being detected by emulation, some viruses rewrite themselves completely each time they are to infect new executables. Viruses that utilize this technique are said to be in metamorphic code. To enable metamorphism, a "metamorphic engine" is needed. A metamorphic virus is usually very large and complex. For example, W32/Simile consisted of over 14,000 lines of assembly language code, 90% of which is part of the metamorphic engine.[70][71]

Damage is due to causing system failure, corrupting data, wasting computer resources, increasing maintenance costs or stealing personal information.[10] Even though no antivirus software can uncover all computer viruses (especially new ones), computer security researchers are actively searching for new ways to enable antivirus solutions to more effectively detect emerging viruses, before they become widely distributed.[72]

A power virus is a computer program that executes specific machine code to reach the maximum CPU power dissipation (thermal energy output for the central processing units).[73] Computer cooling apparatus are designed to dissipate power up to the thermal design power, rather than maximum power, and a power virus could cause the system to overheat if it does not have logic to stop the processor; this may cause permanent physical damage. Power viruses can be malicious, but are often suites of test software used for integration testing and thermal testing of computer components during the design phase of a product, or for product benchmarking.[74] Stability test applications are similar programs which have the same effect as power viruses (high CPU usage) but stay under the user's control. They are used for testing CPUs, for example, when overclocking. A spinlock in a poorly written program may cause similar symptoms if it lasts sufficiently long. Different micro-architectures typically require different machine code to hit their maximum power, and examples of such machine code do not appear to be distributed in CPU reference materials.[75]

As software is often designed with security features to prevent unauthorized use of system resources, many viruses must exploit and manipulate security bugs, which are security defects in a system or application software, to spread themselves and infect other computers. Software development strategies that produce large numbers of "bugs" will generally also produce potential exploitable "holes" or "entrances" for the virus.

To replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs (see code injection). If a user attempts to launch an infected program, the virus's code may be executed simultaneously.[76] In operating systems that use file extensions to determine program associations (such as Microsoft Windows), the extensions may be hidden from the user by default. This makes it possible to create a file that is of a different type than it appears to the user. For example, an executable may be created and named "picture.png.exe", in which the user sees only "picture.png" and therefore assumes that this file is a digital image and most likely safe, yet when opened it runs the executable on the client machine.[77]
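As a defensive illustration, a few lines of Python can flag such deceptive double extensions. The extension sets below are illustrative examples, not a complete policy:

```python
from pathlib import Path

def looks_deceptive(filename: str) -> bool:
    """Flag names like 'picture.png.exe', where an executable extension
    hides behind a benign-looking one."""
    executable = {".exe", ".com", ".scr", ".bat"}      # illustrative lists
    benign = {".png", ".jpg", ".pdf", ".txt", ".doc"}
    suffixes = [s.lower() for s in Path(filename).suffixes]
    return (len(suffixes) >= 2
            and suffixes[-1] in executable
            and suffixes[-2] in benign)

print(looks_deceptive("picture.png.exe"))  # True
print(looks_deceptive("picture.png"))      # False
```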
Viruses may be installed on removable media, such as flash drives. The drives may be left in the parking lot of a government building or other target, in the hope that curious users will insert a drive into a computer. In a 2015 experiment, researchers at the University of Michigan found that 45–98 percent of users would plug in a flash drive of unknown origin.[78]

The vast majority of viruses target systems running Microsoft Windows. This is due to Microsoft's large market share of desktop computer users.[79] The diversity of software systems on a network limits the destructive potential of viruses and malware.[a] Open-source operating systems such as Linux allow users to choose from a variety of desktop environments, packaging tools, etc., which means that malicious code targeting any of these systems will only affect a subset of all users. Many Windows users run the same set of applications, enabling viruses to spread rapidly among Microsoft Windows systems by targeting the same exploits on large numbers of hosts.[80][81][82][83]

While Linux and Unix in general have always natively prevented normal users from making changes to the operating system environment without permission, Windows users are generally not prevented from making these changes, meaning that viruses can easily gain control of the entire system on Windows hosts. This difference has continued partly due to the widespread use of administrator accounts in contemporary versions like Windows XP. In 1997, researchers created and released a virus for Linux, known as "Bliss".[84] Bliss, however, requires that the user run it explicitly, and it can only infect programs that the user has access to modify. Unlike Windows users, most Unix users do not log in as an administrator, or "root user", except to install or configure software; as a result, even if a user ran the virus, it could not harm their operating system. The Bliss virus never became widespread and remains chiefly a research curiosity. Its creator later posted the source code to Usenet, allowing researchers to see how it worked.[85]

Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently. Personal computers of the era would attempt to boot first from a floppy if one had been left in the drive. Until floppy disks fell out of use, this was the most successful infection strategy, and boot sector viruses were the most common in the "wild" for many years.

Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase in bulletin board system (BBS) use, modem use, and software sharing. Bulletin board–driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSs.[86][87] Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by other computers.[88]

Macro viruses have become common since the mid-1990s.
Most of these viruses are written in the scripting languages for Microsoft programs such as Microsoft Word and Microsoft Excel, and spread throughout Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected email messages, those that did made use of the Microsoft Outlook Component Object Model (COM) interface.[89][90] Some old versions of Microsoft Word allow macros to replicate themselves with additional blank lines. If two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus unique from the "parents".[91]

A virus may also send a web address link as an instant message to all the contacts (e.g., friends' and colleagues' e-mail addresses) stored on an infected machine. If the recipient, thinking the link is from a friend (a trusted source), follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating.[92] Viruses that spread using cross-site scripting were first reported in 2002,[93] and were academically demonstrated in 2005.[94] There have been multiple instances of cross-site scripting viruses in the "wild", exploiting websites such as MySpace (with the Samy worm) and Yahoo!.

In 1989 the ADAPSO Software Industry Division published Dealing With Electronic Vandalism,[95] in which it followed the risk of data loss with "the added risk of losing customer confidence."[96][97][98]

Many users install antivirus software that can detect and eliminate known viruses when the computer attempts to download or run the executable file (which may be distributed as an email attachment, or on USB flash drives, for example). Some antivirus software blocks known malicious websites that attempt to install malware. Antivirus software does not change the underlying capability of hosts to transmit viruses. Users must update their software regularly to patch security vulnerabilities ("holes"). Antivirus software also needs to be regularly updated to recognize the latest threats, because malicious hackers and other individuals are always creating new viruses. The German AV-TEST Institute publishes evaluations of antivirus software for Windows[99] and Android.[100]

Examples of Microsoft Windows antivirus and anti-malware software include the optional Microsoft Security Essentials[101] (for Windows XP, Vista and Windows 7) for real-time protection, the Windows Malicious Software Removal Tool[102] (now included with Windows (Security) Updates on "Patch Tuesday", the second Tuesday of each month), and Windows Defender (an optional download in the case of Windows XP).[103] Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use).[104] Some such free programs are almost as good as commercial competitors.[105] Common security vulnerabilities are assigned CVE IDs and listed in the US National Vulnerability Database. Secunia PSI[106] is an example of software, free for personal use, that will check a PC for vulnerable out-of-date software and attempt to update it. Ransomware and phishing scam alerts appear as press releases on the Internet Crime Complaint Center noticeboard.
Ransomware is a virus that posts a message on the user's screen saying that the screen or system will remain locked or unusable until a ransom payment is made. Phishing is a deception in which the malicious individual pretends to be a friend, computer security expert, or other benevolent individual, with the goal of convincing the targeted individual to reveal passwords or other personal information.

Other commonly used preventive measures include timely operating system updates, software updates, careful Internet browsing (avoiding shady websites), and installation of only trusted software.[107] Certain browsers flag sites that have been reported to Google and that have been confirmed as hosting malware by Google.[108][109]

There are two common methods that an antivirus software application uses to detect viruses, as described in the antivirus software article. The first, and by far the most common, method of virus detection is using a list of virus signature definitions. This works by examining the content of the computer's memory (its random-access memory (RAM) and boot sectors) and the files stored on fixed or removable drives (hard drives, floppy drives, or USB flash drives), and comparing those files against a database of known virus "signatures". Virus signatures are just strings of code that are used to identify individual viruses; for each virus, the antivirus designer tries to choose a unique signature string that will not be found in a legitimate program. Different antivirus programs use different "signatures" to identify viruses. The disadvantage of this detection method is that users are only protected from viruses that are detected by signatures in their most recent virus definition update, and not from new viruses (see "zero-day attack").[110]
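At its simplest, signature scanning is substring search over a file's raw bytes. The toy sketch below uses hypothetical signature names and patterns; real engines add wildcard patterns, entry-point analysis, and unpacking of compressed executables:

```python
SIGNATURES = {
    # Hypothetical entries for illustration; real definition databases
    # hold millions of patterns.
    "Example.Virus.A": bytes.fromhex("deadbeef"),
    "Example.Virus.B": b"I'M THE CREEPER",
}

def scan_file(path):
    """Return the names of all known signatures found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]
```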
A second method to find viruses is to use a heuristic algorithm based on common virus behaviors. This method can detect new viruses for which antivirus security firms have yet to define a "signature", but it also gives rise to more false positives than using signatures. False positives can be disruptive, especially in a commercial environment, because they may lead to a company instructing staff not to use the company computer system until IT services have checked it for viruses, slowing down productivity for regular workers.

One may reduce the damage done by viruses by making regular backups of data (and the operating systems) on different media that are either kept unconnected to the system (most of the time, as with a hard drive), read-only, or not accessible for other reasons, such as using different file systems. This way, if data is lost through a virus, one can start again using the backup (which will hopefully be recent).[111] If a backup session on optical media like CD and DVD is closed, it becomes read-only and can no longer be affected by a virus (so long as a virus or infected file was not copied onto the CD/DVD). Likewise, an operating system on a bootable CD can be used to start the computer if the installed operating systems become unusable. Backups on removable media must be carefully inspected before restoration. The Gammima virus, for example, propagates via removable flash drives.[112][113]

Many websites run by antivirus software companies provide free online virus scanning, with limited "cleaning" facilities (after all, the purpose of the websites is to sell antivirus products and services). Some websites—like Google subsidiary VirusTotal.com—allow users to upload one or more suspicious files to be scanned and checked by one or more antivirus programs in one operation.[114][115] Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use).[116] Microsoft offers an optional free antivirus utility called Microsoft Security Essentials, a Windows Malicious Software Removal Tool that is updated as part of the regular Windows update regime, and an older optional anti-malware (malware removal) tool, Windows Defender, which was upgraded to an antivirus product in Windows 8.

Some viruses disable System Restore and other important Windows tools such as Task Manager and CMD. An example of a virus that does this is CiaDoor. Many such viruses can be removed by rebooting the computer, entering Windows "safe mode" with networking, and then using system tools or Microsoft Safety Scanner.[117] System Restore on Windows Me, Windows XP, Windows Vista and Windows 7 can restore the registry and critical system files to a previous checkpoint. Often a virus will cause a system to "hang" or "freeze", and a subsequent hard reboot will render a system restore point from the same day corrupted. Restore points from previous days should work, provided the virus is not designed to corrupt the restore files and does not exist in previous restore points.[118][119]

Microsoft's System File Checker (improved in Windows 7 and later) can be used to check for, and repair, corrupted system files.[120] Restoring an earlier "clean" (virus-free) copy of the entire partition from a cloned disk, a disk image, or a backup copy is one solution—restoring an earlier backup disk "image" is relatively simple to do, usually removes any malware, and may be faster than "disinfecting" the computer—or reinstalling and reconfiguring the operating system and programs from scratch, as described below, then restoring user preferences.[111] Reinstalling the operating system is another approach to virus removal. It may be possible to recover copies of essential user data by booting from a live CD, or connecting the hard drive to another computer and booting from the second computer's operating system, taking great care not to infect that computer by executing any infected programs on the original drive. The original hard drive can then be reformatted and the OS and all programs installed from original media. Once the system has been restored, precautions must be taken to avoid reinfection from any restored executable files.[121]
The first known description of a self-reproducing program in fiction is in the 1970 short story The Scarred Man by Gregory Benford, which describes a computer program called VIRUS that, when installed on a computer with telephone modem dialing capability, randomly dials phone numbers until it hits a modem answered by another computer, and then attempts to program the answering computer with its own program, so that the second computer will also begin dialing random numbers in search of yet another computer to program. The program rapidly spreads exponentially through susceptible computers and can only be countered by a second program called VACCINE.[122] His story was based on an actual computer virus written in FORTRAN that Benford had created and run on the lab computer in the 1960s as a proof of concept, and which he told John Brunner about in 1970.[123]

The idea was explored further in two 1972 novels, When HARLIE Was One by David Gerrold and The Terminal Man by Michael Crichton, and became a major theme of the 1975 novel The Shockwave Rider by John Brunner.[124]

The 1973 Michael Crichton sci-fi film Westworld made an early mention of the concept of a computer virus, as a central plot theme that causes androids to run amok.[125][better source needed] Alan Oppenheimer's character summarizes the problem by stating that "...there's a clear pattern here which suggests an analogy to an infectious disease process, spreading from one...area to the next." To which the replies are: "Perhaps there are superficial similarities to disease" and "I must confess I find it difficult to believe in a disease of machinery."[126]

In 2016, Jussi Parikka announced the creation of the Malware Museum of Art: a collection of malware programs, usually viruses, distributed in the 1980s and 1990s on home computers. The Malware Museum of Art is hosted at the Internet Archive and is curated by Mikko Hyppönen from Helsinki, Finland.[127] The collection allows anyone with a computer to safely experience the virus infections of decades past.[128]

The term "virus" is also misused by extension to refer to other types of malware. "Malware" encompasses computer viruses along with many other forms of malicious software, such as computer "worms", ransomware, spyware, adware, trojan horses, keyloggers, rootkits, bootkits, malicious Browser Helper Objects (BHOs), and other malicious software. The majority of active malware threats are trojan horse programs or computer worms rather than computer viruses. The term computer virus, coined by Fred Cohen in 1985, is a misnomer.[129] Viruses often perform some type of harmful activity on infected host computers, such as acquisition of hard disk space or central processing unit (CPU) time, accessing and stealing private information (e.g., credit card numbers, debit card numbers, phone numbers, names, email addresses, passwords, bank information, house addresses, etc.), corrupting data, displaying political, humorous or threatening messages on the user's screen, spamming their e-mail contacts, logging their keystrokes, or even rendering the computer useless. However, not all viruses carry a destructive "payload" or attempt to hide themselves—the defining characteristic of viruses is that they are self-replicating computer programs that modify other software without user consent by injecting themselves into those programs, similar to a biological virus which replicates within living cells.
https://en.wikipedia.org/wiki/Self-modifying_computer_virus
The term "computer", in use from the early 17th century (the first known written reference dates from 1613),[1]meant "one who computes": a person performing mathematicalcalculations, beforeelectronic calculatorsbecame available.Alan Turingdescribed the "human computer" as someone who is "supposed to be following fixed rules; he has no authority to deviate from them in any detail."[2]Teams of people, often women from the late nineteenth century onwards, were used to undertake long and often tedious calculations; the work was divided so that this could be done in parallel. The same calculations were frequently performed independently by separate teams to check the correctness of the results. Since the end of the 20th century, the term "human computer" has also been applied to individuals with prodigious powers ofmental arithmetic, also known asmental calculators. AstronomersinRenaissancetimes used that term about as often as they called themselves "mathematicians" for their principal work of calculating thepositions of planets. They often hired a "computer" to assist them. For some people, such asJohannes Kepler, assisting a scientist in computation was a temporary position until they moved on to greater advancements. Before he died in 1617,John Napiersuggested ways by which "the learned, who perchance may have plenty of pupils and computers" might construct an improvedlogarithm table.[3]: p.46 Computing became more organized when the FrenchmanAlexis Claude Clairaut(1713–1765) divided the computation to determine the time of the return ofHalley's Cometwith two colleagues,Joseph LalandeandNicole-Reine Lepaute.[4]Human computers continued plotting the future movements of astronomical objects to create celestial tables foralmanacsin the late 1760s.[5] The computers working on theNautical Almanacfor the British Admiralty includedWilliam Wales,Israel LyonsandRichard Dunthorne.[6]The project was overseen byNevil Maskelyne.[7]Maskelyne would borrow tables from other sources as often as he could in order to reduce the number of calculations his team of computers had to make.[8]Women were generally excluded, with some exceptions such asMary Edwardswho worked from the 1780s to 1815 as one of thirty-five computers for the BritishNautical Almanacused for navigation at sea. 
The United States also worked on its own version of a nautical almanac in the 1840s, with Maria Mitchell being one of the best-known computers on the staff.[9]

Other innovations in human computing included the work done by a group of boys who worked in the Octagon Room of the Royal Greenwich Observatory for Astronomer Royal George Airy.[10] Airy's computers, hired after 1835, could be as young as fifteen, and they worked on a backlog of astronomical data.[11] The way that Airy organized the Octagon Room, with a manager, pre-printed computing forms, and standardized methods of calculating and checking results (similar to the way the Nautical Almanac computers operated), would remain a standard for computing operations for the next 80 years.[12]

Women were increasingly involved in computing after 1865.[13] Private companies hired them for computing and to manage office staff.[13]

In the 1870s, the United States Signal Corps created a new way of organizing human computing to track weather patterns.[14] This built on previous work by the US Navy and the Smithsonian meteorological project.[15] The Signal Corps used a small computing staff that processed data that had to be collected quickly and finished in "intensive two-hour shifts".[16] Each individual human computer was responsible for only part of the data.[14]

In the late nineteenth century Edward Charles Pickering organized the "Harvard Computers".[17] The first woman to approach them, Anna Winlock, asked Harvard Observatory for a computing job in 1875.[18] By 1880, all of the computers working at the Harvard Observatory were women.[18] The standard computer pay started at twenty-five cents an hour.[19] There was such huge demand to work there that some women offered to work for the Harvard Computers for free.[20] Many of the women astronomers from this era were computers, possibly the best known being Florence Cushman, Henrietta Swan Leavitt, and Annie Jump Cannon, who worked with Pickering from 1888, 1893, and 1896 respectively. Cannon could classify stars at a rate of three per minute.[21] Mina Fleming, one of the Harvard Computers, published The Draper Catalogue of Stellar Spectra in 1890.[22] The catalogue organized stars by spectral lines[22] and continued to be expanded by the Harvard Computers, who added new stars in successive volumes.[23] Elizabeth Williams was involved in calculations in the search for a new planet, Pluto, at the Lowell Observatory.

In 1893, Francis Galton created the Committee for Conducting Statistical Inquiries into the Measurable Characteristics of Plants and Animals, which reported to the Royal Society.[24] The committee used advanced techniques for scientific research and supported the work of several scientists.[24] W. F. Raphael Weldon, the first scientist supported by the committee, worked with his wife, Florence Tebb Weldon, who was his computer.[24]
Weldon used logarithms and mathematical tables created by August Leopold Crelle, and had no calculating machine.[25] Karl Pearson, who had a lab at the University of London, felt that the work Weldon did was "hampered by the committee".[26] However, Pearson did create a mathematical formula that the committee was able to use for data correlation.[27] Pearson brought his correlation formula to his own Biometrics Laboratory.[27] Pearson had volunteer and salaried computers, both men and women.[28] Alice Lee was one of his salaried computers, working with histograms and chi-squared statistics.[29] Pearson also worked with Beatrice and Frances Cave-Brown-Cave.[29] By 1906, Pearson's lab had mastered the art of mathematical table making.[29]

Human computers were used to compile 18th- and 19th-century Western European mathematical tables, for example those for trigonometry and logarithms. Although these tables were most often known by the name of the principal mathematician involved in the project, they were often in fact the work of an army of unknown and unsung computers. Ever more accurate tables, to a high degree of precision, were needed for navigation and engineering. Approaches differed, but one was to break up the project into a form of piece work completed at home. The computers, often educated middle-class women whom society deemed it unseemly to engage in the professions or go out to work, would receive and send back packets of calculations by post.[30] The Royal Astronomical Society eventually gave space to a new committee, the Mathematical Tables Committee, which was the only professional organization for human computers in 1925.[31]

Human computers were used to predict the effects of building the Afsluitdijk, constructed between 1927 and 1932 in the Zuiderzee in the Netherlands. The computer simulation was set up by Hendrik Lorentz.[32]

A visionary application to meteorology can be found in the scientific work of Lewis Fry Richardson, who in 1922 estimated that 64,000 human computers could forecast the weather for the whole globe by numerically solving the attendant differential primitive equations.[33] Around 1910 he had already used human computers to calculate the stresses inside a masonry dam.[34]

It was not until World War I that computing became a profession. "The First World War required large numbers of human computers. Computers on both sides of the war produced map grids, surveying aids, navigation tables and artillery tables. With the men at war, most of these new computers were women and many were college educated."[35] This would happen again during World War II: as more men joined the fight, college-educated women were left to fill their positions. One of the first female computers, Elizabeth Webb Wilson, was hired by the Army in 1918 and was a graduate of George Washington University. Wilson "patiently sought a war job that would make use of her mathematical skill. In later years, she would claim that the war spared her from the 'Washington social whirl', the rounds of society events that should have procured for her a husband"[35] and she was instead able to have a career. After the war, Wilson continued with a career in mathematics, became an actuary, and turned her focus to life tables.
Human computers played integral roles in the World War II war effort in the United States, and because of the depletion of the male labor force due to the draft, many computers during World War II were women, frequently with degrees in mathematics. In the 1940s, women were hired to examine nuclear and particle tracks left on photographic emulsions.[36] In the Manhattan Project, human computers working with a variety of mechanical aids assisted numerical studies of the complex formulas related to nuclear fission.[37]

Human computers were involved in calculating ballistics tables during World War I.[38] Between the two world wars, computers were used in the Department of Agriculture in the United States and also at Iowa State College.[39] The human computers in these places also used calculating machines and early electrical computers to aid in their work.[40] In the 1930s, the Columbia University Statistical Bureau was created by Benjamin Wood.[41] Organized computing was also established at Indiana University, the Cowles Commission and the National Research Council.[42]

Following World War II, the National Advisory Committee for Aeronautics (NACA) used human computers in flight research to transcribe raw data from celluloid film and oscillograph paper and then, using slide rules and electric calculators, reduce the data to standard engineering units. Margot Lee Shetterly's biographical book, Hidden Figures (made into a movie of the same name in 2016), depicts African-American women who served as human computers at NASA in support of Friendship 7, the first American crewed mission into Earth orbit.[43] NACA had begun hiring black women as computers from 1940.[44] One such computer was Dorothy Vaughan, who began her work in 1943 with the Langley Research Center as a special hire to aid the war effort,[45] and who came to supervise the West Area Computers, a group of African-American women who worked as computers at Langley. Human computing was, at the time, considered menial work.

On November 8, 2019, the Congressional Gold Medal was awarded "In recognition of all the women who served as computers, mathematicians, and engineers at the National Advisory Committee for Aeronautics and the National Aeronautics and Space Administration (NASA) between the 1930s and the 1970s."[46]

As electronic computers became more available, human computers, especially women, were drafted as some of the first computer programmers.[47] Because the six people responsible for setting up problems on the ENIAC (the first general-purpose electronic digital computer, built at the University of Pennsylvania during World War II) were drafted from a corps of human computers, the world's first professional computer programmers were women, namely: Kay McNulty, Betty Snyder, Marlyn Wescoff, Ruth Lichterman, Betty Jean Jennings, and Fran Bilas.[48]

The term "human computer" has recently been used by a group of researchers who refer to their work as "human computation".[49] In this usage, "human computer" refers to activities of humans in the context of human-based computation (HBC). This use of "human computer" is debatable for the following reason: HBC is a computational technique in which a machine outsources certain parts of a task to humans, and those parts are not necessarily algorithmic. In fact, in the context of HBC, humans are most of the time not provided with a sequence of exact steps to execute to yield the desired result; HBC is agnostic about how humans solve the problem. This is why "outsourcing" is the term used in the definition above.
The use of humans in the historical role of "human computers" for HBC is very rare.
https://en.wikipedia.org/wiki/Human_computer
LCS35 is a cryptographic challenge and a time-lock puzzle set by Ron Rivest in 1999. The challenge is to calculate the value

w = 2^(2^t) mod n

where t is a specific 14-digit (or 47-bit) integer, namely 79685186856218, and n is a specific 616-digit (or 2048-bit) integer that is the product of two large primes (which are not given). The value of w can then be used to decrypt the ciphertext z, another 616-digit integer. The plaintext provides the concealed information about the factorisation of n, allowing the solution to be easily verified.

The idea behind the challenge is that the only known way to find the value of w without knowing the factorisation of n is by t successive squarings. The value of t was chosen so that this brute-force calculation would require about 35 years, using 1999 chip speeds as a starting point and taking into account Moore's law. Rivest notes that "just as a failure of Moore's Law could make the puzzle harder than intended, a breakthrough in the art of factoring would make the puzzle easier than intended."

The challenge was set at (and takes its name from) the 35th anniversary celebrations of the MIT Laboratory for Computer Science, now part of the MIT Computer Science and Artificial Intelligence Laboratory.

The LCS35 challenge was solved on April 15, 2019, twenty years later, by programmer Bernard Fabrot.[1][2] The plaintext begins with "!!!Happy Birthday LCS!!!".[3]

On May 14, 2019, Ronald L. Rivest published a new version of LCS35 (named CSAIL2019) to extend the puzzle out to the year 2034.[4]
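The squaring chain itself is only a few lines in any language with big-integer arithmetic. The sketch below uses toy parameters; with the real t = 79685186856218 and the secret 2048-bit n, those roughly 8 × 10^13 sequential squarings are exactly the work the puzzle was designed to require:

```python
def lcs35_brute_force(t: int, n: int) -> int:
    """Compute w = 2^(2^t) mod n by t successive squarings, the only
    known approach when the factorisation of n is secret."""
    w = 2                      # 2^(2^0)
    for _ in range(t):
        w = (w * w) % n        # each squaring doubles the exponent of 2
    return w

# Toy stand-in values only; the real challenge parameters are far larger.
print(lcs35_brute_force(t=10, n=1000003))
```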
https://en.wikipedia.org/wiki/LCS35
In computer science, program synthesis is the task of constructing a program that provably satisfies a given high-level formal specification. In contrast to program verification, the program is to be constructed rather than given; however, both fields make use of formal proof techniques, and both comprise approaches of different degrees of automation. In contrast to automatic programming techniques, specifications in program synthesis are usually non-algorithmic statements in an appropriate logical calculus.[1]

The primary application of program synthesis is to relieve the programmer of the burden of writing correct, efficient code that satisfies a specification. However, program synthesis also has applications to superoptimization and the inference of loop invariants.[2]

During the Summer Institute of Symbolic Logic at Cornell University in 1957, Alonzo Church defined the problem of synthesizing a circuit from mathematical requirements.[3] Even though the work refers only to circuits and not programs, it is considered one of the earliest descriptions of program synthesis, and some researchers refer to program synthesis as "Church's Problem". In the 1960s, a similar idea for an "automatic programmer" was explored by researchers in artificial intelligence.[citation needed]

Since then, various research communities have considered the problem of program synthesis. Notable works include the 1969 automata-theoretic approach by Büchi and Landweber,[4] and the works by Manna and Waldinger (c. 1980). The development of modern high-level programming languages can also be understood as a form of program synthesis.

The early 21st century has seen a surge of practical interest in the idea of program synthesis in the formal verification community and related fields. Armando Solar-Lezama showed that it is possible to encode program synthesis problems in Boolean logic and use algorithms for the Boolean satisfiability problem to automatically find programs.[5]

In 2013, a unified framework for program synthesis problems called syntax-guided synthesis (stylized SyGuS) was proposed by researchers at UPenn, UC Berkeley, and MIT.[6] The input to a SyGuS algorithm consists of a logical specification along with a context-free grammar of expressions that constrains the syntax of valid solutions.[7] For example, to synthesize a function f that returns the maximum of two integers, the logical specification might look like this:

(f(x,y) = x ∨ f(x,y) = y) ∧ f(x,y) ≥ x ∧ f(x,y) ≥ y

and the grammar might allow integer expressions built from the variables x and y, integer constants, and "ite" terms (where "ite" stands for "if-then-else") over comparisons of such expressions. The expression ite(x ≤ y, y, x) would then be a valid solution, because it conforms to the grammar and satisfies the specification.

From 2014 through 2019, the yearly Syntax-Guided Synthesis Competition (SyGuS-Comp) compared the different algorithms for program synthesis in a competitive event.[8] The competition used a standardized input format, SyGuS-IF, based on SMT-Lib 2; a sketch of how SyGuS-IF can encode the problem of synthesizing the maximum of two integers (as presented above), together with the kind of output a compliant solver might return, is given below.
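A minimal sketch of such a SyGuS-IF encoding, using the format's standard constructs (the exact benchmark text may differ):

```
(set-logic LIA)
(synth-fun f ((x Int) (y Int)) Int)
(declare-var x Int)
(declare-var y Int)
(constraint (or (= (f x y) x) (= (f x y) y)))
(constraint (>= (f x y) x))
(constraint (>= (f x y) y))
(check-synth)
```

A compliant solver might then return a definition such as:

```
(define-fun f ((x Int) (y Int)) Int (ite (<= x y) y x))
```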
Counter-example guided inductive synthesis (CEGIS) is an effective approach to building sound program synthesizers.[9][10] CEGIS involves the interplay of two components: a generator, which generates candidate programs, and a verifier, which checks whether the candidates satisfy the specification. Given a set of inputs I, a set of possible programs P, and a specification S, the goal of program synthesis is to find a program p in P such that for all inputs i in I, S(p, i) holds. CEGIS is parameterized over such a generator and verifier, and runs them in a loop, accumulating counter-examples: the generator proposes a candidate consistent with all counter-examples collected so far, and the verifier either accepts the candidate or produces a new counter-example, which is added to the collection (as sketched below). Implementations of CEGIS typically use SMT solvers as verifiers. CEGIS was inspired by counterexample-guided abstraction refinement (CEGAR).[11]
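A toy Python sketch of the CEGIS loop follows; the candidate space, specification, and finite input set are illustrative stand-ins, and a practical verifier would be an SMT solver rather than exhaustive testing:

```python
def cegis(generate, verify, max_rounds=1000):
    """Counter-example guided inductive synthesis, as a loop:
    `generate(examples)` proposes a candidate consistent with the
    counter-examples seen so far; `verify(candidate)` returns None on
    success or a failing input otherwise."""
    examples = []
    for _ in range(max_rounds):
        candidate = generate(examples)
        counter_example = verify(candidate)
        if counter_example is None:
            return candidate          # candidate meets the specification
        examples.append(counter_example)
    raise RuntimeError("no program found within the round limit")

# Toy instance: synthesize max(x, y) from a tiny candidate space P.
programs = [lambda x, y: x,
            lambda x, y: y,
            lambda x, y: x if x >= y else y]

def generate(examples):
    # Return the first candidate consistent with all counter-examples.
    return next(p for p in programs
                if all(p(x, y) == max(x, y) for x, y in examples))

def verify(p):
    # Exhaustively check a finite input set I; return a counter-example.
    for x in range(-3, 4):
        for y in range(-3, 4):
            if p(x, y) != max(x, y):
                return (x, y)
    return None

best = cegis(generate, verify)
print(best(2, 5))                     # 5
```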
The framework of Manna and Waldinger, published in 1980,[12][13] starts from a user-given first-order specification formula. For that formula, a proof is constructed, thereby also synthesizing a functional program from unifying substitutions. The framework is presented in a table layout, with columns containing the assertions, the goals, and the corresponding program terms. Initially, background knowledge, pre-conditions, and post-conditions are entered into the table. After that, appropriate proof rules are applied manually. The framework has been designed to enhance human readability of intermediate formulas: contrary to classical resolution, it does not require clausal normal form, but allows one to reason with formulas of arbitrary structure containing any junctors ("non-clausal resolution"). The proof is complete when true has been derived in the Goals column or, equivalently, false in the Assertions column. Programs obtained by this approach are guaranteed to satisfy the specification formula they started from; in this sense they are correct by construction.[14] Only a minimalist, yet Turing-complete,[15] purely functional programming language, consisting of conditionals, recursion, and arithmetic and other operators,[note 3] is supported. Case studies performed within this framework have synthesized algorithms to compute e.g. division, remainder,[16] square root,[17] term unification,[18] answers to relational database queries[19] and several sorting algorithms.[20][21]

The proof rules include non-clausal resolution and transformation rules, among others; Murray has shown these rules to be complete for first-order logic.[24] In 1986, Manna and Waldinger added generalized E-resolution and paramodulation rules to handle also equality;[25] later, these rules turned out to be incomplete (but nevertheless sound).[26]

As a toy example, a functional program to compute the maximum M of two numbers x and y can be derived as follows.[citation needed] Starting from the requirement description "The maximum is larger than or equal to any given number, and is one of the given numbers", the first-order formula ∀X ∀Y ∃M : X ≤ M ∧ Y ≤ M ∧ (X = M ∨ Y = M) is obtained as its formal translation. This formula is to be proved. By reverse Skolemization,[note 4] the specification in line 10 is obtained, an upper- and lower-case letter denoting a variable and a Skolem constant, respectively.

After applying a transformation rule for the distributive law in line 11, the proof goal is a disjunction, and hence can be split into two cases, viz. lines 12 and 13. Turning to the first case, resolving line 12 with the axiom in line 1 leads to instantiation of the program variable M in line 14. Intuitively, the last conjunct of line 12 prescribes the value that M must take in this case. Formally, the non-clausal resolution rule shown in line 57 above is applied to lines 12 and 1, yielding ¬(true ∧ false) ∧ (x ≤ x ∧ y ≤ x ∧ true), which simplifies to x ≤ x ∧ y ≤ x. In a similar way, line 14 yields line 15 and then line 16 by resolution. Also the second case, x ≤ M ∧ y ≤ M ∧ y = M in line 13, is handled similarly, eventually yielding line 18.

In a last step, both cases (i.e. lines 16 and 18) are joined, using the resolution rule from line 58; to make that rule applicable, the preparatory step 15→16 was needed. Intuitively, line 18 could be read as "in case x ≤ y, the output y is valid (with respect to the original specification)", while line 15 says "in case y ≤ x, the output x is valid"; the step 15→16 established that cases 16 and 18 are complementary.[note 5] Since both lines 16 and 18 come with a program term, a conditional expression results in the program column. Since the goal formula true has been derived, the proof is done, and the program column of the "true" line contains the program.
https://en.wikipedia.org/wiki/Program_synthesis
Arne Carl-August Beurling (3 February 1905 – 20 November 1986) was a Swedish mathematician and professor of mathematics at Uppsala University (1937–1954) and later at the Institute for Advanced Study in Princeton, New Jersey. Beurling worked extensively in harmonic analysis, complex analysis and potential theory. The "Beurling factorization" helped mathematical scientists to understand the Wold decomposition, and inspired further work on the invariant subspaces of linear operators and operator algebras, e.g. Håkan Hedenmalm's factorization theorem for Bergman spaces.

He is perhaps most famous for single-handedly decrypting an early version of the German cipher machine Siemens and Halske T52 in a matter of two weeks during 1940, using only pen and paper. This machine's cipher is generally considered to be more complicated than that of the more famous Enigma machine. Beurling's method of decrypting military telegrams between Norway and Germany worked from June 1940 right up until 1943, when the Germans changed equipment.

Beurling was born on 3 February 1905 in Gothenburg, Sweden, the son of the landowner Konrad Beurling and Baroness Elsa Raab.[1] After graduating in 1924, he enrolled at Uppsala University, where he received a Bachelor of Arts degree in 1926 and, two years later, a Licentiate of Philosophy degree.[1]

Beurling was an assistant teacher at Uppsala University from 1931 to 1933.[1] He received his doctorate in mathematics in 1933 for his dissertation Études sur un problème de majoration.[2] Beurling was a docent of mathematics at Uppsala University from 1933 and then professor of mathematics from 1937 to 1954.[1]

In the summer of 1940 he single-handedly deciphered and reverse-engineered an early version of the Siemens and Halske T52, also known as the Geheimfernschreiber ("secret teletypewriter"), used by Nazi Germany in World War II for sending ciphered messages.[3] The T52 was one of the so-called "Fish cyphers" which, using transposition, created nearly one quintillion (893,622,318,929,520,960) different variations. It took Beurling two weeks to solve the problem using pen and paper. Using Beurling's work, a device was created that enabled Sweden to decipher German teleprinter traffic passing through Sweden from Norway on a cable. In this way, Swedish authorities knew about Operation Barbarossa before it occurred.[4] Since the Swedes would not reveal how this knowledge was attained, the Swedish warning was not treated as credible by the Soviets.[citation needed] This became the foundation for the Swedish National Defence Radio Establishment (FRA). The cypher in the Geheimfernschreiber is generally considered to be more complex than the cypher used in the Enigma machines.[5]

He was a visiting professor at Harvard University from 1948 to 1949.[6] From 1954 he was professor at the Institute for Advanced Study in Princeton, New Jersey, United States, where he took over Albert Einstein's office.[7] He was the doctoral advisor of Lennart Carleson and Carl-Gustav Esseen.

Arne Beurling was first married (1936–40) to Britta Östberg (born 1907), daughter of Henrik Östberg and Gerda Nilsson. In 1950 he married Karin Lindblad (1920–2006), daughter of ironmonger Henric Lindblad and Wanja Bengtsson.[1] Karin was a distinguished Ph.D. student from Uppsala University.
When they lived in Princeton, she worked in a biochemistry lab at Princeton University.[8] He had two children from his first marriage: Pehr-Henrik (1936–1962) and Jane (1938–1992).[1] Beurling's great-grandfather was Pehr Henrik Beurling (1758 or 1763–1806), who founded a high-quality clock factory in Stockholm in 1783. Arne Beurling died in 1986 and was buried at Norra begravningsplatsen in Solna.[9] Beurling's prowess as a cryptanalyst is the subject of the 2005 short opera Krypto CEG by Jonas Sjöstrand and Kimmo Eriksson.
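As background to the cryptologic feat described above: the T52 combined an XOR of the 5-bit teleprinter code with keystream bits and a bitwise transposition. The toy Python sketch below illustrates only that general XOR-plus-transposition principle; the keystream, permutation, and group handling here are invented and bear no relation to the actual T52 wheel logic or to Beurling's attack.

# Toy XOR-plus-transposition cipher in the spirit of teleprinter machines
# such as the T52. All parameters are invented for illustration.
def encrypt(plain_bits, key_bits, perm):
    # Step 1: XOR each plaintext bit with a repeating keystream.
    xored = [b ^ key_bits[i % len(key_bits)] for i, b in enumerate(plain_bits)]
    # Step 2: transpose the bits within each successive 5-bit group.
    cipher = []
    for i in range(0, len(xored), 5):
        group = xored[i:i + 5]
        if len(group) == 5:
            group = [group[p] for p in perm]
        cipher.extend(group)
    return cipher

def decrypt(cipher_bits, key_bits, perm):
    # Invert the transposition, then XOR with the same keystream.
    inverse = [perm.index(j) for j in range(5)]
    undone = []
    for i in range(0, len(cipher_bits), 5):
        group = cipher_bits[i:i + 5]
        if len(group) == 5:
            group = [group[p] for p in inverse]
        undone.extend(group)
    return [b ^ key_bits[i % len(key_bits)] for i, b in enumerate(undone)]

msg  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # two 5-bit teleprinter characters
key  = [1, 1, 0, 1, 0, 0, 1]            # invented keystream
perm = [2, 0, 4, 1, 3]                  # invented transposition
assert decrypt(encrypt(msg, key, perm), key, perm) == msg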
https://en.wikipedia.org/wiki/Arne_Beurling
Deep learning is a subset of machine learning that focuses on utilizing multilayered neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be supervised, semi-supervised or unsupervised.[2] Some common deep learning network architectures include fully connected networks, deep belief networks, recurrent neural networks, convolutional neural networks, generative adversarial networks, transformers, and neural radiance fields. These architectures have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.[3][4][5] Early forms of neural networks were inspired by information processing and distributed communication nodes in biological systems, particularly the human brain. However, current neural networks are not intended to model the brain function of organisms, and are generally seen as low-quality models for that purpose.[6] Most modern deep learning models are based on multi-layered neural networks such as convolutional neural networks and transformers, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.[7] Fundamentally, deep learning refers to a class of machine learning algorithms in which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. For example, in an image recognition model, the raw input may be an image (represented as a tensor of pixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn on its own which features to place at which level. Prior to deep learning, machine learning techniques often involved hand-crafted feature engineering to transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted: the model discovers useful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.[8][2] The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized).
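As a concrete illustration of the layered transformations described above, here is a minimal NumPy sketch of a feedforward network with two hidden layers; by the definition just given, its CAP depth is 3 (two hidden layers plus the parameterized output layer). All sizes and weights are arbitrary placeholders, not a trained model.

import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

# Two hidden layers + output layer => CAP depth = 2 + 1 = 3.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # input dim 8 -> hidden 16
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)  # hidden 16 -> hidden 16
W3, b3 = rng.normal(size=(3, 16)), np.zeros(3)    # hidden 16 -> 3 outputs

def forward(x):
    h1 = relu(W1 @ x + b1)   # first representational layer
    h2 = relu(W2 @ h1 + b2)  # second, more abstract layer
    return W3 @ h2 + b3      # output layer (also parameterized)

print(forward(rng.normal(size=8)))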
For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited.[9] No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. A CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function.[10] Beyond that, more layers do not add to the network's ability to approximate functions. Deep models (CAP > two) are able to extract better features than shallow models, and hence the extra layers help in learning the features effectively. Deep learning architectures can be constructed with a greedy layer-by-layer method.[11] Deep learning helps to disentangle these abstractions and pick out which features improve performance.[8] Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data is more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are deep belief networks.[8][12] The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986,[13] and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons,[14][15] although the history of the term's appearance is apparently more complicated.[16] Deep neural networks are generally interpreted in terms of the universal approximation theorem[17][18][19][20][21] or probabilistic inference.[22][23][8][9][24] The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions.[17][18][19][20] In 1989, the first proof was published by George Cybenko for sigmoid activation functions,[17] and it was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik.[18] More recent work showed that universal approximation also holds for non-bounded activation functions such as Kunihiko Fukushima's rectified linear unit.[25][26] The universal approximation theorem for deep neural networks concerns the capacity of networks with bounded width whose depth is allowed to grow. Lu et al.[21] proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator. The probabilistic interpretation[24] derives from the field of machine learning. It features inference,[23][7][8][9][12][24] as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function.[24] The probabilistic interpretation led to the introduction of dropout as a regularizer in neural networks. The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop.[27] There are two types of artificial neural network (ANN): the feedforward neural network (FNN), or multilayer perceptron (MLP), and the recurrent neural network (RNN). RNNs have cycles in their connectivity structure; FNNs do not. In the 1920s, Wilhelm Lenz and Ernst Ising created the Ising model,[28][29] which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements.
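A network of such neuron-like threshold elements is easy to sketch. The following toy NumPy example (sizes, weights, and update count are arbitrary) performs Hopfield-style asynchronous threshold updates with symmetric couplings; each update can only lower the Ising-style energy of the state.

import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2             # symmetric couplings, as in the Ising model
np.fill_diagonal(W, 0.0)      # no self-coupling

s = rng.choice([-1.0, 1.0], size=n)   # initial +/-1 states
for _ in range(50):                    # asynchronous threshold updates
    i = rng.integers(n)
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0

# The state settles toward a local minimum of the energy -1/2 * s^T W s.
print(s, -0.5 * s @ W @ s)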
In 1972, Shun'ichi Amari made this architecture adaptive.[30][31] His learning RNN was republished by John Hopfield in 1982.[32] Other early recurrent neural networks were published by Kaoru Nakano in 1971.[33][34] Already in 1948, Alan Turing produced work on "Intelligent Machinery" that was not published in his lifetime,[35] containing "ideas related to artificial evolution and learning RNNs".[31] Frank Rosenblatt (1958)[36] proposed the perceptron, an MLP with three layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight).[37]: section 16. The book cites an earlier network by R. D. Joseph (1960)[38] "functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times). Joseph could therefore be considered the originator of proper adaptive multilayer perceptrons with learning hidden units; unfortunately, the learning algorithm was not a functional one and fell into oblivion. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in 1965. They regarded it as a form of polynomial regression,[39] or a generalization of Rosenblatt's perceptron.[40] A 1971 paper described a deep network with eight layers trained by this method,[41] which is based on layer-by-layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates".[31] The first deep learning multilayer perceptron trained by stochastic gradient descent[42] was published in 1967 by Shun'ichi Amari.[43] In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes.[31] Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique. In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function.[25][31] The rectifier has become the most popular activation function for deep learning.[44] Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though it was not trained by backpropagation.[45][46] Backpropagation is an efficient application of the chain rule, derived by Gottfried Wilhelm Leibniz in 1673,[47] to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt,[37] but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory.[48] The modern form of backpropagation was first published in Seppo Linnainmaa's master's thesis (1970).[49][50][31] G.M. Ostrovski et al. republished it in 1971.[51][52] Paul Werbos applied backpropagation to neural networks in 1982[53] (his 1974 PhD thesis, reprinted in a 1994 book,[54] did not yet describe the algorithm[52]). In 1986, David E. Rumelhart et al.
popularised backpropagation but did not cite the original work.[55][56] The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNNs to phoneme recognition. It used convolutions, weight sharing, and backpropagation.[57][58] In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.[59] In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days.[60] In 1990, Wei Zhang implemented a CNN on optical computing hardware.[61] In 1991, a CNN was applied to medical image object segmentation[62] and breast cancer detection in mammograms.[63] LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images.[64] Recurrent neural networks (RNNs)[28][30] were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward network. Consequently, RNNs have similar properties and issues, and the two lines of development had mutual influences. Two early influential RNN works were the Jordan network (1986)[65] and the Elman network (1990),[66] which applied RNNs to problems in cognitive psychology. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991, Jürgen Schmidhuber proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning, where each RNN tries to predict its own next input, which is the next unexpected input of the RNN below.[67][68] This "neural history compressor" uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN, by distilling a higher-level chunker network into a lower-level automatizer network.[67][68][31] In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.[69] The "P" in ChatGPT refers to such pre-training. Sepp Hochreiter's diploma thesis (1991)[70] implemented the neural history compressor,[67] and identified and analyzed the vanishing gradient problem.[70][71] Hochreiter proposed recurrent residual connections to solve the vanishing gradient problem. This led to the long short-term memory (LSTM), published in 1995.[72] LSTM can learn "very deep learning" tasks[9] with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture, which required a "forget gate", introduced in 1999;[73] that version became the standard RNN architecture. In 1991, Jürgen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss.[74][75] The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called "artificial curiosity".
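The zero-sum principle behind this idea can be illustrated without any networks: two players follow gradients of the same objective in opposite directions, so one player's gain is exactly the other's loss. Below is a minimal sketch on a toy bilinear objective (the objective, step size, and iteration count are arbitrary; this is not Schmidhuber's or any GAN's actual training loop).

# Two-player zero-sum game: player x minimizes f, player y maximizes it.
def f(x, y):
    return x * y  # toy bilinear objective with a saddle point at (0, 0)

x, y, lr = 1.0, 1.0, 0.05
for _ in range(100):
    gx, gy = y, x                      # df/dx and df/dy at the current point
    x, y = x - lr * gx, y + lr * gy    # simultaneous descent / ascent steps

# The iterates slowly spiral away from the saddle point (0, 0): each update
# matrix has eigenvalue modulus sqrt(1 + lr^2) > 1, a classic illustration
# of why naive adversarial training can be unstable.
print(x, y, f(x, y))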
In 2014, this principle was used in generative adversarial networks (GANs).[76] During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, and others, including the Boltzmann machine,[77] restricted Boltzmann machine,[78] Helmholtz machine,[79] and the wake-sleep algorithm.[80] These were designed for unsupervised learning of deep generative models. However, they were more computationally expensive than backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986 (p. 112).[81] A 1988 network became state of the art in protein structure prediction, an early application of deep learning to bioinformatics.[82] Both shallow and deep learning (e.g., recurrent nets) of ANNs for speech recognition have been explored for many years.[83][84][85] These methods never outperformed the non-uniform, internally handcrafted Gaussian mixture model/hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively.[86] Key difficulties have been analyzed, including gradient diminishing[70] and weak temporal correlation structure in neural predictive models.[87][88] Additional difficulties were the lack of training data and limited computing power. Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI conducted research on speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 NIST Speaker Recognition benchmark.[89][90] It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning.[91] The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of the deep autoencoder on "raw" spectrogram or linear filter-bank features in the late 1990s,[90] showing its superiority over the Mel-Cepstral features, which contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results.[92] Neural networks entered a lull, and simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks.[citation needed] In 2003, LSTM became competitive with traditional speech recognizers on certain tasks.[93] In 2006, Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it with connectionist temporal classification (CTC)[94] in stacks of LSTMs.[95] In 2009, it became the first RNN to win a pattern recognition contest, in connected handwriting recognition.[96][9] In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh[97][98] showed how deep belief networks could be used for generative modeling.
They are trained by first training one restricted Boltzmann machine, then freezing it and training another one on top of it, and so on, then optionally fine-tuning the stack using supervised backpropagation.[99] They could model high-dimensional probability distributions, such as the distribution of MNIST images, but convergence was slow.[100][101][102] The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun.[103] Industrial applications of deep learning to large-scale speech recognition started around 2010. The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBNs) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation, when using DNNs with large, context-dependent output layers, produced error rates dramatically lower than the then-state-of-the-art Gaussian mixture model (GMM)/hidden Markov model (HMM) systems and also lower than those of more advanced generative model-based systems.[104] The nature of the recognition errors produced by the two types of systems was characteristically different,[105] offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding systems deployed by all major speech recognition systems.[23][106][107] Analysis around 2009–2010, contrasting the GMM (and other generative speech models) with DNN models, stimulated early industrial investment in deep learning for speech recognition.[105] That analysis was done with comparable performance (less than 1.5% difference in error rate) between discriminative DNNs and generative models.[104][105][108] In 2010, researchers extended deep learning from TIMIT to large-vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees.[109][110][111][106] The deep learning revolution started around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs for years,[112] including CNNs,[113] faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning.[114] A key advance for the deep learning revolution was hardware, especially GPUs. Some early work dated back to 2004.[112][113] In 2009, Raina, Madhavan, and Andrew Ng reported a deep belief network with 100 million parameters trained on 30 Nvidia GeForce GTX 280 GPUs, an early demonstration of GPU-based deep learning.
They reported up to 70 times faster training.[115] In 2011, a CNN named DanNet[116][117] by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3.[9] It then won more contests.[118][119] They also showed how max-pooling CNNs on GPUs improved performance significantly.[3] In 2012, Andrew Ng and Jeff Dean created an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos.[120] In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton[4] won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman[121] and Google's Inception v3.[122] The success in image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs.[123][124][125] In 2014, the state of the art was training "very deep neural networks" with 20 to 30 layers.[126] Stacking too many layers led to a steep reduction in training accuracy,[127] known as the "degradation" problem.[128] In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and the residual neural network (ResNet)[129] in December 2015. ResNet behaves like an open-gated Highway Net. Around the same time, deep learning started impacting the field of art. Early examples included Google DeepDream (2015) and neural style transfer (2015),[130] both of which were based on pretrained image classification neural networks such as VGG-19. The generative adversarial network (GAN) (Ian Goodfellow et al., 2014),[131] based on Jürgen Schmidhuber's principle of artificial curiosity,[74][76] became state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved by Nvidia's StyleGAN (2018),[132] based on the Progressive GAN by Tero Karras et al.,[133] in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerning deepfakes.[134] Diffusion models (2015)[135] have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022). In 2015, Google's speech recognition improved by 49% through an LSTM-based model, which they made available through Google Voice Search on smartphones.[136][137] Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved.[104][138] Convolutional neural networks were superseded for ASR by LSTM,[137][139][140][141] but are more successful in computer vision. Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the 2018 Turing Award for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".[142] Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming.
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming. An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that they send downstream. Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. As of 2017, neural networks typically had a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude smaller than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces or playing "Go"[144]). A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers.[7][9] There are different types of neural networks, but they always consist of the same components: neurons, synapses, weights, biases, and functions.[145] These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm.[citation needed] For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is of a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.), and it returns the proposed label. Each mathematical manipulation as such is considered a layer,[146] and complex DNNs have many layers, hence the name "deep" networks. DNNs can model complex non-linear relationships. DNN architectures generate compositional models in which the object is expressed as a layered composition of primitives.[147] The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network.[7] For instance, it was proved that sparse multivariate polynomials are exponentially easier to approximate with DNNs than with shallow networks.[148] Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains.
It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.[146] DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. Inputs are multiplied by the weights and combined to produce an output between 0 and 1. If the network does not accurately recognize a particular pattern, an algorithm adjusts the weights.[149] That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data. Recurrent neural networks, in which data can flow in any direction, are used for applications such as language modeling.[150][151][152][153][154] Long short-term memory is particularly effective for this use.[155][156] Convolutional neural networks (CNNs) are used in computer vision.[157] CNNs have also been applied to acoustic modeling for automatic speech recognition (ASR).[158] As with ANNs, many issues can arise with naively trained DNNs. Two common issues are overfitting and computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data. Regularization methods such as Ivakhnenko's unit pruning,[41] weight decay (ℓ₂ regularization) or sparsity (ℓ₁ regularization) can be applied during training to combat overfitting.[159] Alternatively, dropout regularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies.[160] Another line of research studies models of just enough complexity, based on estimating the intrinsic complexity of the task being modelled. This approach has been successfully applied to multivariate time series prediction tasks such as traffic prediction.[161] Finally, data can be augmented via methods such as cropping and rotating, so that smaller training sets can be increased in size to reduce the chances of overfitting.[162] DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), the learning rate, and the initial weights. Sweeping through the parameter space for optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such as batching (computing the gradient on several training examples at once rather than on individual examples),[163] speed up computation. The large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for matrix and vector computations.[164][165] Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It doesn't require learning rates or randomized initial weights.
The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.[166][167] Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[168] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI.[169] OpenAI estimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months.[170][171] Special electronic circuits called deep learning processors were designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) in Huawei cellphones[172] and cloud computing servers such as tensor processing units (TPUs) in the Google Cloud Platform.[173] Cerebras Systems has also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2).[174][175] Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).[176] In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing.[177] The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds.[177] Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.[177] Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks[9] that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates[156] is competitive with traditional speech recognizers on certain tasks.[93] The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences.[178] Its small size lets many configurations be tried. More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. Error rates for this task, including these early results, are measured as percent phone error rate (PER) and have been summarized since 1991.
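As a quick sanity check on the compute trend cited above, the doubling time implied by a 300,000-fold increase between AlexNet (2012) and AlphaZero (2017) can be recomputed directly (the 5.5-year span is an approximation, so the result only roughly matches the reported 3.4-month trendline):

import math

growth = 300_000     # reported compute increase, AlexNet (2012) -> AlphaZero (2017)
years = 5.5          # approximate span between the two projects
doublings = math.log2(growth)          # ~18.2 doublings
months_per_doubling = 12 * years / doublings
print(round(months_per_doubling, 1))   # ~3.6 months, close to the reported 3.4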
The debut of DNNs for speaker recognition in the late 1990s, for speech recognition around 2009–2011, and of LSTM around 2003–2007 accelerated progress in eight major areas.[23][108][106] All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products) are based on deep learning.[23][183][184] A common evaluation set for image classification is the MNIST database. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.[185] Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 in recognition of traffic signs, and in 2014 with recognition of human faces.[186][187] Deep learning-trained vehicles now interpret 360° camera views.[188] Another example is Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human malformation connected to a large database of genetic syndromes. Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks; DNNs have proven capable, for example, of identifying the style period of a given painting, of neural style transfer, and of generating imagery. Neural networks have been used for implementing language models since the early 2000s.[150] LSTM helped to improve machine translation and language modeling.[151][152][153] Other key techniques in this field are negative sampling[191] and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN.[192] Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.[192] Deep neural architectures provide the best results for constituency parsing,[193] sentiment analysis,[194] information retrieval,[195][196] spoken language understanding,[197] machine translation,[151][198] contextual entity linking,[198] writing style recognition,[199] named-entity recognition (token classification),[200] text classification, and others.[201] Recent developments generalize word embedding to sentence embedding. Google Translate (GT) uses a large end-to-end long short-term memory (LSTM) network.[202][203][204][205] Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples".[203] It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages.[203] The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations".[203][206] GT uses English as an intermediate between most language pairs.[206] A large percentage of candidate drugs fail to win regulatory approval.
These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.[207][208] Research has explored use of deep learning to predict the biomolecular targets,[209][210] off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs.[211][212][213] AtomNet is a deep learning system for structure-based rational drug design.[214] AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus[215] and multiple sclerosis.[216][215] In 2017, graph neural networks were used for the first time to predict various properties of molecules in a large toxicology data set.[217] In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice.[218][219] Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.[220] Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations.[221][222] Multi-view deep learning has been applied for learning user preferences from multiple domains.[223] The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks. An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships.[224] In medical informatics, deep learning was used to predict sleep quality based on data from wearables[225] and to predict health complications from electronic health record data.[226] Deep neural networks have shown unparalleled performance in predicting protein structure from the sequence of the amino acids that make it up. In 2020, AlphaFold, a deep-learning based system, achieved a level of accuracy significantly higher than all previous computational methods.[227][228] Deep neural networks can be used to estimate the entropy of a stochastic process, in an approach called the Neural Joint Entropy Estimator (NJEE).[229] Such an estimation provides insights into the effects of input random variables on an independent random variable. Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. For example, in image classification tasks, the NJEE maps a vector of pixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a Softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, so that the conditions for the universal approximation theorem hold.
It has been shown that this method provides a strongly consistent estimator and outperforms other methods in the case of large alphabet sizes.[229] Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation and image enhancement.[230][231] Modern deep learning tools demonstrate high accuracy in detecting various diseases and have proven helpful to specialists in improving diagnosis efficiency.[232][233] Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server.[234] Deep learning has been used to interpret large, high-dimensional advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection. Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization.[235] These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration",[236] which trains on an image dataset, and Deep Image Prior, which trains on the image that needs restoration. Deep learning is being successfully applied to financial fraud detection, tax evasion detection,[237] and anti-money laundering.[238] In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. Data on newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in materials science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.[239][240][241] The United States Department of Defense applied deep learning to train robots in new tasks through observation.[242] Physics-informed neural networks have been used to solve partial differential equations in both forward and inverse problems in a data-driven manner.[243] One example is reconstructing fluid flow governed by the Navier–Stokes equations. Using physics-informed neural networks does not require the often expensive mesh generation that conventional CFD methods rely on.[244][245] The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs). This method is particularly useful for solving high-dimensional problems in financial mathematics.
By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.[246] In addition, the integration of physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems. Image reconstruction is the reconstruction of the underlying images from image-related measurements. Several works have shown the superior performance of deep learning methods compared to analytical methods for various applications, e.g., spectral imaging[247] and ultrasound imaging.[248] Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep learning-based model, trained on a long history of weather data, that predicts how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, in under a minute, and with precision similar to state-of-the-art systems.[249][250] An epigenetic clock is a biochemical test that can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using more than 6,000 blood samples.[251] The clock uses information from 1000 CpG sites and predicts that people with certain conditions are older than healthy controls: IBD, frontotemporal dementia, ovarian cancer, obesity. The aging clock was planned to be released for public use in 2021 by Deep Longevity, an Insilico Medicine spinoff company. Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s.[252][253][254][255] These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support a self-organization somewhat analogous to the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input) to other layers. This process yields a self-organizing stack of transducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ...
different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature".[256] A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism.[257][258] Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality.[259][260] In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.[261] Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons[262] and neural populations.[263] Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system[264] both at the single-unit[265] and at the population[266] levels. Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them.[267] Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player.[268][269][270] Google Translate uses a neural network to translate between more than 100 languages. In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories.[271] As of 2008,[272] researchers at The University of Texas at Austin (UT) had developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor.[242] Building on TAMER, a new algorithm called Deep TAMER was introduced in 2018 during a collaboration between the U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation.[242] Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job".[273] Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. A main criticism concerns the lack of theory surrounding some methods.[274] Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence, is less clear[citation needed] (e.g., does it converge? If so, how fast? What is it approximating?).
Deep learning methods are often looked at as a black box, with most confirmations done empirically rather than theoretically.[275] In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layer) neural networks attempting to discern, within essentially random data, the images on which they were trained[276] demonstrated a visual appeal: the original research notice received well over 1,000 comments and was the subject of what was for a time the most frequently accessed article on The Guardian's website.[277] Some deep learning architectures display problematic behaviors,[278] such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014)[279] and misclassifying minuscule perturbations of correctly classified images (2013).[280] Goertzel hypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-component artificial general intelligence (AGI) architectures.[278] These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar[281] decompositions of observed entities and events.[278] Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules, and is a basic goal of both human language acquisition[282] and artificial intelligence (AI).[283] As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception.[284] By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such manipulation is termed an "adversarial attack".[285] In 2016, researchers used one ANN to doctor images, in trial-and-error fashion, to identify another ANN's focal points and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images, when subsequently photographed, successfully tricked an image classification system.[286] One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.[287] Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017, researchers added stickers to stop signs and caused an ANN to misclassify them.[286] ANNs can, however, be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry.
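To make the attack recipe concrete before returning to the malware example: an adversarial perturbation nudges every input feature a small step in the direction that most changes the model's score. The sketch below applies this to a toy linear scorer (weights and inputs are random placeholders); gradient-based attacks such as FGSM do the same thing through a deep network's gradient.

import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1   # a toy linear "classifier": sign(w @ x + b)
x = rng.normal(size=16)           # some input

def score(x):
    return w @ x + b

eps = 0.3
x_adv = x - eps * np.sign(w)      # move each feature by eps against the score

# The score drops by exactly eps * sum(|w|), although no single feature
# moved by more than eps, which is often enough to flip the predicted class.
print(score(x), score(x_adv))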
ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target.[286] In 2016, another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)".[286] In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.[286] The deep learning systems that are trained using supervised learning often rely on data that is created or annotated by humans, or both.[288] It has been argued that not only low-paid clickwork (such as on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such.[289] The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers), and (5) clickwork.[289]
https://en.wikipedia.org/wiki/Criticism_of_deep_learning
DALL-E, DALL-E 2, and DALL-E 3 (stylised DALL·E) are text-to-image models developed by OpenAI using deep learning methodologies to generate digital images from natural language descriptions known as prompts. The first version of DALL-E was announced in January 2021. In the following year, its successor DALL-E 2 was released. DALL-E 3 was released natively into ChatGPT for ChatGPT Plus and ChatGPT Enterprise customers in October 2023,[1] with availability via OpenAI's API[2] and "Labs" platform provided in early November.[3] Microsoft implemented the model in Bing's Image Creator tool and plans to implement it in their Designer app.[4] With Bing's Image Creator tool, Microsoft Copilot runs on DALL-E 3.[5] In March 2025, DALL-E 3 was replaced in ChatGPT by GPT-4o's native image-generation capabilities.[6] DALL-E was revealed by OpenAI in a blog post on 5 January 2021, and uses a version of GPT-3[7] modified to generate images. On 6 April 2022, OpenAI announced DALL-E 2, a successor designed to generate more realistic images at higher resolutions that "can combine concepts, attributes, and styles".[8] On 20 July 2022, DALL-E 2 entered into a beta phase with invitations sent to 1 million waitlisted individuals;[9] users could generate a certain number of images for free every month and could purchase more.[10] Access had previously been restricted to pre-selected users for a research preview due to concerns about ethics and safety.[11][12] On 28 September 2022, DALL-E 2 was opened to everyone and the waitlist requirement was removed.[13] In September 2023, OpenAI announced their latest image model, DALL-E 3, capable of understanding "significantly more nuance and detail" than previous iterations.[14] In early November 2022, OpenAI released DALL-E 2 as an API, allowing developers to integrate the model into their own applications. Microsoft unveiled their implementation of DALL-E 2 in their Designer app and Image Creator tool included in Bing and Microsoft Edge.[15] The API operates on a cost-per-image basis, with prices varying depending on image resolution. Volume discounts are available to companies working with OpenAI's enterprise team.[16] The software's name is a portmanteau of the names of the animated robot Pixar character WALL-E and the Spanish surrealist artist Salvador Dalí.[17][7] In February 2024, OpenAI began adding watermarks to DALL-E generated images, containing metadata in the C2PA (Coalition for Content Provenance and Authenticity) standard promoted by the Content Authenticity Initiative.[18] The first generative pre-trained transformer (GPT) model was initially developed by OpenAI in 2018,[19] using a Transformer architecture. The first iteration, GPT-1,[20] was scaled up to produce GPT-2 in 2019;[21] in 2020, it was scaled up again to produce GPT-3, with 175 billion parameters.[22][7][23] DALL-E has three components: a discrete VAE, an autoregressive decoder-only Transformer (12 billion parameters) similar to GPT-3, and a CLIP pair of image encoder and text encoder.[24] The discrete VAE can convert an image to a sequence of tokens, and conversely, convert a sequence of tokens back to an image. This is necessary as the Transformer does not directly process image data.[24] The input to the Transformer model is a sequence of tokenised image caption followed by tokenised image patches. The image caption is in English, tokenised by byte pair encoding (vocabulary size 16384), and can be up to 256 tokens long. Each image is a 256×256 RGB image, divided into a 32×32 grid of patches (8×8 pixels each).
Each patch is then converted by a discrete variational autoencoder to a token (vocabulary size 8192).[24] DALL-E was developed and announced to the public in conjunction with CLIP (Contrastive Language-Image Pre-training).[25] CLIP is a separate model based on contrastive learning that was trained on 400 million pairs of images with text captions scraped from the Internet. Its role is to "understand and rank" DALL-E's output by predicting which caption from a list of 32,768 captions randomly selected from the dataset (of which one was the correct answer) is most appropriate for an image.[26] A trained CLIP pair is used to filter a larger initial list of images generated by DALL-E to select the image that is closest to the text prompt.[24] DALL-E 2 uses 3.5 billion parameters, a smaller number than its predecessor.[24] Instead of an autoregressive Transformer, DALL-E 2 uses a diffusion model conditioned on CLIP image embeddings, which, during inference, are generated from CLIP text embeddings by a prior model.[24] This is the same architecture as that of Stable Diffusion, released a few months later. While a technical report was written for DALL-E 3, it does not include training or implementation details of the model, instead focusing on the improved prompt-following capabilities developed for DALL-E 3.[27] DALL-E can generate imagery in multiple styles, including photorealistic imagery, paintings, and emoji.[7] It can "manipulate and rearrange" objects in its images,[7] and can correctly place design elements in novel compositions without explicit instruction. Thom Dunn, writing for BoingBoing, remarked that "For example, when asked to draw a daikon radish blowing its nose, sipping a latte, or riding a unicycle, DALL-E often draws the handkerchief, hands, and feet in plausible locations."[28] DALL-E showed the ability to "fill in the blanks" to infer appropriate details without specific prompts, such as adding Christmas imagery to prompts commonly associated with the celebration,[29] and appropriately placed shadows to images that did not mention them.[30] Furthermore, DALL-E exhibits a broad understanding of visual and design trends. DALL-E can produce images for a wide variety of arbitrary descriptions from various viewpoints[31] with only rare failures.[17] Mark Riedl, an associate professor at the Georgia Tech School of Interactive Computing, found that DALL-E could blend concepts (described as a key element of human creativity).[32][33] Its visual reasoning ability is sufficient to solve Raven's Matrices (visual tests often administered to humans to measure intelligence).[34][35] DALL-E 3 follows complex prompts with more accuracy and detail than its predecessors, and is able to generate more coherent and accurate text.[36][14] DALL-E 3 is integrated into ChatGPT Plus.[14] Given an existing image, DALL-E 2 and DALL-E 3 can produce "variations" of the image as individual outputs based on the original, as well as edit the image to modify or expand upon it. The "inpainting" and "outpainting" abilities of these models use context from an image to fill in missing areas using a medium consistent with the original, following a given prompt. For example, this can be used to insert a new subject into an image, or expand an image beyond its original borders.[37] According to OpenAI, "Outpainting takes into account the image's existing visual elements — including shadows, reflections, and textures — to maintain the context of the original image."[38]
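The reranking role that CLIP plays for the original DALL-E (scoring each candidate image against the prompt and keeping the best match) reduces to a nearest-neighbour search in embedding space. The following C sketch illustrates only that selection step: the embedding dimension is an arbitrary placeholder, and the embeddings themselves are assumed to come from a trained CLIP image/text encoder pair, which is not shown here.

    #include <math.h>
    #include <stddef.h>

    #define EMB_DIM 512  /* illustrative embedding size, not OpenAI's actual value */

    /* Cosine similarity between two embedding vectors. */
    static double cosine(const double *a, const double *b, size_t n) {
        double dot = 0.0, na = 0.0, nb = 0.0;
        for (size_t i = 0; i < n; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (sqrt(na) * sqrt(nb));
    }

    /* Return the index of the candidate image embedding that is closest
     * to the text-prompt embedding, the "filtering" role CLIP plays. */
    size_t rerank(const double text_emb[EMB_DIM],
                  const double image_embs[][EMB_DIM], size_t n_images) {
        size_t best = 0;
        double best_sim = -2.0;               /* below any possible cosine value */
        for (size_t i = 0; i < n_images; i++) {
            double sim = cosine(text_emb, image_embs[i], EMB_DIM);
            if (sim > best_sim) { best_sim = sim; best = i; }
        }
        return best;
    }

In this setting the generator proposes many candidates and the reranker discards all but the highest-scoring one, which is why CLIP is described as filtering rather than generating.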
DALL-E 2's language understanding has limits. It is sometimes unable to distinguish "A yellow book and a red vase" from "A red book and a yellow vase" or "A panda making latte art" from "Latte art of a panda".[39] It generates images of an astronaut riding a horse when presented with the prompt "a horse riding an astronaut".[40] It also fails to generate the correct images in a variety of circumstances. Requesting more than three objects, negation, numbers, and connected sentences may result in mistakes, and object features may appear on the wrong object.[31] Additional limitations include handling text — which, even with legible lettering, almost invariably results in dream-like gibberish — and its limited capacity to address scientific information, such as astronomy or medical imagery.[41] DALL-E 2's reliance on public datasets influences its results and leads to algorithmic bias in some cases, such as generating higher numbers of men than women for requests that do not mention gender.[41] DALL-E 2's training data was filtered to remove violent and sexual imagery, but this was found to increase bias in some cases, such as reducing the frequency of women being generated.[42] OpenAI hypothesised that this may be because women were more likely to be sexualised in the training data, which caused the filter to influence results.[42] In September 2022, OpenAI confirmed to The Verge that DALL-E invisibly inserts phrases into user prompts to address bias in results; for instance, "black man" and "Asian woman" are inserted into prompts that do not specify gender or race.[43] OpenAI claims to address concerns about potential "racy content" (nudity or sexual content generation) with DALL-E 3 through input/output filters, blocklists, ChatGPT refusals, and model-level interventions.[44] However, DALL-E 3 continues to disproportionately represent people as White, female, and youthful. Users are able to somewhat remedy this through more specific prompts for image generation. A concern about DALL-E 2 and similar image generation models is that they could be used to propagate deepfakes and other forms of misinformation.[45][46] As an attempt to mitigate this, the software rejects prompts involving public figures and uploads containing human faces.[47] Prompts containing potentially objectionable content are blocked, and uploaded images are analysed to detect offensive material.[48] A disadvantage of prompt-based filtering is that it is easy to bypass using alternative phrases that result in a similar output. For example, the word "blood" is filtered, but "ketchup" and "red liquid" are not.[49][48] Another concern about DALL-E 2 and similar models is that they could cause technological unemployment for artists, photographers, and graphic designers due to their accuracy and popularity.[50][51] DALL-E 3 is designed to block users from generating art in the style of currently living artists.[14] While OpenAI states that images produced using these models do not require permission to reprint, sell, or merchandise,[52] legal concerns have been raised regarding who owns those images.[53][54] In 2023, Microsoft pitched the United States Department of Defense on using DALL-E models to train battlefield management systems.[55] In January 2024, OpenAI removed its blanket ban on military and warfare use from its usage policies.[56] Most coverage of DALL-E focuses on a small subset of "surreal"[25] or "quirky"[32] outputs.
DALL-E's output for "an illustration of a baby daikon radish in a tutu walking a dog" was mentioned in pieces from Input,[57] NBC,[58] Nature,[59] and other publications.[7][60][61] Its output for "an armchair in the shape of an avocado" was also widely covered.[25][33] ExtremeTech stated "you can ask DALL-E for a picture of a phone or vacuum cleaner from a specified period of time, and it understands how those objects have changed".[29] Engadget also noted its unusual capacity for "understanding how telephones and other objects change over time".[30] According to MIT Technology Review, one of OpenAI's objectives was to "give language models a better grasp of the everyday concepts that humans use to make sense of things".[25] Wall Street investors have had a positive reception of DALL-E 2, with some firms thinking it could represent a turning point for a future multi-trillion-dollar industry. By mid-2019, OpenAI had already received over $1 billion in funding from Microsoft and Khosla Ventures,[62][63][64] and in January 2023, following the launch of DALL-E 2 and ChatGPT, received an additional $10 billion in funding from Microsoft.[65] Japan's anime community has had a negative reaction to DALL-E 2 and similar models.[66][67][68] Two arguments are typically presented by artists against the software. The first is that AI art is not art because it is not created by a human with intent: "The juxtaposition of AI-generated images with their own work is degrading and undermines the time and skill that goes into their art. AI-driven image generation tools have been heavily criticized by artists because they are trained on human-made art scraped from the web."[9] The second concerns copyright law and the data that text-to-image models are trained on. OpenAI has not released information about what dataset(s) were used to train DALL-E 2, inciting concern from some that the work of artists has been used for training without permission. Copyright laws surrounding these topics are inconclusive at the moment.[10] After integrating DALL-E 3 into Bing Chat and ChatGPT, Microsoft and OpenAI faced criticism for excessive content filtering, with critics saying DALL-E had been "lobotomized".[69] The flagging of images generated by prompts such as "man breaks server rack with sledgehammer" was cited as evidence. Over the first days of its launch, filtering was reportedly increased to the point where images generated by some of Bing's own suggested prompts were being blocked.[69][70] TechRadar argued that leaning too heavily on the side of caution could limit DALL-E's value as a creative tool.[70] Since OpenAI has not released source code for any of the three models, there have been several attempts to create open-source models offering similar capabilities.[71][72] Released in 2022 on Hugging Face's Spaces platform, Craiyon (formerly DALL-E Mini, until a name change was requested by OpenAI in June 2022) is an AI model based on the original DALL-E that was trained on unfiltered data from the Internet. It attracted substantial media attention in mid-2022 after its release, due to its capacity for producing humorous imagery.[73][74][75] Another example of an open-source text-to-image model is Stable Diffusion by Stability AI.[76]
https://en.wikipedia.org/wiki/DALL-E
In mathematics, and more specifically in abstract algebra, a pseudo-ring is one of several variants of a ring in which some axiom, most often the existence of a multiplicative identity, is dropped. None of these definitions are equivalent, so it is best to avoid the term "pseudo-ring", or to clarify which meaning is intended.
https://en.wikipedia.org/wiki/Pseudo-ring#Properties_weaker_than_having_an_identity
A trace table is a technique used to test algorithms, in order to make sure that no logical errors occur while the calculations are being processed. The table usually takes the form of a multi-column, multi-row table, with each column showing a variable and each row showing each number input into the algorithm and the subsequent values of the variables. Trace tables are typically used in schools and colleges when teaching students how to program. They can be an essential tool in teaching students how certain calculations work and the systematic process that is occurring when an algorithm is executed. They can also be useful for debugging applications, helping the programmer to easily detect what error is occurring and why it may be occurring. The example below shows the systematic process that takes place whilst an algorithm is processed. The initial value of x is zero, but i, although defined, has not been assigned a value; thus, its initial value is unknown. As we execute the program, line by line, the values of i and x change, reflecting each statement of the source code in execution. Their new values are recorded in the trace table. When i reaches the value of 11, because of the i++ statement in the for definition, the comparison i <= 10 evaluates to false, thus halting the loop. As we also reached the end of the program, the trace table also ends.
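The program the description refers to is not reproduced in this text; the following C fragment is a minimal reconstruction consistent with it: x is initialised to zero, i is declared but not yet assigned, and the loop counts from 1 to 10 with i++. The loop body (x = x + i) is an assumption made purely for illustration.

    #include <stdio.h>

    int main(void) {
        int i, x = 0;          /* x starts at 0; i is declared but not yet assigned */
        for (i = 1; i <= 10; i++) {
            x = x + i;         /* assumed loop body, for illustration */
        }
        printf("%d\n", x);     /* prints 55 */
        return 0;
    }

The corresponding trace table records the value of each variable after every step:

     i     x
     ?     0    (before the loop; i has no value yet)
     1     1
     2     3
     3     6
     4    10
     5    15
     6    21
     7    28
     8    36
     9    45
    10    55
    11    55   (i <= 10 is now false; the loop halts)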
https://en.wikipedia.org/wiki/Trace_table
In mathematics, a semigroup with no elements (the empty semigroup) is a semigroup in which the underlying set is the empty set. Many authors do not admit the existence of such a semigroup: for them, a semigroup is by definition a non-empty set together with an associative binary operation.[1][2] However, not all authors insist on the underlying set of a semigroup being non-empty.[3] One can logically define a semigroup in which the underlying set S is empty. The binary operation in the semigroup is the empty function from S × S to S. This operation vacuously satisfies the closure and associativity axioms of a semigroup. Not excluding the empty semigroup simplifies certain results on semigroups. For example, the result that the intersection of two subsemigroups of a semigroup T is a subsemigroup of T becomes valid even when the intersection is empty. When a semigroup is defined to have additional structure, the issue may not arise. For example, the definition of a monoid requires an identity element, which rules out the empty semigroup as a monoid. In category theory, the empty semigroup is always admitted. It is the unique initial object of the category of semigroups. A semigroup with no elements is an inverse semigroup, since the necessary condition is vacuously satisfied.
https://en.wikipedia.org/wiki/Empty_semigroup
In Boolean functions and propositional calculus, the Sheffer stroke denotes a logical operation that is equivalent to the negation of the conjunction operation, expressed in ordinary language as "not both". It is also called non-conjunction, alternative denial (since it says in effect that at least one of its operands is false), or NAND ("not and").[1] In digital electronics, it corresponds to the NAND gate. It is named after Henry Maurice Sheffer and written as ∣, as ↑, as an overlined ∧, or as Dpq in Polish notation by Łukasiewicz (but not as ||, often used to represent disjunction). Its dual is the NOR operator (also known as the Peirce arrow, Quine dagger or Webb operator). Like its dual, NAND can be used by itself, without any other logical operator, to constitute a logical formal system (making NAND functionally complete). This property makes the NAND gate crucial to modern digital electronics, including its use in computer processor design. The non-conjunction is a logical operation on two logical values. It produces a value of true if — and only if — at least one of the propositions is false. The truth table of A ↑ B is as follows:

    A      B      A ↑ B
    true   true   false
    true   false  true
    false  true   true
    false  false  true

The Sheffer stroke of P and Q is the negation of their conjunction, ¬(P ∧ Q). By De Morgan's laws, this is also equivalent to the disjunction of the negations of P and Q, that is, ¬P ∨ ¬Q. Peirce was the first to show the functional completeness of non-conjunction (representing this as an overlined ⋏) but did not publish his result.[2][3] Peirce's editor added a corresponding sign for non-disjunction.[3] In 1911, Stamm was the first to publish a proof of the completeness of non-conjunction, representing this with ∼ (the Stamm hook),[4] was also the first to put non-disjunction in print, and showed their functional completeness.[5] In 1913, Sheffer described non-disjunction using ∣ and showed its functional completeness. Sheffer also used ∧ for non-disjunction.[4] Many people, beginning with Nicod in 1917, and followed by Whitehead, Russell and many others, mistakenly thought Sheffer had described non-conjunction using ∣, naming this symbol the Sheffer stroke. In 1928, Hilbert and Ackermann described non-conjunction with the operator /.[6][7] In 1929, Łukasiewicz used D in Dpq for non-conjunction in his Polish notation.[8] An alternative notation for non-conjunction is ↑. It is not clear who first introduced this notation, although the corresponding ↓ for non-disjunction was used by Quine in 1940.[9] The stroke is named after Henry Maurice Sheffer, who in 1913 published a paper in the Transactions of the American Mathematical Society[10] providing an axiomatization of Boolean algebras using the stroke, and proved its equivalence to a standard formulation thereof by Huntington employing the familiar operators of propositional logic (AND, OR, NOT). Because of the self-duality of Boolean algebras, Sheffer's axioms are equally valid for either of the NAND or NOR operations in place of the stroke. Sheffer interpreted the stroke as a sign for non-disjunction (NOR) in his paper, mentioning non-conjunction only in a footnote and without a special sign for it.
It was Jean Nicod who first used the stroke as a sign for non-conjunction (NAND) in a paper of 1917, and this has since become current practice.[11][12] Russell and Whitehead used the Sheffer stroke in the 1927 second edition of Principia Mathematica and suggested it as a replacement for the "OR" and "NOT" operations of the first edition. Charles Sanders Peirce (1880) had discovered the functional completeness of NAND or NOR more than 30 years earlier, using the term ampheck (for 'cutting both ways'), but he never published his finding. Two years before Sheffer, Edward Stamm also described the NAND and NOR operators and showed that the other Boolean operations could be expressed by them.[5] NAND is commutative but not associative, which means that P ↑ Q ↔ Q ↑ P but (P ↑ Q) ↑ R ↮ P ↑ (Q ↑ R).[13] The Sheffer stroke, taken by itself, is a functionally complete set of connectives.[14][15] This can be seen from the fact that NAND does not possess any of the following five properties, each of which is required to be absent from, and the absence of all of which is sufficient for, at least one member of a set of functionally complete operators: truth-preservation, falsity-preservation, linearity, monotonicity, self-duality. (An operator is truth-preserving if its value is truth whenever all of its arguments are truth, or falsity-preserving if its value is falsity whenever all of its arguments are falsity.)[16] It can also be proved by first showing, with a truth table, that ¬A is truth-functionally equivalent to A ↑ A.[17] Then, since A ↑ B is truth-functionally equivalent to ¬(A ∧ B),[17] and A ∨ B is equivalent to ¬(¬A ∧ ¬B),[17] the Sheffer stroke suffices to define the set of connectives {∧, ∨, ¬},[17] which is shown to be truth-functionally complete by the Disjunctive Normal Form Theorem.[17] Expressed in terms of NAND ↑, the usual operators of propositional logic are:

    ¬P     = P ↑ P
    P ∧ Q  = (P ↑ Q) ↑ (P ↑ Q)
    P ∨ Q  = (P ↑ P) ↑ (Q ↑ Q)
    P → Q  = P ↑ (Q ↑ Q)
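As a quick mechanical check of these identities, the following C snippet (an illustrative sketch, not part of the article) builds NOT, AND, OR, and implication out of a single nand function and verifies them against the standard connectives over all truth-value combinations.

    #include <assert.h>
    #include <stdio.h>

    /* The Sheffer stroke: true unless both inputs are true. */
    static int nand(int a, int b) { return !(a && b); }

    /* The usual connectives, each defined only in terms of nand(). */
    static int not_(int a)        { return nand(a, a); }
    static int and_(int a, int b) { return nand(nand(a, b), nand(a, b)); }
    static int or_(int a, int b)  { return nand(nand(a, a), nand(b, b)); }
    static int imp (int a, int b) { return nand(a, nand(b, b)); }

    int main(void) {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++) {
                assert(not_(a)    == !a);
                assert(and_(a, b) == (a && b));
                assert(or_(a, b)  == (a || b));
                assert(imp(a, b)  == (!a || b));
            }
        puts("All NAND-based definitions agree with the standard connectives.");
        return 0;
    }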
https://en.wikipedia.org/wiki/Sheffer_stroke
In automata theory, a finite-state machine is called a deterministic finite automaton (DFA) if each of its transitions is uniquely determined by its source state and input symbol, and reading an input symbol is required for each state transition. A nondeterministic finite automaton (NFA), or nondeterministic finite-state machine, does not need to obey these restrictions. In particular, every DFA is also an NFA. Sometimes the term NFA is used in a narrower sense, referring to an NFA that is not a DFA, but not in this article. Using the subset construction algorithm, each NFA can be translated to an equivalent DFA; i.e., a DFA recognizing the same formal language.[1] Like DFAs, NFAs only recognize regular languages. NFAs were introduced in 1959 by Michael O. Rabin and Dana Scott,[2] who also showed their equivalence to DFAs. NFAs are used in the implementation of regular expressions: Thompson's construction is an algorithm for compiling a regular expression to an NFA that can efficiently perform pattern matching on strings. Conversely, Kleene's algorithm can be used to convert an NFA into a regular expression (whose size is generally exponential in the input automaton). NFAs have been generalized in multiple ways, e.g., nondeterministic finite automata with ε-moves, finite-state transducers, pushdown automata, alternating automata, ω-automata, and probabilistic automata. Besides DFAs, other known special cases of NFAs are unambiguous finite automata (UFA) and self-verifying finite automata (SVFA). There are at least two equivalent ways to describe the behavior of an NFA. The first way makes use of the nondeterminism in the name of an NFA. For each input symbol, the NFA transitions to a new state until all input symbols have been consumed. In each step, the automaton nondeterministically "chooses" one of the applicable transitions. If there exists at least one "lucky run", i.e. some sequence of choices leading to an accepting state after completely consuming the input, it is accepted. Otherwise, i.e. if no choice sequence at all can consume all the input[3] and lead to an accepting state, the input is rejected.[4][5]: 319 [6] In the second way, the NFA consumes a string of input symbols, one by one. In each step, whenever two or more transitions are applicable, it "clones" itself into appropriately many copies, each one following a different transition. If no transition is applicable, the current copy is in a dead end, and it "dies". If, after consuming the complete input, any of the copies is in an accept state, the input is accepted; else, it is rejected.[4][7][6] For a more elementary introduction to the formal definition, see automata theory. An NFA is represented formally by a 5-tuple, (Q, Σ, δ, q0, F), consisting of a finite set of states Q, a finite set of input symbols Σ, a transition function δ : Q × Σ → P(Q), an initial state q0 ∈ Q, and a set of accepting states F ⊆ Q. Here, P(Q) denotes the power set of Q. Given an NFA M = (Q, Σ, δ, q0, F), its recognized language is denoted by L(M), and is defined as the set of all strings over the alphabet Σ that are accepted by M. Loosely corresponding to the above informal explanations, there are several equivalent formal definitions of a string w = a1a2...an being accepted by M; the most common requires the existence of a sequence of states r0, r1, ..., rn in Q such that r0 = q0, ri+1 ∈ δ(ri, ai+1) for i = 0, ..., n−1, and rn ∈ F. The above automaton definition uses a single initial state, which is not necessary. Sometimes, NFAs are defined with a set of initial states. There is an easy construction that translates an NFA with multiple initial states to an NFA with a single initial state, which provides a convenient notation. The following automaton M, with a binary alphabet, determines if the input ends with a 1.
Let M = ({p, q}, {0, 1}, δ, p, {q}), where the transition function δ is given by the following state transition table:

           0      1
    p     {p}   {p, q}
    q     ∅      ∅

Since the set δ(p, 1) contains more than one state, M is nondeterministic. The language of M can be described by the regular language given by the regular expression (0|1)*1. Consider all possible state sequences for the input string "1011": the string is accepted by M since one state sequence satisfies the above definition; it does not matter that other sequences fail to do so. Such a listing of state sequences can be interpreted in a couple of ways: as the record of the automaton's nondeterministic "choices", or as the set of copies the automaton "clones" itself into. The feasibility of reading the same listing in two ways also indicates the equivalence of both explanations above. In contrast, the string "10" is rejected by M, since no possible state sequence for that input reaches the only accepting state, q, by reading the final 0 symbol. While q can be reached after consuming the initial "1", this does not mean that the input "10" is accepted; rather, it means that an input string "1" would be accepted. A deterministic finite automaton (DFA) can be seen as a special kind of NFA, in which for each state and symbol, the transition function has exactly one state. Thus, it is clear that every formal language that can be recognized by a DFA can be recognized by an NFA. Conversely, for each NFA, there is a DFA such that it recognizes the same formal language. The DFA can be constructed using the powerset construction. This result shows that NFAs, despite their additional flexibility, are unable to recognize languages that cannot be recognized by some DFA. It is also important in practice for converting easier-to-construct NFAs into more efficiently executable DFAs. However, if the NFA has n states, the resulting DFA may have up to 2^n states, which sometimes makes the construction impractical for large NFAs. A nondeterministic finite automaton with ε-moves (NFA-ε) is a further generalization of the NFA. In this kind of automaton, the transition function is additionally defined on the empty string ε. A transition without consuming an input symbol is called an ε-transition and is represented in state diagrams by an arrow labeled "ε". ε-transitions provide a convenient way of modeling systems whose current states are not precisely known: i.e., if we are modeling a system and it is not clear whether the current state (after processing some input string) should be q or q′, then we can add an ε-transition between these two states, thus putting the automaton in both states simultaneously. An NFA-ε is represented formally by a 5-tuple, (Q, Σ, δ, q0, F), consisting of a finite set of states Q, a finite set of input symbols Σ, a transition function δ : Q × (Σ ∪ {ε}) → P(Q), an initial state q0 ∈ Q, and a set of accepting states F ⊆ Q. Here, P(Q) denotes the power set of Q and ε denotes the empty string. For a state q ∈ Q, let E(q) denote the set of states that are reachable from q by following ε-transitions in the transition function δ, i.e., p ∈ E(q) if there is a sequence of states q1, ..., qk such that q1 = q, qi+1 ∈ δ(qi, ε) for each 1 ≤ i < k, and qk = p. E(q) is known as the epsilon closure (also ε-closure) of q. The ε-closure of a set P of states of an NFA is defined as the set of states reachable from any state in P following ε-transitions.
Formally, for P ⊆ Q, define E(P) = ⋃_{q∈P} E(q). Similarly to an NFA without ε-moves, the transition function δ of an NFA-ε can be extended to strings. Informally, δ*(q, w) denotes the set of all states the automaton may have reached when starting in state q ∈ Q and reading the string w ∈ Σ*. The function δ* : Q × Σ* → P(Q) can be defined recursively as follows: δ*(q, ε) = E(q) for the empty string, and δ*(q, wa) = ⋃_{r ∈ δ*(q,w)} E(δ(r, a)) for a string w ∈ Σ* and a symbol a ∈ Σ. The automaton is said to accept a string w if δ*(q0, w) ∩ F ≠ ∅, that is, if reading w may drive the automaton from its start state q0 to some accepting state in F.[11] Let M be an NFA-ε, with a binary alphabet, that determines if the input contains an even number of 0s or an even number of 1s. Note that 0 occurrences is an even number of occurrences as well. In formal notation, let M = ({S0, S1, S2, S3, S4}, {0, 1}, δ, S0, {S1, S3}), where the transition relation δ can be defined by this state transition table:

            0      1      ε
    S0      ∅      ∅    {S1, S3}
    S1    {S2}   {S1}     ∅
    S2    {S1}   {S2}     ∅
    S3    {S3}   {S4}     ∅
    S4    {S4}   {S3}     ∅

M can be viewed as the union of two DFAs: one with states {S1, S2} and the other with states {S3, S4}. The language of M can be described by the regular language given by this regular expression: (1*01*01*)* ∪ (0*10*10*)*. We defined M using ε-moves, but M could also be defined without them. To show that NFA-ε is equivalent to NFA, first note that NFA is a special case of NFA-ε, so it remains to show that for every NFA-ε, there exists an equivalent NFA. Given an NFA with epsilon moves M = (Q, Σ, δ, q0, F), define an NFA M′ = (Q, Σ, δ′, q0, F′), where δ′(q, a) = δ*(q, a) for each state q and symbol a, and F′ = F ∪ {q0} if E(q0) contains a state of F, and F′ = F otherwise. One has to distinguish the transition functions of M and M′, viz. δ and δ′, and their extensions to strings, δ* and δ′*, respectively. By construction, M′ has no ε-transitions. One can prove that δ′*(q0, w) = δ*(q0, w) for each string w ≠ ε, by induction on the length of w. Based on this, one can show that δ′*(q0, w) ∩ F′ ≠ ∅ if, and only if, δ*(q0, w) ∩ F ≠ ∅, for each string w ∈ Σ*. Since NFA is equivalent to DFA, NFA-ε is also equivalent to DFA. The set of languages recognized by NFAs is closed under the following operations: union, intersection, concatenation, complementation, and Kleene star. These closure operations are used in Thompson's construction algorithm, which constructs an NFA from any regular expression. They can also be used to prove that NFAs recognize exactly the regular languages. Since NFAs are equivalent to nondeterministic finite automata with ε-moves (NFA-ε), the above closures are proved using closure properties of NFA-ε. The machine starts in the specified initial state and reads in a string of symbols from its alphabet.
The automaton uses the state transition function Δ to determine the next state, using the current state and the symbol just read or the empty string. However, "the next state of an NFA depends not only on the current input event, but also on an arbitrary number of subsequent input events. Until these subsequent events occur it is not possible to determine which state the machine is in".[13] If, when the automaton has finished reading, it is in an accepting state, the NFA is said to accept the string; otherwise, it is said to reject the string. The set of all strings accepted by an NFA is the language the NFA accepts. This language is a regular language. For every NFA a deterministic finite automaton (DFA) can be found that accepts the same language. Therefore, it is possible to convert an existing NFA into a DFA for the purpose of implementing a (perhaps) simpler machine. This can be performed using the powerset construction, which may lead to an exponential rise in the number of necessary states. For a formal proof of the powerset construction, please see the Powerset construction article. There are many ways to implement an NFA, including converting it to an equivalent DFA in advance, simulating the set of currently reachable states on the fly (as in the sketch below), or exploring the nondeterministic choices with backtracking. NFAs and DFAs are equivalent in that if a language is recognized by an NFA, it is also recognized by a DFA, and vice versa. The establishment of such equivalence is important and useful. It is useful because constructing an NFA to recognize a given language is sometimes much easier than constructing a DFA for that language. It is important because NFAs can be used to reduce the complexity of the mathematical work required to establish many important properties in the theory of computation. For example, it is much easier to prove closure properties of regular languages using NFAs than DFAs.
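The on-the-fly simulation mentioned above can be made concrete for the example automaton M described earlier (alphabet {0, 1}, states p and q, accepting exactly the strings that end in 1). The following C sketch, written for this article's example rather than taken from any source, tracks the set of currently reachable states as a bitmask.

    #include <stdio.h>

    /* States of the example NFA M: bit 0 = p, bit 1 = q. */
    enum { P = 1 << 0, Q = 1 << 1 };

    /* delta[state][symbol] gives the set of successor states as a bitmask:
     * from p: 0 -> {p}, 1 -> {p,q}; from q: no outgoing transitions. */
    static const unsigned delta[2][2] = {
        /* p */ { P, P | Q },
        /* q */ { 0, 0     },
    };

    /* Simulate M on a string of '0'/'1' characters; returns 1 iff M accepts. */
    int accepts(const char *input) {
        unsigned current = P;                 /* start state set: {p} */
        for (const char *c = input; *c; c++) {
            unsigned next = 0;
            for (int s = 0; s < 2; s++)       /* fan out from every reachable state */
                if (current & (1u << s))
                    next |= delta[s][*c - '0'];
            current = next;
        }
        return (current & Q) != 0;            /* accept iff q is reachable */
    }

    int main(void) {
        printf("1011 -> %s\n", accepts("1011") ? "accepted" : "rejected");
        printf("10   -> %s\n", accepts("10")   ? "accepted" : "rejected");
        return 0;
    }

This is exactly the "cloning" reading of nondeterminism: the bitmask holds every copy of the machine that is still alive, and acceptance only asks whether some copy ends in q.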
https://en.wikipedia.org/wiki/Nondeterministic_finite_automaton
The corporate opportunity doctrine is the legal principle providing that directors, officers, and controlling shareholders of a corporation must not take for themselves any business opportunity that could benefit the corporation.[1] The corporate opportunity doctrine is one application of the fiduciary duty of loyalty.[2] The corporate opportunity doctrine does not apply to all fiduciaries of a corporation; rather, it is limited to directors, officers, and controlling shareholders.[3] The doctrine applies regardless of whether the corporation is harmed by the transaction; indeed, it applies even if the corporation benefits from the transaction.[4] The corporate opportunity doctrine only applies if the opportunity was not disclosed to the corporation. If the opportunity was disclosed to the board of directors and the board declined to take the opportunity for the corporation, the fiduciary may take the opportunity for themself.[5] When the corporate opportunity doctrine applies, the corporation is entitled to all profits earned by the fiduciary from the transaction.[6] In the leading English law case of Regal (Hastings) Ltd v Gulliver [1942] UKHL 1, it was held that "The rule of equity which insists on those who by use of a fiduciary position make a profit, being liable to account for that profit, in no way depends on fraud, or absence of bona fides ... or whether the plaintiff has in fact been damaged or benefited by his action." A business opportunity is a corporate opportunity if the corporation is financially able to undertake the opportunity, the opportunity is within the corporation's line of business, and the corporation has an interest or expectancy in the opportunity.[7] The Delaware Court of Chancery has stated, "An opportunity is within a corporation's line of business ... if it is an activity as to which the corporation has fundamental knowledge, practical experience and ability to pursue."[8] In In re eBay, Inc. Shareholders Litigation, investing in various securities was held to be in a line of business of eBay, despite the fact that eBay's primary purpose is to provide an online auction platform.[9] Investing was in a line of business of eBay because eBay "consistently invested a portion of its cash on hand in marketable securities."[10] A corporation has an interest or expectancy in a business opportunity if the opportunity would further an established business policy of the corporation.[11]
https://en.wikipedia.org/wiki/Corporate_opportunity
Quantum complex networks are complex networks whose nodes are quantum computing devices.[1][2] Quantum mechanics has been used to create secure quantum communications channels that are protected from hacking.[3][4] Quantum communications offer the potential for secure enterprise-scale solutions.[5][2][6] In theory, it is possible to take advantage of quantum mechanics to create secure communications using features such as quantum key distribution, an application of quantum cryptography that enables secure communications.[3] Quantum teleportation can transfer data at a higher rate than classical channels.[4] Successful quantum teleportation experiments were conducted in 1998,[7] and prototypical quantum communication networks arrived in 2004.[8] Large-scale communication networks tend to have non-trivial topologies and characteristics, such as the small-world effect, community structure, or scale-free degree distributions.[6] In quantum information theory, qubits are analogous to bits in classical systems. A qubit is a quantum object that, when measured, can be found to be in one of only two states, and that is used to transmit information.[3] Photon polarization and nuclear spin are examples of binary phenomena that can be used as qubits.[3] Quantum entanglement is a physical phenomenon characterized by correlation between the quantum states of two or more physically separate qubits.[3] Maximally entangled states are those that maximize the entropy of entanglement.[9][10] In the context of quantum communication, entangled qubits are used as a quantum channel.[3] Bell measurement is a kind of joint quantum-mechanical measurement of two qubits such that, after the measurement, the two qubits are maximally entangled.[3][10] Entanglement swapping is a strategy used in the study of quantum networks that allows connections in the network to change.[1][11] For example, given four qubits A, B, C and D, such that qubits C and D are held at the same station while A and B are held at two different stations, and such that qubit A is entangled with qubit C and qubit B is entangled with qubit D: performing a Bell measurement on qubits C and D entangles qubits A and B, despite the fact that these two qubits never interact directly with each other. Following this process, the entanglement between qubits A and C, and between qubits B and D, is lost. This strategy can be used to define the network topology.[1][11][12] While models for quantum complex networks are not of identical structure, usually a node represents a set of qubits in the same station (where operations like Bell measurements and entanglement swapping can be applied) and an edge between nodes i and j means that a qubit in node i is entangled with a qubit in node j, although those two qubits are in different places and so cannot physically interact.[1][11] Quantum networks where the links are interaction terms instead of entanglement are also of interest.[13] Each node in the network contains a set of qubits in different states.
To represent the quantum state of these qubits, it is convenient to use Dirac notation and represent the two possible states of each qubit as |0⟩ and |1⟩.[1][11] In this notation, two particles are entangled if the joint wave function |ψij⟩ cannot be decomposed as a product[3][10]

    |ψij⟩ = |ϕ⟩i ⊗ |ϕ⟩j,

where |ϕ⟩i represents the quantum state of the qubit at node i and |ϕ⟩j represents the quantum state of the qubit at node j. Another important concept is maximally entangled states. The four states (the Bell states) that maximize the entropy of entanglement between two qubits can be written as follows:[3][10]

    |Φ±⟩ = (|00⟩ ± |11⟩)/√2,
    |Ψ±⟩ = (|01⟩ ± |10⟩)/√2.

The quantum random network model proposed by Perseguers et al. (2009)[1] can be thought of as a quantum version of the Erdős–Rényi model. In this model, each node contains N−1 qubits, one for each other node. The degree of entanglement between a pair of nodes, represented by p, plays a similar role to the parameter p in the Erdős–Rényi model, in which two nodes form a connection with probability p; in the context of quantum random networks, p refers to the probability of converting an entangled pair of qubits to a maximally entangled state using only local operations and classical communication (LOCC).[14] Using Dirac notation, a pair of entangled qubits connecting the nodes i and j is represented by a state of the form

    |ψij⟩ = √(1 − p/2) |00⟩ + √(p/2) |11⟩.

For p = 0, the two qubits are not entangled:

    |ψij⟩ = |00⟩,

and for p = 1, we obtain the maximally entangled state

    |ψij⟩ = (|00⟩ + |11⟩)/√2.

For intermediate values 0 < p < 1, any entangled state is, with probability p, successfully converted to the maximally entangled state using LOCC operations.[14] One feature that distinguishes this model from its classical analogue is the fact that, in quantum random networks, links are only truly established after they are measured, and it is possible to exploit this fact to shape the final state of the network. For an initial quantum complex network with an infinite number of nodes, Perseguers et al.[1] showed that the right measurements and entanglement swapping make it possible to collapse the initial network to a network containing any finite subgraph, provided that p scales with N as p ∼ N^Z for some Z ≥ −2. This result is contrary to classical graph theory, where the types of subgraphs contained in a network are bounded by the value of Z.[15] Entanglement percolation models attempt to determine whether a quantum network is capable of establishing a connection between two arbitrary nodes through entanglement, and to find the best strategies to create such connections.[11][16] A model proposed by Cirac et al. (2007)[16] was applied to complex networks by Cuquet et al. (2009);[11] in these models, nodes are distributed in a lattice[16] or in a complex network,[11] and each pair of neighbors shares two pairs of entangled qubits that can be converted to a maximally entangled qubit pair with probability p. We can think of maximally entangled qubits as the true links between nodes.
In classical percolation theory, with a probability p that two nodes are connected, p has a critical value (denoted by pc), such that if p > pc a path between two randomly selected nodes exists with a finite probability, while for p < pc the probability of such a path existing is asymptotically zero.[17] pc depends only on the network topology.[17] A similar phenomenon was found in the model proposed by Cirac et al. (2007),[16] where the probability of forming a maximally entangled state between two randomly selected nodes is zero if p < pc and finite if p > pc. The main difference between classical and entanglement percolation is that, in quantum networks, it is possible to change the links in the network, thereby changing the effective topology of the network. As a result, pc depends on the strategy used to convert partially entangled qubits to maximally entangled qubits.[11][16] With a naïve approach, pc for a quantum network is equal to pc for a classical network with the same topology.[16] Nevertheless, it was shown that it is possible to take advantage of entanglement swapping to lower pc, both in regular lattices[16] and in complex networks.[11]
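The threshold behaviour around pc can be illustrated with a small classical Monte Carlo experiment, offered here as an analogy to, not an implementation of, the quantum strategies above: each bond of a square grid is kept with probability p (as if a maximally entangled pair had been successfully established there), and we estimate how often two opposite corners of the grid end up connected. For bond percolation on the square lattice, pc = 1/2, and the connection probability rises sharply as p crosses that value.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define L 32                     /* grid side; corners are (0,0) and (L-1,L-1) */
    #define TRIALS 2000

    static int right_open[L][L], down_open[L][L], seen[L][L];
    static int stack_[L * L][2];

    /* Depth-first search over open bonds, starting from the corner (0,0). */
    static int corners_connected(void) {
        int top = 0;
        for (int i = 0; i < L; i++)
            for (int j = 0; j < L; j++) seen[i][j] = 0;
        seen[0][0] = 1;
        stack_[top][0] = 0; stack_[top][1] = 0; top++;
        while (top > 0) {
            top--;
            int i = stack_[top][0], j = stack_[top][1];
            /* visit the four neighbours, crossing only open bonds */
            if (j + 1 < L && right_open[i][j]     && !seen[i][j + 1]) { seen[i][j + 1] = 1; stack_[top][0] = i;     stack_[top][1] = j + 1; top++; }
            if (j > 0     && right_open[i][j - 1] && !seen[i][j - 1]) { seen[i][j - 1] = 1; stack_[top][0] = i;     stack_[top][1] = j - 1; top++; }
            if (i + 1 < L && down_open[i][j]      && !seen[i + 1][j]) { seen[i + 1][j] = 1; stack_[top][0] = i + 1; stack_[top][1] = j;     top++; }
            if (i > 0     && down_open[i - 1][j]  && !seen[i - 1][j]) { seen[i - 1][j] = 1; stack_[top][0] = i - 1; stack_[top][1] = j;     top++; }
        }
        return seen[L - 1][L - 1];
    }

    int main(void) {
        srand((unsigned)time(NULL));
        for (double p = 0.3; p <= 0.71; p += 0.1) {
            int hits = 0;
            for (int t = 0; t < TRIALS; t++) {
                for (int i = 0; i < L; i++)
                    for (int j = 0; j < L; j++) {
                        right_open[i][j] = (double)rand() / RAND_MAX < p;
                        down_open[i][j]  = (double)rand() / RAND_MAX < p;
                    }
                hits += corners_connected();
            }
            printf("p = %.1f: corners connected in %.1f%% of trials\n",
                   p, 100.0 * hits / TRIALS);
        }
        return 0;
    }

The quantum results cited above amount to the statement that entanglement swapping lets a network reach this connected regime at a lower p than the naïve classical mapping would predict.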
https://en.wikipedia.org/wiki/Quantum_complex_network
A foreign language writing aid is a computer program or any other instrument that assists a non-native language user (also referred to as a foreign language learner) in writing decently in their target language. Assistive operations can be classified into two categories: on-the-fly prompts and post-writing checks. Assisted aspects of writing include: lexical, syntactic (syntactic and semantic roles of a word's frame), lexical semantic (context/collocation-influenced word choice and user-intention-driven synonym choice) and idiomatic expression transfer, etc. Different types of foreign language writing aids include automated proofreading applications, text corpora, dictionaries, translation aids and orthography aids. The four major components in the acquisition of a language are listening, speaking, reading and writing.[1] While most people have no difficulties in exercising these skills in their native language, doing so in a second or foreign language is not that easy. In the area of writing, research has found that foreign language learners find it painstaking to compose in the target language, producing less eloquent sentences and encountering difficulties in the revision of their written work. These difficulties, however, are not attributed to their linguistic abilities.[2] Many language learners experience foreign language anxiety, feelings of apprehensiveness and nervousness, when learning a second language.[1] In the case of writing in a foreign language, this anxiety can be alleviated by foreign language writing aids, as they assist non-native language users in independently producing decent written work at their own pace, hence increasing their confidence in themselves and their own learning abilities.[3] With advancements in technology, aids in foreign language writing are no longer restricted to traditional mediums such as teacher feedback and dictionaries. Known as computer-assisted language learning (CALL), the use of computers in language classrooms has become more common, and one example would be the use of word processors to assist learners of a foreign language in the technical aspects of their writing, such as grammar.[4] In comparison with correction feedback from the teacher, the use of word processors has been found to be a better tool for improving the writing skills of students who are learning English as a foreign language (EFL), possibly because students find it more encouraging to learn from their mistakes via a neutral and detached source.[3] Apart from learners' confidence in writing, their motivation and attitudes also improve through the use of computers.[2] Foreign language learners' awareness of the conventions in writing can be improved through reference to guidelines showing the features and structure of the target genre.[2] At the same time, interaction and feedback help to engage learners and expedite their learning, especially with active participation.[5] In online writing situations, learners are isolated, without face-to-face interaction with others. Therefore, a foreign language writing aid should provide interaction and feedback so as to ease the learning process. This complements communicative language teaching (CLT), a teaching approach that highlights interaction as both the means and the aim of learning a language. In accordance with the simple view of writing, both lower-order and higher-order skills are required.
Lower-order skills involve those of spelling and transcription, whereas higher-order skills involve ideation, which refers to idea generation and organisation.[6] Proofreading is helpful for non-native language users in minimising errors while writing in a foreign language. Spell checkers and grammar checkers are two applications that aid in the automatic proofreading of written work.[7] To achieve writing competence in a non-native language, especially in an alphabetic language, spelling proficiency is of utmost importance.[8] Spelling proficiency has been identified as a good indicator of a learner's acquisition and comprehension of alphabetic principles in the target language.[9] Documented data on misspelling patterns indicate that the majority of misspellings fall under the four categories of letter insertion, deletion, transposition and substitution.[10] In languages where the pronunciation of certain sequences of letters may be similar, misspellings may occur when the non-native language learner relies heavily on the sounds of the target language because they are unsure about the accurate spelling of the words.[11] The spell checker application is a type of writing aid that non-native language learners can rely on to detect and correct their misspellings in the target language.[12] In general, spell checkers can operate in one of two modes: interactive spell checking or batch spell checking.[7] In the interactive mode, the spell checker detects and marks misspelled words with a squiggly underline as the words are being typed. In contrast, batch spell checking is performed on a batch-by-batch basis as the appropriate command is entered. Spell checkers, such as those used in Microsoft Word, can operate in either mode. Although spell checkers are commonplace in numerous software products, errors specifically made by learners of a target language may not be sufficiently catered for.[13] This is because generic spell checkers function on the assumption that their users are competent speakers of the target language, whose misspellings are primarily due to accidental typographical errors.[14] The majority of misspellings were found to be attributed to systematic competence errors instead of accidental typographical ones, with up to 48% of these errors failing to be detected or corrected by the generic spell checker used.[15] In view of the deficiency of generic spell checkers, programs have been designed to target non-native misspellings,[14] such as FipsCor and Spengels (discussed below).
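The four misspelling categories listed above (insertion, deletion, transposition and substitution) are exactly the edit operations counted by the Damerau–Levenshtein distance, one common way a spell checker can rank dictionary words as correction candidates. The following C function is a generic sketch of that metric (the optimal-string-alignment variant), not code from any of the systems discussed here.

    #include <stdio.h>
    #include <string.h>

    #define MAXLEN 64

    static int min3(int a, int b, int c) {
        int m = a < b ? a : b;
        return m < c ? m : c;
    }

    /* Number of insertions, deletions, substitutions and adjacent
     * transpositions needed to turn string a into string b.
     * Inputs are assumed to be at most MAXLEN characters long. */
    int edit_distance(const char *a, const char *b) {
        int la = (int)strlen(a), lb = (int)strlen(b);
        static int d[MAXLEN + 1][MAXLEN + 1];
        if (la > MAXLEN || lb > MAXLEN) return -1;
        for (int i = 0; i <= la; i++) d[i][0] = i;
        for (int j = 0; j <= lb; j++) d[0][j] = j;
        for (int i = 1; i <= la; i++)
            for (int j = 1; j <= lb; j++) {
                int cost = (a[i - 1] != b[j - 1]);
                d[i][j] = min3(d[i - 1][j] + 1,         /* deletion      */
                               d[i][j - 1] + 1,         /* insertion     */
                               d[i - 1][j - 1] + cost); /* substitution  */
                if (i > 1 && j > 1 && a[i - 1] == b[j - 2] && a[i - 2] == b[j - 1]
                    && d[i - 2][j - 2] + 1 < d[i][j])
                    d[i][j] = d[i - 2][j - 2] + 1;      /* transposition */
            }
        return d[la][lb];
    }

    int main(void) {
        /* "recieve" is one adjacent transposition away from "receive". */
        printf("%d\n", edit_distance("recieve", "receive"));  /* prints 1 */
        return 0;
    }

A checker built on this idea suggests, for a flagged word, the dictionary entries with the smallest distance; systems aimed at non-native writers, such as those described next, layer phonological and morphological knowledge on top of this kind of baseline.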
In FipsCor, a combination of methods, such as the alpha-code method, the phonological reinterpretation method and the morphological treatment method, has been adopted in an attempt to create a spell checker tailored to French language learners.[11] Spengels, on the other hand, is a tutoring system developed to aid Dutch children and non-native Dutch writers of English in accurate English spelling.[16] Grammatical (syntactical and morphological) competency is another indicator of a non-native speaker's proficiency in writing in the target language. Grammar checkers are a type of computerised application that non-native speakers can make use of to proofread their writing, as such programs endeavour to identify syntactical errors.[17] Grammar and style checking is recognised as one of the seven major applications of natural language processing, and every project in this field aims to build grammar checkers into a writing aid instead of a robust man-machine interface.[17] Currently, grammar checkers are incapable of inspecting the linguistic or even syntactic correctness of a text as a whole. They are restricted in their usefulness in that they are only able to check a small fraction of all possible syntactic structures. Grammar checkers are unable to detect semantic errors in a correctly structured syntactic order; i.e., they do not register an error when the sentence structure is syntactically correct but semantically meaningless.[18] Although grammar checkers have largely concentrated on ensuring grammatical writing, the majority of them are modelled after native writers, neglecting the needs of non-native language users.[19] Much research has attempted to tailor grammar checkers to the needs of non-native language users. Granska, a Swedish grammar checker, has been worked on extensively by numerous researchers investigating grammar-checking properties for foreign language learners.[19][20] The Universidad Nacional de Educación a Distancia has a computerised grammar checker for native Spanish speakers learning EFL, to help identify and correct grammatical mistakes without feedback from teachers.[21] Theoretically, the functions of a conventional spell checker can be incorporated into a grammar checker entirely, and this is likely the route that the language processing industry is working towards.[18] In reality, internationally available word processors such as Microsoft Word have difficulties combining spell checkers and grammar checkers, due to licensing issues; the various proofing instrument mechanisms for a given language may have been licensed from different providers at different times.[18] Electronic corpora in the target language provide non-native language users with authentic examples of language use rather than fixed examples, which may not be reflected in daily interactions.[22] The contextualised grammatical knowledge acquired by non-native language users through exposure to authentic texts in corpora allows them to grasp the manner of sentence formation in the target language, enabling effective writing.[23] Concordances set up through concordancing programs of corpora allow non-native language users to conveniently grasp lexico-grammatical patterns of the target language. Collocational frequencies of words (i.e.
word pairing frequencies) provide non-native language users with information about accurate grammar structures which can be used when writing in the target language.[22] Collocational information also enables non-native language users to make clearer distinctions between words and expressions commonly regarded as synonyms. In addition, corpus information about semantic prosody (i.e., the appropriate choice of words to be used in positive and negative co-texts) is available as a reference for non-native language users in writing. The corpora can also be used to check the acceptability or syntactic "grammaticality" of their written work.[24] A survey conducted on English as a Second Language (ESL) students revealed corpus activities to be generally well received and thought to be especially useful for learning word usage patterns and improving writing skills in the foreign language.[23] It was also found that students' writing became more natural after using two online corpora in a 90-minute training session.[25] In recent years, there have also been suggestions to incorporate the applications of corpora into EFL writing courses in China to improve the writing skills of learners.[26] Dictionaries of the target language are commonly recommended to non-native language learners.[27] They serve as reference tools by offering definitions, phonetic spellings, word classes and sample sentences.[22] It was found that the use of a dictionary can help learners of a foreign language write better if they know how to use one.[28] Foreign language learners can make use of grammar-related information from a dictionary to select appropriate words, check the correct spelling of a word and look up synonyms to add more variety to their writing.[28] Nonetheless, learners have to be careful when using dictionaries, as the lexical-semantic information contained in dictionaries might not be sufficient with regard to language production in a particular context, and learners may be misled into choosing incorrect words.[29] Presently, many notable dictionaries are available online, and basic usage is usually free. These online dictionaries allow learners of a foreign language to find references for a word much faster and more conveniently than with a manual version, thus minimising the disruption to the flow of writing.[30] Available online dictionaries can be found under the list of online dictionaries. Dictionaries come in different levels of proficiency, such as advanced, intermediate and beginner, and learners can choose the level best suited to them. There are many different types of dictionaries available, such as thesauri or bilingual dictionaries, which cater to the specific needs of a learner of a foreign language.
In recent years, there have also been specialised dictionaries for foreign language learners that employ natural language processing tools to assist in the compilation of dictionary entries, by generating feedback on the vocabulary that learners use and automatically providing inflectional and/or derivational forms for referenced items in the explanations.[31] The word thesaurus, meaning 'treasury' or 'storehouse' in Greek and Latin, is used to refer to several varieties of language resources; it is most commonly known as a book that groups words in synonym clusters and related meanings.[32] Its original sense of 'dictionary or encyclopedia' has been overshadowed by the emergence of the Roget-style thesaurus,[32] and it is considered a writing aid, as it helps writers with the selection of words.[33] The differences between a Roget-style thesaurus and a dictionary lie in the indexing and the information given: the words in a thesaurus are grouped by meaning, usually without definitions, while those in a dictionary are in alphabetical order with definitions.[33] When users are unable to find a word in a dictionary, it is usually due to the constraint of searching alphabetically by common and well-known headwords; the use of a thesaurus eliminates this issue by allowing users to search for a word through another word, based on concept.[34] Foreign language learners can make use of a thesaurus to find near-synonyms of a word, to expand their vocabulary skills and add variety to their writing. Many word processors are equipped with a basic thesaurus function, allowing learners to change a word to another similar word with ease. However, learners must be mindful that even if words are near-synonyms, they might not be suitable replacements depending on the context.[33] Spelling dictionaries are reference materials that specifically aid users in finding the correct spelling of a word. Unlike common dictionaries, spelling dictionaries do not typically provide definitions and other grammar-related information about the words.
While typical dictionaries can be used to check or search for correct spellings, new and improved spelling dictionaries can assist users in finding the correct spelling of a word even when its first letter is unknown or known imperfectly.[35] This circumvents the alphabetical-ordering limitations of a classic dictionary.[34] These spelling dictionaries are especially useful for foreign language learners, as the inclusion of concise definitions and suggestions for commonly confused words helps learners to choose the correct spellings of words that sound alike or that they pronounce wrongly.[35] A personal spelling dictionary, being a collection of a single learner's regularly misspelled words, is tailored to the individual and can be expanded with new entries that the learner does not know how to spell, or contracted when the learner has mastered the words.[36] Learners also use the personal spelling dictionary more than electronic spell checkers, and additions can easily be made to enhance it as a learning tool, as it can include things like rules for writing and proper nouns, which are not included in electronic spell checkers.[36] Studies also suggest that personal spelling dictionaries are better tools for learners to improve their spelling than trying to memorise unrelated words from lists or books.[37] Current research has shown that language learners utilise dictionaries predominantly to check for meanings, and that bilingual dictionaries are preferred over monolingual dictionaries for these uses.[38] Bilingual dictionaries have proved to be helpful for learners of a new language, although, in general, they hold less extensive coverage of information than monolingual dictionaries.[30] Nonetheless, good bilingual dictionaries capitalise on the fact that they are useful for learners by integrating helpful information about commonly known errors, false friends and contrastive predicaments from the two languages.[30] Studies have shown that learners of English have benefited from the use of bilingual dictionaries in their production and comprehension of unknown words.[39] When using bilingual dictionaries, learners also tend to read entries in both their native and target languages,[39] and this helps them to map the meanings of the target word in the foreign language onto its counterpart in their native language. It was also found that the use of bilingual dictionaries improves the results of translation tasks by ESL learners, thus showing that language learning can be enhanced with the use of bilingual dictionaries.[40] The use of bilingual dictionaries in foreign language writing tests remains a matter of debate.
Some studies support the view that the use of a dictionary in a foreign language examination increases the mean score of the test; this was one of the factors that influenced the decision to ban the use of dictionaries in several foreign language tests in the UK.[41] More recent studies, however, report that further research into the use of bilingual dictionaries during writing tests has found no significant differences in test scores attributable to the use of a dictionary.[42] Nevertheless, from the perspective of foreign language learners, being able to use a bilingual dictionary during a test is reassuring and increases their confidence.[43]

There are many free translation aids online, also known as machine translation (MT) engines, such as Google Translate and Babel Fish (now defunct), that allow foreign language learners to translate between their native language and the target language quickly and conveniently.[44] Of the three major categories of computerised translation tools, namely computer-assisted translation (CAT), terminology data banks and machine translation, machine translation is the most ambitious, as it is designed to handle the whole process of translation entirely without the intervention of human assistance.[45]

Studies have shown that translation into the target language can be used to improve the linguistic proficiency of foreign language learners.[46] Machine translation aids help beginner learners of a foreign language write more and produce better quality work in the target language; writing directly in the target language without any aid requires more effort on the learners' part, resulting in the difference in quantity and quality.[44]

However, teachers advise learners against the use of machine translation aids, as their output is highly misleading and unreliable, producing wrong answers most of the time.[47] Over-reliance on the aids also hinders the development of learners' writing skills, and is viewed as an act of plagiarism, since the language used is technically not produced by the student.[47]

The orthography of a language is the use of a specific script to write a language according to a conventionalised usage.[48] One's ability to read in a language is further enhanced by concurrent learning of writing.[49] This is because writing helps the language learner recognise and remember the features of the orthography, which is particularly helpful when the orthography has an irregular phonetic-to-spelling mapping.[49] This, in turn, helps the language learner to focus on the components which make up the word.[49]

Online orthography aids[50] provide language learners with a step-by-step process for learning how to write characters. They are especially useful for learners of languages with logographic writing systems, such as Chinese or Japanese, in which the ordering of strokes in characters is important. Alternatively, tools like Skritter provide an interactive way of learning via a system similar to writing tablets,[51][better source needed] albeit on computers, while providing feedback on stroke ordering and progress. Handwriting recognition is supported in certain programs,[52] which help language learners learn the orthography of the target language.
Practice of orthography is also available in many applications, with tracing systems in place to help learners with stroke order.[53]

Apart from online orthography programs, offline orthography aids for language learners of logographic languages are also available. Character cards, which contain lists of frequently used characters of the target language, serve as a portable form of visual writing aid for language learners of logographic languages who may face difficulties in recalling the writing of certain characters.[54]

Studies have shown that tracing logographic characters improves the word recognition abilities of foreign language learners, as well as their ability to map meanings onto the characters.[55] This, however, does not improve their ability to link pronunciation with characters, which suggests that these learners need more than orthography aids to help them master the language in both writing and speech.[56]
https://en.wikipedia.org/wiki/Foreign-language_writing_aid
In mathematics, a finite field or Galois field (so named in honor of Évariste Galois) is a field that contains a finite number of elements. As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are the integers mod p when p is a prime number.

The order of a finite field is its number of elements, which is either a prime number or a prime power. For every prime number p and every positive integer k there are fields of order p^k. All finite fields of a given order are isomorphic.

Finite fields are fundamental in a number of areas of mathematics and computer science, including number theory, algebraic geometry, Galois theory, finite geometry, cryptography and coding theory.

A finite field is a finite set that is a field; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as the field axioms.[1]

The number of elements of a finite field is called its order or, sometimes, its size. A finite field of order q exists if and only if q is a prime power p^k (where p is a prime number and k is a positive integer). In a field of order p^k, adding p copies of any element always results in zero; that is, the characteristic of the field is p.[1]

For q = p^k, all fields of order q are isomorphic (see § Existence and uniqueness below).[2] Moreover, a field cannot contain two different finite subfields with the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denoted F_q or GF(q), where the letters GF stand for "Galois field".[3]

In a finite field of order q, the polynomial X^q − X has all q elements of the finite field as roots. The non-zero elements of a finite field form a multiplicative group. This group is cyclic, so all non-zero elements can be expressed as powers of a single element called a primitive element of the field. (In general there will be several primitive elements for a given field.)[1]

The simplest examples of finite fields are the fields of prime order: for each prime number p, the prime field of order p may be constructed as the integers modulo p, Z/pZ.[1]

The elements of the prime field of order p may be represented by integers in the range 0, …, p − 1. The sum, the difference and the product are the remainder of the division by p of the result of the corresponding integer operation.[1] The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (see Extended Euclidean algorithm § Modular integers).
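As a quick illustration of this prime-field arithmetic, the following Python sketch implements GF(p) with addition, multiplication and inversion via the extended Euclidean algorithm; the modulus p = 7 is an arbitrary choice for the example:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inv_mod(a, p):
    """Multiplicative inverse of a nonzero element a in GF(p)."""
    g, x, _ = ext_gcd(a % p, p)
    assert g == 1, "a must be nonzero mod p"
    return x % p

p = 7          # any prime
a, b = 3, 5
print((a + b) % p)    # sum in GF(7): 8 mod 7 = 1
print((a * b) % p)    # product in GF(7): 15 mod 7 = 1
print(inv_mod(a, p))  # 3 * 5 = 15 = 1 mod 7, so the inverse of 3 is 5
```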
Let F be a finite field. For any element x in F and any integer n, denote by n⋅x the sum of n copies of x. The least positive n such that n⋅1 = 0 is the characteristic p of the field.[1] This allows defining a multiplication (k, x) ↦ k⋅x of an element k of GF(p) by an element x of F by choosing an integer representative for k. This multiplication makes F into a GF(p)-vector space.[1] It follows that the number of elements of F is p^n for some integer n.[1]

The identity (x + y)^p = x^p + y^p (sometimes called the freshman's dream) is true in a field of characteristic p. This follows from the binomial theorem, as each binomial coefficient of the expansion of (x + y)^p, except the first and the last, is a multiple of p.

By Fermat's little theorem, if p is a prime number and x is in the field GF(p) then x^p = x. This implies the equality

    X^p − X = ∏_{a ∈ GF(p)} (X − a)

for polynomials over GF(p). More generally, every element in GF(p^n) satisfies the polynomial equation x^(p^n) − x = 0.

Any finite field extension of a finite field is separable and simple. That is, if E is a finite field and F is a subfield of E, then E is obtained from F by adjoining a single element whose minimal polynomial is separable. To use a piece of jargon, finite fields are perfect.[1]

A more general algebraic structure that satisfies all the other axioms of a field, but whose multiplication is not required to be commutative, is called a division ring (or sometimes skew field). By Wedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1]

Let q = p^n be a prime power, and F be the splitting field of the polynomial P = X^q − X over the prime field GF(p). This means that F is a finite field of lowest order in which P has q distinct roots (the formal derivative of P is P′ = −1, implying that gcd(P, P′) = 1, which in general implies that the splitting field is a separable extension of the original). The above identity shows that the sum and the product of two roots of P are roots of P, as well as the multiplicative inverse of a root of P. In other words, the roots of P form a field of order q, which is equal to F by the minimality of the splitting field.

The uniqueness up to isomorphism of splitting fields thus implies that all fields of order q are isomorphic. Also, if a field F has a field of order q = p^k as a subfield, its elements are the q roots of X^q − X, and F cannot contain another subfield of order q.
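As a quick numeric sanity check of the two identities above (the freshman's dream and Fermat's little theorem) in a prime field, here with the arbitrary choice p = 5:

```python
p = 5  # an arbitrary prime

# Freshman's dream: (x + y)^p = x^p + y^p in GF(p)
for x in range(p):
    for y in range(p):
        assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p

# Fermat's little theorem: x^p = x for every x in GF(p)
for x in range(p):
    assert pow(x, p, p) == x

print("both identities hold in GF(5)")
```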
In summary, we have the following classification theorem, first proved in 1893 by E. H. Moore:[2] The order of a finite field is a prime power. For every prime power q there are fields of order q, and they are all isomorphic. In these fields, every element satisfies x^q = x, and the polynomial X^q − X factors as

    X^q − X = ∏_{a ∈ F} (X − a).

It follows that GF(p^n) contains a subfield isomorphic to GF(p^m) if and only if m is a divisor of n; in that case, this subfield is unique. In fact, the polynomial X^(p^m) − X divides X^(p^n) − X if and only if m is a divisor of n.

Given a prime power q = p^n with p prime and n > 1, the field GF(q) may be explicitly constructed in the following way. One first chooses an irreducible polynomial P in GF(p)[X] of degree n (such an irreducible polynomial always exists). Then the quotient ring GF(q) = GF(p)[X]/(P) of the polynomial ring GF(p)[X] by the ideal generated by P is a field of order q.

More explicitly, the elements of GF(q) are the polynomials over GF(p) whose degree is strictly less than n. The addition and the subtraction are those of polynomials over GF(p). The product of two elements is the remainder of the Euclidean division by P of the product in GF(p)[X]. The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; see Extended Euclidean algorithm § Simple algebraic field extensions.

However, with this representation, elements of GF(q) may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonly α, to the element of GF(q) that corresponds to the polynomial X. So, the elements of GF(q) become polynomials in α, where P(α) = 0, and, when one encounters a polynomial in α of degree greater than or equal to n (for example after a multiplication), one knows that one has to use the relation P(α) = 0 to reduce its degree (this is what Euclidean division does).
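To make the quotient-ring construction concrete, here is a small Python sketch of GF(16) = GF(2)[X]/(X^4 + X + 1), representing each element as a 4-bit integer whose bits are the coefficients of 1, α, α^2, α^3 (a common convention, used here purely for illustration):

```python
MOD = 0b10011  # X^4 + X + 1, irreducible over GF(2)

def gf16_add(a, b):
    """Addition in GF(16): coefficient-wise mod 2, i.e. XOR."""
    return a ^ b

def gf16_mul(a, b):
    """Multiplication in GF(16): carry-less product with reduction by MOD."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        if a & 0b10000:  # degree reached 4: subtract (XOR) the modulus
            a ^= MOD
        b >>= 1
    return prod

alpha = 0b0010  # the element alpha, i.e. the class of X
a2 = gf16_mul(alpha, alpha)
print(bin(a2))                  # alpha^2 -> 0b100
print(bin(gf16_mul(a2, a2)))    # alpha^4 = alpha + 1 -> 0b11
```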
Except in the construction of GF(4), there are several possible choices for P, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses for P a polynomial of the form X^n + aX + b, which makes the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic 2, irreducible polynomials of the form X^n + aX + b may not exist. In characteristic 2, if the polynomial X^n + X + 1 is reducible, it is recommended to choose X^n + X^k + 1 with the lowest possible k that makes the polynomial irreducible. If all these trinomials are reducible, one chooses "pentanomials" X^n + X^a + X^b + X^c + 1, as polynomials of degree greater than 1 with an even number of terms are never irreducible in characteristic 2, having 1 as a root.[4]

A possible choice for such a polynomial is given by Conway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.

In the next sections, we will show how the general construction method outlined above works for small finite fields.

The smallest non-prime field is the field with four elements, which is commonly denoted GF(4) or F_4. It consists of the four elements 0, 1, α, 1 + α such that α^2 = 1 + α, 1⋅α = α⋅1 = α, x + x = 0, and x⋅0 = 0⋅x = 0 for every x ∈ GF(4), the other operation results being easily deduced from the distributive law. See below for the complete operation tables.

This may be deduced as follows from the results of the preceding section. Over GF(2), there is only one irreducible polynomial of degree 2:

    X^2 + X + 1.

Therefore, for GF(4) the construction of the preceding section must involve this polynomial, and

    GF(4) = GF(2)[X]/(X^2 + X + 1).

Let α denote a root of this polynomial in GF(4). This implies that α^2 = 1 + α, and that α and 1 + α are the elements of GF(4) that are not in GF(2). The tables of the operations in GF(4) result from this, and are as follows:

    +      | 0      1      α      1+α
    0      | 0      1      α      1+α
    1      | 1      0      1+α    α
    α      | α      1+α    0      1
    1+α    | 1+α    α      1      0

    ×      | 0      1      α      1+α
    0      | 0      0      0      0
    1      | 0      1      α      1+α
    α      | 0      α      1+α    1
    1+α    | 0      1+α    1      α

    x/y    | 1      α      1+α
    0      | 0      0      0
    1      | 1      1+α    α
    α      | α      1      1+α
    1+α    | 1+α    α      1

A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2. In the third table, for the division of x by y, the values of x must be read in the left column, and the values of y in the top row. (Because 0⋅z = 0 for every z in every ring, division by 0 has to remain undefined.)

From the tables, it can be seen that the additive structure of GF(4) is isomorphic to the Klein four-group, while the non-zero multiplicative structure is isomorphic to the group Z_3. The map φ : x ↦ x^2 is the non-trivial field automorphism, called the Frobenius automorphism, which sends α into the second root 1 + α of the above-mentioned irreducible polynomial X^2 + X + 1.

For applying the above general construction of finite fields in the case of GF(p^2), one has to find an irreducible polynomial of degree 2. For p = 2, this has been done in the preceding section. If p is an odd prime, there are always irreducible polynomials of the form X^2 − r, with r in GF(p).
More precisely, the polynomial X^2 − r is irreducible over GF(p) if and only if r is a quadratic non-residue modulo p (this is almost the definition of a quadratic non-residue). There are (p − 1)/2 quadratic non-residues modulo p. For example, 2 is a quadratic non-residue for p = 3, 5, 11, 13, …, and 3 is a quadratic non-residue for p = 5, 7, 17, …. If p ≡ 3 mod 4, that is p = 3, 7, 11, 19, …, one may choose −1 ≡ p − 1 as a quadratic non-residue, which allows us to have the very simple irreducible polynomial X^2 + 1.

Having chosen a quadratic non-residue r, let α be a symbolic square root of r, that is, a symbol that has the property α^2 = r, in the same way that the complex number i is a symbolic square root of −1. Then, the elements of GF(p^2) are all the linear expressions a + bα, with a and b in GF(p). The operations on GF(p^2) are defined as follows (the operations between elements of GF(p) represented by Latin letters are the operations in GF(p)):

    −(a + bα) = −a + (−b)α
    (a + bα) + (c + dα) = (a + c) + (b + d)α
    (a + bα)(c + dα) = (ac + rbd) + (ad + bc)α
    (a + bα)^(−1) = a(a^2 − rb^2)^(−1) + (−b)(a^2 − rb^2)^(−1)α
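A small Python sketch of these GF(p^2) formulas, using the arbitrary example p = 7 with the quadratic non-residue r = 3 (the squares mod 7 are 1, 2 and 4, so 3 is indeed a non-residue); elements a + bα are stored as pairs (a, b):

```python
p, r = 7, 3  # r must be a quadratic non-residue mod p

def add(u, v):
    """(a + b*alpha) + (c + d*alpha) = (a + c) + (b + d)*alpha."""
    (a, b), (c, d) = u, v
    return ((a + c) % p, (b + d) % p)

def mul(u, v):
    """(a + b*alpha)(c + d*alpha) = (ac + r*bd) + (ad + bc)*alpha."""
    (a, b), (c, d) = u, v
    return ((a * c + r * b * d) % p, (a * d + b * c) % p)

def inv(u):
    """Inverse via the norm a^2 - r*b^2; pow(x, -1, p) needs Python >= 3.8."""
    a, b = u
    n = pow((a * a - r * b * b) % p, -1, p)
    return (a * n % p, -b * n % p)

x = (2, 5)             # the element 2 + 5*alpha of GF(49)
print(mul(x, inv(x)))  # -> (1, 0), the unit of GF(49)
```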
The polynomial X^3 − X − 1 is irreducible over GF(2) and GF(3), that is, it is irreducible modulo 2 and 3 (to show this, it suffices to show that it has no root in GF(2) nor in GF(3)). It follows that the elements of GF(8) and GF(27) may be represented by expressions a + bα + cα^2, where a, b, c are elements of GF(2) or GF(3) (respectively), and α is a symbol such that α^3 = α + 1.

The addition, additive inverse and multiplication on GF(8) and GF(27) may thus be defined as follows; in the following formulas, the operations between elements of GF(2) or GF(3), represented by Latin letters, are the operations in GF(2) or GF(3), respectively:

    −(a + bα + cα^2) = −a + (−b)α + (−c)α^2   (for GF(8), this operation is the identity)
    (a + bα + cα^2) + (d + eα + fα^2) = (a + d) + (b + e)α + (c + f)α^2
    (a + bα + cα^2)(d + eα + fα^2) = (ad + bf + ce) + (ae + bd + bf + ce + cf)α + (af + be + cd + cf)α^2

The polynomial X^4 + X + 1 is irreducible over GF(2), that is, it is irreducible modulo 2. It follows that the elements of GF(16) may be represented by expressions a + bα + cα^2 + dα^3, where a, b, c, d are either 0 or 1 (elements of GF(2)), and α is a symbol such that α^4 = α + 1 (that is, α is defined as a root of the given irreducible polynomial). As the characteristic of GF(2) is 2, each element is its own additive inverse in GF(16). The addition and multiplication on GF(16) may be defined as follows; in the following formulas, the operations between elements of GF(2), represented by Latin letters, are the operations in GF(2):

    (a + bα + cα^2 + dα^3) + (e + fα + gα^2 + hα^3) = (a + e) + (b + f)α + (c + g)α^2 + (d + h)α^3
    (a + bα + cα^2 + dα^3)(e + fα + gα^2 + hα^3) = (ae + bh + cg + df) + (af + be + bh + cg + df + ch + dg)α + (ag + bf + ce + ch + dg + dh)α^2 + (ah + bg + cf + de + dh)α^3

The field GF(16) has eight primitive elements (the elements that have all nonzero elements of GF(16) as integer powers). These elements are the four roots of X^4 + X + 1 and their multiplicative inverses.
In particular, α is a primitive element, and the primitive elements are the α^m with m less than and coprime with 15 (that is, 1, 2, 4, 7, 8, 11, 13, 14).

The set of non-zero elements in GF(q) is an abelian group under multiplication, of order q − 1. By Lagrange's theorem, there exists a divisor k of q − 1 such that x^k = 1 for every non-zero x in GF(q). As the equation x^k = 1 has at most k solutions in any field, q − 1 is the lowest possible value for k. The structure theorem of finite abelian groups implies that this multiplicative group is cyclic, that is, all non-zero elements are powers of a single element. In summary: the multiplicative group of the non-zero elements of GF(q) is cyclic, and there exists an element a such that the q − 1 non-zero elements of GF(q) are a, a^2, …, a^(q−2), a^(q−1) = 1.

Such an element a is called a primitive element of GF(q). Unless q = 2, 3, the primitive element is not unique. The number of primitive elements is ϕ(q − 1), where ϕ is Euler's totient function.

The result above implies that x^q = x for every x in GF(q). The particular case where q is prime is Fermat's little theorem.

If a is a primitive element in GF(q), then for any non-zero element x in F, there is a unique integer n with 0 ≤ n ≤ q − 2 such that x = a^n. This integer n is called the discrete logarithm of x to the base a.

While a^n can be computed very quickly, for example using exponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in various cryptographic protocols; see Discrete logarithm for details.

When the nonzero elements of GF(q) are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction modulo q − 1. However, addition amounts to computing the discrete logarithm of a^m + a^n. The identity a^m + a^n = a^n(a^(m−n) + 1) allows one to solve this problem by constructing the table of the discrete logarithms of a^n + 1, called Zech's logarithms, for n = 0, …, q − 2 (it is convenient to define the discrete logarithm of zero as being −∞). Zech's logarithms are useful for large computations, such as linear algebra over medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
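As an illustration, the following Python sketch brute-forces the discrete logarithm table of GF(16) in the bit-mask representation used earlier (modulus X^4 + X + 1, base α) and derives the Zech logarithms from it; the names and the particular field are illustrative choices:

```python
MOD = 0b10011  # X^4 + X + 1 over GF(2)

def mul(a, b):
    """Multiplication in GF(16), elements as 4-bit integers."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return prod

# Brute-force discrete logarithms to the base alpha = 0b0010:
alpha, x = 0b0010, 1
log, power = {}, []          # log[x] = n  and  power[n] = alpha^n
for n in range(15):
    log[x] = n
    power.append(x)
    x = mul(x, alpha)

# Zech logarithms Z(n) = log(alpha^n + 1); adding 1 in GF(2^k) is XOR with 1.
# n = 0 is excluded, since alpha^0 + 1 = 0 has logarithm "minus infinity".
zech = {n: log[power[n] ^ 1] for n in range(15) if power[n] != 1}

# Addition via logarithms: alpha^m + alpha^n = alpha^((n + Z(m - n)) mod 15).
m, n = 5, 2
assert power[m] ^ power[n] == power[(n + zech[(m - n) % 15]) % 15]
```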
Every nonzero element of a finite field is a root of unity, as x^(q−1) = 1 for every nonzero element of GF(q). If n is a positive integer, an nth primitive root of unity is a solution of the equation x^n = 1 that is not a solution of the equation x^m = 1 for any positive integer m < n. If a is an nth primitive root of unity in a field F, then F contains all the n roots of unity, which are 1, a, a^2, …, a^(n−1).

The field GF(q) contains an nth primitive root of unity if and only if n is a divisor of q − 1; if n is a divisor of q − 1, then the number of primitive nth roots of unity in GF(q) is ϕ(n) (Euler's totient function). The number of nth roots of unity in GF(q) is gcd(n, q − 1).

In a field of characteristic p, every (np)th root of unity is also an nth root of unity. It follows that primitive (np)th roots of unity never exist in a field of characteristic p. On the other hand, if n is coprime to p, the roots of the nth cyclotomic polynomial are distinct in every field of characteristic p, as this polynomial is a divisor of X^n − 1, whose discriminant n^n is nonzero modulo p. It follows that the nth cyclotomic polynomial factors over GF(q) into distinct irreducible polynomials that all have the same degree, say d, and that GF(p^d) is the smallest field of characteristic p that contains the nth primitive roots of unity.

When computing Brauer characters, one uses the map α^k ↦ exp(2πik/(q − 1)) to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfield GF(p) consists of evenly spaced points around the unit circle (omitting zero).

The field GF(64) has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements with minimal polynomial of degree 6 over GF(2)) are primitive elements; and the primitive elements are not all conjugate under the Galois group.

The order of this field being 2^6, and the divisors of 6 being 1, 2, 3, 6, the subfields of GF(64) are GF(2), GF(2^2) = GF(4), GF(2^3) = GF(8), and GF(64) itself. As 2 and 3 are coprime, the intersection of GF(4) and GF(8) in GF(64) is the prime field GF(2). The union of GF(4) and GF(8) thus has 10 elements. The remaining 54 elements of GF(64) generate GF(64) in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree 6 over GF(2). This implies that, over GF(2), there are exactly 9 = 54/6 irreducible monic polynomials of degree 6. This may be verified by factoring X^64 − X over GF(2).

The elements of GF(64) are primitive nth roots of unity for some n dividing 63. As the 3rd and the 7th roots of unity belong to GF(4) and GF(8), respectively, the 54 generators are primitive nth roots of unity for some n in {9, 21, 63}. Euler's totient function shows that there are 6 primitive 9th roots of unity, 12 primitive 21st roots of unity, and 36 primitive 63rd roots of unity.
Summing these numbers, one finds again 54 elements.

Factoring the cyclotomic polynomials over GF(2) shows that the best choice to construct GF(64) is to define it as GF(2)[X]/(X^6 + X + 1). In fact, a root of X^6 + X + 1 is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.

In this section, p is a prime number, and q = p^n is a power of p.

In GF(q), the identity (x + y)^p = x^p + y^p implies that the map φ : x ↦ x^p is a GF(p)-linear endomorphism and a field automorphism of GF(q), which fixes every element of the subfield GF(p). It is called the Frobenius automorphism, after Ferdinand Georg Frobenius.

Denoting by φ^k the composition of φ with itself k times, we have φ^k : x ↦ x^(p^k). It has been shown in the preceding section that φ^n is the identity. For 0 < k < n, the automorphism φ^k is not the identity, as, otherwise, the polynomial X^(p^k) − X would have more than p^k roots. There are no other GF(p)-automorphisms of GF(q). In other words, GF(p^n) has exactly n GF(p)-automorphisms, which are Id = φ^0, φ, φ^2, …, φ^(n−1).

In terms of Galois theory, this means that GF(p^n) is a Galois extension of GF(p), which has a cyclic Galois group. The fact that the Frobenius map is surjective implies that every finite field is perfect.

If F is a finite field, a non-constant monic polynomial with coefficients in F is irreducible over F if it is not the product of two non-constant monic polynomials with coefficients in F. As every polynomial ring over a field is a unique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.

There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or the rational numbers. At least for this reason, every computer algebra system has functions for factoring polynomials over finite fields or, at least, over finite prime fields.

The polynomial X^q − X factors into linear factors over a field of order q. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of order q. This implies that, if q = p^n, then X^q − X is the product of all monic irreducible polynomials over GF(p) whose degree divides n. In fact, if P is an irreducible factor over GF(p) of X^q − X, its degree divides n, as its splitting field is contained in GF(p^n). Conversely, if P is an irreducible monic polynomial over GF(p) of degree d dividing n, it defines a field extension of degree d, which is contained in GF(p^n), and all roots of P belong to GF(p^n) and are roots of X^q − X; thus P divides X^q − X. As X^q − X does not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it. This property is used to compute the product of the irreducible factors of each degree of polynomials over GF(p); see Distinct degree factorization.
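This factorization property can be checked computationally. The following Python sketch counts the monic irreducible polynomials of degree 6 over GF(2) by trial division, with polynomials encoded as bit masks, recovering the count 9 = 54/6 derived above:

```python
def pdeg(a):
    """Degree of a polynomial over GF(2) encoded as a bit mask."""
    return a.bit_length() - 1

def pmod(a, b):
    """Remainder of the carry-less division of a by b over GF(2)."""
    db = pdeg(b)
    while a and pdeg(a) >= db:
        a ^= b << (pdeg(a) - db)
    return a

def irreducible(f):
    """f (of degree >= 1) is irreducible over GF(2) iff no polynomial
    of degree 1..deg(f)//2 divides it."""
    for g in range(2, 1 << (pdeg(f) // 2 + 1)):
        if pmod(f, g) == 0:
            return False
    return True

# All integers in [2^6, 2^7) encode the monic polynomials of degree 6:
count = sum(irreducible(f) for f in range(1 << 6, 1 << 7))
print(count)  # -> 9, matching 54/6 above
```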
The number N(q, n) of monic irreducible polynomials of degree n over GF(q) is given by[5]

    N(q, n) = (1/n) ∑_{d | n} μ(d) q^(n/d),

where μ is the Möbius function. This formula is an immediate consequence of the property of X^q − X above and the Möbius inversion formula. By the above formula, the number of irreducible (not necessarily monic) polynomials of degree n over GF(q) is (q − 1)N(q, n).

The exact formula implies the inequality

    N(q, n) ≥ (1/n) (q^n − ∑_{ℓ | n, ℓ prime} q^(n/ℓ));

this is sharp if and only if n is a power of some prime. For every q and every n, the right-hand side is positive, so there is at least one irreducible polynomial of degree n over GF(q).

In cryptography, the difficulty of the discrete logarithm problem in finite fields or in elliptic curves is the basis of several widely used protocols, such as the Diffie–Hellman protocol (a toy example is sketched below). For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6] In coding theory, many codes are constructed as subspaces of vector spaces over finite fields. Finite fields are used by many error correction codes, such as Reed–Solomon error correction codes or BCH codes. The finite field almost always has characteristic 2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element of GF(2^8). One exception is the PDF417 bar code, which uses GF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic 2, generally variations of the carry-less product.

Finite fields are widely used in number theory, as many problems over the integers may be solved by reducing them modulo one or several prime numbers. For example, the fastest known algorithms for polynomial factorization and linear algebra over the field of rational numbers proceed by reduction modulo one or several primes, and then reconstruction of the solution by using the Chinese remainder theorem, Hensel lifting or the LLL algorithm. Similarly, many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example, the Hasse principle. Many recent developments of algebraic geometry were motivated by the need to enlarge the power of these modular methods. Wiles' proof of Fermat's Last Theorem is an example of a deep result involving many mathematical tools, including finite fields.

The Weil conjectures concern the number of points on algebraic varieties over finite fields, and the theory has many applications including exponential and character sum estimates.

Finite fields have widespread application in combinatorics, two well-known examples being the definition of Paley graphs and the related construction for Hadamard matrices. In arithmetic combinatorics, finite fields[7] and finite field models[8][9] are used extensively, such as in Szemerédi's theorem on arithmetic progressions.
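Returning to the cryptographic application mentioned above, here is a toy Diffie–Hellman key exchange over a prime field; the small modulus and fixed "random" exponents are purely illustrative and offer no security:

```python
p, g = 2039, 7   # a small prime modulus and a base (toy parameters)

a_secret = 1234  # Alice's private exponent (random in practice)
b_secret = 2001  # Bob's private exponent

A = pow(g, a_secret, p)  # Alice publishes A = g^a mod p
B = pow(g, b_secret, p)  # Bob publishes B = g^b mod p

# Both sides derive the same shared secret g^(a*b) mod p:
assert pow(B, a_secret, p) == pow(A, b_secret, p)
print(pow(B, a_secret, p))
```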
A division ring is a generalization of a field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings: Wedderburn's little theorem states that all finite division rings are commutative, and hence are finite fields. This result holds even if we relax the associativity axiom to alternativity, that is, all finite alternative division rings are finite fields, by the Artin–Zorn theorem.[10]

A finite field F is not algebraically closed: the polynomial

    f(T) = 1 + ∏_{α ∈ F} (T − α)

has no roots in F, since f(α) = 1 for all α in F.

Given a prime number p, let F̄_p be an algebraic closure of F_p. It is not only unique up to an isomorphism, as are all algebraic closures, but, contrarily to the general case, all its subfields are fixed by all its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristic p.

This property results mainly from the fact that the elements of F_(p^n) are exactly the roots of x^(p^n) − x, and this defines an inclusion F_(p^n) ⊂ F_(p^(nm)) for m > 1. These inclusions allow writing informally F̄_p = ⋃_{n ≥ 1} F_(p^n). The formal validation of this notation results from the fact that the above field inclusions form a directed set of fields; its direct limit is F̄_p, which may thus be considered as a "directed union".

Given a primitive element g_(mn) of F_(q^(mn)), then g_(mn)^m is a primitive element of F_(q^n).

For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive element g_n of F_(q^n) in order that, whenever n = mh, one has g_m = g_n^h, where g_m is the primitive element already chosen for F_(q^m). Such a construction may be obtained by Conway polynomials.

Although finite fields are not algebraically closed, they are quasi-algebraically closed, which means that every homogeneous polynomial over a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture of Artin and Dickson proved by Chevalley (see Chevalley–Warning theorem).
https://en.wikipedia.org/wiki/Galois_field#Galois_field_of_order_2^n
Cloud computing security or, more simply, cloud security, refers to a broad set of policies, technologies, applications, and controls utilized to protect virtualized IP, data, applications, services, and the associated infrastructure of cloud computing. It is a sub-domain of computer security, network security and, more broadly, information security.

Cloud computing and storage provide users with the capabilities to store and process their data in third-party data centers.[1] Organizations use the cloud in a variety of different service models (with acronyms such as SaaS, PaaS, and IaaS) and deployment models (private, public, hybrid, and community).[2]

Security concerns associated with cloud computing are typically categorized in two ways: as security issues faced by cloud providers (organizations providing software-, platform-, or infrastructure-as-a-service via the cloud) and security issues faced by their customers (companies or organizations who host applications or store data on the cloud).[3] The responsibility is shared, however, and is often detailed in a cloud provider's "shared security responsibility model" or "shared responsibility model".[4][5][6] The provider must ensure that their infrastructure is secure and that their clients' data and applications are protected, while the user must take measures to fortify their application and use strong passwords and authentication measures.[5][6]

When an organization elects to store data or host applications on the public cloud, it loses its ability to have physical access to the servers hosting its information. As a result, potentially sensitive data is at risk from insider attacks. According to a 2010 Cloud Security Alliance report, insider attacks are one of the top seven biggest threats in cloud computing.[7] Therefore, cloud service providers must ensure that thorough background checks are conducted for employees who have physical access to the servers in the data center. Additionally, it is recommended that data centers be frequently monitored for suspicious activity.

In order to conserve resources, cut costs, and maintain efficiency, cloud service providers often store more than one customer's data on the same server. As a result, there is a chance that one user's private data can be viewed by other users (possibly even competitors). To handle such sensitive situations, cloud service providers should ensure proper data isolation and logical storage segregation.[2]

The extensive use of virtualization in implementing cloud infrastructure brings unique security concerns for customers or tenants of a public cloud service.[8] Virtualization alters the relationship between the OS and the underlying hardware, be it computing, storage or even networking. This introduces an additional layer, virtualization, that itself must be properly configured, managed and secured.[9] Specific concerns include the potential to compromise the virtualization software, or "hypervisor". While these concerns are largely theoretical, they do exist.[10] For example, a breach of the administrator workstation holding the management software for the virtualization layer can cause the whole data center to go down or be reconfigured to an attacker's liking.

Cloud security architecture is effective only if the correct defensive implementations are in place. An efficient cloud security architecture should recognize the issues that will arise with security management and follow all of the best practices, procedures, and guidelines to ensure a secure cloud environment.
Security management addresses these issues with security controls. These controls protect cloud environments and are put in place to safeguard any weaknesses in the system and reduce the effect of an attack. While there are many types of controls behind a cloud security architecture, they can usually be found in one of the following categories:

Cloud security engineering is characterized by the security layers, planning, design, programming, and best practices that exist inside a cloud security arrangement. It requires a defined design and user interface for the tasks within the cloud, and it covers such things as access management, techniques, and controls to protect applications and information. It also includes ways to establish and maintain visibility, compliance, threat posture, and overall security. Processes for embedding security standards into cloud services and operations require an approach that satisfies compliance guidelines and the essential components of infrastructure security.[15]

For investment in cloud technologies to be effective, companies should understand the various parts of the cloud and how they stand to affect and help them. These interests may include investments in cloud computing and security, for example, which in turn drives the push for cloud technologies to succeed. Though the idea of cloud computing is not new, organizations are increasingly adopting it because of its flexible scalability, relative reliability, and cost savings. However, despite its rapid adoption in some sectors and disciplines, it is apparent from research and statistics that security-related risks are the most conspicuous barrier to its wide adoption.

It is generally recommended that information security controls be selected and implemented according to and in proportion to the risks, typically by assessing the threats, vulnerabilities and impacts. Cloud security concerns can be grouped in various ways; Gartner named seven,[16] while the Cloud Security Alliance identified twelve areas of concern.[17] Cloud access security brokers (CASBs) are software that sits between cloud users and cloud applications to provide visibility into cloud application usage, data protection and governance, monitoring all activity and enforcing security policies.[18]

Any service without a "hardened" environment is considered a "soft" target. Virtual servers should be protected just like a physical server against data leakage, malware, and exploited vulnerabilities. "Data loss or leakage represents 24.6% and cloud related malware 3.4% of threats causing cloud outages."[19]

Every enterprise will have its own identity management system to control access to information and computing resources. Cloud providers either integrate the customer's identity management system into their own infrastructure, using federation or SSO technology or a biometric-based identification system,[1] or provide an identity management system of their own.[20] CloudID,[1] for instance, provides privacy-preserving cloud-based and cross-enterprise biometric identification. It links the confidential information of the users to their biometrics and stores it in an encrypted fashion.
Making use of a searchable encryption technique, biometric identification is performed in the encrypted domain to make sure that the cloud provider or potential attackers do not gain access to any sensitive data or even the contents of the individual queries.[1]

Cloud service providers physically secure the IT hardware (servers, routers, cables etc.) against unauthorized access, interference, theft, fires, floods etc., and ensure that essential supplies (such as electricity) are sufficiently robust to minimize the possibility of disruption. This is normally achieved by serving cloud applications from professionally specified, designed, constructed, managed, monitored and maintained data centers.

Various information security concerns relating to the IT and other professionals associated with cloud services are typically handled through pre-, para- and post-employment activities such as security screening of potential recruits, security awareness and training programs, and proactive monitoring.

Providers ensure that all critical data (credit card numbers, for example) are masked or encrypted and that only authorized users have access to data in its entirety. Moreover, digital identities and credentials must be protected, as should any data that the provider collects or produces about customer activity in the cloud.

Penetration testing is the process of performing offensive security tests on a system, service, or computer network to find security weaknesses in it. Since the cloud is a shared environment with other customers or tenants, following the penetration testing rules of engagement step by step is a mandatory requirement: scanning and penetration testing from inside or outside the cloud must be authorized by the cloud provider, and violation of acceptable use policies can lead to termination of the service.[21] Scanning the cloud from outside and inside using free or commercial products is nonetheless crucial, because without a hardened environment a service is considered a soft target.

Some key terminology to grasp when discussing penetration testing is the difference between application-layer and network-layer testing. Understanding what is asked of you as the tester is sometimes the most important step in the process. Network-layer testing refers to testing that includes internal/external connections as well as the interconnected systems throughout the local network. Oftentimes, social engineering attacks are carried out, as the most vulnerable link in security is often the employee.

White-box testing: testing under the condition that the "attacker" has full knowledge of the internal network, its design, and implementation.
Grey-box testing: testing under the condition that the "attacker" has partial knowledge of the internal network, its design, and implementation.
Black-box testing: testing under the condition that the "attacker" has no prior knowledge of the internal network, its design, and implementation.
There are numerous security threats associated with cloud data services, both traditional and cloud-specific. Traditional threats include network eavesdropping, illegal invasion, and denial-of-service attacks, while cloud-specific threats include side-channel attacks, virtualization vulnerabilities, and abuse of cloud services. In order to mitigate these threats, security controls often rely on monitoring the three areas of the CIA triad: confidentiality (including access controllability, which is discussed further below[22]), integrity and availability.

Many effective security measures cover several or all of the three categories. Encryption, for example, can be used to prevent unauthorized access and also to ensure the integrity of the data. Backups, on the other hand, generally cover integrity and availability, while firewalls only cover confidentiality and access controllability.[23]

Data confidentiality is the property that data contents are not made available or disclosed to illegal users. Outsourced data is stored in a cloud and out of the owners' direct control. Only authorized users can access the sensitive data, while others, including CSPs, should not gain any information about the data. Meanwhile, data owners expect to fully utilize cloud data services, e.g., data search, data computation, and data sharing, without the leakage of the data contents to CSPs or other adversaries. Confidentiality refers to how data must be kept strictly confidential to the owner of said data. An example of a security control that covers confidentiality is encryption, so that only authorized users can access the data; either a symmetric or an asymmetric key paradigm can be used.[24]

Access controllability means that a data owner can perform selective restriction of access to their data outsourced to the cloud. Legal users can be authorized by the owner to access the data, while others cannot access it without permission. Further, it is desirable to enforce fine-grained access control over the outsourced data, i.e., different users should be granted different access privileges with regard to different data pieces. The access authorization must be controlled only by the owner in untrusted cloud environments. Access control can also be related to availability: while unauthorized access should be strictly prohibited, access for administrative or even consumer uses should be allowed but monitored as well. Availability and access control ensure that the proper amount of permissions is granted to the correct persons.

Data integrity demands maintaining and assuring the accuracy and completeness of data. A data owner always expects that her or his data in a cloud can be stored correctly and trustworthily. It means that the data should not be illegally tampered with, improperly modified, deliberately deleted, or maliciously fabricated. If any undesirable operations corrupt or delete the data, the owner should be able to detect the corruption or loss. Further, when a portion of the outsourced data is corrupted or lost, it should still be retrievable by the data users. Effective integrity controls go beyond protection from malicious actors and protect data from unintentional alterations as well. An example of a security control that covers integrity is automated backups of information; a simple detection mechanism is sketched below.
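As a minimal sketch of integrity checking, the owner can keep a keyed digest (an HMAC) of a file before outsourcing it and recompute the digest on retrieval; any tampering with the stored bytes changes the digest. The key and file contents below are illustrative placeholders:

```python
import hashlib
import hmac

key = b"owner-secret-key"           # kept by the data owner, never by the cloud
data = b"outsourced file contents"  # illustrative payload

# Digest computed and stored by the owner before uploading the data:
tag = hmac.new(key, data, hashlib.sha256).hexdigest()

# Later, on retrieval, recompute and compare in constant time:
retrieved = b"outsourced file contents"
ok = hmac.compare_digest(tag, hmac.new(key, retrieved, hashlib.sha256).hexdigest())
print("intact" if ok else "corrupted or tampered")
```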
While cloud computing is on the cutting edge of information technology, there are risks and vulnerabilities to consider before investing fully in it. Security controls and services do exist for the cloud, but as with any security system they are not guaranteed to succeed. Furthermore, some risks extend beyond asset security and may involve issues in productivity and even privacy as well.[25]

Cloud computing is still an emerging technology and thus is developing in relatively new technological structures. As a result, all cloud services must undertake Privacy Impact Assessments (PIAs) before releasing their platform. Consumers who intend to use clouds to store their customers' data must also be aware of the vulnerabilities of having non-physical storage for private information.[26]

Due to the autonomous nature of the cloud, consumers are often given management interfaces to monitor their databases. With controls in such a consolidated location, and with the interface easily accessible for the convenience of users, there is a possibility that a single actor could gain access to the cloud's management interface, giving them a great deal of control and power over the database.[27]

The cloud's capability of allocating resources as needed often results in resources in memory and elsewhere being recycled to another user at a later time. For these memory or storage resources, it could be possible for current users to access information left by previous ones.[27]

The cloud requires an internet connection, and therefore internet protocols, to access. Therefore, it is open to many internet protocol vulnerabilities, such as man-in-the-middle attacks. Furthermore, given the heavy reliance on internet connectivity, if the connection fails, consumers will be completely cut off from any cloud resources.[27]

Cryptography is an ever-evolving field. What was secure 10 years ago may be considered a significant security risk by today's standards. As technology continues to advance and older methods age, new ways of breaking encryption will emerge, as will fatal flaws in older encryption methods. Cloud providers must keep their encryption up to date, as the data they typically hold is especially valuable.[28]

Privacy legislation often varies from country to country. When information is stored via the cloud, it is difficult to determine under which jurisdiction the data falls. Transborder clouds are especially popular given that the largest companies transcend several countries. Other legal dilemmas arising from the ambiguity of the cloud concern the difference in privacy regulation between information shared between organizations and information shared inside an organization.[26]

There are several different types of attacks on cloud computing; one that is still largely unexplored is infrastructure compromise. Though not completely understood, it is listed as the attack with the highest payoff.[29] What makes it so dangerous is that the person carrying out the attack is able to gain a level of privilege amounting essentially to root access to the machine. It is very hard to defend against attacks like these because they are so unpredictable and unknown; attacks of this type are also called zero-day exploits, because they are difficult to defend against when the vulnerabilities were previously unknown and unchecked until the attack has already occurred.

DoS attacks aim to make systems unavailable to their users. Since cloud computing software is used by large numbers of people, resolving these attacks is increasingly difficult.
With cloud computing on the rise, the virtualization of data centers and the heavier use of cloud services have opened new opportunities for attack.[30] With the global pandemic that started in early 2020, there was a massive shift to remote work, and companies became more reliant on the cloud. This shift has not gone unnoticed by cybercriminals and bad actors, many of whom saw in the new remote work environment an opportunity to attack the cloud. Companies have to remind their employees constantly to stay vigilant, especially when working remotely: lapses in keeping up with the latest security measures and policies, and mishaps in communication, are exactly what these cybercriminals look for and prey upon.

Moving work to the household was critical for workers to be able to continue, but as the move to remote work happened, several security issues arose quickly. The need for data privacy, the use of applications, personal devices, and the internet all came to the forefront. The pandemic generated large amounts of data, especially in the healthcare sector, which accrues big data now more than ever due to the coronavirus pandemic. The cloud has to be able to organize this data and share it with its users securely. Quality of data is judged by four criteria: accuracy, redundancy, completeness, and consistency.[31]

Users had to consider that massive amounts of data were being shared globally. Different countries have laws and regulations that have to be adhered to, and differences in policy and jurisdiction give rise to risk in the cloud. Workers are using their personal devices more now that they work from home, and criminals see this increase as an opportunity: software is developed to infect people's devices and gain access to their cloud accounts. The pandemic put people in a situation where they are incredibly vulnerable and susceptible to attack. The change to remote work was so sudden that many companies were simply unprepared for the tasks and workload in which they found themselves entrenched; tighter security measures have to be put in place to ease that newfound tension within organizations.

The attacks that can be made on cloud computing systems include man-in-the-middle attacks, phishing attacks, authentication attacks, and malware attacks. One of the largest threats is considered to be malware attacks, such as Trojan horses. Research conducted in 2022 revealed that the Trojan horse injection method is a serious problem with harmful impacts on cloud computing systems. A Trojan attack on a cloud system tries to insert an application or service that can impact the cloud services by changing or stopping their functionality. When the cloud system identifies the attack as legitimate, the service or application is executed, which can damage and infect the cloud system.[32]

Some advanced encryption algorithms applied to cloud computing increase the protection of privacy. In a practice called crypto-shredding, the keys can simply be deleted when the data is no longer needed, rendering the ciphertext permanently unreadable.
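A hedged sketch of crypto-shredding follows, using the Fernet recipe from the widely used Python cryptography package; the in-memory key variable is an illustrative stand-in for a real key-management service.

```python
from cryptography.fernet import Fernet, InvalidToken

# Encrypt the data under a fresh key; only the ciphertext is kept in the cloud.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"personal data subject to deletion")

# Normal operation: the key decrypts the stored ciphertext.
assert Fernet(key).decrypt(ciphertext) == b"personal data subject to deletion"

# Crypto-shredding: destroying the key makes the ciphertext permanently unreadable,
# even though the ciphertext bytes may still exist on cloud storage or in backups.
key = None
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)  # any other key fails
except InvalidToken:
    print("data is effectively shredded")
```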
Attribute-based encryption (ABE) is a type of public-key encryption in which the secret key of a user and the ciphertext are dependent upon attributes (e.g., the country in which the user lives, or the kind of subscription the user has). In such a system, the decryption of a ciphertext is possible only if the set of attributes of the user key matches the attributes of the ciphertext.

A strength of attribute-based encryption is that it attempts to solve issues that exist in current public-key infrastructure (PKI) and identity-based encryption (IBE) implementations. By relying on attributes, ABE circumvents the need to share keys directly, as with PKI, as well as the need to know the identity of the receiver, as with IBE. These benefits come at a cost: ABE suffers from a decryption-key re-distribution problem. Since decryption keys in ABE contain only information regarding the access structure or the attributes of the user, it is hard to verify the user's actual identity; malicious users can therefore intentionally leak their attribute information so that unauthorized users can imitate them and gain access.[33]

Ciphertext-policy ABE (CP-ABE) is a type of public-key encryption in which the encryptor controls the access strategy; the main research work on CP-ABE is focused on the design of the access structure. A ciphertext-policy attribute-based encryption scheme consists of four algorithms: Setup, Encrypt, KeyGen, and Decrypt.[34] The Setup algorithm takes security parameters and an attribute-universe description as input, and outputs public parameters and a master key. The Encrypt algorithm takes the data and an access structure as input, and produces a ciphertext that only a user whose attribute set satisfies the access structure can decrypt. The KeyGen algorithm takes the master key and the user's attributes, and derives a private key. Finally, the Decrypt algorithm takes the public parameters, the ciphertext, the private key, and the user's attributes as input; it first checks whether the user's attributes satisfy the access structure and then decrypts the ciphertext to return the data.

Key-policy attribute-based encryption (KP-ABE) is another important type of attribute-based encryption. KP-ABE allows senders to encrypt their messages under a set of attributes, much like any attribute-based encryption system. In KP-ABE, ciphertexts (the encrypted messages) are tagged by their creators with a set of attributes, while each user's private key is issued so as to specify which type of ciphertexts it can decrypt.[35][36] The attribute sets describe the encrypted texts, and the private keys are associated with the policy governing which ciphertexts a user may decrypt. A drawback of KP-ABE is that the encryptor does not control who has access to the encrypted data, except through descriptive attributes, which creates a reliance on the key issuer for granting and denying access; this motivated the creation of other ABE systems such as ciphertext-policy attribute-based encryption.[37]

Fully homomorphic encryption (FHE) is a cryptosystem that supports arbitrary computation on ciphertexts; in particular, it allows sums and products to be computed over encrypted data without decryption. A notable feature of FHE is that such operations can be executed without the need for the secret key.[38] FHE has been linked not only to cloud computing but to electronic voting as well. It has been especially helpful in the development of cloud computing and computing technologies; however, as these systems develop, the need for cloud security also increases. FHE aims to secure data transmission as well as cloud storage with its encryption algorithms,[39] with the goal of being a much more secure and efficient method of encryption on a large scale, suited to the massive capabilities of the cloud.
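Full FHE schemes (such as BGV or CKKS) are too involved to sketch here, but the additively homomorphic Paillier cryptosystem illustrates the core idea of computing on ciphertexts without the secret key. The sketch below is a toy with tiny, hard-coded primes chosen for readability; it is not FHE (it supports addition only) and is not secure at these parameter sizes.

```python
import math
import secrets

# Toy Paillier key generation with deliberately tiny primes (NOT secure).
p, q = 499, 547
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n - 1) + 1   # random blinding factor coprime to n
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(17), encrypt(25)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == 17 + 25
```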
Searchable encryption (SE) is a cryptographic system that offers secure search functions over encrypted data.[40][41] SE schemes can be classified into two categories: SE based on secret-key (or symmetric-key) cryptography, and SE based on public-key cryptography. To improve search efficiency, symmetric-key SE generally builds keyword indexes to answer user queries. This has the obvious disadvantage of providing multimodal access routes for unauthorized data retrieval, bypassing the encryption algorithm by subjecting the framework to alternative parameters within the shared cloud environment.[42]

Numerous laws and regulations pertain to the storage and use of data. In the US these include privacy and data protection laws, the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), the Sarbanes-Oxley Act, the Federal Information Security Management Act of 2002 (FISMA), and the Children's Online Privacy Protection Act of 1998, among others. Similar standards exist in other jurisdictions, e.g., Singapore's Multi-Tier Cloud Security Standard. Laws in other legal jurisdictions may differ quite markedly from those enforced in the US, and cloud service users often need to be aware of these legal and regulatory differences; for example, data stored by a cloud service provider may be located in, say, Singapore and mirrored in the US.[43]

Many of these regulations mandate particular controls (such as strong access controls and audit trails) and require regular reporting. Cloud customers must ensure that their cloud providers adequately fulfill such requirements, enabling them to comply with their obligations since, to a large extent, they remain accountable.

Aside from the security and compliance issues enumerated above, cloud providers and their customers will negotiate terms around liability (stipulating, for example, how incidents involving data loss or compromise will be resolved), intellectual property, and end-of-service (when data and applications are ultimately returned to the customer). In addition, there are considerations for acquiring data from the cloud that may be involved in litigation.[46] These issues are discussed in service-level agreements (SLAs).

Legal issues may also include records-keeping requirements in the public sector, where many agencies are required by law to retain and make available electronic records in a specific fashion. This may be determined by legislation, or law may require agencies to conform to the rules and practices set by a records-keeping agency. Public agencies using cloud computing and storage must take these concerns into account.
https://en.wikipedia.org/wiki/Cloud_computing_security
Statistical machine translation (SMT) is a machine translation approach in which translations are generated on the basis of statistical models whose parameters are derived from the analysis of bilingual text corpora. The statistical approach contrasts with the rule-based approaches to machine translation as well as with example-based machine translation.[1] It superseded the earlier rule-based approach, which required explicit description of each and every linguistic rule, which was costly and often did not generalize to other languages. The first ideas of statistical machine translation were introduced by Warren Weaver in 1949,[2] including the idea of applying Claude Shannon's information theory. Statistical machine translation was re-introduced in the late 1980s and early 1990s by researchers at IBM's Thomas J. Watson Research Center.[3][4][5] Before the introduction of neural machine translation, it was by far the most widely studied machine translation method.

The idea behind statistical machine translation comes from information theory. A document is translated according to the probability distribution $p(e|f)$ that a string $e$ in the target language (for example, English) is the translation of a string $f$ in the source language (for example, French). The problem of modeling the probability distribution $p(e|f)$ has been approached in a number of ways. One approach that lends itself well to computer implementation is to apply Bayes' theorem: $p(e|f) \propto p(f|e)\,p(e)$, where the translation model $p(f|e)$ is the probability that the source string is the translation of the target string, and the language model $p(e)$ is the probability of seeing that target-language string. This decomposition is attractive as it splits the problem into two subproblems. Finding the best translation $\tilde{e}$ is done by picking the one that gives the highest probability:
$$\tilde{e} = \arg\max_{e} p(e|f) = \arg\max_{e} p(f|e)\,p(e).$$
For a rigorous implementation one would have to perform an exhaustive search over all strings $e^{*}$ in the target language. Performing the search efficiently is the work of a machine translation decoder that uses the foreign string, heuristics, and other methods to limit the search space while keeping acceptable quality; this trade-off between quality and time usage can also be found in speech recognition. (A toy illustration of the noisy-channel scoring appears at the end of this overview.) As translation systems are not able to store all native strings and their translations, a document is typically translated sentence by sentence. Language models are typically approximated by smoothed n-gram models, and similar approaches have been applied to translation models, but this introduces additional complexity due to different sentence lengths and word orders between the languages.

Statistical translation models were initially word-based (Models 1–5 from IBM, the Hidden Markov model from Stephan Vogel,[6] and Model 6 from Franz Josef Och[7]), but significant advances were made with the introduction of phrase-based models.[8] Later work incorporated syntax or quasi-syntactic structures.[9]
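A minimal sketch of the noisy-channel decision rule follows; the candidate translations and the probability values are made-up toy numbers, and a real decoder would search a vast hypothesis space rather than a hand-written list.

```python
# Toy noisy-channel scoring: pick the e maximizing p(f|e) * p(e).
# Probabilities below are illustrative, not estimated from any corpus.
candidates = {
    "the house is small": {"tm": 0.20, "lm": 0.010},    # p(f|e), p(e)
    "small the is house": {"tm": 0.25, "lm": 0.00001},
    "the home is little": {"tm": 0.15, "lm": 0.008},
}

best = max(candidates, key=lambda e: candidates[e]["tm"] * candidates[e]["lm"])
print(best)  # "the house is small": the fluent hypothesis wins despite a lower tm score
```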
The most frequently cited[citation needed] benefits of statistical machine translation (SMT) over the rule-based approach are:

In word-based translation, the fundamental unit of translation is a word in some natural language. Typically, the number of words in translated sentences differs because of compound words, morphology, and idioms. The ratio of the lengths of sequences of translated words is called fertility, which tells how many foreign words each native word produces. It is implicitly assumed that each native word and its foreign counterparts cover the same concept; in practice this is not really true. For example, the English word corner can be translated into Spanish by either rincón or esquina, depending on whether it means the internal or the external angle.

Simple word-based translation cannot translate between languages with different fertility. Word-based translation systems can relatively simply be made to cope with high fertility, such that they can map a single word to multiple words, but not the other way around[citation needed]. For example, if we were translating from English to French, each word in English could produce any number of French words (sometimes none at all), but there is no way to group two English words to produce a single French word.

An example of a word-based translation system is the freely available GIZA++ package (GPLed), which includes the training program for the IBM models, the HMM model, and Model 6.[7] Word-based translation is not widely used today; phrase-based systems are more common. Most phrase-based systems still use GIZA++ to align the corpus[citation needed]. The alignments are used to extract phrases or to deduce syntax rules,[11] and matching words in bi-text is still a problem actively discussed in the community. Because of the predominance of GIZA++, there are now several distributed implementations of it online.[12]

In phrase-based translation, the aim is to reduce the restrictions of word-based translation by translating whole sequences of words, where the lengths may differ. The sequences of words are called blocks or phrases; these are typically not linguistic phrases but phrasemes found by statistical methods from corpora. It has been shown that restricting the phrases to linguistic phrases (syntactically motivated groups of words; see syntactic categories) decreases the quality of translation.[13] The chosen phrases are mapped one-to-one based on a phrase translation table, and may be reordered. This table can be learnt from word alignments, or directly from a parallel corpus; the second model is trained using the expectation-maximization algorithm, similarly to the word-based IBM model.[14]

Syntax-based translation is based on the idea of translating syntactic units rather than single words or strings of words (as in phrase-based MT), i.e., (partial) parse trees of sentences/utterances.[15] The statistical counterpart of this old idea did not take off until the 1990s, with the advent of strong stochastic parsers. Examples of this approach include DOP-based MT and, later, synchronous context-free grammars.

Hierarchical phrase-based translation combines the phrase-based and syntax-based approaches. It uses synchronous context-free grammar rules, but the grammars can be constructed by an extension of methods for phrase-based translation without reference to linguistically motivated syntactic constituents. This idea was first introduced in Chiang's Hiero system (2005).[9]

A language model is an essential component of any statistical machine translation system, helping to make the translation as fluent as possible. It is a function that takes a translated sentence and returns the probability of it being said by a native speaker. A good language model will, for example, assign a higher probability to the sentence "the house is small" than to "small the is house". Besides word order, language models may also help with word choice: if a foreign word has multiple possible translations, these functions may give better probabilities for certain translations in specific contexts in the target language.[14]
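A hedged sketch of such a language model follows: a bigram model with add-one smoothing trained on a three-sentence toy corpus (the corpus and the smoothing choice are illustrative assumptions). It reproduces the behaviour described above, scoring "the house is small" above its scrambled counterpart.

```python
from collections import Counter

# Tiny illustrative training corpus; <s> and </s> mark sentence boundaries.
corpus = [
    "<s> the house is small </s>",
    "<s> the house is big </s>",
    "<s> the home is small </s>",
]
tokens = [w for line in corpus for w in line.split()]
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
V = len(unigrams)

def prob(sentence: str) -> float:
    """Add-one-smoothed bigram probability of a sentence."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for prev, cur in zip(words, words[1:]):
        p *= (bigrams[(prev, cur)] + 1) / (unigrams[prev] + V)
    return p

assert prob("the house is small") > prob("small the is house")
```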
Problems with statistical machine translation include the following.

Sentence alignment: single sentences in one language may be translated as several sentences in the other, and vice versa.[15] Long sentences may be broken up, while short sentences may be merged. Some languages even use writing systems without clear indication of sentence ends, such as Thai. Sentence alignment can be performed through the Gale–Church alignment algorithm; efficient search and retrieval of the highest-scoring sentence alignment is possible through this and other mathematical models.

Word alignment: sentence alignment is usually either provided by the corpus or obtained by the aforementioned Gale–Church alignment algorithm, but to learn, e.g., the translation model, we need to know which words align within a source–target sentence pair. The IBM models and the HMM approach were attempts at solving this challenge (a toy sketch of this estimation appears at the end of this section). Function words that have no clear equivalent in the target language are a particular issue for the statistical models. For example, when translating from English to German, in the sentence "John does not live here" the word "does" has no clear alignment in the translated sentence "John wohnt hier nicht". Through logical reasoning, it may be aligned with "wohnt" (as it carries grammatical information for the English word "live") or with "nicht" (as it only appears in the sentence because it is negated), or it may be left unaligned.[14]

Statistical anomalies: the statistics of the training set can override the correct translation of specific expressions. An example of such an anomaly is the phrase "I took the train to Berlin" being mistranslated as "I took the train to Paris" due to the statistical abundance of "train to Paris" in the training set.

Idioms: depending on the corpora used, idiom and linguistic register might not receive a translation that accurately represents the original intent. For example, the popular Canadian Hansard bilingual corpus consists primarily of parliamentary speech, in which "Hear, Hear!" is frequently associated with "Bravo!"; a model built on this corpus and used to translate ordinary speech in a conversational register would incorrectly translate the word hear as Bravo![19] This problem is connected with word alignment: in very specific contexts an idiomatic expression may align with words that result in an idiomatic expression of the same meaning in the target language, but this is unlikely, as such alignments usually do not carry over to other contexts. For that reason, idioms could only be subjected to phrasal alignment, as they cannot be decomposed further without losing their meaning. The problem was specific to word-based translation.[14]

Different word orders: word order differs between languages. Some classification can be made by naming the typical order of subject (S), verb (V), and object (O) in a sentence, so that one can speak, for instance, of SVO or VSO languages. There are also additional differences in word order, for instance where modifiers of nouns are located, or where the same words are used as a question or a statement. In speech recognition, the speech signal and the corresponding textual representation can be mapped to each other in blocks, in order. This is not always the case for the same text in two languages.
For SMT, the machine translator can manage only small sequences of words, so word order has to be handled explicitly by the system designer. Attempts at solutions have included reordering models, in which a distribution of location changes for each item of translation is estimated from aligned bi-text. Different location changes can be ranked with the help of the language model, and the best can be selected.

Out-of-vocabulary words: SMT systems typically store different word forms as separate symbols without any relation to each other, and word forms or phrases that were not in the training data cannot be translated. This might be because of a lack of training data, changes in the domain in which the system is used, or differences in morphology.
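The sketch referenced in the word-alignment discussion above is given here: a bare-bones IBM Model 1 trained by expectation maximization on a three-pair toy corpus. It omits the NULL source word and any real preprocessing, so it is an illustration of the estimation idea rather than a usable aligner.

```python
from collections import defaultdict

# Toy parallel corpus of (English, French) sentence pairs.
corpus = [
    (["the", "house"], ["la", "maison"]),
    (["the", "book"], ["le", "livre"]),
    (["a", "house"], ["une", "maison"]),
]

e_vocab = {e for es, _ in corpus for e in es}
f_vocab = {f for _, fs in corpus for f in fs}

# Initialize t(f|e) uniformly.
t = {(f, e): 1.0 / len(f_vocab) for f in f_vocab for e in e_vocab}

for _ in range(20):                       # EM iterations
    count = defaultdict(float)            # expected counts c(f, e)
    total = defaultdict(float)            # expected counts c(e)
    for es, fs in corpus:
        for f in fs:
            z = sum(t[(f, e)] for e in es)        # E-step normalizer
            for e in es:
                delta = t[(f, e)] / z             # posterior that e generated f
                count[(f, e)] += delta
                total[e] += delta
    for f, e in t:                        # M-step: re-estimate t(f|e)
        t[(f, e)] = count[(f, e)] / total[e] if total[e] else t[(f, e)]

# EM concentrates t(f|house) on the co-occurring French word.
print(max(f_vocab, key=lambda f: t[(f, "house")]))  # -> maison
```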
https://en.wikipedia.org/wiki/Statistical_machine_translation
This list of eponymous laws provides links to articles on laws, principles, adages, and other succinct observations or predictions named after a person. In some cases the person named has coined the law – such as Parkinson's law. In others, the work or publications of the individual have led to the law being so named – as is the case with Moore's law. There are also laws ascribed to individuals by others, such as Murphy's law, or given eponymous names despite the absence of the named person. Named laws range from significant scientific laws such as Newton's laws of motion to humorous examples such as Murphy's law.
https://en.wikipedia.org/wiki/List_of_eponymous_laws
Subjunctive possibility (also called alethic possibility) is a form of modality studied in modal logic. Subjunctive possibilities are the sorts of possibilities considered when conceiving counterfactual situations; subjunctive modalities are modalities that bear on whether a statement might have been or could be true, such as might, could, must, possibly, necessarily, contingently, essentially, accidentally, and so on. Subjunctive possibilities include logical possibility, metaphysical possibility, nomological possibility, and temporal possibility.

Subjunctive possibility is contrasted with (among other things) epistemic possibility (which deals with how the world may be, for all we know) and deontic possibility (which deals with how the world ought to be). The contrast with epistemic possibility is especially important to draw, since in ordinary language the same phrases ("it's possible", "it can't be", "it must be") are often used to express either sort of possibility. But they are not the same. We do not know whether Goldbach's conjecture is true or not (no one has come up with a proof yet); so it is (epistemically) possible that it is true and (epistemically) possible that it is false. But if it is, in fact, provably true (as it may be, for all we know), then it would have to be (subjunctively) necessarily true; what being provable means is that it would not be (logically) possible for it to be false. Similarly, it might not be at all (epistemically) possible that it is raining outside (we might know beyond a shadow of a doubt that it is not), but that would hardly mean that it is (subjunctively) impossible for it to rain outside. This point is also made by Norman Swartz and Raymond Bradley.[1]

There is some overlap in language between subjunctive possibilities and deontic possibilities: for example, we sometimes use the statement "You can/cannot do that" to express (i) what it is or is not subjunctively possible for you to do, and sometimes to express (ii) what it would or would not be right for you to do. The two are less likely to be confused in ordinary language than subjunctive and epistemic possibility, as there are some important differences between the logic of subjunctive modalities and that of deontic modalities. In particular, subjunctive necessity entails truth: if something logically must be the case, one can infer that it actually is. But in this non-ideal world, a deontic 'must' carries no such guarantee: that people morally must do such and such does not entail that they actually do it.

There are several different types of subjunctive modality, which can be classified as broader or narrower than one another depending on how restrictive the rules for what counts as "possible" are; some of the most commonly discussed are logical, metaphysical, nomological, and temporal possibility. For instance, David Lewis could have taken a degree in Economics, but not in, say, Aviation (because it was not taught at Harvard) or Cognitive Neuroscience (because the so-called 'conceptual space' for such a major did not exist). There is some debate over whether this final type of possibility in fact constitutes a type of possibility distinct from temporal possibility; it is sometimes called historical possibility by thinkers like Ian Hacking.
https://en.wikipedia.org/wiki/Subjunctive_possibility
In mathematics, sine and cosine are trigonometric functions of an angle. The sine and cosine of an acute angle are defined in the context of a right triangle: for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse. For an angle $\theta$, the sine and cosine functions are denoted $\sin(\theta)$ and $\cos(\theta)$.

The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle. More modern definitions express the sine and cosine as infinite series, or as the solutions of certain differential equations, allowing their extension to arbitrary positive and negative values and even to complex numbers.

The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period.

To define the sine and cosine of an acute angle $\alpha$, start with a right triangle $ABC$ that contains an angle of measure $\alpha$. The three sides of the triangle are named as follows: the opposite side (the side opposite the angle $\alpha$), the adjacent leg (the other side touching the angle), and the hypotenuse (the longest side, opposite the right angle).[1] Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse:[1]
$$\sin(\alpha) = \frac{\text{opposite}}{\text{hypotenuse}}, \qquad \cos(\alpha) = \frac{\text{adjacent}}{\text{hypotenuse}}.$$

The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides, or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, the reciprocal of the tangent function. These functions can be formulated as:[1]
$$\tan(\theta) = \frac{\sin(\theta)}{\cos(\theta)} = \frac{\text{opposite}}{\text{adjacent}}, \qquad \cot(\theta) = \frac{1}{\tan(\theta)} = \frac{\text{adjacent}}{\text{opposite}},$$
$$\csc(\theta) = \frac{1}{\sin(\theta)} = \frac{\text{hypotenuse}}{\text{opposite}}, \qquad \sec(\theta) = \frac{1}{\cos(\theta)} = \frac{\text{hypotenuse}}{\text{adjacent}}.$$

As stated, the values $\sin(\alpha)$ and $\cos(\alpha)$ appear to depend on the choice of a right triangle containing an angle of measure $\alpha$. However, this is not the case: all such triangles are similar, and so the ratios are the same for each of them.
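A quick numerical illustration of the ratio definitions, using the 3-4-5 right triangle; comparing against Python's math module (an assumption of this sketch, not part of the article) confirms that the ratios agree with the analytic functions and are invariant under scaling.

```python
import math

# A 3-4-5 right triangle: opposite = 3, adjacent = 4, hypotenuse = 5.
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0

alpha = math.atan2(opposite, adjacent)   # the angle whose ratios we inspect

# Ratio definitions vs. the analytic sine and cosine.
assert math.isclose(opposite / hypotenuse, math.sin(alpha))   # 0.6
assert math.isclose(adjacent / hypotenuse, math.cos(alpha))   # 0.8

# Scaling the triangle (a similar triangle) leaves the ratios unchanged.
assert math.isclose((2 * opposite) / (2 * hypotenuse), math.sin(alpha))
```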
For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is $\sqrt{2}$; therefore, $\sin 45^{\circ} = \cos 45^{\circ} = \frac{\sqrt{2}}{2}$.[2] Special values of sine and cosine are commonly tabulated for the five standard inputs between $0 < \alpha < \frac{\pi}{2}$, given in various unit systems such as degrees and radians; angles other than these can be obtained by using a calculator.[3][4]

The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known.[5] Given a triangle $ABC$ with sides $a$, $b$, and $c$, and angles opposite those sides $\alpha$, $\beta$, and $\gamma$, the law states
$$\frac{\sin \alpha}{a} = \frac{\sin \beta}{b} = \frac{\sin \gamma}{c}.$$
This is equivalent to the equality of the first three expressions below:
$$\frac{a}{\sin \alpha} = \frac{b}{\sin \beta} = \frac{c}{\sin \gamma} = 2R,$$
where $R$ is the triangle's circumradius.

The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known.[5] The law states
$$a^2 + b^2 - 2ab\cos(\gamma) = c^2.$$
In the case where $\gamma = \pi/2$, so that $\cos(\gamma) = 0$, the resulting equation becomes the Pythagorean theorem.[6]

The cross product and dot product are operations on two vectors in Euclidean vector space. The sine and cosine functions can be defined in terms of the cross product and dot product. If $\mathbf{a}$ and $\mathbf{b}$ are vectors and $\theta$ is the angle between them, then sine and cosine can be defined as:
$$\sin(\theta) = \frac{|\mathbf{a} \times \mathbf{b}|}{|\mathbf{a}||\mathbf{b}|}, \qquad \cos(\theta) = \frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}||\mathbf{b}|}.$$
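The vector formulas above translate directly into a few lines of numpy (the use of numpy is an assumption of this sketch); the recovered sine and cosine match the known 45° angle between the chosen vectors.

```python
import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

norm = np.linalg.norm
sin_theta = norm(np.cross(a, b)) / (norm(a) * norm(b))  # |a x b| / (|a||b|)
cos_theta = np.dot(a, b) / (norm(a) * norm(b))          # a . b / (|a||b|)

theta = np.arctan2(sin_theta, cos_theta)
assert np.isclose(theta, np.pi / 4)   # the angle between a and b is 45 degrees
```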
The sine and cosine functions may also be defined in a more general way using the unit circle, the circle of radius one centered at the origin $(0,0)$, given by the equation $x^2 + y^2 = 1$ in the Cartesian coordinate system. Let a line through the origin intersect the unit circle, making an angle of $\theta$ with the positive half of the $x$-axis. The $x$- and $y$-coordinates of this point of intersection are equal to $\cos(\theta)$ and $\sin(\theta)$, respectively; that is,[7]
$$\sin(\theta) = y, \qquad \cos(\theta) = x.$$
This definition is consistent with the right-angled triangle definition of sine and cosine when $0 < \theta < \frac{\pi}{2}$, because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply the $y$-coordinate. A similar argument can be made for the cosine function to show that the cosine of an angle is the $x$-coordinate when $0 < \theta < \frac{\pi}{2}$, even under the new definition using the unit circle.[8][9]

Using the unit circle definition has the advantage of allowing a graph of the sine and cosine functions to be drawn, by rotating a point counterclockwise along the circumference of the circle according to the input $\theta > 0$. In a sine function, if the input is $\theta = \frac{\pi}{2}$, the point has rotated counterclockwise to stop exactly on the $y$-axis. If $\theta = \pi$, the point is halfway around the circle. If $\theta = 2\pi$, the point has returned to its origin. It follows that both the sine and cosine functions have range between $-1$ and $1$.[10]

Extending the angle to any real domain, the point rotates counterclockwise continuously. This can be done similarly for the cosine function, except that it tracks the $x$-coordinate of the point, which starts at 1. In other words, both sine and cosine functions are periodic: adding the circle's circumference to the angle leaves their values unchanged. Mathematically,[11]
$$\sin(\theta + 2\pi) = \sin(\theta), \qquad \cos(\theta + 2\pi) = \cos(\theta).$$

A function $f$ is said to be odd if $f(-x) = -f(x)$, and even if $f(-x) = f(x)$. The sine function is odd, whereas the cosine function is even.[12] The two functions are otherwise alike, differing by a shift of $\frac{\pi}{2}$. This means,[13]
$$\sin(\theta) = \cos\left(\frac{\pi}{2} - \theta\right), \qquad \cos(\theta) = \sin\left(\frac{\pi}{2} - \theta\right).$$

Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function is $\sin(0) = 0$. The only real fixed point of the cosine function is called the Dottie number: the unique real root of the equation $\cos(x) = x$. Its decimal expansion is approximately 0.739085.[14]
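The Dottie number can be computed by the most naive method imaginable: repeatedly applying cosine. The sketch below iterates $x \mapsto \cos(x)$, which converges to the fixed point because $|\cos'(x)| = |\sin(x)| < 1$ near it.

```python
import math

x = 1.0                      # any real starting value works
for _ in range(100):
    x = math.cos(x)          # fixed-point iteration x -> cos(x)

print(x)                     # 0.7390851332151607, the Dottie number
assert math.isclose(x, math.cos(x))   # x is (numerically) a fixed point
```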
The sine and cosine functions are infinitely differentiable.[15] The derivative of sine is cosine, and the derivative of cosine is negative sine:[16]
$$\frac{d}{dx}\sin(x) = \cos(x), \qquad \frac{d}{dx}\cos(x) = -\sin(x).$$
Continuing the process to higher-order derivatives cycles through the same functions; the fourth derivative of the sine is the sine itself.[15] These derivatives can be applied in the first derivative test, according to which the monotonicity of a function is determined by the sign of its first derivative,[17] and in the second derivative test, according to which the concavity of a function is determined by the sign of its second derivative.[18] Both sine and cosine alternate between increasing and decreasing, and between concave and convex, on successive intervals;[19] this information can be laid out over the four quadrants of the Cartesian coordinate system.

Both sine and cosine functions can also be defined by differential equations. The pair $(\cos\theta, \sin\theta)$ is the solution $(x(\theta), y(\theta))$ to the two-dimensional system of differential equations $y'(\theta) = x(\theta)$ and $x'(\theta) = -y(\theta)$ with the initial conditions $y(0) = 0$ and $x(0) = 1$. One can interpret the unit circle in the definitions above as the phase space trajectory of this system of differential equations with these initial conditions.[citation needed]

The area under these curves can be obtained using the integral over a bounded interval. The antiderivatives are
$$\int \sin(x)\,dx = -\cos(x) + C, \qquad \int \cos(x)\,dx = \sin(x) + C,$$
where $C$ denotes the constant of integration.[20] These antiderivatives may be applied to compute mensuration properties of the sine and cosine curves over a given interval. For example, the arc length of the sine curve between $0$ and $t$ is
$$\int_0^t \sqrt{1 + \cos^2(x)}\,dx = \sqrt{2}\,\operatorname{E}\left(t, \frac{1}{\sqrt{2}}\right),$$
where $\operatorname{E}(\varphi, k)$ is the incomplete elliptic integral of the second kind with modulus $k$; it cannot be expressed using elementary functions.[21] In the case of a full period, the arc length is
$$L = \frac{4\sqrt{2\pi^3}}{\Gamma(1/4)^2} + \frac{\Gamma(1/4)^2}{\sqrt{2\pi}} = \frac{2\pi}{\varpi} + 2\varpi \approx 7.6404\ldots,$$
where $\Gamma$ is the gamma function and $\varpi$ is the lemniscate constant.[22]
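The full-period arc length can be checked numerically; the sketch below applies composite Simpson's rule to $\sqrt{1+\cos^2 x}$ over $[0, 2\pi]$ and recovers the closed-form value $\approx 7.6404$.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

integrand = lambda x: math.sqrt(1 + math.cos(x) ** 2)
L = simpson(integrand, 0.0, 2 * math.pi)
print(L)                        # 7.6403955...
assert abs(L - 7.6404) < 1e-3   # matches the elliptic-integral closed form
```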
The inverse function of sine is arcsine or inverse sine, denoted "arcsin", "asin", or $\sin^{-1}$.[23] The inverse function of cosine is arccosine, denoted "arccos", "acos", or $\cos^{-1}$.[a] As sine and cosine are not injective, their inverses are not exact inverse functions but partial inverse functions. For example, $\sin(0) = 0$, but also $\sin(\pi) = 0$, $\sin(2\pi) = 0$, and so on. It follows that the arcsine function is multivalued: $\arcsin(0) = 0$, but also $\arcsin(0) = \pi$, $\arcsin(0) = 2\pi$, and so on. When only one value is desired, the function may be restricted to its principal branch; with this restriction, for each $x$ in the domain the expression $\arcsin(x)$ evaluates to a single value, called its principal value. The standard range of principal values for arcsin is from $-\frac{\pi}{2}$ to $\frac{\pi}{2}$, and the standard range for arccos is from $0$ to $\pi$.[24]

The inverse functions of sine and cosine are defined by[citation needed]
$$\theta = \arcsin\left(\frac{\text{opposite}}{\text{hypotenuse}}\right) = \arccos\left(\frac{\text{adjacent}}{\text{hypotenuse}}\right),$$
where, for some integer $k$,
$$\sin(y) = x \iff y = \arcsin(x) + 2\pi k \ \text{ or } \ y = \pi - \arcsin(x) + 2\pi k,$$
$$\cos(y) = x \iff y = \arccos(x) + 2\pi k \ \text{ or } \ y = -\arccos(x) + 2\pi k.$$
By definition, both functions satisfy[citation needed]
$$\sin(\arcsin(x)) = x, \qquad \cos(\arccos(x)) = x,$$
and
$$\arcsin(\sin(\theta)) = \theta \ \text{ for } -\tfrac{\pi}{2} \le \theta \le \tfrac{\pi}{2}, \qquad \arccos(\cos(\theta)) = \theta \ \text{ for } 0 \le \theta \le \pi.$$

According to the Pythagorean theorem, the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides of the formula by the squared hypotenuse yields the Pythagorean trigonometric identity: the sum of the squared sine and the squared cosine equals 1,[25][b]
$$\sin^2(\theta) + \cos^2(\theta) = 1.$$

Sine and cosine satisfy the following double-angle formulas:[26]
$$\sin(2\theta) = 2\sin(\theta)\cos(\theta),$$
$$\cos(2\theta) = \cos^2(\theta) - \sin^2(\theta) = 2\cos^2(\theta) - 1 = 1 - 2\sin^2(\theta).$$
The cosine double-angle formula implies that $\sin^2$ and $\cos^2$ are themselves shifted and scaled sine waves. Specifically,[27]
$$\sin^2(\theta) = \frac{1 - \cos(2\theta)}{2}, \qquad \cos^2(\theta) = \frac{1 + \cos(2\theta)}{2}.$$
A plot of sine against sine squared shows that both graphs have the same shape but different ranges of values and different periods: sine squared takes only positive values, but has twice the number of periods.[citation needed]
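These identities are easy to spot-check numerically; a minimal sketch, sampling random angles with Python's math and random modules (the sample range and count are arbitrary choices):

```python
import math
import random

for _ in range(1000):
    t = random.uniform(-10.0, 10.0)
    # Pythagorean identity.
    assert math.isclose(math.sin(t) ** 2 + math.cos(t) ** 2, 1.0)
    # Double-angle formula.
    assert math.isclose(math.sin(2 * t), 2 * math.sin(t) * math.cos(t), abs_tol=1e-12)
    # Power reduction: sin^2 is a shifted, scaled sine wave.
    assert math.isclose(math.sin(t) ** 2, (1 - math.cos(2 * t)) / 2, abs_tol=1e-12)
```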
Both the sine and cosine functions can be defined via their Taylor series, power series involving the higher-order derivatives. As mentioned in § Continuity and differentiation, the derivative of sine is cosine and the derivative of cosine is the negative of sine, so the successive derivatives of $\sin(x)$ are $\cos(x)$, $-\sin(x)$, $-\cos(x)$, $\sin(x)$, repeating this cycle of four functions. The $(4n+k)$-th derivative, evaluated at the point 0, is
$$\sin^{(4n+k)}(0) = \begin{cases} 0 & \text{when } k = 0 \\ 1 & \text{when } k = 1 \\ 0 & \text{when } k = 2 \\ -1 & \text{when } k = 3 \end{cases}$$
where the superscript represents repeated differentiation. This implies the following Taylor series expansion at $x = 0$; one can then use the theory of Taylor series to show that the following identities hold for all real numbers $x$ (where $x$ is the angle in radians)[28] and, more generally, for all complex numbers:[29]
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} x^{2n+1}.$$
Taking the derivative of each term gives the Taylor series for cosine:[28][29]
$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} x^{2n}.$$

Sine and cosine functions of multiple angles may appear in a linear combination, resulting in a polynomial known as a trigonometric polynomial. Trigonometric polynomials have ample applications, for instance in trigonometric interpolation and in the extension of periodic functions known as the Fourier series. Let $a_n$ and $b_n$ be any coefficients; then the trigonometric polynomial of degree $N$, denoted $T(x)$, is defined as:[30][31]
$$T(x) = a_0 + \sum_{n=1}^{N} a_n \cos(nx) + \sum_{n=1}^{N} b_n \sin(nx).$$
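A direct translation of the sine series into Python follows; truncating after a handful of terms already matches the library function to many digits for moderate arguments (the choice of 10 terms is an arbitrary assumption of the sketch).

```python
import math

def sin_taylor(x: float, terms: int = 10) -> float:
    """Truncated Taylor series: sum of (-1)^n x^(2n+1) / (2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

for x in (0.1, 1.0, math.pi / 2):
    assert math.isclose(sin_taylor(x), math.sin(x), rel_tol=1e-9)
```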
The trigonometric series can be defined analogously to the trigonometric polynomial, as its infinite extension. Let $A_n$ and $B_n$ be any coefficients; then the trigonometric series is defined as:[32]
$$\frac{1}{2} A_0 + \sum_{n=1}^{\infty} A_n \cos(nx) + B_n \sin(nx).$$
In the case of a Fourier series with a given integrable function $f$, the coefficients of the trigonometric series are:[33]
$$A_n = \frac{1}{\pi} \int_0^{2\pi} f(x) \cos(nx)\,dx, \qquad B_n = \frac{1}{\pi} \int_0^{2\pi} f(x) \sin(nx)\,dx.$$

Both sine and cosine can be extended further via the complex numbers, the set of numbers composed of both real and imaginary parts. For a real number $\theta$, the sine and cosine functions can be expressed in the complex plane in terms of the exponential function as follows:[34]
$$\sin(\theta) = \frac{e^{i\theta} - e^{-i\theta}}{2i}, \qquad \cos(\theta) = \frac{e^{i\theta} + e^{-i\theta}}{2}.$$
Alternatively, both functions can be defined in terms of Euler's formula:[34]
$$e^{i\theta} = \cos(\theta) + i\sin(\theta), \qquad e^{-i\theta} = \cos(\theta) - i\sin(\theta).$$
When plotted on the complex plane, the function $e^{ix}$ for real values of $x$ traces out the unit circle. The sine and cosine functions may thus be expressed as the imaginary and real parts of $e^{i\theta}$:[35]
$$\sin\theta = \operatorname{Im}(e^{i\theta}), \qquad \cos\theta = \operatorname{Re}(e^{i\theta}).$$

When $z = x + iy$ for real values $x$ and $y$, where $i = \sqrt{-1}$, the sine and cosine of $z$ can be expressed in terms of real sines, cosines, and hyperbolic functions:[citation needed]
$$\sin z = \sin x \cosh y + i \cos x \sinh y, \qquad \cos z = \cos x \cosh y - i \sin x \sinh y.$$

Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates $(r, \theta)$:
$$z = r(\cos(\theta) + i\sin(\theta)),$$
with real and imaginary parts
$$\operatorname{Re}(z) = r\cos(\theta), \qquad \operatorname{Im}(z) = r\sin(\theta),$$
where $r$ and $\theta$ represent the magnitude and angle of the complex number $z$. For any real number $\theta$, Euler's formula in terms of polar coordinates is stated as $z = re^{i\theta}$.

Applying the series definitions of sine and cosine to a complex argument $z$ gives
$$\sin z = -i \sinh(iz), \qquad \cos z = \cosh(iz),$$
where sinh and cosh are the hyperbolic sine and cosine. These are entire functions.
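Python's cmath module makes these complex-variable identities easy to verify; a minimal sketch, with the test values chosen arbitrarily:

```python
import cmath
import math

theta = 0.7
# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta).
assert cmath.isclose(cmath.exp(1j * theta),
                     complex(math.cos(theta), math.sin(theta)))

z = 1.2 + 0.8j
x, y = z.real, z.imag
# sin(x + iy) = sin x cosh y + i cos x sinh y.
expected = complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))
assert cmath.isclose(cmath.sin(z), expected)
# sin z = -i * sinh(i z).
assert cmath.isclose(cmath.sin(z), -1j * cmath.sinh(1j * z))
```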
Using the partial fraction expansion technique in complex analysis, one can find that the infinite series
$$\sum_{n=-\infty}^{\infty} \frac{(-1)^n}{z - n} = \frac{1}{z} - 2z \sum_{n=1}^{\infty} \frac{(-1)^n}{n^2 - z^2}$$
converges (both forms being equal) to $\frac{\pi}{\sin(\pi z)}$. Similarly, one can show that
$$\frac{\pi^2}{\sin^2(\pi z)} = \sum_{n=-\infty}^{\infty} \frac{1}{(z - n)^2}.$$
Using the product expansion technique, one can derive
$$\sin(\pi z) = \pi z \prod_{n=1}^{\infty} \left(1 - \frac{z^2}{n^2}\right).$$
$\sin(z)$ is found in the functional equation for the gamma function,
$$\Gamma(z)\,\Gamma(1 - z) = \frac{\pi}{\sin(\pi z)},$$
which in turn is found in the functional equation for the Riemann zeta function,
$$\zeta(s) = 2^s \pi^{s-1} \sin\left(\frac{\pi s}{2}\right) \Gamma(1 - s)\,\zeta(1 - s).$$
As a holomorphic function, $\sin z$ is a 2D solution of Laplace's equation $\Delta u(x, y) = 0$. The complex sine function is also related to the level curves of pendulums.[36]

The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string', or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to the visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā; sine and chord are closely related in a circle of unit diameter, see Ptolemy's theorem). This was transliterated in Arabic as jība, which is meaningless in that language and written as jb (جب). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb (جيب), which means 'bosom', 'pocket', or 'fold'.[37][38] When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona, he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast').[39][40][41] Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him, and there is evidence of even earlier usage.[42][43] The English form sine was introduced in Thomas Fale's 1593 Horologiographia.[44] The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle', as cosinus in Edmund Gunter's Canon triangulorum (1620), which also includes a similar definition of cotangens.[45]

While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE).[46] The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period (Aryabhatiya and Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[39][47] All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles.[48]
Al-Khwārizmī (c. 780–850) produced tables of sines, cosines, and tangents.[49][50] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°.[50] In the early 17th century, the French mathematician Albert Girard published the first use of the abbreviations sin, cos, and tan; these were further promulgated by Euler (see below). The Opus palatinum de triangulis of Georg Joachim Rheticus, a student of Copernicus, was probably the first work in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x.[51] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722).[52] Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviations sin., cos., tang., cot., sec., and cosec.[39]

There is no standard algorithm for calculating sine and cosine. IEEE 754, the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine, because no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs.[53] Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or the range of input values accepted. This can lead to different results for different algorithms, especially in special circumstances such as very large inputs, e.g. sin(10^22).

A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, and then for in-between values either pick the closest pre-calculated value or linearly interpolate between the two closest values to approximate it. This allows results to be looked up from a table rather than calculated in real time. With modern CPU architectures this method may offer no advantage.[citation needed] The CORDIC algorithm is commonly used in scientific calculators.

The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms; in computing, they are typically abbreviated to sin and cos. Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h: sin(double), sinf(float), and sinl(long double). The parameter of each is a floating-point value specifying the angle in radians, and each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h, such as for cosine, arcsine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module, and complex sine and cosine functions are available within the cmath module, e.g. cmath.sin(z). CPython's math functions call the C math library and use a double-precision floating-point format.
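A hedged sketch of the lookup-table optimization described above, with one precomputed value per degree and linear interpolation between neighbors; the table size and the error bound in the assert are illustrative choices, not recommendations.

```python
import math

# Precompute sin for each whole degree, 0..360 inclusive.
TABLE = [math.sin(math.radians(d)) for d in range(361)]

def sin_lut(degrees: float) -> float:
    """Table lookup with linear interpolation between adjacent degrees."""
    d = degrees % 360.0
    i = int(d)                     # lower table index
    frac = d - i                   # position between the two table entries
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

# Worst-case error of per-degree linear interpolation is about 3.8e-5.
worst = max(abs(sin_lut(x / 10) - math.sin(math.radians(x / 10)))
            for x in range(3600))
assert worst < 5e-5
```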
Some software libraries provide implementations of sine and cosine that take the input angle in half-turns, a half-turn being an angle of 180 degrees or $\pi$ radians. Representing angles in turns or half-turns has accuracy and efficiency advantages in some cases.[54][55] These functions are called sinpi and cospi in MATLAB,[54] OpenCL,[56] R,[55] Julia,[57] CUDA,[58] and ARM.[59] For example, sinpi(x) evaluates to $\sin(\pi x)$, where x is expressed in half-turns; consequently the final input to the underlying radian-based sine, $\pi x$, can be interpreted in radians.

The accuracy advantage stems from the ability to represent key angles like a full turn, half turn, and quarter turn losslessly in binary floating point or fixed point. In contrast, representing $2\pi$, $\pi$, and $\frac{\pi}{2}$ in binary floating point or binary scaled fixed point always involves a loss of accuracy, since irrational numbers cannot be represented with finitely many binary digits.

Turns also have accuracy and efficiency advantages for computing modulo one period. Computing modulo 1 turn or modulo 2 half-turns can be done losslessly and efficiently in both floating point and fixed point; for example, computing modulo 1 or modulo 2 for a binary-point-scaled fixed-point value requires only a bit shift or a bitwise AND operation. In contrast, computing modulo $\frac{\pi}{2}$ involves inaccuracies in representing $\frac{\pi}{2}$.

For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution.[60] If half-turns are used as the unit for angle, then the value provided by the sensor maps directly and losslessly to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used to store the angle, then the inaccuracy and cost of multiplying the raw sensor integer by an approximation to $\frac{\pi}{2048}$ would be incurred.
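A minimal Python sketch of a sinpi-style function, assuming half-turn input and reducing modulo 2 before scaling by π; the special-casing of whole half-turns shows the exactness advantage (the Python standard library has no sinpi, so this is an illustrative stand-in, not any library's implementation).

```python
import math

def sinpi(x: float) -> float:
    """sin(pi * x), with x in half-turns; the range reduction is exact."""
    r = math.fmod(x, 2.0)          # reduce modulo one period (2 half-turns), losslessly
    if r == math.floor(r):         # whole half-turns: sine is exactly zero
        return 0.0
    return math.sin(math.pi * r)

# Exact at whole half-turns, where naive radian evaluation picks up rounding error:
assert sinpi(1.0) == 0.0
assert sinpi(1e16) == 0.0
print(math.sin(math.pi * 1.0))     # 1.2246467991473532e-16, not exactly zero
```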
https://en.wikipedia.org/wiki/Cosine#Properties
In mathematics, the epsilon numbers are a collection of transfinite numbers whose defining property is that they are fixed points of an exponential map. Consequently, they are not reachable from 0 via a finite series of applications of the chosen exponential map and of "weaker" operations like addition and multiplication. The original epsilon numbers were introduced by Georg Cantor in the context of ordinal arithmetic; they are the ordinal numbers $\varepsilon$ that satisfy the equation
$$\varepsilon = \omega^{\varepsilon},$$
in which $\omega$ is the smallest infinite ordinal. The least such ordinal is $\varepsilon_0$ (pronounced epsilon nought (chiefly British), epsilon naught (chiefly American), or epsilon zero), which can be viewed as the "limit" obtained by transfinite recursion from a sequence of smaller limit ordinals:
$$\varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \ldots\},$$
where $\sup$ is the supremum, which is equivalent to set union in the case of the von Neumann representation of ordinals.

Larger ordinal fixed points of the exponential map are indexed by ordinal subscripts, resulting in $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_{\omega}, \varepsilon_{\omega+1}, \ldots, \varepsilon_{\varepsilon_0}, \ldots, \varepsilon_{\varepsilon_1}, \ldots, \varepsilon_{\varepsilon_{\varepsilon_{\cdots}}}, \ldots, \zeta_0 = \varphi_2(0)$.[1] The ordinal $\varepsilon_0$ is still countable, as is any epsilon number whose index is countable. Uncountable ordinals also exist, along with uncountable epsilon numbers whose index is an uncountable ordinal.

The smallest epsilon number $\varepsilon_0$ appears in many induction proofs, because for many purposes transfinite induction is only required up to $\varepsilon_0$ (as in Gentzen's consistency proof and the proof of Goodstein's theorem). Its use by Gentzen to prove the consistency of Peano arithmetic, along with Gödel's second incompleteness theorem, shows that Peano arithmetic cannot prove the well-foundedness of this ordering (it is in fact the least ordinal with this property, and as such, in proof-theoretic ordinal analysis, it is used as a measure of the strength of the theory of Peano arithmetic). Many larger epsilon numbers can be defined using the Veblen function.

A more general class of epsilon numbers has been identified by John Horton Conway and Donald Knuth in the surreal number system, consisting of all surreals that are fixed points of the base-$\omega$ exponential map $x \mapsto \omega^x$.

Hessenberg (1906) defined gamma numbers (see additively indecomposable ordinal) to be numbers $\gamma > 0$ such that $\alpha + \gamma = \gamma$ whenever $\alpha < \gamma$, delta numbers (see multiplicatively indecomposable ordinal) to be numbers $\delta > 1$ such that $\alpha\delta = \delta$ whenever $0 < \alpha < \delta$, and epsilon numbers to be numbers $\varepsilon > 2$ such that $\alpha^{\varepsilon} = \varepsilon$ whenever $1 < \alpha < \varepsilon$. His gamma numbers are those of the form $\omega^{\beta}$, and his delta numbers are those of the form $\omega^{\omega^{\beta}}$.

The standard definition of ordinal exponentiation with base $\alpha$ is:
$$\alpha^0 = 1, \qquad \alpha^{\beta+1} = \alpha^{\beta} \cdot \alpha, \qquad \alpha^{\beta} = \sup\{\alpha^{\delta} : \delta < \beta\} \ \text{ for limit } \beta.$$
From this definition, it follows that for any fixed ordinal $\alpha > 1$, the mapping $\beta \mapsto \alpha^{\beta}$ is a normal function, so it has arbitrarily large fixed points by the fixed-point lemma for normal functions. When $\alpha = \omega$, these fixed points are precisely the ordinal epsilon numbers. The next epsilon number after $\varepsilon_0$ is
$$\varepsilon_1 = \sup\{\varepsilon_0 + 1,\ \omega^{\varepsilon_0 + 1},\ \omega^{\omega^{\varepsilon_0 + 1}},\ \ldots\},$$
and a different sequence with the same supremum, $\varepsilon_1$, is obtained by starting from 0 and exponentiating with base $\varepsilon_0$ instead:
$$\varepsilon_1 = \sup\{1,\ \varepsilon_0,\ \varepsilon_0^{\varepsilon_0},\ \varepsilon_0^{\varepsilon_0^{\varepsilon_0}},\ \ldots\}.$$
Generally, the epsilon number $\varepsilon_{\beta}$ indexed by any ordinal that has an immediate predecessor $\beta - 1$ can be constructed similarly.
In particular, whether or not the index β is a limit ordinal, ε_β is a fixed point not only of base-ω exponentiation but also of base-δ exponentiation for all ordinals 1 < δ < ε_β.

Since the epsilon numbers are an unbounded subclass of the ordinal numbers, they are enumerated using the ordinal numbers themselves. For any ordinal number β, ε_β is the least epsilon number (fixed point of the exponential map) not already in the set {ε_δ ∣ δ < β}. It might appear that this is the non-constructive equivalent of the constructive definition using iterated exponentiation; but the two definitions are equally non-constructive at steps indexed by limit ordinals, which represent transfinite recursion of a higher order than taking the supremum of an exponential series.

Several basic facts about epsilon numbers are straightforward to prove. For example, any epsilon number ε has Cantor normal form ε = ω^ε, which means that the Cantor normal form is not very useful for epsilon numbers. The ordinals less than ε_0, however, can be usefully described by their Cantor normal forms, which leads to a representation of ε_0 as the ordered set of all finite rooted trees, as follows. Any ordinal α < ε_0 has Cantor normal form α = ω^(β_1) + ω^(β_2) + ⋯ + ω^(β_k), where k is a natural number and β_1, …, β_k are ordinals with α > β_1 ≥ ⋯ ≥ β_k, uniquely determined by α. Each of the ordinals β_1, …, β_k in turn has a similar Cantor normal form. We obtain the finite rooted tree representing α by joining the roots of the trees representing β_1, …, β_k to a new root. (This has the consequence that the number 0 is represented by a single root, while the number 1 = ω^0 is represented by a tree containing a root and a single leaf.) An order on the set of finite rooted trees is defined recursively: we first order the subtrees joined to the root in decreasing order, and then use lexicographic order on these ordered sequences of subtrees. In this way the set of all finite rooted trees becomes a well-ordered set which is order isomorphic to ε_0. This representation is related to the proof of the hydra theorem, which represents decreasing sequences of ordinals as a graph-theoretic game.

The fixed points of the "epsilon mapping" x ↦ ε_x form a normal function, whose fixed points in turn form a normal function; this is known as the Veblen hierarchy (the Veblen functions with base φ_0(α) = ω^α). In the notation of the Veblen hierarchy, the epsilon mapping is φ_1, and its fixed points are enumerated by φ_2 (see ordinal collapsing function). Continuing in this vein, one can define maps φ_α for progressively larger ordinals α (including, by this rarefied form of transfinite recursion, limit ordinals), with progressively larger least fixed points φ_{α+1}(0). The least ordinal not reachable from 0 by this procedure, i.e., the least ordinal α for which φ_α(0) = α (equivalently, the first fixed point of the map α ↦ φ_α(0)), is the Feferman–Schütte ordinal Γ_0.
In a set theory where such an ordinal can be proved to exist, one has a map Γ that enumerates the fixed points Γ_0, Γ_1, Γ_2, … of α ↦ φ_α(0); these are all still epsilon numbers, as they lie in the image of φ_β for every β ≤ Γ_0, including of the map φ_1 that enumerates epsilon numbers.

In On Numbers and Games, the classic exposition on surreal numbers, John Horton Conway provided a number of examples of concepts that had natural extensions from the ordinals to the surreals. One such function is the ω-map n ↦ ω^n; this mapping generalises naturally to include all surreal numbers in its domain, which in turn provides a natural generalisation of the Cantor normal form for surreal numbers. It is natural to consider any fixed point of this expanded map to be an epsilon number, whether or not it happens to be strictly an ordinal number, and non-ordinal examples exist. There is a natural way to define ε_n for every surreal number n, and the map remains order-preserving. Conway goes on to define a broader class of "irreducible" surreal numbers that includes the epsilon numbers as a particularly interesting subclass.
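As a worked illustration of the Cantor normal form and its tree representation described above (the particular ordinal is an invented example, not taken from the source):

    % A sample ordinal below epsilon_0, written in Cantor normal form and
    % then fully expanded so that every exponent is itself in normal form:
    \alpha = \omega^{\omega+1} + \omega + 1
           = \omega^{\omega^{\omega^{0}} + \omega^{0}} + \omega^{\omega^{0}} + \omega^{0}
    % The tree for alpha joins, under a new root, the trees for the three
    % exponents \omega + 1, 1, and 0; ordering such trees as described in
    % the text recovers the order on the ordinals below epsilon_0.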
https://en.wikipedia.org/wiki/Epsilon_numbers_(mathematics)
A statistical hypothesis test is a method of statistical inference used to decide whether the data provide sufficient evidence to reject a particular hypothesis. A statistical hypothesis test typically involves a calculation of a test statistic. Then a decision is made, either by comparing the test statistic to a critical value or, equivalently, by evaluating a p-value computed from the test statistic. Roughly 100 specialized statistical tests are in use and noteworthy.[1][2]

While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710),[3] followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth; see § Human sex ratio.

Paul Meehl has argued that the epistemological importance of the choice of null hypothesis has gone largely unacknowledged. When the null hypothesis is predicted by theory, a more precise experiment will be a more severe test of the underlying theory. When the null hypothesis defaults to "no difference" or "no effect", a more precise experiment is a less severe test of the theory that motivated performing the experiment.[4] An examination of the origins of the latter practice may therefore be useful:

1778: Pierre Laplace compares the birthrates of boys and girls in multiple European cities. He states: "it is natural to conclude that these possibilities are very nearly in the same ratio". Thus the null hypothesis in this case is that the birthrates of boys and girls should be equal, given "conventional wisdom".[5]

1900: Karl Pearson develops the chi-squared test to determine "whether a given form of frequency curve will effectively describe the samples drawn from a given population." Thus the null hypothesis is that a population is described by some distribution predicted by theory. He uses as an example the numbers of fives and sixes in the Weldon dice throw data.[6]

1904: Karl Pearson develops the concept of "contingency" in order to determine whether outcomes are independent of a given categorical factor. Here the null hypothesis is by default that two things are unrelated (e.g. scar formation and death rates from smallpox).[7] The null hypothesis in this case is no longer predicted by theory or conventional wisdom, but is instead the principle of indifference that led Fisher and others to dismiss the use of "inverse probabilities".[8]

Modern significance testing is largely the product of Karl Pearson (p-value, Pearson's chi-squared test), William Sealy Gosset (Student's t-distribution), and Ronald Fisher ("null hypothesis", analysis of variance, "significance test"), while hypothesis testing was developed by Jerzy Neyman and Egon Pearson (son of Karl). Ronald Fisher began his life in statistics as a Bayesian (Zabell 1992), but soon grew disenchanted with the subjectivity involved (namely, use of the principle of indifference when determining prior probabilities) and sought to provide a more "objective" approach to inductive inference.[9]

Fisher emphasized rigorous experimental design and methods to extract a result from few samples assuming Gaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. Modern hypothesis testing is an inconsistent hybrid of the Fisher and Neyman–Pearson formulations, methods, and terminology developed in the early 20th century. Fisher popularized the "significance test".
He required a null hypothesis (corresponding to a population frequency distribution) and a sample. His (now familiar) calculations determined whether to reject the null hypothesis or not. Significance testing did not utilize an alternative hypothesis, so there was no concept of a Type II error (false negative).

The p-value was devised as an informal, but objective, index meant to help a researcher determine (based on other knowledge) whether to modify future experiments or strengthen one's faith in the null hypothesis.[10] Hypothesis testing (and Type I/II errors) was devised by Neyman and Pearson as a more objective alternative to Fisher's p-value, also meant to determine researcher behaviour, but without requiring any inductive inference by the researcher.[11][12]

Neyman and Pearson considered a different problem from Fisher (which they called "hypothesis testing"). They initially considered two simple hypotheses (both with frequency distributions). They calculated two probabilities and typically selected the hypothesis associated with the higher probability (the hypothesis more likely to have generated the sample). Their method always selected a hypothesis. It also allowed the calculation of both types of error probabilities.

Fisher and Neyman/Pearson clashed bitterly. Neyman and Pearson considered their formulation to be an improved generalization of significance testing (the defining paper[11] was abstract; mathematicians have generalized and refined the theory for decades[13]). Fisher thought that it was not applicable to scientific research because often, during the course of the experiment, it is discovered that the initial assumptions about the null hypothesis are questionable due to unexpected sources of error. He believed that the use of rigid reject/accept decisions based on models formulated before data are collected was incompatible with this common scenario faced by scientists, and that attempts to apply this method to scientific research would lead to mass confusion.[14]

The dispute between Fisher and Neyman–Pearson was waged on philosophical grounds, characterized by a philosopher as a dispute over the proper role of models in statistical inference.[15]

Events intervened: Neyman accepted a position at the University of California, Berkeley in 1938, breaking his partnership with Pearson and separating the disputants (who had occupied the same building). World War II provided an intermission in the debate. The dispute between Fisher and Neyman terminated (unresolved after 27 years) with Fisher's death in 1962. Neyman wrote a well-regarded eulogy.[16] Some of Neyman's later publications reported p-values and significance levels.[17]

The modern version of hypothesis testing is generally called null hypothesis significance testing (NHST)[18] and is a hybrid of the Fisher approach with the Neyman–Pearson approach. In 2000, Raymond S. Nickerson wrote an article stating that NHST was (at the time) "arguably the most widely used method of analysis of data collected in psychological experiments and has been so for about 70 years", and that it was at the same time "very controversial".[18] This fusion resulted from confusion by writers of statistical textbooks (as predicted by Fisher) beginning in the 1940s[19] (but signal detection, for example, still uses the Neyman/Pearson formulation). Great conceptual differences and many caveats in addition to those mentioned above were ignored.
Neyman and Pearson provided the stronger terminology, the more rigorous mathematics, and the more consistent philosophy, but the subject taught today in introductory statistics has more similarities with Fisher's method than with theirs.[20] Sometime around 1940,[19] authors of statistical textbooks began combining the two approaches by using the p-value in place of the test statistic (or data) to test against the Neyman–Pearson "significance level".

Hypothesis testing and philosophy intersect. Inferential statistics, which includes hypothesis testing, is applied probability. Both probability and its application are intertwined with philosophy. Philosopher David Hume wrote, "All knowledge degenerates into probability." Competing practical definitions of probability reflect philosophical differences. The most common application of hypothesis testing is in the scientific interpretation of experimental data, which is naturally studied by the philosophy of science.

Fisher and Neyman opposed the subjectivity of probability. Their views contributed to the objective definitions. The core of their historical disagreement was philosophical. Many of the philosophical criticisms of hypothesis testing are discussed by statisticians in other contexts, particularly correlation does not imply causation and the design of experiments. Hypothesis testing is of continuing interest to philosophers.[15][21]

Statistics is increasingly being taught in schools, with hypothesis testing being one of the elements taught.[22][23] Many conclusions reported in the popular press (from political opinion polls to medical studies) are based on statistics. Some writers have stated that statistical analysis of this kind allows for thinking clearly about problems involving mass data, as well as the effective reporting of trends and inferences from said data, but caution that writers for a broad public should have a solid understanding of the field in order to use the terms and concepts correctly.[24][25] An introductory college statistics class places much emphasis on hypothesis testing – perhaps half of the course. Such fields as literature and divinity now include findings based on statistical analysis (see the Bible Analyzer). An introductory statistics class teaches hypothesis testing as a cookbook process. Hypothesis testing is also taught at the postgraduate level. Statisticians learn how to create good statistical test procedures (like z, Student's t, F, and chi-squared). Statistical hypothesis testing is considered a mature area within statistics,[26] but a limited amount of development continues.

An academic study states that the cookbook method of teaching introductory statistics leaves no time for history, philosophy, or controversy. Hypothesis testing has been taught as a received, unified method. Surveys showed that graduates of the class were filled with philosophical misconceptions (on all aspects of statistical inference) that persisted among instructors.[27] While the problem was addressed more than a decade ago,[28] and calls for educational reform continue,[29] students still graduate from statistics classes holding fundamental misconceptions about hypothesis testing.[30] Ideas for improving the teaching of hypothesis testing include encouraging students to search for statistical errors in published papers, teaching the history of statistics, and emphasizing the controversy in a generally dry subject.[31]

Raymond S.
Nickerson commented:

The debate about NHST has its roots in unresolved disagreements among major contributors to the development of theories of inferential statistics on which modern approaches are based. Gigerenzer et al. (1989) have reviewed in considerable detail the controversy between R. A. Fisher on the one hand and Jerzy Neyman and Egon Pearson on the other, as well as the disagreements between both of these views and those of the followers of Thomas Bayes. They noted the remarkable fact that little hint of the historical and ongoing controversy is to be found in most textbooks that are used to teach NHST to its potential users. The resulting lack of an accurate historical perspective and understanding of the complexity and sometimes controversial philosophical foundations of various approaches to statistical inference may go a long way toward explaining the apparent ease with which statistical tests are misused and misinterpreted.[18]

The typical steps involved in performing a frequentist hypothesis test in practice are to state the null and alternative hypotheses, choose a significance level, select a test statistic whose distribution under the null hypothesis is known, compute the statistic from the data, and reject the null hypothesis if the statistic falls in the critical region (equivalently, if the p-value is at most the significance level). Applied to the radioactive suitcase example (below), the two processes yield different reports: the former report is adequate, but the latter gives a more detailed explanation of the data and of the reason why the suitcase is being checked. Not rejecting the null hypothesis does not mean the null hypothesis is "accepted" per se (though Neyman and Pearson used that word in their original writings; see the Interpretation section).

The processes described here are perfectly adequate for computation, but they seriously neglect design of experiments considerations.[33][34] It is particularly critical that appropriate sample sizes be estimated before conducting the experiment.

The phrase "test of significance" was coined by statistician Ronald Fisher.[35] When the null hypothesis is true and statistical assumptions are met, the probability that the p-value will be less than or equal to the significance level α is at most α. This ensures that the hypothesis test maintains its specified false positive rate (provided that statistical assumptions are met).[36] The p-value is the probability that a test statistic at least as extreme as the one obtained would occur under the null hypothesis. At a significance level of 0.05, a fair coin would be expected to (incorrectly) reject the null hypothesis (that it is fair) in about 1 out of 20 tests on average. The p-value does not provide the probability that either the null hypothesis or its opposite is correct (a common source of confusion).[37]

If the p-value is less than the chosen significance threshold (equivalently, if the observed test statistic is in the critical region), then we say the null hypothesis is rejected at the chosen level of significance. If the p-value is not less than the chosen significance threshold (equivalently, if the observed test statistic is outside the critical region), then the null hypothesis is not rejected at the chosen level of significance.

In the "lady tasting tea" example (below), Fisher required the lady to properly categorize all of the cups of tea to justify the conclusion that the result was unlikely to have arisen from chance. His test revealed that if the lady was effectively guessing at random (the null hypothesis), there was a 1.4% chance that the observed results (perfectly ordered tea) would occur.

Statistics are helpful in analyzing most collections of data. This is equally true of hypothesis testing, which can justify conclusions even when no scientific theory exists.
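The 1.4% figure quoted above is simple to verify: the chance of correctly identifying all four "milk-first" cups out of eight by guessing is 1/C(8,4) = 1/70. A small C program (illustrative, not from the source) computes it:

    #include <stdio.h>

    /* Binomial coefficient C(n, k); the running product stays integral
       because after step i it equals C(n - k + i, i). */
    static unsigned long long choose(unsigned n, unsigned k)
    {
        unsigned long long c = 1;
        for (unsigned i = 1; i <= k; i++)
            c = c * (n - k + i) / i;
        return c;
    }

    int main(void)
    {
        unsigned long long ways = choose(8, 4);   /* 70 equally likely orderings */
        printf("p = 1/%llu = %.4f\n", ways, 1.0 / ways);  /* p = 1/70 = 0.0143 */
        return 0;
    }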
In the Lady tasting tea example, it was "obvious" that no difference existed between (milk poured into tea) and (tea poured into milk). The data contradicted the "obvious".

Hypothesis testing has many real-world applications.[38] Statistical hypothesis testing plays an important role in the whole of statistics and in statistical inference. For example, Lehmann (1992), in a review of the fundamental paper by Neyman and Pearson (1933), says: "Nevertheless, despite their shortcomings, the new paradigm formulated in the 1933 paper, and the many developments carried out within its framework continue to play a central role in both the theory and practice of statistics and can be expected to do so in the foreseeable future".

Significance testing has been the favored statistical tool in some experimental social sciences (over 90% of articles in the Journal of Applied Psychology during the early 1990s).[39] Other fields have favored the estimation of parameters (e.g. effect size). Significance testing is used as a substitute for the traditional comparison of predicted value and experimental result at the core of the scientific method. When theory is only capable of predicting the sign of a relationship, a directional (one-sided) hypothesis test can be configured so that only a statistically significant result supports the theory. This form of theory appraisal is the most heavily criticized application of hypothesis testing.

"If the government required statistical procedures to carry warning labels like those on drugs, most inference methods would have long labels indeed."[40] This caution applies to hypothesis tests and to alternatives to them. The successful hypothesis test is associated with a probability and a Type I error rate. The conclusion might be wrong. The conclusion of the test is only as solid as the sample upon which it is based. The design of the experiment is critical. A number of unexpected effects have been observed, including the following.

A statistical analysis of misleading data produces misleading conclusions. The issue of data quality can be more subtle. In forecasting, for example, there is no agreement on a measure of forecast accuracy; in the absence of a consensus measurement, no decision based on measurements will be without controversy.

Publication bias: statistically nonsignificant results may be less likely to be published, which can bias the literature.

Multiple testing: when multiple true null hypothesis tests are conducted at once without adjustment, the overall probability of Type I error is higher than the nominal alpha level.[41]

Those making critical decisions based on the results of a hypothesis test are prudent to look at the details rather than the conclusion alone. In the physical sciences, most results are fully accepted only when independently confirmed. The general advice concerning statistics is, "Figures never lie, but liars figure" (anonymous).

The following definitions are mainly based on the exposition in the book by Lehmann and Romano.[36] A statistical hypothesis test compares a test statistic (z or t, for example) to a threshold. The test statistic (the formula found in the table below) is based on optimality: for a fixed level of Type I error rate, use of these statistics minimizes Type II error rates (equivalent to maximizing power). Various terms describe tests in terms of such optimality.

Bootstrap-based resampling methods can be used for null hypothesis testing.
A bootstrap creates numerous simulated samples by randomly resampling (with replacement) the original, combined sample data, assuming the null hypothesis is correct. The bootstrap is very versatile: it is distribution-free, relying not on restrictive parametric assumptions but on empirical approximate methods with asymptotic guarantees. Traditional parametric hypothesis tests are more computationally efficient but make stronger structural assumptions. In situations where computing the probability of the test statistic under the null hypothesis is hard or impossible (perhaps due to inconvenience or to lack of knowledge of the underlying distribution), the bootstrap offers a viable method for statistical inference.[43][44][45][46]

The earliest use of statistical hypothesis testing is generally credited to the question of whether male and female births are equally likely (null hypothesis), which was addressed in the 1700s by John Arbuthnot (1710)[47] and later by Pierre-Simon Laplace (1770s).[48] Arbuthnot examined birth records in London for each of the 82 years from 1629 to 1710, and applied the sign test, a simple non-parametric test.[49][50][51] In every year, the number of males born in London exceeded the number of females. Considering more male or more female births as equally likely, the probability of the observed outcome is (1/2)^82, or about 1 in 4,836,000,000,000,000,000,000,000; in modern terms, this is the p-value. Arbuthnot concluded that this is too small to be due to chance and must instead be due to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the p = 1/2^82 significance level.

Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls.[5] He concluded by calculation of a p-value that the excess was a real, but unexplained, effect.[52]

In a famous example of hypothesis testing, known as the Lady tasting tea,[53] Dr. Muriel Bristol, a colleague of Fisher, claimed to be able to tell whether the tea or the milk was added first to a cup. Fisher proposed to give her eight cups, four of each variety, in random order. One could then ask what the probability was of her getting the number she got correct just by chance. The null hypothesis was that the Lady had no such ability. The test statistic was a simple count of the number of successes in selecting the 4 cups. The critical region was the single case of 4 successes out of 4 possible, based on a conventional probability criterion (< 5%). A pattern of 4 successes corresponds to 1 out of 70 possible combinations (p ≈ 1.4%). Fisher asserted that no alternative hypothesis was (ever) required. The lady correctly identified every cup,[54] which would be considered a statistically significant result.

A statistical test procedure is comparable to a criminal trial: a defendant is considered not guilty as long as his or her guilt is not proven. The prosecutor tries to prove the guilt of the defendant. Only when there is enough evidence for the prosecution is the defendant convicted. At the start of the procedure, there are two hypotheses, H0: "the defendant is not guilty" and H1: "the defendant is guilty". The first, H0, is called the null hypothesis. The second, H1, is called the alternative hypothesis. It is the alternative hypothesis that one hopes to support.
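A minimal sketch of the pooled-resampling bootstrap test described at the start of this passage; the data, seed, replication count, and the choice of a two-sided difference of means as the test statistic are all illustrative assumptions, not from the source:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Pooled two-sample bootstrap test of H0: "both samples come from the
       same distribution", with the difference in means as test statistic. */

    static double mean(const double *x, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += x[i];
        return s / n;
    }

    int main(void)
    {
        double a[] = { 4.1, 5.0, 6.2, 5.5, 4.8 };
        double b[] = { 5.9, 6.4, 7.1, 6.0, 6.8 };
        enum { NA = 5, NB = 5, N = NA + NB, B = 100000 };
        double pool[N];
        for (int i = 0; i < NA; i++) pool[i] = a[i];
        for (int i = 0; i < NB; i++) pool[NA + i] = b[i];

        double observed = fabs(mean(b, NB) - mean(a, NA));
        int extreme = 0;
        srand(12345);                        /* fixed seed for reproducibility */
        for (int r = 0; r < B; r++) {
            double ra[NA], rb[NB];
            for (int i = 0; i < NA; i++)     /* resample with replacement from */
                ra[i] = pool[rand() % N];    /* the pooled data, i.e. under H0 */
            for (int i = 0; i < NB; i++)
                rb[i] = pool[rand() % N];
            if (fabs(mean(rb, NB) - mean(ra, NA)) >= observed)
                extreme++;                   /* at least as extreme as observed */
        }
        printf("bootstrap p-value ~ %.4f\n", (double)extreme / B);
        return 0;
    }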
The hypothesis of innocence is rejected only when an error is very unlikely, because one does not want to convict an innocent defendant. Such an error is called an error of the first kind (i.e., the conviction of an innocent person), and the occurrence of this error is controlled to be rare. As a consequence of this asymmetric behaviour, an error of the second kind (acquitting a person who committed the crime) is more common. A criminal trial can be regarded as either or both of two decision processes: guilty vs not guilty, or evidence vs a threshold ("beyond a reasonable doubt"). In one view, the defendant is judged; in the other view, the performance of the prosecution (which bears the burden of proof) is judged. A hypothesis test can be regarded as either a judgment of a hypothesis or as a judgment of evidence.

A person (the subject) is tested for clairvoyance. They are shown the back face of a randomly chosen playing card 25 times and asked which of the four suits it belongs to. The number of hits, or correct answers, is called X. As we try to find evidence of their clairvoyance, for the time being the null hypothesis is that the person is not clairvoyant.[55] The alternative is: the person is (more or less) clairvoyant. If the null hypothesis is valid, the only thing the test person can do is guess. For every card, the probability (relative frequency) of any single suit appearing is 1/4. If the alternative is valid, the test subject will predict the suit correctly with probability greater than 1/4. We will call the probability of guessing correctly p. The hypotheses, then, are

H0: p = 1/4 (the subject is just guessing)  and  H1: p > 1/4 (the subject is clairvoyant).

When the test subject correctly predicts all 25 cards, we will consider them clairvoyant and reject the null hypothesis; likewise with 24 or 23 hits. With only 5 or 6 hits, on the other hand, there is no cause to consider them so. But what about 12 hits, or 17 hits? What is the critical number, c, of hits at which point we consider the subject to be clairvoyant? How do we determine the critical value c? With the choice c = 25 (i.e., we only accept clairvoyance when all cards are predicted correctly) we are more critical than with c = 10. In the first case almost no test subjects will be recognized as clairvoyant; in the second case, a certain number will pass the test. In practice, one decides how critical one will be: that is, one decides how often one accepts an error of the first kind, a false positive, or Type I error. With c = 25 the probability of such an error is

P(X = 25 ∣ p = 1/4) = (1/4)^25 ≈ 9·10^−16,

and hence very small: the probability of a false positive is the probability of randomly guessing correctly all 25 times. Being less critical, with c = 10, gives

P(X ≥ 10 ∣ p = 1/4) ≈ 0.07.

Thus, c = 10 yields a much greater probability of false positive.

Before the test is actually performed, the maximum acceptable probability of a Type I error (α) is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate were zero, an infinite number of correct guesses would be required.) Depending on this Type I error rate, the critical value c is calculated. For example, if we select an error rate of 1%, c is calculated from

P(X ≥ c ∣ p = 1/4) ≤ 0.01.

From all the numbers c with this property, we choose the smallest, in order to minimize the probability of a Type II error, a false negative. For the above example, we select c = 13.

Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences.
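The critical-value calculation above can be checked directly. This short C program (illustrative, not from the source) sums exact binomial tail probabilities for n = 25, p = 1/4 and reports the smallest c with P(X ≥ c) ≤ 0.01:

    #include <stdio.h>
    #include <math.h>

    /* Binomial probability mass P(X = k) for n trials with success
       probability p, computed via log-gamma for numerical stability. */
    static double binom_pmf(int n, int k, double p)
    {
        double logc = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1);
        return exp(logc + k * log(p) + (n - k) * log(1.0 - p));
    }

    int main(void)
    {
        int n = 25;
        double p = 0.25;
        for (int c = 0; c <= n; c++) {
            double tail = 0.0;                     /* P(X >= c) under H0 */
            for (int k = c; k <= n; k++)
                tail += binom_pmf(n, k, p);
            if (tail <= 0.01) {                    /* first c meeting the 1% level */
                printf("c = %d  (P(X >= c) = %.4f)\n", c, tail);  /* c = 13 */
                break;
            }
        }
        return 0;
    }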
Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly deciding that a default position (null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true. This probability of making an incorrect decision is not the probability that the null hypothesis is true, nor the probability that any specific alternative hypothesis is true. This contrasts with other possible techniques of decision theory in which the null and alternative hypotheses are treated on a more equal basis. One naïve Bayesian approach to hypothesis testing is to base decisions on the posterior probability,[56][57] but this fails when comparing point and continuous hypotheses. Other approaches to decision making, such as Bayesian decision theory, attempt to balance the consequences of incorrect decisions across all possibilities, rather than concentrating on a single null hypothesis. A number of other approaches to reaching a decision based on data are available via decision theory and optimal decisions, some of which have desirable properties. Hypothesis testing, though, is a dominant approach to data analysis in many fields of science.

Extensions to the theory of hypothesis testing include the study of the power of tests, i.e. the probability of correctly rejecting the null hypothesis given that it is false. Such considerations can be used for the purpose of sample size determination prior to the collection of data.

An example of Neyman–Pearson hypothesis testing (or null hypothesis statistical significance testing) can be made by a change to the radioactive suitcase example. If the "suitcase" is actually a shielded container for the transportation of radioactive material, then a test might be used to select among three hypotheses: no radioactive source present, one present, two (all) present. The test could be required for safety, with actions required in each case. The Neyman–Pearson lemma of hypothesis testing says that a good criterion for the selection of hypotheses is the ratio of their probabilities (a likelihood ratio). A simple method of solution is to select the hypothesis with the highest probability for the Geiger counts observed. The typical result matches intuition: few counts imply no source, many counts imply two sources, and intermediate counts imply one source. Notice also that usually there are problems for proving a negative; null hypotheses should be at least falsifiable.

Neyman–Pearson theory can accommodate both prior probabilities and the costs of actions resulting from decisions.[58] The former allows each test to consider the results of earlier tests (unlike Fisher's significance tests). The latter allows the consideration of economic issues (for example) as well as probabilities. A likelihood ratio remains a good criterion for selecting among hypotheses.

The two forms of hypothesis testing are based on different problem formulations. The original test is analogous to a true/false question; the Neyman–Pearson test is more like multiple choice. In the view of Tukey,[59] the former produces a conclusion on the basis of only strong evidence, while the latter produces a decision on the basis of available evidence. While the two tests seem quite different both mathematically and philosophically, later developments lead to the opposite claim. Consider many tiny radioactive sources: the hypotheses become 0, 1, 2, 3, … grains of radioactive sand.
There is little distinction between none or some radiation (Fisher) and 0 grains of radioactive sand versus all of the alternatives (Neyman–Pearson). The major Neyman–Pearson paper of 1933[11] also considered composite hypotheses (ones whose distribution includes an unknown parameter). An example proved the optimality of the (Student's) t-test: "there can be no better test for the hypothesis under consideration" (p. 321). Neyman–Pearson theory was proving the optimality of Fisherian methods from its inception.

Fisher's significance testing has proven a popular, flexible statistical tool in application, with little mathematical growth potential. Neyman–Pearson hypothesis testing is claimed as a pillar of mathematical statistics,[60] creating a new paradigm for the field. It also stimulated new applications in statistical process control, detection theory, decision theory, and game theory. Both formulations have been successful, but the successes have been of a different character.

The dispute over formulations is unresolved. Science primarily uses Fisher's (slightly modified) formulation as taught in introductory statistics. Statisticians study Neyman–Pearson theory in graduate school. Mathematicians are proud of uniting the formulations. Philosophers consider them separately. Learned opinions deem the formulations variously competitive (Fisher vs Neyman), incompatible,[9] or complementary.[13] The dispute has become more complex since Bayesian inference has achieved respectability.

The terminology is inconsistent. Hypothesis testing can mean any mixture of two formulations that both changed with time. Any discussion of significance testing vs hypothesis testing is doubly vulnerable to confusion. Fisher thought that hypothesis testing was a useful strategy for performing industrial quality control; however, he strongly disagreed that hypothesis testing could be useful for scientists.[10] Hypothesis testing provides a means of finding test statistics used in significance testing.[13] The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct.[15] They usually (but not always) produce the same mathematical answer. The preferred answer is context dependent.[13] While the existing merger of Fisher and Neyman–Pearson theories has been heavily criticized, modifying the merger to achieve Bayesian goals has been considered.[61]

Criticism of statistical hypothesis testing fills volumes.[62][63][64][65][66][67] Critics and supporters are largely in factual agreement regarding the characteristics of null hypothesis significance testing (NHST): while it can provide critical information, it is inadequate as the sole tool for statistical analysis, and successfully rejecting the null hypothesis may offer no support for the research hypothesis. The continuing controversy concerns the selection of the best statistical practices for the near-term future given the existing practices; however, adequate research design can minimize this issue. Critics would prefer to ban NHST completely, forcing a complete departure from those practices,[78] while supporters suggest a less absolute change.[citation needed]

Controversy over significance testing, and its effects on publication bias in particular, has produced several results.
The American Psychological Association has strengthened its statistical reporting requirements after review,[79] medical journal publishers have recognized the obligation to publish some results that are not statistically significant to combat publication bias,[80] and a journal (Journal of Articles in Support of the Null Hypothesis) has been created to publish such results exclusively.[81] Textbooks have added some cautions,[82] and increased coverage of the tools necessary to estimate the size of the sample required to produce significant results. Few major organizations have abandoned the use of significance tests, although some have discussed doing so.[79] For instance, in 2023 the editors of the Journal of Physiology "strongly recommend the use of estimation methods for those publishing in The Journal", meaning the magnitude of the effect size (to allow readers to judge whether a finding has practical, physiological, or clinical relevance) and confidence intervals (to convey the precision of that estimate), saying "Ultimately, it is the physiological importance of the data that those publishing in The Journal of Physiology should be most concerned with, rather than the statistical significance."[83]

A unifying position of critics is that statistics should not lead to an accept/reject conclusion or decision, but to an estimated value with an interval estimate; this data-analysis philosophy is broadly referred to as estimation statistics. Estimation statistics can be accomplished with either frequentist[84] or Bayesian methods.[85][86] Critics of significance testing have advocated basing inference less on p-values and more on confidence intervals for effect sizes (for importance), prediction intervals (for confidence), replications and extensions (for replicability), and meta-analyses (for generality).[87] But none of these suggested alternatives inherently produces a decision. Lehmann said that hypothesis testing theory can be presented in terms of conclusions/decisions, probabilities, or confidence intervals: "The distinction between the ... approaches is largely one of reporting and interpretation."[26]

Bayesian inference is one proposed alternative to significance testing. (Nickerson cited 10 sources suggesting it, including Rozeboom (1960).)[18] For example, Bayesian parameter estimation can provide rich information about the data from which researchers can draw inferences, while using uncertain priors that exert only minimal influence on the results when enough data is available. Psychologist John K. Kruschke has suggested Bayesian estimation as an alternative for the t-test[85] and has also contrasted Bayesian estimation for assessing null values with Bayesian model comparison for hypothesis testing.[86] Two competing models/hypotheses can be compared using Bayes factors.[88] Bayesian methods could be criticized for requiring information that is seldom available in the cases where significance testing is most heavily used: neither the prior probabilities nor the probability distribution of the test statistic under the alternative hypothesis are often available in the social sciences.[18]

Advocates of a Bayesian approach sometimes claim that the goal of a researcher is most often to objectively assess the probability that a hypothesis is true based on the data they have collected.[89][90] Neither Fisher's significance testing nor Neyman–Pearson hypothesis testing can provide this information, and neither claims to.
The probability that a hypothesis is true can only be derived from use of Bayes' theorem, which was unsatisfactory to both the Fisher and Neyman–Pearson camps due to the explicit use of subjectivity in the form of the prior probability.[11][91] Fisher's strategy is to sidestep this with the p-value (an objective index based on the data alone) followed by inductive inference, while Neyman–Pearson devised their approach of inductive behaviour.
https://en.wikipedia.org/wiki/Hypothesis_testing
Ontology learning (ontology extraction, ontology augmentation generation, ontology generation, or ontology acquisition) is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process.

Typically, the process starts by extracting terms and concepts or noun phrases from plain text using linguistic processors such as part-of-speech tagging and phrase chunking. Then statistical[1] or symbolic[2][3] techniques are used to extract relation signatures, often based on pattern-based[4] or definition-based[5] hypernym extraction techniques.

Ontology learning (OL) is used to (semi-)automatically extract whole ontologies from natural language text.[6][7] The process is usually split into the following eight tasks, which are not all necessarily applied in every ontology learning system.

During the domain terminology extraction step, domain-specific terms are extracted, which are used in the following step (concept discovery) to derive concepts. Relevant terms can be determined, e.g., by calculation of TF/IDF values or by application of the C-value/NC-value method. The resulting list of terms has to be filtered by a domain expert. In the subsequent step, similarly to coreference resolution in information extraction, the OL system determines synonyms, because they share the same meaning and therefore correspond to the same concept. The most common methods for this are clustering and the application of statistical similarity measures.

In the concept discovery step, terms are grouped into meaning-bearing units, which correspond to an abstraction of the world and therefore to concepts. The grouped terms are the domain-specific terms and their synonyms identified in the domain terminology extraction step.

In the concept hierarchy derivation step, the OL system tries to arrange the extracted concepts in a taxonomic structure. This is mostly achieved with unsupervised hierarchical clustering methods. Because the result of such methods is often noisy, a supervision step, e.g., user evaluation, is added. A further method for deriving a concept hierarchy is the use of several patterns that indicate a sub- or supersumption relationship. Patterns like "X, that is a Y" or "X is a Y" indicate that X is a subclass of Y. Such patterns can be analyzed efficiently, but they often occur too infrequently to extract enough sub- or supersumption relationships. Instead, bootstrapping methods have been developed that learn these patterns automatically and therefore ensure broader coverage.

In the learning of non-taxonomic relations step, relationships are extracted that do not express any sub- or supersumption, e.g., works-for or located-in. There are two common approaches to this subtask. The first is based upon the extraction of anonymous associations, which are named appropriately in a second step. The second approach extracts verbs, which indicate a relationship between the entities represented by the surrounding words. The results of both approaches need to be evaluated by an ontologist to ensure accuracy.

During rule discovery,[8] axioms (formal descriptions of concepts) are generated for the extracted concepts.
This can be achieved, e.g., by analyzing the syntactic structure of a natural language definition and applying transformation rules to the resulting dependency tree. The result of this process is a list of axioms, which is then composed into a concept description and evaluated by an ontologist.

At the ontology population step, the ontology is augmented with instances of concepts and properties. For the augmentation with instances of concepts, methods based on the matching of lexico-syntactic patterns are used. Instances of properties are added through the application of bootstrapping methods, which collect relation tuples.

In the concept hierarchy extension step, the OL system tries to extend the taxonomic structure of an existing ontology with further concepts. This can be performed in a supervised manner with a trained classifier or in an unsupervised manner via the application of similarity measures.

During frame/event detection, the OL system tries to extract complex relationships from text, e.g., who departed from where to what place and when. Approaches range from applying SVMs with kernel methods to semantic role labeling (SRL)[9] to deep semantic parsing techniques.[10]

Dog4Dag (Dresden Ontology Generator for Directed Acyclic Graphs) is an ontology generation plugin for Protégé 4.1 and OBO-Edit 2.1. It allows for term generation, sibling generation, definition generation, and relationship induction. Integrated into Protégé 4.1 and OBO-Edit 2.1, DOG4DAG supports ontology extension for all common ontology formats (e.g., OWL and OBO). Its lookup services are limited largely to EBI and BioPortal extensions.[11]
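As a toy illustration of the pattern-based hypernym extraction used in the concept hierarchy derivation step above (the sentences and the naive string matching are invented; a real system operates on tagged and parsed text, not raw strings):

    #include <stdio.h>
    #include <string.h>

    /* Match the lexico-syntactic pattern "X is a Y", which suggests that
       X is a subclass of Y, and emit a candidate taxonomic relation. */
    int main(void)
    {
        const char *sentences[] = {
            "A trout is a fish.",
            "Dresden is a city.",
        };
        for (int i = 0; i < 2; i++) {
            char buf[128];
            strncpy(buf, sentences[i], sizeof buf - 1);
            buf[sizeof buf - 1] = '\0';
            char *hit = strstr(buf, " is a ");
            if (hit) {
                *hit = '\0';                        /* split at the pattern */
                char *hyper = hit + strlen(" is a ");
                hyper[strcspn(hyper, ".")] = '\0';  /* strip trailing period */
                printf("subclass(%s, %s)\n", buf, hyper);
            }
        }
        return 0;
    }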
https://en.wikipedia.org/wiki/Ontology_extraction
In computer science, a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a computer system.[1][2] The heartbeat mechanism is a common technique in mission-critical systems for providing high availability and fault tolerance of network services: it detects network or system failures of nodes or daemons belonging to a network cluster, administered by a master server, so that the system can automatically adapt and rebalance, using the remaining redundant nodes in the cluster to take over the load of failed nodes and keep providing service.[3][1] Usually a heartbeat message is sent between machines at a regular interval on the order of seconds.[4] If the endpoint does not receive a heartbeat for a time, usually a few heartbeat intervals, the machine that should have sent the heartbeat is assumed to have failed.[5] Heartbeat messages are typically sent non-stop on a periodic or recurring basis from the originator's start-up until the originator's shutdown. When the destination identifies a lack of heartbeat messages during an anticipated arrival period, it may determine that the originator has failed, shut down, or is generally no longer available.

A heartbeat protocol is generally used to negotiate and monitor the availability of a resource, such as a floating IP address, and the procedure involves sending network packets to all the nodes in the cluster to verify reachability.[3] Typically, when a heartbeat starts on a machine, it will perform an election process with other machines on the heartbeat network to determine which machine, if any, owns the resource. On heartbeat networks of more than two machines, it is important to take into account partitioning, where two halves of the network could each be functioning but unable to communicate with the other. In such a situation, it is important that the resource be owned by only one machine, not by one machine in each partition.

As a heartbeat is intended to indicate the health of a machine, it is important that the heartbeat protocol and the transport it runs on are as reliable as possible. Causing a failover because of a false alarm may, depending on the resource, be highly undesirable. It is also important to react quickly to an actual failure, which further underlines the need for reliable heartbeat messages. For this reason, it is often desirable to have a heartbeat running over more than one transport; for instance, an Ethernet segment using UDP/IP, and a serial link.
A "cluster membership" of a node is a property ofnetwork reachability: if the master can communicate with the nodex{\displaystyle x}, it's considered a member of the cluster and "dead" otherwise.[6]A heartbeat program as a whole consist of varioussubsystems:[7] Heartbeat messages are sent in a periodic manner through techniques such asbroadcastormulticastsin larger clusters.[6]Since CMs have transactions across the cluster, the most common pattern is to send heartbeat messages to all the nodes and "await" responses innon-blockingfashion.[8]Since the heartbeat orkeepalivemessages are the overwhelming majority of non-application related cluster control messages—which also goes to all the members of the cluster—major critical systems also include non-IPprotocols likeserial portsto deliver heartbeats.[9] Every CM on the master server maintains afinite-state machinewith three states for each node it administers: Down, Init, and Alive.[10]Whenever a new node joins, the CM changes the state of the node from Down to Init and broadcasts a "boot-up message", which the node receives the executes set of start-up procedures. It then responses with an acknowledgment message, CM then includes the node as the member of the cluster andtransitions the stateof the node from Init to Alive. Every node in the Alive state would receive a periodic broadcast heartbeat message from the HS subsystem and expects an acknowledgment message back within atimeout range. If CM didn't receive an acknowledgment heartbeat message back, the node is consideredunavailable, and a state transition from Alive to Down takes place for that node by CM.[11]The procedures or scripts to run, and actions to take between each state transition is animplementation detailof the system. Heartbeat network is aprivate networkwhich is shared only by the nodes in the cluster, and is not accessible from outside the cluster. It is used by cluster nodes in order to monitor each node's status and communicate with each other messages necessary for maintaining the operation of the cluster. The heartbeat method uses theFIFOnature of the signals sent across the network. By making sure that all messages have been received, the system ensures that events can be properly ordered.[12] In thiscommunications protocolevery node sends back a message in a given interval, saydelta, in effect confirming that it is alive and has a heartbeat. These messages are viewed as control messages that help determine that the network includes no delayed messages. A receiver node called a "sync", maintains an ordered list of the received messages. Once a message with atimestamplater than the given marked time is received from every node, the system determines that all messages have been received since the FIFO property ensures that the messages are ordered.[13] In general, it is difficult to select a delta that is optimal for all applications. If delta is too small, it requires too much overhead and if it is large it results in performance degradation as everything waits for the next heartbeat signal.[14]
https://en.wikipedia.org/wiki/Heartbeat_private_network
This is a list of some well-known periodic functions. The constant function f(x) = c, where c is independent of x, is periodic with any period but lacks a fundamental period. A definition is given for some of the following functions, though each function may have many equivalent definitions. All trigonometric functions listed have period 2π, unless otherwise stated. The non-trigonometric functions in the list have period p and take x as their argument; the symbol ⌊n⌋ is the floor function of n, and sgn is the sign function. Notes accompanying the original table: K(m) denotes the elliptic integral K; H is the Heaviside step function, and t is how long the pulse stays at 1; one entry is defined via f(x) = x − sin(x) and its real-valued inverse f^(−1)(x); J_n(x) is the Bessel function of the first kind.
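For concreteness, here are standard closed forms, in one common normalization among several, for two of the waveforms such a list typically contains, with period p:

    % Sawtooth wave ranging over [-1, 1) and square wave taking values +/-1,
    % using the floor and sign functions named in the text:
    \operatorname{saw}(x) = 2\left(\frac{x}{p} - \left\lfloor \frac{1}{2} + \frac{x}{p} \right\rfloor\right),
    \qquad
    \operatorname{sq}(x) = \operatorname{sgn}\!\left(\sin\frac{2\pi x}{p}\right).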
https://en.wikipedia.org/wiki/List_of_periodic_functions
In software, a stack buffer overflow or stack buffer overrun occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer.[1][2] Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, and in cases where the overflow was triggered by mistake, will often cause the program to crash or operate incorrectly. Stack buffer overflow is a type of the more general programming malfunction known as buffer overflow (or buffer overrun).[1] Overfilling a buffer on the stack is more likely to derail program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls.

A stack buffer overflow can be caused deliberately as part of an attack known as stack smashing. If the affected program is running with special privileges, or accepts data from untrusted network hosts (e.g. a web server), then the bug is a potential security vulnerability. If the stack buffer is filled with data supplied from an untrusted user, then that user can corrupt the stack in such a way as to inject executable code into the running program and take control of the process. This is one of the oldest and more reliable methods for attackers to gain unauthorized access to a computer.[3][4][5]

The canonical method for exploiting a stack-based buffer overflow is to overwrite the function return address with a pointer to attacker-controlled data (usually on the stack itself).[3][6] This is illustrated with strcpy() in the example shown after this passage. That code takes an argument from the command line and copies it to a local stack variable c. This works fine for command-line arguments smaller than 12 characters. Any argument longer than 11 characters will result in corruption of the stack. (The maximum number of characters that is safe is one less than the size of the buffer here because in the C programming language strings are terminated by a null byte character. A twelve-character input thus requires thirteen bytes to store, the input followed by the sentinel zero byte; the zero byte then ends up overwriting a memory location one byte beyond the end of the buffer.)

Consider the program stack in foo() with various inputs. When an argument larger than 11 bytes is supplied on the command line, foo() overwrites local stack data, the saved frame pointer, and, most importantly, the return address. When foo() returns, it pops the return address off the stack and jumps to that address (i.e. starts executing instructions from that address). Thus, the attacker has overwritten the return address with a pointer to the stack buffer char c[12], which now contains attacker-supplied data. In an actual stack buffer overflow exploit, the string of "A"s would instead be shellcode suitable to the platform and desired function. If this program had special privileges (e.g. the SUID bit set to run as the superuser), then the attacker could use this vulnerability to gain superuser privileges on the affected machine.[3] The attacker can also modify internal variable values to exploit some bugs. There are typically two methods used to alter the stored address in the stack: direct and indirect.
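The listing itself did not survive extraction; the following reconstruction matches the description above (a 12-byte local buffer filled by strcpy from a command-line argument) and is consistent with the widely cited form of this example, though details of the original may differ:

    #include <string.h>

    void foo(char *bar)
    {
        char c[12];
        strcpy(c, bar);   /* no bounds checking: more than 11 characters
                             overruns c and corrupts adjacent stack data */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)     /* guard added so the sketch runs without arguments */
            foo(argv[1]);
        return 0;
    }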
Attackers started developing indirect attacks, which have fewer dependencies, in order to bypass protection measures that were introduced to reduce direct attacks.[7]

A number of platforms have subtle differences in their implementation of the call stack that can affect the way a stack buffer overflow exploit will work. Some machine architectures store the top-level return address of the call stack in a register. This means that any overwritten return address will not be used until a later unwinding of the call stack. Another example of a machine-specific detail that can affect the choice of exploitation techniques is the fact that most RISC-style machine architectures will not allow unaligned access to memory.[8] Combined with a fixed length for machine opcodes, this machine limitation can make the technique of jumping to the stack almost impossible to implement (with the one exception being when the program actually contains the unlikely code to explicitly jump to the stack register).[9][10]

Within the topic of stack buffer overflows, an often-discussed but rarely seen architecture is one in which the stack grows in the opposite direction. This change in architecture is frequently suggested as a solution to the stack buffer overflow problem because any overflow of a stack buffer that occurs within the same stack frame cannot overwrite the return pointer. However, any overflow that occurs in a buffer from a previous stack frame will still overwrite a return pointer and allow for malicious exploitation of the bug.[11] For instance, in the example above, the return pointer for foo will not be overwritten because the overflow actually occurs within the stack frame for memcpy. However, because the buffer that overflows during the call to memcpy resides in a previous stack frame, the return pointer for memcpy will have a numerically higher memory address than the buffer. This means that instead of the return pointer for foo being overwritten, the return pointer for memcpy will be overwritten. At most, this means that growing the stack in the opposite direction will change some details of how stack buffer overflows are exploitable, but it will not significantly reduce the number of exploitable bugs.[citation needed]

Over the years, a number of control-flow integrity schemes have been developed to inhibit malicious stack buffer overflow exploitation. These may usually be classified into three categories.

Stack canaries, named for their analogy to a canary in a coal mine, are used to detect a stack buffer overflow before execution of malicious code can occur. This method works by placing a small integer, the value of which is randomly chosen at program start, in memory just before the stack return pointer. Most buffer overflows overwrite memory from lower to higher memory addresses, so in order to overwrite the return pointer (and thus take control of the process) the canary value must also be overwritten. This value is checked to make sure it has not changed before a routine uses the return pointer on the stack.[2] This technique can greatly increase the difficulty of exploiting a stack buffer overflow because it forces the attacker to gain control of the instruction pointer by some non-traditional means, such as corrupting other important variables on the stack.[2]

Another approach to preventing stack buffer overflow exploitation is to enforce a memory policy on the stack memory region that disallows execution from the stack (W^X, "Write XOR Execute").
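A hand-rolled illustration of the canary idea follows. Real compilers emit this check automatically (for example via GCC's -fstack-protector) and place the guard at a precise stack offset, which portable C cannot guarantee, so this is a conceptual sketch only:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Conceptual analogue of the compiler's stack guard value; real
       implementations draw it from a strong entropy source at startup. */
    static unsigned long stack_guard;

    void routine(const char *input)
    {
        unsigned long canary = stack_guard;  /* copy of the guard kept near the buffer */
        char buf[12];
        strcpy(buf, input);                  /* an overflow here would tend to clobber
                                                'canary' before the return address
                                                (layout is compiler-dependent) */
        if (canary != stack_guard) {         /* verify before the function returns */
            fprintf(stderr, "stack smashing detected\n");
            abort();
        }
    }

    int main(void)
    {
        srand(42);                           /* weak seed, for illustration only */
        stack_guard = (unsigned long)rand();
        routine("hello");
        return 0;
    }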
Another approach to preventing stack buffer overflow exploitation is to enforce a memory policy on the stack memory region that disallows execution from the stack (W^X, "Write XOR Execute"). This means that in order to execute shellcode from the stack an attacker must either find a way to disable the execution protection or find a way to put the shellcode payload in a non-protected region of memory. This method is becoming more popular now that hardware support for the no-execute flag is available in most desktop processors. While it prevents the canonical stack smashing exploit, stack overflows can be exploited in other ways. First, it is common to find ways to store shellcode in unprotected memory regions like the heap, in which case very little need change in the way of exploitation.[12]

Another attack is the so-called return-to-libc method for shellcode creation. In this attack the malicious payload loads the stack not with shellcode but with a proper call stack, so that execution is vectored to a chain of standard library calls, usually with the effect of disabling memory execute protections and allowing shellcode to run as normal.[13] This works because execution never actually vectors to the stack itself. A variant of return-to-libc is return-oriented programming (ROP), which sets up a series of return addresses, each of which executes a small sequence of cherry-picked machine instructions within the existing program code or system libraries, each sequence ending with a return. These so-called gadgets each accomplish some simple register manipulation or similar before returning, and stringing them together achieves the attacker's ends. It is even possible to use "returnless" return-oriented programming by exploiting instructions or groups of instructions that behave much like a return instruction.[14]

Instead of separating the code from the data, another mitigation technique is to introduce randomization into the memory space of the executing program. Since the attacker needs to determine where usable executable code resides, either an executable payload is provided (requiring an executable stack) or one is constructed using code reuse, as in ret2libc or return-oriented programming (ROP). Randomizing the memory layout will, in concept, prevent the attacker from knowing where any code is. However, implementations typically do not randomize everything: usually the executable itself is loaded at a fixed address, so even when ASLR (address space layout randomization) is combined with a non-executable stack, the attacker can use this fixed region of memory. Therefore, all programs should be compiled with PIE (position-independent executables) so that even this region of memory is randomized. The entropy of the randomization differs from implementation to implementation, and too low an entropy can itself be a problem in terms of brute-forcing the randomized memory space.
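The effect of ASLR and PIE can be observed directly with a small test program such as the following sketch (ours, not from the article). Run it twice and compare the output: under ASLR the stack and library addresses change between runs, while the code address changes only if the binary was built position-independent (for example with gcc -fPIE -pie):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    int local = 0;

    /* Where the stack, the C library, and the program's own code ended up
       in this run's address space. Casting function pointers to void * for
       %p is a common POSIX-ism, not strict ISO C. */
    printf("stack: %p\n", (void *)&local);
    printf("libc:  %p\n", (void *)&strcpy);
    printf("code:  %p\n", (void *)&main);
    return 0;
}
```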
The previous mitigations make the steps of exploitation harder, but it is still possible to exploit a stack buffer overflow if certain vulnerabilities are present or certain conditions are met.[15]

An attacker can exploit a format string vulnerability to reveal memory locations in the vulnerable program.[16]

When Data Execution Prevention is enabled to forbid any execute access to the stack, the attacker can still use the overwritten return address (the instruction pointer) to point to data in a code segment (.text on Linux) or any other executable section of the program. The goal is to reuse existing code.[17] One such technique consists of overwriting the return pointer to point slightly before a return instruction (ret on x86) of the program: the instructions between the new return pointer and the return instruction are executed, and the return instruction then transfers control to the payload controlled by the exploiter.[17]

Jump-oriented programming is a technique that uses jump instructions, instead of the ret instruction, to reuse code.[18]

A limitation of ASLR implementations on 64-bit systems is that they are vulnerable to memory disclosure and information leakage attacks: by revealing a single function address through an information leak, the attacker can launch a ROP attack that breaks down the ASLR protection.[19]
https://en.wikipedia.org/wiki/Stack_canary
In analytic number theory and related branches of mathematics, a complex-valued arithmetic function $\chi:\mathbb{Z}\rightarrow\mathbb{C}$ is a Dirichlet character of modulus $m$ (where $m$ is a positive integer) if for all integers $a$ and $b$:[1]

1) $\chi(ab)=\chi(a)\chi(b)$; that is, $\chi$ is completely multiplicative.
2) $\chi(a)\neq 0$ if and only if $\gcd(a,m)=1$.
3) $\chi(a+m)=\chi(a)$; that is, $\chi$ is periodic with period $m$.

The simplest possible character, called the principal character and usually denoted $\chi_0$ (see Notation below), exists for all moduli: $\chi_0(a)=1$ if $\gcd(a,m)=1$ and $\chi_0(a)=0$ otherwise.[2]

The German mathematician Peter Gustav Lejeune Dirichlet, for whom the character is named, introduced these functions in his 1837 paper on primes in arithmetic progressions.[3][4]

Notation used below: $\phi(n)$ is Euler's totient function;[5] $\zeta_n=e^{2\pi i/n}$ is a complex primitive $n$-th root of unity; $(\mathbb{Z}/m\mathbb{Z})^\times$ is the group of units mod $m$, of order $\phi(m)$; $\widehat{(\mathbb{Z}/m\mathbb{Z})^\times}$ is the group of Dirichlet characters mod $m$; $p, p_k,$ etc. are prime numbers; $(m,n)$ is a standard[6] abbreviation[7] for $\gcd(m,n)$; and $\chi(a), \chi'(a), \chi_r(a),$ etc. are Dirichlet characters ($\chi$ is the lowercase Greek letter chi, for "character").

There is no standard notation for Dirichlet characters that includes the modulus. In many contexts (such as in the proof of Dirichlet's theorem) the modulus is fixed; in other contexts, such as this article, characters of different moduli appear. Where appropriate this article employs a variation of Conrey labeling (introduced by Brian Conrey and used by the LMFDB). In this labeling, characters for modulus $m$ are denoted $\chi_{m,t}(a)$, where the index $t$ is described in the section on the group of characters below; $\chi_{m,\_}(a)$ denotes an unspecified character and $\chi_{m,1}(a)$ the principal character mod $m$.
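As a concrete illustration of the definition and the labeling (this worked example is ours), the modulus $m=4$ has $\phi(4)=2$ characters, the principal $\chi_{4,1}$ and one nonprincipal character $\chi_{4,3}$:

```latex
\chi_{4,1}(a)=\begin{cases}1 & a\equiv 1,3\pmod 4\\ 0 & a\equiv 0,2\pmod 4\end{cases}
\qquad
\chi_{4,3}(a)=\begin{cases}1 & a\equiv 1\pmod 4\\ -1 & a\equiv 3\pmod 4\\ 0 & a\equiv 0,2\pmod 4\end{cases}
```

Both are completely multiplicative, vanish exactly on the integers not coprime to 4, and have period 4, as properties 1) - 3) require.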
The word "character" is used several ways in mathematics. In this section it refers to a homomorphism $\eta$ from a group $G$ (written multiplicatively) to the multiplicative group $\mathbb{C}^\times$ of the field of complex numbers. The set of characters is denoted $\widehat{G}$. If the product of two characters is defined by pointwise multiplication, $\eta\theta(a)=\eta(a)\theta(a)$, the identity by the trivial character, $\eta_0(a)=1$, and the inverse by complex inversion, $\eta^{-1}(a)=\eta(a)^{-1}$, then $\widehat{G}$ becomes an abelian group.[8]

If $A$ is a finite abelian group then[9] there is an isomorphism $A\cong\widehat{A}$, and the orthogonality relations hold:[10] $\sum_{a\in A}\eta(a)$ equals $|A|$ if $\eta=\eta_0$ and $0$ otherwise, and $\sum_{\eta\in\widehat{A}}\eta(a)$ equals $|A|$ if $a$ is the identity and $0$ otherwise.

The elements of the finite abelian group $(\mathbb{Z}/m\mathbb{Z})^\times$ are the residue classes $[a]=\{x:x\equiv a\pmod m\}$ where $(a,m)=1$. A group character $\rho:(\mathbb{Z}/m\mathbb{Z})^\times\rightarrow\mathbb{C}^\times$ can be extended to a Dirichlet character $\chi:\mathbb{Z}\rightarrow\mathbb{C}$ by defining $\chi(a)=\rho([a])$ for $(a,m)=1$ and $\chi(a)=0$ for $(a,m)>1$; conversely, a Dirichlet character mod $m$ defines a group character on $(\mathbb{Z}/m\mathbb{Z})^\times$.

Paraphrasing Davenport,[11] Dirichlet characters can be regarded as a particular case of abelian group characters. But this article follows Dirichlet in giving a direct and constructive account of them. This is partly for historical reasons, in that Dirichlet's work preceded by several decades the development of group theory, and partly for a mathematical reason, namely that the group in question has a simple and interesting structure which is obscured if one treats it as one treats the general abelian group.

Several further properties follow from the definition:

4) Since $\gcd(1,m)=1$, property 2) says $\chi(1)\neq 0$, so it can be canceled from both sides of $\chi(1)\chi(1)=\chi(1\times 1)=\chi(1)$, giving $\chi(1)=1$.

5) Property 3) is equivalent to saying that if $a\equiv b\pmod m$ then $\chi(a)=\chi(b)$.

6) Property 1) implies that, for any positive integer $n$, $\chi(a^n)=\chi(a)^n$.

7) Euler's theorem states that if $(a,m)=1$ then $a^{\phi(m)}\equiv 1\pmod m$. Therefore $\chi(a)^{\phi(m)}=\chi(a^{\phi(m)})=\chi(1)=1$. That is, the nonzero values of $\chi(a)$ are $\phi(m)$-th roots of unity: $\chi(a)=\zeta_{\phi(m)}^r$ for some integer $r$ which depends on $\chi, \zeta,$ and $a$. This implies there are only a finite number of characters for a given modulus.

8) If $\chi$ and $\chi'$ are two characters for the same modulus, so is their product $\chi\chi'$, defined by pointwise multiplication: $\chi\chi'(a)=\chi(a)\chi'(a)$. The principal character is an identity: $\chi\chi_0=\chi$.

9) Let $a^{-1}$ denote the inverse of $a$ in $(\mathbb{Z}/m\mathbb{Z})^\times$. Then $\chi(a)\chi(a^{-1})=\chi(aa^{-1})=\chi(1)=1$. The complex conjugate of a root of unity is also its inverse, so for $(a,m)=1$, $\overline{\chi}(a)=\chi(a)^{-1}=\chi(a^{-1})$; thus for all integers $a$, $\chi(a)\overline{\chi}(a)=\chi_0(a)$.

10) The multiplication and identity defined in 8) and the inversion defined in 9) turn the set of Dirichlet characters for a given modulus into a finite abelian group.
There are three different cases because the groups $(\mathbb{Z}/m\mathbb{Z})^\times$ have different structures depending on whether $m$ is a power of 2, a power of an odd prime, or the product of prime powers.[14]

If $q=p^k$ is an odd prime power, $(\mathbb{Z}/q\mathbb{Z})^\times$ is cyclic of order $\phi(q)$; a generator is called a primitive root mod $q$.[15] Let $g_q$ be a primitive root, and for $(a,q)=1$ define the function $\nu_q(a)$ (the index of $a$) by $g_q^{\nu_q(a)}\equiv a\pmod q$. For $(ab,q)=1$, $a\equiv b\pmod q$ if and only if $\nu_q(a)=\nu_q(b)$. Since $\chi(a)=\chi(g_q^{\nu_q(a)})=\chi(g_q)^{\nu_q(a)}$, a character is determined by its value at $g_q$. Let $\omega_q=\zeta_{\phi(q)}$ be a primitive $\phi(q)$-th root of unity. From property 7) above, the possible values of $\chi(g_q)$ are $\omega_q, \omega_q^2, \dots, \omega_q^{\phi(q)}=1$. These distinct values give rise to $\phi(q)$ Dirichlet characters mod $q$: for $(r,q)=1$ define $\chi_{q,r}(a)$ as $0$ if $(a,q)>1$ and as $\omega_q^{\nu_q(r)\nu_q(a)}$ if $(a,q)=1$. Then for $(rs,q)=1$ and all $a$ and $b$, $\chi_{q,r}(a)\chi_{q,r}(b)=\chi_{q,r}(ab)$ (each $\chi_{q,r}$ is completely multiplicative) and $\chi_{q,r}(a)\chi_{q,s}(a)=\chi_{q,rs}(a)$.

For example, 2 is a primitive root mod 3 ($\phi(3)=2$), with indices $\nu_3(1)=0$ and $\nu_3(2)=1$; 2 is a primitive root mod 5 ($\phi(5)=4$), with $(\nu_5(1),\nu_5(2),\nu_5(3),\nu_5(4))=(0,1,3,2)$; 3 is a primitive root mod 7 ($\phi(7)=6$); and 2 is a primitive root mod 9 ($\phi(9)=6$). The nonzero values of the characters mod 3, 5, 7, and 9 follow from the formula above; for moduli 7 and 9 they are powers of $\omega=\zeta_6$ (with $\omega^3=-1$).
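As a worked example (ours), take $q=5$ with primitive root 2, so that $\omega_5=\zeta_4=i$. The formula $\chi_{5,r}(a)=i^{\nu_5(r)\nu_5(a)}$ gives the following nonzero values:

```latex
\begin{array}{c|cccc}
            & a=1 & a=2 & a=3 & a=4 \\ \hline
\chi_{5,1}  & 1   & 1   & 1   & 1   \\
\chi_{5,2}  & 1   & i   & -i  & -1  \\
\chi_{5,3}  & 1   & -i  & i   & -1  \\
\chi_{5,4}  & 1   & -1  & -1  & 1
\end{array}
```

Note that $\chi_{5,4}$ is real: it is the quadratic character (Legendre symbol) mod 5, taking the value 1 on the quadratic residues 1 and 4.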
$(\mathbb{Z}/2\mathbb{Z})^\times$ is the trivial group with one element, and $(\mathbb{Z}/4\mathbb{Z})^\times$ is cyclic of order 2. For 8, 16, and higher powers of 2 there is no primitive root; the powers of 5 are the units $\equiv 1\pmod 4$ and their negatives are the units $\equiv 3\pmod 4$.[16] For example, mod 16 the powers of 5 are 1, 5, 9, 13 and their negatives are 15, 11, 7, 3.

Let $q=2^k$, $k\geq 3$; then $(\mathbb{Z}/q\mathbb{Z})^\times$ is the direct product of a cyclic group of order 2 (generated by $-1$) and a cyclic group of order $\phi(q)/2$ (generated by 5). For odd numbers $a$, define the functions $\nu_0(a)$ and $\nu_q(a)$ by $a\equiv(-1)^{\nu_0(a)}5^{\nu_q(a)}\pmod q$. For odd $a$ and $b$, $a\equiv b\pmod q$ if and only if $\nu_0(a)=\nu_0(b)$ and $\nu_q(a)=\nu_q(b)$, and for odd $a$ the value of $\chi(a)$ is determined by the values of $\chi(-1)$ and $\chi(5)$. Let $\omega_q=\zeta_{\phi(q)/2}$ be a primitive $\phi(q)/2$-th root of unity. The possible values of $\chi((-1)^{\nu_0(a)}5^{\nu_q(a)})$ are $\pm\omega_q, \pm\omega_q^2, \dots, \pm\omega_q^{\phi(q)/2}=\pm 1$. These distinct values give rise to $\phi(q)$ Dirichlet characters mod $q$: for odd $r$ define $\chi_{q,r}(a)$ as $0$ for even $a$ and as $(-1)^{\nu_0(r)\nu_0(a)}\,\omega_q^{\nu_q(r)\nu_q(a)}$ for odd $a$. Then, as in the odd-prime-power case, for odd $r$ and $s$ and all $a$ and $b$, $\chi_{q,r}(a)\chi_{q,r}(b)=\chi_{q,r}(ab)$ and $\chi_{q,r}(a)\chi_{q,s}(a)=\chi_{q,rs}(a)$.

The only character mod 2 is the principal character $\chi_{2,1}$. $-1$ is a primitive root mod 4 ($\phi(4)=2$), and the nonzero values of the characters mod 4 are $\chi_{4,1}(1,3)=(1,1)$ and $\chi_{4,3}(1,3)=(1,-1)$. $-1$ and 5 generate the units mod 8 ($\phi(8)=4$) and mod 16 ($\phi(16)=8$); the character tables mod 8 and mod 16 follow from the formula above.

Let $m=p_1^{m_1}p_2^{m_2}\cdots p_k^{m_k}=q_1q_2\cdots q_k$, where $p_1<p_2<\dots<p_k$, be the factorization of $m$ into prime powers. The group of units mod $m$ is isomorphic to the direct product of the groups mod the $q_i$:[17] this means that 1) there is a one-to-one correspondence between $a\in(\mathbb{Z}/m\mathbb{Z})^\times$ and $k$-tuples $(a_1,a_2,\dots,a_k)$ with $a_i\in(\mathbb{Z}/q_i\mathbb{Z})^\times$, and 2) multiplication mod $m$ corresponds to coordinate-wise multiplication of $k$-tuples. The Chinese remainder theorem (CRT) implies that the $a_i$ are simply $a_i\equiv a\pmod{q_i}$.

There are subgroups $G_i<(\mathbb{Z}/m\mathbb{Z})^\times$ such that[18] $G_i\cong(\mathbb{Z}/q_i\mathbb{Z})^\times$ and every element of $G_i$ is $\equiv 1\pmod{q_j}$ for $j\neq i$. Then $(\mathbb{Z}/m\mathbb{Z})^\times\cong G_1\times G_2\times\dots\times G_k$, and every $a\in(\mathbb{Z}/m\mathbb{Z})^\times$ corresponds to a $k$-tuple $(a_1,a_2,\dots,a_k)$ with $a_i\in G_i$ and $a_i\equiv a\pmod{q_i}$. Every $a\in(\mathbb{Z}/m\mathbb{Z})^\times$ can be uniquely factored as $a=a_1a_2\dots a_k$.[19][20]

If $\chi_{m,\_}$ is a character mod $m$, then on the subgroup $G_i$ it must be identical to some $\chi_{q_i,\_}$ mod $q_i$. Then $\chi_{m,\_}(a)=\chi_{m,\_}(a_1a_2\cdots a_k)=\prod_i\chi_{q_i,\_}(a_i)$, showing that every character mod $m$ is the product of characters mod the $q_i$.
For $(t,m)=1$, define[21] $\chi_{m,t}(a)=\chi_{q_1,t}(a)\,\chi_{q_2,t}(a)\cdots\chi_{q_k,t}(a)$. Then for $(rs,m)=1$ and all $a$ and $b$,[22] $\chi_{m,r}(a)\chi_{m,r}(b)=\chi_{m,r}(ab)$ and $\chi_{m,r}(a)\chi_{m,s}(a)=\chi_{m,rs}(a)$.

For example, $(\mathbb{Z}/15\mathbb{Z})^\times\cong(\mathbb{Z}/3\mathbb{Z})^\times\times(\mathbb{Z}/5\mathbb{Z})^\times$, so each character mod 15 factors as the product of a character mod 3 and a character mod 5, and its nonzero values can be tabulated accordingly. Likewise $(\mathbb{Z}/24\mathbb{Z})^\times\cong(\mathbb{Z}/8\mathbb{Z})^\times\times(\mathbb{Z}/3\mathbb{Z})^\times$ and $(\mathbb{Z}/40\mathbb{Z})^\times\cong(\mathbb{Z}/8\mathbb{Z})^\times\times(\mathbb{Z}/5\mathbb{Z})^\times$, and the characters mod 24 and mod 40 factor in the same way.

Let $m=p_1^{k_1}p_2^{k_2}\cdots=q_1q_2\cdots$, $p_1<p_2<\dots$, be the factorization of $m$ and assume $(rs,m)=1$. There are $\phi(m)$ Dirichlet characters mod $m$. They are denoted by $\chi_{m,r}$, where $\chi_{m,r}=\chi_{m,s}$ is equivalent to $r\equiv s\pmod m$. The identity $\chi_{m,r}(a)\chi_{m,s}(a)=\chi_{m,rs}(a)$ is an isomorphism $\widehat{(\mathbb{Z}/m\mathbb{Z})^\times}\cong(\mathbb{Z}/m\mathbb{Z})^\times$.[23] Each character mod $m$ has a unique factorization as the product of characters mod the prime powers dividing $m$: $\chi_{m,r}=\chi_{q_1,r}\,\chi_{q_2,r}\cdots$. If $m=m_1m_2$ with $(m_1,m_2)=1$, the product $\chi_{m_1,r}\,\chi_{m_2,s}$ is a character $\chi_{m,t}$, where $t$ is given by $t\equiv r\pmod{m_1}$ and $t\equiv s\pmod{m_2}$. Also,[24][25] $\chi_{m,r}(s)=\chi_{m,s}(r)$.

The two orthogonality relations are[26] $\sum_{a=0}^{m-1}\chi(a)=\phi(m)$ if $\chi=\chi_0$ and $0$ if $\chi\neq\chi_0$, and $\sum_{\chi}\chi(a)=\phi(m)$ if $a\equiv 1\pmod m$ and $0$ if $a\not\equiv 1\pmod m$. The relations can be written in the symmetric form $\sum_{a=0}^{m-1}\chi_{m,r}(a)\overline{\chi_{m,s}(a)}=\phi(m)$ or $0$ according as $r\equiv s\pmod m$ or not, and $\sum_{(r,m)=1}\chi_{m,r}(a)\overline{\chi_{m,r}(b)}=\phi(m)$ or $0$ according as $a\equiv b\pmod m$ or not (for $(ab,m)=1$).

The first relation is easy to prove: if $\chi=\chi_0$ there are $\phi(m)$ nonzero summands, each equal to 1. If $\chi\neq\chi_0$ there is[27] some $a^*$ with $(a^*,m)=1$ and $\chi(a^*)\neq 1$. Then $\chi(a^*)\sum_a\chi(a)=\sum_a\chi(a^*a)=\sum_a\chi(a)$, so $(\chi(a^*)-1)\sum_a\chi(a)=0$, forcing the sum to vanish. The second relation can be proven directly in the same way, but requires a lemma:[29] if $a\not\equiv 1\pmod m$ and $(a,m)=1$, there is a character $\chi$ with $\chi(a)\neq 1$.
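For instance (our check, using the characters mod 4 written out earlier), with $m=4$ and $\phi(4)=2$:

```latex
\sum_{a=0}^{3}\chi_{4,3}(a)=0+1+0+(-1)=0,
\qquad
\sum_{\chi}\chi(3)=\chi_{4,1}(3)+\chi_{4,3}(3)=1+(-1)=0,
```

in agreement with the relations, since $\chi_{4,3}\neq\chi_0$ and $3\not\equiv 1\pmod 4$.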
The second relation has an important corollary: for $(a,m)=1$, define the function $f_a(n)=\frac{1}{\phi(m)}\sum_{\chi}\overline{\chi(a)}\,\chi(n)$. Then $f_a=\mathbb{1}_{[a]}$, the indicator function of the residue class $[a]=\{x:x\equiv a\pmod m\}$. It is basic in the proof of Dirichlet's theorem.[30][31]

Any character mod a prime power is also a character mod every larger power. For example, mod 16,[32] $\chi_{16,3}$ has period 16, but $\chi_{16,9}$ has period 8 and $\chi_{16,15}$ has period 4: $\chi_{16,9}=\chi_{8,5}$ and $\chi_{16,15}=\chi_{8,7}=\chi_{4,3}$.

We say that a character $\chi$ of modulus $q$ has a quasiperiod of $d$ if $\chi(m)=\chi(n)$ for all $m, n$ coprime to $q$ satisfying $m\equiv n\pmod d$.[33] For example, $\chi_{2,1}$, the only Dirichlet character of modulus 2, has a quasiperiod of 1, but not a period of 1 (it has a period of 2, though). The smallest positive integer for which $\chi$ is quasiperiodic is the conductor of $\chi$.[34] So, for instance, $\chi_{2,1}$ has a conductor of 1; the conductor of $\chi_{16,3}$ is 16, the conductor of $\chi_{16,9}$ is 8, and that of $\chi_{16,15}$ and $\chi_{8,7}$ is 4. If the modulus and conductor are equal, the character is primitive; otherwise it is imprimitive. An imprimitive character is induced by the character for the smallest modulus: $\chi_{16,9}$ is induced from $\chi_{8,5}$, and $\chi_{16,15}$ and $\chi_{8,7}$ are induced from $\chi_{4,3}$.

A related phenomenon can happen with a character mod the product of primes: its nonzero values may be periodic with a smaller period. For example, mod 15, the nonzero values of $\chi_{15,8}$ have period 15, but those of $\chi_{15,11}$ have period 3 and those of $\chi_{15,13}$ have period 5, as is easily seen by juxtaposing them with the characters mod 3 and 5. If a character mod $m=qr$ (with $(q,r)=1$, $q>1$, $r>1$) is defined as the product of a character mod $q$ and the principal character mod $r$, its nonzero values are determined by the character mod $q$ and have period $q$. The smallest period of the nonzero values is the conductor of the character. For example, the conductor of $\chi_{15,8}$ is 15, the conductor of $\chi_{15,11}$ is 3, and that of $\chi_{15,13}$ is 5. As in the prime-power case, if the conductor equals the modulus the character is primitive, otherwise imprimitive; if imprimitive it is induced from the character with the smaller modulus. For example, $\chi_{15,11}$ is induced from $\chi_{3,2}$ and $\chi_{15,13}$ is induced from $\chi_{5,3}$.

The principal character is not primitive.[35] The character $\chi_{m,r}=\chi_{q_1,r}\,\chi_{q_2,r}\cdots$ is primitive if and only if each of the factors is primitive.[36] Primitive characters often simplify (or make possible) formulas in the theories of L-functions[37] and modular forms.

$\chi(a)$ is even if $\chi(-1)=1$ and odd if $\chi(-1)=-1$. This distinction appears in the functional equation of the Dirichlet L-function.
The order of a character is its order as an element of the group $\widehat{(\mathbb{Z}/m\mathbb{Z})^\times}$, i.e. the smallest positive integer $n$ such that $\chi^n=\chi_0$. Because of the isomorphism $\widehat{(\mathbb{Z}/m\mathbb{Z})^\times}\cong(\mathbb{Z}/m\mathbb{Z})^\times$, the order of $\chi_{m,r}$ is the same as the order of $r$ in $(\mathbb{Z}/m\mathbb{Z})^\times$. The principal character has order 1; other real characters have order 2, and imaginary characters have order 3 or greater. By Lagrange's theorem the order of a character divides the order of $\widehat{(\mathbb{Z}/m\mathbb{Z})^\times}$, which is $\phi(m)$.

$\chi(a)$ is real or quadratic if all of its values are real (they must be $0, \pm 1$); otherwise it is complex or imaginary. $\chi$ is real if and only if $\chi^2=\chi_0$; $\chi_{m,k}$ is real if and only if $k^2\equiv 1\pmod m$; in particular, $\chi_{m,-1}$ is real and non-principal.[38]

Dirichlet's original proof that $L(1,\chi)\neq 0$ (which was only valid for prime moduli) took two different forms depending on whether $\chi$ was real or not. His later proof, valid for all moduli, was based on his class number formula.[39][40]

Real characters are Kronecker symbols;[41] for example, the principal character can be written[42] $\chi_{m,1}=\left(\frac{m^2}{\bullet}\right)$. The real characters in the examples above are as follows.

If $m=p_1^{k_1}p_2^{k_2}\cdots$, $p_1<p_2<\cdots$, the principal character is[43] $\chi_{m,1}=\left(\frac{p_1^2p_2^2\cdots}{\bullet}\right)$: $\chi_{16,1}=\chi_{8,1}=\chi_{4,1}=\chi_{2,1}=\left(\frac{4}{\bullet}\right)$, $\chi_{9,1}=\chi_{3,1}=\left(\frac{9}{\bullet}\right)$, $\chi_{5,1}=\left(\frac{25}{\bullet}\right)$, $\chi_{7,1}=\left(\frac{49}{\bullet}\right)$, $\chi_{15,1}=\left(\frac{225}{\bullet}\right)$, $\chi_{24,1}=\left(\frac{36}{\bullet}\right)$, $\chi_{40,1}=\left(\frac{100}{\bullet}\right)$.

If the modulus is the absolute value of a fundamental discriminant there is a real primitive character (there are two if the modulus is a multiple of 8); otherwise, if there are any primitive characters,[36] they are imaginary.[44] The primitive real characters in the examples are: $\chi_{3,2}=\left(\frac{-3}{\bullet}\right)$, $\chi_{4,3}=\left(\frac{-4}{\bullet}\right)$, $\chi_{5,4}=\left(\frac{5}{\bullet}\right)$, $\chi_{7,6}=\left(\frac{-7}{\bullet}\right)$, $\chi_{8,3}=\left(\frac{-8}{\bullet}\right)$, $\chi_{8,5}=\left(\frac{8}{\bullet}\right)$, $\chi_{15,14}=\left(\frac{-15}{\bullet}\right)$, $\chi_{24,5}=\left(\frac{-24}{\bullet}\right)$, $\chi_{24,11}=\left(\frac{24}{\bullet}\right)$, $\chi_{40,19}=\left(\frac{-40}{\bullet}\right)$, $\chi_{40,29}=\left(\frac{40}{\bullet}\right)$.

The imprimitive real characters in the examples are: $\chi_{8,7}=\chi_{4,3}=\left(\frac{-4}{\bullet}\right)$, $\chi_{9,8}=\chi_{3,2}=\left(\frac{-3}{\bullet}\right)$, $\chi_{15,4}=\chi_{5,4}\chi_{3,1}=\left(\frac{45}{\bullet}\right)$, $\chi_{15,11}=\chi_{3,2}\chi_{5,1}=\left(\frac{-75}{\bullet}\right)$, $\chi_{16,7}=\chi_{8,3}=\left(\frac{-8}{\bullet}\right)$, $\chi_{16,9}=\chi_{8,5}=\left(\frac{8}{\bullet}\right)$, $\chi_{16,15}=\chi_{4,3}=\left(\frac{-4}{\bullet}\right)$, $\chi_{24,7}=\chi_{8,7}\chi_{3,1}=\chi_{4,3}\chi_{3,1}=\left(\frac{-36}{\bullet}\right)$, $\chi_{24,13}=\chi_{8,5}\chi_{3,1}=\left(\frac{72}{\bullet}\right)$, $\chi_{24,17}=\chi_{3,2}\chi_{8,1}=\left(\frac{-12}{\bullet}\right)$, $\chi_{24,19}=\chi_{8,3}\chi_{3,1}=\left(\frac{-72}{\bullet}\right)$, $\chi_{24,23}=\chi_{8,7}\chi_{3,2}=\chi_{4,3}\chi_{3,2}=\left(\frac{12}{\bullet}\right)$, $\chi_{40,9}=\chi_{5,4}\chi_{8,1}=\left(\frac{20}{\bullet}\right)$, $\chi_{40,11}=\chi_{8,3}\chi_{5,1}=\left(\frac{-200}{\bullet}\right)$, $\chi_{40,21}=\chi_{8,5}\chi_{5,1}=\left(\frac{200}{\bullet}\right)$, $\chi_{40,31}=\chi_{8,7}\chi_{5,1}=\chi_{4,3}\chi_{5,1}=\left(\frac{-100}{\bullet}\right)$, $\chi_{40,39}=\chi_{8,7}\chi_{5,4}=\chi_{4,3}\chi_{5,4}=\left(\frac{-20}{\bullet}\right)$.

The Dirichlet L-series for a character $\chi$ is $L(s,\chi)=\sum_{n=1}^{\infty}\frac{\chi(n)}{n^s}$. This series only converges for $\Re(s)>1$; it can be analytically continued to a meromorphic function. Dirichlet introduced the L-function along with the characters in his 1837 paper.

Dirichlet characters appear several places in the theory of modular forms and functions. A typical example is the twisting construction:[45] if $f(z)=\sum a_n q^n$ is a modular form of weight $k$, level $M$, and character $\chi$, and $\chi_1$ is a primitive character mod $N$, then (under suitable hypotheses) the twist $f_{\chi_1}(z)=\sum\chi_1(n)a_n q^n$ is a modular form of weight $k$, level $MN^2$, and character $\chi\chi_1^2$. See theta series of a Dirichlet character for another example.

The Gauss sum of a Dirichlet character modulo $N$ is $G(\chi)=\sum_{a=1}^{N}\chi(a)\,e^{2\pi ia/N}$. It appears in the functional equation of the Dirichlet L-function.

If $\chi$ and $\psi$ are Dirichlet characters mod a prime $p$, their Jacobi sum is $J(\chi,\psi)=\sum_{a\bmod p}\chi(a)\,\psi(1-a)$. Jacobi sums can be factored into products of Gauss sums.

If $\chi$ is a Dirichlet character mod $q$ and $\zeta=e^{2\pi i/q}$, the Kloosterman sum $K(a,b,\chi)$ is defined as[48] $K(a,b,\chi)=\sum_{(r,q)=1}\chi(r)\,\zeta^{ar+br^{-1}}$, where $r^{-1}$ is the inverse of $r$ mod $q$. If $b=0$ it is a Gauss sum.

It is not necessary to establish the defining properties 1) - 3) to show that a function is a Dirichlet character.
If a function $\mathrm{X}:\mathbb{Z}\rightarrow\mathbb{C}$ satisfies suitable multiplicativity and periodicity conditions, weaker than properties 1) - 3), then $\mathrm{X}(a)$ is already one of the $\phi(m)$ characters mod $m$.[49]

A completely multiplicative function $f:\mathbb{N}\rightarrow\mathbb{C}$ that satisfies a linear recurrence relation is a Dirichlet character: that is, if $a_1f(n+b_1)+\cdots+a_kf(n+b_k)=0$ for all positive integers $n$, where $a_1,\ldots,a_k$ are not all zero and $b_1,\ldots,b_k$ are distinct, then $f$ is a Dirichlet character.[50]

A Dirichlet character is a completely multiplicative function $f:\mathbb{N}\rightarrow\mathbb{C}$ satisfying the following three properties: a) $f$ takes only finitely many values; b) $f$ vanishes at only finitely many primes; and c) there is an $\alpha\in\mathbb{C}$ for which the remainder $\left|\sum_{n\leq x}f(n)-\alpha x\right|$ is uniformly bounded as $x\rightarrow\infty$. This equivalent definition of Dirichlet characters was conjectured by Chudakov[51] in 1956 and proved in 2017 by Klurman and Mangerel.[52]
https://en.wikipedia.org/wiki/Dirichlet_character
The Swedish Armed Forces' radio alphabet was a radiotelephony alphabet made up of Swedish two-syllable male names, with the exception of Z, which is just the name of the letter as pronounced in Swedish. Since 2006 the Swedish Armed Forces have been instructed to use the NATO alphabet instead of the original Swedish alphabet, along with an adaptation of the NATO voice procedures, since most activity takes place in various international UN and NATO missions. This has since been changed back, because administrative authorities, including the Swedish Armed Forces, are required by Swedish law to use the Swedish language. The alphabet is also used for civil communications in Sweden, one example being local flights operating under VFR.
https://en.wikipedia.org/wiki/Swedish_Armed_Forces_radio_alphabet
Cherry picking, suppressing evidence, or the fallacy of incomplete evidence is the act of pointing to individual cases or data that seem to confirm a particular position while ignoring a significant portion of related and similar cases or data that may contradict that position. Cherry picking may be committed intentionally or unintentionally.[2]

The term is based on the perceived process of harvesting fruit, such as cherries. The picker would be expected to select only the ripest and healthiest fruits. An observer who sees only the selected fruit may thus wrongly conclude that most, or even all, of the tree's fruit is in a likewise good condition. This can also give a false impression of the quality of the fruit (since it is only a sample and is not a representative sample). A concept sometimes confused with cherry picking is the idea of gathering only the fruit that is easy to harvest, while ignoring other fruit that is higher up on the tree and thus more difficult to obtain (see low-hanging fruit).

Cherry picking has a negative connotation, as the practice neglects, overlooks or directly suppresses evidence that could lead to a complete picture. Cherry picking can be found in many logical fallacies. For example, the "fallacy of anecdotal evidence" tends to overlook large amounts of data in favor of that known personally, "selective use of evidence" rejects material unfavorable to an argument, while a false dichotomy picks only two options when more are available. Some scholars classify cherry picking as a fallacy of selective attention, the most common example of which is confirmation bias.[3] Cherry picking can refer to the selection of data or data sets so that a study or survey will give desired, predictable results, which may be misleading or even completely contrary to reality.[4]

A story about the 5th-century BCE atheist philosopher Diagoras of Melos tells how, when shown the votive gifts of people who had supposedly escaped death by shipwreck by praying to gods, he pointed out that many people had died at sea in spite of their prayers, yet these cases were not likewise commemorated[5] (an example of survivorship bias). Michel de Montaigne (1533–1592), in his essay on prophecies, comments on people willing to believe in the validity of supposed seers:

I see some who are mightily given to study and comment upon their almanacs, and produce them to us as an authority when anything has fallen out pat; and, for that matter, it is hardly possible but that these alleged authorities sometimes stumble upon a truth amongst an infinite number of lies. ... I think never the better of them for some such accidental hit. ... [N]obody records their flimflams and false prognostics, forasmuch as they are infinite and common; but if they chop upon one truth, that carries a mighty report, as being rare, incredible, and prodigious.[6]

Cherry picking is one of the epistemological characteristics of denialism and is widely used by science denialists to seemingly contradict scientific findings.
For example, it is used in climate change denial, evolution denial by creationists, and denial of the negative health effects of consuming tobacco products and of passive smoking.[1] Choosing to make selective choices among competing evidence, so as to emphasize those results that support a given position while ignoring or dismissing any findings that do not support it, is a practice known as "cherry picking" and is a hallmark of poor science or pseudo-science.[7] Rigorous science, by contrast, "looks at all the evidence (rather than cherry picking only favorable evidence), controls for variables so as to identify what is actually working, uses blinded observations so as to minimize the effects of bias, and uses internally consistent logic."[8]

In a 2002 study, a review of previous medical data found cherry picking in tests of anti-depression medication:

[researchers] reviewed 31 antidepressant efficacy trials to identify the primary exclusion criteria used in determining eligibility for participation. Their findings suggest that patients in current antidepressant trials represent only a minority of patients treated in routine clinical practice for depression. Excluding potential clinical trial subjects with certain profiles means that the ability to generalize the results of antidepressant efficacy trials lacks empirical support, according to the authors.[9]

In argumentation, the practice of "quote mining" is a form of cherry picking,[7] in which the debater selectively picks some quotes supporting a position (or exaggerating an opposing position) while ignoring those that moderate the original quote or put it into a different context. Cherry picking is a significant problem in debates because the cherry-picked facts are individually true but misleading when stripped of context. Because research cannot be done live and is often untimely, cherry-picked facts or quotes tend to stick in the public mainstream and, even when corrected, lead to widespread misrepresentation of the groups targeted.

A one-sided argument (also known as card stacking, stacking the deck, ignoring the counterevidence, slanting, and suppressed evidence)[10] is an informal fallacy that occurs when only the reasons supporting a proposition are supplied, while all reasons opposing it are omitted. Philosophy professor Peter Suber has written:

The one-sidedness fallacy does not make an argument invalid. It may not even make the argument unsound. The fallacy consists in persuading readers, and perhaps ourselves, that we have said enough to tilt the scale of evidence and therefore enough to justify a judgment. If we have been one-sided, though, then we haven't yet said enough to justify a judgment. The arguments on the other side may be stronger than our own. We won't know until we examine them. So the one-sidedness fallacy doesn't mean that your premises are false or irrelevant, only that they are incomplete. […] You might think that one-sidedness is actually desirable when your goal is winning rather than discovering a complex and nuanced truth. If this is true, then it's true of every fallacy. If winning is persuading a decision-maker, then any kind of manipulation or deception that actually works is desirable. But in fact, while winning may sometimes be served by one-sidedness, it is usually better served by two-sidedness. If your argument (say) in court is one-sided, then you are likely to be surprised by a strong counter-argument for which you are unprepared. The lesson is to cultivate two-sidedness in your thinking about any issue.
Beware of any job that requires you to truncate your own understanding.[11]

Card stacking is a propaganda technique that seeks to manipulate audience perception of an issue by emphasizing one side and repressing another.[12] Such emphasis may be achieved through media bias or the use of one-sided testimonials, or by simply censoring the voices of critics. The technique is commonly used in persuasive speeches by political candidates to discredit their opponents and to make themselves seem more worthy.[13]

The term originates from the magician's gimmick of "stacking the deck", which involves presenting a deck of cards that appears to have been randomly shuffled but which is, in fact, 'stacked' in a specific order. The magician knows the order and is able to control the outcome of the trick. In poker, cards can be stacked so that certain hands are dealt to certain players.[14]

The phenomenon can be applied to any subject and has wide applications. Whenever a broad spectrum of information exists, appearances can be rigged by highlighting some facts and ignoring others. Card stacking can be a tool of advocacy groups or of those groups with specific agendas.[15] For example, an enlistment poster might focus upon an impressive picture, with words such as "travel" and "adventure", while placing the words "enlist for two to four years" at the bottom in a smaller and less noticeable point size.[16]
https://en.wikipedia.org/wiki/Cherry_picking
In mathematics, the Kolakoski sequence, sometimes also known as the Oldenburger–Kolakoski sequence,[1] is an infinite sequence of symbols {1,2} that is the sequence of run lengths in its own run-length encoding.[2] It is named after the recreational mathematician William Kolakoski (1944–97), who described it in 1965,[3] but it was previously discussed by Rufus Oldenburger in 1939.[1][4]

The initial terms of the Kolakoski sequence are 1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2, 1, 1, 2, 2, 1, ... Each symbol occurs in a "run" (a sequence of equal elements) of either one or two consecutive terms, and writing down the lengths of these runs (1, 2, 2, 1, 1, 2, 1, 2, 2, ...) gives exactly the same sequence. The description of the Kolakoski sequence is therefore reversible: describing it directly and describing it as its own run-length encoding are logically equivalent. Accordingly, one can say that each term of the Kolakoski sequence generates a run of one or two future terms. The first 1 of the sequence generates a run of "1", i.e. itself; the first 2 generates a run of "22", which includes itself; the second 2 generates a run of "11"; and so on. Each number in the sequence is the length of the next run to be generated, and the element to be generated alternates between 1 and 2; the length of the sequence at each stage is equal to the sum of the terms in the previous stage.

These self-generating properties, which remain if the sequence is written without the initial 1, mean that the Kolakoski sequence can be described as a fractal, or mathematical object that encodes its own representation on other scales.[1] Bertran Steinsky has created a recursive formula for the i-th term of the sequence.[5]

The sequence is not eventually periodic; that is, its terms do not have a general repeating pattern (cf. irrational numbers like π and √2). More generally, the sequence is cube-free, i.e., it has no substring of the form $www$ with $w$ some nonempty finite string.[6]

It seems plausible that the density of 1s in the Kolakoski {1,2}-sequence is 1/2, but this conjecture remains unproved.[7] Václav Chvátal has proved that the upper density of 1s is less than 0.50084.[8] Nilsson has used the same method with far greater computational power to obtain the bound 0.500080.[9] Although calculations of the first 3×10⁸ values of the sequence appeared to show its density converging to a value slightly different from 1/2,[5] later calculations that extended the sequence to its first 10¹³ values show the deviation from a density of 1/2 growing smaller, as one would expect if the limiting density actually is 1/2.[10]

The Kolakoski sequence can also be described as the result of a simple cyclic tag system. However, as this system is a 2-tag system rather than a 1-tag system (that is, it replaces pairs of symbols by other sequences of symbols, rather than operating on a single symbol at a time), it lies in the region of parameters for which tag systems are Turing complete, making it difficult to use this representation to reason about the sequence.[11]

The Kolakoski sequence may be generated by an algorithm that, in the i-th iteration, reads the value $x_i$ that has already been output as the i-th value of the sequence (or, if no such value has been output yet, sets $x_i=i$). Then, if i is odd, it outputs $x_i$ copies of the number 1, while if i is even, it outputs $x_i$ copies of the number 2.
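A direct C implementation of this algorithm (a sketch; the constant N and the variable names are ours):

```c
#include <stdio.h>

#define N 30   /* how many terms to generate */

int main(void)
{
    int seq[N + 1];   /* seq[1..N]; 1-based to match the description */
    int len = 0;      /* number of terms output so far */

    for (int i = 1; len < N; i++) {
        /* x_i: the i-th value already output, or i if not yet known
           (this fallback is only needed for i = 1 and i = 2) */
        int run = (i <= len) ? seq[i] : i;
        int sym = (i % 2) ? 1 : 2;            /* odd i emits 1s, even i emits 2s */
        for (int k = 0; k < run && len < N; k++)
            seq[++len] = sym;                 /* output one run of sym */
    }
    for (int i = 1; i <= N; i++)
        printf("%d", seq[i]);                 /* prints 122112122122112... */
    printf("\n");
    return 0;
}
```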
The first few iterations of the algorithm thus emit the runs 1; 2,2; 1,1; 2; 1; 2,2; ..., reproducing the sequence itself. This algorithm takes linear time, but because it needs to refer back to earlier positions in the sequence it needs to store the whole sequence, taking linear space. An alternative algorithm that generates multiple copies of the sequence at different speeds, with each copy of the sequence using the output of the previous copy to determine what to do at each step, can be used to generate the sequence in linear time and only logarithmic space.[10]
https://en.wikipedia.org/wiki/Kolakoski_sequence
In linguistics, especially within generative grammar, phi features (denoted with the Greek letter φ 'phi') are the morphological expression of a semantic process in which a word or morpheme varies with the form of another word or phrase in the same sentence.[1] This variation can include person, number, gender, and case, as encoded in pronominal agreement with nouns and pronouns (the latter are said to consist only of phi-features, containing no lexical head). Several other features are included in the set of phi-features, such as the categorical features ±N (nominal) and ±V (verbal), which can be used to describe lexical categories and case features.[2]

Phi-features are often thought of as the "silent" features that exist on lexical heads (or, according to some theories,[3] within the syntactic structure) and that are understood for number, gender, person or reflexivity. Due to their silent nature, phi-features are often only understood if someone is a native speaker of a language, or if a translation includes a gloss of all these features. Many languages exhibit a pro-drop phenomenon, which means that they rely on other lexical categories to determine the phi-features of the lexical heads.

Chomsky first proposed that the N node in a clause carries with it all the features, including person, number and gender.[4] In English, we rely on nouns to determine the phi-features of a word, but some other languages rely on inflections of the different parts of speech to determine the person, number and gender of the nominal phrases to which they refer.[5] Adjectives also carry phi-features in some languages; however, they tend to agree in number and gender but rarely in person.[5]

The grammatical term number names the system contrasting singular and plural.[6] In English, number agreement is not expressed through agreement of verbal elements as it is in other languages (though present-tense verbs do agree in number with third-person subjects). This is partly because English is a language that requires subjects, and subjects in English overtly express number. Instead, English number is a phi-feature that is inflected on nouns when the nominal phrase is plural, most commonly with -s:

- Ducks, fridges, baseballs, cups, books, mirrors, cars, buildings, clowns, bridges, creams...

Some cases of plurality in English require inflection within the noun to express the phi-feature of plurality:

- Men, women, mice, teeth...

Neither verbs nor adjectives agree with the number feature of the noun in English. Some languages, however, like Salish Halkomelem, differ from English in their syntactic categorization of plural marking. Halkomelem allows both marked and unmarked plural forms of its nouns, and allows its determiners to be marked or unmarked for plurality. Plural nouns and determiners in Halkomelem can be freely combined, but it appears that if a determiner in a phrase is plural, that is sufficient to pluralize the noun it modifies.[7]

English does not have nominal phrases that belong to a gender class requiring agreement of other elements in the phrase. Dutch is another language that only differentiates between neuter and common gender.[8] Many other languages of the world do have gender classes. German, for example, has three genders: feminine, masculine and neuter.[8] A Romance language like Italian has feminine and masculine genders.
Inflections on the adjectives and determiners are used for gender agreement within the pronominal phrase.[9] English only expresses gender when the pronoun addresses a specific person who semantically belongs to a certain gender (the third-person singular feminine/masculine pronominal forms).

The phi-feature of case is explicit in English only for pronominal forms; English does not have inflectional case forms for proper nouns. German is a language that exhibits some inflectional case forms on nouns,[10] and it obligatorily displays case forms on its determiners.

Case in terms of reflexivity is overt in English for every person: myself, yourself, himself, herself, yourselves, ourselves, themselves. The reflexive form of the third-person masculine pronoun does not follow the same pattern of case-form marking as the other pronouns in pronominal English. In many languages, reflexivity is not overt for person. A prime example is French se, which is used to express reflexivity for every expression of the third person, regardless of gender or number; it also functions as a middle, an inchoative, an applicative and an impersonal. For this reason, some theories suggest that reflexive phi-features for languages such as French occupy a silent level in the syntactic structure, between the determiner and the noun, creating a new "silent" projection to a node specifically for φ-reflexives in French structure.[11]

When phi-feature agreement occurs on a verb, it typically marks features relating to grammatical function (subject versus object), person, gender, or case.[12] A key area of verbal agreement is attraction, in which verbs are sensitive to the grammatical number of a noun phrase that is not the expected controller but is close in vicinity.[13] In other words, agreement is understood to be a relationship between a probing head and a target goal in the probe's c-command domain.[14]

In English, agreement on a verb is triggered by the highest DP in subject position of a finite clause.[15] Overt agreement is found only in the present tense, with a third-person singular subject, in which case the verb is suffixed with -s.[16]

In a null-subject language such as Italian, however, pronominal subjects are not required (in fact, in many null-subject languages, producing overt subjects is a sign of non-nativity). This type of "unstressed" pronoun is called a clitic pronoun. Italian therefore uses a different inflectional morphology on verbs, based on the person features of the nominal subject the verb agrees with.[9]

Past tense, present continuous tense and future tense are the three divisions of the time expression of a verb's action.[17] In languages such as English, verbs agree with their subjects and not their objects. However, in Mohawk, an Indigenous language of North America, verbs agree with their subjects as well as their objects. Notably, in Mohawk a predicate such as 'big' can count as a verb. As shown in (1a), the form of 'big' changes to express the particular grammatical function, tense.[18] (Gloss abbreviations: CIS = cislocative; NE = Mohawk prenominal particle.)

(1a) Ra-kowan-v-hne' ne Sak.
     MsS-be.big-STAT-PAST NE Sak
     'Sak used to be big.'

This change utilizes /v-hne'/, as can be compared with the verb 'fallen' in sentence (1b).
(1b) t-yo-ya't-y'-v-hne'
     CIS-NsO-body-fall-STAT-PAST
     'It has fallen.'[18]

Example (2) demonstrates the use of 'big' without inflecting for tense (/v-hne'/); instead we see /-v/.

(2) w-a'shar-owan-v.
    NsS-knife-be.big-STAT
    'The knife is big; it is a big knife.'[18]

Verb negation in many languages, including English, is not subject to phi-feature agreement. However, there do exist some languages that possess the morphological variance that indicates agreement. One of these is the Ibibio language of Nigeria. In sentences with an auxiliary verb, the auxiliary verb is directly affixed with a negation agreement morpheme /í/ in place of the typical subject-verb agreement morphemes /á/ or /é/, and the subject-verb agreement on the non-auxiliary verb also changes in agreement with the negation, despite the fact that only the auxiliary undergoes negation.[19] This double variation is shown in (1a-b) below, where I in the gloss indicates the agreement affix /í/. In these examples, the verbs undergo morphological changes in order to agree with the negation, regardless of whether they are directly negated.

(1a) Okon i-sʌk-kɔ i-di
     Okon I-AUX-NEG I-come
     'Okon has still not come (in spite of...)'

(1b) Okon i-sɔp-pɔ i-dɔk ekpat.
     Okon I-do.quickly-NEG I-make bag
     'Okon did not make the bag quickly.'[19]

In certain languages, verb agreement can be controlled by formality, as with Korean subject honorific agreement. When the subject of the sentence is a respected person, the honorific suffix si occurs after the verb root, and the honorific subject case marker is kkeyse, as seen in (3a). Moreover, honorific agreement is optional, as seen in (3b).

(3a) Seonsaengnim-kkeyse o-si-ess-ta.
     teacher-HON.NOM come-HON-PAST-DEC
     'The teacher came.'

(3b) Seonsaengnim-i o-ass-ta.
     teacher-NOM come-PAST-DEC
     'The teacher came.'[20]

There is a debate about whether Korean subject honorific marking is authentic agreement, stemming from the fact that in languages where verbs show person agreement, the agreement is obligatory. On this basis, some scholars contend that since honorific marking is optional, it is not an instance of agreement; other scholars argue that it is indeed agreement.[20] A fundamental and often overlooked property of honorific marking is that it is possible only with a human referent. Consequently, as shown by the examples in (4), when the subject is non-human, honorific agreement is ungrammatical.[21]

(4a) cha-ka o-(*si)-ess-e.
     car-NOM come-HON-PST-DECL
     'The car came.'

(4b) kwukhoy-ka ku pepan-ul simuy-ha-(*si)-ess-e
     congress-NOM the bill-ACC review-do-HON-PST-DECL
     'The congress reviewed the bill.'[21]

Phi-features can also be considered the silent features that determine whether a root word is a noun or a verb. This is called the noun-verb distinction of Distributed Morphology, in which category classes are organized by their nominal ([±N]) and verbal ([±V]) characteristics.
Definitions for these four categories of predicates have been described as follows: a verbal predicate has a predicative use only; a nominal predicate can be used as the head of a term; an adjectival predicate can be used as a modifier of a nominal head; a preposition acts as a term-predicate for which the noun is still the head; and an adverbial predicate (not shown) can be used as a modifier of a non-nominal head.[22]

X-bar theory approaches categorical features in this way: when a head X selects its complement to project to X′, the XP that it projects to is a combination of the head X and all of its categorical features, these being either nominal, verbal, adjectival or prepositional.[23] It has also been argued that adpositions (a cover term for prepositions and postpositions[24]) are not part of the [±N][±V] system, because they resist forming a single class category the way nouns, verbs and adjectives do. This argument also posits that some adpositions may behave as part of this type of categorization, but not all of them do.

There are three main hypotheses regarding the syntactic categories of words. The first is the Strong Lexicalist hypothesis, which states that nouns and verbs are inherent in nature, and that when a word such as English "walk" can surface as either a noun or a verb, the choice depends on the speaker's intuitions about the word's meaning.[25] This means that the root "walk" has two separate lexical entries:[26]

- walkN <[AP]>: an act or instance of going on foot, especially for exercise or pleasure[27]
- walkV <[DPtheme]>: to move along on foot; advance by steps[27]

The second analysis states that the category is determined by syntax or context. A root word is inserted into the syntax as bare, and the surrounding syntax determines whether it behaves as a verb or a noun. Once the environment has determined its category, morphological inflections surface on the root according to that category. Typically, if the element before the root is a determiner, the word surfaces as a noun, and if the element before it is a tense element, the root surfaces as a verb.[29] Italian provides an example with the root cammin- ("walk"), which can surface as either a noun or a verb: when the preceding element is the determiner una, the root is an N and the morphology inflects -ata, the correct full orthography for the noun "walk" in Italian; when the root follows a tense element, the morphology inflects the suffix -o, which surfaces not only as a verb but, as discussed above under person agreement, as its first-person present form ("I walk").

Syntactic decomposition for the categorization of parts of speech includes an explanation for why some verbs and nouns have a predictable relationship to their nominal counterparts and why some do not: the predictable forms are denominal and the unpredictable forms are strictly root-derived.[30] The examples provided are the English verbs hammer and tape. A verb such as hammer is a root-derived form, meaning that it can appear within an NP or within a VP.
A denominal verb, such as tape, must first be converted from an NP because its meaning relies on the semantics of the noun.[31]

How categorical features are determined is still a matter of debate, and numerous other theories have tried to explain how words get their meanings and surface in a category. This is an issue within categorical-distinction theories on which the linguistic community has not yet reached a conclusion. It is notable because phi-features in terms of person, number and gender are concrete features that have been observed numerous times in natural languages, and are consistent patterns rooted in rule-based grammar.
https://en.wikipedia.org/wiki/Phi_features
In abstract algebra, a cover is one instance of some mathematical structure mapping onto another instance, such as a group (trivially) covering a subgroup. This should not be confused with the concept of a cover in topology.

When some object X is said to cover another object Y, the cover is given by some surjective and structure-preserving map f : X → Y. The precise meaning of "structure-preserving" depends on the kind of mathematical structure of which X and Y are instances. In order to be interesting, the cover is usually endowed with additional properties, which are highly dependent on the context.

A classic result in semigroup theory due to D. B. McAlister states that every inverse semigroup has an E-unitary cover; besides being surjective, the homomorphism in this case is also idempotent-separating, meaning that in its kernel an idempotent and a non-idempotent never belong to the same equivalence class. Something slightly stronger has actually been shown for inverse semigroups: every inverse semigroup admits an F-inverse cover.[1] McAlister's covering theorem generalizes to orthodox semigroups: every orthodox semigroup has a unitary cover.[2]

Examples from other areas of algebra include the Frattini cover of a profinite group[3] and the universal cover of a Lie group.

If F is some family of modules over some ring R, then an F-cover of a module M is a homomorphism X → M satisfying certain defining properties (a standard formulation is given below). In general an F-cover of M need not exist, but if it does exist then it is unique up to (non-unique) isomorphism.
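A standard formulation of these defining properties, following Enochs' treatment of covers in module theory (offered here as the conventional definition from the literature rather than a quotation from this article), is:

A homomorphism $\varphi\colon X \to M$ with $X \in \mathcal{F}$ is an $\mathcal{F}$-cover of $M$ if:
1. every homomorphism $F \to M$ with $F \in \mathcal{F}$ factors through $\varphi$; equivalently, the induced map $\operatorname{Hom}(F, X) \to \operatorname{Hom}(F, M)$ is surjective for every $F \in \mathcal{F}$; and
2. every endomorphism $g\colon X \to X$ with $\varphi \circ g = \varphi$ is an automorphism of $X$.

A map satisfying only the first condition is called an $\mathcal{F}$-precover; the second condition is what forces the uniqueness up to isomorphism mentioned above.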
https://en.wikipedia.org/wiki/Cover_(algebra)
In Unix and Unix-like computer operating systems, a file descriptor (FD, less frequently fildes) is a process-unique identifier (handle) for a file or other input/output resource, such as a pipe or network socket.

File descriptors typically have non-negative integer values, with negative values being reserved to indicate "no value" or error conditions. File descriptors are a part of the POSIX API. Each Unix process (except perhaps daemons) should have three standard POSIX file descriptors, corresponding to the three standard streams: standard input, standard output, and standard error.

In the traditional implementation of Unix, file descriptors index into a per-process file descriptor table maintained by the kernel, which in turn indexes into a system-wide table of files opened by all processes, called the file table. This table records the mode with which the file (or other resource) has been opened: for reading, writing, appending, and possibly other modes. It also indexes into a third table called the inode table that describes the actual underlying files.[3] To perform input or output, the process passes the file descriptor to the kernel through a system call, and the kernel will access the file on behalf of the process. The process does not have direct access to the file or inode tables.

On Linux, the set of file descriptors open in a process can be accessed under the path /proc/PID/fd/, where PID is the process identifier. File descriptor /proc/PID/fd/0 is stdin, /proc/PID/fd/1 is stdout, and /proc/PID/fd/2 is stderr. As a shortcut to these, any running process can also access its own file descriptors through the folders /proc/self/fd and /dev/fd.[4]

In Unix-like systems, file descriptors can refer to any Unix file type named in a file system. As well as regular files, this includes directories, block and character devices (also called "special files"), Unix domain sockets, and named pipes. File descriptors can also refer to other objects that do not normally exist in the file system, such as anonymous pipes and network sockets.

The FILE data structure in the C standard I/O library usually includes a low-level file descriptor for the object in question on Unix-like systems. The overall data structure provides additional abstraction and is instead known as a file handle.

Typical operations on file descriptors on modern Unix-like systems include opening, reading, writing, duplicating and closing them. Most of these functions are declared in the <unistd.h> header, but some are in the <fcntl.h> header instead. The fcntl() function is used to perform various operations on a file descriptor, depending on the command argument passed to it. There are commands to get and set attributes associated with a file descriptor, including F_GETFD, F_SETFD, F_GETFL and F_SETFL.

A series of new operations has been added to many modern Unix-like systems, as well as numerous C libraries, to be standardized in a future version of POSIX.[7] The at suffix signifies that the function takes an additional first argument supplying a file descriptor from which relative paths are resolved, the forms lacking the at suffix thus becoming equivalent to passing a file descriptor corresponding to the current working directory. The purpose of these new operations is to defend against a certain class of TOCTOU attacks.

Unix file descriptors behave in many ways as capabilities. They can be passed between processes across Unix domain sockets using the sendmsg() system call. Note, however, that what is actually passed is a reference to an "open file description" that has mutable state (the file offset, and the file status and access flags).
This complicates the secure use of file descriptors as capabilities, since when programs share access to the same open file description, they can interfere with each other's use of it by changing its offset or whether it is blocking or non-blocking, for example.[8][9]In operating systems that are specifically designed as capability systems, there is very rarely any mutable state associated with a capability itself. A Unix process' file descriptor table is an example of aC-list.
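As a concrete illustration of the calls described above, the following Python sketch exercises the raw descriptor interface on a Unix-like system; Python's os and fcntl modules wrap the underlying POSIX calls, and the file path is an arbitrary example:

import fcntl
import os

# os.open() wraps open(2) and returns an integer file descriptor.
fd = os.open("/tmp/fd_demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello via a raw file descriptor\n")   # write(2)

# dup(2): both descriptors now refer to the SAME open file description,
# so they share the file offset and the status flags.
dup_fd = os.dup(fd)
os.close(fd)
os.close(dup_fd)

# Reopen for reading and query the status flags with fcntl(2)/F_GETFL.
fd = os.open("/tmp/fd_demo.txt", os.O_RDONLY)
print(os.read(fd, 64))                               # read(2) returns raw bytes
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
print("read-only?", (flags & os.O_ACCMODE) == os.O_RDONLY)

# On Linux, a process can enumerate its own open descriptors via /proc/self/fd.
if os.path.isdir("/proc/self/fd"):
    print(sorted(os.listdir("/proc/self/fd")))

os.close(fd)

The shared open file description visible after os.dup() is exactly the mutable state (offset and flags) that complicates the use of descriptors as capabilities.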
https://en.wikipedia.org/wiki/File_descriptor
In computing, rebooting is the process by which a running computer system is restarted, either intentionally or unintentionally. Reboots can be either a cold reboot (alternatively known as a hard reboot), in which the power to the system is physically turned off and back on again (causing an initial boot of the machine), or a warm reboot (or soft reboot), in which the system restarts while still powered up. The term restart (as a system command) refers to a reboot in which the operating system closes all programs and finalizes all pending input and output operations before initiating a soft reboot.

Early electronic computers (like the IBM 1401) had no operating system and little internal memory. Input was often provided via a stack of punch cards or a switch register. On systems with cards, the computer was initiated by pressing a start button that performed a single command: "read a card". This first card then instructed the machine to read more cards that eventually loaded a user program. This process was likened to an old saying, "picking yourself up by the bootstraps", referring to a horseman who lifts himself off the ground by pulling on the straps of his boots. This set of initiating punch cards was called "bootstrap cards". Thus a cold start was called booting the computer up. If the computer crashed, it was rebooted. The boot reference carried over to all subsequent types of computers.

For IBM PC compatible computers, a cold boot is a boot process in which the computer starts from a powerless state and performs a complete power-on self-test (POST).[1][2][3][4] Both the operating system and third-party software can initiate a cold boot; the restart command in Windows 9x initiates a cold reboot, unless the Shift key is held.[1]: 509

A warm boot is initiated by the BIOS, either as a result of the Control-Alt-Delete key combination[1][2][3][4] or directly through the BIOS interrupt INT 19h.[5] It may not perform a complete POST (for example, it may skip the memory test) or may not perform a POST at all.[1][2][4] Malware may prevent or subvert a warm boot by intercepting the Ctrl + Alt + Delete key combination, preventing it from reaching the BIOS.[6] The Windows NT family of operating systems does the same, reserving the key combination for its own use.[7][8]

Operating systems based on Linux support an alternative to warm boot; the Linux kernel has optional support for kexec, a system call which transfers execution to a new kernel and skips hardware or firmware reset. The entire process occurs independently of the system firmware. The kernel being executed does not have to be a Linux kernel.[citation needed]

Outside the domain of IBM PC compatible computers, the types of boot may not be as clear. According to Sue Loh of the Windows CE Base Team, Windows CE devices support three types of boots: warm, cold and clean.[9] A warm boot discards program memory. A cold boot additionally discards storage memory (also known as the "object store"), while a clean boot erases all forms of memory storage from the device. However, since these areas do not exist on all Windows CE devices, users are only concerned with two forms of reboot: one that resets the volatile memory and one that wipes the device clean and restores factory settings. For example, for a Windows Mobile 5.0 device, the former is a cold boot and the latter is a clean boot.[9]

A hard reboot means that the system is not shut down in an orderly manner, skipping file system synchronisation and other activities that would occur on an orderly shutdown.
This can be achieved by applying a reset, by cycling power, by issuing the halt -q command in most Unix-like systems, or by triggering a kernel panic. Hard reboots are used in the cold boot attack.

The term "restart" is used by the Microsoft Windows and Linux families of operating systems to denote an operating-system-assisted reboot. In a restart, the operating system ensures that all pending I/O operations are gracefully ended before commencing a reboot.

Users may deliberately initiate a reboot for a variety of reasons, and the means of performing a deliberate reboot likewise vary.

Unexpected loss of power for any reason (including power outage, power supply failure or depletion of battery on a mobile device) forces the system user to perform a cold boot once the power is restored. Some BIOSes have an option to automatically boot the system after a power failure.[23][24] An uninterruptible power supply (UPS), backup battery or redundant power supply can prevent such circumstances.

"Random reboot" is a non-technical term referring to an unintended (and often undesired) reboot following a system crash, whose root cause may not immediately be evident to the user. Such crashes may occur due to a multitude of software and hardware problems, such as triple faults. They are generally symptomatic of an error in ring 0 that is not trapped by an error handler in an operating system, or of a hardware-triggered non-maskable interrupt.

Systems may be configured to reboot automatically after a power failure, a fatal system error or a kernel panic. The method by which this is done varies depending on whether the reboot can be handled via software or must be handled at the firmware or hardware level. Operating systems in the Windows NT family (from Windows NT 3.1 through Windows 7) have an option to modify the behavior of the error handler so that a computer immediately restarts rather than displaying a Blue Screen of Death (BSOD) error message. This option is enabled by default in some editions.

The introduction of advanced power management allowed operating systems greater control of hardware power management features. With Advanced Configuration and Power Interface (ACPI), newer operating systems are able to manage different power states and thereby sleep and/or hibernate. While hibernation also involves turning a system off then subsequently back on again, the operating system does not start from scratch, thereby differentiating this process from rebooting.

A reboot may be simulated by software running on an operating system, for example by the Sysinternals BlueScreen utility, which is used for pranking, or by some modes of the bsod XScreenSaver "hack", for entertainment (albeit possibly concerning at first glance). Malware may also simulate a reboot, and thereby deceive a computer user for some nefarious purpose.[6]

Microsoft's App-V sequencing tool captures all the file system operations of an installer in order to create a virtualized software package for users. As part of the sequencing process, it will detect when an installer requires a reboot, interrupt the triggered reboot, and instead simulate the required reboot by restarting services and loading/unloading libraries.[25]

Windows 8 and 10 enable (by default) a hibernation-like "Fast Startup" (a.k.a. "Fast Boot"), which can cause problems (including confusion) for users accustomed to turning off computers to (cold) reboot them.[26][27][28]
https://en.wikipedia.org/wiki/Reboot#Cold
This article contains a list of notable wikis, which are websites that use wiki software, allowing users to collaboratively edit content and view old versions of the content. These websites use several different wiki software packages.
https://en.wikipedia.org/wiki/List_of_wikis
An anonymous post is an entry on a textboard, an anonymous bulletin board system, or another discussion forum such as an Internet forum, made without a screen name or, more commonly, under a non-identifiable pseudonym. Some online forums such as Slashdot do not allow such posts, requiring users to be registered either under their real name or under a pseudonym. Others, like JuicyCampus, AutoAdmit, 2channel, and other Futaba-based imageboards (such as 4chan), thrive on anonymity. Users of 4chan, in particular, interact in an anonymous and ephemeral environment that facilitates rapid generation of new trends.

Online anonymity can be traced to Usenet newsgroups in the late 1990s, where the notion of using invalid emails for posting to newsgroups was introduced. This was primarily used for discussion on newsgroups pertaining to certain sensitive topics. There was also the introduction of anonymous remailers, which were capable of stripping away the sender's address from mail packets before sending them to the receiver. Online services which facilitated anonymous posting sprang up around mid-1992, originating with the cypherpunk group.[1]

The precursors to Internet forums like 2channel and 4chan were textboards like Ayashii World and Amezou World, which provided the ability to post anonymously in Japan. These "large-scale anonymous textboards" were inspired by the Usenet culture and were primarily focused on technology, unlike their descendants.[2]

Today, image boards receive tremendous Internet traffic from all parts of the world. In 2011, on 4chan's most popular board, /b/, there were roughly 35,000 threads and 400,000 posts created per day. At that time, that level of content was on par with YouTube. Such high traffic suggests a broad demand from Internet users for anonymous content-sharing sites.[3]

Anonymity on the Internet can pertain both to the utilization of pseudonyms and to requiring no authentication at all (also called "perfect anonymity") for posting on a website.[4] Online anonymity is also limited by IP addresses. For example, WikiScanner associates anonymous Wikipedia edits with the IP address that made the change and tries to identify the entity that owns the IP address. On other websites, IP addresses may not be publicly available, but they can be obtained from the website administrators only through legal intervention. Even then, they might not always be traceable to the poster.[5]

Utilizing pseudonyms allows people to post without revealing their real identity. Pseudonyms, however, are still prone to being tracked to the user's IP address.[6] To avoid being tracked to an IP address, it is possible to post via a public computer, where the IP address would usually be under the purview of the public workspace, such as a coffee shop, and hence cannot be traced to the individual user.[6] Adversarial stylometry can be employed to resist identification by writing style.

Another way people post anonymously online is through the use of memes. One popular meme is the Confession Bear meme. People use Confession Bear to post everything from funny and embarrassing stories to very troubled thoughts.[7]

There are services described as anonymizers which aim to provide users the ability to post anonymously by hiding their identifying information. Anonymizers are essentially proxy servers which act as an intermediary between the user who wants to post anonymously and the website which logs user information such as IP addresses.
The proxy server is the only computer in this network which is aware of the user's information and provides its own information to anonymize the poster.[8]Examples of such anonymizers includeTorandI2P, which employ techniques such asonionandgarlic routing(respectively) to provide enhancedencryptionto messages that travel through multiple proxy servers.[6] Applications likePGPutilizing techniques likeprivate-keyandpublic-keyencryptions are also utilized by users to post content in Usenet groups and other online forums.[9] The revised draft of theChinesegovernment's "Internet Information Services"[10]proposes that "Internet information service providers, includingmicroblogs, forums, and blogs, that allow users to post information on the Internet should ensure users are registered with their real identities".[11]Starting October 1, 2017, it will require Internet users to identify themselves with their real names to use comments sections on news and social media websites.[12] ThePhilippinegovernment passed theCybercrime Prevention Acton 12 September 2012, which among other things grants theDepartment of Justicethe ability to "block access to 'computer data' that is in violation of the Act; in other words, a website hosting criminallylibelousspeech could be shut down without a court order".[13] Under theDefamation Act 2013, in an action against a website operator, on a statement posted on the website, it is a defense to show that it was not the operator who posted the statement on the website. The defense is defeated if it was not possible for the claimant to identify the person who posted the statement. In the United States, the right to speak anonymously online is protected by theFirst Amendmentand variousother laws. These laws restrict the ability of the government and civil litigants to obtain the identity of anonymous speakers. The First Amendment says that "Congress shall make no law ... abridging the freedom of speech, or of thepress".[14]This protection has been interpreted by theU.S. Supreme Courtto protect the right to speak anonymously offline. For example, inMcIntyre v. Ohio Elections Commission, the Supreme Court overturned an Ohio law banning the distribution of anonymous election pamphlets, claiming that an "author's decision to remain anonymous ... is an aspect of the freedom of speech protected by the First Amendment" and that "anonymouspamphleteeringis not a pernicious, fraudulent practice, but an honorable tradition ofadvocacyand ofdissent", as well as a "shield" against the so-calledtyranny of the majority.[15]Various courts have interpreted these offline protections to extend to the online world.[16] Identifying the author of an anonymous post may require aDoe subpoena. This involves gaining access to the IP address of the poster via the hosting website. The courts can then order anISPto identify the subscriber to whom it had assigned said IP address. Requests for such data are almost always fruitful, though providers will often effect a finite term ofdata retention(in accordance with theprivacy policyof each—local law may specify a minimum and/or maximum term). The usage of IP addresses has, in recent times, been challenged as a legitimate way to identify anonymous users.[17][18] On March 21, 2012, theNew York State Senateintroduced the bill numbered S.6779 (and A.8668) labeled as the "Internet Protection Act". 
It proposes the ability of a website administrator of a New York-based website to take down anonymous comments unless the original author of the comment agrees to identify themselves on the post.[19]

Online communities vary in their stances on anonymous postings. Wikipedia allows anonymous editing in most cases, but does not label users, instead identifying them by their IP addresses. Other editors commonly refer to these users with neutral terms such as "anons" or "IPs".[20]

Many online bulletin boards require users to be signed in to write, and in some cases even to read, posts. 2channel and other Futaba-based image boards take the opposite stance, encouraging anonymity and, in the case of English-language Futaba-based websites, calling those who use usernames and tripcodes "namefags" and "tripfags", respectively.[21] As required by law, even communities such as 4chan do require the logging of IP addresses of such anonymous posters.[citation needed] Such data, however, can only be accessed by the particular site administrator.

Slashdot discourages anonymous posting by displaying "Anonymous Coward" as the author of each anonymous post. The mildly derogatory term is meant to chide anonymous contributors into logging in.[22][23]

The effects of posting online anonymously have been linked to the online disinhibition effect in users, with the disinhibition categorized as either benign or toxic.[24] Disinhibition can result in misbehavior but can also improve user relationships. It may also result in greater disclosure among Internet users, allowing more emotional closeness and openness in a safe social context.[25]

Anonymous computer communication has also been linked to accentuated self-stereotyping.[26] It has been linked to notable effects in gender differences, but only when the topic bears similarity to, and fits with, the gender stereotype.[26]

A 2015 study suggested that anonymous news comment sections are more susceptible to uncivil comments, especially those directed at other users. Anonymous news comment section users are also more likely to be impolite, either by being sarcastic or by casting aspersions.[27]

With regard to a recent hostile subpoena in California, commentators have asked whether there will be a "Layfield & Barrett effect" chilling free speech in job-review posting.[28][29] On May 2, 2016, through its lawyers, Layfield and Barrett and partner Phil Layfield issued a subpoena on Glassdoor seeking the online identities of former employees who had posted extremely critical and negative reviews. Glassdoor executives have stated that they will fight the subpoena, as they have fought off other efforts to disclose anonymous identities in the recent past.[30] Other litigants in California have won their right to anonymously post negative job reviews, but the law remains hotly contested.[31][32]

The conditions for deindividuation, such as "anonymity, reduced self-awareness, and reduced self-regulation," foster the creation of online communities much in the same way that they might offline.[33] This is evident in the proliferation of communities such as Reddit or 4chan, which utilize total anonymity or pseudonymity, or tools such as Informers (which add anonymity to non-anonymous social media like Facebook or Twitter), to provide their users the ability to post varied content.
The effect of disinhibition has been seen to be beneficial in "advice and discussion threads by providing a cover for more intimate and open conversations".[3]

The "ephemerality", or short-lived nature, of posts on some anonymous image boards such as 4chan creates a fast-paced environment. As of 2009, threads on 4chan had a median lifespan of 3.9 minutes.[3]

There is also research suggesting that content posted in such communities tends to be more deviant in nature than it would be otherwise.[34] The ability to post anonymously has also been linked to the proliferation of pornography in newsgroups and other online forums, wherein users employ sophisticated mechanisms such as the encryption and anonymization techniques mentioned above.[9]
https://en.wikipedia.org/wiki/Anonymous_post
Accuracy and precision are two measures of observational error. Accuracy is how close a given set of measurements (observations or readings) are to their true value. Precision is how close the measurements are to each other. The International Organization for Standardization (ISO) defines a related measure:[1] trueness, "the closeness of agreement between the arithmetic mean of a large number of test results and the true or accepted reference value."

While precision is a description of random errors (a measure of statistical variability), accuracy has two different definitions: more commonly, it describes only systematic errors (how close the measurements are, on average, to the true value); alternatively, ISO defines accuracy as describing a combination of both types of observational error (random and systematic), so that high accuracy requires both high precision and high trueness.

In simpler terms, given a statistical sample or set of data points from repeated measurements of the same quantity, the sample or set can be said to be accurate if their average is close to the true value of the quantity being measured, while the set can be said to be precise if their standard deviation is relatively small.

In the fields of science and engineering, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value.[3] The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results.[3][4] Although the two words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method.

The field of statistics, where the interpretation of measurements plays a central role, prefers to use the terms bias and variability instead of accuracy and precision: bias is the amount of inaccuracy and variability is the amount of imprecision.

A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision. A measurement system is considered valid if it is both accurate and precise. Related terms include bias (non-random or directed effects caused by a factor or factors unrelated to the independent variable) and error (random variability).

The terminology is also applied to indirect measurements, that is, values obtained by a computational procedure from observed data. In addition to accuracy and precision, measurements may also have a measurement resolution, which is the smallest change in the underlying physical quantity that produces a response in the measurement.

In numerical analysis, accuracy is also the nearness of a calculation to the true value, while precision is the resolution of the representation, typically defined by the number of decimal or binary digits.

In military terms, accuracy refers primarily to the accuracy of fire (justesse de tir), the precision of fire expressed by the closeness of a grouping of shots at and around the centre of the target.[5]

A shift in the meaning of these terms appeared with the publication of the ISO 5725 series of standards in 1994, which is also reflected in the 2008 issue of the BIPM International Vocabulary of Metrology (VIM), items 2.13 and 2.14.[3]

According to ISO 5725-1,[1] the general term "accuracy" is used to describe the closeness of a measurement to the true value. When the term is applied to sets of measurements of the same measurand, it involves a component of random error and a component of systematic error.
In this case trueness is the closeness of the mean of a set of measurement results to the actual (true) value, that is, the systematic error, and precision is the closeness of agreement among a set of results, that is, the random error. ISO 5725-1 and VIM also avoid the use of the term "bias", previously specified in BS 5497-1,[6] because it has different connotations outside the fields of science and engineering, as in medicine and law.

In industrial instrumentation, accuracy is the measurement tolerance, or transmission of the instrument, and defines the limits of the errors made when the instrument is used in normal operating conditions.[7]

Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the true value. The accuracy and precision of a measurement process is usually established by repeatedly measuring some traceable reference standard. Such standards are defined in the International System of Units (abbreviated SI from French: Système international d'unités) and maintained by national standards organizations such as the National Institute of Standards and Technology in the United States.

This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements. With regard to accuracy, further distinctions can be drawn.

A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Where not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 843 m would imply a margin of error of 0.5 m (the last significant digits are the units).

A reading of 8,000 m, with trailing zeros and no decimal point, is ambiguous; the trailing zeros may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 10³ m indicates that the first zero is significant (hence a margin of 50 m), while 8.000 × 10³ m indicates that all three zeros are significant, giving a margin of 0.5 m. Similarly, one can use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 10³ m. It indicates a margin of 0.05 km (50 m). However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it. For example, a source reporting a number like 153,753 with precision +/- 5,000 looks like it has precision +/- 0.5. Under the convention it would have been rounded to 150,000.

Alternatively, in a scientific context, if it is desired to indicate the margin of error with more precision, one can use a notation such as 7.54398(23) × 10⁻¹⁰ m, meaning a range of between 7.54375 × 10⁻¹⁰ m and 7.54421 × 10⁻¹⁰ m.
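The distinction between trueness and precision, and the square-root-of-n improvement in the precision of an average, can be made concrete with a small simulation. In this Python sketch the two instruments and their error parameters are invented purely for illustration:

import random
import statistics

random.seed(1)
TRUE_VALUE = 100.0
N = 1000

# Two imaginary instruments: one biased but precise, one unbiased but noisy.
biased_precise = [random.gauss(TRUE_VALUE + 2.0, 0.1) for _ in range(N)]
unbiased_noisy = [random.gauss(TRUE_VALUE, 5.0) for _ in range(N)]

for name, data in (("biased+precise", biased_precise),
                   ("unbiased+noisy", unbiased_noisy)):
    mean = statistics.fmean(data)
    sd = statistics.stdev(data)          # precision: spread of the readings
    sem = sd / N ** 0.5                  # standard error of the averaged measurement
    print(f"{name}: trueness error = {mean - TRUE_VALUE:+.3f}, "
          f"precision (sd) = {sd:.3f}, standard error = {sem:.4f}")

The first instrument reports a systematic offset that no amount of averaging removes, while the second converges on the true value as more readings are averaged, which is exactly the contrast drawn above.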
Precision includes repeatability (the variation arising when all efforts are made to keep conditions constant, using the same instrument and operator, and repeating measurements during a short time period) and reproducibility (the variation arising when using the same measurement process among different instruments and operators, and over longer time periods).

In engineering, precision is often taken as three times the standard deviation of the measurements taken, representing the range within which 99.73% of measurements can occur.[8] For example, an ergonomist measuring the human body can be confident that 99.73% of their extracted measurements fall within ± 0.7 cm if using the GRYPHON processing system, or ± 13 cm if using unprocessed data.[9]

Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies or excludes a condition. That is, the accuracy is the proportion of correct predictions (both true positives and true negatives) among the total number of cases examined.[10] As such, it compares estimates of pre- and post-test probability. To make the context clear by the semantics, it is often referred to as the "Rand accuracy" or "Rand index".[11][12][13] It is a parameter of the test. The formula for quantifying binary accuracy is

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

where TP = true positives, TN = true negatives, FP = false positives, and FN = false negatives.

In this context, the concepts of trueness and precision as defined by ISO 5725-1 are not applicable. One reason is that there is not a single "true value" of a quantity, but rather two possible true values for every case, while accuracy is an average across all cases and therefore takes into account both values. However, the term precision is used in this context to mean a different metric originating from the field of information retrieval (see below).

When computing accuracy in multiclass classification, accuracy is simply the fraction of correct classifications:[14][15]

$$\text{Accuracy} = \frac{\text{correct classifications}}{\text{all classifications}}$$

This is usually expressed as a percentage. For example, if a classifier makes ten predictions and nine of them are correct, the accuracy is 90%. Accuracy is sometimes also viewed as a micro metric, to underline that it tends to be greatly affected by the particular class prevalence in a dataset and the classifier's biases.[14]

Furthermore, it is also called top-1 accuracy to distinguish it from top-5 accuracy, common in convolutional neural network evaluation. To evaluate top-5 accuracy, the classifier must provide relative likelihoods for each class. When these are sorted, a classification is considered correct if the correct classification falls anywhere within the top 5 predictions made by the network. Top-5 accuracy was popularized by the ImageNet challenge. It is usually higher than top-1 accuracy, as any correct predictions in the 2nd through 5th positions will not improve the top-1 score, but do improve the top-5 score.

In psychometrics and psychophysics, the term accuracy is interchangeably used with validity and constant error. Precision is a synonym for reliability and variable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior. Reliability is established with a variety of statistical techniques, classically through an internal consistency test like Cronbach's alpha to ensure sets of related questions have related responses, and then comparison of those related questions between reference and target population.[citation needed]

In logic simulation, a common mistake in evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model.
This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality.[16][17]

Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of documents correctly retrieved compared to the documents retrieved (true positives divided by true positives plus false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of documents correctly retrieved compared to the relevant documents (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used; it is defined as the fraction of documents correctly classified compared to all documents (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives).

None of these metrics take into account the ranking of results. Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. The measure precision at k, for example, is a measure of precision looking only at the top ten (k = 10) search results. More sophisticated metrics, such as discounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important.

In cognitive systems, accuracy and precision are used to characterize and measure results of a cognitive process performed by biological or artificial entities, where a cognitive process is a transformation of data, information, knowledge, or wisdom to a higher-valued form (see the DIKW pyramid). Sometimes a cognitive process produces exactly the intended or desired output, but sometimes it produces output far from the intended or desired. Furthermore, repetitions of a cognitive process do not always produce the same output. Cognitive accuracy (CA) is the propensity of a cognitive process to produce the intended or desired output. Cognitive precision (CP) is the propensity of a cognitive process to produce the same output.[18][19][20] To measure augmented cognition in human/cog ensembles, where one or more humans work collaboratively with one or more cognitive systems (cogs), increases in cognitive accuracy and cognitive precision assist in measuring the degree of cognitive augmentation.
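As a minimal sketch of the classification metrics defined above (the function names and example counts are invented for illustration), binary accuracy, precision, recall and a simple top-k accuracy can be computed as follows in Python:

def binary_metrics(tp, tn, fp, fn):
    """Compute accuracy, precision and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # all correct / all cases
    precision = tp / (tp + fp)                   # correctly retrieved / all retrieved
    recall = tp / (tp + fn)                      # correctly retrieved / all relevant
    return accuracy, precision, recall

# Example counts: 90 TP, 850 TN, 60 FP, 10 FN.
acc, prec, rec = binary_metrics(tp=90, tn=850, fp=60, fn=10)
print(f"accuracy={acc:.3f}, precision={prec:.3f}, recall={rec:.3f}")

def top_k_accuracy(ranked_predictions, labels, k=5):
    """Fraction of cases whose true label appears among the top-k ranked predictions."""
    hits = sum(label in preds[:k] for preds, label in zip(ranked_predictions, labels))
    return hits / len(labels)

print(top_k_accuracy([["cat", "dog", "fox"], ["dog", "cat", "fox"]], ["dog", "dog"], k=2))

Note how accuracy averages over both classes at once, which is why it is sensitive to class prevalence, whereas precision and recall each condition on one side of the confusion matrix.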
https://en.wikipedia.org/wiki/Accuracy
In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, i.e. without decompression. Similar data structures include negation normal form (NNF), Zhegalkin polynomials, and propositional directed acyclic graphs (PDAG).

A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several (decision) nodes and two terminal nodes. The two terminal nodes are labeled 0 (FALSE) and 1 (TRUE). Each (decision) node $u$ is labeled by a Boolean variable $x_i$ and has two child nodes called low child and high child. The edge from node $u$ to a low (or high) child represents an assignment of the value FALSE (or TRUE, respectively) to variable $x_i$. Such a BDD is called 'ordered' if different variables appear in the same order on all paths from the root. A BDD is said to be 'reduced' if the following two rules have been applied to its graph: merge any isomorphic subgraphs, and eliminate any node whose two children are isomorphic.

In popular usage, the term BDD almost always refers to Reduced Ordered Binary Decision Diagram (ROBDD in the literature, used when the ordering and reduction aspects need to be emphasized). The advantage of an ROBDD is that it is canonical (unique up to isomorphism) for a particular function and variable order.[1] This property makes it useful in functional equivalence checking and other operations like functional technology mapping.

A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the represented Boolean function is true. As the path descends to a low (or high) child from a node, that node's variable is assigned 0 (respectively 1).

The left figure below shows a binary decision tree (the reduction rules are not applied) and a truth table, each representing the function $f(x_1, x_2, x_3)$. In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child, while solid lines represent edges to a high child. Therefore, to find $f(0,1,1)$, begin at $x_1$, traverse down the dotted line to $x_2$ (since $x_1$ has an assignment to 0), then down two solid lines (since $x_2$ and $x_3$ each have an assignment to 1). This leads to the terminal 1, which is the value of $f(0,1,1)$.

The binary decision tree of the left figure can be transformed into a binary decision diagram by maximally reducing it according to the two reduction rules. The resulting BDD is shown in the right figure. Another notation for writing this Boolean function is $\bar{x}_1\bar{x}_2\bar{x}_3 + x_1x_2 + x_2x_3$.

An ROBDD can be represented even more compactly, using complemented edges, also known as complement links.[2][3] The resulting BDD is sometimes known as a typed BDD[4] or signed BDD. Complemented edges are formed by annotating low edges as complemented or not. If an edge is complemented, then it refers to the negation of the Boolean function that corresponds to the node that the edge points to (the Boolean function represented by the BDD with root that node).
High edges are not complemented, in order to ensure that the resulting BDD representation is a canonical form. In this representation, BDDs have a single leaf node, for reasons explained below. Two advantages of using complemented edges when representing BDDs are that negation can be performed in constant time and that a function and its negation can share the same graph; however, Knuth[5] argues otherwise.

A reference to a BDD in this representation is a (possibly complemented) "edge" that points to the root of the BDD. This is in contrast to a reference to a BDD in the representation without use of complemented edges, which is the root node of the BDD. The reason why a reference in this representation needs to be an edge is that for each Boolean function, the function and its negation are represented by an edge to the root of a BDD, and a complemented edge to the root of the same BDD. This is why negation takes constant time. It also explains why a single leaf node suffices: FALSE is represented by a complemented edge that points to the leaf node, and TRUE is represented by an ordinary edge (i.e., not complemented) that points to the leaf node.

For example, assume that a Boolean function is represented with a BDD represented using complemented edges. To find the value of the Boolean function for a given assignment of (Boolean) values to the variables, we start at the reference edge, which points to the BDD's root, and follow the path that is defined by the given variable values (following a low edge if the variable that labels a node equals FALSE, and following the high edge if the variable that labels a node equals TRUE), until we reach the leaf node. While following this path, we count how many complemented edges we have traversed. If when we reach the leaf node we have crossed an odd number of complemented edges, then the value of the Boolean function for the given variable assignment is FALSE; otherwise (if we have crossed an even number of complemented edges), the value of the Boolean function for the given variable assignment is TRUE.

An example diagram of a BDD in this representation is shown on the right, and represents the same Boolean expression as shown in diagrams above, i.e., $(\neg x_1 \wedge \neg x_2 \wedge \neg x_3) \vee (x_1 \wedge x_2) \vee (x_2 \wedge x_3)$. Low edges are dashed, high edges solid, and complemented edges are signified by a circle at their source. The node with the @ symbol represents the reference to the BDD, i.e., the reference edge is the edge that starts from this node.

The basic idea from which the data structure was created is the Shannon expansion. A switching function is split into two sub-functions (cofactors) by assigning one variable (cf. if-then-else normal form). If such a sub-function is considered as a sub-tree, it can be represented by a binary decision tree.

Binary decision diagrams (BDDs) were introduced by C. Y. Lee,[6] and further studied and made known by Sheldon B. Akers[7] and Raymond T. Boute.[8] Independently of these authors, a BDD under the name "canonical bracket form" was realized by Yu. V. Mamrukov in a CAD for analysis of speed-independent circuits.[9] The full potential for efficient algorithms based on the data structure was investigated by Randal Bryant at Carnegie Mellon University: his key extensions were to use a fixed variable ordering (for canonical representation) and shared sub-graphs (for compression). Applying these two concepts results in an efficient data structure and algorithms for the representation of sets and relations.[10][11] By extending the sharing to several BDDs, i.e.
one sub-graph is used by several BDDs, the data structure Shared Reduced Ordered Binary Decision Diagram is defined.[2] The notion of a BDD is now generally used to refer to that particular data structure.

In his video lecture Fun With Binary Decision Diagrams (BDDs),[12] Donald Knuth calls BDDs "one of the only really fundamental data structures that came out in the last twenty-five years" and mentions that Bryant's 1986 paper was for some time one of the most-cited papers in computer science.

Adnan Darwiche and his collaborators have shown that BDDs are one of several normal forms for Boolean functions, each induced by a different combination of requirements. Another important normal form identified by Darwiche is decomposable negation normal form or DNNF.

BDDs are extensively used in CAD software to synthesize circuits (logic synthesis) and in formal verification. There are several lesser-known applications of BDDs, including fault tree analysis, Bayesian reasoning, product configuration, and private information retrieval.[13][14][citation needed]

Every arbitrary BDD (even if it is not reduced or ordered) can be directly implemented in hardware by replacing each node with a 2-to-1 multiplexer; each multiplexer can be directly implemented by a 4-LUT in an FPGA. It is not so simple to convert from an arbitrary network of logic gates to a BDD[citation needed] (unlike the and-inverter graph).

BDDs have been applied in efficient Datalog interpreters.[15]

The size of the BDD is determined both by the function being represented and by the chosen ordering of the variables. There exist Boolean functions $f(x_1, \ldots, x_n)$ for which, depending upon the ordering of the variables, we would end up getting a graph whose number of nodes would be linear (in $n$) at best and exponential at worst (e.g., a ripple carry adder).

Consider the Boolean function $f(x_1, \ldots, x_{2n}) = x_1x_2 + x_3x_4 + \cdots + x_{2n-1}x_{2n}$. Using the variable ordering $x_1 < x_3 < \cdots < x_{2n-1} < x_2 < x_4 < \cdots < x_{2n}$, the BDD needs $2^{n+1}$ nodes to represent the function. Using the ordering $x_1 < x_2 < x_3 < x_4 < \cdots < x_{2n-1} < x_{2n}$, the BDD consists of $2n+2$ nodes.

It is of crucial importance to care about variable ordering when applying this data structure in practice. The problem of finding the best variable ordering is NP-hard.[16] For any constant $c > 1$ it is even NP-hard to compute a variable ordering resulting in an OBDD with a size that is at most $c$ times larger than an optimal one.[17] However, there exist efficient heuristics to tackle the problem.[18]

There are functions for which the graph size is always exponential, independent of variable ordering. This holds e.g.
for the multiplication function.[1] In fact, the function computing the middle bit of the product of two $n$-bit numbers does not have an OBDD smaller than $2^{\lfloor n/2 \rfloor}/61 - 4$ vertices.[19] (If the multiplication function had polynomial-size OBDDs, it would show that integer factorization is in P/poly, which is not known to be true.[20])

Researchers have suggested refinements on the BDD data structure, giving way to a number of related graphs, such as BMDs (binary moment diagrams), ZDDs (zero-suppressed decision diagrams), FBDDs (free binary decision diagrams), FDDs (functional decision diagrams), PDDs (parity decision diagrams), and MTBDDs (multiple terminal BDDs).

Many logical operations on BDDs, such as conjunction, disjunction and negation, can be implemented by polynomial-time graph manipulation algorithms.[21]: 20 However, repeating these operations several times, for example forming the conjunction or disjunction of a set of BDDs, may in the worst case result in an exponentially big BDD. This is because any of the preceding operations for two BDDs may result in a BDD with a size proportional to the product of the BDDs' sizes, and consequently for several BDDs the size may be exponential in the number of operations. Variable ordering needs to be considered afresh; what may be a good ordering for (some of) the set of BDDs may not be a good ordering for the result of the operation. Also, since constructing the BDD of a Boolean function solves the NP-complete Boolean satisfiability problem and the co-NP-complete tautology problem, constructing the BDD can take exponential time in the size of the Boolean formula even when the resulting BDD is small.

Computing existential abstraction over multiple variables of reduced BDDs is NP-complete.[22]

Model-counting, counting the number of satisfying assignments of a Boolean formula, can be done in polynomial time for BDDs. For general propositional formulas the problem is ♯P-complete and the best known algorithms require exponential time in the worst case.
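To make the path-following evaluation described earlier concrete, here is a minimal Python sketch rather than a full BDD library: decision nodes are modeled as (variable, low child, high child) tuples with the Booleans as terminals, hand-encoding the example function $\bar{x}_1\bar{x}_2\bar{x}_3 + x_1x_2 + x_2x_3$ from above and evaluating it along a single root-to-terminal path:

# Decision nodes are (variable index, low child, high child); terminals are
# the Python Booleans False and True. This hand-built reduced, ordered BDD
# encodes the example function f = ~x1~x2~x3 + x1x2 + x2x3 from the text.
node_x3_pos = (3, False, True)               # f = x3   (reached when x1=0, x2=1)
node_x3_neg = (3, True, False)               # f = ~x3  (reached when x1=0, x2=0)
node_x2_lo = (2, node_x3_neg, node_x3_pos)   # x1 = 0 branch
node_x2_hi = (2, False, True)                # x1 = 1 branch: f reduces to x2
root = (1, node_x2_lo, node_x2_hi)

def evaluate(node, assignment):
    """Follow low/high edges according to the assignment until a terminal."""
    while isinstance(node, tuple):
        var, low, high = node
        node = high if assignment[var] else low
    return node

print(evaluate(root, {1: 0, 2: 1, 3: 1}))    # True, matching f(0,1,1) = 1 above

Because ROBDDs are canonical for a fixed variable order, any correct construction of this function with the ordering x1 < x2 < x3 would yield a structurally identical graph, which is the basis of equivalence checking.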
https://en.wikipedia.org/wiki/Binary_decision_diagram
A false awakening is a vivid and convincing dream about awakening from sleep, while the dreamer in reality continues to sleep. After a false awakening, subjects often dream they are performing their daily morning routine such as showering or eating breakfast. False awakenings, mainly those in which one dreams that they have awoken from a sleep that featured dreams, take on aspects of a double dream or a dream within a dream. A classic example in fiction is the double false awakening of the protagonist in Gogol's Portrait (1835).

Studies have shown that false awakenings are closely related to lucid dreaming, and the two often transform into one another. The only differentiating feature between them is that the dreamer has a logical understanding of the dream in a lucid dream, while that is not the case in a false awakening.[1] Once dreamers realize they have falsely awakened, they either wake up or begin lucid dreaming.[1]

A false awakening may occur following a dream or following a lucid dream (one in which the dreamer has been aware of dreaming). Particularly, if the false awakening follows a lucid dream, the false awakening may turn into a "pre-lucid dream",[2] that is, one in which the dreamer may start to wonder if they are really awake and may or may not come to the correct conclusion. In a study by Harvard psychologist Deirdre Barrett, 2,000 dreams from 200 subjects were examined and it was found that false awakenings and lucidity were significantly more likely to occur within the same dream or within different dreams of the same night. False awakenings often preceded lucidity as a cue, but they could also follow the realization of lucidity, often losing it in the process.[3]

Because the mind still dreams after a false awakening, there may be more than one false awakening in a single dream. Subjects may dream they wake up, eat breakfast, brush their teeth, and so on; suddenly awake again in bed (still in a dream), begin morning rituals again, awaken again, and so forth. The philosopher Bertrand Russell claimed to have experienced "about a hundred" false awakenings in succession while coming around from a general anesthetic.[4]

Giorgio Buzzi suggests that false awakenings (FAs) may indicate the occasional reappearance of a vestigial (or otherwise anomalous) REM sleep in the context of disturbed or hyperaroused sleep (lucid dreaming, sleep paralysis, or situations of high anticipation). This peculiar form of REM sleep permits the replay of unaltered experiential memories, thus providing a unique opportunity to study how waking experiences interact with the hypothesized predictive model of the world. In particular, it could permit a glimpse of the protoconscious world without the distorting effect of ordinary REM sleep.[5] In accordance with the proposed hypothesis, a high prevalence of FAs could be expected in children, whose "REM sleep machinery" might be less developed.[5]

Gibson's dream protoconsciousness theory states that false awakening is shaped by fixed patterns depicting real activities, especially the day-to-day routine. False awakening is often associated with highly realistic environmental details of familiar events such as day-to-day activities or autobiographic and episodic moments.[5]

Certain aspects of life may be dramatized or out of place in false awakenings.
Things may seem wrong: details, like the painting on a wall, not being able to talk or difficulty reading (reportedly, reading in lucid dreams is often difficult or impossible).[6]A common theme in false awakenings is visiting the bathroom, upon which the dreamer will see that their reflection in the mirror is distorted (which can be an opportunity for lucidity, but usually resulting in wakefulness). Celia Greensuggested a distinction should be made between two types of false awakening:[2] Type 1 is the more common, in which the dreamer seems to wake up, but not necessarily in realistic surroundings; that is, not in their own bedroom. A pre-lucid dream may ensue. More commonly, dreamers will believe they have awakened, and then either genuinely wake up in their own bed or "fall back asleep" in the dream. A common false awakening is a "late for work" scenario. A person may "wake up" in a typical room, with most things looking normal, and realize they overslept and missed the start time at work or school. Clocks, if found in the dream, will show time indicating that fact. The resulting panic is often strong enough to truly awaken the dreamer (much like from anightmare). Another common Type 1 example of false awakening can result in bedwetting. In this scenario, the dreamer has had a false awakening and while in the state of dream has performed all the traditional behaviors that precede urinating – arising from bed, walking to the bathroom, and sitting down on the toilet or walking up to a urinal. The dreamer may then urinate and suddenly wake up to find they have wet themselves. The Type 2 false awakening seems to be considerably less common. Green characterized it as follows: The subject appears to wake up in a realistic manner but to an atmosphere of suspense.... The dreamer's surroundings may at first appear normal, and they may gradually become aware of something uncanny in the atmosphere, and perhaps of unwanted [unusual] sounds and movements, or they may "awake" immediately to a "stressed" and "stormy" atmosphere. In either case, the end result would appear to be characterized by feelings of suspense, excitement or apprehension.[7] Charles McCreerydraws attention to the similarity between this description and the description by the German psychopathologistKarl Jaspers(1923) of the so-called "primary delusionary experience" (a general feeling that precedes more specific delusory belief).[8]Jaspers wrote: Patients feel uncanny and that there is something suspicious afoot. Everything gets anew meaning. The environment is somehow different—not to a gross degree—perception is unaltered in itself but there is some change which envelops everything with a subtle, pervasive and strangely uncertain light.... Something seems in the air which the patient cannot account for, a distrustful, uncomfortable, uncanny tension invades him.[9] McCreery suggests this phenomenological similarity is not coincidental and results from the idea that both phenomena, the Type 2 false awakening and the primary delusionary experience, are phenomena of sleep.[10]He suggests that the primary delusionary experience, like other phenomena of psychosis such as hallucinations and secondary or specific delusions, represents an intrusion into waking consciousness of processes associated withstage 1 sleep. 
It is suggested that the reason for these intrusions is that the psychotic subject is in a state of hyperarousal, a state that can lead to what Ian Oswald called "microsleeps" in waking life.[11]

Other researchers doubt that these are clearly distinguished types, as opposed to being points on a subtle spectrum.[12]

Clinical and neurophysiological descriptions of false awakening are rare. One notable report, by Takeuchi et al.,[13] was considered by some experts to be a case of false awakening. It depicts a hypnagogic hallucination of an unpleasant and fearful feeling of presence in a sleep lab, with the perception of having risen from the bed. The polysomnography showed abundant trains of alpha rhythm on EEG (sometimes blocked by REMs mixed with slow eye movements and low muscle tone). Conversely, the two experiences of false awakening monitored here were close to regular REM sleep. Even quantitative analysis clearly shows predominantly theta waves, suggesting that these two experiences are a product of a dreaming rather than a fully conscious brain.[14]
https://en.wikipedia.org/wiki/Dream_within_a_dream
Wardenclyffe Tower(1901–1917), also known as theTesla Tower, was an early experimentalwirelesstransmission station designed and built byNikola TeslaonLong Islandin 1901–1902, located in the village ofShoreham, New York. Tesla intended to transmit messages,telephony, and evenfacsimile imagesacross theAtlantic Oceanto England and to ships at sea based on his theories of using theEarthto conduct the signals. His decision to increase the scale of the facility and implement his ideas ofwireless power transferto better compete withGuglielmo Marconi's radio-basedtelegraphsystem was met with refusal to fund the changes by the project's primary backer, financierJ. P. Morgan. Additional investment could not be found, and the project was abandoned in 1906, never to become operational. In an attempt to satisfy Tesla's debts, the tower was demolished for scrap in 1917 and the property taken inforeclosurein 1922. For 50 years, Wardenclyffe was a processing facility producing photography supplies. Many buildings were added to the site and the land it occupies has been trimmed down from 200 acres (81 ha) to 16 acres (6.5 ha) but the original, 94 by 94 ft (29 by 29 m), brick building designed byStanford Whiteremains standing. In the 1980s and 2000s, hazardous waste from the photographic era wascleaned up, and the site was sold and cleared for new development. A grassroots campaign to save the site succeeded in purchasing the property in 2013, with plans to builda future museum dedicated to Nikola Tesla. In 2018, the property was listed on theNational Register of Historic Places.[2] Tesla's design for Wardenclyffe grew out of his experiments beginning in the early 1890s. His primary goal in these experiments was to develop a new wireless power transmission system. Tesla discarded the idea of using the newly discoveredHertzian waves(radio waves), detected in 1888 by German physicistHeinrich Rudolf Hertz. Tesla doubted they existed and he followed scientific thought of the period that, if they did exist, this was just a type of invisible light which would travel in straight lines the wayvisible lightdid, meaning they would travel straight out into space and be "hopelessly lost".[3][4] In laboratory work and later large-scale experiments atColorado Springs, Colorado, in 1899, Tesla developed his own ideas on how aworldwide wireless systemwould work. 
He theorized from these experiments that if he injected electric current into the Earth at just the right frequency he could harness what he believed was the planet's own electrical charge and cause it to resonate at a frequency that would be amplified in "standing waves" that could be tapped anywhere on the planet to run devices or, through modulation, carry a signal.[5] His system was based more on 19th-century ideas of electrical conduction and telegraphy than on the newer theories of electromagnetic waves, with an electrical charge being conducted through the ground and returned through the air.[6] Tesla's design used the concept of a charged conductive upper layer in the atmosphere,[6] a theory dating back to an 1872 proposal for a wireless power system by Mahlon Loomis.[7] Tesla not only believed that he could use this layer as the return path in his electrical conduction system, but that the power flowing through it would make it glow, providing nighttime lighting for cities and shipping lanes.[7] In February 1901, in a Collier's Weekly article titled "Talking With Planets", Tesla described his "system of energy transmission and of telegraphy without the use of wires" as: (using) the Earth itself as the medium for conducting the currents, thus dispensing with wires and all other artificial conductors ... a machine which, to explain its operation in plain language, resembled a pump in its action, drawing electricity from the Earth and driving it back into the same at an enormous rate, thus creating ripples or disturbances which, spreading through the Earth as through a wire, could be detected at great distances by carefully attuned receiving circuits. In this manner I was able to transmit to a distance, not only feeble effects for the purposes of signaling, but considerable amounts of energy, and later discoveries I made convinced me that I shall ultimately succeed in conveying power without wires, for industrial purposes, with high economy, and to any distance, however great.[8] Although Tesla demonstrated wireless power transmission at Colorado Springs, lighting electric lamps mounted outside the building where he had his large experimental coil,[9] he did not scientifically test his theories. He believed he had achieved Earth resonance which, according to his theory, would work at any distance.[10] Tesla was back in New York in January 1900. He had convinced his friend Robert Underwood Johnson, editor of The Century Magazine, to let him publish an article covering his work, and Johnson had even sent a photographer to Colorado Springs the previous year to photograph Tesla's experiments. The article written by Tesla, titled "The Problem of Increasing Human Energy", appeared in the June 1900 edition of Century Magazine. Instead of the understandable scientific description Johnson had hoped for,[11] it was more of a lengthy philosophical treatise in which Tesla described his futuristic ideas on harnessing the sun's energy, control of the weather with electricity, wireless control, and how future inventions would make war impossible. It also contained what were to become iconic images by photographer Dickenson Alley of Tesla and his Colorado Springs experiments. Tesla made the rounds in New York trying to find investors for his system of wireless transmission, wining and dining them at the Waldorf-Astoria's Palm Garden (the hotel where he was living at the time), The Players Club, and Delmonico's.[12] Tesla first went to his old friend George Westinghouse for help.
Westinghouse seemed like a natural fit for the project, given the large-scale AC equipment Westinghouse Electric manufactured and Tesla's need for similar equipment. Tesla asked Westinghouse to "meet me on some fair terms in furnishing me the machinery, retaining the ownership of the same and interesting yourself to a certain extent". Though Westinghouse declined to buy into the project, he did agree to lend Tesla $6,000 ($226,776 in 2024).[13] Westinghouse suggested Tesla pursue some of the rich venture capitalists. Tesla talked to John Jacob Astor and Thomas Fortune Ryan, and even sent a cabochon sapphire ring as a gift to Henry O. Havemeyer. No investment was forthcoming from Havemeyer or Ryan, but Astor did buy 500 shares in Tesla's company.[14] Tesla gained the attention of financier J. P. Morgan in November 1900. Morgan was impressed by Guglielmo Marconi's feat of sending reports from the America's Cup yacht races off Long Island back to New York City via radio the previous year, and he was dubious about the feasibility and patent priority of Tesla's system.[15][16] In several discussions, Tesla assured Morgan his system was superior to, and based on patents that superseded, those of Marconi and of other wireless inventors, and that it would far outpace the performance of its main competitor, the transatlantic telegraph cable. Morgan signed a contract with Tesla in March 1901, agreeing to give the inventor $150,000 ($5.67 million in 2024) to develop and build a wireless station[16] on Long Island capable of sending wireless messages to London as well as to ships at sea. The deal also gave Morgan a 51% interest in the company as well as a 51% share in present and future wireless patents developed from the project.[17] Tesla began working on his wireless station immediately. As soon as the contract was signed with Morgan in March 1901, he placed an order for generators and transformers with Westinghouse Electric. Tesla's plans changed radically after he read a June 1901 Electrical Review article by Marconi titled "Syntonic Wireless Telegraph".[16][18] At this point, Marconi was transmitting radio signals beyond the range most physicists thought possible (over the horizon), and the description of the Italian inventor's use of a "Tesla coil" "connected to the Earth" led Tesla to believe Marconi was copying his earth resonance system to do it.[16][19] Tesla, believing a small pilot system capable of sending Morse code yacht race results to Morgan in Europe would not be able to capture the attention of potential investors, decided to scale up his designs with a much more powerful transmitter, incorporating his ideas of advanced telephone and image transmission[citation needed] as well as his ideas of wireless power delivery. In July 1901, Tesla informed Morgan of his planned changes to the project and the need for much more money to build it. He explained the more grandiose plan as a way to leap ahead of competitors and secure much larger profits on the investment. With Tesla essentially proposing a breach of contract, Morgan refused to lend additional funds and demanded an account of the money already spent.[16] Tesla claimed a few years later that funds also ran short because of Morgan's role in triggering the stock market Panic of 1901, which made everything Tesla had to buy much more expensive.[16] Morgan stated no additional funds would be supplied, but Tesla continued with the project.
He explored the idea of building several small towers, or a single tower 300 feet (91 m) or even 600 feet (180 m) tall, to transmit the type of low-frequency longwaves that Tesla thought were needed to resonate the Earth. His friend, architect Stanford White, who was working on designing structures for the project, calculated that a 600-foot tower would cost $450,000 ($17 million in 2024), and the idea had to be canceled. Tesla purchased 200 acres (81 ha) of land close to a railway line 65 miles (105 km) from New York City in Shoreham on Long Island Sound from land developer James S. Warden, who was building a resort community known as Wardenclyffe-On-Sound. Tesla would later state that his plan was eventually to make Wardenclyffe a hub "city" in a worldwide system of 30 wireless plants, sending messages and media content and broadcasting electrical power.[16] The land surrounding the Wardenclyffe plant was intended to be what Tesla would later in life refer to as a "radio city", with factories producing Tesla's patented devices.[20] Warden expected to build housing on part of his remaining land for the expected 2,000–2,500 Tesla employees. At the end of July 1901, Tesla closed a contract for the building of the wireless telegraph plant and electrical laboratory at Wardenclyffe. The final design Tesla started building at Wardenclyffe consisted of a wood-framed tower 186 feet (57 m) tall, topped by a 55-ton hemispherical structure 68 feet (21 m) in diameter, referred to as a cupola, made of steel (some report it was a better-conducting material, such as copper). The structure was designed so that each piece could be taken out and replaced as necessary. The main building occupied the rest of the facility grounds. Stanford White designed the Wardenclyffe facility's main building. It included a laboratory area, instrumentation room, boiler room, generator room, and machine shop. Inside the main building there were electromechanical devices, electrical generators, electrical transformers, glass-blowing equipment, X-ray devices, Tesla coils, a remote-controlled boat, cases with bulbs and tubes, wires, cables, a library, and an office. It was constructed in the style of the Italian Renaissance. The tower was designed by W. D. Crow, an associate of White. There was a great deal of construction under the tower to establish some form of ground connection, but Tesla and his workers kept the public and the press away from the project, so little is known about it. Descriptions (some from Tesla's 1923 testimony in foreclosure proceedings on the property) include that the facility had a ten-by-twelve-foot wood-and-steel-lined shaft sunk 120 feet (37 m) into the ground beneath the tower, with a stairway inside it.
Tesla stated that at the bottom of the shaft he "had special machines rigged up which would push the iron pipe, one length after another, and I pushed these iron pipes, I think sixteen of them, three hundred feet, and then the current through these pipes takes hold of the earth."[21] In Tesla's words, the function of this was "to have a grip on the earth so the whole of this globe can quiver".[22][23] There are also contemporaneous and later descriptions of four 100-foot-long tunnels, possibly brick-lined and waterproofed, radiating from the bottom of the shaft north, south, east, and west, and terminating back at ground level in little brick igloos.[24] Speculation on the tunnels ranges from them being for drainage, to acting as access ways, to enhancing the ground connection or resonance by interacting with the water table below the tower, perhaps by being filled with salt water or liquid nitrogen.[21][24] The Tesla biographer John Joseph O'Neill noted that the cupola at the top of the 186-foot tower had a 5-foot hole in its top where ultraviolet lights were to be mounted, perhaps to create an ionized path up through the atmosphere that could conduct electricity.[25] How Tesla intended to employ the ground conduction method and the atmospheric method in Wardenclyffe's design is unknown.[26] Power for the entire system was to be provided by a coal-fired 200-kilowatt Westinghouse alternating current industrial generator. Construction began in September 1901, but money was so short (with Morgan still owing Tesla the remainder of the original $150,000 promised) that Tesla complained in a letter to White he was facing foreclosure. Tesla kept writing Morgan letters pleading for more money and assuring the financier his wireless system would be superior to Marconi's, but in December Tesla's plans were dealt another serious blow when Marconi announced to the world that he had sent a wireless transmission (the Morse code for the letter S) across the Atlantic. Construction at Wardenclyffe continued in 1902, and that June Tesla began moving his laboratory operations from his 46 East Houston Street laboratory to the 94-foot-square brick building at Wardenclyffe. By the end of 1902 the tower reached its full height of 187 feet. What Tesla was doing at Wardenclyffe, and the site itself, was generally kept from the public. Tesla would respond to reporters' inquiries by stating that there was a similar wireless plant in Scotland and that "We have been sending wireless messages for long distances from this station for some time, but whether we are going into the telegraph field on a commercial basis I cannot say at present."[27] Tesla continued to write to Morgan asking the investor to reconsider his position on the contract and invest the additional funds the project needed. In a July 3, 1903 letter Tesla wrote, "Will you help me or let my great work — almost complete — go to pots?" Morgan's reply on July 14 was, "I have received your letter and in reply would say that I should not feel disposed at present to make any further advances". The night of Morgan's reply, and for several nights after, newspapers reported that the Wardenclyffe tower came alive, shooting off bright flashes that lit up the night sky. No explanation was forthcoming from Tesla or any of his workers as to the meaning of the display, and Wardenclyffe never seemed to operate again. Tesla's finances continued to unravel.
Investor money on Wall Street was continuing to flow to Marconi's system, which was making regular transmissions and doing so with equipment far less expensive than the "wireless plant" Tesla was attempting to build. Some in the press began turning against Tesla's project, claiming it was a hoax,[28] and the "rich man's panic" of late 1903 on Wall Street reduced investment further.[29][30][31] Some money came from Thomas Fortune Ryan, but the funds went towards the debt on the project instead of funding any further construction.[12] Investors seemed to be shying away from putting money into a project that J. P. Morgan had abandoned.[12] Tesla continued to write to Morgan trying to get extra funding, stating his "knowledge and ability [...] if applied effectively would advance the world a century". Morgan would only reply through his secretary, saying "it will be impossible for [me/Morgan] to do anything in the matter".[32] Tesla's attempts to raise money by getting the US Navy interested in his remote-controlled boat and torpedo, and other attempts to commercialize his inventions, went nowhere. In May 1905, Tesla's patents on alternating current motors and other methods of power transmission expired, halting royalty payments and causing a further severe reduction in funding for the Wardenclyffe Tower. In an attempt to find alternative funding, Tesla advertised the services of the Wardenclyffe facility, but he met with little success. In 1906, the financial problems and other events may have led to what Tesla biographer Marc J. Seifer suspects was a nervous breakdown on Tesla's part.[33] In June, architect Stanford White was murdered by Harry Kendall Thaw over White's affair with Thaw's wife, actress Evelyn Nesbit. In October, longtime investor William Rankine died of a heart attack. George Scherff, Tesla's chief manager who had been supervising Wardenclyffe, had to leave to find other employment. The people living around Wardenclyffe noticed that the Tesla plant seemed to have been abandoned without notice.[34] In 1904, Tesla took out a mortgage on the Wardenclyffe property with George C. Boldt, proprietor of the Waldorf-Astoria Hotel, to cover Tesla's living expenses at the hotel. In 1908, Tesla procured a second mortgage from Boldt to cover further expenses.[35][36] The facility was partially abandoned around 1911, and the tower structure deteriorated. Between 1912 and 1915, Tesla's finances unraveled, and when the funders wanted to know how they were going to recoup their investments, Tesla was unable to give satisfactory answers. The March 1, 1916 edition of the publication Export American Industries ran a story titled "Tesla's Million Dollar Folly" describing the abandoned Wardenclyffe site: There everything seemed left as for a day — chairs, desks, and papers in businesslike array. The great wheels seemed only awaiting Monday life. But the magic word has not been spoken, and the spell still rests on the great plant.[37] By mid-1917 the facility's main building had been breached and vandalized.[38] By 1915, Tesla's accumulated debt at the Waldorf-Astoria was around US$20,000 (equivalent to $622,000 in 2024). When Tesla was unable to make any further payments on the mortgages, Boldt foreclosed on the Wardenclyffe property.[35] Boldt went on to make the property available for sale and decided to demolish the tower for scrap.[39] On July 4, 1917, the Smiley Steel Company of New York began demolition of the tower by dynamiting it.
The tower was knocked onto a tilt by the initial explosion but was not totally demolished until September.[40][41] The scrap value realized was $1,750 (equivalent to $43,000 in 2024). Since this was during World War I, a rumor spread, picked up by newspapers and other publications, that the tower had been demolished on orders of the United States Government, with claims that German spies were using it as a radio transmitter or observation post, or that it was being used as a landmark for German submarines.[41][42] Tesla was not pleased with what he saw as attacks on his patriotism via the rumors about Wardenclyffe, but since the original mortgages with Boldt, as well as the foreclosure, had been kept off the public record in order to hide his financial difficulties, Tesla was not able to reveal the real reason for the demolition.[40][41][43] On April 20, 1922, Tesla lost an appeal of judgment on Boldt's original foreclosure.[44] In 1925, ownership of the property was transferred to Walter L. Johnson of Brooklyn. On March 6, 1939, Plantacres, Inc. purchased the facility's land and subsequently leased it to Peerless Photo Products, Inc. AGFA Corporation bought the property from Peerless and used the site from 1969 to 1992 before closing the facility. The site has undergone a final cleanup of waste produced during its photo products era. The cleanup was conducted under the scrutiny of the New York State Department of Environmental Conservation and paid for by AGFA. In 2009, AGFA put the property up for sale for $1,650,000. The main building remains standing to this day; AGFA advertised that the land could "be delivered fully cleared and level." It said it had spent $5 million through September 2008 cleaning up silver and cadmium.[45][46][47] A non-profit preservation organization supported by The Oatmeal purchased the land in 2013 with hopes of creating a museum to Tesla there.[48] On February 14, 1967, the nonprofit public benefit corporation Brookhaven Town Historic Trust was established. On March 3, 1967, it selected the Wardenclyffe facility for designation as a historic site, the first site to be preserved by the Trust. The Brookhaven Town Historic Trust was rescinded by resolution on February 1, 1972; no appointments were ever made after a legal opinion was received that it had never been set up properly.[49] On July 7, 1976, a plaque from Yugoslavia was installed by representatives from Brookhaven National Laboratory[50] near the entrance of the building. It reads:[51]

IN THIS BUILDING
DESIGNED BY STANFORD WHITE, ARCHITECT
NIKOLA TESLA
BORN SMILJAN, YUGOSLAVIA 1856—DIED NEW YORK, U.S.A. 1943
CONSTRUCTED IN 1901–1905 WARDENCLYFFE
HUGE RADIO STATION WITH ANTENNA TOWER
187 FEET HIGH /DESTROYED 1917/, WHICH
WAS TO HAVE SERVED AS HIS FIRST WORLD
COMMUNICATIONS SYSTEM.
IN MEMORY OF 120TH ANNIVERSARY OF TESLA'S BIRTH
AND 200TH ANNIVERSARY OF THE U.S.A INDEPENDENCE

The sign was stolen from the property in November 2009. An anonymous benefactor has offered a $2,000 reward for its return.[52] In 1976, an application was filed to nominate the main building for listing on the National Register of Historic Places (NRHP). It failed to gain approval. The Tesla Wardenclyffe Project, Inc. was established in 1994 for the purpose of seeking placement of the Wardenclyffe laboratory-office building and the Tesla tower foundation on both the New York State Register and the NRHP.
Its mission is the preservation and adaptive reuse of Wardenclyffe, the century-old laboratory of electrical pioneer Nikola Tesla located in Shoreham, Long Island, New York.[53] In October 1994, a second application for formal nomination was filed. The New York State Office of Parks, Recreation and Historic Preservation conducted inspections and determined that the facility meets New York State criteria for historic designation. A second visit was made on February 25, 2009. The site cannot be registered until it is nominated by a willing owner. Designation of the structure as a National Landmark awaits completion of plant decommissioning activities by its present owner.[54] In August 2012, concerned about an apparent offer to purchase the site and develop it for commercial use, the webcomic The Oatmeal launched a fundraiser for the Tesla Science Center at Wardenclyffe to raise $1.7 million to purchase the property, with the hope of eventually building a museum on the grounds.[55] Jane Alcorn, president of the nonprofit group The Tesla Science Center at Wardenclyffe, and Matthew Inman, creator of The Oatmeal, collaborated in 2012 to honor "the Father of the Electric Age" by preserving the Wardenclyffe facility as a science center and museum. They initiated the Let's Build a Goddamn Tesla Museum fund-raising campaign on the Indiegogo crowdfunding site to raise funding to buy the Wardenclyffe property and restore the facility. The project reached its goal of raising $850,000 within a week and went on to exceed the requested amount, including a $33,333 donation from the producers of the Tesla film "Fragments from Olympus-The Vision of Nikola Tesla".[56] The campaign also attracted donations from benefactors such as Elon Musk, CEO of Tesla, Inc.[57] The money raised within one week was enough to secure a matching grant from the state of New York, allowing the project to meet the seller's asking price of $1.6 million;[57][58] the state had agreed to match donations up to half that amount.[59] A total of $1.37 million was donated, and the matching grant from the State of New York brought the total collected to over $2.2 million. The surplus was to be used to fund the cleaning and restoration of the property. Tesla, Wardenclyffe, and the museum fundraising effort were the subject of a documentary called Tower to the People – Tesla's Dream at Wardenclyffe Continues.[60][61][62] On May 2, 2013, The Tesla Science Center at Wardenclyffe announced that it had purchased the 15.69-acre laboratory site from Agfa Corporation and would begin to raise "about $10 million to create a science learning center and museum worthy of Tesla and his legacy".[48] On May 13, 2014, The Oatmeal published a comic called "What It's Like to Own a Model S, Part 2" to request a further donation of $8 million from Tesla Motors founder Elon Musk.[63] The next day, Musk tweeted that he "would be happy to help".[64] On July 10, 2014, during a 158th birthday celebration for Tesla at the Wardenclyffe site, it was announced that Musk would donate $1 million toward funding the museum and install a Tesla Motors supercharging station on site.[65] The center plans to offer several programs, including science teacher associations, conferences, symposia, field trips, associations with science competitions, and other science programs. Planned permanent exhibits include a Tesla exhibit, exploratorium-type exhibits, and a living museum.[66] On September 23, 2013, the President of Serbia, Tomislav Nikolić, unveiled a monument to Tesla at the Wardenclyffe site.
Nikolić said that he had planned to push for the monument to be displayed at the United Nations, but chose Wardenclyffe once he learned it had been purchased for the center.[67] Emergency renovations on the chimney started in February 2020.[68] A groundbreaking event took place in June 2023.[69] Wardenclyffe is located near the Shoreham Post Office and Shoreham Fire House on Route 25A in Shoreham, Long Island, New York. Wardenclyffe was divided into two main sections: the tower, located toward the back, and the main building; together these now compose the entire facility grounds. At one time the property was about 200 acres (0.81 km2); now it consists of slightly less than 16 acres (65,000 m2). On 21 November 2023, just months after the groundbreaking, the laboratory building caught fire. Over 100 firefighters from across Long Island helped contain the fire. Much of the original brick building survived.[70]
https://en.wikipedia.org/wiki/Wardenclyffe_tower
Stochastic chains with memory of variable length are a family of stochastic chains of finite order in a finite alphabet, such that, at each instant, only a finite suffix of the past, called the context, is needed to predict the next symbol. These models were introduced in the information theory literature by Jorma Rissanen in 1983,[1] as a universal tool for data compression, but more recently they have been used to model data in areas such as biology,[2] linguistics[3] and music.[4] A stochastic chain with memory of variable length is a stochastic chain (X_n)_{n∈Z}, taking values in a finite alphabet A, characterized by a probabilistic context tree (τ, p), where τ is the set of contexts and p the associated family of transition probabilities. The class of stochastic chains with memory of variable length was introduced by Jorma Rissanen in the article A universal data compression system.[1] Such chains were popularized in the statistical and probabilistic community by P. Bühlmann and A. J. Wyner in 1999, in the article Variable Length Markov Chains. Named "variable length Markov chains" (VLMC) by Bühlmann and Wyner, these chains are also known as "variable-order Markov models" (VOM), "probabilistic suffix trees"[2] and "context tree models".[5] The name "stochastic chains with memory of variable length" seems to have been introduced by Galves and Löcherbach, in 2008, in the article of the same name.[6] Consider a system formed by a lamp, an observer, and a door between the two. The lamp has two possible states: on, represented by 1, or off, represented by 0. When the lamp is on, the observer may see the light through the door, depending on the state of the door at that time: open, 1, or closed, 0. Such states are independent of the state of the lamp. Let (X_n)_{n≥0} be a Markov chain representing the state of the lamp, taking values in A = {0,1}, and let p be its probability transition matrix. Also, let (ξ_n)_{n≥0} be a sequence of independent random variables representing the door's states, also taking values in A, independent of the chain (X_n)_{n≥0}, whose distribution depends on a parameter ε, where 0 < ε < 1. Define a new sequence (Z_n)_{n≥0} in which Z_n = 1 only when the lamp is on and the door is open, that is, Z_n = X_n ξ_n. To predict the next state of Z, one needs to determine the last instant at which the observer could see the lamp on, that is, the largest instant k, with k < n, at which Z_k = 1. Using a context tree it is possible to represent the past states of the sequence, showing which of them are relevant to identifying the next state. The stochastic chain (Z_n)_{n∈Z} is, then, a chain with memory of variable length, taking values in A and compatible with an appropriate probabilistic context tree (τ, p). Given a sample X_l, …, X_n, one can find the appropriate context tree using the following algorithms. In the article A Universal Data Compression System,[1] Rissanen introduced a consistent algorithm to estimate the probabilistic context tree that generates the data. The algorithm can be summarized in two steps: first a maximal candidate context is built, and then it is shortened by successive pruning. Let X_0, …, X_{n-1} be a sample from a finite probabilistic context tree (τ, p).
For any sequence x_{-j}^{-1} with j ≤ n, denote by N_n(x_{-j}^{-1}) the number of occurrences of that sequence in the sample. Rissanen first builds a maximal candidate context, given by X_{n-K(n)}^{n-1}, where K(n) = C log n and C is an arbitrary positive constant. The intuitive reason for the choice of C log n is the impossibility of estimating the probabilities of sequences of length greater than log n from a sample of size n. From there, Rissanen shortens the maximal candidate by successively cutting branches according to a sequence of tests based on a statistical likelihood ratio. More formally, if Σ_{b∈A} N_n(x_{-k}^{-1} b) > 0, define the estimator of the transition probability p by

p̂_n(a | x_{-k}^{-1}) = N_n(x_{-k}^{-1} a) / Σ_{b∈A} N_n(x_{-k}^{-1} b),

where x_{-j}^{-1} a = (x_{-j}, …, x_{-1}, a). If Σ_{b∈A} N_n(x_{-k}^{-1} b) = 0, define p̂_n(a | x_{-k}^{-1}) = 1/|A|. For i ≥ 1, define the statistic

Λ_n(x_{-i}^{-1}) = 2 Σ_{y∈A} Σ_{a∈A} N_n(y x_{-i}^{-1} a) log [ p̂_n(a | y x_{-i}^{-1}) / p̂_n(a | x_{-i}^{-1}) ],

where y x_{-i}^{-1} = (y, x_{-i}, …, x_{-1}). Note that Λ_n(x_{-i}^{-1}) is the log-likelihood ratio for testing the consistency of the sample with the probabilistic context tree (τ, p) against the alternative that it is consistent with (τ′, p′), where τ and τ′ differ only by a set of sibling nodes. The length of the currently estimated context is the largest i for which Λ_n(x_{-i}^{-1}) exceeds the threshold C log n, where C is any positive constant. Finally, Rissanen[1] proved the following consistency result: given a sample X_0, …, X_{n-1} of a finite probabilistic context tree (τ, p), the estimated tree coincides with τ with probability tending to one as n → ∞. The estimator of the context tree by BIC with a penalty constant c > 0 is defined as the tree that maximizes the log-likelihood of the sample penalized by c log n times the number of degrees of freedom of the tree. The smallest maximizer criterion[3] selects the smallest tree τ from a set of champion trees C whose log-likelihood cannot be significantly improved by any larger champion tree.
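The counting and estimation steps above are easy to make concrete. Below is a minimal Python sketch, not from the literature, that simulates the lamp-and-door example and then computes the empirical counts N_n(w) and transition-probability estimates p̂_n(a | w) for candidate contexts; the lamp's transition matrix and the door parameter ε are illustrative assumptions.

```python
import random
from collections import defaultdict

random.seed(0)

# --- Simulate the lamp/door example (illustrative parameters) ---
p = {0: {0: 0.4, 1: 0.6}, 1: {0: 0.3, 1: 0.7}}  # assumed lamp transition matrix
eps = 0.2                                        # assumed door parameter

n = 100_000
x = [1]
for _ in range(n - 1):
    x.append(1 if random.random() < p[x[-1]][1] else 0)        # lamp chain X_n
xi = [1 if random.random() < 1 - eps else 0 for _ in range(n)]  # i.i.d. door states
z = [a * b for a, b in zip(x, xi)]  # observer sees light only if lamp on AND door open

# --- Empirical counts N_n(w) and estimates p_hat(a | w) ---
def counts(sample, max_len):
    """N[w] = number of occurrences of the context w (a tuple) in the sample."""
    N = defaultdict(int)
    for j in range(len(sample)):
        for k in range(1, max_len + 1):
            if j - k < 0:
                break
            N[tuple(sample[j - k:j])] += 1
    return N

def p_hat(N, w, a, alphabet=(0, 1)):
    """p_hat(a | w) = N(wa) / sum_b N(wb); uniform if the context was never seen."""
    total = sum(N[w + (b,)] for b in alphabet)
    return N[w + (a,)] / total if total > 0 else 1 / len(alphabet)

N = counts(z, max_len=4)
# After a 1 (lamp surely on), the next symbol's law does not depend on older symbols:
for w in [(1,), (0, 1), (1, 1)]:
    print(w, round(p_hat(N, w, 1), 3))
# After runs of 0s the estimates vary with the run length, so longer contexts matter:
for w in [(0,), (0, 0), (0, 0, 0)]:
    print(w, round(p_hat(N, w, 1), 3))
```

With a long enough sample, the estimates agree for every context ending in 1 but differ across runs of 0s of different lengths, which is exactly the variable-length memory structure that a context tree captures.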
https://en.wikipedia.org/wiki/Stochastic_chains_with_memory_of_variable_length
Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the network, forming a peer-to-peer network of nodes.[1] In addition, a personal area network (PAN) is also in nature a decentralized peer-to-peer network, typically between two devices.[2] Peers make a portion of their resources, such as processing power, disk storage, or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts.[3] Peers are both suppliers and consumers of resources, in contrast to the traditional client–server model in which the consumption and supply of resources are divided.[4] While P2P systems had previously been used in many application domains,[5] the architecture was popularized by the Internet file sharing system Napster, originally released in 1999.[6] P2P is used in many protocols such as BitTorrent file sharing over the Internet[7] and in personal networks like Miracast displaying and Bluetooth radio.[8] The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general. While P2P systems had previously been used in many application domains,[5] the concept was popularized by file sharing systems such as the music-sharing application Napster. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems".[9] The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1.[10] Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was more open than the present day: two machines connected to the Internet could send packets to each other without firewalls and other security measures.[11][9][page needed] This contrasts with the broadcasting-like structure of the web as it has developed over the years.[12][13][14] As a precursor to the Internet, ARPANET was a successful peer-to-peer network where "every participating node could request and serve content". However, ARPANET was not self-organized, and it could not "provide any means for context or content-based routing beyond 'simple' address-based routing."[14] This gap was addressed by Usenet, a distributed messaging system often described as an early peer-to-peer architecture. It was developed in 1979 as a system that enforces a decentralized model of control.[15] The basic model is a client–server model from the user or client perspective that offers a self-organizing approach to newsgroup servers. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers.
The same consideration applies to SMTP email in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of email clients and their direct connections is strictly a client–server relationship.[16] In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster.[14] Napster was the beginning of peer-to-peer networks as we know them today, where "participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions".[14] A peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network.[17] This model of network arrangement differs from the client–server model, where communication is usually to and from a central server. A typical example of a file transfer that uses the client–server model is the File Transfer Protocol (FTP) service, in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests. Peer-to-peer networks generally implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network.[18] Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers can communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent of the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured (or as a hybrid of the two).[19][20][21] Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other.[22] (Gnutella, Gossip, and Kazaa are examples of unstructured P2P protocols.)[23] Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay.[24] Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of "churn"—that is, when large numbers of peers frequently join and leave the network.[25][26] However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses more CPU/memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers, and any peer searching for it is likely to find it. But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful.[27]
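To see why flooding is expensive and can still miss rare content, here is a minimal Python sketch of a TTL-limited flood over a random overlay (my own illustration, not any deployed protocol): each query is forwarded to every neighbor until the hop budget runs out, so the message count grows quickly with the TTL while a rare item may still not be reached.

```python
import random

random.seed(1)

# Build a random unstructured overlay: each of 500 peers links to 5 random others.
N_PEERS = 500
neighbors = {p: random.sample([q for q in range(N_PEERS) if q != p], 5)
             for p in range(N_PEERS)}

# A "rare" item held by only 3 peers.
holders = set(random.sample(range(N_PEERS), 3))

def flood_search(start, ttl):
    """Forward the query to all neighbors until the TTL expires.
    Returns (found?, number of messages sent)."""
    seen = {start}
    frontier = [start]
    messages = 0
    for _ in range(ttl):
        nxt = []
        for peer in frontier:
            for nb in neighbors[peer]:
                messages += 1          # every forwarded copy costs a message
                if nb in seen:
                    continue
                seen.add(nb)
                if nb in holders:
                    return True, messages
                nxt.append(nb)
        frontier = nxt
    return False, messages

for ttl in (2, 4, 6):
    found, msgs = flood_search(start=0, ttl=ttl)
    print(f"TTL={ttl}: found={found}, messages={msgs}")
```

Running this shows the trade-off directly: small TTLs send few messages but often fail to locate the rare item, while larger TTLs succeed more often at the cost of traffic that grows with every extra hop.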
In structured peer-to-peer networks the overlay is organized into a specific topology, and the protocol ensures that any node can efficiently[28] search the network for a file or resource, even if the resource is extremely rare.[23] The most common type of structured P2P network implements a distributed hash table (DHT),[4][29] in which a variant of consistent hashing is used to assign ownership of each file to a particular peer.[30][31] This enables peers to search for resources on the network using a hash table: that is, (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key.[32][33] However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors[34] that satisfy specific criteria. This makes them less robust in networks with a high rate of churn (i.e. with large numbers of nodes frequently joining and leaving the network).[26][35] More recent evaluations of P2P resource discovery solutions under real workloads have pointed out several issues in DHT-based solutions, such as the high cost of advertising/discovering resources and static and dynamic load imbalance.[36] Notable distributed networks that use DHTs include Tixati, an alternative to BitTorrent's distributed tracker, the Kad network, the Storm botnet, and YaCy. Some prominent research projects include the Chord project, Kademlia, the PAST storage utility, P-Grid, a self-organized and emerging overlay network, and the CoopNet content distribution system.[37] DHT-based networks have also been widely utilized for accomplishing efficient resource discovery[38][39] for grid computing systems, as they aid in resource management and scheduling of applications.
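Since the passage above turns on consistent hashing, here is a toy Python sketch of that idea (an illustration only, not the routing protocol of Chord, Kademlia, or any real DHT): node IDs and keys are hashed onto the same ring, each key is owned by the first node clockwise from it, and a node join reassigns only the keys nearest to the new node.

```python
import bisect
import hashlib

def h(name: str) -> int:
    """Map a node name or key onto a 32-bit hash ring."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest()[:4], "big")

class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        """The first node clockwise from the key's position owns the (key, value) pair."""
        pos = bisect.bisect(self.ring, (h(key), ""))
        return self.ring[pos % len(self.ring)][1]  # wrap around at the end of the ring

    def add(self, node):
        bisect.insort(self.ring, (h(node), node))

ring = HashRing(["peer-a", "peer-b", "peer-c"])
keys = [f"file-{i}" for i in range(8)]
before = {k: ring.owner(k) for k in keys}
ring.add("peer-d")  # a peer joins the overlay
after = {k: ring.owner(k) for k in keys}
moved = [k for k in keys if before[k] != after[k]]
print("keys reassigned after join:", moved)  # only keys near peer-d's position move
```

The design point this illustrates is exactly the churn trade-off in the text: lookups are efficient because ownership is deterministic, but every join or leave obliges neighboring nodes to hand over their share of the keyspace.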
Hybrid models are a combination of peer-to-peer and client–server models.[40] A common hybrid model is to have a central server that helps peers find each other; Spotify was an example of a hybrid model until 2014.[41] There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by pure unstructured peer-to-peer networks. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks, because certain functions, such as searching, do require a centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks.[42] CoopNet (Cooperative Networking) was a proposed system for off-loading serving to peers who have recently downloaded content, proposed by computer scientists Venkata N. Padmanabhan and Kunwadee Sripanidkulchai, working at Microsoft Research and Carnegie Mellon University.[43][44] When a server experiences an increase in load, it redirects incoming peers to other peers who have agreed to mirror the content, thus off-loading balance from the server. All of the information is retained at the server. This system makes use of the fact that the bottleneck is more likely in the outgoing bandwidth than in the CPU, hence its server-centric design. It assigns peers to other peers who are 'close in IP' to its neighbors [same prefix range] in an attempt to use locality. If multiple peers are found with the same file, it designates that the node choose the fastest of its neighbors. Streaming media is transmitted by having clients cache the previous stream and then transmit it piecewise to new nodes. Peer-to-peer systems pose unique challenges from a computer security perspective. Like any other form of software, P2P applications can contain vulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable to remote exploits.[45] Since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", or denial of service attacks. Examples of common routing attacks include "incorrect lookup routing", whereby malicious nodes deliberately forward requests incorrectly or return false results; "incorrect routing updates", where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information; and "incorrect routing network partition", where new nodes joining the network bootstrap via a malicious node, which places them in a partition of the network populated by other malicious nodes.[45] The prevalence of malware varies between different peer-to-peer protocols.[46] Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on the gnutella network contained some form of malware, whereas only 3% of the content on OpenFT contained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% on gnutella and 65% on OpenFT). Another study, analyzing traffic on the Kazaa network, found that 15% of the 500,000-file sample taken were infected by one or more of the 365 different computer viruses tested for.[47] Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on the FastTrack network, the RIAA managed to introduce faked chunks into downloads and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing.[48] Consequently, the P2P networks of today have seen an enormous increase in their security and file verification mechanisms. Modern hashing, chunk verification, and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.[49]
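The hashing and chunk-verification defense mentioned above can be sketched in a few lines. In BitTorrent-style designs a trusted metadata file carries a hash for every piece, and a piece fetched from an untrusted peer is accepted only if its hash matches; the function names and piece size below are illustrative, not a real client's API.

```python
import hashlib

PIECE_SIZE = 4  # tiny pieces for the demo; real clients use e.g. 256 KiB

def piece_hashes(data: bytes) -> list[bytes]:
    """Hashes published in the trusted metadata, one per fixed-size piece."""
    return [hashlib.sha256(data[i:i + PIECE_SIZE]).digest()
            for i in range(0, len(data), PIECE_SIZE)]

def verify_piece(index: int, piece: bytes, expected: list[bytes]) -> bool:
    """Accept a piece received from an untrusted peer only if its hash matches."""
    return hashlib.sha256(piece).digest() == expected[index]

original = b"peer-to-peer!"
expected = piece_hashes(original)       # distributed out-of-band (e.g. in a .torrent file)

good = original[0:4]                    # an honest peer's copy of piece 0
bad = b"fake"                           # a corrupted or poisoned piece 0
print(verify_piece(0, good, expected))  # True  -> keep the piece
print(verify_piece(0, bad, expected))   # False -> discard and re-request elsewhere
```

Because each piece is checked independently against hashes obtained from a trusted source, a poisoned chunk like the RIAA's fake data is rejected on arrival, no matter how many peers serve it.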
The decentralized nature of P2P networks increases robustness because it removes the single point of failure that can be inherent in a client–server based system.[50] As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down. There are both advantages and disadvantages in P2P networks related to data backup, recovery, and availability. In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces. For example, YouTube has been pressured by the RIAA, MPAA, and the entertainment industry to filter out copyrighted content. Although server–client networks are able to monitor and manage content availability, they can have more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable for sharing unpopular files, because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point.[51] In a P2P network, the community of users is entirely responsible for deciding which content is available. Unpopular files eventually disappear and become unavailable as fewer people share them. Popular files, however, are highly and easily distributed. Popular files on a P2P network are more stable and available than files on central networks. In a centralized network, a simple loss of connection between the server and clients can cause a failure, but in P2P networks, the connections between every node must be lost to cause a data-sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its own backup system. Because of the lack of a central authority in P2P networks, forces such as the recording industry, RIAA, MPAA, and the government are unable to delete or stop the sharing of content on P2P systems.[52] In P2P networks, clients both provide and use resources. This means that, unlike in client–server systems, the content-serving capacity of peer-to-peer networks can actually increase as more users begin to access the content (especially with protocols such as BitTorrent that require users to share; see the performance measurement study[53]). This property is one of the major advantages of using P2P networks, because it makes the setup and running costs very small for the original content distributor.[54][55] Peer-to-peer file sharing networks such as Gnutella, G2, and the eDonkey network have been useful in popularizing peer-to-peer technologies. These advancements have paved the way for peer-to-peer content delivery networks and services, including distributed caching systems like Correli Caches, to enhance performance.[56] Furthermore, peer-to-peer networks have made software publication and distribution possible, enabling efficient sharing of Linux distributions and various games through file sharing networks. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts with copyright law.[57]
Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd.[58] In the latter case, the Court unanimously held that the defendant peer-to-peer file sharing companies Grokster and Streamcast could be sued for inducing copyright infringement. The P2PTV and PDTP protocols are used in various peer-to-peer applications. Some proprietary multimedia applications leverage a peer-to-peer network in conjunction with streaming servers to stream audio and video to their clients. Peercasting is employed for multicasting streams. Additionally, a project called LionShare, undertaken by Pennsylvania State University, MIT, and Simon Fraser University, aims to facilitate file sharing among educational institutions globally. Another notable program, Osiris, enables users to create anonymous and autonomous web portals that are distributed via a peer-to-peer network. Dat is a distributed version-controlled publishing platform. I2P is an overlay network used to browse the Internet anonymously. Unlike the related I2P, the Tor network is not itself peer-to-peer[dubious–discuss]; however, it can enable peer-to-peer applications to be built on top of it via onion services. The InterPlanetary File System (IPFS) is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia, with nodes in the IPFS network forming a distributed file system. Jami is a peer-to-peer chat and SIP app. JXTA is a peer-to-peer protocol designed for the Java platform. Netsukuku is a wireless community network designed to be independent from the Internet. Open Garden is a connection-sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth. Resilio Sync is a directory-syncing app. Research includes projects such as the Chord project, the PAST storage utility, P-Grid, and the CoopNet content distribution system. Secure Scuttlebutt is a peer-to-peer gossip protocol capable of supporting many different types of applications, primarily social networking. Syncthing is also a directory-syncing app. Tradepal and M-commerce applications are designed to power real-time marketplaces. The U.S. Department of Defense is conducting research on P2P networks as part of its modern network warfare strategy.[59] In May 2003, Anthony Tether, then director of DARPA, testified that the United States military uses P2P networks. WebTorrent is a P2P streaming torrent client in JavaScript for use in web browsers, as well as in the WebTorrent Desktop standalone version that bridges the WebTorrent and BitTorrent serverless networks. Microsoft, in Windows 10, uses a proprietary peer-to-peer technology called "Delivery Optimization" to deploy operating system updates using end users' PCs, either on the local network or other PCs. According to Microsoft's Channel 9, this led to a 30%–50% reduction in Internet bandwidth usage.[60] Artisoft's LANtastic was built as a peer-to-peer operating system where machines can function as both servers and workstations simultaneously. Hotline Communications' Hotline Client was built with decentralized servers and tracker software dedicated to any type of files, and continues to operate today. Cryptocurrencies are peer-to-peer-based digital currencies that use blockchains. Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources.
But in current practice, P2P networks often contain large numbers of users who utilize resources shared by other nodes but do not share anything themselves (often referred to as the "freeloader problem"). Freeloading can have a profound impact on the network and in some cases can cause the community to collapse.[61] In these types of networks, "users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance".[62] Studying the social attributes of P2P networks is challenging due to large populations of turnover, asymmetry of interest, and zero-cost identity.[62] A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources.[63][45] Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today's P2P systems should be seen both as a goal and as a means for self-organized virtual communities to be built and fostered.[64] Ongoing research efforts into designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction. Some peer-to-peer networks (e.g. Freenet) place a heavy emphasis on privacy and anonymity—that is, ensuring that the contents of communications are hidden from eavesdroppers and that the identities/locations of the participants are concealed. Public key cryptography can be used to provide encryption, data validation, authorization, and authentication for data/messages. Onion routing and other mix network protocols (e.g. Tarzan) can be used to provide anonymity.[65] Perpetrators of live streaming sexual abuse and other cybercrimes have used peer-to-peer platforms to carry out activities with anonymity.[66] Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer over its involvement in the sharing of copyrighted material. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over issues surrounding copyright law.[57] Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd.[58] In both cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material. To establish criminal liability for copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willfully for the purpose of personal financial gain or commercial advantage.[67] Fair use exceptions allow limited use of copyrighted material to be downloaded without acquiring permission from the rights holders. These documents are usually news reporting or research and scholarly work. Controversies have developed over the concern of illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or what users are connected to the network at a given time.
The trustworthiness of sources is a potential security threat that can be seen with peer-to-peer systems.[68] A study ordered by the European Union found that illegal downloading may lead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites. Care was taken to remove the effects of false and misremembered responses.[69][70][71] Peer-to-peer applications present one of the core issues in the network neutrality controversy. Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high bandwidth usage.[72] Compared to Web browsing, e-mail, or many other uses of the Internet, where data is transferred only in short intervals and in relatively small quantities, P2P file-sharing often involves relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007, Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such as BitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic. Critics point out that P2P networking has legitimate legal uses, and that this is another way large providers are trying to control use and content on the Internet and direct people towards a client–server-based application architecture. The client–server model creates financial barriers to entry for small publishers and individuals, and can be less efficient for sharing large files. As a reaction to this bandwidth throttling, several P2P applications started implementing protocol obfuscation, such as BitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random.[73] The ISPs' solution to the high bandwidth is P2P caching, where an ISP stores the parts of files most accessed by P2P clients in order to save access to the Internet. Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work."[74] If the research cannot be reproduced, then the opportunity for further research is hindered. "Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments."[74] Popular simulators that were widely used in the past are NS2, OMNeT++, SimPy, NetLogo, PlanetLab, ProtoPeer, QTM, PeerSim, ONE, P2PStrmSim, PlanetSim, GNUSim, and Bharambe.[75] In addition, work has been done using the ns-2 open-source network simulator.
One research issue related to free rider detection and punishment has been explored using ns-2 simulator here.[76]
https://en.wikipedia.org/wiki/Peer-to-peer#Security_and_trust
What Is Art? (Russian: Что такое искусство? Chto takoye iskusstvo?) is a book by Leo Tolstoy. It was completed in Russian in 1897 but first published in English in 1898 due to difficulties with the Russian censors.[1]

Tolstoy cites the time, effort, public funds, and public respect spent on art and artists,[2] as well as the imprecision of general opinions on art,[3] as reasons for writing the book. In his words, "it is difficult to say what is meant by art, and especially what is good, useful art, art for the sake of which we might condone such sacrifices as are being offered at its shrine".[4] Throughout the book Tolstoy demonstrates an "unremitting moralism",[5] evaluating artworks in light of his radical Christian ethics,[6] and displaying a willingness to dismiss accepted masters, including Wagner,[7] Shakespeare,[8] and Dante,[9] as well as the bulk of his own writings.[10]

Having rejected the use of beauty in definitions of art (see aesthetics), Tolstoy conceptualises art as anything that communicates emotion: "Art begins when a man, with the purpose of communicating to other people a feeling he once experienced, calls it up again within himself and expresses it by certain external signs".[11] This view of art is inclusive: "jokes", "home decoration", and "church services" may all be considered art as long as they convey feeling.[12] It is also amoral: "[f]eelings... very bad and very good, if only they infect the reader... constitute the subject of art".[13] Tolstoy also notes that the "sincerity" of the artist – that is, the extent to which the artist "experiences the feeling he conveys" – influences the infection.[14]

While Tolstoy's basic conception of art is broad[15] and amoral,[13] his idea of "good" art is strict and moralistic, based on what he sees as the function of art in the development of humanity: just as in the evolution of knowledge – that is, the forcing out and supplanting of mistaken and unnecessary knowledge by truer and more necessary knowledge – so the evolution of feelings takes place by means of art, replacing lower feelings, less kind and less needed for the good of humanity, by kinder feelings, more needed for that good.
This is the purpose of art.[16]

Tolstoy's analysis is influenced by his radical Christian views (see The Kingdom of God Is Within You), views which led to his excommunication from the Russian Orthodox Church in 1901.[17] He states that Christian art, rooted in "the consciousness of sonship to God and the brotherhood of men",[18] can evoke reverence for each man's dignity, for every animal's life; it can evoke the shame of luxury, of violence, of revenge, of using for one's pleasure objects that are a necessity for other people; it can make people sacrifice themselves to serve others freely and joyfully, without noticing it.[19] Ultimately, "by calling up the feelings of brotherhood and love in people under imaginary conditions, religious art will accustom people to experiencing the same feelings in reality under the same conditions".[19]

Tolstoy's examples: Schiller's The Robbers, Victor Hugo's Les Misérables, Charles Dickens's A Tale of Two Cities and The Chimes, Harriet Beecher Stowe's Uncle Tom's Cabin, Dostoevsky's The House of the Dead, George Eliot's Adam Bede,[20] Ge's Judgement, Liezen-Mayer's Signing the Death Sentence, and paintings "portraying the labouring man with respect and love" such as those by Millet, Breton, Lhermitte, and Defregger.[21]

"Universal" art[20] illustrates that people are "already united in the oneness of life's joys and sorrows"[22] by communicating "feelings of the simplest, most everyday sort, accessible to all people without exception, such as the feelings of merriment, tenderness, cheerfulness, peacefulness, and so on".[18] Tolstoy contrasts this ideal with art that is partisan in nature, whether by class, religion, nation, or style.[23] Tolstoy's examples: he mentions, with many qualifiers, the works of Cervantes, Dickens, Molière, Gogol, and Pushkin, comparing all of these unfavourably to the story of Joseph.[21] In music he commends a violin aria of Bach, the E-flat major nocturne of Chopin, and "selected passages" from Schubert, Haydn, Chopin, and Mozart. He also speaks briefly of genre paintings and landscapes.[24]

Tolstoy notes the susceptibility of his contemporaries to the "charm of obscurity".[25] Works have become laden with "euphemisms, mythological and historical allusions", and general "vagueness, mysteriousness, obscurity and inaccessibility to the masses".[25] Tolstoy lambastes such works, insisting that art can and should be comprehensible to everyone.
Having emphasised that art has a function in the improvement of humanity – capable of expressing man's best sentiment – he finds it offensive that artists should be so wilfully and arrogantly abstruse.[26]

One criticism Tolstoy levels against art is that at some point it "ceased to be sincere and became artificial and cerebral",[27] leading to the creation of millions of works of technical brilliance but few of honourable sentiment.[28] Tolstoy outlines four common markers of bad art; these are not, however, considered canonical or ultimate indicators.

Borrowing involves recycling and concentrating elements from other works,[29] typical examples of which are: "maidens, warriors, shepherds, hermits, angels, devils in all forms, moonlight, thunderstorms, mountains, the sea, precipices, flowers, long hair, lions, the lamb, the dove, the nightingale".[30]

Imitation is highly descriptive realism, where painting becomes photography, or a scene in a book becomes a listing of facial expressions, tone of voice, the setting, and so on.[31] Any potential communication of feeling is "disrupted by the superfluity of details".[32]

Strikingness is a reliance on effect, often involving contrasts of "horrible and tender, beautiful and ugly, loud and soft, dark and light", descriptions of lust,[31] "crescendo and complication", unexpected changes in rhythm, tempo, etc.[33] Tolstoy contends that works marked by such techniques "do not convey any feeling, but only affect the nerves".[34]

Diversion is "an intellectual interest added to the work of art", such as the melding of documentary and fiction, as well as the writing of novels, poetry, and music "in such a way that they must be puzzled out".[33]

All such works fail to correspond with Tolstoy's view of art as the infection of others with feelings previously experienced,[35] and with his exhortation that art be "universal" in appeal.[24]

Tolstoy approves of early Christian art for being inspired by love of Christ and man, as well as for its antagonism to pleasure-seeking. He prefers this to the art born of "Church Christianity", which ostensibly evades the "essential theses of true Christianity" (that is, that all men are born of the Father, are equals, and should strive towards mutual love).[36] Art became pagan – worshipping religious figures – and subservient to the dictates of the Church.[36] The corruption of art deepened after the Crusades, as the abuse of papal power became more obvious. The rich began to doubt, seeing contradictions between the actions of the Church and the message of Christianity.[37] But instead of turning back to the early Christian teachings, the upper classes began to appreciate and commission art that was merely pleasing.[38] This tendency was facilitated by the Renaissance, with its aggrandisement of ancient Greek art, philosophy, and culture, which, Tolstoy alleges, is inclined to pleasure and beauty worship.[39]

Tolstoy perceives the roots of aesthetics in the Renaissance. Art for pleasure was validated in reference to the philosophy of the Greeks[40][41] and the elevation of "beauty" as a legitimate criterion with which to separate good from bad art.[42] Tolstoy moves to discredit aesthetics by reviewing and reducing previous theories – including those of Baumgarten,[43] Kant[44] (Critique of Judgement), Hegel,[45] Hume, and Schopenhauer[46] – to two main "aesthetic definitions of beauty":[47] an objective, mystical one, in which beauty is a manifestation of something absolutely perfect, and a subjective one, in which beauty is a certain kind of pleasure that does not have personal advantage as its aim. Tolstoy then argues that, despite their apparent divergence, there is little substantive difference between the two strands.
This is because both schools recognise beauty only by the pleasure it gives: "both notions of beauty come down to a certain sort of pleasure that we receive, meaning that we recognize as beauty that which pleases us without awakening our lust".[48] Therefore, there is no objective definition of art in aesthetics.[49] Tolstoy condemns the focus on beauty/pleasure at length, calling aesthetics a discipline: according to which the difference between good art, conveying good feelings, and bad art, conveying wicked feelings, was totally obliterated, and one of the lowest manifestations of art, art for mere pleasure – against which all teachers of mankind have warned people – came to be regarded as the highest art. And art became, not the important thing it was intended to be, but the empty amusement of idle people.[42]

Tolstoy sees the developing professionalism of art as hampering the creation of good works. The professional artist can and must create to prosper, making for art that is insincere and most likely partisan – made to suit the whims of fashion or patrons.[50] Art criticism is a symptom of the obscurity of art, for "[a]n artist, if he is a true artist, has in his work conveyed to others the feelings he has experienced: what is there to explain?".[51] Criticism, moreover, tends to contribute to the veneration of "authorities"[52] such as Shakespeare and Dante.[53] By constant unfavourable comparison, the young artist is corralled into imitating the works of the greats, as all of them are said to be true art. In short, new artists imitate the classics, setting their own feelings aside, which, according to Tolstoy, is contrary to the point of art.[54] Art schools teach people how to imitate the method of the masters, but they cannot teach the sincerity of emotion that is the propellant of great works.[55] In Tolstoy's words, "[n]o school can call up feelings in a man, and still less can it teach a man what is the essence of art: the manifestation of feeling in his own particular fashion".[55]

Throughout the book Tolstoy demonstrates a willingness to dismiss generally accepted masters, among them Liszt, Richard Strauss,[56] Nietzsche,[59] and Oscar Wilde.[28] He also labels his own works as "bad art", excepting only the short stories "God Sees the Truth" and "Prisoner of the Caucasus".[61] He attempts to justify these conclusions by pointing to the ostensible chaos of previous aesthetic analysis. Theories usually involve selecting popular works and constructing principles from these examples. Volkelt, for instance, remarks that art cannot be judged on its moral content because then Romeo and Juliet would not be good art.
Such retrospective justification cannot, he stresses, be the basis for theory, as people will tend to create subjective frameworks to justify their own tastes.[62]

Jahn notes the "often confusing use of categorisation"[63] and the lack of definition of the key concept of emotion.[64] Bayley writes that "the effectiveness of What is Art? lies not so much in its positive assertions as in its rejection of much that was taken for granted in the aesthetic theories of the time".[65] Noyes criticises Tolstoy's dismissal of beauty,[66] but states that, "despite its shortcomings", What is Art? "may be pronounced the most stimulating critical work of our time".[67] Simmons mentions the "occasional brilliant passages" along with the "repetition, awkward language, and loose terminology".[68] Aylmer Maude, translator of many of Tolstoy's writings, calls it "probably the most masterly of all Tolstoy's works", citing the difficulty of the subject matter and the clarity of its treatment.[69] For a comprehensive review of the reception at the time of publication, see Maude 1901b.[70]
https://en.wikipedia.org/wiki/What_Is_Art%3F
In computational linguistics, a trigram tagger is a statistical method for automatically identifying words as nouns, verbs, adjectives, adverbs, etc., based on second-order Markov models that consider triples of consecutive words. It is trained on a text corpus and scores tag sequences by combining unigram, bigram, and trigram probabilities; Brants (2000), for example, combines them by linear interpolation. In speech recognition, algorithms utilizing a trigram tagger score better than those utilizing an HMM tagger, but less well than the Net tagger. A description of the trigram tagger is provided by Brants (2000).
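To make that combination of probabilities concrete, the sketch below scores candidate tags at a position by linearly interpolating unigram, bigram, and trigram tag statistics, in the spirit of Brants (2000). The toy corpus, the counts, and the interpolation weights are all invented for illustration; a real tagger would estimate the weights from held-out data and model word emissions as well.

    from collections import Counter

    # Toy tagged corpus: (word, tag) pairs. Purely illustrative data.
    corpus = [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB"),
              ("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")]

    tags = [t for _, t in corpus]
    uni = Counter(tags)                      # unigram tag counts
    bi = Counter(zip(tags, tags[1:]))        # bigram tag counts
    tri = Counter(zip(tags, tags[1:], tags[2:]))  # trigram tag counts
    n = len(tags)

    def p_tag(t, prev, prev2, lams=(0.1, 0.3, 0.6)):
        """Interpolated probability of tag t after the tags (prev2, prev).
        lams are illustrative interpolation weights; they must sum to 1."""
        l1, l2, l3 = lams
        p_uni = uni[t] / n
        p_bi = bi[(prev, t)] / uni[prev] if uni[prev] else 0.0
        p_tri = tri[(prev2, prev, t)] / bi[(prev2, prev)] if bi[(prev2, prev)] else 0.0
        return l1 * p_uni + l2 * p_bi + l3 * p_tri

    # Which tag most plausibly follows the context DET, NOUN?
    best = max(uni, key=lambda t: p_tag(t, "NOUN", "DET"))
    print(best)  # VERB, given this toy data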
https://en.wikipedia.org/wiki/Trigram_tagger
In computer science, binary space partitioning (BSP) is a method for space partitioning which recursively subdivides a Euclidean space into two convex sets by using hyperplanes as partitions. This process of subdividing gives rise to a representation of objects within the space in the form of a tree data structure known as a BSP tree.

Binary space partitioning was developed in the context of 3D computer graphics in 1969.[1][2] The structure of a BSP tree is useful in rendering because it can efficiently give spatial information about the objects in a scene, such as objects being ordered from front to back with respect to a viewer at a given location. Other applications of BSP include: performing geometrical operations with shapes (constructive solid geometry) in CAD,[3] collision detection in robotics and 3D video games, ray tracing, virtual landscape simulation,[4] and other applications that involve the handling of complex spatial scenes.

Binary space partitioning is a generic process of recursively dividing a scene into two until the partitioning satisfies one or more requirements. It can be seen as a generalization of other spatial tree structures such as k-d trees and quadtrees, one where the hyperplanes that partition the space may have any orientation, rather than being aligned with the coordinate axes as they are in k-d trees or quadtrees. When used in computer graphics to render scenes composed of planar polygons, the partitioning planes are frequently chosen to coincide with the planes defined by polygons in the scene.

The specific choice of partitioning plane and criterion for terminating the partitioning process varies depending on the purpose of the BSP tree. For example, in computer graphics rendering, the scene is divided until each node of the BSP tree contains only polygons that can be rendered in arbitrary order. When back-face culling is used, each node therefore contains a convex set of polygons, whereas when rendering double-sided polygons, each node of the BSP tree contains only polygons in a single plane. In collision detection or ray tracing, a scene may be divided up into primitives on which collision or ray intersection tests are straightforward.

Binary space partitioning arose from the need in computer graphics to rapidly draw three-dimensional scenes composed of polygons. A simple way to draw such scenes is the painter's algorithm, which produces polygons in order of distance from the viewer, back to front, painting over the background and previous polygons with each closer object. This approach has two disadvantages: the time required to sort polygons in back-to-front order, and the possibility of errors in overlapping polygons. Fuchs and co-authors[2] showed that constructing a BSP tree solved both of these problems by providing a rapid method of sorting polygons with respect to a given viewpoint (linear in the number of polygons in the scene) and by subdividing overlapping polygons to avoid the errors that can occur with the painter's algorithm.

A disadvantage of binary space partitioning is that generating a BSP tree can be time-consuming. Typically, it is therefore performed once on static geometry, as a pre-calculation step, prior to rendering or other real-time operations on a scene. The expense of constructing a BSP tree makes it difficult and inefficient to directly implement moving objects into a tree.

The canonical use of a BSP tree is for rendering polygons (that are double-sided, that is, without back-face culling) with the painter's algorithm.
Each polygon is designated with a front side and a back side; the choice may be made arbitrarily and affects only the structure of the tree, not the required result.[2] Such a tree is constructed from an unsorted list of all the polygons in a scene. The recursive algorithm for construction of a BSP tree from that list of polygons is, in outline:[2]

1. Choose a polygon P from the list.
2. Make a node N in the BSP tree, and add P to the list of polygons at that node.
3. For each other polygon in the list: if it is wholly in front of the plane containing P, move it to the list of polygons in front of P; if wholly behind, move it to the list behind P; if intersected by the plane, split it into two polygons and move the pieces to the respective lists; if lying in the plane itself, add it to the list of polygons at node N.
4. Apply this algorithm to the list of polygons in front of P.
5. Apply this algorithm to the list of polygons behind P.

The original presentation accompanies this algorithm with a diagram converting a list of lines or polygons into a BSP tree: at each of eight steps (i.-viii.), the algorithm is applied to a list of lines, and one new node is added to the tree. The final number of polygons or lines in a tree is often larger (sometimes much larger[2]) than the original list, since lines or polygons that cross the partitioning plane must be split into two. It is desirable to minimize this increase, but also to maintain reasonable balance in the final tree. The choice of which polygon or line is used as a partitioning plane (in step 1 of the algorithm) is therefore important in creating an efficient BSP tree.

A BSP tree is traversed in linear time, in an order determined by the particular function of the tree. Again using the example of rendering double-sided polygons with the painter's algorithm, drawing a polygon P correctly requires that all polygons behind the plane that P lies in be drawn first, then polygon P, and finally the polygons in front of P. If this drawing order is satisfied for all polygons in a scene, then the entire scene renders in the correct order. This procedure can be implemented by recursively traversing the BSP tree.[2] From a given viewing location V, to render a BSP tree:

1. If the current node is a leaf node, render the polygons at the current node.
2. Otherwise, if V is in front of the current node's plane: render the child BSP tree containing polygons behind the current node, then the polygons at the current node, then the child BSP tree containing polygons in front of the current node.
3. Otherwise, if V is behind the plane: render the front child tree, then the polygons at the current node, then the back child tree.
4. Otherwise, V lies exactly on the plane: render the front child tree, then the back child tree, omitting the polygons at the current node (they are viewed edge-on).

Applying this algorithm recursively to the BSP tree generated above traverses the tree in linear time and renders the polygons in a far-to-near ordering (D1, B1, C1, A, D2, B2, C2, D3) suitable for the painter's algorithm.

BSP trees are often used by 3D video games, particularly first-person shooters and those with indoor environments. Game engines using BSP trees include the Doom (id Tech 1), Quake (id Tech 2 variant), GoldSrc and Source engines. In them, BSP trees containing the static geometry of a scene are often used together with a Z-buffer, to correctly merge movable objects such as doors and characters onto the background scene. While binary space partitioning provides a convenient way to store and retrieve spatial information about polygons in a scene, it does not solve the problem of visible surface determination. BSP trees have also been applied to image compression.[6]
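The construction and traversal just described can be condensed into a small toy. The following Python sketch (all names and the sample segments are invented; real implementations work with 3-D polygons and planes rather than 2-D segments and lines) builds a BSP tree over 2-D line segments, splitting any segment that spans a partition line, and then walks the tree far-to-near from a given eye point as the painter's algorithm requires.

    EPS = 1e-9

    def side(seg, p):
        # Sign of point p relative to the infinite line through segment seg.
        (x1, y1), (x2, y2) = seg
        d = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
        return 0 if abs(d) < EPS else (1 if d > 0 else -1)

    def split(seg, part):
        # Split seg where it crosses the line through part; returns two pieces.
        (x1, y1), (x2, y2) = part
        (a, b), (c, d) = seg
        nx, ny = y2 - y1, x1 - x2                    # normal of the partition line
        t = (nx * (x1 - a) + ny * (y1 - b)) / (nx * (c - a) + ny * (d - b))
        m = (a + t * (c - a), b + t * (d - b))       # intersection point
        return ((a, b), m), (m, (c, d))

    class Node:
        def __init__(self, segs):
            self.part = segs[0]                      # step 1: pick a partition segment
            self.cop, front, back = [self.part], [], []
            for s in segs[1:]:                       # step 3: classify the rest
                s0, s1 = side(self.part, s[0]), side(self.part, s[1])
                if s0 >= 0 and s1 >= 0 and s0 + s1 > 0:
                    front.append(s)
                elif s0 <= 0 and s1 <= 0 and s0 + s1 < 0:
                    back.append(s)
                elif s0 == 0 and s1 == 0:
                    self.cop.append(s)               # collinear: keep at this node
                else:                                # spanning: split into two pieces
                    p, q = split(s, self.part)
                    (front if s0 > 0 else back).append(p)
                    (front if s1 > 0 else back).append(q)
            self.front = Node(front) if front else None   # steps 4-5: recurse
            self.back = Node(back) if back else None

    def paint(node, eye, out):
        # Painter's-algorithm traversal: emit segments far-to-near from eye.
        if node is None:
            return
        s = side(node.part, eye)
        near, far = (node.front, node.back) if s > 0 else (node.back, node.front)
        paint(far, eye, out)
        if s != 0:
            out.extend(node.cop)                     # omitted when viewed edge-on
        paint(near, eye, out)

    segs = [((0, 0), (2, 0)), ((1, -1), (1, 1)), ((0, 2), (2, 2))]
    order = []
    paint(Node(segs), (5, 5), order)
    print(order)   # back-to-front drawing order for a viewer at (5, 5)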
https://en.wikipedia.org/wiki/Binary_space_partitioning
Link grammar (LG) is a theory of syntax by Davy Temperley and Daniel Sleator which builds relations between pairs of words, rather than constructing constituents in a phrase structure hierarchy. Link grammar is similar to dependency grammar, but dependency grammar includes a head-dependent relationship, whereas link grammar makes the head-dependent relationship optional (links need not indicate direction).[1] Colored Multiplanar Link Grammar (CMLG) is an extension of LG allowing crossing relations between pairs of words.[2] The relationship between words is indicated with link types, making link grammar closely related to certain categorial grammars.

For example, in a subject–verb–object language like English, the verb would look left to form a subject link, and right to form an object link. Nouns would look right to complete the subject link, or left to complete the object link. In a subject–object–verb language like Persian, the verb would look left to form an object link, and a more distant left to form a subject link. Nouns would look to the right for both subject and object links.

Link grammar connects the words in a sentence with links, similar in form to a catena. Unlike the catena or a traditional dependency grammar, the marking of the head-dependent relationship is optional for most languages, becoming mandatory only in free-word-order languages (such as Turkish,[3][better source needed] Finnish, Hungarian). That is, in English, the subject–verb relationship is "obvious", in that the subject is almost always to the left of the verb, and thus no specific indication of dependency needs to be made. In the case of subject–verb inversion, a distinct link type is employed. For free-word-order languages, this can no longer hold, and a link between the subject and verb must contain an explicit directional arrow to indicate which of the two words is which.

Link grammar also differs from traditional dependency grammars by allowing cyclic relations between words. Thus, for example, there can be links indicating both the head verb of a sentence and the head subject of the sentence, as well as a link between the subject and the verb. These three links thus form a cycle (a triangle, in this case). Cycles are useful in constraining what might otherwise be ambiguous parses; cycles help "tighten up" the set of allowable parses of a sentence.

For example, in a parse, the LEFT-WALL indicates the start of the sentence, or the root node. The directional WV link (with arrows) points at the head verb of the sentence; it is the Wall–Verb link.[4] The Wd link (drawn without arrows) indicates the head noun (the subject) of the sentence. The link type Wd indicates both that it connects to the wall (W) and that the sentence is a declarative sentence (the lower-case "d" subtype).[5] The Ss link indicates the subject–verb relationship; the lower-case "s" indicates that the subject is singular.[6] Note that the WV, Wd and Ss links form a cycle. The Pa link connects the verb to a complement; the lower-case "a" indicates that it is a predicative adjective in this case.[7]

Parsing is performed in analogy to assembling a jigsaw puzzle (representing the parsed sentence) from puzzle pieces (representing individual words).[8][9] A language is represented by means of a dictionary or lexis, which consists of words and the set of allowed "jigsaw puzzle shapes" that each word can have. The shape is indicated by a "connector", which is a link type, together with a direction indicator + or - indicating right or left.
Thus, for example, a transitive verb may have the connectors S- & O+, indicating that the verb may form a subject ("S") connection to its left ("-") and an object connection ("O") to its right ("+"). Similarly, a common noun may have the connectors D- & S+, indicating that it may connect to a determiner on the left ("D-") and act as a subject when connecting to a verb on the right ("S+"). The act of parsing is then to identify that the S+ connector can attach to the S- connector, forming an "S" link between the two words. Parsing completes when all connectors have been connected.

A given word may have dozens or even hundreds of allowed puzzle shapes (termed "disjuncts"): for example, many verbs may be optionally transitive, making the O+ connector optional; such verbs might also take adverbial modifiers (E connectors), which are inherently optional. More complex verbs may have additional connectors for indirect objects, or for particles or prepositions. Thus, a part of parsing also involves picking one single unique disjunct for a word; the final parse must satisfy (connect) all connectors for that disjunct.[10]

Connectors may also include head-dependent indicators h and d. In this case, a connector containing a head indicator is only allowed to connect to a connector containing the dependent indicator (or to a connector without any h-d indicators on it). When these indicators are used, the link is decorated with arrows to indicate the link direction.[9]

A recent extension simplifies the specification of connectors for languages that have few or no restrictions on word order, such as Lithuanian. There are also extensions to make it easier to support languages with concatenative morphologies.

The parsing algorithm also requires that the final graph be a planar graph, i.e. that no links cross.[9] This constraint is based on empirical psycho-linguistic evidence that, indeed, for most languages, in nearly all situations, dependency links really do not cross.[11][12] There are rare exceptions, e.g. in Finnish, and even in English; they can be parsed by link grammar only by introducing more complex and selective connector types to capture these situations.

Connectors can have an optional floating-point cost markup, so that some are "cheaper" to use than others, thus giving preference to certain parses over others.[9] That is, the total cost of a parse is the sum of the individual costs of the connectors that were used; the cheapest parse indicates the most likely parse. This is used for ranking multiple ambiguous parses. The fact that the costs are local to the connectors, and are not a global property of the algorithm, makes them essentially Markovian in nature.[13][14][15][16][17][18]

The assignment of a log-likelihood to linkages allows link grammar to implement the semantic selection of predicate-argument relationships. That is, certain constructions, although syntactically valid, are extremely unlikely. In this way, link grammar embodies some of the ideas present in operator grammar. Because the costs are additive, they behave like the logarithm of the probability (since log-likelihoods are additive), or equivalently, somewhat like the entropy (since entropies are additive). This makes link grammar compatible with machine-learning techniques such as hidden Markov models and the Viterbi algorithm, because the link costs correspond to the link weights in Markov networks or Bayesian networks.
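A minimal sketch of this jigsaw-style connector matching may help. The toy lexicon, the label set, and the backtracking search below are all invented for illustration; real link-grammar dictionaries have disjuncts, costs, multi-connectors, and subscripted types, none of which are modelled here. The sketch only checks that every '+' connector can attach to a matching '-' connector to its right without any two links crossing (the planarity constraint mentioned above).

    def parse(words, lexicon):
        # Flatten: one (word index, label, direction) triple per connector.
        conns = [(i, c[:-1], c[-1]) for i, w in enumerate(words)
                 for c in lexicon[w]]

        def crossing(a, b):
            (i, j), (k, l) = a, b
            return i < k < j < l or k < i < l < j

        def search(pending, links):
            if not pending:
                return links                  # every connector satisfied
            i, label, d = pending[0]
            if d == '-':
                return None                   # a '-' here was never matched
            # Try every compatible '-' connector further right.
            for n, (j, lab2, d2) in enumerate(pending[1:], 1):
                if d2 == '-' and lab2 == label and j != i:
                    link = (i, j)
                    if all(not crossing(link, old) for old in links):
                        rest = pending[1:n] + pending[n + 1:]
                        found = search(rest, links + [link])
                        if found is not None:
                            return found
            return None

        return search(conns, [])

    lexicon = {"the": ["D+"], "boy": ["D-", "S+"], "ran": ["S-"]}
    print(parse(["the", "boy", "ran"], lexicon))   # [(0, 1), (1, 2)]

Here the determiner's D+ links to the noun's D-, and the noun's S+ links to the verb's S-, exactly the S and D links described in the prose above.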
The link grammar link types can be understood to be the types in the sense of type theory.[9][19] In effect, the link grammar can be used to model the internal language of certain (non-symmetric) compact closed categories, such as pregroup grammars. In this sense, link grammar appears to be isomorphic or homomorphic to some categorial grammars. Thus, for example, in a categorial grammar the noun phrase "the bad boy" is assigned types built from slashes, such as NP/N for the determiner, whereas the corresponding link-grammar disjuncts use connectors such as D+ and A+. The contraction rules (inference rules) of the Lambek calculus can be mapped to the connecting of connectors in link grammar. The + and - directional indicators correspond to the forward and backward slashes of the categorial grammar. Finally, the single-letter names A and D can be understood as labels or "easy-to-read" mnemonic names for the rather more verbose types NP/N, etc.

The primary distinction is then that categorial grammars have two type constructors, the forward and backward slashes, that can be used to create new types (such as NP/N) from base types (such as NP and N). Link grammar omits the use of type constructors, opting instead to define a much larger set of base types having compact, easy-to-remember mnemonics.

A basic rule file for an SVO language defines links so that an English sentence such as "The boy painted a picture" parses with subject, object, and determiner links. Similar parses apply for Chinese.[20] Conversely, a rule file for a null-subject SOV language consists of links appropriate to that order, so that a simple Persian sentence, man nAn xordam (من نان خوردم) 'I ate bread', parses accordingly.[21][22][23] VSO order can be likewise accommodated, such as for Arabic.[24]

In many languages with a concatenative morphology, the stem plays no grammatical role; the grammar is determined by the suffixes. Thus, in Russian, the sentence 'вверху плыли редкие облачка' might have a parse in which subscripts, such as '.vnndpp', indicate the grammatical category.[25][26] The primary links – Wd, EI, SIp and Api – connect together the suffixes since, in principle, other stems could appear here without altering the structure of the sentence. The Api link indicates the adjective; SIp denotes subject–verb inversion; EI is a modifier. The Wd link is used to indicate the head noun; the head verb is not indicated in this sentence. The LLXXX links serve only to attach stems to suffixes.

The link grammar can also indicate phonological agreement between neighboring words. For example, a connector 'PH' can be used to constrain the determiners that can appear before the word 'abstract': it effectively makes it costly to use the determiner 'a', while the link to 'an' becomes cheap. The other links are roughly as in previous examples: S denoting subject, O denoting object, D denoting determiner. The 'WV' link indicates the head verb, and the 'W' link indicates the head noun. The lower-case letters following the upper-case link types serve to refine the type; for example, Ds can only connect to a singular noun, Ss only to a singular subject, Os to a singular object. The lower-case v in PHv denotes 'vowel'; the lower-case d in Wd denotes a declarative sentence.

The Vietnamese sentence "Bữa tiệc hôm qua là một thành công lớn" – "The party yesterday was a great success" – may be parsed in the same way.[27]

The link grammar syntax parser is a library for natural language processing written in C. It is available under the LGPL license. The parser[30] is an ongoing project.
Recent versions include improved sentence coverage; Russian, Persian and Arabic language support; prototypes for German, Hebrew, Lithuanian, Vietnamese and Turkish; and programming APIs for Python, Java, Common Lisp, AutoIt and OCaml, with third-party bindings for Perl,[31] Ruby[32] and JavaScript (node.js).[33] A current major undertaking is a project to learn the grammar and morphology of new languages, using unsupervised learning algorithms.[34][35]

The link-parser program, along with rules and word lists for English, may be found in standard Linux distributions, e.g. as a Debian package, although many of these are years out of date.[36]

AbiWord,[30] a free word processor, uses link grammar for on-the-fly grammar checking. Words that cannot be linked anywhere are underlined in green.

The semantic relationship extractor RelEx,[37] layered on top of the link grammar library, generates dependency grammar output by making explicit the semantic relationships between words in a sentence. Its output can be classified as being at a level between that of SSyntR and DSyntR of Meaning-Text Theory. It also provides framing/grounding, anaphora resolution, head-word identification, lexical chunking, part-of-speech identification, and tagging, including entity, date, money, gender, etc. tagging. It includes a compatibility mode to generate dependency output compatible with the Stanford parser,[38] and PennTreebank-compatible[39] POS tagging.

Link grammar has also been employed for information extraction from biomedical texts[40][41] and from events described in news articles,[42] as well as for experimental machine translation systems from English to German, Turkish, Indonesian[43] and Persian.[44][45] The link grammar link dictionary is used to generate and verify the syntactic correctness of three different natural language generation systems: NLGen,[46] NLGen2[47] and microplanner/surreal.[48] It is also used as part of the NLP pipeline in the OpenCog AI project.
https://en.wikipedia.org/wiki/Link_grammar
An EDA database is a database specialized for the purpose of electronic design automation. These application-specific databases are required because general-purpose databases have historically not provided enough performance for EDA applications.

In examining EDA design databases, it is useful to look at EDA tool architecture, to determine which parts are to be considered part of the design database and which parts are the application levels. In addition to the database itself, many other components are needed for a useful EDA application. Associated with a database are one or more language systems (which, although not directly part of the database, are used by EDA applications such as parameterized cells and user scripts). On top of the database are built the algorithmic engines within the tool (such as timing, placement, routing, or simulation engines), and the highest level represents the applications built from these component blocks, such as floorplanning. The scope of the design database includes the actual design, library information, technology information, and the set of translators to and from external formats such as Verilog and GDSII.

Many instances of mature design databases exist in the EDA industry, both as a basis for commercial EDA tools and as proprietary EDA tools developed by the CAD groups of major electronics companies. IBM, Hewlett-Packard, SDA Systems and ECAD (now Cadence Design Systems), High Level Design Systems, and many other companies developed EDA-specific databases over the last 20 years, and these continue to be the basis of IC-design systems today. Many of these systems took ideas from university research and successfully productized them. Most of the mature design databases have evolved to the point where they can represent netlist data, layout data, and the ties between the two. They are hierarchical to allow for reuse and smaller designs. They can support styles of layout from digital through pure analog, and many styles of mixed-signal design.

Given the importance of a common design database in the EDA industry, the OpenAccess Coalition has been formed to develop, deploy, and support an open-sourced EDA design database with shared control. The data model presented in the OA DB provides a unified model that currently extends from structural RTL through GDSII-level mask data, and now into the reticle and wafer space. It provides a rich enough capability to support digital, analog, and mixed-signal design data. It provides technology data that can express foundry process design rules through at least 20 nm, contains the definitions of the layers and purposes used in the design, definitions of vias and routing rules, definitions of operating points used for analysis, and so on. OA makes extensive use of IC-specific data-compression techniques to reduce the memory footprint and to address the size, capacity, and performance problems of previous DBs. Despite what its name could imply, this format has no publicly accessible implementation or specification; those are exclusive to the members of the OpenAccess Coalition.

The Milkyway database was originally developed by Avanti Corporation, which has since been acquired by Synopsys. It was first released in 1997. Milkyway is the database underlying most of Synopsys' physical design tools. Milkyway stores topological, parasitic and timing data. Having been used to design thousands of chips, Milkyway is very stable and production-worthy. Milkyway is known to be written in C.
Its internal implementation is not available outside Synopsys, so little can be said about the implementation itself. At the request of large customers such as Texas Instruments, Avanti released the MDX C-API in 1998. This enables customers' CAD developers to create plugins that add custom functionality to Milkyway tools (chiefly Astro). MDX allows fairly complete access to topological data in Milkyway, but does not support timing or RC parasitic data.

In early 2003, Synopsys (which acquired Avanti) opened Milkyway through the Milkyway Access Program (MAP-in). Any EDA company may become a MAP-in member for free (Synopsys customers must use MDX). Members are provided the means to interface their software to Milkyway using C, Tcl, or Scheme; the Scheme interface is deprecated in favor of Tcl, and IC Compiler supports only Tcl. The MAP-in C-API enables a non-Synopsys application to read and write Milkyway databases. Unlike MDX, MAP-in does not permit the creation of a plugin that can be used from within Synopsys Milkyway tools. MAP-in does not support access to timing or RC parasitic data, and it also lacks direct support of certain geometric objects. MAP-in includes the Milkyway Development Environment (MDE), a GUI application used to develop Tcl and Scheme interfaces and to diagnose problems.

Another significant design database is Falcon, from Mentor Graphics. This database was one of the first in the industry written in C++. Like Milkyway for Synopsys, Falcon appears to be a stable and mature platform for Mentor's IC products. Again, the implementation is not publicly available, so little can be said about its features or performance relative to other industry standards.

Magma Design Automation's database is not just a disk format with an API, but an entire system built around the DB as a central data structure. Again, since the details of the system are not publicly available, a direct comparison of features or performance is not possible. The capabilities of the Magma tools suggest that this DB has functionality similar to OpenAccess, and may be capable of representing behavioral (synthesis input) information.

An EDA-specific database is expected to provide many basic constructs and services beyond those of a general-purpose database.
https://en.wikipedia.org/wiki/EDA_database
In abstract algebra, a quasi-free algebra is an associative algebra that satisfies the lifting property similar to that of a formally smooth algebra in commutative algebra. The notion was introduced by Cuntz and Quillen for applications to cyclic homology.[1] A quasi-free algebra generalizes a free algebra, as well as the coordinate ring of a smooth affine complex curve. Because of the latter generalization, a quasi-free algebra can be thought of as signifying smoothness on a noncommutative space.[2]

Let A be an associative algebra over the complex numbers. Then A is said to be quasi-free if any of several equivalent conditions hold;[3][4][5] one formulation is the following. Let (ΩA, d) denote the differential envelope of A, i.e., the universal differential-graded algebra generated by A.[6][7] Then A is quasi-free if and only if Ω¹A is projective as a bimodule over A.[3]

There is also a characterization in terms of a connection. Given an A-bimodule E, a right connection on E is a linear map ∇_r that satisfies ∇_r(as) = a ∇_r(s) and ∇_r(sa) = ∇_r(s) a + s ⊗ da (the identities are displayed below).[8] A left connection is defined in the similar way. Then A is quasi-free if and only if Ω¹A admits a right connection.[9]

One of the basic properties of a quasi-free algebra is that the algebra is left and right hereditary (i.e., a submodule of a projective left or right module is projective, or equivalently, the left or right global dimension is at most one).[10] This puts a strong restriction on which algebras can be quasi-free. For example, a hereditary (commutative) integral domain is precisely a Dedekind domain. In particular, a polynomial ring over a field is quasi-free if and only if the number of variables is at most one.

An analog of the tubular neighborhood theorem, called the formal tubular neighborhood theorem, holds for quasi-free algebras.[11]
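For reference, the defining identities of a right connection can be written out in display form. The codomain below is supplied from the standard Cuntz–Quillen definition, which the extracted text leaves implicit:

    % Right connection on an A-bimodule E (codomain per the standard definition)
    \nabla_r \colon E \to E \otimes_A \Omega^1 A, \qquad
    \nabla_r(as) = a\,\nabla_r(s), \qquad
    \nabla_r(sa) = \nabla_r(s)\,a + s \otimes da,
    \qquad a \in A,\; s \in E.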
https://en.wikipedia.org/wiki/Quasi-free_algebra
In computer science, merge sort (also commonly spelled as mergesort and as merge-sort[2]) is an efficient, general-purpose, and comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the relative order of equal elements is the same in the input and output. Merge sort is a divide-and-conquer algorithm that was invented by John von Neumann in 1945.[3] A detailed description and analysis of bottom-up merge sort appeared in a report by Goldstine and von Neumann as early as 1948.[4]

Conceptually, a merge sort works as follows:

1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
2. Repeatedly merge sublists to produce new sorted sublists until only one sublist remains; this is the sorted list.

The top-down variant recursively splits the list (the pieces are called runs in this presentation) into sublists until each sublist has size 1, then merges those sublists to produce a sorted list. The copy-back step can be avoided by alternating the direction of the merge with each level of recursion (except for an initial one-time copy, which can be avoided too). As a simple example, consider an array with two elements. The elements are copied to B[], then merged back to A[]. If there are four elements, when the bottom of the recursion level is reached, single-element runs from A[] are merged to B[], and then at the next higher level of recursion, those two-element runs are merged to A[]. This pattern continues with each level of recursion. Sorting the entire array is accomplished by TopDownMergeSort(A, B, length(A)).

The bottom-up variant instead treats the list as an array of n sublists (again called runs) of size 1, and iteratively merges sublists back and forth between two buffers. A bottom-up merge sort for linked lists can use a small fixed-size array of references to nodes, where array[i] is either a reference to a list of size 2^i or nil; the merge function merges two already-sorted lists and handles empty lists. Merge sort can also be expressed naturally in a functional language such as Haskell, using standard functional-programming constructs. A compact sketch of the top-down scheme follows.
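Since the code listings themselves did not survive extraction, here is a sketch of the top-down, direction-alternating scheme described above. It is written in Python rather than the original C-like style; the function names are invented, and the role-swapping of the two arrays implements the copy-avoidance idea mentioned in the overview.

    def top_down_merge_sort(a):
        """Sort list a in place using the top-down, direction-alternating scheme."""
        b = a[:]                             # the one-time initial copy
        _split_merge(b, 0, len(a), a)

    def _split_merge(src, begin, end, dst):
        # Sort dst[begin:end]; src starts as an identical copy, and the roles of
        # the two arrays swap at each recursion level, so no copy-back is needed.
        if end - begin <= 1:
            return
        mid = (begin + end) // 2
        _split_merge(dst, begin, mid, src)   # note the swapped roles
        _split_merge(dst, mid, end, src)
        _merge(src, begin, mid, end, dst)

    def _merge(src, begin, mid, end, dst):
        # Merge sorted runs src[begin:mid] and src[mid:end] into dst[begin:end].
        i, j = begin, mid
        for k in range(begin, end):
            if i < mid and (j >= end or src[i] <= src[j]):
                dst[k] = src[i]; i += 1
            else:
                dst[k] = src[j]; j += 1

    data = [38, 27, 43, 3, 9, 82, 10]
    top_down_merge_sort(data)
    print(data)  # [3, 9, 10, 27, 38, 43, 82]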
In sorting n objects, merge sort has an average and worst-case performance of O(n log n) comparisons. If the running time (number of comparisons) of merge sort for a list of length n is T(n), then the recurrence relation T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists).[5] The closed form follows from the master theorem for divide-and-conquer recurrences. The number of comparisons made by merge sort in the worst case is given by the sorting numbers. These numbers are equal to or slightly smaller than n⌈lg n⌉ − 2^⌈lg n⌉ + 1, which is between n lg n − n + 1 and n lg n + n + O(lg n).[6] Merge sort's best case takes about half as many iterations as its worst case.[7]

For large n and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches α·n fewer than the worst case, where α = −1 + Σ_{k=0}^{∞} 1/(2^k + 1) ≈ 0.2645.

In the worst case, merge sort uses approximately 39% fewer comparisons than quicksort does in its average case, and in terms of moves, merge sort's worst-case complexity is O(n log n) – the same complexity as quicksort's best case.[7]

Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can only be efficiently accessed sequentially, and is thus popular in languages such as Lisp, where sequentially accessed data structures are very common. Unlike some (efficient) implementations of quicksort, merge sort is a stable sort. Merge sort's most common implementation does not sort in place;[8] therefore, the memory size of the input must be allocated for the sorted output to be stored in (see below for variations that need only n/2 extra spaces).

A natural merge sort is similar to a bottom-up merge sort except that any naturally occurring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (alternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being convenient data structures (used as FIFO queues or LIFO stacks).[9] In the bottom-up merge sort, the starting point assumes each run is one item long. In practice, random input data will have many short runs that just happen to be sorted. In the typical case, the natural merge sort may not need as many passes because there are fewer runs to merge. In the best case, the input is already sorted (i.e., is one run), so the natural merge sort need only make one pass through the data. In many practical cases, long natural runs are present, and for that reason natural merge sort is exploited as the key component of Timsort; a small run-detection sketch follows below. Formally, the natural merge sort is said to be Runs-optimal, where Runs(L) is the number of runs in L, minus one. Tournament replacement selection sorts are used to gather the initial runs for external sorting algorithms.
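As a sketch of the run-detection idea behind natural merge sort (names invented; only non-decreasing runs are detected, whereas production implementations such as Timsort also reverse strictly descending runs):

    def natural_merge_sort(a):
        """Sort by repeatedly merging naturally occurring non-decreasing runs."""
        # 1. Split the input into maximal non-decreasing runs.
        runs, start = [], 0
        for i in range(1, len(a) + 1):
            if i == len(a) or a[i] < a[i - 1]:
                runs.append(a[start:i])
                start = i
        # 2. Merge runs pairwise until a single run remains.
        while len(runs) > 1:
            merged = [two_way_merge(runs[j], runs[j + 1])
                      for j in range(0, len(runs) - 1, 2)]
            if len(runs) % 2:                 # odd run out, carried over
                merged.append(runs[-1])
            runs = merged
        return runs[0] if runs else []

    def two_way_merge(left, right):
        # Standard stable merge of two sorted lists.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    # Runs found: (3 4), (2), (1 7), (5 8 9), (0 6) -- five runs, not ten.
    print(natural_merge_sort([3, 4, 2, 1, 7, 5, 8, 9, 0, 6]))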
Instead of merging two blocks at a time, a ping-pong merge merges four blocks at a time. The four sorted blocks are merged simultaneously to auxiliary space into two sorted blocks, then the two sorted blocks are merged back to main memory. Doing so omits the copy operation and reduces the total number of moves by half. An early public-domain implementation of a four-at-once merge was by WikiSort in 2014; the method was later that year described as an optimization for patience sorting and named a ping-pong merge.[10][11] Quadsort implemented the method in 2020 and named it a quad merge.[12]

One drawback of merge sort, when implemented on arrays, is its O(n) working memory requirement. Several methods to reduce memory or make merge sort fully in-place have been suggested.

An external merge sort is practical to run using disk or tape drives when the data to be sorted is too large to fit into memory. External sorting explains how merge sort is implemented with disk drives. A typical tape-drive sort uses four tape drives. All I/O is sequential (except for rewinds at the end of each pass). A minimal implementation can get by with just two record buffers and a few program variables. Naming the four tape drives A, B, C, D, with the original data on A, and using only two record buffers, the algorithm is similar to the bottom-up implementation, using pairs of tape drives instead of arrays in memory. The basic algorithm can be described as follows:

1. Merge pairs of records from A, writing two-record sublists alternately to C and D.
2. Merge two-record sublists from C and D into four-record sublists, writing these alternately to A and B.
3. Merge four-record sublists from A and B into eight-record sublists, writing these alternately to C and D.
4. Repeat until there is one list containing all the data, sorted.

Instead of starting with very short runs, usually a hybrid algorithm is used, where the initial pass reads many records into memory, does an internal sort to create a long run, and then distributes those long runs onto the output set. This step avoids many early passes. For example, an internal sort of 1024 records will save nine passes. Because of this benefit, the internal sort is usually made as large as memory allows. In fact, there are techniques that can make the initial runs longer than the available internal memory; one of them, Knuth's 'snowplow' (based on a binary min-heap), generates runs twice as long (on average) as the size of memory used.[18]

With some overhead, the above algorithm can be modified to use three tapes. O(n log n) running time can also be achieved using two queues, or a stack and a queue, or three stacks. In the other direction, using k > 2 tapes (and O(k) items in memory), we can reduce the number of tape operations by a factor of O(log k) by using a k/2-way merge. A more sophisticated merge sort that optimizes tape (and disk) drive usage is the polyphase merge sort.

On modern computers, locality of reference can be of paramount importance in software optimization, because multilevel memory hierarchies are used. Cache-aware versions of the merge sort algorithm, whose operations have been specifically chosen to minimize the movement of pages in and out of a machine's memory cache, have been proposed. For example, the tiled merge sort algorithm stops partitioning subarrays when subarrays of size S are reached, where S is the number of data items fitting into a CPU's cache. Each of these subarrays is sorted with an in-place sorting algorithm such as insertion sort, to discourage memory swaps, and normal merge sort is then completed in the standard recursive fashion. This algorithm has demonstrated better performance[example needed] on machines that benefit from cache optimization. (LaMarca & Ladner 1997)

Merge sort parallelizes well due to the use of the divide-and-conquer method. Several different parallel variants of the algorithm have been developed over the years. Some parallel merge sort algorithms are strongly related to the sequential top-down merge algorithm, while others have a different general structure and use the K-way merge method. The sequential merge sort procedure can be described in two phases, the divide phase and the merge phase. The first consists of many recursive calls that repeatedly perform the same division process until the subsequences are trivially sorted (containing one or no element). An intuitive approach is the parallelization of those recursive calls,[19] forking the two recursive calls so that they run concurrently and joining them before the merge; a sketch follows below. This algorithm is the trivial modification of the sequential version and does not parallelize well. Therefore, its speedup is not very impressive: it has a span of Θ(n), which is only an improvement of Θ(log n) compared to the sequential version (see Introduction to Algorithms). This is mainly due to the sequential merge method, as it is the bottleneck of the parallel executions.
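A sketch of that fork–join structure is below (names invented). Note that CPython threads are serialized by the global interpreter lock, so this illustrates the shape of the parallel recursion rather than delivering real speedup, and the fork depth must stay small relative to the pool size, or tasks waiting on subtasks can exhaust the workers.

    from concurrent.futures import ThreadPoolExecutor

    def two_way_merge(left, right):
        # The sequential merge: this is the span bottleneck discussed above.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    def parallel_merge_sort(a, pool, depth=2):
        """Fork the two recursive calls while depth > 0; join before merging."""
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        if depth > 0:
            left_f = pool.submit(parallel_merge_sort, a[:mid], pool, depth - 1)  # fork
            right = parallel_merge_sort(a[mid:], pool, depth - 1)
            left = left_f.result()                                               # join
        else:
            left = parallel_merge_sort(a[:mid], pool, 0)
            right = parallel_merge_sort(a[mid:], pool, 0)
        return two_way_merge(left, right)

    with ThreadPoolExecutor(max_workers=4) as pool:
        print(parallel_merge_sort([5, 2, 8, 1, 9, 3], pool))  # [1, 2, 3, 5, 8, 9]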
Better parallelism can be achieved by using a parallel merge algorithm. Cormen et al. present a binary variant that merges two sorted sub-sequences into one sorted output sequence.[19] In one of the sequences (the longer one, if of unequal length), the element of the middle index is selected. Its position in the other sequence is determined in such a way that this sequence would remain sorted if this element were inserted at this position. Thus, one knows how many other elements from both sequences are smaller, and the position of the selected element in the output sequence can be calculated. For the partial sequences of the smaller and larger elements created in this way, the merge algorithm is again executed in parallel until the base case of the recursion is reached.

In the modified parallel merge sort using this parallel merge (adapted from Cormen et al.), the recursive calls have to be incorporated only once into the worst-case span recurrence, due to their parallel execution, obtaining

T∞^sort(n) = T∞^sort(n/2) + T∞^merge(n) = T∞^sort(n/2) + Θ((log n)²).

For detailed information about the complexity of the parallel merge procedure, see Merge algorithm. The solution of this recurrence is given by

T∞^sort(n) = Θ((log n)³).

This parallel merge algorithm reaches a parallelism of Θ(n / (log n)²), which is much higher than the parallelism of the previous algorithm. Such a sort can perform well in practice when combined with a fast stable sequential sort, such as insertion sort, and a fast sequential merge as a base case for merging small arrays.[20]

It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there are usually p > 2 processors available. A better approach may be to use a K-way merge method, a generalization of binary merge, in which k sorted sequences are merged. This merge variant is well suited to describe a sorting algorithm on a PRAM.[21][22]

Given an unsorted sequence of n elements, the goal is to sort the sequence with p available processors. These elements are distributed equally among all processors and sorted locally using a sequential sorting algorithm. Hence, the sequence consists of sorted sequences S_1, ..., S_p of length ⌈n/p⌉. For simplification, let n be a multiple of p, so that |S_i| = n/p for i = 1, ..., p. These sequences will be used to perform a multisequence selection/splitter selection. For j = 1, ..., p, the algorithm determines splitter elements v_j with global rank k = j·n/p.
Then the corresponding positions of v_1, ..., v_p in each sequence S_i are determined with binary search, and thus the S_i are further partitioned into p subsequences S_{i,1}, ..., S_{i,p} with

S_{i,j} := { x ∈ S_i | rank(v_{j−1}) < rank(x) ≤ rank(v_j) }.

Furthermore, the elements of S_{1,i}, ..., S_{p,i} are assigned to processor i; that is, all elements between rank (i−1)·n/p and rank i·n/p, which are distributed over all the S_i. Thus, each processor receives a sequence of sorted sequences. The fact that the rank k of the splitter elements v_i was chosen globally provides two important properties. On the one hand, k was chosen so that each processor can still operate on n/p elements after assignment; the algorithm is perfectly load-balanced. On the other hand, all elements on processor i are less than or equal to all elements on processor i+1. Hence, each processor performs the p-way merge locally and thus obtains a sorted sequence from its sub-sequences. Because of the second property, no further p-way merge has to be performed; the results only have to be put together in the order of the processor number.

In its simplest form, given p sorted sequences S_1, ..., S_p distributed evenly on p processors and a rank k, the task of multisequence selection is to find an element x with global rank k in the union of the sequences. Hence, this can be used to divide each S_i into two parts at a splitter index l_i, where the lower part contains only elements which are smaller than x, while the elements bigger than x are located in the upper part. The sequential algorithm returns the indices of the splits in each sequence, i.e. the indices l_i in the sequences S_i such that S_i[l_i] has a global rank less than k and rank(S_i[l_i + 1]) ≥ k.[23]

For the complexity analysis, the PRAM model is chosen. If the data is evenly distributed over all p processors, the p-fold execution of the binarySearch method has a running time of O(p log(n/p)). The expected recursion depth is O(log(Σ_i |S_i|)) = O(log n), as in the ordinary quickselect. Thus the overall expected running time is O(p log(n/p) log n).

Applied to the parallel multiway merge sort, this algorithm has to be invoked in parallel such that all splitter elements of rank i·n/p for i = 1, ..., p are found simultaneously. These splitter elements can then be used to partition each sequence into p parts, with the same total running time of O(p log(n/p) log n). The overall structure of the parallel multiway merge sort algorithm is as follows.
We assume that there is a barrier synchronization before and after the multisequence selection, such that every processor can determine the splitting elements and the sequence partition properly. Firstly, each processor sorts its assigned n/p elements locally using a sorting algorithm with complexity O((n/p) log(n/p)). After that, the splitter elements have to be calculated in time O(p log(n/p) log n). Finally, each group of p splits has to be merged in parallel by each processor, with a running time of O((n/p) log p), using a sequential p-way merge algorithm. Thus, the overall running time is given by

O( (n/p) log(n/p) + p log(n/p) log n + (n/p) log p ).

The multiway merge sort algorithm is very scalable through its high parallelization capability, which allows the use of many processors. This makes the algorithm a viable candidate for sorting large amounts of data, such as those processed in computer clusters. Also, since in such systems memory is usually not a limiting resource, the disadvantage of the space complexity of merge sort is negligible. However, other factors become important in such systems which are not taken into account when modelling on a PRAM: the memory hierarchy, when the data does not fit into the processors' cache, and the communication overhead of exchanging data between processors, which could become a bottleneck when the data can no longer be accessed via the shared memory.

Sanders et al. have presented in their paper a bulk synchronous parallel algorithm for multilevel multiway mergesort, which divides p processors into r groups of size p′. All processors sort locally first. Unlike single-level multiway mergesort, these sequences are then partitioned into r parts and assigned to the appropriate processor groups. These steps are repeated recursively within those groups. This reduces communication and especially avoids problems with many small messages. The hierarchical structure of the underlying real network can be used to define the processor groups (e.g. racks, clusters, ...).[22]

Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with Richard Cole using a clever subsampling algorithm to ensure O(1) merge.[24] Other sophisticated parallel sorting algorithms can achieve the same or better time bounds with a lower constant. For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in O(log n) time on a CRCW parallel random-access machine (PRAM) with n processors by performing partitioning implicitly.[25] Powers further shows that a pipelined version of Batcher's bitonic mergesort at O((log n)²) time on a butterfly sorting network is in practice actually faster than his O(log n) sorts on a PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix and parallel sorting.[26]

Although heapsort has the same time bounds as merge sort, it requires only Θ(1) auxiliary space instead of merge sort's Θ(n).
On typical modern architectures, efficient quicksort implementations generally outperform merge sort for sorting RAM-based arrays.[27] Quicksort is preferred when the data to be sorted is smaller, since its O(log n) space complexity lets it exploit cache locality better than merge sort, whose space complexity is O(n).[27] On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-access sequential media. Merge sort is often the best choice for sorting a linked list: in this situation it is relatively easy to implement a merge sort in such a way that it requires only Θ(1) extra space, and the slow random-access performance of a linked list makes some other algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely impossible. (A sketch of a linked-list merge sort is given below.)

As of Perl 5.8, merge sort is its default sorting algorithm (it was quicksort in previous versions of Perl).[28] In Java, the Arrays.sort() methods use merge sort or a tuned quicksort depending on the datatypes, and for implementation efficiency switch to insertion sort when fewer than seven array elements are being sorted.[29] The Linux kernel uses merge sort for its linked lists.[30] Timsort, a tuned hybrid of merge sort and insertion sort, is used in a variety of software platforms and languages, including the Java and Android platforms,[31] and has been used by Python since version 2.3; since version 3.11, Timsort's merge policy has been updated to Powersort.[32]
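As an illustration of the linked-list case discussed above, here is a minimal top-down merge sort on a singly linked list in Python. The Node class and helper names are illustrative; note that the recursion stack still costs O(log n), so a bottom-up variant would be needed for literally Θ(1) extra space.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def merge_sort_list(head):
    # Top-down merge sort on a singly linked list: splitting and merging
    # only relink existing nodes, so no per-element extra storage is used.
    if head is None or head.next is None:
        return head
    # Find the middle with slow/fast pointers, then cut the list in two.
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None
    left, right = merge_sort_list(head), merge_sort_list(mid)
    # Splice the two sorted halves together (stable: ties keep left first).
    dummy = tail = Node(None)
    while left and right:
        if left.value <= right.value:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

def from_list(xs):
    head = None
    for x in reversed(xs):
        head = Node(x, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

print(to_list(merge_sort_list(from_list([5, 2, 8, 1, 9, 3]))))  # [1, 2, 3, 5, 8, 9]
```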
https://en.wikipedia.org/wiki/Merge_sort
In computer graphics and digital imaging, image scaling refers to the resizing of a digital image. In video technology, the magnification of digital material is known as upscaling or resolution enhancement.

When scaling a vector graphic image, the graphic primitives that make up the image can be scaled using geometric transformations with no loss of image quality. When scaling a raster graphics image, a new image with a higher or lower number of pixels must be generated. In the case of decreasing the pixel number (scaling down), this usually results in a visible quality loss. From the standpoint of digital signal processing, the scaling of raster graphics is a two-dimensional example of sample-rate conversion, the conversion of a discrete signal from one sampling rate (in this case, the local sampling rate) to another.

Image scaling can be interpreted as a form of image resampling or image reconstruction from the view of the Nyquist sampling theorem. According to the theorem, downsampling to a smaller image from a higher-resolution original can only be carried out after applying a suitable 2D anti-aliasing filter to prevent aliasing artifacts. The image is reduced to the information that can be carried by the smaller image. In the case of upsampling, a reconstruction filter takes the place of the anti-aliasing filter.

A more sophisticated approach to upscaling treats the problem as an inverse problem, solving the question of generating a plausible image that, when scaled down, would look like the input image. A variety of techniques have been applied for this, including optimization techniques with regularization terms and the use of machine learning from examples.

An image size can be changed in several ways. One of the simpler ways of increasing image size is nearest-neighbor interpolation, replacing every pixel with the nearest pixel in the output; for upscaling, this means multiple pixels of the same color will be present. This can preserve sharp details but also introduce jaggedness in previously smooth images. 'Nearest' in nearest-neighbor does not have to be the mathematical nearest. One common implementation is to always round toward zero. Rounding this way produces fewer artifacts and is faster to calculate.[citation needed] This algorithm is often preferred for images which have little to no smooth edges. A common application of this can be found in pixel art.

Bilinear interpolation works by interpolating pixel color values, introducing a continuous transition into the output even where the original material has discrete transitions. Although this is desirable for continuous-tone images, this algorithm reduces contrast (sharp edges) in a way that may be undesirable for line art. Bicubic interpolation yields substantially better results, with an increase in computational cost.[citation needed]

Sinc resampling, in theory, provides the best possible reconstruction for a perfectly bandlimited signal. In practice, the assumptions behind sinc resampling are not completely met by real-world digital images. Lanczos resampling, an approximation to the sinc method, yields better results. Bicubic interpolation can be regarded as a computationally efficient approximation to Lanczos resampling.[citation needed]

One weakness of bilinear, bicubic, and related algorithms is that they sample a specific number of pixels.
When downscaling below a certain threshold, such as more than twice for all bi-sampling algorithms, the algorithms will sample non-adjacent pixels, which results in both losing data and rough results.[citation needed]

The trivial solution to this issue is box sampling, which is to consider the target pixel a box on the original image and sample all pixels inside the box. This ensures that all input pixels contribute to the output. The major weakness of this algorithm is that it is hard to optimize.[citation needed]

Another solution to the downscale problem of bi-sampling scaling is mipmaps. A mipmap is a prescaled set of downscaled copies. When downscaling, the nearest larger mipmap is used as the origin, to ensure no scaling below the useful threshold of bilinear scaling. This algorithm is fast and easy to optimize. It is standard in many frameworks, such as OpenGL. The cost is using more image memory, exactly one-third more in the standard implementation.

Simple interpolation based on the Fourier transform pads the frequency domain with zero components (a smooth window-based approach would reduce the ringing). Besides the good conservation (or recovery) of details, notable are the ringing and the circular bleeding of content from the left border to the right border (and the other way around).

Edge-directed interpolation algorithms aim to preserve edges in the image after scaling, unlike other algorithms, which can introduce staircase artifacts. Examples of algorithms for this task include New Edge-Directed Interpolation (NEDI),[1][2] Edge-Guided Image Interpolation (EGGI),[3] Iterative Curvature-Based Interpolation (ICBI),[4] and Directional Cubic Convolution Interpolation (DCCI).[5] A 2013 analysis found that DCCI had the best scores in peak signal-to-noise ratio and structural similarity on a series of test images.[6]

For magnifying computer graphics with low resolution and/or few colors (usually from 2 to 256 colors), better results can be achieved by hqx or other pixel-art scaling algorithms. These produce sharp edges and maintain a high level of detail.

Vector extraction, or vectorization, offers another approach. Vectorization first creates a resolution-independent vector representation of the graphic to be scaled. Then the resolution-independent version is rendered as a raster image at the desired resolution. This technique is used by Adobe Illustrator, Live Trace, and Inkscape.[7] Scalable Vector Graphics are well suited to simple geometric images, while photographs do not fare well with vectorization due to their complexity.

Machine learning can be used for more detailed images, such as photographs and complex artwork. Programs that use this method include waifu2x, Imglarger and Neural Enhance. [Image: demonstration of conventional vs. waifu2x upscaling with noise reduction, using a detail of Phosphorus and Hesperus by Evelyn De Morgan.] AI-driven software such as the MyHeritage Photo Enhancer allows detail and sharpness to be added to historical photographs, where it is not present in the original.

Image scaling is used in, among other applications, web browsers,[8] image editors, image and file viewers, software magnifiers, digital zoom, the process of generating thumbnail images, and when outputting images through screens or printers. One such application is the magnification of images for home theaters, for HDTV-ready output devices from PAL-resolution content, for example from a DVD player. Upscaling is performed in real time, and the output signal is not saved.
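To make two of the simpler methods above concrete, the following minimal numpy sketch implements nearest-neighbor scaling with the round-toward-zero indexing mentioned earlier, and an integer-factor box-sampling downscale. The function names are illustrative, and only 2-D (grayscale) arrays are handled; color images would be processed per channel.

```python
import numpy as np

def scale_nearest(img, new_h, new_w):
    # Each output pixel copies the input pixel whose index comes from
    # integer scaling; floor division is the round-toward-zero variant.
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

def downscale_box(img, factor):
    # Box sampling for an integer factor: every output pixel is the mean
    # of a factor-by-factor box, so all input pixels contribute.
    h2, w2 = img.shape[0] // factor, img.shape[1] // factor
    boxes = img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return boxes.mean(axis=(1, 3)).astype(img.dtype)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(scale_nearest(img, 8, 8).shape)  # (8, 8)
print(downscale_box(img, 2))           # [[2 4] [10 12]] (box means, truncated to uint8)
```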
As pixel-art graphics are usually low-resolution, they rely on careful placement of individual pixels, often with a limited palette of colors. This results in graphics that rely on stylized visual cues to define complex shapes with little resolution, down to individual pixels. This makes scaling pixel art a particularly difficult problem.

Specialized algorithms[9] were developed to handle pixel-art graphics, as the traditional scaling algorithms do not take perceptual cues into account. Since a typical application is to improve the appearance of fourth-generation and earlier video games on arcade and console emulators, many are designed to run in real time for small input images at 60 frames per second. On fast hardware, these algorithms are suitable for gaming and other real-time image processing. These algorithms provide sharp, crisp graphics, while minimizing blur. Scaling art algorithms have been implemented in a wide range of emulators such as HqMAME and DOSBox, as well as 2D game engines and game engine recreations such as ScummVM. They gained recognition with gamers, for whom these technologies encouraged a revival of 1980s and 1990s gaming experiences.[citation needed]

Such filters are currently used in commercial emulators on Xbox Live, Virtual Console, and PSN to allow classic low-resolution games to be more visually appealing on modern HD displays. Recently released games that incorporate these filters include Sonic's Ultimate Genesis Collection, Castlevania: The Dracula X Chronicles, Castlevania: Symphony of the Night, and Akumajō Dracula X Chi no Rondo.

A number of companies have developed techniques to upscale video frames in real time, such as when they are drawn on screen in a video game. Nvidia's deep learning super sampling (DLSS) uses deep learning to upsample lower-resolution images to a higher resolution for display on higher-resolution computer monitors.[10] AMD's FidelityFX Super Resolution 1.0 (FSR) does not employ machine learning, instead using traditional hand-written algorithms to achieve spatial upscaling on traditional shading units. FSR 2.0 utilises temporal upscaling, again with a hand-tuned algorithm. FSR standardized presets are not enforced, and some titles such as Dota 2 offer resolution sliders.[11] Other technologies include Intel XeSS and Nvidia Image Scaler (NIS).[12][13]
https://en.wikipedia.org/wiki/Image_scaling
The hacking of consumer electronics is a common practice that users perform to customize and modify their devices beyond what is typically possible. This activity has a long history, dating from the days of early computer, programming, and electronics hobbyists. The process of consumer electronics hacking is usually accomplished through modification of the system software, either an operating system or firmware, but hardware modifications are not uncommon.

The legality of hacking consumer electronics has been challenged over the years; an example of this is the cracking of encryption keys used in High-bandwidth Digital Content Protection, where detractors have been threatened with legal action. However, some companies have encouraged hardware hacking, such as Google's Nexus and Pixel series of smartphones.

Many modern consumer electronics run either an operating system or firmware. When this is stored in a mutable storage device, these files can be modified to add functionality to the operating system, or to replace it entirely. Multiple methods are used to successfully hack the target device, such as gaining shell access and gathering information about the device's hardware and software, before using the obtained information to manipulate the operating system.[1]

Getting access to a shell allows the user to run commands to interact with the operating system. Typically, a root shell is aimed for, which grants administrative privileges, letting the user modify operating system files. Root access can be obtained through the use of software exploits (i.e. bugs), through the bootloader console, or over a serial port embedded in the device, such as a JTAG or UART interface.[1] In the case of gaining root privileges on an Android device, the process is known as rooting.

On some Android devices, the bootloader is locked for security to prevent installation of other operating systems.[2] Unlocking it is required before another OS can be installed. On Android devices, Fastboot (Odin mode on Samsung devices) allows flashing of operating systems onto storage.[3] Das U-Boot is a bootloader commonly used in embedded devices such as routers and Chromebooks.

Getting information on the device's hardware and software is vital because exploits can be identified, which are subsequently used to gain shell access, port an operating system to the device, etc. Many device manufacturers include open source software in their products.[4] When the software used is licensed under a copyleft license, a manufacturer is obliged to provide the source code of the open source components. An instance of this was when Naomi Wu requested the GPLv2-licensed source code of the Linux kernel branch of a smartphone vendor.[5] A good share of consumer devices run on a modified Linux kernel,[4] which is forked before applying device-specific changes.[6] Android is an example of an OS which makes use of the Linux kernel.

Device manufacturers often include countermeasures to hinder hardware hacking, one of which is the use of cryptography to prevent unauthorized code from being executed. For example, Nvidia graphics cards have signed firmware to prevent tampering or hacking. Whistleblower Edward Snowden showed Wired correspondent Shane Smith how to remove the cameras and microphones from a smartphone.[7]

One of the reasons hacking is done is to add or unlock features in an operating system. Another reason hacking is done is to allow unsupported operating systems to be installed. A general purpose computer has historically been open by design.
However, Apple's Apple silicon-based Mac hardware is based on the ARM architecture family, making it difficult to install a third-party operating system.

There are many reasons video game consoles may be hacked. Game consoles are often restricted in a way that may disallow unofficial games from running on them (see Video game console § Licensing), and hacking is undertaken to allow unlicensed games to run, including pirated games. Another reason is to allow features to be added, such as using the console as a multimedia player. An example of this is Xbox Media Player, which was made to allow pictures and movies to be shown on an Xbox. Some devices—most commonly open source—are built for homebrew purposes, and encourage hacking as an integral part of their existence.

iOS jailbreaking was often considered illegal in the United States until a recent[when?] ruling by the U.S. Copyright Office declaring that jailbreaking an iPhone or other mobile device would no longer violate copyright law.[17] However, simultaneously, there is ongoing prosecution against hackers of videogame consoles under anti-circumvention violations of the DMCA. A main complication, in many cases, is the profiting from selling jailbroken or rooted equipment as a value-added service. At least some accused deny these charges and claim only to be making back-ups of legally purchased games.[18][19]

In around 2010, the High-bandwidth Digital Content Protection encryption system, which encrypts data running between cable boxes, Blu-ray players, and other similar devices and displays, was cracked, and a copy of the master key needed to decrypt HDCP-protected streams was posted on the internet. Intel, which created and now licenses HDCP technology, has stated that HDCP is sufficient to keep most users from circumventing it, but indicated that it may threaten legal action against more determined users under the DMCA.[20]

Also in around 2010, on the issue of the hacking of its then new interactive game controller the Kinect, Microsoft initially condemned and threatened legal action against those who hacked it, but soon after reversed this position and instead stated that it had intentionally left the device open and would in fact not prosecute those who modified it.[21]
https://en.wikipedia.org/wiki/Hacking_of_consumer_electronics
An oriented matroid is a mathematical structure that abstracts the properties of directed graphs, vector arrangements over ordered fields, and hyperplane arrangements over ordered fields.[1] In comparison, an ordinary (i.e., non-oriented) matroid abstracts the dependence properties that are common both to graphs, which are not necessarily directed, and to arrangements of vectors over fields, which are not necessarily ordered.[2][3]

All oriented matroids have an underlying matroid. Thus, results on ordinary matroids can be applied to oriented matroids. However, the converse is false; some matroids cannot become an oriented matroid by orienting an underlying structure (e.g., circuits or independent sets).[4] The distinction between matroids and oriented matroids is discussed further below.

Matroids are often useful in areas such as dimension theory and algorithms. Because of an oriented matroid's inclusion of additional details about the oriented nature of a structure, its usefulness extends further into several areas including geometry and optimization.

The first appearance of oriented matroids was in a 1966 article by George J. Minty and was confined to regular matroids.[5] Subsequently, R. T. Rockafellar (1969) suggested the problem of generalizing Minty's concept to real vector spaces. His proposal helped lead to the development of the general theory.

In order to abstract the concept of orientation on the edges of a graph to sets, one needs the ability to assign "direction" to the elements of a set. The way this is achieved is with the following definition of signed sets. Given an element $x$ of the support, we will write $x$ for a positive element and $-x$ for a negative element. In this way, a signed set is just adding negative signs to distinguished elements. This will make sense as a "direction" only when we consider orientations of larger structures. Then the sign of each element will encode its direction relative to this orientation.

Like ordinary matroids, oriented matroids admit several equivalent systems of axioms. (Such structures that possess multiple equivalent axiomatizations are called cryptomorphic.) Let $E$ be any set; we refer to $E$ as the ground set. Let $\mathcal{C}$ be a collection of signed sets, each of which is supported by a subset of $E$. If the following axioms hold for $\mathcal{C}$, then $\mathcal{C}$ is the set of signed circuits of an oriented matroid on $E$: (C0) the empty signed set is not in $\mathcal{C}$; (C1) $\mathcal{C} = -\mathcal{C}$ (symmetry); (C2) the support of no member of $\mathcal{C}$ properly contains the support of another (incomparability); and (C3) if $X, Y \in \mathcal{C}$ with $X \ne -Y$ and $e \in X^+ \cap Y^-$, then there is a $Z \in \mathcal{C}$ with $Z^+ \subseteq (X^+ \cup Y^+) \setminus \{e\}$ and $Z^- \subseteq (X^- \cup Y^-) \setminus \{e\}$ (weak elimination).

The composition of signed sets $X$ and $Y$ is the signed set $X \circ Y$ defined by $\underline{X \circ Y} = \underline{X} \cup \underline{Y}$, $(X \circ Y)^+ = X^+ \cup (Y^+ \setminus X^-)$, and $(X \circ Y)^- = X^- \cup (Y^- \setminus X^+)$. The vectors of an oriented matroid are the compositions of circuits; they satisfy an analogous system of axioms. The covectors of an oriented matroid are the vectors of its dual oriented matroid. Let $E$ be as above.
For each non-negative integer $r$, a chirotope of rank $r$ is a function $\chi\colon E^r \to \{-1,0,1\}$ that satisfies the following axioms: (B0) $\chi$ is not identically zero; (B1) $\chi$ is alternating, i.e. $\chi(x_{\sigma(1)},\dots,x_{\sigma(r)}) = \operatorname{sgn}(\sigma)\,\chi(x_1,\dots,x_r)$ for every permutation $\sigma$; and (B2) for all $x_1,\dots,x_r,y_1,\dots,y_r \in E$ such that $\chi(y_i,x_2,\dots,x_r)\,\chi(y_1,\dots,y_{i-1},x_1,y_{i+1},\dots,y_r) \ge 0$ for each $i$, we have $\chi(x_1,\dots,x_r)\,\chi(y_1,\dots,y_r) \ge 0$.

The term chirotope is derived from the mathematical notion of chirality, which is a concept abstracted from chemistry, where it is used to distinguish molecules that have the same structure except for a reflection.

Every chirotope of rank $r$ gives rise to a set of bases of a matroid on $E$, consisting of those $r$-element subsets to which $\chi$ assigns a nonzero value.[6] The chirotope can then sign the circuits of that matroid. If $C$ is a circuit of the described matroid, then $C \subset \{x_1,\dots,x_r,x_{r+1}\}$ where $\{x_1,\dots,x_r\}$ is a basis. Then $C$ can be signed, with $\chi$ determining which elements are positive and which are negative. Thus a chirotope gives rise to the oriented bases of an oriented matroid. In this sense, (B0) is the nonemptiness axiom for bases and (B2) is the basis exchange property.

Oriented matroids are often introduced (e.g., by Bachem and Kern) as an abstraction for directed graphs or systems of linear inequalities. Below are the explicit constructions.

Given a digraph, we define a signed circuit from the standard circuit of the graph by the following method. The support of the signed circuit $\underline{X}$ is the standard set of edges in a minimal cycle. We go along the cycle in the clockwise or anticlockwise direction, assigning those edges whose orientation agrees with the direction to the positive elements $X^+$ and those edges whose orientation disagrees with the direction to the negative elements $X^-$. If $\mathcal{C}$ is the set of all such $X$, then $\mathcal{C}$ is the set of signed circuits of an oriented matroid on the set of edges of the directed graph.

For example, consider the directed graph with edge set $\{(1,2),(1,3),(3,2),(3,4),(4,3)\}$, where $(u,v)$ denotes an edge directed from vertex $u$ to vertex $v$. There are only two circuits, namely $\{(1,2),(1,3),(3,2)\}$ and $\{(3,4),(4,3)\}$. Then there are only four possible signed circuits, corresponding to clockwise and anticlockwise orientations, namely $\{(1,2),-(1,3),-(3,2)\}$, $\{-(1,2),(1,3),(3,2)\}$, $\{(3,4),(4,3)\}$, and $\{-(3,4),-(4,3)\}$. These four sets form the set of signed circuits of an oriented matroid on the set $\{(1,2),(1,3),(3,2),(3,4),(4,3)\}$.

If $E$ is any finite subset of $\mathbb{R}^n$, then the set of minimal linearly dependent sets forms the circuit set of a matroid on $E$. To extend this construction to oriented matroids, for each circuit $\{v_1,\dots,v_m\}$ there is a minimal linear dependence $\sum_{i=1}^m \lambda_i v_i = 0$ with $\lambda_i \in \mathbb{R}$, not all zero.
Then the signed circuit $X = \{X^+, X^-\}$ has positive elements $X^+ = \{v_i : \lambda_i > 0\}$ and negative elements $X^- = \{v_i : \lambda_i < 0\}$. The set of all such $X$ forms the set of signed circuits of an oriented matroid on $E$. Oriented matroids that can be realized this way are called representable.

Given the same set of vectors $E$, we can define the same oriented matroid with a chirotope $\chi : E^r \to \{-1,0,1\}$. For any $x_1,\dots,x_r \in E$ let $\chi(x_1,\dots,x_r) = \operatorname{sgn}(\det(x_1,\dots,x_r))$, where the right-hand side of the equation is the sign of the determinant. Then $\chi$ is the chirotope of the same oriented matroid on the set $E$.

A real hyperplane arrangement $\mathcal{A} = \{H_1,\dots,H_n\}$ is a finite set of hyperplanes in $\mathbb{R}^d$, each containing the origin. By picking one side of each hyperplane to be the positive side, we obtain an arrangement of half-spaces. A half-space arrangement breaks down the ambient space into a finite collection of cells, each defined by which side of each hyperplane it lands on. That is, assign each point $x \in \mathbb{R}^d$ to the signed set $X = (X^+, X^-)$ with $i \in X^+$ if $x$ is on the positive side of $H_i$ and $i \in X^-$ if $x$ is on the negative side of $H_i$. This collection of signed sets defines the set of covectors of the oriented matroid, which are the vectors of the dual oriented matroid.[7] Günter M. Ziegler introduces oriented matroids via convex polytopes.

A standard matroid is called orientable if its circuits are the supports of signed circuits of some oriented matroid. It is known that all real representable matroids are orientable. It is also known that the class of orientable matroids is closed under taking minors; however, the list of forbidden minors for orientable matroids is known to be infinite.[8] In this sense, oriented matroids are a much stricter formalization than ordinary matroids.

Just as a matroid has a unique dual, an oriented matroid has a unique dual, often called its "orthogonal dual". What this means is that the underlying matroids are dual and that the cocircuits are signed so that they are "orthogonal" to every circuit. Two signed sets are said to be orthogonal if the intersection of their supports is empty or if the restriction of their positive elements to the intersection and negative elements to the intersection form two nonidentical and non-opposite signed sets. The existence and uniqueness of the dual oriented matroid depends on the fact that every signed circuit is orthogonal to every signed cocircuit.[9]

To see why orthogonality is necessary for uniqueness, one needs only to look at the digraph example above. We know that for planar graphs the dual of the circuit matroid is the circuit matroid of the graph's planar dual. Thus there are as many different dual pairs of oriented matroids based on the matroid of the graph as there are ways to orient the graph and, in a corresponding way, its dual.

To see the explicit construction of this unique orthogonal dual oriented matroid, consider an oriented matroid's chirotope $\chi : E^r \to \{-1,0,1\}$.
If we consider a list of elements $x_1,\dots,x_k \in E$ as a cyclic permutation, then we define $\operatorname{sgn}(x_1,\dots,x_k)$ to be the sign of the associated permutation. If $\chi^* : E^{|E|-r} \to \{-1,0,1\}$ is defined as $\chi^*(x_1,\dots,x_{|E|-r}) = \operatorname{sgn}(x_1,\dots,x_{|E|-r},y_1,\dots,y_r)\,\chi(y_1,\dots,y_r)$, where $(y_1,\dots,y_r)$ is any ordering of the complementary elements $E \setminus \{x_1,\dots,x_{|E|-r}\}$, then $\chi^*$ is the chirotope of the unique orthogonal dual oriented matroid.[10]

Not all oriented matroids are representable—that is, not all have realizations as point configurations or, equivalently, hyperplane arrangements. However, in some sense, all oriented matroids come close to having realizations as hyperplane arrangements. In particular, the Folkman–Lawrence topological representation theorem states that any oriented matroid has a realization as an arrangement of pseudospheres. A $d$-dimensional pseudosphere is an embedding $e : S^d \hookrightarrow S^{d+1}$ such that there exists a homeomorphism $h : S^{d+1} \to S^{d+1}$ so that $h \circ e$ embeds $S^d$ as an equator of $S^{d+1}$. In this sense a pseudosphere is just a tame sphere (as opposed to wild spheres). A pseudosphere arrangement in $S^d$ is a collection of pseudospheres that intersect along pseudospheres. Finally, the Folkman–Lawrence topological representation theorem states that every oriented matroid of rank $d+1$ can be obtained from a pseudosphere arrangement in $S^d$.[11] It is named after Jon Folkman and Jim Lawrence, who published it in 1978.

The theory of oriented matroids has influenced the development of combinatorial geometry, especially the theory of convex polytopes, zonotopes, and configurations of vectors (equivalently, arrangements of hyperplanes).[12] Many results—Carathéodory's theorem, Helly's theorem, Radon's theorem, the Hahn–Banach theorem, the Krein–Milman theorem, the lemma of Farkas—can be formulated using appropriate oriented matroids.[13]

The development of an axiom system for oriented matroids was initiated by R. Tyrrell Rockafellar to describe the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm; Rockafellar was inspired by Albert W. Tucker's studies of such sign patterns in "Tucker tableaux".[14]

The theory of oriented matroids has led to breakthroughs in combinatorial optimization. In linear programming, it was the language in which Robert G. Bland formulated his pivoting rule, by which the simplex algorithm avoids cycles. Similarly, it was used by Terlaky and Zhang to prove that their criss-cross algorithms have finite termination for linear programming problems. Similar results were made in convex quadratic programming by Todd and Terlaky.[15] It has been applied to linear-fractional programming,[16] quadratic-programming problems, and linear complementarity problems.[17][18][19]

Outside of combinatorial optimization, oriented matroid theory also appears in convex minimization in Rockafellar's theory of "monotropic programming" and related notions of "fortified descent".[20] Similarly, matroid theory has influenced the development of combinatorial algorithms, particularly the greedy algorithm.[21] More generally, a greedoid is useful for studying the finite termination of algorithms.
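To make the realizable case concrete, the following minimal Python sketch computes the determinant-sign chirotope $\chi(x_1,\dots,x_r) = \operatorname{sgn}(\det(x_1,\dots,x_r))$ of a small vector configuration, as described in the construction above; the function name is illustrative.

```python
import numpy as np
from itertools import permutations

def chirotope(vectors):
    # The determinant-sign chirotope of a vector configuration in R^r:
    # chi(x_1, ..., x_r) = sgn det(x_1, ..., x_r).  Tuples with a repeated
    # index would give 0, so only permutations of distinct indices are kept.
    E = np.asarray(vectors, dtype=float)
    r = E.shape[1]
    return {idx: int(np.sign(round(np.linalg.det(E[list(idx)]), 12)))
            for idx in permutations(range(len(E)), r)}

# Three vectors in the plane (rank 2).  The unique minimal dependence
# v0 + v1 - v2 = 0 gives the signed circuit X+ = {v0, v1}, X- = {v2}.
chi = chirotope([(1, 0), (0, 1), (1, 1)])
print(chi[(0, 1)], chi[(1, 0)], chi[(0, 2)])  # 1 -1 1
```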
https://en.wikipedia.org/wiki/Oriented_matroid
In mathematics, a subset $R$ of the integers is called a reduced residue system modulo $n$ if: every element of $R$ is relatively prime to $n$; no two elements of $R$ are congruent modulo $n$; and $R$ contains exactly $\varphi(n)$ elements. Here $\varphi$ denotes Euler's totient function.

A reduced residue system modulo $n$ can be formed from a complete residue system modulo $n$ by removing all integers not relatively prime to $n$. For example, a complete residue system modulo 12 is {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}. The so-called totatives 1, 5, 7 and 11 are the only integers in this set which are relatively prime to 12, and so the corresponding reduced residue system modulo 12 is {1, 5, 7, 11}. The cardinality of this set can be calculated with the totient function: $\varphi(12) = 4$. Some other reduced residue systems modulo 12 are {13, 17, 19, 23} and {−11, −7, −5, −1}.
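A minimal Python illustration of these definitions (the function names are mine):

```python
from math import gcd

def reduced_residue_system(n):
    # The totatives of n form the canonical reduced residue system mod n.
    return [k for k in range(n) if gcd(k, n) == 1]

def is_reduced_residue_system(rs, n):
    # The defining conditions: every element coprime to n, no two elements
    # congruent mod n, and exactly phi(n) elements in total.
    phi = len(reduced_residue_system(n))
    return (all(gcd(x, n) == 1 for x in rs)
            and len({x % n for x in rs}) == len(rs) == phi)

print(reduced_residue_system(12))                        # [1, 5, 7, 11]
print(is_reduced_residue_system([13, 17, 19, 23], 12))   # True
print(is_reduced_residue_system([-11, -7, -5, -1], 12))  # True
```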
https://en.wikipedia.org/wiki/Reduced_residue_system
In mathematics, the composition operator $\circ$ takes two functions, $f$ and $g$, and returns a new function $h(x) := (g \circ f)(x) = g(f(x))$. Thus, the function $g$ is applied after applying $f$ to $x$. $(g \circ f)$ is pronounced "the composition of $g$ and $f$".[1] Reverse composition, sometimes denoted $f \mapsto g$, applies the operation in the opposite order, applying $f$ first and $g$ second. Intuitively, reverse composition is a chaining process in which the output of function $f$ feeds the input of function $g$.

The composition of functions is a special case of the composition of relations, sometimes also denoted by $\circ$. As a result, all properties of composition of relations are true of composition of functions,[2] such as associativity.

The composition of functions is always associative—a property inherited from the composition of relations.[2] That is, if $f$, $g$, and $h$ are composable, then $f \circ (g \circ h) = (f \circ g) \circ h$.[3] Since the parentheses do not change the result, they are generally omitted.

In a strict sense, the composition $g \circ f$ is only meaningful if the codomain of $f$ equals the domain of $g$; in a wider sense, it is sufficient that the former be an improper subset of the latter.[nb 1] Moreover, it is often convenient to tacitly restrict the domain of $f$, such that $f$ produces only values in the domain of $g$. For example, the composition $g \circ f$ of the functions $f : \mathbb{R} \to (-\infty, +9]$ defined by $f(x) = 9 - x^2$ and $g : [0, +\infty) \to \mathbb{R}$ defined by $g(x) = \sqrt{x}$ can be defined on the interval $[-3, +3]$.

The functions $g$ and $f$ are said to commute with each other if $g \circ f = f \circ g$. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, $|x| + 3 = |x + 3|$ only when $x \ge 0$.

The composition of one-to-one (injective) functions is always one-to-one. Similarly, the composition of onto (surjective) functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition (assumed invertible) has the property that $(f \circ g)^{-1} = g^{-1} \circ f^{-1}$.[4]

Derivatives of compositions involving differentiable functions can be found using the chain rule. Higher derivatives of such functions are given by Faà di Bruno's formula.[3] Composition of functions is sometimes described as a kind of multiplication on a function space, but has very different properties from pointwise multiplication of functions (e.g. composition is not commutative).[5]

Suppose one has two (or more) functions $f : X \to X$, $g : X \to X$ having the same domain and codomain; these are often called transformations. Then one can form chains of transformations composed together, such as $f \circ f \circ g \circ f$. Such chains have the algebraic structure of a monoid, called a transformation monoid or (much more seldom) a composition monoid. In general, transformation monoids can have remarkably complicated structure. One particularly notable example is the de Rham curve. The set of all functions $f : X \to X$ is called the full transformation semigroup[6] or symmetric semigroup[7] on $X$. (One can actually define two semigroups, depending on how one defines the semigroup operation as the left or right composition of functions.[8]) If the given transformations are bijective (and thus invertible), then the set of all possible combinations of these functions forms a transformation group (also known as a permutation group); and one says that the group is generated by these functions.
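A minimal Python sketch of composition, using the example functions $f(x) = 9 - x^2$ and $g(x) = \sqrt{x}$ from above (the helper name compose is illustrative):

```python
def compose(g, f):
    """Return g after f: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: 9 - x**2          # f : R -> (-inf, 9]
g = lambda x: x ** 0.5          # g : [0, inf) -> R
h = compose(g, f)               # defined on [-3, 3], as in the text
print(h(3))                     # 0.0

# Associativity: the two groupings agree wherever defined.
double, inc = (lambda x: 2 * x), (lambda x: x + 1)
lhs = compose(double, compose(inc, abs))
rhs = compose(compose(double, inc), abs)
print(lhs(-4) == rhs(-4) == 10)                          # True

# Composition is generally not commutative:
print(compose(double, inc)(5), compose(inc, double)(5))  # 12 11
```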
The set of all bijective functions $f : X \to X$ (called permutations) forms a group with respect to function composition. This is the symmetric group, also sometimes called the composition group. A fundamental result in group theory, Cayley's theorem, essentially says that any group is in fact just a subgroup of a symmetric group (up to isomorphism).[9] In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse), because the symmetric semigroup is a regular semigroup.[10]

If $Y \subseteq X$, then $f : X \to Y$ may compose with itself; this is sometimes denoted $f^2$. That is, $(f \circ f)(x) = f(f(x)) = f^2(x)$. More generally, for any natural number $n \ge 2$, the $n$th functional power can be defined inductively by $f^n = f \circ f^{n-1} = f^{n-1} \circ f$, a notation introduced by Hans Heinrich Bürmann[citation needed][11][12] and John Frederick William Herschel.[13][11][14][12] Repeated composition of such a function with itself is called function iteration.

Note: If $f$ takes its values in a ring (in particular for real or complex-valued $f$), there is a risk of confusion, as $f^n$ could also stand for the $n$-fold product of $f$, e.g. $f^2(x) = f(x) \cdot f(x)$.[12] For trigonometric functions, usually the latter is meant, at least for positive exponents.[12] For example, in trigonometry, this superscript notation represents standard exponentiation when used with trigonometric functions: $\sin^2(x) = \sin(x) \cdot \sin(x)$. However, for negative exponents (especially $-1$), it nevertheless usually refers to the inverse function, e.g., $\tan^{-1} = \arctan \ne 1/\tan$.

In some cases, when, for a given function $f$, the equation $g \circ g = f$ has a unique solution $g$, that function can be defined as the functional square root of $f$, then written as $g = f^{1/2}$. More generally, when $g^n = f$ has a unique solution for some natural number $n > 0$, then $f^{m/n}$ can be defined as $g^m$. Under additional restrictions, this idea can be generalized so that the iteration count becomes a continuous parameter; in this case, such a system is called a flow, specified through solutions of Schröder's equation. Iterated functions and flows occur naturally in the study of fractals and dynamical systems.

To avoid ambiguity, some mathematicians[citation needed] choose to use $\circ$ to denote the compositional meaning, writing $f^{\circ n}(x)$ for the $n$-th iterate of the function $f(x)$, as in, for example, $f^{\circ 3}(x)$ meaning $f(f(f(x)))$. For the same purpose, $f^{[n]}(x)$ was used by Benjamin Peirce,[15][12] whereas Alfred Pringsheim and Jules Molk suggested ${}^n f(x)$ instead.[16][12][nb 2]

Many mathematicians, particularly in group theory, omit the composition symbol, writing $gf$ for $g \circ f$.[17] During the mid-20th century, some mathematicians adopted postfix notation, writing $xf$ for $f(x)$ and $(xf)g$ for $g(f(x))$.[18] This can be more natural than prefix notation in many cases, such as in linear algebra when $x$ is a row vector and $f$ and $g$ denote matrices and the composition is by matrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence. Mathematicians who use postfix notation may write "$fg$", meaning first apply $f$ and then apply $g$, in keeping with the order the symbols occur in postfix notation, thus making the notation "$fg$" ambiguous. Computer scientists may write "$f ; g$" for this,[19] thereby disambiguating the order of composition.
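A minimal Python sketch of functional powers by iteration (the names are illustrative; $f$ must map a set into itself for $f^n$ to make sense):

```python
def iterate(f, n):
    """The n-th functional power f^n = f o f^(n-1), with f^0 the identity."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

# Iterating the Collatz step from 6 traces the orbit 6, 3, 10, 5, 16, ...
collatz = lambda k: k // 2 if k % 2 == 0 else 3 * k + 1
print([iterate(collatz, n)(6) for n in range(9)])
# [6, 3, 10, 5, 16, 8, 4, 2, 1]
```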
To distinguish the left composition operator from a text semicolon, in the Z notation the ⨾ character is used for left relation composition.[20] Since all functions are binary relations, it is correct to use the [fat] semicolon for function composition as well (see the article on composition of relations for further details on this notation).

Given a function $g$, the composition operator $C_g$ is defined as that operator which maps functions to functions as $C_g f = f \circ g$. Composition operators are studied in the field of operator theory. Function composition appears in one form or another in numerous programming languages.

Partial composition is possible for multivariate functions. The function resulting when some argument $x_i$ of the function $f$ is replaced by the function $g$ is called a composition of $f$ and $g$ in some computer engineering contexts, and is denoted $f|_{x_i = g} = f(x_1, \ldots, x_{i-1}, g(x_1, x_2, \ldots, x_n), x_{i+1}, \ldots, x_n)$. When $g$ is a simple constant $b$, composition degenerates into a (partial) valuation, whose result is also known as restriction or co-factor:[21] $f|_{x_i = b} = f(x_1, \ldots, x_{i-1}, b, x_{i+1}, \ldots, x_n)$.

In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition of primitive recursive function. Given $f$, an $n$-ary function, and $n$ $m$-ary functions $g_1, \ldots, g_n$, the composition of $f$ with $g_1, \ldots, g_n$ is the $m$-ary function $h(x_1, \ldots, x_m) = f(g_1(x_1, \ldots, x_m), \ldots, g_n(x_1, \ldots, x_m))$. This is sometimes called the generalized composite or superposition of $f$ with $g_1, \ldots, g_n$.[22] The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosen projection functions. Here $g_1, \ldots, g_n$ can be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition.[23]

A set of finitary operations on some base set $X$ is called a clone if it contains all projections and is closed under generalized composition. A clone generally contains operations of various arities.[22] The notion of commutation also finds an interesting generalization in the multivariate case; a function $f$ of arity $n$ is said to commute with a function $g$ of arity $m$ if $f$ is a homomorphism preserving $g$, and vice versa, that is:[22] $f(g(a_{11}, \ldots, a_{1m}), \ldots, g(a_{n1}, \ldots, a_{nm})) = g(f(a_{11}, \ldots, a_{n1}), \ldots, f(a_{1m}, \ldots, a_{nm}))$. A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is called medial or entropic.[22]

Composition can be generalized to arbitrary binary relations. If $R \subseteq X \times Y$ and $S \subseteq Y \times Z$ are two binary relations, then their composition amounts to $R \circ S = \{(x,z) \in X \times Z : (\exists y \in Y)((x,y) \in R \land (y,z) \in S)\}$. Considering a function as a special case of a binary relation (namely functional relations), function composition satisfies the definition for relation composition. A small circle $R \circ S$ has been used for the infix notation of composition of relations, as well as functions.
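The set-builder formula for $R \circ S$ transcribes directly into Python (the names are illustrative; note that, per the relational convention just given, $R$ is applied first):

```python
def compose_relations(R, S):
    # R o S = {(x, z) : there is a y with (x, y) in R and (y, z) in S},
    # a direct transcription of the set-builder formula above.
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

R = {(1, 'a'), (2, 'a'), (2, 'b')}
S = {('a', 'X'), ('b', 'Y')}
print(compose_relations(R, S))  # {(1, 'X'), (2, 'X'), (2, 'Y')}
```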
When used to represent composition of functions $(g \circ f)(x) = g(f(x))$, however, the text sequence is reversed to illustrate the different operation sequences accordingly.

The composition is defined in the same way for partial functions, and Cayley's theorem has its analogue, called the Wagner–Preston theorem.[24]

The category of sets with functions as morphisms is the prototypical category. The axioms of a category are in fact inspired by the properties (and also the definition) of function composition.[25] The structures given by composition are axiomatized and generalized in category theory with the concept of morphism as the category-theoretical replacement of functions. The reversed order of composition in the formula $(f \circ g)^{-1} = g^{-1} \circ f^{-1}$ applies for composition of relations using converse relations, and thus in group theory. These structures form dagger categories.

The standard "foundation" for mathematics starts with sets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions.

"... the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions. Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms (like functions) form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics." – Saunders Mac Lane, Mathematics: Form and Function[26]

The composition symbol $\circ$ is encoded as U+2218 ∘ RING OPERATOR (&compfn;, &SmallCircle;); see the Degree symbol article for similar-appearing Unicode characters. In TeX, it is written \circ.
https://en.wikipedia.org/wiki/Composition_of_functions
In algebra, a change of rings is an operation of changing a coefficient ring to another. Given a ring homomorphism $f : R \to S$, there are three ways to change the coefficient ring of a module; namely, for a right $R$-module $M$ and a right $S$-module $N$, one can form the induced module $M \otimes_R S$, the coinduced module $\operatorname{Hom}_R(S, M)$, and the restriction of scalars $N_R$. They are related as adjoint functors: $\operatorname{Hom}_S(M \otimes_R S, N) \cong \operatorname{Hom}_R(M, N_R)$ and $\operatorname{Hom}_R(N_R, M) \cong \operatorname{Hom}_S(N, \operatorname{Hom}_R(S, M))$. This is related to Shapiro's lemma.

Throughout this section, let $R$ and $S$ be two rings (they may or may not be commutative, or contain an identity), and let $f : R \to S$ be a homomorphism.

Restriction of scalars changes $S$-modules into $R$-modules. In algebraic geometry, the term "restriction of scalars" is often used as a synonym for Weil restriction. Suppose that $M$ is a module over $S$. Then it can be regarded as a module over $R$, where the action of $R$ is given via $m \cdot r = m \cdot f(r)$, where $m \cdot f(r)$ denotes the action defined by the $S$-module structure on $M$.[1] Restriction of scalars can be viewed as a functor from $S$-modules to $R$-modules. An $S$-homomorphism $u : M \to N$ automatically becomes an $R$-homomorphism between the restrictions of $M$ and $N$. Indeed, if $m \in M$ and $r \in R$, then $u(m \cdot r) = u(m \cdot f(r)) = u(m) \cdot f(r) = u(m) \cdot r$. As a functor, restriction of scalars is the right adjoint of the extension of scalars functor. If $R$ is the ring of integers, then this is just the forgetful functor from modules to abelian groups.

Extension of scalars changes $R$-modules into $S$-modules. Let $f : R \to S$ be a homomorphism between two rings, and let $M$ be a module over $R$. Consider the tensor product $M^S = M \otimes_R S$, where $S$ is regarded as a left $R$-module via $f$. Since $S$ is also a right module over itself, and the two actions commute, that is $r \cdot (s \cdot s') = (r \cdot s) \cdot s'$ for $r \in R$, $s, s' \in S$ (in a more formal language, $S$ is an $(R,S)$-bimodule), $M^S$ inherits a right action of $S$. It is given by $(m \otimes s) \cdot s' = m \otimes ss'$ for $m \in M$, $s, s' \in S$. This module is said to be obtained from $M$ through extension of scalars. Informally, extension of scalars is "the tensor product of a ring and a module"; more formally, it is a special case of a tensor product of a bimodule and a module – the tensor product of an $R$-module with an $(R,S)$-bimodule is an $S$-module.

One of the simplest examples is complexification, which is extension of scalars from the real numbers to the complex numbers. More generally, given any field extension $K < L$, one can extend scalars from $K$ to $L$. In the language of fields, a module over a field is called a vector space, and thus extension of scalars converts a vector space over $K$ to a vector space over $L$. This can also be done for division algebras, as is done in quaternionification (extension from the reals to the quaternions).
More generally, given a homomorphism from a field or commutative ring $R$ to a ring $S$, the ring $S$ can be thought of as an associative algebra over $R$, and thus when one extends scalars on an $R$-module, the resulting module can be thought of alternatively as an $S$-module, or as an $R$-module with an algebra representation of $S$ (as an $R$-algebra). For example, the result of complexifying a real vector space ($R = \mathbb{R}$, $S = \mathbb{C}$) can be interpreted either as a complex vector space ($S$-module) or as a real vector space with a linear complex structure (algebra representation of $S$ as an $R$-module). This generalization is useful even for the study of fields – notably, many algebraic objects associated to a field are not themselves fields, but are instead rings, such as algebras over a field, as in representation theory.

Just as one can extend scalars on vector spaces, one can also extend scalars on group algebras and also on modules over group algebras, i.e., group representations. Particularly useful is relating how irreducible representations change under extension of scalars – for example, the representation of the cyclic group of order 4, given by rotation of the plane by 90°, is an irreducible 2-dimensional real representation, but on extension of scalars to the complex numbers, it splits into 2 complex representations of dimension 1. This corresponds to the fact that the characteristic polynomial of this operator, $x^2 + 1$, is irreducible of degree 2 over the reals, but factors into 2 factors of degree 1 over the complex numbers – it has no real eigenvalues, but 2 complex eigenvalues.

Extension of scalars can be interpreted as a functor from $R$-modules to $S$-modules. It sends $M$ to $M^S$, as above, and an $R$-homomorphism $u : M \to N$ to the $S$-homomorphism $u^S : M^S \to N^S$ defined by $u^S = u \otimes_R \operatorname{id}_S$.

Consider an $R$-module $M$ and an $S$-module $N$. Given a homomorphism $u \in \operatorname{Hom}_R(M, N_R)$, define $Fu : M^S \to N$ to be the composition $M \otimes_R S \to N_R \otimes_R S \to N$, where the last map is $n \otimes s \mapsto n \cdot s$. This $Fu$ is an $S$-homomorphism, and hence $F : \operatorname{Hom}_R(M, N_R) \to \operatorname{Hom}_S(M^S, N)$ is well-defined and is a homomorphism (of abelian groups).

In case both $R$ and $S$ have an identity, there is an inverse homomorphism $G : \operatorname{Hom}_S(M^S, N) \to \operatorname{Hom}_R(M, N_R)$, which is defined as follows. Let $v \in \operatorname{Hom}_S(M^S, N)$. Then $Gv$ is the composition $M \to M \otimes_R R \to M \otimes_R S \to N$, where the first map is the canonical isomorphism $m \mapsto m \otimes 1$.

This construction establishes a one-to-one correspondence between the sets $\operatorname{Hom}_S(M^S, N)$ and $\operatorname{Hom}_R(M, N_R)$. Actually, this correspondence depends only on the homomorphism $f$, and so is functorial. In the language of category theory, the extension of scalars functor is left adjoint to the restriction of scalars functor.
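Stated compactly, the correspondence built from $F$ and $G$ above is the hom-set bijection of this adjunction; a LaTeX restatement (assuming, as above, that both rings have an identity):

```latex
% Extension of scalars (M a right R-module) is left adjoint to
% restriction of scalars (N a right S-module):
\[
  \operatorname{Hom}_S\bigl(M \otimes_R S,\; N\bigr)
  \;\cong\;
  \operatorname{Hom}_R\bigl(M,\; N_R\bigr),
\]
% the bijection being given by F and G above, with unit
% m \mapsto m \otimes 1 and counit n \otimes s \mapsto n \cdot s.
```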
https://en.wikipedia.org/wiki/Change_of_rings
Verbosity, or verboseness, is speech or writing that uses more words than necessary.[1] The opposite of verbosity is succinctness.[dubious–discuss]

Some teachers, including the author of The Elements of Style, warn against verbosity. Similarly, Mark Twain and Ernest Hemingway, among others, famously avoided it. Synonyms of "verbosity" include wordiness, verbiage, loquacity, garrulousness, logorrhea, prolixity, grandiloquence, expatiation, sesquipedalianism, and overwriting.

The word verbosity comes from Latin verbosus, "wordy". There are many other English words that also refer to the use of excessive words. Prolixity comes from Latin prolixus, "extended". Prolixity can also be used to refer to the length of a monologue or speech, especially a formal address such as a lawyer's oral argument.[2] Grandiloquence is complex speech or writing judged to be pompous or bombastic diction. It is a combination of the Latin words grandis ("great") and loqui ("to speak").[3] Logorrhea or logorrhoea (from Greek λογόρροια, logorrhoia, "word-flux") is an excessive flow of words. It is often used pejoratively to describe prose that is hard to understand because it is needlessly complicated or uses excessive jargon.

Sesquipedalianism is a linguistic style that involves the use of long words. The Roman poet Horace coined the phrase sesquipedalia verba in his Ars Poetica.[4] It is a compound of sesqui, "one and a half", and pes, "foot", a reference to meter (not words a foot long). The earliest recorded usage in English of sesquipedalian is in 1656, and of sesquipedalianism, 1863.[5] Garrulous comes from Latin garrulus, "talkative", a form of the verb garrīre, "to chatter". The adjective may describe a person who is excessively talkative, especially about trivial matters, or a speech that is excessively wordy or diffuse.[6] The noun expatiation and the verb expatiate come from Latin expatiātus, past participle of spatiārī, "to wander". They refer to enlarging a discourse, text, or description.[7]

Overwriting is a simple compound of the English prefix "over-" ("excessive") and "writing", and as the name suggests, means using extra words that add little value. One rhetoric professor described it as "a wordy writing style characterized by excessive detail, needless repetition, overwrought figures of speech, and/or convoluted sentence structures."[8] Another writer cited "meaningless intensifiers", "adjectival & adverbial verbosity", "long conjunctions and subordinators", and "repetition and needless information" as common traps that the non-native writers of English the author studied fell into.[9]

An essay intentionally filled with "logorrhea" that mixed physics concepts with sociological concepts in a nonsensical way was published by physics professor Alan Sokal in the journal Social Text as a scholarly publishing sting. The episode became known as the Sokal Affair.[10] The term is sometimes also applied to unnecessarily wordy speech in general; this is more usually referred to as prolixity. Some people defend the use of additional words as idiomatic, a matter of artistic preference, or helpful in explaining complex ideas or messages.[11]

Warren G. Harding, the 29th president of the United States, was notably verbose even for his era.[12] A Democratic leader, William Gibbs McAdoo, described Harding's speeches as "an army of pompous phrases moving across the landscape in search of an idea."[13] The Michigan Law Review published a 229-page parody of postmodern writing titled "Pomobabble: Postmodern Newspeak and Constitutional 'Meaning' for the Uninitiated".
The article consists of complicated and context-sensitive self-referencing narratives. The text is peppered with parenthetical citations and asides, which is supposed to mock the cluttered style of postmodern writing.[14]

In The King's English, Fowler gives a passage from The Times as an example of verbosity: "The Emperor received yesterday and to-day General Baron von Beck.... It may therefore be assumed with some confidence that the terms of a feasible solution are maturing themselves in His Majesty's mind and may form the basis of further negotiations with Hungarian party leaders when the Monarch goes again to Budapest."[15] Fowler objected to this passage because The Emperor, His Majesty, and the Monarch all refer to the same person: "the effect", he pointed out in Modern English Usage, "is to set readers wondering what the significance of the change is, only to conclude that there is none." Fowler called this tendency "elegant variation" in his later style guides.

The ancient Greek philosopher Callimachus is quoted as saying "Big book, big evil" (μέγα βιβλίον μέγα κακόν, mega biblion, mega kakon),[16] rejecting the epic style of poetry in favor of his own.[clarification needed]

Many style guides advise against excessive verbosity. While it may be rhetorically useful,[1] verbose parts in communications are sometimes referred to as "fluff" or "fuzz".[17] For instance, William Strunk, an American professor of English, advised in 1918 to "Use the active voice: Put statements in positive form; Omit needless words."[18] In A Dictionary of Modern English Usage (1926), Henry Watson Fowler says, "It is the second-rate writers, those intent rather on expressing themselves prettily than on conveying their meaning clearly, & still more those whose notions of style are based on a few misleading rules of thumb, that are chiefly open to the allurements of elegant variation," Fowler's term for the over-use of synonyms.[19] Contrary to Fowler's criticism of several words being used to name the same thing in English prose, in many other languages, including French, it might be thought to be good writing style.[20][21]

An inquiry into the 2005 London bombings found that verbosity can be dangerous if used by emergency services. It can lead to delay that could cost lives.[22] A 2005 study from the psychology department of Princeton University found that using long and obscure words does not make people seem more intelligent. Dr. Daniel M. Oppenheimer did research which showed that students rated short, concise texts as being written by the most intelligent authors, while those who used long words or complex font types were seen as less intelligent.[23]

In contrast to advice against verbosity, some editors and style experts suggest that maxims such as "omit needless words"[18] are unhelpful. It may be unclear which words are unnecessary, or where advice against prolixity may harm writing. In some cases a degree of repetition and redundancy, or use of figurative language and long or complex sentences, can have positive effects on style or communicative effect.[11] In nonfiction writing, experts[who?] suggest that both concision and clarity are important: elements that do not improve communication should be removed without rendering a style that is "too terse" to be clear, as similarly advised by law professor Neil Andrews on the writing and reasoning of legal decisions.[24] In such cases, attention should be paid to a conclusion's underlying argument so that the language used is both simple and precise.
A number of writers advise against excessive verbosity in fiction. For example, Mark Twain (1835–1910) wrote "generally, the fewer the words that fully communicate or evoke the intended ideas and feelings, the more effective the communication."[25] Similarly, Ernest Hemingway (1899–1961), the 1954 Nobel laureate for literature, defended his concise style against a charge by William Faulkner that he "had never been known to use a word that might send the reader to the dictionary."[26] Hemingway responded by saying, "Poor Faulkner. Does he really think big emotions come from big words? He thinks I don't know the ten-dollar words. I know them all right. But there are older and simpler and better words, and those are the ones I use."[27]

George Orwell mocked logorrhea in "Politics and the English Language" (1946) by taking verse 9:11 from the book of Ecclesiastes in the King James Version of the Bible:

"I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all."

and rewriting it as

"Objective consideration of contemporary phenomena compels the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account."

In contrast, though, some authors warn against pursuing concise writing for its own sake. Literary critic Sven Birkerts, for instance, notes that authors striving to reduce verbosity might produce prose that is unclear in its message or dry in style. "There's no vivid world where every character speaks in one-line, three-word sentences," he notes.[28] There is a danger that the avoidance of prolixity can produce writing that feels unnatural or sterile. Physicist Richard Feynman has spoken out against verbosity in scientific writing.[29]

Wordiness is common in informal or playful conversation, lyrics, and comedy. People with Asperger syndrome and autism often present with verbose speech.[30]
https://en.wikipedia.org/wiki/Verbosity
Ribaldry or blue comedy is humorous entertainment that ranges from bordering on indelicacy to indecency.[1] Blue comedy is also referred to as "bawdiness" or being "bawdy". Like any humour, ribaldry may be read as conventional or subversive. Ribaldry typically depends on a shared background of sexual conventions and values, and its comedy generally depends on seeing those conventions broken. The ritual taboo-breaking that is a usual counterpart of ribaldry underlies its controversial nature and explains why ribaldry is sometimes a subject of censorship. Ribaldry, whose usual aim is not "merely" to be sexually stimulating, often addresses larger concerns than mere sexual appetite. However, being presented in the form of comedy, these larger concerns may be overlooked by censors. Sex is presented in ribald material more for the purpose of poking fun at the foibles and weaknesses that manifest themselves in human sexuality, rather than to present sexual stimulation either overtly or artistically. Ribaldry may also use sex as a metaphor to illustrate some non-sexual concern, in which case it borders on satire. Ribaldry differs from black comedy in that the latter deals with topics that would normally be considered painful or frightening, whereas ribaldry deals with topics that would only be considered offensive. Ribaldry is present to some degree in every culture and has likely been around for all of human history. Works like Lysistrata by Aristophanes, Menaechmi by Plautus, Cena Trimalchionis by Petronius, and The Golden Ass of Apuleius are ribald classics from ancient Greece and Rome. Geoffrey Chaucer's "The Miller's Tale" from his Canterbury Tales and The Crabfish, one of the oldest English traditional ballads, are classic examples. The Frenchman François Rabelais showed himself to be a master of ribaldry (technically called grotesque body) in his Gargantua and other works. The Life and Opinions of Tristram Shandy, Gentleman by Laurence Sterne and The Lady's Dressing Room by Jonathan Swift are also in this genre, as is Mark Twain's long-suppressed 1601. Another example of ribaldry is "De Brevitate Vitae", a song which in many European-influenced universities is both a student beer-drinking song and an anthem sung by official university choirs at public graduation ceremonies. The private and public versions of the song contain vastly different words. More recent works like Candy, Barbarella, L'Infermiera, the comedic works of Russ Meyer, Little Annie Fanny and John Barth's The Sot-Weed Factor are probably better classified as ribaldry than as either pornography or erotica.[citation needed] A bawdy song is a humorous song that emphasises sexual themes and is often rich with innuendo. Historically these songs tend to be confined to groups of young males, either as students or in an environment where alcohol is flowing freely. An early collection was Wit and Mirth, or Pills to Purge Melancholy, edited by Thomas D'Urfey and published between 1698 and 1720. Selected songs from Wit and Mirth have been recorded by the City Waites and other singers. Sailors' songs tend to be quite frank about the exploitative nature of the relationship between men and women. There are many examples of folk songs in which a man encounters a woman in the countryside; this is followed by a short conversation and then sexual intercourse, e.g. "The Game of All Fours". Neither side demonstrates any shame or regret: if the woman becomes pregnant, the man will not be there anyway. Rugby songs are often bawdy.
Examples of bawdy folk songs are "Seventeen Come Sunday" and "The Ballad of Eskimo Nell". Robert Burns compiled The Merry Muses of Caledonia (the title is not Burns's), a collection of bawdy lyrics that were popular in the music halls of Scotland as late as the 20th century. In modern times the Hash House Harriers have taken on the role of tradition-bearers for this kind of song. The Unexpurgated Folk Songs of Men (Arhoolie 4006) is a gramophone record containing a collection of American bawdy songs recorded in 1959.[2] Blue comedy is comedy that is off-colour, risqué, indecent, or profane, largely about sex. It often contains profanity or sexual imagery that may shock and offend some audience members.[citation needed] "Working blue" refers to the act of using swear words and discussing things that people would not discuss in "polite society". A "blue comedian" or "blue comic" is a comedian who usually performs risqué routines layered with curse words. There is a common belief that comedian Max Miller (1894–1963) coined the phrase, after his stage act which involved telling jokes from either a white book or a blue book, chosen by audience preference (the blue book contained ribald jokes). This is not so, as the Oxford English Dictionary contains earlier references to the use of blue to mean ribald: 1890 Sporting Times 25 Jan. 1/1: "Shifter wondered whether the damsel knew any novel blue stories."; and 1900 Bulletin (Sydney) 20 Oct. 12/4: "Let someone propose to celebrate Chaucer by publicly reading some of his bluest productions unexpurgated. The reader would probably be locked up." Private events at show business clubs such as the Masquers often showed this blue side of otherwise clean-cut comedians; a recording survives of one Masquers roast from the 1950s with Jack Benny, George Jessel, George Burns, and Art Linkletter all using highly risqué material and obscenities. Many comedians who are normally family-friendly might choose to work blue when off-camera or in an adult-oriented environment; Bob Saget exemplified this dichotomy. Bill Cosby's 1969 record album 8:15 12:15 records both his family-friendly evening standup comedy show and his blue midnight show, which included a joke about impregnating his wife "right through the old midnight trampoline" (her diaphragm) and other sexual references.[3] Some comedians build their careers on blue comedy. Among the best known of these are Redd Foxx, LaWanda Page, and the team of Leroy and Skillet, all of whom later performed on the family-friendly television show Sanford and Son. Page, Leroy, and Skillet specialised in a particular African American form of blue spoken-word recitation called signifying or toasting. Dave Attell has also been described by his peers as one of the greatest modern-day blue comics.[4] On talk radio in the United States and elsewhere, blue comedy is a staple of the shock jock's repertoire. The use of blue comedy over American radio airwaves is severely restricted due to decency regulations; the Federal Communications Commission can levy fines against radio stations that air obscene content. As a part of English literature, blue literature dates back to at least Middle English, while bawdy humor is a central element in the works of such writers as Shakespeare and Chaucer.
Examples of blue literature are also present in various cultures, across different social classes and genders.[5] Until the 1940s, writers of English-language blue literature were almost exclusively men; since then, it has become possible for women to build a commercial career on blue literature.[5]: 170 While no extensive cross-cultural study has been made in an attempt to prove the universality of blue literature, oral tradition around the world suggests that this may be the case.[5]: 169
https://en.wikipedia.org/wiki/Ribaldry
Arithmetic topology is an area of mathematics that is a combination of algebraic number theory and topology. It establishes an analogy between number fields and closed, orientable 3-manifolds. The following are some of the analogies used by mathematicians between number fields and 3-manifolds:[1] Expanding on the last two examples, there is an analogy between knots and prime numbers in which one considers "links" between primes. The triple of primes (13, 61, 937) are "linked" modulo 2 (the Rédei symbol is −1) but are "pairwise unlinked" modulo 2 (the Legendre symbols are all 1). Therefore these primes have been called a "proper Borromean triple modulo 2"[2] or "mod 2 Borromean primes".[3] In the 1960s topological interpretations of class field theory were given by John Tate[4] based on Galois cohomology, and also by Michael Artin and Jean-Louis Verdier[5] based on étale cohomology. Then David Mumford (and independently Yuri Manin) came up with an analogy between prime ideals and knots,[6] which was further explored by Barry Mazur.[7][8] In the 1990s Reznikov[9] and Kapranov[10] began studying these analogies, coining the term arithmetic topology for this area of study.
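The "pairwise unlinked" claim can be checked directly with Euler's criterion: for an odd prime p and an integer a not divisible by p, a^((p−1)/2) ≡ ±1 (mod p), with +1 exactly when a is a quadratic residue modulo p. The following minimal Python sketch (the helper name legendre is ours, not standard library code) verifies that every Legendre symbol among the three primes is +1:

import itertools

def legendre(a: int, p: int) -> int:
    # Legendre symbol (a/p) for an odd prime p, via Euler's criterion
    r = pow(a, (p - 1) // 2, p)
    return r if r <= 1 else -1  # p - 1 represents -1 modulo p

primes = (13, 61, 937)
for p, q in itertools.permutations(primes, 2):
    print(f"({p}/{q}) = {legendre(p, q)}")  # all +1, as the article states

The mod 2 "linking" of the triple as a whole, by contrast, is detected by the Rédei symbol, whose computation is more involved and is not sketched here.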
https://en.wikipedia.org/wiki/Arithmetic_topology
A mobile driving licence (also mobile driver's license or mDL) is a mobile app that replaces a physical driver's license. An International Organization for Standardization (ISO) standard for the mobile driving licence (ISO/IEC 18013-5) was approved on August 18, 2021 and published on 30 September 2021.[1] In November 2020, Denmark publicly released a digital/mobile driving licence using a proprietary app implementation based on a QR code, which likewise does not conform to the ISO/IEC 18013-5 standard. Similar to Iceland's implementation, it is fully equivalent to physical IDs, but is only valid in Denmark.[2] Iceland was the second country in Europe to introduce a digital/mobile driver's licence, in July 2020. Icelandic driving licence holders can request a digital version of their licence online by using their electronic ID (Icelandic: rafræn skilríki); it is issued as a .pkpass file loaded into the Wallet app on iPhone or a third-party app on Android. Digital driving licences display the same information as a physical licence, along with a barcode (renewed regularly by the server, acting as verification). Commercial establishments (e.g. for proof of age) can use the island.is app to verify barcodes. The licences are equally valid as official ID, even for voting, however only within Iceland. The implementation does not conform to the ISO/IEC 18013-5 standard.[3] As of August 2022, 60% of driver's licences have been issued in digital/mobile form.[4] The first instance of an electronic driver's license was deployed in Mexico as early as 2007, using the Gemalto smart-card platform. In 2016, the U.S. National Institute of Standards and Technology (NIST) partnered with Gemalto to pilot the "digital driver's license" in Washington D.C., Idaho, Colorado, Maryland and Wyoming.[5] On 1 October 2019, Norway became the first country in Europe to introduce a digital driver's license. A holder of a Norwegian driver's license can request a digital version of their physical driver's license after downloading the app Førerkort from their preferred app marketplace. The applicant must verify their identity with BankID upon logging in on the app for the first time, which will then retrieve information from the national database for driving licenses. After this procedure, the digital driver's license will display exactly the same information as the physical driver's license. The app only allows one phone with a digital driver's license per user. If the holder has recently passed their driving exam or upgraded to a new category, and is hence waiting to receive their physical driving license, the app will display a temporary driving permit. Once the physical driver's license has been produced, the app is able to display the holder's digital driver's license, regardless of whether the holder has yet received the physical license by mail. If a holder's driver's license has been revoked or suspended, this information will be displayed in the app for as long as the holder has not gotten their driver's license back. Upon a traffic stop by the police or coming in contact with the Public Roads Administration, the digital driver's license is valid as a proof of identification. Although the driver's license comes with a barcode, which can be scanned by government authorities, commercial establishments or even private persons to verify the details, it is not considered a proof of identification in most places.
The digital driver's license is not valid outside of Norway.[6] The first mDL that claims compliance with ISO/IEC 18013-5 is Louisiana's, developed in part by Envoc, a software firm in Baton Rouge, whose president claimed that most drivers under 40 won't go back home if they forget their physical laminated license, "but if they forget their phone, they always turn around."[7] Ontario in 2020, in response to the COVID-19 pandemic, announced a "Digital Identity Program," including a mobile driver's license.[8] Colorado was the first state to deploy a production version of a digital license, primarily based on QR codes stored in a digital wallet, which it claims is accepted by police officers throughout the state.[9] After going through the standard process at the state Department of Motor Vehicles, volunteers installed the "DigiDL" app on their phones and then downloaded the license. Volunteers tested the digital driver's license in stores, the Colorado Lottery claim center, and an art fair.[10] Smartphone operating systems are adapting to the new standard. For example, Android's Jetpack suite comes with specific support for ISO 18013-5 from API version 24.[11][12] In March 2022, Apple introduced support for mobile IDs conforming to ISO 18013-5 in Apple Wallet, through a proprietary enrollment process implemented in partnership with governments.[13] Arizona and Georgia became the first two states to announce that IDs were supported in Apple Wallet, starting with iOS 15.4. On March 23, 2022, Arizona officially launched its program, which includes the first TSA checkpoint to support Apple's mobile driver's license, at Phoenix Sky Harbor International Airport.[14][15] Safety organization Fime has a product to help test an app's conformance with the ISO/IEC 18013-5 standard.[16] The Kantara Initiative created a "Privacy & Identity Protection in Mobile Driving License Ecosystems Discussion Group" to issue a report on the need for conformance specifications around identity and privacy.[17]
https://en.wikipedia.org/wiki/Mobile_driver%27s_license
In a multitasking computer system, processes may occupy a variety of states. These distinct states may not be recognized as such by the operating system kernel; however, they are a useful abstraction for the understanding of processes. The following typical process states are possible on computer systems of all kinds. In most of these states, processes are "stored" in main memory. When a process is first created, it occupies the "created" or "new" state. In this state, the process awaits admission to the "ready" state. Admission will be approved or delayed by a long-term, or admission, scheduler. Typically in most desktop computer systems, this admission will be approved automatically. However, for real-time operating systems this admission may be delayed. In a real-time system, admitting too many processes to the "ready" state may lead to oversaturation and overcontention of the system's resources, leading to an inability to meet process deadlines. A "ready" or "waiting" process has been loaded into main memory and is awaiting execution on a CPU (to be context switched onto the CPU by the dispatcher, or short-term scheduler). There may be many "ready" processes at any one point of the system's execution; for example, in a one-processor system, only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution. A ready queue or run queue is used in computer scheduling. Modern computers are capable of running many different programs or processes at the same time, but a single CPU can execute only one process at a time. Processes that are ready for the CPU are kept in a queue of "ready" processes. Other processes that are waiting for an event to occur, such as loading information from a hard drive or waiting on an internet connection, are not in the ready queue. A process moves into the running state when it is chosen for execution. The process's instructions are executed by one of the CPUs (or cores) of the system. There is at most one running process per CPU or core. A process can run in either of two modes, namely kernel mode or user mode.[1][2] A process transitions to a blocked state when it cannot carry on without an external change in state or event occurring. For example, a process may block on a call to an I/O device such as a printer, if the printer is not available. Processes also commonly block when they require user input, or require access to a critical section which must be executed atomically. Such critical sections are protected using a synchronization object such as a semaphore or mutex. A process may be terminated, either from the "running" state by completing its execution or by explicitly being killed. In either of these cases, the process moves to the "terminated" state. The underlying program is no longer executing, but the process remains in the process table as a zombie process until its parent process calls the wait system call to read its exit status, at which point the process is removed from the process table, finally ending the process's lifetime. If the parent fails to call wait, this continues to consume the process table entry (concretely the process identifier or PID), and causes a resource leak. Two additional states are available for processes in systems that support virtual memory. In both of these states, processes are "stored" on secondary memory (typically a hard disk). Swapped out and waiting (also called suspended and waiting):
In systems that support virtual memory, a process may be swapped out, that is, removed from main memory and placed in external storage by the scheduler. From here the process may be swapped back into the waiting state. Swapped out and blocked (also called suspended and blocked): processes that are blocked may also be swapped out. In this event the process is both swapped out and blocked, and may be swapped back in again under the same circumstances as a swapped-out and waiting process (although in this case, the process will move to the blocked state, and may still be waiting for a resource to become available).
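The terminated/zombie behaviour described above can be observed directly on a POSIX system. The following is a minimal Python sketch (POSIX-only; the one-second delay is arbitrary and exists only to give time to observe the zombie, e.g. with ps):

import os
import time

pid = os.fork()
if pid == 0:
    os._exit(0)  # child: terminate immediately
else:
    time.sleep(1)  # child is now a zombie: terminated, but its
                   # process-table entry (and PID) is still held
    reaped, status = os.waitpid(pid, 0)  # parent reaps the child,
    # removing it from the process table and ending its lifetime
    print(f"reaped PID {reaped}, exit status {os.WEXITSTATUS(status)}")

If the parent never calls waitpid (or wait), the child's process-table entry persists until the parent itself exits and the child is re-parented and reaped by init, which is the resource leak mentioned above.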
https://en.wikipedia.org/wiki/Process_state
In quantum mechanics, counterfactual definiteness (CFD) is the ability to speak "meaningfully" of the definiteness of the results of measurements that have not been performed (i.e., the ability to assume the existence of objects, and properties of objects, even when they have not been measured).[1][2] The term "counterfactual definiteness" is used in discussions of physics calculations, especially those related to the phenomenon called quantum entanglement and those related to the Bell inequalities.[3] In such discussions "meaningfully" means the ability to treat these unmeasured results on an equal footing with measured results in statistical calculations. It is this (sometimes assumed but unstated) aspect of counterfactual definiteness that is of direct relevance to physics and mathematical models of physical systems, not philosophical concerns regarding the meaning of unmeasured results. The subject of counterfactual definiteness receives attention in the study of quantum mechanics because it is argued that, when challenged by the findings of quantum mechanics, classical physics must give up its claim to one of three assumptions: locality (no "spooky action at a distance"), no-conspiracy (also called "asymmetry of time"),[4][5] or counterfactual definiteness (or "non-contextuality"). If physics gives up the claim to locality, it brings into question our ordinary ideas about causality and suggests that events may transpire at faster-than-light speeds.[6] If physics gives up the "no conspiracy" condition, it becomes possible for "nature to force experimenters to measure what she wants, and when she wants, hiding whatever she does not like physicists to see."[7] If physics rejects the possibility that, in all cases, there can be "counterfactual definiteness," then it rejects some features that humans are very much accustomed to regarding as enduring features of the universe. "The elements of reality the EPR paper is talking about are nothing but what the property interpretation calls properties existing independently of the measurements. In each run of the experiment, there exist some elements of reality, the system has particular properties #ai which unambiguously determine the measurement outcome ai, given that the corresponding measurement a is performed."[8] As a noun, "counterfactual" may refer to an inferred effect or consequence of an unobserved macroscopic event. An example is counterfactual quantum computation.[9] An interpretation of quantum mechanics can be said to involve the use of counterfactual definiteness if it includes in the mathematical modelling outcomes of measurements that are counterfactual; in particular, those that are excluded according to quantum mechanics by the fact that quantum mechanics does not contain a description of simultaneous measurement of conjugate pairs of properties.[10] For example, the uncertainty principle states that one cannot simultaneously know, with arbitrarily high precision, both the position and momentum of a particle.[11] Suppose one measures the position of a particle. This act destroys any information about its momentum. Is it then possible to talk about the outcome that one would have obtained if one had measured its momentum instead of its position? In terms of mathematical formalism, is such a counterfactual momentum measurement to be included, together with the factual position measurement, in the statistical population of possible outcomes describing the particle?
If the position were found to be r0, then in an interpretation that permits counterfactual definiteness the statistical population describing position and momentum would contain all pairs (r0, p) for every possible momentum value p, whereas an interpretation that rejects counterfactual values completely would only have the pair (r0, ⊥), where ⊥ (called "up tack" or "eet") denotes an undefined value.[12] To use a macroscopic analogy, an interpretation which rejects counterfactual definiteness views measuring the position as akin to asking where in a room a person is located, while measuring the momentum is akin to asking whether the person's lap is empty or has something on it. If the person's position has changed by making him or her stand rather than sit, then that person has no lap, and neither the statement "the person's lap is empty" nor "there is something on the person's lap" is true. Any statistical calculation based on values where the person is standing at some place in the room and simultaneously has a lap as if sitting would be meaningless.[13] The dependability of counterfactually definite values is a basic assumption which, together with "time asymmetry" and "local causality", led to the Bell inequalities. Bell showed that the results of experiments intended to test the idea of hidden variables would be predicted to fall within certain limits based on all three of these assumptions, which are considered principles fundamental to classical physics, but that the results found within those limits would be inconsistent with the predictions of quantum mechanical theory. Experiments have shown that quantum mechanical results predictably exceed those classical limits. Calculating expectations based on Bell's work implies that for quantum physics the assumption of "local realism" must be abandoned.[14] Bell's theorem proves that every type of quantum theory must necessarily violate locality or reject the possibility of extending the mathematical description with outcomes of measurements which were not actually made.[15][16] Counterfactual definiteness is present in any interpretation of quantum mechanics that allows quantum mechanical measurement outcomes to be seen as deterministic functions of a system's state or of the state of the combined system and measurement apparatus. Cramer's (1986) transactional interpretation does not make that interpretation.[16] The traditional Copenhagen interpretation of quantum mechanics rejects counterfactual definiteness, as it does not ascribe any value at all to a measurement that was not performed. When measurements are performed, values result, but these are not considered to be revelations of pre-existing values. In the words of Asher Peres, "unperformed experiments have no results".[17] The many-worlds interpretation rejects counterfactual definiteness in a different sense; instead of not assigning a value to measurements that were not performed, it ascribes many values. When measurements are performed, each of these values gets realized as the resulting value in a different world of a branching reality. As Prof.
Guy Blaylock of the University of Massachusetts Amherst puts it, "The many-worlds interpretation is not only counterfactually indefinite, it is factually indefinite as well."[18] The consistent histories approach rejects counterfactual definiteness in yet another manner; it ascribes single but hidden values to unperformed measurements and disallows combining values of incompatible measurements (counterfactual or factual), as such combinations do not produce results that would match any obtained purely from performed compatible measurements. When a measurement is performed, the hidden value is nevertheless realized as the resulting value. Robert Griffiths likens these to "slips of paper" placed in "opaque envelopes".[19] Thus consistent histories does not reject counterfactual results per se; it rejects them only when they are being combined with incompatible results.[20] Whereas in the Copenhagen interpretation or the many-worlds interpretation the algebraic operations used to derive Bell's inequality cannot proceed, due to having no value or many values where a single value is required, in consistent histories they can be performed, but the resulting correlation coefficients cannot be equated with those that would be obtained by actual measurements (which are instead given by the rules of quantum mechanical formalism). The derivation combines incompatible results, only some of which could be factual for a given experiment, while the rest are counterfactual.
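The role counterfactual definiteness plays in deriving a Bell inequality can be made concrete with the CHSH form of the argument. The following is a minimal Python sketch of our own (not from the source): if every run is assumed to have definite outcomes ±1 for both settings on each side, even though only one per side is measured, enumeration shows |S| ≤ 2, whereas the quantum correlation E(a,b) = cos(a − b) for a maximally entangled pair (the exact form is convention-dependent) reaches 2√2:

import itertools
import math

# CHSH combination: S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
# Counterfactual definiteness: each run carries definite outcomes (+1/-1)
# for BOTH settings per side, even though only one is actually measured.
bound = max(
    abs(A * B + A * B2 + A2 * B - A2 * B2)
    for A, A2, B, B2 in itertools.product((1, -1), repeat=4)
)
print("bound with counterfactually definite local values:", bound)  # 2

def E(a, b):
    # quantum correlation for a maximally entangled pair (one convention)
    return math.cos(a - b)

a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print("quantum-mechanical prediction:", S)  # 2*sqrt(2), about 2.83

The first computation is exactly the step that fails in interpretations without counterfactual definiteness: the four outcome values A, A2, B, B2 are never all defined in a single run, so the algebra combining them cannot proceed.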
https://en.wikipedia.org/wiki/Counterfactual_definiteness
The following outline is provided as an overview of and topical guide to artificial intelligence. Artificial intelligence (AI) is intelligence exhibited by machines or software. It is also the name of the scientific field which studies how to create computers and computer software that are capable of intelligent behavior. Topics covered by the outline include symbolic representations of knowledge, unsolved problems in knowledge representation, intelligent personal assistants, artificial intelligence in fiction (including examples of artificially intelligent entities depicted in science fiction), the list of artificial intelligence projects, and competitions and prizes in artificial intelligence.
https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence
Slacktivism (a blend of slacker and activism) is the practice of supporting a political or social cause by means such as social media or online petitions, characterized as involving very little effort or commitment.[1] Additional forms of slacktivism include engaging in online activities such as liking, sharing or tweeting about a cause on social media, signing an Internet petition,[2] copying and pasting a status or message in support of the cause, sharing specific hashtags associated with the cause, or altering one's profile photo or avatar on social network services to indicate solidarity. Critics of slacktivism suggest that it fails to make a meaningful contribution to an overall cause because a low-stakes show of support, whether online or offline, is superficial and ineffective, draws off energy that might be used more constructively, serves as a substitute for more substantive forms of activism rather than supplementing them, and might, in fact, be counter-productive.[3] As groups increasingly use social media to facilitate civic engagement and collective action,[4][5] proponents of slacktivism have pointed out that it can lead to engagement and help generate support for lesser-known causes.[6][7][8] The term was coined by Dwight Ozard and Fred Clark in 1995 at the Cornerstone Festival. It was meant to shorten the phrase slacker activism, which refers to bottom-up activities by young people to affect society on a small, personal scale (such as planting a tree, as opposed to participating in a protest). The term originally had a positive connotation.[9] Monty Phan, staff writer for Newsday, was an early user of the term in his 2001 article titled "On the Net, 'Slacktivism'/Do-Gooders Flood In-Boxes."[10] An early example of the term's use appeared in Barnaby Feder's article in The New York Times called "They Weren't Careful What They Hoped For." Feder quoted anti-scam crusader Barbara Mikkelson of Snopes, who described activities such as those listed above: "It's all fed by slacktivism ... the desire people have to do something good without getting out of their chair."[11] Another example of the term appeared in Evgeny Morozov's book, Net Delusion: The Dark Side of Internet Freedom (2011). In it, Morozov relates slacktivism to the Colding-Jørgensen experiment. In 2009, a Danish psychologist named Anders Colding-Jørgensen created a fictitious Facebook group as part of his research. On the page, he posted an announcement suggesting that the Copenhagen city authorities would be demolishing the historical Stork Fountain. Within the first day, 125 Facebook members joined Colding-Jørgensen's group. The number of fans began to grow at a staggering rate, eventually reaching 27,500.[12] Morozov argues that the Colding-Jørgensen experiment reveals a key component of slacktivism: "When communication costs are low, groups can easily spring into action."[13] Clay Shirky similarly characterized slacktivism as "ridiculously easy group forming".[13] Various people and groups express doubts about the value and effectiveness of slacktivism.
Particularly, some skeptics argue that it entails an underlying assumption that all problems can be seamlessly fixed using social media, and while this may be true for local issues, slacktivism could prove ineffective for solving global predicaments.[14] A 2009 NPR piece by Morozov asked whether "the publicity gains gained through this greater reliance on new media [are] worth the organizational losses that traditional activist entities are likely to suffer, as ordinary people would begin to turn away from conventional (and proven) forms of activism."[15] Criticism of slacktivism often involves the idea that internet activities are ineffective, and/or that they prevent or lessen political participation in real life. However, as many studies on slacktivism relate only to a specific case or campaign, it is difficult to find an exact percentage of slacktivist actions that reach a stated goal. Furthermore, many studies also focus on such activism in democratic or open contexts, whereas the act of publicly liking, RSVPing or adopting an avatar or slogan as one's profile picture can be a defiant act in authoritarian or repressive countries. Micah White has argued that although slacktivism is typically the easiest route to participation in movements and changes, the novelty of online activism wears off as people begin to realize that their participation created virtually no effect, leading people to lose hope in all forms of activism.[16] Malcolm Gladwell, in his October 2010 New Yorker article, lambasted those who compare social media "revolutions" with actual activism that challenges the status quo ante.[17] He argued that today's social media campaigns cannot compare with activism that takes place on the ground, using the Greensboro sit-ins as an example of what real, high-risk activism looks like.[17] A 2011 study looking at college students found only a small positive correlation between those who engage in politics online on Facebook and those who engage off of it. Those who did engage did so only by posting comments and other low-effort forms of political participation, helping to confirm the slacktivism theoretical model.[18] The New Statesman analyzed the outcomes of the ten most-shared petitions and listed all of them as unsuccessful.[19] Brian Dunning, in his 2014 podcast Slacktivism: Raising Awareness, argues that the internet activities slacktivism is associated with are a waste of time at best, and at worst are ways to "steal millions of dollars from armchair activists who are persuaded to donate actual money to what they're told is some useful cause."[20] He says that most slacktivism campaigns are "based on bad information, bad science, and are hoaxes as often as not".[20] He uses the Kony 2012 campaign as an example of how slacktivism can be used to exploit others. The movie asked viewers to send money to the filmmakers rather than African law enforcement. Four months after the movie was released, Invisible Children, the charity that created the film, reported $31.9 million of gross receipts. In the end the money was not used to stop Kony, but rather to make another movie about stopping Kony. Dunning goes as far as to say that raising awareness of Kony was not even useful, as law enforcement groups had been after him for years. Dunning does state, however, that today slacktivism is generally more benign. He cites Change.org as an example. The site is full of hundreds of thousands of petitions.
A person signing one of these online petitions may feel good about himself, but such petitions are generally not binding, nor do they lead to any major change. Dunning suggests that before donating to, or even "liking", a cause, one should research the issue and the organization to ensure nothing is misattributed, exaggerated, or wrong.[20] An example of a campaign against slacktivism is the advertisement series "Liking Isn't Helping", created by the international advertising company Publicis Singapore for a relief organization, Crisis Relief Singapore (CRS). This campaign features images of people struggling or in need, surrounded by many people giving a thumbs up, with the caption "Liking isn't helping". Though the campaign lacked critical components that would generate success, it made viewers stop and think about their activism habits and question the effect that slacktivism really has.[21] In response to Gladwell's criticism of slacktivism in the New Yorker (see above), journalist Leo Mirani argues that he might be right if activism is defined only as sit-ins, taking direct action, and confrontations on the streets. However, if activism is about arousing awareness of people, changing people's minds, and influencing opinions across the world, then the revolution will indeed be "tweeted",[22] "hashtagged",[23] and "YouTubed."[24] In a March 2012 Financial Times article, referring to efforts to address the ongoing violence related to the Lord's Resistance Army, Matthew Green wrote that the slacktivists behind the Kony 2012 video had "achieved more with their 30-minute video than battalions of diplomats, NGO workers and journalists have since the conflict began 26 years ago."[25] Although slacktivism has often been used pejoratively, some scholars point out that activism within the digital space is a reality.[26][27] These scholars suggest that slacktivism may have its deficiencies, but that it can be a positive contributor to activism, and it is inescapable in the current digital climate.[26][27] A 2011 correlational study conducted by Georgetown University entitled "The Dynamics of Cause Engagement" determined that so-called slacktivists are indeed "more likely to take meaningful actions".[28] Notably, slacktivists "participate in more than twice as many activities as people who don't engage in slacktivism", and their actions "have a higher potential to influence others".[28] Cited benefits of slacktivism in achieving clear objectives include creating a secure, low-cost, effective means of organizing that is environmentally friendly.[29] These "social champions" have the ability to directly link social media engagement with responsiveness, leveraging their transparent dialogue into economic, social or political action.[7] Along this line of thinking is Andrew Leonard, a staff writer at Salon, who published an article on the ethics of smartphones and how we use them. Though the means of producing these products go against ethical human rights standards, Leonard encourages the use of smartphones on the basis that the technology they provide can be utilized as a means of changing the problematic situation of their manufacture. The ability to communicate quickly and on a global scale enables the spread of knowledge, such as the conditions that corporations provide to the workers they employ, and the effect their widespread manufacturing has on globalization.
Leonard argues that phones and tablets can be effective tools in bringing about change through slacktivism, because they allow us to spread knowledge, donate money, and more effectively voice our opinions on important matters.[30] Others keep a slightly optimistic outlook on the possibilities of slacktivism while still acknowledging the pitfalls that come with this digital form of protest. Zeynep Tufekci, an assistant professor at the University of North Carolina and a faculty associate at the Berkman Center for Internet & Society, analyzed the capacity of slacktivism to influence collective group action in a variety of different social movements in a segment of the Berkman Luncheon Series. She acknowledges that digital activism is a great enabler of rising social and political movements, and that it is an effective means of enabling differential capacity building for protest. A 2015 study describes how slacktivism can contribute to a quicker growth of social protests through the propagation of information by peripheral nodes in social networks. The authors note that although slacktivists are less active than committed minorities, their power lies in their numbers: "their aggregate contribution to the spread of protest messages is comparable in magnitude to that of core participants".[31] However, Tufekci argues that the enhanced ability to rally protest is accompanied by a weakened ability to actually make an impact, as slacktivism can fail to reach the level of protest required to bring about change.[32] The Black Lives Matter movement calls for the end of systemic racism.[33] The movement has been inextricably linked with social media since 2014, in particular with Twitter and the hashtags #blacklivesmatter and #BLM.[33] Much of the support and awareness of this movement has been made possible through social media. Studies show that the slacktivism commonly present within the movement has been linked with a positive effect on active participation in it.[34] The fact that participants in this movement were able to contribute from their phones increased awareness and participation of the public, particularly in the United States.[34] The Western-centric nature of the critique of slacktivism discounts the impact it can have in authoritarian or repressive contexts.[35][36] Journalist Courtney C.
Radsch argues that even such a low level of engagement was an important form of activism for Arab youth before and during the Arab Spring, because it was a form of free speech and could successfully spark mainstream media coverage; when a hashtag becomes "a trending topic [it] helps generate media attention, even as it helps organize information....The power of social media to help shape the international news agenda is one of the ways in which they subvert state authority and power."[37] In addition, studies suggest that "fears of Internet activities supplanting real-life activity are unsubstantiated," in that they cause neither a negative nor a positive effect on political participation.[38] The Human Rights Campaign (HRC) campaign on marriage equality offers another example of how slacktivism can be used to make a notable difference.[39] The campaign urged Facebook users to change their profile pictures to a red image with an equals sign (=) in the middle.[39] The logo symbolized equality, and if Facebook users put the image as their profile photo, it meant they were in support of marriage equality.[39] The campaign was credited with raising positive awareness and cultivating an environment of support for the marriage equality cause.[39] This study concluded that, although the act of changing one's profile photo is small, social media campaigns such as this ultimately make a cumulative difference over time.[39] The term "clicktivism" is used to describe forms of internet-based slacktivism such as signing online petitions or signing and sending form-letter emails to politicians or corporate CEOs.[16] For example, the British group UK Uncut used Twitter and other websites to organise protests and direct action against companies accused of tax avoidance.[40] Clicktivism allows organizations to quantify their success by keeping track of how many people "clicked" on their petition or other call to action. The idea behind clicktivism is that social media allow for a quick and easy way to show support for an organization or cause.[41] The main focus of digital organizations has become inflating participation rates by asking less and less of their members and viewers.[16] Clicktivism can also be demonstrated by monitoring the success of a campaign by how many "likes" it receives.[42] Clicktivism strives to quantify support, presence and outreach without putting emphasis on real participation.[42] The act of "liking" a photo on Facebook or clicking a petition is in itself symbolic because it demonstrates that the individual is aware of the situation and shows their peers the opinions and thoughts they have on certain subject matters.[43] Critics of clicktivism state that this new phenomenon makes social movements resemble advertising campaigns, in which messages are tested, the clickthrough rate is recorded, and A/B testing is often done. In order to improve these metrics, messages are reduced to make their "asks easier and actions simpler". This in turn reduces social action to members being a list of email addresses rather than engaged people.[44][16] Charity slacktivism is an action in support of a cause that takes little effort on the part of the individual. Examples of online charity slacktivism include posting a Facebook status to support a cause, "liking" a charity organization's cause on Facebook, tweeting or retweeting a charity organization's request for support on Twitter, signing Internet petitions, and posting and sharing YouTube videos about a cause.
It can be argued that a person is not "liking" the photo in order to help the person in need, but to feel better about themselves and to feel like they have done something positive for the person or scene depicted in front of them. This phenomenon has become increasingly popular with individuals, whether they are going on trips to help less fortunate people or "liking" many posts on Facebook in order to "help" the person in the picture. Examples include the Kony 2012 campaign that exploded briefly in social media in March 2012.[45] Examples of offline charity slacktivism include awareness wristbands and paraphernalia in support of causes, such as the Livestrong wristband, as well as bumper stickers and mobile donating. In 2020, during the COVID-19 pandemic, Clap for Our Carers gained traction in several countries. The term slacktivism is often used to describe the world's reaction to the 2010 Haiti earthquake. The Red Cross managed to raise $5 million in two days via text-message donations.[46] Social media outlets were used to spread the word about the earthquake. The day after the earthquake, CNN reported that four of Twitter's top topics were related to the Haitian earthquake.[46] Another form of slacktivism is the act of purchasing products that highlight support for a particular cause and advertise that a percentage of the cost of the good will go to the cause. In some instances the donated funds are spread across various entities within one foundation, which in theory helps several deserving areas of the cause. Criticism tends to highlight the thin spread of the donation.[citation needed] An example of this is the Product Red campaign, whereby consumers can buy Red-branded variants of common products, with a proportion of proceeds going towards fighting AIDS. Slacktivists may also purchase a product from a company because it has a history of donating funds to charity, as a way to second-handedly support a cause. For example, a slacktivist may buy Ben and Jerry's ice cream because its founders invested in the nation's children or promoted social and environmental concerns.[47] Certain forms of slacktivism have political goals in mind, such as gaining support for a presidential campaign or signing an internet petition that aims to influence governmental action. The online petition website Change.org claimed it was attacked by Chinese hackers and brought down in April 2011. Change.org claimed that the fact that hackers "felt the need to bring down the website must be seen as a testament to Change.org's fast-growing success and a vindication of one particular petition: A Call for the Release of Ai Weiwei."[48] Ai Weiwei, a noted human rights activist who had been arrested by Chinese authorities in April 2011, was released on June 22, 2011, from Beijing, which Change.org deemed a victory for its online campaign and petition demanding Ai's release. Sympathy slacktivism can be observed on social media networks such as Facebook, where users can like pages to support a cause or show support to people in need. Also common in this type of slacktivism is for users to change their profile pictures to one that shows the user's peers that they care about the topic.[49] This can be considered a virtual counterpart of wearing a pin to display one's sympathies; however, acquiring such a pin often requires some monetary donation to the cause, while changing a profile picture does not.
In sympathy slacktivism, images of young children, animals and people seemingly in need are often used to give a sense of credibility to viewers, making the campaign resonate longer in their memory. Using children in campaigns is often the most effective way of reaching a larger audience, because most adults exposed to the ad cannot ignore a child in need. An example of sympathy slacktivism is the Swedish newspaper Aftonbladet's campaign "Vi Gillar Olika" (literal translation: "We like different").[50] This campaign was launched against xenophobia and racism, a hot topic in Sweden in 2010. The main icon of the campaign was an open hand with the text "Vi Gillar Olika", an icon adopted from the French organisation SOS Racisme's 1985 campaign Touche pas à mon pote ("Hands off my buddy").[51] Another example was when Facebook users added a Norwegian flag to their pictures after the 2011 Norway attacks, in which 77 people were killed. This campaign received attention from the Swedish Moderate Party, which encouraged its supporters to update their profile pictures.[52] Kony 2012 was a campaign created by Invisible Children in the form of a 28-minute video about the dangerous situation of many children in Africa at the hands of Joseph Kony, the leader of the Lord's Resistance Army (LRA). The LRA is said to have abducted a total of nearly 60,000 children, brainwashing the boys to fight for it and turning the girls into sex slaves.[53] The campaign was used as an experiment to see if an online video could reach such a large audience that it would make a war criminal, Joseph Kony, famous. It became the fastest-growing viral video of all time, reaching 100 million views in six days.[54] The campaign generated an unprecedented amount of awareness, appealing to international leaders as well as the general population. The reaction to and participation in this campaign demonstrate charity slacktivism, given the way in which many viewers responded. The success of the campaign has been attributed mostly to how many people viewed the video rather than to the donations received. After watching the video, many viewers felt compelled to take action; this action, however, took the form of sharing the video and potentially pledging their support.[55] As described by Sarah Kendzior of Al Jazeera: The video seemed to embody the slacktivist ethos: viewers oblivious to a complex foreign conflict are made heroic by watching a video, buying a bracelet, hanging a poster. Advocates of Invisible Children's campaign protested that their desire to catch Kony was sincere, their emotional response to the film genuine—and that the sheer volume of supporters calling for the capture of Joseph Kony constituted a meaningful shift in human rights advocacy."[56] In the weeks following the kidnapping of hundreds of schoolgirls by the organization Boko Haram, the hashtag #BringBackOurGirls began to trend globally on Twitter as the story continued to spread,[57] and by May 11 it had attracted 2.3 million tweets.
One such tweet came from the First Lady of the United States, Michelle Obama, holding a sign displaying the hashtag, posted to her official Twitter account, helping to spread awareness of the kidnapping.[58] Comparisons have been made between the #BringBackOurGirls campaign and the Kony 2012 campaign.[59] The campaign was labeled slacktivism by some critics, particularly as the weeks and months passed with no progress being made in recovery of the kidnapped girls.[60][61] According to Mkeki Mutah, uncle of one of the kidnapped girls: There is a saying: "Actions speak louder than words." Leaders from around the world came out and said they would assist to bring the girls back, but now we hear nothing. The question I wish to raise is: why? If they knew they would not do anything, they wouldn't have even made that promise at all. By just coming out to tell the world, I see that as a political game, which it shouldn't be so far as the girls are concerned.[62]
https://en.wikipedia.org/wiki/Slacktivism
WPGMA (Weighted Pair Group Method with Arithmetic Mean) is a simple agglomerative (bottom-up) hierarchical clustering method, generally attributed to Sokal and Michener.[1] The WPGMA method is similar to its unweighted variant, the UPGMA method. The WPGMA algorithm constructs a rooted tree (dendrogram) that reflects the structure present in a pairwise distance matrix (or a similarity matrix). At each step, the nearest two clusters, say i and j, are combined into a higher-level cluster i ∪ j. Its distance to another cluster k is then simply the arithmetic mean of the distances between k and i and between k and j:

d_{(i ∪ j),k} = (d_{i,k} + d_{j,k}) / 2

The WPGMA algorithm produces rooted dendrograms and requires a constant-rate assumption: it produces an ultrametric tree in which the distances from the root to every branch tip are equal. This ultrametricity assumption is called the molecular clock when the tips involve DNA, RNA and protein data. This working example is based on a JC69 genetic distance matrix computed from the 5S ribosomal RNA sequence alignment of five bacteria: Bacillus subtilis (a), Bacillus stearothermophilus (b), Lactobacillus viridescens (c), Acholeplasma modicum (d), and Micrococcus luteus (e).[2][3] Let us assume that we have five elements (a, b, c, d, e) and the following matrix D1 of pairwise distances between them (the entries below are reconstructed from the calculations in this example):

D1    a    b    c    d
b    17
c    21   30
d    31   34   28
e    23   21   39   43

In this example, D1(a, b) = 17 is the smallest value of D1, so we join elements a and b. Let u denote the node to which a and b are now connected. Setting δ(a, u) = δ(b, u) = D1(a, b)/2 ensures that elements a and b are equidistant from u. This corresponds to the expectation of the ultrametricity hypothesis. The branches joining a and b to u then have lengths δ(a, u) = δ(b, u) = 17/2 = 8.5 (see the final dendrogram). We then proceed to update the initial distance matrix D1 into a new distance matrix D2 (see below), reduced in size by one row and one column because of the clustering of a with b. The values in D2 involving the new cluster (a, b) are the new distances, calculated by averaging distances between each element of the first cluster (a, b) and each of the remaining elements:

D2((a,b), c) = (D1(a, c) + D1(b, c))/2 = (21 + 30)/2 = 25.5
D2((a,b), d) = (D1(a, d) + D1(b, d))/2 = (31 + 34)/2 = 32.5
D2((a,b), e) = (D1(a, e) + D1(b, e))/2 = (23 + 21)/2 = 22

The remaining values in D2 are not affected by the matrix update, as they correspond to distances between elements not involved in the first cluster.
We now reiterate the three previous steps, starting from the new distance matrix D2:

D2    (a,b)   c    d
c     25.5
d     32.5   28
e     22     39   43

Here, D2((a,b), e) = 22 is the smallest value of D2, so we join cluster (a,b) and element e. Let v denote the node to which (a,b) and e are now connected. Because of the ultrametricity constraint, the branches joining a or b to v, and e to v, are equal and have the following length: δ(a, v) = δ(b, v) = δ(e, v) = 22/2 = 11. We deduce the missing branch length: δ(u, v) = δ(e, v) − δ(a, u) = δ(e, v) − δ(b, u) = 11 − 8.5 = 2.5 (see the final dendrogram). We then proceed to update the D2 matrix into a new distance matrix D3 (see below), reduced in size by one row and one column because of the clustering of (a,b) with e:

D3(((a,b),e), c) = (D2((a,b), c) + D2(e, c))/2 = (25.5 + 39)/2 = 32.25

Of note, this average calculation of the new distance does not account for the larger size of the (a,b) cluster (two elements) relative to e (one element). Similarly:

D3(((a,b),e), d) = (D2((a,b), d) + D2(e, d))/2 = (32.5 + 43)/2 = 37.75

The averaging procedure therefore gives differential weight to the initial distances of matrix D1. This is the reason why the method is weighted: not with respect to the mathematical procedure, but with respect to the initial distances.

D3    ((a,b),e)   c
c     32.25
d     37.75      28

We again reiterate the three previous steps, starting from the updated distance matrix D3. Here, D3(c, d) = 28 is the smallest value of D3, so we join elements c and d. Let w denote the node to which c and d are now connected. The branches joining c and d to w then have lengths δ(c, w) = δ(d, w) = 28/2 = 14 (see the final dendrogram). There is a single entry to update:

D4((c,d), ((a,b),e)) = (D3(c, ((a,b),e)) + D3(d, ((a,b),e)))/2 = (32.25 + 37.75)/2 = 35

The final D4 matrix therefore contains the single distance D4((c,d), ((a,b),e)) = 35. So we join clusters ((a,b), e) and (c, d). Let r denote the (root) node to which ((a,b), e) and (c, d) are now connected. The branches joining ((a,b), e) and (c, d) to r then have lengths:

δ(((a,b),e), r) = δ((c,d), r) = 35/2 = 17.5

We deduce the two remaining branch lengths:

δ(v, r) = δ(((a,b),e), r) − δ(e, v) = 17.5 − 11 = 6.5
δ(w, r) = δ((c,d), r) − δ(c, w) = 17.5 − 14 = 3.5

The dendrogram is now complete.
It is ultrametric because all tips (a through e) are equidistant from r:

δ(a, r) = δ(b, r) = δ(e, r) = δ(c, r) = δ(d, r) = 17.5

The dendrogram is therefore rooted by r, its deepest node. Alternative linkage schemes include single linkage clustering, complete linkage clustering, and UPGMA average linkage clustering. Implementing a different linkage is simply a matter of using a different formula to calculate inter-cluster distances during the distance-matrix update steps of the above algorithm. Complete linkage clustering avoids a drawback of the alternative single linkage clustering method, the so-called chaining phenomenon, where clusters formed via single linkage may be forced together because single elements are close to each other, even though many of the elements in each cluster may be very distant from one another. Complete linkage tends to find compact clusters of approximately equal diameters.[4]
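The whole procedure can be condensed into a few lines of code. The following minimal Python sketch (variable names are ours) runs the WPGMA update rule on the distances of the worked example and prints the merge heights 8.5, 11, 14 and 17.5 found above; it omits the internal branch-length bookkeeping:

labels = ["a", "b", "c", "d", "e"]
dist = {
    ("a","b"): 17, ("a","c"): 21, ("a","d"): 31, ("a","e"): 23,
    ("b","c"): 30, ("b","d"): 34, ("b","e"): 21,
    ("c","d"): 28, ("c","e"): 39, ("d","e"): 43,
}

def d(x, y):
    # symmetric lookup into the (half-stored) distance matrix
    return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

clusters = list(labels)
while len(clusters) > 1:
    # find the closest pair of clusters
    i, j = min(
        ((i, j) for i in clusters for j in clusters if i < j),
        key=lambda pair: d(*pair),
    )
    height = d(i, j) / 2  # ultrametric: both tips sit at d(i,j)/2
    merged = f"({i},{j})"
    print(f"join {i} and {j} at height {height}")
    for k in clusters:
        if k not in (i, j):
            # WPGMA update: simple mean, ignoring cluster sizes
            dist[(merged, k)] = (d(i, k) + d(j, k)) / 2
    clusters = [k for k in clusters if k not in (i, j)] + [merged]

Replacing the update line with, for example, min(d(i, k), d(j, k)) would give single linkage, and max(d(i, k), d(j, k)) complete linkage, illustrating the point above that only the update formula distinguishes these schemes.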
https://en.wikipedia.org/wiki/WPGMA
The Internet Engineering Task Force (IETF) is a standards organization for the Internet and is responsible for the technical standards that make up the Internet protocol suite (TCP/IP).[3] It has no formal membership roster or requirements, and all its participants are volunteers, whose work is usually funded by employers or other sponsors. The IETF was initially supported by the federal government of the United States, but since 1993 it has operated under the auspices of the Internet Society, a non-profit organization with local chapters around the world. There is no membership in the IETF: anyone can participate by signing up to a working group mailing list or registering for an IETF meeting.[4]

The IETF operates in a bottom-up task creation mode, largely driven by working groups.[2] Each working group normally has two appointed co-chairs (occasionally three) and a charter that describes its focus, what it is expected to produce, and when. Each group is open to all who want to participate and holds discussions on an open mailing list. Working groups hold open sessions at IETF meetings, where the onsite registration fee in 2024 was between US$875 (early registration) and $1,200 per person for the week.[5] Significant discounts are available for students and remote participants. As working groups do not make decisions at IETF meetings, with all decisions taken later on the working group mailing list, meeting attendance is not required for contributors.

Rough consensus is the primary basis for decision making; there are no formal voting procedures. Each working group is intended to complete work on its topic and then disband. In some cases, a working group will instead have its charter updated to take on new tasks as appropriate.[2]

The working groups are grouped into areas by subject matter (see § Steering group, below). Each area is overseen by an area director (AD), with most areas having two ADs. The ADs are responsible for appointing working group chairs. The area directors, together with the IETF Chair, form the Internet Engineering Steering Group (IESG), which is responsible for the overall operation of the IETF.

The Internet Architecture Board (IAB) oversees the IETF's external relationships.[6] The IAB provides long-range technical direction for Internet development.
The IAB also manages the Internet Research Task Force (IRTF), with which the IETF has a number of cross-group relations.[7]

A nominating committee (NomCom) of ten randomly chosen volunteers who participate regularly at meetings, a non-voting chair, and four to five liaisons is vested with the power to appoint, reappoint, and remove members of the IESG, IAB, IETF Trust, and the IETF LLC.[8] To date, no one has been removed by a NomCom, although several people have resigned their positions, requiring replacements.[9]

In 1993 the IETF changed from an activity supported by the US federal government to an independent, international activity associated with the Internet Society, a US-based 501(c)(3) organization.[10] In 2018 the Internet Society created a subsidiary, the IETF Administration LLC, to be the corporate, legal, and financial home for the IETF.[11] IETF activities are funded by meeting fees, meeting sponsors, and by the Internet Society via its organizational membership and the proceeds of the Public Interest Registry.[12] In December 2005, the IETF Trust was established to manage the copyrighted materials produced by the IETF.[13]

The Internet Engineering Steering Group (IESG) is a body composed of the IETF chair and the area directors. It provides the final technical review of Internet standards and is responsible for the day-to-day management of the IETF. It receives appeals of the decisions of the working groups, and makes the decision to progress documents in the standards track.[14] The chair of the IESG is the area director of the general area, who also serves as the overall IETF chair. Members of the IESG include the two directors (sometimes three) of each IETF area,[15] together with liaison and ex officio members.

The Gateway Algorithms and Data Structures (GADS) Task Force was the precursor to the IETF. Its chairman was David L. Mills of the University of Delaware.[16] In January 1986, the Internet Activities Board (IAB; now called the Internet Architecture Board) decided to divide GADS into two entities: an Internet Architecture (INARC) Task Force, chaired by Mills, to pursue research goals, and the IETF to handle nearer-term engineering and technology transfer issues.[16] The first IETF chair was Mike Corrigan, who was then the technical program manager for the Defense Data Network (DDN).[16] Also in 1986, after leaving DARPA, Robert E. Kahn founded the Corporation for National Research Initiatives (CNRI), which began providing administrative support to the IETF. In 1987, Corrigan was succeeded as IETF chair by Phill Gross.[17]

Effective March 1, 1989, but providing support dating back to late 1988, CNRI and NSF entered into a cooperative agreement, No. NCR-8820945, wherein CNRI agreed to create and provide a "secretariat" for the "overall coordination, management and support of the work of the IAB, its various task forces and, particularly, the IETF".[18]

In 1992, CNRI supported the formation and early funding of the Internet Society, which took on the IETF as a fiscally sponsored project, along with the IAB, the IRTF, and the organization of annual INET meetings. Gross continued to serve as IETF chair throughout this transition.
Cerf, Kahn, and Lyman Chapin announced the formation of ISOC as "a professional society to facilitate, support, and promote the evolution and growth of the Internet as a global research communications infrastructure".[19] At the first board meeting of the Internet Society, Cerf, representing CNRI, offered: "In the event a deficit occurs, CNRI has agreed to contribute up to USD$102,000 to offset it."[20] In 1993, Cerf continued to support the formation of ISOC while working for CNRI,[21] and the role of ISOC in "the official procedures for creating and documenting Internet Standards" was codified in the IETF's RFC 1602.[22]

In 1995, the IETF's RFC 2031 described ISOC's role in the IETF as purely administrative, with ISOC having "no influence whatsoever on the Internet Standards process, the Internet Standards or their technical content".[23]

In 1998, CNRI established Foretec Seminars, Inc. (Foretec), a for-profit subsidiary, to take over providing secretariat services to the IETF.[18] Foretec provided these services until at least 2004;[18] by 2013, Foretec had been dissolved.[24]

In 2003, the IETF's RFC 3677 described the IETF's role in appointing three members to ISOC's board of directors.[25]

In 2018, ISOC established the IETF Administration LLC, a separate LLC to handle the administration of the IETF.[26] In 2019, the LLC issued a call for proposals to provide secretariat services to the IETF.[27]

The first IETF meeting was attended by 21 US federal government-funded researchers on 16 January 1986. It was a continuation of the work of the earlier GADS Task Force. Representatives from non-governmental entities (such as gateway vendors)[28] were invited to attend starting with the fourth IETF meeting in October 1986. Since that time all IETF meetings have been open to the public.[2]

Initially, the IETF met quarterly, but since 1991 it has been meeting three times a year. The initial meetings were very small, with fewer than 35 people in attendance at each of the first five meetings; the maximum attendance during the first 13 meetings was only 120 attendees, at the twelfth meeting, held during January 1989. The meetings have grown greatly in both participation and scope since the early 1990s, with a maximum attendance of 2,810 at the December 2000 IETF held in San Diego, California. Attendance declined with industry restructuring during the early 2000s and is currently around 1,200.[29][2]

The locations for IETF meetings vary greatly. A list of past and future meeting locations is on the IETF meetings page.[30] The IETF strives to hold its meetings near where most of the IETF volunteers are located. Meetings are held three times a year, with one meeting each in Asia, Europe, and North America; an occasional exploratory meeting is held outside of those regions in place of one of them.[31] The IETF also organizes hackathons during its meetings, focused on implementing code that will improve standards in terms of quality and interoperability.[32] Owing to recent changes in US administration policy that deny entry to foreign free-speech supporters and could impact transgender people,
there is a movement asking the IETF to hold its meetings outside of the USA, in a safe country, instead.[33]

The details of IETF operations have changed considerably as the organization has grown, but the basic mechanism remains publication of proposed specifications, development based on the proposals, review and independent testing by participants, and republication as a revised proposal, a draft proposal, or eventually as an Internet Standard. IETF standards are developed in an open, all-inclusive process in which any interested individual can participate. All IETF documents are freely available over the Internet and can be reproduced at will. Multiple working, useful, interoperable implementations are the chief requirement before an IETF proposed specification can become a standard.[2] Most specifications are focused on single protocols rather than tightly interlocked systems. This has allowed the protocols to be used in many different systems, and IETF standards are routinely re-used by bodies which create full-fledged architectures (e.g. 3GPP IMS).

Because it relies on volunteers and uses "rough consensus and running code" as its touchstone, results can be slow whenever the number of volunteers is either too small to make progress or so large as to make consensus difficult, or when volunteers lack the necessary expertise. For protocols like SMTP, which is used to transport e-mail for a user community in the many hundreds of millions, there is also considerable resistance to any change that is not fully backward compatible, except for IPv6. Work within the IETF on ways to improve the speed of the standards-making process is ongoing, but, because the number of volunteers with opinions on it is very great, consensus on improvements has been slow to develop.

The IETF cooperates with the W3C, ISO/IEC, ITU, and other standards bodies.[10]

Statistics are available that show who the top contributors by RFC publication are.[34] While the IETF only allows for participation by individuals, and not by corporations or governments, sponsorship information is available from these statistics.

The IETF chairperson is selected by the NomCom process for a two-year renewable term.[35] Before 1993, the IETF chair was selected by the IAB.[36]

The IETF works on a broad range of networking technologies which provide the foundation for the Internet's growth and evolution.[38]

In network management, the IETF aims to improve efficiency as networks grow in size and complexity, and it is also standardizing protocols for autonomic networking that enable networks to be self-managing.[39]

The Internet of Things is a network of physical objects, or "things", that are embedded with electronics, sensors, and software, enabling objects to exchange data with their operator, manufacturer, and other connected devices. Several IETF working groups are developing protocols that are directly relevant to IoT.[40]

Work on transport protocols provides the ability of Internet applications to send data over the Internet. Well-established transport protocols such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are continuously being extended and refined to meet the needs of the global Internet.[41]
https://en.wikipedia.org/wiki/Internet_Engineering_Task_Force
Cyber espionage, cyber spying, or cyber-collection is the act or practice of obtaining secrets and information without the permission and knowledge of the holder of the information, using methods on the Internet, networks, or individual computers through the use of proxy servers,[1] cracking techniques, and malicious software including Trojan horses and spyware.[2][3] Cyber espionage can be used to target various actors – individuals, competitors, rivals, groups, governments, and others – in order to obtain personal, economic, political, or military advantages. It may be perpetrated wholly online from the computer desks of professionals on bases in faraway countries, or it may involve infiltration at home by computer-trained conventional spies and moles, or in other cases it may be the criminal handiwork of amateur malicious hackers and software programmers.[2]

Cyber spying started as far back as 1996, when widespread deployment of Internet connectivity to government and corporate systems gained momentum. Since that time, there have been numerous cases of such activities.[4][5][6]

Cyber spying typically involves the use of such access to secrets and classified information, or control of individual computers or whole networks, for a strategic advantage and for psychological, political, and physical subversion activities and sabotage.[7] More recently, cyber spying involves analysis of public activity on social networking sites like Facebook and Twitter.[8]

Such operations, like non-cyber espionage, are typically illegal in the victim country while fully supported by the highest level of government in the aggressor country. The ethical situation likewise depends on one's viewpoint, particularly one's opinion of the governments involved.[7]

Cyber-collection tools have been developed by governments and private interests for nearly every computer and smartphone operating system. Tools are known to exist for Microsoft, Apple, and Linux computers and iPhone, Android, Blackberry, and Windows phones.[9] Major manufacturers of commercial off-the-shelf (COTS) cyber-collection technology include Gamma Group from the UK[10] and Hacking Team from Italy.[11] Bespoke cyber-collection tool companies, many offering COTS packages of zero-day exploits, include Endgame, Inc. and Netragard of the United States and Vupen from France.[12] State intelligence agencies often have their own teams to develop cyber-collection tools, such as Stuxnet, but require a constant source of zero-day exploits in order to insert their tools into newly targeted systems. Specific technical details of these attack methods often sell for six-figure sums.[13]

Cyber-collection systems share a number of common functions, and there are several common ways to infect or access the target. Cyber-collection agents are usually installed by payload-delivery software constructed using zero-day attacks and delivered via infected USB drives, e-mail attachments, or malicious web sites.[20][21] State-sponsored cyber-collection efforts have used official operating system certificates in place of relying on security vulnerabilities. In the Flame operation, Microsoft states that the Microsoft certificate used to impersonate a Windows Update was forged;[22] however, some experts believe that it may have been acquired through HUMINT efforts.[23]
https://en.wikipedia.org/wiki/Cyber_spying
User experience (UX) is how a user interacts with and experiences a product, system, or service. It includes a person's perceptions of utility, ease of use, and efficiency. Improving user experience is important to most companies, designers, and creators when creating and refining products, because a negative user experience can diminish the use of the product and, therefore, any desired positive impacts. Conversely, designing toward profitability as a main objective often conflicts with ethical user-experience objectives and can even cause harm. User experience is subjective; however, the attributes that make up the user experience are objective.

According to the Nielsen Norman Group, "user experience" includes all the aspects of the interaction between the end-user and the company, its services, and its products.[1] The international standard on ergonomics of human-system interaction, ISO 9241, defines user experience as a "user's perceptions and responses that result from the use and/or anticipated use of a system, product or service".[2] According to the ISO definition, user experience includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors, and accomplishments that occur before, during, and after use. The ISO also lists three factors that influence user experience: the system, the user, and the context of use.

Note 3 of the standard hints that usability addresses aspects of user experience, e.g. "usability criteria can be used to assess aspects of user experience". The standard does not go further in clarifying the relation between user experience and usability. Clearly, the two are overlapping concepts, with usability including pragmatic aspects (getting a task done) and user experience focusing on users' feelings stemming from both the pragmatic and the hedonic aspects of the system. Many practitioners use the terms interchangeably. The term "usability" pre-dates the term "user experience". Part of the reason the terms are often used interchangeably is that, as a practical matter, a user will, at a minimum, require sufficient usability to accomplish a task, while the feelings of the user may be less important, even to the user themselves. Since usability is about getting a task done, aspects of user experience like information architecture and user interface can help or hinder a user's experience. If a website has "bad" information architecture and a user has a difficult time finding what they are looking for, then the user will not have an effective, efficient, and satisfying search.

In addition to the ISO standard, there exist several other definitions of user experience,[3] some of which have been studied by various researchers.[4]

Early developments in user experience can be traced back to the Machine Age of the 19th and early 20th centuries. Inspired by the Machine Age intellectual framework, a quest to improve assembly processes in order to increase production efficiency and output led to major technological advancements, such as the mass production of high-volume goods on moving assembly lines, the high-speed printing press, large hydroelectric power plants, and radio technology, to name a few. Frederick Winslow Taylor and Henry Ford explored ways to make human labor more efficient and productive.
Taylor's research into the efficiency of interactions between workers and their tools is the earliest example that resembles today's user-experience fundamentals.

The term user experience was brought to wider knowledge by Donald Norman in the mid-1990s.[5] He never intended the term "user experience" to be applied only to the affective aspects of usage. A review of his earlier work[6] suggests that the term was used to signal a shift to include affective factors, along with the prerequisite behavioral concerns that had traditionally been considered in the field. Many usability practitioners continue to research and attend to affective factors associated with end-users, and have been doing so for years, long before the term "user experience" was introduced in the mid-1990s.[7] In an interview in 2007, Norman discusses the widespread use of the term "user experience" and its consequently imprecise meaning.[8]

Several developments affected the rise of interest in user experience. The field of user experience represents an expansion and extension of the field of usability to include the holistic perspective of how a person feels about using a system. The focus is on pleasure and value as well as on performance. The exact definition, framework, and elements of user experience are still evolving.

The user experience of an interactive product or a website is usually measured by a number of methods, including questionnaires, focus groups, observed usability tests, user journey mapping, and other methods. A freely available questionnaire (available in several languages) is the User Experience Questionnaire (UEQ).[15] The development and validation of this questionnaire is described in a computer science essay published in 2008.[16] Higher levels of user experience have been linked to increased effectiveness of digital health interventions targeting improvements in physical activity,[17] nutrition, mental health, and smoking.[18]

Google Ngram Viewer shows wide use of the term starting in the 1930s,[19] for example: "He suggested that more follow-up in the field would be welcomed by the user, and would be a means of incorporating the results of user's experience into the design of new machines." Use of the term in relation to computer software also pre-dates Norman.[20]

Many factors can influence a user's experience with a system. To address this variety, factors influencing user experience have been classified into three main categories: the user's state and previous experience, the system's properties, and the usage context (situation).[21] Understanding representative users, working environments, interactions, and emotional reactions helps in designing the system during user experience design.

Single experiences influence the overall user experience:[22] the experience of a key click affects the experience of typing a text message, the experience of typing a message affects the experience of text messaging, and the experience of text messaging affects the overall user experience with the phone. The overall user experience is not simply a sum of smaller interaction experiences, because some experiences are more salient than others. Overall user experience is also influenced by factors outside the actual interaction episode: brand, pricing, friends' opinions, reports in media, etc.

One branch of user experience research focuses on emotions. This includes momentary experiences during interaction: designing effective interaction and evaluating emotions.
Another branch is interested in understanding the long-term relation between user experience and product appreciation. The industry sees good overall user experience with a company's products as critical for securing brand loyalty and enhancing the growth of the customer base. All temporal levels of user experience (momentary, episodic, and long-term) are important, but the methods to design and evaluate these levels can be very different.

Developer experience (DX) is user experience from a developer's point of view. It is defined by the tools, processes, and software that a developer uses when interacting with a product or system while producing another one, such as in software development.[23] DX has received increased attention, especially among businesses that primarily offer software as a service to other businesses, where ease of use is a key differentiator in the market.[24]
https://en.wikipedia.org/wiki/User_experience
The International Roadmap for Devices and Systems, or IRDS, is a set of predictions about likely developments in electronic devices and systems. The IRDS was established in 2016 and is the successor to the International Technology Roadmap for Semiconductors. These predictions are intended to allow coordination of efforts across academia, manufacturers, equipment suppliers, and national research laboratories. The IEEE specifies the goals of the roadmap.[1]

The executive committee is drawn from regions with a major stake in developments in electronics: Europe, South Korea, Japan, Taiwan, and the United States. International Focus Teams (IFTs) assess the present status and future evolution of the ecosystem in their specific fields of expertise and produce a 15-year roadmap. IFT reports include evolution, key challenges, major roadblocks, and possible solutions.
https://en.wikipedia.org/wiki/International_Roadmap_for_Devices_and_Systems
The Payment Card Industry Data Security Standard (PCI DSS) is an information security standard used to handle credit cards from the major card brands. The standard is administered by the Payment Card Industry Security Standards Council, and its use is mandated by the card brands. It was created to better control cardholder data and reduce credit card fraud. Validation of compliance is performed annually or quarterly, with a method suited to the volume of transactions.[1]

The major card brands originally had five different security programs. The intentions of each were roughly similar: to create an additional level of protection for card issuers by ensuring that merchants meet minimum levels of security when they store, process, and transmit cardholder data. To address interoperability problems among the existing standards, a combined effort by the principal credit-card organizations resulted in the release of version 1.0 of PCI DSS in December 2004. PCI DSS has been implemented and followed worldwide. The Payment Card Industry Security Standards Council (PCI SSC) was then formed, and these companies aligned their policies to create the PCI DSS.[2] Mastercard, American Express, Visa, JCB International, and Discover Financial Services established the PCI SSC in September 2006 as an administrative and governing entity which mandates the evolution and development of the PCI DSS.[3] Independent private organizations can participate in PCI development after they register. Each participating organization joins a SIG (Special Interest Group) and contributes to activities mandated by the group. Several versions of the PCI DSS have been made available.[4]

The PCI DSS has twelve requirements for compliance, organized into six related groups known as control objectives.[7] Each PCI DSS version has divided these six requirement groups differently, but the twelve requirements have not changed since the inception of the standard. Each requirement and sub-requirement is divided into three sections, and version 4.0.1 of the PCI DSS enumerates the twelve requirements.[8] The PCI SSC (Payment Card Industry Security Standards Council) has released supplemental information to clarify the requirements.

Companies subject to PCI DSS standards must be PCI-compliant; how they prove and report their compliance is based on their annual number of transactions and how the transactions are processed. An acquirer or payment brand may manually place an organization into a reporting level at its discretion.[11] Each card issuer maintains a table of merchant compliance levels and a table for service providers.[12][13]

Compliance validation involves the evaluation and confirmation that the security controls and procedures have been implemented according to the PCI DSS. Validation occurs through an annual assessment, either by an external entity or by self-assessment.[14]

A Report on Compliance (ROC) is conducted by a PCI Qualified Security Assessor (QSA) and is intended to provide independent validation of an entity's compliance with the PCI DSS standard. A completed ROC results in two documents: a ROC Reporting Template populated with a detailed explanation of the testing completed, and an Attestation of Compliance (AOC) documenting that a ROC has been completed, along with the overall conclusion of the ROC.

The PCI DSS Self-Assessment Questionnaire (SAQ) is a validation tool intended for small to medium-sized merchants and service providers to assess their own PCI DSS compliance status.
There are multiple types of SAQ, each with a different length depending on the entity type and payment model used. Each SAQ question has a yes-or-no answer, and any "no" response requires the entity to indicate its planned future implementation. As with ROCs, an Attestation of Compliance (AOC) based on the SAQ is also completed.

The PCI Security Standards Council maintains a program to certify companies and individuals to perform assessment activities. A Qualified Security Assessor (QSA) is an individual certified by the PCI Security Standards Council to validate another entity's PCI DSS compliance. QSAs must be employed and sponsored by a QSA Company, which also must be certified by the PCI Security Standards Council.[15][16]

An Internal Security Assessor (ISA) is an individual who has earned a certificate from the PCI Security Standards Council for their sponsoring organization and can conduct PCI self-assessments for that organization. The ISA program was designed to help Level 2 merchants meet Mastercard compliance validation requirements.[17] ISA certification empowers an individual to conduct an appraisal of his or her organization and to propose security solutions and controls for PCI DSS compliance. ISAs are in charge of cooperation and participation with QSAs.[14]

Although the PCI DSS must be implemented by all entities that process, store, or transmit cardholder data, formal validation of PCI DSS compliance is not mandatory for all entities. Visa and Mastercard require merchants and service providers to be validated according to the PCI DSS; Visa also offers a Technology Innovation Program (TIP), an alternative program which allows qualified merchants to discontinue the annual PCI DSS validation assessment. Merchants are eligible if they take alternative precautions against fraud, such as the use of EMV or point-to-point encryption.

Issuing banks are not required to undergo PCI DSS validation, although they must secure sensitive data in a PCI DSS-compliant manner. Acquiring banks must comply with PCI DSS and have their compliance validated with an audit. In the event of a security breach, any compromised entity which was not PCI DSS-compliant at the time of the breach may be subject to additional penalties (such as fines) from card brands or acquiring banks.

Compliance with PCI DSS is not required by federal law in the United States, but the laws of some states refer to PCI DSS directly or make equivalent provisions. Legal scholars Edward Morse and Vasant Raval have said that by enshrining PCI DSS compliance in legislation, card networks reallocated the cost of fraud from card issuers to merchants.[18] In 2007, Minnesota enacted a law prohibiting the retention of some types of payment-card data more than 48 hours after authorization of a transaction.[19][20] Nevada incorporated the standard into state law two years later, requiring compliance by merchants doing business in that state with the current PCI DSS and shielding compliant entities from liability; the Nevada law also allows merchants to avoid liability by complying with other approved security standards.[21][18] In 2010, Washington also incorporated the standard into state law; unlike Nevada's law, entities are not required to be PCI DSS-compliant, but compliant entities are shielded from liability in the event of a data breach.[22][18]

Visa and Mastercard impose fines for non-compliance.
Stephen and Theodora "Cissy" McComb, owners of Cisero's Ristorante and Nightclub inPark City, Utah, were fined for a breach for which two forensics firms could not find evidence: The McCombs assert that the PCI system is less a system for securing customer card data than a system for raking in profits for the card companies via fines and penalties. Visa and MasterCard impose fines on merchants even when there is no fraud loss at all, simply because the fines are "profitable to them," the McCombs say.[23] Michael Jones,CIOofMichaels, testified before a U.S. Congressional subcommittee about the PCI DSS: [The PCI DSS requirements] are very expensive to implement, confusing to comply with, and ultimately subjective, both in their interpretation and in their enforcement. It is often stated that there are only twelve "Requirements" for PCI compliance. In fact there are over 220 sub-requirements; some of which can place anincredible burden on a retailerandmany of which are subject to interpretation.[24] The PCI DSS may compel businesses pay more attention to IT security, even if minimum standards are not enough to eradicate security problems.Bruce Schneierspoke in favor of the standard: Regulation—SOX,HIPAA, GLBA, the credit-card industry's PCI, the various disclosure laws, the European Data Protection Act, whatever—has been the best stick the industry has found to beat companies over the head with. And it works. Regulation forces companies to take security more seriously, and sells more products and services.[25] PCICouncil general manager Bob Russo responded to objections by theNational Retail Federation: [PCI is a structured] blend ... [of] specificity and high-level concepts [that allows] stakeholders the opportunity and flexibility to work with Qualified Security Assessors (QSAs) to determine appropriate security controls within their environment that meet the intent of the PCI standards.[26] Visa chief enterprise risk officer Ellen Richey said in 2018, "No compromised entity has yet been found to be in compliance with PCI DSS at the time of a breach".[27]However, a 2008 breach ofHeartland Payment Systems(validated as PCI DSS-compliant) resulted in the compromising of one hundred million card numbers. Around that time,Hannaford BrothersandTJX Companies(also validated as PCI DSS-compliant) were similarly breached as a result of the allegedly-coordinated efforts ofAlbert Gonzalezand two unnamed Russian hackers.[28] Assessments examine the compliance of merchants and service providers with the PCI DSS at a specific point in time, frequently usingsamplingto allow compliance to be demonstrated with representative systems and processes. It is the responsibility of the merchant and service provider to achieve, demonstrate, and maintain compliance throughout the annual validation-and-assessment cycle across all systems and processes. A breakdown in merchant and service-provider compliance with the written standard may have been responsible for the breaches; Hannaford Brothers received its PCI DSS compliance validation one day after it had been made aware of a two-month-long compromise of its internal systems. Compliance validation is required only for level 1 to 3 merchants and may be optional for Level 4, depending on the card brand and acquirer. 
According to Visa's compliance validation details for merchants, Level 4 merchant compliance-validation requirements ("Merchants processing less than 20,000 Visa e-commerce transactions annually and all other merchants processing up to 1 million Visa transactions annually") are set by the acquirer. Over 80 percent of payment-card compromises between 2005 and 2007 affected Level 4 merchants, who handled 32 percent of all such transactions.
https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Security_Standard
In mathematics, an identity is an equality relating one mathematical expression $A$ to another mathematical expression $B$, such that $A$ and $B$ (which might contain some variables) produce the same value for all values of the variables within a certain domain of discourse.[1][2] In other words, $A = B$ is an identity if $A$ and $B$ define the same functions; an identity is an equality between functions that are differently defined. For example, $(a+b)^2 = a^2 + 2ab + b^2$ and $\cos^2\theta + \sin^2\theta = 1$ are identities.[3] Identities are sometimes indicated by the triple bar symbol ≡ instead of =, the equals sign.[4] Formally, an identity is a universally quantified equality.

Certain identities, such as $a + 0 = a$ and $a + (-a) = 0$, form the basis of algebra,[5] while other identities, such as $(a+b)^2 = a^2 + 2ab + b^2$ and $a^2 - b^2 = (a+b)(a-b)$, can be useful in simplifying algebraic expressions and expanding them.[6]

Geometrically, trigonometric identities are identities involving certain functions of one or more angles.[7] They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered in this article. These identities are useful whenever expressions involving trigonometric functions need to be simplified. Another important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity.

One of the most prominent examples of trigonometric identities involves the equation $\sin^2\theta + \cos^2\theta = 1$, which is true for all real values of $\theta$. On the other hand, the equation $\cos\theta = 1$ is only true for certain values of $\theta$, not all. For example, this equation is true when $\theta = 0$, but false when $\theta = 2$.

Another group of trigonometric identities concerns the so-called addition/subtraction formulas (e.g. the double-angle identity $\sin(2\theta) = 2\sin\theta\cos\theta$ and the addition formula for $\tan(x+y)$), which can be used to break down expressions of larger angles into those with smaller constituents.

The following identities hold for all integer exponents, provided that the base is non-zero: $b^{m+n} = b^m \, b^n$, $(b^m)^n = b^{mn}$, and $(ab)^n = a^n \, b^n$. Unlike addition and multiplication, exponentiation is not commutative. For example, $2 + 3 = 3 + 2 = 5$ and $2 \cdot 3 = 3 \cdot 2 = 6$, but $2^3 = 8$ whereas $3^2 = 9$. Also unlike addition and multiplication, exponentiation is not associative either. For example, $(2+3)+4 = 2+(3+4) = 9$ and $(2 \cdot 3) \cdot 4 = 2 \cdot (3 \cdot 4) = 24$, but $(2^3)^4 = 8^4$ (or 4,096) whereas $2^{(3^4)} = 2^{81}$ (or 2,417,851,639,229,258,349,412,352). When no parentheses are written, by convention the order is top-down, not bottom-up: $2^{3^4} = 2^{(3^4)}$, not $(2^3)^4$.

Several important formulas, sometimes called logarithmic identities or log laws, relate logarithms to one another.[a] The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the $p$-th power of a number is $p$ times the logarithm of the number itself; the logarithm of a $p$-th root is the logarithm of the number divided by $p$. In symbols:

    $\log_b(xy) = \log_b x + \log_b y, \qquad \log_b(x/y) = \log_b x - \log_b y,$

    $\log_b(x^p) = p \log_b x, \qquad \log_b \sqrt[p]{x} = \frac{\log_b x}{p}.$
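As an illustrative aside (added here, not part of the source article), the exponentiation facts above are easy to check in Python, whose `**` operator follows the same top-down (right-associative) convention as the mathematical notation:

    # Addition and multiplication commute; exponentiation does not.
    assert 2 + 3 == 3 + 2 and 2 * 3 == 3 * 2
    assert 2**3 == 8 and 3**2 == 9

    # Exponentiation is not associative: the two groupings differ.
    assert (2**3)**4 == 4096                       # bottom-up: 8^4
    assert 2**(3**4) == 2417851639229258349412352  # top-down: 2^81

    # Without parentheses, ** groups top-down, matching the convention.
    assert 2**3**4 == 2**(3**4)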
Each of the identities can be derived by substituting the logarithm definitions $x = b^{\log_b x}$ and/or $y = b^{\log_b y}$ into the left-hand sides.

The logarithm $\log_b x$ can be computed from the logarithms of $x$ and $b$ with respect to an arbitrary base $k$ using the following formula:

    $\log_b x = \frac{\log_k x}{\log_k b}.$

Typical scientific calculators calculate the logarithms to bases 10 and $e$.[8] Logarithms with respect to any base $b$ can be determined using either of these two logarithms by the previous formula:

    $\log_b x = \frac{\log_{10} x}{\log_{10} b} = \frac{\ln x}{\ln b}.$

Given a number $x$ and its logarithm $\log_b x$ to an unknown base $b$, the base is given by:

    $b = x^{1/\log_b x}.$

The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule[9] states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integer powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of an even number of hyperbolic sines.[10] The Gudermannian function gives a direct relationship between the trigonometric functions and the hyperbolic ones that does not involve complex numbers.

Formally, an identity is a true universally quantified formula of the form $\forall x_1, \ldots, x_n : s = t$, where $s$ and $t$ are terms with no other free variables than $x_1, \ldots, x_n$. The quantifier prefix $\forall x_1, \ldots, x_n$ is often left implicit when it is stated that the formula is an identity. For example, the axioms of a monoid are often given as the formulas

    $\forall x, y, z : x \cdot (y \cdot z) = (x \cdot y) \cdot z, \qquad \forall x : x \cdot 1 = x, \qquad \forall x : 1 \cdot x = x,$

or, shortly,

    $x \cdot (y \cdot z) = (x \cdot y) \cdot z, \qquad x \cdot 1 = x, \qquad 1 \cdot x = x.$

So, these formulas are identities in every monoid. As for any equality, the formulas without a quantifier are often called equations. In other words, an identity is an equation that is true for all values of the variables.[11][12]
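A quick numeric sketch of the log laws, the change-of-base formula, and the base-recovery formula in Python (again an illustration added here, not from the source; the values $x = 100$, $y = 7$, $p = 3$, and $b = 5$ are arbitrary choices; `math.log(x, b)` computes $\log_b x$):

    import math

    x, b = 100.0, 5.0

    # Change of base: log_b(x) = log_k(x) / log_k(b) for any valid base k.
    direct = math.log(x, b)                  # log base 5 of 100
    via_10 = math.log10(x) / math.log10(b)   # via base-10 logarithms
    via_e  = math.log(x) / math.log(b)       # via natural logarithms
    assert math.isclose(direct, via_10) and math.isclose(direct, via_e)

    # Product and power rules: log_b(xy) = log_b x + log_b y,
    # and log_b(x^p) = p * log_b x.
    y, p = 7.0, 3.0
    assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
    assert math.isclose(math.log(x**p, b), p * math.log(x, b))

    # Recovering an unknown base from x and log_b(x): b = x ** (1 / log_b(x)).
    assert math.isclose(x ** (1.0 / direct), b)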
https://en.wikipedia.org/wiki/Identity_(mathematics)