Normal-exponential-gamma distribution : In probability theory and statistics, the normal-exponential-gamma distribution (sometimes called the NEG distribution) is a three-parameter family of continuous probability distributions. It has a location parameter μ, a scale parameter θ, and a shape parameter k.
Normal-exponential-gamma distribution : The probability density function (pdf) of the normal-exponential-gamma distribution is proportional to f(x; μ, k, θ) ∝ exp((x − μ)² / (4θ²)) D_{−2k−1}(|x − μ| / θ), where D is a parabolic cylinder function. As for the Laplace distribution,...
Normal-exponential-gamma distribution : The distribution has heavy tails and a sharp peak at μ and, because of this, it has applications in variable selection.
Normal-exponential-gamma distribution : Compound probability distribution Lomax distribution == References ==
1993 North Korean census : The 1993 North Korean census (Korean: 1993년 조선민주주의인민공화국 인구일제조사) was a census conducted by the Central Bureau of Statistics of North Korea on 31 December 1993. It was the first census held in North Korea since the founding of the country in 1949, for which Beijing provided technical assistance...
1993 North Korean census : The census was inconsistent internally and in comparison to previous censuses. According to the American political economist Nicholas Eberstadt: "Quotation marks should attend the '1993' census because that enumeration was not actually conducted in 1993, but rather in early 1994, with respon...
1993 North Korean census : Demographics of North Korea 2008 North Korea Census == References ==
Northern and Western Region : The Northern and Western Region has been a region in Ireland since 1 January 2015. It is a NUTS Level II statistical region of Ireland (coded IE04). NUTS 2 Regions may be classified as less developed regions, transition regions, or more developed regions to determine eligibility for fundin...
Ogive (statistics) : In statistics, an ogive, also known as a cumulative frequency polygon, can refer to one of two things: any hand-drawn graphic of a cumulative distribution function, or any empirical cumulative distribution function. The points plotted as part of an ogive are the upper class limit and the corresponding ...
Ogive (statistics) : Along the horizontal axis, the limits of the class intervals for an ogive are marked. A point is placed above each limit at a height equal to either the absolute or relative cumulative frequency. The shape of an ogive is obtained by connecting each of the points to its neighbo...
Ogive (statistics) : Ogives, similarly to other representations of cumulative distribution functions, are useful for estimating centiles in a distribution. For example, we can find the central point such that 50% of the observations lie below it and 50% above. To do this, we draw a line from the point of 50%...
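The centile-reading procedure described above can be sketched numerically: given class upper limits and cumulative relative frequencies, reading the ogive at the 50% level amounts to inverse linear interpolation. The class limits and frequencies below are invented for illustration.

```python
import numpy as np

# Hypothetical grouped data: class upper limits and cumulative relative frequencies.
upper_limits = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
cum_rel_freq = np.array([0.10, 0.35, 0.60, 0.85, 1.00])

# Reading the ogive at the 50% level: linearly interpolate the inverse
# of the cumulative curve to estimate the median.
median_estimate = np.interp(0.50, cum_rel_freq, upper_limits)
print(median_estimate)
```

The same call with 0.25 or 0.75 in place of 0.50 estimates the quartiles.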
Ogive (statistics) : Dodge, Yadolah (2008). The Concise Encyclopedia of Statistics. Springer. p. 395.
OpenIntro Statistics : OpenIntro Statistics is an open-source textbook for introductory statistics, written by David Diez, Christopher Barr, and Mine Çetinkaya-Rundel. The textbook is available online as a free PDF, as LaTeX source and as a royalty-free paperback. == References ==
OpenSAFELY : OpenSAFELY is a secure analytics platform, interfacing to NHS patient records and enabling statistical analysis of them by medical researchers. The platform was originally a collaboration between DataLab at the University of Oxford, the EHR group at London School of Hygiene & Tropical Medicine, and electro...
OpenSAFELY : OpenSAFELY – official website
Optimality criterion : In statistics, an optimality criterion provides a measure of the fit of the data to a given hypothesis, to aid in model selection. A model is designated as the "best" of the candidate models if it gives the best value of an objective function measuring the degree of satisfaction of the criterion ...
Order of a kernel : In statistics, the order of a kernel is the degree of the first non-zero moment of a kernel.
Order of a kernel : Two major definitions of the order of a kernel appear in the literature:
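Under the first definition quoted above (the degree of the first non-zero moment), the order can be checked numerically. The sketch below uses the Gaussian kernel as an example: its first moment vanishes by symmetry and its second moment is 1, so its order is 2.

```python
import numpy as np
from scipy.integrate import quad

def gaussian_kernel(x):
    # Standard normal density used as a smoothing kernel.
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def kernel_moment(j, kernel):
    # j-th moment of the kernel, evaluated by numerical integration.
    val, _ = quad(lambda x: x**j * kernel(x), -np.inf, np.inf)
    return val

# The first non-zero moment of the Gaussian kernel is the second,
# so its order (under this definition) is 2.
moments = [kernel_moment(j, gaussian_kernel) for j in (1, 2)]
```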
Ordered subset expectation maximization : In mathematical optimization, the ordered subset expectation maximization (OSEM) method is an iterative method that is used in computed tomography. In applications in medical imaging, the OSEM method is used for positron emission tomography (PET), for single-photon emission com...
Ordered subset expectation maximization : Hudson, H.M., Larkin, R.S. (1994) "Accelerated image reconstruction using ordered subsets of projection data", IEEE Trans. Medical Imaging, 13 (4), 601–609 doi:10.1109/42.363108
Orthogonal signal correction : Orthogonal Signal Correction (OSC) is a spectral preprocessing technique that removes variation from a data matrix X that is orthogonal to the response matrix Y. OSC was introduced by researchers at Umeå University in 1998 and has since found applications in domains including metab...
Outliers ratio : In objective video quality assessment, the outliers ratio (OR) is a measure of the performance of an objective video quality metric. It is the ratio of "false" scores given by the objective metric to the total number of scores. The "false" scores are the scores that lie outside the interval [ MOS − 2 σ...
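The definition above translates directly into a short computation. The MOS values, per-stimulus standard deviations, and metric predictions below are invented for illustration.

```python
import numpy as np

# Hypothetical data: subjective MOS, per-stimulus standard deviation of the
# subject ratings, and predicted scores from an objective quality metric.
mos   = np.array([3.0, 4.2, 2.5, 4.8, 3.6])
sigma = np.array([0.4, 0.3, 0.5, 0.2, 0.4])
pred  = np.array([3.1, 3.2, 2.6, 4.7, 4.9])

# A score is "false" if it lies outside [MOS - 2*sigma, MOS + 2*sigma].
false_scores = np.abs(pred - mos) > 2 * sigma
outliers_ratio = false_scores.sum() / len(mos)
```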
Owen's T function : In mathematics, Owen's T function T(h, a), named after statistician Donald Bruce Owen, is defined by T(h, a) = (1/(2π)) ∫_0^a [e^(−h²(1 + x²)/2) / (1 + x²)] dx, for −∞ < h, a < +∞. The function was first introduced by Owen in 1956.
Owen's T function : The function T(h, a) gives the probability of the event (X > h and 0 < Y < aX) where X and Y are independent standard normal random variables. This function can be used to calculate bivariate normal distribution probabilities and, from there, in the calculation of multivariate normal distribution pr...
Owen's T function : T(h, 0) = 0; T(0, a) = (1/(2π)) arctan(a); T(−h, a) = T(h, a); T(h, −a) = −T(h, a); T(h, a) + T(ah, 1/a) = ½(Φ(h) + Φ(ah)) − Φ(h)Φ(ah) for a ≥ 0, and ½(Φ(h) + Φ(ah)) − Φ(h)Φ(ah) − ½ for a < 0, where Φ is the standard normal cumulative distribution function. T(h, ...
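The defining integral can be evaluated directly by numerical quadrature, which also lets us check two of the identities above, T(h, 0) = 0 and T(0, a) = arctan(a)/(2π). This is a minimal sketch, not an efficient evaluation scheme.

```python
import numpy as np
from scipy.integrate import quad

def owens_t(h, a):
    # Direct numerical evaluation of Owen's defining integral.
    integrand = lambda x: np.exp(-0.5 * h**2 * (1 + x**2)) / (1 + x**2)
    val, _ = quad(integrand, 0, a)
    return val / (2 * np.pi)

# Two identities: T(h, 0) = 0 and T(0, a) = arctan(a) / (2*pi).
t1 = owens_t(1.5, 0.0)
t2 = owens_t(0.0, 2.0)
```

For production use, scipy.special provides an owens_t implementation based on specialised algorithms rather than generic quadrature.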
Owen's T function : Owen's T function (user web site) - offers C++, FORTRAN77, FORTRAN90, and MATLAB libraries released under the LGPL license. Owen's T function has been implemented in Mathematica since version 8, as OwenT.
Owen's T function : Why You Should Care about the Obscure (Wolfram blog post)
Parallel analysis : Parallel analysis, also known as Horn's parallel analysis, is a statistical method used to determine the number of components to keep in a principal component analysis or factors to keep in an exploratory factor analysis. It is named after psychologist John L. Horn, who created the method, publishin...
Parallel analysis : Parallel analysis is regarded as one of the more accurate methods for determining the number of factors or components to retain. In particular, unlike early approaches to dimensionality estimation (such as examining scree plots), parallel analysis has the virtue of an objective decision criterion. S...
Parallel analysis : Parallel analysis has been implemented in JASP, SPSS, SAS, STATA, and MATLAB and in multiple packages for the R programming language, including the psych, multicon, hornpa, and paran packages. Parallel analysis can also be conducted in Mplus version 8.0 onward.
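The core of Horn's procedure can be sketched in a few lines: retain components whose observed eigenvalues exceed a chosen quantile of eigenvalues obtained from random data of the same shape. The data below (one strong common factor across six variables) are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_analysis(data, n_iter=200, quantile=95):
    # Horn's parallel analysis: retain components whose observed correlation-
    # matrix eigenvalue exceeds the chosen quantile of eigenvalues computed
    # from random normal data of the same shape.
    n, p = data.shape
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_eig[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    threshold = np.percentile(rand_eig, quantile, axis=0)
    return int(np.sum(obs_eig > threshold))

# Simulated data with one strong common factor shared by 6 variables.
n, p = 300, 6
factor = rng.standard_normal((n, 1))
data = factor + 0.5 * rng.standard_normal((n, p))

n_components = parallel_analysis(data)
```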
Parallel analysis : Scree plot Exploratory factor analysis § Selecting the appropriate number of factors Marchenko-Pastur distribution == References ==
Parity plot : A parity plot is a scatterplot that compares a set of results from a computational model against benchmark data. Each point has coordinates (x, y), where x is a benchmark value and y is the corresponding value from the model. A line of the equation y = x, representing perfect model performance, is sometim...
Parity plot : Q–Q plot == References ==
Per capita : Per capita is a Latin phrase literally meaning "by heads" or "for each head", and idiomatically used to mean "per person". The term is used in a wide variety of social sciences and statistical research contexts, including government statistics, economic indicators, and built environment studies. It is comm...
Per capita : Per capita income == References ==
Per-comparison error rate : In statistics, per-comparison error rate (PCER) is the probability of a Type I error in the absence of any multiple hypothesis testing correction. This is a liberal error rate relative to the false discovery rate and family-wise error rate, in that it is always less than or equal to those ra...
Poisson sampling : In survey methodology, Poisson sampling (sometimes denoted as PO sampling) is a sampling process where each element of the population is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample. Each element of the population may have a diff...
Poisson sampling : Mathematically, the first-order inclusion probability of the ith element of the population is denoted by the symbol π_i, and the second-order inclusion probability (the probability that a pair consisting of the ith and jth elements of the population is included in a sample during the drawing of a singl...
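The scheme is simple to simulate: one independent Bernoulli trial per element, with success probability π_i. The inclusion probabilities below are invented for illustration; because the trials are independent, the second-order inclusion probability of a pair (i, j) is just π_i · π_j.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical first-order inclusion probabilities pi_i for a population of 6.
pi = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])

def poisson_sample(pi, rng):
    # One independent Bernoulli trial per element of the population.
    return np.nonzero(rng.random(pi.shape) < pi)[0]

sample = poisson_sample(pi, rng)

# By independence, the second-order inclusion probability of elements 0 and 1.
pi_01 = pi[0] * pi[1]
```

Note that the sample size itself is random, with expectation equal to the sum of the π_i.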
Poisson sampling : Bernoulli sampling Poisson distribution Poisson process Sampling design == References ==
Poly-Weibull distribution : In probability theory and statistics, the poly-Weibull distribution is a continuous probability distribution. It is defined as the distribution of the smallest of a number of statistically independent random variables having non-identical Weibull distribu...
Polykay : In statistics, a polykay, or generalised k-statistic (denoted k_{r,s}), is a statistic defined as a linear combination of sample moments.
Polykay : The word polykay was coined by American mathematician John Tukey in 1956, from poly, "many" or "much", and kay, the phonetic spelling of the letter "k", as in k-statistic. == References ==
Pooled analysis : A pooled analysis is a statistical technique for combining the results of multiple epidemiological studies. It is one of three types of literature reviews frequently used in epidemiology, along with meta-analysis and traditional narrative reviews. Pooled analyses may be either retrospective or prospec...
Population projection : A population projection, in the field of demography, is an estimate of a future population. It is usually based on current population estimates derived from the most recent census plus a projection of possible changes based on assumptions of future births, deaths, and any migration into or out o...
Population projection : Census Population growth Projections of population growth == References ==
Population reconstruction : Population reconstruction is a method used by historical demographers. Using records, such as church registries, the size and composition of families living in a given region in a given past time is determined. This allows the identification and analysis of patterns of family formation, fert...
Population reconstruction : Paul-André Rosental, The Novelty of an Old Genre: Louis Henry and the Founding of Historical Demography, Population (English edition), Volume 58 –2003/1
Population study : Population study is an interdisciplinary field of scientific study that uses various statistical methods and models to analyse, determine, address, and predict population challenges and trends from data collected through various data collection methods such as population census, registration method, ...
Population study : Population study entry in the public domain NCI Dictionary of Cancer Terms
Portmanteau test : A portmanteau test is a type of statistical hypothesis test in which the null hypothesis is well specified, but the alternative hypothesis is more loosely specified. Tests constructed in this context can have the property of being at least moderately powerful against a wide range of departures from t...
Portmanteau test : In time series analysis, two well-known versions of a portmanteau test are available for testing for autocorrelation in the residuals of a model: they test whether any of a group of autocorrelations of the residual time series are different from zero. One such test is the Ljung–Box test, which is an impro...
Portmanteau test : Enders, W. (1995). Applied Econometric Time Series. New York: John Wiley & Sons. pp. 86–87. ISBN 0471039411.
Predictive informatics : Predictive informatics (PI) is the combination of predictive modeling and informatics applied to healthcare, pharmaceutical, life sciences and business industries. Predictive informatics enables researchers, analysts, physicians and decision-makers to aggregate and analyze disparate types of da...
Predictive informatics : Predictive analytics Informatics (academic field) Predictive modeling Biomedical informatics Chemoinformatics
Predictive informatics : Christophe Giraud-Carrier, Burdette Pixton, and Roberto A. Rocha. (2009) "Bariatric surgery performance: A predictive informatics case study". Intell. Data Anal., 13 (5), 741–754. Krohn R. (2008) "Predictive informatics. Why PI is the next great opportunity in healthcare", J Healthc Inf Manag, ...
Predictive informatics : Predictive Informatics: What Is Its Place in Healthcare? Christophe G Giraud-Carrier (2009), Brigham Young University
PRESS statistic : In statistics, the predicted residual error sum of squares (PRESS) is a form of cross-validation used in regression analysis to provide a summary measure of the fit of a model to a sample of observations that were not themselves used to estimate the model. It is calculated as the sum of squares of the...
PRESS statistic : Instead of fitting only one model on all data, leave-one-out cross-validation is used to fit N models (on N observations) where for each model one data point is left out from the training set. The out-of-sample predicted value is calculated for the omitted observation in each case, and the PRESS stati...
PRESS statistic : Given this procedure, the PRESS statistic can be calculated for a number of candidate model structures for the same dataset, with the lowest values of PRESS indicating the best structures. Models that are over-parameterised (over-fitted) would tend to give small residuals for observations included in ...
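The leave-one-out procedure described above can be sketched directly for ordinary least squares: refit the model n times, each time predicting the held-out observation, and sum the squared prediction errors. The data here are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def press(X, y):
    # PRESS: refit the linear model n times by leave-one-out, predict the
    # held-out observation each time, and sum the squared prediction errors.
    n = len(y)
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        total += (y[i] - X[i] @ beta) ** 2
    return total

# Simulated data: y roughly linear in x, with an intercept column in X.
x = np.linspace(0, 10, 30)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 0.5 * x + 0.3 * rng.standard_normal(30)

press_value = press(X, y)
```

Since each leave-one-out residual equals the ordinary residual inflated by 1/(1 − h_ii), PRESS is always at least as large as the in-sample residual sum of squares.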
PRESS statistic : Model selection Stepwise regression Cross-validation (statistics) == References ==
Preventable fraction for the population : In epidemiology, the preventable fraction for the population (PFp) is the proportion of incidents in the population that could be prevented by exposing the whole population. It is calculated as PFp = (I_p − I_e) / I_p, where I_e is the incidence in the exposed gr...
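The formula is a one-line computation. The incidence values below are invented for illustration.

```python
# Hypothetical incidences: in the whole population (I_p) and among the exposed (I_e).
incidence_population = 0.05   # I_p
incidence_exposed    = 0.02   # I_e

# PFp = (I_p - I_e) / I_p : the proportion of incidents in the population
# that could be prevented by exposing the whole population.
pfp = (incidence_population - incidence_exposed) / incidence_population
```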
Preventable fraction for the population : Population Impact Measures Preventable fraction among the unexposed == References ==
Principal geodesic analysis : In geometric data analysis and statistical shape analysis, principal geodesic analysis is a generalization of principal component analysis to a non-Euclidean, non-linear setting of manifolds suitable for use with shape descriptors such as medial representations.
Principal geodesic analysis : Principal Geodesic Analysis for the Study of Nonlinear Statistics of Shape Probabilistic Principal Geodesic Analysis Kernel Principal Geodesic Analysis Mixture Probabilistic Principal Geodesic Analysis
Principal response curve : In multivariate statistics, principal response curves (PRC) are used for analysis of treatment effects in experiments with a repeated measures design. First developed as a special form of redundancy analysis, PRC allow temporal trends in control treatments to be corrected for, which allows th...
Principal stratification : Principal stratification is a statistical technique used in causal inference when adjusting results for post-treatment covariates. The idea is to identify underlying strata and then compute causal effects only within strata. It is a generalization of the local average treatment effect (LATE) ...
Principal stratification : An example of principal stratification is where there is attrition in a randomized controlled trial. With a binary post-treatment covariate (e.g. attrition) and a binary treatment (e.g. "treatment" and "control") there are four possible strata in which subjects could be: those who always stay...
Principal stratification : Instrumental variable Rubin causal model
Principal stratification : Frangakis, Constantine E.; Rubin, Donald B. (March 2002). "Principal stratification in causal inference". Biometrics. 58 (1): 21–9. doi:10.1111/j.0006-341X.2002.00021.x. PMC 4137767. PMID 11890317. Preprint Zhang, Junni L.; Rubin, Donald B. (2003) "Estimation of Causal Effects via Principal S...
Probabilistic proposition : A probabilistic proposition is a proposition with a measured probability of being true for an arbitrary person at an arbitrary time. They may be contrasted with deterministic propositions, which assert that something is certain with no element of chance. Probabilistic propositions may be eith...
Probability of error : In statistics, the term "error" arises in two ways. Firstly, it arises in the context of decision making, where the probability of error may be considered as being the probability of making a wrong decision and which would have a different value for each type of error. Secondly, it arises in the ...
Probability of error : In hypothesis testing in statistics, two types of error are distinguished. Type I errors which consist of rejecting a null hypothesis that is true; this amounts to a false positive result. Type II errors which consist of failing to reject a null hypothesis that is false; this amounts to a false n...
Probability of error : The fitting of many models in statistics and econometrics usually seeks to minimise the difference between observed and predicted or theoretical values. This difference is known as an error, though when observed it would be better described as a residual. The error is taken to be a random variabl...
Probability of error : https://www.bartleby.com/subject/engineering/electrical-engineering/concepts/probability-of-error
Probability-proportional-to-size sampling : In survey methodology, probability-proportional-to-size (pps) sampling is a sampling process where each element of the population (of size N) has some (independent) chance p i to be selected to the sample when performing one draw. This p i is proportional to some known quan...
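For a single draw, the selection probabilities are simply the size measures normalised to sum to one. The size measures below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical size measures for a population of 5 elements.
sizes = np.array([10.0, 20.0, 30.0, 15.0, 25.0])

# For one draw, each element's selection probability p_i is
# proportional to its size measure.
p = sizes / sizes.sum()

draw = rng.choice(len(sizes), p=p)
```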
Probability-proportional-to-size sampling : Bernoulli sampling Poisson distribution Poisson process Sampling design == References ==
Proper linear model : In statistics, a proper linear model is a linear regression model in which the weights given to the predictor variables are chosen in such a way as to optimize the relationship between the prediction and the criterion. Simple regression analysis is the most common example of a proper linear model....
Proper linear model : Dawes, R. M. (1979). "The robust beauty of improper linear models in decision making". American Psychologist. 34 (7): 571–582. doi:10.1037/0003-066X.34.7.571. S2CID 14428212.
Proxy (statistics) : In statistics, a proxy or proxy variable is a variable that is not in itself directly relevant, but that serves in place of an unobservable or immeasurable variable. In order for a variable to be a good proxy, it must have a close correlation, not necessarily linear, with the variable of interest. ...
Proxy (statistics) : In social sciences, proxy measurements are often required to stand in for variables that cannot be directly measured. This process of standing in is also known as operationalization. Per-capita gross domestic product (GDP) is often used as a proxy for measures of standard of living or quality of li...
Proxy (statistics) : Instrumental variable Latent variable Operationalization Proxy (climate)
Proxy (statistics) : Toutenburg, Helge; Götz Trenkler (1992). "Proxy variables and mean square error dominance in linear regression". Journal of Quantitative Economics. 8: 433–442. Stahlecker, Peter; Götz Trenkler (1993). "Some further results on the use of proxy variables in prediction". The Review of Economics and St...
Pseudomedian : In statistics, the pseudomedian is a measure of centrality for data-sets and populations. It agrees with the median for symmetric data-sets or populations. In mathematical statistics, the pseudomedian is also a location parameter for probability distributions.
Pseudomedian : The pseudomedian of a distribution F is defined to be a median of the distribution of (Z_1 + Z_2)/2, where Z_1 and Z_2 are independent, each with the same distribution F. When F is a symmetric distribution, the pseudomedian coincides with the median; otherwise this is not generally the ...
Pseudomedian : In signal processing there is another definition of pseudomedian filter for discrete signals. For a time series of length 2N + 1, the pseudomedian is defined as follows. Construct N + 1 sliding windows each of length N + 1. For each window, compute the minimum and maximum. Across all N + 1 windows, find ...
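For a finite sample, the distributional definition above has a natural sample analogue: the median of the pairwise Walsh averages (x_i + x_j)/2. This is a minimal sketch (using pairs with i ≤ j, one common convention); note how a single extreme value barely moves the result.

```python
import numpy as np

def pseudomedian(x):
    # Sample pseudomedian: the median of all pairwise Walsh averages
    # (x_i + x_j) / 2 over pairs with i <= j.
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))
    return np.median((x[i] + x[j]) / 2)

data = [1.0, 2.0, 3.0, 4.0, 100.0]
pm = pseudomedian(data)   # barely affected by the outlier 100.0
```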
Pseudomedian : Hodges–Lehmann estimator Median filter Lulu smoothing == References ==
Quartile coefficient of dispersion : In statistics, the quartile coefficient of dispersion (QCD) is a descriptive statistic which measures dispersion and is used to make comparisons within and between data sets. Since it is based on quantile information, it is less sensitive to outliers than measures such as the coeffi...
Quartile coefficient of dispersion : Consider the following two data sets: A: n = 7, range = 12, mean = 8, median = 8, Q1 = 4, Q3 = 12, quartile coefficient of dispersion = 0.5. B: n = 7, range = 1.2, mean = 2.4, median = 2.4, Q1 = 2, Q3 = 2.9, quartile coefficient of dispersion ≈ 0.18. The quartile coefficient of di...
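The values for data sets A and B above follow from the standard formula QCD = (Q3 − Q1) / (Q3 + Q1):

```python
def quartile_coefficient_of_dispersion(q1, q3):
    # QCD = (Q3 - Q1) / (Q3 + Q1)
    return (q3 - q1) / (q3 + q1)

qcd_a = quartile_coefficient_of_dispersion(4, 12)    # data set A -> 0.5
qcd_b = quartile_coefficient_of_dispersion(2, 2.9)   # data set B -> ~0.18
```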
Quartile coefficient of dispersion : Robust measures of scale Coefficient of variation Interquartile range Median absolute deviation == References ==
Quasi-identifier : Quasi-identifiers are pieces of information that are not of themselves unique identifiers, but are sufficiently well correlated with an entity that they can be combined with other quasi-identifiers to create a unique identifier. Quasi-identifiers can thus, when combined, become personally identifying...
Quasi-identifier : De-identification Differential privacy Personally identifying information == References ==
Raking : Raking (also called "raking ratio estimation" or "iterative proportional fitting") is the statistical process of adjusting data sample weights of a contingency table to match desired marginal totals. == References ==
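The iterative proportional fitting at the heart of raking alternates between scaling rows and scaling columns until both margins match the desired totals. The sample table and target margins below are invented for illustration.

```python
import numpy as np

def rake(table, row_targets, col_targets, n_iter=100):
    # Iterative proportional fitting: alternately rescale rows and columns
    # of the table until its margins match the desired totals.
    t = table.astype(float).copy()
    for _ in range(n_iter):
        t *= (row_targets / t.sum(axis=1))[:, None]
        t *= (col_targets / t.sum(axis=0))[None, :]
    return t

sample = np.array([[30.0, 20.0],
                   [10.0, 40.0]])
row_targets = np.array([60.0, 40.0])   # desired row margins
col_targets = np.array([50.0, 50.0])   # desired column margins

adjusted = rake(sample, row_targets, col_targets)
```

Convergence requires the two sets of targets to share the same grand total (here 100).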
Random modulation : In the theories of modulation and of stochastic processes, random modulation is the creation of a new signal from two other signals by the process of quadrature amplitude modulation. In particular, the two signals are considered as being random processes. For applications, the two original signals n...
Random modulation : The random modulation procedure starts with two stochastic baseband signals, x_c(t) and x_s(t), whose frequency spectrum is non-zero only for f ∈ [−B/2, B/2]. It applies quadrature modulation to combine these with a carrier frequency f_0 (with f_0 > B/2) to form the...
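The quadrature combination of the two baseband signals with the carrier can be sketched in discrete time as y(t) = x_c(t)·cos(2πf_0·t) − x_s(t)·sin(2πf_0·t). The sampling rate, carrier, and white-noise baseband signals below are invented for illustration (real baseband signals would be band-limited to [−B/2, B/2]).

```python
import numpy as np

rng = np.random.default_rng(7)

fs = 1000.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f0 = 100.0                        # carrier frequency, Hz (assumed)

# Two stochastic baseband signals (white noise stand-ins for illustration).
x_c = rng.standard_normal(t.shape)
x_s = rng.standard_normal(t.shape)

# Quadrature amplitude modulation onto the carrier.
y = x_c * np.cos(2 * np.pi * f0 * t) - x_s * np.sin(2 * np.pi * f0 * t)
```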
Random modulation : Papoulis, Athanasios; Pillai, S. Unnikrishna (2002). "Random walks and other applications". Probability, random variables and stochastic processes (4th ed.). McGraw-Hill Higher Education. pp. 463–473. Scarano, Gaetano (2009). Segnali, Processi Aleatori, Stima (in Italian). Centro Stampa d'Ateneo. Pa...
Rank mobility index : In demographics, the rank mobility index (RMI) is a measure of a city's change in population rank among a group of cities. Formally, RMI = (R1 − R2) / (R1 + R2), where R1 = city's rank at time 1 and R2 = city's rank at time 2. An RMI value must be between −1 and 1. An RMI of 0 indicates no change...
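The formula is a one-liner; since rank 1 is the highest, a positive RMI means the city's rank number fell, i.e. it rose in the ordering.

```python
def rank_mobility_index(r1, r2):
    # RMI = (R1 - R2) / (R1 + R2); bounded by -1 and 1, zero when the
    # rank is unchanged, positive when the city rises in rank.
    return (r1 - r2) / (r1 + r2)
```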
Rate ratio : In epidemiology, a rate ratio, sometimes called an incidence density ratio or incidence rate ratio, is a relative difference measure used to compare the incidence rates of events occurring at any given point in time. It is defined as: rate ratio = incidence rate 1 / incidence rate 2, where incidence rate is...
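With incidence rate defined as events per unit of person-time at risk, the computation is direct. The cohort counts below are invented for illustration.

```python
# Hypothetical cohort data: events and person-time at risk in two groups.
events_1, person_time_1 = 30, 1000.0   # exposed group
events_2, person_time_2 = 10, 1000.0   # unexposed group

rate_1 = events_1 / person_time_1      # incidence rate 1
rate_2 = events_2 / person_time_2      # incidence rate 2
rate_ratio = rate_1 / rate_2           # events 3x as frequent in group 1
```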
Rate ratio : Odds ratio Ratio Risk ratio == References ==
Rational quadratic covariance function : In statistics, the rational quadratic covariance function is used in spatial statistics, geostatistics, machine learning, image analysis, and other fields where multivariate statistical analysis is conducted on metric spaces. It is commonly used to define the statistical covaria...
Rayleigh test : Rayleigh test can refer to: a test for periodicity in irregularly sampled data, a derivation of the above to test for non-uniformity (as unimodal clustering) of a set of points on a circle (e.g. compass directions), sometimes known as the Rayleigh z test.