Composite measure : Indexes are often referred to as scales, but in fact not all indexes are scales. Whereas indexes are usually created by aggregating scores assigned to individual attributes of various variables, scales are more nuanced and take into account differences in intensity among the attributes of the same va... |
Composite measure : A good composite measure will ensure that the indicators are independent of one another. It should also successfully predict other indicators of the variable. == References == |
Concurrent validity : Concurrent validity is a type of evidence that can be gathered to defend the use of a test for predicting other outcomes. It is a parameter used in sociology, psychology, and other psychometric or behavioral sciences. Concurrent validity is demonstrated when a test correlates well with a measure t... |
Concurrent validity : Concurrent validity and predictive validity are two types of criterion-related validity. The difference between concurrent validity and predictive validity rests solely on the time at which the two measures are administered. Concurrent validity applies to validation studies in which the two measur... |
Concurrent validity : Concurrent validity differs from convergent validity in that it focuses on the power of the focal test to predict outcomes on another test or some outcome variable. Convergent validity refers to the observation of strong correlations between two tests that are assumed to measure the same construct... |
Concurrent validity : Construct validity Discriminant validity == References == |
Conditional change model : The conditional change model in statistics is the analytic procedure in which change scores are regressed on baseline values, together with the explanatory variables of interest (often including indicators of treatment groups). The method has some substantial advantages over the usual two-sam... |
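A minimal sketch of the procedure in Python, assuming NumPy and fully simulated data (the baseline distribution, the treatment effect of 5, and the regression-to-the-mean slope of 0.6 are all illustrative choices, not part of the method):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
baseline = rng.normal(50, 10, n)      # pre-treatment scores
treated = rng.integers(0, 2, n)       # hypothetical treatment indicator
# simulated follow-up: regression toward the mean plus a treatment effect of 5
followup = 20 + 0.6 * baseline + 5 * treated + rng.normal(0, 5, n)

# conditional change model: regress change scores on baseline and treatment
change = followup - baseline
X = np.column_stack([np.ones(n), baseline, treated])
coef, *_ = np.linalg.lstsq(X, change, rcond=None)
print(coef)  # intercept ~ 20, baseline slope ~ -0.4, treatment effect ~ 5
```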
Conditional change model : Plewis, I. (1985). Analysing Change: Measurement and Explanation Using Longitudinal Data. Wiley. ISBN 0-471-10444-2. Aickin, M. (2009). "Dealing With Change: Using the Conditional Change Model for Clinical Research". The Permanente Journal. 13 (2): 80–84. doi:10.7812/TPP/08-070. PMC 3034438. ... |
Conditional variance : In probability theory and statistics, a conditional variance is the variance of a random variable given the value(s) of one or more other variables. Particularly in econometrics, the conditional variance is also known as the scedastic function or skedastic function. Conditional variances are impo... |
Conditional variance : The conditional variance of a random variable Y given another random variable X is $\operatorname{Var}(Y\mid X)=\operatorname{E}\left((Y-\operatorname{E}(Y\mid X))^{2}\mid X\right)$. The conditional variance tells us how much variance is left if we use $\operatorname{E}(Y\mid X)$ to ... |
Conditional variance : Recall that variance is the expected squared deviation between a random variable (say, Y) and its expected value. The expected value can be thought of as a reasonable prediction of the outcomes of the random experiment (in particular, the expected value is the best constant prediction when predic... |
Conditional variance : The law of total variance says $\operatorname{Var}(Y)=\operatorname{E}(\operatorname{Var}(Y\mid X))+\operatorname{Var}(\operatorname{E}(Y\mid X))$. In words: the variance of Y is the sum of the expected conditional variance of Y given X and the variance of t... |
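The law of total variance is easy to check by simulation. A minimal Monte Carlo sketch, assuming a toy model in which X is uniform on {1, 2, 3} and Y given X = x is Normal with mean 2x and standard deviation x:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(1, 4, 100_000)    # X uniform on {1, 2, 3}
y = rng.normal(2.0 * x, x)         # Y | X = x  ~  Normal(2x, sd = x)

e_cond_var = np.mean(x.astype(float) ** 2)   # E(Var(Y|X)), since Var(Y|X=x) = x^2
var_cond_mean = np.var(2.0 * x)              # Var(E(Y|X)), since E(Y|X=x) = 2x
print(e_cond_var + var_cond_mean)            # ~ 7.33
print(np.var(y))                             # matches, up to Monte Carlo error
```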
Conditional variance : Mixed model Random effects model |
Conditional variance : Casella, George; Berger, Roger L. (2002). Statistical Inference (Second ed.). Wadsworth. pp. 151–52. ISBN 0-534-24312-6. |
Consecutive sampling : In the design of experiments, consecutive sampling, also known as total enumerative sampling, is a sampling technique in which every subject meeting the criteria of inclusion is selected until the required sample size is achieved. Along with convenience sampling and snowball sampling, consecutive... |
Constraint (information theory) : Constraint in information theory is the degree of statistical dependence between or among variables. Garner provides a thorough discussion of various forms of constraint (internal constraint, external constraint, total constraint) with application to pattern recognition and psychology. |
Constraint (information theory) : Mutual Information Total Correlation Interaction information == References == |
Correct sampling : During sampling of granular materials (whether airborne, suspended in liquid, aerosol, or aggregated), correct sampling is defined in Gy's sampling theory as a sampling scenario in which all particles in a population have the same probability of ending up in the sample. The concentration of the prope... |
Correct sampling : Particle filter Particle in a box Particulate matter sampler Statistical sampling Gy's sampling theory == References == |
Cox process : In probability theory, a Cox process, also known as a doubly stochastic Poisson process, is a point process which is a generalization of a Poisson process in which the intensity, which varies across the underlying mathematical space (often space or time), is itself a stochastic process. The process is named afte... |
Cox process : Let ξ be a random measure. A random measure η is called a Cox process directed by ξ if $\mathcal{L}(\eta \mid \xi =\mu )$ is a Poisson process with intensity measure μ. Here, $\mathcal{L}(\eta \mid \xi =\mu )$ is the conditional distribution of η, given ξ = μ. |
Cox process : If η is a Cox process directed by ξ, then η has the Laplace transform $\mathcal{L}_{\eta }(f)=\exp \left(-\int \left(1-\exp(-f(x))\right)\,\xi (\mathrm{d} x)\right)$ for any positive, measurable function f. |
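The simplest concrete example of this definition is a mixed Poisson process, in which the directing measure is a random level Λ times Lebesgue measure on an interval. A minimal simulation sketch (the Gamma distribution for Λ and the window [0, T] are illustrative assumptions, not part of the definition):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 10.0

# directing measure: xi = Lambda * Lebesgue measure on [0, T]
lam = rng.gamma(shape=3.0, scale=1.0)   # random intensity level Lambda
# given Lambda, the process is an ordinary Poisson process with rate Lambda
n_points = rng.poisson(lam * T)
points = np.sort(rng.uniform(0.0, T, n_points))
print(lam, points)
```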
Cox process : Poisson hidden Markov model Doubly stochastic model Inhomogeneous Poisson process, where λ(t) is restricted to a deterministic function Ross's conjecture Gaussian process Mixed Poisson process |
Cox process : Notes Bibliography Cox, D. R. and Isham, V. Point Processes, London: Chapman & Hall, 1980 ISBN 0-412-21910-7 Donald L. Snyder and Michael I. Miller Random Point Processes in Time and Space Springer-Verlag, 1991 ISBN 0-387-97577-2 (New York) ISBN 3-540-97577-2 (Berlin) |
Criterion validity : In psychometrics, criterion validity, or criterion-related validity, is the extent to which an operationalization of a construct, such as a test, relates to, or predicts, a theoretically related behaviour or outcome — the criterion. Criterion validity is often divided into concurrent and predictive... |
Criterion validity : Construct validity Content validity Discriminant validity (divergent validity) Face validity Test validity Validity (statistics) |
Criterion validity : A page detailing multiple validity points |
Cross-lagged panel model : The cross-lagged panel model is a type of discrete time structural equation model used to analyze panel data in which two or more variables are repeatedly measured at two or more different time points. This model aims to estimate the directional effects that one variable has on another at dif... |
Cross-sectional regression : In statistics and econometrics, a cross-sectional regression is a type of regression in which the explained and explanatory variables are all associated with the same single period or point in time. This type of cross-sectional analysis is in contrast to a time-series regression or longitud... |
Cross-sectional regression : Linear regression Regression analysis |
Cross-sectional regression : Andrews, D. W. K. (2005). "Cross-Section Regression with Common Shocks" (PDF). Econometrica. 73 (5): 1551. doi:10.1111/j.1468-0262.2005.00629.x. Preprint Wooldridge, Jeffrey M. (2009). "Part 1: Regression Analysis with Cross Sectional Data". Introductory econometrics: a modern approach (4th... |
Cross-sectional regression : A Review of Cross Sectional Regression for Financial Data Lecture notes by Gary Koop, Department of Economics, University of Strathclyde |
Cross-sequential study : A cross-sequential design is a research method that combines both a longitudinal design and a cross-sectional design. It aims to correct for some of the problems inherent in the cross-sectional and longitudinal designs. In a cross-sequential design (also called an "accelerated longitudinal" or ... |
Cumulative flow diagram : A cumulative flow diagram is a tool used in queuing theory. It is an area graph that depicts the quantity of work in a given state, showing arrivals, time in queue, quantity in queue, and departure. According to the Project Management Body of Knowledge (7th edition) by the Project Management I... |
Cumulative flow diagram : Morris, Peter W. G. (1997) [1994]. The Management of Projects. Thomas Telford. ISBN 978-0-7277-2593-6. Project Management Institute (2021). A guide to the project management body of knowledge (PMBOK guide). Project Management Institute (7th ed.). Newtown Square, PA. ISBN 978-1-62825-664-2.: CS... |
Cunningham function : In statistics, the Cunningham function or Pearson–Cunningham function ωm,n(x) is a generalisation of a special function introduced by Pearson (1906) and studied in the form here by Cunningham (1908). It can be defined in terms of the confluent hypergeometric function U, by $\omega_{m,n}(x) = e^{-x+\,}$... |
Cunningham function : Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 13". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972... |
Dasgupta's objective : In the study of hierarchical clustering, Dasgupta's objective is a measure of the quality of a clustering, defined from a similarity measure on the elements to be clustered. It is named after Sanjoy Dasgupta, who formulated it in 2016. Its key property is that, when the similarity comes from an u... |
Data binning : Data binning, also called data discrete binning or data bucketing, is a data pre-processing technique used to reduce the effects of minor observation errors. The original data values which fall into a given small interval, a bin, are replaced by a value representative of that interval, often a central va... |
Data binning : Histograms are an example of data binning used in order to observe underlying frequency distributions. They typically occur in one-dimensional space and in equal intervals for ease of visualization. Data binning may be used when small instrumental shifts in the spectral dimension from mass spectrometry (... |
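A minimal sketch of the technique in Python, assuming NumPy and equal-width bins, with each observation replaced by the centre of its bin as the representative value:

```python
import numpy as np

rng = np.random.default_rng(3)
values = rng.normal(100.0, 2.0, 1000)   # noisy measurements

edges = np.linspace(values.min(), values.max(), 11)   # 10 equal-width bins
idx = np.clip(np.digitize(values, edges) - 1, 0, 9)   # bin index per value
centres = (edges[:-1] + edges[1:]) / 2
binned = centres[idx]                   # each value replaced by its bin centre
print(np.unique(binned))                # at most 10 representative values
```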
Data binning : Binning (disambiguation) Censoring (statistics) Discretization of continuous features Grouped data Histogram Level of measurement Quantization (signal processing) Rounding == References == |
Data generating process : In statistics and in empirical sciences, a data generating process is a process in the real world that "generates" the data one is interested in. This process encompasses the underlying mechanisms, factors, and randomness that contribute to the production of observed data. Usually, scholars do... |
Data generating process : https://stats.stackexchange.com/questions/443320/what-does-a-data-generating-process-dgp-actually-mean |
Decile : In descriptive statistics, a decile is any of the nine values that divide the sorted data into ten equal parts, so that each part represents 1/10 of the sample or population. A decile is one possible form of a quantile; others include the quartile and percentile. A decile rank arranges the data in order from l... |
Decile : A moderately robust measure of central tendency, known as the decile mean, can be computed by making use of a sample's deciles D1 to D9 (D1 = 10th percentile, D2 = 20th percentile and so on). It is calculated as follows: $DM={\frac {\sum _{i=1}^{9}D_{i}}{9}}$ Apart from serving as an alternative for the mean and... |
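A short sketch of the decile-mean computation on simulated data, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
sample = rng.normal(0, 1, 500)

# deciles D1..D9 are the 10th..90th percentiles
deciles = np.percentile(sample, np.arange(10, 100, 10))
decile_mean = deciles.mean()        # DM = (D1 + ... + D9) / 9
print(decile_mean, sample.mean(), np.median(sample))
```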
Decile : Summary statistics Socio-economic decile (for New Zealand schools) == References == |
DeFries–Fulker regression : In behavioural genetics, DeFries–Fulker (DF) regression, also sometimes called DeFries–Fulker extremes analysis, is a type of multiple regression analysis designed for estimating the magnitude of genetic and environmental effects in twin studies. It is named after John C. DeFries and David F... |
DeFries–Fulker regression : DeFries–Fulker regression analysis is based on the differences in the magnitude of regression to the mean in a genetic trait between monozygotic (MZ) and dizygotic (DZ) twins. In DF regression, the first step is to select probands in a twin study with extreme scores on the trait being studie... |
DeFries–Fulker regression : The probands are chosen with scores that fall below a "cutoff" for what is considered "extreme", and regression is then used to predict the co-twin scores based on those of the probands and a term reflecting whether the twin pair is MZ (1.0) or DZ (0.5). The formula used for DF regression is... |
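A hedged sketch of the basic DF model on simulated standardized twin data (the twin correlations of 0.8 for MZ and 0.4 for DZ, and the bottom-10% cutoff, are illustrative assumptions, not canonical values):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
is_mz = rng.integers(0, 2, n).astype(bool)
r = np.where(is_mz, 1.0, 0.5)            # relatedness term: MZ = 1.0, DZ = 0.5
rho = np.where(is_mz, 0.8, 0.4)          # hypothetical twin correlations
proband = rng.normal(0, 1, n)
cotwin = rho * proband + np.sqrt(1 - rho**2) * rng.normal(0, 1, n)

# select proband scores below an "extreme" cutoff (bottom 10% here)
keep = proband <= np.quantile(proband, 0.10)

# basic DF model: cotwin = b0 + b1*proband + b2*R
X = np.column_stack([np.ones(keep.sum()), proband[keep], r[keep]])
coef, *_ = np.linalg.lstsq(X, cotwin[keep], rcond=None)
print(coef)
```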
Discrepancy function : In structural equation modeling, a discrepancy function is a mathematical function which describes how closely a structural model conforms to observed data; it is a measure of goodness of fit. Larger values of the discrepancy function indicate a poor fit of the model to data. In general, the para... |
Discrepancy function : There are several basic types of discrepancy functions, including maximum likelihood (ML), generalized least squares (GLS), and ordinary least squares (OLS), which are considered the "classical" discrepancy functions. Discrepancy functions all meet the following basic criteria: They are non-negat... |
Discrepancy function : Constructions of low-discrepancy sequences Discrepancy theory Low-discrepancy sequence == References == |
Discretization of continuous features : In statistics and machine learning, discretization refers to the process of converting or partitioning continuous attributes, features or variables to discretized or nominal attributes/features/variables/intervals. This can be useful when creating probability mass functions – for... |
Discretization of continuous features : This is a partial list of software that implements the MDL algorithm: discretize4crf, a tool designed to work with popular CRF implementations (C++); mdlp in the R package discretization; Discretize in the R package RWeka |
Discretization of continuous features : Density estimation Continuity correction == References == |
Discrimination ratio : In Six Sigma, the discrimination ratio or reliability design index is a performance metric of attribute agreement analysis which assesses how well appraisers or inspectors can differentiate between acceptable and unacceptable items. == References == |
Displaced Poisson distribution : In statistics, the displaced Poisson, also known as the hyper-Poisson distribution, is a generalization of the Poisson distribution. |
Dominating decision rule : In decision theory, a decision rule is said to dominate another if the performance of the former is sometimes better, and never worse, than that of the latter. Formally, let $\delta_{1}$ and $\delta_{2}$ be two decision rules, and let $R(\theta ,\delta )$ be the risk of rule δ for parameter θ. The decision rule δ... |
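A worked example: for X ~ N(θ, 1) under squared-error loss, the rule δ1(X) = X has constant risk 1, while δ2(X) = 2X has risk 4 + θ², so δ1 dominates δ2 at every θ. A Monte Carlo check in Python:

```python
import numpy as np

rng = np.random.default_rng(6)
for theta in (-2.0, 0.0, 3.0):
    x = rng.normal(theta, 1.0, 100_000)
    r1 = np.mean((x - theta) ** 2)       # risk of delta1(X) = X
    r2 = np.mean((2 * x - theta) ** 2)   # risk of delta2(X) = 2X
    print(theta, r1, r2)                 # r1 ~ 1, r2 ~ 4 + theta^2
```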
Double counting (fallacy) : Double counting is a fallacy in reasoning. An example of double counting is shown starting with the question: What is the probability of seeing at least one 5 when throwing a pair of dice? An erroneous argument goes as follows: The first die shows a 5 with probability 1/6, and the second die... |
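The correct answer comes from inclusion–exclusion: P(at least one 5) = 1/6 + 1/6 − 1/36 = 11/36, not the double-counted 12/36. A quick check in Python:

```python
from fractions import Fraction

# inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
p = Fraction(1, 6) + Fraction(1, 6) - Fraction(1, 36)
print(p)   # 11/36

# equivalently, enumerate all 36 outcomes
hits = sum(1 for a in range(1, 7) for b in range(1, 7) if a == 5 or b == 5)
print(hits, "/ 36")   # 11 / 36
```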
Double counting (fallacy) : Stephen Campbell, Flaws and Fallacies in Statistical Thinking (2012), in series Dover Books on Mathematics, Courier Corporation, ISBN 9780486140513 |
Dynamic Bayesian network : A dynamic Bayesian network (DBN) is a Bayesian network (BN) which relates variables to each other over adjacent time steps. |
Dynamic Bayesian network : A dynamic Bayesian network (DBN) is often called a "two-timeslice" BN (2TBN) because it says that at any point in time T, the value of a variable can be calculated from the internal regressors and the immediate prior value (time T-1). DBNs were developed by Paul Dagum in the early 1990s at St... |
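A minimal sketch of the two-timeslice idea with a single binary state variable, rolled forward by repeated application of the same conditional distribution (the transition probabilities below are illustrative, not from any real system):

```python
import numpy as np

transition = np.array([[0.9, 0.1],    # P(X_T | X_{T-1} = 0)
                       [0.3, 0.7]])   # P(X_T | X_{T-1} = 1)

belief = np.array([1.0, 0.0])         # state known to be 0 at T = 0
for t in range(5):
    belief = belief @ transition      # one 2TBN slice: P(X_T) from P(X_{T-1})
    print(t + 1, belief)
```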
Dynamic Bayesian network : Recursive Bayesian estimation Probabilistic logic network Generalized filtering |
Dynamic Bayesian network : Murphy, Kevin (2002). Dynamic Bayesian Networks: Representation, Inference and Learning. UC Berkeley, Computer Science Division. Ghahramani, Zoubin (1998). "Learning dynamic Bayesian networks". Adaptive Processing of Sequences and Data Structures. Lecture Notes in Computer Science. Vol. 1387.... |
Dynamic Bayesian network : bnt on GitHub: the Bayes Net Toolbox for Matlab, by Kevin Murphy, (released under a GPL license) Graphical Models Toolkit (GMTK): an open-source, publicly available toolkit for rapidly prototyping statistical models using dynamic graphical models (DGMs) and dynamic Bayesian networks (DBNs). G... |
Dynamic contagion process : In applied probability, a dynamic contagion process is a point process with stochastic intensity that generalises the Hawkes process and Cox process with exponentially decaying shot noise intensity. |
Dynamic contagion process : Point process Cox process Doubly stochastic model == References == |
Eastern and Midland Region : The Eastern and Midland Region has been defined as a region in Ireland since 1 January 2015. It is a NUTS Level II statistical region of Ireland (coded IE06). NUTS 2 Regions may be classified as less developed regions, transition regions, or more developed regions to determine eligibility f... |
Ecological regression : Ecological regression is a statistical technique which runs regression on aggregates, often used in political science and history to estimate group voting behavior from aggregate data. For example, if counties have a known Democratic vote (in percentage) D, and a known percentage of Catholics, C... |
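A minimal sketch of the Catholic-vote example on simulated county aggregates, assuming NumPy (the coefficients used to generate the data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
C = rng.uniform(5, 60, 50)                 # percent Catholic per county
D = 30 + 0.4 * C + rng.normal(0, 3, 50)    # percent Democratic vote

slope, intercept = np.polyfit(C, D, 1)
# under the strong constancy assumptions of ecological regression, the
# intercept estimates the Democratic vote among non-Catholics and
# intercept + 100*slope the vote among Catholics
print(intercept, intercept + 100 * slope)
```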
Ecological regression : Ecological correlation Ecological fallacy |
Ecological regression : Brown, Philip J.; Payne, Clive D. (1986). "Aggregate Data, Ecological Regression, and Voting Transitions". Journal of the American Statistical Association. 81 (394): 452–460. doi:10.1080/01621459.1986.10478290. JSTOR 2289235. advanced techniques King, Gary; Martin Abba Tanner; Ori Rosen (2004). ... |
Elston–Stewart algorithm : The Elston–Stewart algorithm is an algorithm for computing the likelihood of observed data on a pedigree assuming a general model under which specific genetic segregation, linkage and association models can be tested. It is due to Robert Elston and John Stewart. It can handle relatively large... |
Elston–Stewart algorithm : Elston, R. C., Stewart, J. (1971) "A general model for the genetic analysis of pedigree data". Hum Hered., 21, 523–542 Elston R.C., George V.T., Severtson F. (1992) "The Elston-Stewart algorithm for continuous genotypes and environmental factors", Hum Hered., 42(1), 16–27. Stewart J. (1992) "... |
Encyclopedia of Statistical Sciences : The Encyclopedia of Statistical Sciences is an encyclopaedia of statistics published by John Wiley & Sons. The first edition, in nine volumes, was published in 1982; it was edited by Norman Lloyd Johnson and Samuel Kotz. The second edition, in 16 volumes, was published in 2006; th... |
Encyclopedia of Statistical Sciences : International Encyclopedia of Statistical Science |
Epitome (data processing) : An epitome, in data processing, is a condensed digital representation of the essential statistical properties of ordered datasets such as matrices that represent images, audio signals, videos or genetic sequences. Although much smaller than the data, the epitome contains many of its smaller ... |
Epitome (data processing) : Image processing Video imprint (computer vision) == References == |
Epps effect : In econometrics and time series analysis, the Epps effect, named after T. W. Epps, is the phenomenon that the empirical correlation between the returns of two different stocks decreases with the length of the interval for which the price changes are measured. The phenomenon is caused by non-synchronous/as... |
Ergograph : An ergograph is a graph that shows a relation between human activities and a seasonal year. The name was coined by Dr. Arthur Geddes of the University of Edinburgh. It can either be a polar coordinate (circular) or a cartesian coordinate (rectangular) graph, and either a line graph or a bar graph. In polar ... |
Ergograph : Seasonal adjustment |
Ergograph : Institute of British Geographers (1950). Transactions and Papers (Institute of British Geographers) (16–19). G. Philip: 2, 184. |
Error bar : Error bars are graphical representations of the variability of data and are used on graphs to indicate the error or uncertainty in a reported measurement. They give a general idea of how precise a measurement is, or conversely, how far from the reported value the true (error free) value might be. Error bars oft... |
Error bar : Box plot Information graphics Model selection Significant figures == References == |
Ethnostatistics : Ethnostatistics is the study of the social activity of producing and using statistics. The premise of the area of study is that statistics are not neutral facts, but are influenced by the social biases of the persons involved in their production. The concept was suggested in John... |
Evidential decision theory : Evidential decision theory (EDT) is a school of thought within decision theory which states that, when a rational agent is confronted with a set of possible actions, one should select the action with the highest news value, that is, the action which would be indicative of the best outcome i... |
Evidential decision theory : In a 1976 paper, Allan Gibbard and William Harper distinguished between two kinds of expected utility maximization. EDT proposes to maximize the expected utility of actions computed using conditional probabilities, namely $V(A)=\sum _{j}P(O_{j}\mid A)\,D(O_{j})$, where $D(O_{j})$... |
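A minimal numeric illustration of the V(A) formula on a Newcomb-style problem (the payoffs and the 0.99 predictor accuracy are illustrative assumptions):

```python
# V(A) = sum_j P(O_j | A) * D(O_j), with the predictor correct 99% of the time
acc = 0.99
v_one_box = acc * 1_000_000 + (1 - acc) * 0
v_two_box = (1 - acc) * 1_001_000 + acc * 1_000
print(v_one_box, v_two_box)   # 990000.0 vs 11000.0: EDT prefers one-boxing
```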
Evidential decision theory : Different decision theories are often examined in their recommendations for action in different thought experiments. |
Evidential decision theory : Even if one puts less credence on evidential decision theory, it may be reasonable to act as if EDT were true. Namely, because EDT can involve the actions of many correlated decision-makers, its stakes may be higher than those of causal decision theory, and it may thus take priority. |
Evidential decision theory : David Lewis has characterized evidential decision theory as promoting "an irrational policy of managing the news". James M. Joyce asserted, "Rational agents choose acts on the basis of their causal efficacy, not their auspiciousness; they act to bring about good results even when doing so m... |
Evidential decision theory : Causal Decision Theory at the Stanford Encyclopedia of Philosophy |
Experimental event rate : In epidemiology and biostatistics, the experimental event rate (EER) is a measure of how often a particular statistical event (such as response to a drug, adverse event or death) occurs within the experimental group (non-control group) of an experiment. This value is very useful in determining... |
Experimental event rate : The control event rate (CER) is identical to the experimental event rate except that it is measured within the scientific control group of an experiment. |
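A short worked example computing both rates and the quantities derived from them (the counts are illustrative, not real trial data):

```python
events_exp, n_exp = 12, 200      # events in the experimental group
events_ctrl, n_ctrl = 30, 200    # events in the control group

eer = events_exp / n_exp         # experimental event rate: 0.06
cer = events_ctrl / n_ctrl       # control event rate: 0.15
arr = cer - eer                  # absolute risk reduction: 0.09
rrr = arr / cer                  # relative risk reduction: 0.6
nnt = 1 / arr                    # number needed to treat: ~11.1
print(eer, cer, arr, rrr, nnt)
```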
Experimental event rate : Absolute risk reduction Relative risk reduction Number needed to treat == References == |
Falconer's formula : Heritability is the proportion of variance in a specific trait, within a population, that is caused by genetic factors. Falconer's formula is a mathematical formula that is used in twin studies to estimate the relative contribution of genetic vs. environmental factors to variation in a particular trait (that is,... |
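Falconer's formula estimates broad-sense heritability from the difference between MZ and DZ twin correlations, H² = 2(r_MZ − r_DZ). A minimal worked example with illustrative correlations:

```python
r_mz, r_dz = 0.70, 0.45   # illustrative twin correlations, not real study data

h2 = 2 * (r_mz - r_dz)    # heritability (Falconer's formula): 0.5
c2 = 2 * r_dz - r_mz      # shared-environment component: 0.2
e2 = 1 - r_mz             # unique-environment component: 0.3
print(h2, c2, e2)
```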
Falconer's formula : Quantitative genetics == References == |
Farr's laws : Farr's law is a law formulated by Dr. William Farr, who observed that epidemic events rise and fall in a roughly symmetrical pattern. The time-evolution behavior could be captured by a single mathematical formula that could be approximated by a bell-shaped curve. |
Farr's laws : In 1840, Farr submitted a letter to the Annual Report of the Registrar General of Births, Deaths and Marriages in England. In that letter, he applied mathematics to the records of deaths during a recent smallpox epidemic, proposing that: "If the latent cause of epidemics cannot be discovered, the mode in ... |
Fay and Wu's H : Fay and Wu's H is a statistical test created by and named after two researchers Justin Fay and Chung-I Wu. The purpose of the test is to distinguish between a DNA sequence evolving randomly ("neutrally") and one evolving under positive selection. This test is an advancement over Tajima's D, which is us... |
Fay and Wu's H : Imagine a DNA sequence which has very few polymorphisms in its alleles across different populations. This could arise due to at least three causes: The sequence is experiencing heavy purifying selection, so any new mutation in the sequence is deleterious and is purged off immediately, or The sequence j... |
Fay and Wu's H : A significantly positive Fay and Wu's H indicates a deficit of moderate- and high-frequency derived single nucleotide polymorphisms (SNPs) relative to equilibrium expectations, whereas a significantly negative Fay and Wu's H indicates an excess of high-frequency derived SNPs. |
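A hedged sketch of the unnormalized statistic H = θπ − θH computed from an unfolded site frequency spectrum, assuming the standard estimators from Fay and Wu (2000); the example spectra are made up:

```python
import numpy as np

def fay_wu_h(counts, n):
    # Unnormalized Fay and Wu's H = theta_pi - theta_H; counts[i-1] is the
    # number of sites where the derived allele appears i times in n sequences.
    i = np.arange(1, n)
    s_i = np.asarray(counts, dtype=float)
    theta_pi = np.sum(2.0 * s_i * i * (n - i)) / (n * (n - 1))
    theta_h = np.sum(2.0 * s_i * i * i) / (n * (n - 1))
    return theta_pi - theta_h

print(fay_wu_h([6, 3, 1, 0], 5))   # positive: few high-frequency derived SNPs
print(fay_wu_h([1, 0, 2, 5], 5))   # negative: excess high-frequency derived SNPs
```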