Statistics Korea : Statistics Korea (KOSTAT; Korean: 통계청; Hanja: 統計廳; RR: Tonggyecheong) is a government organization responsible for managing national statistics in South Korea. KOSTAT is headquartered in Daejeon, South Korea, and operates under the Ministry of Economy and Finance. Statistics Korea generates population...
Statistics Korea : List of national and international statistical services Official statistics
Statistics Korea : Official site, in Korean and English
StatXact : StatXact is a statistical software package for analyzing data using exact statistics. It calculates exact p-values and confidence intervals for contingency tables and non-parametric procedures. It is marketed by Cytel Inc.
StatXact : Mehta, Cyrus R. (1991). "StatXact: A Statistical Package for Exact Nonparametric Inference". The American Statistician. 45 (1): 74–75. doi:10.2307/2685246. JSTOR 2685246.
StatXact : StatXact homepage at Cytel Inc.
Stimulus–response model : The stimulus–response model is a conceptual framework in psychology that describes how individuals react to external stimuli. According to this model, an external stimulus triggers a reaction in an organism, often without the need for conscious thought. This model emphasizes the mechanistic as...
Stimulus–response model : Stimulus–response models are applied in international relations, psychology, risk assessment, neuroscience, neurally-inspired system design, and many other fields. Pharmacological dose response relationships are an application of stimulus-response models. Another field this model can be applie...
Stimulus–response model : The object of a stimulus–response model is to establish a mathematical function that describes the relation f between the stimulus x and the expected value (or other measure of location) of the response Y: E(Y) = f(x). A common simplification assumed for such functions is linear...
Stimulus–response model : Since many types of response have inherent physical limitations (e.g. minimal maximal muscle contraction), it is often applicable to use a bounded function (such as the logistic function) to model the response. Similarly, a linear response function may be unrealistic as it would imply arbitrar...
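Stimulus–response model : A bounded response curve of the kind described above can be sketched with a logistic function; the parameter names and values below are hypothetical, chosen only to illustrate saturation at the extremes.

```python
import math

def logistic_response(x, ymax=1.0, k=1.0, x0=0.0):
    # Bounded stimulus-response curve: the response saturates at ymax
    # instead of growing without limit as a linear function would.
    return ymax / (1.0 + math.exp(-k * (x - x0)))

# Weak stimuli produce almost no response; strong stimuli saturate.
low = logistic_response(-5.0)
mid = logistic_response(0.0)
high = logistic_response(5.0)
```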
Stimulus–response model : Holland, Peter C. (2008). "Cognitive versus stimulus-response theories of learning". Learning & Behavior. 36 (3): 227–241. doi:10.3758/lb.36.3.227. PMC 3065938. PMID 18683467.
Streamgraph : A streamgraph, or stream graph, is a type of stacked area graph which is displaced around a central axis, resulting in a flowing, organic shape. Unlike a traditional stacked area graph in which the layers are stacked on top of an axis, in a streamgraph the layers are positioned to minimize their "wiggle"....
Streamgraph : Lee Byron's streamgraph_generator RAWGraphs Open-source visualization tool easing streamgraph generation. StreamGraph Open-source JavaScript for generating a streamgraph in SVG.
Strong and weak sampling : Strong and weak sampling are two sampling approaches in statistics, and are popular in computational cognitive science and language learning. In strong sampling, it is assumed that the data are intentionally generated as positive examples of a concept, while in weak sampling, it is assumed that...
Strong and weak sampling : In strong sampling, we assume each observation is sampled uniformly at random from the extension of the true hypothesis: P(x | h) = 1/|h| if x ∈ h, and 0 otherwise. In weak sampling, we assume observations are sampled independently of the hypothesis and then classified: P(x | h) = 1 if x ∈ h, and 0 otherwise.
Strong and weak sampling : Under weak sampling, Bayes' rule gives P(h | x) = P(x | h) P(h) / Σ_{h′} P(x | h′) P(h′) = P(h) / Σ_{h′ : x ∈ h′} P(h′) if x ∈ h, and 0 otherwise. The likelihood P(x | h′) is the same for every hypothesis h′ consistent with the data, so it is effectively "ignored".
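Strong and weak sampling : The difference between the two likelihoods can be illustrated with a toy hypothesis space (the hypotheses and uniform prior below are invented for the example): strong sampling favours smaller consistent hypotheses, while weak sampling leaves the prior unchanged among consistent hypotheses.

```python
from fractions import Fraction

# Hypothetical hypothesis space: each hypothesis is a set of integers,
# with a uniform prior over the two hypotheses.
hypotheses = {
    "h1": {1, 2},
    "h2": {1, 2, 3, 4},
}
prior = {h: Fraction(1, 2) for h in hypotheses}

def likelihood(x, members, strong):
    if x not in members:
        return Fraction(0)
    # Strong sampling: P(x|h) = 1/|h|; weak sampling: P(x|h) = 1.
    return Fraction(1, len(members)) if strong else Fraction(1)

def posterior(x, strong):
    unnorm = {h: likelihood(x, m, strong) * prior[h]
              for h, m in hypotheses.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Strong sampling favours the smaller consistent hypothesis (the "size
# principle"); weak sampling reproduces the prior on consistent hypotheses.
p_strong = posterior(1, strong=True)
p_weak = posterior(1, strong=False)
```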
Strong and weak sampling : Lecture 20: Strong vs weak sampling
Structured data analysis (statistics) : Structured data analysis is the statistical data analysis of structured data. This can arise either in the form of an a priori structure such as multiple-choice questionnaires or in situations with the need to search for structure that fits the given data, either exactly or appro...
Structured data analysis (statistics) : Algebraic data analysis Bayesian analysis Cluster analysis Combinatorial data analysis Formal concept analysis Functional data analysis Geometric data analysis Regression analysis Shape analysis Topological data analysis Tree structured data analysis
Structured data analysis (statistics) : Carlsson, Gunnar (2009). "Topology and data". Bulletin of the American Mathematical Society. New Series. 46 (2): 255–308. doi:10.1090/S0273-0979-09-01249-X. James O. Ramsay; B. W. Silverman (2005). Functional data analysis. Springer. ISBN 9780387400808. Leland Wilkinson, (1992) T...
Studentization : In statistics, Studentization, named after William Sealy Gosset, who wrote under the pseudonym Student, is the adjustment consisting of division of a first-degree statistic derived from a sample, by a sample-based estimate of a population standard deviation. The term is also used for the standardisatio...
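Studentization : A minimal sketch of studentizing a sample (the data values are hypothetical): each deviation from the sample mean is divided by the sample-based estimate of the standard deviation.

```python
import statistics

# Hypothetical sample.
sample = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = statistics.mean(sample)
sd = statistics.stdev(sample)  # sample-based estimate of the population sd

# Studentized values: deviations scaled by the estimated sd.
studentized = [(x - mean) / sd for x in sample]
```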
Studentization : Studentized range Studentized residual
Studentization : Pivotal quantity == References ==
Subgroup analysis : Subgroup analysis refers to repeating the analysis of a study within subgroups of subjects defined by a subgrouping variable. For example: smoking status defining two subgroups: smokers and non-smokers.
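Subgroup analysis : As a minimal sketch of the smoking-status example, the same analysis (here just a mean outcome) is repeated within each subgroup; the records below are invented.

```python
import statistics

# Hypothetical study data: outcome scores with smoking status as the
# subgrouping variable.
records = [
    {"smoker": True, "outcome": 4.0},
    {"smoker": True, "outcome": 6.0},
    {"smoker": False, "outcome": 7.0},
    {"smoker": False, "outcome": 9.0},
]

# Repeat the same analysis within each subgroup.
subgroup_means = {}
for status in (True, False):
    scores = [r["outcome"] for r in records if r["smoker"] == status]
    subgroup_means[status] = statistics.mean(scores)
```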
Subgroup analysis : Post hoc analysis == References ==
Subsetting : In research communities (for example, earth sciences, astronomy, business, and government), subsetting is the process of retrieving just the parts (a subset) of large files which are of interest for a specific purpose. This usually occurs in a client–server setting, where the extraction of the parts of int...
Subsetting : Subsetting can also be performed within statistical software to speed up the process when only part of the data is needed, though different object types can make subsetting challenging. Some of the object types that can be subset are: atomic vectors, lists, matrices and arrays, data frames, S3 objec...
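Subsetting : A few of these subsetting operations can be sketched in Python on analogous container types (positional, logical, and column-wise "data frame" subsetting); the data are hypothetical.

```python
# Positional and logical subsetting of a vector-like list.
vector = [10, 20, 30, 40, 50]
by_position = vector[1:3]                     # elements at index 1 and 2
by_condition = [v for v in vector if v > 25]  # logical subsetting

# A "data frame" sketched as a dict of columns: keep only matching rows.
frame = {"id": [1, 2, 3], "value": [5.0, 7.5, 2.5]}
keep = [i for i, v in enumerate(frame["value"]) if v >= 5.0]
subset = {col: [vals[i] for i in keep] for col, vals in frame.items()}
```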
Superposed epoch analysis : Superposed epoch analysis (SPE or SEA), also called Chree analysis after a paper by Charles Chree that employed the technique, is a statistical tool used in data analysis either to detect periodicities within a time sequence or to reveal a correlation (usually in time) between two data seque...
Suppressor variable : A suppressor variable is a variable that increases the predictive validity of another variable when included in a regression equation. Suppression can occur when a single causal variable is related to an outcome variable through two separate mediator variables, and when one of those mediated effec...
Suppressor variable : Confounding variable == References ==
Symbolic data analysis : Symbolic data analysis (SDA) is an extension of standard data analysis where symbolic data tables are used as input and symbolic objects are made output as a result. The data units are called symbolic since they are more complex than standard ones, as they not only contain values or categories,...
Symbolic data analysis : Diday, Edwin; Noirhomme-Fraiture, Monique (2008). Symbolic Data Analysis and the SODAS Software. Wiley–Blackwell. ISBN 9780470018835.
Symbolic data analysis : Symbolic Data Analysis: Conceptual Statistics and Data Mining An introduction to symbolic data analysis and its Application to the Sodas Project by Edwin Diday R2S: An R package to transform relational data into symbolic data
Synthetic measure : A synthetic measure (or synthetic indicator) is a value that is the result of combining other metrics, which are measurements of various features.
Synthetic measure : Scientific works about synthetic measure on Google Scholar
Tampering (quality control) : Tampering in the context of a controlled process is adjusting the process on the basis of outcomes which are within the expected range of variability. The net result is to re-align the process so that an increased proportion of the output is out of specification. The term was introduced in...
Tampering (quality control) : Incentive program Control chart
Tampering (quality control) : W. Edwards Deming (1994) The New Economics for Industry, Government, Education, 2nd edition, Massachusetts Inst Technology. ISBN 0-911379-07-X (Chapter 9.) Deming, W. Edwards (1986), Out of the Crisis, MIT Center for Advanced Engineering Study, pp. 327–32. (2000 edition: ISBN 0-262-54115-7) Git...
Targeted projection pursuit : Targeted projection pursuit is a type of statistical technique used for exploratory data analysis, information visualization, and feature selection. It allows the user to interactively explore very complex data (typically having tens to hundreds of attributes) to find features or patterns ...
Targeted projection pursuit : Joe Faith (2007) "Targeted Projection Pursuit for Interactive Exploration of High-Dimensional Data Sets", Proceedings of 11th International Conference on Information Visualisation
Targeted projection pursuit : imDEV free Excel add-in for targeted projection pursuits using feature selection coupled with PLS and PLS-DA Targeted Projection Pursuit project page
The Tiger That Isn't : The Tiger That Isn't: Seeing Through a World of Numbers is a statistics book written by Michael Blastland and Andrew Dilnot, the creator and presenter of BBC Radio 4's More or Less. Like the radio show, it addresses the misuse of statistics in politics and the media. The book has received favoura...
The Tiger That Isn't : Blastland, Michael; Dilnot, Andrew (2007) The Tiger That Isn't: Seeing Through a World of Numbers, Profile Books. ISBN 9781861978394 Blastland, Michael; Dilnot, Andrew (1 December 2007). "The tiger that isn't: numbers in the media". Plus Magazine (excerpt) (45).
Time-varying covariate : A time-varying covariate (also called time-dependent covariate) is a term used in statistics, particularly in survival analysis. It reflects the phenomenon that a covariate is not necessarily constant through the whole study. Time-varying covariates are included to represent time-dependent withi...
Tucker decomposition : In mathematics, Tucker decomposition decomposes a tensor into a set of matrices and one small core tensor. It is named after Ledyard R. Tucker, although it goes back to Hitchcock in 1927. Initially described as a three-mode extension of factor analysis and principal component analysis, it may actua...
Tucker decomposition : Higher-order singular value decomposition Multilinear principal component analysis == References ==
TURF analysis : TURF analysis, an acronym for "total unduplicated reach and frequency", is a type of statistical analysis used for providing estimates of media or market potential and devising optimal communication and placement strategies given limited resources. TURF analysis identifies the number of users reached by...
TURF analysis : P-STAT, a software for TURF analysis. Retrieved 23 January 2007. TURF Analysis in Market Research – an example TURF Analysis in Sensory Evaluation – an example TURF Basics and Examples, by Displayr == References ==
Two-proportion Z-test : The two-proportion Z-test (or two-sample proportion Z-test) is a statistical method used to determine whether the difference between the proportions of two groups, each drawn from a binomial distribution, is statistically significant. This approach relies on the assumption that the sample proportion...
Two-proportion Z-test : The z-test for comparing two proportions is a statistical hypothesis test for evaluating whether the proportion of a certain characteristic differs significantly between two independent samples. This test leverages the property that the sample proportions (each an average of observations co...
Two-proportion Z-test : The confidence interval for the difference between two proportions, based on the definitions above, is: (p̂_1 − p̂_2) ± z_{α/2} √( p̂_1(1 − p̂_1)/n_1 + p̂_2(1 − p̂_2)/n_2 ), where z_{α/2} is the critical value of the standard normal distribution (e.g., 1.96 for ...
Two-proportion Z-test : The minimum detectable effect (MDE) is the smallest difference between two proportions ( p 1 and p 2 ) that a statistical test can detect for a chosen Type I error level ( α ), statistical power ( 1 − β ), and sample sizes ( n 1 and n 2 ). It is commonly used in study design to determine w...
Two-proportion Z-test : To ensure valid results, the following assumptions must be met: Independent random samples: The samples must be drawn independently from the populations of interest. Large sample sizes: Typically, n_1 and n_2 should exceed 30. Success/failure condition: n_1 p̂_1 > 10 and n_1(1 − p̂_1 ...
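Two-proportion Z-test : The test statistic and the confidence interval above can be sketched as follows; the counts are hypothetical, the pooled proportion is used for the z statistic (under the null of equal proportions), and unpooled standard errors are used for the interval.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    # z statistic using the pooled proportion under H0: p1 == p2.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def confidence_interval(x1, n1, x2, n2, z_crit=1.96):
    # 95% CI for p1 - p2 using unpooled standard errors.
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z_crit * se, diff + z_crit * se

# Hypothetical data: 60/200 successes in group 1, 45/200 in group 2.
z = two_proportion_z(60, 200, 45, 200)
lo, hi = confidence_interval(60, 200, 45, 200)
```

Note that both samples here satisfy the large-sample and success/failure conditions listed above.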
Two-proportion Z-test : Two-sample hypothesis testing A/B testing
Two-proportion Z-test : Inference for Proportions: Comparing Two Independent Samples (power calculator)
Two-sample hypothesis testing : In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each independently obtained from a different given population. The purpose of the test is to determine whether the difference between these two populations is statistically signifi...
Two-sample hypothesis testing : Statistical tests that may apply for two-sample testing include: Hotelling's two-sample T-squared statistic, the kernel two-sample test (based on kernel embedding of distributions), the two-sample Kolmogorov–Smirnov test, Kuiper's test, the median test, Pearson's chi-squared tes...
Typology (social science research method) : Typology is a composite measure that involves the classification of observations in terms of their attributes on multiple variables. Such classification is usually done on a nominal scale. Typologies are used in both qualitative and quantitative research. An example of a typo...
Tyranny of averages : The tyranny of averages is a phrase used in applied statistics to describe the often overlooked fact that the mean does not provide any information about the shape of the probability distribution of a data set or skewness, and that decisions or analysis based on only the mean—as opposed to median ...
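Tyranny of averages : A small illustration with invented data: two samples share the same mean, yet their shapes, and hence their medians, differ sharply, so a decision based on the mean alone would treat them as interchangeable.

```python
import statistics

# Two hypothetical data sets with identical means but very different shapes.
symmetric = [4, 5, 5, 6, 6, 7, 7, 8]
skewed = [1, 1, 1, 1, 2, 2, 2, 38]   # heavily right-skewed

mean_a = statistics.mean(symmetric)
mean_b = statistics.mean(skewed)
median_a = statistics.median(symmetric)
median_b = statistics.median(skewed)
```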
Tyranny of averages : Law of large numbers Law of averages Trimean
Tyranny of averages : Mecklin, J.M. (1918) "The Tyranny of the Average Man", International Journal of Ethics, 28 (2), 240–252
Univariate distribution : In statistics, a univariate distribution is a probability distribution of only one random variable. This is in contrast to a multivariate distribution, the probability distribution of a random vector (consisting of multiple random variables).
Univariate distribution : One of the simplest examples of a discrete univariate distribution is the discrete uniform distribution, where all elements of a finite set are equally likely. It is the probability model for the outcomes of tossing a fair coin, rolling a fair die, etc. The univariate continuous uniform distri...
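Univariate distribution : The fair-die case can be written out directly as a discrete uniform distribution; a minimal sketch using exact fractions:

```python
from fractions import Fraction

# Discrete uniform distribution for one roll of a fair six-sided die:
# every outcome in {1, ..., 6} has probability 1/6.
outcomes = range(1, 7)
pmf = {k: Fraction(1, 6) for k in outcomes}

# Probabilities sum to one, and the expected value is 7/2.
total = sum(pmf.values())
expected_value = sum(k * p for k, p in pmf.items())
```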
Univariate distribution : Univariate Bivariate distribution List of probability distributions
Univariate distribution : Leemis, L. M.; McQueston, J. T. (2008). "Univariate Distribution Relationships" (PDF). The American Statistician. 62: 45–53. doi:10.1198/000313008X270448.
University Statisticians of the Southern Experiment Stations : The University Statisticians of the Southern Experiment Stations (USSES) was a coalition of southern universities formed in the mid-1960s for the purpose of coordinating efforts in the development of statistical software. This coalition was largely motivate...
University Statisticians of the Southern Experiment Stations : Anthony James Barr Gertrude Mary Cox James Goodnight SAS System == References ==
Utilization distribution : A utilization distribution is a probability distribution giving the probability density that an animal is found at a given point in space. It is estimated from data sampling the location of an individual or individuals in space over a period of time using, for example, telemetry or GPS based ...
Utilization distribution : Home range Local convex hull == References ==
Variational series : In statistics, a variational series is a non-decreasing sequence X_(1) ≤ X_(2) ≤ ⋯ ≤ X_(n−1) ≤ X_(n) composed from an initial series of independent and identically distributed random variables X_1, …, X_n. The members of...
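Variational series : Constructing a variational series amounts to sorting the sample into non-decreasing order; the simulated Gaussian sample below is purely illustrative.

```python
import random

# Hypothetical iid sample of 10 standard normal draws.
random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(10)]

# The variational series is the sample sorted into non-decreasing order:
# X_(1) <= X_(2) <= ... <= X_(n).
variational_series = sorted(sample)
```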
Varimax rotation : In statistics, a varimax rotation is used to simplify the expression of a particular sub-space in terms of just a few major items each. The actual coordinate system is unchanged, it is the orthogonal basis that is being rotated to align with those coordinates. The sub-space found with principal compo...
Varimax rotation : A summary of the use of varimax rotation and of other types of factor rotation is presented in this article on factor analysis.
Varimax rotation : In the R programming language the varimax method is implemented in several packages including stats (function varimax( )), or in contributed packages including GPArotation or psych. In SAS varimax rotation is available in PROC FACTOR using ROTATE = VARIMAX.
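Varimax rotation : For illustration, a plain-NumPy sketch of the varimax criterion itself (an iterative SVD-based scheme); the loading matrix below is hypothetical, and in practice one would use the implementations named above.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    # Iteratively find the orthogonal rotation maximizing the varimax
    # criterion, via repeated SVDs of the gradient-like target matrix.
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        target = (rotated ** 3
                  - (gamma / p) * rotated @ np.diag(np.sum(rotated ** 2, axis=0)))
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        new_var = np.sum(s)
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ rotation

# Hypothetical 4 x 2 factor-loading matrix.
A = np.array([[0.8, 0.3], [0.7, 0.4], [0.2, 0.9], [0.3, 0.8]])
rotated = varimax(A)
```

Because the rotation is orthogonal, the overall sum of squared loadings (the communalities) is preserved; only their distribution across factors changes.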
Varimax rotation : Factor analysis Empirical orthogonal functions Q methodology Rotation matrix
Varimax rotation : Factor rotations in Factor Analyses by Herve Abdi About Varimax Properties of Principal Components http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/4041/pdf/imm4041.pdf This article incorporates public domain material from the National Institute of Standards and Technology
W-test : In statistics, the W-test is designed to test the distributional differences between cases and controls for a categorical variable set, which can be a single SNP, a SNP-SNP pair, or a SNP-environment pair. It takes a combined log of odds ratio form, calculated from the contingency table of the variable set. The test inh...
W-test : Theoretically, the test is not restricted to pairwise interactions, and can go to higher order if sample size of the data can support it. The W-test's application for pairwise interaction effect has been tested in common genome-wide association study (GWAS) dataset with less than 5,000 subjects [1]. Since it c...
W-test : The W-test C++ software, linux version and R package are available from the wtest official website.
W-test : Maggie Haitian Wang, Rui Sun (co-first authors), Junfeng Guo, Haoyi Weng, Jack Lee, Inchi Hu, Pak Sham and Benny Chung-Ying Zee (2016). A fast and powerful W-test for pairwise epistasis testing. Nucleic Acids Research. doi:10.1093/nar/gkw347 http://www2.ccrb.cuhk.edu.hk/statgene/
Wait list control group : A wait list control group, also called a wait list comparison, is a group of participants included in an outcome study that is assigned to a waiting list and receives intervention after the active treatment group. This control group serves as an untreated comparison group during the study, but...
Wait list control group : Experimental design Scientific control Treatment and control groups == References ==
Watanabe–Akaike information criterion : In statistics, the Widely Applicable Information Criterion (WAIC), also known as the Watanabe–Akaike information criterion, is a generalized version of the Akaike information criterion (AIC) for singular statistical models. It is used as a measure of how well a model will predict data it...
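Watanabe–Akaike information criterion : One common form of WAIC can be sketched from a matrix of pointwise log-likelihoods over posterior draws, using the variance-based penalty for the effective number of parameters; the input format here is an assumption made for illustration.

```python
import math
import statistics

def waic(log_lik):
    # log_lik[s][i] = log p(y_i | theta_s) for posterior draw s and
    # observation i (assumed to be pre-computed elsewhere).
    n_draws = len(log_lik)
    n_obs = len(log_lik[0])
    lppd = 0.0
    p_waic = 0.0
    for i in range(n_obs):
        draws = [log_lik[s][i] for s in range(n_draws)]
        # log pointwise predictive density: log of the mean likelihood.
        lppd += math.log(sum(math.exp(d) for d in draws) / n_draws)
        # Penalty: variance of the log-likelihoods across draws.
        p_waic += statistics.variance(draws)
    return -2.0 * (lppd - p_waic)

# Constant log-likelihoods across draws give zero penalty,
# so WAIC reduces to -2 * lppd.
example = waic([[-1.0, -2.0], [-1.0, -2.0]])
```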
Watanabe–Akaike information criterion : Akaike information criterion Bayesian information criterion Deviance information criterion Hannan–Quinn information criterion == References ==
Weighted geometric mean : In statistics, the weighted geometric mean is a generalization of the geometric mean using the weighted arithmetic mean. Given a sample x = (x_1, x_2, …, x_n) and weights w = (w_1, w_2, …, w_n), it is calculated as: x̄ = ( ∏_{i=1}^{n} x_i^{w_i} )^{1 / ∑_{i=1}^{n} w_i}
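Weighted geometric mean : The formula can be computed in log space for numerical stability; a quick check is that equal weights reduce it to the ordinary geometric mean. The sample values below are hypothetical.

```python
import math

def weighted_geometric_mean(values, weights):
    # (prod x_i^{w_i})^(1 / sum w_i), computed in log space to avoid
    # overflow/underflow in the product.
    total = sum(weights)
    log_mean = sum(w * math.log(x) for x, w in zip(values, weights)) / total
    return math.exp(log_mean)

# Equal weights reduce to the ordinary geometric mean: sqrt(2 * 8) = 4.
g_equal = weighted_geometric_mean([2.0, 8.0], [1.0, 1.0])
# A zero weight removes that value's influence entirely.
g_single = weighted_geometric_mean([2.0, 8.0], [1.0, 0.0])
```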
Weighted geometric mean : Average Central tendency Summary statistics Weighted arithmetic mean Weighted harmonic mean
Weighted geometric mean : Non-Newtonian calculus website
Roy David Williams : Roy David Williams is a physicist and data scientist. He is a professor at Caltech and is most known for his work with the LIGO, and VOTable and VOEvent standards. He is a proponent of open data.
Roy David Williams : Fox, Geoffrey C., Roy D. Williams, and Paul C. Messina. Parallel computing works!. Elsevier, 2014. Giavalisco, M., et al. "The great observatories origins deep survey: initial results from optical and near-infrared imaging." The Astrophysical Journal Letters 600.2 (2004): L93. Williams, Roy D. "Per...
Roy David Williams : Roy David Williams' page at GitHub Roy David Williams publications indexed by Google Scholar
World Poverty Clock : The World Poverty Clock is a tool to monitor progress against poverty globally, and regionally. It provides real-time poverty data across countries. Created by the Vienna-based NGO, World Data Lab, it was launched in Berlin at the re:publica conference in 2017, and is funded by Germany's Federal M...
World Poverty Clock : The World Poverty Clock uses publicly available data on income distribution, stratification, production, and consumption, provided by different and multiple international organizations, most notably the UN, the World Bank, and the International Monetary Fund. These organizations compile data provided t...
Zero-order process (statistics) : In probability theory and statistics, a zero-order process is a stochastic process in which each observation is independent of all previous observations. For example, a zero-order process in marketing would be one in which the brands purchased next do not depend on the brands purchased...
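Zero-order process (statistics) : The brand-purchase example can be simulated directly: each purchase is drawn independently of all previous ones (the brand shares below are invented).

```python
import random

# Simulate a zero-order purchase process: every brand choice is drawn
# independently of the purchase history, from fixed hypothetical shares.
random.seed(1)
brands = ["A", "B", "C"]
shares = [0.5, 0.3, 0.2]
purchases = random.choices(brands, weights=shares, k=1000)

# The observed share of brand A should be close to its true share of 0.5.
share_a = purchases.count("A") / len(purchases)
```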
Zero-order process (statistics) : Ward, Scott; Robertson, Thomas S. (1973). Consumer behavior: theoretical sources. Prentice-Hall. p. 536. ISBN 0131693913.
Bad control : In statistics, bad controls are variables that introduce an unintended discrepancy between regression coefficients and the effects that said coefficients are supposed to measure. These are contrasted with confounders which are "good controls" and need to be included to remove omitted variable bias. This i...
Bayes space : Bayes space is a function space defined as an equivalence class of measures with the same null-sets. Two measures are defined to be equivalent if they are proportional. The basic ideas of Bayes spaces have their roots in Compositional Data Analysis and the Aitchison geometry. Applications are mainly in st...
Bayes space : Consider a finite base measure P (not necessarily a probability measure) on a domain Ω . This may be a uniform distribution on a bounded interval, or it can be a Radon-Nikodym derivative of the Lebesgue measure (the Gaussian distribution, for example). If we take two densities f , g with respect to P ...
Bayes space : The measure P does not have to be univariate (1 dimensional), but can also be defined as a product measure on cartesian products, characterising bivariate (2 dimensional) or multivariate densities. The geometric structure of Hilbert spaces can be used to decompose multivariate densities into orthogonal i...
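Bayes space : The defining equivalence, that proportional densities represent the same element of the space, can be checked on a finite domain; a toy example with exact fractions:

```python
from fractions import Fraction

# Two proportional "densities" on a finite domain represent the same
# element of a Bayes space: after normalisation they coincide.
f = [Fraction(1), Fraction(2), Fraction(3)]
g = [Fraction(5) * c for c in f]   # proportional to f

def normalise(density):
    z = sum(density)
    return [d / z for d in density]

same_element = normalise(f) == normalise(g)
```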
Bayes space : Compositional Data Analysis Functional data analysis Copulas Information geometry == References ==
Counting : Counting is the process of determining the number of elements of a finite set of objects; that is, determining the size of a set. The traditional way of counting consists of continually increasing a (mental or spoken) counter by a unit for every element of the set, in some order, while marking (or displacing...
Counting : Verbal counting involves speaking sequential numbers aloud or mentally to track progress. Generally such counting is done with base 10 numbers: "1, 2, 3, 4", etc. Verbal counting is often used for objects that are currently present rather than for counting things over time, since following an interruption co...