TPL Tables : TPL Tables has a language for specifying tabulations and controlling format details. This language is the same for both Windows and Unix versions of the software. The Windows version also has an interactive interface that can access most features and includes Ted, an editor used to display PostScript table...
TPL Tables : TPL Tables can process an unlimited amount of data and produce tables that range in size from a few lines to hundreds of pages. Subsets of the data can be selected and new variables can be computed from incoming data or from tabulated values. Alternate computations can be performed depending on specified c...
TPL Tables : TPL Tables can read files with data in fixed columns or delimited file types such as CSV (Comma Separated Values). TPL-SQL, an optional add-on feature, provides direct access from TPL Tables to SQL databases produced by products such as Sybase and Oracle. In the Windows version, TPL-SQL can access databases...
TPL Tables : TPL Tables automatically formats table output according to the table specification, available names and labels, and default settings. Tables can be created in PostScript or as text. Additional format features allow control of such things as page size, table orientation and column widths. Rows or columns can...
TPL Tables : Tables can be exported as PDF, HTML, or CSV. The Windows version also allows tables to be exported for use as input to PC-Axis [3].
TPL Tables : Home page for QQQ Software, Inc. and TPL Tables [4] QQQ Software, Inc. download page [5]. Contains various documentation files, including the TPL Tables, Version 7.0 User Manual in PDF format.
TPL Tables : Mendelssohn, Rudolph C., The Bureau of Labor Statistics' Table Producing Language (TPL), ACM Press, New York, NY, 1974. Survey Data Processing: A Review of Issues and Procedures, United Nations Department of Technical Co-operation for Development and Statistical Office, New York, 1982.
TWANG : TWANG, the Toolkit for Weighting and Analysis of Nonequivalent Groups, developed by the statistics group of the RAND Corporation, contains a set of functions to support Rubin causal modeling of observational data through the estimation and evaluation of propensity score weights by applying gradient boosting. It...
TWANG : Official website CRAN site == References ==
The Unscrambler : The Unscrambler X is a commercial software product for multivariate data analysis, used for calibration of multivariate data which is often in the application of analytical data such as near infrared spectroscopy and Raman spectroscopy, and development of predictive models for use in real-time spectro...
The Unscrambler : The Unscrambler X was an early adaptation of the use of partial least squares (PLS). Other techniques supported include principal component analysis (PCA), 3-way PLS, multivariate curve resolution, design of experiments, supervised classification, unsupervised classification and cluster analysis. The ...
Vecchia approximation : Vecchia approximation is a Gaussian processes approximation technique originally developed by Aldo Vecchia, a statistician at the United States Geological Survey. It is one of the earliest attempts to use Gaussian processes in high-dimensional settings. It has since been extensively generalized givi...
Vecchia approximation : A joint probability distribution for events A , B , and C , denoted P ( A , B , C ) , can be expressed as P ( A , B , C ) = P ( A ) P ( B | A ) P ( C | A , B ) Vecchia's approximation takes the form, for example, P ( A , B , C ) ≈ P ( A ) P ( B | A ) P ( C | A ) and is accurate when events ...
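The factorization above can be checked numerically. Below is a minimal sketch with three binary variables and made-up probabilities (numpy only, not any of the packages mentioned later): the joint is deliberately built so that C is conditionally independent of B given A, in which case dropping B from C's conditioning set loses nothing and the Vecchia factorization is exact.

```python
import numpy as np

# A joint distribution over three binary variables A, B, C, built so that
# C depends on A but not on B given A: p(a,b,c) = p(a) p(b|a) p(c|a).
p_a = np.array([0.6, 0.4])
p_b_given_a = np.array([[0.7, 0.3], [0.2, 0.8]])  # rows indexed by a
p_c_given_a = np.array([[0.9, 0.1], [0.5, 0.5]])

joint = np.einsum('a,ab,ac->abc', p_a, p_b_given_a, p_c_given_a)

# Vecchia-style approximation: factor the joint with the chain rule,
# but condition C only on A (dropping B from its conditioning set).
marg_a = joint.sum(axis=(1, 2))                 # p(a)
cond_b = joint.sum(axis=2) / marg_a[:, None]    # p(b|a)
cond_c = joint.sum(axis=1) / marg_a[:, None]    # p(c|a)
vecchia = np.einsum('a,ab,ac->abc', marg_a, cond_b, cond_c)

# Because C ⊥ B | A holds in this joint, the approximation is exact here.
print(np.abs(joint - vecchia).max())
```

For joints where C genuinely depends on B given A, the same construction gives an approximation rather than an identity, which is the usual situation in practice.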
Vecchia approximation : While conceptually simple, the assumption of the Vecchia approximation often proves to be fairly restrictive and inaccurate. This inspired important generalizations and improvements introduced in the basic version over the years: the inclusion of latent variables, more sophisticated conditioning...
Vecchia approximation : Several packages have been developed which implement some variants of the Vecchia approximation. GPvecchia is an R package available through CRAN which implements most versions of the Vecchia approximation GpGp is an R package available through CRAN which implements a scalable ordering method f...
ViSta, The Visual Statistics system : ViSta, the Visual Statistics system is a freeware statistical system developed by Forrest W. Young of the University of North Carolina. ViSta's current version is maintained by Pedro M. Valero-Mora of the University of Valencia and can be found at [1]. Old versions of ViSta and of the d...
ViSta, The Visual Statistics system : Young, F. W., Valero-Mora, P. M. & Friendly, M. (2006) Visual Statistics: Seeing Data with Interactive Graphics. Wiley ISBN 978-0-471-68160-1 Meissner, W. (2008) Book review of "Visual Statistics: Seeing Data with Interactive Graphics". Psychometrika 73, 1. Springer. ViSta is menti...
ViSta, The Visual Statistics system : This site keeps the last version of ViSta and other information [3] The original site for ViSta with old versions and documentation [4] Some plug-ins to extend ViSta's analysis options [5] Current version is 7.9.2.8 (March 2014) [6]
WinBUGS : WinBUGS is statistical software for Bayesian analysis using Markov chain Monte Carlo (MCMC) methods. It is based on the BUGS (Bayesian inference Using Gibbs Sampling) project started in 1989. It runs under Microsoft Windows, though it can also be run on Linux or Mac using Wine. It was developed by the BUGS Pr...
WinBUGS : Ntzoufras, Ioannis (2008). "WinBUGS Software: Introduction, Setup, and Basic Analysis". Bayesian Modeling Using WinBUGS. Wiley. pp. 83–123. ISBN 978-0-470-14114-4.
WinBUGS : WinBUGS Homepage
WINdows KwikStat : WINKS Statistical Data Analytics (SDA) & Graphs is a statistical analysis software package. It was first marketed in 1988 by the company TexaSoft (founded in 1981) under the name KWIKSTAT. The name WINdows KwikStat was shortened to WINKS when the Windows version was deployed. WINKS is sold in two editions: th...
WINdows KwikStat : Descriptive statistics; Grubbs outlier test; t-tests: single, independent, and paired; multiple regression: simple, stepwise, polynomial, all-possible; ANOVA: simple, multi-way with multiple comparisons, 95% CI; analysis of covariance; repeated measures ANOVA; correlation: Pearson, Spearman & partial; Mantel...
Winpepi : WinPepi is a freeware package of statistical programs for epidemiologists, comprising seven programs with over 120 modules. WinPepi is not a complete compendium of statistical routines for epidemiologists but it provides a very wide range of procedures, including those most commonly used and many that are not...
Winpepi : "WINPEPI (PEPI-for-Windows)".
World Programming System : The World Programming System, also known as WPS Analytics or WPS, is a software product developed by a company called World Programming (acquired by Altair Engineering). WPS Analytics supports users of mixed ability to access and process data and to perform data science tasks. It has interact...
World Programming System : WPS can use programs written in the language of SAS without the need for translating them into any other language. In this regard WPS is compatible with the SAS system. WPS has a built-in language interpreter able to process the language of SAS and produce similar results. WPS is available to...
World Programming System : Runs on Windows, macOS, z/OS, Linux (x86, Armv8 64-bit, IBM Power LE, IBM Z), and AIX; an integrated development environment based on Eclipse for Linux, macOS and Windows; support for language of SAS elements; support for the language of SAS Macros; Matrix Programming support using PROC IML. S...
World Programming System : Gartner recognized World Programming in their Cool Vendors in Data Science, 2014 Report.
World Programming System : In 2010 World Programming defended its use of the language of SAS in the High Court of England and Wales in SAS Institute Inc. v World Programming Ltd. The software was the subject of a lawsuit by SAS Institute. The EU Court of Justice ruled in favor of World Programming, stating that the cop...
World Programming System : World Programming web site
XLispStat : XLispStat is a statistical scientific package based on the XLISP language. Many free statistical packages, such as ARC (for nonlinear curve fitting problems) and ViSta, are based on it. It includes a variety of statistical functions and methods, including routines for nonlinear curve fitting. Many add-on packa...
XLispStat : R (programming language)
XLispStat : Lisp-Stat and XLisp-Stat documentation (historical) XLispStat archive and related resources
Bayesian model reduction : Bayesian model reduction is a method for computing the evidence and posterior over the parameters of Bayesian models that differ in their priors. A full model is fitted to data using standard approaches. Hypotheses are then tested by defining one or more 'reduced' models with alternative (and...
Bayesian model reduction : Consider some model with parameters θ and a prior probability density on those parameters p ( θ ) . The posterior belief about θ after seeing the data p ( θ ∣ y ) is given by Bayes rule: p ( θ ∣ y ) = p ( y ∣ θ ) p ( θ ) / p ( y ) , where p ( y ) = ∫ p ( y ∣ θ ) p ( θ ) d θ . The second line of Equation 1 is the model evidence, which is the probability of observing the data gi...
Bayesian model reduction : Under Gaussian prior and posterior densities, as are used in the context of variational Bayes, Bayesian model reduction has a simple analytical solution. First define normal densities for the priors and posteriors: where the tilde symbol (~) indicates quantities relating to the reduced model ...
Bayesian model reduction : Consider a model with a parameter θ and Gaussian prior p ( θ ) = N ( 0 , 0.5² ) , which is the Normal distribution with mean zero and standard deviation 0.5 (illustrated in the Figure, left). This prior says that without any data, the parameter is expected to have value zero, but we are w...
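For the Gaussian case, the reduction step can be sketched in a few lines. This is a hedged illustration with made-up data (a single observation y, unit noise standard deviation, and a hypothetical tighter reduced prior N(0, 0.1²)), not the SPM implementation: the point is that the reduced model's posterior is recovered from the full model's prior and posterior alone, without refitting the data.

```python
# Conjugate 1-D Gaussian model: y ~ N(theta, sigma^2), prior theta ~ N(mu0, s0^2).
# The 0.5 prior standard deviation follows the worked example in the text;
# the observation y = 0.8 is made up for illustration.
sigma, y = 1.0, 0.8
mu0, s0 = 0.0, 0.5

# Full model fitted once: standard conjugate update (precisions add).
p0 = 1.0 / s0**2
p_post = p0 + 1.0 / sigma**2
mu_post = (p0 * mu0 + y / sigma**2) / p_post

# Reduced model: a hypothetical tighter prior N(0, 0.1^2). Bayesian model
# reduction derives its posterior from the full posterior alone:
#   reduced posterior precision = full posterior precision
#                                 - full prior precision + reduced prior precision
mu0_r, s0_r = 0.0, 0.1
p0_r = 1.0 / s0_r**2
p_red = p_post - p0 + p0_r
mu_red = (p_post * mu_post - p0 * mu0 + p0_r * mu0_r) / p_red

# Check against fitting the reduced model directly to the data.
p_direct = p0_r + 1.0 / sigma**2
mu_direct = (p0_r * mu0_r + y / sigma**2) / p_direct
print(abs(p_red - p_direct), abs(mu_red - mu_direct))  # both ~0
```

In the conjugate Gaussian case the agreement is exact; the practical appeal is that many reduced priors can be scored this way after a single fit of the full model.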
Bayesian model reduction : Bayesian model reduction is implemented in the Statistical Parametric Mapping toolbox, in the Matlab function spm_log_evidence_reduce.m . == References ==
Kneser–Ney smoothing : Kneser–Ney smoothing, also known as Kneser-Essen-Ney smoothing, is a method primarily used to calculate the probability distribution of n-grams in a document based on their histories. It is widely considered the most effective method of smoothing due to its use of absolute discounting by subtract...
Kneser–Ney smoothing : Let c ( w , w ′ ) be the number of occurrences of the word w followed by the word w ′ in the corpus. The equation for bigram probabilities is as follows: p_KN ( w_i ∣ w_{i−1} ) = max ( c ( w_{i−1} , w_i ) − δ , 0 ) / Σ_{w′} c ( w_{i−1} , w′ ) + λ_{w_{i−1}} p_KN ( w_i ) , where δ is a fixed discount value and λ_{w_{i−1}} is a normalizing constant for the interpolation with the lower-order (continuation) probability p_KN ( w_i ) ...
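A minimal bigram estimator along the lines of the equation above can be written as follows. This is an illustrative sketch, not a production implementation: δ = 0.75 is a conventional default discount, the toy corpus is made up, and the continuation probability p_KN(w) is taken as the number of distinct words preceding w divided by the number of distinct bigram types.

```python
from collections import Counter, defaultdict

def kneser_ney_bigram(tokens, delta=0.75):
    """Interpolated Kneser-Ney bigram probabilities from a token list."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    context_total = defaultdict(int)      # sum_{w'} c(prev, w')
    distinct_after = defaultdict(set)     # {w' : c(prev, w') > 0}
    distinct_before = defaultdict(set)    # {w' : c(w', w) > 0}
    for (w1, w2), c in bigrams.items():
        context_total[w1] += c
        distinct_after[w1].add(w2)
        distinct_before[w2].add(w1)
    n_bigram_types = len(bigrams)

    def p(w, prev):
        # Continuation probability: in how many distinct contexts w appears.
        p_cont = len(distinct_before[w]) / n_bigram_types
        total = context_total[prev]
        if total == 0:                    # unseen context: back off entirely
            return p_cont
        discounted = max(bigrams[(prev, w)] - delta, 0) / total
        lam = delta * len(distinct_after[prev]) / total
        return discounted + lam * p_cont
    return p

corpus = "the cat sat on the mat the cat ran".split()
p = kneser_ney_bigram(corpus)
vocab = set(corpus)
print(sum(p(w, "the") for w in vocab))   # ≈ 1.0 for a seen context
```

With this choice of λ the discounted mass is exactly redistributed, so the probabilities over the vocabulary sum to one for any observed context.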
Kneser–Ney smoothing : Modifications of this method also exist. Chen and Goodman's 1998 paper lists and benchmarks several such modifications. Computational efficiency and scaling to multi-core systems are the focus of Chen and Goodman's 1998 modification. This approach was once used for Google Translate under a MapReduc...
Lancichinetti–Fortunato–Radicchi benchmark : Lancichinetti–Fortunato–Radicchi benchmark is an algorithm that generates benchmark networks (artificial networks that resemble real-world networks). They have a priori known communities and are used to compare different community detection methods. The advantage of the benc...
Lancichinetti–Fortunato–Radicchi benchmark : The node degrees and the community sizes are distributed according to a power law, with different exponents. The benchmark assumes that both the degree and the community size have power law distributions with different exponents, γ and β , respectively. N is the number of...
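networkx ships an implementation of this generator, LFR_benchmark_graph, where tau1 and tau2 play the roles of the exponents γ and β described above and mu is the mixing fraction. The parameter values below follow the example given in the networkx documentation; they are one workable setting, not canonical benchmark values.

```python
import networkx as nx

# Generate an LFR benchmark network: power-law degrees (exponent tau1 = γ),
# power-law community sizes (exponent tau2 = β), mixing parameter mu.
n = 250
G = nx.LFR_benchmark_graph(
    n, tau1=3, tau2=1.5, mu=0.1,
    average_degree=5, min_community=20, seed=10,
)

# Each node carries the ground-truth community it was assigned to,
# which is what community-detection methods are scored against.
communities = {frozenset(G.nodes[v]["community"]) for v in G}
print(G.number_of_nodes(), len(communities))
```

The planted communities recovered from the node attributes are the "a priori known" partition that detection algorithms are compared with.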
Lancichinetti–Fortunato–Radicchi benchmark : Consider a partition into communities that do not overlap. In each iteration, the community of a randomly chosen node follows a distribution p ( C ) , which gives the probability that a randomly picked node belongs to community C . Consider a partition of the same networ...
Least-squares spectral analysis : Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in the long and...
Least-squares spectral analysis : The close connections between Fourier analysis, the periodogram, and the least-squares fitting of sinusoids have been known for a long time. However, most developments are restricted to complete data sets of equally spaced samples. In 1963, Freek J. M. Barning of Mathematisch Centrum, ...
Least-squares spectral analysis : The most useful feature of LSSA is enabling incomplete records to be spectrally analyzed — without the need to manipulate data or to invent otherwise non-existent data. Magnitudes in the LSSA spectrum depict the contribution of a frequency or period to the variance of the time series. ...
Least-squares spectral analysis : The LSSA can be implemented in less than a page of MATLAB code. In essence: "to compute the least-squares spectrum we must compute m spectral values ... which involves performing the least-squares approximation m times, each time to get [the spectral power] for a different frequency" I...
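The quoted recipe can be sketched in a few lines of Python rather than MATLAB: for each trial frequency, fit a sine and cosine pair to the (possibly unevenly spaced) samples by least squares and record the variance explained. Everything below is an illustrative sketch with synthetic data, not the Vaníček code.

```python
import numpy as np

def ls_spectrum(t, y, freqs):
    """Least-squares spectrum: at each trial frequency, fit a cosine and a
    sine by ordinary least squares and report the fraction of the series'
    variance explained by that fit."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        fit = A @ coef
        power[i] = (fit @ fit) / (y @ y)
    return power

# Unevenly spaced samples of a 0.3 Hz sinusoid plus a little noise;
# no interpolation or gap-filling is needed before the analysis.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 40, 120))
y = np.sin(2 * np.pi * 0.3 * t) + 0.1 * rng.standard_normal(120)

freqs = np.linspace(0.05, 1.0, 200)
spec = ls_spectrum(t, y, freqs)
print(freqs[np.argmax(spec)])   # peak close to the true 0.3 Hz
```

This is exactly the m-fold least-squares approximation the quotation describes: one small regression per trial frequency, with the spectral value at each frequency being the variance that regression accounts for.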
Least-squares spectral analysis : Non-uniform discrete Fourier transform Orthogonal functions SigSpec Sinusoidal model Spectral density Spectral density estimation, for competing alternatives
Least-squares spectral analysis : LSSA package freeware download, FORTRAN, Vaníček's least-squares spectral analysis method, from the University of New Brunswick. LSWAVE package freeware download, MATLAB, includes the Vaníček's least-squares spectral analysis method, from the U.S. National Geodetic Survey.
Risk-limiting audit : A risk-limiting audit (RLA) is a post-election tabulation auditing procedure which can limit the risk that the reported outcome in an election contest is incorrect. It generally involves (1) storing voter-verified paper ballots securely until they can be checked, and (2) manually examining a stati...
Risk-limiting audit : There are three general types of risk-limiting audits. Depending on the circumstances of the election and the auditing method, different numbers of ballots need to be hand-checked. For example, in a jurisdiction with 64,000 ballots tabulated in batches of 500 ballots each, an 8% margin of victory,...
Risk-limiting audit : The process starts by selecting a "risk limit", such as 9% in Colorado, meaning that if there are any erroneous winners in the initial results, the audit will catch at least 91% of them and let up to 9% stay undetected and take office. Another initial step is to decide whether to audit all contest...
Risk-limiting audit : As of early 2017, about half the states require some form of results audit. Typically, these states prescribe audits that check only a small flat percentage, such as 1%, of voting machines. As a result, few jurisdictions have samples large or timely enough to detect and correct tabulation errors b...
Risk-limiting audit : In 2018 the American Statistical Association, Brennan Center for Justice, Common Cause, Public Citizen and several election integrity groups endorsed all three methods of risk-limited audits. Their first five criteria are: EXAMINATION OF VOTER-VERIFIABLE PAPER BALLOTS: Audits require human examina...
Risk-limiting audit : Election audits Elections Electoral fraud Electoral integrity List of close election results == References ==
Sequence analysis in social sciences : In social sciences, sequence analysis (SA) is concerned with the analysis of sets of categorical sequences that typically describe longitudinal data. Analyzed sequences are encoded representations of, for example, individual life trajectories such as family formation, school to wo...
Sequence analysis in social sciences : Sequence analysis methods were first imported into the social sciences from the information and biological sciences (see Sequence alignment) by the University of Chicago sociologist Andrew Abbott in the 1980s, and they have since developed in ways that are unique to the social sci...
Sequence analysis in social sciences : A sequence s is an ordered list of elements (s1,s2,...,sl) taken from a finite alphabet A. For a set S of sequences, three sizes matter: the number n of sequences, the size a = |A| of the alphabet, and the length l of the sequences (which may differ across sequences). In s...
Sequence analysis in social sciences : Conventional SA consists essentially of building a typology of the observed trajectories. Abbott and Tsay (2000) describe this typical SA as a three-step program: 1. Coding individual narratives as sequences of states; 2. Measuring pairwise dissimilarities between sequences; and 3...
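The three-step program can be sketched as follows. A plain edit distance stands in for optimal matching (the costs, state codes, and toy trajectories are illustrative, not taken from any of the cited packages), and hierarchical clustering of the dissimilarity matrix yields the typology.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def edit_distance(s, t, sub_cost=2, indel_cost=1):
    """Optimal-matching-style dissimilarity: minimal total cost of
    substitutions and insertions/deletions turning s into t."""
    d = np.zeros((len(s) + 1, len(t) + 1))
    d[:, 0] = np.arange(len(s) + 1) * indel_cost
    d[0, :] = np.arange(len(t) + 1) * indel_cost
    for i, a in enumerate(s, 1):
        for j, b in enumerate(t, 1):
            d[i, j] = min(d[i - 1, j] + indel_cost,
                          d[i, j - 1] + indel_cost,
                          d[i - 1, j - 1] + (0 if a == b else sub_cost))
    return d[-1, -1]

# Step 1: trajectories coded as state sequences
# (S = in school, E = employed, U = unemployed; toy data).
seqs = ["SSEEEE", "SSSEEE", "SSEUEE", "UUUUEE", "UUUUUE", "UUUEEE"]

# Step 2: pairwise dissimilarity matrix.
n = len(seqs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = edit_distance(seqs[i], seqs[j])

# Step 3: cluster the matrix to obtain a typology of trajectories.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

On this toy data the two-cluster cut separates the school-to-work trajectories from the unemployment-dominated ones, which is the kind of typology the third step aims at.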
Sequence analysis in social sciences : These techniques have proved valuable in a variety of contexts. In life-course research, for example, research has shown that retirement plans are affected not just by the last year or two of one's life, but by how one's work and family careers unfolded over a period of sever...
Sequence analysis in social sciences : Two main statistical computing environments offer tools to conduct a sequence analysis in the form of user-written packages: Stata and R. Stata: SQ and SADI are general SA toolkits. MICT is dedicated to imputation of missing elements in sequences. R: TraMineR with its extension Tra...
Sequence analysis in social sciences : The first international conference dedicated to social-scientific research that uses sequence analysis methods – the Lausanne Conference on Sequence Analysis, or LaCOSA – was held in Lausanne, Switzerland in June 2012. A second conference (LaCOSA II) was held in Lausanne in June 2...
Sequence analysis in social sciences : The homepage of the Sequence Analysis Association. [1]Andrew Abbott's 1995 review of sociological approaches to sequence analysis. The TraMineR page Brendan Halpin's sequence analysis page at the University of Limerick. Laurent Lesnard's Stata plugin for sequence analysis using th...
Synthetic control method : The synthetic control method is an econometric method used to evaluate the effect of large-scale interventions. It was proposed in a series of articles by Alberto Abadie and his coauthors. A synthetic control is a weighted average of several units (such as regions or companies) combined to re...
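The core estimation problem, finding non-negative donor weights that sum to one and best reproduce the treated unit's pre-treatment outcomes, can be sketched with scipy. All numbers below are made up for illustration (real applications also match on covariates and use careful donor selection); the treated unit is constructed as a known mix of donors so the recovered weights can be checked.

```python
import numpy as np
from scipy.optimize import minimize

# Toy pre-treatment outcomes: rows are time periods, columns are donor units.
rng = np.random.default_rng(0)
donors = rng.normal(size=(20, 4))
true_w = np.array([0.5, 0.3, 0.2, 0.0])   # hypothetical mixing weights
treated = donors @ true_w                  # "treated" unit before treatment

def objective(w):
    # Squared pre-treatment discrepancy between treated unit and synthetic control.
    return np.sum((treated - donors @ w) ** 2)

# Weights constrained to the simplex: non-negative and summing to one.
n = donors.shape[1]
res = minimize(
    objective, np.full(n, 1 / n),
    bounds=[(0, 1)] * n,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
)
print(np.round(res.x, 3))   # close to the mixing weights above
```

The fitted weights define the synthetic control; its post-treatment trajectory then serves as the counterfactual against which the treated unit's outcome is compared.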
Synthetic control method : Difference in difference Regression discontinuity Instrumental variables estimation == References ==
Vote counting : Vote counting is the process of counting votes in an election. It can be done manually or by machines. In the United States, the compilation of election returns and validation of the outcome that forms the basis of the official results is called canvassing. Counts are simplest in elections where just on...
Vote counting : Manual counting, also known as hand-counting, requires a physical ballot that represents voter intent. The physical ballots are taken out of ballot boxes and/or envelopes, read and interpreted; then results are tallied. Manual counting may be used for election audits and recounts in areas where automate...
Vote counting : Mechanical voting machines have voters selecting switches (levers), pushing plastic chips through holes, or pushing mechanical buttons which increment a mechanical counter (sometimes called the odometer) for the appropriate candidate. There is no record of individual votes to check.
Vote counting : Electronic machines for elections are being procured around the world, often with donor money. In places with honest independent election commissions, machines can add efficiency, though not usually transparency. Where the election commission is weaker, expensive machines can be fetishized, waste money ...
Vote counting : Recount Tally (voting) Electronic voting Electronic voting in Switzerland Voting machine Electoral system Ballot Election audits Elections Electoral fraud Electoral integrity List of close election results
Vote counting : The Election Technology Library research list – a comprehensive list of research relating to technology use in elections E-Voting information from ACE Project AEI-Brookings Election Reform Project Voting and Elections by Douglas W. Jones: Thorough articles about the history and problems with Voting Mach...
Administrative data : Administrative data are collected by governments or other organizations for non-statistical reasons to provide overviews on registration, transactions, and record keeping. They evaluate part of the output of administering a program. Border records, pensions, taxation, and vital records like birth...
Administrative data : Records of land holding have been used to administer taxes around the world for many centuries. In the nineteenth century, international institutions for cooperation were established, such as the International Statistical Institute. In recent decades administrative data on individuals and organizati...
Administrative data : Open administrative data allows transparency, participation, efficiency, and economic innovation. Linked administrative data allows for the creation of large data-sets and has become a vital tool for central and local governments conducting research. By linking sections of data individually, the o...
Administrative data : Some disadvantages of administrative data are that the information collected is not always open and is restricted to certain users. There is also a lack of control over content; for example, Statistics Canada uses administrative data to enrich or replace survey data, or to increase the efficiency of ...
Data : Data ( DAY-tə, US also DAT-ə) are a collection of discrete or continuous values that convey information, describing the quantity, quality, fact, statistics, other basic units of meaning, or simply sequences of symbols that may be further interpreted formally. A datum is an individual value in a collection of dat...
Data : The Latin word data is the plural of datum, "(thing) given," and the neuter past participle of dare, "to give". The first English use of the word "data" is from the 1640s. The word "data" was first used to mean "transmissible and storable computer information" in 1946. The expression "data processing" was first ...
Data : Data, information, knowledge, and wisdom are closely related concepts, but each has its role concerning the other, and each term has its meaning. According to a common view, data is collected and analyzed; data only becomes information suitable for making decisions once it has been analyzed in some fashion. One ...
Data : With respect to ownership of data collected in the course of marketing or other corporate collection, data has been characterized according to "party" depending on how close the data is to the source or if it has been generated through additional processing. "Zero-party data" refers to data that customers "inten...
Data : Whenever data needs to be registered, data exists in the form of a data document. Kinds of data documents include: data repository data study data set software data paper database data handbook data journal Some of these data documents (data repositories, data studies, data sets, and software) are indexed in Dat...
Data : An important field in computer science, technology, and library science is the longevity of data. Scientific research generates huge amounts of data, especially in genomics and astronomy, but also in the medical sciences, e.g. in medical imaging. In the past, scientific data has been published in papers and book...
Data : Although data is also increasingly used in other fields, it has been suggested that their highly interpretive nature might be at odds with the ethos of data as "given". Peter Checkland introduced the term capta (from the Latin capere, "to take") to distinguish between an immense number of possible data and a sub...
Data : Data is a singular noun (a detailed assessment)
Univariate (statistics) : Univariate is a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry. Like other kinds of data, univariate data can be visualized ...
Univariate (statistics) : Some univariate data consists of numbers (such as the height of 65 inches or the weight of 100 pounds), while others are nonnumerical (such as eye colors of brown or blue). Generally, the terms categorical univariate data and numerical univariate data are used to distinguish between these type...
Univariate (statistics) : Univariate analysis is the simplest form of analyzing data. Uni means "one", so the data has only one variable (univariate). Univariate analysis examines each variable separately. Data is gathered for the purpose of answering a question, or more specifically, a research question. Univar...
Univariate (statistics) : The most frequently used graphical illustrations for univariate data are:
Univariate (statistics) : Univariate distribution is a dispersal type of a single random variable described either with a probability mass function (pmf) for discrete probability distribution, or probability density function (pdf) for continuous probability distribution. It is not to be confused with multivariate distr...
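The pmf/pdf distinction can be illustrated with scipy; the specific distributions below (a Binomial for the discrete case, a standard normal for the continuous case) are arbitrary examples.

```python
from scipy.stats import binom, norm

# Discrete univariate distribution: the pmf assigns a probability to each value.
print(binom.pmf(3, n=10, p=0.5))      # P(X = 3) for Binomial(10, 0.5)

# Continuous univariate distribution: the pdf gives a density, and
# probabilities come from integrating it; the cdf does that integral.
print(norm.cdf(1) - norm.cdf(-1))     # P(-1 < X < 1) for a standard normal
```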
Univariate (statistics) : Univariate Univariate distribution Bivariate analysis Multivariate analysis List of probability distributions == References ==
Methodological advisor : A methodological advisor or statistical consultant provides methodological and statistical advice and guidance to clients interested in making decisions regarding the design of studies, the collection and analysis of data, and the presentation and dissemination of research findings. Trained in ...
Methodological advisor : Methodological advisors generally have post-graduate training in statistics and relevant practical experience. Advisors may also have significant education and experience in the particular field they work in. Some universities offer specific graduate programmes in fields such as biostatistics, ...
Methodological advisor : The role of a methodological advisor varies from project to project, but can include any point in the research cycle. While cross-sectional consulting may only occur at one point during a project, longitudinal consulting may mean that the advisor stays with the project from beginning to end. H...
Methodological advisor : Although statisticians were traditionally trained largely on a technical skill set, modern training focuses on more than methodological questions. It also requires advisors to be proficient in communication, teamwork, and problem-solving skills. They have to be able to elicit explanations fro...
Methodological advisor : Statistician Management consulting List of university statistical consulting centers
Methodological advisor : Boen, J. R., & Zahn, D. A. (1982). Human Side of Statistical Consulting. Wadsworth Publishing Company. Cabrera, J., McDougall, A. (2002). Statistical Consulting. Springer. Derr, J. (1999). Statistical Consulting: A Guide to Effective Communication. Duxbury Press. Hand, D. J., & Everitt, B.S. (1...
Methodological advisor : Directory of Statistical Consultants provided by the Royal Statistical Society
Statistician : A statistician is a person who works with theoretical or applied statistics. The profession exists in both the private and public sectors. It is common to combine statistical knowledge with expertise in other subjects, and statisticians may work as employees or as statistical consultants.
Statistician : According to the United States Bureau of Labor Statistics, as of 2014, 26,970 jobs were classified as statistician in the United States. Of these people, approximately 30 percent worked for governments (federal, state, or local). As of October 2021, the median pay for statisticians in the United States w...