Title: Combining Symmetry Breaking and Global Constraints
|
Abstract: We propose a new family of constraints which combine lexicographical ordering constraints for symmetry breaking with other common global constraints. We give a general purpose propagator for this family of constraints, and show how to improve its complexity by exploiting properties of the included global constraints.
|
Title: A pseudo empirical likelihood approach for stratified samples with nonresponse
|
Abstract: Nonresponse is common in surveys. When the response probability of a survey variable $Y$ depends on $Y$ through an observed auxiliary categorical variable $Z$ (i.e., response is conditionally independent of $Y$ given $Z$), a simple method often used in practice is to use $Z$ categories as imputation cells and construct estimators by imputing nonrespondents or reweighting respondents within each imputation cell. This simple method, however, is inefficient when some $Z$ categories have small sizes, and ad hoc methods are often applied to collapse small imputation cells. Assuming a parametric model on the conditional probability of $Z$ given $Y$ and a nonparametric model on the distribution of $Y$, we develop a pseudo empirical likelihood method to provide more efficient survey estimators. Our method avoids ad hoc collapsing of small $Z$ categories, since reweighting or imputation is done across $Z$ categories. Asymptotic distributions for estimators of population means based on the pseudo empirical likelihood method are derived. For variance estimation, we consider a bootstrap procedure and establish its consistency. Some simulation results are provided to assess the finite sample performance of the proposed estimators.
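For context, a minimal pandas sketch of the simple cell-based reweighting baseline the abstract contrasts with (the data frame, column names, and values are invented for illustration; the instability with small cells is exactly what motivates the proposed method):

```python
import numpy as np
import pandas as pd

# Toy data: z is the observed auxiliary category; NaN in y marks nonresponse.
df = pd.DataFrame({
    "z": ["a", "a", "b", "b", "b", "b"],
    "y": [1.0, 2.0, 3.0, np.nan, 5.0, np.nan],
})
cell_size = df.groupby("z")["y"].transform("size")    # all units in the cell
cell_resp = df.groupby("z")["y"].transform("count")   # respondents in the cell
resp = df["y"].notna()
weights = (cell_size / cell_resp)[resp]               # unstable when cell_resp is small
y_bar = np.average(df.loc[resp, "y"], weights=weights)
```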
|
Title: Real-time Texture Error Detection
|
Abstract: This paper advocates an improved solution for the real-time detection of texture errors that occur in textile production. The research focuses on mono-color products with a 3D texture model (Jacquard fabrics). This is a more difficult task than, for example, detecting errors in 2D multicolor textures.
|
Title: A new approach to Cholesky-based covariance regularization in high dimensions
|
Abstract: In this paper we propose a new regression interpretation of the Cholesky factor of the covariance matrix, as opposed to the well known regression interpretation of the Cholesky factor of the inverse covariance, which leads to a new class of regularized covariance estimators suitable for high-dimensional problems. Regularizing the Cholesky factor of the covariance via this regression interpretation always results in a positive definite estimator. In particular, one can obtain a positive definite banded estimator of the covariance matrix at the same computational cost as the popular banded estimator proposed by Bickel and Levina (2008b), which is not guaranteed to be positive definite. We also establish theoretical connections between banding Cholesky factors of the covariance matrix and its inverse and constrained maximum likelihood estimation under the banding constraint, and compare the numerical performance of several methods in simulations and on a sonar data example.
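A rough numpy sketch of one reading of the banding construction: each variable is regressed on the $k$ most recent latent residuals, and the estimator is assembled as $LDL^T$, which is positive semi-definite by construction. The variable ordering and band width $k$ are assumed given; this is not the authors' reference implementation:

```python
import numpy as np

def banded_cholesky_cov(X, k):
    """Banded covariance estimate via sequential regressions on past residuals.
    Positive semi-definite by construction, since it is assembled as L @ D @ L.T."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    eps = np.zeros((n, p))               # residuals of the successive regressions
    L = np.eye(p)
    eps[:, 0] = Xc[:, 0]
    for t in range(1, p):
        lo = max(0, t - k)
        Z = eps[:, lo:t]                 # only the k most recent residuals
        coef, *_ = np.linalg.lstsq(Z, Xc[:, t], rcond=None)
        L[t, lo:t] = coef
        eps[:, t] = Xc[:, t] - Z @ coef
    D = np.diag(eps.var(axis=0, ddof=1))
    return L @ D @ L.T

Sigma_hat = banded_cholesky_cov(np.random.randn(200, 30), k=3)
```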
|
Title: The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs
|
Abstract: Recent methods for estimating sparse undirected graphs for real-valued data in high dimensional problems rely heavily on the assumption of normality. We show how to use a semiparametric Gaussian copula--or "nonparanormal"--for high dimensional inference. Just as additive models extend linear models by replacing linear functions with a set of one-dimensional smooth functions, the nonparanormal extends the normal by transforming the variables by smooth functions. We derive a method for estimating the nonparanormal, study the method's theoretical properties, and show that it works well in many examples.
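A minimal sketch of the two-step recipe the abstract suggests, assuming scikit-learn is available: map each variable to normal scores through its empirical CDF, then fit a sparse Gaussian graphical model. The rank-based transform below is a simple stand-in for the smooth estimated transforms studied in the paper:

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLassoCV

def nonparanormal_scores(X):
    """Map each column to approximate normal scores via its empirical CDF."""
    n = X.shape[0]
    U = rankdata(X, axis=0) / (n + 1.0)   # ranks scaled into (0, 1)
    return norm.ppf(U)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10)) ** 3   # toy data with non-Gaussian margins
Z = nonparanormal_scores(X)
model = GraphicalLassoCV().fit(Z)
precision = model.precision_              # zero pattern estimates the graph
```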
|
Title: Compressive Sensing Using Low Density Frames
|
Abstract: We consider the compressive sensing of a sparse or compressible signal $\bf x \in \mathbb R^M$. We explicitly construct a class of measurement matrices, referred to as low density frames, and develop decoding algorithms that produce an accurate estimate $\hat{\bf x}$ even in the presence of additive noise. Low density frames are sparse matrices and have small storage requirements. Our decoding algorithms for these frames have $O(M)$ complexity. Simulation results are provided, demonstrating that our approach significantly outperforms state-of-the-art recovery algorithms for numerous cases of interest. In particular, for Gaussian sparse signals and Gaussian noise, we are within 2 dB of the theoretical lower bound in most cases.
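The paper's decoders are not reproduced here; as a generic stand-in, the sketch below pairs a sparse (low density) measurement matrix with off-the-shelf orthogonal matching pursuit from scikit-learn, purely to illustrate the setup:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
M, m, k = 512, 128, 10                     # ambient dim, measurements, sparsity
Phi = sparse_random(m, M, density=0.05, random_state=0).toarray()  # sparse frame
x = np.zeros(M)
x[rng.choice(M, k, replace=False)] = rng.standard_normal(k)        # k-sparse signal
y = Phi @ x + 0.01 * rng.standard_normal(m)                        # noisy measurements

x_hat = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y).coef_
```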
|
Title: Component-Wise Markov Chain Monte Carlo: Uniform and Geometric Ergodicity under Mixing and Composition
|
Abstract: It is common practice in Markov chain Monte Carlo to update the simulation one variable (or sub-block of variables) at a time, rather than conduct a single full-dimensional update. When it is possible to draw from each full-conditional distribution associated with the target, this is just a Gibbs sampler. Often at least one of the Gibbs updates is replaced with a Metropolis-Hastings step, yielding a Metropolis-Hastings-within-Gibbs algorithm. Strategies for combining component-wise updates include composition, random sequence, and random scans. While these strategies can ease MCMC implementation and produce superior empirical performance compared to full-dimensional updates, the theoretical convergence properties of the associated Markov chains have received limited attention. We present conditions under which some component-wise Markov chains converge to the stationary distribution at a geometric rate. We pay particular attention to the connections between the convergence rates of the various component-wise strategies. This is important since it ensures the existence of tools that an MCMC practitioner can use to be as confident in the simulation results as if they were based on independent and identically distributed samples. We illustrate our results in two examples: one involving a hierarchical linear mixed model and one involving maximum likelihood estimation for mixed models.
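A toy Metropolis-Hastings-within-Gibbs sampler with a deterministic (composition) scan, on an assumed bivariate normal target; the abstract's results concern the convergence rates of exactly such component-wise strategies:

```python
import numpy as np

rng = np.random.default_rng(0)
RHO = 0.8

def log_target(x, y):
    # Toy correlated bivariate normal; stands in for a real posterior.
    return -0.5 * (x * x - 2 * RHO * x * y + y * y) / (1 - RHO**2)

def metropolis_within_gibbs(n_iter=5000, step=1.0):
    """Deterministic-scan strategy: update x, then y, once per iteration."""
    x = y = 0.0
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        for comp in (0, 1):
            xp, yp = (x + step * rng.standard_normal(), y) if comp == 0 else \
                     (x, y + step * rng.standard_normal())
            if np.log(rng.random()) < log_target(xp, yp) - log_target(x, y):
                x, y = xp, yp
        chain[i] = x, y
    return chain
```

A random-scan variant would instead pick the component to update uniformly at random at each step.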
|
Title: Online Estimation of SAT Solving Runtime
|
Abstract: We present an online method for estimating the cost of solving SAT problems. Modern SAT solvers present several challenges for estimating search cost, including non-chronological backtracking, learning, and restarts. Our method uses a linear model trained on data gathered at the start of search. We show the effectiveness of this method on random and structured problems. We demonstrate that predictions made in early restarts can be used to improve later predictions. We also show that such cost estimates can be used to select a solver from a portfolio.
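A minimal sketch of the kind of online model described: regress (log) search cost on features gathered early in the run. The features and data below are hypothetical; the paper's actual feature set is not specified here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-instance features logged during the first restart
# (conflict rate, mean learnt-clause length, mean decision depth).
X_early = np.array([[120.0, 14.2, 33.0],
                    [340.0,  9.8, 21.0],
                    [ 80.0, 17.5, 41.0],
                    [510.0,  8.1, 18.0]])
search_cost = np.array([9.5e5, 2.1e4, 3.3e6, 7.8e3])   # e.g. total conflicts

model = LinearRegression().fit(X_early, np.log(search_cost))
estimates = np.exp(model.predict(X_early))   # could drive portfolio selection
```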
|
Title: Support points of locally optimal designs for nonlinear models with two parameters
|
Abstract: We propose a new approach for identifying the support points of a locally optimal design when the model is nonlinear. In contrast to the commonly used geometric approach, we use an approach based on algebraic tools. Considerations are restricted to models with two parameters, and the general results are applied to commonly used special cases, including logistic, probit, double exponential and double reciprocal models for binary data, a loglinear Poisson regression model for count data, and the Michaelis--Menten model. The approach, which is also of value for multi-stage experiments, works with both constrained and unconstrained design regions and is relatively easy to implement.
|
Title: Modeling the Experience of Emotion
|
Abstract: Affective computing has proven to be a viable field of research comprising a large number of multidisciplinary researchers, resulting in work that is widely published. The majority of this work consists of computational models of emotion recognition, computational modeling of causal factors of emotion, and emotion expression through rendered and robotic faces. A smaller part is concerned with modeling the effects of emotion, formal modeling of cognitive appraisal theory, and models of emergent emotions. Part of the motivation for affective computing as a field is to better understand emotional processes through computational modeling. One of the four major topics in affective computing is computers that have emotions (the others are recognizing, expressing and understanding emotions). A critical and neglected aspect of having emotions is the experience of emotion (Barrett, Mesquita, Ochsner, and Gross, 2007): what does the content of an emotional episode look like, how does this content change over time, and when do we call the episode emotional? Few modeling efforts have these topics as their primary focus. The launch of a journal on synthetic emotions should motivate research initiatives in this direction, and this research should have a measurable impact on emotion research in psychology. I show that a good way to do so is to investigate the psychological core of what an emotion is: an experience. I present ideas on how the experience of emotion could be modeled and provide evidence that several computational models of emotion are already addressing the issue.
|
Title: On Requirements for Programming Exercises from an E-learning Perspective
|
Abstract: In this work, we deal with the question of modeling programming exercises for novices in an e-learning scenario. Our purpose is to identify basic requirements, raise some key questions, and propose potential answers from a conceptual perspective. As a general picture, we hypothetically situate our work in a context where e-learning instructional material needs to be adapted to form part of an introductory Computer Science (CS) e-learning course at the CS1 level. We have in mind a potential course which aims at improving novices' skills and knowledge of the essentials of programming by using e-learning based approaches in connection (at least conceptually) with a general host framework like Activemath (www.activemath.org). Our elaboration covers contextual and, particularly, cognitive elements, preparing the terrain for eventual research stages in a derived project, as indicated. We concentrate our main efforts on reasoning mechanisms about exercise complexity that can eventually offer tool support for the task of exercise authoring. We base our requirements analysis on our own perception of the exercise subsystem provided by Activemath, especially within the domain reasoner area. We enrich the analysis by bringing to the discussion several relevant contextual elements from CS1 courses, their definition, and their implementation. Concerning cognitive models and exercises, we build upon the principles of Bloom's Taxonomy as a relatively standardized basis and use them as a framework for the study and analysis of complexity in basic programming exercises. Our analysis includes requirements for the domain reasoner which are necessary for the exercise analysis. For such a purpose, we propose a three-layered conceptual model considering exercise evaluation, programming, and metaprogramming.
|
Title: Tagging multimedia stimuli with ontologies
|
Abstract: Successful management of emotional stimuli is a pivotal issue concerning Affective Computing (AC) and the related research. As a subfield of Artificial Intelligence, AC is concerned not only with the design of computer systems and the accompanying hardware that can recognize, interpret, and process human emotions, but also with the development of systems that can trigger human emotional response in an ordered and controlled manner. This requires the maximum attainable precision and efficiency in the extraction of data from emotionally annotated databases. While these databases do use keywords or tags for description of the semantic content, they provide neither the flexibility nor the leverage needed to efficiently extract the pertinent emotional content. We therefore propose the introduction of ontologies as a new paradigm for the description of emotionally annotated data. The ability to select and sequence data based on their semantic attributes is vital for any study involving metadata, semantics and ontological sorting, like the Semantic Web or the Social Semantic Desktop, and the approach described in the paper facilitates reuse in these areas as well.
|
Title: Estimation of cosmological parameters using adaptive importance sampling
|
Abstract: We present a Bayesian sampling algorithm called adaptive importance sampling or Population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov Chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower computational time for PMC. In the case of WMAP5 data, for example, the wall-clock time reduces from several days for MCMC to a few hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analysed and discussed.
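A bare-bones Population Monte Carlo sketch on a toy target, assuming a Gaussian proposal adapted by importance-weighted moment matching. Each iteration's draws are independent given the current proposal, which is what makes the workload easy to parallelize:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(1)

def log_post(theta):
    return -0.5 * np.sum((theta - 2.0) ** 2, axis=-1)   # toy posterior

def pmc(n_iter=10, n_samples=2000, d=2):
    """Iteratively adapt a Gaussian proposal via importance weights."""
    mu, cov = np.zeros(d), 4.0 * np.eye(d)
    for _ in range(n_iter):
        theta = rng.multivariate_normal(mu, cov, size=n_samples)  # parallelizable
        logw = log_post(theta) - mvn.logpdf(theta, mu, cov)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        mu = w @ theta                                   # weighted moment updates
        diff = theta - mu
        cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(d)
    return theta, w

samples, weights = pmc()
```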
|
Title: Algorithms for Weighted Boolean Optimization
|
Abstract: The Pseudo-Boolean Optimization (PBO) and Maximum Satisfiability (MaxSAT) problems are natural optimization extensions of Boolean Satisfiability (SAT). In the recent past, different algorithms have been proposed for PBO and for MaxSAT, despite the existence of straightforward mappings from PBO to MaxSAT and vice-versa. This paper proposes Weighted Boolean Optimization (WBO), a new unified framework that aggregates and extends PBO and MaxSAT. In addition, the paper proposes a new unsatisfiability-based algorithm for WBO, based on recent unsatisfiability-based algorithms for MaxSAT. Besides standard MaxSAT, the new algorithm can also be used to solve weighted MaxSAT and PBO, handling pseudo-Boolean constraints either natively or by translation to clausal form. Experimental results illustrate that unsatisfiability-based algorithms for MaxSAT can be orders of magnitude more efficient than existing dedicated algorithms. Finally, the paper illustrates how other algorithms for either PBO or MaxSAT can be extended to WBO.
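The WBO algorithm itself is not reproduced here; as an illustration of the unsatisfiability-based (core-guided) family it builds on, the sketch below solves a tiny weighted MaxSAT instance, assuming the python-sat (PySAT) package is installed:

```python
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2   # core-guided, i.e. unsatisfiability-based

wcnf = WCNF()
wcnf.append([1, 2])                  # hard clause: x1 OR x2
wcnf.append([-1], weight=3)          # soft: prefer NOT x1 (weight 3)
wcnf.append([-2], weight=1)          # soft: prefer NOT x2 (weight 1)

with RC2(wcnf) as rc2:
    model = rc2.compute()            # optimal: falsify the cheaper soft clause
    print(model, rc2.cost)           # cost = total weight of falsified softs
```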
|
Title: Decomposition, Reformulation, and Diving in University Course Timetabling
|
Abstract: In many real-life optimisation problems, there are multiple interacting components in a solution. For example, different components might specify assignments to different kinds of resource. Often, each component is associated with different sets of soft constraints, and so with different measures of soft constraint violation. The goal is then to minimise a linear combination of such measures. This paper studies an approach to such problems, which can be thought of as multiphase exploitation of multiple objective-/value-restricted submodels. In this approach, only one computationally difficult component of a problem and the associated subset of objectives is considered at first. This produces partial solutions, which define interesting neighbourhoods in the search space of the complete problem. Often, it is possible to pick the initial component so that variable aggregation can be performed at the first stage, and the neighbourhoods to be explored next are guaranteed to contain feasible solutions. Using integer programming, it is then easy to implement heuristics producing solutions with bounds on their quality. Our study is performed on a university course timetabling problem used in the 2007 International Timetabling Competition, also known as the Udine Course Timetabling Problem. In the proposed heuristic, an objective-restricted neighbourhood generator produces assignments of periods to events, with decreasing numbers of violations of two period-related soft constraints. Those are relaxed into assignments of events to days, which define neighbourhoods that are easier to search with respect to all four soft constraints. Integer programming formulations for all subproblems are given and evaluated using ILOG CPLEX 11. The wider applicability of this approach is analysed and discussed.
|
Title: Efficient Human Computation
|
Abstract: Collecting large labeled data sets is a laborious and expensive task, whose scaling up requires division of the labeling workload between many teachers. When the number of classes is large, miscorrespondences between the labels given by the different teachers are likely to occur, which, in the extreme case, may reach total inconsistency. In this paper we describe how globally consistent labels can be obtained, despite the absence of teacher coordination, and discuss the possible efficiency of this process in terms of human labor. We define a notion of label efficiency, measuring the ratio between the number of globally consistent labels obtained and the number of labels provided by distributed teachers. We show that the efficiency depends critically on the ratio $\alpha$ between the number of data instances seen by a single teacher and the number of classes. We suggest several algorithms for the distributed labeling problem, and analyze their efficiency as a function of $\alpha$. In addition, we provide an upper bound on label efficiency for the case of completely uncoordinated teachers, and show that efficiency approaches 0 as the ratio between the number of labels each teacher provides and the number of classes drops (i.e., $\alpha \to 0$).
|
Title: Symmetry Breaking Using Value Precedence
|
Abstract: We present a comprehensive study of the use of value precedence constraints to break value symmetry. We first give a simple encoding of value precedence into ternary constraints that is both efficient and effective at breaking symmetry. We then extend value precedence to deal with a number of generalizations like wreath value and partial interchangeability. We also show that value precedence is closely related to lexicographical ordering. Finally, we consider the interaction between value precedence and symmetry breaking constraints for variable symmetries.
|
Title: Complexity of Terminating Preference Elicitation
|
Abstract: Complexity theory is a useful tool to study computational issues surrounding the elicitation of preferences, as well as the strategic manipulation of elections aggregating together preferences of multiple agents. We study here the complexity of determining when we can terminate eliciting preferences, and prove that the complexity depends on the elicitation strategy. We show, for instance, that it may be better from a computational perspective to elicit all preferences from one agent at a time than to elicit individual preferences from multiple agents. We also study the connection between the strategic manipulation of an election and preference elicitation. We show that what we can manipulate affects the computational complexity of manipulation. In particular, we prove that there are voting rules which are easy to manipulate if we can change all of an agent's vote, but computationally intractable if we can change only some of their preferences. This suggests that, as with preference elicitation, a fine-grained view of manipulation may be informative. Finally, we study the connection between predicting the winner of an election and preference elicitation. Based on this connection, we identify a voting rule where it is computationally difficult to decide the probability of a candidate winning given a probability distribution over the votes.
|
Title: The Complexity of Reasoning with Global Constraints
|
Abstract: Constraint propagation is one of the techniques central to the success of constraint programming. To reduce search, fast algorithms associated with each constraint prune the domains of variables. With global (or non-binary) constraints, the cost of such propagation may be much greater than the quadratic cost for binary constraints. We therefore study the computational complexity of reasoning with global constraints. We first characterise a number of important questions related to constraint propagation. We show that such questions are intractable in general, and identify dependencies between the tractability and intractability of the different questions. We then demonstrate how the tools of computational complexity can be used in the design and analysis of specific global constraints. In particular, we illustrate how computational complexity can be used to determine when a lesser level of local consistency should be enforced, when constraints can be safely generalized, when decomposing constraints will reduce the amount of pruning, and when combining constraints is tractable.
|
Title: Breaking Value Symmetry
|
Abstract: One common type of symmetry is when values are symmetric. For example, if we are assigning colours (values) to nodes (variables) in a graph colouring problem then we can uniformly interchange the colours throughout a colouring. For a problem with value symmetries, all symmetric solutions can be eliminated in polynomial time. However, as we show here, both static and dynamic methods to deal with symmetry have computational limitations. With static methods, pruning all symmetric values is NP-hard in general. With dynamic methods, we can take exponential time on problems which static methods solve without search.
|
Title: Tetravex is NP-complete
|
Abstract: Tetravex is a widely played one-person computer game in which you are given $n^2$ unit tiles, each edge of which is labelled with a number. The objective is to place each tile within an $n$ by $n$ square such that all neighbouring edges are labelled with an identical number. Unfortunately, playing Tetravex is computationally hard. More precisely, we prove that deciding if there is a tiling of the Tetravex board is NP-complete. Deciding where to place the tiles is therefore NP-hard. This may help to explain why Tetravex is a good puzzle. This result complements a number of similar results for one-person games involving tiling. For example, NP-completeness results have been shown for: the offline version of Tetris, KPlumber (which involves rotating tiles containing drawings of pipes to make a connected network), and shortest sliding puzzle problems. It raises a number of open questions. For example, is the infinite version Turing-complete? How do we generate Tetravex problems which are truly puzzling, as random NP-complete problems are often surprisingly easy to solve? Can we observe phase transition behaviour? What about the complexity of the problem when it is guaranteed to have a unique solution? How do we generate puzzles with unique solutions?
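A brute-force solver makes the exponential blow-up tangible (toy instance; tiles are (top, right, bottom, left) tuples):

```python
from itertools import permutations

def solve_tetravex(tiles, n):
    """Exhaustive search over placements; exponential time, as one would
    expect given the NP-completeness result."""
    for perm in permutations(tiles):
        if all((i % n == 0 or perm[i - 1][1] == perm[i][3]) and   # left edge match
               (i < n or perm[i - n][2] == perm[i][0])            # top edge match
               for i in range(n * n)):
            return perm
    return None

tiles = [(1, 2, 3, 4), (3, 2, 1, 4), (1, 4, 3, 2), (3, 4, 1, 2)]
print(solve_tetravex(tiles, 2))   # finds a valid 2x2 tiling for this toy instance
```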
|
Title: Stochastic Constraint Programming: A Scenario-Based Approach
|
Abstract: To model combinatorial decision problems involving uncertainty and probability, we introduce scenario-based stochastic constraint programming. Stochastic constraint programs contain both decision variables, which we can set, and stochastic variables, which follow a discrete probability distribution. We provide a semantics for stochastic constraint programs based on scenario trees. Using this semantics, we can compile stochastic constraint programs down into conventional (non-stochastic) constraint programs. This allows us to exploit the full power of existing constraint solvers. We have implemented this framework for decision making under uncertainty in stochastic OPL, a language based on the OPL constraint modelling language [Hentenryck et al., 1999]. To illustrate the potential of this framework, we model a wide range of problems in areas as diverse as portfolio diversification, agricultural planning and production/inventory management.
|
Title: Stochastic Constraint Programming
|
Abstract: To model combinatorial decision problems involving uncertainty and probability, we introduce stochastic constraint programming. Stochastic constraint programs contain both decision variables (which we can set) and stochastic variables (which follow a probability distribution). They combine the best features of traditional constraint satisfaction, stochastic integer programming, and stochastic satisfiability. We give a semantics for stochastic constraint programs, and propose a number of complete algorithms and approximation procedures. Finally, we discuss a number of extensions of stochastic constraint programming to relax various assumptions like the independence between stochastic variables, and compare with other approaches for decision making under uncertainty.
|
Title: The Digital Restoration of Da Vinci's Sketches
|
Abstract: A sketch, found in one of Leonardo da Vinci's notebooks and covered by the written notes of this genius, has recently been restored. The restoration reveals a possible self-portrait of the artist, drawn when he was young. Here, we discuss the discovery of this self-portrait and the procedure used for restoration. The restoration was in fact performed on the digital image of the sketch, a procedure that can easily be extended and applied to ancient documents for studies of art and palaeography.
|
Title: Definition of evidence fusion rules on the basis of Referee Functions
|
Abstract: This chapter defines a new concept and framework for constructing fusion rules for evidences. This framework is based on a referee function, which performs a decisional arbitration conditioned on the basic decisions provided by the several sources of information. A simple sampling method is derived from this framework. The purpose of this sampling approach is to avoid the combinatorics which are inherent in the definition of fusion rules for evidences. Defining the fusion rule by means of a sampling process makes possible the construction of several rules on the basis of an algorithmic implementation of the referee function, instead of a mathematical formulation. Incidentally, it is a versatile and intuitive way of defining rules. The framework is implemented for various well known evidence rules. On the basis of this framework, new rules for combining evidences are proposed, which take into account a consensual evaluation of the sources of information.
|
Title: Taking Advantage of Sparsity in Multi-Task Learning
|
Abstract: We study the problem of estimating multiple linear regression equations for the purpose of both prediction and variable selection. Following recent work on multi-task learning Argyriou et al. [2008], we assume that the regression vectors share the same sparsity pattern. This means that the set of relevant predictor variables is the same across the different equations. This assumption leads us to consider the Group Lasso as a candidate estimation method. We show that this estimator enjoys nice sparsity oracle inequalities and variable selection properties. The results hold under a certain restricted eigenvalue condition and a coherence condition on the design matrix, which naturally extend recent work in Bickel et al. [2007], Lounici [2008]. In particular, in the multi-task learning scenario, in which the number of tasks can grow, we are able to remove completely the effect of the number of predictor variables in the bounds. Finally, we show how our results can be extended to more general noise distributions, of which we only require the variance to be finite.
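In scikit-learn, the Group Lasso with a shared sparsity pattern across equations corresponds to MultiTaskLasso, assuming a common design matrix across tasks; a minimal sketch:

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso   # Group Lasso across tasks

rng = np.random.default_rng(0)
n, p, T = 100, 50, 4                              # samples, predictors, tasks
X = rng.standard_normal((n, p))
W = np.zeros((p, T))
W[:5] = rng.standard_normal((5, T))               # shared sparsity pattern
Y = X @ W + 0.1 * rng.standard_normal((n, T))

fit = MultiTaskLasso(alpha=0.1).fit(X, Y)
support = np.flatnonzero(np.any(fit.coef_ != 0, axis=0))  # one support, all tasks
```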
|
Title: Heuristic Reasoning on Graph and Game Complexity of Sudoku
|
Abstract: The Sudoku puzzle has recently achieved worldwide popularity and attracted great attention from the computational intelligence community. Sudoku is usually treated as a Satisfiability Problem or a Constraint Satisfaction Problem. In this paper, we propose to focus on the essential graph structure underlying the Sudoku puzzle. First, we formalize Sudoku as a graph. Then a solving algorithm based on heuristic reasoning on the graph is proposed. The related r-Reduction theorem, inference theorem and their properties are proved, providing the formal basis for the development of Sudoku solving systems. In order to evaluate the difficulty levels of puzzles, a quantitative measurement of the complexity level of Sudoku puzzles based on the graph structure and information theory is proposed. Experimental results show that all the puzzles can be solved quickly using the proposed heuristic reasoning, and that the proposed game complexity metrics can discriminate the difficulty levels of puzzles perfectly.
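The graph formalization is easy to reproduce; a sketch with networkx building the 81-vertex Sudoku graph, where cells are adjacent iff they share a row, column or box (the paper's heuristic solving algorithm is not reproduced here):

```python
import networkx as nx

def sudoku_graph():
    """81-vertex graph: cells adjacent iff they share a row, column or 3x3 box."""
    G = nx.Graph()
    cells = [(r, c) for r in range(9) for c in range(9)]
    for i, (r1, c1) in enumerate(cells):
        for (r2, c2) in cells[i + 1:]:
            if r1 == r2 or c1 == c2 or (r1 // 3, c1 // 3) == (r2 // 3, c2 // 3):
                G.add_edge((r1, c1), (r2, c2))
    return G

G = sudoku_graph()
print(G.number_of_nodes(), G.number_of_edges())   # 81, 810
```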
|
Title: Expectations of Random Sets and Their Boundaries Using Oriented Distance Functions
|
Abstract: Shape estimation and object reconstruction are common problems in image analysis. Mathematically, viewing objects in the image plane as random sets reduces the problem of shape estimation to inference about sets. Existing definitions of the expected set rely on different criteria to construct the expectation. This paper introduces new definitions of the expected set and the expected boundary, based on oriented distance functions. The proposed expectations have a number of attractive properties, including inclusion relations, convexity preservation and equivariance with respect to rigid motions. The paper introduces a special class of separable oriented distance functions for parametric sets and gives the definition and properties of separable random closed sets. Further, definitions of the empirical mean set and the empirical mean boundary are proposed, and empirical evidence of the consistency of the boundary estimator is presented. In addition, the paper gives loss functions for set inference in a frequentist framework and shows how some of the existing expectations arise naturally as optimal estimators. The proposed definitions of the set and boundary expectations are illustrated on theoretical examples and real data.
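One natural reading of an oriented-distance-function mean, sketched with scipy (the paper's exact definitions and properties are richer than this toy): average the signed distance functions of the random sets and take the zero sub-level set:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def oriented_distance(mask):
    """Signed distance to the boundary: negative inside the set, positive outside."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:100, 0:100]
disks = [(xx - rng.normal(50, 3)) ** 2 + (yy - rng.normal(50, 3)) ** 2 < 30 ** 2
         for _ in range(25)]                      # random sets: jittered disks
mean_odf = np.mean([oriented_distance(d) for d in disks], axis=0)
expected_set = mean_odf <= 0                      # zero sub-level set of the mean ODF
```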
|
Title: Free actions and Grassmannian variety
|
Abstract: An algebraic notion of representational consistency is defined. A theorem relating it to free actions is proved. A metrizability problem of the quotient (a shape space) is discussed. This leads to a new algebraic variety with a metrizability result. A concrete example is given from stereo vision.
|
Title: Confidence Regions for Means of Random Sets using Oriented Distance Functions
|
Abstract: Image analysis frequently deals with shape estimation and image reconstruction. The objects of interest in these problems may be thought of as random sets, and one is interested in finding a representative, or expected, set. We consider a definition of set expectation using oriented distance functions and study the properties of the associated empirical set. Conditions are given such that the empirical average is consistent, and a method to calculate a confidence region for the expected set is introduced. The proposed method is applied to both real and simulated data examples.
|
Title: Contracting preference relations for database applications
|
Abstract: The binary relation framework has been shown to be applicable to many real-life preference handling scenarios. Here we study preference contraction: the problem of discarding selected preferences. We argue that the property of minimality and the preservation of strict partial orders are crucial for contractions. Contractions can be further constrained by specifying which preferences should be protected. We consider two classes of preference relations: finite and finitely representable. We present algorithms for computing minimal and preference-protecting minimal contractions for finite as well as finitely representable preference relations. We study relationships between preference change in the binary relation framework and belief change in the belief revision theory. We also introduce some preference query optimization techniques which can be used in the presence of contraction. We evaluate the proposed algorithms experimentally and present the results.
|
Title: SMART: A statistical framework for optimal design matrix generation with application to fMRI
|
Abstract: The general linear model (GLM) is a well established tool for analyzing functional magnetic resonance imaging (fMRI) data. Most fMRI analyses via GLM proceed in a massively univariate fashion where the same design matrix is used for analyzing data from each voxel. A major limitation of this approach is the locally varying nature of signals of interest as well as associated confounds. This local variability results in a potentially large bias and uncontrolled increase in variance for the contrast of interest. The main contributions of this paper are twofold: (1) we develop a statistical framework called SMART that enables estimation of an optimal design matrix while explicitly controlling the bias-variance decomposition over a set of potential design matrices, and (2) we develop and validate a numerical algorithm for computing optimal design matrices for general fMRI data sets. The implications of this framework include the ability to optimally match the magnitude of underlying signals to their true magnitudes while also matching the "null" signals to zero size, thereby optimizing both the sensitivity and specificity of signal detection. Capturing multiple profiles of interest with a single contrast (as opposed to an F-test), in a way that optimizes for both bias and variance, enables passing first-level parameter estimates and their variances to the higher level for group analysis, which is not possible using F-tests. We demonstrate the application of this approach to in vivo pharmacological fMRI data capturing the acute response to a drug infusion, to task-evoked, block design fMRI, and to the estimation of a haemodynamic response function (HRF) in event-related fMRI. Our framework is quite general and has potentially wide applicability to a variety of disciplines.
|
Title: Feature selection in omics prediction problems using cat scores and false nondiscovery rate control
|
Abstract: We revisit the problem of feature selection in linear discriminant analysis (LDA) when features are correlated. First, we introduce a pooled centroids formulation of the multiclass LDA predictor function, in which the relative weights of Mahalanobis-transformed predictors are given by correlation-adjusted $t$-scores (cat scores). Second, for feature selection we propose thresholding cat scores by controlling false nondiscovery rates (FNDR). Third, training of the classifier is based on James--Stein shrinkage estimates of correlations and variances, where regularization parameters are chosen analytically without resampling. Overall, this results in an effective and computationally inexpensive framework for high-dimensional prediction with natural feature selection. The proposed shrinkage discriminant procedures are implemented in the R package "sda", available from the R repository CRAN.
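A plain-numpy sketch of the cat-score idea (the James-Stein shrinkage machinery of the actual "sda" package is omitted, so this only behaves sensibly when the correlation matrix is well conditioned):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def cat_scores(X, y):
    """Correlation-adjusted t-scores (cat scores) for a two-class problem
    with 0/1 labels; decorrelates the usual pooled t-scores."""
    X0, X1 = X[y == 0], X[y == 1]
    n0, n1 = len(X0), len(X1)
    sp2 = ((n0 - 1) * X0.var(0, ddof=1) + (n1 - 1) * X1.var(0, ddof=1)) / (n0 + n1 - 2)
    t = (X1.mean(0) - X0.mean(0)) / np.sqrt(sp2 * (1.0 / n0 + 1.0 / n1))
    P = np.corrcoef(X, rowvar=False)               # feature correlation matrix
    return fractional_matrix_power(P, -0.5) @ t    # P^{-1/2} t
```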
|
Title: Multiagent Learning in Large Anonymous Games
|