Title: The Synthesis of Regression Slopes in Meta-Analysis
Abstract: Research on methods of meta-analysis (the synthesis of related study results) has dealt with many simple study indices, but less attention has been paid to the issue of summarizing regression slopes. In part this is because of the many complications that arise when real sets of regression models are accumulated. We outline the complexities involved in synthesizing slopes, describe existing methods of analysis and present a multivariate generalized least squares approach to the synthesis of regression slopes.
Title: On the Distribution of the Adaptive LASSO Estimator
Abstract: We study the distribution of the adaptive LASSO estimator (Zou (2006)) in finite samples as well as in the large-sample limit. The large-sample distributions are derived both for the case where the adaptive LASSO estimator is tuned to perform conservative model selection and for the case where the tuning results in consistent model selection. We show that the finite-sample as well as the large-sample distributions are typically highly non-normal, regardless of the choice of the tuning parameter. The uniform convergence rate is also obtained, and is shown to be slower than $n^{-1/2}$ in case the estimator is tuned to perform consistent model selection. In particular, these results question the statistical relevance of the `oracle' property of the adaptive LASSO estimator established in Zou (2006). Moreover, we also provide an impossibility result regarding the estimation of the distribution function of the adaptive LASSO estimator. The theoretical results, which are obtained for a regression model with orthogonal design, are complemented by a Monte Carlo study using non-orthogonal regressors.
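In the orthogonal-design setting this abstract analyzes, the adaptive LASSO reduces to componentwise soft-thresholding of the least-squares estimate with coefficient-specific thresholds $\lambda/|b_j|^\gamma$. The sketch below is our own toy illustration of that closed form (the data, $\lambda$, and $\gamma = 1$ are made-up choices, not the paper's setup):

```python
import numpy as np

# Toy adaptive LASSO under an orthogonal design (X'X = I), where the
# estimator has a closed form: soft-threshold each OLS coefficient with
# an adaptive threshold lam / |b_j|^gamma. Illustrative sketch only.

rng = np.random.default_rng(2)
n, p = 200, 4
X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal columns
beta = np.array([2.0, 0.0, 1.5, 0.0])              # two truly-zero coefficients
y = X @ beta + 0.05 * rng.standard_normal(n)

b_ols = X.T @ y                                    # OLS estimate since X'X = I

def adaptive_lasso(b_ols, lam, gamma=1.0):
    """Componentwise soft-thresholding with adaptive weights 1/|b|^gamma."""
    w = 1.0 / np.abs(b_ols) ** gamma
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam * w, 0.0)

b_al = adaptive_lasso(b_ols, lam=0.1)
print(b_al)  # the two truly-zero coefficients are thresholded to exactly zero
```

Small OLS coefficients get large weights and are driven to exactly zero, while large coefficients are barely shrunk, which is the mechanism behind the (here questioned) oracle property.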
Title: Recursive Bias Estimation and $L_2$ Boosting
Abstract: This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the $L_2$ Boosting algorithm, and provides a new statistical interpretation of $L_2$ Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers $S$, which we show depends on the spectrum of $I-S$. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally, we illustrate the practical finite sample performance of the iterative smoother via a simulation study.
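The recursion behind this view of $L_2$ Boosting can be sketched in a few lines: repeatedly refit the residuals with the smoother $S$, so that after $k$ steps the fit is $m_k = (I-(I-S)^k)\,y$ and convergence is governed by the spectrum of $I-S$. The smoother, data, and bandwidth below are our own toy assumptions:

```python
import numpy as np

# Sketch of L2 Boosting as recursive bias correction with a linear
# smoother S (illustration of the general setup, not the paper's exact
# construction). Data and kernel bandwidth are made-up choices.

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

# A simple Nadaraya-Watson smoother matrix S (rows sum to 1).
h = 0.1
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
S = K / K.sum(axis=1, keepdims=True)

def l2_boost(S, y, steps):
    """Iteratively refit the residuals: m <- m + S(y - m)."""
    m = np.zeros_like(y)
    for _ in range(steps):
        m = m + S @ (y - m)
    return m

# For this row-stochastic Gaussian-kernel smoother the eigenvalues of
# I - S lie in [0, 1), so boosting converges toward interpolating y:
# the residual norm shrinks as the iterations proceed.
r10 = np.linalg.norm(y - l2_boost(S, y, 10))
r100 = np.linalg.norm(y - l2_boost(S, y, 100))
print(r100 < r10)
```

If $I-S$ had an eigenvalue of modulus greater than 1 (as for some smoothers mentioned in the abstract), the same recursion would diverge, which is why a stopping rule matters.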
Title: Methods to integrate a language model with semantic information for a word prediction component
Abstract: Most current word prediction systems make use of n-gram language models (LM) to estimate the probability of the following word in a phrase. In the past years there have been many attempts to enrich such language models with further syntactic or semantic information. We want to explore the predictive powers of Latent Semantic Analysis (LSA), a method that has been shown to provide reliable information on long-distance semantic dependencies between words in a context. We present and evaluate here several methods that integrate LSA-based information with a standard language model: a semantic cache, partial reranking, and different forms of interpolation. We found that all methods show significant improvements, compared to the 4-gram baseline, and most of them to a simple cache model as well.
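One of the integration schemes the abstract lists, linear interpolation of an n-gram probability with an LSA-based score, can be sketched as follows. The word vectors, probabilities, and interpolation weight are all toy assumptions for illustration, not the paper's models:

```python
import numpy as np

# Toy sketch of interpolating an n-gram LM with an LSA-style semantic
# score. Real LSA vectors come from an SVD of a term-document matrix;
# here we hard-code tiny 2-d vectors as a stand-in.

lsa = {
    "doctor": np.array([0.9, 0.1]),
    "nurse":  np.array([0.8, 0.2]),
    "guitar": np.array([0.1, 0.9]),
}

def semantic_score(word, history):
    """Cosine similarity of `word` with the mean history vector,
    rescaled to [0, 1] so it can be mixed with probabilities."""
    h = np.mean([lsa[w] for w in history if w in lsa], axis=0)
    v = lsa[word]
    cos = float(v @ h / (np.linalg.norm(v) * np.linalg.norm(h)))
    return 0.5 * (cos + 1.0)

def interpolate(p_ngram, word, history, lam=0.7):
    """Linear interpolation: lam * n-gram prob + (1 - lam) * normalized
    semantic score over the candidate vocabulary."""
    sem = {w: semantic_score(w, history) for w in lsa}
    z = sum(sem.values())
    return lam * p_ngram[word] + (1 - lam) * sem[word] / z

history = ["doctor"]
p_ngram = {"nurse": 0.2, "guitar": 0.3, "doctor": 0.5}  # toy LM probs
print(interpolate(p_ngram, "nurse", history))
```

The semantic component boosts words that are topically close to the long-distance context ("nurse" after "doctor"), which is exactly the information a fixed-order n-gram model cannot capture.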
Title: Concerning Olga, the Beautiful Little Street Dancer (Adjectives as Higher-Order Polymorphic Functions)
Abstract: In this paper we suggest a typed compositional semantics for nominal compounds of the form [Adj Noun] that models adjectives as higher-order polymorphic functions, and where types are assumed to represent concepts in an ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. In addition to [Adj Noun] compounds, our proposal also seems to suggest a plausible explanation for well-known adjective ordering restrictions.
Title: Information Width
Abstract: Kolmogorov argued that the concept of information exists also in problems with no underlying stochastic model (as in Shannon's representation of information), for instance, the information contained in an algorithm or in the genome. He introduced a combinatorial notion of entropy and information $I(x:y)$ conveyed by a binary string $x$ about the unknown value of a variable $y$. The current paper poses the following questions: What is the relationship between the information conveyed by $x$ about $y$ and the description complexity of $x$? Is there a notion of the cost of information? Are there limits on how efficiently $x$ conveys information? To answer these questions, Kolmogorov's definition is extended and a new concept, termed information width, which is similar to $n$-widths in approximation theory, is introduced. Information of any input source, e.g., sample-based, general side-information, or a hybrid of both, can be evaluated by a single common formula. An application to the space of binary functions is considered.
Title: On the Complexity of Binary Samples
Abstract: Consider a class $\mathcal{H}$ of binary functions $h: X\to\{-1, +1\}$ on a finite interval $X=[0, B]\subset \mathbb{R}$. Define the sample width of $h$ on a finite subset (a sample) $S\subset X$ as $\omega_S(h) \equiv \min_{x\in S} |\omega_h(x)|$, where $\omega_h(x) = h(x) \max\{a\geq 0: h(z)=h(x), x-a\leq z\leq x+a\}$. Let $\mathcal{S}_\ell$ be the space of all samples in $X$ of cardinality $\ell$ and consider sets of wide samples, i.e., hypersets, which are defined as $A_{\beta, h} = \{S\in \mathcal{S}_\ell: \omega_S(h) \geq \beta\}$. Through an application of the Sauer-Shelah result on the density of sets, an upper estimate is obtained on the growth function (or trace) of the class $\{A_{\beta, h}: h\in\mathcal{H}\}$, $\beta>0$, i.e., on the number of possible dichotomies obtained by intersecting all hypersets with a fixed collection of samples $S\in\mathcal{S}_\ell$ of cardinality $m$. The estimate is $2\sum_{i=0}^{2\lfloor B/(2\beta)\rfloor}\binom{m-\ell}{i}$.
Title: Automatic Text Area Segmentation in Natural Images
Abstract: We present a hierarchical method for segmenting text areas in natural images. The method assumes that the text is written with a contrasting color on a more or less uniform background, but no assumption is made regarding the language or character set used to write the text. In particular, the text can contain simple graphics or symbols. The key feature of our approach is that we first concentrate on finding the background of the text, before testing whether there is actually text on the background. Since uniform areas are easy to find in natural images, and since text backgrounds define areas which contain "holes" (where the text is written), we thus look for uniform areas containing "holes" and label them as text background candidates. Each candidate area is then further tested for the presence of text within its convex hull. We tested our method on a database of 65 images including English and Urdu text. The method correctly segmented all the text areas in 63 of these images, and in only 4 of them were areas not containing text also segmented.
Title: Shallow Models for Non-Iterative Modal Logics
Abstract: The methods used to establish PSPACE-bounds for modal logics can roughly be grouped into two classes: syntax driven methods establish that exhaustive proof search can be performed in polynomial space whereas semantic approaches directly construct shallow models. In this paper, we follow the latter approach and establish generic PSPACE-bounds for a large and heterogeneous class of modal logics in a coalgebraic framework. In particular, no complete axiomatisation of the logic under scrutiny is needed. This does not only complement our earlier, syntactic, approach conceptually, but also covers a wide variety of new examples which are difficult to harness by purely syntactic means. Apart from re-proving known complexity bounds for a large variety of structurally different logics, we apply our method to obtain previously unknown PSPACE-bounds for Elgesem's logic of agency and for graded modal logic over reflexive frames.
Title: Covariance estimation for multivariate conditionally Gaussian dynamic linear models
Abstract: In multivariate time series, the estimation of the covariance matrix of the observation innovations plays an important role in forecasting, as it enables the computation of standardized forecast error vectors as well as of confidence bounds for the forecasts. We develop an on-line, non-iterative Bayesian algorithm for estimation and forecasting. It is empirically found that, for a range of simulated time series, the proposed covariance estimator performs well, converging to the true values of the unknown observation covariance matrix. Over a simulated time series, the new method approximates the correct estimates, produced by a non-sequential Monte Carlo simulation procedure, which is used here as the gold standard. The special, but important, vector autoregressive (VAR) and time-varying VAR models are illustrated by considering London metal exchange data consisting of spot prices of aluminium, copper, lead and zinc.
Title: Posterior mean and variance approximation for regression and time series problems
Abstract: This paper develops a methodology for approximating the posterior first two moments of the posterior distribution in Bayesian inference. Partially specified probability models, which are defined only by specifying means and variances, are constructed based upon second-order conditional independence, in order to facilitate posterior updating and prediction of required distributional quantities. Such models are formulated particularly for multivariate regression and time series analysis with unknown observational variance-covariance components. The similarities and differences of these models with the Bayes linear approach are established. Several subclasses of important models, including regression and time series models with errors following multivariate $t$, inverted multivariate $t$ and Wishart distributions, are discussed in detail. Two numerical examples consisting of simulated data and of US investment and change in inventory data illustrate the proposed methodology.
Title: Multivariate stochastic volatility with Bayesian dynamic linear models
Abstract: This paper develops a Bayesian procedure for estimation and forecasting of the volatility of multivariate time series. The foundation of this work is the matrix-variate dynamic linear model, for the volatility of which we adopt a multiplicative stochastic evolution, using Wishart and singular multivariate beta distributions. A diagonal matrix of discount factors is employed in order to discount the variances element by element and therefore allowing a flexible and pragmatic variance modelling approach. Diagnostic tests and sequential model monitoring are discussed in some detail. The proposed estimation theory is applied to a four-dimensional time series, comprising spot prices of aluminium, copper, lead and zinc of the London metal exchange. The empirical findings suggest that the proposed Bayesian procedure can be effectively applied to financial data, overcoming many of the disadvantages of existing volatility models.
Title: Multivariate control charts based on Bayesian state space models
Abstract: This paper develops a new multivariate control charting method for vector autocorrelated and serially correlated processes. The main idea is to propose a Bayesian multivariate local level model, which is a generalization of the Shewhart-Deming model for autocorrelated processes, in order to provide the predictive error distribution of the process and then to apply a univariate modified EWMA control chart to the logarithm of the Bayes' factors of the predictive error density versus the target error density. The resulting chart is capable of dealing with both the non-normality and the autocorrelation structure of the log Bayes' factors. The new control charting scheme is general in application and it has the advantage of simultaneously controlling not only the process mean vector and the dispersion covariance matrix, but also the entire target distribution of the process. Two examples of London metal exchange data and of production time series data illustrate the capabilities of the new control chart.
Title: Dynamic generalized linear models for non-Gaussian time series forecasting
Abstract: The purpose of this paper is to provide a discussion, with illustrating examples, on Bayesian forecasting for dynamic generalized linear models (DGLMs). Adopting approximate Bayesian analysis, based on conjugate forms and on Bayes linear estimation, we describe the theoretical framework and then we provide detailed examples of response distributions, including binomial, Poisson, negative binomial, geometric, normal, log-normal, gamma, exponential, Weibull, Pareto, beta, and inverse Gaussian. We give numerical illustrations for all distributions (except for the normal). Putting together all the above distributions, we give a unified Bayesian approach to non-Gaussian time series analysis, with applications from finance and medicine to biology and the behavioural sciences. Throughout the models we discuss Bayesian forecasting and, for each model, we derive the multi-step forecast mean. Finally, we describe model assessment using the likelihood function, and Bayesian model monitoring.
Title: Forecasting with time-varying vector autoregressive models
Abstract: The purpose of this paper is to propose a time-varying vector autoregressive model (TV-VAR) for forecasting multivariate time series. The model is cast into a state-space form that allows flexible description and analysis. The volatility covariance matrix of the time series is modelled via inverted Wishart and singular multivariate beta distributions, allowing a fully conjugate Bayesian inference. Model performance and model comparison are assessed via the likelihood function, sequential Bayes factors, the mean of squared standardized forecast errors, the mean of absolute forecast errors (known also as mean absolute deviation), and the mean forecast error. Bayes factors are also used in order to choose the autoregressive order of the model. Multi-step forecasting is discussed in detail and a flexible formula is proposed to approximate the forecast function. Two examples, consisting of bivariate data of IBM shares and of foreign exchange (FX) rates for 8 currencies, illustrate the methods. For the IBM data we discuss model performance and multi-step forecasting in some detail. For the FX data we discuss sequential portfolio allocation; for both data sets our empirical findings suggest that the TV-VAR models outperform the widely used VAR models.
Title: Multivariate stochastic volatility using state space models
Abstract: A Bayesian procedure is developed for multivariate stochastic volatility, using state space models. An autoregressive model for the log-returns is employed. We generalize the inverted Wishart distribution to allow for different correlation structure between the observation and state innovation vectors and we extend the convolution between the Wishart and the multivariate singular beta distribution. A multiplicative model based on the generalized inverted Wishart and multivariate singular beta distributions is proposed for the evolution of the volatility and a flexible sequential volatility updating is employed. The proposed algorithm for the volatility is fast and computationally cheap and it can be used for on-line forecasting. The methods are illustrated with an example consisting of foreign exchange rates data of 8 currencies. The empirical results suggest that time-varying correlations can be estimated efficiently, even in situations of high dimensional data.
Title: Global Sensitivity Analysis of Stochastic Computer Models with joint metamodels
Abstract: The global sensitivity analysis method, used to quantify the influence of uncertain input variables on the response variability of a numerical model, is applicable to deterministic computer code (for which the same set of input variables always gives the same output value). This paper proposes a global sensitivity analysis methodology for stochastic computer code (having a variability induced by some uncontrollable variables). The framework of the joint modeling of the mean and dispersion of heteroscedastic data is used. To deal with the complexity of computer experiment outputs, nonparametric joint models (based on Generalized Additive Models and Gaussian processes) are discussed. The relevance of these new models is analyzed in terms of the obtained variance-based sensitivity indices with two case studies. Results show that the joint modeling approach leads to accurate sensitivity index estimations even when clear heteroscedasticity is present.
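The variance-based sensitivity indices mentioned above can be illustrated with a plain Monte Carlo "pick-freeze" estimator on a toy deterministic code; the metamodels in the abstract are used precisely to estimate such indices when direct simulation is too costly. The test function and sample size below are our own choices:

```python
import numpy as np

# Pick-freeze Monte Carlo estimate of the first-order Sobol index
# S1 = Var(E[Y|X1]) / Var(Y) for a toy deterministic model. This is a
# brute-force illustration of the target quantity, not the paper's
# joint-metamodel estimation method.

rng = np.random.default_rng(3)
N = 200_000

def model(x1, x2):
    return x1 + 0.5 * x2 ** 2     # toy computer code

x1 = rng.uniform(-1, 1, N)
x2 = rng.uniform(-1, 1, N)
x2b = rng.uniform(-1, 1, N)       # independent copy of X2, with X1 "frozen"

y = model(x1, x2)
yb = model(x1, x2b)

# Cov(Y, Y') / Var(Y) estimates S1, since Y and Y' share only X1.
s1 = (np.mean(y * yb) - np.mean(y) * np.mean(yb)) / np.var(y)
print(s1)
```

For this toy function the analytic value is $S_1 = 15/16 \approx 0.94$, so most of the output variance is attributable to $x_1$ alone; the estimator should land close to that.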
Title: Hierarchical Additive Modeling of Nonlinear Association with Spatial Correlations-An Application to Relate Alcohol Outlet Density and Neighborhood Assault Rates
Abstract: Previous studies have suggested a link between alcohol outlets and assaultive violence. In this paper, we explore the effects of alcohol availability on assault crimes at the census tract level over time. The statistical analysis is challenged by several features of the data: (1) the effects of possible covariates (for example, the alcohol outlet density of each census tract) on the assaultive crime rates may be complex; (2) the covariates may be highly correlated with each other; (3) there are many missing inputs in the data; and (4) spatial correlations exist in the outcome assaultive crime rates. We propose a hierarchical additive model, where the nonlinear correlations and the complex interaction effects are modeled using multiple additive regression trees (MART) and the spatial variances in the assaultive rates that cannot be explained by the specified covariates are smoothed through the Conditional Autoregressive (CAR) model. We develop a two-stage algorithm that connects the non-parametric trees with CAR to look for important covariates associated with the assaultive crime rates, while taking account of the spatial correlations among adjacent census tracts. The proposed methods are applied to the Los Angeles assaultive data (1990-1999) and compared with traditional methods.
Title: The distribution of the maximum of a first order moving average: the continuous case
Abstract: We give the distribution of $M_n$, the maximum of a sequence of $n$ observations from a moving average of order 1. Solutions are first given in terms of repeated integrals and then for the case where the underlying independent random variables have an absolutely continuous density. When the correlation is positive, $$P(M_n \leq x) = \sum_{j=1}^\infty \beta_{jx}\, \nu_{jx}^n \approx B_x\, \nu_{1x}^n,$$ where $M_n = \max_{i=1}^n X_i$, $\{X_i\}$ is a moving average of order 1 with positive correlation, $\{\nu_{jx}\}$ are the eigenvalues (singular values) of a Fredholm kernel, and $\nu_{1x}$ is the eigenvalue of maximum magnitude. A similar result is given when the correlation is negative: there are more terms, and $$P(M_n < x) \approx B'_x\, (1+\nu_{1x})^n.$$ The result is analogous to large deviations expansions for estimates, since the maximum need not be standardized to have a limit. For the continuous case the integral equations for the left and right eigenfunctions are converted to first order linear differential equations. The eigenvalues satisfy an equation of the form $$\sum_{i=1}^\infty w_i(\lambda-\theta_i)^{-1}=\lambda-\theta_0$$ for certain known weights $\{w_i\}$ and eigenvalues $\{\theta_i\}$ of a given matrix. This can be solved by truncating the sum to an increasing number of terms.
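The final step, solving $\sum_i w_i(\lambda-\theta_i)^{-1}=\lambda-\theta_0$ after truncating the sum, is a standard secular-equation root search and can be sketched directly. The weights and $\theta_i$ values below are made-up illustrative numbers, not quantities from the paper:

```python
# Sketch of solving the truncated secular equation
#   sum_i w_i / (lambda - theta_i) = lambda - theta_0
# by bisection to the right of the largest retained pole. The weights
# and theta values are illustrative assumptions.

def f(lam, w, theta, theta0):
    """Truncated secular function: sum_i w_i/(lam - theta_i) - (lam - theta0)."""
    return sum(wi / (lam - ti) for wi, ti in zip(w, theta)) - (lam - theta0)

def bisect_root(g, lo, hi, tol=1e-12):
    """Plain bisection; assumes g(lo) and g(hi) have opposite signs."""
    glo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if glo * g(mid) <= 0:
            hi = mid
        else:
            lo, glo = mid, g(mid)
    return 0.5 * (lo + hi)

theta0 = 0.0
w = [0.5, 0.25, 0.125]      # truncated weights w_i (made up)
theta = [1.0, 2.0, 3.0]     # eigenvalues theta_i of the given matrix (made up)

# Just above the largest pole theta = 3, f is huge and positive; for
# large lambda it is negative, so a root is bracketed in between.
g = lambda lam: f(lam, w, theta, theta0)
root = bisect_root(g, 3.0 + 1e-9, 10.0)
print(root)
```

Increasing the number of retained terms and re-solving, as the abstract suggests, gives a sequence of approximations to the eigenvalue of the untruncated equation.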
Title: The distribution of the maximum of a first order moving average: the discrete case
Abstract: We give the distribution of $M_n$, the maximum of a sequence of $n$ observations from a moving average of order 1. Solutions are first given in terms of repeated integrals and then for the case where the underlying independent random variables are discrete. When the correlation is positive, $$P(M_n \leq x) = \sum_{j=1}^{I} \beta_{jx}\, \nu_{jx}^n \approx B_x\, r_{1x}^n,$$ where $M_n = \max_{i=1}^n X_i$, $\{\nu_{jx}\}$ are the eigenvalues of a certain matrix, $r_{1x}$ is the maximum magnitude of the eigenvalues, and $I$ depends on the number of possible values of the underlying random variables. The eigenvalues do not depend on $x$, only on its range.
Title: A Conversation with Ingram Olkin
Abstract: Ingram Olkin was born on July 23, 1924 in Waterbury, Connecticut. His family moved to New York in 1934 and he graduated from DeWitt Clinton High School in 1941. He served three years in the Air Force during World War II and obtained a B.S. in mathematics at the City College of New York in 1947. After receiving an M.A. in mathematical statistics from Columbia in 1949, he completed his graduate studies in the Department of Statistics at the University of North Carolina in 1951. His dissertation was written under the direction of S. N. Roy and Harold Hotelling. He joined the Department of Mathematics at Michigan State University in 1951 as an Assistant Professor, subsequently being promoted to Professor. In 1960, he took a position as Chair of the Department of Statistics at the University of Minnesota. He moved to Stanford University in 1961 to take a joint position as Professor of Statistics and Professor of Education; he was also Chair of the Department of Statistics from 1973--1976. In 2007, Ingram became Professor Emeritus. Ingram was Editor of the Annals of Mathematical Statistics (1971--1972) and served as the first editor of the Annals of Statistics from 1972--1974. He was a primary force in the founding of the Journal of Educational Statistics, for which he was also Associate Editor during 1977--1985. In 1984, he was President of the Institute of Mathematical Statistics. Among his many professional activities, he has served as Chair of the Committee of Presidents of Statistical Societies (COPSS), Chair of the Committee on Applied and Theoretical Statistics of the National Research Council, Chair of the Management Board of the American Education Research Association, and as Trustee for the National Institute of Statistical Sciences. He has been honored by the American Statistical Association (ASA) with a Wilks Medal (1992) and a Founder's Award (1992). 
The American Psychological Association gave him a Lifetime Contribution Award (1997) and he was elected to the National Academy of Education in 2005. He received the COPSS Elizabeth L. Scott Award in 1998 and delivered the R. A. Fisher Lecture in 2000. In 2003, the City University of New York gave him a Townsend Harris Medal. An author of 5 books, an editor of 10 books, and an author of more than 200 publications, Ingram has made major contributions to statistics and education. His research has focused on multivariate analysis, majorization and inequalities, distribution theory, and meta-analysis. A volume in celebration of Ingram's 65th birthday contains a brief biography and an interview [Gleser, Perlman, Press and Sampson (1989)]. Ingram was chosen in 1997 to participate in the American Statistical Association Distinguished Statistician Video Series and a videotaped conversation and a lecture (Olkin, 1997) are available from the ASA (1997, DS041, DS042).
Title: V-fold cross-validation improved: V-fold penalization
Abstract: We study the efficiency of V-fold cross-validation (VFCV) for model selection from the non-asymptotic viewpoint, and suggest an improvement on it, which we call ``V-fold penalization''. Considering a particular (though simple) regression problem, we prove that VFCV with a bounded V is suboptimal for model selection, because it ``overpenalizes'', and all the more so as V is large. Hence, asymptotic optimality requires V to go to infinity. However, when the signal-to-noise ratio is low, it appears that overpenalizing is necessary, so that the optimal V is not always the largest one, despite the variability issue. This is confirmed by some simulated data. In order to improve on the prediction performance of VFCV, we define a new model selection procedure, called ``V-fold penalization'' (penVF). It is a V-fold subsampling version of Efron's bootstrap penalties, so that it has the same computational cost as VFCV, while being more flexible. In a heteroscedastic regression framework, assuming the models to have a particular structure, we prove that penVF satisfies a non-asymptotic oracle inequality with a leading constant that tends to 1 when the sample size goes to infinity. In particular, this implies adaptivity to the smoothness of the regression function, even with a highly heteroscedastic noise. Moreover, it is easy to overpenalize with penVF, independently from the V parameter. A simulation study shows that this results in a significant improvement on VFCV in non-asymptotic situations.
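The baseline procedure being improved upon, plain VFCV for model selection, can be sketched in a few lines on a toy regression problem. The data, model family (polynomial degree), and V are our own illustrative assumptions; this is the VFCV baseline, not the penVF procedure:

```python
import numpy as np

# Minimal V-fold cross-validation for selecting a polynomial degree on
# toy regression data (true model: degree 1). Illustrative sketch of
# the VFCV baseline studied in the abstract.

rng = np.random.default_rng(1)
n, V = 120, 5
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + 0.5 * rng.standard_normal(n)

def fit_predict(deg, xtr, ytr, xte):
    coef = np.polyfit(xtr, ytr, deg)
    return np.polyval(coef, xte)

def vfcv_risk(deg, x, y, V):
    """Average held-out squared error over V folds."""
    idx = np.arange(len(x)) % V          # simple fold assignment
    errs = []
    for v in range(V):
        tr, te = idx != v, idx == v
        pred = fit_predict(deg, x[tr], y[tr], x[te])
        errs.append(np.mean((y[te] - pred) ** 2))
    return float(np.mean(errs))

risks = {deg: vfcv_risk(deg, x, y, V) for deg in range(0, 6)}
best = min(risks, key=risks.get)
print(best, risks[best])
```

The abstract's point is that the implicit penalty of this procedure depends on V in a way that can over- or under-penalize; penVF replaces the held-out risk with an explicit, tunable V-fold penalty at the same computational cost.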
Title: A New Family of Random Graphs for Testing Spatial Segregation
Abstract: We discuss a graph-based approach for testing spatial point patterns. This approach falls under the category of data-random graphs, which have been introduced and used for statistical pattern recognition in recent years. Our goal is to test complete spatial randomness against segregation and association between two or more classes of points. To attain this goal, we use a particular type of parametrized random digraph called proximity catch digraph (PCD), which is based on the relative positions of the data points from various classes. The statistic we employ is the relative density of the PCD. When scaled properly, the relative density of the PCD is a $U$-statistic. We derive the asymptotic distribution of the relative density, using the standard central limit theory of $U$-statistics. The finite sample performance of the test statistic is evaluated by Monte Carlo simulations, and the asymptotic performance is assessed via Pitman's asymptotic efficiency, thereby yielding the optimal parameters for testing. Furthermore, the methodology discussed in this article is also valid for data in multiple dimensions.
Title: Relative Density of the Random $r$-Factor Proximity Catch Digraph for Testing Spatial Patterns of Segregation and Association
Abstract: Statistical pattern classification methods based on data-random graphs were introduced recently. In this approach, a random directed graph is constructed from the data using the relative positions of the data points from various classes. Different random graphs result from different definitions of the proximity region associated with each data point, and different graph statistics can be employed for data reduction. The approach used in this article is based on a parameterized family of proximity maps determining an associated family of data-random digraphs. The relative arc density of the digraph is used as the summary statistic, providing an alternative to the domination number employed previously. An important advantage of the relative arc density is that, properly re-scaled, it is a $U$-statistic, facilitating analytic study of its asymptotic distribution using standard $U$-statistic central limit theory. The approach is illustrated with an application to the testing of spatial patterns of segregation and association. Knowledge of the asymptotic distribution allows evaluation of the Pitman and Hodges-Lehmann asymptotic efficacies, and selection of the proximity map parameter to optimize efficiency. Furthermore, the approach presented here has the additional advantage of being valid for data in any dimension.
Title: The Use of Domination Number of a Random Proximity Catch Digraph for Testing Spatial Patterns of Segregation and Association
Abstract: Priebe et al. (2001) introduced the class cover catch digraphs and computed the distribution of the domination number of such digraphs for one dimensional data. In higher dimensions these calculations are extremely difficult due to the geometry of the proximity regions; and only upper-bounds are available. In this article, we introduce a new type of data-random proximity map and the associated (di)graph in $\mathbb R^d$. We find the asymptotic distribution of the domination number and use it for testing spatial point patterns of segregation and association.
Title: Bayesian Checking of the Second Levels of Hierarchical Models
Abstract: Hierarchical models are increasingly used in many applications. Along with this increased use comes a desire to investigate whether the model is compatible with the observed data. Bayesian methods are well suited to eliminate the many (nuisance) parameters in these complicated models; in this paper we investigate Bayesian methods for model checking. Since we contemplate model checking as a preliminary, exploratory analysis, we concentrate on objective Bayesian methods in which careful specification of an informative prior distribution is avoided. Numerous examples are given and different proposals are investigated and critically compared.
Title: Comment: Bayesian Checking of the Second Levels of Hierarchical Models
Abstract: We discuss the methods of Evans and Moshonov [Bayesian Analysis 1 (2006) 893--914, Bayesian Statistics and Its Applications (2007) 145--159] concerning checking for prior-data conflict and their relevance to the method proposed in this paper. [arXiv:0802.0743]
Title: Comment: Bayesian Checking of the Second Levels of Hierarchical Models
Abstract: Comment: Bayesian Checking of the Second Levels of Hierarchical Models [arXiv:0802.0743]
Title: Comment: Bayesian Checking of the Second Levels of Hierarchical Models
Abstract: Comment: Bayesian Checking of the Second Levels of Hierarchical Models [arXiv:0802.0743]
Title: Comment: Bayesian Checking of the Second Level of Hierarchical Models: Cross-Validated Posterior Predictive Checks Using Discrepancy Measures
Abstract: Comment: Bayesian Checking of the Second Level of Hierarchical Models [arXiv:0802.0743]
Title: Rejoinder: Bayesian Checking of the Second Levels of Hierarchical Models
Abstract: Rejoinder: Bayesian Checking of the Second Levels of Hierarchical Models [arXiv:0802.0743]
Title: A multiple covariance approach to PLS regression with several predictor groups: Structural Equation Exploratory Regression
Abstract: A variable group $Y$ is assumed to depend upon $R$ thematic variable groups $X_1, \ldots, X_R$. We assume that components in $Y$ depend linearly upon components in the $X_r$'s. In this work, we propose a multiple covariance criterion which extends that of PLS regression to this multiple predictor group situation. On this criterion, we build a PLS-type exploratory method - Structural Equation Exploratory Regression (SEER) - that allows one to simultaneously perform dimension reduction in the groups and investigate the linear model of the components. SEER uses the multidimensional structure of each group. An application example is given.
Title: Factor Modelling of the Interactions Between Two Sets of Observations: the PLS-FILM Method (Partial Least Squares Factor Interaction Linear Modelling)
Abstract: In this work, we consider a data array encoding interactions between two sets of observations respectively referred to as "subjects" and "objects". Besides, descriptions of subjects and objects are available through two variable sets. We propose a geometrically grounded exploratory technique to analyze the interactions using descriptions of subjects and objects: interactions are modelled using a hierarchy of subject-factors and object-factors built up from these descriptions. Our method bridges the gap between those of Chessel (RLQ analysis) and Martens (L-PLS), although it only has rank 1 components in common with them.
Title: Data-driven calibration of penalties for least-squares regression