Abstract: A stochastic epidemic model is defined in which each individual belongs to a household, a secondary grouping (typically school or workplace) and also the community as a whole. Moreover, infectious contacts take place in these three settings according to potentially different rates. For this model we consider how different kinds of data can be used to estimate the infection rate parameters with a view to understanding what can and cannot be inferred, and with what precision. Among other things we find that temporal data can be of considerable inferential benefit compared to final size data, that the degree of heterogeneity in the data can have a considerable effect on inference for non-household transmission, and that inferences can be materially different from those obtained from a model with two levels of mixing. Keywords: Basic reproduction number, Bayesian inference, Epidemic model, Infectious disease data, Markov chain Monte Carlo, Networks.
|
Title: Rigorous confidence bounds for MCMC under a geometric drift condition
|
Abstract: We assume a drift condition towards a small set and bound the mean square error of estimators obtained by taking averages along a single trajectory of a Markov chain Monte Carlo algorithm. We use these bounds to construct fixed-width nonasymptotic confidence intervals. For a possibly unbounded function $f:\mathcal{X} \to \mathbb{R}$, let $I=\int_{\mathcal{X}} f(x) \pi(x)\,dx$ be the value of interest and $\hat{I}_{t,n}=(1/n)\sum_{i=t}^{t+n-1}f(X_i)$ its MCMC estimate. Precisely, we derive lower bounds for the length of the trajectory $n$ and burn-in time $t$ which ensure that $$P(|\hat{I}_{t,n}-I|\leq \varepsilon)\geq 1-\alpha.$$ The bounds depend only and explicitly on drift parameters, on the $V$-norm of $f$, where $V$ is the drift function, and on the precision and confidence parameters $\varepsilon, \alpha$. Next we analyse an MCMC estimator based on the median of multiple shorter runs that allows for sharper bounds on the required total simulation cost. In particular, the methodology can be applied for computing Bayesian estimators in practically relevant models. We illustrate our bounds numerically in a simple example.
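The median-of-runs idea in this abstract can be sketched in a few lines: average the target function along each of several independent short chains, then take the median of the averages. The Gaussian target, random-walk proposal, and run-length choices below are illustrative assumptions, not the paper's setup.

```python
import math
import random
import statistics

def metropolis_chain(n_steps, step=1.0, x0=0.0, rng=None):
    """Random-walk Metropolis chain targeting a standard normal (toy target)."""
    rng = rng or random.Random(0)
    x, out = x0, []
    for _ in range(n_steps):
        prop = x + rng.uniform(-step, step)
        # Accept with the ratio of unnormalized N(0,1) densities.
        if rng.random() < min(1.0, math.exp(0.5 * (x * x - prop * prop))):
            x = prop
        out.append(x)
    return out

def median_of_runs(f, n_runs=11, n=2000, burn_in=500):
    """Median of the averages of several independent short runs."""
    estimates = []
    for r in range(n_runs):
        chain = metropolis_chain(burn_in + n, rng=random.Random(r))
        estimates.append(sum(f(x) for x in chain[burn_in:]) / n)
    return statistics.median(estimates)

# Estimate I = E[X^2] = 1 under the N(0,1) target.
est = median_of_runs(lambda x: x * x)
print(round(est, 1))
```

The median damps the effect of an occasional badly-mixing run, which is the intuition behind the sharper total-cost bounds claimed in the abstract.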
|
Title: Simulation reductions for the Ising model
|
Abstract: Polynomial time reductions between problems have long been used to delineate problem classes. Simulation reductions also exist, where an oracle for simulation from some probability distribution can be employed together with an oracle for Bernoulli draws in order to obtain a draw from a different distribution. Here linear time simulation reductions are given for: the Ising spins world to the Ising subgraphs world and the Ising subgraphs world to the Ising spins world. This answers a long standing question of whether such a direct relationship between these two versions of the Ising model existed. Moreover, these reductions result in the first method for perfect simulation from the subgraphs world and a new Swendsen-Wang style Markov chain for the Ising model. The method used is to write the desired distribution with set parameters as a mixture of distributions where the parameters are at their extreme values.
|
Title: Classification by Set Cover: The Prototype Vector Machine
|
Abstract: We introduce a new nearest-prototype classifier, the prototype vector machine (PVM). It arises from a combinatorial optimization problem which we cast as a variant of the set cover problem. We propose two algorithms for approximating its solution. The PVM selects a relatively small number of representative points which can then be used for classification. It contains 1-NN as a special case. The method is compatible with any dissimilarity measure, making it amenable to situations in which the data are not embedded in an underlying feature space or in which using a non-Euclidean metric is desirable. Indeed, we demonstrate on the much studied ZIP code data how the PVM can reap the benefits of a problem-specific metric. In this example, the PVM outperforms the highly successful 1-NN with tangent distance, and does so retaining fewer than half of the data points. This example highlights the strengths of the PVM in yielding a low-error, highly interpretable model. Additionally, we apply the PVM to a protein classification problem in which a kernel-based distance is used.
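As a sketch of the set-cover formulation, the following greedy heuristic repeatedly picks the candidate prototype that covers the most still-uncovered same-class points. The radius-based coverage rule and the toy data are assumptions for illustration, not the paper's exact algorithms.

```python
import math

def greedy_prototypes(points, labels, eps):
    """Greedy set-cover sketch: repeatedly pick the point covering the most
    still-uncovered same-class neighbours within radius eps."""
    uncovered = set(range(len(points)))
    prototypes = []
    while uncovered:
        best, best_cover = None, set()
        for i in range(len(points)):
            cover = {j for j in uncovered
                     if labels[j] == labels[i]
                     and math.dist(points[i], points[j]) <= eps}
            if len(cover) > len(best_cover):
                best, best_cover = i, cover
        if not best_cover:  # isolated points become their own prototypes
            best = next(iter(uncovered))
            best_cover = {best}
        prototypes.append(best)
        uncovered -= best_cover
    return prototypes

pts = [(0, 0), (0.1, 0), (5, 5), (5.1, 5), (10, 0)]
labs = [0, 0, 1, 1, 0]
protos = greedy_prototypes(pts, labs, eps=0.5)
print(len(protos))  # 3: a small set of representatives covers all points
```

Classification would then assign a query point the label of its nearest prototype, exactly as in nearest-prototype methods.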
|
Title: Online EM Algorithm for Hidden Markov Models
|
Abstract: Online (also called "recursive" or "adaptive") estimation of fixed model parameters in hidden Markov models is a topic of much interest in time series modelling. In this work, we propose an online parameter estimation algorithm that combines two key ideas. The first one, which is deeply rooted in the Expectation-Maximization (EM) methodology, consists in reparameterizing the problem using complete-data sufficient statistics. The second ingredient consists in exploiting a purely recursive form of smoothing in HMMs based on an auxiliary recursion. Although the proposed online EM algorithm resembles a classical stochastic approximation (or Robbins-Monro) algorithm, it is sufficiently different to resist conventional analysis of convergence. We thus provide limited results which identify the potential limiting points of the recursion as well as the large-sample behavior of the quantities involved in the algorithm. The performance of the proposed algorithm is numerically evaluated through simulations in the case of a noisily observed Markov chain. In this case, the algorithm reaches estimation results that are comparable to those of the maximum likelihood estimator for large sample sizes.
|
Title: Bayesian estimation of a bivariate copula using the Jeffreys prior
|
Abstract: A bivariate distribution with continuous margins can be uniquely decomposed via a copula and its marginal distributions. We consider the problem of estimating the copula function and adopt a Bayesian approach. On the space of copula functions, we construct a finite-dimensional approximation subspace that is parametrized by a doubly stochastic matrix. A major problem here is the selection of a prior distribution on the space of doubly stochastic matrices, also known as the Birkhoff polytope. The main contributions of this paper are the derivation of a simple formula for the Jeffreys prior and a proof that it is proper. It is known in the literature that for a complex problem like the one treated here, the above results are difficult to obtain. The Bayes estimator resulting from the Jeffreys prior is then evaluated numerically via Markov chain Monte Carlo methodology. A rather extensive simulation experiment is carried out. In many cases, the results favour the Bayes estimator over frequentist estimators such as the standard kernel estimator and Deheuvels' estimator in terms of mean integrated squared error.
|
Title: Reconfiguration of 3D Crystalline Robots Using O(log n) Parallel Moves
|
Abstract: We consider the theoretical model of Crystalline robots, which have been introduced and prototyped by the robotics community. These robots consist of independently manipulable unit-square atoms that can extend/contract arms on each side and attach/detach from neighbors. These operations suffice to reconfigure between any two given (connected) shapes. The worst-case number of sequential moves required to transform one connected configuration to another is known to be Theta(n). However, in principle, atoms can all move simultaneously. We develop a parallel algorithm for reconfiguration that runs in only O(log n) parallel steps, although the total number of operations increases slightly to Theta(n log n). The result is the first (theoretically) almost-instantaneous universally reconfigurable robot built from simple units.
|
Title: Sequential Quantile Prediction of Time Series
|
Abstract: Motivated by a broad range of potential applications, we address the quantile prediction problem of real-valued time series. We present a sequential quantile forecasting model based on the combination of a set of elementary nearest neighbor-type predictors called "experts" and show its consistency under a minimum of conditions. Our approach builds on the methodology developed in recent years for prediction of individual sequences and exploits the quantile structure as a minimizer of the so-called pinball loss function. We perform an in-depth analysis of real-world data sets and show that this nonparametric strategy generally outperforms standard quantile prediction methods.
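The pinball-loss view of quantiles used above is easy to make concrete: the tau-quantile minimizes the expected pinball loss, and on a finite sample the minimizer can be found among the observed values. A brute-force illustration on toy data (not from the paper):

```python
def pinball_loss(y, q, tau):
    """Pinball (quantile) loss: tau-weighted for under-prediction,
    (1 - tau)-weighted for over-prediction."""
    return tau * (y - q) if y >= q else (1 - tau) * (q - y)

def empirical_pinball_minimizer(ys, tau):
    """Among the observed values, pick the one minimizing total pinball loss;
    this recovers an empirical tau-quantile of the sample."""
    return min(ys, key=lambda q: sum(pinball_loss(y, q, tau) for y in ys))

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(empirical_pinball_minimizer(data, 0.5))  # a median of the sample
print(empirical_pinball_minimizer(data, 0.9))  # an upper quantile
```

In the sequential setting of the paper, each expert would issue such a quantile forecast and the experts' predictions are then aggregated online.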
|
Title: A Backward Particle Interpretation of Feynman-Kac Formulae
|
Abstract: We design a particle interpretation of Feynman-Kac measures on path spaces based on a backward Markovian representation combined with a traditional mean field particle interpretation of the flow of their final time marginals. In contrast to traditional genealogical tree based models, these new particle algorithms can be used to compute normalized additive functionals "on-the-fly" as well as their limiting occupation measures with a given precision degree that does not depend on the final time horizon. We provide uniform convergence results w.r.t. the time horizon parameter as well as functional central limit theorems and exponential concentration estimates. We also illustrate these results in the context of computational physics and imaginary time Schroedinger type partial differential equations, with a special interest in the numerical approximation of the invariant measure associated to $h$-processes.
|
Title: Convex Multiview Fisher Discriminant Analysis
|
Abstract: Section 1.3 was incorrect, and 2.1 will be removed from further submissions. A rewritten version will be posted in the future.
|
Title: A Bayesian approach to the estimation of maps between Riemannian manifolds, II: examples
|
Abstract: Let M be a smooth compact oriented manifold without boundary, embedded in a Euclidean space E, and let f be a smooth map of M into a Riemannian manifold N. An unknown state x in M is observed via X=x+su, where s>0 is a small parameter and u is a white Gaussian noise. For a given smooth prior on M and smooth estimators g of the map f we have derived a second-order asymptotic expansion for the related Bayesian risk (see arXiv:0705.2540). In this paper, we apply this technique to a variety of examples. The second part examines the first-order conditions for equality-constrained regression problems. The geometric tools that are utilised in our earlier paper are naturally applicable to these regression problems.
|
Title: Convergence of Nonparametric Long-Memory Phase I Designs
|
Abstract: We examine nonparametric dose-finding designs that use toxicity estimates based on all available data at each dose allocation decision. We prove that one such design family, called here "interval design", converges almost surely to the maximum tolerated dose (MTD), if the MTD is the only dose level whose toxicity rate falls within the pre-specified interval around the desired target rate. Another nonparametric family, called "point design", has a positive probability of not converging. In a numerical sensitivity study, a diverse sample of dose-toxicity scenarios was randomly generated. On this sample, the "interval design" convergence conditions are met far more often than the conditions for one-parameter design convergence (the Shen-O'Quigley conditions), suggesting that the interval-design conditions are less restrictive. Implications of these theoretical and numerical results for small-sample behavior of the designs, and for future research, are discussed.
|
Title: Dynamic quantum clustering: a method for visual exploration of structures in data
|
Abstract: A given set of data-points in some feature space may be associated with a Schrodinger equation whose potential is determined by the data. This is known to lead to good clustering solutions. Here we extend this approach into a full-fledged dynamical scheme using a time-dependent Schrodinger equation. Moreover, we approximate this Hamiltonian formalism by a truncated calculation within a set of Gaussian wave functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition or feature filtering.
|
Title: Semantic Robot Vision Challenge: Current State and Future Directions
|
Abstract: The Semantic Robot Vision Competition provided an excellent opportunity for our research lab to integrate our many ideas under one umbrella, inspiring both collaboration and new research. The task, visual search for an unknown object, is relevant to both the vision and robotics communities. Moreover, since the interplay of robotics and vision is sometimes ignored, the competition provides a venue to integrate two communities. In this paper, we outline a number of modifications to the competition to both improve the state-of-the-art and increase participation.
|
Title: Sparse Canonical Correlation Analysis
|
Abstract: We present a novel method for solving Canonical Correlation Analysis (CCA) in a sparse convex framework using a least squares approach. The presented method focuses on the scenario when one is interested in (or limited to) a primal representation for the first view while having a dual representation for the second view. Sparse CCA (SCCA) minimises the number of features used in both the primal and dual projections while maximising the correlation between the two views. The method is demonstrated on two paired corpora, English-French and English-Spanish, for mate retrieval. We are able to observe, in the mate-retrieval task, that when the number of original features is large, SCCA outperforms Kernel CCA (KCCA), learning the common semantic space from a sparse set of features.
|
Title: A nonparametric independence test using random permutations
|
Abstract: We propose a new nonparametric test for the hypothesis of independence between two continuous random variables. The test is based on the length of the longest increasing subsequence of a random permutation. We identify the independence assumption between the two continuous variables with the space of permutations equipped with the uniform distribution, and we derive the exact distribution of the test statistic. We calculate the distribution for several sample sizes. Through a simulation study we estimate the power of our test under the null hypothesis of independence against diverse alternative hypotheses.
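The test statistic described above is straightforward to compute: turn the paired sample into a permutation by ranking, then take the length of its longest increasing subsequence (LIS). A minimal sketch with toy data (the O(n log n) patience-sorting routine is standard, not specific to the paper):

```python
import bisect

def lis_length(perm):
    """Length of the longest increasing subsequence via patience sorting."""
    tails = []
    for x in perm:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def rank_permutation(xs, ys):
    """Permutation obtained by sorting the pairs by x and reading off the ranks of y."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    rank = {i: r for r, i in enumerate(sorted(range(len(ys)), key=lambda i: ys[i]))}
    return [rank[i] for i in order]

xs = [0.1, 0.4, 0.2, 0.9, 0.6]
ys = [1.0, 2.1, 1.5, 3.9, 2.8]  # monotone in xs, i.e. strongly dependent
print(lis_length(rank_permutation(xs, ys)))  # 5
```

Under independence the permutation is uniform and the LIS length concentrates around 2*sqrt(n), so an unusually long (or short) LIS is evidence against independence.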
|
Title: Predictive validities: figures of merit or veils of deception?
|
Abstract: The ETS has recently released new estimates of validities of the GRE for predicting cumulative graduate GPA. They average in the middle thirties - twice as high as those previously reported by a number of independent investigators. It is shown in the first part of this paper that this unexpected finding can be traced to a flawed methodology that tends to inflate multiple correlation estimates, especially those of population values near zero. Secondly, the issue of upward corrections of validity estimates for restriction of range is taken up. It is shown that they depend on assumptions that are rarely met by the data. Finally, it is argued more generally that conventional test theory, which is couched in terms of correlations and variances, is not only unnecessarily abstract but, more importantly, incomplete, since the practical utility of a test does not only depend on its validity, but also on base-rates and admission quotas. A more direct and conclusive method for gauging the utility of a test involves misclassification rates, and entirely dispenses with questionable assumptions and post-hoc "corrections". On applying this approach to the GRE, it emerges (1) that the GRE discriminates against ethnic and economic minorities, and (2) that it often produces more erroneous decisions than a purely random admissions policy would.
|
Title: Approximation of Average Run Length of Moving Sum Algorithms Using Multivariate Probabilities
|
Abstract: Among the various procedures used to detect potential changes in a stochastic process, the moving sum algorithms are very popular due to their intuitive appeal and good statistical performance. One of the important design parameters of a change detection algorithm is the expected interval between false positives, also known as the average run length (ARL). Computation of the ARL usually involves numerical procedures, but in some cases it can be approximated using a series involving multivariate probabilities. In this paper, we present an analysis of this series approach by providing sufficient conditions for convergence and deriving an error bound. Using simulation studies, we show that the series approach is applicable to moving average and filtered derivative algorithms. For moving average algorithms, we compare our results with previously known bounds. We use two special cases to illustrate our observations.
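A moving-sum (MOSUM) change detector of the kind analysed above can be sketched in a few lines; the window length, threshold, and step-change data below are illustrative assumptions:

```python
def moving_sum_detector(xs, window, threshold):
    """Flag an alarm whenever the sum of the last `window` observations
    exceeds `threshold` (a basic MOSUM change detector)."""
    alarms, s = [], 0.0
    for i, x in enumerate(xs):
        s += x
        if i >= window:
            s -= xs[i - window]  # drop the observation leaving the window
        if i >= window - 1 and s > threshold:
            alarms.append(i)
    return alarms

# No change in the first half, upward mean shift afterwards.
xs = [0.0] * 50 + [1.0] * 50
print(moving_sum_detector(xs, window=10, threshold=5.0)[0])  # 55
```

The ARL studied in the paper is the expected index of the first alarm when no change has occurred; the threshold is tuned so that this interval between false positives is acceptably long.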
|
Title: Computational Understanding and Manipulation of Symmetries
|
Abstract: For natural and artificial systems with some symmetry structure, computational understanding and manipulation can be achieved without learning by exploiting the algebraic structure. Here we describe this algebraic coordinatization method and apply it to permutation puzzles. Coordinatization yields a structural understanding, not just solutions for the puzzles.
|
Title: Another Look at Quantum Neural Computing
|
Abstract: The term quantum neural computing indicates a unity in the functioning of the brain. It assumes that the neural structures perform classical processing and that the virtual particles associated with the dynamical states of the structures define the underlying quantum state. We revisit the concept and also summarize new arguments related to the learning modes of the brain in response to sensory input that may be aggregated in three types: associative, reorganizational, and quantum. The associative and reorganizational types are quite apparent based on experimental findings; it is much harder to establish that the brain as an entity exhibits quantum properties. We argue that the reorganizational behavior of the brain may be viewed as inner adjustment corresponding to its quantum behavior at the system level. Not only neural structures but their higher abstractions also may be seen as whole entities. We consider the dualities associated with the behavior of the brain and how these dualities are bridged.
|
Title: Practical approach to programmable analog circuits with memristors
|
Abstract: We suggest an approach to use memristors (resistors with memory) in programmable analog circuits. Our idea consists in a circuit design in which low voltages are applied to memristors during their operation as analog circuit elements and high voltages are used to program the memristors' states. This way, as was demonstrated in recent experiments, the state of the memristors does not essentially change during analog mode operation. As an example of our approach, we have built several programmable analog circuits demonstrating memristor-based programming of threshold, gain and frequency.
|
Title: Nonparametric estimation of the volatility function in a high-frequency model corrupted by noise
|
Abstract: We consider the models $Y_{i,n}=\int_0^{i/n} \sigma(s)\,dW_s+\tau(i/n)\epsilon_{i,n}$ and $\tilde Y_{i,n}=\sigma(i/n)W_{i/n}+\tau(i/n)\epsilon_{i,n}$, $i=1,...,n$, where $W_t$ denotes a standard Brownian motion and $\epsilon_{i,n}$ are centered i.i.d. random variables with $E(\epsilon_{i,n}^2)=1$ and finite fourth moment. Furthermore, $\sigma$ and $\tau$ are unknown deterministic functions and $W_t$ and $(\epsilon_{1,n},...,\epsilon_{n,n})$ are assumed to be independent processes. Based on a spectral decomposition of the covariance structures we derive series estimators for $\sigma^2$ and $\tau^2$ and investigate the rate of convergence of their MISE in dependence on the smoothness of these functions. To this end specific basis functions and their corresponding Sobolev ellipsoids are introduced and we show that our estimators are optimal in the minimax sense. Our work is motivated by microstructure noise models. Our major finding is that the microstructure noise $\epsilon_{i,n}$ introduces an additional degree of ill-posedness of 1/2, irrespective of the tail behavior of $\epsilon_{i,n}$. The method is illustrated by a small numerical study.
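The second model, tilde-Y_{i,n} = sigma(i/n) W_{i/n} + tau(i/n) eps_{i,n}, is simple to simulate, which is handy for checking estimators numerically. A sketch with illustrative choices of sigma and tau (standard normal noise, so the unit-variance condition holds):

```python
import math
import random

def simulate(n, sigma, tau, rng=None):
    """Simulate tilde-Y_{i,n} = sigma(i/n) * W_{i/n} + tau(i/n) * eps_{i,n}
    on the grid i/n, i = 1..n, with standard normal noise eps."""
    rng = rng or random.Random(1)
    w, ys = 0.0, []
    for i in range(1, n + 1):
        w += rng.gauss(0.0, math.sqrt(1.0 / n))  # Brownian increment over 1/n
        t = i / n
        ys.append(sigma(t) * w + tau(t) * rng.gauss(0.0, 1.0))
    return ys

# Illustrative volatility and noise-level functions (not from the paper).
ys = simulate(1000, sigma=lambda t: 1.0 + 0.5 * t, tau=lambda t: 0.1)
print(len(ys))  # 1000
```

The estimation problem in the paper is the inverse of this forward simulation: recover sigma^2 and tau^2 from the observed ys alone.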
|
Title: Quantifying Rational Belief
|
Abstract: Some criticisms that have been raised against the Cox approach to probability theory are addressed. Should we use a single real number to measure a degree of rational belief? Can beliefs be compared? Are the Cox axioms obvious? Are there counterexamples to Cox? Rather than justifying Cox's choice of axioms we follow a different path and derive the sum and product rules of probability theory as the unique (up to regraduations) consistent representations of the Boolean AND and OR operations.
|
Title: Non-quadratic convex regularized reconstruction of MR images from spiral acquisitions
|
Abstract: Combining fast MR acquisition sequences and high resolution imaging is a major issue in dynamic imaging. Reducing the acquisition time can be achieved by using non-Cartesian and sparse acquisitions. The reconstruction of MR images from these measurements is generally carried out using gridding that interpolates the missing data to obtain a dense Cartesian k-space filling. The MR image is then reconstructed using a conventional Fast Fourier Transform. The estimation of the missing data unavoidably introduces artifacts in the image that remain difficult to quantify. A general reconstruction method is proposed to take into account these limitations. It can be applied to any sampling trajectory in k-space, Cartesian or not, and specifically takes into account the exact location of the measured data, without making any interpolation of the missing data in k-space. Information about the expected characteristics of the imaged object is introduced to preserve the spatial resolution and improve the signal to noise ratio in a regularization framework. The reconstructed image is obtained by minimizing a non-quadratic convex objective function. An original rewriting of this criterion is shown to strongly improve the reconstruction efficiency. Results on simulated data and on a real spiral acquisition are presented and discussed.
|
Title: Rate Constrained Random Access over a Fading Channel
|
Abstract: In this paper, we consider uplink transmissions involving multiple users communicating with a base station over a fading channel. We assume that the base station does not coordinate the transmissions of the users and hence the users employ random access communication. The situation is modeled as a non-cooperative repeated game with incomplete information. Each user attempts to minimize its long term power consumption subject to a minimum rate requirement. We propose a two timescale stochastic gradient algorithm (TTSGA) for tuning the users' transmission probabilities. The algorithm includes a 'waterfilling threshold update mechanism' that ensures that the rate constraints are satisfied. We prove that under the algorithm, the users' transmission probabilities converge to a Nash equilibrium. Moreover, we also prove that the rate constraints are satisfied; this is also demonstrated using simulation studies.
|
Title: Relative Expected Improvement in Kriging Based Optimization
|
Abstract: We propose an extension of the Expected Improvement criterion commonly used in Kriging-based optimization. We extend it to more complex Kriging models, e.g. models using derivatives. The target field of application is CFD problems, where objective functions are extremely expensive to evaluate, but the theory can also be used in other fields.
|
Title: Geometric Analysis of the Conformal Camera for Intermediate-Level Vision and Perisaccadic Perception
|
Abstract: A binocular system developed by the author in terms of projective Fourier transform (PFT) of the conformal camera, which numerically integrates the head, eyes, and visual cortex, is used to process visual information during saccadic eye movements. Although we make three saccades per second at the eyeball's maximum speed of 700 deg/sec, our visual system accounts for these incisive eye movements to produce a stable percept of the world. This visual constancy is maintained by neuronal receptive field shifts in various retinotopically organized cortical areas prior to saccade onset, giving the brain access to visual information from the saccade's target before the eyes' arrival. It integrates visual information acquisition across saccades. Our modeling utilizes basic properties of PFT. First, PFT is computable by FFT in complex logarithmic coordinates that approximate the retinotopy. Second, a translation in retinotopic (logarithmic) coordinates, modeled by the shift property of the Fourier transform, remaps the presaccadic scene into a postsaccadic reference frame. It also accounts for the perisaccadic mislocalization observed by human subjects in laboratory experiments. Because our modeling involves cross-disciplinary areas of conformal geometry, abstract and computational harmonic analysis, computational vision, and visual neuroscience, we include the corresponding background material and elucidate how these different areas are interwoven in our modeling of primate perception. In particular, we present the physiological and behavioral facts underlying the neural processes related to our modeling. We also emphasize the conformal camera's geometry and discuss how it is uniquely useful in the intermediate-level vision computational aspects of natural scene understanding.
|
Title: Construction of Hilbert Transform Pairs of Wavelet Bases and Gabor-like Transforms
|
Abstract: We propose a novel method for constructing Hilbert transform (HT) pairs of wavelet bases based on a fundamental approximation-theoretic characterization of scaling functions--the B-spline factorization theorem. In particular, starting from well-localized scaling functions, we construct HT pairs of biorthogonal wavelet bases of L^2(R) by relating the corresponding wavelet filters via a discrete form of the continuous HT filter. As a concrete application of this methodology, we identify HT pairs of spline wavelets of a specific flavor, which are then combined to realize a family of complex wavelets that resemble the optimally-localized Gabor function for sufficiently large orders. Analytic wavelets, derived from the complexification of HT wavelet pairs, exhibit a one-sided spectrum. Based on the tensor-product of such analytic wavelets, and, in effect, by appropriately combining four separable biorthogonal wavelet bases of L^2(R^2), we then discuss a methodology for constructing 2D directional-selective complex wavelets. In particular, analogous to the HT correspondence between the components of the 1D counterpart, we relate the real and imaginary components of these complex wavelets using a multi-dimensional extension of the HT--the directional HT. Next, we construct a family of complex spline wavelets that resemble the directional Gabor functions proposed by Daugman. Finally, we present an efficient FFT-based filterbank algorithm for implementing the associated complex wavelet transform.
|
Title: On the Shiftability of Dual-Tree Complex Wavelet Transforms
|
Abstract: The dual-tree complex wavelet transform (DT-CWT) is known to exhibit better shift-invariance than the conventional discrete wavelet transform. We propose an amplitude-phase representation of the DT-CWT which, among other things, offers a direct explanation for the improvement in the shift-invariance. The representation is based on the shifting action of the group of fractional Hilbert transform (fHT) operators, which extends the notion of arbitrary phase-shifts from sinusoids to finite-energy signals (wavelets in particular). In particular, we characterize the shiftability of the DT-CWT in terms of the shifting property of the fHTs. At the heart of the representation are certain fundamental invariances of the fHT group, namely that of translation, dilation, and norm, which play a decisive role in establishing the key properties of the transform. It turns out that these fundamental invariances are exclusive to this group. Next, by introducing a generalization of the Bedrosian theorem for the fHT operator, we derive an explicit understanding of the shifting action of the fHT for the particular family of wavelets obtained through the modulation of lowpass functions (e.g., the Shannon and Gabor wavelet). This, in effect, links the corresponding dual-tree transform with the framework of windowed-Fourier analysis. Finally, we extend these ideas to the multi-dimensional setting by introducing a directional extension of the fHT, the fractional directional Hilbert transform. In particular, we derive a signal representation involving the superposition of direction-selective wavelets with appropriate phase-shifts, which helps explain the improved shift-invariance of the transform along certain preferential directions.
|
Title: A Cognitive Mind-map Framework to Foster Trust
|
Abstract: The explorative mind-map is a dynamic framework that emerges automatically from the input it gets. It is unlike a verificative modeling system, where existing (human) thoughts are placed and connected together. In this regard, explorative mind-maps change their size continuously, being adaptive with connectionist cells inside; mind-maps process data input incrementally and offer many possibilities to interact with the user through an appropriate communication interface. With respect to a cognitively motivated situation like a conversation between partners, mind-maps become interesting as they are able to process stimulating signals whenever they occur. If these signals are close to the system's own understanding of the world, then the conversational partner automatically becomes more trusted than if the signals match that knowledge scheme less well or not at all. In this (position) paper, we therefore motivate explorative mind-maps as a cognitive engine and propose them as a decision support engine to foster trust.
|
Title: The Optimal Unbiased Value Estimator and its Relation to LSTD, TD and MC
|
Abstract: In this analytical study we derive the optimal unbiased value estimator (MVU) and compare its statistical risk to that of three well known value estimators: Temporal Difference learning (TD), Monte Carlo estimation (MC) and Least-Squares Temporal Difference Learning (LSTD). We demonstrate that LSTD is equivalent to the MVU if the Markov Reward Process (MRP) is acyclic, and show that both differ for most cyclic MRPs, as LSTD is then typically biased. More generally, we show that estimators that fulfill the Bellman equation can only be unbiased for special cyclic MRPs, the main reason being the probability measures with which the expectations are taken. These measures vary from state to state, and due to the strong coupling by the Bellman equation it is typically not possible for a set of value estimators to be unbiased with respect to each of these measures. Furthermore, we derive relations of the MVU to MC and TD, the most important being the equivalence of MC to the MVU and to LSTD for undiscounted MRPs in which MC has the same amount of information. In the discounted case this equivalence does not hold anymore. For TD we show that it is essentially unbiased for acyclic MRPs and biased for cyclic MRPs. We also order estimators according to their risk and present counter-examples to show that no general ordering exists between the MVU and LSTD, between MC and LSTD, and between TD and MC. Theoretical results are supported by examples and an empirical evaluation.
|
Title: Maximizing profit using recommender systems
|
Abstract: Traditional recommendation systems make recommendations based solely on the customer's past purchases, product ratings and demographic data, without considering the profitability of the items being recommended. In this work we study the question of how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach uses the output of any traditional recommender system and adjusts it according to item profitabilities. Our approach is parameterized so that the vendor can control how much the profit-incorporating recommendations can deviate from the traditional ones. We study our approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
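The parameterized adjustment described above can be sketched as a convex blend of the recommender's score and normalized item profit, with alpha controlling the allowed deviation (alpha = 0 reproduces the traditional ranking). The blending rule and the data below are illustrative assumptions, not the paper's exact method.

```python
def profit_adjusted_ranking(scores, profits, alpha):
    """Re-rank items by a convex blend of the base recommender score and
    normalized item profit; alpha in [0, 1] controls the deviation from the
    traditional ranking (alpha = 0 reproduces it exactly)."""
    max_p = max(profits.values()) or 1.0
    blended = {item: (1 - alpha) * s + alpha * (profits[item] / max_p)
               for item, s in scores.items()}
    return sorted(blended, key=blended.get, reverse=True)

scores = {"A": 0.9, "B": 0.8, "C": 0.5}   # traditional recommender output
profits = {"A": 1.0, "B": 6.0, "C": 2.0}  # vendor profit per item (toy values)
print(profit_adjusted_ranking(scores, profits, alpha=0.0))  # ['A', 'B', 'C']
print(profit_adjusted_ranking(scores, profits, alpha=0.5))  # B overtakes A
```

The vendor's trade-off is visible directly: larger alpha shifts profitable items up the list at some cost in recommendation accuracy.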
|
Title: Chaotic Transitions in Wall Following Robots
|
Abstract: In this paper we examine how simple agents similar to Braitenberg vehicles can exhibit chaotic movement patterns. The agents are wall following robots as described by Steve Mesburger and Alfred Hubler in their paper "Chaos in Wall Following Robots". Each agent uses a simple forward-facing distance sensor with a limited field of view "phi" for navigation. An agent drives forward at a constant velocity and uses the sensor to turn right when it is too close to an object and left when it is too far away. For a flat wall the agent stays a fixed distance from the wall and travels along it, regardless of the sensor's capabilities. But if the wall is a periodic function, the agent drives on a periodic path when the sensor has a narrow field of view. The agent's trajectory transitions to chaos when the sensor's field of view is increased. Numerical experiments were performed with square, triangle, and sawtooth waves for the wall to find this pattern. The bifurcations of the agents were analyzed, revealing both border-collision and period-doubling bifurcations. Detailed experimental results will be reported separately.
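The control law described above (drive at constant speed, turn toward the wall when too far and away when too close) takes only a few lines to simulate. This bang-bang sketch uses a simple vertical distance reading instead of the paper's limited-field-of-view sensor, so it only illustrates the flat-wall case where the agent settles near a fixed distance:

```python
import math

def wall_follower(wall, steps, dt=0.05, v=1.0, d_target=1.0, turn=1.0):
    """Bang-bang wall follower along the wall y = wall(x): turn toward the
    wall when farther than d_target, away when closer. The vertical offset
    y - wall(x) stands in for the range sensor (an illustrative assumption)."""
    x, y, heading = 0.0, wall(0.0) + d_target, 0.0
    trace = []
    for _ in range(steps):
        d = y - wall(x)                              # crude "sensor" reading
        heading += (-turn if d > d_target else turn) * dt
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        trace.append(y - wall(x))
    return trace

# Flat wall: the distance should oscillate tightly around the target.
trace = wall_follower(lambda x: 0.0, steps=2000)
print(round(sum(trace[-500:]) / 500, 1))  # close to the 1.0 target distance
```

Replacing the flat wall with a periodic one (e.g. a sawtooth) and the vertical reading with a directional, limited-field-of-view sensor is what produces the periodic-to-chaotic transitions studied in the paper.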
|
Title: Uncovering delayed patterns in noisy and irregularly sampled time series: an astronomy application
|
Abstract: We study the problem of estimating the time delay between two signals representing delayed, irregularly sampled and noisy versions of the same underlying pattern. We propose and demonstrate an evolutionary algorithm for the (hyper)parameter estimation of a kernel-based technique in the context of an astronomical problem, namely estimating the time delay between two gravitationally lensed signals from a distant quasar. Mixed types (integer and real) are used to represent variables within the evolutionary algorithm. We test the algorithm on several artificial data sets, and also on real astronomical observations of quasar Q0957+561. By carrying out a statistical analysis of the results we present a detailed comparison of our method with the most popular methods for time delay estimation in astrophysics. Our method yields more accurate and more stable time delay estimates: for Q0957+561, we obtain 419.6 days for the time delay between images A and B. Our methodology can be readily applied to current state-of-the-art optical monitoring data in astronomy, but can also be applied in other disciplines involving similar time series data.
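A heavily simplified version of the delay-estimation task is a grid search over candidate delays, scoring each by how well a kernel-smoothed, shifted copy of one series predicts the other. Everything below (the Gaussian kernel, its width, the toy sine data) is an illustrative assumption, not the paper's evolutionary kernel method:

```python
import math

def estimate_delay(t1, y1, t2, y2, candidates, width=0.5):
    """Grid-search delay estimate: shift the second series by each candidate
    delay and score the Gaussian-kernel-smoothed squared error against the
    first series; return the candidate with the smallest error."""
    def score(delay):
        err = 0.0
        for a, ya in zip(t1, y1):
            ws = [math.exp(-(((a - (b - delay)) / width) ** 2)) for b in t2]
            tot = sum(ws)
            if tot > 1e-12:  # skip points with no nearby shifted samples
                pred = sum(w * yb for w, yb in zip(ws, y2)) / tot
                err += (ya - pred) ** 2
        return err
    return min(candidates, key=score)

# Series 2 is series 1 delayed by 3 time units, irregularly sampled.
t1 = [0.0, 1.1, 2.3, 3.8, 5.0, 6.2, 7.9, 9.1]
y1 = [math.sin(t) for t in t1]
t2 = [b + 3.0 for b in [0.5, 1.7, 2.9, 4.1, 5.6, 7.0, 8.4]]
y2 = [math.sin(b - 3.0) for b in t2]
print(estimate_delay(t1, y1, t2, y2, candidates=[0, 1, 2, 3, 4, 5]))  # 3
```

The paper's contribution is to tune such kernel (hyper)parameters with an evolutionary algorithm over mixed integer and real variables, rather than a fixed grid, and to cope with observational noise.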
|