Title: Overlapping stochastic block models with application to the French political blogosphere
Abstract: Complex systems in nature and in society are often represented as networks, describing the rich set of interactions between objects of interest. Many deterministic and probabilistic clustering methods have been developed to analyze such structures. Given a network, almost all of them partition the vertices into disjoint clusters according to their connection profiles. However, recent studies have shown that these techniques are too restrictive and that most real-world networks contain overlapping clusters. To tackle this issue, we present in this paper the Overlapping Stochastic Block Model. Our approach allows the vertices to belong to multiple clusters and, to some extent, generalizes the well-known Stochastic Block Model [Nowicki and Snijders (2001)]. We show that the model is generically identifiable within equivalence classes, and we propose an approximate inference procedure based on global and local variational techniques. Using toy data sets as well as the French political blogosphere network and the transcriptional network of Saccharomyces cerevisiae, we compare our work with other approaches.
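The generative side of such a model is easy to sketch in code. Below is a minimal, hypothetical illustration of an overlapping block model, not necessarily the paper's exact parameterization: each vertex independently draws a binary membership vector (so it may belong to several clusters at once), and each directed edge is drawn with a probability that depends on both endpoints' memberships through an assumed weight matrix `W`.

```python
import numpy as np

def sample_overlapping_sbm(n, q, alpha, W, seed=0):
    """Sample a directed graph from a toy overlapping block model.

    n: number of vertices, q: number of clusters,
    alpha: prior probability of belonging to each cluster,
    W: (q, q) real weight matrix controlling inter-cluster connectivity.
    """
    rng = np.random.default_rng(seed)
    Z = (rng.random((n, q)) < alpha).astype(float)  # overlapping memberships
    logits = Z @ W @ Z.T                            # pairwise connection scores
    P = 1.0 / (1.0 + np.exp(-logits))               # edge probabilities
    X = (rng.random((n, n)) < P).astype(int)
    np.fill_diagonal(X, 0)                          # no self-loops
    return Z, X

# Toy usage: three assortative clusters on 30 vertices.
W = 4.0 * np.eye(3) - 2.0
Z, X = sample_overlapping_sbm(30, 3, alpha=0.4, W=W)
```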
Title: Node harvest
Abstract: When choosing a suitable technique for regression and classification with multivariate predictor variables, one is often faced with a tradeoff between interpretability and high predictive accuracy. To give a classical example, classification and regression trees are easy to understand and interpret. Tree ensembles like Random Forests usually provide more accurate predictions, yet they are also more difficult to analyze than single trees and are often criticized, perhaps unfairly, as `black box' predictors. Node harvest tries to reconcile the two aims of interpretability and predictive accuracy by combining positive aspects of trees and tree ensembles. Results are very sparse and interpretable, and predictive accuracy is highly competitive, especially for low signal-to-noise data. The procedure is simple: an initial set of a few thousand nodes is generated randomly. If a new observation falls into just a single node, its prediction is the mean response of all training observations within this node, identical to a tree-like prediction. A new observation typically falls into several nodes, however, and its prediction is then the weighted average of the mean responses across all these nodes. The only role of node harvest is to `pick' the right nodes from the initial large ensemble by choosing node weights, which in the proposed algorithm amounts to a quadratic programming problem with linear inequality constraints. The solution is sparse in the sense that only very few nodes are selected with a nonzero weight, and this sparsity is not explicitly enforced. Perhaps surprisingly, it is not necessary to select a tuning parameter for optimal predictive accuracy. Node harvest can handle mixed data and missing values, and is shown to be simple to interpret and competitive in predictive accuracy on a variety of data sets.
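A minimal sketch of the prediction rule, assuming the node weights have already been selected by the quadratic program; encoding nodes as axis-aligned boxes is our simplification for illustration, not the paper's data structures:

```python
import numpy as np

def node_harvest_predict(x, nodes, weights, node_means):
    """Weighted-average prediction over all nodes containing x.

    Each node is an axis-aligned box given as a (lower, upper) pair of
    arrays; node_means holds the mean training response in each node and
    weights the (sparse) node weights chosen by the quadratic program.
    """
    num = den = 0.0
    for (lo, hi), w, m in zip(nodes, weights, node_means):
        if w > 0 and np.all(x >= lo) and np.all(x <= hi):
            num += w * m
            den += w
    return num / den if den > 0 else np.nan
```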
Title: Finite element model selection using Particle Swarm Optimization
Abstract: This paper proposes the application of particle swarm optimization (PSO) to the problem of finite element model (FEM) selection. This problem arises when a choice of the best model for a system has to be made from a set of competing models, each developed a priori from engineering judgment. PSO is a population-based stochastic search algorithm inspired by the behaviour of biological entities in nature when they are foraging for resources. Each potentially correct model is represented as a particle that exhibits both individualistic and group behaviour. Each particle moves within the model search space looking for the best solution by updating the parameter values that define it. The most important step in the particle swarm algorithm is the method of representing models, which should take into account the number, location and variables of parameters to be updated. One example structural system is used to show the applicability of PSO in finding an optimal FEM. An optimal model is defined as the model that has the least number of updated parameters and the smallest variation of its parameter values from the mean material properties. Two different objective functions are used to compare the performance of the PSO algorithm.
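The particle update at the heart of PSO is standard; the following is a textbook sketch on a generic objective, leaving out the paper's FEM-specific model encoding. The parameters `w`, `c1`, `c2` are the usual inertia and acceleration constants, with values chosen here by assumption:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 lo=-1.0, hi=1.0, seed=0):
    """Minimal standard particle swarm optimizer (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pval = np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

# Usage: minimize a simple quadratic as a stand-in objective.
best, best_val = pso_minimize(lambda p: np.sum((p - 0.3) ** 2), dim=4)
```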
Title: Repeated Auctions with Learning for Spectrum Access in Cognitive Radio Networks
Abstract: In this paper, spectrum access in cognitive radio networks is modeled as a repeated auction game subject to monitoring and entry costs. For secondary users, sensing costs are incurred as the result of primary users' activity. Furthermore, each secondary user pays the cost of transmissions upon successful bidding for a channel. Knowledge regarding other secondary users' activity is limited due to the distributed nature of the network. The resulting formulation is thus a dynamic game with incomplete information. In this paper, an efficient bidding learning algorithm is proposed based on the outcome of past transactions. As demonstrated through extensive simulations, the proposed distributed scheme outperforms a myopic one-stage algorithm, and can achieve a good balance between efficiency and fairness.
Title: State of the Art Review for Applying Computational Intelligence and Machine Learning Techniques to Portfolio Optimisation
Abstract: Computational techniques have shown much promise in the field of finance, owing to their ability to extract sense out of dauntingly complex systems. This paper reviews the most promising of these techniques, from traditional computational intelligence methods to their machine learning siblings, with a particular view to their application in optimising the management of a portfolio of financial instruments. The current state of the art is assessed, and prospective further work is recommended.
Title: Positive Semidefinite Metric Learning with Boosting
Abstract: The learning of appropriate distance metrics is a critical problem in image classification and retrieval. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a Mahalanobis distance metric. One of the primary difficulties in learning such a metric is to ensure that the Mahalanobis matrix remains positive semidefinite. Semidefinite programming is sometimes used to enforce this constraint, but does not scale well. BoostMetric is instead based on the key observation that any positive semidefinite matrix can be decomposed into a linear positive combination of trace-one rank-one matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting method is easy to implement, does not require tuning, and can accommodate various types of constraints. Experiments on various datasets show that the proposed algorithm compares favorably to state-of-the-art methods in terms of classification accuracy and running time.
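The key observation can be verified directly with an eigendecomposition. The sketch below demonstrates the decomposition itself, not the boosting procedure that learns the rank-one weak learners incrementally:

```python
import numpy as np

def trace_one_rank_one_decomposition(M):
    """Decompose a PSD matrix M into a positive combination of trace-one
    rank-one matrices: M = sum_k w_k * u_k u_k^T with w_k >= 0 and
    tr(u_k u_k^T) = ||u_k||^2 = 1."""
    eigvals, eigvecs = np.linalg.eigh(M)
    terms = []
    for lam, u in zip(eigvals, eigvecs.T):   # columns of eigvecs are eigenvectors
        if lam > 1e-12:                      # keep strictly positive modes
            terms.append((lam, np.outer(u, u)))  # unit u => trace-one term
    return terms

# Sanity check: the weighted sum of rank-one terms reconstructs M.
A = np.random.default_rng(0).normal(size=(4, 4))
M = A @ A.T                                  # an arbitrary PSD matrix
terms = trace_one_rank_one_decomposition(M)
assert np.allclose(M, sum(w * P for w, P in terms))
```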
Title: Importance sampling methods for Bayesian discrimination between embedded models
Abstract: This paper surveys some well-established approaches to the approximation of Bayes factors used in Bayesian model choice, mostly as covered in Chen et al. (2000). Our focus here is on methods based on importance sampling strategies rather than variable-dimension techniques like reversible jump MCMC, including: crude Monte Carlo, maximum-likelihood-based importance sampling, bridge and harmonic mean sampling, and Chib's method based on the exploitation of a functional equality. We demonstrate in this survey how these different methods can be efficiently implemented for testing the significance of a predictive variable in a probit model. Finally, we compare their performance on a real dataset.
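The simplest of the surveyed estimators can be sketched generically. Here is a hedged illustration of importance sampling for the marginal likelihood (the evidence, whose ratio across two models gives a Bayes factor); all callback arguments are placeholders rather than the paper's probit-specific quantities:

```python
import numpy as np

def log_evidence_is(loglik, logprior, logq, sample_q, n=10_000, seed=0):
    """Importance-sampling estimate of log m(y), m(y) = ∫ p(y|θ) p(θ) dθ.

    Draws θ_i from a proposal q and averages the importance weights
    p(y|θ_i) p(θ_i) / q(θ_i); log-sum-exp is used for numerical stability.
    """
    rng = np.random.default_rng(seed)
    thetas = sample_q(rng, n)
    logw = np.array([loglik(t) + logprior(t) - logq(t) for t in thetas])
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m)))
```

Running this under each of two competing models and exponentiating the difference of the returned log evidences gives a crude Bayes factor estimate.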
Title: A Stochastic Model for Collaborative Recommendation
Abstract: Collaborative recommendation is an information-filtering technique that attempts to present information items (movies, music, books, news, images, Web pages, etc.) that are likely to be of interest to the Internet user. Traditionally, collaborative systems deal with situations with two types of variables, users and items. In its most common form, the problem is framed as trying to estimate ratings for items that have not yet been consumed by a user. Despite a wide-ranging literature, little is known about the statistical properties of recommendation systems. In fact, no clear probabilistic model even exists that allows us to precisely describe the mathematical forces driving collaborative filtering. As a first contribution in this direction, we propose a general sequential stochastic model for collaborative recommendation and analyze its asymptotic performance as the number of users grows. We offer an in-depth analysis of the so-called cosine-type nearest neighbor collaborative method, which is one of the most widely used algorithms in collaborative filtering. We establish consistency of the procedure under mild assumptions on the model. Rates of convergence and examples are also provided.
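For concreteness, a simplified version of the cosine-type nearest-neighbor rule analyzed in the paper might look as follows (our own toy variant; the paper's sequential model carries additional structure):

```python
import numpy as np

def cosine_knn_predict(R, user, item, k=10):
    """Cosine-type nearest-neighbor prediction of R[user, item].

    R is a (users x items) rating matrix with np.nan for unrated entries.
    The prediction averages the ratings of the k users most similar to
    `user` (cosine similarity on co-rated items) who have rated `item`.
    """
    target = R[user]
    sims = []
    for v in range(R.shape[0]):
        if v == user or np.isnan(R[v, item]):
            continue
        co = ~np.isnan(target) & ~np.isnan(R[v])     # co-rated items
        if not co.any():
            continue
        a, b = target[co], R[v][co]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            sims.append((a @ b / denom, R[v, item]))
    sims.sort(key=lambda s: -s[0])                   # most similar first
    top = sims[:k]
    return np.mean([r for _, r in top]) if top else np.nan
```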
Title: Fractional differentiation based image processing
Abstract: There are many resources useful for processing images, most of them freely available and quite friendly to use. In spite of this abundance of tools, a study of the underlying processing methods is still worthy of effort. Here, we want to discuss the possibilities arising from the use of fractional differential calculus. This calculus evolved within pure mathematics until about 1920, when applied science started to use it. Only recently has fractional calculus been applied to image processing methods. As we shall see, fractional differentiation is able to enhance the quality of images, with interesting possibilities in edge detection and image restoration. We also suggest fractional differentiation as a tool to reveal faint objects in astronomical images.
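One common discretization, which we assume here for illustration, is the Grünwald-Letnikov scheme; applied along image rows and columns it yields a simple fractional-differentiation edge detector:

```python
import numpy as np

def gl_fractional_diff(signal, alpha, n_terms=20):
    """Grünwald-Letnikov fractional derivative of order alpha (0 < alpha < 1)
    of a 1-D signal, with unit grid spacing.

    Coefficients g_k = (-1)^k C(alpha, k) satisfy the recurrence
    g_0 = 1, g_k = g_{k-1} * (k - 1 - alpha) / k.
    """
    g = np.empty(n_terms)
    g[0] = 1.0
    for k in range(1, n_terms):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    n = len(signal)
    out = np.zeros(n)
    for k in range(min(n_terms, n)):                 # D^a f(x_i) ~ sum_k g_k f(x_{i-k})
        out[k:] += g[k] * signal[: n - k]
    return out
```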
Title: Maximum entropy Edgeworth estimates of the number of integer points in polytopes
Abstract: The number of points $x=(x_1, x_2, \dots, x_n)$ that lie in an integer cube $C$ in $R^n$ and satisfy the constraints $\sum_j h_{ij}(x_j) = s_i$, $1 \le i \le d$, is approximated by an Edgeworth-corrected Gaussian formula based on the maximum entropy density $p$ on $x \in C$ that satisfies $E\sum_j h_{ij}(x_j) = s_i$, $1 \le i \le d$. Under $p$, the variables $X_1, X_2, \dots, X_n$ are independent with densities of exponential form. Letting $S_i$ denote the random variable $\sum_j h_{ij}(X_j)$, conditional on $S=s$, $X$ is uniformly distributed over the integers in $C$ that satisfy $S=s$. The number of points in $C$ satisfying $S=s$ is $p\{S=s\}\exp(I(p))$, where $I(p)$ is the entropy of the density $p$. We estimate $p\{S=s\}$ by $p_Z(s)$, the density at $s$ of the multivariate Gaussian $Z$ with the same first two moments as $S$; and when $d$ is large we use in addition an Edgeworth factor that requires the first four moments of $S$ under $p$. The asymptotic validity of the Edgeworth-corrected estimate is proved and demonstrated for counting contingency tables with given row and column sums as the number of rows and columns approaches infinity, and demonstrated for counting the number of graphs with a given degree sequence as the number of vertices approaches infinity.
Title: $L_0$ regularized estimation for nonlinear models that have sparse underlying linear structures
Abstract: We study the estimation of $\beta$ for the nonlinear model $y = f(X\beta) + \epsilon$ when $f$ is a known nonlinear transformation, $\beta$ has sparse nonzero coordinates, and the number of observations can be much smaller than the number of parameters ($n\ll p$). We show that in order to bound the $L_2$ error of the $L_0$ regularized estimator $\hat\beta$, i.e., $\|\hat\beta - \beta\|_2$, it is sufficient to establish two conditions. Based on this, we obtain bounds on the $L_2$ error for (1) $L_0$ regularized maximum likelihood estimation (MLE) for exponential linear models and (2) $L_0$ regularized least squares (LS) regression for the more general case where $f$ is analytic. For the analytic case, we rely on a power series expansion of $f$, which requires taking its singularities into account.
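To make the estimator concrete, here is a deliberately naive sketch of $L_0$ regularized least squares by exhaustive search over small supports, with a crude grid search over coefficients since $f$ is nonlinear; this is purely illustrative and not the paper's algorithm:

```python
import itertools
import numpy as np

def l0_least_squares(X, y, f, max_support=2, lam=0.1):
    """Brute-force L0-regularized LS for y ~ f(X @ b):
    minimize ||y - f(X b)||^2 + lam * ||b||_0 over supports of size
    <= max_support. Cost grows combinatorially in p; toy sizes only."""
    n, p = X.shape
    best = np.zeros(p)
    best_obj = np.sum((y - f(X @ best)) ** 2)        # empty-support baseline
    for size in range(1, max_support + 1):
        for support in itertools.combinations(range(p), size):
            S = list(support)
            # Crude grid search over coefficients on the support.
            for coef in itertools.product(np.linspace(-2, 2, 21), repeat=size):
                b = np.zeros(p)
                b[S] = coef
                obj = np.sum((y - f(X @ b)) ** 2) + lam * size
                if obj < best_obj:
                    best, best_obj = b, obj
    return best
```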
Title: Effectiveness and Limitations of Statistical Spam Filters
Abstract: In this paper we discuss the techniques involved in the design of well-known statistical spam filters, including Naive Bayes, Term Frequency-Inverse Document Frequency, K-Nearest Neighbor, Support Vector Machine, and Bayes Additive Regression Tree. We compare these techniques with each other in terms of accuracy, recall, precision, etc. Further, we discuss the effectiveness and limitations of statistical filters in filtering out various types of spam from legitimate e-mails.
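As an example of the first technique in the list, a multinomial Naive Bayes spam filter with Laplace smoothing can be written in a few lines (a generic textbook version, not the specific implementation evaluated in the paper):

```python
import math
from collections import Counter

def train_naive_bayes(spam_docs, ham_docs, alpha=1.0):
    """Multinomial naive Bayes with Laplace smoothing; docs are token lists."""
    spam_counts = Counter(w for d in spam_docs for w in d)
    ham_counts = Counter(w for d in ham_docs for w in d)
    vocab = set(spam_counts) | set(ham_counts)
    def log_prob(counts):
        total = sum(counts.values())
        return {w: math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
                for w in vocab}
    return {"spam": (math.log(len(spam_docs)), log_prob(spam_counts)),
            "ham": (math.log(len(ham_docs)), log_prob(ham_counts))}

def classify(model, doc):
    # Unseen words are simply ignored in this toy version.
    scores = {c: prior + sum(lp.get(w, 0.0) for w in doc)
              for c, (prior, lp) in model.items()}
    return max(scores, key=scores.get)

# Usage on a toy corpus.
model = train_naive_bayes([["win", "cash", "now"]], [["meeting", "at", "noon"]])
print(classify(model, ["cash", "now"]))   # -> 'spam'
```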
Title: Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications
Abstract: Food authenticity studies are concerned with determining whether food samples have been correctly labeled. Discriminant analysis methods are an integral part of the methodology for food authentication. Motivated by food authenticity applications, a model-based discriminant analysis method that includes variable selection is presented. The discriminant analysis model is fitted in a semi-supervised manner, using both labeled and unlabeled data. The method is shown to give excellent classification performance on several high-dimensional multiclass food authenticity data sets with more variables than observations. The variables selected by the proposed method provide information about which variables are meaningful for classification purposes. A headlong search strategy for variable selection is shown to be computationally efficient and to achieve excellent classification performance. In applications to several food authenticity data sets, our proposed method outperformed default implementations of Random Forests, AdaBoost, transductive SVMs and Bayesian Multinomial Regression by substantial margins.
Title: A Component Based Heuristic Search Method with Evolutionary Eliminations
Abstract: Nurse rostering is a complex scheduling problem that affects hospital personnel on a daily basis all over the world. This paper presents a new component-based approach with evolutionary eliminations for a nurse scheduling problem arising at a major UK hospital. The main idea behind this technique is to decompose a schedule into its components (i.e. the allocated shift pattern of each nurse), and then to apply two evolutionary elimination strategies, mimicking natural selection and natural mutation respectively, to these components in order to iteratively deliver better schedules. The worthiness of all components in the schedule has to be continuously demonstrated in order for them to remain there. This demonstration employs an evaluation function which measures how well each component contributes towards the final objective. Two elimination steps are then applied: the first eliminates a number of components that are deemed not worthy to stay in the current schedule; the second may also throw out, with a low probability, some worthy components. The eliminated components are replenished with new ones generated by a set of constructive heuristics based on local optimality criteria. Computational results using 52 data instances demonstrate the applicability of the proposed approach to solving real-world problems.
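A skeleton of the eliminate-and-repair cycle might look as follows; the `evaluate` and `construct` callbacks stand in for the paper's evaluation function and constructive heuristics, and all interfaces here are assumptions for illustration:

```python
import random

def eliminate_and_repair(schedule, evaluate, construct, drop_frac=0.2,
                         mutate_prob=0.05, iters=100, seed=0):
    """Component-based search with two evolutionary eliminations.

    `schedule` is a list of hashable components (e.g. one shift pattern
    per nurse), `evaluate(component, schedule)` scores a component's
    contribution (higher is better), and `construct(partial)` rebuilds a
    full schedule from the surviving components.
    """
    rng = random.Random(seed)
    best = list(schedule)
    for _ in range(iters):
        scored = sorted(schedule, key=lambda c: evaluate(c, schedule))
        # First elimination: drop the worst-scoring components.
        keep = scored[int(drop_frac * len(scored)):]
        # Second elimination: occasionally drop worthy components too.
        keep = [c for c in keep if rng.random() > mutate_prob]
        schedule = construct(keep)            # replenish to a full schedule
        if sum(evaluate(c, schedule) for c in schedule) > \
           sum(evaluate(c, best) for c in best):
            best = list(schedule)
    return best
```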
Title: An Agent Based Classification Model
Abstract: The major function of this model is to access the UCI Wisconsin Breast Cancer data set [1] and classify the data items into two categories: normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving. The Dendritic Cell Algorithm (DCA) [2] is an AIS algorithm developed specifically for anomaly detection. It has been successfully applied to intrusion detection in computer security. It is believed that agent-based modelling is an ideal approach for implementing AIS, as intelligent agents could be perfect representations of immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environment called AnyLogic, where the immune entities in the DCA are represented by intelligent agents. If this model can be successfully implemented, it will become possible to implement more complicated and adaptive AIS models in the agent-based simulation environment.
Title: Behavior Subtraction
Abstract: Background subtraction has been a driving engine for many computer vision and video analytics tasks. Although many variants of it exist, they all share the underlying assumption that photometric scene properties are either static or exhibit temporal stationarity. While this works in some applications, the model fails when one is interested in discovering changes in scene dynamics rather than those in a static background; detection of unusual pedestrian and motor traffic patterns is but one example. We propose a new model and computational framework that address this failure by considering stationary scene dynamics as a ``background'' with which observed scene dynamics are compared. Central to our approach is the concept of an event, which we define as short-term scene dynamics captured over a time window at a specific spatial location in the camera's field of view. We compute events by time-aggregating motion labels, obtained by background subtraction, as well as object descriptors (e.g., object size). Subsequently, we characterize events probabilistically, but use low-memory, low-complexity surrogates in practical implementations. Using these surrogates amounts to behavior subtraction, a new algorithm with some surprising properties. As demonstrated here, behavior subtraction is an effective tool in anomaly detection and localization. It is resilient to spurious background motion, such as that due to camera jitter, and is content-blind, i.e., it works equally well on humans, cars, animals, and other objects in both uncluttered and highly-cluttered scenes. Clearly, treating video as a collection of events rather than colored pixels opens new possibilities for video analytics.
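A toy rendering of the idea on binary motion masks, using a maximum over training events as the low-memory surrogate; this is our simplification of the paper's probabilistic characterization, not its actual algorithm:

```python
import numpy as np

def behavior_subtraction(motion_labels, train_frames, window):
    """Sketch of behavior subtraction on a stack of binary motion masks.

    motion_labels: (T, H, W) array of 0/1 foreground labels produced by
    ordinary background subtraction. An "event" at a pixel is the count
    of motion labels over a sliding time window; the behavior background
    is the largest event value seen during training, and anomalies are
    later windows whose event value exceeds it.
    """
    kernel = np.ones(window)
    # Event images: per-pixel moving sums over the time window.
    events = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="valid"), 0, motion_labels)
    background = events[:train_frames].max(axis=0)   # behavior "background"
    anomalies = events[train_frames:] > background   # behavior subtraction
    return anomalies
```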
Title: An Evolutionary Squeaky Wheel Optimisation Approach to Personnel Scheduling
Abstract: The quest for robust heuristics that are able to solve more than one problem is ongoing. In this paper, we present, discuss and analyse a technique called Evolutionary Squeaky Wheel Optimisation and apply it to two different personnel scheduling problems. Evolutionary Squeaky Wheel Optimisation improves the original Squeaky Wheel Optimisation's effectiveness and execution speed by incorporating two extra steps (Selection and Mutation) for added evolution. In the Evolutionary Squeaky Wheel Optimisation, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The aim of the Analysis step is to identify below-average solution components by calculating a fitness value for all components. The Selection step then chooses amongst these underperformers and discards some probabilistically based on fitness. The Mutation step further discards a few components at random. Solutions can become incomplete, and thus repairs may be required. The repairs are carried out by using the Prioritization step to first produce priorities that determine an order by which the following Construction step then schedules the remaining components. Therefore, improvement in the Evolutionary Squeaky Wheel Optimisation is achieved by selective solution disruption mixed with iterative improvement and constructive repair. Strong experimental results are reported on two different domains of personnel scheduling: bus and rail driver scheduling, and hospital nurse scheduling.
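The five-step cycle can be expressed as a skeleton in which the problem-specific pieces are callbacks; the selection threshold and the elimination probabilities below are illustrative assumptions, not the paper's tuned values:

```python
import random

def eswo(initial, fitness, prioritize, construct, iters=200, seed=0):
    """Skeleton of the Evolutionary Squeaky Wheel Optimisation cycle:
    Analysis -> Selection -> Mutation -> Prioritization -> Construction.

    `fitness(c, solution)` scores a component, `prioritize(components)`
    orders the survivors, and `construct(order)` rebuilds a complete
    solution; all three are problem-specific placeholders.
    """
    rng = random.Random(seed)
    solution = list(initial)
    for _ in range(iters):
        scores = {c: fitness(c, solution) for c in solution}      # Analysis
        mean = sum(scores.values()) / len(scores)
        survivors = [c for c in solution
                     if scores[c] >= mean or rng.random() > 0.5]  # Selection
        survivors = [c for c in survivors
                     if rng.random() > 0.02]                      # Mutation
        order = prioritize(survivors)                             # Prioritization
        solution = construct(order)                               # Construction
    return solution
```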
Title: A Conversation with Murray Rosenblatt
Abstract: On an exquisite March day in 2006, David Brillinger and Richard Davis sat down with Murray and Ady Rosenblatt at their home in La Jolla, California for an enjoyable day of reminiscences and conversation. Our mentor, Murray Rosenblatt, was born on September 7, 1926 in New York City and attended City College of New York before entering graduate school at Cornell University in 1946. After completing his Ph.D. in 1949 under the direction of the renowned probabilist Mark Kac, the Rosenblatts moved to Chicago, where Murray became an instructor/assistant professor in the Committee of Statistics at the University of Chicago. Murray's academic career then took him to the University of Indiana and Brown University before he joined the University of California at San Diego in 1964. Along the way, Murray established himself as one of the most celebrated and leading figures in probability and statistics, with particular emphasis on time series and Markov processes. In addition to being a fellow of the Institute of Mathematical Statistics and the American Association for the Advancement of Science, he was a Guggenheim fellow (1965--1966, 1971--1972) and was elected to the National Academy of Sciences in 1984. Among his many contributions, Murray conducted seminal work on density estimation, central limit theorems under strong mixing, spectral domain methods and long memory processes. Murray and Ady Rosenblatt were married in 1949 and have two children, Karin and Daniel.
Title: Efficient Bayesian analysis of multiple changepoint models with dependence across segments
Abstract: We consider Bayesian analysis of a class of multiple changepoint models. While there are a variety of efficient ways to analyse these models if the parameters associated with each segment are independent, there are few general approaches for models where the parameters are dependent. Under the assumption that the dependence is Markov, we propose an efficient online algorithm for sampling from an approximation to the posterior distribution of the number and position of the changepoints. In a simulation study, we show that the error introduced by the approximation is negligible. We illustrate the power of our approach through fitting piecewise polynomial models to data, under a model which allows for either continuity or discontinuity of the underlying curve at each changepoint. This method is competitive with, or outperforms, other methods for inferring curves from noisy data; and, uniquely, it allows for inference on the locations of discontinuities in the underlying curve.
Title: An Evening Spent with Bill van Zwet
Abstract: Willem Rutger van Zwet was born in Leiden, the Netherlands, on March 31, 1934. He received his high school education at the Gymnasium Haganum in The Hague and obtained his Master's degree in Mathematics at the University of Leiden in 1959. After serving in the army for almost two years, he obtained his Ph.D. at the University of Amsterdam in 1964, with Jan Hemelrijk as advisor. In 1965, he was appointed Associate Professor of Statistics at the University of Leiden and promoted to Full Professor in 1968. He remained in Leiden until his retirement in 1999, while also serving as Associate Professor at the University of Oregon (1965), William Newman Professor at the University of North Carolina at Chapel Hill (1990--1996), frequent visitor and Miller Professor (1997) at the University of California at Berkeley, director of the Thomas Stieltjes Institute of Mathematics in the Netherlands (1992--1999), and founding director of the European research institute EURANDOM (1997--2000). At Leiden, he was Dean of the School of Mathematics and Natural Sciences (1982--1984). He served as chair of the scientific council and member of the board of the Mathematics Centre at Amsterdam (1983--1996) and the Leiden University Fund (1993--2005).
Title: An Idiotypic Immune Network as a Short Term Learning Architecture for Mobile Robots
Abstract: A combined Short-Term Learning (STL) and Long-Term Learning (LTL) approach to solving mobile robot navigation problems is presented and tested in both real and simulated environments. The LTL consists of rapid simulations that use a Genetic Algorithm to derive diverse sets of behaviours. These sets are then transferred to an idiotypic Artificial Immune System (AIS), which forms the STL phase, and the system is said to be seeded. The combined LTL-STL approach is compared with using STL only, and with using a hand-designed controller. In addition, the STL phase is tested when the idiotypic mechanism is turned off. The results provide substantial evidence that the best option is the seeded idiotypic system, i.e. the architecture that merges LTL with an idiotypic AIS for the STL. They also show that structurally different environments can be used for the two phases without compromising transferability.
Title: An Immune Inspired Approach to Anomaly Detection
Abstract: The immune system provides a rich metaphor for computer security: anomaly detection that works in nature should work for machines. However, early artificial immune system approaches to computer security had only limited success. Arguably, this was due to these artificial systems being based on too simplistic a view of the immune system. We present here a second-generation artificial immune system for process anomaly detection. It improves on earlier systems by having different artificial cell types that process information. Following detailed information about how to build such second-generation systems, we find that communication between cell types is key to performance. Through realistic testing and validation, we show that second-generation artificial immune systems are capable of anomaly detection beyond generic system policies. The paper concludes with a discussion and an outline of the next steps in this exciting area of computer security.
Title: An Immune Inspired Network Intrusion Detection System Utilising Correlation Context
Abstract: Network Intrusion Detection Systems (NIDS) are computer systems which monitor a network with the aim of discerning malicious from benign activity on that network. While a wide range of approaches have met varying levels of success, most IDSs rely on having access to a database of known attack signatures which are written by security experts. Nowadays, in order to solve problems with false positive alerts, correlation algorithms are used to add additional structure to sequences of IDS alerts. However, such techniques are of no help in discovering novel attacks or variations of known attacks, something the human immune system (HIS) is capable of doing in its own specialised domain. This paper presents a novel immune algorithm for application to the IDS problem. The goal is to discover packets containing novel variations of attacks covered by an existing signature base.
Title: Faster Algorithms for Max-Product Message-Passing
Abstract: Maximum A Posteriori inference in graphical models is often solved via message-passing algorithms, such as the junction-tree algorithm, or loopy belief-propagation. The exact solution to this problem is well known to be exponential in the size of the model's maximal cliques after it is triangulated, while approximate inference is typically exponential in the size of the model's factors. In this paper, we take advantage of the fact that many models have maximal cliques that are larger than their constituent factors, and also of the fact that many factors consist entirely of latent variables (i.e., they do not depend on an observation). This is a common case in a wide variety of applications, including grids, trees, and ring-structured models. In such cases, we are able to decrease the exponent of complexity for message-passing by 0.5 for both exact and approximate inference.
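As a reference point for the message-passing being accelerated, here is max-product on a chain, one of the structured cases the paper mentions; it shows only the basic recursion, not the paper's factor/clique speedup:

```python
import numpy as np

def max_product_chain(unaries, pairwise):
    """Max-product (Viterbi-style) message passing on a chain MRF.

    unaries: list of length-K potential vectors, one per node;
    pairwise: (K, K) potential matrix shared by every edge (an
    assumption for brevity). Returns the MAP assignment.
    """
    T, K = len(unaries), len(unaries[0])
    msg = np.ones((T, K))                    # forward max-messages
    back = np.zeros((T, K), dtype=int)       # argmax backpointers
    for t in range(1, T):
        # rows index x_{t-1}, columns index x_t
        scores = (unaries[t - 1] * msg[t - 1])[:, None] * pairwise
        msg[t] = scores.max(axis=0)
        back[t] = scores.argmax(axis=0)
    # Decode the MAP assignment by backtracking.
    states = [int(np.argmax(unaries[-1] * msg[-1]))]
    for t in range(T - 1, 0, -1):
        states.append(int(back[t][states[-1]]))
    return states[::-1]
```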
Title: Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis
Abstract: Medical Informatics and the application of modern signal processing to the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. At a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended to multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.
Title: A Fuzzy Petri Nets Model for Computing With Words
Abstract: Motivated by Zadeh's paradigm of computing with words rather than numbers, several formal models of computing with words have recently been proposed. These models are based on automata and thus are not well-suited for concurrent computing. In this paper, we incorporate the well-known model of concurrent computing, Petri nets, together with fuzzy set theory, and thereby establish a concurrency model of computing with words--fuzzy Petri nets for computing with words (FPNCWs). The new feature of such fuzzy Petri nets is that the labels of transitions are special words modeled by fuzzy sets. By employing the methodology of fuzzy reasoning, we give a faithful extension of an FPNCW which makes it possible to compute with more words. The language expressiveness of the two formal models of computing with words, fuzzy automata for computing with words and FPNCWs, is compared as well. A few small examples are provided to illustrate the theoretical development.
Title: Citation Statistics
Abstract: This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using ``simple and objective'' methods is increasingly prevalent today. The ``simple and objective'' methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.
Title: Comment: Bibliometrics in the Context of the UK Research Assessment Exercise
Abstract: Research funding and reputation in the UK have, for over two decades, been increasingly dependent on a regular peer-review of all UK departments. This is to move to a system more based on bibliometrics. Assessment exercises of this kind influence the behavior of institutions, departments and individuals, and therefore bibliometrics will have effects beyond simple measurement. [arXiv:0910.3529]
Title: Comment: Citation Statistics
Abstract: We discuss the paper "Citation Statistics" by the Joint Committee on Quantitative Assessment of Research [arXiv:0910.3529]. In particular, we focus on a necessary feature of "good" measures for ranking scientific authors: good measures must be able to accurately distinguish between authors.
Title: Comment: Citation Statistics
Abstract: Comment on "Citation Statistics" [arXiv:0910.3529]
Title: Comment: Citation Statistics
Abstract: Comment on "Citation Statistics" [arXiv:0910.3529]
Title: Rejoinder: Citation Statistics
Abstract: Rejoinder to "Citation Statistics" [arXiv:0910.3529]
Title: On Learning Finite-State Quantum Sources
Abstract: We examine the complexity of learning the distributions produced by finite-state quantum sources. We show how prior techniques for learning hidden Markov models can be adapted to the quantum generator model to find that the analogous state of affairs holds: information-theoretically, a polynomial number of samples suffice to approximately identify the distribution, but computationally, the problem is as hard as learning parities with noise, a notorious open question in computational learning theory.