1209.0880
On Solving the Oriented Two-Dimensional Bin Packing Problem under Free Guillotine Cutting: Exploiting the Power of Probabilistic Solution Construction
cs.AI
Two-dimensional bin packing problems are highly relevant combinatorial optimization problems. They find a large number of applications, for example in the context of transportation or warehousing, and in the cutting of different materials such as glass, wood or metal. In this work we deal with the oriented two-dimensional bin packing problem under free guillotine cutting. In this problem, a set of oriented rectangular items is given which must be packed into a minimum number of bins of equal size. The first algorithm proposed in this work is a randomized multi-start version of a constructive one-pass heuristic from the literature. Additionally, we propose using this randomized one-pass heuristic within an evolutionary algorithm. The results of the two proposed algorithms are compared to the best approaches from the literature; in particular, the evolutionary algorithm compares very favorably to current state-of-the-art approaches. Moreover, optimal solutions for 4 previously unsolved instances were found.
1209.0911
Conquering the rating bound problem in neighborhood-based collaborative filtering: a function recovery approach
cs.IR cs.AI cs.HC
As an important tool for information filtering in the era of the socialized web, recommender systems have witnessed rapid development in the last decade. Benefiting from their better interpretability, neighborhood-based collaborative filtering techniques, such as the item-based collaborative filtering adopted by Amazon, have achieved great success in many practical recommender systems. However, neighborhood-based collaborative filtering suffers from the rating bound problem: the rating it estimates for a target item is bounded by the observed ratings of all of the item's neighbors. It therefore cannot accurately estimate the unobserved rating of a target item whose ground-truth rating is actually higher (lower) than the highest (lowest) rating over all items in its neighborhood. In this paper, we address this problem by formalizing rating estimation as the task of recovering a scalar rating function. Under a linearity assumption, we infer all the ratings by minimizing a low-order norm, e.g., the $l_{1/2}$-norm, of the second derivative of the target scalar function, while keeping its observed ratings unchanged. Experimental results on three real datasets, namely Douban, Goodreads and MovieLens, demonstrate that the proposed approach overcomes the rating bound problem. In particular, it improves the accuracy of rating estimation by 37% over conventional neighborhood-based methods.
1209.0913
Structuring Relevant Feature Sets with Multiple Model Learning
cs.LG
Feature selection is one of the most prominent learning tasks, especially in high-dimensional datasets where the goal is to understand the mechanisms that underlie the learning problem. However, most feature selection methods deliver just a flat set of relevant features and provide no further information on what kind of structures, e.g. feature groupings, might underlie that set. In this paper we propose a new learning paradigm whose goal is to uncover the structures that underlie the set of relevant features for a given learning problem. We uncover two types of feature sets: non-replaceable features, which contain important information about the target variable and cannot be replaced by other features, and functionally similar feature sets, which can be used interchangeably in learned models, given the presence of the non-replaceable features, with no change in predictive performance. To do so we propose a new learning algorithm that learns a number of disjoint models using a model disjointness regularization constraint together with a constraint on the predictive agreement of the disjoint models. We explore the behavior of our approach on a number of high-dimensional datasets and show that, as expected by their construction, the learned models satisfy a number of properties: model disjointness, high predictive agreement, and predictive performance similar to that of models learned on the full set of relevant features. The ability to structure the set of relevant features in this manner can become a valuable tool in various applications of scientific knowledge discovery.
1209.0935
Characterizing Successful Formulas: the Multi-agent Case
cs.MA cs.LO
Characterization of successful formulas in Public Announcement Logic (PAL) is a well-known open problem in Dynamic Epistemic Logic. Recently, Holliday and Icard gave a complete characterization for the single-agent case; the multi-agent case, however, remains open. This paper gives a partial solution to the problem, characterizing the subclass of the language consisting of unary operators, and discusses methods for obtaining a complete solution.
1209.0997
Direct computation of diagnoses for ontology debugging
cs.AI
Modern ontology debugging methods allow efficient identification and localization of faulty axioms defined by a user while developing an ontology. The ontology development process in this case is characterized by rather frequent and regular calls to a reasoner resulting in an early user awareness of modeling errors. In such a scenario an ontology usually includes only a small number of conflict sets, i.e. sets of axioms preserving the faults. This property allows efficient use of standard model-based diagnosis techniques based on the application of hitting set algorithms to a number of given conflict sets. However, in many use cases such as ontology alignment the ontologies might include many more conflict sets than in usual ontology development settings, thus making precomputation of conflict sets and consequently ontology diagnosis infeasible. In this paper we suggest a debugging approach based on a direct computation of diagnoses that omits calculation of conflict sets. Embedded in an ontology debugger, the proposed algorithm is able to identify diagnoses for an ontology which includes a large number of faults and for which application of standard diagnosis methods fails. The evaluation results show that the approach is practicable and is able to identify a fault in adequate time.
1209.0999
Visual Exploration of Simulated and Measured Blood Flow
cs.GR cs.CV
Morphology of cardiovascular tissue is influenced by the unsteady behavior of the blood flow, and vice versa. Therefore, the pathogenesis of several cardiovascular diseases is directly affected by blood-flow dynamics. Understanding flow behavior is of vital importance for understanding the cardiovascular system and holds considerable value for both diagnosis and risk assessment. The analysis of hemodynamic characteristics involves qualitative and quantitative inspection of the blood-flow field. Visualization plays an important role in qualitative exploration, as well as in the definition of relevant quantitative measures and their validation. There are two main approaches to obtaining information about the blood flow: simulation by computational fluid dynamics, and in-vivo measurements. Although research on blood-flow simulation has been performed for decades, many open problems remain concerning accuracy and patient-specific solutions. Possibilities for real measurement of blood flow have recently increased considerably through new developments in magnetic resonance imaging, which enable the acquisition of 3D quantitative measurements of blood-flow velocity fields. This chapter presents the visualization challenges for both simulated and measured unsteady blood-flow fields.
1209.1011
Kleisli Database Instances
cs.DB math.CT
We use monads to relax the atomicity requirement for data in a database. Depending on the choice of monad, the database fields may contain generalized values such as lists or sets of values, or they may contain exceptions such as various types of nulls. The return operation for monads ensures that any ordinary database instance will count as one of these generalized instances, and the bind operation ensures that generalized values behave well under joins of foreign key sequences. Different monads allow for vastly different types of information to be stored in the database. For example, we show that classical concepts like Markov chains, graphs, and finite state automata are each perfectly captured by a different monad on the same schema.
1209.1032
On Scalable Video Streaming over Cognitive Radio Cellular and Ad Hoc Networks
cs.IT cs.NI math.IT
Video content delivery over wireless networks is expected to grow drastically in the coming years. In this paper, we investigate the challenging problem of video over cognitive radio (CR) networks. Although it has high potential, this problem brings about a new level of technical challenges. After reviewing related work, we first address the problem of video over infrastructure-based CR networks, and then extend the problem to video over non-infrastructure-based ad hoc CR networks. We present formulations of cross-layer optimization problems as well as effective algorithms for solving them. The proposed algorithms are analyzed with respect to their optimality and validated with simulations.
1209.1033
The Annealing Sparse Bayesian Learning Algorithm
cs.IT cs.LG math.IT
In this paper we propose a two-level hierarchical Bayesian model and an annealing schedule to re-enable the noise-variance learning capability of the fast marginalized Sparse Bayesian Learning (SBL) algorithms. Performance metrics such as NMSE and F-measure are greatly improved by the annealing technique. The algorithm tends to produce the sparsest solution under moderate SNR scenarios and can outperform most concurrent SBL algorithms while retaining a small computational load.
1209.1048
Performance Analysis Of Neuro Genetic Algorithm Applied On Detecting Proportion Of Components In Manhole Gas Mixture
cs.NE cs.CV
The article presents a performance analysis of a real-valued neuro-genetic algorithm applied to detecting the proportions of the gases found in a manhole gas mixture. A neural network (NN) trained using a genetic algorithm (GA) leads to the concept of a neuro-genetic algorithm, which is used to implement an intelligent sensory system for detecting the component gases present in a manhole gas mixture. A manhole gas mixture usually contains several toxic gases such as hydrogen sulfide, ammonia, methane, carbon dioxide, nitrogen oxide, and carbon monoxide. A semiconductor-based gas sensor array, used for sensing the manhole gas components, is an integral part of the proposed intelligent system. It consists of many sensor elements, each responsible for sensing a particular gas component. Using multiple sensors of different gases to detect a mixture of multiple gases results in cross-sensitivity. Cross-sensitivity is a major issue, and the problem is viewed as a pattern recognition problem. The objective of this article is to present a performance analysis of the real-valued neuro-genetic algorithm applied to multiple gas detection.
1209.1064
A Max-Product EM Algorithm for Reconstructing Markov-tree Sparse Signals from Compressive Samples
stat.ML cs.IT math.IT
We propose a Bayesian expectation-maximization (EM) algorithm for reconstructing Markov-tree sparse signals via belief propagation. The measurements follow an underdetermined linear model where the regression-coefficient vector is the sum of an unknown approximately sparse signal and a zero-mean white Gaussian noise with an unknown variance. The signal is composed of large- and small-magnitude components identified by binary state variables whose probabilistic dependence structure is described by a Markov tree. Gaussian priors are assigned to the signal coefficients given their state variables and the Jeffreys' noninformative prior is assigned to the noise variance. Our signal reconstruction scheme is based on an EM iteration that aims at maximizing the posterior distribution of the signal and its state variables given the noise variance. We construct the missing data for the EM iteration so that the complete-data posterior distribution corresponds to a hidden Markov tree (HMT) probabilistic graphical model that contains no loops and implement its maximization (M) step via a max-product algorithm. This EM algorithm estimates the vector of state variables as well as solves iteratively a linear system of equations to obtain the corresponding signal estimate. We select the noise variance so that the corresponding estimated signal and state variables obtained upon convergence of the EM iteration have the largest marginal posterior distribution. We compare the proposed and existing state-of-the-art reconstruction methods via signal and image reconstruction experiments.
1209.1073
Reply to 'Comments on Integer SEC-DED codes for low power communications'
cs.IT math.IT
This paper is a reply to the comments on 'Integer SEC-DED codes for low power communications'.
1209.1077
Learning Probability Measures with respect to Optimal Transport Metrics
cs.LG stat.ML
We study the problem of estimating, in the sense of optimal transport metrics, a measure which is assumed supported on a manifold embedded in a Hilbert space. By establishing a precise connection between optimal transport metrics, optimal quantization, and learning theory, we derive new probabilistic bounds for the performance of a classic algorithm in unsupervised learning (k-means), when used to produce a probability measure derived from the data. In the course of the analysis, we arrive at new lower bounds, as well as probabilistic upper bounds on the convergence rate of the empirical law of large numbers, which, unlike existing bounds, are applicable to a wide class of measures.
1209.1086
Robustness and Generalization for Metric Learning
cs.LG cs.AI stat.ML
Metric learning has attracted a lot of interest over the last decade, but the generalization ability of such methods has not been thoroughly studied. In this paper, we introduce an adaptation of the notion of algorithmic robustness (previously introduced by Xu and Mannor) that can be used to derive generalization bounds for metric learning. We further show that a weak notion of robustness is in fact a necessary and sufficient condition for a metric learning algorithm to generalize. To illustrate the applicability of the proposed framework, we derive generalization results for a large family of existing metric learning algorithms, including some sparse formulations that are not covered by previous results.
1209.1114
Speed Tracking of a Linear Induction Motor - Enumerative Nonlinear Model Predictive Control
cs.SY math.OC
Direct torque control is considered one of the most efficient techniques for speed and/or position tracking control of induction motor drives. However, this control scheme has several drawbacks: the switching frequency may exceed the maximum allowable switching frequency of the inverters, and the ripples in current and torque, especially at low-speed tracking, may be too large. In this paper we propose a new approach that overcomes these problems. The suggested controller is a model predictive controller which directly controls the inverter switches. It is easy to implement in real time and it outperforms all previous approaches. Simulation results show that the new approach has tracking properties as good as any other scheme, and that it reduces the average inverter switching frequency by about 95% compared to classical direct torque control.
1209.1121
Learning Manifolds with K-Means and K-Flats
cs.LG stat.ML
We study the problem of estimating a manifold from random samples. In particular, we consider piecewise constant and piecewise linear estimators induced by k-means and k-flats, and analyze their performance. We extend previous results for k-means in two separate directions. First, we provide new results for k-means reconstruction on manifolds and, secondly, we prove reconstruction bounds for higher-order approximation (k-flats), for which no known results were previously available. While the results for k-means are novel, some of the technical tools are well-established in the literature. In the case of k-flats, both the results and the mathematical tools are new.
1209.1122
On Learning with Finite Memory
cs.GT cs.SI
We consider an infinite collection of agents who make decisions, sequentially, about an unknown underlying binary state of the world. Each agent, prior to making a decision, receives an independent private signal whose distribution depends on the state of the world. Moreover, each agent also observes the decisions of its last K immediate predecessors. We study conditions under which the agent decisions converge to the correct value of the underlying state. We focus on the case where the private signals have bounded information content and investigate whether learning is possible, that is, whether there exist decision rules for the different agents that result in the convergence of their sequence of individual decisions to the correct state of the world. We first consider learning in the almost sure sense and show that it is impossible, for any value of K. We then explore the possibility of convergence in probability of the decisions to the correct state. Here, a distinction arises: if K equals 1, learning in probability is impossible under any decision rule, while for K greater or equal to 2, we design a decision rule that achieves it. We finally consider a new model, involving forward looking strategic agents, each of which maximizes the discounted sum (over all agents) of the probabilities of a correct decision. (The case, studied in previous literature, of myopic agents who maximize the probability of their own decision being correct is an extreme special case.) We show that for any value of K, for any equilibrium of the associated Bayesian game, and under the assumption that each private signal has bounded information content, learning in probability fails to obtain.
1209.1123
Stabilizability and Norm-Optimal Control Design subject to Sparsity Constraints
cs.SY math.DS math.OC
Consider that a linear time-invariant (LTI) plant is given and that we wish to design a stabilizing controller for it. Admissible controllers are LTI and must comply with a pre-selected sparsity pattern. The sparsity pattern is assumed to be quadratically invariant (QI) with respect to the plant, which, from prior results, guarantees that there is a convex parametrization of all admissible stabilizing controllers provided that an initial admissible stable stabilizing controller is provided. This paper addresses the previously unsolved problem of determining necessary and sufficient conditions for the existence of an admissible stabilizing controller. The main idea is to cast the existence of such a controller as the feasibility of an exact model-matching problem with stability restrictions, which can be tackled using existing methods. Furthermore, we show that, when it exists, the solution of the model-matching problem can be used to compute an admissible stabilizing controller. This method also leads to a convex parametrization that may be viewed as an extension of Youla's classical approach so as to incorporate sparsity constraints. Applications of this parametrization on the design of norm-optimal controllers via convex methods are also explored. An illustrative example is provided, and a special case is discussed for which the exact model matching problem has a unique and easily computable solution.
1209.1125
Video Data Visualization System: Semantic Classification And Personalization
cs.IR cs.CV cs.MM
We present in this paper an intelligent video data visualization tool, based on semantic classification, for retrieving and exploring a large-scale corpus of videos. Our work builds on the semantic classes resulting from the semantic analysis of video; the obtained classes are projected into the visualization space. The visualization is a graph of nodes and edges: the nodes are the keyframes of the video documents, and the edges are the relations between documents and the classes of documents. Finally, we construct the user's profile, based on interaction with the system, to adapt the system to the user's preferences.
1209.1128
Capacity achieving multiwrite WOM codes
cs.IT cs.CC math.IT
In this paper we give an explicit construction of a capacity-achieving family of binary t-write WOM codes for any number of writes t, with polynomial-time encoding and decoding algorithms. The block length of our construction is N=(t/\epsilon)^{O(t/(\delta\epsilon))}, where \epsilon is the gap to capacity, and encoding and decoding run in time N^{1+\delta}. This is the first deterministic construction achieving these parameters. Our techniques also apply to larger alphabets.
1209.1139
Control of Noisy Differential-Drive Vehicles from Time-Bounded Temporal Logic Specifications
cs.RO
We address the problem of controlling a noisy differential drive mobile robot such that the probability of satisfying a specification given as a Bounded Linear Temporal Logic (BLTL) formula over a set of properties at the regions in the environment is maximized. We assume that the vehicle can determine its precise initial position in a known map of the environment. However, inspired by practical limitations, we assume that the vehicle is equipped with noisy actuators and, during its motion in the environment, it can only measure the angular velocity of its wheels using limited accuracy incremental encoders. Assuming the duration of the motion is finite, we map the measurements to a Markov Decision Process (MDP). We use recent results in Statistical Model Checking (SMC) to obtain an MDP control policy that maximizes the probability of satisfaction. We translate this policy to a vehicle feedback control strategy and show that the probability that the vehicle satisfies the specification in the environment is bounded from below by the probability of satisfying the specification on the MDP. We illustrate our method with simulations and experimental results.
1209.1150
On dually flat Randers metrics
math.DG cs.IT math.IT
In this paper, I show how to use beta-deformations to deal with the dual flatness of Randers metrics. Beta-deformations are a new method in Riemann-Finsler geometry, introduced by the author (see arXiv:1209.0845). Further applications of this new kind of deformation in Finsler geometry will be provided in later work.
1209.1154
Generalized Formulation of Weighted Optimal Guidance Laws with Impact Angle Constraint
cs.SY
The purpose of this paper is to investigate the generalized formulation of weighted optimal guidance laws with impact angle constraint. From the generalized formulation, we explicitly find the feasible set of weighting functions that lead to analytical forms of weighted optimal guidance laws. This result has potential significance because it can provide additional degrees of freedom in designing a guidance law that accomplishes the specified guidance objective.
1209.1180
Distributed Optimal Beamformers for Cognitive Radios Robust to Channel Uncertainties
cs.IT math.IT
Through spatial multiplexing and diversity, multi-input multi-output (MIMO) cognitive radio (CR) networks can markedly increase transmission rates and reliability, while controlling the interference inflicted to peer nodes and primary users (PUs) via beamforming. The present paper optimizes the design of transmit- and receive-beamformers for ad hoc CR networks when CR-to-CR channels are known, but CR-to-PU channels cannot be estimated accurately. Capitalizing on a norm-bounded channel uncertainty model, the optimal beamforming design is formulated to minimize the overall mean-square error (MSE) from all data streams, while enforcing protection of the PU system when the CR-to-PU channels are uncertain. Even though the resultant optimization problem is non-convex, algorithms with provable convergence to stationary points are developed by resorting to block coordinate ascent iterations, along with suitable convex approximation techniques. Enticingly, the novel schemes also lend themselves naturally to distributed implementations. Numerical tests are reported to corroborate the analytical findings.
1209.1181
FCM Based Blood Vessel Segmentation Method for Retinal Images
cs.CV
Segmentation of blood vessels in retinal images enables early diagnosis of diseases like glaucoma, diabetic retinopathy and macular degeneration. Among these diseases, glaucoma occurs most frequently and has serious ocular consequences that can even lead to blindness if it is not detected early. The clinical criteria for the diagnosis of glaucoma include intraocular pressure measurement, optic nerve head evaluation, retinal nerve fiber layer assessment and visual field defects. This form of blood vessel segmentation helps in the early detection of ophthalmic diseases and potentially reduces the risk of blindness. The narrow blood vessels of the retina produce low-contrast images that are difficult to extract; these low-contrast images are, however, useful in revealing certain systemic diseases. Motivated by the goal of improving the detection of such vessels, the present work proposes an algorithm for the segmentation of blood vessels and compares the results between expert ophthalmologist hand-drawn ground truths and the segmented image (i.e. the output of the present work). Sensitivity, specificity, positive predictive value (PPV), positive likelihood ratio (PLR) and accuracy are used to evaluate overall performance. It is found that this work segments blood vessels successfully with sensitivity, specificity, PPV, PLR and accuracy of 99.62%, 54.66%, 95.08%, 219.72 and 95.03%, respectively.
1209.1198
Multivariate Interpolation Formula over Finite Fields and Its Applications in Coding Theory
cs.IT math.IT
A multivariate interpolation formula (MVIF) over finite fields is presented by using the proposed Kronecker delta function. The MVIF can be applied to yield polynomial relations over the base field among homogeneous symmetric rational functions. Besides the property that all the coefficients are coming from the base field, there is also a significant one on the degrees of the obtained polynomial; namely, the degree of each term satisfies certain condition. Next, for any cyclic codes the unknown syndrome representation can also be provided by the proposed MVIF and also has the same properties. By applying the unknown syndrome representation and the Berlekamp-Massey algorithm, one-step decoding algorithms can be developed to determine the error locator polynomials for arbitrary cyclic codes.
1209.1224
Wavelet Based Normal and Abnormal Heart Sound Identification using Spectrogram Analysis
cs.CV
The present work proposes computer-aided identification of normal and abnormal heart sounds based on the Discrete Wavelet Transform (DWT), which is useful for the tele-diagnosis of heart diseases. Owing to the presence of cumulative frequency components in the spectrogram, the DWT is applied to the spectrogram up to level n to extract features from the individual approximation components. A one-dimensional feature vector is obtained by evaluating the row mean of the approximation components of these spectrograms. In this approach, the set of spectrograms, rather than the raw sound samples, is considered as the database. The minimum Euclidean distance between the feature vector of the test sample and the feature vectors of the stored samples is computed to identify the heart sound. With this algorithm, an accuracy of almost 82% was achieved.
1209.1236
Coordination of autonomic functionalities in communications networks
cs.NI cs.SY
Future communication networks are expected to feature autonomic (or self-organizing) mechanisms to ease deployment (self-configuration), tune parameters automatically (self-optimization) and repair the network (self-healing). Self-organizing mechanisms have been designed as stand-alone entities, even though multiple mechanisms will run in parallel in operational networks. An efficient coordination mechanism will be the major enabler for large scale deployment of self-organizing networks. We model self-organizing mechanisms as control loops, and study the conditions for stability when running control loops in parallel. Based on control theory and Lyapunov stability, we propose a coordination mechanism to stabilize the system, which can be implemented in a distributed fashion. The mechanism remains valid in the presence of measurement noise via stochastic approximation. Instability and coordination in the context of wireless networks are illustrated with two examples and the influence of network geometry is investigated. We are essentially concerned with linear systems, and the applicability of our results for non-linear systems is discussed.
1209.1291
The degrees of freedom of MIMO networks with full-duplex receiver cooperation but no CSIT
cs.IT math.IT
The question of whether the degrees of freedom (DoF) of multi-user networks can be enhanced even under isotropic fading and no channel state information (or output feedback) at the transmitters (CSIT) is investigated. Toward this end, the two-user MIMO (multiple-input, multiple-output) broadcast and interference channels are studied with no side-information whatsoever at the transmitters and with receivers equipped with full-duplex radios. The full-duplex feature allows for receiver cooperation because each receiver, in addition to receiving the signals sent by the transmitters, can also simultaneously transmit a signal in the same band to the other receiver. Unlike the case of MIMO networks with CSIT and full-duplex receivers, for which DoF are known, it is shown that for MIMO networks with no CSIT, full-duplex receiver cooperation is beneficial to such an extent that even the DoF region is enhanced. Indeed, for important classes of two-user MIMO broadcast and interference channels, defined by certain relationships on numbers of antennas at different terminals, the exact DoF regions are established. The key to achieving DoF-optimal performance for such networks are new retro-cooperative interference alignment schemes. Their optimality is established via the DoF analysis of certain genie-aided or enhanced version of those networks.
1209.1295
Period Distribution of Inversive Pseudorandom Number Generators Over Finite Fields
cs.IT math.IT
In this paper, we focus on analyzing the period distribution of the inversive pseudorandom number generators (IPRNGs) over finite field $({\rm Z}_{N},+,\times)$, where $N>3$ is a prime. The sequences generated by the IPRNGs are transformed to 2-dimensional linear feedback shift register (LFSR) sequences. By employing the generating function method and the finite field theory, the period distribution is obtained analytically. The analysis process also indicates how to choose the parameters and the initial values such that the IPRNGs fit specific periods. The analysis results show that there are many small periods if $N$ is not chosen properly. The experimental examples show the effectiveness of the theoretical analysis.
1209.1300
Input Scheme for Hindi Using Phonetic Mapping
cs.CL
Written communication on computers requires knowledge of how to enter text in the desired language. Most people cannot type in any language besides English, which creates a barrier. To resolve this issue we have developed a scheme to input text in Hindi using phonetic mapping. Using this scheme we generate intermediate code strings and match them with the pronunciations of the input text. Our system shows significant success over other available input systems.
1209.1301
Evaluation of Computational Grammar Formalisms for Indian Languages
cs.CL
Natural language parsing has been a prominent research area since the genesis of natural language processing. Probabilistic parsers are being developed to make the process of parser development easier, more accurate and faster. In the Indian context, the question of which computational grammar formalism to use still needs to be answered. In this paper we focus on this problem and analyze different formalisms for Indian languages.
1209.1317
Lossy joint source-channel coding in the finite blocklength regime
cs.IT math.IT
This paper finds new tight finite-blocklength bounds for the best achievable lossy joint source-channel code rate, and demonstrates that joint source-channel code design brings considerable performance advantage over a separate one in the non-asymptotic regime. A joint source-channel code maps a block of $k$ source symbols onto a length-$n$ channel codeword, and the fidelity of reproduction at the receiver end is measured by the probability $\epsilon$ that the distortion exceeds a given threshold $d$. For memoryless sources and channels, it is demonstrated that the parameters of the best joint source-channel code must satisfy $nC - kR(d) \approx \sqrt{nV + k \mathcal V(d)} Q^{-1}(\epsilon)$, where $C$ and $V$ are the channel capacity and channel dispersion, respectively; $R(d)$ and $\mathcal V(d)$ are the source rate-distortion and rate-dispersion functions; and $Q^{-1}$ is the inverse of the standard Gaussian complementary cdf. Symbol-by-symbol (uncoded) transmission is known to achieve the Shannon limit when the source and channel satisfy a certain probabilistic matching condition. In this paper we show that even when this condition is not satisfied, symbol-by-symbol transmission is, in some cases, the best known strategy in the non-asymptotic regime.
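The dispersion relation above can be evaluated numerically. The sketch below solves it for the excess-distortion probability $\epsilon$ using the standard library's normal distribution; the source and channel parameter values are illustrative, not taken from the paper:

```python
from statistics import NormalDist

def excess_distortion_prob(n, k, C, V, R, Vd):
    """Solve n*C - k*R(d) = sqrt(n*V + k*V(d)) * Qinv(eps) for eps,
    where Q is the standard Gaussian complementary cdf."""
    arg = (n * C - k * R) / ((n * V + k * Vd) ** 0.5)
    return 1.0 - NormalDist().cdf(arg)   # eps = Q(arg)

# Illustrative parameters (not from the paper): C=0.5, V=0.25, R(d)=0.5, V(d)=0.25
eps_1000 = excess_distortion_prob(n=1000, k=900, C=0.5, V=0.25, R=0.5, Vd=0.25)
eps_1200 = excess_distortion_prob(n=1200, k=900, C=0.5, V=0.25, R=0.5, Vd=0.25)
```

As expected, increasing the channel blocklength $n$ for a fixed source block $k$ drives the excess-distortion probability down sharply.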
1209.1318
Finding and Recommending Scholarly Articles
cs.IR astro-ph.IM cs.DL physics.soc-ph
The rate at which scholarly literature is being produced has been increasing at approximately 3.5 percent per year for decades. This means that during a typical 40 year career the amount of new literature produced each year increases by a factor of four. The methods scholars use to discover relevant literature must change. Just like everybody else involved in information discovery, scholars are confronted with information overload. Two decades ago, this discovery process essentially consisted of paging through abstract books, talking to colleagues and librarians, and browsing journals, a time-consuming process that could be even longer if material had to be shipped from elsewhere. Now much of this discovery process is mediated by online scholarly information systems. All these systems are relatively new, and all are still changing. They all share a common goal: to provide their users with access to the literature relevant to their specific needs. To achieve this, each system responds to actions by the user by displaying articles which the system judges relevant to the user's current needs. Recently, search systems which use particularly sophisticated methodologies to recommend a few specific papers to the user have been called "recommender systems". These methods are in line with the current use of the term "recommender system" in computer science. We do not adopt this definition; rather, we view systems like these as components in a larger whole, which is presented by the scholarly information systems themselves. In what follows we view the recommender system as an aspect of the entire information system, one which combines the massive memory capacities of the machine with the cognitive abilities of the human user to achieve a human-machine synergy.
1209.1322
Differentially Private Grids for Geospatial Data
cs.CR cs.DB
In this paper, we tackle the problem of constructing a differentially private synopsis for two-dimensional datasets such as geospatial datasets. The current state-of-the-art methods work by performing recursive binary partitioning of the data domains, and constructing a hierarchy of partitions. We show that the key challenge in partition-based synopsis methods lies in choosing the right partition granularity to balance the noise error and the non-uniformity error. We study the uniform-grid approach, which applies an equi-width grid of a certain size over the data domain and then issues independent count queries on the grid cells. This method has received no attention in the literature, probably due to the fact that no good method for choosing a grid size was known. Based on an analysis of the two kinds of errors, we propose a method for choosing the grid size. Experimental results validate our method, and show that this approach performs as well as, and often better than, the state-of-the-art methods. We further introduce a novel adaptive-grid method. The adaptive grid method lays a coarse-grained grid over the dataset, and then further partitions each cell according to its noisy count. Both levels of partitions are then used in answering queries over the dataset. This method exploits the need to have finer granularity partitioning over dense regions and, at the same time, coarse partitioning over sparse regions. Through extensive experiments on real-world datasets, we show that this approach consistently and significantly outperforms the uniform-grid method and other state-of-the-art methods.
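A minimal sketch of the uniform-grid approach, assuming the standard Laplace mechanism with sensitivity 1 per cell. The grid size `m` is fixed by hand here; the paper's contribution is a principled rule for choosing it from the data size and privacy budget:

```python
import random

def uniform_grid(points, m, eps, extent=1.0):
    """m x m equi-width grid over [0, extent)^2; each cell count gets Laplace(1/eps) noise."""
    counts = [[0.0] * m for _ in range(m)]
    for x, y in points:
        counts[min(int(x / extent * m), m - 1)][min(int(y / extent * m), m - 1)] += 1
    def laplace(b):
        # Laplace(b) sampled as the difference of two independent Exponential(1/b) draws
        return random.expovariate(1.0 / b) - random.expovariate(1.0 / b)
    return [[c + laplace(1.0 / eps) for c in row] for row in counts]

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(10000)]
noisy = uniform_grid(pts, m=8, eps=1.0)
total = sum(sum(row) for row in noisy)
```

Range queries are then answered by summing the noisy counts of the cells they cover, which is where the noise-error versus non-uniformity-error tradeoff discussed above shows up.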
1209.1323
An Empirical Study of How Users Adopt Famous Entities
cs.SI physics.soc-ph
Users of social networking services construct their personal social networks by creating asymmetric and symmetric social links. Users usually follow friends and selected famous entities that include celebrities and news agencies. In this paper, we investigate how users follow famous entities. We statically and dynamically analyze data from a huge social networking service with a manually classified set of famous entities. The results show that the in-degree of famous entities does not fit a power-law distribution. Conversely, the maximum number of famous followees in one category for each user shows power-law behavior. To the best of our knowledge, there is no prior work on this topic using a manually classified set of real-life famous entities. These findings might be helpful in microblogging marketing and user classification.
1209.1351
Emergence of influential spreaders in modified rumor models
physics.soc-ph cs.SI
The burst in the use of online social networks over the last decade has provided evidence that current rumor spreading models miss some fundamental ingredients needed to reproduce how information is disseminated. In particular, recent literature has revealed that these models fail to reproduce the fact that some nodes in a network have an influential role when it comes to spreading a piece of information. In this work, we introduce two mechanisms with the aim of filling the gap between theoretical and experimental results. The first model introduces the assumption that spreaders are not always active, whereas the second model considers the possibility that an ignorant is not interested in spreading the rumor. In both cases, results from numerical simulations show closer agreement with real data than classical rumor spreading models. Our results shed some light on the mechanisms underlying the spreading of information and ideas in large social systems and pave the way for more realistic diffusion models.
1209.1360
Multiclass Learning with Simplex Coding
stat.ML cs.LG
In this paper we discuss a novel framework for multiclass learning, defined by a suitable coding/decoding strategy, namely the simplex coding, that allows one to generalize to multiple classes a relaxation approach commonly used in binary classification. In this framework, a relaxation error analysis can be developed avoiding constraints on the considered hypotheses class. Moreover, we show that in this setting it is possible to derive the first provably consistent regularized method with training/tuning complexity that is independent of the number of classes. Tools from convex analysis are introduced that can be used beyond the scope of this paper.
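The simplex code itself admits a short construction: project the standard basis of R^K off the all-ones direction and renormalize, which yields K unit vectors with pairwise inner product -1/(K-1). A sketch (the construction is standard; its use as the paper's exact coding is the only assumption):

```python
import numpy as np

def simplex_code(K):
    """K unit-norm code vectors with pairwise inner product -1/(K-1),
    spanning a (K-1)-dimensional subspace of R^K."""
    C = np.eye(K) - np.ones((K, K)) / K          # project basis off the all-ones direction
    return C / np.linalg.norm(C, axis=1, keepdims=True)

C = simplex_code(4)
G = C @ C.T                                      # Gram matrix of the code
```

Decoding then assigns a relaxed prediction f(x) to the class whose code vector has the largest inner product with it, generalizing the sign-based decoding of binary classification.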
1209.1380
The Sample Complexity of Search over Multiple Populations
cs.IT math.IT stat.ML
This paper studies the sample complexity of searching over multiple populations. We consider a large number of populations, each corresponding to either distribution P0 or P1. The goal of the search problem studied here is to find one population corresponding to distribution P1 with as few samples as possible. The main contribution is to quantify the number of samples needed to correctly find one such population. We consider two general approaches: non-adaptive sampling methods, which sample each population a predetermined number of times until a population following P1 is found, and adaptive sampling methods, which employ sequential sampling schemes for each population. We first derive a lower bound on the number of samples required by any sampling scheme. We then consider an adaptive procedure consisting of a series of sequential probability ratio tests, and show it comes within a constant factor of the lower bound. We give explicit expressions for this constant when samples of the populations follow Gaussian and Bernoulli distributions. An alternative adaptive scheme is discussed which does not require full knowledge of P1, and comes within a constant factor of the optimal scheme. For comparison, a lower bound on the sampling requirements of any non-adaptive scheme is presented.
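The adaptive procedure described above, a series of sequential probability ratio tests, can be sketched for Bernoulli populations. The oracle below is a deterministic toy (each population always emits its label) so that the run is reproducible; real populations would of course be sampled stochastically:

```python
import math

def find_p1_population(sample, p0=0.3, p1=0.7, log_a=-4.6, log_b=4.6):
    """Scan populations with a Bernoulli SPRT per population; return (index, total samples).
    log_a/log_b correspond to roughly 1% error probabilities per test."""
    up, down = math.log(p1 / p0), math.log((1 - p1) / (1 - p0))
    idx = total = 0
    while True:
        llr = 0.0
        while log_a < llr < log_b:
            total += 1
            llr += up if sample(idx) else down
        if llr >= log_b:        # accept H1: population idx follows P1
            return idx, total
        idx += 1                # reject: move on to the next population

# Deterministic toy oracle: population 3 follows P1 (always emits 1), the rest follow P0
truth = [0, 0, 0, 1, 0]
idx, total = find_p1_population(lambda i: truth[i])
```

Each rejected population costs a handful of samples before the log-likelihood ratio crosses the lower threshold, which is the per-population cost that the sample-complexity bounds in the paper quantify.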
1209.1402
Joint Spatial Division and Multiplexing
cs.IT math.IT
We propose Joint Spatial Division and Multiplexing (JSDM), an approach to multiuser MIMO downlink that exploits the structure of the correlation of the channel vectors in order to allow for a large number of antennas at the base station while requiring reduced-dimensional Channel State Information at the Transmitter (CSIT). This allows for significant savings both in the downlink training and in the CSIT feedback from the user terminals to the base station, thus making the use of a large number of base station antennas potentially suitable also for Frequency Division Duplexing (FDD) systems, for which uplink/downlink channel reciprocity cannot be exploited. JSDM forms the multiuser MIMO downlink precoder by concatenating a pre-beamforming matrix, which depends only on the channel second-order statistics, with a classical multiuser precoder, based on the instantaneous knowledge of the resulting reduced dimensional effective channels. We prove a simple condition under which JSDM incurs no loss of optimality with respect to the full CSIT case. For linear uniformly spaced arrays, we show that such condition is closely approached when the number of antennas is large. For this case, we use Szego asymptotic theory of large Toeplitz matrices to design a DFT-based pre-beamforming scheme requiring only coarse information about the users angles of arrival and angular spread. Finally, we extend these ideas to the case of a two-dimensional base station antenna array, with 3-dimensional beamforming, including multiple beams in the elevation angle direction. We provide guidelines for the pre-beamforming optimization and calculate the system spectral efficiency under proportional fairness and maxmin fairness criteria, showing extremely attractive performance. Our numerical results are obtained via an asymptotic random matrix theory tool known as deterministic equivalent approximation.
1209.1411
Connections between Human Dynamics and Network Science
physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an
The increasing availability of large-scale data on human behavior has catalyzed simultaneous advances in network theory, capturing the scaling properties of the interactions between a large number of individuals, and human dynamics, quantifying the temporal characteristics of human activity patterns. These two areas remain disjoint, each pursued as a separate line of inquiry. Here we report a series of generic relationships between the quantities characterizing these two areas by demonstrating that the degree and link weight distributions in social networks can be expressed in terms of the dynamical exponents characterizing human activity patterns. We test the validity of these theoretical predictions on datasets capturing various facets of human interactions, from mobile calls to tweets.
1209.1421
Blackboard Rules for Coordinating Context-aware Applications in Mobile Ad Hoc Networks
cs.MA
Thanks to improvements in wireless communication technologies and increasing computing power in hand-held devices, mobile ad hoc networks are becoming an ever-more present reality. Coordination languages are expected to become important means of supporting this type of interaction. To this end, we argue for the use of the Bach coordination language as a middleware that can handle and react to context changes as well as cope with unpredictable physical interruptions that occur in opportunistic network connections. More concretely, our proposal is based on blackboard rules that declaratively model the actions to be taken once the blackboard content reaches a predefined state, but also manage the engagement and disengagement of hosts and the transient sharing of blackboards. The idea of reactiveness has already been introduced in previous work, but as will be appreciated by the reader, this article presents a new perspective, more focused on a declarative setting.
1209.1423
A model for cross-cultural reciprocal interactions through mass media
physics.soc-ph cond-mat.stat-mech cs.SI nlin.CD
We investigate the problem of cross-cultural interactions through mass media in a model where two populations of social agents, each with its own internal dynamics, get information about each other through reciprocal global interactions. As the agent dynamics, we employ Axelrod's model for social influence. The global interaction fields correspond to the statistical mode of the states of the agents and represent mass media messages on the cultural trend originating in each population. Several phases are found in the collective behavior of either population depending on parameter values: two homogeneous phases, one having the state of the global field acting on that population, and the other consisting of a state different from that reached by the applied global field; and a disordered phase. In addition, the system displays nontrivial effects: (i) the emergence of a largest minority group of appreciable size sharing a state different from that of the applied global field; (ii) the appearance of localized ordered states for some values of parameters when the entire system is observed, consisting of one population in a homogeneous state and the other in a disordered state. This last situation can be considered as a social analogue to a chimera state arising in globally coupled populations of oscillators.
1209.1424
Multiuser Diversity for the Cognitive Uplink with Generalized Fading and Reduced Primary's Cooperation
cs.IT math.IT
In cognitive multiple access networks, feedback is an important mechanism to convey secondary transmitter primary base station (STPB) channel gains from the primary base station (PBS) to the secondary base station (SBS). This paper investigates the optimal sum-rate capacity scaling laws for cognitive multiple access networks in feedback limited communication scenarios. First, an efficient feedback protocol called the $K$-smallest channel gains ($K$-SCG) feedback protocol is proposed in which the PBS feeds back the $K$ smallest out of $N$ STPB channel gains to the SBS. Second, the sum-rate performance of the $K$-SCG feedback protocol is studied for three network types when transmission powers of secondary users (SUs) are optimally allocated. The network types considered are total-power-and-interference-limited (TPIL), interference-limited (IL) and individual-power-and-interference-limited (IPIL) networks. For each network type studied, we provide a sufficient condition on $K$ such that the $K$-SCG feedback protocol is {\em asymptotically} optimal in the sense that the secondary network sum-rate scaling behavior under the $K$-SCG feedback protocol is the same as that under the full-feedback protocol. We allow distributions of secondary-transmitter-secondary-base-station (STSB) and STPB channel power gains to belong to a fairly general class of distributions called class $\mathcal{C}$-distributions that includes commonly used fading models.
1209.1425
The End of an Architectural Era for Analytical Databases
cs.DB cs.DC
Traditional enterprise warehouse solutions center around an analytical database system that is monolithic and inflexible: data needs to be extracted, transformed, and loaded into the rigid relational form before analysis. It takes years of sophisticated planning to provision and deploy a warehouse; adding new hardware resources to an existing warehouse is an equally lengthy and daunting task. Additionally, modern data analysis employs statistical methods that go well beyond the typical roll-up and drill-down capabilities provided by warehouse systems. Although it is possible to implement such methods using a combination of SQL and UDFs, query engines in relational databases are ill-suited for these. The Hadoop ecosystem introduces a suite of tools for data analytics that overcome some of the problems of traditional solutions. These systems, however, forgo years of warehouse research. Memory is significantly underutilized in Hadoop clusters, and the execution engine is naive compared with its relational counterparts. It is time to rethink the design of data warehouse systems and take the best from both worlds. The new generation of warehouse systems should be modular, high performance, fault-tolerant, easy to provision, and designed to support both SQL query processing and machine learning applications. This paper references the Shark system developed at Berkeley as an initial attempt.
1209.1426
Power Control and Multiuser Diversity for the Distributed Cognitive Uplink
cs.IT math.IT
This paper studies optimum power control and sum-rate scaling laws for the distributed cognitive uplink. It is first shown that the optimum distributed power control policy is in the form of a threshold based water-filling power control. Each secondary user executes the derived power control policy in a distributed fashion by using local knowledge of its direct and interference channel gains such that the resulting aggregate (average) interference does not disrupt primary's communication. Then, the tight sum-rate scaling laws are derived as a function of the number of secondary users $N$ under the optimum distributed power control policy. The fading models considered to derive sum-rate scaling laws are general enough to include Rayleigh, Rician and Nakagami fading models as special cases. When transmissions of secondary users are limited by both transmission and interference power constraints, it is shown that the secondary network sum-rate scales according to $\frac{1}{e n_h}\log\log N$, where $n_h$ is a parameter obtained from the distribution of direct channel power gains. For the case of transmissions limited only by interference constraints, on the other hand, the secondary network sum-rate scales according to $\frac{1}{e \gamma_g}\log N$, where $\gamma_g$ is a parameter obtained from the distribution of interference channel power gains. These results indicate that the distributed cognitive uplink is able to achieve throughput scaling behavior similar to that of the centralized cognitive uplink up to a pre-log multiplier $\frac{1}{e}$, whilst primary's quality-of-service requirements are met. The factor $\frac{1}{e}$ can be interpreted as the cost of distributed implementation of the cognitive uplink.
1209.1428
Challenges and Directions for Engineering Multi-agent Systems
cs.MA cs.SE
In this talk I review where we stand regarding the engineering of multi-agent systems. There is both good news and bad news. The good news is that over the past decade we've made considerable progress on techniques for engineering multi-agent systems: we have good, usable methodologies, and mature tools. Furthermore, we've seen a wide range of demonstrated applications, and have even begun to quantify the advantages of agent technology. However, industry involvement in AAMAS appears to be declining (as measured by industry sponsorship of the conference), and industry-affiliated attendees at AAMAS 2012 were few (1-2%). Furthermore, looking at the applications of agents being reported at recent AAMAS, usage of Agent Oriented Software Engineering (AOSE) and of Agent Oriented Programming Languages (AOPLs) is quite limited. This observation is corroborated by the results of a 2008 survey by Frank and Virginia Dignum. Based on these observations, I make five recommendations: (1) Re-engage with industry; (2) Stop designing AOPLs and AOSE methodologies ... and instead ... (3) Move to the "macro" level: develop techniques for designing and implementing interaction, integrate micro (single cognitive agent) and macro (MAS) design and implementation; (4) Develop techniques for the Assurance of MAS; and (5) Re-engage with the US.
1209.1434
Communicating Processes with Data for Supervisory Coordination
cs.SY
We employ supervisory controllers to safely coordinate high-level discrete(-event) behavior of distributed components of complex systems. Supervisory controllers observe discrete-event system behavior, make a decision on allowed activities, and communicate the control signals to the involved parties. Models of the supervisory controllers can be automatically synthesized based on formal models of the system components and a formalization of the safe coordination (control) requirements. Based on the obtained models, code generation can be used to implement the supervisory controllers in software, on a PLC, or on an embedded (micro)processor. In this article, we develop a process theory with data that supports a model-based systems engineering framework for supervisory coordination. We employ communication to distinguish between the different flows of information, i.e., observation and supervision, whereas we employ data to specify the coordination requirements more compactly and to increase the expressivity of the framework. To illustrate the framework, we remodel an industrial case study involving coordination of maintenance procedures of a printing process of a high-tech Océ printer.
1209.1450
On spatial selectivity and prediction across conditions with fMRI
stat.ML cs.LG
Researchers in functional neuroimaging mostly use activation coordinates to formulate their hypotheses. Instead, we propose to use the full statistical images to define regions of interest (ROIs). This paper presents two machine learning approaches, transfer learning and selection transfer, that are compared on their ability to identify the common patterns between brain activation maps related to two functional tasks. We provide some preliminary quantification of these similarities, and show that selection transfer makes it possible to set a spatial scale yielding ROIs that are more specific to the context of interest than with transfer learning. In particular, selection transfer outlines well-known regions such as the Visual Word Form Area when discriminating between different visual tasks.
1209.1476
The effect of network structure on phase transitions in queuing networks
physics.soc-ph cs.SI physics.data-an
Recently, De Martino et al. have presented a general framework for the study of transportation phenomena on complex networks. One of their most significant achievements was a deeper understanding of the phase transition from the uncongested to the congested phase at a critical traffic load. In this paper, we also study phase transitions in transportation networks using a discrete time random walk model. Our aim is to establish a direct connection between the structure of the graph and the value of the critical traffic load. Applying spectral graph theory, we show that the original result of De Martino et al., namely that the critical loading depends only on the degree sequence of the graph (suggesting that different graphs with the same degree sequence have the same critical loading if all other circumstances are fixed), is valid only if the graph is dense enough. For sparse graphs, higher order corrections, related to the local structure of the network, appear.
1209.1479
Communication dynamics in finite capacity social networks
physics.soc-ph cs.SI
In communication networks structure and dynamics are tightly coupled. The structure controls the flow of information and is itself shaped by the dynamical process of information exchanged between nodes. In order to reconcile structure and dynamics, a generic model, based on the local interaction between nodes, is considered for the communication in large social networks. In agreement with data from a large human organization, we show that the flow is non-Markovian and controlled by the temporal limitations of individuals. We confirm the versatility of our model by predicting simultaneously the degree-dependent node activity, the balance between information input and output of nodes and the degree distribution. Finally, we quantify the limitations to network analysis when it is based on data sampled over a finite period of time.
1209.1481
Image Mining from Gel Diagrams in Biomedical Publications
cs.IR q-bio.QM
Authors of biomedical publications often use gel images to report experimental results such as protein-protein interactions or protein expressions under different conditions. Gel images offer a way to concisely communicate such findings, not all of which need to be explicitly discussed in the article text. This fact together with the abundance of gel images and their shared common patterns makes them prime candidates for image mining endeavors. We introduce an approach for the detection of gel images, and present an automatic workflow to analyze them. We are able to detect gel segments and panels at high accuracy, and present first results for the identification of gene names in these images. While we cannot provide a complete solution at this point, we present evidence that this kind of image mining is feasible.
1209.1483
Underspecified Scientific Claims in Nanopublications
cs.DL cs.IR
The application range of nanopublications --- small entities of scientific results in RDF representation --- could be greatly extended if complete formal representations are not mandatory. To that aim, we present an approach to represent and interlink scientific claims in an underspecified way, based on independent English sentences.
1209.1557
Learning Model-Based Sparsity via Projected Gradient Descent
stat.ML cs.LG math.OC
Several convex formulation methods have been proposed previously for statistical estimation with structured sparsity as the prior. These methods often require a carefully tuned regularization parameter, whose selection is a cumbersome or heuristic exercise. Furthermore, the estimate that these methods produce might not belong to the desired sparsity model, albeit accurately approximating the true parameter. Therefore, greedy-type algorithms could often be more desirable in estimating structured-sparse parameters. So far, these greedy methods have mostly focused on linear statistical models. In this paper we study projected gradient descent with a non-convex structured-sparse parameter model as the constraint set. Should the cost function have a Stable Model-Restricted Hessian, the algorithm produces an approximation of the desired minimizer. As an example, we elaborate on the application of the main results to estimation in Generalized Linear Models.
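A sketch of the algorithm studied here, using plain k-sparsity as the structured model and a least-squares loss (the paper's setting is more general; all problem sizes below are illustrative):

```python
import numpy as np

def projected_gradient(A, y, k, iters=300):
    """Gradient descent on ||Ax - y||^2 with projection onto k-sparse vectors after each step."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for this quadratic loss
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))     # gradient step
        x[np.argsort(np.abs(x))[:-k]] = 0.0    # model projection: keep the k largest entries
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [3.0, -2.0, 2.5]
x_hat = projected_gradient(A, A @ x_true, k=3)
```

Because the projection enforces the model exactly, the iterate always lies in the desired sparsity set, which is precisely the property the convex relaxations discussed above cannot guarantee.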
1209.1558
A Comparative Study between Moravec and Harris Corner Detection of Noisy Images Using Adaptive Wavelet Thresholding Technique
cs.CV
In this paper a comparative study between the Moravec and Harris corner detectors has been carried out to obtain the features required to track and recognize objects within a noisy image. Corner detection in noisy images is a challenging task in image processing. Natural images often get corrupted by noise during acquisition and transmission. As corner detection on these noisy images does not provide the desired results, de-noising is required. An adaptive wavelet thresholding approach is applied for this purpose.
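The Moravec response, the minimum over the eight unit shifts of the windowed sum of squared differences, can be sketched directly. Wavelet-thresholding de-noising would precede this step on a noisy input; it is omitted here, and the test image is a clean synthetic square:

```python
import numpy as np

def moravec(img, w=3):
    """Moravec corner response: min over the 8 unit shifts of the windowed SSD."""
    h, wd = img.shape
    resp = np.zeros((h, wd))
    half, pad = w // 2, w // 2 + 1
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(pad, h - pad):
        for x in range(pad, wd - pad):
            win = img[y - half:y + half + 1, x - half:x + half + 1]
            resp[y, x] = min(
                ((win - img[y + dy - half:y + dy + half + 1,
                            x + dx - half:x + dx + half + 1]) ** 2).sum()
                for dy, dx in shifts)
    return resp

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0     # bright square: strong response expected at its corners
resp = moravec(img)
```

Along an edge, one shift direction leaves the window unchanged, so the minimum SSD collapses to zero; only at true corners does every shift change the window, which is exactly the behavior Harris later smoothed with a quadratic approximation.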
1209.1563
Wavelet Based QRS Complex Detection of ECG Signal
cs.CV
The Electrocardiogram (ECG) is a sensitive diagnostic tool that is used to detect various cardiovascular diseases by measuring and recording the electrical activity of the heart in exquisite detail. A wide range of heart conditions is determined by thorough examination of the features of the ECG report. Automatic extraction of time plane features is important for identification of vital cardiac diseases. This paper presents a multi-resolution wavelet transform based system for detecting the 'P', 'Q', 'R', 'S' and 'T' peaks from the original ECG signal. The 'R-R' time lapse is an important feature of the ECG signal that corresponds to the heartbeat of the concerned person. An abrupt increase in the height of the 'R' wave or changes in the measurement of the 'R-R' interval denote various anomalies of the human heart. Similarly, the 'P-P', 'Q-Q', 'S-S' and 'T-T' intervals also correspond to different anomalies of the heart, and their peak amplitudes also indicate other cardiac diseases. In the proposed method the 'PQRST' peaks are marked and stored over the entire signal, and the time interval between two consecutive 'R' peaks and other peak intervals are measured to detect anomalies in the behavior of the heart, if any. The peaks are obtained by decomposing the original ECG signal into Daubechies wavelet sub-bands. The accuracy of the 'PQRST' complex detection and interval measurement is achieved up to 100% with high exactitude by processing and thresholding the original ECG signal.
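A heavily simplified stand-in for the R-peak stage, assuming R peaks dominate the signal amplitude: find local maxima above a threshold and take successive differences as 'R-R' intervals. The Daubechies sub-band decomposition used in the paper is omitted, and the ECG trace is synthetic:

```python
import numpy as np

def r_peaks(sig, thresh):
    """Indices of local maxima above thresh: a crude stand-in for the wavelet-based detector."""
    return np.array([i for i in range(1, len(sig) - 1)
                     if sig[i] > thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]])

rng = np.random.default_rng(0)
sig = 0.05 * rng.standard_normal(1000)   # noisy baseline
sig[100::200] += 1.0                     # R spikes every 200 samples (fixed heart rate)
peaks = r_peaks(sig, thresh=0.5)
rr = np.diff(peaks)                      # 'R-R' intervals, in samples
```

On a real trace the threshold would be set adaptively on the wavelet detail coefficients rather than on the raw amplitude, but the interval computation downstream is the same.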
1209.1652
Power-laws and the Conservation of Information in discrete token systems: Part 2 The role of defect
cs.IT math-ph math.IT math.MP q-bio.GN
In a companion paper (arXiv:1207.5027), I proved that Conservation of Size and Information in a discrete token based system is overwhelmingly likely to lead to a power-law component size distribution with respect to the size of its unique alphabet. This was substantiated to a very high level of significance using some 55 million lines of source code of mixed provenance. The principle was also applied to show that average gene length should be constant in an animal kingdom where the same constraints appear to hold, the implication being that Conservation of Information plays a similar role in discrete token-based systems as the Conservation of Energy does in physical systems. In this part 2, the role of defect will be explored and a functional behaviour for defect derived to be consistent with the power-law behaviour substantiated above. This will be supported by further experimental data and the implications explored.
1209.1679
Bayesian Quantized Network Coding via Belief Propagation
cs.IT math.IT
In this paper, we propose an alternative to routing-based packet forwarding, which uses network coding to increase transmission efficiency, in terms of both compression and error resilience. This non-adaptive encoding is called quantized network coding, which involves random linear mapping in the real field, followed by quantization to cope with the finite capacity of the links. At the gateway node, which collects the received quantized network coded packets, minimum mean squared error decoding is performed, using belief propagation in the factor graph representation. Our simulation results show a significant improvement in terms of the number of packets required to recover the messages, which can be interpreted as an embedded distributed source coding for correlated messages.
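A sketch of the encoding step, real-field random linear mixing followed by uniform quantization, with least-squares decoding standing in for the paper's belief-propagation MMSE decoder (all sizes and the quantizer step are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, step = 20, 30, 0.01            # message length, received packets, quantizer step

def qnc_encode(x, A, step):
    """Random linear mixing in the real field, then uniform quantization
    to model the finite capacity of the links."""
    return np.round(A @ x / step) * step

x = rng.standard_normal(n)
A = rng.standard_normal((m, n))      # random network-coding coefficients
y = qnc_encode(x, A, step)
# Gateway: least-squares decoding (a stand-in for belief-propagation MMSE decoding)
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

With more received packets than message symbols and a fine quantizer, the residual error is dominated by the bounded quantization noise; exploiting message correlation, as the belief-propagation decoder does, is what lets the full scheme operate with fewer packets.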
1209.1688
Rank Centrality: Ranking from Pair-wise Comparisons
cs.LG stat.ML
The question of aggregating pair-wise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers (e.g. MSR's TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding `scores' for each object (e.g. player's rating) is of interest for understanding the intensity of the preferences. In this paper, we propose Rank Centrality, an iterative rank aggregation algorithm for discovering scores for objects (or items) from pair-wise comparisons. The algorithm has a natural random walk interpretation over the graph of objects with an edge present between a pair of objects if they are compared; the score, which we call Rank Centrality, of an object turns out to be its stationary probability under this random walk. To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model (equivalent to the Multinomial Logit (MNL) for pair-wise comparisons) in which each object has an associated score which determines the probabilistic outcomes of pair-wise comparisons between objects. In terms of the pair-wise marginal probabilities, which is the main subject of this paper, the MNL model and the BTL model are identical. We bound the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. In particular, the number of samples required to learn the score well with high probability depends on the structure of the comparison graph. When the Laplacian of the comparison graph has a strictly positive spectral gap, e.g. each item is compared to a subset of randomly chosen items, this leads to dependence on the number of samples that is nearly order-optimal.
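The random-walk construction described above is easy to sketch: moves between compared items are weighted by the fraction of comparisons the destination item won, and the stationary distribution gives the scores. The normalizer `d_max = n` and the plain power iteration below are illustrative choices, not taken from the paper.

```python
import numpy as np

def rank_centrality(wins, n_iter=10000):
    """wins[i, j] = number of times item i beat item j.
    Returns the stationary distribution of the Rank Centrality walk."""
    n = wins.shape[0]
    total = wins + wins.T                    # comparisons per pair
    P = np.zeros((n, n))
    d_max = n                                # safe normalizer (>= out-degree)
    for i in range(n):
        for j in range(n):
            if i != j and total[i, j] > 0:
                # walk moves i -> j with prob proportional to j's win fraction
                P[i, j] = wins[j, i] / total[i, j] / d_max
        P[i, i] = 1.0 - P[i].sum()           # lazy self-loop
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):                  # power iteration
        pi = pi @ P
    return pi / pi.sum()

# Expected win counts under a BTL model with true scores [4, 2, 1].
w_true = np.array([4.0, 2.0, 1.0])
n = len(w_true)
wins = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            wins[i, j] = 100.0 * w_true[i] / (w_true[i] + w_true[j])
pi = rank_centrality(wins)
print(pi)   # ~ w_true / w_true.sum() = [4/7, 2/7, 1/7]
```

With these expected counts the chain is exactly reversible with stationary distribution proportional to the BTL scores, which is why the recovery is exact rather than approximate.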
1209.1695
Decentralized Stochastic Control with Partial History Sharing: A Common Information Approach
cs.SY math.OC
A general model of decentralized stochastic control called the partial history sharing information structure is presented. In this model, at each step the controllers share part of their observation and control history with each other. This general model subsumes several existing models of information sharing as special cases. Based on the information commonly known to all the controllers, the decentralized problem is reformulated as an equivalent centralized problem from the perspective of a coordinator. The coordinator knows the common information and selects prescriptions that map each controller's local information to its control actions. The optimal control problem at the coordinator is shown to be a partially observable Markov decision process (POMDP), which is solved using techniques from Markov decision theory. This approach provides (a) structural results for optimal strategies, and (b) a dynamic program for obtaining optimal strategies for all controllers in the original decentralized problem. Thus, this approach unifies the various ad-hoc approaches taken in the literature. In addition, the structural results on optimal control strategies obtained by the proposed approach cannot be obtained by the existing generic approach (the person-by-person approach) for deriving structural results in decentralized problems, and the dynamic program obtained by the proposed approach is simpler than that obtained by the existing generic approach (the designer's approach) for deriving dynamic programs in decentralized problems.
1209.1711
Programming Languages for Scientific Computing
cs.PL cs.CE cs.MS
Scientific computation is a discipline that combines numerical analysis, physical understanding, algorithm development, and structured programming. Several yottacycles per year on the world's largest computers are spent simulating problems as diverse as weather prediction, the properties of material composites, the behavior of biomolecules in solution, and the quantum nature of chemical compounds. This article reviews specific language features and their use in computational science. We will review the strengths and weaknesses of different programming styles, with examples taken from widely used scientific codes.
1209.1716
Classification of binary systematic codes of small defect
cs.IT math.IT
In this paper non-trivial non-linear binary systematic AMDS codes are classified in terms of their weight distributions, employing only elementary techniques. In particular, we show that their length and minimum distance completely determine the weight distribution.
1209.1719
Semi-metric networks for recommender systems
cs.IR cond-mat.stat-mech cs.SI
Weighted graphs obtained from co-occurrence in user-item relations lead to non-metric topologies. We use this semi-metric behavior to issue recommendations, and discuss its relationship to transitive closure on fuzzy graphs. Finally, we test the performance of this method against other item- and user-based recommender systems on the Movielens benchmark. We show that including highly semi-metric edges in our recommendation algorithms leads to better recommendations.
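A semi-metric edge is one whose direct distance exceeds the length of the shortest indirect path between its endpoints, i.e. an edge violating the triangle inequality. The following is a minimal sketch of detecting such edges via all-pairs shortest paths (an assumed formulation for illustration, not the paper's recommendation pipeline).

```python
import numpy as np

def semi_metric_edges(D):
    """D: symmetric distance matrix with np.inf where no edge exists.
    Returns the edges (i, j) whose direct distance exceeds the shortest
    indirect path between i and j."""
    n = D.shape[0]
    S = D.copy()
    np.fill_diagonal(S, 0.0)
    for k in range(n):                        # Floyd-Warshall shortest paths
        S = np.minimum(S, S[:, [k]] + S[[k], :])
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if np.isfinite(D[i, j]) and S[i, j] < D[i, j]]

INF = np.inf
D = np.array([[INF, 1.0, 5.0],
              [1.0, INF, 1.0],
              [5.0, 1.0, INF]])
print(semi_metric_edges(D))   # edge (0, 2): direct cost 5 > indirect cost 2
```

In a recommender setting, the distances would come from inverted co-occurrence weights, and the semi-metric edges flag item pairs whose indirect association is stronger than their direct one.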
1209.1727
Bandits with heavy tail
stat.ML cs.LG
The stochastic multi-armed bandit problem is well understood when the reward distributions are sub-Gaussian. In this paper we examine the bandit problem under the weaker assumption that the distributions have moments of order $1+\epsilon$, for some $\epsilon \in (0,1]$. Surprisingly, moments of order 2 (i.e., finite variance) are sufficient to obtain regret bounds of the same order as under sub-Gaussian reward distributions. In order to achieve such regret, we define sampling strategies based on refined estimators of the mean, such as the truncated empirical mean, Catoni's M-estimator, and the median-of-means estimator. We also derive matching lower bounds, which show that the best achievable regret deteriorates when $\epsilon < 1$.
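The median-of-means estimator mentioned above is simple to sketch: split the sample into blocks, average each block, and take the median of the block means. The sample size, block count and outlier magnitudes below are illustrative.

```python
import numpy as np

def median_of_means(x, k):
    """Split x into k equal-size blocks, average each block, and return
    the median of the block means: a robust mean estimator for
    heavy-tailed samples."""
    x = np.asarray(x, dtype=float)
    m = len(x) // k
    block_means = x[:m * k].reshape(k, m).mean(axis=1)
    return float(np.median(block_means))

rng = np.random.default_rng(1)
# Heavy-tailed-looking sample: standard normal plus a few huge outliers.
x = rng.standard_normal(1000)
x[:5] += 1e6
print(median_of_means(x, k=20))    # close to 0 despite the outliers
print(float(x.mean()))             # wrecked by the outliers
```

The outliers corrupt at most a few blocks, and the median over block means discards them, which is exactly the robustness property these bandit strategies exploit.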
1209.1734
Load Distribution Composite Design Pattern for Genetic Algorithm-Based Autonomic Computing Systems
cs.SE cs.DC cs.NE
Current autonomic computing systems are ad hoc solutions that are designed and implemented from scratch. When designing software, in most cases two or more patterns must be composed to solve a larger problem. A composite design pattern exhibits a synergy that makes the composition more than just the sum of its parts, which leads to ready-made software architectures. To the best of our knowledge, there are no studies on the composition of design patterns for the autonomic computing domain. In this paper we propose a pattern-oriented software architecture for self-optimization in autonomic computing systems using design pattern composition and multi-objective evolutionary algorithms, which software designers and/or programmers can exploit to guide their work. The main objective of the system is to reduce the load on the server by distributing the population to clients. We use the Case Based Reasoning, Database Access, and Master-Slave design patterns. We evaluate the effectiveness of our architecture with and without design pattern composition. The use of composite design patterns in the architecture and quantitative measurements are presented. A simple UML class diagram is used to describe the architecture.
1209.1739
Design of Spectrum Sensing Policy for Multi-user Multi-band Cognitive Radio Network
cs.LG cs.NI
Finding an optimal sensing policy for a particular access policy and sensing scheme is a laborious combinatorial problem that requires the system model parameters to be known. In practice, the parameters or the model itself may not be completely known, making reinforcement learning methods appealing. In this paper, a non-parametric reinforcement-learning-based method is developed for sensing and accessing multi-band radio spectrum in multi-user cognitive radio networks. A suboptimal sensing policy search algorithm is proposed for a particular multi-user multi-band access policy and the randomized Chair-Varshney rule. The randomized Chair-Varshney rule is used to reduce the probability of false alarms under a constraint on the probability of detection that protects the primary user. The simulation results show that the proposed method achieves a sum profit (e.g. data rate) close to that of the optimal sensing policy while achieving the desired probability of detection.
1209.1751
Information content versus word length in random typing
physics.data-an cond-mat.stat-mech cs.CL
Recently, it has been claimed that a linear relationship between a measure of information content and word length is expected from word length optimization and it has been shown that this linearity is supported by a strong correlation between information content and word length in many languages (Piantadosi et al. 2011, PNAS 108, 3825-3826). Here, we study in detail some connections between this measure and standard information theory. The relationship between the measure and word length is studied for the popular random typing process where a text is constructed by pressing keys at random from a keyboard containing letters and a space behaving as a word delimiter. Although this random process does not optimize word lengths according to information content, it exhibits a linear relationship between information content and word length. The exact slope and intercept are presented for three major variants of the random typing process. A strong correlation between information content and word length can simply arise from the units making a word (e.g., letters) and not necessarily from the interplay between a word and its context as proposed by Piantadosi et al. In itself, the linear relation does not entail the results of any optimization process.
1209.1759
Difference of Normals as a Multi-Scale Operator in Unorganized Point Clouds
cs.CV
A novel multi-scale operator for unorganized 3D point clouds is introduced. The Difference of Normals (DoN) provides a computationally efficient, multi-scale approach to processing large unorganized 3D point clouds. The application of DoN in the multi-scale filtering of two different real-world outdoor urban LIDAR scene datasets is quantitatively and qualitatively demonstrated. In both datasets the DoN operator is shown to segment large 3D point clouds into scale-salient clusters, such as cars, people, and lamp posts towards applications in semi-automatic annotation, and as a pre-processing step in automatic object recognition. The application of the operator to segmentation is evaluated on a large public dataset of outdoor LIDAR scenes with ground truth annotations.
1209.1788
On the Use of Lee's Protocol for Speckle-Reducing Techniques
cs.CV
This paper presents two new MAP (Maximum a Posteriori) filters for speckle noise reduction and a Monte Carlo procedure for the assessment of their performance. In order to quantitatively evaluate the results obtained using these new filters, with respect to classical ones, a Monte Carlo extension of Lee's protocol is proposed. This extension of the protocol shows that its original version leads to inconsistencies that hamper its use as a general procedure for filter assessment. Some solutions for these inconsistencies are proposed, and a consistent comparison of speckle-reducing filters is provided.
1209.1794
A New Similarity Measure For Spatial Personalization
cs.DB
Extracting relevant information from a spatial data warehouse is becoming increasingly hard. In fact, because of the enormous amount of data stored in the spatial data warehouse, users usually do not know which part of the cube contains the relevant information or what the forthcoming query should be. As a solution, we propose to study the similarity between users' behaviors, in terms of the spatial MDX queries launched on the system, as a basis for recommending the next relevant MDX query to the current user. This paper introduces a new similarity measure for comparing spatial MDX queries. The proposed measure can directly support the development of spatial personalization approaches, and it takes into account the basic components of similarity assessment models: topology, direction and distance.
1209.1797
Securing Your Transactions: Detecting Anomalous Patterns In XML Documents
cs.CR cs.LG
XML transactions are used in many information systems to store data and interact with other systems. Abnormal transactions, the result of either an on-going cyber attack or the actions of a benign user, can potentially harm the interacting systems and therefore they are regarded as a threat. In this paper we address the problem of anomaly detection and localization in XML transactions using machine learning techniques. We present a new XML anomaly detection framework, XML-AD. Within this framework, an automatic method for extracting features from XML transactions was developed as well as a practical method for transforming XML features into vectors of fixed dimensionality. With these two methods in place, the XML-AD framework makes it possible to utilize general learning algorithms for anomaly detection. Central to the functioning of the framework is a novel multi-univariate anomaly detection algorithm, ADIFA. The framework was evaluated on four XML transactions datasets, captured from real information systems, in which it achieved over 89% true positive detection rate with less than a 0.2% false positive rate.
1209.1800
An Empirical Study of MAUC in Multi-class Problems with Uncertain Cost Matrices
cs.LG
Cost-sensitive learning relies on the availability of a known and fixed cost matrix. However, in some scenarios the cost matrix is uncertain during training, and re-training a classifier after the cost matrix is specified may not be an option. For binary classification, this issue can be successfully addressed by methods that maximize the Area Under the ROC Curve (AUC) metric, since the AUC measures the performance of base classifiers independently of cost during training, and a larger AUC is more likely to lead to a smaller total cost in testing when using the threshold-moving method. As an extension of AUC to multi-class problems, MAUC has attracted much attention and been widely used. Although MAUC also measures the performance of base classifiers independently of cost, it is unclear whether a larger MAUC is more likely to lead to a smaller total cost. In fact, it is also unclear what kinds of post-processing methods should be used in multi-class problems to convert base classifiers into discrete classifiers so that the total cost is as small as possible. In this paper, we empirically explore the relationship between MAUC and the total cost of classifiers by applying two categories of post-processing methods. Our results suggest that a larger MAUC is also beneficial. Interestingly, simple calibration methods that convert the output matrix into posterior probabilities perform better than existing sophisticated post re-optimization methods.
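MAUC is commonly computed as the Hand-and-Till M measure: the average, over all unordered class pairs, of the two one-vs-one AUCs. A minimal sketch follows, under the assumption (mine, not stated in the abstract) that class labels are integers 0..c-1 indexing the columns of the score matrix.

```python
import numpy as np

def auc_pair(scores, y, i, j):
    """AUC for separating class i from class j using score column i.
    Assumes class labels double as column indices (illustrative setup)."""
    si = scores[y == i, i]
    sj = scores[y == j, i]
    # Probability that a random class-i example outscores a class-j one,
    # counting ties as half a win.
    wins = sum((a > b) + 0.5 * (a == b) for a in si for b in sj)
    return wins / (len(si) * len(sj))

def mauc(scores, y):
    """Hand-and-Till MAUC: mean of A(i,j) over all unordered class pairs,
    with A(i,j) = (AUC(i|j) + AUC(j|i)) / 2."""
    classes = np.unique(y)
    pairs = [(i, j) for a, i in enumerate(classes) for j in classes[a + 1:]]
    total = sum((auc_pair(scores, y, i, j) + auc_pair(scores, y, j, i)) / 2
                for i, j in pairs)
    return total / len(pairs)

y = np.array([0, 0, 1, 1, 2, 2])
scores = np.eye(3)[y]          # a perfect scorer: one-hot on the true class
print(mauc(scores, y))         # 1.0
print(mauc(np.ones((6, 3)), y))  # 0.5 for an uninformative scorer
```

Like the binary AUC, this quantity depends only on the base classifier's score rankings, which is why post-processing (thresholding or calibration) is still needed to obtain a discrete, cost-aware classifier.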
1209.1826
A spatio-spectral hybridization for edge preservation and noisy image restoration via local parametric mixtures and Lagrangian relaxation
stat.ME cs.CV stat.AP
This paper investigates a fully unsupervised statistical method for edge-preserving image restoration and compression using a spatial decomposition scheme. Smoothed maximum likelihood is used for local estimation of edge pixels from mixture parametric models of local templates. For the complementary smooth part, the traditional L2-variational problem is solved in the Fourier domain with Thin Plate Spline (TPS) regularization. It is well known that naive Fourier compression of the whole image fails to restore a piece-wise smooth noisy image satisfactorily due to the Gibbs phenomenon. Images are interpreted as relative frequency histograms of samples from bivariate densities, where the sample sizes might be unknown. In the continuous formulation of the problem, the set of discontinuities is assumed to be a completely unsupervised, Lebesgue-null, compact subset of the plane. The proposed spatial decomposition uses a widely used topological concept, the partition of unity. Decisions on edge-pixel neighborhoods are made using Holm's multiple testing procedure. The statistical summary of the final output is decomposed into two layers of information extraction, one for the subset of edge pixels and the other for the smooth region. Robustness is also demonstrated by applying the technique to noisy degradations of clean images.
1209.1873
Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
stat.ML cs.LG math.OC
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems, such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA), showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
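To illustrate what a dual coordinate update looks like, here is a minimal SDCA-style solver for a linear hinge-loss SVM. It uses the common C-formulation (min ½||w||² + C·Σ hinge), which differs in scaling from the regularized-loss formulation analyzed in the paper; it is a sketch, not the paper's algorithm.

```python
import numpy as np

def sdca_svm(X, y, C=1.0, epochs=50, seed=0):
    """Stochastic dual coordinate ascent for a linear hinge-loss SVM.
    Labels y must be in {-1, +1}. Each step solves the 1-D dual problem
    in a single coordinate alpha_i exactly, then updates w accordingly."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    q = (X * X).sum(axis=1)                  # per-example squared norms
    for _ in range(epochs):
        for i in rng.permutation(n):         # stochastic coordinate order
            g = y[i] * X[i] @ w - 1.0        # dual gradient for coordinate i
            new_a = min(max(alpha[i] - g / q[i], 0.0), C)
            w += (new_a - alpha[i]) * y[i] * X[i]
            alpha[i] = new_a
    return w

# Linearly separable toy data.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = sdca_svm(X, y)
print(np.sign(X @ w))   # matches y
```

Unlike an SGD step, each update here is an exact 1-D minimization with a closed-form clip to [0, C], which is the structural property SDCA's convergence analysis exploits.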
1209.1885
Parametric Constructive Kripke-Semantics for Standard Multi-Agent Belief and Knowledge (Knowledge As Unbiased Belief)
cs.LO cs.AI cs.DC cs.MA
We propose parametric constructive Kripke-semantics for multi-agent KD45-belief and S5-knowledge in terms of elementary set-theoretic constructions of two basic functional building blocks, namely bias (or viewpoint) and visibility, functioning also as the parameters of the doxastic and epistemic accessibility relation. The doxastic accessibility relates two possible worlds whenever the application of the composition of bias with visibility to the first world is equal to the application of visibility to the second world. The epistemic accessibility is the transitive closure of the union of our doxastic accessibility and its converse. Therefrom, accessibility relations for common and distributed belief and knowledge can be constructed in a standard way. As a result, we obtain a general definition of knowledge in terms of belief that enables us to view S5-knowledge as accurate (unbiased and thus true) KD45-belief, negation-complete belief and knowledge as exact KD45-belief and S5-knowledge, respectively, and perfect S5-knowledge as precise (exact and accurate) KD45-belief, and all this generically for arbitrary functions of bias and visibility. Our results can be seen as a semantic complement to previous foundational results by Halpern et al. about the (un)definability and (non-)reducibility of knowledge in terms of and to belief, respectively.
1209.1899
A matrix approach for computing extensions of argumentation frameworks
cs.AI
Matrices and their sub-blocks are introduced into the study of determining various extensions in the sense of Dung's theory of argumentation frameworks. It is shown that each argumentation framework has matrix representations, and that the core semantics defined by Dung can be characterized by specific sub-blocks of the matrix. Furthermore, elementary permutations of a matrix are employed, yielding an efficient matrix approach for finding all extensions under a given semantics. Unlike several established approaches, such as graph labelling algorithms and Constraint Satisfaction Problem algorithms, the matrix approach not only brings a mathematical perspective to the search for extensions, but also fully achieves the goal of computing all the required extensions.
1209.1911
Progressive Differences Convolutional Low-Density Parity-Check Codes
cs.IT math.IT
We present a new family of low-density parity-check (LDPC) convolutional codes that can be designed using ordered sets of progressive differences. We study their properties and define a subset of codes in this class that have some desirable features, such as fixed minimum distance and Tanner graphs without short cycles. The design approach we propose ensures that these properties are guaranteed independently of the code rate. This makes these codes of interest in many practical applications, particularly when high rate codes are needed for saving bandwidth. We provide some examples of coded transmission schemes exploiting this new class of codes.
1209.1960
A Comparative Study of Efficient Initialization Methods for the K-Means Clustering Algorithm
cs.LG cs.CV
K-means is undoubtedly the most widely used partitional clustering algorithm. Unfortunately, due to its gradient descent nature, this algorithm is highly sensitive to the initial placement of the cluster centers. Numerous initialization methods have been proposed to address this problem. In this paper, we first present an overview of these methods with an emphasis on their computational efficiency. We then compare eight commonly used linear time complexity initialization methods on a large and diverse collection of data sets using various performance criteria. Finally, we analyze the experimental results using non-parametric statistical tests and provide recommendations for practitioners. We demonstrate that popular initialization methods often perform poorly and that there are in fact strong alternatives to these methods.
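One classic linear-time-per-step seeding method, maximin (farthest-point) initialization, can be sketched as follows. This is a generic illustration of the kind of method such comparisons cover; the abstract does not name the eight specific methods.

```python
import numpy as np

def maximin_init(X, k, seed=0):
    """Farthest-point ('maximin') seeding: start from a random point, then
    repeatedly pick the point farthest from its nearest chosen center."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    d2 = ((X - centers[0]) ** 2).sum(axis=1)    # squared dist to nearest center
    for _ in range(k - 1):
        nxt = int(np.argmax(d2))                # farthest point so far
        centers.append(X[nxt])
        d2 = np.minimum(d2, ((X - X[nxt]) ** 2).sum(axis=1))
    return np.array(centers)

# Three well-separated blobs near 0, 10 and 20: one seed lands in each.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.1, size=(30, 2)) for c in (0.0, 10.0, 20.0)])
C = maximin_init(X, k=3)
print(sorted(np.round(C.mean(axis=1) / 10).astype(int).tolist()))  # [0, 1, 2]
```

Because each new center maximizes the distance to the current seed set, this method spreads seeds across well-separated clusters, though it is known to be sensitive to outliers.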
1209.1983
Toward a New Protocol to Evaluate Recommender Systems
cs.IR cs.PF
In this paper, we propose an approach to analyze the performance and the added value of automatic recommender systems in an industrial context. We show that recommender systems are multifaceted and can be organized around four structuring functions: helping users to decide, to compare, to discover, and to explore. A global offline protocol is then proposed to evaluate recommender systems. This protocol is based on the definition of appropriate evaluation measures for each aforementioned function. The evaluation protocol is discussed from the perspective of the usefulness and trustworthiness of the recommendation. A new measure called the Average Measure of Impact is introduced; it evaluates the impact of the personalized recommendation. We experiment with two classical methods, K-Nearest Neighbors (KNN) and Matrix Factorization (MF), on the well-known Netflix dataset. A segmentation of both users and items is proposed to finely analyze where the algorithms perform well or badly. We show that the performance is strongly dependent on the segments and that there is no clear correlation between the RMSE and the quality of the recommendation.
1209.2058
Safe and Stabilizing Distributed Multi-Path Cellular Flows
cs.RO cs.DC cs.MA cs.SY
We study the problem of distributed traffic control in the partitioned plane, where the movement of all entities (robots, vehicles, etc.) within each partition (cell) is coupled. Establishing liveness in such systems is challenging, but such analysis will be necessary to apply such distributed traffic control algorithms in applications like coordinating robot swarms and the intelligent highway system. We present a formal model of a distributed traffic control protocol that guarantees minimum separation between entities, even as some cells fail. Once new failures cease occurring, in the case of a single target, the protocol is guaranteed to self-stabilize and the entities with feasible paths to the target cell make progress towards it. For multiple targets, failures may cause deadlocks in the system, so we identify a class of non-deadlocking failures where all entities are able to make progress to their respective targets. The algorithm relies on two general principles: temporary blocking for maintenance of safety and local geographical routing for guaranteeing progress. Our assertional proofs may serve as a template for the analysis of other distributed traffic control protocols. We present simulation results that provide estimates of throughput as a function of entity velocity, safety separation, single-target path complexity, failure-recovery rates, and multi-target path complexity.
1209.2066
Data Processing Bounds for Scalar Lossy Source Codes with Side Information at the Decoder
cs.IT math.IT
In this paper, we introduce new lower bounds on the distortion of scalar fixed-rate codes for lossy compression with side information available at the receiver. These bounds are derived by presenting the relevant random variables as a Markov chain and applying generalized data processing inequalities a la Ziv and Zakai. We show that by replacing the logarithmic function with other functions, in the data processing theorem we formulate, we obtain new lower bounds on the distortion of scalar coding with side information at the decoder. The usefulness of these results is demonstrated for uniform sources and the convex function $Q(t)=t^{1-\alpha}$, $\alpha>1$. The bounds in this case are shown to be better than one can obtain from the Wyner-Ziv rate-distortion function.
1209.2070
Content-based Multi-media Retrieval Technology
cs.MM cs.IR
This paper surveys content-based image retrieval and content-based audio retrieval, two branches of content-based retrieval. Content-based retrieval is retrieval based on features of the content: generally, it extracts features from the media data and automatically finds other data with similar features in the database. Content-based retrieval works not only on discrete media such as text, but also on continuous media such as video and audio.
1209.2079
Error Rate Analysis of GF(q) Network Coded Detect-and-Forward Wireless Relay Networks Using Equivalent Relay Channel Models
cs.IT math.IT
This paper investigates simple means of analyzing the error rate performance of a general q-ary Galois Field network coded detect-and-forward cooperative relay network with known relay error statistics at the destination. Equivalent relay channels are used in obtaining an approximate error rate of the relay network, from which the diversity order is found. Error rate analyses using equivalent relay channel models are shown to be closely matched with simulation results. Using the equivalent relay channels, low complexity receivers are developed whose performances are close to that of the optimal maximum likelihood receiver.
1209.2082
Blind Image Deblurring by Spectral Properties of Convolution Operators
cs.CV
In this paper, we study the problem of recovering a sharp version of a given blurry image when the blur kernel is unknown. Previous methods often introduce an image-independent regularizer (such as Gaussian or sparse priors) on the desired blur kernel. We show that the blurry image itself encodes rich information about the blur kernel. Such information can be found by analyzing and comparing how the spectrum of an image, viewed as a convolution operator, changes before and after blurring. Our analysis leads to an effective convex regularizer on the blur kernel which depends only on the given blurry image. We show that the minimizer of this regularizer is guaranteed to give a good approximation to the blur kernel if the original image is sharp enough. By combining this powerful regularizer with conventional image deblurring techniques, we show how we can significantly improve the deblurring results through simulations and experiments on real images. In addition, our analysis and experiments help explain a widely accepted doctrine, namely that edges are good features for deblurring.
1209.2086
On Cooperative Relay Networks with Video Applications
cs.IT cs.NI math.IT
In this paper, we investigate the problem of cooperative relay in cognitive radio (CR) networks for further enhanced network performance. In particular, we focus on two representative cooperative relay strategies and develop optimal spectrum sensing and $p$-Persistent CSMA for spectrum access. We then study the problem of cooperative relay in CR networks for video streaming. We incorporate interference alignment to allow transmitters to collaboratively send encoded signals to all CR users. In the cases of a single licensed channel and of multiple licensed channels with channel bonding, we develop an optimal distributed algorithm with proven convergence and convergence speed. In the case of multiple channels without channel bonding, we develop a greedy algorithm with bounded performance.
1209.2088
Spreading Processes and Large Components in Ordered, Directed Random Graphs
math.CO cs.DM cs.SI
Order the vertices of a directed random graph $v_1,\dots,v_n$; edge $(v_i,v_j)$ for $i<j$ exists independently with probability $p$. This random graph model is related to certain spreading processes on networks. We consider the component reachable from $v_1$ and prove existence of a sharp threshold $p^*=\log n/n$ at which this reachable component transitions from $o(n)$ to $\Omega(n)$.
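Because every edge points forward in the vertex order, the component reachable from $v_1$ can be computed exactly in a single left-to-right pass, which makes the threshold easy to observe by simulation. The graph sizes and probabilities below are illustrative choices on either side of $\log n/n$.

```python
import random

def reachable_fraction(n, p, seed=0):
    """Sample the ordered directed random graph (edge (v_i, v_j) for i < j,
    present independently with prob p) and return the fraction of vertices
    reachable from v_1."""
    rng = random.Random(seed)
    reach = [False] * n
    reach[0] = True
    # One forward pass suffices: v_j is reachable iff some already-reachable
    # v_i with i < j links to it, and reach[i] is final before we reach j.
    for j in range(1, n):
        for i in range(j):
            if reach[i] and rng.random() < p:
                reach[j] = True
                break
    return sum(reach) / n

n = 2000                               # log n / n is about 0.0038 here
print(reachable_fraction(n, 0.001))    # well below threshold: tiny component
print(reachable_fraction(n, 0.015))    # well above threshold: giant component
```

Early termination of the inner loop is harmless: once $v_j$ is reachable, the remaining unsampled edges into it cannot change its reachability.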
1209.2097
Semantic web applications with regard to math and environment
cs.DL cs.IR
The following is an outline of possible strategies for using semantic web techniques and mathematics with regard to environmental issues. The article uses concrete examples and applications, and in part provides a rather basic treatment of semantic web techniques and mathematics in order to address a broader audience.
1209.2137
Decoding billions of integers per second through vectorization
cs.IR cs.DB
In many important applications -- such as search engines and relational database systems -- data is stored in the form of arrays of integers. Encoding and, most importantly, decoding of these arrays consumes considerable CPU time. Therefore, substantial effort has been made to reduce costs associated with compression and decompression. In particular, researchers have exploited the superscalar nature of modern processors and SIMD instructions. Nevertheless, we introduce a novel vectorized scheme called SIMD-BP128 that improves over previously proposed vectorized approaches. It is nearly twice as fast as the previously fastest schemes on desktop processors (varint-G8IU and PFOR). At the same time, SIMD-BP128 saves up to 2 bits per integer. For even better compression, we propose another new vectorized scheme (SIMD-FastPFOR) that has a compression ratio within 10% of a state-of-the-art scheme (Simple-8b) while being two times faster during decoding.
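The binary-packing idea underlying such schemes can be illustrated with a scalar (non-SIMD) sketch: store each integer in exactly b bits instead of a full 32-bit word. This is not the SIMD-BP128 layout itself, just the compression principle it vectorizes.

```python
def pack(values, b):
    """Pack non-negative integers, each fitting in b bits, into bytes
    (little-endian bit order)."""
    buf, acc, nbits = bytearray(), 0, 0
    for v in values:
        assert 0 <= v < (1 << b), "value does not fit in b bits"
        acc |= v << nbits          # append b bits to the accumulator
        nbits += b
        while nbits >= 8:          # flush whole bytes
            buf.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:                      # flush the final partial byte
        buf.append(acc & 0xFF)
    return bytes(buf)

def unpack(data, b, count):
    """Inverse of pack: read `count` b-bit integers back out."""
    acc = int.from_bytes(data, "little")
    return [(acc >> (i * b)) & ((1 << b) - 1) for i in range(count)]

vals = [3, 7, 1, 0, 6, 2, 5, 4]
packed = pack(vals, b=3)           # 8 x 3 = 24 bits -> 3 bytes, not 32
print(len(packed))                 # 3
print(unpack(packed, 3, len(vals)) == vals)  # True
```

Vectorized schemes decode many such b-bit lanes per SIMD instruction, which is where the order-of-magnitude speedups over scalar decoding come from.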
1209.2138
Optimality Properties, Distributed Strategies, and Measurement-Based Evaluation of Coordinated Multicell OFDMA Transmission
cs.IT math.IT
The throughput of multicell systems is inherently limited by interference and the available communication resources. Coordinated resource allocation is the key to efficient performance, but the demand on backhaul signaling and computational resources grows rapidly with number of cells, terminals, and subcarriers. To handle this, we propose a novel multicell framework with dynamic cooperation clusters where each terminal is jointly served by a small set of base stations. Each base station coordinates interference to neighboring terminals only, thus limiting backhaul signalling and making the framework scalable. This framework can describe anything from interference channels to ideal joint multicell transmission. The resource allocation (i.e., precoding and scheduling) is formulated as an optimization problem (P1) with performance described by arbitrary monotonic functions of the signal-to-interference-and-noise ratios (SINRs) and arbitrary linear power constraints. Although (P1) is non-convex and difficult to solve optimally, we are able to prove: 1) Optimality of single-stream beamforming; 2) Conditions for full power usage; and 3) A precoding parametrization based on a few parameters between zero and one. These optimality properties are used to propose low-complexity strategies: both a centralized scheme and a distributed version that only requires local channel knowledge and processing. We evaluate the performance on measured multicell channels and observe that the proposed strategies achieve close-to-optimal performance among centralized and distributed solutions, respectively. In addition, we show that multicell interference coordination can give substantial improvements in sum performance, but that joint transmission is very sensitive to synchronization errors and that some terminals can experience performance degradations.
1209.2139
Fused Multiple Graphical Lasso
cs.LG stat.ML
In this paper, we consider the problem of estimating multiple graphical models simultaneously using the fused lasso penalty, which encourages adjacent graphs to share similar structures. A motivating example is the analysis of brain networks of Alzheimer's disease using neuroimaging data. Specifically, we may wish to estimate a brain network for the normal controls (NC), a brain network for the patients with mild cognitive impairment (MCI), and a brain network for Alzheimer's patients (AD). We expect the two brain networks for NC and MCI to share common structures but not to be identical to each other; similarly for the two brain networks for MCI and AD. The proposed formulation can be solved using a second-order method. Our key technical contribution is to establish the necessary and sufficient condition for the graphs to be decomposable. Based on this key property, a simple screening rule is presented, which decomposes the large graphs into small subgraphs and allows an efficient estimation of multiple independent (small) subgraphs, dramatically reducing the computational cost. We perform experiments on both synthetic and real data; our results demonstrate the effectiveness and efficiency of the proposed approach.
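A standard way to write this estimation problem (our notation; the regularization structure is the one commonly used in the fused graphical lasso literature, not necessarily the paper's exact formulation) for precision matrices $\Theta^{(1)},\dots,\Theta^{(K)}$ of $K$ ordered groups (e.g. NC, MCI, AD) with sample covariances $S^{(k)}$:

```latex
\min_{\Theta^{(1)},\dots,\Theta^{(K)} \succ 0}\;
\sum_{k=1}^{K} \Big( \operatorname{tr}\big(S^{(k)} \Theta^{(k)}\big)
  - \log\det \Theta^{(k)} \Big)
+ \lambda_1 \sum_{k=1}^{K} \sum_{i \neq j} \big|\Theta^{(k)}_{ij}\big|
+ \lambda_2 \sum_{k=1}^{K-1} \sum_{i,j}
  \big|\Theta^{(k)}_{ij} - \Theta^{(k+1)}_{ij}\big|
```

Here $\lambda_1$ controls sparsity within each graph, while the fused penalty weighted by $\lambda_2$ shrinks corresponding entries of adjacent precision matrices toward each other, encouraging the NC/MCI and MCI/AD networks to share structure without forcing them to be identical.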
1209.2163
Modeling controversies in the press: the case of the abnormal bees' death
physics.soc-ph cs.CL
The controversy about the cause(s) of the abnormal death of bee colonies in France is investigated through an extensive analysis of the French-speaking press. A statistical analysis of textual data is first performed on the lexicon used by journalists to describe the facts and to present associated information during the period 1998-2010. Three states are identified to explain the phenomenon. The first state asserts a unique cause, the second one focuses on multifactor causes, and the third one states the absence of current proof. Assigning each article to one of the three states, we are able to follow the associated opinion dynamics among the journalists over 13 years. Then, we apply the Galam sequential probabilistic model of opinion dynamics to those data. Assuming journalists are either open-minded or inflexible about their respective opinions, the results are reproduced precisely, provided we account for a series of annual changes in the proportions of respective inflexibles. The results shed a new, counterintuitive light on the various pressures supposedly applied to the journalists by chemical industries, beekeepers, experts, or politicians. The obtained dynamics of respective inflexibles shows the possible effect of lobbying, the inertia of the debate, and the net advantage gained by the first whistleblowers.
1209.2177
On-off Threshold Models of Social Contagion
physics.soc-ph cs.SI
We study binary state contagion dynamics on a social network where nodes act in response to the average state of their neighborhood. We model the competing tendencies of imitation and non-conformity by incorporating an off-threshold into standard threshold models of behavior. In this way, we attempt to capture important aspects of fashions and general societal trends. Allowing varying amounts of stochasticity in both the network and node responses, we find different outcomes in the random and deterministic versions of the model. In the limit of a large, dense network, however, we show that these dynamics coincide. The dynamical behavior of the system ranges from steady state to chaotic depending on network connectivity and update synchronicity. We construct a mean field theory for general random networks. In the undirected case, the mean field theory predicts that the dynamics on the network are a smoothed version of the average node response dynamics. We compare our theory to extensive simulations on Poisson random graphs with node responses that average to the chaotic tent map.
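A minimal sketch of the kind of dynamics described above (thresholds, graph size, and mean degree are illustrative choices, not the paper's parameters): under synchronous updates, a node becomes active when the active fraction of its neighborhood exceeds an on-threshold, but switches off again once that fraction exceeds an off-threshold, capturing imitation versus non-conformity.

```python
import random

def step(state, neighbors, t_on=0.3, t_off=0.7):
    """One synchronous update of an on-off threshold model.

    A node is active at the next step iff the active fraction of its
    neighborhood lies in [t_on, t_off): below t_on it conforms to the
    inactive majority, at or above t_off it switches off again
    (non-conformity).  Thresholds here are illustrative only.
    """
    new = {}
    for v, nbrs in neighbors.items():
        if not nbrs:
            new[v] = state[v]  # isolated nodes keep their state
            continue
        frac = sum(state[u] for u in nbrs) / len(nbrs)
        new[v] = 1 if t_on <= frac < t_off else 0
    return new

def poisson_graph(n, mean_degree, seed=0):
    """Erdos-Renyi random graph with the given expected degree."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)
    nbrs = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                nbrs[u].add(v)
                nbrs[v].add(u)
    return nbrs

g = poisson_graph(200, 10)
rng = random.Random(1)
state = {v: rng.randint(0, 1) for v in range(200)}
for _ in range(5):  # synchronous updates
    state = step(state, g)
```

Replacing the synchronous sweep with single-node (asynchronous) updates, or adding noise to the node responses, reproduces the kind of random-versus-deterministic comparison the abstract discusses.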
1209.2178
Continuous Queries for Multi-Relational Graphs
cs.DB cs.SI
Acting on time-critical events by processing ever growing social media or news streams is a major technical challenge. Many of these data sources can be modeled as multi-relational graphs. Continuous queries or techniques to search for rare events that typically arise in monitoring applications have been studied extensively for relational databases. This work is dedicated to answer the question that emerges naturally: how can we efficiently execute a continuous query on a dynamic graph? This paper presents an exact subgraph search algorithm that exploits the temporal characteristics of representative queries for online news or social media monitoring. The algorithm is based on a novel data structure called the Subgraph Join Tree (SJ-Tree) that leverages the structural and semantic characteristics of the underlying multi-relational graph. The paper concludes with extensive experimentation on several real-world datasets that demonstrates the validity of this approach.
1209.2179
Downlink Noncoherent Cooperation without Transmitter Phase Alignment
cs.IT math.IT
Multicell joint processing can mitigate inter-cell interference and thereby increase the spectral efficiency of cellular systems. Most previous work has assumed phase-aligned (coherent) transmissions from different base transceiver stations (BTSs), which is difficult to achieve in practice. In this work, a noncoherent cooperative transmission scheme for the downlink is studied, which does not require phase alignment. The focus is on jointly serving two users in adjacent cells sharing the same resource block. The two BTSs partially share their messages through a backhaul link, and each BTS transmits a superposition of two codewords, one for each receiver. Each receiver decodes its own message, and treats the signals for the other receiver as background noise. With narrowband transmissions the achievable rate region and maximum achievable weighted sum rate are characterized by optimizing the power allocation (and the beamforming vectors in the case of multiple transmit antennas) at each BTS between its two codewords. For a wideband (multicarrier) system, a dual formulation of the optimal power allocation problem across sub-carriers is presented, which can be efficiently solved by numerical methods. Results show that the proposed cooperation scheme can improve the sum rate substantially in the low to moderate signal-to-noise ratio (SNR) range.
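A toy numerical sketch of the narrowband power-allocation step (all channel gains, the noise level, and the power-adds model of noncoherent combining are illustrative assumptions, not the paper's optimization): each BTS splits its power between the two users' codewords, each receiver treats the other user's codewords as noise, and a grid search stands in for the paper's characterization of the optimal allocation.

```python
import math
from itertools import product

def sum_rate(a1, a2, P=1.0, noise=0.1,
             g=((1.0, 0.3), (0.25, 1.0))):
    """Toy sum rate for two-cell noncoherent superposition coding.

    BTS i spends a_i*P on user 1's codeword and (1-a_i)*P on user 2's.
    Without phase alignment, useful received powers simply add, and
    each receiver treats the other user's codewords as noise.
    Channel gains g[i][k] (BTS i -> user k) are made up for the sketch.
    """
    s1 = a1 * P * g[0][0] + a2 * P * g[1][0]              # signal at user 1
    i1 = (1 - a1) * P * g[0][0] + (1 - a2) * P * g[1][0]  # interference
    s2 = (1 - a1) * P * g[0][1] + (1 - a2) * P * g[1][1]
    i2 = a1 * P * g[0][1] + a2 * P * g[1][1]
    r1 = math.log2(1 + s1 / (noise + i1))
    r2 = math.log2(1 + s2 / (noise + i2))
    return r1 + r2

# Brute-force the power splits (a coarse stand-in for the optimized
# allocation; the paper instead characterizes the optimum directly).
grid = [i / 50 for i in range(51)]
best = max(product(grid, grid), key=lambda a: sum_rate(*a))
```

With these particular gains the search pushes each BTS to favor its own cell's user, which is the intuitive behavior when cross-cell gains are weak.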
1209.2191
MapReduce is Good Enough? If All You Have is a Hammer, Throw Away Everything That's Not a Nail!
cs.DC cs.DB
Hadoop is currently the large-scale data analysis "hammer" of choice, but there exist classes of algorithms that aren't "nails", in the sense that they are not particularly amenable to the MapReduce programming model. To address this, researchers have proposed MapReduce extensions or alternative programming models in which these algorithms can be elegantly expressed. This essay espouses a very different position: that MapReduce is "good enough", and that instead of trying to invent screwdrivers, we should simply get rid of everything that's not a nail. To be more specific, much discussion in the literature surrounds the fact that iterative algorithms are a poor fit for MapReduce: the simple solution is to find alternative non-iterative algorithms that solve the same problem. This essay captures my personal experiences as an academic researcher as well as a software engineer in a "real-world" production analytics environment. From this combined perspective I reflect on the current state and future of "big data" research.
1209.2192
Power Allocation for Conventional and Buffer-Aided Link Adaptive Relaying Systems with Energy Harvesting Nodes
cs.IT math.IT
Energy harvesting (EH) nodes can play an important role in cooperative communication systems which do not have a continuous power supply. In this paper, we consider the optimization of conventional and buffer-aided link adaptive EH relaying systems, where an EH source communicates with the destination via an EH decode-and-forward relay. In conventional relaying, source and relay transmit signals in consecutive time slots, whereas in buffer-aided link adaptive relaying, the state of the source-relay and relay-destination channels determines whether the source or the relay is selected for transmission. Our objective is to maximize the system throughput over a finite number of transmission time slots for both relaying protocols. In the case of conventional relaying, we propose an offline and several online joint source and relay transmit power allocation schemes. For offline power allocation, we formulate an optimization problem which can be solved optimally. For the online case, we propose a dynamic programming (DP) approach to compute the optimal online transmit power. To alleviate the complexity inherent to DP, we also propose several suboptimal online power allocation schemes. For buffer-aided link adaptive relaying, we show that the joint offline optimization of the source and relay transmit powers along with the link selection results in a mixed integer non-linear program, which we solve optimally using the spatial branch-and-bound method. We also propose an efficient online power allocation scheme and a naive online power allocation scheme for buffer-aided link adaptive relaying. Our results show that link adaptive relaying provides performance improvement over conventional relaying at the expense of a higher computational complexity.
1209.2194
Cooperative learning in multi-agent systems from intermittent measurements
math.OC cs.LG cs.MA cs.SY
Motivated by the problem of tracking a direction in a decentralized way, we consider the general problem of cooperative learning in multi-agent systems with time-varying connectivity and intermittent measurements. We propose a distributed learning protocol capable of learning an unknown vector $\mu$ from noisy measurements made independently by autonomous nodes. Our protocol is completely distributed and able to cope with the time-varying, unpredictable, and noisy nature of inter-agent communication, and intermittent noisy measurements of $\mu$. Our main result bounds the learning speed of our protocol in terms of the size and combinatorial features of the (time-varying) networks connecting the nodes.
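A scalar sketch in the spirit of such a protocol (all parameters are illustrative, and this is not the paper's exact update rule, which learns a vector $\mu$ and bounds the learning speed in terms of network features): each round a fresh random communication graph models time-varying connectivity, every node averages its estimate with its current neighbors, and with some probability a node also takes a decaying step toward a noisy measurement of $\mu$.

```python
import math
import random

def learn(mu, n=20, steps=400, p_meas=0.2, p_link=0.3,
          noise=0.1, seed=0):
    """Sketch of cooperative learning with intermittent measurements.

    Each round: (1) a random (directed) communication graph is
    redrawn, modeling unpredictable time-varying connectivity, and
    every node averages with its current neighbors; (2) with
    probability p_meas a node takes a step of size 1/sqrt(t) toward
    a noisy measurement of mu.  Parameters are illustrative only.
    """
    rng = random.Random(seed)
    x = [rng.uniform(-5.0, 5.0) for _ in range(n)]
    for t in range(1, steps + 1):
        # consensus step over the current random graph
        new = []
        for i in range(n):
            nbr_vals = [x[j] for j in range(n)
                        if j != i and rng.random() < p_link]
            new.append((x[i] + sum(nbr_vals)) / (1 + len(nbr_vals)))
        x = new
        # intermittent noisy measurements, decaying step size
        for i in range(n):
            if rng.random() < p_meas:
                y = mu + rng.gauss(0.0, noise)
                x[i] += (y - x[i]) / math.sqrt(t)
    return x

est = learn(mu=2.0)
```

Averaging spreads measurement information through the network, so even nodes that rarely measure converge to a neighborhood of $\mu$; how fast this happens is exactly what the paper's main result bounds in terms of the networks' combinatorial features.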
1209.2204
How is non-knowledge represented in economic theory?
q-fin.GN cs.AI stat.AP
In this article, we address the question of how non-knowledge about future events that influence economic agents' decisions in choice settings has been formally represented in economic theory to date. To position our discussion within the ongoing debate on uncertainty, we provide a brief review of historical developments in economic theory and decision theory on the description of economic agents' choice behaviour under conditions of uncertainty, understood as either (i) ambiguity, or (ii) unawareness. Accordingly, we identify and discuss two approaches to the formalisation of non-knowledge: one based on decision-making in the context of a state space representing the exogenous world, as in Savage's axiomatisation and some successor concepts (ambiguity as situations with unknown probabilities), and one based on decision-making over a set of menus of potential future opportunities, providing the possibility of derivation of agents' subjective state spaces (unawareness as situations with imperfect subjective knowledge of all possible future events). We also discuss remaining challenges in the formalisation of non-knowledge.
1209.2262
A single-photon sampling architecture for solid-state imaging
cs.IT math.IT physics.ins-det
Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization result of a 120x120 photodiode sensor on a 30um x 30um pitch with a 40ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings.
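A toy illustration of the multiplexing idea (tiny dimensions and a made-up wiring matrix, nothing like the paper's optimized 161-TDC design): each pixel is wired to the subset of TDC lines given by a column of a binary interconnection matrix, a firing asserts all of its lines (an OR-superposition), and a set of simultaneous arrivals is uniquely decodable exactly when every union of at most d columns produces a distinct line pattern.

```python
from itertools import combinations

def readout(matrix, fired):
    """OR-superposition: the TDC lines asserted by the fired pixels."""
    lines = set()
    for p in fired:
        lines |= matrix[p]
    return frozenset(lines)

def uniquely_decodable(matrix, d):
    """True iff every set of at most d simultaneous pixel firings
    yields a distinct TDC line pattern, i.e. readout can be inverted."""
    seen = {}
    pixels = range(len(matrix))
    for k in range(1, d + 1):
        for fired in combinations(pixels, k):
            sig = readout(matrix, fired)
            if sig in seen and seen[sig] != set(fired):
                return False
            seen[sig] = set(fired)
    return True

# A tiny binary interconnection matrix: pixel p is wired to the TDC
# lines in matrix[p].  Here 6 pixels share only 4 TDC lines.
matrix = [{0, 1}, {0, 2}, {0, 3}, {1, 2}, {1, 3}, {2, 3}]
one_hot = [{p} for p in range(6)]  # wasteful baseline: 6 lines
```

This particular shared wiring decodes single arrivals with 4 lines instead of 6, but collides for pairs (e.g. firing pixels {0,1} and firing pixels {1,3} both assert lines {0,1,2}), which is exactly the trade-off the paper's optimized matrices and TDC bounds address for larger d.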
1209.2274
PCA-Based Relevance Feedback in Document Image Retrieval
cs.IR
Research has been devoted in the past few years to relevance feedback as an effective solution to improve the performance of information retrieval systems. Relevance feedback refers to an interactive process that helps to improve retrieval performance. In this paper we propose the use of relevance feedback to improve the performance of a document image retrieval system (DIRS). This paper compares a variety of strategies for positive and negative feedback. In addition, a feature subspace is extracted and updated during the feedback process using a Principal Component Analysis (PCA) technique, based on the user's feedback. That is, in addition to reducing the dimensionality of the feature spaces, a proper subspace for each type of feature is obtained in the feedback process to further improve the retrieval accuracy. Experiments show that using relevance feedback in the DIRS achieves better performance than common document image retrieval approaches.