Dataset columns: id (string, 9-16 chars), title (string, 4-278 chars), categories (string, 5-104 chars), abstract (string, 6-4.09k chars).
1311.6916
Spectral Compressive Sensing with Model Selection
cs.IT math.IT
The performance of existing approaches to the recovery of frequency-sparse signals from compressed measurements is limited by the coherence of the required sparsity dictionaries and by the discretization of the frequency parameter space. In this paper, we adopt a parametric joint recovery-estimation method based on model selection in spectral compressive sensing. Numerical experiments show that our approach outperforms most state-of-the-art spectral CS recovery approaches in fidelity, noise tolerance and computational efficiency.
1311.6932
A novel framework for image forgery localization
cs.CV
Image forgery localization is a very active and open research field owing to the difficulty of handling the large variety of manipulations a malicious user can perform by means of increasingly sophisticated image editing tools. Here, we propose a localization framework based on the fusion of three very different tools, based, respectively, on sensor noise, patch matching, and machine learning. The binary masks provided by these tools are finally fused based on suitable reliability indexes. According to preliminary experiments on the training set, the proposed framework often provides very good localization accuracy and sometimes valuable clues for visual scrutiny.
1311.6934
Image forgery detection based on the fusion of machine learning and block-matching methods
cs.CV
Dense local descriptors and machine learning have been used with success in several applications, such as texture classification, steganalysis, and forgery detection. We develop a new image forgery detector building upon descriptors recently proposed in the steganalysis field, suitably merging several of them and optimizing an SVM classifier on the available training set. Despite the very good overall performance, very small forgeries are hardly ever detected because they contribute very little to the descriptors. We therefore also develop a simple but highly specific copy-move detector based on region matching, and fuse the two decisions so as to reduce the missed detection rate. Overall results appear to be extremely encouraging.
1311.6976
Dimensionality reduction for click-through rate prediction: Dense versus sparse representation
stat.ML cs.LG stat.AP stat.ME
In online advertising, display ads are increasingly placed via real-time auctions in which the winning advertiser gets to serve the ad. This is called real-time bidding (RTB). RTB auctions impose very tight time constraints, on the order of 100 ms, so mechanisms for bidding intelligently, such as click-through rate prediction, need to be sufficiently fast. In this work, we propose dimensionality reduction of the user-website interaction graph to produce simplified features of users and websites that can be used as predictors of click-through rate. We demonstrate that the Infinite Relational Model (IRM), used as a dimensionality reduction, offers predictive performance comparable to conventional dimensionality reduction schemes while achieving the most economical use of features and the fastest computation at run time. For applications such as real-time bidding, where fast database I/O and few computations are key to success, we thus recommend IRM-based features as predictors to exploit the recommender effects of bipartite graphs.
1311.7038
Group Coding with Complex Isometries
math.CO cs.IT math.IT math.RT
We investigate group coding for arbitrary finite groups acting linearly on a vector space. These yield robust codes based on real or complex matrix groups. We give necessary and sufficient conditions for correct subgroup decoding using geometric notions of minimal length coset representatives. The infinite family of complex reflection groups G(r,1,n) produces effective codes of arbitrarily large size that can be decoded in relatively few steps.
1311.7045
Phase retrieval from low-rate samples
cs.IT math.IT
The paper considers the phase retrieval problem in N-dimensional complex vector spaces. It provides two sets of deterministic measurement vectors which guarantee signal recovery for all signals, excluding only a specific subspace and a union of subspaces, respectively. A stable analytic reconstruction procedure of low complexity is given. Additionally it is proven that signal recovery from these measurements can be solved exactly via a semidefinite program. A practical implementation with 4 deterministic diffraction patterns is provided and some numerical experiments with noisy measurements complement the analytic approach.
1311.7071
Sparse Linear Dynamical System with Its Application in Multivariate Clinical Time Series
cs.AI cs.LG stat.ML
Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning multivariate time series. However, in general, it is difficult to set the dimension of its hidden state space. A small number of hidden states may not be able to model the complexities of a time series, while a large number of hidden states can lead to overfitting. In this paper, we study methods that impose an $\ell_1$ regularization on the transition matrix of an LDS model to alleviate the problem of choosing the optimal number of hidden states. We incorporate a generalized gradient descent method into the Maximum a Posteriori (MAP) framework and use Expectation Maximization (EM) to iteratively achieve sparsity on the transition matrix of an LDS model. We show that our Sparse Linear Dynamical System (SLDS) improves the predictive performance when compared to ordinary LDS on a multivariate clinical time series dataset.
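A minimal sketch of the core computational idea: a proximal (generalized) gradient step with element-wise soft-thresholding on the transition matrix. The function names, the plain least-squares loss, and all defaults are illustrative simplifications, not the paper's exact MAP/EM procedure:

```python
import numpy as np

def soft_threshold(M, tau):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def sparse_transition_fit(Z, lam=0.1, step=None, iters=200):
    """Fit a sparse transition matrix A minimizing
        0.5 * ||Z[1:] - Z[:-1] @ A.T||_F^2 + lam * ||A||_1
    by proximal gradient descent. Z holds state estimates, one row per time step."""
    X, Y = Z[:-1], Z[1:]                  # predictors and one-step-ahead targets
    d = Z.shape[1]
    if step is None:
        # inverse Lipschitz constant of the smooth part's gradient
        step = 1.0 / (np.linalg.norm(X.T @ X, 2) + 1e-12)
    A = np.zeros((d, d))
    for _ in range(iters):
        grad = (A @ X.T - Y.T) @ X        # gradient of the squared loss w.r.t. A
        A = soft_threshold(A - step * grad, step * lam)
    return A
```

Within an EM loop, `Z` would be replaced by the smoothed hidden-state estimates from the E-step.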
1311.7080
Cross-Domain Sparse Coding
cs.CV stat.ML
Sparse coding has shown its power as an effective data representation method. However, up to now, sparse coding approaches have been limited to the single-domain learning problem. In this paper, we extend sparse coding to the cross-domain learning problem, which tries to learn from a source domain for a target domain with a significantly different distribution. We impose the Maximum Mean Discrepancy (MMD) criterion to reduce the cross-domain distribution difference of the sparse codes, and also regularize the sparse codes by the class labels of the samples from both domains to increase their discriminative ability. The encouraging experimental results of the proposed cross-domain sparse coding algorithm on two challenging tasks (image classification across photograph and oil-painting domains, and multiple-user spam detection) show its advantage over other cross-domain data representation methods.
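The MMD criterion on sparse codes can be illustrated, under a linear-kernel simplification, as a penalty on the distance between the two domains' mean codes (the paper's exact kernel and formulation may differ):

```python
import numpy as np

def mmd_squared(S_src, S_tgt):
    """Squared Maximum Mean Discrepancy with a linear kernel: the squared
    Euclidean distance between the mean sparse codes of the two domains.
    Adding this as a penalty pulls the code distributions together."""
    diff = S_src.mean(axis=0) - S_tgt.mean(axis=0)
    return float(diff @ diff)
```

In the cross-domain objective, this term would be weighted and added to the reconstruction and label-regularization losses.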
1311.7084
Explicit rank-metric codes list-decodable with optimal redundancy
cs.IT cs.CC cs.DM math.IT
We construct an explicit family of linear rank-metric codes over any field ${\mathbb F}_h$ that enables efficient list decoding up to a fraction $\rho$ of errors in the rank metric with a rate of $1-\rho-\epsilon$, for any desired $\rho \in (0,1)$ and $\epsilon > 0$. Previously, a Monte Carlo construction of such codes was known, but this is in fact the first explicit construction of positive rate rank-metric codes for list decoding beyond the unique decoding radius. Our codes are subcodes of the well-known Gabidulin codes, which encode linearized polynomials of low degree via their values at a collection of linearly independent points. The subcode is picked by restricting the message polynomials to an ${\mathbb F}_h$-subspace that evades the structured subspaces over an extension field ${\mathbb F}_{h^t}$ that arise in the linear-algebraic list decoder for Gabidulin codes due to Guruswami and Xing (STOC'13). This subspace is obtained by combining subspace designs constructed by Guruswami and Kopparty (FOCS'13) with subspace evasive varieties due to Dvir and Lovett (STOC'12). We establish a similar result for subspace codes, which are a collection of subspaces, every pair of which has low-dimensional intersection, and which have received much attention recently in the context of network coding. We also give explicit subcodes of folded Reed-Solomon (RS) codes with small folding order that are list-decodable (in the Hamming metric) with optimal redundancy, motivated by the fact that list decoding RS codes reduces to list decoding such folded RS codes. However, as we only list decode a subcode of these codes, the Johnson radius continues to be the best known error fraction for list decoding RS codes.
1311.7113
Systematic Codes for Rank Modulation
cs.IT math.IT
The goal of this paper is to construct systematic error-correcting codes for permutations and multi-permutations in the Kendall's $\tau$-metric. These codes are important in new applications such as rank modulation for flash memories. The construction is based on error-correcting codes for multi-permutations and a partition of the set of permutations into error-correcting codes. For a given large enough number of information symbols $k$, and for any integer $t$, we present a construction for ${(k+r,k)}$ systematic $t$-error-correcting codes, for permutations from $S_{k+r}$, with less redundancy symbols than the number of redundancy symbols in the codes of the known constructions. In particular, for a given $t$ and for sufficiently large $k$ we can obtain $r=t+1$. The same construction is also applied to obtain related systematic error-correcting codes for multi-permutations.
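For reference, the Kendall's $\tau$ metric between two permutations (the distance in which these codes correct errors) counts pairwise order disagreements; a straightforward O(n^2) sketch:

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Number of pairwise order disagreements between permutations p and q,
    i.e. the Kendall's tau metric used in rank modulation."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]          # p expressed in q's coordinate system
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])
```

Equivalently, it is the minimum number of adjacent transpositions needed to turn one permutation into the other.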
1311.7139
Introduction to Neutrosophic Measure, Neutrosophic Integral, and Neutrosophic Probability
cs.AI
In this paper, we introduce for the first time the notions of neutrosophic measure and neutrosophic integral, and we develop the 1995 notion of neutrosophic probability. We present many practical examples. It is possible to define the neutrosophic measure, and consequently the neutrosophic integral and neutrosophic probability, in many ways, because there are various types of indeterminacy depending on the problem to be solved. Neutrosophy studies indeterminacy. Indeterminacy is different from randomness: it can be caused by the physical space, the materials and type of construction, by the items involved in the space, etc.
1311.7183
Knowledge-Aided STAP Using Low Rank and Geometry Properties
cs.IT math.IT
This paper presents knowledge-aided space-time adaptive processing (KA-STAP) algorithms that exploit the low-rank dominant clutter and the array geometry properties (LRGP) for airborne radar applications. The core idea is to exploit the fact that the clutter subspace is determined only by the space-time steering vectors, and to employ the Gram-Schmidt orthogonalization approach to compute the clutter subspace. Specifically, for a side-looking uniformly spaced linear array, the algorithm first selects a group of linearly independent space-time steering vectors using LRGP that can represent the clutter subspace. By performing the Gram-Schmidt orthogonalization procedure, the orthogonal bases of the clutter subspace are obtained, followed by two approaches to compute the STAP filter weights. To overcome the performance degradation caused by non-ideal effects, a KA-STAP algorithm that combines the covariance matrix taper (CMT) is proposed. For practical applications, a reduced-dimension version of the proposed KA-STAP algorithm is also developed. The simulation results illustrate the effectiveness of our proposed algorithms and show that they converge rapidly and provide an SINR improvement over existing methods when using a very small number of snapshots.
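The Gram-Schmidt step of the described pipeline can be sketched as below; the steering-vector construction itself is omitted, and the matrix `V` simply stands for a set of candidate space-time steering vectors stacked as columns:

```python
import numpy as np

def gram_schmidt(V, tol=1e-10):
    """Orthonormalize the columns of a complex matrix V (e.g. clutter
    space-time steering vectors) via modified Gram-Schmidt, dropping
    columns that are numerically dependent on earlier ones."""
    Q = []
    for v in V.T:
        w = v.astype(complex).copy()
        for q in Q:
            w -= (np.conj(q) @ w) * q    # remove the component along q
        n = np.linalg.norm(w)
        if n > tol:                      # keep only independent directions
            Q.append(w / n)
    return np.column_stack(Q)
```

The resulting columns form an orthonormal basis of the clutter subspace, from which the STAP filter weights can be computed.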
1311.7184
Using Multiple Samples to Learn Mixture Models
stat.ML cs.LG
In the mixture models problem it is assumed that there are $K$ distributions $\theta_{1},\ldots,\theta_{K}$ and one gets to observe a sample from a mixture of these distributions with unknown coefficients. The goal is to associate instances with their generating distributions, or to identify the parameters of the hidden distributions. In this work we make the assumption that we have access to several samples drawn from the same $K$ underlying distributions, but with different mixing weights. As with topic modeling, having multiple samples is often a reasonable assumption. Instead of pooling the data into one sample, we prove that it is possible to use the differences between the samples to better recover the underlying structure. We present algorithms that recover the underlying structure under milder assumptions than the current state of the art when either the dimensionality or the separation is high. The methods, when applied to topic modeling, allow generalization to words not present in the training data.
1311.7186
A Novel Illumination-Invariant Loss for Monocular 3D Pose Estimation
cs.CV
The problem of identifying the 3D pose of a known object from a given 2D image has important applications in Computer Vision. Our proposed method of registering a 3D model of a known object on a given 2D photo of the object has numerous advantages over existing methods. It does not require prior training, knowledge of the camera parameters, explicit point correspondences or matching features between the image and model. Unlike techniques that estimate a partial 3D pose (as in an overhead view of traffic or machine parts on a conveyor belt), our method estimates the complete 3D pose of the object. It works on a single static image from a given view under varying and unknown lighting conditions. For this purpose we derive a novel illumination-invariant distance measure between the 2D photo and projected 3D model, which is then minimised to find the best pose parameters. Results for vehicle pose detection in real photographs are presented.
1311.7194
Real-time High Resolution Fusion of Depth Maps on GPU
cs.GR cs.CV
A system for live high-quality surface reconstruction using a single moving depth camera on commodity hardware is presented. High accuracy and a real-time frame rate are achieved by utilizing graphics hardware computing capabilities via OpenCL and by using a sparse data structure for volumetric surface representation. The depth sensor pose is estimated by combining a serial texture registration algorithm with an iterative closest point (ICP) algorithm that aligns the obtained depth map to the estimated scene model. The aligned surface is then fused into the scene, with a Kalman filter used to improve fusion quality. A truncated signed distance function (TSDF) stored as a block-based sparse buffer is used to represent the surface. The use of a sparse data structure greatly increases the accuracy of scanned surfaces and the maximum scanning area. Traditional GPU implementations of volumetric rendering and fusion algorithms were modified to exploit sparsity and achieve the desired performance. The incorporation of texture registration for sensor pose estimation and of a Kalman filter for measurement integration improved the accuracy and robustness of the scanning process.
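The per-voxel TSDF fusion step can be sketched as the standard weighted running average; this is a simplified stand-in, since the paper refines measurement integration with a Kalman filter:

```python
import numpy as np

def tsdf_update(tsdf, weight, new_sdf, new_weight=1.0, max_weight=64.0):
    """Fuse a new truncated signed distance observation into a voxel using
    the classic weighted-average rule. The stored weight is clamped so the
    model can still adapt to scene changes after many observations."""
    tsdf_new = (tsdf * weight + new_sdf * new_weight) / (weight + new_weight)
    w_new = np.minimum(weight + new_weight, max_weight)
    return tsdf_new, w_new
```

Applied with array arguments, the same function updates an entire voxel block at once, which is where the GPU parallelism pays off.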
1311.7198
ADMM Algorithm for Graphical Lasso with an $\ell_{\infty}$ Element-wise Norm Constraint
cs.LG math.OC stat.ML
We consider the problem of graphical lasso with an additional $\ell_{\infty}$ element-wise norm constraint on the precision matrix. This problem has applications in high-dimensional covariance decomposition, such as in \citep{Janzamin-12}. We propose an ADMM algorithm to solve this problem. We also use a continuation strategy on the penalty parameter to obtain a fast implementation of the algorithm.
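A sketch of one plausible form of the ADMM iteration, assuming the constraint is $|\Theta_{ij}| \le \alpha$ element-wise (the paper's exact constraint may differ). The Theta-step is the standard closed-form eigen-update for the log-det term, and the Z-step combines soft-thresholding with clipping onto the $\ell_\infty$ ball, since both terms are separable element-wise:

```python
import numpy as np

def glasso_linf_admm(S, lam, alpha, rho=1.0, iters=300):
    """ADMM sketch for
        min_Theta  -logdet(Theta) + tr(S @ Theta) + lam * ||Theta||_1
        s.t.       max_ij |Theta_ij| <= alpha
    using the splitting Theta = Z with scaled dual variable U."""
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(iters):
        # Theta-step: solve rho*Theta - Theta^{-1} = rho*(Z - U) - S
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        theta_eig = (w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)
        Theta = (Q * theta_eig) @ Q.T
        # Z-step: soft-threshold, then project onto the l_inf ball
        V = Theta + U
        Z = np.clip(np.sign(V) * np.maximum(np.abs(V) - lam / rho, 0.0),
                    -alpha, alpha)
        U += Theta - Z
    return Z
```

Note that a very small `alpha` can make the constrained problem infeasible as a positive-definite matrix; the continuation strategy on the penalty parameter mentioned in the abstract is not reproduced here.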
1311.7200
Searching and Establishment of S-P-O Relationships for Linked RDF Graphs : An Adaptive Approach
cs.IR cs.DB
In the coming era of the semantic web, linked data analysis is a pressing issue for efficient searching and retrieval of information. One way of establishing this link is to implement the subject-predicate-object (S-P-O) relationship through a set-theoretic approach, which was done in our previous work. For analyzing the inter-relationship between two RDF graphs, the RDF Schema (RDFS) should also be taken into account. In the present paper, an adaptive combination-rule-based framework for the establishment of S-P-O relationships and RDF graph searching is proposed. The identification of criteria for the inter-relationship of RDF graphs thus opens a new road in semantic search.
1311.7204
A Hybrid Web Recommendation System based on the Improved Association Rule Mining Algorithm
cs.IR
With the growing interest in web recommendation systems, which deliver customized data to their users, we started working on this system. Recommendation systems generally fall into two major categories: collaborative recommendation systems and content-based recommendation systems. Collaborative recommendation systems try to seek out users who share the same tastes as a given user and recommend websites according to that user's likings, whereas content-based recommendation systems try to recommend websites similar to those the user has already liked. Recent research has proposed an efficient technique based on an association rule mining algorithm to solve the web page recommendation problem. Its major shortcoming is that all web pages are given equal importance, while in practice the importance of a page changes with the frequency of visits and the amount of time users spend on it. Moreover, newly added web pages, or pages not yet visited by any user, are not included in the recommendation set. To overcome this, we previously used the web usage log in adaptive association-rule-based web mining, where the association rules were applied to personalization; that algorithm was purely based on the Apriori data mining algorithm for generating the association rules. However, this method also suffers from some unavoidable drawbacks. In this paper we present and investigate a new approach based on a weighted association rule mining algorithm and text mining. This improved algorithm adds semantic knowledge to the results, is more efficient, and hence gives better quality and performance than existing approaches.
1311.7213
Finding a Maximum Clique using Ant Colony Optimization and Particle Swarm Optimization in Social Networks
cs.SI cs.NE
Interaction between users in online social networks plays a key role in social network analysis. One important type of social group is a fully connected set of users, known as a clique. Finding a maximum clique is therefore essential for some analyses. In this paper, we propose a new method using the ant colony optimization (ACO) algorithm and the particle swarm optimization (PSO) algorithm. In the proposed method, in order to attain better results, the pheromone update process is improved by particle swarm optimization. Simulation results on popular standard social network benchmarks show a relative enhancement of the proposed algorithm in comparison with the standard ant colony optimization algorithm.
1311.7215
Solving Minimum Vertex Cover Problem Using Learning Automata
cs.AI cs.DM
The minimum vertex cover problem is an NP-hard problem whose aim is to find the minimum number of vertices that cover a graph. In this paper, a learning-automaton-based algorithm is proposed to find a minimum vertex cover in a graph. In the proposed algorithm, each vertex of the graph is equipped with a learning automaton that has two actions: placing the corresponding vertex in, or leaving it out of, the candidate cover set. Due to the characteristics of learning automata, this algorithm significantly reduces the number of covering vertices. The proposed algorithm iteratively shrinks the candidate vertex cover by updating the automata's action probabilities; as it proceeds, the candidate solution approaches an optimal solution of the minimum vertex cover problem. To evaluate the proposed algorithm, several experiments were conducted on the DIMACS dataset and compared with conventional methods. Experimental results show a major superiority of the proposed algorithm over the other methods.
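The per-vertex automaton update can be illustrated with the classic linear reward-inaction (L_{R-I}) scheme, a common choice for such algorithms, though the paper's exact update rule may differ:

```python
def lri_update(p, action, reward, a=0.1):
    """Linear reward-inaction update for a two-action learning automaton
    (actions: vertex in / out of the candidate cover). On reward, the chosen
    action's probability moves toward 1; on penalty, probabilities are kept
    unchanged. `a` is the learning rate."""
    if not reward:
        return p
    q = [pi * (1 - a) for pi in p]   # shrink all probabilities...
    q[action] += a                    # ...then boost the rewarded action
    return q
```

In the cover algorithm, the environment would reward an automaton when its chosen action keeps the candidate set a valid (and smaller) vertex cover.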
1311.7219
Partitioning Clustering algorithms for handling numerical and categorical data: a review
cs.DB
Clustering is widely used in different fields such as biology, psychology, and economics. Most traditional clustering algorithms are limited to handling datasets that contain either numeric or categorical attributes. However, datasets with mixed types of attributes are common in real-life data mining applications. In this paper, we review partitioning-based algorithms such as K-prototypes, extensions of K-prototypes, K-histogram, fuzzy approaches, genetic approaches, etc. These algorithms work on both numerical and categorical data. The approaches proposed to handle mixed data are based on four different perspectives: i) split the dataset into two parts, one containing the numerical and one the categorical attributes, apply a separate clustering algorithm to each part, and finally combine the results of both; ii) convert the categorical attributes into numerical ones and apply a numerical clustering algorithm; iii) discretize the numerical attributes and apply a categorical clustering algorithm; iv) convert the categorical attributes into binary ones and apply any numerical clustering algorithm.
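For reference, the K-prototypes dissimilarity mentioned above combines a numeric and a categorical term; a minimal sketch, where `gamma` is the usual user-chosen weight balancing the two attribute types:

```python
def kprototypes_distance(x_num, x_cat, c_num, c_cat, gamma=1.0):
    """K-prototypes dissimilarity between a mixed record and a cluster
    prototype: squared Euclidean distance on the numeric attributes plus
    gamma times the number of categorical mismatches."""
    num = sum((a - b) ** 2 for a, b in zip(x_num, c_num))
    cat = sum(a != b for a, b in zip(x_cat, c_cat))
    return num + gamma * cat
```

A K-means-style loop that assigns each record to the nearest prototype under this distance, then recomputes numeric means and categorical modes, yields the K-prototypes algorithm.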
1311.7225
Link Quality Control Mechanism for Selective and Opportunistic AF Relaying in Cooperative ARQs: A MLSD Perspective
cs.IT math.IT
Incorporating relaying techniques into Automatic Repeat reQuest (ARQ) mechanisms gives a general impression of diversity and throughput enhancements. Allowing overhearing among multiple relays is also a known approach to increase the number of participating relays in ARQs. However, when opportunistic amplify-and-forward (AF) relaying is applied to cooperative ARQs, the system design becomes nontrivial and even involved. Based on outage analysis, the spatial and temporal diversities are first found sensitive to the received signal qualities of relays, and a link quality control mechanism is then developed to prescreen candidate relays in order to explore the diversity of cooperative ARQs with a selective and opportunistic AF (SOAF) relaying method. According to the analysis, the temporal and spatial diversities can be fully exploited if proper thresholds are set for each hop along the relaying routes. The SOAF relaying method is further examined from a packet delivery viewpoint. By the principle of the maximum likelihood sequence detection (MLSD), sufficient conditions on the link quality are established for the proposed SOAF-relaying-based ARQ scheme to attain its potential diversity order in the packet error rates (PERs) of MLSD. The conditions depend on the minimum codeword distance and the average signal-to-noise ratio (SNR). Furthermore, from a heuristic viewpoint, we also develop a threshold searching algorithm for the proposed SOAF relaying and link quality method to exploit both the diversity and the SNR gains in PER. The effectiveness of the proposed thresholding mechanism is verified via simulations with trellis codes.
1311.7235
Downscaling of global solar irradiation in R
physics.ao-ph cs.CE
A methodology for downscaling solar irradiation from satellite-derived databases is described using the R software. Different packages, such as raster, parallel, solaR, gstat, sp and rasterVis, are used in this study to improve solar resource estimation in areas with complex topography, where downscaling is a very useful tool for reducing the inherent deviations of satellite-derived irradiation databases, which lack high spatial resolution. A topographical analysis of horizon blocking and sky view is carried out with a digital elevation model to determine what fraction of hourly solar irradiation reaches the Earth's surface. Finally, kriging with external drift is applied for a better estimation of solar irradiation throughout the analyzed region. This methodology has been implemented as an example for the region of La Rioja in northern Spain, and the mean absolute error found is a striking 25.5% lower than with the original database.
1311.7237
Beamforming for MISO Interference Channels with QoS and RF Energy Transfer
cs.IT math.IT
We consider a multiuser multiple-input single-output interference channel where the receivers are characterized by both quality-of-service (QoS) and radio-frequency (RF) energy harvesting (EH) constraints. We consider the power splitting RF-EH technique, where each receiver divides the received signal into two parts: a) for information decoding and b) for battery charging. The minimum required power that supports both the QoS and the RF-EH constraints is formulated as an optimization problem that incorporates the transmitted power and the beamforming design at each transmitter as well as the power splitting ratio at each receiver. We consider both the case of fixed beamforming and the case where the beamforming design is incorporated into the optimization problem. For fixed beamforming we study three standard beamforming schemes, zero-forcing (ZF), regularized zero-forcing (RZF) and maximum ratio transmission (MRT); a hybrid scheme, MRT-ZF, comprised of a linear combination of MRT and ZF beamforming, is also examined. The optimal solution for ZF beamforming is derived in closed form, while optimization algorithms based on second-order cone programming are developed for MRT, RZF and MRT-ZF beamforming. In addition, the joint optimization of beamforming and power allocation is studied using semidefinite programming (SDP) with the aid of rank relaxation.
1311.7245
Multiuser Broadcast Erasure Channel with Feedback and Side Information, and Related Index Coding Results
cs.IT math.IT
We consider the N-user broadcast erasure channel with public feedback and side information. Before the beginning of transmission, each receiver knows a function of the messages of some of the other receivers. This situation arises naturally in wireless and in particular cognitive networks, where a node may overhear transmitted messages destined to other nodes before transmission over a given broadcast channel begins. We provide an upper bound to the capacity region of this system. Furthermore, when the side information is linear, we show that the bound is tight for the case of two-user broadcast channels. The special case where each user knows the whole or nothing of the message of each other node constitutes a generalization of the index coding problem. For this instance, and when there are no channel errors, we show that the bound reduces to the known Maximum Weighted Acyclic Induced Subgraph bound. We also show how to convert the capacity upper bound to a transmission completion rate (broadcast rate) lower bound and provide examples of codes for certain information graphs for which the bound is either achieved or closely approximated.
1311.7251
Spatially-Adaptive Reconstruction in Computed Tomography using Neural Networks
cs.CV cs.LG cs.NE
We propose a supervised machine learning approach for boosting existing signal and image recovery methods and demonstrate its efficacy on the example of image reconstruction in computed tomography. Our technique is based on a local nonlinear fusion of several image estimates, all obtained by applying a chosen reconstruction algorithm with different values of its control parameters. Usually such output images exhibit different bias/variance trade-offs. The fusion of the images is performed by a feed-forward neural network trained on a set of known examples. Numerical experiments show an improvement in reconstruction quality relative to existing direct and iterative reconstruction methods.
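The fusion idea can be illustrated with a linear least-squares stand-in for the paper's feed-forward network: learn weights that combine several reconstructions to best match known ground truth on training examples (the names and the linear form are simplifications of the local nonlinear fusion described above):

```python
import numpy as np

def fit_fusion_weights(estimates, target):
    """Learn per-algorithm weights combining K reconstructions (each an
    image array) to best match the ground-truth image in the least-squares
    sense. Stands in for training the fusion neural network."""
    A = np.column_stack([e.ravel() for e in estimates])   # (pixels, K)
    w, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return w

def fuse(estimates, w):
    """Apply the learned weights to a new set of reconstructions."""
    return sum(wi * e for wi, e in zip(w, estimates))
```

The actual method replaces this global linear map with a neural network applied to local patches, which is what lets it adapt the bias/variance trade-off spatially.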
1311.7295
Glasgow's Stereo Image Database of Garments
cs.RO cs.CV
To provide insight into cloth perception and manipulation with an active binocular robotic vision system, we compiled and released a database of 80 stereo-pair colour images with corresponding horizontal and vertical disparity maps and mask annotations for 3D garment point-cloud rendering. The stereo-image garment database is part of research conducted under the EU-FP7 Clothes Perception and Manipulation (CloPeMa) project and belongs to a wider database collection released through CloPeMa (www.clopema.eu). The database is based on 16 different off-the-shelf garments, each imaged in five different pose configurations on the project's binocular robot head. A full copy of the database is made available for scientific research only at https://sites.google.com/site/ugstereodatabase/.
1311.7298
List decoding - random coding exponents and expurgated exponents
cs.IT math.IT
Some new results are derived concerning random coding error exponents and expurgated exponents for list decoding with a deterministic list size $L$. Two asymptotic regimes are considered, the fixed list-size regime, where $L$ is fixed independently of the block length $n$, and the exponential list-size, where $L$ grows exponentially with $n$. We first derive a general upper bound on the list-decoding average error probability, which is suitable for both regimes. This bound leads to more specific bounds in the two regimes. In the fixed list-size regime, the bound is related to known bounds and we establish its exponential tightness. In the exponential list-size regime, we establish the achievability of the well known sphere packing lower bound. Relations to guessing exponents are also provided. An immediate byproduct of our analysis in both regimes is the universality of the maximum mutual information (MMI) list decoder in the error exponent sense. Finally, we consider expurgated bounds at low rates, both using Gallager's approach and the Csisz\'ar-K\"orner-Marton approach, which is, in general better (at least for $L=1$). The latter expurgated bound, which involves the notion of {\it multi-information}, is also modified to apply to continuous alphabet channels, and in particular, to the Gaussian memoryless channel, where the expression of the expurgated bound becomes quite explicit.
1311.7302
Spectral and Energy Efficiency Trade-Offs in Cellular Networks
cs.IT math.IT
This paper presents a simple and effective method to study the spectral and energy efficiency (SE-EE) trade-off in cellular networks, an issue that has attracted significant recent interest in the wireless community. The proposed theoretical framework is based on an optimal radio resource allocation of transmit power and bandwidth for the downlink direction, applicable for an orthogonal cellular network. The analysis is initially focused on a single cell scenario, for which in addition to the solution of the main SE-EE optimization problem, it is proved that a traffic repartition scheme can also be adopted as a way to simplify this approach. By exploiting this interesting result along with properties of stochastic geometry, this work is extended to a more challenging multi-cell environment, where interference is shown to play an essential role and for this reason several interference reduction techniques are investigated. Special attention is also given to the case of low signal to noise ratio (SNR) and a way to evaluate the upper bound on EE in this regime is provided. This methodology leads to tractable analytical results under certain common channel properties, and thus allows the study of various models without the need for demanding system-level simulations.
1311.7307
Schemas for Unordered XML on a DIME
cs.DB
We investigate schema languages for unordered XML having no relative order among siblings. First, we propose unordered regular expressions (UREs), essentially regular expressions with unordered concatenation instead of standard concatenation, that define languages of unordered words to model the allowed content of a node (i.e., collections of the labels of children). However, unrestricted UREs are computationally too expensive as we show the intractability of two fundamental decision problems for UREs: membership of an unordered word to the language of a URE and containment of two UREs. Consequently, we propose a practical and tractable restriction of UREs, disjunctive interval multiplicity expressions (DIMEs). Next, we employ DIMEs to define languages of unordered trees and propose two schema languages: disjunctive interval multiplicity schema (DIMS), and its restriction, disjunction-free interval multiplicity schema (IMS). We study the complexity of the following static analysis problems: schema satisfiability, membership of a tree to the language of a schema, schema containment, as well as twig query satisfiability, implication, and containment in the presence of schema. Finally, we study the expressive power of the proposed schema languages and compare them with yardstick languages of unordered trees (FO, MSO, and Presburger constraints) and DTDs under commutative closure. Our results show that the proposed schema languages are capable of expressing many practical languages of unordered trees and enjoy desirable computational properties.
1311.7327
Unobtrusive Low Cost Pupil Size Measurements using Web cameras
cs.CV
Unobtrusive everyday health monitoring can be of important use for the elderly population. In particular, pupil size may be a valuable source of information, since, apart from pathological cases, it can reveal the emotional state, fatigue, and ageing. For unobtrusive monitoring to gain acceptance, one should seek efficient methods of monitoring using common low-cost hardware. This paper describes a method for monitoring pupil sizes using a common web camera in real time. Our method works by first detecting the face and the eye areas. Subsequently, the optimal iris and sclera locations and radii, modelled as ellipses, are found using efficient filtering. Finally, the pupil center and radius are estimated by optimal filtering within the area of the iris. Experimental results show both the efficiency and the effectiveness of our approach.
1311.7359
Zak transforms and Gabor frames of totally positive functions and exponential B-splines
cs.IT math.IT math.NA
We study totally positive (TP) functions of finite type and exponential B-splines as window functions for Gabor frames. We establish the connection of the Zak transform of these two classes of functions and prove that the Zak transforms have only one zero in their fundamental domain of quasi-periodicity. Our proof is based on the variation-diminishing property of shifts of exponential B-splines. For the exponential B-spline B_m of order m, we determine a large set of lattice parameters a,b>0 such that the Gabor family of time-frequency shifts is a frame for L^2(R). By the connection of its Zak transform to the Zak transform of TP functions of finite type, our result provides an alternative proof that TP functions of finite type provide Gabor frames for all lattice parameters with ab<1. For even two-sided exponentials and the related exponential B-spline of order 2, we find lower frame bounds A, which exhibit the asymptotically linear decay A ~ (1-ab) as the density ab of the time-frequency lattice tends to the critical density ab=1.
1311.7373
Limited-Feedback-Based Channel-Aware Power Allocation for Linear Distributed Estimation
cs.IT math.IT
This paper investigates the problem of distributed best linear unbiased estimation (BLUE) of a random parameter at the fusion center (FC) of a wireless sensor network (WSN). In particular, the application of limited-feedback strategies for the optimal power allocation in distributed estimation is studied. In order to find the BLUE estimator of the unknown parameter, the FC combines spatially distributed, linearly processed, noisy observations of local sensors received through orthogonal channels corrupted by fading and additive Gaussian noise. Most optimal power-allocation schemes proposed in the literature require the feedback of the exact instantaneous channel state information from the FC to local sensors. This paper proposes a limited-feedback strategy in which the FC designs an optimal codebook containing the optimal power-allocation vectors, in an iterative offline process, based on the generalized Lloyd algorithm with modified distortion functions. Upon observing a realization of the channel vector, the FC finds the closest codeword to its corresponding optimal power-allocation vector and broadcasts the index of the codeword. Each sensor will then transmit its analog observations using its optimal quantized amplification gain. This approach eliminates the requirement for infinite-rate digital feedback links and is scalable, especially in large WSNs.
1311.7385
Algorithmic Identification of Probabilities
cs.LG
The problem is to identify a probability associated with a set of natural numbers, given an infinite data sequence of elements from the set. If the given sequence is drawn i.i.d. and the probability mass function involved (the target) belongs to a computably enumerable (c.e.) or co-computably enumerable (co-c.e.) set of computable probability mass functions, then there is an algorithm to almost surely identify the target in the limit. The technical tool is the strong law of large numbers. If the set is finite and the elements of the sequence are dependent while the sequence is typical in the sense of Martin-L\"of for at least one measure belonging to a c.e. or co-c.e. set of computable measures, then there is an algorithm to identify in the limit a computable measure for which the sequence is typical (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We give the algorithms and consider the associated predictions.
1311.7388
Web Mining Techniques in E-Commerce Applications
cs.IR
Today the web is the best medium of communication in modern business. Many companies are redefining their business strategies to improve business output. Business over the Internet provides customers and partners with the opportunity to find a company's products and specific business offerings. Online business nowadays breaks the barriers of time and space, as compared to the physical office. Big companies around the world are realizing that e-commerce is not just buying and selling over the Internet; rather, it improves the efficiency needed to compete with other giants in the market. For this purpose, data mining, sometimes called knowledge discovery, is used. Web mining is a data mining technique applied to the WWW. There are vast quantities of information available over the Internet.
1311.7401
Shape from Texture using Locally Scaled Point Processes
stat.AP cs.CV
Shape from texture refers to the extraction of 3D information from 2D images with irregular texture. This paper introduces a statistical framework to learn shape from texture where convex texture elements in a 2D image are represented through a point process. In a first step, the 2D image is preprocessed to generate a probability map corresponding to an estimate of the unnormalized intensity of the latent point process underlying the texture elements. The latent point process is subsequently inferred from the probability map in a non-parametric, model free manner. Finally, the 3D information is extracted from the point pattern by applying a locally scaled point process model where the local scaling function represents the deformation caused by the projection of a 3D surface onto a 2D image.
1311.7434
Observability, Identifiability and Sensitivity of Vision-Aided Navigation
cs.RO
We analyze the observability of motion estimates from the fusion of visual and inertial sensors. Because the model contains unknown parameters, such as sensor biases, the problem is usually cast as a mixed identification/filtering problem, and the resulting observability analysis provides a necessary condition for any algorithm to converge to a unique point estimate. Unfortunately, most models treat sensor bias rates as noise, independent of other states including the biases themselves, an assumption that is patently violated in practice. When this assumption is lifted, the resulting model is not observable, and therefore past analyses cannot be used to conclude that the set of states that are indistinguishable from the measurements is a singleton. We therefore re-cast the analysis as one of sensitivity: rather than attempting to prove that the indistinguishable set is a singleton, which is not the case, we derive bounds on its volume, as a function of characteristics of the input and its sufficient excitation. This provides an explicit characterization of the indistinguishable set that can be used for analysis and validation purposes.
1311.7442
Irreducibility is Minimum Synergy Among Parts
cs.IT math.IT
For readers already familiar with Partial Information Decomposition (PID), we show that PID's definition of synergy enables quantifying at least four different notions of irreducibility. First, we show four common notions of "parts" give rise to a spectrum of four distinct measures of irreducibility. Second, we introduce a nonnegative expression based on PID for each notion of irreducibility. Third, we delineate these four notions of irreducibility with exemplary binary circuits. This work will become more useful once the complexity community has converged on a palatable $\operatorname{I}_{\cap}$ or $\operatorname{I}_{\cup}$ measure.
1311.7466
Linear Network Error Correction Multicast/Broadcast/Dispersion/Generic Codes
cs.IT math.IT
In practical network communications, many internal nodes in the network are required not only to transmit messages but also to decode source messages. For different applications, four important classes of linear network codes in network coding theory, i.e., linear multicast, linear broadcast, linear dispersion, and generic network codes, have been studied extensively. More generally, when the channels of communication networks are noisy, information transmission and error correction have to be considered simultaneously, and thus these four classes of linear network codes are generalized to linear network error correction (LNEC) coding; we call them LNEC multicast, broadcast, dispersion, and generic codes, respectively. Furthermore, in order to characterize their efficiency of information transmission and error correction, we propose the (weakly, strongly) extended Singleton bounds for them, and define the corresponding optimal codes, i.e., LNEC multicast/broadcast/dispersion/generic MDS codes, which satisfy the corresponding Singleton bounds with equality. The existence of such MDS codes is discussed in detail by algebraic methods, and constructive algorithms are also proposed.
1311.7518
Power Penalty Due to First-order PMD in Optical OFDM/QAM and FBMC/OQAM Transmission System
cs.IT math.IT
Polarization mode dispersion (PMD) is a challenge for high-data-rate optical-communication systems. Further research is needed on the impairments induced by PMD in high-speed optical orthogonal frequency division multiplexing (OFDM) transmission systems. In this paper, an approximate analytical method is presented for evaluating the power penalty due to first-order PMD in optical OFDM with quadrature amplitude modulation (OFDM/QAM) and filter bank based multi-carrier with offset quadrature amplitude modulation (FBMC/OQAM) transmission systems. The simulation results show that, compared with single carrier with quadrature phase shift keying (SC-QPSK), both OFDM/QAM and FBMC/OQAM can reduce the power penalty caused by PMD by half. Furthermore, FBMC/OQAM shows better immunity to the power penalty than OFDM/QAM under the influence of first-order PMD.
1311.7562
Dynamic coupling design for nonlinear output agreement and time-varying flow control
cs.SY
This paper studies the problem of output agreement in networks of nonlinear dynamical systems under time-varying disturbances, using dynamic diffusive couplings. Necessary conditions are derived for general networks of nonlinear systems, and these conditions are explicitly interpreted as conditions relating the node dynamics and the network topology. For the class of incrementally passive systems, necessary and sufficient conditions for output agreement are derived. The approach proposed in the paper lends itself to solving flow control problems in distribution networks. As a first case study, the internal model approach is used for designing a controller that achieves an optimal routing and inventory balancing in a dynamic transportation network with storage and time-varying supply and demand. It is in particular shown that the time-varying optimal routing problem can be solved by applying an internal model controller to the dual variables of a certain convex network optimization problem. As a second case study, we show that droop controllers in microgrids also have an interpretation as internal model controllers.
1311.7584
On the Communication Complexity of Secure Computation
cs.CR cs.IT math.IT
Information theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a 3-party setting in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for residual information - i.e., the gap between mutual information and G\'acs-K\"orner common information, a new information inequality for 3-party protocols, and the idea of distribution switching by which lower bounds computed under certain worst-case scenarios can be shown to apply for the general case. Using these techniques we obtain tight bounds on communication complexity by MPC protocols for various interesting functions. In particular, we show concrete functions that have "communication-ideal" protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first explicit example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor (1994), who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.
1311.7590
Universal Polar Decoding with Channel Knowledge at the Encoder
cs.IT math.IT
Polar coding over a class of binary discrete memoryless channels with channel knowledge at the encoder is studied. It is shown that polar codes achieve the capacity of convex and one-sided classes of symmetric channels.
1311.7662
The Power of Asymmetry in Binary Hashing
cs.LG cs.CV cs.IR
When approximating binary similarity using the hamming distance between short binary hashes, we show that even if the similarity is symmetric, we can have shorter and more accurate hashes by using two distinct code maps; that is, by approximating the similarity between $x$ and $x'$ as the hamming distance between $f(x)$ and $g(x')$, for two distinct binary codes $f,g$, rather than as the hamming distance between $f(x)$ and $f(x')$.
1311.7679
Combination of Diverse Ranking Models for Personalized Expedia Hotel Searches
cs.LG
The ICDM Challenge 2013 is to apply machine learning to the problem of hotel ranking, aiming to maximize purchases according to given hotel characteristics, location attractiveness of hotels, the user's aggregated purchase history, and competitive online travel agency information for each potential hotel choice. This paper describes the solution of team "binghsu & MLRush & BrickMover". We conduct simple feature engineering work, and each individual team member trains different models. Afterwards, we use a listwise ensemble method to combine each model's output. Besides describing the effective models and features, we discuss the lessons we learned while using deep learning in this competition.
1312.0001
A Proposal for the Characterization of Multi-Dimensional Inter-relationships of RDF Graphs Based on Set Theoretic Approach
cs.DB
In this paper, a set-theoretic approach is reported for analyzing the inter-relationships between any number of RDF graphs. An RDF graph represents triples in the Resource Description Framework of the semantic web, so the identification and characterization of criteria for the inter-relationship of RDF graphs opens a new road in semantic search. Using the set-theoretic approach, sound framing criteria can be designed that examine whether two RDF graphs are related and, if so, how these relationships can be described with formal set theory. Along with this, by introducing RDF Schema, the inter-relationship status is refined into n-dimensional induced relationships.
1312.0022
On products and powers of linear codes under componentwise multiplication
cs.IT math.AG math.IT
In this text we develop the formalism of products and powers of linear codes under componentwise multiplication. As an expanded version of the author's talk at AGCT-14, focus is put mostly on basic properties and descriptive statements that could otherwise probably not fit in a regular research paper. On the other hand, more advanced results and applications are only quickly mentioned with references to the literature. We also point out a few open problems. Our presentation alternates between two points of view, which the theory intertwines in an essential way: that of combinatorial coding, and that of algebraic geometry. In appendices that can be read independently, we investigate topics in multilinear algebra over finite fields, notably we establish a criterion for a symmetric multilinear map to admit a symmetric algorithm, or equivalently, for a symmetric tensor to decompose as a sum of elementary symmetric tensors.
1312.0032
Top-k Query Answering in Datalog+/- Ontologies under Subjective Reports (Technical Report)
cs.AI cs.DB
The use of preferences in query answering, both in traditional databases and in ontology-based data access, has recently received much attention, due to its many real-world applications. In this paper, we tackle the problem of top-k query answering in Datalog+/- ontologies subject to the querying user's preferences and a collection of (subjective) reports of other users. Here, each report consists of scores for a list of features, its author's preferences among the features, as well as other information. These pieces of information of every report are then combined, along with the querying user's preferences and his/her trust in each report, to rank the query results. We present two alternative such rankings, along with algorithms for top-k (atomic) query answering under these rankings. We also show that, under suitable assumptions, these algorithms run in polynomial time in the data complexity. We finally present more general reports, which are associated with sets of atoms rather than single atoms.
1312.0040
Dynamic Interference Management
cs.IT math.IT
A linear interference network is considered. Long-term fluctuations (shadow fading) in the wireless channel can lead to any link being erased with probability p. Each receiver is interested in one unique message that can be available at M transmitters. In a cellular downlink scenario, the case where M=1 reflects the cell association problem, and the case where M>1 reflects the problem of setting up the backhaul links for Coordinated Multi-Point (CoMP) transmission. In both cases, we analyze Degrees of Freedom (DoF) optimal schemes for the case of no erasures, and propose new schemes with better average DoF performance at high probabilities of erasure. For M=1, we characterize the average per user DoF, and identify the optimal assignment of messages to transmitters at each value of p. For general values of M, we show that there is no strategy for assigning messages to transmitters in large networks that is optimal for all values of p.
1312.0042
Boosting the Basic Counting on Distributed Streams
cs.DS cs.DB cs.DC
We revisit the classic basic counting problem in the distributed streaming model that was studied by Gibbons and Tirthapura (GT). In the solution for maintaining an $(\epsilon,\delta)$-estimate, as GT's method does, we make the following new contributions: (1) For a bit stream of size $n$, where each bit has a probability at least $\gamma$ to be 1, we exponentially reduced the average total processing time from GT's $\Theta(n \log(1/\delta))$ to $O((1/(\gamma\epsilon^2))(\log^2 n) \log(1/\delta))$, thus providing the first sublinear-time streaming algorithm for this problem. (2) In addition to an overall much faster processing speed, our method provides a new tradeoff that a lower accuracy demand (a larger value for $\epsilon$) promises a faster processing speed, whereas GT's processing speed is $\Theta(n \log(1/\delta))$ in any case and for any $\epsilon$. (3) The worst-case total time cost of our method matches GT's $\Theta(n\log(1/\delta))$, which is necessary but rarely occurs in our method. (4) The space usage overhead in our method is a lower order term compared with GT's space usage and occurs only $O(\log n)$ times during the stream processing and is too negligible to be detected by the operating system in practice. We further validate these solid theoretical results with experiments on both real-world and synthetic data, showing that our method is faster than GT's by a factor of several to several thousands depending on the stream size and accuracy demands, without any detectable space usage overhead. Our method is based on a faster sampling technique that we design for boosting GT's method, and we believe this technique may be of independent interest.
1312.0048
Stochastic Optimization of Smooth Loss
cs.LG
In this paper, we first prove a high-probability bound rather than an expectation bound for stochastic optimization with smooth loss. Furthermore, the existing analysis requires knowledge of the optimal classifier for tuning the step size in order to achieve the desired bound. However, this information is usually not accessible in advance. We therefore propose a strategy to address this limitation.
1312.0049
One-Class Classification: Taxonomy of Study and Review of Techniques
cs.LG cs.AI
One-class classification (OCC) algorithms aim to build classification models when the negative class is either absent, poorly sampled or not well defined. This unique situation constrains the learning of efficient classifiers by defining class boundary just with the knowledge of positive class. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper we present a unified view of the general problem of OCC by presenting a taxonomy of study for OCC problems, which is based on the availability of training data, algorithms used and the application domains applied. We further delve into each of the categories of the proposed taxonomy and present a comprehensive literature review of the OCC algorithms, techniques and methodologies with a focus on their significance, limitations and applications. We conclude our paper by discussing some open research problems in the field of OCC and present our vision for future research.
1312.0054
Energy Harvesting Broadband Communication Systems with Processing Energy Cost
cs.IT math.IT
Communication over a broadband fading channel powered by an energy harvesting transmitter is studied. Assuming non-causal knowledge of energy/data arrivals and channel gains, optimal transmission schemes are identified by taking into account the energy cost of the processing circuitry as well as the transmission energy. A constant processing cost for each active sub-channel is assumed. Three different system objectives are considered: i) throughput maximization, in which the total amount of transmitted data by a deadline is maximized for a backlogged transmitter with a finite capacity battery; ii) energy maximization, in which the remaining energy in an infinite capacity battery by a deadline is maximized such that all the arriving data packets are delivered; iii) transmission completion time minimization, in which the delivery time of all the arriving data packets is minimized assuming infinite size battery. For each objective, a convex optimization problem is formulated, the properties of the optimal transmission policies are identified, and an algorithm which computes an optimal transmission policy is proposed. Finally, based on the insights gained from the offline optimizations, low-complexity online algorithms performing close to the optimal dynamic programming solution for the throughput and energy maximization problems are developed under the assumption that the energy/data arrivals and channel states are known causally at the transmitter.
1312.0060
On the Secrecy Capacity of Block Fading Channels with a Hybrid Adversary
cs.IT math.IT
We consider a block fading wiretap channel, where a transmitter attempts to send messages securely to a receiver in the presence of a hybrid half-duplex adversary, which arbitrarily decides to either jam or eavesdrop the transmitter-to-receiver channel. We provide bounds on the secrecy capacity for various possibilities on receiver feedback and show special cases where the bounds are tight. We show that, without any feedback from the receiver, the secrecy capacity is zero if the transmitter-to-adversary channel stochastically dominates the effective transmitter-to-receiver channel. However, the secrecy capacity is non-zero even when the receiver is allowed to feed back only one bit at the end of each block. Our novel achievable strategy improves the rates proposed in the literature for the non-hybrid adversarial model. We also analyze the effect of multiple adversaries and delay constraints on the secrecy capacity. We show that our novel time sharing approach leads to positive secrecy rates even under strict delay constraints.
1312.0072
Improving Texture Categorization with Biologically Inspired Filtering
cs.CV
Within the domain of texture classification, a lot of effort has been spent on local descriptors, leading to many powerful algorithms. However, preprocessing techniques have received much less attention despite their important potential for improving the overall classification performance. We address this question by proposing a novel, simple, yet very powerful biologically-inspired filtering (BF) which simulates the performance of the human retina. In the proposed approach, given a texture image, after applying a DoG filter to detect the "edges", we first split the filtered image into two "maps" alongside the sides of its edges. The feature extraction step is then carried out on the two "maps" instead of the input image. Our algorithm has several advantages such as simplicity, robustness to illumination and noise, and discriminative power. Experimental results on three large texture databases show that, with an extremely low computational cost, the proposed method significantly improves the performance of many texture classification systems, notably in noisy environments. The source code of the proposed algorithm can be downloaded from https://sites.google.com/site/nsonvu/code.
1312.0086
A Framework for Genetic Algorithms Based on Hadoop
cs.NE cs.DC
Genetic Algorithms (GAs) are powerful metaheuristic techniques used in many real-world applications. The sequential execution of GAs requires considerable computational power, both in time and resources. Nevertheless, GAs are naturally parallel, and accessing a parallel platform such as the Cloud is easy and cheap. Apache Hadoop is one of the common services that can be used for parallel applications. However, using Hadoop to develop a parallel version of GAs is not simple without dealing with its inner workings. Even though some sequential frameworks for GAs already exist, there is no framework supporting the development of GA applications that can be executed in parallel. In this paper, we describe a framework for parallel GAs on the Hadoop platform, following the MapReduce paradigm. The main purpose of this framework is to allow the user to focus on the aspects of the GA that are specific to the problem to be addressed, while being sure that this task is correctly executed on the Cloud with good performance. The framework has also been exploited to develop an application for the Feature Subset Selection problem. A preliminary analysis of the performance of the developed GA application has been performed using three datasets and has shown very promising performance.
1312.0116
Communication Through Collisions: Opportunistic Utilization of Past Receptions
cs.IT math.IT
When several wireless users are sharing the spectrum, packet collision is a simple, yet widely used model for interference. Under this model, when transmitters cause interference at any of the receivers, their collided packets are discarded and need to be retransmitted. However, in reality, that receiver can still store its analog received signal and utilize it for decoding the packets in the future (for example, by successive interference cancellation techniques). In this work, we propose a physical layer model for wireless packet networks that allows for such flexibility at the receivers. We assume that the transmitters will be aware of the state of the channel (i.e. when and where collisions occur, or an unintended receiver overhears the signal) with some delay, and propose several coding opportunities that can be utilized by the transmitters to exploit the available signal at the receivers for interference management (as opposed to discarding them). We analyze the achievable throughput of our strategy in a canonical interference channel with two transmitter-receiver pairs, and demonstrate the gain over conventional schemes. By deriving an outer-bound, we also prove the optimality of our scheme for the corresponding model.
1312.0127
Characterizing and Extending Answer Set Semantics using Possibility Theory
cs.AI cs.LO
Answer Set Programming (ASP) is a popular framework for modeling combinatorial problems. However, ASP cannot easily be used for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP, in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
1312.0132
Critical Graphs in Index Coding
cs.IT cs.DM math.IT
In this paper we define critical graphs as minimal graphs that support a given set of rates for the index coding problem, and study them for both the one-shot and asymptotic setups. For the case of equal rates, we find the critical graph with minimum number of edges for both one-shot and asymptotic cases. For the general case of possibly distinct rates, we show that for one-shot and asymptotic linear index coding, as well as asymptotic non-linear index coding, each critical graph is a union of disjoint strongly connected subgraphs (USCS). On the other hand, we identify a non-USCS critical graph for a one-shot non-linear index coding problem. Next, we identify a few graph structures that are critical. We also generalize some of our results to the groupcast problem. In addition, we show that the capacity region of the index coding is additive for union of disjoint graphs.
1312.0144
Knowing Whether
cs.AI
Knowing whether a proposition is true means knowing that it is true or knowing that it is false. In this paper, we study logics with a modal operator Kw for knowing whether but without a modal operator K for knowing that. This logic is not a normal modal logic, because we do not have Kw (phi -> psi) -> (Kw phi -> Kw psi). Knowing-whether logic cannot define many common frame properties, and its expressive power is less than that of basic modal logic over classes of models without reflexivity. These features make axiomatizing knowing-whether logics non-trivial. We axiomatize knowing-whether logic over various frame classes. We also present an extension of knowing-whether logic with public announcement operators, and we give the corresponding reduction axioms. We compare our work in detail to two recent similar proposals.
1312.0146
Impact of Co-Channel Interference on Performance of Multi-Hop Relaying over Nakagami-$m$ Fading Channels
cs.IT math.IT
This paper studies the impact of co-channel interferences (CCIs) on the system performance of multi-hop amplify-and-forward (AF) relaying, in a simple and explicit way. For generality, the desired channels along consecutive relaying hops and the CCIs at all nodes are subject to Nakagami-$m$ fading with different shape factors. This study reveals that the diversity gain is determined only by the fading shape factor of the desired channels, regardless of the interference and the number of relaying hops. On the other hand, although the coding gain is in general a complex function of various system parameters, if the desired channels are subject to Rayleigh fading, the coding gain is inversely proportional to the accumulated interference at the destination, i.e. the product of the number of relaying hops and the average interference-to-noise ratio, irrespective of the fading distribution of the CCIs.
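The qualitative behavior of multi-hop AF relaying rests on the standard exact end-to-end SNR expression, gamma_e2e = [prod_i (1 + 1/gamma_i) - 1]^(-1), which is never larger than the weakest hop SNR. A quick numerical sanity check of that bound (this is the textbook formula, not code from the paper):

```python
import random

def af_e2e_snr(hop_snrs):
    """Exact end-to-end SNR of a multi-hop amplify-and-forward chain
    (standard formula underlying analyses of this kind)."""
    prod = 1.0
    for g in hop_snrs:
        prod *= 1.0 + 1.0 / g
    return 1.0 / (prod - 1.0)

# The chain can never beat its weakest hop, for any fading realization:
random.seed(0)
for _ in range(1000):
    # Rayleigh-faded hop SNRs: exponentially distributed instantaneous SNR
    hops = [random.expovariate(1.0) + 1e-9 for _ in range(4)]
    assert af_e2e_snr(hops) <= min(hops) + 1e-12
print("bound holds on 1000 random realizations")
```

A single hop reduces to the hop SNR itself: af_e2e_snr([10.0]) is 10 up to rounding.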
1312.0148
On Replacing PID Controller with ANN Controller for DC Motor Position Control
cs.SY
The process industry implements many techniques with certain parameters in its operations to control the working of several actuators in the field. Amongst these actuators, the DC motor is a very common machine. The angular position of a DC motor can be controlled to drive many processes, such as the arm of a robot. The most famous and well-known controller for such applications is the PID controller. It uses proportional, integral and derivative functions to control the input signal before sending it to the plant unit. In this paper, another controller based on Artificial Neural Network (ANN) control is examined to replace the PID controller for controlling the angular position of a DC motor to drive a robot arm. Simulation is performed in MATLAB after training the neural network (supervised learning), and it is shown that the results are acceptable and applicable in the process industry for reference control applications. The paper also indicates that the ANN controller can be less complicated and less costly to implement in industrial control applications as compared to some other proposed schemes.
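As a point of reference for the PID baseline discussed above, a discrete PID loop can be sketched in a few lines. The first-order plant model and the gains below are made-up illustrative choices, not the paper's MATLAB motor model:

```python
# Illustrative discrete PID position loop on a toy first-order plant
# x' = -x + u (explicit Euler integration; plant and gains are assumptions).
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=5000):
    x, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        x += dt * (-x + u)                          # plant step
        prev_err = err
    return x

final = simulate_pid(kp=5.0, ki=2.0, kd=0.1)
print(final)  # settles close to the setpoint 1.0
```

The integral term removes the steady-state error that a pure proportional controller would leave on this plant.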
1312.0156
Datom: Towards modular data management
cs.DB
Recent technology breakthroughs have enabled data collection of unprecedented scale, rate, variety and complexity that has led to an explosion in data management requirements. Existing theories and techniques are not adequate to fulfil these requirements. We endeavour to rethink the way data management research is being conducted and we propose to work towards modular data management that will allow for unification of the expression of data management problems and systematization of their solution. The core of such an approach is the novel notion of a datom, i.e. a data management atom, which encapsulates generic data management provision. The datom is the foundation for comparison, customization and re-usage of data management problems and solutions. The proposed approach can signal a revolution in data management research and a long anticipated evolution in data management engineering.
1312.0158
An algebraic characterization of injectivity in phase retrieval
math.FA cs.IT math.AG math.IT
A complex frame is a collection of vectors that span $\mathbb{C}^M$ and define measurements, called intensity measurements, on vectors in $\mathbb{C}^M$. In purely mathematical terms, the problem of phase retrieval is to recover a complex vector from its intensity measurements, namely the modulus of its inner product with these frame vectors. We show that any vector is uniquely determined (up to a global phase factor) from $4M-4$ generic measurements. To prove this, we identify the set of frames defining non-injective measurements with the projection of a real variety and bound its dimension.
1312.0162
A Typology of Collaboration Platform Users
cs.CY cs.HC cs.SI stat.ML
In this paper we present a review of the existing typologies of Internet service users. We zoom in on social networking services, including blogs and crowdsourcing websites. Based on an analysis of the considered typologies by means of Formal Concept Analysis (FCA), we developed a new user typology for a certain class of Internet services, namely a collaborative innovation platform. Cluster analysis of data extracted from the collaboration platform Witology was used to divide more than 500 participants into six groups based on three activity indicators: idea generation, commenting, and evaluation (assigning marks). The obtained groups and their percentages appear to follow the "90-9-1" rule.
1312.0169
Entropy and the Predictability of Online Life
physics.soc-ph cs.SI
Using mobile phone records and information theory measures, our daily lives have recently been shown to follow strict statistical regularities, and our movement patterns are, to a large extent, predictable. Here, we apply entropy and predictability measures to two data sets of the behavioral actions and the mobility of a large number of players in the virtual universe of a massive multiplayer online game. We find that movements in virtual human lives follow the same high levels of predictability as offline mobility, where future movements can be predicted well if the temporal correlations of visited places are accounted for. Time series of behavioral actions show similarly high levels of predictability, even when temporal correlations are neglected. Entropy conditional on specific behavioral actions reveals that, in terms of predictability, negative behavior has a wider variety than positive actions. The actions which contain the most information for predicting an individual's subsequent action are negative, such as attacks or enemy markings, while positive actions of friendship marking, trade and communication contain the least amount of predictive information. These observations show that predicting behavioral actions requires less information than predicting the mobility patterns of humans, for which the additional knowledge of past visited locations is crucial, and that the type and sign of a social relation has an essential impact on the ability to determine future behavior.
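The entropy measures underlying these predictability statements can be illustrated on toy action sequences (illustrative data, not the game logs analyzed in the paper): a uniform action mix maximizes entropy, while a skewed mix is much more predictable.

```python
from collections import Counter
from math import log2

def entropy_bits(sequence):
    """Shannon entropy of the empirical action distribution, in bits."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum(c / n * log2(c / n) for c in counts.values())

uniform = list("ABCD") * 25   # maximally unpredictable over 4 actions
skewed  = list("AAAB") * 25   # mostly one action -> low entropy
print(entropy_bits(uniform))  # 2.0 bits
print(entropy_bits(skewed))   # ~0.81 bits
```

Temporal correlations (which the paper exploits for mobility) would lower the entropy rate further below these per-symbol values.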
1312.0171
Complex networks as an emerging property of hierarchical preferential attachment
physics.soc-ph cs.SI
Real complex systems are not rigidly structured; no clear rules or blueprints exist for their construction. Yet, amidst their apparent randomness, complex structural properties universally emerge. We propose that an important class of complex systems can be modeled as an organization of many embedded levels (potentially infinite in number), all of them following the same universal growth principle known as preferential attachment. We give examples of such hierarchy in real systems, for instance in the pyramid of production entities of the film industry. More importantly, we show how real complex networks can be interpreted as a projection of our model, from which their scale independence, their clustering, their hierarchy, their fractality and their navigability naturally emerge. Our results suggest that complex networks, viewed as growing systems, can be quite simple, and that the apparent complexity of their structure is largely a reflection of their unobserved hierarchical nature.
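The preferential-attachment growth principle invoked at every hierarchical level can be sketched with a plain Barabasi-Albert-style process (this is only the single-level mechanism, not the paper's full hierarchical construction; parameters are illustrative):

```python
import random

# Each arriving node links to m existing nodes chosen with probability
# proportional to their current degree ("rich get richer").
def grow(n, m=2, seed=0):
    rng = random.Random(seed)
    targets = [0, 1]            # node repeated once per unit of degree
    edges = [(0, 1)]
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(rng.choice(targets))   # degree-proportional pick
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges

edges = grow(200)
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print(len(edges), max(deg.values()))  # a few early hubs dominate the degrees
```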
1312.0182
Query Segmentation for Relevance Ranking in Web Search
cs.IR
In this paper, we try to answer the question of how to improve the state-of-the-art methods for relevance ranking in web search by query segmentation. Here, query segmentation means segmenting the input query into segments, typically natural language phrases, so that the performance of relevance ranking in search is increased. We propose employing the re-ranking approach in query segmentation, which first employs a generative model to create the top $k$ candidates and then employs a discriminative model to re-rank the candidates to obtain the final segmentation result. This method has been widely utilized for structure prediction in natural language processing but, as far as we know, has not been applied to query segmentation. Furthermore, we propose a new method for using the result of query segmentation in relevance ranking, which takes both the original query words and the segmented query phrases as units of query representation. We investigate whether our method can improve three relevance models, namely BM25, the key n-gram model, and the dependency model. Our experimental results on three large scale web search datasets show that our method can indeed significantly improve relevance ranking in all three cases.
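The generate-then-re-rank scheme can be sketched as follows: enumerate candidate segmentations, score them with a generative model, and keep the top k for discriminative re-ranking. The phrase scores below are made-up stand-ins for a real language model:

```python
from itertools import product

# Hypothetical phrase statistics -- illustrative stand-ins, not real data.
PHRASE_SCORE = {("new", "york"): 3.0, ("times",): 1.0, ("new",): 0.5,
                ("york",): 0.5, ("york", "times"): 2.0,
                ("new", "york", "times"): 2.5}

def segmentations(words):
    """All 2^(n-1) ways to split a word list into contiguous segments."""
    n = len(words)
    for cuts in product([False, True], repeat=n - 1):
        segs, start = [], 0
        for i, cut in enumerate(cuts, 1):
            if cut:
                segs.append(tuple(words[start:i]))
                start = i
        segs.append(tuple(words[start:]))
        yield segs

def generative_score(segs):
    return sum(PHRASE_SCORE.get(s, 0.0) for s in segs)

query = ["new", "york", "times"]
top_k = sorted(segmentations(query), key=generative_score, reverse=True)[:3]
# A discriminative model would now re-score top_k with richer features;
# here we just show the leading candidate.
print(top_k[0])  # [('new', 'york'), ('times',)]
```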
1312.0189
Empowering Evolving Social Network Users with Privacy Rights
cs.DB cs.CR cs.SI
Considerable concerns exist over privacy on social networks, and huge debates persist about how to extend the artifacts users need to effectively protect their rights to privacy. While many interesting ideas have been proposed, no single approach appears to be comprehensive enough to be the front runner. In this paper, we propose a comprehensive and novel reference conceptual model for privacy in constantly evolving social networks and establish its novelty by briefly contrasting it with contemporary research. We also present the contours of a possible query language that we can develop with desirable features in light of the reference model, and refer to a new query language, {\em PiQL}, developed on the basis of this model that aims to support user driven privacy policy authoring and enforcement. The strength of our model is that such extensions are now possible by developing appropriate linguistic constructs as part of query languages such as SQL, as demonstrated in PiQL.
1312.0200
A Combined Approach for Constraints over Finite Domains and Arrays
cs.LO cs.AI cs.SE
Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named FDCC, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that FDCC solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable.
1312.0202
Sparse Time Frequency Representations and Dynamical Systems
cs.IT math.IT
In this paper, we establish a connection between the recently developed data-driven time-frequency analysis \cite{HS11,HS13-1} and the classical second order differential equations. The main idea of the data-driven time-frequency analysis is to decompose a multiscale signal into a sparsest collection of Intrinsic Mode Functions (IMFs) over the largest possible dictionary via nonlinear optimization. These IMFs are of the form $a(t) \cos(\theta(t))$ where the amplitude $a(t)$ is positive and slowly varying. The non-decreasing phase function $\theta(t)$ is determined by the data and in general depends on the signal in a nonlinear fashion. One of the main results of this paper is that each IMF can be associated with a solution of a second order ordinary differential equation of the form $\ddot{x}+p(x,t)\dot{x}+q(x,t)=0$. Further, we propose a localized variational formulation for this problem and develop an effective $l^1$-based optimization method to recover $p(x,t)$ and $q(x,t)$ by looking for a sparse representation of $p$ and $q$ in terms of the polynomial basis. Depending on the form of nonlinearity in $p(x,t)$ and $q(x,t)$, we can define the degree of nonlinearity for the associated IMF. This generalizes a concept recently introduced by Prof. N. E. Huang et al. \cite{Huang11}. Numerical examples will be provided to illustrate the robustness and stability of the proposed method for data with or without noise. This manuscript should be considered as a proof of concept.
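The IMF-to-ODE association can be sanity-checked numerically in the simplest case: the IMF-like signal x(t) = cos(t) satisfies the second-order ODE x'' + p x' + q x = 0 with p = 0 and q = 1. A central-difference check of that illustrative special case (not the paper's l1 recovery method):

```python
from math import cos

# Verify that x(t) = cos(t) satisfies x'' + x = 0 by approximating x''
# with a second-order central difference on a fine grid.
dt = 1e-3
residual = max(
    abs((cos(t + dt) - 2 * cos(t) + cos(t - dt)) / dt**2 + cos(t))
    for t in (k * dt for k in range(1, 1000))
)
print(residual)  # O(dt^2) truncation error, far below 1e-5
```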
1312.0229
Five Disruptive Technology Directions for 5G
cs.NI cs.IT math.IT
New research directions will lead to fundamental changes in the design of future 5th generation (5G) cellular networks. This paper describes five technologies that could lead to both architectural and component disruptive design changes: device-centric architectures, millimeter wave, massive MIMO, smarter devices, and native support for machine-to-machine communications. The key ideas for each technology are described, along with their potential impact on 5G and the research challenges that remain.
1312.0232
Stochastic continuum armed bandit problem of few linear parameters in high dimensions
stat.ML cs.LG math.OC
We consider a stochastic continuum armed bandit problem where the arms are indexed by the $\ell_2$ ball $B_{d}(1+\nu)$ of radius $1+\nu$ in $\mathbb{R}^d$. The reward functions $r :B_{d}(1+\nu) \rightarrow \mathbb{R}$ are considered to intrinsically depend on $k \ll d$ unknown linear parameters so that $r(\mathbf{x}) = g(\mathbf{A} \mathbf{x})$ where $\mathbf{A}$ is a full rank $k \times d$ matrix. Assuming the mean reward function to be smooth we make use of results from low-rank matrix recovery literature and derive an efficient randomized algorithm which achieves a regret bound of $O(C(k,d) n^{\frac{1+k}{2+k}} (\log n)^{\frac{1}{2+k}})$ with high probability. Here $C(k,d)$ is at most polynomial in $d$ and $k$ and $n$ is the number of rounds or the sampling budget which is assumed to be known beforehand.
1312.0256
Analysis of Regularized LS Reconstruction and Random Matrix Ensembles in Compressed Sensing
cs.IT math.IT
Performance of regularized least-squares estimation in noisy compressed sensing is analyzed in the limit when the dimensions of the measurement matrix grow large. The sensing matrix is considered to be from a class of random ensembles that encloses as special cases standard Gaussian, row-orthogonal, geometric and so-called T-orthogonal constructions. Source vectors that have non-uniform sparsity are included in the system model. Regularization based on l1-norm and leading to LASSO estimation, or basis pursuit denoising, is given the main emphasis in the analysis. Extensions to l2-norm and "zero-norm" regularization are also briefly discussed. The analysis is carried out using the replica method in conjunction with some novel matrix integration results. Numerical experiments for LASSO are provided to verify the accuracy of the analytical results. The numerical experiments show that for noisy compressed sensing, the standard Gaussian ensemble is a suboptimal choice for the measurement matrix. Orthogonal constructions provide a superior performance in all considered scenarios and are easier to implement in practical applications. It is also discovered that for non-uniform sparsity patterns the T-orthogonal matrices can further improve the mean square error behavior of the reconstruction when the noise level is not too high. However, as the additive noise becomes more prominent in the system, the simple row-orthogonal measurement matrix appears to be the best choice out of the considered ensembles.
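LASSO/basis-pursuit-denoising solvers of the kind analyzed above are built around the l1 proximal (soft-thresholding) operator, which shrinks every coefficient toward zero and zeroes out the small ones. A minimal sketch of that generic operator (not the replica-method analysis itself):

```python
# Soft-thresholding: the proximal operator of lam * ||x||_1, applied
# element-wise.  This is the inner step of ISTA/FISTA-style LASSO solvers.
def soft_threshold(x, lam):
    return [(abs(v) - lam) * (1 if v > 0 else -1) if abs(v) > lam else 0.0
            for v in x]

print(soft_threshold([3.0, -0.5, 1.2], 1.0))  # [2.0, 0.0, ~0.2]
```

Coefficients with magnitude below the threshold are set exactly to zero, which is where the sparsity of the reconstruction comes from.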
1312.0264
Competitive Fragmentation Modeling of ESI-MS/MS spectra for putative metabolite identification
cs.CE
Electrospray tandem mass spectrometry (ESI-MS/MS) is commonly used in high throughput metabolomics. One of the key obstacles to the effective use of this technology is the difficulty in interpreting measured spectra to accurately and efficiently identify metabolites. Traditional methods for automated metabolite identification compare the target MS or MS/MS spectrum to the spectra in a reference database, ranking candidates based on the closeness of the match. However, the limited coverage of available databases has led to an interest in computational methods for predicting reference MS/MS spectra from chemical structures. This work proposes a probabilistic generative model for the MS/MS fragmentation process, which we call Competitive Fragmentation Modeling (CFM), and a machine learning approach for learning parameters for this model from MS/MS data. We show that CFM can be used in both a MS/MS spectrum prediction task (i.e., predicting the mass spectrum from a chemical structure), and in a putative metabolite identification task (ranking possible structures for a target MS/MS spectrum). In the MS/MS spectrum prediction task, CFM shows significantly improved performance when compared to a full enumeration of all peaks corresponding to substructures of the molecule. In the metabolite identification task, CFM obtains substantially better rankings for the correct candidate than existing methods (MetFrag and FingerID) on tripeptide and metabolite data, when querying PubChem or KEGG for candidate structures of similar mass.
1312.0285
Distributed Data Placement via Graph Partitioning
cs.DB
With the widespread use of shared-nothing clusters of servers, there has been a proliferation of distributed object stores that offer high availability, reliability and enhanced performance for MapReduce-style workloads. However, relational workloads cannot always be evaluated efficiently using MapReduce without extensive data migrations, which cause network congestion and reduced query throughput. We study the problem of computing data placement strategies that minimize the data communication costs incurred by typical relational query workloads in a distributed setting. Our main contribution is a reduction of the data placement problem to the well-studied problem of {\sc Graph Partitioning}, which is NP-Hard but for which efficient approximation algorithms exist. The novelty and significance of this result lie in representing the communication cost exactly and using standard graphs instead of hypergraphs, which were used in prior work on data placement that optimized for different objectives (not communication cost). We study several practical extensions of the problem: with load balancing, with replication, with materialized views, and with complex query plans consisting of sequences of intermediate operations that may be computed on different servers. We provide integer linear programs (IPs) that may be used with any IP solver to find an optimal data placement. For the no-replication case, we use publicly available graph partitioning libraries (e.g., METIS) to efficiently compute nearly-optimal solutions. For the versions with replication, we introduce two heuristics that utilize the {\sc Graph Partitioning} solution of the no-replication case. Using the TPC-DS workload, it may take an IP solver weeks to compute an optimal data placement, whereas our reduction produces nearly-optimal solutions in seconds.
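The reduction can be illustrated on a toy instance: data items are graph nodes, edge weights encode co-access frequency in the workload, and the communication cost of a placement is the total weight of the cut edges. Brute force stands in for METIS on this small illustrative example:

```python
from itertools import combinations

# Toy version of the data placement reduction: find the balanced 2-way
# partition (two equal-size "servers") minimizing the weight of cut edges.
def min_balanced_cut(nodes, wedges):
    best = (float("inf"), None)
    half = len(nodes) // 2
    for part in combinations(nodes, half):
        a = set(part)
        cut = sum(w for u, v, w in wedges if (u in a) != (v in a))
        best = min(best, (cut, a), key=lambda t: t[0])
    return best

# Two tightly co-accessed groups of items joined by one light edge:
nodes = [0, 1, 2, 3, 4, 5]
wedges = [(0, 1, 5), (0, 2, 5), (1, 2, 5),
          (3, 4, 5), (3, 5, 5), (4, 5, 5), (2, 3, 1)]
cost, side = min_balanced_cut(nodes, wedges)
print(cost, side)  # cost 1: each clique gets its own server
```

On real workloads this exhaustive search is replaced by approximate partitioners such as METIS, as the abstract describes.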
1312.0286
Efficient Learning and Planning with Compressed Predictive States
cs.LG stat.ML
Predictive state representations (PSRs) offer an expressive framework for modelling partially observable systems. By compactly representing systems as functions of observable quantities, the PSR learning approach avoids using local-minima prone expectation-maximization and instead employs a globally optimal moment-based algorithm. Moreover, since PSRs do not require a predetermined latent state structure as an input, they offer an attractive framework for model-based reinforcement learning when agents must plan without a priori access to a system model. Unfortunately, the expressiveness of PSRs comes with significant computational cost, and this cost is a major factor inhibiting the use of PSRs in applications. In order to alleviate this shortcoming, we introduce the notion of compressed PSRs (CPSRs). The CPSR learning approach combines recent advancements in dimensionality reduction, incremental matrix decomposition, and compressed sensing. We show how this approach provides a principled avenue for learning accurate approximations of PSRs, drastically reducing the computational costs associated with learning while also providing effective regularization. Going further, we propose a planning framework which exploits these learned models, and we show that this approach facilitates model-learning and planning in large complex partially observable domains, a task that is infeasible without the principled use of compression.
1312.0288
Preliminary Results on 3D Channel Modeling: From Theory to Standardization
cs.IT math.IT
Three-dimensional (3D) beamforming (also known as elevation beamforming) is now gaining growing interest among researchers in wireless communication. The reason can be attributed to its potential to enable a variety of strategies like sector- or user-specific elevation beamforming and cell-splitting. Since these techniques cannot be directly supported by current LTE releases, the 3GPP is now working on defining the required technical specifications. In particular, a large effort is currently being made to obtain accurate 3D channel models that support the elevation dimension. This step is necessary, as it will evaluate the potential of 3D and FD (full-dimensional) beamforming techniques to benefit from the richness of real channels. This work aims at presenting the ongoing 3GPP study item "Study on 3D-channel model for Elevation Beamforming and FD-MIMO studies for LTE", and positioning it with respect to previous standardization works.
1312.0317
Evolutionary Dynamics of Information Diffusion over Social Networks
cs.SI physics.soc-ph
Current social networks are of extremely large scale, generating tremendous information flows at every moment. How information diffuses over social networks has attracted much attention from both industry and academia. Most of the existing works on information diffusion analysis are based on machine learning methods, focusing on social network structure analysis and empirical data mining. However, the dynamics of information diffusion, which are heavily influenced by network users' decisions, actions and their socio-economic interactions, are generally ignored by most existing works. In this paper, we propose an evolutionary game theoretic framework to model the dynamic information diffusion process in social networks. Specifically, we derive the information diffusion dynamics in complete networks, uniform-degree and non-uniform-degree networks, with the highlight of two special networks, the Erd\H{o}s-R\'enyi random network and the Barab\'asi-Albert scale-free network. We find that the dynamics of information diffusion over these three kinds of networks are scale-free and identical with each other when the network scale is sufficiently large. To verify our theoretical analysis, we perform simulations for information diffusion over synthetic networks and real-world Facebook networks. Moreover, we also conduct experiments on a Twitter hashtags dataset, which show that the proposed game theoretic model can well fit and predict information diffusion over real social networks.
1312.0336
A Unifying Framework for the Electrical Structure-Based Approach to PMU Placement in Electric Power Systems
cs.SY
The electrical structure of the power grid is utilized to address the phasor measurement unit (PMU) placement problem. First, we derive the connectivity matrix of the network using the resistance distance metric and employ it in the linear program formulation to obtain the optimal number of PMUs, for complete network observability without zero injection measurements. This approach was developed by the author in an earlier work, but the solution methodology to address the location problem did not fully utilize the electrical properties of the network, resulting in an ambiguity. In this paper, we settle this issue by exploiting the coupling structure of the grid derived using the singular value decomposition (SVD)-based analysis of the resistance distance matrix to solve the location problem. Our study, which is based on recent advances in complex networks that promote the electrical structure of the grid over its topological structure and the SVD analysis which throws light on the electrical coupling of the network, results in a unified framework for the electrical structure-based PMU placement. The proposed method is tested on IEEE bus systems, and the results uncover intriguing connections between the singular vectors and average resistance distance between buses in the network.
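The resistance distance used above can be computed by grounding one bus and solving the reduced Laplacian for a unit current injection. A dependency-free sketch on toy networks (not the IEEE bus systems of the paper):

```python
# Effective resistance between two nodes of a graph with unit-resistance
# edges: ground node j, inject 1 A at node i, solve the reduced Laplacian,
# and read off the potential at i.  Plain Gauss-Jordan keeps this stdlib-only.
def solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def resistance_distance(n, edges, i, j):
    L = [[0.0] * n for _ in range(n)]            # graph Laplacian
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    keep = [k for k in range(n) if k != j]       # ground node j
    A = [[L[r][c] for c in keep] for r in keep]
    b = [1.0 if k == i else 0.0 for k in keep]   # unit current i -> j
    v = solve(A, b)
    return v[keep.index(i)]

# Triangle of unit resistors: 1 Ohm in parallel with 2 Ohm -> 2/3 Ohm.
print(resistance_distance(3, [(0, 1), (1, 2), (0, 2)], 0, 1))
```

On a tree (e.g. a 3-node path) the resistance distance reduces to the ordinary path length, since resistances simply add in series.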
1312.0363
Optimal Stochastic Coordinated Beamforming for Wireless Cooperative Networks with CSI Uncertainty
cs.IT math.IT
Transmit optimization and resource allocation for wireless cooperative networks with channel state information (CSI) uncertainty are important but challenging problems in terms of both the uncertainty modeling and performance optimization. In this paper, we establish a generic stochastic coordinated beamforming (SCB) framework that provides flexibility in the channel uncertainty modeling, while guaranteeing optimality in the transmission strategies. We adopt a general stochastic model for the CSI uncertainty, which is applicable for various practical scenarios. The SCB problem turns out to be a joint chance constrained program (JCCP) and is known to be highly intractable. In contrast to all the previous algorithms for JCCP that can only find feasible but sub-optimal solutions, we propose a novel stochastic DC (difference-of-convex) programming algorithm with optimality guarantee, which can serve as the benchmark for evaluating heuristic and sub-optimal algorithms. The key observation is that the highly intractable probability constraint can be equivalently reformulated as a DC constraint. This further enables efficient algorithms to achieve optimality. Simulation results will illustrate the convergence, conservativeness, stability and performance gains of the proposed algorithm.
1312.0372
Polar Codes: Graph Representation and Duality
cs.IT math.IT
In this paper, we present an iterative construction of a polar code and develop properties of the dual of a polar code. Based on this approach, belief propagation of a polar code can be presented in the context of low-density parity check codes.
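The iterative construction is driven by the recursive (butterfly) polar transform x = u F^{(x)n}, with kernel F = [[1,0],[1,1]]. A minimal sketch in one common convention, without the bit-reversal permutation (illustrative, not the paper's notation):

```python
# Recursive polar encoding over GF(2): split the input in half, XOR the
# halves for the upper branch, and recurse on both branches.
def polar_encode(u):
    n = len(u)                  # n must be a power of two
    if n == 1:
        return list(u)
    half = n // 2
    a, b = u[:half], u[half:]
    return polar_encode([x ^ y for x, y in zip(a, b)]) + polar_encode(b)

print(polar_encode([1, 0, 0, 0]))  # [1, 0, 0, 0]
print(polar_encode([0, 0, 0, 1]))  # [1, 1, 1, 1]  (last row of G_N is all ones)
```

Each input unit vector reads off one row of the generator matrix F^{(x)2}, which is a quick way to check the recursion against the matrix definition.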
1312.0403
Asymptotic Rate Analysis of Downlink Multi-user Systems with Co-located and Distributed Antennas
cs.IT math.IT
A great deal of effort has been devoted to the performance evaluation of distributed antenna systems (DASs). Most studies assume a regular base-station (BS) antenna layout where the number of BS antennas is usually small. With the growing interest in cellular systems with large antenna arrays at BSs, it becomes increasingly important for us to study how the BS antenna layout affects the rate performance when a massive number of BS antennas are employed. This paper presents a comparative study of the asymptotic rate performance of downlink multi-user systems with multiple BS antennas either co-located or uniformly distributed within a circular cell. Two representative linear precoding schemes, maximum ratio transmission (MRT) and zero-forcing beamforming (ZFBF), are considered, with which the effect of BS antenna layout on the rate performance is characterized. The analysis shows that as the number of BS antennas $L$ and the number of users $K$ grow infinitely while $L/K{\rightarrow}\upsilon$, the asymptotic average user rates with the co-located antenna (CA) layout for both MRT and ZFBF are logarithmic functions of the ratio $\upsilon$. With the distributed antenna (DA) layout, in contrast, the scaling behavior of the average user rate closely depends on the precoding schemes. With ZFBF, for instance, the average user rate grows unboundedly as $L, K{\rightarrow} \infty$ and $L/K{\rightarrow}\upsilon{>}1$, which indicates that substantial rate gains over the CA layout can be achieved when the number of BS antennas $L$ is large. The gain, nevertheless, becomes marginal when MRT is adopted.
1312.0412
Practical Collapsed Stochastic Variational Inference for the HDP
cs.LG
Recent advances have made it feasible to apply the stochastic variational paradigm to a collapsed representation of latent Dirichlet allocation (LDA). While the stochastic variational paradigm has successfully been applied to an uncollapsed representation of the hierarchical Dirichlet process (HDP), no attempts to apply this type of inference in a collapsed setting of non-parametric topic modeling have been put forward so far. In this paper we explore such a collapsed stochastic variational Bayes inference for the HDP. The proposed online algorithm is easy to implement and accounts for the inference of hyper-parameters. First experiments show a promising improvement in predictive performance.
1312.0451
Consistency of weighted majority votes
math.PR cs.LG stat.ML
We revisit the classical decision-theoretic problem of weighted expert voting from a statistical learning perspective. In particular, we examine the consistency (both asymptotic and finitary) of the optimal Nitzan-Paroush weighted majority and related rules. In the case of known expert competence levels, we give sharp error estimates for the optimal rule. When the competence levels are unknown, they must be empirically estimated. We provide frequentist and Bayesian analyses for this situation. Some of our proof techniques are non-standard and may be of independent interest. The bounds we derive are nearly optimal, and several challenging open problems are posed. Experimental results are provided to illustrate the theory.
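With known expert competences, the optimal Nitzan-Paroush rule analyzed here weights each expert's vote by its log-odds, w_i = log(p_i / (1 - p_i)). A minimal sketch with toy numbers:

```python
from math import log

# Optimal weighted majority with known competences: weight each vote by
# the log-odds of that expert being correct, then take the sign.
def nitzan_paroush_decision(votes, competences):
    """votes: +1/-1 per expert; competences: P(expert is correct)."""
    score = sum(v * log(p / (1 - p)) for v, p in zip(votes, competences))
    return 1 if score > 0 else -1

# One highly competent expert can outvote three mediocre ones:
votes       = [+1, -1, -1, -1]
competences = [0.99, 0.6, 0.6, 0.6]
print(nitzan_paroush_decision(votes, competences))  # 1
```

Note that an expert with p = 0.5 gets weight zero (uninformative), and one with p < 0.5 gets a negative weight, i.e. its vote is flipped.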
1312.0465
A pattern-driven approach to biomedical ontology engineering
cs.CE cs.DL
Ontologies can be expensive and time-consuming to develop, as well as difficult to maintain. This is especially true for more expressive and/or larger ontologies. Some ontologies are, however, relatively repetitive, reusing design patterns; building these with both generic and bespoke patterns should reduce duplication and increase regularity, which in turn should lower the cost of development. Here we report on the usage of patterns applied to two biomedical ontologies: firstly, a novel ontology for karyotypes which has been built ground-up using a pattern-based approach; and, secondly, our initial refactoring of the SIO ontology to make explicit use of patterns at development time. To enable this, we use the Tawny-OWL library, which enables fully programmatic development of ontologies. We show how this approach can generate large numbers of classes from much simpler data structures, which is highly beneficial within biomedical ontology engineering.
1312.0482
Learning Semantic Representations for the Phrase Translation Model
cs.CL
This paper presents a novel semantic-based phrase translation model. A pair of source and target phrases are projected into continuous-valued vector representations in a low-dimensional latent semantic space, where their translation score is computed by the distance between the pair in this new space. The projection is performed by a multi-layer neural network whose weights are learned on parallel training data. The learning is aimed to directly optimize the quality of end-to-end machine translation results. Experimental evaluation has been performed on two Europarl translation tasks, English-French and German-English. The results show that the new semantic-based phrase translation model significantly improves the performance of a state-of-the-art phrase-based statistical machine translation system, leading to a gain of 0.7-1.0 BLEU points.
1312.0485
Precise Semidefinite Programming Formulation of Atomic Norm Minimization for Recovering d-Dimensional ($d\geq 2$) Off-the-Grid Frequencies
cs.IT math.IT math.OC stat.ML
Recent research in off-the-grid compressed sensing (CS) has demonstrated that, under certain conditions, one can successfully recover a spectrally sparse signal from a few time-domain samples even though the dictionary is continuous. In particular, atomic norm minimization was proposed in \cite{tang2012csotg} to recover $1$-dimensional spectrally sparse signal. However, in spite of existing research efforts \cite{chi2013compressive}, it was still an open problem how to formulate an equivalent positive semidefinite program for atomic norm minimization in recovering signals with $d$-dimensional ($d\geq 2$) off-the-grid frequencies. In this paper, we settle this problem by proposing equivalent semidefinite programming formulations of atomic norm minimization to recover signals with $d$-dimensional ($d\geq 2$) off-the-grid frequencies.
1312.0489
Capacity Based Evacuation with Dynamic Exit Signs
cs.SY
Exit paths in buildings are designed to minimise evacuation time when the building is at full capacity. We present an evacuation support system which does this regardless of the number of evacuees. The core concept is to even out congestion in the building by diverting evacuees to less-congested paths in order to make maximal usage of all accessible routes throughout the entire evacuation process. The system issues a set of flow-optimal routes using a capacity-constrained routing algorithm which anticipates evolutions in path metrics using the concept of "future capacity reservation". In order to direct evacuees in an intuitive manner whilst implementing the routing algorithm's scheme, we use dynamic exit signs, i.e., signs whose pointing direction can be controlled. To make this system practical and minimise reliance on sensors during the evacuation, we use an evacuee mobility model and make several assumptions on the characteristics of the evacuee flow. We validate this concept using simulations, and show how the underpinning assumptions may limit the system's performance, especially in low-headcount evacuations.
1312.0493
Bidirectional Recursive Neural Networks for Token-Level Labeling with Structure
cs.LG cs.CL stat.ML
Recently, deep architectures, such as recurrent and recursive neural networks have been successfully applied to various natural language processing tasks. Inspired by bidirectional recurrent neural networks which use representations that summarize the past and future around an instance, we propose a novel architecture that aims to capture the structural information around an input, and use it to label instances. We apply our method to the task of opinion expression extraction, where we employ the binary parse tree of a sentence as the structure, and word vector representations as the initial representation of a single token. We conduct preliminary experiments to investigate its performance and compare it to the sequential approach.
1312.0510
Fault Tolerance of Small-World Regular and Stochastic Interconnection Networks
cs.SI cs.DC physics.soc-ph
Resilience of the most important properties of stochastic and regular (deterministic) small-world interconnection networks is studied. It is shown that, over a broad range of values of the fraction of faulty nodes, the networks under consideration possess high fault tolerance, with the deterministic networks being slightly better than the stochastic ones.
1312.0512
Sensing-Aware Kernel SVM
cs.LG
We propose a novel approach for designing kernels for support vector machines (SVMs) when the class label is linked to the observation through a latent state and the likelihood function of the observation given the state (the sensing model) is available. We show that the Bayes-optimum decision boundary is a hyperplane under a mapping defined by the likelihood function. Combining this with the maximum margin principle yields kernels for SVMs that leverage knowledge of the sensing model in an optimal way. We derive the optimum kernel for the bag-of-words (BoWs) sensing model and demonstrate its superior performance over other kernels in document and image classification tasks. These results indicate that such optimum sensing-aware kernel SVMs can match the performance of rather sophisticated state-of-the-art approaches.
1312.0516
Grid Topology Identification using Electricity Prices
cs.LG cs.SY stat.AP stat.ML
The potential of recovering the topology of a grid using solely publicly available market data is explored here. In contemporary wholesale electricity markets, real-time prices are typically determined by solving the network-constrained economic dispatch problem. Under a linear DC model, locational marginal prices (LMPs) correspond to the Lagrange multipliers of the linear program involved. The interesting observation here is that the matrix of spatiotemporally varying LMPs exhibits the following property: once premultiplied by the weighted grid Laplacian, it yields a low-rank and sparse matrix. Leveraging this rich structure, a regularized maximum likelihood estimator (MLE) is developed to recover the grid Laplacian from the LMPs. The convex optimization problem formulated includes low-rank- and sparsity-promoting regularizers, and it is solved using a scalable algorithm. Numerical tests on prices generated for the IEEE 14-bus benchmark provide encouraging topology recovery results.
1312.0525
Near Optimal Compressed Sensing of a Class of Sparse Low-Rank Matrices via Sparse Power Factorization
cs.IT math.IT
Compressed sensing of simultaneously sparse and low-rank matrices enables recovery of sparse signals from a few linear measurements of their bilinear form. One important question is how many measurements are needed for a stable reconstruction in the presence of measurement noise. Unlike conventional compressed sensing for sparse vectors, where convex relaxation via the $\ell_1$-norm achieves near optimal performance, for compressed sensing of sparse low-rank matrices it has been shown recently by Oymak et al. that convex programs using the nuclear norm and the mixed norm are highly suboptimal even in the noise-free scenario. We propose an alternating minimization algorithm called sparse power factorization (SPF) for compressed sensing of sparse rank-one matrices. For a class of signals whose sparse representation coefficients are fast-decaying, SPF achieves stable recovery of the rank-1 matrix formed by their outer product and requires a number of measurements within a logarithmic factor of the information-theoretic fundamental limit. For the recovery of general sparse low-rank matrices, we propose subspace-concatenated SPF (SCSPF), which has near optimal performance guarantees analogous to those of SPF in the rank-1 case. Numerical results show that SPF and SCSPF empirically outperform convex programs using the best known combinations of mixed norm and nuclear norm.
1312.0579
SpeedMachines: Anytime Structured Prediction
cs.LG
Structured prediction plays a central role in machine learning applications from computational biology to computer vision. These models require significantly more computation than unstructured models, and, in many applications, algorithms may need to make predictions within a computational budget or in an anytime fashion. In this work we propose an anytime technique for learning structured prediction that, at training time, incorporates both structural elements and feature computation trade-offs that affect test-time inference. We apply our technique to the challenging problem of scene understanding in computer vision and demonstrate efficient and anytime predictions that gradually improve towards state-of-the-art classification performance as the allotted time increases.
1312.0624
Efficient coordinate-descent for orthogonal matrices through Givens rotations
cs.LG stat.ML
Optimizing over the set of orthogonal matrices is a central component in problems like sparse-PCA or tensor decomposition. Unfortunately, such optimization is hard since simple operations on orthogonal matrices easily break orthogonality, and correcting orthogonality usually costs a large amount of computation. Here we propose a framework for optimizing orthogonal matrices, that is the parallel of coordinate-descent in Euclidean spaces. It is based on {\em Givens-rotations}, a fast-to-compute operation that affects a small number of entries in the learned matrix, and preserves orthogonality. We show two applications of this approach: an algorithm for tensor decomposition that is used in learning mixture models, and an algorithm for sparse-PCA. We study the parameter regime where a Givens rotation approach converges faster and achieves a superior model on a genome-wide brain-wide mRNA expression dataset.
1312.0631
Phase Transitions in Community Detection: A Solvable Toy Model
cs.SI cond-mat.stat-mech physics.soc-ph stat.ML
Recently, it was shown that there is a phase transition in the community detection problem. This transition was first computed using the cavity method, and has been proved rigorously in the case of $q=2$ groups. However, analytic calculations using the cavity method are challenging since they require us to understand probability distributions of messages. We study analogous transitions in a so-called "zero-temperature inference" model, where this distribution is supported only on the most-likely messages. Furthermore, whenever several messages are equally likely, we break the tie by choosing among them with equal probability. While the resulting analysis does not give the correct values of the thresholds, it does reproduce some of the qualitative features of the system. It predicts a first-order detectability transition whenever $q > 2$, while the finite-temperature cavity method shows that this is the case only when $q > 4$. It also has a regime analogous to the "hard but detectable" phase, where the community structure can be partially recovered, but only when the initial messages are sufficiently accurate. Finally, we study a semisupervised setting where we are given the correct labels for a fraction $\rho$ of the nodes. For $q > 2$, we find a regime where the accuracy jumps discontinuously at a critical value of $\rho$.
1312.0641
Simple Bounds for Noisy Linear Inverse Problems with Exact Side Information
cs.IT math.IT math.OC math.ST stat.TH
This paper considers the linear inverse problem where we wish to estimate a structured signal $x$ from its corrupted observations. When the problem is ill-posed, it is natural to make use of a convex function $f(\cdot)$ that exploits the structure of the signal. For example, $\ell_1$ norm can be used for sparse signals. To carry out the estimation, we consider two well-known convex programs: 1) Second order cone program (SOCP), and, 2) Lasso. Assuming Gaussian measurements, we show that, if precise information about the value $f(x)$ or the $\ell_2$-norm of the noise is available, one can do a particularly good job at estimation. In particular, the reconstruction error becomes proportional to the "sparsity" of the signal rather than the ambient dimension of the noise vector. We connect our results to existing works and provide a discussion on the relation of our results to the standard least-squares problem. Our error bounds are non-asymptotic and sharp, they apply to arbitrary convex functions and do not assume any distribution on the noise.
1312.0649
Dynamics of Trends and Attention in Chinese Social Media
cs.SI cs.CY physics.soc-ph
There has been a tremendous rise in the growth of online social networks all over the world in recent years. It has facilitated users to generate a large amount of real-time content at an incessant rate, all competing with each other to attract enough attention and become popular trends. While Western online social networks such as Twitter have been well studied, the popular Chinese microblogging network Sina Weibo has had relatively lower exposure. In this paper, we analyze in detail the temporal aspect of trends and trend-setters in Sina Weibo, contrasting it with earlier observations in Twitter. We find that there is a vast difference in the content shared in China when compared to a global social network such as Twitter. In China, the trends are created almost entirely due to the retweets of media content such as jokes, images and videos, unlike Twitter where it has been shown that the trends tend to have more to do with current global events and news stories. We take a detailed look at the formation, persistence and decay of trends and examine the key topics that trend in Sina Weibo. One of our key findings is that retweets are much more common in Sina Weibo and contribute a lot to creating trends. When we look closer, we observe that most trends in Sina Weibo are due to the continuous retweets of a small percentage of fraudulent accounts. These fake accounts are set up to artificially inflate certain posts, causing them to shoot up into Sina Weibo's trending list, which are in turn displayed as the most popular topics to users.
1312.0650
Differential Games of Competition in Online Content Diffusion
cs.SI cs.GT
Access to online contents represents a large share of the Internet traffic. Most such contents are multimedia items which are user-generated, i.e., posted online by the contents' owners. In this paper we focus on how those who provide contents can leverage online platforms in order to profit from their large base of potential viewers. Actually, platforms like Vimeo or YouTube provide tools to accelerate the dissemination of contents, i.e., recommendation lists and other re-ranking mechanisms. Hence, the popularity of a content can be increased by paying a cost for advertisement: doing so, it will appear with some priority in the recommendation lists and will be accessed more frequently by the platform users. Ultimately, such acceleration mechanism engenders a competition among online contents to gain popularity. In this context, our focus is on the structure of the acceleration strategies which a content provider should use in order to optimally promote a content given a certain daily budget. Such a best response indeed depends on the strategies adopted by competing content providers. Also, it is a function of the potential popularity of a content and the fee paid for the platform advertisement service. We formulate the problem as a differential game and we solve it for the infinite horizon case by deriving the structure of certain Nash equilibria of the game.