| id | title | categories | abstract |
|---|---|---|---|
1402.1726 | For-all Sparse Recovery in Near-Optimal Time | cs.DS cs.IT math.IT | An approximate sparse recovery system in $\ell_1$ norm consists of parameters
$k$, $\epsilon$, $N$, an $m$-by-$N$ measurement matrix $\Phi$, and a recovery
algorithm, $\mathcal{R}$. Given a vector, $\mathbf{x}$, the system approximates
$\mathbf{x}$ by $\widehat{\mathbf{x}} = \mathcal{R}(\Phi\mathbf{x})$, which must satisfy
$\|\widehat{\mathbf{x}}-\mathbf{x}\|_1 \leq
(1+\epsilon)\|\mathbf{x}-\mathbf{x}_k\|_1$. We consider the 'for all' model, in
which a single matrix $\Phi$, possibly 'constructed' non-explicitly using the
probabilistic method, is used for all signals $\mathbf{x}$. The best existing
sublinear algorithm by Porat and Strauss (SODA'12) uses $O(\epsilon^{-3}
k\log(N/k))$ measurements and runs in time $O(k^{1-\alpha}N^\alpha)$ for any
constant $\alpha > 0$.
In this paper, we improve the number of measurements to $O(\epsilon^{-2} k
\log(N/k))$, matching the best existing upper bound (attained by super-linear
algorithms), and the runtime to $O(k^{1+\beta}\textrm{poly}(\log
N,1/\epsilon))$, with a modest restriction that $\epsilon \leq (\log k/\log
N)^{\gamma}$, for any constants $\beta,\gamma > 0$. When $k\leq \log^c N$ for
some $c>0$, the runtime is reduced to $O(k\textrm{poly}(\log N,1/\epsilon))$. With
no restrictions on $\epsilon$, we have an approximate recovery system with $m
= O(k/\epsilon \log(N/k)((\log N/\log k)^\gamma + 1/\epsilon))$ measurements.
|
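The $\ell_1/\ell_1$ guarantee above is easy to state in code. Below is a minimal numpy sketch (not the paper's recovery algorithm, which depends on the construction of $\Phi$) that computes the best $k$-term approximation error $\|\mathbf{x}-\mathbf{x}_k\|_1$ and checks the bound for a candidate reconstruction:

```python
import numpy as np

def best_k_term_error_l1(x, k):
    """l1 error of the best k-term approximation, ||x - x_k||_1,
    where x_k keeps the k largest-magnitude entries of x."""
    idx = np.argsort(np.abs(x))[::-1]  # indices by decreasing magnitude
    tail = idx[k:]                     # entries dropped by x_k
    return np.abs(x[tail]).sum()

def satisfies_l1_guarantee(x, x_hat, k, eps):
    """Check ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1."""
    return np.abs(x_hat - x).sum() <= (1 + eps) * best_k_term_error_l1(x, k)

# Toy example: a nearly 3-sparse signal, and a reconstruction that keeps
# its top-3 entries (so the guarantee holds with room to spare).
rng = np.random.default_rng(0)
x = np.zeros(100)
x[:3] = [5.0, -4.0, 3.0]
x += 0.01 * rng.standard_normal(100)
x_hat = np.where(np.abs(x) >= np.sort(np.abs(x))[-3], x, 0.0)
print(satisfies_l1_guarantee(x, x_hat, k=3, eps=0.1))
```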
1402.1736 | Flows in Complex Networks: Theory, Algorithms, and Application to
Lennard-Jones Cluster Rearrangement | cond-mat.stat-mech cs.CE | A set of analytical and computational tools based on transition path theory
(TPT) is proposed to analyze flows in complex networks. Specifically, TPT is
used to study the statistical properties of the reactive trajectories by which
transitions occur between specific groups of nodes on the network. Sampling
tools are built upon the outputs of TPT that allow one to generate these reactive
trajectories directly, or even transition paths that travel from one group of
nodes to the other without making any detour and carry the same probability
current as the reactive trajectories. These objects make it possible to characterize the
mechanism of the transitions, for example by quantifying the width of the tubes
by which these transitions occur, the location and distribution of their
dynamical bottlenecks, etc. These tools are applied to a network modeling the
dynamics of the Lennard-Jones cluster with 38 atoms (LJ38) and used to
understand the mechanism by which this cluster rearranges itself between its
two most likely states at various temperatures.
|
1402.1754 | Two-stage Sampled Learning Theory on Distributions | math.ST cs.LG math.FA stat.ML stat.TH | We focus on the distribution regression problem: regressing to a real-valued
response from a probability distribution. Although there exist a large number
of similarity measures between distributions, very little is known about their
generalization performance in specific learning tasks. Learning problems
formulated on distributions have an inherent two-stage sampled difficulty: in
practice only samples from sampled distributions are observable, and one has to
build an estimate on similarities computed between sets of points. To the best
of our knowledge, the only existing method with consistency guarantees for
distribution regression requires kernel density estimation as an intermediate
step (which suffers from slow convergence issues in high dimensions), and the
domain of the distributions to be compact Euclidean. In this paper, we provide
theoretical guarantees for a remarkably simple algorithmic alternative to solve
the distribution regression problem: embed the distributions to a reproducing
kernel Hilbert space, and learn a ridge regressor from the embeddings to the
outputs. Our main contribution is to prove the consistency of this technique in
the two-stage sampled setting under mild conditions (on separable, topological
domains endowed with kernels). For a given total number of observations, we
derive convergence rates as an explicit function of the problem difficulty. As
a special case, we answer a 15-year-old open question: we establish the
consistency of the classical set kernel [Haussler, 1999; Gartner et al., 2002]
in regression, and cover more recent kernels on distributions, including those
due to [Christmann and Steinwart, 2010].
|
1402.1757 | Frequency-Based Patrolling with Heterogeneous Agents and Limited
Communication | cs.MA cs.AI | This paper investigates multi-agent frequency-based patrolling of
intersecting circle graphs under conditions where graph nodes have non-uniform
visitation requirements and agents have limited ability to communicate. The
task is modeled as a partially observable Markov decision process, and a
reinforcement learning solution is developed. Each agent generates its own
policy from Markov chains, and policies are exchanged only when agents occupy
the same or adjacent nodes. This constraint on policy exchange models sparse
communication conditions over large, unstructured environments. Empirical
results provide perspectives on convergence properties, agent cooperation, and
generalization of learned patrolling policies to new instances of the task. The
emergent behavior indicates learned coordination strategies between
heterogeneous agents for patrolling large, unstructured regions as well as the
ability to generalize to dynamic variation in node visitation requirements.
|
1402.1759 | Performance Improvement of OFDM System Using Iterative Signal Clipping
With Various Window Techniques for PAPR Reduction | cs.IT math.IT | OFDM signals exhibit high fluctuations, termed the Peak to Average Power
Ratio (PAPR). The problem with OFDM is the frequent occurrence of high peaks in
the time-domain signal, which in turn reduces the efficiency of the transmit
high power amplifier. In this paper we discuss a clipping and filtering
technique which is easy to implement and reduces the amount of PAPR by clipping
the peak of the maximum power signal. This technique clips the OFDM signal to a
predefined threshold and uses a filter to eliminate the out-of-band radiation.
Moreover, an analysis of PAPR is given by varying different filters. The study
is focused on reducing PAPR by an iterative clipping and filtering method. The
symbol error rate performances for different modulation techniques have been
compared. Each clipping noise sample is multiplied by a window function (e.g.
Hanning, Kaiser, or Hamming) to suppress the out-of-band noise. It is shown
that clipping combined with different filtering techniques improves the SER
performance and provides further reduction in PAPR.
|
1402.1761 | On Scalability of Wireless Networks: A Practical Primer for Large Scale
Cooperation | cs.IT cs.NI math.IT | An intuitive overview of the scalability of a variety of types of wireless
networks is presented. Simple heuristic arguments are demonstrated here for
scaling laws presented in other works, as well as for conditions not previously
considered in the literature. Unicast and multicast messages, topology,
hierarchy, and effects of reliability protocols are discussed. We show how two
key factors, bottlenecks and erasures, can often dominate the network scaling
behavior. Scaling of throughput or delay with the number of transmitting
nodes, the number of receiving nodes, and the file size is described.
|
1402.1774 | From the Information Bottleneck to the Privacy Funnel | cs.IT math.IT | We focus on the privacy-utility trade-off encountered by users who wish to
disclose to an analyst some information that is correlated with their private
data, in the hope of receiving some utility. We rely on a general privacy
statistical inference framework, under which data is transformed before it is
disclosed, according to a probabilistic privacy mapping. We show that when the
log-loss is introduced in this framework in both the privacy metric and the
distortion metric, the privacy leakage and the utility constraint can be
reduced to the mutual information between private data and disclosed data, and
between non-private data and disclosed data respectively. We justify the
relevance and generality of the privacy metric under the log-loss by proving
that the inference threat under any bounded cost function can be upper-bounded
by an explicit function of the mutual information between private data and
disclosed data. We then show that the privacy-utility tradeoff under the
log-loss can be cast as the non-convex Privacy Funnel optimization, and we
leverage its connection to the Information Bottleneck, to provide a greedy
algorithm that is locally optimal. We evaluate its performance on the US census
dataset.
|
1402.1777 | On the Dynamics of Social Media Popularity: A YouTube Case Study | cs.SI physics.soc-ph | Understanding the factors that impact the popularity dynamics of social media
can drive the design of effective information services, besides providing
valuable insights to content generators and online advertisers. Taking YouTube
as case study, we analyze how video popularity evolves since upload, extracting
popularity trends that characterize groups of videos. We also analyze the
referrers that lead users to videos, correlating them, along with features of
the video and early popularity measures, with the popularity trend and total observed
popularity the video will experience. Our findings provide fundamental
knowledge about popularity dynamics and its implications for services such as
advertising and search.
|
1402.1778 | Analysis of a heterogeneous social network of humans and cultural
objects | cs.SI cs.CY physics.data-an physics.soc-ph | Modern online social platforms enable their members to be involved in a broad
range of activities like getting friends, joining groups, posting/commenting
resources and so on. In this paper we investigate whether a correlation emerges
across the different activities a user can take part in. To perform our
analysis we focused on aNobii, a social platform with a world-wide user base of
book readers, who like to post their readings, give ratings, review books and
discuss them with friends and fellow readers. aNobii presents a heterogeneous
structure: i) part social network, with user-to-user interactions, ii) part
interest network, with the management of book collections, and iii) part
folksonomy, with books that are tagged by the users. We analyzed a complete and
anonymized snapshot of aNobii and we focused on three specific activities a
user can perform, namely her tagging behavior, her tendency to join groups and
her aptitude to compile a wishlist reporting the books she is planning to read.
In this way each user is associated with a tag-based, a group-based and a
wishlist-based profile. Experimental analysis carried out by means of
Information Theory tools like entropy and mutual information suggests that
tag-based and group-based profiles are in general more informative than
wishlist-based ones. Furthermore, we discover that the degree of correlation
between the three profiles associated with the same user tends to be small.
Hence, user profiling cannot be reduced to considering just any one type of
user activity (however important), and it is crucial to incorporate multiple
dimensions to effectively describe users' preferences and behavior.
|
1402.1780 | Cascading Failures in Power Grids - Analysis and Algorithms | cs.SY | This paper focuses on cascading line failures in the transmission system of
the power grid. Recent large-scale power outages demonstrated the limitations
of percolation- and epidemic-based tools in modeling cascades. Hence, we
study cascades by using computational tools and a linearized power flow model.
We first obtain results regarding the Moore-Penrose pseudo-inverse of the power
grid admittance matrix. Based on these results, we study the impact of a single
line failure on the flows on other lines. We also illustrate via simulation the
impact of the distance and resistance distance on the flow increase following a
failure, and discuss the difference from the epidemic models. We then study the
cascade properties, considering metrics such as the distance between failures
and the fraction of demand (load) satisfied after the cascade (yield). We use
the pseudo-inverse of the admittance matrix to develop an efficient algorithm to
identify the cascading failure evolution, which can be a building block for
cascade mitigation. Finally, we show that finding the set of lines whose
removal has the most significant impact (under various metrics) is NP-Hard and
introduce a simple heuristic for the minimum yield problem. Overall, the
results demonstrate that using the resistance distance and the pseudo-inverse
of the admittance matrix provides important insights and can support the
development of efficient algorithms.
|
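For intuition, the linearized power flow computation that such cascade analysis builds on can be written directly in terms of the Moore-Penrose pseudo-inverse of the admittance (weighted Laplacian) matrix. The sketch below is a generic illustration, not the paper's algorithm; the toy grid, susceptances, and injections are invented:

```python
import numpy as np

def dc_flows(n, lines, injections):
    """Phase angles and line flows under the linearized (DC) power flow model.
    lines: dict {(i, j): susceptance}; injections must sum to zero."""
    A = np.zeros((n, n))                     # admittance (weighted Laplacian) matrix
    for (i, j), b in lines.items():
        A[i, i] += b; A[j, j] += b
        A[i, j] -= b; A[j, i] -= b
    theta = np.linalg.pinv(A) @ injections   # Moore-Penrose pseudo-inverse
    return {(i, j): b * (theta[i] - theta[j]) for (i, j), b in lines.items()}

# 4-node toy grid: fail line (0, 1) and observe flow redistribution elsewhere.
lines = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0, (0, 2): 1.0}
p = np.array([1.0, 0.0, 0.0, -1.0])          # generator at node 0, load at node 3
before = dc_flows(4, lines, p)
after = dc_flows(4, {l: b for l, b in lines.items() if l != (0, 1)}, p)
print({l: round(after[l] - before[l], 3) for l in after})
```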
1402.1783 | Active Clustering with Model-Based Uncertainty Reduction | cs.LG cs.CV stat.ML | Semi-supervised clustering seeks to augment traditional clustering methods by
incorporating side information provided via human expertise in order to
increase the semantic meaningfulness of the resulting clusters. However, most
current methods are \emph{passive} in the sense that the side information is
provided beforehand and selected randomly. This may require a large number of
constraints, some of which could be redundant, unnecessary, or even detrimental
to the clustering results. Thus in order to scale such semi-supervised
algorithms to larger problems it is desirable to pursue an \emph{active}
clustering method---i.e. an algorithm that maximizes the effectiveness of the
available human labor by only requesting human input where it will have the
greatest impact. Here, we propose a novel online framework for active
semi-supervised spectral clustering that selects pairwise constraints as
clustering proceeds, based on the principle of uncertainty reduction. Using a
first-order Taylor expansion, we decompose the expected uncertainty reduction
problem into a gradient and a step-scale, computed via an application of matrix
perturbation theory and cluster-assignment entropy, respectively. The resulting
model is used to estimate the uncertainty reduction potential of each sample in
the dataset. We then present the human user with pairwise queries with respect
to only the best candidate sample. We evaluate our method using three different
image datasets (faces, leaves and dogs), a set of common UCI machine learning
datasets and a gene dataset. The results validate our decomposition formulation
and show that our method is consistently superior to existing state-of-the-art
techniques, as well as being robust to noise and to unknown numbers of
clusters.
|
1402.1792 | Binary Excess Risk for Smooth Convex Surrogates | cs.LG stat.ML | In statistical learning theory, convex surrogates of the 0-1 loss are highly
preferred because of the computational and theoretical virtues that convexity
brings in. This is of even more importance if we consider smooth surrogates, as
witnessed by the fact that smoothness is further beneficial both
computationally, by attaining an {\it optimal} convergence rate for
optimization, and in a statistical sense, by providing an improved {\it
optimistic} rate for the generalization bound. In this paper we investigate the
smoothness property from the viewpoint of statistical consistency and show how
it affects the binary excess risk. We show that in contrast to optimization and
generalization errors that favor the choice of smooth surrogate loss, the
smoothness of loss function may degrade the binary excess risk. Motivated by
this negative result, we provide a unified analysis that integrates
optimization error, generalization bound, and the error in translating convex
excess risk into a binary excess risk when examining the impact of smoothness
on the binary excess risk. We show that under favorable conditions appropriate
choice of smooth convex loss will result in a binary excess risk that is better
than $O(1/\sqrt{n})$.
|
1402.1794 | In silico Proteome Cleavage Reveals Iterative Digestion Strategy for
High Sequence Coverage | q-bio.GN cs.CE | In the post-genome era, biologists have sought to measure the complete
complement of proteins, termed proteomics. Currently, the most effective method
to measure the proteome is with shotgun, or bottom-up, proteomics, in which the
proteome is digested into peptides that are identified followed by protein
inference. Despite continuous improvements to all steps of the shotgun
proteomics workflow, observed proteome coverage is often low; some proteins are
identified by a single peptide sequence. Complete proteome sequence coverage
would allow comprehensive characterization of RNA splicing variants and all
post-translational modifications, which would drastically improve the accuracy
of biological models. There are many reasons for the sequence coverage deficit,
but ultimately peptide length determines sequence observability. Peptides that
are too short are lost because they match many protein sequences and their true
origin is ambiguous. The maximum observable peptide length is determined by
several analytical challenges. This paper explores computationally how peptide
lengths produced from several common proteome digestion methods limit
observable proteome coverage. Iterative proteome cleavage strategies are also
explored. These simulations reveal that maximized proteome coverage can be
achieved by use of an iterative digestion protocol involving multiple proteases
and chemical cleavages that theoretically allow 91.1% proteome coverage.
|
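A minimal sketch of such an in-silico digestion, assuming a trypsin-like cleavage rule (cut after K or R, except before P) and an illustrative observable peptide length window of 7 to 35 residues; neither the rule set nor the window is taken from the paper:

```python
import re

def digest(protein, rule=r'(?<=[KR])(?!P)'):
    """In-silico digestion; the default rule is trypsin-like: cleave after
    K or R, but not when the next residue is P."""
    return [p for p in re.split(rule, protein) if p]

def coverage(protein, peptides, min_len=7, max_len=35):
    """Fraction of residues covered by peptides of observable length."""
    covered = [False] * len(protein)
    pos = 0
    for pep in peptides:                 # peptides appear in sequence order
        if min_len <= len(pep) <= max_len:
            for k in range(pos, pos + len(pep)):
                covered[k] = True
        pos += len(pep)
    return sum(covered) / len(protein)

seq = "MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFKDLGEEHFKGLVLIAFSQYLQQCPFDEHVK"
peps = digest(seq)
print(peps)                              # too-short peptides are unobservable
print(f"coverage: {coverage(seq, peps):.2f}")
```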
1402.1801 | Efficient Low Dose X-ray CT Reconstruction through Sparsity-Based MAP
Modeling | stat.AP cs.CV | Ultra low radiation dose in X-ray Computed Tomography (CT) is an important
clinical objective in order to minimize the risk of carcinogenesis. Compressed
Sensing (CS) enables significant reductions in radiation dose to be achieved by
producing diagnostic images from a limited number of CT projections. However,
the excessive computation time that conventional CS-based CT reconstruction
typically requires has limited clinical implementation. In this paper, we first
demonstrate that a thorough analysis of CT reconstruction through a Maximum a
Posteriori objective function results in a weighted compressive sensing
problem. This analysis enables us to formulate a low dose fan beam and helical
cone beam CT reconstruction. Subsequently, we provide an efficient solution to
the formulated CS problem based on a Fast Composite Splitting Algorithm-Latent
Expected Maximization (FCSA-LEM) algorithm. In the proposed method we use
a pseudo-polar Fourier transform as the measurement matrix in order to decrease
the computational complexity; and rebinning of the projections to parallel rays
in order to extend its application to fan beam and helical cone beam scans. The
weight involved in the proposed weighted CS model, denoted by Error Adaptation
Weight (EAW), is calculated based on the statistical characteristics of CT
reconstruction and is a function of Poisson measurement noise and rebinning
interpolation error. Simulation results show that the low computational
complexity of the proposed method makes fast recovery of the CT images
possible, and that using EAW reduces the reconstruction error by one order of
magnitude. Recovery of a high quality 512$\times$512 image was achieved in less than 20 sec on a
desktop computer without numerical optimizations.
|
1402.1814 | Foundation for Frequent Pattern Mining Algorithms Implementation | cs.DB | With the development of IT technologies, the amount of accumulated
data is also increasing, and thus the role of data mining comes into the
picture. Association rule mining has become one of the significant tasks of
descriptive data mining, which can be defined as discovering meaningful
patterns from large collections of data. Frequent pattern mining algorithms
determine the frequent patterns in a database, and mining frequent itemsets is
a fundamental part of association rule mining. Many algorithms have been
proposed over the last decades, major ones including Apriori, Direct Hashing
and Pruning (DHP), FP-Growth, ECLAT, etc. The aim of this study is to analyze
the existing techniques for mining frequent patterns and to evaluate their
performance by comparing the Apriori and DHP algorithms in terms of candidate
generation and database and transaction pruning. This creates a foundation for
developing newer algorithms for frequent pattern mining.
|
1402.1815 | On the Performance of Optimized Dense Device-to-Device Wireless Networks | cs.IT math.IT | We consider a D2D wireless network where $n$ users are densely deployed in a
squared planar region and communicate with each other without the help of a
wired infrastructure. For this network, we examine the 3-phase hierarchical
cooperation (HC) scheme and the 2-phase improved HC scheme based on the concept
of {\em network multiple access}. Exploiting recent results on the optimality
of treating interference as noise in Gaussian interference channels, we
optimize the achievable average per-link rate and not just its scaling law. In
addition, we provide further improvements on both the previously proposed
hierarchical cooperation schemes by a more efficient use of TDMA and spatial
reuse. Thanks to our explicit achievable rate expressions, we can compare the
HC scheme with multihop routing (MR), where the latter can be regarded as the
current practice of D2D wireless networks. Our results show that the improved
and optimized HC schemes yield very significant rate gains over MR in realistic
conditions of channel propagation exponents, signal to noise ratio, and number
of users. This sheds light on the long-standing question about the real
advantage of the HC scheme over MR beyond the well-known scaling law analysis. In
contrast, we also show that our rate optimization is non-trivial, since when HC
is applied with off-the-shelf choice of the system parameters, no significant
rate gain with respect to MR is achieved. We also show that for large pathloss
exponent the sum rate is a nearly linear function of the number of users $n$ in
the range of networks of practical size. This also sheds light on a
long-standing dispute on the effective achievability of linear sum rate scaling
with HC. Finally, we notice that the achievable sum rate for large $\alpha$ is
much larger than for small $\alpha$. This suggests that the HC scheme may be a very
effective approach for networks operating at mm-waves.
|
1402.1834 | The Generalized Statistical Complexity of PolSAR Data | cs.IT math.IT | This paper presents and discusses the use of a new feature for PolSAR
imagery: the Generalized Statistical Complexity. This measure is able to
capture the disorder of the data by means of the entropy, as well as its
departure from a reference distribution. The latter component is obtained by
measuring a stochastic distance between two models: the $\mathcal G^0$ and the
Gamma laws. Preliminary results on the intensity components of an AIRSAR image of
San Francisco are encouraging.
|
1402.1862 | Periodic Behaviors in Constrained Multi-agent Systems | cs.SY cs.MA | In this paper, we provide two discrete-time multi-agent models which generate
periodic behaviors. The first one is a multi-agent system of identical double
integrators with input saturation constraints, while the other one is a
multi-agent system of identical neutrally stable systems with input saturation
constraints. In each case, we show that if the feedback gain parameters of the
local controller satisfy a certain condition, the multi-agent system exhibits a
periodic solution.
|
1402.1864 | An Inequality with Applications to Structured Sparsity and Multitask
Dictionary Learning | cs.LG stat.ML | From concentration inequalities for the suprema of Gaussian or Rademacher
processes an inequality is derived. It is applied to sharpen existing and to
derive novel bounds on the empirical Rademacher complexities of unit balls in
various norms appearing in the context of structured sparsity and multitask
dictionary learning or matrix factorization. A key role is played by the
largest eigenvalue of the data covariance matrix.
|
1402.1869 | On the Number of Linear Regions of Deep Neural Networks | stat.ML cs.LG cs.NE | We study the complexity of functions computable by deep feedforward neural
networks with piecewise linear activations in terms of the symmetries and the
number of linear regions that they have. Deep networks are able to sequentially
map portions of each layer's input-space to the same output. In this way, deep
models compute functions that react equally to complicated patterns of
different inputs. The compositional structure of these functions enables them
to re-use pieces of computation exponentially often in terms of the network's
depth. This paper investigates the complexity of such compositional maps and
contributes new theoretical results regarding the advantage of depth for neural
networks with piecewise linear activation functions. In particular, our
analysis is not specific to a single family of models, and as an example, we
employ it for rectifier and maxout networks. We improve complexity bounds from
pre-existing work and investigate the behavior of units in higher layers.
|
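A linear region here corresponds to a set of inputs sharing one on/off pattern of the ReLU units, and for a scalar input the regions crossed by a line segment can be counted directly. A small sketch with random Gaussian weights (the architectures are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_layers(sizes):
    """Random Gaussian weights for a fully connected net with these layer sizes."""
    return [(rng.standard_normal((m, k)) / np.sqrt(m), rng.standard_normal(k))
            for m, k in zip(sizes[:-1], sizes[1:])]

def activation_patterns(x, weights):
    """Concatenated on/off pattern of all ReLU units, one row per scalar input."""
    h = x.reshape(-1, 1)
    pats = []
    for W, b in weights[:-1]:              # hidden layers only
        h = np.maximum(h @ W + b, 0.0)
        pats.append(h > 0)
    return np.hstack(pats)

def count_regions_1d(weights, lo=-3.0, hi=3.0, n=100001):
    """Linear regions crossed by the segment [lo, hi]: each maximal run of a
    constant activation pattern is one region (a dense-grid approximation)."""
    P = activation_patterns(np.linspace(lo, hi, n), weights)
    return 1 + int(np.any(P[1:] != P[:-1], axis=1).sum())

print("one hidden layer of 12 units:", count_regions_1d(random_layers([1, 12, 1])))
print("three hidden layers of 4:   ", count_regions_1d(random_layers([1, 4, 4, 4, 1])))
```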
1402.1879 | Sparse Illumination Learning and Transfer for Single-Sample Face
Recognition with Image Corruption and Misalignment | cs.CV | Single-sample face recognition is one of the most challenging problems in
face recognition. We propose a novel algorithm to address this problem based on
a sparse representation based classification (SRC) framework. The new algorithm
is robust to image misalignment and pixel corruption, and is able to reduce
required gallery images to one sample per class. To compensate for the missing
illumination information traditionally provided by multiple gallery images, a
sparse illumination learning and transfer (SILT) technique is introduced. The
illumination in SILT is learned by fitting illumination examples of auxiliary
face images from one or more additional subjects with a sparsely-used
illumination dictionary. By enforcing a sparse representation of the query
image in the illumination dictionary, the SILT can effectively recover and
transfer the illumination and pose information from the alignment stage to the
recognition stage. Our extensive experiments have demonstrated that the new
algorithms significantly outperform the state of the art in the single-sample
regime and with fewer restrictions. In particular, the single-sample face
alignment accuracy is comparable to that of the well-known Deformable SRC
algorithm using multiple gallery images per class. Furthermore, the face
recognition accuracy exceeds those of the SRC and Extended SRC algorithms using
hand labeled alignment initialization.
|
1402.1881 | Tactical Fixed Job Scheduling with Spread-Time Constraints | cs.DS cs.CE | We address the tactical fixed job scheduling problem with spread-time
constraints. In such a problem, there are a fixed number of classes of machines
and a fixed number of groups of jobs. Jobs of the same group can only be
processed by machines of a given set of classes. All jobs have their fixed
start and end times. Each machine is associated with a cost according to its
machine class. Machines have spread-time constraints, with which each machine
is only available for $L$ consecutive time units from the start time of the
earliest job assigned to it. The objective is to minimize the total cost of the
machines used to process all the jobs. For this strongly NP-hard problem, we
develop a branch-and-price algorithm, which solves instances with up to $300$
jobs, as compared with CPLEX, which cannot solve instances of $100$ jobs. We
further investigate the influence of machine flexibility by computational
experiments. Our results show that limited machine flexibility is sufficient in
most situations.
|
1402.1892 | Thresholding Classifiers to Maximize F1 Score | stat.ML cs.IR cs.LG | This paper provides new insight into maximizing F1 scores in the context of
binary classification and also in the context of multilabel classification. The
harmonic mean of precision and recall, F1 score is widely used to measure the
success of a binary classifier when one class is rare. Micro average, macro
average, and per instance average F1 scores are used in multilabel
classification. For any classifier that produces a real-valued output, we
derive the relationship between the best achievable F1 score and the
decision-making threshold that achieves this optimum. As a special case, if the
classifier outputs are well-calibrated conditional probabilities, then the
optimal threshold is half the optimal F1 score. As another special case, if the
classifier is completely uninformative, then the optimal behavior is to
classify all examples as positive. Since the actual prevalence of positive
examples typically is low, this behavior can be considered undesirable. As a
case study, we discuss the results, which can be surprising, of applying this
procedure when predicting 26,853 labels for Medline documents.
|
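A small numpy sketch of the threshold sweep for the binary case, which also checks the calibrated-probability special case stated above on synthetic data:

```python
import numpy as np

def best_f1_threshold(y_true, scores):
    """Sweep all observed scores as thresholds and return the one maximizing
    F1 = 2*TP / (predicted positives + actual positives)."""
    order = np.argsort(-scores)
    tp = np.cumsum(y_true[order])          # TPs when the top-k are predicted positive
    k = np.arange(1, len(scores) + 1)      # number predicted positive
    f1 = 2.0 * tp / (k + y_true.sum())
    i = int(np.argmax(f1))
    return scores[order][i], float(f1[i])

# With well-calibrated probabilities, the optimal threshold should be close
# to half of the optimal F1 score, as the abstract states.
rng = np.random.default_rng(0)
p = rng.uniform(size=100000)                      # calibrated scores
y = (rng.uniform(size=p.size) < p).astype(int)    # labels drawn from those probabilities
t, f1 = best_f1_threshold(y, p)
print(f"threshold {t:.3f}  vs  F1/2 = {f1 / 2:.3f}")
```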
1402.1896 | Correlated Orienteering Problem and its Application to Persistent
Monitoring Tasks | cs.RO | We propose a novel non-linear extension to the Orienteering Problem (OP),
called the Correlated Orienteering Problem (COP). We use COP to model the
planning of informative tours for the persistent monitoring of a spatiotemporal
field with time-invariant spatial correlations, in which the tours are
constrained to have limited length. Our focus in this paper is QCOP, a quadratic
COP formulation that only looks at correlations between neighboring nodes in a
node network. The main feature of QCOP is a quadratic utility function
capturing the said spatial correlation. QCOP may be solved using mixed integer
quadratic programming (MIQP), with the resulting anytime algorithm capable of
planning multiple disjoint tours that maximize the quadratic utility. In
particular, our algorithm can quickly plan a near-optimal tour over a network
with up to $150$ nodes. Besides performing extensive simulation studies to
verify the algorithm's correctness and characterize its performance, we also
successfully applied it to two realistic persistent monitoring tasks: (i)
estimation over a synthetic spatiotemporal field, and (ii) estimating the
temperature distribution in the state of Massachusetts.
|
1402.1899 | Analysis of A Nonsmooth Optimization Approach to Robust Estimation | cs.SY math.OC | In this paper, we consider the problem of identifying a linear map from
measurements which are subject to intermittent and arbitrarily large errors.
This is a fundamental problem in many estimation-related applications such as
fault detection, state estimation in lossy networks, hybrid system
identification, robust estimation, etc. The problem is hard because it exhibits
some intrinsic combinatorial features. Therefore, obtaining an effective
solution necessitates relaxations that are both solvable at a reasonable cost
and effective in the sense that they can return the true parameter vector. The
current paper discusses a nonsmooth convex optimization approach and provides a
new analysis of its behavior. In particular, it is shown that under appropriate
conditions on the data, an exact estimate can be recovered from data corrupted
by a large (even infinite) number of gross errors.
|
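The canonical instance of such a nonsmooth convex relaxation is least absolute deviations, solvable as a linear program. A sketch of that standard formulation (not necessarily the paper's exact program), showing exact recovery under sparse gross errors:

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(A, y):
    """Least absolute deviations, min_x ||y - Ax||_1, as a linear program."""
    m, n = A.shape
    # variables z = [x (n), t (m)]; minimize sum(t) s.t. -t <= y - Ax <= t
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
    return res.x[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true
y[rng.choice(100, size=15, replace=False)] += 50 * rng.standard_normal(15)  # gross errors
print(l1_regression(A, y))   # recovers x_true despite 15 corrupted measurements
```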
1402.1921 | A Hybrid Loss for Multiclass and Structured Prediction | cs.LG cs.AI cs.CV | We propose a novel hybrid loss for multiclass and structured prediction
problems that is a convex combination of a log loss for Conditional Random
Fields (CRFs) and a multiclass hinge loss for Support Vector Machines (SVMs).
We provide a sufficient condition for when the hybrid loss is Fisher consistent
for classification. This condition depends on a measure of dominance between
labels--specifically, the gap between the probabilities of the best label and
the second best label. We also prove Fisher consistency is necessary for
parametric consistency when learning models such as CRFs. We demonstrate
empirically that the hybrid loss typically performs at least as well as--and often
better than--both of its constituent losses on a variety of tasks, such as
human action recognition. In doing so we also provide an empirical comparison
of the efficacy of probabilistic and margin based approaches to multiclass and
structured prediction.
|
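A sketch of such a hybrid loss for a single example, assuming a softmax log loss and a Crammer-Singer style multiclass hinge (the precise hinge variant is an assumption here):

```python
import numpy as np

def hybrid_loss(scores, y, alpha=0.5):
    """Convex combination of a CRF-style log loss and a multiclass hinge loss
    for one example; `scores` holds one score per label, `y` is the true label."""
    z = scores - scores.max()                          # numerically stable softmax
    log_loss = np.log(np.exp(z).sum()) - z[y]
    wrong = np.delete(scores, y)
    hinge = max(0.0, 1.0 + wrong.max() - scores[y])    # Crammer-Singer style margin
    return alpha * log_loss + (1.0 - alpha) * hinge

scores = np.array([2.0, 0.5, -1.0])
for a in (1.0, 0.5, 0.0):     # alpha=1 is pure CRF, alpha=0 is pure SVM
    print(a, hybrid_loss(scores, y=0, alpha=a))
```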
1402.1931 | MCA Learning Algorithm for Incident Signals Estimation: A Review | cs.NE | Recently there have been many works on adaptive subspace filtering in the
signal processing literature. Most of them are concerned with tracking the
signal subspace spanned by the eigenvectors corresponding to the eigenvalues of
the covariance matrix of the signal-plus-noise data. Minor Component Analysis
(MCA) is an important tool and has wide application in telecommunications,
antenna array processing, statistical parametric estimation, etc. As an
important feature extraction technique, MCA is a statistical method of
extracting the eigenvector associated with the smallest eigenvalue of the
covariance matrix. In this paper, we present an MCA learning algorithm to
extract the minor component from input signals, together with a learning rate
parameter that ensures fast convergence of the algorithm, since this value has
a direct effect on the convergence of the weight vector and on the resulting
error level. MCA is performed to determine the estimated DOA.
Simulation results will be furnished to illustrate the theoretical results
achieved.
|
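A generic MCA-style iteration can make the role of the learning rate concrete. The sketch below uses plain gradient descent on the Rayleigh quotient with renormalization; it is not the specific algorithm or rate schedule of the paper:

```python
import numpy as np

def minor_component(C, eta=0.05, iters=20000, seed=0):
    """Generic MCA iteration: gradient descent on the Rayleigh quotient with
    renormalization drives w toward the eigenvector of the smallest
    eigenvalue of C (eta must be small relative to the eigenvalue spread)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        r = w @ C @ w                      # current Rayleigh quotient
        w = w - eta * (C @ w - r * w)      # step against the gradient
        w /= np.linalg.norm(w)             # keep the weight vector on the sphere
    return w, float(w @ C @ w)

# Covariance of a toy 4-channel signal; compare with a direct eigendecomposition.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 4)) * np.array([3.0, 2.0, 1.0, 0.3])
C = np.cov(X, rowvar=False)
w, lam = minor_component(C)
print(lam, np.linalg.eigvalsh(C)[0])       # both approximate the smallest eigenvalue
```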
1402.1936 | Integer Set Compression and Statistical Modeling | cs.IT cs.DS math.IT | Compression of integer sets and sequences has been extensively studied for
settings where elements follow a uniform probability distribution. In addition,
methods exist that exploit clustering of elements in order to achieve higher
compression performance. In this work, we address the case where enumeration of
elements may be arbitrary or random, but where statistics are kept in order to
estimate probabilities of elements. We present a recursive subset-size encoding
method that is able to benefit from statistics, explore the effects of
permuting the enumeration order based on element probabilities, and discuss
general properties and possibilities for this class of compression problem.
|
1402.1939 | Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple
Meanings | physics.soc-ph cs.CL | The word-frequency distribution of a text written by an author is well
accounted for by a maximum entropy distribution, the RGF (random group
formation)-prediction. The RGF-distribution is completely determined by the a
priori values of the total number of words in the text (M), the number of
distinct words (N) and the number of repetitions of the most common word
(k_max). It is here shown that this maximum entropy prediction also describes a
text written in Chinese characters. In particular it is shown that although the
same Chinese text written in words and Chinese characters have quite
differently shaped distributions, they are nevertheless both well predicted by
their respective three a priori characteristic values. It is pointed out that
this is analogous to the change in the shape of the distribution when
translating a given text to another language. Another consequence of the
RGF-prediction is that taking a part of a long text will change the input
parameters (M, N, k_max) and consequently also the shape of the frequency
distribution. This is explicitly confirmed for texts written in Chinese
characters. Since the RGF-prediction has no system-specific information beyond
the three a priori values (M, N, k_max), any specific language characteristic
has to be sought in systematic deviations from the RGF-prediction and the
measured frequencies. One such systematic deviation is identified and, through
a statistical information theoretical argument and an extended RGF-model, it is
proposed that this deviation is caused by multiple meanings of Chinese
characters. The effect is stronger for Chinese characters than for Chinese
words. The relation between Zipf's law, the Simon model for texts and the
present results is discussed.
|
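Computing the three a priori values from a text is a short exercise; the sketch below also illustrates the point that taking a part of the text changes (M, N, k_max):

```python
from collections import Counter

def rgf_inputs(text):
    """The three a priori values the RGF prediction is built from: total
    number of words M, number of distinct words N, and the number of
    repetitions of the most common word k_max."""
    counts = Counter(text.lower().split())
    M = sum(counts.values())
    N = len(counts)
    k_max = counts.most_common(1)[0][1]
    return M, N, k_max

sample = "the quick brown fox jumps over the lazy dog and the dog barks"
print(rgf_inputs(sample))                        # (13, 10, 3): 'the' occurs 3 times
print(rgf_inputs(" ".join(sample.split()[:6])))  # a part of the text: new (M, N, k_max)
```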
1402.1946 | Anomaly Detection Based on Access Behavior and Document Rank Algorithm | cs.NI cs.CR cs.IR | Distributed denial of service (DDoS) attacks are an ongoing, dangerous threat
to the Internet. Commonly, DDoS attacks are carried out at the network layer,
e.g. SYN flooding, ICMP flooding and UDP flooding. The intention of these DDoS
attacks is to consume the network bandwidth and deny service to authorized
users of the victim systems. Distinct from attacks at the lower layers, new
application-layer DDoS attacks that use legitimate HTTP requests to overload
victim resources are harder to detect. When these take place during flash-crowd
events at a popular website, the situation becomes very serious.
State-of-the-art approaches cannot handle situations where there is no
considerable deviation between normal activity and attacker activity. Page-rank
and proximity-graph representations of online web accesses take much time in
practice, and a method with lower computational complexity than proximity-graph
search is desirable. Hence we propose a Web Access Table mechanism that holds
data such as "who accessed what and how many times, and their rank on average"
to find anomalous web access behavior. The system has lower computational
complexity and acceptable time complexity.
|
1402.1947 | Classification Tree Diagrams in Health Informatics Applications | cs.IR cs.CV cs.LG | Health informatics deals with the methods used to optimize the acquisition,
storage and retrieval of medical data, and classify information in healthcare
applications. Healthcare analysts are particularly interested in various
computer informatics areas such as knowledge representation from data, anomaly
detection, outbreak detection methods and syndromic surveillance applications.
Although various parametric and non-parametric approaches are being proposed to
classify information from data, classification tree diagrams provide an
interactive visualization to analysts as compared to other methods. In this
work we discuss application of classification tree diagrams to classify
information from medical data in healthcare applications.
|
1402.1956 | Revisiting the Learned Clauses Database Reduction Strategies | cs.AI | In this paper, we revisit an important issue of CDCL-based SAT solvers,
namely the learned clauses database management policies. Our motivation takes
its source from a simple observation on the remarkable performances of both
random and size-bounded reduction strategies. We first derive a simple
reduction strategy, called Size-Bounded Randomized strategy (in short SBR),
that combines maintaining short clauses (of size bounded by k) while randomly
deleting clauses of size greater than k. The resulting strategy outperforms the
state-of-the-art, namely the LBD based one, on SAT instances taken from the
last SAT competition. Reinforced by the interest of keeping short clauses, we
propose several new dynamic variants, and we discuss their performances.
|
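A sketch of the SBR policy as described; the fraction of long clauses to delete is an illustrative knob, since the abstract does not fix it:

```python
import random

def sbr_reduce(clauses, k=3, keep_fraction=0.5, rng=random.Random(0)):
    """Size-Bounded Randomized reduction: keep every clause of size <= k,
    delete a random fraction of the longer ones. keep_fraction is an
    illustrative parameter, not taken from the paper."""
    short = [c for c in clauses if len(c) <= k]
    long_ = [c for c in clauses if len(c) > k]
    survivors = rng.sample(long_, int(len(long_) * keep_fraction))
    return short + survivors

db = [[1, -2], [3, 4, -5], [1, 2, 3, 4], [-1, 5, 6, -7, 8], [2, -3]]
print(sbr_reduce(db, k=3))   # all clauses of size <= 3 survive; long ones are thinned
```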
1402.1958 | Better Optimism By Bayes: Adaptive Planning with Rich Models | cs.AI cs.LG stat.ML | The computational costs of inference and planning have confined Bayesian
model-based reinforcement learning to one of two dismal fates: powerful
Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian
non-parametric models but using simple, myopic planning strategies such as
Thompson sampling. We ask whether it is feasible and truly beneficial to
combine rich probabilistic models with a closer approximation to fully Bayesian
planning. First, we use a collection of counterexamples to show formal problems
with the over-optimism inherent in Thompson sampling. Then we leverage
state-of-the-art techniques in efficient Bayes-adaptive planning and
non-parametric Bayesian methods to perform qualitatively better than both
existing conventional algorithms and Thompson sampling on two contextual
bandit-like problems.
|
1402.1971 | Direct Processing of Run Length Compressed Document Image for
Segmentation and Characterization of a Specified Block | cs.CV | Extracting a block of interest referred to as segmenting a specified block in
an image and studying its characteristics is of general research interest, and
could be challenging if such a segmentation task has to be carried out
directly in a compressed image. This is the objective of the present research
work. The proposal is to evolve a method which would segment and extract a
specified block, and carry out its characterization without decompressing a
compressed image, for two major reasons: most image archives contain
images in compressed format, and decompressing an image demands additional
computing time and space. Specifically in this research work, the proposal is
to work on run-length compressed document images.
|
1402.1973 | Dictionary learning for fast classification based on soft-thresholding | cs.CV cs.LG stat.ML | Classifiers based on sparse representations have recently been shown to
provide excellent results in many visual recognition and classification tasks.
However, the high cost of computing sparse representations at test time is a
major obstacle that limits the applicability of these methods in large-scale
problems, or in scenarios where computational power is restricted. We consider
in this paper a simple yet efficient alternative to sparse coding for feature
extraction. We study a classification scheme that applies the soft-thresholding
nonlinear mapping in a dictionary, followed by a linear classifier. A novel
supervised dictionary learning algorithm tailored for this low complexity
classification architecture is proposed. The dictionary learning problem, which
jointly learns the dictionary and linear classifier, is cast as a difference of
convex (DC) program and solved efficiently with an iterative DC solver. We
conduct experiments on several datasets, and show that our learning algorithm
that leverages the structure of the classification problem outperforms generic
learning procedures. Our simple classifier based on soft-thresholding also
competes with the recent sparse coding classifiers, when the dictionary is
learned appropriately. The adopted classification scheme further requires less
computational time at the testing stage, compared to other classifiers. The
proposed scheme shows the potential of the adequately trained soft-thresholding
mapping for classification and paves the way towards the development of very
efficient classification methods for vision problems.
|
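A sketch of the test-time pipeline as described: a soft-thresholding feature map followed by a linear classifier. The one-sided form max(0, D^T x - alpha) is one common choice of the nonlinearity; the trained D, w, b are stubbed with random values here:

```python
import numpy as np

def soft_threshold_features(X, D, alpha):
    """The test-time feature map: dictionary coefficients passed through a
    single soft-thresholding nonlinearity, max(0, D^T x - alpha)."""
    return np.maximum(X @ D - alpha, 0.0)

def predict(X, D, alpha, w, b):
    """The full low-complexity pipeline: soft-thresholding, then a linear
    classifier. No sparse coding problem is solved at test time."""
    return np.sign(soft_threshold_features(X, D, alpha) @ w + b)

# Shapes only: in the paper, D, w, b come from the supervised DC program.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))    # 5 test points in R^20
D = rng.standard_normal((20, 50))   # dictionary with 50 atoms
w, b = rng.standard_normal(50), 0.0
print(predict(X, D, alpha=0.3, w=w, b=b))
```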
1402.1986 | Mobile Context-Aware Recommendation of Evolving Content:
Contextuel-E-Greedy | cs.AI | We introduce in this paper an algorithm named Contextuel-E-Greedy that
tackles the dynamicity of the user's content. It is based on dynamic
exploration/exploitation tradeoff and can adaptively balance the two aspects by
deciding which situation is most relevant for exploration or exploitation. The
experimental results demonstrate that our algorithm outperforms surveyed
algorithms.
|
1402.1987 | Quantifying Human Mobility Perturbation and Resilience in Natural
Disasters | physics.soc-ph cs.SI | Human mobility is influenced by environmental change and natural disasters.
Researchers have used trip distance distribution, radius of gyration of
movements, and individuals' visited locations to understand and capture human
mobility patterns and trajectories. However, our knowledge of human movements
during natural disasters is limited owing to both a lack of empirical data and
the low precision of available data. Here, we studied human mobility using
high-resolution movement data from individuals in New York City during and for
several days after Hurricane Sandy in 2012. We found the human movements
followed truncated power-law distributions during and after Hurricane Sandy,
although the {\beta} value was noticeably larger during the first 24 hours
after the storm struck. Also, we examined two parameters: the center of mass
and the radius of gyration of each individual's movements. We found that their
values during perturbation states and steady states are highly correlated,
suggesting human mobility data obtained in steady states can possibly predict
the perturbation state. Our results demonstrate that human movement
trajectories experienced significant perturbations during hurricanes, but also
exhibited high resilience. We expect the study will stimulate future research
on the perturbation and inherent resilience of human mobility under the
influence of natural disasters. For example, mobility patterns in coastal urban
areas could be examined as tropical cyclones approach, gain or dissipate in
strength, and as the path of the storm changes. Understanding nuances of human
mobility under the influence of disasters will enable more effective
evacuation, emergency response planning and development of strategies and
policies to reduce fatality, injury, and economic loss.
|
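The two trajectory summaries used above have standard definitions, sketched here on an invented toy trajectory:

```python
import numpy as np

def center_of_mass(points):
    """Mean location of an individual's recorded positions."""
    return points.mean(axis=0)

def radius_of_gyration(points):
    """Root-mean-square distance of the positions from their center of mass."""
    cm = center_of_mass(points)
    return np.sqrt(((points - cm) ** 2).sum(axis=1).mean())

# Toy trajectory (x, y in km): a commuter moving between two places.
traj = np.array([[0, 0], [0.1, 0.2], [5, 5], [5.1, 4.9], [0, 0.1]])
print(center_of_mass(traj), radius_of_gyration(traj))
```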
1402.1992 | Euler/X: A Toolkit for Logic-based Taxonomy Integration | cs.LO cs.DB | We introduce Euler/X, a toolkit for logic-based taxonomy integration. Given
two taxonomies and a set of alignment constraints between them, Euler/X
provides tools for detecting, explaining, and reconciling inconsistencies;
finding all possible merges between (consistent) taxonomies; and visualizing
merge results. Euler/X employs a number of different underlying reasoning
systems, including first-order reasoners (Prover9 and Mace4), answer set
programming (DLV and Potassco), and RCC reasoners (PyRCC8). We demonstrate the
features of Euler/X and provide experimental results showing its feasibility on
various synthetic and real-world examples.
|
1402.2011 | Locality and Availability in Distributed Storage | cs.IT math.IT | This paper studies the problem of code symbol availability: a code symbol is
said to have $(r, t)$-availability if it can be reconstructed from $t$ disjoint
groups of other symbols, each of size at most $r$. For example, $3$-replication
supports $(1, 2)$-availability as each symbol can be read from its $t= 2$ other
(disjoint) replicas, i.e., $r=1$. However, the rate of replication must vanish
like $\frac{1}{t+1}$ as the availability increases.
This paper shows that it is possible to construct codes that can support a
scaling number of parallel reads while keeping the rate to be an arbitrarily
high constant. It further shows that this is possible with the minimum distance
arbitrarily close to the Singleton bound. This paper also presents a bound
demonstrating a trade-off between minimum distance, availability and locality.
Our codes match the aforementioned bound and their construction relies on
combinatorial objects called resolvable designs.
From a practical standpoint, our codes seem useful for distributed storage
applications involving hot data, i.e., the information which is frequently
accessed by multiple processes in parallel.
|
1402.2013 | Foreground segmentation based on multi-resolution and matting | cs.CV | We propose a foreground segmentation algorithm that does foreground
extraction under different scales and refines the result by matting. First, the
input image is filtered and resampled to 5 different resolutions. Then each of
them is segmented by adaptive figure-ground classification and the best
segmentation is automatically selected by an evaluation score that maximizes
the difference between foreground and background. This segmentation is
upsampled to the original size, and a corresponding trimap is built.
Closed-form matting is employed to label the boundary region, and the result is
refined by a final figure-ground classification. Experiments show the success
of our method in treating challenging images with cluttered background and
adapting to a loose initial bounding-box.
|
1402.2016 | Leveraging Long-Term Predictions and Online-Learning in Agent-based
Multiple Person Tracking | cs.CV | We present a multiple-person tracking algorithm, based on combining particle
filters and RVO, an agent-based crowd model that infers collision-free
velocities so as to predict pedestrian's motion. In addition to position and
velocity, our tracking algorithm can estimate the internal goals (desired
destination or desired velocity) of the tracked pedestrian in an online manner,
thus removing the need to specify this information beforehand. Furthermore, we
leverage the longer-term predictions of RVO by deriving a higher-order particle
filter, which aggregates multiple predictions from different prior time steps.
This yields a tracker that can recover from short-term occlusions and spurious
noise in the appearance model. Experimental results show that our tracking
algorithm is suitable for predicting pedestrians' behaviors online without
needing scene priors or hand-annotated goal information, and improves tracking
in real-world crowded scenes under low frame rates.
|
1402.2020 | Binary Stereo Matching | cs.CV | In this paper, we propose a novel binary-based cost computation and
aggregation approach for stereo matching problem. The cost volume is
constructed through bitwise operations on a series of binary strings. Then this
approach is combined with traditional winner-take-all strategy, resulting in a
new local stereo matching algorithm called binary stereo matching (BSM). Since
core algorithm of BSM is based on binary and integer computations, it has a
higher computational efficiency than previous methods. Experimental results on
Middlebury benchmark show that BSM has comparable performance with
state-of-the-art local stereo methods in terms of both quality and speed.
Furthermore, experiments on images with radiometric differences demonstrate
that BSM is more robust than previous methods under these changes, which is
common under real illumination.
|
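A sketch of the bitwise cost idea on census-style binary strings (census is used here only for illustration; the paper builds its own binary strings). The example also shows why binary comparisons tolerate radiometric shifts:

```python
import numpy as np

def census_bits(patch):
    """A census-style binary string: one bit per pixel, set when the pixel
    is brighter than the patch center."""
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return (patch > center).astype(np.uint8).ravel()

def hamming_cost(a, b):
    """Matching cost via bitwise XOR and a popcount-style sum: binary and
    integer operations only, which is the source of the speed advantage."""
    return int(np.bitwise_xor(a, b).sum())

left = np.array([[10, 12, 11], [9, 10, 14], [8, 13, 10]])
right = np.array([[31, 33, 30], [29, 31, 36], [28, 33, 31]])  # same scene, brighter
print(hamming_cost(census_bits(left), census_bits(right)))    # 0: offset is ignored
```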
1402.2025 | Nonlinear Kalman filter based on duality relations between continuous
and discrete-state stochastic processes | cs.SY cond-mat.stat-mech | A new application of duality relations of stochastic processes is
demonstrated. Although conventional usages of the duality relations need
analytical solutions for the dual processes, we here employ numerical solutions
of the dual processes and investigate the usefulness. As a demonstration,
estimation problems of hidden variables in stochastic differential equations
are discussed. Employing algebraic probability theory, a somewhat complicated
birth-death process is derived from the stochastic differential equations, and
an estimation method based on the ensemble Kalman filter is proposed. As a
result, the possibility for making faster computational algorithms based on the
duality concepts is shown.
|
1402.2031 | Deeply Coupled Auto-encoder Networks for Cross-view Classification | cs.CV cs.LG cs.NE | The comparison of heterogeneous samples extensively exists in many
applications, especially in the task of image classification. In this paper, we
propose a simple but effective coupled neural network, called Deeply Coupled
Autoencoder Networks (DCAN), which seeks to build two deep neural networks,
coupled with each other in every corresponding layer. In DCAN, each deep
structure is developed via stacking multiple discriminative coupled
auto-encoders, a denoising auto-encoder trained with maximum margin criterion
consisting of intra-class compactness and inter-class penalty. This single
layer component makes our model simultaneously preserve the local consistency
and enhance its discriminative capability. With an increasing number of layers,
the coupled networks can gradually narrow the gap between the two views.
Extensive experiments on cross-view image classification tasks demonstrate the
superiority of our method over state-of-the-art methods.
|
1402.2032 | An Achievable Rate-Distortion Region for the Multiple Descriptions
Problem | cs.IT math.IT | A multiple-descriptions (MD) coding strategy is proposed and an inner bound
to the achievable rate-distortion region is derived. The scheme utilizes linear
codes. It is shown in two different MD set-ups that the linear coding scheme
achieves a larger rate-distortion region than previously known random coding
strategies. Furthermore, it is shown via an example that the best known random
coding scheme for the set-up can be improved by including additional randomly
generated codebooks.
|
1402.2042 | Ad Hoc Networking With Cost-Effective Infrastructure: Generalized
Capacity Scaling | cs.IT math.IT | Capacity scaling of a large hybrid network with unit node density, consisting
of $n$ wireless ad hoc nodes, base stations (BSs) equipped with multiple
antennas, and one remote central processor (RCP), is analyzed when wired
backhaul links between the BSs and the RCP are rate-limited. We deal with a
general scenario where the number of BSs, the number of antennas at each BS,
and the backhaul link rate can scale at arbitrary rates relative to $n$ (i.e.,
we introduce three scaling parameters). We first derive the minimum backhaul
link rate required to achieve the same capacity scaling law as in the
infinite-capacity backhaul link case. Assuming an arbitrary rate scaling of
each backhaul link, a generalized achievable throughput scaling law is then
analyzed in the network based on using one of pure multihop, hierarchical
cooperation, and two infrastructure-supported routing protocols, and moreover,
three-dimensional information-theoretic operating regimes are explicitly
identified according to the three scaling parameters. In particular, we show
the case where our network, being power-limited, is also fundamentally in
the degrees-of-freedom- or infrastructure-limited regime, or both. In addition,
a generalized cut-set upper bound under the network model is derived by cutting
not only the wireless connections but also the wired connections. It is shown
that our upper bound matches the achievable throughput scaling even under
realistic network conditions such that each backhaul link rate scales slower
than the aforementioned minimum-required backhaul link rate.
|
1402.2043 | Approachability in unknown games: Online learning meets multi-objective
optimization | stat.ML cs.LG math.ST stat.TH | In the standard setting of approachability there are two players and a target
set. The players play repeatedly a known vector-valued game where the first
player wants to have the average vector-valued payoff converge to the target
set, from which the other player tries to exclude it. We revisit this
setting in the spirit of online learning and do not assume that the first
player knows the game structure: she receives an arbitrary vector-valued reward
at every round. She wishes to approach the smallest ("best") possible
set given the observed average payoffs in hindsight. This extension of the
standard setting has implications even when the original target set is not
approachable and when it is not obvious which expansion of it should be
approached instead. We show that it is impossible, in general, to approach the
best target set in hindsight and propose achievable though ambitious
alternative goals. We further propose a concrete strategy to approach these
goals. Our method does not require projection onto a target set and amounts to
switching between scalar regret minimization algorithms that are performed in
episodes. Applications to global cost minimization and to approachability under
sample path constraints are considered.
|
1402.2044 | A Second-order Bound with Excess Losses | stat.ML cs.LG math.ST stat.TH | We study online aggregation of the predictions of experts, and first show new
second-order regret bounds in the standard setting, which are obtained via a
version of the Prod algorithm (and also a version of the polynomially weighted
average algorithm) with multiple learning rates. These bounds are in terms of
excess losses, the differences between the instantaneous losses suffered by the
algorithm and the ones of a given expert. We then demonstrate the interest of
these bounds in the context of experts that report their confidences as a
number in the interval [0,1] using a generic reduction to the standard setting.
We conclude by two other applications in the standard setting, which improve
the known bounds in case of small excess losses and show a bounded regret
against i.i.d. sequences of losses.
|
1402.2056 | Key parameters generation of the navigation data of GPS Simulator | cs.IT math.IT | The development of a GPS (Global Positioning System) signal simulator involves
a number of key technologies, among which the generation of the navigation
message has particular significance. Based on an analysis of the structure of
GPS navigation data, the paper studies the production of the telemetry and
handover words, the parity check code, the time parameters and the satellite
clock. The perturbing force equation and the Lagrange planetary motion equation
are used to extrapolate the ephemeris parameters, whose feasibility is finally
verified with Matlab.
|
1402.2058 | Probabilistic Interpretation of Linear Solvers | math.OC cs.LG cs.NA math.NA math.PR stat.ML | This manuscript proposes a probabilistic framework for algorithms that
iteratively solve unconstrained linear problems $Bx = b$ with positive definite
$B$ for $x$. The goal is to replace the point estimates returned by existing
methods with a Gaussian posterior belief over the elements of the inverse of
$B$, which can be used to estimate errors. Recent probabilistic interpretations
of the secant family of quasi-Newton optimization algorithms are extended.
Combined with properties of the conjugate gradient algorithm, this leads to
uncertainty-calibrated methods with very limited cost overhead over conjugate
gradients, a self-contained novel interpretation of the quasi-Newton and
conjugate gradient algorithms, and a foundation for new nonlinear optimization
methods.
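For orientation, a minimal sketch of the classical conjugate gradient iteration that such probabilistic solvers build on; the Gaussian posterior over the elements of the inverse of $B$ described above is deliberately omitted, and the function and variable names are illustrative:

    import numpy as np

    def conjugate_gradient(B, b, tol=1e-10, max_iter=None):
        # Plain CG for B x = b with symmetric positive definite B.
        n = b.size
        max_iter = max_iter or n
        x = np.zeros(n)
        r = b - B @ x                # residual
        p = r.copy()                 # search direction
        rs = r @ r
        for _ in range(max_iter):
            Bp = B @ p
            alpha = rs / (p @ Bp)    # exact line search along p
            x += alpha * p
            r -= alpha * Bp
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p    # keep directions B-conjugate
            rs = rs_new
        return x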
|
1402.2071 | Attribute Dependencies for Data with Grades | cs.LO cs.DB | This paper examines attribute dependencies in data that involve grades, such
as a grade to which an object is red or a grade to which two objects are
similar. We thus extend the classical agenda by allowing graded, or fuzzy,
attributes instead of Boolean attributes in case of attribute implications, and
allowing approximate match based on degrees of similarity instead of exact
match in case of functional dependencies. In a sense, we move from bivalence,
inherently present in the now-available theories of dependencies, to a more
flexible setting that involves grades. Such a shift has far-reaching
consequences. We argue that a reasonable theory of dependencies may be
developed by making use of mathematical fuzzy logic. Namely, the theory of
dependencies is then based on a solid logic calculus the same way the classical
dependencies are based on classical logic. For instance, rather than handling
degrees of similarity in an ad hoc manner, we consistently treat them as truth
values, the same way as true (match) and false (mismatch) are treated in
classical theories. In addition, several notions intuitively embraced in the
presence of grades, such as a degree of validity of a particular dependence or
a degree of entailment, naturally emerge and receive a conceptually clean
treatment in the presented approach. In the paper, we discuss motivations,
provide basic notions of syntax and semantics, and develop basic results which
include entailment of dependencies, associated closure structures, a logic of
dependencies with two versions of completeness theorem, results and algorithms
regarding complete non-redundant sets of dependencies, relationship to and a
possible reductionist interface to classical dependencies, and relationship to
functional dependencies over domains with similarity.
|
1402.2073 | Mining Images in Biomedical Publications: Detection and Analysis of Gel
Diagrams | cs.IR | Authors of biomedical publications use gel images to report experimental
results such as protein-protein interactions or protein expressions under
different conditions. Gel images offer a concise way to communicate such
findings, not all of which need to be explicitly discussed in the article text.
This fact together with the abundance of gel images and their shared common
patterns makes them prime candidates for automated image mining and parsing. We
introduce an approach for the detection of gel images, and present a workflow
to analyze them. We are able to detect gel segments and panels at high
accuracy, and present preliminary results for the identification of gene names
in these images. While we cannot provide a complete solution at this point, we
present evidence that this kind of image mining is feasible.
|
1402.2086 | Guaranteed Non-quadratic Performance for Quantum Systems with Nonlinear
Uncertainties | quant-ph cs.SY math.OC | This paper presents a robust performance analysis result for a class of
uncertain quantum systems containing sector bounded nonlinearities arising from
perturbations to the system Hamiltonian. An LMI condition is given for
calculating a guaranteed upper bound on a non-quadratic cost function. This
result is illustrated with an example involving a Josephson junction in an
electromagnetic cavity.
|
1402.2088 | Signal Reconstruction Framework Based On Projections Onto Epigraph Set
Of A Convex Cost Function (PESC) | math.OC cs.CV | A new signal processing framework based on making orthogonal Projections onto
the Epigraph Set of a Convex cost function (PESC) is developed. In this way it
is possible to solve convex optimization problems using the well-known
Projections onto Convex Sets (POCS) approach. In this algorithm, the dimension
of the minimization problem is lifted by one and a convex set corresponding to
the epigraph of the cost function is defined. If the cost function is a convex
function in $R^N$, the corresponding epigraph set is also a convex set in
$R^{N+1}$. The PESC method provides globally optimal solutions for
total-variation (TV), filtered variation (FV), $L_1$, $L_2$, and entropic cost
function based convex optimization problems. In this article, the PESC based
denoising and compressive sensing algorithms are developed. Simulation examples
are presented.
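As a sketch of the lifting described above (notation ours): for a convex cost $f: R^N \to R$, the epigraph set is

$$ \mathrm{epi}(f) = \{ (\mathbf{x}, t) \in R^{N+1} : f(\mathbf{x}) \le t \}, $$

which is convex exactly when $f$ is, and the PESC iteration alternates orthogonal projections onto $\mathrm{epi}(f)$ and onto the set encoding the data constraints, in the spirit of POCS, reading the minimizer of $f$ off the first $N$ coordinates of the limit point.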
|
1402.2091 | Artificial Noise Revisited | cs.IT math.IT | The artificial noise (AN) scheme, proposed by Goel and Negi, is being
considered one of the key enabling technologies for secure communications over
multiple-input multiple-output (MIMO) wiretap channels. However, the decrease
in secrecy rate due to the increase in the number of Eve's antennas is not well
understood. In this paper, we develop an analytical framework to characterize
the secrecy rate of the AN scheme as a function of Eve's signal-to-noise ratio
(SNR), Bob's SNR, the number of antennas in each terminal, and the power
allocation scheme. We first derive a closed-form expression for the average
secrecy rate. We then derive a closed-form expression for the asymptotic
instantaneous secrecy rate with a large number of antennas at all terminals.
Finally, we derive simple lower and upper bounds on the average and
instantaneous secrecy rate that provide a tool for the system design.
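For context, the artificial noise construction can be sketched as follows (our notation, simplified): with $\mathbf{H}_b$ denoting Bob's channel, the transmitter sends

$$ \mathbf{x} = \mathbf{V}\mathbf{s} + \mathbf{W}\mathbf{w}, \qquad \mathbf{H}_b \mathbf{W} = \mathbf{0}, $$

so the information symbols $\mathbf{s}$ arrive at Bob free of artificial noise, while the noise $\mathbf{w}$, confined to the null space of $\mathbf{H}_b$, degrades Eve's observation; the power allocation referred to above splits the transmit budget between $\mathbf{s}$ and $\mathbf{w}$.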
|
1402.2092 | Near-Optimally Teaching the Crowd to Classify | cs.LG | How should we present training examples to learners to teach them
classification rules? This is a natural problem when training workers for
crowdsourcing labeling tasks, and is also motivated by challenges in
data-driven online education. We propose a natural stochastic model of the
learners, modeling them as randomly switching among hypotheses based on
observed feedback. We then develop STRICT, an efficient algorithm for selecting
examples to teach to workers. Our solution greedily maximizes a submodular
surrogate objective function in order to select examples to show to the
learners. We prove that our strategy is competitive with the optimal teaching
policy. Moreover, for the special case of linear separators, we prove that an
exponential reduction in error probability can be achieved. Our experiments on
simulated workers as well as three real image annotation tasks on Amazon
Mechanical Turk show the effectiveness of our teaching algorithm.
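A generic greedy loop of the kind the surrogate maximization above relies on; this is a sketch under the assumption of a set-function oracle, not the STRICT objective itself, and all names are illustrative:

    def greedy_select(F, candidates, budget):
        # Greedily pick `budget` examples maximizing a monotone submodular F;
        # greedy achieves a (1 - 1/e) approximation in that setting.
        chosen = set()
        for _ in range(min(budget, len(candidates))):
            base = F(frozenset(chosen))
            # Remaining candidate with the largest marginal gain.
            best = max((c for c in candidates if c not in chosen),
                       key=lambda c: F(frozenset(chosen | {c})) - base)
            chosen.add(best)
        return chosen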
|
1402.2114 | Ubiquitous Smart Home System Using Android Application | cs.CY cs.SY | This paper presents a flexible standalone, low-cost smart home system, which
is based on an Android app communicating with a micro web server that
provides more than just switching functionality. An Arduino Ethernet board is used to
eliminate the use of a personal computer (PC) keeping the cost of the overall
system to a minimum while voice activation is incorporated for switching
functionalities. Devices such as light switches, power plugs, temperature
sensors, humidity sensors, current sensors, intrusion detection sensors,
smoke/gas sensors and sirens have been integrated in the system to demonstrate
the feasibility and effectiveness of the proposed smart home system. The smart
home app is tested and it is able to successfully perform the smart home
operations such as switching functionalities, automatic environmental control
and intrusion detection; in the latter case an email is generated and the
siren is activated.
|
1402.2145 | Using content features to enhance performance of user-based
collaborative filtering | cs.IR | Content-based and collaborative filtering methods are the most successful
solutions in recommender systems. The content-based method relies on item
attributes: it examines the features of a user's favourite items and then
recommends the items whose characteristics are most similar to those of the
favourites. The collaborative filtering method is based on finding similar
items or similar users, which are called item-based and user-based
collaborative filtering, respectively. In this paper we propose a hybrid
method that integrates the collaborative filtering and content-based
approaches. The proposed method can be viewed as a user-based collaborative
filtering technique; however, to find users whose taste is similar to that of
the active user, we use the content features of the item under investigation
to put more emphasis on users' ratings for similar items. In other words, two
users are similar if their ratings are similar on items with similar content.
This is achieved by assigning a weight to each rating when calculating the
similarity of two users. We used the MovieLens dataset to assess the
performance of the proposed method in comparison with basic user-based
collaborative filtering and other popular methods.
|
1402.2188 | Handwritten Character Recognition In Malayalam Scripts- A Review | cs.CV | Handwritten character recognition is one of the most challenging and ongoing
areas of research in the field of pattern recognition. HCR research is mature
for foreign languages like Chinese and Japanese but the problem is much more
complex for Indian languages. The problem becomes even more complicated for
South Indian languages due to their large character sets and the presence of
vowel modifiers and compound characters. This paper provides an overview of
important contributions and advances in offline as well as online handwritten
character recognition of Malayalam scripts.
|
1402.2224 | Characterizing the Sample Complexity of Private Learners | cs.CR cs.LG | In 2008, Kasiviswanathan et al. defined private learning as a combination of
PAC learning and differential privacy. Informally, a private learner is applied
to a collection of labeled individual information and outputs a hypothesis
while preserving the privacy of each individual. Kasiviswanathan et al. gave a
generic construction of private learners for (finite) concept classes, with
sample complexity logarithmic in the size of the concept class. This sample
complexity is higher than what is needed for non-private learners, hence
leaving open the possibility that the sample complexity of private learning may
be sometimes significantly higher than that of non-private learning.
We give a combinatorial characterization of the sample size sufficient and
necessary to privately learn a class of concepts. This characterization is
analogous to the well known characterization of the sample complexity of
non-private learning in terms of the VC dimension of the concept class. We
introduce the notion of probabilistic representation of a concept class, and
our new complexity measure RepDim corresponds to the size of the smallest
probabilistic representation of the concept class.
We show that any private learning algorithm for a concept class C with sample
complexity m implies RepDim(C)=O(m), and that there exists a private learning
algorithm with sample complexity m=O(RepDim(C)). We further demonstrate that a
similar characterization holds for the database size needed for privately
computing a large class of optimization problems and also for the well studied
problem of private data release.
|
1402.2231 | Compressive sensing for dynamic spectrum access networks: Techniques and
tradeoffs | cs.NI cs.IT math.IT | We explore the practical costs and benefits of CS for dynamic spectrum access
(DSA) networks. Firstly, we review several fast and practical techniques for
energy detection without full reconstruction and provide theoretical
guarantees. We also define practical metrics to measure the performance of
these techniques. Secondly, we perform comprehensive experiments comparing the
techniques on real signals captured over the air. Our results show that we can
compressively acquire the signal at a significant compression ratio while
still accurately determining spectral occupancy.
|
1402.2232 | Image Search Reranking | cs.IR cs.CV | Existing methods for image search reranking suffer from the unreliability of
the assumptions under which the initial text-based image search results are
produced, so the retrieved set contains many irrelevant images. Hence the
reranking concept arises: reorder the retrieved images based on the text
around the image, the image's metadata, and the image's visual features. A
number of methods have been proposed for this reranking. The highest-ranked
images are used as noisy training data, and a k-means classification
algorithm is learned to rectify the ranking further. We study the effect of
cross-validation on this training data. The principal novelty of the overall
method lies in combining the text/metadata of an image with its visual
features in order to achieve an automatic reranking of the images.
Supervision is introduced to learn the model weights offline, prior to the
reranking process. While model learning requires manual labeling of the
results for a limited number of queries, the resulting model is
query-independent and therefore applicable to any other query. Examples are
given for a selection of classes such as vehicles and animals.
|
1402.2237 | Coordination Avoidance in Database Systems (Extended Version) | cs.DB | Minimizing coordination, or blocking communication between concurrently
executing operations, is key to maximizing scalability, availability, and high
performance in database systems. However, uninhibited coordination-free
execution can compromise application correctness, or consistency. When is
coordination necessary for correctness? The classic use of serializable
transactions is sufficient to maintain correctness but is not necessary for all
applications, sacrificing potential scalability. In this paper, we develop a
formal framework, invariant confluence, that determines whether an application
requires coordination for correct execution. By operating on application-level
invariants over database states (e.g., integrity constraints), invariant
confluence analysis provides a necessary and sufficient condition for safe,
coordination-free execution. When programmers specify their application
invariants, this analysis allows databases to coordinate only when anomalies
that might violate invariants are possible. We analyze the invariant confluence
of common invariants and operations from real-world database systems (i.e.,
integrity constraints) and applications and show that many are invariant
confluent and therefore achievable without coordination. We apply these results
to a proof-of-concept coordination-avoiding database prototype and demonstrate
sizable performance gains compared to serializable execution, notably a 25-fold
improvement over prior TPC-C New-Order performance on a 200 server cluster.
|
1402.2238 | Information-theoretically Optimal Sparse PCA | cs.IT math.IT math.ST stat.TH | Sparse Principal Component Analysis (PCA) is a dimensionality reduction
technique wherein one seeks a low-rank representation of a data matrix with
additional sparsity constraints on the obtained representation. We consider two
probabilistic formulations of sparse PCA: a spiked Wigner and spiked Wishart
(or spiked covariance) model. We analyze an Approximate Message Passing (AMP)
algorithm to estimate the underlying signal and show, in the high dimensional
limit, that the AMP estimates are information-theoretically optimal. As an
immediate corollary, our results demonstrate that the posterior expectation of
the underlying signal, which is often intractable to compute, can be obtained
using a polynomial-time scheme. Our results also effectively provide a
single-letter characterization of the sparse PCA problem.
|
1402.2255 | Robust Phase Retrieval and Super-Resolution from One Bit Coded
Diffraction Patterns | cs.IT math.IT math.ST stat.AP stat.TH | In this paper we study a realistic setup for phase retrieval, where the
signal of interest is modulated or masked and then for each modulation or mask
a diffraction pattern is collected, producing a coded diffraction pattern (CDP)
[CLM13]. We are interested in the setup where the resolution of the collected
CDP is limited by the Fraunhofer diffraction limit of the imaging system.
We investigate a novel approach based on a geometric quantization scheme of
phase-less linear measurements into (one-bit) coded diffraction patterns, and a
corresponding recovery scheme. The key novelty in this approach consists in
comparing pairs of coded diffraction patterns across frequencies: the one-bit
measurements obtained rely on the order statistics of the un-quantized
measurements rather than on their values. This results in a robust phase
recovery and, unlike currently available methods, makes it possible to
efficiently perform phase recovery from measurements affected by severe
(possibly unknown) nonlinear, rank-preserving perturbations, such as
distortions. Another important feature of this approach is that it also
enables super-resolution and blind deconvolution beyond the diffraction limit
of a given imaging system.
|
1402.2297 | Connecting Dream Networks Across Cultures | cs.SI physics.soc-ph | Many species dream, yet there remain many open research questions in the
study of dreams. The symbolism of dreams and their interpretation is present in
cultures throughout history. Analysis of online data sources for dream
interpretation using network science leads to understanding symbolism in dreams
and their associated meaning. In this study, we introduce dream interpretation
networks for English, Chinese and Arabic that represent different cultures from
various parts of the world. We analyze communities in these networks, finding
that symbols within a community are semantically related. The central nodes in
communities give insight about cultures and symbols in dreams. The community
structure of different networks highlights cultural similarities and
differences. Interconnections between different networks are also identified by
translating symbols from different languages into English. Structural
correlations across networks point out relationships between cultures.
Similarities between network communities are also investigated by analysis of
sentiment in symbol interpretations. We find that interpretations within a
community tend to have similar sentiment. Furthermore, we cluster communities
based on their sentiment, yielding three main categories of positive, negative,
and neutral dream symbols.
|
1402.2300 | Feature and Variable Selection in Classification | cs.LG cs.AI stat.ML | The amount of information in the form of features and variables available
to machine learning algorithms is ever increasing. This can lead to
classifiers that are prone to overfitting in high dimensions; high-dimensional
models do not lend themselves to interpretable results; and the CPU and memory
resources necessary to run on high-dimensional datasets severely limit the
applicability of the approaches. Variable and feature selection aim to remedy
this by finding a
subset of features that in some way captures the information provided best. In
this paper we present the general methodology and highlight some specific
approaches.
|
1402.2308 | Predicting Crowd Behavior with Big Public Data | cs.SI physics.soc-ph | With public information becoming widely accessible and shared on today's web,
greater insights are possible into crowd actions by citizens and non-state
actors such as large protests and cyber activism. We present efforts to predict
the occurrence, specific timeframe, and location of such actions before they
occur based on public data collected from over 300,000 open content web sources
in 7 languages, from all over the world, ranging from mainstream news to
government publications to blogs and social media. Using natural language
processing, event information is extracted from content such as type of event,
what entities are involved and in what role, sentiment and tone, and the
occurrence time range of the event discussed. Statements made on Twitter about
a future date from the time of posting prove particularly indicative. We
consider in particular the case of the 2013 Egyptian coup d'etat. The study
validates and quantifies the common intuition that data on social media (beyond
mainstream news sources) are able to predict major events.
|
1402.2324 | Universal Matrix Completion | stat.ML cs.IT cs.LG math.IT | The problem of low-rank matrix completion has recently generated a lot of
interest leading to several results that offer exact solutions to the problem.
However, in order to do so, these methods make assumptions that can be quite
restrictive in practice. More specifically, the methods assume that: a) the
observed indices are sampled uniformly at random, and b) for every new matrix,
the observed indices are sampled afresh. In this work, we address these issues
by providing a universal recovery guarantee for matrix completion that works
for a variety of sampling schemes. In particular, we show that if the set of
sampled indices come from the edges of a bipartite graph with large spectral
gap (i.e. gap between the first and the second singular value), then the
nuclear norm minimization based method exactly recovers all low-rank matrices
that satisfy certain incoherence properties. Moreover, we also show that under
certain stricter incoherence conditions, $O(nr^2)$ uniformly sampled entries
are enough to recover any rank-$r$ $n\times n$ matrix, in contrast to the
$O(nr\log n)$ sample complexity required by other matrix completion algorithms
as well as existing analyses of the nuclear norm method.
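The nuclear norm program referred to above is, in standard form (with $\Omega$ the set of observed indices),

$$ \min_{\mathbf{X}} \; \|\mathbf{X}\|_* \quad \text{subject to} \quad X_{ij} = M_{ij} \;\; \text{for } (i,j) \in \Omega, $$

where the nuclear norm $\|\mathbf{X}\|_*$, the sum of the singular values, is the convex surrogate for rank; the universality result says this one program succeeds whenever $\Omega$ is the edge set of a bipartite graph with a large spectral gap and the matrix satisfies the stated incoherence properties.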
|
1402.2331 | Computational Limits for Matrix Completion | cs.CC cs.LG | Matrix Completion is the problem of recovering an unknown real-valued
low-rank matrix from a subsample of its entries. Important recent results show
that the problem can be solved efficiently under the assumption that the
unknown matrix is incoherent and the subsample is drawn uniformly at random.
Are these assumptions necessary?
It is well known that Matrix Completion in its full generality is NP-hard.
However, little is known if we make additional assumptions such as incoherence
and permit the algorithm to output a matrix of slightly higher rank. In this paper
we prove that Matrix Completion remains computationally intractable even if the
unknown matrix has rank $4$ but we are allowed to output any constant rank
matrix, and even if additionally we assume that the unknown matrix is
incoherent and are shown $90\%$ of the entries. This result relies on the
conjectured hardness of the $4$-Coloring problem. We also consider the positive
semidefinite Matrix Completion problem. Here we show a similar hardness result
under the standard assumption that $\mathrm{P}\ne \mathrm{NP}.$
Our results greatly narrow the gap between existing feasibility results and
computational lower bounds. In particular, we believe that our results give the
first complexity-theoretic justification for why distributional assumptions are
needed beyond the incoherence assumption in order to obtain positive results.
On the technical side, we contribute several new ideas on how to encode hard
combinatorial problems in low-rank optimization problems. We hope that these
techniques will be helpful in further understanding the computational limits of
Matrix Completion and related problems.
|
1402.2333 | Modeling sequential data using higher-order relational features and
predictive training | cs.LG cs.CV stat.ML | Bi-linear feature learning models, like the gated autoencoder, were proposed
as a way to model relationships between frames in a video. By minimizing
reconstruction error of one frame, given the previous frame, these models learn
"mapping units" that encode the transformations inherent in a sequence, and
thereby learn to encode motion. In this work we extend bi-linear models by
introducing "higher-order mapping units" that allow us to encode
transformations between frames and transformations between transformations.
We show that this makes it possible to encode temporal structure that is more
complex and longer-range than the structure captured within standard bi-linear
models. We also show that a natural way to train the model is by replacing the
commonly used reconstruction objective with a prediction objective which forces
the model to correctly predict the evolution of the input multiple steps into
the future. Learning can be achieved by back-propagating the multi-step
prediction through time. We test the model on various temporal prediction
tasks, and show that higher-order mappings and predictive training both yield a
significant improvement over bi-linear models in terms of prediction accuracy.
|
1402.2335 | Sparsity averaging for radio-interferometric imaging | astro-ph.IM cs.CV | We propose a novel regularization method for compressive imaging in the
context of the compressed sensing (CS) theory with coherent and redundant
dictionaries. Natural images are often complicated and several types of
structures can be present at once. It is well known that piecewise smooth
images exhibit gradient sparsity, and that images with extended structures are
better encapsulated in wavelet frames. Therefore, we here conjecture that
promoting average sparsity or compressibility over multiple frames rather than
single frames is an extremely powerful regularization prior.
|
1402.2343 | New Codes and Inner Bounds for Exact Repair in Distributed Storage
Systems | cs.IT math.IT | We study the exact-repair tradeoff between storage and repair bandwidth in
distributed storage systems (DSS). We give new inner bounds for the tradeoff
region and provide code constructions that achieve these bounds.
|
1402.2351 | TrendLearner: Early Prediction of Popularity Trends of User Generated
Content | cs.SI cs.IR | We here focus on the problem of predicting the popularity trend of user
generated content (UGC) as early as possible. Taking YouTube videos as case
study, we propose a novel two-step learning approach that: (1) extracts
popularity trends from previously uploaded objects, and (2) predicts trends for
new content. Unlike previous work, our solution explicitly addresses the
inherent tradeoff between prediction accuracy and remaining interest in the
content after prediction, solving it on a per-object basis. Our experimental
results show great improvements of our solution over alternatives, and its
applicability to improve the accuracy of state-of-the-art popularity prediction
methods.
|
1402.2359 | Machine Learner for Automated Reasoning 0.4 and 0.5 | cs.LG cs.AI cs.LO | Machine Learner for Automated Reasoning (MaLARea) is a learning and reasoning
system for proving in large formal libraries where thousands of theorems are
available when attacking a new conjecture, and a large number of related
problems and proofs can be used to learn specific theorem-proving knowledge.
The last version of the system has by a large margin won the 2013 CASC LTB
competition. This paper describes the motivation behind the methods used in
MaLARea, discusses the general approach and the issues arising in the
evaluation of such systems, and describes the Mizar@Turing100 and CASC'24
versions of MaLARea.
|
1402.2363 | Animation of 3D Human Model Using Markerless Motion Capture Applied To
Sports | cs.GR cs.CV | Markerless motion capture is an active research area in 3D virtualization.
The proposed work presents a system for markerless motion capture for 3D human
character animation, and the paper also surveys motion and skeleton tracking
techniques that have been developed or are under development. We propose a
method to transfer the motion of a performer to a 3D human character (model),
so that the 3D human character performs the same movements as the performer in
real time. In the proposed work, human motion data are captured by a Kinect
camera, and the processed data are applied to a 3D human model for animation.
The 3D human model is created using the open-source software MakeHuman. An
anticipated dataset for sport activity is considered as input, which can be
applied to any HCI application.
|
1402.2394 | GraphX: Unifying Data-Parallel and Graph-Parallel Analytics | cs.DB | From social networks to language modeling, the growing scale and importance
of graph data has driven the development of numerous new graph-parallel systems
(e.g., Pregel, GraphLab). By restricting the computation that can be expressed
and introducing new techniques to partition and distribute the graph, these
systems can efficiently execute iterative graph algorithms orders of magnitude
faster than more general data-parallel systems. However, the same restrictions
that enable the performance gains also make it difficult to express many of the
important stages in a typical graph-analytics pipeline: constructing the graph,
modifying its structure, or expressing computation that spans multiple graphs.
As a consequence, existing graph analytics pipelines compose graph-parallel and
data-parallel systems using external storage systems, leading to extensive
data movement and a complicated programming model.
To address these challenges we introduce GraphX, a distributed graph
computation framework that unifies graph-parallel and data-parallel
computation. GraphX provides a small, core set of graph-parallel operators
expressive enough to implement the Pregel and PowerGraph abstractions, yet
simple enough to be cast in relational algebra. GraphX uses a collection of
query optimization techniques such as automatic join rewrites to efficiently
implement these graph-parallel operators. We evaluate GraphX on real-world
graphs and workloads and demonstrate that GraphX achieves performance
comparable to specialized graph computation systems, while outperforming them
in end-to-end graph pipelines. Moreover, GraphX achieves a balance between
expressiveness, performance, and ease of use.
|
1402.2426 | Imaging with Rays: Microscopy, Medical Imaging, and Computer Vision | cs.CV | In this paper we broadly consider techniques which utilize projections on
rays for data collection, with particular emphasis on optical techniques. We
formulate a variety of imaging techniques as either special cases or extensions
of tomographic reconstruction. We then consider how the techniques must be
extended to describe objects containing occlusion, as with a self-occluding
opaque object. We formulate the reconstruction problem as a regularized
nonlinear optimization problem to simultaneously solve for object brightness
and attenuation, where the attenuation can become infinite. We demonstrate
various simulated examples for imaging opaque objects, including sparse point
sources, a conventional multiview reconstruction technique, and a
super-resolving technique which exploits occlusion to resolve an image.
|
1402.2427 | An evaluation of keyword extraction from online communication for the
characterisation of social relations | cs.SI cs.CL cs.IR | The set of interpersonal relationships on a social network service or a
similar online community is usually highly heterogeneous. The concept of tie
strength captures only one aspect of this heterogeneity. Since the unstructured
text content of online communication artefacts is a salient source of
information about a social relationship, we investigate the utility of keywords
extracted from the message body as a representation of the relationship's
characteristics as reflected by the conversation topics. Keyword extraction is
performed using standard natural language processing methods. Communication
data and human assessments of the extracted keywords are obtained from Facebook
users via a custom application. The overall positive quality assessment
provides evidence that the keywords indeed convey relevant information about
the relationship.
|
1402.2440 | Validation Experiments for LBM Simulations of Electron Beam Melting | cs.CE | This paper validates 3D simulation results of electron beam melting (EBM)
processes comparing experimental and numerical data. The physical setup is
presented which is discretized by a three dimensional (3D) thermal lattice
Boltzmann method (LBM). An experimental process window is used for the
validation depending on the line energy injected into the metal powder bed and
the scan velocity of the electron beam. In the process window the EBM products
are classified into the categories porous, good, and swelling, depending on
the quality of the surface. The same parameter sets are used to generate a
numerical process window. A comparison of numerical and experimental process
windows shows a good agreement. This validates the EBM model and justifies
simulations for future improvements of EBM processes. In particular numerical
simulations can be used to explain future process window scenarios and find the
best parameter set for a good surface quality and dense products.
|
1402.2447 | A comparison of linear and non-linear calibrations for speaker
recognition | stat.ML cs.LG | In recent work on both generative and discriminative score to
log-likelihood-ratio calibration, it was shown that linear transforms give good
accuracy only for a limited range of operating points. Moreover, these methods
required tailoring of the calibration training objective functions in order to
target the desired region of best accuracy. Here, we generalize the linear
recipes to non-linear ones. We experiment with a non-linear, non-parametric,
discriminative PAV solution, as well as parametric, generative,
maximum-likelihood solutions that use Gaussian, Student's T and
normal-inverse-Gaussian score distributions. Experiments on NIST SRE'12 scores
suggest that the non-linear methods provide wider ranges of optimal accuracy
and can be trained without having to resort to objective function tailoring.
|
1402.2453 | Sliding window and compressive sensing for low-field dynamic magnetic
resonance imaging | cs.CE physics.med-ph | We describe an acquisition/processing procedure for image reconstruction in
dynamic Magnetic Resonance Imaging (MRI). The approach requires a sliding
window to record a set of trajectories in k-space, standard regularization to
reconstruct an estimate of the object, and compressed sensing to recover image
residuals. We validated this approach in the case of specific simulated
experiments and, in the case of real measurements, we showed that the procedure
is reliable even in the case of data acquired by means of a low-field scanner.
|
1402.2461 | Distributions of Upper PAPR and Lower PAPR of OFDM Signals in Visible
Light Communications | cs.IT math.IT | Orthogonal frequency-division multiplexing (OFDM) in visible light
communications (VLC) inherits the disadvantage of high peak-to-average power
ratio (PAPR) from OFDM in radio frequency (RF) communications. The upper peak
power and lower peak power of real-valued VLC-OFDM signals are both limited by
the dynamic constraints of light emitting diodes (LEDs). The efficiency and
transmitted electrical power are directly related to the upper PAPR (UPAPR)
and lower PAPR (LPAPR) of VLC-OFDM. In this paper, we will derive the
complementary cumulative distribution function (CCDF) of UPAPR and LPAPR, and
investigate the joint distribution of UPAPR and LPAPR.
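For orientation, the classical definitions read (our notation; the paper's UPAPR and LPAPR adapt these to real-valued, dynamic-range-limited VLC signals): for an OFDM frame $x_0, \dots, x_{N-1}$,

$$ \mathrm{PAPR} = \frac{\max_n |x_n|^2}{\mathbb{E}[|x_n|^2]}, \qquad \mathrm{CCDF}(\gamma) = \Pr(\mathrm{PAPR} > \gamma), $$

and for complex-baseband OFDM with $N$ independent subcarriers the well-known approximation $\mathrm{CCDF}(\gamma) \approx 1 - (1 - e^{-\gamma})^N$ gives a feel for how such distributions behave.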
|
1402.2479 | Coalitional Games with Overlapping Coalitions for Interference
Management in Small Cell Networks | cs.GT cs.IT math.IT | In this paper, we study the problem of cooperative interference management in
an OFDMA two-tier small cell network. In particular, we propose a novel
approach for allowing the small cells to cooperate, so as to optimize their
sum-rate, while cooperatively satisfying their maximum transmit power
constraints. Unlike existing work which assumes that only disjoint groups of
cooperative small cells can emerge, we formulate the small cells' cooperation
problem as a coalition formation game with overlapping coalitions. In this
game, each small cell base station can choose to participate in one or more
cooperative groups (or coalitions) simultaneously, so as to optimize the
tradeoff between the benefits and costs associated with cooperation. We study
the properties of the proposed overlapping coalition formation game and we show
that it exhibits negative externalities due to interference. Then, we propose a
novel decentralized algorithm that allows the small cell base stations to
interact and self-organize into a stable overlapping coalitional structure.
Simulation results show that the proposed algorithm results in a notable
performance advantage in terms of the total system sum-rate, relative to the
noncooperative case and the classical algorithms for coalitional games with
non-overlapping coalitions.
|
1402.2482 | Performance of Social Network Sensors During Hurricane Sandy | cs.SI physics.soc-ph | Information flow during catastrophic events is a critical aspect of disaster
management. Modern communication platforms, in particular online social
networks, provide an opportunity to study such flow, and a means to derive
early-warning sensors, improving emergency preparedness and response.
Performance of the social networks sensor method, based on topological and
behavioural properties derived from the "friendship paradox", is studied here
for over 50 million Twitter messages posted before, during, and after Hurricane
Sandy. We find that differences in users' network centrality effectively
translate into a moderate awareness advantage (up to 26 hours), and that the
geo-location of users within or outside of the hurricane-affected area plays a
significant role in determining the scale of such advantage. Emotional response
appears to be universal regardless of the position in the network topology, and
displays characteristic, easily detectable patterns, opening a possibility of
implementing a simple "sentiment sensing" technique to detect and locate
disasters.
|
1402.2487 | Materialized View Replacement using Markov Analysis | cs.DB | Materialized views are used in large data-centric applications to expedite
query processing. The efficiency of a materialized view depends on the degree
to which incoming queries can be answered from the existing materialized
views. Materialized views are constructed following different methodologies,
so their efficacy depends on the methodology by which they are formed.
Construction of materialized views is often time consuming; moreover, after a
certain time the performance of the materialized views degrades as the nature
of the queries changes. In this situation either new materialized views can
be constructed from scratch or the existing views can be upgraded. Fresh
construction of materialized views has higher time complexity, hence
modification of the existing views is the better solution. The modification
process falls under the materialized view maintenance scheme. Materialized
view maintenance is a continuous process, and the system can be tuned to
ensure a constant rate of performance. If a materialized view construction
process is not supported by a maintenance scheme, the system will suffer from
performance degradation. In this paper a new materialized view maintenance
scheme is proposed using Markov analysis to ensure consistent performance.
Markov analysis is chosen here to predict the steady-state probabilities from
the initial probabilities.
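A minimal sketch of the steady-state computation such a scheme relies on; the three-state transition matrix below is hypothetical and all names are illustrative:

    import numpy as np

    def steady_state(P):
        # Steady-state distribution pi of a row-stochastic P, i.e. pi P = pi:
        # the left eigenvector of P for eigenvalue 1, normalized to sum to 1.
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        return pi / pi.sum()

    # Hypothetical transition matrix over three view-access states.
    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.3, 0.5]])
    print(steady_state(P))  # long-run access probabilities of the states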
|
1402.2489 | The Fair Distribution of Power to Electric Vehicles: An Alternative to
Pricing | cs.NI cs.SY | As the popularity of electric vehicles increases, the demand for more power
can increase more rapidly than our ability to install additional generating
capacity. In the long term we expect that the supply and demand will become
balanced. However, in the interim the rate at which electric vehicles can be
deployed will depend on our ability to charge these vehicles without
inconveniencing their owners. In this paper, we investigate using fairness
mechanisms to distribute power to electric vehicles on a smart grid. We assume
that during peak demand there is insufficient power to charge all the vehicles
simultaneously. In each five minute interval of time we select a subset of the
vehicles to charge, based upon information about the vehicles. We evaluate the
selection mechanisms using published data on the current demand for electric
power as a function of time of day, current driving habits for commuting, and
the current rates at which electric vehicles can be charged on home outlets. We
found that conventional selection strategies, such as first-come-first-served
or round robin, may delay a significant fraction of the vehicles by more than
two hours, even when the total available power over the course of a day is two
or three times the power required by the vehicles. However, a selection
mechanism that minimizes the maximum delay can reduce the delays to a few
minutes, even when the capacity available for charging electric vehicles
exceeds their requirements by as little as 5%.
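One simple instance of a selection rule that keeps the maximum delay small is longest-delayed-first; the sketch below is illustrative and not necessarily the paper's exact mechanism:

    def select_vehicles(waiting, capacity_slots):
        # waiting: dict vehicle_id -> number of intervals already delayed.
        # capacity_slots: vehicles the available power can charge this interval.
        ranked = sorted(waiting, key=lambda v: waiting[v], reverse=True)
        chosen = set(ranked[:capacity_slots])
        for v in waiting:            # everyone not chosen waits one more slot
            if v not in chosen:
                waiting[v] += 1
        return chosen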
|
1402.2507 | Force-Guiding Particle Chains for Shape-Shifting Displays | cs.RO | We present the design and implementation of a chain of particles that can be
programmed to fold into a given curve. The particles guide an external force
to perform the folding, so the particles themselves are simple and amenable to
miniaturization. A chain can consist of a large number of such particles.
Using multiple such chains, a shape-shifting display can be constructed that
folds its initially flat surface to approximate a given 3D shape that can be
touched and modified by users, for example, enabling architects to
interactively view, touch, and modify a 3D model of a building.
|
1402.2509 | Achieve Better Ranking Accuracy Using CloudRank Framework for Cloud
Services | cs.DC cs.IR | Building high-quality cloud applications has become an urgent research
problem. The non-functional performance of cloud services is usually
described by quality-of-service (QoS). In cloud applications, cloud services
are invoked remotely over internet connections, so the QoS ranking of cloud
services observed by one user cannot be transferred directly to another user,
since the locations of the cloud applications are quite different.
Personalized QoS ranking would require evaluating all candidate services at
the user side, which is impractical in reality: obtaining QoS values usually
requires invoking the candidate services, which is very expensive. To avoid
such time-consuming and expensive real-world service invocations, this paper
proposes a CloudRank framework which predicts the QoS ranking directly,
without predicting the corresponding QoS values. The framework provides an
accurate ranking; since the QoS values are the same in both algorithms, an
optimal VM allocation policy is additionally used to improve the QoS
performance of the cloud services, and the approach also provides better
ranking accuracy than the CloudRank2 algorithm.
|
1402.2551 | Modeling European Options | cs.CE | Option contracts can be valued by using the Black-Scholes equation, a partial
differential equation with initial conditions. An exact solution for European
style options is known. The computation time and the error need to be minimized
simultaneously. In this paper, the authors have solved the Black-Scholes
equation by employing a reasonably accurate implicit method. Options with known
analytic solutions have been evaluated. Furthermore, an overall second order
accurate space and time discretization has been accomplished in this paper.
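The equation in question is the Black-Scholes PDE (standard form; the implicit discretization is the paper's contribution and is not reproduced here):

$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0, $$

with terminal condition $V(S,T) = \max(S-K, 0)$ for a European call of strike $K$ and maturity $T$, where $S$ is the underlying price, $\sigma$ the volatility, and $r$ the risk-free rate.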
|
1402.2561 | The CQC Algorithm: Cycling in Graphs to Semantically Enrich and Enhance
a Bilingual Dictionary | cs.CL | Bilingual machine-readable dictionaries are knowledge resources useful in
many automatic tasks. However, compared to monolingual computational lexicons
like WordNet, bilingual dictionaries typically provide a lower amount of
structured information, such as lexical and semantic relations, and often do
not cover the entire range of possible translations for a word of interest. In
this paper we present Cycles and Quasi-Cycles (CQC), a novel algorithm for the
automated disambiguation of ambiguous translations in the lexical entries of a
bilingual machine-readable dictionary. The dictionary is represented as a
graph, and cyclic patterns are sought in the graph to assign an appropriate
sense tag to each translation in a lexical entry. Further, we use the
algorithm's output to improve the quality of the dictionary itself, by
suggesting accurate solutions to structural problems such as misalignments,
partial alignments and missing entries. Finally, we successfully apply CQC to
the task of synonym extraction.
|
1402.2562 | A Cognitive Study of the Processes of Query Construction in a Medical
Knowledge Management System | cs.IR cs.CL | This article presents the Cogni-CISMeF project, which aims at improving
medical information search in the CISMeF system (Catalog and Index of
French-language health resources) by including a conversational agent to
interact with the user in natural language. To study the cognitive processes
involved during the information search, a bottom-up methodology was adopted.
Experimentation has been set up to obtain human dialogs between a user (playing
the role of patient) dealing with medical information search and a CISMeF
expert refining the request. The analysis of these dialogs underlined the use
of discursive evidence: vocabulary, reformulation, implicit or explicit
expression of user intentions, conversational sequences, etc. A model of
artificial agent is proposed. It leads the user in its information search by
proposing to him examples, assistance and choices. This model was implemented
and integrated in the CISMeF system. ---- This article describes the
Cogni-CISMeF project, which proposes a human-machine dialogue module to be
integrated into the CISMeF medical knowledge indexing system (Catalogue and
Index of French-language Medical Sites). We adopted a cognitive modelling
approach, collecting a corpus of dialogues between a user (playing the role
of a patient) seeking medical information and a CISMeF expert refining this
request in order to build the query. We analyzed the structure of the
dialogues thus obtained and studied a number of discursive cues: the
vocabulary used, reformulation markers, meta- and epilinguistic comments,
implicit or explicit expression of the user's intentions, conversational
sequencing, etc. From this analysis, we built a model of an artificial agent
endowed with cognitive capabilities to assist the user in the information
search task. This model was implemented and integrated into the CISMeF
system.
|
1402.2583 | Coordinated Output Regulation of Heterogeneous Linear Systems under
Switching Topologies | cs.SY cs.MA math.OC | This paper constructs a framework to describe and study the coordinated
output regulation problem for multiple heterogeneous linear systems. Each agent
is modeled as a general linear multiple-input multiple-output system with an
autonomous exosystem which represents the individual offset from the group
reference for the agent. The multi-agent system as a whole has a group
exogenous state which represents the tracking reference for the whole group.
Under the constraints that the group exogenous output is only locally available
to each agent and that the agents have only access to their neighbors'
information, we propose observer-based feedback controllers to solve the
coordinated output regulation problem using output feedback information. A
high-gain approach is used and the information interactions are allowed to be
switched over a finite set of fixed networks containing both graphs that have a
directed spanning tree and graphs that do not. The fundamental relationship
between the information interactions, the dwell time, the non-identical
dynamics of different agents, and the high-gain parameters is given.
Simulations are shown to validate the theoretical results.
|
1402.2594 | Online Nonparametric Regression | stat.ML cs.LG math.ST stat.TH | We establish optimal rates for online regression for arbitrary classes of
regression functions in terms of the sequential entropy introduced in (Rakhlin,
Sridharan, Tewari, 2010). The optimal rates are shown to exhibit a phase
transition analogous to the i.i.d./statistical learning case, studied in
(Rakhlin, Sridharan, Tsybakov 2013). In the frequently encountered situation
when sequential entropy and i.i.d. empirical entropy match, our results point
to the interesting phenomenon that the rates for statistical learning with
squared loss and online nonparametric regression are the same.
In addition to a non-algorithmic study of minimax regret, we exhibit a
generic forecaster that enjoys the established optimal rates. We also provide a
recipe for designing online regression algorithms that can be computationally
efficient. We illustrate the techniques by deriving existing and new
forecasters for the case of finite experts and for online linear regression.
|
1402.2601 | Near Oracle Performance and Block Analysis of Signal Space Greedy
Methods | math.NA cs.IT math.IT | Compressive sampling (CoSa) is a new methodology which demonstrates that
sparse signals can be recovered from a small number of linear measurements.
Greedy algorithms like CoSaMP have been designed for this recovery, and
variants of these methods have been adapted to the case where sparsity is with
respect to some arbitrary dictionary rather than an orthonormal basis. In this
work we present an analysis of the so-called Signal Space CoSaMP method when
the measurements are corrupted with mean-zero white Gaussian noise. We
establish near-oracle performance for recovery of signals sparse in some
arbitrary dictionary. In addition, we analyze the block variant of the method
for signals whose supports obey a block structure, extending the method into
the model-based compressed sensing framework. Numerical experiments confirm
that the block method significantly outperforms the standard method in these
settings.
|
1402.2603 | Small Cell In-Band Wireless Backhaul in Massive MIMO Systems: A
Cooperation of Next-Generation Techniques | cs.IT math.IT | Massive multiple-input multiple-output (MIMO) systems, dense small cells
(SCs), and full duplex are three candidate techniques for next-generation
communication systems. The cooperation of next-generation techniques could
offer more benefits, e.g., SC in-band wireless backhaul in massive MIMO
systems. In this paper, three strategies of SC in-band wireless backhaul in
massive MIMO systems are introduced and compared, i.e., complete time-division
duplex (CTDD), zero-division duplex (ZDD), and ZDD with interference rejection
(ZDD-IR). Simulation results demonstrate that SC in-band wireless backhaul has
the potential to improve the throughput for massive MIMO systems. Specifically,
among the three strategies, CTDD is the simplest one and could achieve decent
throughput improvement. Depending on conditions, with the self-interference
cancellation capability at SCs, ZDD could achieve better throughput than CTDD,
even with residual self-interference. Moreover, ZDD-IR requires the additional
interference rejection process at the BS compared to ZDD, but it could
generally achieve better throughput than CTDD and ZDD.
|
1402.2606 | A Fast Two Pass Multi-Value Segmentation Algorithm based on Connected
Component Analysis | cs.CV | Connected component analysis (CCA) has been heavily used to label binary
images and classify segments. However, it has not been well-exploited to
segment multi-valued natural images. This work proposes a novel multi-value
segmentation algorithm that utilizes CCA to segment color images. A user
defined distance measure is incorporated in the proposed modified CCA to
identify and segment similar image regions. The raw output of the algorithm
consists of distinctly labelled segmented regions. The proposed algorithm has a
unique design architecture that provides several benefits: 1) it can be used to
segment any multi-channel multi-valued image; 2) the distance
measure/segmentation criteria can be application-specific and 3) an absolute
linear-time implementation allows easy extension for real-time video
segmentation. Experimental demonstrations of the aforesaid benefits are
presented along with the comparison results on multiple datasets with current
benchmark algorithms. A number of possible application areas are also
identified, and results on real-time video segmentation are presented to show
the promise of the proposed method.
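A classical two-pass connected-component labeling loop with a pluggable similarity predicate gives the flavor of the approach; this is a sketch, not the paper's implementation, and the `similar` predicate is a user-supplied assumption:

    import numpy as np

    def two_pass_segment(img, similar):
        # img: H x W (x C) array; similar(a, b) -> bool is the user-defined
        # distance criterion. Returns an integer label map of regions.
        h, w = img.shape[:2]
        labels = np.zeros((h, w), dtype=int)
        parent = [0]                           # union-find forest

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        # Pass 1: provisional labels, recording label equivalences.
        for y in range(h):
            for x in range(w):
                neigh = []
                if x > 0 and similar(img[y, x], img[y, x - 1]):
                    neigh.append(labels[y, x - 1])
                if y > 0 and similar(img[y, x], img[y - 1, x]):
                    neigh.append(labels[y - 1, x])
                if not neigh:
                    parent.append(len(parent))  # open a new label
                    labels[y, x] = len(parent) - 1
                else:
                    labels[y, x] = min(neigh)
                    for n in neigh:             # merge equivalent labels
                        parent[find(n)] = find(labels[y, x])

        # Pass 2: replace provisional labels by their representatives.
        for y in range(h):
            for x in range(w):
                labels[y, x] = find(labels[y, x])
        return labels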
|
1402.2634 | Cooperative Set Aggregation for Multiple Lagrangian Systems | cs.SY math.OC | In this paper, we study the cooperative set tracking problem for a group of
Lagrangian systems. Each system observes a convex set as its local target. The
intersection of these local sets is the group aggregation target. We first
propose a control law based on each system's own target sensing and information
exchange with neighbors. With necessary connectivity for both cases of fixed
and switching communication graphs, multiple Lagrangian systems are shown to
achieve rendezvous on the intersection of all the local target sets while the
vectors of generalized coordinate derivatives are driven to zero. Then, we
introduce the collision avoidance control term into set aggregation control to
ensure group dispersion. By defining an ultimate bound on the distance between
each system's final generalized coordinate and the intersection of all the
local target sets, we show that multiple Lagrangian systems approach a bounded region near
the intersection of all the local target sets while the collision avoidance is
guaranteed during the movement. In addition, the vectors of generalized
coordinate derivatives of all the mechanical systems are shown to be driven to
zero. Simulation results are given to validate the theoretical results.
|
1402.2637 | Identifiability Scaling Laws in Bilinear Inverse Problems | cs.IT math.IT | A number of ill-posed inverse problems in signal processing, like blind
deconvolution, matrix factorization, dictionary learning and blind source
separation share the common characteristic of being bilinear inverse problems
(BIPs), i.e. the observation model is a function of two variables and
conditioned on one variable being known, the observation is a linear function
of the other variable. A key issue that arises for such inverse problems is
that of identifiability, i.e. whether the observation is sufficient to
unambiguously determine the pair of inputs that generated the observation.
Identifiability is a key concern for applications like blind equalization in
wireless communications and data mining in machine learning. Herein, a unifying
and flexible approach to identifiability analysis for general conic prior
constrained BIPs is presented, exploiting a connection to low-rank matrix
recovery via lifting. We develop deterministic identifiability conditions on
the input signals and examine their satisfiability in practice for three
classes of signal distributions, viz. dependent but uncorrelated, independent
Gaussian, and independent Bernoulli. In each case, scaling laws are developed
that trade-off probability of robust identifiability with the complexity of the
rank two null space. An added appeal of our approach is that the rank two null
space can be partly or fully characterized for many bilinear problems of
interest (e.g. blind deconvolution). We present numerical experiments involving
variations on the blind deconvolution problem that exploit a characterization
of the rank two null space and demonstrate that the scaling laws offer good
estimates of identifiability.
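The lifting connection mentioned above can be made concrete in one line (our notation): a bilinear observation of the pair $(\mathbf{u}, \mathbf{v})$,

$$ y_i = \langle \mathbf{u}, \mathbf{A}_i \mathbf{v} \rangle = \langle \mathbf{A}_i, \mathbf{u}\mathbf{v}^\top \rangle, $$

is linear in the lifted rank-one matrix $\mathbf{M} = \mathbf{u}\mathbf{v}^\top$, and the difference of two rank-one candidates has rank at most two, which is why the rank two null space of the lifted operator governs identifiability.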
|
1402.2642 | A comprehensive analysis of the geometry of TDOA maps in localisation
problems | math-ph cs.CE cs.SD gr-qc math.AC math.MP | In this manuscript we consider the well-established problem of TDOA-based
source localization and propose a comprehensive analysis of its solutions for
arbitrary sensor measurements and placements. More specifically, we define the
TDOA map from the physical space of source locations to the space of range
measurements (TDOAs), in the specific case of three receivers in 2D space. We
then study the identifiability of the model, giving a complete analytical
characterization of the image of this map and its invertibility. This analysis
has been conducted in a completely mathematical fashion, using many different
tools which make it valid for every sensor configuration. These results are the
first step towards the solution of more general problems involving, for
example, a larger number of sensors, uncertainty in their placement, or lack of
synchronization.
|