id | title | categories | abstract |
|---|---|---|---|
1304.4383 | Convolutional Network-Coded Cooperation in Multi-Source Networks with a
Multi-Antenna Relay | cs.NI cs.IT math.IT | We propose a novel cooperative transmission scheme called "Convolutional
Network-Coded Cooperation" (CNCC) for a network including N sources, one
M-antenna relay, and one common destination. The source-relay (S-R) channels
are assumed to be Nakagami-m fading, while the source-destination (S-D) and the
relay-destination (R-D) channels are considered Rayleigh fading. The CNCC
scheme exploits the generator matrix of a good (N+M', N, v) systematic
convolutional code with free distance d_free, designed over GF(2), as the
network coding matrix run by the network's nodes, such that the
systematic symbols are directly transmitted from the sources, and the parity
symbols are sent by the best antenna of the relay. An upper bound on the BER of
the sources, and consequently, the achieved diversity orders are obtained. The
numerical results indicate that the CNCC scheme outperforms the other
cooperative schemes considered, in terms of the diversity order and the network
throughput. The simulation results confirm the accuracy of the theoretical
analysis.
|
1304.4407 | Stable Recovery with Analysis Decomposable Priors | cs.IT math.FA math.IT math.OC | In this paper, we investigate in a unified way the structural properties of
solutions to inverse problems. These solutions are regularized by the generic
class of semi-norms defined as a decomposable norm composed with a linear
operator, the so-called analysis type decomposable prior. This encompasses
several well-known analysis-type regularizations such as the discrete total
variation (in any dimension), analysis group-Lasso or the nuclear norm. Our
main results establish sufficient conditions under which uniqueness and
stability to a bounded noise of the regularized solution are guaranteed. Along
the way, we also provide a strong sufficient uniqueness result that is of
independent interest and goes beyond the case of decomposable norms.
|
1304.4415 | Mining to Compact CNF Propositional Formulae | cs.AI | In this paper, we propose a first application of data mining techniques to
propositional satisfiability. Our proposed Mining4SAT approach aims to discover
and to exploit hidden structural knowledge for reducing the size of
propositional formulae in conjunctive normal form (CNF). Mining4SAT combines
both frequent itemset mining techniques and Tseitin's encoding for a compact
representation of CNF formulae. The experiments of our Mining4SAT approach show
interesting reductions of the sizes of many application instances taken from
the last SAT competitions.
|
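The abstract above does not spell out the algorithm, but its general idea (frequent itemset mining over clauses combined with Tseitin-style substitution) can be illustrated with a toy sketch. The function name `compress_cnf` and all parameters are invented for illustration; the real Mining4SAT procedure is more elaborate.

```python
from collections import Counter
from itertools import combinations

def compress_cnf(clauses, min_support=2, size=2):
    """Toy sketch: mine the most frequent set of `size` literals shared by
    clauses and factor it out with a fresh Tseitin-style variable."""
    counts = Counter()
    for c in clauses:
        for sub in combinations(sorted(c), size):
            counts[sub] += 1
    if not counts:
        return [list(c) for c in clauses]
    best, support = counts.most_common(1)[0]
    if support < min_support:
        return [list(c) for c in clauses]
    fresh = max(abs(l) for c in clauses for l in c) + 1  # unused variable id
    new = []
    for c in clauses:
        if set(best) <= set(c):
            # replace the shared literals by the fresh variable
            new.append(sorted(set(c) - set(best)) + [fresh])
        else:
            new.append(list(c))
    # fresh -> (l1 v l2 v ...): preserves satisfiability because `fresh`
    # occurs only positively in the rewritten clauses
    new.append([-fresh] + list(best))
    return new
```

For example, the clauses (1 v 2 v 3), (1 v 2 v 4), (1 v 2 v 5) share the pair (1, 2), which gets abbreviated by a fresh variable 6 plus the defining clause (-6 v 1 v 2).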
1304.4428 | Simplified Compute-and-Forward and Its Performance Analysis | cs.IT math.IT | The compute-and-forward (CMF) method has shown a great promise as an
innovative approach to exploit interference toward achieving higher network
throughput. The CMF method was originally introduced by means of information
theory tools. While there have been some recent works on different aspects of
efficient and practical implementation of CMF, there are still some issues that
are not covered. In this paper, we first introduce a method to decrease the
implementation complexity of the CMF method. We then evaluate the exact outage
probability of our proposed simplified CMF scheme, and thereby provide an upper
bound on the outage probability of the optimum CMF in all SNR values, and a
close approximation of its outage probability in low SNR regimes. We also
evaluate the effect of the channel estimation error (CEE) on the performance of
both optimum and our proposed simplified CMF by simulations. Our simulation
results indicate that the proposed method is more robust against CEE than the
optimum CMF method for the examples considered.
|
1304.4453 | Engineering Parallel Algorithms for Community Detection in Massive
Networks | cs.DC cs.SI | The amount of graph-structured data has recently experienced an enormous
growth in many applications. To transform such data into useful information,
fast analytics algorithms and software tools are necessary. One common graph
analytics kernel is disjoint community detection (or graph clustering). Despite
extensive research on heuristic solvers for this task, only a few parallel codes
exist, although parallelism will be necessary to scale to the data volume of
real-world applications. We address this deficit in computing capability with a
flexible and extensible community detection framework with shared-memory
parallelism. Within this framework we design and implement efficient parallel
community detection heuristics: A parallel label propagation scheme; the first
large-scale parallelization of the well-known Louvain method, as well as an
extension of the method adding refinement; and an ensemble scheme combining the
above. In extensive experiments driven by the algorithm engineering paradigm,
we identify the most successful parameters and combinations of these
algorithms. We also compare our implementations with state-of-the-art
competitors. The processing rate of our fastest algorithm often reaches 50M
edges/second. We recommend the parallel Louvain method and our variant with
refinement as both qualitatively strong and fast. Our methods are suitable for
massive data sets with billions of edges.
|
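As a minimal sequential sketch of the plain label propagation heuristic mentioned in the abstract above (the paper's actual contribution, the shared-memory parallel version and the Louvain variants, is not reproduced here; the toy graph and names are illustrative):

```python
from collections import Counter

def label_propagation(adj, rounds=10):
    """Sequential label propagation sketch: every node repeatedly adopts the
    most frequent label among its neighbours until labels stabilize.
    adj: dict mapping node -> list of neighbours."""
    labels = {v: v for v in adj}            # each node starts in its own community
    for _ in range(rounds):
        changed = False
        for v in adj:
            if not adj[v]:
                continue
            freq = Counter(labels[u] for u in adj[v])
            best = freq.most_common(1)[0][0]  # ties: first label encountered
            if labels[v] != best:
                labels[v] = best
                changed = True
        if not changed:                      # converged
            break
    return labels
```

On two triangles joined by a single edge, the nodes of each triangle end up sharing one label, recovering the two communities.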
1304.4464 | The Deterministic Multicast Capacity of 4-Node Relay Networks | cs.IT math.IT | In this paper, we completely characterize the deterministic capacity region
of a four-node relay network with no direct links between the nodes, where each
node communicates with the three other nodes via a relay. Towards this end, we
develop an upper bound on the deterministic capacity region, based on the
notion of a one-sided genie. To establish achievability, we use detour
schemes that achieve the upper bound by routing specific bits via indirect
paths instead of sending them directly.
|
1304.4520 | Sentiment Analysis: A Literature Survey | cs.CL | Our day-to-day life has always been influenced by what people think. Ideas
and opinions of others have always affected our own opinions. The explosion of
Web 2.0 has led to increased activity in Podcasting, Blogging, Tagging,
Contributing to RSS, Social Bookmarking, and Social Networking. As a result
there has been an eruption of interest in people to mine these vast resources
of data for opinions. Sentiment Analysis or Opinion Mining is the computational
treatment of opinions, sentiments and subjectivity of text. In this report, we
take a look at the various challenges and applications of Sentiment Analysis.
We will discuss in detail various approaches to perform a computational
treatment of sentiments and opinions. Various supervised, data-driven
techniques for SA, such as Na\"ive Bayes, Maximum Entropy, SVM, and Voted
Perceptrons,
will be discussed and their strengths and drawbacks will be touched upon. We
will also see a new dimension of analyzing sentiments by Cognitive Psychology
mainly through the work of Janyce Wiebe, where we will see ways to detect
subjectivity, perspective in narrative and understanding the discourse
structure. We will also study some specific topics in Sentiment Analysis and
the contemporary works in those areas.
|
1304.4523 | Origins of power-law degree distribution in the heterogeneity of human
activity in social networks | physics.soc-ph cond-mat.stat-mech cs.SI | The probability distribution of the number of ties of an individual in a social
network follows a scale-free power-law. However, how this distribution arises
has not been conclusively demonstrated in direct analyses of people's actions
in social networks. Here, we perform a causal inference analysis and find an
underlying cause for this phenomenon. Our analysis indicates that the
heavy-tailed degree distribution is causally determined by the similarly
skewed distribution of
human activity. Specifically, the degree of an individual is entirely random -
following a "maximum entropy attachment" model - except for its mean value
which depends deterministically on the volume of the users' activity. This
relation cannot be explained by interactive models, like preferential
attachment, since the observed actions are not likely to be caused by
interactions with other people.
|
1304.4535 | Heterogeneous patterns enhancing static and dynamic texture
classification | cs.CV | Some mixtures, such as colloids like milk, blood, and gelatin, appear
homogeneous to the naked eye; however, observing them at the nanoscale reveals
the heterogeneity of their components. The same phenomenon can occur in pattern
recognition, where heterogeneous patterns are visible in texture images.
However, current methods of texture analysis cannot adequately describe such
heterogeneous patterns.
Common methods used by researchers analyse the image information in a global
way, taking all its features in an integrated manner. Furthermore, multi-scale
analysis verifies the patterns at different scales, but still preserving the
homogeneous analysis. On the other hand various methods use textons to
represent the texture, breaking texture down into its smallest unit. To tackle
this problem, we propose a method to identify texture patterns larger than
textons at distinct scales, enhancing the separability among different types of
texture. We find sub-patterns of texture according to scale and then group
similar patterns for a more refined analysis. Tests were performed on four
static texture databases and one dynamic database. Results show that our method
provides better classification rates than conventional approaches for both
static and dynamic textures.
|
1304.4567 | Multiple-Antenna Interference Network with Receive Antenna Joint
Processing and Real Interference Alignment | cs.IT math.IT | In this paper, the degrees of freedom (DoF) regions of constant coefficient
multiple antenna interference channels are investigated. First, we consider a
$K$-user Gaussian interference channel with $M_k$ antennas at transmitter $k$,
$1\le k\le K$, and $N_j$ antennas at receiver $j$, $1\le j\le K$, denoted as a
$(K,[M_k],[N_j])$ channel. Relying on a result of simultaneous Diophantine
approximation, a real interference alignment scheme with joint receive antenna
processing is developed. The scheme is used to obtain an achievable DoF region.
The proposed DoF region includes two previously known results as special cases,
namely 1) the total DoF of a $K$-user interference channel with $N$ antennas at
each node, $(K, [N], [N])$ channel, is $NK/2$; and 2) the total DoF of a $(K,
[M], [N])$ channel is at least $KMN/(M+N)$. We next explore
constant-coefficient interference networks with $K$ transmitters and $J$
receivers, all having $N$ antennas. Each transmitter emits an independent
message and each receiver requests an arbitrary subset of the messages.
Employing the novel joint receive antenna processing, the DoF region for this
set-up is obtained. We finally consider wireless X networks where each node is
allowed to have an arbitrary number of antennas. It is shown that the joint
receive antenna processing can be used to establish an achievable DoF region,
which is larger than what is possible with antenna splitting. As a special case
of the derived achievable DoF region for constant-coefficient X networks, the
total DoF of wireless X networks with the same number of antennas at all nodes
and with joint antenna processing is shown to be tight, while the best inner
bound based on antenna splitting cannot meet the outer bound. Finally, we
obtain a DoF region
outer bound based on the technique of transmitter grouping.
|
1304.4577 | Empirical Centroid Fictitious Play: An Approach For Distributed Learning
In Multi-Agent Games | math.OC cs.GT cs.SY | The paper is concerned with distributed learning in large-scale games. The
well-known fictitious play (FP) algorithm is addressed, which, despite
theoretical convergence results, might be impractical to implement in
large-scale settings due to intense computation and communication requirements.
An adaptation of the FP algorithm, designated as the empirical centroid
fictitious play (ECFP), is presented. In ECFP players respond to the centroid
of all players' actions rather than track and respond to the individual actions
of every player. Convergence of the ECFP algorithm in terms of average
empirical frequency (a notion made precise in the paper) to a subset of the
Nash equilibria is proven under the assumption that the game is a potential
game with permutation invariant potential function. A more general formulation
of ECFP is then given (which subsumes FP as a special case) and convergence
results are given for the class of potential games. Furthermore, a distributed
formulation of the ECFP algorithm is presented, in which players, endowed with
a (possibly sparse) preassigned communication graph, engage in local,
non-strategic information exchange to eventually agree on a common equilibrium.
Convergence results are proven for the distributed ECFP algorithm.
|
1304.4578 | Spatial Compressive Sensing for MIMO Radar | cs.IT math.IT | We study compressive sensing in the spatial domain to achieve target
localization, specifically direction of arrival (DOA), using multiple-input
multiple-output (MIMO) radar. A sparse localization framework is proposed for a
MIMO array in which transmit and receive elements are placed at random. This
allows for a dramatic reduction in the number of elements needed, while still
attaining performance comparable to that of a filled (Nyquist) array. By
leveraging properties of structured random matrices, we develop a bound on the
coherence of the resulting measurement matrix, and obtain conditions under
which the measurement matrix satisfies the so-called isotropy property. The
coherence and isotropy concepts are used to establish uniform and non-uniform
recovery guarantees within the proposed spatial compressive sensing framework.
In particular, we show that non-uniform recovery is guaranteed if the product
of the number of transmit and receive elements, MN (which is also the number of
degrees of freedom), scales with K(log(G))^2, where K is the number of targets
and G is proportional to the array aperture and determines the angle
resolution. In contrast with a filled virtual MIMO array where the product MN
scales linearly with G, the logarithmic dependence on G in the proposed
framework supports the high-resolution provided by the virtual array aperture
while using a small number of MIMO radar elements. In the numerical results we
show that, in the proposed framework, compressive sensing recovery algorithms
are capable of better performance than classical methods, such as beamforming
and MUSIC.
|
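The coherence that drives the recovery guarantees in the abstract above has a standard definition that can be computed directly. The sketch below implements that textbook definition (not the paper's bound on it); the function name is illustrative.

```python
import numpy as np

def coherence(A):
    """Mutual coherence: the largest absolute normalized inner product
    between two distinct columns of the measurement matrix A."""
    An = A / np.linalg.norm(A, axis=0)   # unit-norm columns
    G = np.abs(An.conj().T @ An)         # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)             # ignore self inner products
    return G.max()
```

An orthonormal basis has coherence 0, while two proportional columns give the worst case, coherence 1.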
1304.4602 | Characterizing and curating conversation threads: Expansion, focus,
volume, re-entry | cs.SI physics.soc-ph | Discussion threads form a central part of the experience on many Web sites,
including social networking sites such as Facebook and Google Plus and
knowledge creation sites such as Wikipedia. To help users manage the challenge
of allocating their attention among the discussions that are relevant to them,
there has been a growing need for the algorithmic curation of on-line
conversations --- the development of automated methods to select a subset of
discussions to present to a user.
Here we consider two key sub-problems inherent in conversational curation:
length prediction --- predicting the number of comments a discussion thread
will receive --- and the novel task of re-entry prediction --- predicting
whether a user who has participated in a thread will later contribute another
comment to it. The first of these sub-problems arises in estimating how
interesting a thread is, in the sense of generating a lot of conversation; the
second can help determine whether users should be kept notified of the progress
of a thread to which they have already contributed. We develop and evaluate a
range of approaches for these tasks, based on an analysis of the network
structure and arrival pattern among the participants, as well as a novel
dichotomy in the structure of long threads. We find that for both tasks,
learning-based approaches using these sources of information yield improvements
for all the performance metrics we used.
|
1304.4610 | Spectral Compressed Sensing via Structured Matrix Completion | cs.IT cs.LG math.IT math.NA stat.ML | The paper studies the problem of recovering a spectrally sparse object from a
small number of time domain samples. Specifically, the object of interest with
ambient dimension $n$ is assumed to be a mixture of $r$ complex
multi-dimensional sinusoids, while the underlying frequencies can assume any
value in the unit disk. Conventional compressed sensing paradigms suffer from
the {\em basis mismatch} issue when imposing a discrete dictionary on the
Fourier representation. To address this problem, we develop a novel
nonparametric algorithm, called enhanced matrix completion (EMaC), based on
structured matrix completion. The algorithm starts by arranging the data into a
low-rank enhanced form with multi-fold Hankel structure, then attempts recovery
via nuclear norm minimization. Under mild incoherence conditions, EMaC allows
perfect recovery as soon as the number of samples exceeds the order of
$\mathcal{O}(r\log^{2} n)$. We also show that, in many instances, accurate
completion of a low-rank multi-fold Hankel matrix is possible when the number
of observed entries is proportional to the information theoretical limits
(except for a logarithmic gap). The robustness of EMaC against bounded noise
and its applicability to super resolution are further demonstrated by numerical
experiments.
|
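The "enhanced form" above can be illustrated in the simplest one-dimensional case (the paper's multi-fold Hankel structure generalizes this): samples of a mixture of r sinusoids arranged into a Hankel matrix give a matrix of rank r, which nuclear norm minimization can then complete. The sketch below only builds the matrix; the function name and the choice of k are illustrative.

```python
import numpy as np

def hankel_enhance(x, k):
    """Arrange 1-D samples x[0..n-1] into the k x (n-k+1) Hankel matrix
    H[i, j] = x[i + j]; for a mixture of r sinusoids its rank is r
    (provided both dimensions are at least r)."""
    n = len(x)
    return np.array([x[i:i + n - k + 1] for i in range(k)])
```

For example, eight samples of a two-sinusoid mixture arranged into a 4 x 5 Hankel matrix produce a rank-2 matrix.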
1304.4613 | On the Benefits of Sampling in Privacy Preserving Statistical Analysis
on Distributed Databases | cs.CR cs.DB cs.DS | We consider a problem where mutually untrusting curators possess portions of
a vertically partitioned database containing information about a set of
individuals. The goal is to enable an authorized party to obtain aggregate
(statistical) information from the database while protecting the privacy of the
individuals, which we formalize using Differential Privacy. This process can be
facilitated by an untrusted server that provides storage and processing
services but should not learn anything about the database. This work describes
a data release mechanism that employs Post Randomization (PRAM), encryption and
random sampling to maintain privacy, while allowing the authorized party to
conduct an accurate statistical analysis of the data. Encryption ensures that
the storage server obtains no information about the database, while PRAM and
sampling ensure that individual privacy is maintained against the authorized
party. We characterize how much the composition of random sampling with PRAM
increases the differential privacy of the system compared to using PRAM alone.
We also analyze the statistical utility of our system by bounding the
estimation error
- the expected l2-norm error between the true empirical distribution and the
estimated distribution - as a function of the number of samples, PRAM noise,
and other system parameters. Our analysis shows a tradeoff between increasing
PRAM noise versus decreasing the number of samples to maintain a desired level
of privacy, and we determine the optimal number of samples that balances this
tradeoff and maximizes the utility. In experimental simulations with the UCI
"Adult Data Set" and with synthetically generated data, we confirm that the
theoretically predicted optimal number of samples indeed achieves close to the
minimal empirical error, and that our analytical error bounds match well with
the empirical results.
|
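The sampling-versus-noise tradeoff above builds on the standard "privacy amplification by subsampling" lemma, which the hypothetical helper below illustrates. This is the textbook result, not the paper's exact characterization of the PRAM composition.

```python
import math

def amplified_epsilon(eps, q):
    """If a mechanism is eps-differentially private, running it on a
    uniformly random q-fraction subsample of the records is
    ln(1 + q * (e^eps - 1))-differentially private (standard lemma)."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))
```

With full sampling (q = 1) nothing is gained, while a small sampling rate substantially tightens the guarantee, which is why fewer samples can buy a reduction in PRAM noise.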
1304.4621 | Optimal Multiuser Zero-Forcing with Per-Antenna Power Constraints for
Network MIMO Coordination | cs.IT math.IT math.OC | We consider a multi-cell multiple-input multiple-output (MIMO) coordinated
downlink transmission, also known as network MIMO, under per-antenna power
constraints. We investigate a simple multiuser zero-forcing (ZF) linear
precoding technique known as block diagonalization (BD) for network MIMO. The
optimal form of BD with per-antenna power constraints is proposed. It involves
a novel approach of optimizing the precoding matrices over the entire null
space of other users' transmissions.
An iterative gradient descent method is derived by solving the dual of the
throughput maximization problem, which finds the optimal precoding matrices
globally and efficiently. The comprehensive simulations illustrate several
network MIMO coordination advantages when the optimal BD scheme is used. Its
achievable throughput is compared with the capacity region obtained through the
recently established duality concept under per-antenna power constraints.
|
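A minimal sketch of plain block diagonalization (without the paper's per-antenna power optimization): each user's precoder is drawn from the null space of the other users' stacked channel matrices, so inter-user interference vanishes. Shapes, names, and the rank tolerance are illustrative.

```python
import numpy as np

def bd_precoders(H_list):
    """Zero-forcing BD sketch: precoder k spans the null space of the
    stacked channel matrices of all users j != k."""
    precoders = []
    for k in range(len(H_list)):
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        _, s, Vh = np.linalg.svd(H_others)      # full SVD
        rank = int((s > 1e-10).sum())
        precoders.append(Vh[rank:].conj().T)    # null-space basis as columns
    return precoders
```

By construction H_j @ W_k is (numerically) zero for j != k, so each receiver sees only its own signal; the optimization in the paper then shapes the power within each null space.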
1304.4624 | Robust Joint Precoder and Equalizer Design in MIMO Communication Systems | cs.IT math.IT math.OC | We address joint design of robust precoder and equalizer in a MIMO
communication system using the minimization of weighted sum of mean square
errors. In addition to imperfect knowledge of channel state information, we
also account for inaccurate awareness of interference plus noise covariance
matrix and power shaping matrix. We follow the worst-case model for imperfect
knowledge of these matrices. First, we derive the worst-case values of these
matrices. Then, we transform the joint precoder and equalizer optimization
problem into a convex scalar optimization problem. Further, the solution to
this problem will be simplified to a depressed quartic equation, the
closed-form expressions for roots of which are known. Finally, we propose an
iterative algorithm to obtain the worst-case robust transceivers.
|
1304.4627 | On the Optimality of Linear Precoding for Secrecy in the MIMO Broadcast
Channel | cs.IT math.IT | We study the optimality of linear precoding for the two-receiver
multiple-input multiple-output (MIMO) Gaussian broadcast channel (BC) with
confidential messages. Secret dirty-paper coding (S-DPC) is optimal under an
input covariance constraint, but there is no computable secrecy capacity
expression for the general MIMO case under an average power constraint. In
principle, for this case, the secrecy capacity region could be found through an
exhaustive search over the set of all possible matrix power constraints.
Clearly, this search, coupled with the complexity of dirty-paper encoding and
decoding, motivates the consideration of low complexity linear precoding as an
alternative. We prove that for a two-user MIMO Gaussian BC under an input
covariance constraint, linear precoding is optimal and achieves the same
secrecy rate region as S-DPC if the input covariance constraint satisfies a
specific condition, and we characterize the corresponding optimal linear
precoders. We then use this result to derive a closed-form sub-optimal
algorithm based on linear precoding for an average power constraint. Numerical
results indicate that the secrecy rate region achieved by this algorithm is
close to that obtained by the optimal S-DPC approach with a search over all
suitable input covariance matrices.
|
1304.4633 | PAC Quasi-automatizability of Resolution over Restricted Distributions | cs.DS cs.LG cs.LO | We consider principled alternatives to unsupervised learning in data mining
by situating the learning task in the context of the subsequent analysis task.
Specifically, we consider a query-answering (hypothesis-testing) task: In the
combined task, we decide whether an input query formula is satisfied over a
background distribution by using input examples directly, rather than invoking
a two-stage process in which (i) rules over the distribution are learned by an
unsupervised learning algorithm and (ii) a reasoning algorithm decides whether
or not the query formula follows from the learned rules. In a previous work
(2013), we observed that the learning task could satisfy numerous desirable
criteria in this combined context -- effectively matching what could be
achieved by agnostic learning of CNFs from partial information -- that are not
known to be achievable directly. In this work, we show that likewise, there are
reasoning tasks that are achievable in such a combined context that are not
known to be achievable directly (and indeed, have been seriously conjectured to
be impossible, cf. (Alekhnovich and Razborov, 2008)). Namely, we test for a
resolution proof of the query formula of a given size in quasipolynomial time
(that is, "quasi-automatizing" resolution). The learning setting we consider is
a partial-information, restricted-distribution setting that generalizes
learning parities over the uniform distribution from partial information,
another task that is known not to be achievable directly in various models (cf.
(Ben-David and Dichterman, 1998) and (Michael, 2010)).
|
1304.4634 | Speckle Reduction in Polarimetric SAR Imagery with Stochastic Distances
and Nonlocal Means | cs.IT cs.CV cs.GR math.IT stat.AP stat.ML | This paper presents a technique for reducing speckle in Polarimetric
Synthetic Aperture Radar (PolSAR) imagery using Nonlocal Means and a
statistical test based on stochastic divergences. The main objective is to
select homogeneous pixels in the filtering area through statistical tests
between distributions. This proposal uses the complex Wishart model to describe
PolSAR data, but the technique can be extended to other models. The weights of
the location-variant linear filter are a function of the p-values of tests that
verify the hypothesis that two samples come from the same distribution and,
therefore, can be used to compute a local mean. The test stems from the family
of (h-phi) divergences which originated in Information Theory. This novel
technique was compared with the Boxcar, Refined Lee and IDAN filters. Image
quality assessment methods on simulated and real data are employed to validate
the performance of this approach. We show that the proposed filter also
enhances the polarimetric entropy and preserves the scattering information of
the targets.
|
1304.4642 | Easy and hard functions for the Boolean hidden shift problem | quant-ph cs.CC cs.LG | We study the quantum query complexity of the Boolean hidden shift problem.
Given oracle access to f(x+s) for a known Boolean function f, the task is to
determine the n-bit string s. The quantum query complexity of this problem
depends strongly on f. We demonstrate that the easiest instances of this
problem correspond to bent functions, in the sense that an exact one-query
algorithm exists if and only if the function is bent. We partially characterize
the hardest instances, which include delta functions. Moreover, we show that
the problem is easy for random functions, since two queries suffice. Our
algorithm for random functions is based on performing the pretty good
measurement on several copies of a certain state; its analysis relies on the
Fourier transform. We also use this approach to improve the quantum rejection
sampling approach to the Boolean hidden shift problem.
|
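The bent function characterization above can be checked by brute force for small n via the Walsh-Hadamard spectrum: f is bent iff every coefficient has magnitude 2^(n/2). This is the standard definition, sketched with illustrative names.

```python
def walsh_spectrum(f, n):
    """Walsh coefficients W(w) = sum_x (-1)^(f(x) XOR parity(w & x)) of a
    Boolean function f: {0,1}^n -> {0,1}, given as f(int) -> int."""
    N = 1 << n
    parity = lambda y: bin(y).count('1') & 1
    return [sum((-1) ** (f(x) ^ parity(w & x)) for x in range(N))
            for w in range(N)]

def is_bent(f, n):
    """f is bent iff |W(w)| = 2^(n/2) for all w (possible only for even n)."""
    return n % 2 == 0 and all(abs(W) == 2 ** (n // 2)
                              for W in walsh_spectrum(f, n))
```

The two-bit inner product x1 AND x2 is bent, whereas any linear function is maximally far from bent (its spectrum concentrates all mass on one coefficient).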
1304.4648 | Construction of Self-dual Codes over $F_p+vF_p$ | cs.IT math.IT | In this paper, we determine all self-dual codes over $F_p+vF_p$ ($v^2=v$) in
terms of self-dual codes over the finite field $F_p$ and give an explicit
construction for self-dual codes over $F_p+vF_p$, where $p$ is a prime.
|
1304.4652 | A Health Monitoring System for Elder and Sick Persons | cs.CV cs.HC | This paper discusses a vision-based health monitoring system that is easy to
use and deploy. Elderly and sick people who are unable to talk or walk depend
on other people for their daily needs and require continuous monitoring. The
developed system allows a sick or elderly person to describe his or her need
to a caretaker in lingual description by showing a particular hand gesture to
the system. The system uses a fingertip detection technique for gesture
extraction and an artificial neural network for gesture classification and
recognition. The system works in different lighting conditions and can be
connected to different devices to announce the user's need at a distant
location.
|
1304.4657 | DELTACON: A Principled Massive-Graph Similarity Function | cs.SI physics.soc-ph | How much did a network change since yesterday? How different is the wiring
between Bob's brain (a left-handed male) and Alice's brain (a right-handed
female)? Graph similarity with known node correspondence, i.e. the detection of
changes in the connectivity of graphs, arises in numerous settings. In this
work, we formally state the axioms and desired properties of the graph
similarity functions, and evaluate when state-of-the-art methods fail to detect
crucial connectivity changes in graphs. We propose DeltaCon, a principled,
intuitive, and scalable algorithm that assesses the similarity between two
graphs on the same nodes (e.g. employees of a company, customers of a mobile
carrier). Experiments on various synthetic and real graphs showcase the
advantages of our method over existing similarity measures. Finally, we employ
DeltaCon to real applications: (a) we classify people to groups of high and low
creativity based on their brain connectivity graphs, and (b) do temporal
anomaly detection in the who-emails-whom Enron graph.
|
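A compact sketch of the DeltaCon idea for small graphs, using the exact (non-scalable) variant: node affinities from a linearized belief propagation system, compared with a root Euclidean (Matusita) distance. The parameter eps and the helper names are illustrative; the paper's scalable algorithm approximates this.

```python
import numpy as np

def deltacon_similarity(A1, A2, eps=0.05):
    """Similarity in (0, 1] between two graphs on the same node set,
    given as adjacency matrices; 1 means identical affinity structure."""
    def affinities(A):
        A = np.asarray(A, dtype=float)
        D = np.diag(A.sum(axis=1))
        I = np.eye(A.shape[0])
        # fast-belief-propagation linearization: S = [I + eps^2 D - eps A]^-1
        # (for small eps this is an M-matrix, so S is entrywise nonnegative)
        return np.linalg.inv(I + eps ** 2 * D - eps * A)
    S1, S2 = affinities(A1), affinities(A2)
    # Matusita (root Euclidean) distance between the affinity matrices
    d = np.sqrt(np.sum((np.sqrt(S1) - np.sqrt(S2)) ** 2))
    return 1.0 / (1.0 + d)
```

Identical graphs score exactly 1, and any edge change strictly lowers the score.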
1304.4658 | Personalized PageRank to a Target Node | cs.DS cs.SI | Personalized PageRank uses random walks to determine the importance or
authority of nodes in a graph from the point of view of a given source node.
Much past work has considered how to compute personalized PageRank from a given
source node to other nodes. In this work we consider the problem of computing
personalized PageRanks to a given target node from all source nodes. This
problem can be interpreted as finding who supports the target or who is
interested in the target.
We present an efficient algorithm for computing personalized PageRank to a
given target up to any given accuracy. We give a simple analysis of our
algorithm's running time in both the average case and the parameterized
worst-case. We show that for any graph with $n$ nodes and $m$ edges, if the
target node is randomly chosen and the teleport probability $\alpha$ is given,
the algorithm will compute a result with $\epsilon$ error in time
$O\left(\frac{1}{\alpha \epsilon} \left(\frac{m}{n} + \log(n)\right)\right)$.
This is much faster than the previously proposed method of computing
personalized PageRank separately from every source node, and it is comparable
to the cost of computing personalized PageRank from a single source. We present
results from experiments on the Twitter graph which show that the constant
factors in our running time analysis are small and our algorithm is efficient
in practice.
|
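A single-target backward push in the spirit of the algorithm described above (a sketch: variable names, the residual threshold, and the dictionary graph format are illustrative, and the paper's accuracy bookkeeping is simplified to a fixed per-node drop threshold):

```python
from collections import defaultdict

def ppr_to_target(out_edges, target, alpha=0.15, eps=1e-7):
    """Estimate ppr(s -> target) for every source s by pushing residual
    probability mass backwards along in-edges from the target.
    out_edges: dict mapping node -> list of out-neighbours."""
    in_edges = defaultdict(list)
    for u, vs in out_edges.items():
        for v in vs:
            in_edges[v].append(u)
    p = defaultdict(float)                  # estimates of ppr(s -> target)
    r = defaultdict(float)                  # residual mass not yet pushed
    r[target] = 1.0
    queue = [target]
    while queue:
        v = queue.pop()
        rv, r[v] = r[v], 0.0
        if rv <= eps:                       # drop negligible residual
            continue
        p[v] += alpha * rv                  # walk at v stops here w.p. alpha
        for u in in_edges[v]:               # push the rest to predecessors
            r[u] += (1.0 - alpha) * rv / len(out_edges[u])
            queue.append(u)
    return dict(p)
```

On a toy graph the estimates converge to the fixed point of the recurrence ppr(s -> t) = alpha * [s = t] + (1 - alpha) * mean over out-neighbours.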
1304.4661 | Fast Exact Shortest-Path Distance Queries on Large Networks by Pruned
Landmark Labeling | cs.DS cs.DB | We propose a new exact method for shortest-path distance queries on
large-scale networks. Our method precomputes distance labels for vertices by
performing a breadth-first search from every vertex. Though this may seem too
obvious and too inefficient at first glance, the key ingredient introduced here
is pruning during the breadth-first searches. While we can still answer the
correct distance for any pair of vertices from the labels, pruning surprisingly
reduces the search space and the sizes of the labels. Moreover, we show that we
can perform 32 or 64 breadth-first searches simultaneously by exploiting
bitwise operations. We
experimentally demonstrate that the combination of these two techniques is
efficient and robust on various kinds of large-scale real-world networks. In
particular, our method can handle social networks and web graphs with hundreds
of millions of edges, which are two orders of magnitude larger than the limits
of previous exact methods, with comparable query time to those of previous
methods.
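The pruning idea can be sketched in a few lines: process vertices in some order (typically by decreasing degree), run a breadth-first search from each, and cut a branch as soon as the labels collected so far already certify a distance no larger than the current one. A minimal sketch — the vertex order and data layout here are illustrative, not the paper's engineered implementation:

```python
from collections import deque

def build_labels(adj, order):
    """Pruned landmark labeling for an unweighted undirected graph.

    adj: dict node -> list of neighbours; order: landmark processing order.
    Returns (labels, query) where query(u, v) is the exact distance.
    """
    INF = float('inf')
    labels = {v: {} for v in adj}            # v -> {landmark: distance}

    def query(u, v):
        lu, lv = labels[u], labels[v]
        best = INF
        for w, du in lu.items():             # meet-in-the-middle over landmarks
            dv = lv.get(w)
            if dv is not None and du + dv < best:
                best = du + dv
        return best

    for root in order:
        q = deque([(root, 0)])
        seen = {root}
        while q:
            u, d = q.popleft()
            if query(root, u) <= d:          # prune: pair already covered
                continue
            labels[u][root] = d
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append((w, d + 1))
    return labels, query
```

Despite the pruning, the labels still answer every pairwise distance exactly; the test below checks this against plain BFS on a small graph.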
|
1304.4662 | Tracking of Fingertips and Centres of Palm using KINECT | cs.CV | Hand gesture is a popular way to interact with or control machines, and it has
been implemented in many applications. The geometry of the hand is such that it
is hard to construct in a virtual environment and to control the joints, but
its functionality and degrees of freedom (DOF) encourage researchers to build
hand-like instruments. This paper presents a novel method for detecting
fingertips and palm centres distinctly for both hands in 3D from the input
image using the MS KINECT. The KINECT provides the depth information of
foreground objects. The hands are segmented using the depth vector, and the
palm centres are detected by applying a distance transformation to the inverse
image. These results can be used as inputs to robotic hands to emulate human
hand operation.
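The palm-centre step can be illustrated independently of the KINECT: on a binary hand mask, a distance transform assigns each hand pixel its distance to the nearest background pixel, and the maximum is taken as the palm centre. A minimal sketch using a multi-source BFS with the Manhattan metric — the paper's exact metric and preprocessing are not specified here:

```python
from collections import deque

def palm_centre(mask):
    """mask: 2-D list of 0/1 (1 = hand pixel). Returns the (row, col) of the
    foreground cell farthest from the background, via a multi-source BFS
    distance transform. Assumes at least one background pixel exists."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for i in range(h):
        for j in range(w):
            if mask[i][j] == 0:          # every background cell seeds distance 0
                dist[i][j] = 0
                q.append((i, j))
    while q:                             # BFS grows distances outward
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] is None:
                dist[ni][nj] = dist[i][j] + 1
                q.append((ni, nj))
    return max(((i, j) for i in range(h) for j in range(w) if mask[i][j]),
               key=lambda p: dist[p[0]][p[1]])
```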
|
1304.4666 | Multi-Branch MMSE Decision Feedback Detection Algorithms with Error
Propagation Mitigation for Multi-Antenna Systems | cs.IT math.IT | In this work we propose novel decision feedback (DF) detection algorithms
with error propagation mitigation capabilities for multi-input multi-output
(MIMO) spatial multiplexing systems based on multiple processing branches. The
novel strategies for detection exploit different patterns, orderings and
constraints for the design of the feedforward and feedback filters. We present
constrained minimum mean-squared error (MMSE) filters designed with constraints
on the shape and magnitude of the feedback filters for the multi-branch MIMO
receivers and show that the proposed MMSE design does not require a significant
additional complexity over the single-branch MMSE design. The proposed
multi-branch MMSE DF detectors are compared with several existing detectors and
are shown to achieve a performance close to the optimal maximum likelihood
detector while requiring significantly lower complexity.
|
1304.4679 | A Method Based on Total Variation for Network Modularity Optimization
using the MBO Scheme | cs.SI math.OC physics.soc-ph | The study of network structure is pervasive in sociology, biology, computer
science, and many other disciplines. One of the most important areas of network
science is the algorithmic detection of cohesive groups of nodes called
"communities". One popular approach to find communities is to maximize a
quality function known as {\em modularity} to achieve some sort of optimal
clustering of nodes. In this paper, we interpret the modularity function from a
novel perspective: we reformulate modularity optimization as a minimization
problem of an energy functional that consists of a total variation term and an
$\ell_2$ balance term. By employing numerical techniques from image processing
and $\ell_1$ compressive sensing -- such as convex splitting and the
Merriman-Bence-Osher (MBO) scheme -- we develop a variational algorithm for the
minimization problem. We present our computational results using both synthetic
benchmark networks and real data.
|
1304.4682 | What are Chinese Talking about in Hot Weibos? | cs.SI cs.CY physics.soc-ph | SinaWeibo is a Twitter-like social network service that has emerged in China
in recent years. People can post weibos (microblogs) and communicate with others
on it. Based on a dataset of 650 million weibos from August 2009 to January
2012 crawled from the APIs of SinaWeibo, we study the hot ones that have been
reposted at least 1000 times. We find that hot weibos can be roughly
classified into eight categories, i.e. Entertainment & Fashion, Hot Social
Events, Leisure & Mood, Life & Health, Seeking for Help, Sales Promotion,
Fengshui & Fortune and Deleted Weibos. In particular, Leisure & Mood and Hot
Social Events account for almost 65% of all the hot weibos. This reflects the
fundamental dual structure of current Chinese society: on the one hand, the
economy has made great progress and a sizeable portion of the population now
lives a relatively prosperous and fairly easy life; on the other hand, serious
social problems persist, such as government corruption and environmental
pollution. It is also shown that users' posting
and reposting behaviors are greatly affected by their identity factors (gender,
verification status, and regional location). For instance, (1) Two thirds of
the hot weibos are created by male users. (2) Although verified users account
for only 0.1% in SinaWeibo, 46.5% of the hot weibos are contributed by them.
Very interestingly, 39.2% are written by SPA users. A somewhat sobering fact is
that only 14.4% of the hot weibos are created by grassroots users (individuals
who are neither SPA nor verified). (3) Users from different areas of China have
distinct posting and reposting behaviors, which usually reflect their local
cultures. Homophily is also examined for people's reposting
behaviors.
|
1304.4693 | Structured Lattice Codes for Some Two-User Gaussian Networks with
Cognition, Coordination and Two Hops | cs.IT math.IT | We study a number of two-user interference networks with multiple-antenna
transmitters/receivers, transmitter side information in the form of linear
combinations (over finite-field) of the information messages, and two-hop
relaying. We start with a Cognitive Interference Channel (CIC) where one of the
transmitters (non-cognitive) has knowledge of a rank-1 linear combination of
the two information messages, while the other transmitter (cognitive) has
access to a rank-2 linear combination of the same messages. This is referred to
as the Network-Coded CIC, since such linear combination may be the result of
some random linear network coding scheme implemented in the backbone wired
network. For such channel we develop an achievable region based on a few novel
concepts: Precoded Compute and Forward (PCoF) with Channel Integer Alignment
(CIA), combined with standard Dirty-Paper Coding. We also develop a capacity
region outer bound and find the sum symmetric GDoF of the Network-Coded CIC.
Through the GDoF characterization, we show that knowing "mixed data" (linear
combinations of the information messages) provides an unbounded spectral
efficiency gain over the classical CIC counterpart, if the ratio of SNR to INR
is larger than a certain threshold. Then, we consider a Gaussian relay network
having the two-user MIMO IC as the main building block. We use PCoF with CIA to
convert the MIMO IC into a deterministic finite-field IC. Then, we use a linear
precoding scheme over the finite-field to eliminate interference in the
finite-field domain. Using this unified approach, we characterize the symmetric
sum rate of the two-user MIMO IC with coordination, cognition, and two-hops. We
also provide finite-SNR results which show that the proposed coding schemes are
competitive against state-of-the-art interference avoidance based on orthogonal
access, for Rayleigh fading channels.
|
1304.4704 | Measuring and Modeling Behavioral Decision Dynamics in Collective
Evacuation | physics.soc-ph cs.SI | Identifying and quantifying factors influencing human decision making remains
an outstanding challenge, impacting the performance and predictability of
social and technological systems. In many cases, system failures are traced to
human factors including congestion, overload, miscommunication, and delays.
Here we report results of a behavioral network science experiment, targeting
decision making in a natural disaster. In each scenario, individuals are faced
with a forced "go" versus "no go" evacuation decision, based on information
available on competing broadcast and peer-to-peer sources. In this controlled
setting, all actions and observations are recorded prior to the decision,
enabling development of a quantitative decision making model that accounts for
the disaster likelihood, severity, and temporal urgency, as well as competition
between networked individuals for limited emergency resources. Individual
differences in behavior within this social setting are correlated with
individual differences in inherent risk attitudes, as measured by standard
psychological assessments. Identification of robust methods for quantifying
human decisions in the face of risk has implications for policy in disasters
and other threat scenarios.
|
1304.4711 | Automated Switching System for Skin Pixel Segmentation in Varied
Lighting | cs.CV | In computer vision, colour-based spatial techniques often assume a static skin
colour model. However, the skin colour perceived by a camera can change when
the lighting changes. In a typical real environment, multiple light sources
impinge on the skin. Moreover, detection performance may vary when the image
under study is taken under a different lighting condition than the one
previously considered. Therefore, for robust skin pixel detection, a dynamic skin
colour model that can cope with the changes must be employed. This paper shows
that skin pixel detection in a digital colour image can be significantly
improved by employing automated colour-space switching methods. At the root of
the switching technique employed in this study lies the statistical mean of the
skin pixels' brightness, derived from the Value component (the third component
of HSV). The study is based on experiments on a set of images whose capture
conditions vary from highly illuminated to almost dark.
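The switching idea can be sketched as follows: compute the mean V (brightness) over the image's pixels and select a skin rule accordingly. The thresholds and per-rule conditions below are purely illustrative placeholders, not the paper's values:

```python
import colorsys

def detect_skin(pixels, v_threshold=0.4):
    """pixels: list of (r, g, b) tuples in [0, 1]. Compute the mean V
    (brightness) across the image and switch the skin rule accordingly.
    The numeric thresholds are illustrative assumptions, not the paper's."""
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]
    mean_v = sum(v for _, _, v in hsv) / len(hsv)
    if mean_v >= v_threshold:
        # well-lit scene: rely on hue and a moderate saturation band
        rule = lambda h, s, v: h < 0.14 and 0.2 < s < 0.7
    else:
        # dark scene: relax the saturation constraint, keep the hue band
        rule = lambda h, s, v: h < 0.14 and s > 0.1
    return [rule(h, s, v) for h, s, v in hsv]
```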
|
1304.4731 | On Synchronization of Interdependent Networks | cs.SY math.OC nlin.AO | It is well-known that the synchronization of diffusively-coupled systems on
networks strongly depends on the network topology. In particular, the so-called
algebraic connectivity $\mu_{N-1}$, i.e., the smallest non-zero eigenvalue of
the discrete Laplacian operator, plays a crucial role in synchronization, graph
partitioning, and network robustness. In our study, synchronization is placed
in the general context of networks-of-networks, where single network models are
replaced by a more realistic hierarchy of interdependent networks. The present
work shows, analytically and numerically, how the algebraic connectivity
experiences sharp transitions after the addition of sufficient links among
interdependent networks.
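For small graphs the quantity in question is easy to compute directly: build the Laplacian $L = D - A$ and take its second-smallest eigenvalue. A minimal dense-matrix illustration (the paper itself works analytically with hierarchies of interdependent networks):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest Laplacian eigenvalue of an undirected graph given as a
    dense 0/1 adjacency matrix (the smallest is ~0 for a connected graph)."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj       # L = D - A
    eig = np.sort(np.linalg.eigvalsh(lap))     # symmetric eigensolver
    return eig[1]
```

For example, the complete graph $K_3$ has Laplacian spectrum $\{0, 3, 3\}$ and the path $P_3$ has $\{0, 1, 3\}$, so their algebraic connectivities are 3 and 1 respectively.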
|
1304.4765 | Robust Noise Filtering in Image Sequences | cs.CV | Image sequence filtering has recently become a very important technical
problem, especially with the advent of new technology in multimedia and video
systems applications. Often image sequences are corrupted by some amount of
noise introduced by the image sensor and therefore inherently present in the
imaging process. The main problem with image sequences is how to deal with
spatio-temporal and non-stationary signals. In this paper, we propose a robust
method for noise removal in image sequences based on coupled spatial and
temporal anisotropic diffusion. The idea is to achieve adaptive smoothing in
both the spatial and temporal directions by solving a nonlinear diffusion
equation. This allows noise to be removed while preserving all spatial and
temporal discontinuities.
|
1304.4778 | Finite-Length Scaling of Polar Codes | cs.IT math.IT | Consider a binary-input memoryless output-symmetric channel $W$. Such a
channel has a capacity, call it $I(W)$, and for any $R<I(W)$ and strictly
positive constant $P_{\rm e}$ we know that we can construct a coding scheme
that allows transmission at rate $R$ with an error probability not exceeding
$P_{\rm e}$. Assume now that we let the rate $R$ tend to $I(W)$ and we ask how
we have to "scale" the blocklength $N$ in order to keep the error probability
fixed to $P_{\rm e}$. We refer to this as the "finite-length scaling" behavior.
This question was addressed by Strassen as well as Polyanskiy, Poor and Verdu,
and the result is that $N$ must grow at least as the square of the reciprocal
of $I(W)-R$.
Polar codes are optimal in the sense that they achieve capacity. In this
paper, we are asking to what degree they are also optimal in terms of their
finite-length behavior. Our approach is based on analyzing the dynamics of the
un-polarized channels. The main results of this paper can be summarized as
follows. Consider the sum of Bhattacharyya parameters of sub-channels chosen
(by the polar coding scheme) to transmit information. If we require this sum to
be smaller than a given value $P_{\rm e}>0$, then the required block-length $N$
scales in terms of the rate $R < I(W)$ as $N \geq
\frac{\alpha}{(I(W)-R)^{\underline{\mu}}}$, where $\alpha$ is a positive
constant that depends on $P_{\rm e}$ and $I(W)$, and $\underline{\mu} = 3.579$.
Also, we show that with the same requirement on the sum of Bhattacharyya
parameters, the block-length scales in terms of the rate like $N \leq
\frac{\beta}{(I(W)-R)^{\overline{\mu}}}$, where $\beta$ is a constant that
depends on $P_{\rm e}$ and $I(W)$, and $\overline{\mu}=6$.
|
1304.4795 | Recursive Mechanism: Towards Node Differential Privacy and Unrestricted
Joins [Full Version, Draft 0.1] | cs.DB | Existing studies on differential privacy mainly consider aggregation on data
sets where each entry corresponds to a particular participant to be protected.
In many situations, a user may pose a relational algebra query on a sensitive
database, and desires differentially private aggregation on the result of the
query. However, no known work is capable of releasing this kind of aggregation
when the query contains unrestricted join operations. This severely limits the
applications of existing differential privacy techniques because many data
analysis tasks require unrestricted joins. One example is subgraph counting on
a graph. Existing methods for differentially private subgraph counting address
only edge differential privacy and are restricted to very simple subgraphs.
Before this work, whether any nontrivial graph statistics could be released
with reasonable accuracy under node differential privacy was still an open
problem.
In this paper, we propose a novel differentially private mechanism to release
an approximation to a linear statistic of the result of some positive
relational algebra calculation over a sensitive database. Unrestricted joins
are supported in our mechanism. The error bound of the approximate answer is
roughly proportional to the \emph{empirical sensitivity} of the query --- a new
notion that measures the maximum possible change to the query answer when a
participant withdraws its data from the sensitive database. For subgraph
counting, our mechanism provides the first solution to achieve node
differential privacy, for any kind of subgraphs.
|
1304.4806 | Unsupervised model-free representation learning | cs.LG q-bio.QM stat.ML | Numerous control and learning problems face the situation where sequences of
high-dimensional highly dependent data are available but no or little feedback
is provided to the learner, which makes any inference rather challenging. To
address this challenge, we formulate the following problem. Given a series of
observations $X_0,\dots,X_n$ coming from a large (high-dimensional) space
$\mathcal X$, find a representation function $f$ mapping $\mathcal X$ to a
finite space $\mathcal Y$ such that the series $f(X_0),\dots,f(X_n)$ preserves
as much information as possible about the original time-series dependence in
$X_0,\dots,X_n$. We show that, for stationary time series, the function $f$ can
be selected as the one maximizing a certain information criterion that we call
time-series information. Some properties of this function are investigated,
including its uniqueness and the consistency of its empirical estimates.
Implications for the problem of optimal control are presented.
|
1304.4811 | Modulation Coding for Flash Memories | cs.IT math.IT | The aggressive scaling down of flash memories has threatened data reliability
since the scaling down of cell sizes gives rise to more serious degradation
mechanisms such as cell-to-cell interference and lateral charge spreading. The
effect of these mechanisms is pattern dependent, and some data patterns are
more vulnerable than others. In this paper, we categorize data
patterns taking into account degradation mechanisms and pattern dependency. In
addition, we propose several modulation coding schemes to improve the data
reliability by transforming original vulnerable data patterns into more robust
ones.
|
1304.4821 | Coding for Memory with Stuck-at Defects | cs.IT math.IT | In this paper, we propose an encoding scheme for partitioned linear block
codes (PLBC) which mask the stuck-at defects in memories. In addition, we
derive an upper bound on, and an estimate of, the probability that masking fails.
Numerical results show that PLBC can efficiently mask the defects with the
proposed encoding scheme. Also, we show that our upper bound is very tight by
using numerical results.
|
1304.4837 | Friends, Strangers, and the Value of Ego Networks for Recommendation | cs.SI physics.soc-ph | Two main approaches to using social network information in recommendation
have emerged: augmenting collaborative filtering with social data and
algorithms that use only ego-centric data. We compare the two approaches using
movie and music data from Facebook, and hashtag data from Twitter. We find that
recommendation algorithms based only on friends perform no worse than those
based on the full network, even though they require much less data and
computational resources. Further, our evidence suggests that locality of
preference, or the non-random distribution of item preferences in a social
network, is a driving force behind the value of incorporating social network
information into recommender algorithms. When locality is high, as in Twitter
data, simple k-nn recommenders do better based only on friends than they do if
they draw from the entire network. These results help us understand when, and
why, social network information is likely to support recommendation systems,
and show that systems that see ego-centric slices of a complete network (such
as websites that use Facebook logins) or have computational limitations (such
as mobile devices) may profitably use ego-centric recommendation algorithms.
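The contrast the authors draw can be made concrete with a toy ego-centric recommender that scores items by how many of the user's friends like them. The data layout and scoring rule here are illustrative only, not the paper's k-nn setup:

```python
def recommend(user, likes, friends, k=3):
    """likes: dict user -> set(items); friends: dict user -> set(users).
    Rank items the user does not yet like by the number of friends who
    like them (ties broken alphabetically), returning the top k."""
    scores = {}
    for f in friends[user]:
        for item in likes.get(f, ()):
            if item not in likes.get(user, ()):   # skip already-liked items
                scores[item] = scores.get(item, 0) + 1
    return sorted(scores, key=lambda i: (-scores[i], i))[:k]
```

The point of the abstract is that such a friends-only scorer needs just the ego network, yet in high-locality settings it performs comparably to drawing candidates from the whole network.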
|
1304.4865 | On the Generalized Hermite-Based Lattice Boltzmann Construction, Lattice
Sets, Weights, Moments, Distribution Functions and High-Order Models | cs.CE physics.comp-ph physics.flu-dyn | The influence of the use of the generalized Hermite polynomial on the
Hermite-based lattice Boltzmann (LB) construction approach, the lattice sets,
the thermal weights, the moments and the equilibrium distribution function
(EDF) is addressed. A new moment system is proposed. The theoretical
possibility of obtaining a high-order Hermite-based LB model capable of exactly
matching some first hydrodynamic moments thermally 1) on-Cartesian lattice, 2)
with thermal weights in the EDF, 3) whilst the highest exactly matched
hydrodynamic moments are obtained with the shortest on-Cartesian lattice sets
at some fixed real-valued temperatures, is also analyzed.
Keywords: Lattice Boltzmann, fluid dynamics, kinetic theory, distribution
function
|
1304.4889 | Hands-free Evolution of 3D-printable Objects via Eye Tracking | cs.NE cs.HC | Interactive evolution has shown the potential to create amazing and complex
forms in both 2-D and 3-D settings. However, the algorithm is slow and users
quickly become fatigued. We propose that the use of eye tracking for
interactive evolution systems will both reduce user fatigue and improve
evolutionary success. We describe a systematic method for testing the
hypothesis that eye tracking driven interactive evolution will be a more
successful and easier-to-use design method than traditional interactive
evolution methods driven by mouse clicks. We provide preliminary results that
support the possibility of this proposal, and lay out future work to
investigate these advantages in extensive clinical trials.
|
1304.4893 | Formation control with binary information | cs.SY | In this paper, we study the problem of formation keeping of a network of
strictly passive systems when very coarse information is exchanged. We assume
that neighboring agents only know whether their relative position is larger or
smaller than the prescribed one. This assumption results in very simple control
laws that direct the agents closer or away from each other and take values in
finite sets. We show that the task of formation keeping while tracking a
desired trajectory and rejecting matched disturbances is still achievable under
the very coarse information scenario. In contrast with other results of
practical convergence with coarse or quantized information, here the control
task is achieved exactly.
|
1304.4910 | A Junction Tree Framework for Undirected Graphical Model Selection | stat.ML cs.AI cs.IT math.IT | An undirected graphical model is a joint probability distribution defined on
an undirected graph G*, where the vertices in the graph index a collection of
random variables and the edges encode conditional independence relationships
among random variables. The undirected graphical model selection (UGMS) problem
is to estimate the graph G* given observations drawn from the undirected
graphical model. This paper proposes a framework for decomposing the UGMS
problem into multiple subproblems over clusters and subsets of the separators
in a junction tree. The junction tree is constructed using a graph that
contains a superset of the edges in G*. We highlight three main properties of
using junction trees for UGMS. First, different regularization parameters or
different UGMS algorithms can be used to learn different parts of the graph.
This is possible since the subproblems we identify can be solved independently
of each other. Second, under certain conditions, a junction tree based UGMS
algorithm can produce consistent results with fewer observations than the usual
requirements of existing algorithms. Third, both our theoretical and
experimental results show that the junction tree framework does a significantly
better job at finding the weakest edges in a graph than existing methods. This
property is a consequence of both the first and second properties. Finally, we
note that our framework is independent of the choice of the UGMS algorithm and
can be used as a wrapper around standard UGMS algorithms for more accurate
graph estimation.
|
1304.4925 | h-approximation: History-Based Approximation of Possible World Semantics
as ASP | cs.AI | We propose an approximation of the Possible Worlds Semantics (PWS) for action
planning. A corresponding planning system is implemented by a transformation of
the action specification to an Answer-Set Program. A novelty is support for
postdiction: (a) the plan existence problem in our framework can be solved
in NP, as compared to $\Sigma_2^P$ for the non-approximated PWS of Baral
(2000); and (b) the planner generates plans that are optimal with respect to a
minimal number of actions in $\Delta_2^P$. We demonstrate the planning system
on standard problems, and
illustrate its integration in a larger software framework for robot control in
a smart home.
|
1304.4927 | Homogeneous Weights and M\"obius Functions on Finite Rings | cs.IT math.IT math.RA | The homogeneous weights and the M\"obius functions and Euler phi-functions on
finite rings are discussed; some computational formulas for these functions on
finite principal ideal rings are characterized; for the residue rings of
integers, they are reduced to the classical number-theoretical M\"obius
functions and the classical number-theoretical Euler phi-functions.
|
1304.4965 | Improvement/Extension of Modular Systems as Combinatorial Reengineering
(Survey) | cs.AI | The paper describes development (improvement/extension) approaches for
composite (modular) systems (as combinatorial reengineering). The following
system improvement/extension actions are considered: (a) improvement of systems
component(s) (e.g., improvement of a system component, replacement of a system
component); (b) improvement of system component interconnection
(compatibility); (c) joint improvement of system component(s) and
their interconnection; (d) improvement of system structure (replacement of
system part(s), addition of a system part, deletion of a system part,
modification of system structure). The study of system improvement approaches
involves some crucial issues: (i) scales for the evaluation of system components and
component compatibility (quantitative scale, ordinal scale, poset-like scale,
scale based on interval multiset estimate), (ii) evaluation of integrated
system quality, (iii) integration methods to obtain the integrated system
quality. The system improvement/extension strategies can be examined as
selection/combination of the improvement action(s) above and as modification of
system structure. The strategies are based on combinatorial optimization
problems (e.g., multicriteria selection, knapsack problem, multiple choice
problem, combinatorial synthesis based on morphological clique problem,
assignment/reassignment problem, graph recoloring problem, spanning problems,
hotlink assignment). Here, heuristics are used. Various system
improvement/extension strategies are presented including illustrative numerical
examples.
|
1304.4974 | Fast exact digital differential analyzer for circle generation | cs.GR cs.SY | In the first part of the paper we present a short review of applications of
digital differential analyzers (DDAs) to the generation of circles, showing that they
can be treated as one-step numerical schemes. In the second part we present and
discuss a novel fast algorithm based on a two-step numerical scheme (explicit
midpoint rule). Although our algorithm is as cheap as the simplest one-step DDA
algorithm (and can be represented in terms of shifts and additions), it
generates circles with maximal accuracy, i.e., it is exact up to round-off
errors.
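The exactness claim can be checked directly: for the circle system $x' = -y$, $y' = x$, the explicit midpoint (two-step) rule reads $x_{k+1} = x_{k-1} - 2h\,y_k$, $y_{k+1} = y_{k-1} + 2h\,x_k$, and choosing $h = \sin(2\pi/n)$ makes the iterates land exactly on the circle (up to round-off), since $\cos((k+1)\theta) - \cos((k-1)\theta) = -2\sin\theta\,\sin(k\theta)$. A floating-point sketch of this idea — the paper's integer shift-and-add formulation is not reproduced here:

```python
from math import pi, cos, sin

def circle_points(r, n):
    """n points on a circle of radius r via the two-step explicit midpoint
    scheme x_{k+1} = x_{k-1} - 2h*y_k, y_{k+1} = y_{k-1} + 2h*x_k with the
    step h = sin(2*pi/n) that makes the discrete orbit exact."""
    th = 2 * pi / n
    h = sin(th)
    # the two starting points must lie on the circle
    pts = [(r, 0.0), (r * cos(th), r * sin(th))]
    for _ in range(n - 2):
        (x0, y0), (x1, y1) = pts[-2], pts[-1]
        pts.append((x0 - 2 * h * y1, y0 + 2 * h * x1))
    return pts
```

Every generated point satisfies $x^2 + y^2 = r^2$ to round-off precision, matching the abstract's claim of maximal accuracy.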
|
1304.4994 | Polygon Matching and Indexing Under Affine Transformations | cs.CV | Given a collection $\{Z_1,Z_2,\ldots,Z_m\}$ of $n$-sided polygons in the
plane and a query polygon $W$ we give algorithms to find all $Z_\ell$ such that
$W=f(Z_\ell)$ with $f$ an unknown similarity transformation in time independent
of the size of the collection. If $f$ is a known affine transformation, we show
how to find all $Z_\ell$ such that $W=f(Z_\ell)$ in $O(n+\log(m))$ time.
For a pair $W,W^\prime$ of polygons we can find all the pairs
$Z_\ell,Z_{\ell^\prime}$ such that $W=f(Z_\ell)$ and
$W^\prime=f(Z_{\ell^\prime})$ for an unknown affine transformation $f$ in
$O(m+n)$ time.
For the case of triangles we also give bounds for the problem of matching
triangles with variable vertices, which is equivalent to affine matching
triangles in noisy conditions.
|
1304.5007 | Building one-time memories from isolated qubits | quant-ph cs.IT math.IT | One-time memories (OTM's) are simple tamper-resistant cryptographic devices,
which can be used to implement one-time programs, a very general form of
software protection and program obfuscation. Here we investigate the
possibility of building OTM's using quantum mechanical devices. It is known
that OTM's cannot exist in a fully-quantum world or in a fully-classical world.
Instead, we propose a new model based on "isolated qubits" -- qubits that can
only be accessed using local operations and classical communication (LOCC).
This model combines a quantum resource (single-qubit measurements) with a
classical restriction (on communication between qubits), and can be implemented
using current technologies, such as nitrogen vacancy centers in diamond. In
this model, we construct OTM's that are information-theoretically secure
against one-pass LOCC adversaries that use 2-outcome measurements.
Our construction resembles Wiesner's old idea of quantum conjugate coding,
implemented using random error-correcting codes; our proof of security uses
entropy chaining to bound the supremum of a suitable empirical process. In
addition, we conjecture that our random codes can be replaced by some class of
efficiently-decodable codes, to get computationally-efficient OTM's that are
secure against computationally-bounded LOCC adversaries.
In addition, we construct data-hiding states, which allow an LOCC sender to
encode an (n-O(1))-bit message into n qubits, such that at most half of the
message can be extracted by a one-pass LOCC receiver, but the whole message can
be extracted by a general quantum receiver.
|
1304.5038 | One condition for solution uniqueness and robustness of both
l1-synthesis and l1-analysis minimizations | cs.IT math.IT math.OC | The $\ell_1$-synthesis model and the $\ell_1$-analysis model recover
structured signals from their undersampled measurements. The solution of the
former is a sparse sum of dictionary atoms, and that of the latter has sparse
correlations with dictionary atoms. This paper addresses the question: when can
we trust these models to recover specific signals? We answer the question with
a condition that is both necessary and sufficient to guarantee the recovery to
be unique and exact and, in the presence of measurement noise, to be robust. The
condition is one--for--all in the sense that it applies to both of the
$\ell_1$-synthesis and $\ell_1$-analysis models, to both of their constrained
and unconstrained formulations, and to both the exact recovery and robust
recovery cases. Furthermore, a convex infinity--norm program is introduced for
numerically verifying the condition. A comprehensive comparison with related
existing conditions is included.
|
1304.5051 | Constraint Satisfaction over Generalized Staircase Constraints | cs.AI cs.DS | One of the key research interests in the area of Constraint Satisfaction
Problem (CSP) is to identify tractable classes of constraints and develop
efficient solutions for them. In this paper, we introduce generalized staircase
(GS) constraints which is an important generalization of one such tractable
class found in the literature, namely, staircase constraints. GS constraints
are of two kinds, down staircase (DS) and up staircase (US). We first examine
several properties of GS constraints, and then show that arc consistency is
sufficient to determine a solution to a CSP over DS constraints. Further, we
propose an optimal O(cd) time and space algorithm to compute arc consistency
for GS constraints where c is the number of constraints and d is the size of
the largest domain. Next, observing that arc consistency is not necessary for
solving a DSCSP, we propose a more efficient algorithm for solving it. With
regard to US constraints, arc consistency is not known to be sufficient to
determine a solution, and therefore, methods such as path consistency or
variable elimination are required. Since arc consistency acts as a subroutine
for these existing methods, replacing it by our optimal O(cd) arc consistency
algorithm produces a more efficient method for solving a USCSP.
|
1304.5063 | Combinaison d'information visuelle, conceptuelle, et contextuelle pour
la construction automatique de hierarchies semantiques adaptees a
l'annotation d'images | cs.CV cs.LG cs.MM | This paper proposes a new methodology to automatically build semantic
hierarchies suitable for image annotation and classification. The building of
the hierarchy is based on a new measure of semantic similarity. The proposed
measure incorporates several sources of information: visual, conceptual, and
contextual, as defined in this paper. The aim is to provide a measure that
best represents image semantics. We then propose rules, based on this measure,
for building the final hierarchy, which explicitly encode hierarchical
relationships between different concepts. The resulting hierarchy is then used
in a semantic hierarchical classification framework for image annotation. Our
experiments and results show that the built hierarchy improves classification
results.
|
1304.5069 | The Tap code - a code similar to Morse code for communication by tapping | cs.IT math.IT | A code is presented for fast, easy and efficient communication over channels
that allow only two signal types: a single sound (e.g. a knock), or no sound
(i.e. silence). This is a true binary code, whereas Morse code is ternary and
does not work in such situations. The presented code is therefore more
universal than Morse code and can be used in many more situations. It is very
tolerant to variations in signal strength or duration. The paper contains
various ways in which the code can be derived, that all lead to the same code.
It also contains a comparison to other, similar codes, including the Morse
code, with regard to efficiency and other attributes. The replacement of Morse
code with Tap code is not proposed.
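For flavor, here is the traditional prisoners' "knock code" (a 5x5 Polybius square with C and K merged), a baseline of the same tap-based family. This is not the Tap code the paper proposes, only an illustration of tap signalling.

```python
def classic_tap_encode(text):
    # Classic 5x5 "knock code" (K is merged into C); this is the
    # traditional prisoners' code, NOT the paper's Tap code, shown
    # only as an example of tap-based signalling.
    square = "ABCDEFGHIJLMNOPQRSTUVWXYZ"
    out = []
    for ch in text.upper().replace("K", "C"):
        if ch in square:
            i = square.index(ch)
            row, col = divmod(i, 5)
            # a letter is (row+1) taps, a pause, then (col+1) taps
            out.append("." * (row + 1) + " " + "." * (col + 1))
    return " / ".join(out)
```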
|
1304.5073 | Blind Non-parametric Statistics for Multichannel Detection Based on
Statistical Covariances | cs.IT math.IT | We consider the problem of detecting the presence of a spatially correlated
multichannel signal corrupted by additive Gaussian noise (i.i.d. across
sensors). No prior knowledge is assumed about the system parameters such as the
noise variance, number of sources and correlation among signals. It is well
known that the GLRT statistics for this composite hypothesis testing problem
are asymptotically optimal but sensitive to variations in the system model or
its parameters. To address these shortcomings we present a few non-parametric
statistics which are functions of the elements of the Bartlett-decomposed
sample covariance matrix. They are designed such that the detection performance is
immune to uncertainty in the knowledge of the noise variance. The analysis
presented verifies the invariance of the threshold value and identifies a few
specific scenarios where the proposed statistics perform better than the GLRT
statistics. The sensitivity of the statistics to the correlation among
streams, the number of sources and the sample size at low signal-to-noise
ratio is discussed.
|
1304.5075 | On the Rate of Information Loss in Memoryless Systems | cs.IT math.IT | In this work we present results about the rate of (relative) information loss
induced by passing a real-valued, stationary stochastic process through a
memoryless system. We show that for a special class of systems the information
loss rate is closely related to the difference of differential entropy rates of
the input and output processes. It is further shown that the rate of (relative)
information loss is bounded from above by the (relative) information loss the
system induces on a random variable distributed according to the process's
marginal distribution.
As a side result, we present sufficient conditions under which, for a
continuous-valued Markovian input process, the output process also possesses
the Markov property.
|
1304.5084 | Extended Object Tracking with Random Hypersurface Models | cs.SY | The Random Hypersurface Model (RHM) is introduced, which allows for estimating
a shape approximation of an extended object in addition to its kinematic state.
An RHM represents the spatial extent by means of randomly scaled versions of
the shape boundary. In doing so, the shape parameters and the measurements are
related via a measurement equation that serves as the basis for a Gaussian
state estimator. Specific estimators are derived for elliptic and star-convex
shapes.
|
1304.5097 | Targeted Social Mobilisation in a Global Manhunt | physics.soc-ph cs.CY cs.SI | Social mobilization, the ability to mobilize large numbers of people via
social networks to achieve highly distributed tasks, has received significant
attention in recent times. This growing capability, facilitated by modern
communication technology, is highly relevant to endeavors which require the
search for individuals who possess rare information or skills, such as finding
medical doctors during disasters, or searching for missing people. An open
question remains, as to whether in time-critical situations, people are able to
recruit in a targeted manner, or whether they resort to so-called blind search,
recruiting as many acquaintances as possible via broadcast communication. To
explore this question, we examine data from our recent success in the U.S.
State Department's Tag Challenge, which required locating and photographing 5
target persons in 5 different cities in the United States and Europe in less
than 12 hours, based only on a single mug-shot. We find that people are able to
consistently route information in a targeted fashion even under increasing time
pressure. We derive an analytical model for global mobilization and use it to
quantify the extent to which people were targeting others during recruitment.
Our model estimates that approximately 1 in 3 messages were targeted during
the most time-sensitive period of the challenge. This is a novel observation
at such short temporal scales, and it suggests opportunities for devising
viral incentive schemes that provide distance- or time-sensitive rewards to
approach the target geography more rapidly, with applications in areas
ranging from emergency preparedness to political mobilization.
|
1304.5099 | Expressando Atributos N\~ao-Funcionais em Workflows Cient\'ificos | cs.CE cs.SE | In this paper we present OSC, a scientific workflow specification language
based on software architecture principles. In contrast with other approaches,
OSC employs connectors as first-class constructs. In this way, we leverage
reusability and compositionality in the workflow modeling process, especially
in the configuration of mechanisms that manage non-functional attributes.
|
1304.5112 | Simplifying Generalized Belief Propagation on Redundant Region Graphs | cs.IT cond-mat.dis-nn math.IT | The cluster variation method has been developed into a general theoretical
framework for treating short-range correlations in many-body systems after it
was first proposed by Kikuchi in 1951. On the numerical side, a message-passing
approach called generalized belief propagation (GBP) was proposed by Yedidia,
Freeman and Weiss about a decade ago as a way of computing the minimal value of
the cluster variational free energy and the marginal distributions of clusters
of variables. However, the GBP equations are often redundant, and it is quite
a non-trivial task to make the GBP iteration converge to a fixed point. These
drawbacks hinder the application of the GBP approach to finite-dimensional
frustrated and disordered systems.
In this work we report an alternative and simple derivation of the GBP
equations starting from the partition function expression. Based on this
derivation we propose a natural and systematic way of removing the redundancy
of the GBP equations. We apply the simplified generalized belief propagation
(SGBP) equations to the two-dimensional and the three-dimensional ferromagnetic
Ising model and Edwards-Anderson spin glass model. The numerical results
confirm that the SGBP message-passing approach is able to achieve satisfactory
performance on these model systems. We also suggest that a subset of the SGBP
equations can be neglected in the numerical iteration process without affecting
the final results.
|
1304.5150 | The Least Degraded and the Least Upgraded Channel with respect to a
Channel Family | cs.IT math.IT | Given a family of binary-input memoryless output-symmetric (BMS) channels
having a fixed capacity, we derive the BMS channel having the highest (resp.
lowest) capacity among all channels that are degraded (resp. upgraded) with
respect to the whole family. We give an explicit characterization of this
channel as well as an explicit formula for the capacity of this channel.
|
1304.5153 | A composition theorem for bisimulation functions | cs.SY | The standard engineering approach to modelling of complex systems is highly
compositional. In order to be able to understand (or to control) the behavior
of a complex dynamical system, it is often desirable, if not necessary, to
view this system as an interconnection of smaller interacting subsystems, each
of these subsystems having its own functionalities. In this paper, we propose a
compositional approach to the computation of bisimulation functions for
dynamical systems. Bisimulation functions are quantitative generalizations of
the classical bisimulation relations. They have been shown useful for
simulation-based verification or for the computation of approximate symbolic
abstractions of dynamical systems. In this technical note, we present a
constructive result for the composition of bisimulation functions. For a
complex dynamical system consisting of several interconnected subsystems, it
allows us to compute a bisimulation function from the knowledge of a
bisimulation function for each of the subsystems.
|
1304.5159 | Interactive POMDP Lite: Towards Practical Planning to Predict and
Exploit Intentions for Interacting with Self-Interested Agents | cs.AI cs.MA | A key challenge in non-cooperative multi-agent systems is that of developing
efficient planning algorithms for intelligent agents to interact and perform
effectively among boundedly rational, self-interested agents (e.g., humans).
The practicality of existing works addressing this challenge is undermined by
either restrictive assumptions about the other agents' behavior, a failure to
account for their rationality, or the prohibitively expensive cost of
modeling and predicting their intentions. To boost the
practicality of research in this field, we investigate how intention prediction
can be efficiently exploited and made practical in planning, thereby leading to
efficient intention-aware planning frameworks capable of predicting the
intentions of other agents and acting optimally with respect to their predicted
intentions. We show that the performance losses incurred by the resulting
planning policies are linearly bounded by the error of intention prediction.
Empirical evaluations through a series of stochastic games demonstrate that our
policies can achieve better and more robust performance than the
state-of-the-art algorithms.
|
1304.5168 | Image Retrieval based on Bag-of-Words model | cs.IR cs.LG | This article surveys the bag-of-words (BoW), or bag-of-features, model in
image retrieval systems. In recent years, large-scale image retrieval has shown
significant potential in both industry applications and research problems. As
local descriptors like SIFT demonstrate great discriminative power in solving
vision problems like object recognition, image classification and annotation,
more and more state-of-the-art large scale image retrieval systems are trying
to rely on them. A common way to achieve this is first quantizing local
descriptors into visual words, and then applying scalable textual indexing and
retrieval schemes. We call this the bag-of-words or bag-of-features model.
The goal of this survey is to give an overview of this model and introduce
different strategies when building the system based on this model.
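The quantization pipeline above (cluster local descriptors into visual words, then histogram them) can be illustrated minimally. The toy k-means below stands in for the large-scale approximate clustering used in practice; all names and parameters are illustrative.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20):
    # Toy k-means over local descriptors; real systems cluster millions
    # of SIFT vectors with approximate k-means, this is only a sketch.
    step = max(1, len(descriptors) // k)
    centers = descriptors[::step][:k].astype(float)
    for _ in range(iters):
        # Assign each descriptor to its nearest visual word (center).
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    # Quantize one image's descriptors and build a normalized histogram,
    # the "bag of visual words" used for indexing and retrieval.
    d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```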
|
1304.5185 | Temporal Description Logic for Ontology-Based Data Access (Extended
Version) | cs.LO cs.AI | Our aim is to investigate ontology-based data access over temporal data with
validity time and ontologies capable of temporal conceptual modelling. To this
end, we design a temporal description logic, TQL, that extends the standard
ontology language OWL 2 QL, provides basic means for temporal conceptual
modelling and ensures first-order rewritability of conjunctive queries for
suitably defined data instances with validity time.
|
1304.5212 | Object Tracking in Videos: Approaches and Issues | cs.CV | Mobile object tracking plays an important role in computer vision
applications. In this paper, we use a tracked-target-based taxonomy to present
the object tracking algorithms. The tracked targets are divided into three
categories: points of interest, appearance and silhouette of mobile objects.
Advantages and limitations of the tracking approaches are also analyzed to find
the future directions in the object tracking domain.
|
1304.5213 | Carbon Dating The Web: Estimating the Age of Web Resources | cs.IR cs.DL | In the course of web research it is often necessary to estimate the creation
datetime for web resources (in the general case, this value can only be
estimated). While it is feasible to manually establish likely datetime values
for small numbers of resources, this becomes infeasible if the collection is
large. We present "carbon date", a simple web application that estimates the
creation date for a URI by polling a number of sources of evidence and
returning a machine-readable structure with their respective values. To
establish a likely datetime, we poll bitly for the first time someone shortened
the URI, topsy for the first time someone tweeted the URI, a Memento aggregator
for the first time it appeared in a public web archive, Google's time of last
crawl, and the Last-Modified HTTP response header of the resource itself. We
also examine the backlinks of the URI as reported by Google and apply the same
techniques for the resources that link to the URI. We evaluated our tool on a
gold-standard data set of 1200 URIs in which the creation date was manually
verified. We were able to estimate a creation date for 75.90% of the resources,
with 32.78% having the correct value. Given the different nature of the URIs,
the union of the various methods produces the best results. While the Google
last crawl date and topsy account for nearly 66% of the closest answers,
eliminating the web archives or Last-Modified from the results produces the
largest overall negative impact on the results. The carbon date application is
available for download or use via a webAPI.
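The aggregation step (the earliest sighting across sources bounds the creation date) reduces to a few lines. The dictionary keys below are illustrative source names, not the tool's actual API.

```python
from datetime import datetime

def estimate_creation_date(evidence):
    # `evidence` maps a source name (e.g. bitly, topsy, web archives,
    # last crawl, Last-Modified) to a datetime or None; the earliest
    # sighting is the tightest available estimate of the creation date.
    candidates = [d for d in evidence.values() if d is not None]
    return min(candidates) if candidates else None
```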
|
1304.5220 | Scaling Exponent of List Decoders with Applications to Polar Codes | cs.IT math.IT | Motivated by the significant performance gains which polar codes experience
under successive cancellation list decoding, their scaling exponent is studied
as a function of the list size. In particular, the error probability is fixed
and the trade-off between block length and back-off from capacity is analyzed.
A lower bound is provided on the error probability under $\rm MAP$ decoding
with list size $L$ for any binary-input memoryless output-symmetric channel and
for any class of linear codes such that their minimum distance is unbounded as
the block length grows large. Then, it is shown that under $\rm MAP$ decoding,
although the introduction of a list can significantly improve the involved
constants, the scaling exponent itself, i.e., the speed at which capacity is
approached, stays unaffected for any finite list size. In particular, this
result applies to polar codes, since their minimum distance tends to infinity
as the block length increases. A similar result is proved for genie-aided
successive cancellation decoding when transmission takes place over the binary
erasure channel, namely, the scaling exponent remains constant for any fixed
number of helps from the genie. Note that since genie-aided successive
cancellation decoding might be strictly worse than successive cancellation list
decoding, the problem of establishing the scaling exponent of the latter
remains open.
|
1304.5251 | Applications of Dynamical Systems in Engineering | cs.SY | This paper presents the current possible applications of Dynamical Systems in
Engineering. The applications of chaos and fractals have proven to be an exciting
and fruitful endeavor. These applications are highly diverse ranging over such
fields as Electrical, Electronics and Computer Engineering. Dynamical Systems
theory describes general patterns found in the solution of systems of nonlinear
equations. The theory focuses upon those equations representing the change of
processes in time. This paper offers the issue of applying dynamical systems
methods to a wider circle of Engineering problems. There are three components
to our approach: ongoing and possible applications of Fractals, Chaos Theory
and Dynamical Systems. Some basic and useful computer simulations of
dynamical-system-related problems are also shown.
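As a minimal, self-contained taste of the chaotic dynamics surveyed here, the logistic map (a textbook example, not taken from this paper) exhibits sensitive dependence on initial conditions:

```python
def logistic_orbit(r, x0, n):
    # Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t); at r = 4
    # the map is chaotic and nearby orbits separate exponentially.
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs
```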
|
1304.5260 | Effects of mixing in threshold models of social behavior | physics.soc-ph cs.SI | We consider the dynamics of an extension of the influential Granovetter model
of social behavior, where individuals are affected by their personal
preferences and observation of the neighbors' behavior. Individuals are
arranged in a network (usually, the square lattice) and each has a state and a
fixed threshold for behavior changes. We simulate the system asynchronously
by picking a random individual and either updating its state or exchanging it
with another randomly chosen individual (mixing). We describe the dynamics
analytically in the fast-mixing limit by using the mean-field approximation and
investigate it mainly numerically in the case of finite mixing. We show that
the dynamics converge to a manifold in state space, which determines the
possible equilibria, and show how to estimate the projection of the manifold
by using simulated trajectories started from different initial points.
We show that the effects of considering the network can be decomposed into
finite-neighborhood effects, and finite-mixing-rate effects, which have
qualitatively similar effects. Both of these effects increase the tendency of
the system to move from a less-desired equilibrium to the "ground state". Our
findings can be used to probe shifts in behavioral norms and have implications
for the role of information flow in determining when social norms that have
become unpopular (such as foot binding or female genital cutting) persist or
vanish.
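The update rule described above can be sketched on a ring instead of the square lattice (an assumption made for brevity; the neighborhood structure is otherwise analogous):

```python
import random

def simulate_threshold(thresholds, steps, p_mix, seed=0):
    # Asynchronous Granovetter-style dynamics on a ring (a stand-in for
    # the paper's square lattice). Each step: with probability p_mix,
    # swap two random individuals (mixing); otherwise a random
    # individual becomes active iff the fraction of its two active
    # neighbors meets its threshold.
    rng = random.Random(seed)
    n = len(thresholds)
    thr = list(thresholds)
    state = [0] * n
    state[0] = 1  # seed one active individual
    for _ in range(steps):
        if rng.random() < p_mix:
            i, j = rng.randrange(n), rng.randrange(n)
            state[i], state[j] = state[j], state[i]  # individuals move
            thr[i], thr[j] = thr[j], thr[i]          # with their traits
        else:
            i = rng.randrange(n)
            frac = (state[(i - 1) % n] + state[(i + 1) % n]) / 2.0
            state[i] = 1 if frac >= thr[i] else 0
    return state
```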
|
1304.5299 | Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget | cs.LG stat.ML | Can we make Bayesian posterior MCMC sampling more efficient when faced with
very large datasets? We argue that computing the likelihood for N datapoints in
the Metropolis-Hastings (MH) test to reach a single binary decision is
computationally inefficient. We introduce an approximate MH rule based on a
sequential hypothesis test that allows us to accept or reject samples with high
confidence using only a fraction of the data required for the exact MH rule.
While this method introduces an asymptotic bias, we show that this bias can be
controlled and is more than offset by a decrease in variance due to our ability
to draw more samples per unit of time.
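The approximate MH rule can be sketched as follows; the stopping rule below uses a plain t-statistic with a fixed threshold and assumes a flat prior and symmetric proposal, which simplifies the paper's sequential test.

```python
import math
import random

def approx_mh_accept(theta, theta_new, data, loglik, u, batch=30, seed=0):
    # Sketch of the approximate MH test: decide accept/reject from a
    # growing random subsample, stopping once a t-statistic makes the
    # decision clear (threshold 2.0, roughly a 5% level). `loglik(x, th)`
    # is the per-datum log-likelihood and `u` the MH uniform draw.
    n = len(data)
    mu0 = math.log(u) / n  # per-datum acceptance threshold
    order = random.Random(seed).sample(range(n), n)
    diffs = []
    for start in range(0, n, batch):
        for j in order[start:start + batch]:
            diffs.append(loglik(data[j], theta_new) - loglik(data[j], theta))
        m = len(diffs)
        mean = sum(diffs) / m
        if m == n:
            return mean > mu0  # all data used: exact MH decision
        sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (m - 1))
        # finite-population correction (sampling without replacement)
        se = sd * math.sqrt((1 - (m - 1) / (n - 1)) / m)
        if se == 0.0 or abs(mean - mu0) / se > 2.0:
            return mean > mu0  # confident early decision
    return mean > mu0
```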
|
1304.5304 | Exclusion and Guard Zones in DS-CDMA Ad Hoc Networks | cs.IT math.IT | The central issue in direct-sequence code-division multiple-access (DS-CDMA)
ad hoc networks is the prevention of the near-far problem. This paper considers
two types of guard zones that may be used to control the near-far problem: a
fundamental exclusion zone and an additional CSMA guard zone that may be
established by the carrier-sense multiple-access (CSMA) protocol. In the
exclusion zone, no mobiles are physically present, modeling the minimum
physical separation among mobiles that is always present in actual networks.
Potentially interfering mobiles beyond a transmitting mobile's exclusion zone,
but within its CSMA guard zone, are deactivated by the protocol. This paper
provides an analysis of DS-CSMA networks with either or both types of guard
zones. A network of finite extent with a finite number of mobiles and uniform
clustering as the spatial distribution is modeled. The analysis applies a
closed-form expression for the outage probability in the presence of Nakagami
fading, conditioned on the network geometry. The tradeoffs between exclusion
zones and CSMA guard zones are explored for DS-CDMA and unspread networks. The
spreading factor and the guard-zone radius provide design flexibility in
achieving specified levels of average outage probability and transmission
capacity. The advantage of an exclusion zone over a CSMA guard zone is that
since the network is not thinned, the number of active mobiles remains
constant, and higher transmission capacities can be achieved.
|
1304.5319 | A Joint Intensity and Depth Co-Sparse Analysis Model for Depth Map
Super-Resolution | cs.CV | High-resolution depth maps can be inferred from low-resolution depth
measurements and an additional high-resolution intensity image of the same
scene. To that end, we introduce a bimodal co-sparse analysis model, which is
able to capture the interdependency of registered intensity and depth
information. This model is based on the assumption that the co-supports of
corresponding bimodal image structures are aligned when computed by a suitable
pair of analysis operators. No analytic form of such operators exists, and we
propose a method for learning them from a set of registered training signals.
This learning process is done offline and returns a bimodal analysis operator
that is universally applicable to natural scenes. We use this to exploit the
bimodal co-sparse analysis model as a prior for solving inverse problems, which
leads to an efficient algorithm for depth map super-resolution.
|
1304.5350 | Parallel Gaussian Process Optimization with Upper Confidence Bound and
Pure Exploration | cs.LG stat.ML | In this paper, we consider the challenge of maximizing an unknown function f
for which evaluations are noisy and are acquired with high cost. An iterative
procedure uses the previous measures to actively select the next estimation of
f which is predicted to be the most useful. We focus on the case where the
function can be evaluated in parallel with batches of fixed size and analyze
the benefit compared to the purely sequential procedure in terms of cumulative
regret. We introduce the Gaussian Process Upper Confidence Bound and Pure
Exploration algorithm (GP-UCB-PE) which combines the UCB strategy and Pure
Exploration in the same batch of evaluations along the parallel iterations. We
prove theoretical upper bounds on the regret with batches of size K for this
procedure, which show an improvement of order sqrt(K) for fixed
iteration cost over purely sequential versions. Moreover, the multiplicative
constants involved have the property of being dimension-free. We also confirm
empirically the efficiency of GP-UCB-PE on real and synthetic problems compared
to state-of-the-art competitors.
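A minimal sketch of the batch-selection idea (UCB for the first point, then greedy pure-exploration points by posterior variance) is below, with a from-scratch RBF Gaussian process. The kernel, length scale, and fantasizing rule are illustrative simplifications, not the paper's exact procedure.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel over 1-D inputs (illustrative choice).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP posterior mean and variance at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.diag(rbf(Xs, Xs)) - np.einsum('ij,ij->j', Ks, sol)
    return mu, np.maximum(var, 0.0)

def gp_ucb_pe_batch(X, y, Xs, K_batch, beta=2.0):
    # One batch: first point maximizes the UCB mu + beta*sigma; the
    # remaining K-1 "pure exploration" points greedily maximize the
    # posterior variance, after fantasizing an observation (equal to
    # the posterior mean) at each previously chosen point.
    mu, var = gp_posterior(X, y, Xs)
    batch = [int(np.argmax(mu + beta * np.sqrt(var)))]
    Xf, yf = X.copy(), y.copy()
    for _ in range(K_batch - 1):
        xi = Xs[batch[-1]]
        mu_i, _ = gp_posterior(Xf, yf, np.array([xi]))
        Xf = np.append(Xf, xi)
        yf = np.append(yf, mu_i[0])
        _, var = gp_posterior(Xf, yf, Xs)
        batch.append(int(np.argmax(var)))
    return batch
```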
|
1304.5357 | Exact-Regenerating Codes between MBR and MSR Points | cs.DC cs.IT math.IT | In this paper we study distributed storage systems with exact repair. We give
a construction for regenerating codes between the minimum storage regenerating
(MSR) and the minimum bandwidth regenerating (MBR) points, and show that when
the parameters n, k, and d are close to each other our constructions are
close to optimal compared with the known capacity when only functional repair
is required. We do this by showing that when the differences between the
parameters n, k, and d are fixed but the actual values approach infinity, the
ratio between the performance of our codes with exact repair and the known
capacity of codes with functional repair approaches one.
|
1304.5384 | Quantum Popov robust stability analysis of an optical cavity containing
a saturated Kerr medium | quant-ph cs.SY math.OC | This paper applies results on the robust stability of nonlinear quantum
systems to a system consisting of an optical cavity containing a saturated Kerr
medium. The system is characterized by a Hamiltonian operator which contains a
non-quadratic term involving a quartic function of the annihilation and
creation operators. A saturated version of the Kerr nonlinearity leads to a
sector bounded nonlinearity which enables a quantum small gain theorem to be
applied to this system in order to analyze its stability. Also, a non-quadratic
version of a quantum Popov stability criterion is presented and applied to
analyze the stability of this system.
|
1304.5402 | Context-Independent Centrality Measures Underestimate the Vulnerability
of Power Grids | physics.soc-ph cs.SI nlin.AO | Power grid vulnerability is a key issue for society. A component failure may
trigger cascades of failures across the grid and lead to a large blackout.
Complex network approaches have shown a direction to study some of the problems
faced by power grids. Within Complex Network Analysis structural
vulnerabilities of power grids have been studied mostly using purely
topological approaches, which assume that the flow of power is dictated by
shortest paths. However, this fails to capture the real flow characteristics
of power grids. We propose a flow redistribution mechanism that closely
mimics the flow in power grids using power transfer distribution factors
(PTDF). With this mechanism we enhance
existing cascading failure models to study the vulnerability of power grids.
We apply the model to the European high-voltage grid to carry out a
comparative study for a number of centrality measures. `Centrality' gives an
indication of the criticality of network components. Our model offers a way to
find those centrality measures that give the best indication of node
vulnerability in the context of power grids, by considering not only the
network topology but also the power flowing through the network. In addition,
we use the model to determine the spare capacity that is needed to make the
grid robust to targeted attacks. We also show a brief comparison of the end
results with other power grid systems to generalise the result.
|
1304.5404 | A scalable computational framework for establishing long-term behavior
of stochastic reaction networks | q-bio.MN cs.SY math.OC math.PR | Reaction networks are systems in which the populations of a finite number of
species evolve through predefined interactions. Such networks are found as
modeling tools in many biological disciplines such as biochemistry, ecology,
epidemiology, immunology, systems biology and synthetic biology. It is now
well-established that, for small population sizes, stochastic models for
biochemical reaction networks are necessary to capture randomness in the
interactions. The tools for analyzing such models, however, still lag far
behind their deterministic counterparts. In this paper, we bridge this gap by
developing a constructive framework for examining the long-term behavior and
stability properties of the reaction dynamics in a stochastic setting. In
particular, we address the problem of determining ergodicity of the reaction
dynamics, which is analogous to having a globally attracting fixed point for
deterministic dynamics. We also examine when the statistical moments of the
underlying process remain bounded with time and when they converge to their
steady state values. The framework we develop relies on a blend of ideas from
probability theory, linear algebra and optimization theory. We demonstrate that
the stability properties of a wide class of biological networks can be assessed
from our sufficient theoretical conditions that can be recast as efficient and
scalable linear programs, well-known for their tractability. It is notably
shown that the computational complexity is often linear in the number of
species. We illustrate the validity, the efficiency and the wide applicability
of our results on several reaction networks arising in biochemistry, systems
biology, epidemiology and ecology. The biological implications of the results
as well as an example of a non-ergodic biological network are also discussed.
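Trajectories of such stochastic reaction networks are conventionally sampled with Gillespie's stochastic simulation algorithm, sketched here. The paper analyzes long-term behavior; this sketch only generates sample paths, and the encoding of reactions is illustrative.

```python
import random

def gillespie(x0, reactions, t_max, seed=0):
    # Gillespie's stochastic simulation algorithm (SSA) for a reaction
    # network. `reactions` is a list of
    # (propensity_function, state_change_vector) pairs.
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while True:
        props = [f(x) for f, _ in reactions]
        total = sum(props)
        if total == 0.0:
            break  # no reaction can fire: absorbing state
        dt = rng.expovariate(total)  # exponential waiting time
        if t + dt > t_max:
            break
        t += dt
        u, acc = rng.random() * total, 0.0
        for p, (_, change) in zip(props, reactions):
            acc += p
            if u <= acc:  # pick a reaction with prob. proportional to p
                x = [xi + ci for xi, ci in zip(x, change)]
                break
    return x
```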
|
1304.5409 | Separating the Real from the Synthetic: Minutiae Histograms as
Fingerprints of Fingerprints | cs.CV cs.AI cs.DB | In this study we show that fingerprints synthetically generated by the
current state of the art can easily be discriminated from real fingerprints. We
propose a method based on second order extended minutiae histograms (MHs) which
can distinguish between real and synthetic prints with very high accuracy. MHs
provide a fixed-length feature vector for a fingerprint which is invariant
under rotation and translation. This 'test of realness' can be applied to
synthetic fingerprints produced by any method. In this work, tests are
conducted on the 12 publicly available databases of FVC2000, FVC2002 and
FVC2004 which are well established benchmarks for evaluating the performance of
fingerprint recognition algorithms; 3 of these 12 databases consist of
artificial fingerprints generated by the SFinGe software. Additionally, we
evaluate the discriminative performance on a database of synthetic fingerprints
generated by the software of Bicz versus real fingerprint images. We conclude
with suggestions for the improvement of synthetic fingerprint generation.
|
1304.5416 | The Worst Case ISI channels and the Uniqueness of the Corresponding
Minimum Eigenvalue | cs.IT math.IT | Intersymbol interference (ISI) is a major cause of degradation in the
receiver performance of high-speed data communications systems. This arises
mainly due to the limited available bandwidth. The minimum Euclidean distance
between any two symbol sequences is an important parameter in this case at
moderate to high signal-to-noise ratios. It is proven here that as ISI
increases the minimum distance strictly decreases when the worst case scenario
is considered. From this it follows that the minimum eigenvalue of the worst
case ISI channel of a given length is unique.
|
1304.5449 | Solving WCSP by Extraction of Minimal Unsatisfiable Cores | cs.AI | Usual techniques to solve WCSP are based on cost transfer operations coupled
with a branch and bound algorithm. In this paper, we focus on an approach
integrating extraction and relaxation of Minimal Unsatisfiable Cores in order
to solve this problem. We present two variants of our approach: an
incomplete, greedy algorithm and a complete one.
|
1304.5457 | Personalized Academic Research Paper Recommendation System | cs.IR cs.DL cs.LG | A huge number of academic papers appear in many conferences and journals
these days. In these circumstances, most researchers rely on keyword-based
search or on browsing through the proceedings of top conferences and journals
to find related work. To ease this difficulty, we propose a Personalized Academic
Research Paper Recommendation System, which recommends related articles, for
each researcher, that may be interesting to her/him. In this paper, we first
introduce our web crawler to retrieve research papers from the web. Then, we
define similarity between two research papers based on the text similarity
between them. Finally, we propose our recommender system developed using
collaborative filtering methods. Our evaluation results demonstrate that our
system recommends good quality research papers.
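One common concrete choice for the text similarity mentioned above is TF-IDF weighting with cosine similarity, sketched below; the paper's exact similarity definition may differ.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Represent each document (a list of tokens) as a sparse TF-IDF
    # dict; a standard choice for text-based similarity, shown here
    # as an illustration rather than the paper's exact definition.
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    # Cosine similarity between two sparse vectors.
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```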
|
1304.5479 | Local Backbones | cs.CC cs.AI | A backbone of a propositional CNF formula is a variable whose truth value is
the same in every truth assignment that satisfies the formula. The notion of
backbones for CNF formulas has been studied in various contexts. In this paper,
we introduce local variants of backbones, and study the computational
complexity of detecting them. In particular, we consider k-backbones, which are
backbones for sub-formulas consisting of at most k clauses, and iterative
k-backbones, which are backbones that result after repeated instantiations of
k-backbones. We determine the parameterized complexity of deciding whether a
variable is a k-backbone or an iterative k-backbone for various restricted
formula classes, including Horn, definite Horn, and Krom. We also present some
first empirical results regarding backbones for CNF-Satisfiability (SAT). The
empirical results we obtain show that a large fraction of the backbones of
structured SAT instances are local, in contrast to random instances, which
appear to have few local backbones.
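To make the backbone definitions concrete, here is a small brute-force sketch (an illustration only, not the paper's method, whose point is the parameterized complexity analysis): a variable is a backbone if it takes the same value in every satisfying assignment, and a k-backbone if it is a backbone of some sub-formula of at most k clauses.

```python
from itertools import combinations, product

def satisfying_assignments(clauses, variables):
    """Enumerate all assignments (dicts) satisfying a CNF formula.
    Clauses are lists of nonzero ints; -v means "v is false"."""
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield a

def is_backbone(clauses, var):
    """var is a backbone iff it takes the same value in every model."""
    vs = sorted({abs(l) for c in clauses for l in c})
    vals = {a[var] for a in satisfying_assignments(clauses, vs)}
    return len(vals) == 1

def is_k_backbone(clauses, var, k):
    """var is a k-backbone iff it is a backbone of some sub-formula
    of at most k clauses that mention var."""
    return any(is_backbone(list(sub), var)
               for m in range(1, k + 1)
               for sub in combinations(clauses, m)
               if any(abs(l) == var for c in sub for l in c))

# x1 is forced true by the unit clause, hence a 1-backbone.
cnf = [[1], [-1, 2], [2, 3]]
print(is_backbone(cnf, 1), is_k_backbone(cnf, 1, 1))  # → True True
```

The exponential enumeration is only for clarity; the paper's complexity results concern exactly how far such brute force can be avoided on restricted classes.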
|
1304.5504 | Optimal Stochastic Strongly Convex Optimization with a Logarithmic
Number of Projections | cs.LG stat.ML | We consider stochastic strongly convex optimization with a complex inequality
constraint. This complex inequality constraint may lead to computationally
expensive projections in algorithmic iterations of the stochastic gradient
descent~(SGD) methods. To reduce the computation costs pertaining to the
projections, we propose an Epoch-Projection Stochastic Gradient
Descent~(Epro-SGD) method. The proposed Epro-SGD method consists of a sequence
of epochs; it applies SGD to an augmented objective function at each iteration
within the epoch, and then performs a projection at the end of each epoch.
For a strongly convex optimization problem and a total of $T$ iterations,
Epro-SGD requires only $\log(T)$ projections, and meanwhile attains an optimal
convergence rate of $O(1/T)$, both in expectation and with a high probability.
To exploit the structure of the optimization problem, we propose a proximal
variant of Epro-SGD, namely Epro-ORDA, based on the optimal regularized dual
averaging method. We apply the proposed methods on real-world applications; the
empirical results demonstrate the effectiveness of our methods.
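The epoch-projection idea can be sketched on a toy problem (an assumed minimal instance, not the authors' implementation): within each epoch, plain gradient steps are applied to a penalty-augmented objective, and the exact projection onto the feasible set is performed only once per epoch, so the number of projections scales with the number of epochs rather than the number of iterations.

```python
import math

def epro_sgd_sketch(c, lam=5.0, eta0=0.05, epochs=8, epoch_len=200):
    """Toy Epro-SGD sketch: minimize ||x - c||^2 s.t. ||x|| <= 1.
    Within each epoch, gradient steps are taken on the augmented
    objective f(x) + lam * max(0, ||x||^2 - 1); the exact projection
    onto the ball is applied only once, at the end of the epoch."""
    x = [0.0] * len(c)
    projections = 0
    for k in range(epochs):
        eta = eta0 / (k + 1)  # shrink the step size per epoch
        for _ in range(epoch_len):
            g = [2 * (xi - ci) for xi, ci in zip(x, c)]  # grad of f
            if sum(xi * xi for xi in x) > 1.0:           # penalty active
                g = [gi + lam * 2 * xi for gi, xi in zip(g, x)]
            x = [xi - eta * gi for xi, gi in zip(x, g)]
        norm = math.sqrt(sum(xi * xi for xi in x))
        if norm > 1.0:  # single projection per epoch
            x = [xi / norm for xi in x]
        projections += 1
    return x, projections

# The constrained optimum is the boundary point [1, 0]; only 8
# projections are used for 1600 gradient steps.
x, n_proj = epro_sgd_sketch([2.0, 0.0])
print(x, n_proj)
```

The actual Epro-SGD schedule and its $O(1/T)$ rate rely on strong convexity and carefully chosen epoch lengths; this sketch only illustrates the projection-saving structure.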
|
1304.5507 | Analysing Mood Patterns in the United Kingdom through Twitter Content | cs.SI physics.soc-ph | Social Media offer a vast amount of geo-located and time-stamped textual
content directly generated by people. This information can be analysed to
obtain insights about the general state of a large population of users and to
address scientific questions from a diversity of disciplines. In this work, we
estimate temporal patterns of mood variation through the use of emotionally
loaded words contained in Twitter messages, possibly reflecting underlying
circadian and seasonal rhythms in the mood of the users. We present a method
for computing mood scores from text using affective word taxonomies, and apply
it to millions of tweets collected in the United Kingdom during the seasons of
summer and winter. Our analysis results in the detection of strong and
statistically significant circadian patterns for all the investigated mood
types. Seasonal variation does not seem to register any important divergence in
the signals, but a periodic oscillation within a 24-hour period is identified
for each mood type. The main characteristic common to all emotions is their
mid-morning peak; however, their mood score patterns differ in the evenings.
|
1304.5530 | Inexact Coordinate Descent: Complexity and Preconditioning | math.OC cs.AI stat.ML | In this paper we consider the problem of minimizing a convex function using a
randomized block coordinate descent method. One of the key steps at each
iteration of the algorithm is determining the update to a block of variables.
Existing algorithms assume that in order to compute the update, a particular
subproblem is solved exactly. In this work we relax this requirement and allow
for the subproblem to be solved inexactly, leading to an inexact block
coordinate descent method. Our approach incorporates the best known results for
exact updates as a special case. Moreover, these theoretical guarantees are
complemented by practical considerations: the use of iterative techniques to
determine the update as well as the use of preconditioning for further
acceleration.
|
1304.5545 | Designing Electronic Markets for Defeasible-based Contractual Agents | cs.MA | The design of punishment policies applied to specific domains linking agents'
actions to material penalties is an open research issue. The proposed framework
applies principles of contract law to set penalties: expectation damages,
opportunity cost, reliance damages, and party design remedies. In order to
decide which remedy provides maximum welfare within an electronic market, a
simulation environment called DEMCA (Designing Electronic Markets for
Contractual Agents) was developed. Knowledge representation and the reasoning
capabilities of the agents are based on an extended version of temporal
defeasible logic.
|
1304.5550 | OntoRich - A Support Tool for Semi-Automatic Ontology Enrichment and
Evaluation | cs.AI | This paper presents the OntoRich framework, a support tool for semi-automatic
ontology enrichment and evaluation. WordNet is used to extract candidates
for dynamic ontology enrichment from RSS streams. With the integration of
OpenNLP the system gains access to syntactic analysis of the RSS news. The
enriched ontologies are evaluated against several qualitative metrics.
|
1304.5554 | Enacting Social Argumentative Machines in Semantic Wikipedia | cs.AI | This research advocates the idea of combining argumentation theory with the
social web technology, aiming to enact large scale or mass argumentation. The
proposed framework allows mass-collaborative editing of structured arguments in
the style of semantic wikipedia. The long term goal is to apply the abstract
machinery of argumentation theory to more practical applications based on human
generated arguments, such as deliberative democracy, business negotiation, or
self-care. The ARGNET system was developed based on the Semantic MediaWiki
framework and on the Argument Interchange Format (AIF) ontology.
|
1304.5565 | Computing Pathways to Systems Biology: Key Contributions of
Computational Methods in Pathway Identification | q-bio.MN cs.CE | Understanding large molecular networks consisting of entities such as genes,
proteins or RNAs that interact in complex ways to drive the cellular machinery
has been an active focus of systems biology. Computational approaches have
played a key role in systems biology by complementing theoretical and
experimental approaches. Here we roadmap some key contributions of
computational methods developed over the last decade in the reconstruction of
biological pathways. We position these contributions in a 'systems biology
perspective' to reemphasize their roles in unraveling cellular mechanisms and
to understand 'systems biology diseases' including cancer.
|
1304.5566 | A Markov Model for Ontology Alignment | cs.DB cs.AI | The explosion of available data along with the need to integrate and utilize
that data has led to a pressing interest in data integration techniques. In
terms of Semantic Web technologies, Ontology Alignment is a key step in the
process of integrating heterogeneous knowledge bases. In this paper, we present
the Edge Confidence technique, a modification and improvement over the popular
Similarity Flooding technique for Ontology Alignment.
|
1304.5568 | DORI: Distributed Outdoor Robotic Instruments | cs.RO | DORI (Distributed Outdoor Robotic Instruments) is a remotely controlled
vehicle that is designed to simulate a planetary exploration mission. DORI is
equipped with over 20 environmental sensors and can perform basic data
analysis, logging and remote upload. The individual components are distributed
across a fault-tolerant bus for redundancy. A partial sensor list includes
atmospheric pressure, rainfall, wind speed, GPS, gyroscopic inertia, linear
acceleration, magnetic field strength, temperature, laser and ultrasonic
distance sensing, as well as digital audio and video capture. The project uses
recycled consumer electronics devices as a low-cost source for sensor
components. This report describes the hardware design of DORI including sensor
electronics, embedded firmware, and physical construction.
|
1304.5574 | Maximum-rate Transmission with Improved Diversity Gain for Interference
Networks | cs.IT math.IT | Interference alignment (IA) was shown effective for interference management
to improve transmission rate in terms of the degree of freedom (DoF) gain. On
the other hand, orthogonal space-time block codes (STBCs) were widely used in
point-to-point multi-antenna channels to enhance transmission reliability in
terms of the diversity gain. In this paper, we connect these two ideas, i.e.,
IA and space-time block coding, to improve the designs of alignment precoders
for multi-user networks. Specifically, we consider the use of Alamouti codes
for IA because of its rate-one transmission and achievability of full diversity
in point-to-point systems. The Alamouti codes protect the desired link by
introducing orthogonality between the two symbols in one Alamouti codeword, and
create alignment at the interfering receiver. We show that the proposed
alignment methods can maintain the maximum DoF gain and improve the ergodic
mutual information in the long-term regime, while increasing the diversity gain
to 2 in the short-term regime. The presented examples of interference networks
have two antennas at each node and include the two-user X channel, the
interfering multi-access channel (IMAC), and the interfering broadcast
channel (IBC).
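The orthogonality that the scheme exploits can be checked directly from the structure of a 2x2 Alamouti codeword (a generic illustration of the code itself, not of the paper's alignment precoders):

```python
# Alamouti codeword for two complex symbols s1, s2 (rows = time slots,
# columns = transmit antennas):
#     [ s1    s2  ]
#     [-s2*   s1* ]
# Its columns are orthogonal for any s1, s2, which is what yields
# rate-one transmission with full diversity in point-to-point links.

def alamouti(s1, s2):
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def column_inner_product(code):
    """Hermitian inner product of the two codeword columns."""
    return sum(code[t][0] * code[t][1].conjugate() for t in range(2))

X = alamouti(1 + 2j, -0.5 + 1j)
print(column_inner_product(X))  # → 0j for any pair of symbols
```

The paper's contribution is in how such codewords are combined with alignment precoders across users; the orthogonality above is the point-to-point property being reused.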
|
1304.5575 | Inverse Density as an Inverse Problem: The Fredholm Equation Approach | cs.LG stat.ML | In this paper we address the problem of estimating the ratio $\frac{q}{p}$
where $p$ is a density function and $q$ is another density, or, more generally
an arbitrary function. Knowing or approximating this ratio is needed in various
problems of inference and integration, in particular, when one needs to average
a function with respect to one probability distribution, given a sample from
another. It is often referred to as {\it importance sampling} in statistical
inference and is also closely related to the problem of {\it covariate shift}
in transfer learning as well as to various MCMC methods. It may also be useful
for separating the underlying geometry of a space, say a manifold, from the
density function defined on it.
Our approach is based on reformulating the problem of estimating
$\frac{q}{p}$ as an inverse problem in terms of an integral operator
corresponding to a kernel, and thus reducing it to an integral equation, known
as the Fredholm problem of the first kind. This formulation, combined with the
techniques of regularization and kernel methods, leads to a principled
kernel-based framework for constructing algorithms and for analyzing them
theoretically.
The resulting family of algorithms (FIRE, for Fredholm Inverse Regularized
Estimator) is flexible, simple and easy to implement.
We provide detailed theoretical analysis including concentration bounds and
convergence rates for the Gaussian kernel in the case of densities defined on
$\mathbb{R}^d$, compact domains in $\mathbb{R}^d$, and smooth $d$-dimensional sub-manifolds of
the Euclidean space.
We also show experimental results including applications to classification
and semi-supervised learning within the covariate shift framework and
demonstrate some encouraging experimental comparisons. We also show how the
parameters of our algorithms can be chosen in a completely unsupervised manner.
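As a crude 1-D illustration of the Fredholm reformulation (a simplified sketch under assumed choices of kernel, bandwidth, and regularization, not the authors' FIRE estimator), one can discretize the integral operator on a sample from $p$, estimate the right-hand side from a sample from $q$, and solve the resulting regularized linear system:

```python
import math, random

def k(x, y, s=0.3):
    """Gaussian kernel (bandwidth s is an assumed choice)."""
    return math.exp(-(x - y) ** 2 / (2 * s * s))

def solve(A, b):
    """Solve A f = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            m = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= m * M[i][c]
    f = [0.0] * n
    for i in reversed(range(n)):
        f[i] = (M[i][n] - sum(M[i][j] * f[j] for j in range(i + 1, n))) / M[i][i]
    return f

random.seed(0)
xp = [random.uniform(0, 1) for _ in range(80)]              # sample from p = U[0,1]
zq = [math.sqrt(random.uniform(0, 1)) for _ in range(200)]  # sample from q(x) = 2x
n, lam = len(xp), 1e-2
# Discretized operator (K_p f)(x_i) ~ (1/n) sum_j k(x_i, x_j) f(x_j), plus ridge.
A = [[k(xp[i], xp[j]) / n + (lam if i == j else 0.0) for j in range(n)]
     for i in range(n)]
# Right-hand side: integral of k(x_i, .) against q, estimated from the q-sample.
b = [sum(k(xp[i], z) for z in zq) / len(zq) for i in range(n)]
f = solve(A, b)  # f[i] estimates the ratio (q/p)(x_i) = 2 * x_i
left = [fi for fi, x in zip(f, xp) if x < 0.5]
right = [fi for fi, x in zip(f, xp) if x >= 0.5]
print(sum(right) / len(right) > sum(left) / len(left))
```

The ridge term plays the role of the regularization the paper analyzes; the estimate recovers the increasing trend of $q/p = 2x$ even though both densities are only seen through samples.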
|
1304.5583 | Distributed Low-rank Subspace Segmentation | cs.CV cs.DC cs.LG stat.ML | Vision problems ranging from image clustering to motion segmentation to
semi-supervised learning can naturally be framed as subspace segmentation
problems, in which one aims to recover multiple low-dimensional subspaces from
noisy and corrupted input data. Low-Rank Representation (LRR), a convex
formulation of the subspace segmentation problem, is provably and empirically
accurate on small problems but does not scale to the massive sizes of modern
vision datasets. Moreover, past work aimed at scaling up low-rank matrix
factorization is not applicable to LRR given its non-decomposable constraints.
In this work, we propose a novel divide-and-conquer algorithm for large-scale
subspace segmentation that can cope with LRR's non-decomposable constraints and
maintains LRR's strong recovery guarantees. This has immediate implications for
the scalability of subspace segmentation, which we demonstrate on a benchmark
face recognition dataset and in simulations. We then introduce novel
applications of LRR-based subspace segmentation to large-scale semi-supervised
learning for multimedia event detection, concept detection, and image tagging.
In each case, we obtain state-of-the-art results and order-of-magnitude speed
ups.
|
1304.5587 | Color image denoising by chromatic edges based vector valued diffusion | cs.CV | In this letter we propose to denoise digital color images via an improved
geometric diffusion scheme. By introducing edges detected from all three color
channels into the diffusion the proposed scheme avoids color smearing
artifacts. Vector-valued diffusion is used to control the smoothing while the
geometry of the color image is taken into consideration. A color edge strength
function computed from the different planes is introduced, which stops the
diffusion from spreading across chromatic edges. Experimental results indicate that the
scheme achieves good denoising with edge preservation when compared to other
related schemes.
|
1304.5590 | Distributed Constrained Optimization by Consensus-Based Primal-Dual
Perturbation Method | cs.SY math.OC | Various distributed optimization methods have been developed for solving
problems which have simple local constraint sets and whose objective function
is the sum of local cost functions of distributed agents in a network.
Motivated by emerging applications in smart grid and distributed sparse
regression, this paper studies distributed optimization methods for solving
general problems which have a coupled global cost function and have inequality
constraints. We consider a network scenario where each agent has no global
knowledge and can access only its local mapping and constraint functions. To
solve this problem in a distributed manner, we propose a consensus-based
distributed primal-dual perturbation (PDP) algorithm. In the algorithm, agents
employ the average consensus technique to estimate the global cost and
constraint functions via exchanging messages with neighbors, and meanwhile use
a local primal-dual perturbed subgradient method to approach a global optimum.
The proposed PDP method not only can handle smooth inequality constraints but
also non-smooth constraints such as some sparsity promoting constraints arising
in sparse optimization. We prove that the proposed PDP algorithm converges to
an optimal primal-dual solution of the original problem, under standard problem
and network assumptions. Numerical results illustrating the performance of the
proposed algorithm for a distributed demand response control problem in smart
grid are also presented.
|
1304.5594 | Dew Point modelling using GEP based multi objective optimization | cs.NE | Different techniques are used to model the relationship between temperatures,
dew point and relative humidity. Gene expression programming is capable of
modelling complex realities with great accuracy, allowing, at the same time,
the extraction of knowledge from the evolved models, compared to other learning
algorithms. We aim to use Gene Expression Programming for modelling the dew
point. Generally, the accuracy of the model is the only objective used by the
selection mechanism of GEP. This tends to evolve large models with low training
error. To avoid this situation, the use of multiple objectives, such as the
accuracy and the size of the model, is preferred by Genetic Programming
practitioners. The solution to a multi-objective problem is a set of solutions
that satisfies the objectives given by the decision maker. We therefore use
multi-objective GEP to evolve simple models. Various algorithms widely used for
multi-objective optimization, such as NSGA-II and SPEA 2, are tested on
different test problems. The results indicate that SPEA 2 outperforms NSGA-II
in terms of execution time, number of solutions obtained, and convergence rate,
so we selected SPEA 2 for dew point prediction. Multi-objective GEP produces
accurate and simpler (smaller) solutions than plain GEP for dew point
prediction, since it considers the dual objectives of fitness and solution
size. These simple models can be used to predict future values of dew point.
|
1304.5610 | Tight Performance Bounds for Approximate Modified Policy Iteration with
Non-Stationary Policies | math.OC cs.AI | We consider approximate dynamic programming for the infinite-horizon
stationary $\gamma$-discounted optimal control problem formalized by Markov
Decision Processes. While in the exact case it is known that there always
exists an optimal policy that is stationary, we show that when using value
function approximation, looking for a non-stationary policy may lead to a
better performance guarantee. We define a non-stationary variant of MPI that
unifies a broad family of approximate DP algorithms of the literature. For this
algorithm we provide an error propagation analysis in the form of a performance
bound of the resulting policies that can improve the usual performance bound by
a factor $O(1-\gamma)$, which is significant when the discount factor $\gamma$
is close to 1. Doing so, our approach unifies recent results for Value and
Policy Iteration. Furthermore, we show, by constructing a specific
deterministic MDP, that our performance guarantee is tight.
|