| id | title | categories | abstract |
|---|---|---|---|
1403.6192 | Quantum Synchronizable Codes From Quadratic Residue Codes and Their
Supercodes | cs.IT math.IT | Quantum synchronizable codes are quantum error-correcting codes designed to
correct the effects of both quantum noise and block synchronization errors.
While it is known that quantum synchronizable codes can be constructed from
cyclic codes that satisfy special properties, only a few classes of cyclic
codes have been proved to give promising quantum synchronizable codes. In this
paper, using quadratic residue codes and their supercodes, we give a simple
construction for quantum synchronizable codes whose synchronization
capabilities attain the upper bound. The method is applicable to cyclic codes
of prime length.
|
1403.6199 | Predicting Successful Memes using Network and Community Structure | cs.SI cs.CY physics.data-an physics.soc-ph | We investigate the predictability of successful memes using their early
spreading patterns in the underlying social networks. We propose and analyze a
comprehensive set of features and develop an accurate model to predict future
popularity of a meme given its early spreading patterns. Our paper provides the
first comprehensive comparison of existing predictive frameworks. We categorize
our features into three groups: influence of early adopters, community
concentration, and characteristics of adoption time series. We find that
features based on community structure are the most powerful predictors of
future success. We also find that early popularity of a meme is not a good
predictor of its future popularity, contrary to common belief. Our methods
outperform other approaches, particularly in the task of detecting very popular
or unpopular memes.
|
1403.6213 | Embedding Cryptographic Features in Compressive Sensing | cs.CR cs.IT math.IT | Compressive sensing (CS) has been widely studied and applied in many fields.
Recently, the way to perform secure compressive sensing (SCS) has become a
topic of growing interest. Existing works on SCS usually take the sensing
matrix as a key, and the resulting security level is not evaluated in depth.
They can only be considered preliminary explorations of SCS, as a concrete
and operable encryption model has not yet been given. In this paper, we
investigate SCS in a systematic way. The relationship between CS and
symmetric-key ciphers indicates some possible encryption models. To this end, we
propose the two-level protection models (TLPM) for SCS, which are developed from
the taking of measurements and from subsequent operations, respectively. It is believed that these
models will provide a new point of view and stimulate further research in both
CS and cryptography. Specifically, an efficient and secure encryption scheme
for parallel compressive sensing (PCS) is designed by embedding a two-layer
protection in PCS using chaos. The first layer is undertaken by random
permutation on a two-dimensional signal, which is proved to be an acceptable
permutation with overwhelming probability. The other layer is to sample the
permuted signal column by column with the same chaotic measurement matrix,
which satisfies the restricted isometry property of PCS with overwhelming
probability. Both the random permutation and the measurement matrix are
constructed under the control of a chaotic system. Simulation results show that
unlike the general joint compression and encryption schemes in which encryption
always leads to the same or a lower compression ratio, the proposed approach of
embedding encryption in PCS actually improves the compression performance.
Besides, the proposed approach possesses high transmission robustness against
additive white Gaussian noise and cropping attacks.
|
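The two-layer construction described in the abstract above can be illustrated with a minimal sketch: a chaos-driven permutation of a 2-D signal, followed by sampling every column with the same chaotic measurement matrix. The logistic map, the ±1 quantization of the chaotic values, and the key values are illustrative assumptions, not the paper's exact chaotic system or matrix construction.

```python
def logistic(x0, n, mu=3.99):
    """Iterate the logistic map; an illustrative stand-in for the chaotic system."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_permutation(n, key):
    """Layer 1: rank a chaotic sequence to obtain a permutation of 0..n-1."""
    seq = logistic(key, n)
    return sorted(range(n), key=lambda i: seq[i])

def measure_columns(signal, m, key):
    """Layer 2: sample every column with the same m x n chaotic matrix."""
    n = len(signal)                        # number of rows
    vals = logistic(key, m * n)
    # quantize chaotic values to +/-1 entries (a simple illustrative choice)
    phi = [[1.0 if vals[r * n + c] > 0.5 else -1.0 for c in range(n)]
           for r in range(m)]
    cols = len(signal[0])
    return [[sum(phi[r][i] * signal[i][j] for i in range(n)) for j in range(cols)]
            for r in range(m)]

key1, key2 = 0.3456, 0.7891                # secret initial conditions (hypothetical keys)
sig = [[float(4 * i + j) for j in range(4)] for i in range(4)]
perm = chaotic_permutation(16, key1)       # layer 1: permute the 2-D signal's pixels
flat = [sig[i // 4][i % 4] for i in perm]
permuted = [[flat[4 * i + j] for j in range(4)] for i in range(4)]
y = measure_columns(permuted, 3, key2)     # layer 2: 3 x 4 compressed measurements
```

Both layers are reproducible from the secret initial conditions alone, which is what lets the legitimate receiver invert them while an eavesdropper cannot.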
1403.6214 | Analysis of Linear Quantum Optical Networks | quant-ph cs.SY | This paper is concerned with the analysis of linear quantum optical networks.
It provides a systematic approach to the construction of a model for a given
quantum network in terms of a system of quantum stochastic differential
equations. This corresponds to a classical state space model. The linear
quantum optical networks under consideration consist of interconnections
between optical cavities, optical squeezers, and beamsplitters. These models
can then be used in the design of quantum feedback control systems for these
networks.
|
1403.6225 | H-infinity control problem for general discrete-time systems | math.OC cs.SY | The paper considers the suboptimal H-infinity control problem for a general
discrete-time system (whose transfer function matrix is allowed to be improper
or polynomial). The parametrization of output feedback controllers is given in
a realization-based setting, involves two generalized algebraic Riccati
equations, and features the same elegant simplicity as the standard (proper)
case. A relevant real-life numerical example demonstrates the effectiveness of our
approach.
|
1403.6248 | Classroom Video Assessment and Retrieval via Multiple Instance Learning | cs.IR cs.CY cs.LG | We propose a multiple instance learning approach to content-based retrieval
of classroom video for the purpose of supporting humans in assessing the learning
environment. The key element of our approach is a mapping between the semantic
concepts of the assessment system and features of the video that can be
measured using techniques from the fields of computer vision and speech
analysis. We report on a formative experiment in content-based video retrieval
involving trained experts in the Classroom Assessment Scoring System, a widely
used framework for assessment and improvement of learning environments. The
results of this experiment suggest that our approach has potential application
to productivity enhancement in assessment and to broader retrieval tasks.
|
1403.6260 | Capturing and Recognizing Objects Appearance Employing Eigenspace | cs.CV | This paper presents a method of capturing an object's appearances from its
environment and describes how to recognize unknown appearances by creating an
eigenspace. This representation and recognition can be performed automatically
by capturing an object's various appearances with robotic vision in a defined
environment. The technique also allows objects to be extracted from
complicated scenes. In this case, some object appearances are captured under
defined occlusions, and eigenspaces are created from both non-occluded and
occluded appearances together. An eigenspace is constructed successfully each
time a new object appears, and various appearances are accumulated gradually.
A sequence of appearances is generated from the accumulated shapes and is used
for recognizing unknown object appearances. Various object environments are
used in the experiments to capture object appearances, and the experimental
results show the effectiveness of the proposed approach.
|
1403.6274 | Arguments for Nested Patterns in Neural Ensembles | cs.NE q-bio.NC | This paper describes a relatively simple way of allowing a brain model to
self-organise its concept patterns through nested structures. Time is a key
element and a simulator would be able to show how patterns may form and then
fire in sequence, as part of a search or thought process. It uses a very simple
equation to show how the inhibitors, in particular, can switch off certain
areas, allowing other areas to become the prominent ones and thereby define the
current brain state. This allows for a small amount of control over what
appears to be a chaotic structure inside the brain. It is attractive because
it is still mostly mechanical and can therefore be added as an automatic
process, or as a model of one. The paper also describes how the nested
pattern structure can be used as a basic counting mechanism.
|
1403.6275 | A Tiered Move-making Algorithm for General Non-submodular Pairwise
Energies | cs.CV | A large number of problems in computer vision can be modelled as energy
minimization problems in a Markov Random Field (MRF) or Conditional Random
Field (CRF) framework. Graph-cuts based $\alpha$-expansion is a standard
move-making method to minimize the energy functions with sub-modular pairwise
terms. However, certain problems require more complex pairwise terms where the
$\alpha$-expansion method is generally not applicable.
In this paper, we propose an iterative {\em tiered move making algorithm}
which is able to handle general pairwise terms. Each move to the next
configuration is based on the current labeling and an optimal tiered move,
where each tiered move requires one application of the dynamic programming
based tiered labeling method introduced by Felzenszwalb et al.
\cite{tiered_cvpr_felzenszwalbV10}. The algorithm converges to a local minimum
for any general pairwise potential, and we give a theoretical analysis of the
properties of the algorithm, characterizing the situations in which we can
expect good performance. We first evaluate our method on an object-class
segmentation problem using the Pascal VOC-11 segmentation dataset where we
learn general pairwise terms. Further we evaluate the algorithm on many other
benchmark labeling problems such as stereo, image segmentation, image stitching
and image denoising. Our method consistently achieves better accuracy and
energy values than $\alpha$-expansion, loopy belief propagation (LBP), and
quadratic pseudo-boolean optimization (QPBO), and is competitive with TRWS.
|
1403.6290 | Spectral Sparse Representation for Clustering: Evolved from PCA,
K-means, Laplacian Eigenmap, and Ratio Cut | cs.CV | Dimensionality reduction, cluster analysis, and sparse representation are
basic components in machine learning. However, their relationships have not yet
been fully investigated. In this paper, we find that the spectral graph theory
underlies a series of these elementary methods and can unify them into a
complete framework. The methods include PCA, K-means, Laplacian eigenmap (LE),
ratio cut (Rcut), and a new sparse representation method developed by us,
called spectral sparse representation (SSR). Further, extended relations to
conventional over-complete sparse representations (e.g., method of optimal
directions, KSVD), manifold learning (e.g., kernel PCA, multidimensional
scaling, Isomap, locally linear embedding), and subspace clustering (e.g.,
sparse subspace clustering, low-rank representation) are incorporated. We show
that, under an ideal condition from the spectral graph theory, PCA, K-means,
LE, and Rcut are unified together. When the condition is relaxed, the
unification evolves to SSR, which lies intermediate between PCA/LE and
K-means/Rcut. An efficient algorithm, NSCrt, is developed to solve the sparse
codes of SSR. SSR combines the merits of both sides: its sparse codes reduce
the dimensionality of the data while revealing the cluster structure. Owing to
its inherent relation to cluster analysis, the codes of SSR can be used
directly for clustering. Scut, a clustering approach derived from SSR, reaches
state-of-the-art performance in the spectral clustering family. The one-shot
solution obtained by Scut is comparable to the best result of K-means run many
times. Experiments on various data sets demonstrate the properties
and strengths of SSR, NSCrt, and Scut.
|
1403.6315 | Cost Effective Rumor Containment in Social Networks | physics.soc-ph cs.SI | The spread of rumors through social media and online social networks can not
only disrupt the daily lives of citizens but also result in loss of life and
property. A rumor spreads when individuals, who are unable to decide the
authenticity of the information, mistake the rumor as genuine information and
pass it on to their acquaintances. We propose a solution where a set of
individuals (based on their degree) in the social network are trained and
provided resources to help them distinguish a rumor from genuine information.
By formulating an optimization problem we calculate the optimum set of
individuals, who must undergo training, and the quality of training that
minimizes the expected training cost and ensures an upper bound on the size of
the rumor outbreak. Our primary contribution is that, although the optimization
problem turns out to be non-convex, we show that the problem is equivalent to
solving a set of linear programs. This result also allows us to solve the
problem of minimizing the size of rumor outbreak for a given cost budget. The
optimum solution displays an interesting pattern which can be implemented as a
heuristic. These results can prove to be very useful for social planners and
law enforcement agencies for preventing dangerous rumors and misinformation
epidemics.
|
1403.6318 | Stabilizing dual-energy X-ray computed tomography reconstructions using
patch-based regularization | cs.CV physics.med-ph | Recent years have seen growing interest in exploiting dual- and multi-energy
measurements in computed tomography (CT) in order to characterize material
properties as well as object shape. Material characterization is performed by
decomposing the scene into constitutive basis functions, such as Compton
scatter and photoelectric absorption functions. While well motivated
physically, the joint recovery of the spatial distribution of photoelectric and
Compton properties is severely complicated by the fact that the data are
several orders of magnitude more sensitive to Compton scatter coefficients than
to photoelectric absorption, so small errors in Compton estimates can create
large artifacts in the photoelectric estimate. To address these issues, we
propose a model-based iterative approach which uses patch-based regularization
terms to stabilize inversion of photoelectric coefficients, and solve the
resulting problem through the use of computationally attractive Alternating
Direction Method of Multipliers (ADMM) solution techniques. Using simulations
and experimental data acquired on a commercial scanner, we demonstrate that the
proposed processing can lead to more stable material property estimates which
should aid materials characterization in future dual- and multi-energy CT
systems.
|
1403.6330 | Problem Complexity in Parallel Problem Solving | cs.SI physics.soc-ph | Recent works examine the relationship between the communication structure and
the performance of a group in a problem solving task. Some conclude that
inefficient communication networks with long paths outperform efficient
networks in the long run. Others find no influence of the network topology on
group performance. We contribute to this discussion by examining the role of
problem complexity. In particular, we study whether and how the complexity of
the problem at hand moderates the influence of the communication network on
group performance. Results obtained from multi-agent modelling suggest that
problem complexity indeed has an influence. We observe an influence of the
network only for problems of moderate difficulty. For easier or harder
problems, the influence of network topology becomes weaker or irrelevant, which
offers a possible explanation for inconsistencies in the literature.
|
1403.6348 | Updating Formulas and Algorithms for Computing Entropy and Gini Index
from Time-Changing Data Streams | cs.AI cs.LG | Despite growing interest in data stream mining, the most successful
incremental learners, such as VFDT, still use periodic recomputation to update
attribute information gains and Gini indices. This note provides simple
incremental formulas and algorithms for computing entropy and Gini index from
time-changing data streams.
|
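The incremental idea behind such formulas can be sketched with running sums: keep N, S = Σ nᵢ log₂ nᵢ, and Q = Σ nᵢ², so that entropy = log₂ N − S/N and Gini = 1 − Q/N² are maintained in O(1) per arriving label. This is a generic derivation of the updates, not necessarily the paper's exact formulas.

```python
import math
from collections import defaultdict

class StreamingImpurity:
    """Maintain entropy and Gini index of a label stream in O(1) per update.

    Sketch of the incremental idea: track N, S = sum n_i*log2(n_i),
    Q = sum n_i^2, and derive entropy = log2(N) - S/N, gini = 1 - Q/N^2.
    """
    def __init__(self):
        self.counts = defaultdict(int)
        self.n = 0      # total labels seen
        self.s = 0.0    # running sum of n_i * log2(n_i)
        self.q = 0      # running sum of n_i^2

    def add(self, label):
        c = self.counts[label]
        # remove this class's old contribution, add the new one
        if c > 0:
            self.s -= c * math.log2(c)
            self.q -= c * c
        c += 1
        self.counts[label] = c
        self.s += c * math.log2(c)
        self.q += c * c
        self.n += 1

    def entropy(self):
        return math.log2(self.n) - self.s / self.n if self.n else 0.0

    def gini(self):
        return 1.0 - self.q / (self.n * self.n) if self.n else 0.0
```

Feeding the stream `a, a, b, b, b, c` gives the same values as a batch computation over the final counts, but without rescanning the data.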
1403.6351 | Submodularity of Energy Related Controllability Metrics | math.OC cs.SY | The quantification of controllability and observability has recently received
new interest in the context of large, complex networks of dynamical systems. A
fundamental but computationally difficult problem is the placement or selection
of actuators and sensors that optimize real-valued controllability and
observability metrics of the network. We show that several classes of energy
related metrics associated with the controllability Gramian in linear dynamical
systems have a strong structural property, called submodularity. This property
allows for an approximation guarantee by using a simple greedy heuristic for
their maximization. The results are illustrated for randomly generated systems
and for placement of power electronic actuators in a model of the European
power grid.
|
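As a toy illustration of the greedy heuristic referred to above, the sketch below selects actuator columns for a small discrete-time system so as to maximize the log-determinant of a regularized finite-horizon controllability Gramian (one energy-related metric of the submodular kind). The system matrix, the candidate columns, and the horizon are hypothetical.

```python
import math

def mat_vec(A, v):
    """2x2 matrix-vector product."""
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

def gramian(A, cols, horizon=20):
    """Finite-horizon Gramian W = sum_{b in cols} sum_{k<horizon} (A^k b)(A^k b)^T."""
    W = [[0.0, 0.0], [0.0, 0.0]]
    for b in cols:
        v = b[:]
        for _ in range(horizon):
            W[0][0] += v[0] * v[0]; W[0][1] += v[0] * v[1]
            W[1][0] += v[1] * v[0]; W[1][1] += v[1] * v[1]
            v = mat_vec(A, v)
    return W

def logdet(W, eps=1e-6):
    """log det of W + eps*I (regularized so empty/rank-deficient sets are scored)."""
    a, b, c, d = W[0][0] + eps, W[0][1], W[1][0], W[1][1] + eps
    return math.log(a * d - b * c)

def greedy_select(A, candidates, k):
    """Greedily add the actuator column with the largest marginal gain."""
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(candidates)) if i not in chosen),
                   key=lambda i: logdet(gramian(A, [candidates[j] for j in chosen + [i]])))
        chosen.append(best)
    return chosen

A = [[0.9, 0.1], [0.0, 0.8]]                      # stable toy dynamics (assumed)
candidates = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # hypothetical actuator columns
sel = greedy_select(A, candidates, 2)
```

For submodular monotone metrics, this simple loop carries the classic (1 − 1/e) approximation guarantee mentioned in the abstract.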
1403.6358 | Immunophenotypes of Acute Myeloid Leukemia From Flow Cytometry Data
Using Templates | q-bio.QM cs.CE | Motivation: We investigate whether a template-based classification pipeline
could be used to identify immunophenotypes in (and thereby classify) a
heterogeneous disease with many subtypes. The disease we consider here is Acute
Myeloid Leukemia, which is heterogeneous at the morphologic, cytogenetic and
molecular levels, with several known subtypes. The prognosis and treatment for
AML depend on the subtype.
Results: We apply flowMatch, an algorithmic pipeline for flow cytometry data
created in earlier work, to compute templates succinctly summarizing classes of
AML and healthy samples. We develop a scoring function that accounts for
features of the AML data such as heterogeneity to identify immunophenotypes
corresponding to various AML subtypes, including APL. All of the AML samples in
the test set are classified correctly with high confidence.
Availability: flowMatch is available at
www.bioconductor.org/packages/devel/bioc/html/flowMatch.html; programs specific
to immunophenotyping AML are at www.cs.purdue.edu/homes/aazad/software.html.
|
1403.6361 | Weak locking capacity of quantum channels can be much larger than
private capacity | quant-ph cs.IT math.IT | We show that it is possible for the so-called weak locking capacity of a
quantum channel [Guha et al., PRX 4:011016, 2014] to be much larger than its
private capacity. Both reflect different ways of capturing the notion of
reliable communication via a quantum system while leaking almost no information
to an eavesdropper; the difference is that the latter imposes an intrinsically
quantum security criterion whereas the former requires only a weaker, classical
condition. The channels for which this separation is most straightforward to
establish are the complementary channels of classical-quantum (cq-)channels,
and hence a subclass of Hadamard channels. We also prove that certain symmetric
channels (related to photon number splitting) have positive weak locking
capacity in the presence of a vanishingly small pre-shared secret, whereas
their private capacity is zero.
These findings are powerful illustrations of the difference between two
apparently natural notions of privacy in quantum systems, relevant also to
quantum key distribution (QKD): the older, naive one based on accessible
information, contrasting with the new, composable one embracing the quantum
nature of the eavesdropper's information.
Assuming an additivity conjecture for constrained minimum output Renyi
entropies, the techniques of the first part demonstrate a single-letter formula
for the weak locking capacity of complements to cq-channels, coinciding with a
general upper bound of Guha et al. for these channels. Furthermore, still
assuming this additivity conjecture, this upper bound is given an operational
interpretation for general channels as the maximum weak locking capacity of the
channel activated by a suitable noiseless channel.
|
1403.6367 | A Framework for Hybrid Systems with Denial-of-Service Security Attack | cs.LO cs.CR cs.SY | Hybrid systems are integrations of discrete computation and continuous
physical evolution. The physical components of such systems introduce safety
requirements, the achievement of which asks for the correct monitoring and
control from the discrete controllers. However, due to denial-of-service
security attacks, the expected information from the controllers may not be
received, and as a consequence the physical systems may fail to behave as
expected. This paper proposes a formal framework for expressing
denial-of-service security attacks in hybrid systems. As a virtue of the
framework, a physical system is able to plan for reasonable behavior in case
the ideal control fails due to unreliable communication, in such a way that
the safety of the system under denial-of-service is still guaranteed. In the
context of the modeling language, we develop an inference system for verifying
the safety of hybrid systems without making any assumptions on how the
environments behave. Based on the inference system, we implement an
interactive theorem prover and have applied it to check an example taken from
a train control system.
|
1403.6381 | An efficiency dependency parser using hybrid approach for tamil language | cs.CL | Natural language processing is an active research area. Parsing is one of the
most crucial tools in language analysis; it aims to forecast the structural
relationships among the words in a given sentence. Many researchers have
already developed language tools, but their accuracy does not yet meet the
level of human expectation, so research continues. Machine translation is one
of the major application areas of natural language processing. When
translating from one language to another, identifying the structure of a
sentence plays a key role. This paper introduces a hybrid way to identify the
relationships among the given words in a sentence. Existing systems are
implemented using a rule-based approach, which is not suited to large amounts
of data. Machine learning approaches are suitable for handling larger amounts
of data and achieve better accuracy through learning and training. The
proposed approach takes a Tamil sentence as input and produces the dependency
relations as a tree-like structure using a hybrid approach. The proposed tool
is very helpful for researchers and acts as an add-on to improve the quality
of existing approaches.
|
1403.6382 | CNN Features off-the-shelf: an Astounding Baseline for Recognition | cs.CV | Recent results indicate that the generic descriptors extracted from
convolutional neural networks are very powerful. This paper adds to the
mounting evidence that this is indeed the case. We report on a series of
experiments conducted for different recognition tasks using the publicly
available code and model of the \overfeat network which was trained to perform
object classification on ILSVRC13. We use features extracted from the \overfeat
network as a generic image representation to tackle the diverse range of
recognition tasks of object image classification, scene recognition, fine
grained recognition, attribute detection and image retrieval applied to a
diverse set of datasets. We selected these tasks and datasets as they gradually
move further away from the original task and data the \overfeat network was
trained to solve. Astonishingly, we report consistently superior results
compared to the highly tuned state-of-the-art systems in all the visual
classification tasks on various datasets. For instance retrieval, it
consistently outperforms low-memory-footprint methods, except on the
sculptures dataset. The results are
achieved using a linear SVM classifier (or $L2$ distance in case of retrieval)
applied to a feature representation of size 4096 extracted from a layer in the
net. The representations are further modified using simple augmentation
techniques, e.g., jittering. The results strongly suggest that features obtained
from deep learning with convolutional nets should be the primary candidate in
most visual recognition tasks.
|
1403.6392 | Implementation of an Automatic Sign Language Lexical Annotation
Framework based on Propositional Dynamic Logic | cs.CL | In this paper, we present the implementation of an automatic Sign Language
(SL) sign annotation framework based on a formal logic, the Propositional
Dynamic Logic (PDL). Our system relies heavily on the use of a specific variant
of PDL, the Propositional Dynamic Logic for Sign Language (PDLSL), which lets
us describe SL signs as formulae and corpora videos as labeled transition
systems (LTSs). Here, we intend to show how a generic annotation system can be
constructed upon these underlying theoretical principles, regardless of the
tracking technologies available or the input format of corpora. With this in
mind, we generated a development framework that adapts the system to specific
use cases. Furthermore, we present some results obtained by our application
when adapted to one distinct case, 2D corpora analysis with pre-processed
tracking information. We also present some insights on how such a technology
can be used to analyze 3D real-time data, captured with a depth device.
|
1403.6397 | Evaluating topic coherence measures | cs.LG cs.CL cs.IR | Topic models extract representative word sets - called topics - from word
counts in documents without requiring any semantic annotations. Topics are not
guaranteed to be well interpretable; therefore, coherence measures have been
proposed to distinguish between good and bad topics. Studies of topic coherence
so far are limited to measures that score pairs of individual words. For the
first time, we include coherence measures from scientific philosophy that score
pairs of more complex word subsets and apply them to topic scoring.
|
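A minimal example of the pair-based coherence scores the abstract contrasts with: average pointwise mutual information (PMI) over a topic's top-word pairs, estimated from document co-occurrence. The toy corpus and the smoothing constant are illustrative assumptions.

```python
import math
from itertools import combinations

def pmi_coherence(topic_words, documents, eps=1e-12):
    """Average pairwise PMI of a topic's top words from document co-occurrence
    (one common family of pair-based coherence measures)."""
    docs = [set(d) for d in documents]
    n = len(docs)
    def p(*words):
        # fraction of documents containing all the given words
        return sum(all(w in d for w in words) for d in docs) / n
    scores = [math.log((p(w1, w2) + eps) / ((p(w1) + eps) * (p(w2) + eps)))
              for w1, w2 in combinations(topic_words, 2)]
    return sum(scores) / len(scores)

# toy corpus: two thematic clusters of documents
docs = [["cat", "dog", "pet"], ["cat", "dog"], ["stock", "market"],
        ["stock", "market", "trade"], ["cat", "pet"]]
coherent = pmi_coherence(["cat", "dog", "pet"], docs)      # words co-occur
incoherent = pmi_coherence(["cat", "market", "trade"], docs)  # words never co-occur
```

A well-formed topic scores high because its words appear in the same documents; a mixed topic is penalized by near-zero joint probabilities.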
1403.6426 | High Performance Solutions for Big-data GWAS | q-bio.GN cs.CE cs.MS | In order to associate complex traits with genetic polymorphisms, genome-wide
association studies process huge datasets involving tens of thousands of
individuals genotyped for millions of polymorphisms. When handling these
datasets, which exceed the main memory of contemporary computers, one faces two
distinct challenges: 1) Millions of polymorphisms and thousands of phenotypes
come at the cost of hundreds of gigabytes of data, which can only be kept in
secondary storage; 2) the relatedness of the test population is represented by
a relationship matrix, which, for large populations, can only fit in the
combined main memory of a distributed architecture. In this paper, by using
distributed resources such as Cloud or clusters, we address both challenges:
The genotype and phenotype data are streamed from secondary storage using a
double-buffering technique, while the relationship matrix is kept across the
main memory of a distributed-memory system. With the help of these solutions,
we develop separate algorithms for studies involving either a single trait or
many traits. We show that these algorithms sustain high performance and allow the
analysis of enormous datasets.
|
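The double-buffering idea can be sketched generically: a reader thread prefetches chunk k+1 from secondary storage while the compute thread processes chunk k, with a bounded queue playing the role of the two buffers. The chunk reader and the per-chunk computation below are hypothetical stand-ins for the genotype-block I/O and the per-block GWAS computation.

```python
import threading
import queue

def stream_chunks(read_chunk, process_chunk, n_chunks, depth=2):
    """Double buffering: overlap I/O for chunk k+1 with computation on chunk k."""
    buf = queue.Queue(maxsize=depth)   # bounded queue = the two buffers

    def reader():
        for k in range(n_chunks):
            buf.put(read_chunk(k))     # blocks when both buffers are full
        buf.put(None)                  # sentinel: no more data

    threading.Thread(target=reader, daemon=True).start()
    results = []
    while (chunk := buf.get()) is not None:
        results.append(process_chunk(chunk))
    return results

# toy stand-ins for reading a genotype block and the per-block computation
data = [list(range(i, i + 4)) for i in range(0, 12, 4)]
out = stream_chunks(lambda k: data[k], lambda c: sum(c), len(data))
```

Because the queue is bounded, memory use stays at `depth` chunks no matter how large the dataset on disk is.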
1403.6508 | Multi-agent Inverse Reinforcement Learning for Two-person Zero-sum Games | cs.GT cs.AI cs.LG | The focus of this paper is a Bayesian framework for solving a class of
problems termed multi-agent inverse reinforcement learning (MIRL). Compared to
the well-known inverse reinforcement learning (IRL) problem, MIRL is formalized
in the context of stochastic games, which generalize Markov decision processes
to game theoretic scenarios. We establish a theoretical foundation for
competitive two-agent zero-sum MIRL problems and propose a Bayesian solution
approach in which the generative model is based on an assumption that the two
agents follow a minimax bi-policy. Numerical results are presented comparing
the Bayesian MIRL method with two existing methods in the context of an
abstract soccer game. Investigation centers on relationships between the extent
of prior information and the quality of learned rewards. Results suggest that
covariance structure is more important than mean value in reward priors.
|
1403.6512 | Non-characterizability of belief revision: an application of finite
model theory | math.LO cs.AI cs.LO | A formal framework is given for the characterizability of a class of belief
revision operators, defined using minimization over a class of partial
preorders, by postulates. It is shown that for partial orders
characterizability implies a definability property of the class of partial
orders in monadic second-order logic. Based on a non-definability result for a
class of partial orders, an example is given of a non-characterizable class of
revision operators. This appears to be the first non-characterizability result
in belief revision.
|
1403.6530 | Variance-Constrained Actor-Critic Algorithms for Discounted and Average
Reward MDPs | cs.LG math.OC stat.ML | In many sequential decision-making problems we may want to manage risk by
minimizing some measure of variability in rewards in addition to maximizing a
standard criterion. Variance related risk measures are among the most common
risk-sensitive criteria in finance and operations research. However, optimizing
many such criteria is known to be a hard problem. In this paper, we consider
both discounted and average reward Markov decision processes. For each
formulation, we first define a measure of variability for a policy, which in
turn gives us a set of risk-sensitive criteria to optimize. For each of these
criteria, we derive a formula for computing its gradient. We then devise
actor-critic algorithms that operate on three timescales - a TD critic on the
fastest timescale, a policy gradient (actor) on the intermediate timescale, and
a dual ascent for Lagrange multipliers on the slowest timescale. In the
discounted setting, we point out the difficulty in estimating the gradient of
the variance of the return and incorporate simultaneous perturbation approaches
to alleviate this. The average setting, on the other hand, allows for an actor
update using compatible features to estimate the gradient of the variance. We
establish the convergence of our algorithms to locally risk-sensitive optimal
policies. Finally, we demonstrate the usefulness of our algorithms in a traffic
signal control application.
|
1403.6540 | The quest for optimal sampling: Computationally efficient,
structure-exploiting measurements for compressed sensing | math.FA cs.IT math.IT | An intriguing phenomenon in many instances of compressed sensing is that the
reconstruction quality is governed not just by the overall sparsity of the
signal, but also on its structure. This paper is about understanding this
phenomenon, and demonstrating how it can be fruitfully exploited by the design
of suitable sampling strategies in order to outperform more standard compressed
sensing techniques based on random matrices.
|
1403.6555 | Modify-and-Forward for Securing Cooperative Relay Communications | cs.IT math.IT | We propose a new physical-layer technique that can enhance the security of
cooperative relay communications. The proposed approach modifies the decoded
message at the relay according to the unique channel state between the relay
and the destination such that the destination can utilize the modified message
to its advantage while the eavesdropper cannot. We present a practical method
for securely sharing the modification rule between the legitimate partners and
present the secrecy outage probability in a quasi-static fading channel. It is
demonstrated that the proposed scheme can provide a significant improvement
over other schemes when the relay can successfully decode the source message.
|
1403.6561 | Transmit Power Minimization for MIMO Systems of Exponential Average BER
with Fixed Outage Probability | cs.IT math.IT | This paper is concerned with a wireless system operating in MIMO fading
channels with channel state information being known at both transmitter and
receiver. By spatiotemporal subchannel selection and power control, it aims to
minimize the average transmit power (ATP) of the MIMO system while achieving an
exponential type of average bit error rate (BER) for each data stream. Under
the constraints of a given fixed individual outage probability (OP) and average
BER for each subchannel, based on a traditional upper bound and a dynamic upper
bound of Q function, two closed-form ATP expressions are derived, respectively,
and they correspond to two different power allocation schemes. Numerical
results are provided to validate the theoretical analysis, and show that the
power allocation scheme with the dynamic upper bound can achieve more power
savings than the one with the traditional upper bound.
|
1403.6566 | Image Retargeting by Content-Aware Synthesis | cs.GR cs.CV | Real-world images usually contain vivid content and rich textural details,
which complicate their manipulation. In this paper, we design a new
framework based on content-aware synthesis to enhance content-aware image
retargeting. By detecting the textural regions in an image, the textural image
content can be synthesized rather than simply distorted or cropped. This method
enables the manipulation of textural and non-textural regions with different
strategies, since they have different natures. We propose to retarget the textural
regions by content-aware synthesis and non-textural regions by fast
multi-operators. To achieve practical retargeting applications for general
images, we develop an automatic and fast texture detection method that can
detect multiple disjoint textural regions. We adjust the saliency of the image
according to the features of the textural regions. To validate the proposed
method, comparisons with state-of-the-art image retargeting techniques and a user
study were conducted. Convincing visual results are shown to demonstrate the
effectiveness of the proposed method.
|
1403.6600 | How Crossover Speeds Up Building-Block Assembly in Genetic Algorithms | cs.NE cs.DS | We re-investigate a fundamental question: how effective is crossover in
Genetic Algorithms in combining building blocks of good solutions? Although
this has been discussed controversially for decades, we are still lacking a
rigorous and intuitive answer. We provide such answers for royal road functions
and OneMax, where every bit is a building block. For the latter we show that
using crossover makes every ($\mu$+$\lambda$) Genetic Algorithm at least twice
as fast as the fastest evolutionary algorithm using only standard bit mutation,
up to small-order terms and for moderate $\mu$ and $\lambda$. Crossover is
beneficial because it effectively turns fitness-neutral mutations into
improvements by combining the right building blocks at a later stage. Compared
to mutation-based evolutionary algorithms, this makes multi-bit mutations more
useful. Introducing crossover changes the optimal mutation rate on OneMax from
$1/n$ to $(1+\sqrt{5})/2 \cdot 1/n \approx 1.618/n$. This holds both for
uniform crossover and $k$-point crossover. Experiments and statistical tests
confirm that our findings apply to a broad class of building-block functions.
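The headline result, a ($\mu$+$\lambda$) GA with crossover solving OneMax faster than any mutation-only EA when the mutation rate is raised to about $1.618/n$, can be illustrated with a toy experiment. The sketch below is a minimal (2+1) GA with uniform crossover, not the paper's exact algorithm; all function names and the population size are our own choices.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits."""
    return sum(x)

def mutate(x, rate):
    """Standard bit mutation: flip each bit independently with prob. `rate`."""
    return [b ^ (random.random() < rate) for b in x]

def uniform_crossover(x, y):
    """Take each bit from either parent with probability 1/2."""
    return [a if random.random() < 0.5 else b for a, b in zip(x, y)]

def two_plus_one_ga(n, max_gens=100000, seed=0):
    """(2+1) GA on OneMax using the mutation rate (1+sqrt(5))/2 * 1/n that
    the paper identifies as optimal once crossover is introduced.
    Returns the number of generations until the optimum is found."""
    random.seed(seed)
    rate = (1 + 5 ** 0.5) / 2 / n
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(2)]
    for gen in range(max_gens):
        if any(onemax(x) == n for x in pop):
            return gen
        child = mutate(uniform_crossover(*pop), rate)
        # Plus-selection: keep the two fittest among parents and offspring.
        pop = sorted(pop + [child], key=onemax, reverse=True)[:2]
    return max_gens
```

Comparing the runtime of this loop against the same loop with crossover disabled (child mutated from a single parent, rate $1/n$) reproduces the qualitative speed-up in simulation.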
|
1403.6614 | QCMC: Quasi-conformal Parameterizations for Multiply-connected domains | cs.CG cs.CV math.DG | This paper presents a method to compute the {\it quasi-conformal
parameterization} (QCMC) for a multiply-connected 2D domain or surface. QCMC
computes a quasi-conformal map from a multiply-connected domain $S$ onto a
punctured disk $D_S$ associated with a given Beltrami differential. The
Beltrami differential, which measures the conformality distortion, is a
complex-valued function $\mu:S\to\mathbb{C}$ with supremum norm strictly less
than 1. Every Beltrami differential gives a conformal structure of $S$. Hence,
the conformal module of $D_S$, which consists of the radii and centers of the inner
circles, can be fully determined by $\mu$, up to a M\"obius transformation. In
this paper, we propose an iterative algorithm to simultaneously search for the
conformal module and the optimal quasi-conformal parameterization. The key idea
is to minimize the Beltrami energy subject to the boundary constraints. The
optimal solution is our desired quasi-conformal parameterization onto a
punctured disk. The parameterization of the multiply-connected domain
simplifies numerical computations and has important applications in various
fields, such as in computer graphics and vision. Experiments have been carried
out on synthetic data together with real multiply-connected Riemann surfaces.
Results show that our proposed method can efficiently compute quasi-conformal
parameterizations of multiply-connected domains and outperforms other
state-of-the-art algorithms. Applications of the proposed parameterization
technique have also been explored.
|
1403.6636 | Sign Language Lexical Recognition With Propositional Dynamic Logic | cs.CL | This paper explores the use of Propositional Dynamic Logic (PDL) as a
suitable formal framework for describing Sign Language (SL), the language of
deaf people, in the context of natural language processing. SLs are visual,
complete, standalone languages which are just as expressive as oral languages.
Signs in SL usually correspond to sequences of highly specific body postures
interleaved with movements, which make reference to real world objects,
characters or situations. Here we propose a formal representation of SL signs
that will help us with the analysis of automatically-collected hand tracking
data from French Sign Language (FSL) video corpora. We further show how such a
representation could help us with the design of computer aided SL verification
tools, which in turn would bring us closer to the development of an automatic
recognition system for these languages.
|
1403.6652 | DeepWalk: Online Learning of Social Representations | cs.SI cs.LG | We present DeepWalk, a novel approach for learning latent representations of
vertices in a network. These latent representations encode social relations in
a continuous vector space, which is easily exploited by statistical models.
DeepWalk generalizes recent advancements in language modeling and unsupervised
feature learning (or deep learning) from sequences of words to graphs. DeepWalk
uses local information obtained from truncated random walks to learn latent
representations by treating walks as the equivalent of sentences. We
demonstrate DeepWalk's latent representations on several multi-label network
classification tasks for social networks such as BlogCatalog, Flickr, and
YouTube. Our results show that DeepWalk outperforms challenging baselines which
are allowed a global view of the network, especially in the presence of missing
information. DeepWalk's representations can provide $F_1$ scores up to 10%
higher than competing methods when labeled data is sparse. In some experiments,
DeepWalk's representations are able to outperform all baseline methods while
using 60% less training data. DeepWalk is also scalable. It is an online
learning algorithm which builds useful incremental results, and is trivially
parallelizable. These qualities make it suitable for a broad class of real
world applications such as network classification, and anomaly detection.
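The first stage of DeepWalk, generating truncated random walks and treating each walk as a "sentence", is easy to sketch in pure Python. The second stage, feeding the walks to a skip-gram (word2vec-style) model, is omitted here; the adjacency-dict representation and names are our own, not the authors' code.

```python
import random

def random_walks(graph, num_walks, walk_length, seed=0):
    """Generate truncated random walks (DeepWalk's 'sentences').

    `graph` is an adjacency dict: node -> list of neighbours.
    Each node starts `num_walks` walks of length `walk_length`."""
    random.seed(seed)
    walks = []
    for _ in range(num_walks):
        nodes = list(graph)
        random.shuffle(nodes)  # DeepWalk shuffles the start nodes each pass
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = graph[walk[-1]]
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append(walk)
    return walks

# Toy graph: a triangle plus a pendant vertex.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(g, num_walks=2, walk_length=5)
# Each walk is a node sequence that would then be fed, as a sentence,
# to a skip-gram model to learn the latent vertex representations.
```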
|
1403.6661 | One-sided asymptotically mean stationary channels | cs.IT math.IT | This paper proposes an analysis of asymptotically mean stationary (AMS)
communication channels. A hierarchy based on stability properties
(stationarity, quasi-stationarity, recurrence and asymptotically mean
stationarity) of channels is identified. Stationary channels are a subclass of
quasi-stationary channels which are a subclass of recurrent AMS channels which
are a subclass of AMS channels. These classes are proved to be stable under
Markovian composition of channels (e.g., the cascade of AMS channels is an AMS
channel). Characterizations of channels of each class are given. Some
properties of the quasi-stationary mean of a channel are established. Finally,
ergodicity conditions of AMS channels are gathered.
|
1403.6676 | Bitcoin Transaction Malleability and MtGox | cs.CR cs.CE | In Bitcoin, transaction malleability describes the fact that the signatures
that prove the ownership of bitcoins being transferred in a transaction do not
provide any integrity guarantee for the signatures themselves. This allows an
attacker to mount a malleability attack in which it intercepts, modifies, and
rebroadcasts a transaction, causing the transaction issuer to believe that the
original transaction was not confirmed. In February 2014 MtGox, once the
largest Bitcoin exchange, closed and filed for bankruptcy claiming that
attackers used malleability attacks to drain its accounts. In this work we use
traces of the Bitcoin network for over a year preceding the filing to show
that, while the problem is real, there was no widespread use of malleability
attacks before the closure of MtGox.
|
1403.6703 | Towards the Asymptotic Sum Capacity of the MIMO Cellular Two-Way Relay
Channel | cs.IT math.IT | In this paper, we consider the transceiver and relay design for
multiple-input multiple-output (MIMO) cellular two-way relay channel (cTWRC),
where a multi-antenna base station (BS) exchanges information with multiple
multi-antenna mobile stations via a multi-antenna relay station (RS). We
propose a novel two-way relaying scheme to approach the sum capacity of the
MIMO cTWRC.
|
1403.6706 | Beyond L2-Loss Functions for Learning Sparse Models | stat.ML cs.CV cs.LG math.OC | Incorporating sparsity priors in learning tasks can give rise to simple, and
interpretable models for complex high dimensional data. Sparse models have
found widespread use in structure discovery, recovering data from corruptions,
and a variety of large scale unsupervised and supervised learning problems.
Assuming the availability of sufficient data, these methods infer dictionaries
for sparse representations by optimizing for high-fidelity reconstruction. In
most scenarios, the reconstruction quality is measured using the squared
Euclidean distance, and efficient algorithms have been developed for both batch
and online learning cases. However, new application domains motivate looking
beyond conventional loss functions. For example, robust loss functions such as
$\ell_1$ and Huber are useful in learning outlier-resilient models, and the
quantile loss is beneficial in discovering structures that are the
representative of a particular quantile. These new applications motivate our
work in generalizing sparse learning to a broad class of convex loss functions.
In particular, we consider the class of piecewise linear quadratic (PLQ) cost
functions that includes Huber, as well as $\ell_1$, quantile, Vapnik, hinge
loss, and smoothed variants of these penalties. We propose an algorithm to
learn dictionaries and obtain sparse codes when the data reconstruction
fidelity is measured using any smooth PLQ cost function. We provide convergence
guarantees for the proposed algorithm, and demonstrate the convergence behavior
using empirical experiments. Furthermore, we present three case studies that
require the use of PLQ cost functions: (i) robust image modeling, (ii) tag
refinement for image annotation and retrieval and (iii) computing empirical
confidence limits for subspace clustering.
|
1403.6717 | Smooth Entropy Transfer of Quantum Gravity Information Processing | quant-ph cs.IT gr-qc hep-th math.IT | We introduce the term smooth entanglement entropy transfer, a phenomenon that
is a consequence of the causality-cancellation property of the quantum gravity
environment. The causality-cancellation of the quantum gravity space removes
the causal dependencies of the local systems. We study the physical effects of
the causality-cancellation and show that it stimulates entropy transfer between
the quantum gravity environment and the independent local systems of the
quantum gravity space. The entropy transfer reduces the entropies of the
contributing local systems and increases the entropy of the quantum gravity
environment. We discuss the space-time geometry structure of the quantum
gravity environment and the local quantum systems. We propose the space-time
geometry model of the smooth entropy transfer. We reveal on a smooth Cauchy
slice that the space-time geometry of the quantum gravity environment
dynamically adapts to the vanishing causality. We define the corresponding
Hamiltonians and the causal development of the quantum gravity environment in a
non-fixed causality structure. We prove that the Cauchy area expansion, along
with the dilation of the Rindler horizon area of the quantum gravity
environment, is a strict corollary of the causality-cancellation of the quantum
gravity environment.
|
1403.6741 | Network coding for multicasting over Rayleigh fading multi access
channels | cs.NI cs.IT math.IT | This paper examines the problem of rate allocation for multicasting over slow
Rayleigh fading channels using network coding. In the proposed model, the
network is treated as a collection of Rayleigh fading multiple access channels.
In this model, a rate allocation scheme based solely on the statistics of the
channels is presented. The scheme is aimed at minimizing
the outage probability. An upper bound is presented for the probability of
outage in the fading multiple access channel. A suboptimal solution based on
this bound is given. A distributed primal-dual gradient algorithm is derived to
solve the rate allocation problem.
|
1403.6758 | Facility Location in Evolving Metrics | cs.SI cs.DS | Understanding the dynamics of evolving social or infrastructure networks is a
challenge in applied areas such as epidemiology, viral marketing, or urban
planning. During the past decade, data has been collected on such networks but
has yet to be fully analyzed. We propose to use information on the dynamics of
the data to find stable partitions of the network into groups. For that
purpose, we introduce a time-dependent, dynamic version of the facility
location problem, that includes a switching cost when a client's assignment
changes from one facility to another. This might provide a better
representation of an evolving network, emphasizing the abrupt change of
relationships between subjects rather than the continuous evolution of the
underlying network. We show that in realistic examples this model indeed yields
better-fitting solutions than optimizing every snapshot independently. We
present an $O(\log nT)$-approximation algorithm and a matching hardness result,
where $n$ is the number of clients and $T$ the number of time steps. We also
give another algorithm with approximation ratio $O(\log nT)$ for the variant
where one pays at each time step (leasing) for each open facility.
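As a toy illustration of the model (not the paper's approximation algorithm), the objective can be evaluated directly: per-step facility costs plus connection distances, plus a switching cost whenever a client changes facility between consecutive steps. The per-time-step facility cost is a simplifying assumption of this sketch, and tiny instances can be brute-forced.

```python
from itertools import product

def dynamic_fl_cost(dist, open_cost, switch_cost, assignment):
    """Total cost of a time-indexed assignment.

    dist[t][c][f]: distance of client c to facility f at time t;
    assignment[t][c]: facility serving client c at time t."""
    cost = 0.0
    for t in range(len(assignment)):
        cost += sum(open_cost[f] for f in set(assignment[t]))   # facilities
        cost += sum(dist[t][c][assignment[t][c]]                # connections
                    for c in range(len(assignment[t])))
        if t > 0:                                               # switches
            cost += switch_cost * sum(a != b for a, b in
                                      zip(assignment[t - 1], assignment[t]))
    return cost

def brute_force(dist, open_cost, switch_cost):
    """Enumerate every time-indexed assignment on a tiny instance."""
    T, C, F = len(dist), len(dist[0]), len(open_cost)
    best = None
    for plan in product(product(range(F), repeat=C), repeat=T):
        c = dynamic_fl_cost(dist, open_cost, switch_cost, plan)
        if best is None or c < best[0]:
            best = (c, plan)
    return best
```

On a one-client, two-facility, two-step instance where the cheap facility flips between steps, a small switching cost still favours following the flip, while a large one makes the client stay put, which is exactly the stability trade-off the abstract describes.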
|
1403.6774 | Optimized imaging using non-rigid registration | cs.CV | The extraordinary improvements of modern imaging devices offer access to data
with unprecedented information content. However, widely used image processing
methodologies fall far short of exploiting the full breadth of information
offered by numerous types of scanning probe, optical, and electron
microscopies. In many applications, it is necessary to keep measurement
intensities below a desired threshold. We propose a methodology for extracting
an increased level of information by processing a series of data sets
suffering, in particular, from a high degree of spatial uncertainty caused by
complex multiscale motion during the acquisition process. An important role is
played by a nonrigid pixel-wise registration method that can cope with low
signal-to-noise ratios. This is accompanied by formulating objective quality
measures which replace human intervention and visual inspection in the
processing chain. Scanning transmission electron microscopy of siliceous
zeolite material exhibits the above-mentioned obstructions and therefore serves
as orientation and a test of our procedures.
|
1403.6794 | KPCA Spatio-temporal trajectory point cloud classifier for recognizing
human actions in a CBVR system | cs.IR cs.CV | We describe a content based video retrieval (CBVR) software system for
identifying specific locations of a human action within a full length film, and
retrieving similar video shots from a query. For this, we introduce the concept
of a trajectory point cloud for classifying unique actions, encoded in a
spatio-temporal covariant eigenspace, where each point is characterized by its
spatial location, local Frenet-Serret vector basis, time averaged curvature and
torsion and the mean osculating hyperplane. Since each action can be
distinguished by its unique trajectory within this space, the trajectory
point cloud is used to define an adaptive distance metric for classifying
queries against stored actions. Depending upon the distance to other
trajectories, the distance metric uses either large-scale structure of the
trajectory point cloud, such as the mean distance between cloud centroids or
the difference in hyperplane orientation, or small-scale structure such as the
time-averaged curvature and torsion, to classify individual points in a fuzzy-KNN.
Our system can function in real-time and has an accuracy greater than 93% for
multiple action recognition within video repositories. We demonstrate the use
of our CBVR system in two situations: by locating specific frame positions of
trained actions in two full featured films, and video shot retrieval from a
database with a web search application.
|
1403.6807 | Optimal Spectrum Auction Design with Two-Dimensional Truthful
Revelations under Uncertain Spectrum Availability | cs.NI cs.GT cs.IT cs.SY math.IT | In this paper, we propose a novel sealed-bid auction framework to address the
problem of dynamic spectrum allocation in cognitive radio (CR) networks. We
design an optimal auction mechanism that maximizes the moderator's expected
utility, when the spectrum is not available with certainty. We assume that the
moderator employs collaborative spectrum sensing in order to make a reliable
inference about spectrum availability. Due to the presence of a collision cost
whenever the moderator makes an erroneous inference, and a sensing cost at each
CR, we investigate feasibility conditions that guarantee a non-negative utility
at the moderator. We present tight theoretical bounds on instantaneous network
throughput and also show that our algorithm provides maximum throughput if the
CRs have i.i.d. valuations. Since the moderator fuses CRs' sensing decisions to
obtain a global inference regarding spectrum availability, we propose a novel
strategy-proof fusion rule that encourages the CRs to simultaneously reveal
truthful sensing decisions, along with truthful valuations to the moderator.
Numerical examples are also presented to provide insights into the performance
of the proposed auction under different scenarios.
|
1403.6822 | Comparison of Multi-agent and Single-agent Inverse Learning on a
Simulated Soccer Example | cs.LG cs.GT | We compare the performance of Inverse Reinforcement Learning (IRL) with the
relatively new model of Multi-agent Inverse Reinforcement Learning (MIRL). Before
comparing the methods, we extend a published Bayesian IRL approach, applicable
only to rewards that depend on the state alone, to a general one that handles
rewards depending on both state and action. The comparison between IRL and MIRL
is made in the context of an abstract
soccer game, using both a game model in which the reward depends only on state
and one in which it depends on both state and action. Results suggest that the
IRL approach performs much worse than the MIRL approach. We speculate that the
underperformance of IRL is because it fails to capture equilibrium information
in the manner possible in MIRL.
|
1403.6838 | Quantifying Information Overload in Social Media and its Impact on
Social Contagions | cs.SI physics.soc-ph | Information overload has become a ubiquitous problem in modern society.
Social media users and microbloggers receive an endless flow of information,
often at a rate far higher than their cognitive abilities to process the
information. In this paper, we conduct a large scale quantitative study of
information overload and evaluate its impact on information dissemination in
the Twitter social media site. We model social media users as information
processing systems that queue incoming information according to some policies,
process information from the queue at some unknown rates and decide to forward
some of the incoming information to other users. We show how timestamped data
about tweets received and forwarded by users can be used to uncover key
properties of their queueing policies and estimate their information processing
rates and limits. Such an understanding of users' information processing
behaviors allows us to infer whether and to what extent users suffer from
information overload.
Our analysis provides empirical evidence of information processing limits for
social media users and the prevalence of information overload. The most
active and popular social media users are often the ones that are overloaded.
Moreover, we find that the rate at which users receive information impacts
their processing behavior, including how they prioritize information from
different sources, how much information they process, and how quickly they
process information. Finally, the susceptibility of a social media user to
social contagions depends crucially on the rate at which she receives
information. An exposure to a piece of information, be it an idea, a convention
or a product, is much less effective for users that receive information at
higher rates, meaning they need more exposures to adopt a particular contagion.
|
1403.6863 | Online Learning of k-CNF Boolean Functions | cs.LG | This paper revisits the problem of learning a k-CNF Boolean function from
examples in the context of online learning under the logarithmic loss. In doing
so, we give a Bayesian interpretation to one of Valiant's celebrated PAC
learning algorithms, which we then build upon to derive two efficient, online,
probabilistic, supervised learning algorithms for predicting the output of an
unknown k-CNF Boolean function. We analyze the loss of our methods, and show
that the cumulative log-loss can be upper bounded, ignoring logarithmic
factors, by a polynomial function of the size of each example.
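The Valiant PAC algorithm that the paper builds on is the classical elimination procedure: start from every clause of at most k literals and delete each clause falsified by a positive example; the conjunction of the survivors is the hypothesis. A minimal sketch (representation and naming conventions are our own):

```python
from itertools import combinations

def all_clauses(n, k):
    """All disjunctions of at most k literals over n Boolean variables.
    A literal is (index, polarity); a clause is a frozenset of literals."""
    lits = [(i, p) for i in range(n) for p in (True, False)]
    return {frozenset(c) for r in range(1, k + 1)
            for c in combinations(lits, r)
            # Skip tautologies containing both x_i and not-x_i.
            if len({i for i, _ in c}) == r}

def clause_satisfied(clause, x):
    return any(x[i] == p for i, p in clause)

def learn_kcnf(n, k, positive_examples):
    """Valiant-style elimination: keep every clause consistent with all
    positive examples."""
    hyp = all_clauses(n, k)
    for x in positive_examples:
        hyp = {c for c in hyp if clause_satisfied(c, x)}
    return hyp

def predict(hyp, x):
    return all(clause_satisfied(c, x) for c in hyp)
```

For the target $x_0 \lor x_1$ over two variables, feeding in the three satisfying assignments leaves exactly that clause, so the hypothesis rejects $(0, 0)$ and accepts the rest.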
|
1403.6888 | Fast Localization of Facial Landmark Points | cs.CV | Localization of salient facial landmark points, such as eye corners or the
tip of the nose, is still considered a challenging computer vision problem
despite recent efforts. This is especially evident in unconstrained
environments, i.e., in the presence of background clutter and large head pose
variations. Most methods that achieve state-of-the-art accuracy are slow, and,
thus, have limited applications. We describe a method that can accurately
estimate the positions of relevant facial landmarks in real-time even on
hardware with limited processing power, such as mobile devices. This is
achieved with a sequence of estimators based on ensembles of regression trees.
The trees use simple pixel intensity comparisons in their internal nodes and
this makes them able to process image regions very fast. We test the developed
system on several publicly available datasets and analyse its processing speed
on various devices. Experimental results show that our method has practical
value.
|
1403.6901 | Automatic Segmentation of Broadcast News Audio using Self Similarity
Matrix | cs.SD cs.LG cs.MM | Generally, an audio news broadcast on radio is composed of music, commercials,
news from correspondents and recorded statements in addition to the actual news
read by the newsreader. When news transcripts are available, automatic
segmentation of audio news broadcast to time align the audio with the text
transcription to build frugal speech corpora is essential. We address the
problem of identifying segmentation in the audio news broadcast corresponding
to the news read by the newsreader so that they can be mapped to the text
transcripts. The existing techniques produce sub-optimal solutions when used to
extract newsreader read segments. In this paper, we propose a new technique
which is able to identify the acoustic change points reliably using an acoustic
Self Similarity Matrix (SSM). We describe the two pass technique in detail and
verify its performance on real audio news broadcast of All India Radio for
different languages.
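While the paper's two-pass technique is specific to news broadcasts, its core ingredient, detecting acoustic change points from a self-similarity matrix, can be sketched generically with a standard checkerboard novelty score. The feature frames below are synthetic stand-ins for acoustic features; this is not the authors' implementation.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def ssm(frames):
    """Self-similarity matrix: S[i][j] = cosine similarity of frames i, j."""
    return [[cosine(u, v) for v in frames] for u in frames]

def novelty(S, w):
    """Checkerboard novelty along the diagonal: high at position i when the
    w frames before i and the w frames after i are each self-similar but
    dissimilar to one another, i.e. at an acoustic change point."""
    n = len(S)
    scores = [0.0] * n
    for i in range(w, n - w):
        past = sum(S[a][b] for a in range(i - w, i) for b in range(i - w, i))
        future = sum(S[a][b] for a in range(i, i + w) for b in range(i, i + w))
        cross = sum(S[a][b] for a in range(i - w, i) for b in range(i, i + w))
        scores[i] = (past + future) / (2 * w * w) - cross / (w * w)
    return scores

# Two acoustically distinct blocks of synthetic feature frames.
frames = [[1.0, 0.1]] * 6 + [[0.1, 1.0]] * 6
scores = novelty(ssm(frames), 3)
change = scores.index(max(scores))  # detected change point
```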
|
1403.6922 | Covering numbers of $L_p$-balls of convex sets and functions | cs.IT math.IT math.PR math.ST stat.TH | We prove bounds for the covering numbers of classes of convex functions and
convex sets in Euclidean space. Previous results require the underlying convex
functions or sets to be uniformly bounded. We relax this assumption and replace
it with weaker integral constraints. Existing results can be recovered as
special cases of our results.
|
1403.6929 | Upper Bound on Singlet Fraction of Two Qubit Mixed Entangled States | quant-ph cs.IT math.IT | We demonstrate the possibility of achieving the maximum possible singlet
fraction using an entangled mixed two-qubit state as a resource. For this, we
establish a tight upper bound on the singlet fraction and show that the maximal
singlet fraction obtained in \cite{Verstraete} does not attain this upper
bound. Interestingly, we find that the bound can in fact be achieved using
local filtering operations.
|
1403.6931 | A New Approach to User Scheduling in Massive Multi-User MIMO Broadcast
Channels | cs.IT math.IT | In this paper, a new user-scheduling-and-beamforming method is proposed for
multi-user massive multiple-input multiple-output (massive MIMO) broadcast
channels in the context of two-stage beamforming. The key ideas of the proposed
scheduling method are 1) to use a set of orthogonal reference beams and
construct a double cone around each reference beam to select `nearly-optimal'
semi-orthogonal users based only on channel quality indicator (CQI) feedback
and 2) to apply post-user-selection beam refinement with zero-forcing
beamforming (ZFBF) based on channel state information (CSI) feedback only from
the selected users. It is proved that the proposed scheduling-and-beamforming
method is asymptotically optimal as the number of users increases. Furthermore,
the proposed scheduling-and-beamforming method almost achieves the performance
of the existing semi-orthogonal user selection with ZFBF (SUS-ZFBF) that
requires full CSI feedback from all users, with significantly reduced feedback
overhead which is even less than that required by random beamforming.
|
1403.6946 | The NUbots Team Description Paper 2014 | cs.RO | The NUbots team, from The University of Newcastle, Australia, has had a
strong record of success in the RoboCup Standard Platform League since first
entering in 2002. The team has also competed within the RoboCup Humanoid
Kid-Size League since 2012. The 2014 team brings a renewed focus on software
architecture, modularity, and the ability to easily share code. This paper
summarizes the history of the NUbots team, describes the roles and research of
the team members, gives an overview of the NUbots' robots and software system,
and addresses relevant research projects within the Newcastle Robotics
Laboratory.
|
1403.6950 | Pyramidal Fisher Motion for Multiview Gait Recognition | cs.CV | The goal of this paper is to identify individuals by analyzing their gait.
Instead of using binary silhouettes as input data (as done in many previous
works) we propose and evaluate the use of motion descriptors based on densely
sampled short-term trajectories. We take advantage of state-of-the-art people
detectors to define custom spatial configurations of the descriptors around the
target person, thus obtaining a pyramidal representation of the gait motion.
The local motion features (described by the Divergence-Curl-Shear descriptor)
extracted on the different spatial areas of the person are combined into a
single high-level gait descriptor by using the Fisher Vector encoding. The
proposed approach, coined Pyramidal Fisher Motion, is experimentally validated
on the recent `AVA Multiview Gait' dataset. The results show that this new
approach achieves promising results in the problem of gait recognition.
|
1403.6952 | Optimal pricing control in distribution networks with time-varying
supply and demand | math.OC cs.SY | This paper studies the problem of optimal flow control in dynamic inventory
systems. A dynamic optimal distribution problem, including time-varying supply
and demand, capacity constraints on the transportation lines, and convex flow
cost functions of Legendre-type, is formalized and solved. The time-varying
optimal flow is characterized in terms of the time-varying dual variables of a
corresponding network optimization problem. A dynamic feedback controller is
proposed that regulates the flows asymptotically to the optimal flows and
achieves in addition a balancing of all storage levels.
|
1403.6953 | Applications Oriented Input Design in Time-Domain Through Cyclic Methods | cs.SY | In this paper we propose a method for applications oriented input design for
linear systems under time-domain constraints on the amplitude of input and
output signals. The method guarantees a desired control performance for the
estimated model in minimum time, by imposing some lower bound on the
information matrix. The problem is formulated as a time domain optimization
problem, which is non-convex. This is addressed through an alternating method,
where we separate the problem into two steps and at each step we optimize the
cost function with respect to one of two variables. We alternate between these
two steps until convergence. A time-recursive input design algorithm is
developed, which enables us to use the algorithm together with control. Therefore, a
receding horizon framework is used to solve each optimization problem. Finally,
we illustrate the method with two numerical examples, which demonstrate the
ability of the proposed approach to generate an optimal input signal.
|
1403.6958 | Compressive Pattern Matching on Multispectral Data | cs.CV | We introduce a new constrained minimization problem that performs template
and pattern detection on a multispectral image in a compressive sensing
context. We use an original minimization problem from Guo and Osher that uses
$L_1$ minimization techniques to perform template detection in a multispectral
image. We first adapt this minimization problem to work with compressive
sensing data. Then we extend it to perform pattern detection using a formal
transform called the spectralization along a pattern. That extension brings out
the problem of measurement reconstruction. We introduce shifted measurements
that allow us to reconstruct all the measurements with a small overhead and we
give an optimality constraint for simple patterns. We present numerical results
showing the performances of the original minimization problem and the
compressed ones with different measurement rates and applied on remotely sensed
data.
|
1403.6968 | LINVIEW: Incremental View Maintenance for Complex Analytical Queries | cs.DB cs.NA | Many analytics tasks and machine learning problems can be naturally expressed
by iterative linear algebra programs. In this paper, we study the incremental
view maintenance problem for such complex analytical queries. We develop a
framework, called LINVIEW, for capturing deltas of linear algebra programs and
understanding their computational cost. Linear algebra operations tend to cause
an avalanche effect where even very local changes to the input matrices spread
out and infect all of the intermediate results and the final view, causing
incremental view maintenance to lose its performance benefit over
re-evaluation. We develop techniques based on matrix factorizations to contain
such epidemics of change. As a consequence, our techniques make incremental
view maintenance of linear algebra practical and usually substantially cheaper
than re-evaluation. We show, both analytically and experimentally, the
usefulness of these techniques when applied to standard analytics tasks. Our
evaluation demonstrates the efficiency of LINVIEW in generating parallel
incremental programs that outperform re-evaluation techniques by more than an
order of magnitude.
|
1403.6974 | Design and Analysis of a Greedy Pursuit for Distributed Compressed
Sensing | cs.IT math.IT | We consider a distributed compressed sensing scenario where many sensors
measure correlated sparse signals and the sensors are connected through a
network. Correlation between sparse signals is modeled by a partial common
support-set. For such a scenario, the main objective of this paper is to
develop a greedy pursuit algorithm. We develop a distributed parallel pursuit
(DIPP) algorithm based on exchange of information about estimated support-sets
at sensors. The exchange of information helps to improve estimation of the
partial common support-set, which in turn helps to gradually improve estimation
of the support-sets in all sensors, leading to better quality reconstruction
performance. We provide restricted isometry property (RIP) based theoretical
analysis on the algorithm's convergence and reconstruction performance. Under
certain theoretical requirements on the quality of information exchange over
network and RIP parameters of sensor nodes, we show that the DIPP algorithm
converges to a performance level that depends on a scaled additive measurement
noise power (convergence in theory) where the scaling coefficient is a function
of RIP parameters and information processing quality parameters. Using
simulations, we show the practical reconstruction performance of DIPP vis-a-vis
the amount of undersampling, signal-to-measurement-noise ratios and
network-connectivity conditions.
|
1403.6977 | Utility Maximization for Uplink MU-MIMO: Combining Spectral-Energy
Efficiency and Fairness | cs.NI cs.IT math.IT | Driven by green communications, energy efficiency (EE) has become a new
important criterion for designing wireless communication systems. However, high
EE often leads to low spectral efficiency (SE), which spurs the research on
EE-SE tradeoff. In this paper, we focus on how to maximize the utility in
physical layer for an uplink multi-user multiple-input multiple-output (MU-MIMO)
system, where we not only consider the EE-SE tradeoff in a unified way, but
also ensure user fairness. We first formulate the utility maximization problem,
but it turns out to be non-convex. By exploiting the structure of this problem,
we find a convexization procedure to convert the original non-convex problem
into an equivalent convex problem, which has the same global optimum as the
original problem. Following the convexization procedure, we present a
centralized algorithm to solve the utility maximization problem, but it
requires the global information of all users. Thus we propose a primal-dual
distributed algorithm which does not need global information and just consumes
a small amount of overhead. Furthermore, we prove that the distributed
algorithm converges to the global optimum. Finally, the numerical results
show that our approach can both capture user diversity for EE-SE tradeoff and
ensure user fairness, and they also validate the effectiveness of our
primal-dual distributed algorithm.
|
1403.6982 | Parallel BCC with One Common and Two Confidential Messages and Imperfect
CSIT | cs.IT math.IT | We consider a broadcast communication system over parallel sub-channels where
the transmitter sends three messages: a common message to both users, and two
confidential messages, one for each user, each of which must be kept secret from
the other user. We assume partial channel state information at the transmitter (CSIT),
stemming from noisy channel estimation. The first contribution of this paper is
the characterization of the secrecy capacity region boundary as the solution of
weighted sum-rate problems, with suitable weights. Partial CSIT is addressed by
adding a margin to the estimated channel gains. The second contribution of this
paper is the solution of this problem in almost closed form, where only two
real parameters must be optimized, e.g., through dichotomic searches. On the
one hand, the considered problem generalizes existing literature where only two
out of the three messages are transmitted. On the other hand, the solution
also finds practical application in the resource allocation of orthogonal
frequency division multiplexing (OFDM) systems with both secrecy and fairness
constraints.
|
1403.6985 | A Fast Minimal Infrequent Itemset Mining Algorithm | cs.DB | A novel fast algorithm for finding quasi identifiers in large datasets is
presented. Performance measurements on a broad range of datasets demonstrate
substantial reductions in run-time relative to the state of the art and the
scalability of the algorithm to realistically-sized datasets up to several
million records.
|
1403.7007 | Hierarchical Coded Caching | cs.IT cs.NI math.IT | Caching of popular content during off-peak hours is a strategy to reduce
network loads during peak hours. Recent work has shown significant benefits of
designing such caching strategies not only to deliver part of the content
locally, but also to provide coded multicasting opportunities even among users
with different demands. Exploiting both of these gains was shown to be
approximately optimal for caching systems with a single layer of caches.
Motivated by practical scenarios, we consider in this work a hierarchical
content delivery network with two layers of caches. We propose a new caching
scheme that combines two basic approaches. The first approach provides coded
multicasting opportunities within each layer; the second approach provides
coded multicasting opportunities across multiple layers. By striking the right
balance between these two approaches, we show that the proposed scheme achieves
the optimal communication rates to within a constant multiplicative and
additive gap. We further show that there is no tension between the rates in
each of the two layers up to the aforementioned gap. Thus, both layers can
simultaneously operate at approximately the minimum rate.
|
1403.7012 | On the Degrees of freedom of the K-user MISO Interference Channel with
imperfect delayed CSIT | cs.IT math.IT | This work investigates the degrees of freedom (DoF) of the K-user
multiple-input single-output (MISO) interference channel (IC) with imperfect
delayed channel state information at the transmitters (dCSIT). For this
setting, new DoF inner bounds are provided, and benchmarked with
cooperation-based outer bounds. The achievability result is based on a
precoding scheme that aligns the interfering received signals through time,
exploiting the concept of Retrospective Interference Alignment (RIA). The
proposed approach outperforms all previously known schemes. Furthermore, we study
the proposed scheme under channel estimation errors (CEE) on the reported
dCSIT, and derive a closed-form expression for the achievable DoF with
imperfect dCSIT.
|
1403.7017 | Retrospective Interference Alignment for the 3-user MIMO Interference
Channel with delayed CSIT | cs.IT math.IT | The degrees of freedom (DoF) of the 3-user multiple input multiple output
interference channel (3-user MIMO IC) are investigated where there is delayed
channel state information at the transmitters (dCSIT). We generalize the ideas
of Maleki et al. about {\it Retrospective Interference Alignment (RIA)} to be
applied to the MIMO IC, where transmitters and receivers are equipped with
$(M,N)$ antennas, respectively. We propose a two-phase transmission scheme
where the number of slots per phase and number of transmitted symbols are
optimized by solving a maximization problem. Finally, we review the existing
achievable DoF results in the literature as a function of the ratio between
transmitting and receiving antennas $\rho=M/N$. The proposed scheme improves
all other strategies when $\rho \in \left(\frac{1}{2}, \frac{31}{32} \right]$.
|
1403.7019 | An internal model approach to (optimal) frequency regulation in power
grids with time-varying voltages | cs.SY math.OC | This paper studies the problem of frequency regulation in power grids under
unknown and possibly time-varying load changes, while minimizing the generation
costs. We formulate this problem as an output agreement problem for
distribution networks and address it using incremental passivity and
distributed internal-model-based controllers. Incremental passivity enables a
systematic approach to study convergence to the steady state with zero
frequency deviation and to design the controller in the presence of
time-varying voltages, whereas the internal-model principle is applied to
tackle the uncertain nature of the loads.
|
1403.7022 | Abstraction of Elementary Hybrid Systems by Variable Transformation | cs.SY | Elementary hybrid systems (EHSs) are those hybrid systems (HSs) containing
elementary functions such as exp, ln, sin, cos, etc. EHSs are very common in
practice, especially in safety-critical domains. Due to the non-polynomial
expressions which lead to undecidable arithmetic, verification of EHSs is very
hard. Existing approaches based on partition of state space or
over-approximation of reachable sets suffer from state explosion or inflation
of numerical errors. In this paper, we propose a symbolic abstraction approach
that reduces EHSs to polynomial hybrid systems (PHSs), by replacing all
non-polynomial terms with newly introduced variables. Thus the verification of
EHSs is reduced to that of PHSs, enabling us to apply all the
well-established verification techniques and tools for PHSs to EHSs. In this
way, it is possible to avoid the limitations of many existing methods. We
illustrate the abstraction approach and its application in safety verification
of EHSs by several real world examples.
|
1403.7057 | Closed-Form Training of Conditional Random Fields for Large Scale Image
Segmentation | cs.LG cs.CV | We present LS-CRF, a new method for very efficient large-scale training of
Conditional Random Fields (CRFs). It is inspired by existing closed-form
expressions for the maximum likelihood parameters of a generative graphical
model with tree topology. LS-CRF training requires only solving a set of
independent regression problems, for which closed-form expressions as well as
efficient iterative solvers are available. This makes it orders of magnitude
faster than conventional maximum likelihood learning for CRFs that require
repeated runs of probabilistic inference. At the same time, the models learned
by our method still allow for joint inference at test time. We apply LS-CRF to
the task of semantic image segmentation, showing that it is highly efficient,
even for loopy models where probabilistic inference is problematic. It allows
the training of image segmentation models from significantly larger training
sets than had been used previously. We demonstrate this on two new datasets
that form a second contribution of this paper. They consist of over 180,000
images with figure-ground segmentation annotations. Our large-scale experiments
show that the possibilities of CRF-based image segmentation are far from
exhausted, indicating, for example, that semi-supervised learning and the use
of non-linear predictors are promising directions for achieving higher
segmentation accuracy in the future.
|
1403.7074 | Analyzing Network Reliability Using Structural Motifs | cs.SI cond-mat.stat-mech math.CO physics.soc-ph q-bio.PE | This paper uses the reliability polynomial, introduced by Moore and Shannon
in 1956, to analyze the effect of network structure on diffusive dynamics such
as the spread of infectious disease. We exhibit a representation for the
reliability polynomial in terms of what we call {\em structural motifs} that is
well suited for reasoning about the effect of a network's structural properties
on diffusion across the network. We illustrate by deriving several general
results relating graph structure to dynamical phenomena.
|
1403.7087 | Conclusions from a NAIVE Bayes Operator Predicting the Medicare 2011
Transaction Data Set | cs.LG cs.CY physics.data-an | Introduction: The United States Federal Government operates one of the world's
largest medical insurance programs, Medicare, to ensure payment for clinical
services for the elderly, illegal aliens and those without the ability to pay
for their care directly. This paper evaluates the Medicare 2011 Transaction
Data Set which details the transfer of funds from Medicare to private and
public clinical care facilities for specific clinical services for the
operational year 2011. Methods: Data mining was conducted to establish the
relationships between reported and computed transaction values in the data set
to better understand the drivers of Medicare transactions at a programmatic
level. Results: The models achieved an average accuracy of 88 and an average
Kappa of 38 during training. Some reported classes are highly independent
from the available data as their predictability remains stable regardless of
redaction of supporting and contradictory evidence. DRG or procedure type
appears to be unpredictable from the available financial transaction values.
Conclusions: Overlay hypotheses, such as charges being driven by the volume
served or DRG being related to charges or payments, are readily falsified in this
analysis, despite 28 million Americans being billed through Medicare in 2011 and
the program distributing over 70 billion in this transaction set alone. It may
be impossible to predict the dependencies and data structures of the payer of
last resort without data from payers of first and second resort. Political concerns
about Medicare would be better served focusing on these first and second order
payer systems as what Medicare costs is not dependent on Medicare itself.
|
1403.7100 | A study on cost behaviors of binary classification measures in
class-imbalanced problems | cs.LG | This work investigates the cost behaviors of
binary classification measures in the context of class-imbalanced problems.
Twelve performance measures are
studied, such as F measure, G-means in terms of accuracy rates, and of recall
and precision, balance error rate (BER), Matthews correlation coefficient
(MCC), Kappa coefficient, etc. A new perspective is presented for those
measures by revealing their cost functions with respect to the class imbalance
ratio. Basically, they are described by four types of cost functions. These
functions provide a theoretical understanding of why some measures are suitable
for dealing with class-imbalanced problems. Based on their cost functions, we
are able to conclude that G-means of accuracy rates and BER are suitable
measures because they show "proper" cost behaviors in terms of "a
misclassification from a small class will cause a greater cost than that from a
large class". On the contrary, the F1 measure, G-means of recall and precision,
MCC and the Kappa coefficient do not exhibit such behaviors and are therefore
unsuitable for dealing with such problems properly.
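The "proper" cost behavior of BER can be sketched numerically (our illustration, not code from the paper; the class sizes of 10 vs 1000 are arbitrary assumptions):

```python
def ber(tp, fn, tn, fp):
    """Balanced error rate: average of the per-class error rates."""
    return 0.5 * (fn / (tp + fn) + fp / (tn + fp))

# Imbalanced data: a small class of 10 positives vs a large class of 1000 negatives.
base = ber(tp=10, fn=0, tn=1000, fp=0)
one_small_miss = ber(tp=9, fn=1, tn=1000, fp=0)   # one error on the small class
one_large_miss = ber(tp=10, fn=0, tn=999, fp=1)   # one error on the large class

print(one_small_miss - base)  # 0.05
print(one_large_miss - base)  # 0.0005
```

A single misclassification from the small class costs 100x more under BER than one from the large class, matching the quoted "proper" behavior.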
|
1403.7102 | Echo chamber amplification and disagreement effects in the political
activity of Twitter users | cs.SI cs.CY physics.soc-ph | Online social networks have emerged as a significant platform for political
discourse. In this paper we investigate what affects the level of participation
of users in the political discussion. Specifically, are users more likely to be
active when they are surrounded by like-minded individuals, or, alternatively,
when their environment is heterogeneous, so that their messages might be carried
to people with differing views? To answer this question, we analyzed the
activity of about 200K Twitter users who expressed explicit support for one of
the candidates of the 2012 US presidential election. We quantified the level of
political activity (PA) of users by the fraction of political tweets in their
posts, and analyzed the relationship between PA and measures of the users'
political environment. These measures were designed to assess the
likemindedness, e.g., the fraction of users with similar political views, of
their virtual and geographic environments. Our results showed that high PA is
usually obtained by users in politically balanced virtual environment. This is
in line with the disagreement theory of political science that states that a
user's PA is invigorated by the disagreement with their peers. Our results also
show that users surrounded by politically like-minded virtual peers tend to
have low PA. This observation contradicts the echo chamber amplification theory
that states that a person tends to be more politically active when surrounded
by like-minded people. Finally, we observe that the likemindedness of the
geographical environment does not affect PA. We thus conclude that PA of users
is independent of the likemindedness of their geographical environment and is
correlated with likemindedness of their virtual environment. The exact form of
correlation manifests the phenomenon of disagreement and, in a majority of
settings, contradicts the echo chamber amplification theory.
|
1403.7105 | Physiological Control of Human Heart Rate and Oxygen Consumption during
Rhythmic Exercises | q-bio.QM cs.SY | Physical exercise has significant benefits for humans in improving the health
and quality of their lives, by improving the functional performance of their
cardiovascular and respiratory systems. However, it is very important to
control the workload, e.g. the frequency of body movements, within the
capability of the individual to maximise the efficiency of the exercise. The
workload is generally represented in terms of heart rate (HR) and oxygen
consumption VO2. We focus particularly on the control of HR and VO2 using the
workload of an individual body movement, also known as the exercise rate (ER),
in this research. The first part of this report deals with the modelling and
control of HR during an unknown type of rhythmic exercise. A novel feature of
the developed system is to control HR by manipulating ER as a control input.
The relation between ER and HR is modelled using a simple autoregressive model
with unknown parameters. The parameters of the model are estimated using a
Kalman filter and an indirect adaptive H-infinity controller is designed. The
performance of the system is tested and validated on six subjects during rowing
and cycling exercise. The results demonstrate that the designed control system
can regulate HR to a predefined profile. The second part of this report deals
with the problem of estimating VO2 during rhythmic exercise, as the direct
measurement of VO2 is not realisable in these environments. Therefore,
non-invasive sensors are used to measure HR, RespR, and ER to estimate VO2. The
developed approach for cycling and rowing exercise predicts the percentage
change in maximum VO2 from the resting to the exercising phases, using a
Hammerstein model. Results show that the average quality of fit in both
exercises is improved as the intensity of exercise is increased.
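The parameter-estimation idea can be sketched minimally (our illustration, not the authors' code): treat the unknown coefficients of an assumed first-order model HR(k+1) = a*HR(k) + b*ER(k) + noise as a constant state and estimate them with a Kalman filter over scalar measurements. The model order, noise levels, and "true" values a=0.9, b=0.5 are assumptions for the demo.

```python
import random

random.seed(1)
a_true, b_true = 0.9, 0.5  # assumed true coefficients, to be recovered

theta = [0.0, 0.0]            # estimate of [a, b]
P = [[1.0, 0.0], [0.0, 1.0]]  # estimate covariance
r = 0.01                      # measurement noise variance

hr = 70.0
for _ in range(500):
    er = random.uniform(20, 40)                             # exercise rate input
    hr_next = a_true * hr + b_true * er + random.gauss(0, 0.1)
    h = [hr, er]                                            # regressor row
    ph = [P[0][0] * h[0] + P[0][1] * h[1],                  # P @ h
          P[1][0] * h[0] + P[1][1] * h[1]]
    s = h[0] * ph[0] + h[1] * ph[1] + r                     # innovation variance
    k = [ph[0] / s, ph[1] / s]                              # Kalman gain
    e = hr_next - (theta[0] * h[0] + theta[1] * h[1])       # innovation
    theta = [theta[0] + k[0] * e, theta[1] + k[1] * e]
    P = [[P[0][0] - k[0] * ph[0], P[0][1] - k[0] * ph[1]],  # (I - k h) P
         [P[1][0] - k[1] * ph[0], P[1][1] - k[1] * ph[1]]]
    hr = hr_next

print(theta)  # estimates close to the assumed (0.9, 0.5)
```

With the parameters identified online, a controller can then choose ER to steer HR toward a reference profile, which is the role the report assigns to the adaptive controller.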
|
1403.7123 | User Cooperation in Wireless Powered Communication Networks | cs.IT math.IT | This paper studies user cooperation in the emerging wireless powered
communication network (WPCN) for throughput optimization. For the purpose of
exposition, we consider a two-user WPCN, in which one hybrid access point
(H-AP) broadcasts wireless energy to two distributed users in the downlink (DL)
and the users transmit their independent information using their individually
harvested energy to the H-AP in the uplink (UL) through
time-division-multiple-access (TDMA). We propose user cooperation in the WPCN
where the user which is nearer to the H-AP and has a better channel for DL
energy harvesting and UL information transmission uses part of its allocated UL
time and DL harvested energy to help relay the far user's information to the
H-AP, in order to achieve more balanced throughput optimization. We maximize
the weighted sum-rate (WSR) of the two users by jointly optimizing the time and
power allocations in the network for both wireless energy transfer in the DL
and wireless information transmission and relaying in the UL. Simulation
results show that the proposed user cooperation scheme can effectively improve
the achievable throughput in the WPCN with desired user fairness.
|
1403.7137 | A Sampling Filter for Non-Gaussian Data Assimilation | cs.CE stat.CO | Data assimilation combines information from models, measurements, and priors
to estimate the state of a dynamical system such as the atmosphere. The
Ensemble Kalman filter (EnKF) is a family of ensemble-based data assimilation
approaches that has gained wide popularity due to its simple formulation, ease of
implementation, and good practical results. Most EnKF algorithms assume that
the underlying probability distributions are Gaussian. Although this assumption
is well accepted, it is too restrictive when applied to large nonlinear models,
nonlinear observation operators, and large levels of uncertainty. Several
approaches have been proposed in order to avoid the Gaussianity assumption. One
of the most successful strategies is the maximum likelihood ensemble filter
(MLEF) which computes a maximum a posteriori estimate of the state assuming the
posterior distribution is Gaussian. MLEF is designed to work with nonlinear and
even non-differentiable observation operators, and shows good practical
performance. However, there are limits to the degree of nonlinearity that MLEF
can handle. This paper proposes a new ensemble-based data assimilation method,
named the "sampling filter", which obtains the analysis by sampling directly
from the posterior distribution. The sampling strategy is based on a Hybrid
Monte Carlo (HMC) approach that can handle non-Gaussian probability
distributions. Numerical experiments are carried out using the Lorenz-96 model
and observation operators with different levels of non-linearity and
differentiability. The proposed filter is also tested with a shallow water model
on a sphere with a linear observation operator. The results show that the
sampling filter can perform well even in highly nonlinear situations where the
EnKF and MLEF filters diverge.
|
1403.7152 | Management of dangerous goods in container terminal with MAS model | cs.MA | In a container terminal, many operations occur within the storage area:
containers import, containers export and containers shifting. All these
operations require the respect of many rules and even laws in order to
guarantee the port safety and to prevent risks, especially when hazardous
material is concerned. In this paper, we propose a hybrid architecture, using a
Cellular Automaton and a Multi-Agent System to handle the dangerous container
storage problem. It is an optimization problem since the aim is to improve the
container terminal configuration, that is, the way hazardous containers are
dispatched through the terminal to improve its security. In our model, we
consider containers as agents, in order to use a Multi-Agent System for the
decision aid software, and a Cellular Automaton for modelling the terminal
itself. To validate our approach, many tests have been performed, and the results
show the relevance of our model.
|
1403.7162 | Information Retrieval (IR) through Semantic Web (SW): An Overview | cs.IR | A large amount of data is present on the web. It contains a huge number of web
pages, and finding suitable information among them is a very cumbersome task.
There is a need to organize data in a formal manner so that users can easily
access and use it. To retrieve information from documents, we have many
Information Retrieval (IR) techniques. Current IR techniques are not advanced
enough to exploit semantic knowledge within documents and give precise results.
IR technology is a major factor responsible for handling annotations in Semantic
Web (SW) languages, and in the present paper the knowledge representation
languages used for retrieving information are discussed.
|
1403.7164 | Tight Bounds for Symmetric Divergence Measures and a Refined Bound for
Lossless Source Coding | cs.IT math.IT math.PR | Tight bounds for several symmetric divergence measures are derived in terms
of the total variation distance. It is shown that each of these bounds is
attained by a pair of 2 or 3-element probability distributions. An application
of these bounds for lossless source coding is provided, refining and improving
a certain bound by Csisz\'{a}r. Another application of these bounds has been
recently introduced by Yardi et al. for channel-code detection.
|
1403.7175 | Low-Rank and Low-Order Decompositions for Local System Identification | math.OC cs.SY | As distributed systems increase in size, the need for scalable algorithms
becomes more and more important. We argue that in the context of system
identification, an essential building block of any scalable algorithm is the
ability to estimate local dynamics within a large interconnected system. We
show that in what we term the "full interconnection measurement" setting, this
task is easily solved using existing system identification methods. We also
propose a promising heuristic for the "hidden interconnection measurement"
case, in which contributions to local measurements from both local and global
dynamics need to be separated. Inspired by the machine learning literature, and
in particular by convex approaches to rank minimization and matrix
decomposition, we exploit the fact that the transfer function of the local
dynamics is low-order, but full-rank, while the transfer function of the global
dynamics is high-order, but low-rank, to formulate this separation task as a
nuclear norm minimization.
|
1403.7178 | Offshore Wind Farm Layout Optimization Using Adapted Genetic Algorithm:
A different perspective | cs.NE | In this paper we study the problem of optimal layout of an offshore wind farm
to minimize wake effect impacts. Considering the specific requirements of
the concerned offshore wind farm, we propose an adaptive genetic algorithm (AGA)
which introduces location swaps to replace random crossovers in conventional
GAs. That way the total number of turbines in the resulting layout will be
effectively kept to the initially specified value. We test the proposed
AGA method on three cases: free wind speeds of 12 m/s and 20 m/s, and a typical
offshore wind distribution setting. Numerical results verify the
effectiveness of our proposed algorithm, which achieves much faster
convergence compared to conventional GAs.
|
1403.7209 | Acceleration of a Full-scale Industrial CFD Application with OP2 | cs.CE cs.PF | Hydra is a full-scale industrial CFD application used for the design of
turbomachinery at Rolls Royce plc. It consists of over 300 parallel loops with
a code base exceeding 50K lines and is capable of performing complex
simulations over highly detailed unstructured mesh geometries. Unlike simpler
structured-mesh applications, which feature high speed-ups when accelerated by
modern processor architectures, such as multi-core and many-core processor
systems, Hydra presents major challenges in data organization and movement that
need to be overcome for continued high performance on emerging platforms. We
present research in achieving this goal through the OP2 domain-specific
high-level framework. OP2 targets the domain of unstructured mesh problems and
follows the design of an active library using source-to-source translation and
compilation to generate multiple parallel implementations from a single
high-level application source for execution on a range of back-end hardware
platforms. We chart the conversion of Hydra from its original hand-tuned
production version to one that utilizes OP2, and map out the key difficulties
encountered in the process. To our knowledge this research presents the first
application of such a high-level framework to a full scale production code.
Specifically we show (1) how different parallel implementations can be achieved
with an active library framework, even for a highly complicated industrial
application such as Hydra, and (2) how different optimizations targeting
contrasting parallel architectures can be applied to the whole application,
seamlessly, reducing developer effort and increasing code longevity.
Performance results demonstrate that not only can the same runtime performance
as that of the hand-tuned original production code be achieved, but it can
be significantly improved on conventional processor systems. Additionally, we
achieve further...
|
1403.7232 | On the Performance of Short Block Codes over Finite-State Channels in
the Rare-Transition Regime | cs.IT math.IT | As the mobile application landscape expands, wireless networks are tasked
with supporting different connection profiles, including real-time traffic and
delay-sensitive communications. Among many ensuing engineering challenges is
the need to better understand the fundamental limits of forward error
correction in non-asymptotic regimes. This article characterizes the
performance of random block codes over finite-state channels and evaluates
their queueing performance under maximum-likelihood decoding. In particular,
classical results from information theory are revisited in the context of
channels with rare transitions, and bounds on the probabilities of decoding
failure are derived for random codes. This creates an analysis framework where
channel dependencies within and across codewords are preserved. Such results
are subsequently integrated into a queueing problem formulation. For instance,
it is shown that, for random coding on the Gilbert-Elliott channel, the
performance analysis based on upper bounds on error probability provides very
good estimates of system performance and optimum code parameters. Overall, this
study offers new insights about the impact of channel correlation on the
performance of delay-aware, point-to-point communication links. It also
provides novel guidelines on how to select code rates and block lengths for
real-time traffic over wireless communication infrastructures.
|
1403.7239 | Asynchronous Orthogonal Differential Decoding for Multiple Access
Channels | cs.IT math.IT | We propose several differential decoding schemes for asynchronous multi-user
MIMO systems based on orthogonal space-time block codes (OSTBCs) where neither
the transmitters nor the receiver has knowledge of the channel. First, we
derive novel low complexity differential decoders by performing interference
cancelation in time and employing different decoding methods. The decoding
complexity of these schemes grows linearly with the number of users. We then
present additional differential decoding schemes that perform significantly
better than our low complexity decoders and outperform the existing synchronous
differential schemes but require higher decoding complexity compared to our low
complexity decoders. The proposed schemes work for any square OSTBC, any
constant amplitude constellation, any number of users, and any number of
receive antennas. Furthermore, we analyze the diversity of the proposed schemes
and derive conditions under which our schemes provide full diversity. For the
cases of two and four transmit antennas, we provide examples of PSK
constellations to achieve full diversity. Simulation results show that our
differential schemes provide good performance. To the best of our knowledge,
the proposed differential detection schemes are the first differential schemes
for asynchronous multi-user systems.
|
1403.7242 | Network Weirdness: Exploring the Origins of Network Paradoxes | cs.SI cs.CY physics.soc-ph | Social networks have many counter-intuitive properties, including the
"friendship paradox" that states, on average, your friends have more friends
than you do. Recently, a variety of other paradoxes were demonstrated in online
social networks. This paper explores the origins of these network paradoxes.
Specifically, we ask whether they arise from mathematical properties of the
networks or whether they have a behavioral origin. We show that sampling from
heavy-tailed distributions always gives rise to a paradox in the mean, but not
the median. We propose a strong form of network paradoxes, based on utilizing
the median, and validate it empirically using data from two online social
networks. Specifically, we show that for any user, the majority of the user's
friends and followers have more friends, followers, etc. than the user, and
that this cannot be explained by statistical properties of sampling. Next, we
explore the behavioral origins of the paradoxes by using the shuffle test to
remove correlations between node degrees and attributes. We find that paradoxes
for the mean persist in the shuffled network, but not for the median. We
demonstrate that strong paradoxes arise due to the assortativity of user
attributes, including degree, and correlation between degree and attribute.
|
1403.7248 | Updating RDFS ABoxes and TBoxes in SPARQL | cs.DB | Updates in RDF stores have recently been standardised in the SPARQL 1.1
Update specification. However, computing answers entailed by ontologies in
triple stores is usually treated orthogonal to updates. Even the W3C's recent
SPARQL 1.1 Update language and SPARQL 1.1 Entailment Regimes specifications
explicitly exclude a standard behaviour for how SPARQL endpoints should treat
entailment regimes other than simple entailment in the context of updates. In
this paper, we take a first step to close this gap. We define a fragment of
SPARQL basic graph patterns corresponding to (the RDFS fragment of) DL-Lite and
the corresponding SPARQL update language, dealing with updates both of ABox and
of TBox statements. We discuss possible semantics along with potential
strategies for implementing them. We treat both (i) materialised RDF stores,
which store all entailed triples explicitly, and (ii) reduced RDF Stores, that
is, redundancy-free RDF stores that do not store any RDF triples (corresponding
to DL-Lite ABox statements) entailed by others already.
|
1403.7264 | Proceedings Second Workshop on Synthesis | cs.LO cs.SY | Software synthesis is rapidly developing into an important research area with
vast potential for practical application. The SYNT Workshop on Synthesis aims
to bring together researchers interested in synthesis to present both
ongoing and mature work on all aspects of automated synthesis and its
applications.
The second iteration of SYNT took place in Saint Petersburg, Russia, and was
co-located with the 25th International Conference on Computer Aided
Verification. The workshop included eleven presentations covering the full
scope of the emerging synthesis community, as well as a discussion led by Swen
Jacobs on the organization of two new synthesis competitions focusing on
reactive synthesis and syntax-guided functional synthesis respectively.
|
1403.7292 | A Mining Method to Create Knowledge Map by Analysing the Data Resource | cs.AI | The fundamental step in measuring the robustness of a system is the synthesis
of the so-called Process Map. This is generally based on the user's raw data
material. Process Maps are of fundamental importance to understanding the
nature of a system, in that they indicate which variables are causally related
and which are particularly important. This paper presents a system map, or
business structure map, for understanding business criteria by studying the
various aspects of a company. The business structure map, knowledge map, or
process map is used to foster the growth of a company by providing useful
measures according to its business criteria. This paper also deals with
different company strategies to reduce risk factors. A Process Map is helpful
for building such knowledge successfully; making decisions from such a map in a
highly complex situation requires more knowledge and resources.
|
1403.7296 | Tile optimization for area in FPGA based hardware acceleration of
peptide identification | cs.CE | Advances in life sciences over the last few decades have led to the
generation of a huge amount of biological data. Computing research has become a
vital part in driving biological discovery where analysis and categorization of
biological data are involved. String matching algorithms can be applied for
protein/gene sequence matching and with the phenomenal increase in the size of
string databases to be analyzed, software implementations of these algorithms
seem to have hit a hard limit, and hardware acceleration is increasingly being
sought. Several platforms such as Field Programmable Gate Arrays
(FPGA), Graphics Processing Units (GPU), and Chip Multi-Processors (CMP) are
being explored for hardware acceleration. In this paper, we give a comprehensive
overview of the literature on hardware acceleration of string matching
algorithms, undertake an FPGA hardware exploration, and expedite the design
time through a design automation technique. Further, our design automation is also
optimized for better hardware utilization through optimizing the number of
peptides that can be represented in an FPGA tile. The results indicate
significant improvements in design time and hardware utilization which are
reported in this paper.
|
1403.7308 | Data Generators for Learning Systems Based on RBF Networks | stat.ML cs.AI cs.LG | There are plenty of problems where the data available is scarce and
expensive. We propose a generator of semi-artificial data with similar
properties to the original data which enables development and testing of
different data mining algorithms and optimization of their parameters. The
generated data allow large-scale experimentation and simulation without the
danger of overfitting. The proposed generator is based on RBF networks, which
learn sets of Gaussian kernels. These Gaussian kernels can be used in a
generative mode to generate new data from the same distributions. To assess
quality of the generated data we evaluated the statistical properties of the
generated data, structural similarity and predictive similarity using
supervised and unsupervised learning techniques. To determine usability of the
proposed generator we conducted a large scale evaluation using 51 UCI data
sets. The results show a considerable similarity between the original and
generated data and indicate that the method can be useful in several
development and simulation scenarios. We analyze possible improvements in
classification performance by adding different amounts of generated data to the
training set, performance on high dimensional data sets, and conditions when
the proposed approach is successful.
|
1403.7311 | Performance Evaluation of Raster Based Shape Vectors in Object
Recognition | cs.CV | Object recognition remains an open challenge in the field of computer vision and
multimedia retrieval. Defining an object model is a critical task. The shape
information of an object plays a critical role in the process of object
recognition. Extraction of boundary information of an object from the
multimedia data and classifying this information with associated objects is the
primary step towards object recognition. Rasters play an important role while
computing object boundary. The trade-off lies with the dimensionality of the
object versus computational cost while achieving maximum efficiency. In this
treatise an attempt is made to evaluate the performance of circular and spiral
raster models in terms of average retrieval efficiency and computational cost.
|
1403.7315 | HRank: A Path based Ranking Framework in Heterogeneous Information
Network | cs.IR | Recently, there has been a surge of interest in heterogeneous information network
analysis. As a newly emerging network model, heterogeneous information networks
have many unique features (e.g., complex structure and rich semantics) and a
number of interesting data mining tasks have been explored in this kind of
network, such as similarity measurement, clustering, and classification. Although
evaluating the importance of objects has been well studied in homogeneous
networks, it has not yet been explored in heterogeneous networks. In this paper, we
study the ranking problem in heterogeneous networks and propose the HRank
framework to evaluate the importance of multiple types of objects and meta
paths. Since the importance of objects depends upon the meta paths in
heterogeneous networks, HRank develops a path based random walk process.
Moreover, a constrained meta path is proposed to subtly capture the rich
semantics in heterogeneous networks. Furthermore, HRank can simultaneously
determine the importance of objects and meta paths by applying tensor
analysis. Extensive experiments on three real datasets show that HRank can
effectively evaluate the importance of objects and paths together. Moreover,
the constrained meta path shows its potential on mining subtle semantics by
obtaining more accurate ranking results.
|
1403.7317 | On the Outage Probability of the Full-Duplex Interference-Limited Relay
Channel | cs.IT math.IT | In this paper, we study the performance, in terms of the asymptotic error
probability, of a user which communicates with a destination with the aid of a
full-duplex in-band relay. We consider that the network is
interference-limited, and interfering users are distributed as a Poisson point
process. In this case, the asymptotic error probability is upper bounded by the
outage probability (OP). We investigate the outage behavior for well-known
cooperative schemes, namely, decode-and-forward (DF) and compress-and-forward
(CF) considering fading and path loss. For DF we determine the exact OP and
develop upper bounds which are tight in typical operating conditions. Also, we
find the correlation coefficient between source and relay signals which
minimizes the OP when the density of interferers is small. For CF, the
achievable rates are determined by the spatial correlation of the
interferences, and a straightforward analysis is not possible. To handle this
issue, we show that the rate with correlated noises is at most one bit worse than
that with uncorrelated noises, and thus find an upper bound on the performance of
CF. These results are useful to evaluate the performance and to optimize
relaying schemes in the context of full-duplex wireless networks.
|
1403.7321 | Learning detectors quickly using structured covariance matrices | cs.CV | Computer vision is increasingly becoming interested in the rapid estimation
of object detectors. Canonical hard negative mining strategies are slow as they
require multiple passes of the large negative training set. Recent work has
demonstrated that if the distribution of negative examples is assumed to be
stationary, then Linear Discriminant Analysis (LDA) can learn comparable
detectors without ever revisiting the negative set. Even with this insight,
however, the time to learn a single object detector can still be on the order
of tens of seconds on a modern desktop computer. This paper proposes to
leverage the resulting structured covariance matrix to obtain detectors with
identical performance in orders of magnitude less time and memory. We elucidate
an important connection to the correlation filter literature, demonstrating
that these can also be trained without ever revisiting the negative set.
|
1403.7322 | Exploiting Delay Correlation for Multi-Antenna-Assisted High Speed Train
Communications | cs.IT math.IT | In High Speed Train Communications (HSTC), the most challenging issue is
coping with the extremely fast fading channel. Compared with its static
counterpart, channel estimation on the move consumes excessive energy and
spectrum to achieve similar performance. To address this issue, we exploit the
delay correlation inherent in the linear spatial-temporal structure of
multi-antenna array, based on which the rapid fading channel may be
approximated by a virtual slow-fading channel. Subsequently, error probability
and spectral efficiency are re-examined for this staticized channel. In
particular, we formulate the quantitative tradeoff between the two metrics of
interest, by adjusting the pilot percentage in each frame. Numerical results
verify the good performance of the proposed scheme and elucidate the tradeoff.
|
1403.7335 | Emotion Analysis Platform on Chinese Microblog | cs.CL cs.CY cs.IR | Weibo, as the largest social media service in China, has billions of messages
generated every day. The huge number of messages contain rich sentimental
information. In order to analyze the emotional changes in accordance with time
and space, this paper presents an Emotion Analysis Platform (EAP), which
explores the emotional distribution of each province so as to monitor the
emotional pulse of each province in China. The massive data of Weibo and the
real-time requirements make the building of EAP challenging. In order to solve
the above problems, emoticons, emotion lexicon and emotion-shifting rules are
adopted in EAP to analyze the emotion of each tweet. To verify the
effectiveness of the platform, a case study on the Sichuan earthquake is
conducted, and the platform's analysis accords with the facts. To provide a
quantitative analysis, we manually annotate a test set and conduct experiments
on it. The experimental results show that the macro-precision of EAP reaches
80% and the EAP works effectively.
|
1403.7365 | Expectation-Maximization Technique and Spatial-Adaptation Applied to
Pel-Recursive Motion Estimation | cs.CV | Pel-recursive motion estimation is a well-established approach. However, in
the presence of noise, it becomes an ill-posed problem that requires
regularization. In this paper, motion vectors are estimated in an iterative
fashion by means of the Expectation-Maximization (EM) algorithm and a Gaussian
data model. Our proposed algorithm also utilizes the local image properties of
the scene to improve the motion vector estimates following a spatially adaptive
approach. Numerical experiments are presented that demonstrate the merits of
our method.
|
1403.7373 | Difficulty Rating of Sudoku Puzzles: An Overview and Evaluation | cs.AI | How can we predict the difficulty of a Sudoku puzzle? We give an overview of
difficulty rating metrics and evaluate them on an extensive dataset of human
problem solving (more than 1700 Sudoku puzzles, hundreds of solvers). The best
results are obtained using a computational model of human solving activity.
Using the model we show that there are two sources of the problem difficulty:
complexity of individual steps (logic operations) and structure of dependency
among steps. We also describe metrics based on analysis of solutions under
relaxed constraints -- a novel approach inspired by phase transition phenomenon
in the graph coloring problem. In our discussion we focus not just on the
performance of individual metrics on the Sudoku puzzle, but also on their
generalizability and applicability to other problems.
|
1403.7400 | Big Questions for Social Media Big Data: Representativeness, Validity
and Other Methodological Pitfalls | cs.SI physics.soc-ph | Large-scale databases of human activity in social media have captured
scientific and policy attention, producing a flood of research and discussion.
This paper considers methodological and conceptual challenges for this emergent
field, with special attention to the validity and representativeness of social
media big data analyses. Persistent issues include the over-emphasis of a
single platform, Twitter, sampling biases arising from selection by hashtags,
and vague and unrepresentative sampling frames. The socio-cultural complexity
of user behavior aimed at algorithmic invisibility (such as subtweeting,
mock-retweeting, use of "screen captures" for text, etc.) further complicates
the interpretation of social media big data. Other challenges include accounting
for field effects, i.e. broadly consequential events that do not diffuse only
through the network under study but affect the whole society. The application
of network methods from other fields to the study of human social activity may
not always be appropriate. The paper concludes with a call to action on
practical steps to improve our analytic capacity in this promising,
rapidly-growing field.
|
1403.7426 | An Overview of Hierarchical Task Network Planning | cs.AI | Hierarchies are the most common structure used to understand the world
better. In galaxies, for instance, multiple-star systems are organised in a
hierarchical system. Then, governmental and company organisations are
structured using a hierarchy, while the Internet, which is used on a daily
basis, has a space of domain names arranged hierarchically. Since Artificial
Intelligence (AI) planning portrays information about the world and reasons to
solve some of the world's problems, Hierarchical Task Network (HTN) planning was
introduced almost 40 years ago to represent and deal with hierarchies. Its
requirement for rich domain knowledge to characterise the world enables HTN
planning to be very useful, but also to perform well. However, the history of
almost 40 years obfuscates the current understanding of HTN planning in terms
of accomplishments, planning models, similarities and differences among
hierarchical planners, and its current and objective image. On top of these
issues, the ability of hierarchical planning to truly cope with the requirements
of real-world applications attracts particular attention. We propose a
framework-based approach to remedy this situation. First, we provide a basis
for defining different formal models of hierarchical planning, and define two
models that comprise a large portion of HTN planners. Second, we provide a set
of concepts that helps to interpret HTN planners from the aspect of their
search space. Then, we analyse and compare the planners based on a variety of
properties organised in five segments, namely domain authoring, expressiveness,
competence, performance and applicability. Furthermore, we select Web service
composition as a real-world and current application, and classify and compare
the approaches that employ HTN planning to solve the problem of service
composition. Finally, we conclude with our findings and present directions for
future work.
|
1403.7429 | Distributed Reconstruction of Nonlinear Networks: An ADMM Approach | math.OC cs.DC cs.LG cs.SY | In this paper, we present a distributed algorithm for the reconstruction of
large-scale nonlinear networks. In particular, we focus on the identification
from time-series data of the nonlinear functional forms and associated
parameters of large-scale nonlinear networks. Recently, a nonlinear network
reconstruction problem was formulated as a nonconvex optimisation problem based
on the combination of a marginal likelihood maximisation procedure with
sparsity inducing priors. Using a convex-concave procedure (CCCP), an iterative
reweighted lasso algorithm was derived to solve the initial nonconvex
optimisation problem. By exploiting the structure of the objective function of
this reweighted lasso algorithm, a distributed algorithm can be designed. To
this end, we apply the alternating direction method of multipliers (ADMM) to
decompose the original problem into several subproblems. To illustrate the
effectiveness of the proposed methods, we use our approach to identify a
network of interconnected Kuramoto oscillators with different network sizes
(500 to 100,000 nodes).
|