| id | title | categories | abstract |
|---|---|---|---|
1310.1811 | End-to-End Text Recognition with Hybrid HMM Maxout Models | cs.CV | The problem of detecting and recognizing text in natural scenes has proved to
be more challenging than its counterpart in documents, with most of the
previous work focusing on a single part of the problem. In this work, we
propose new solutions to the character and word recognition problems and then
show how to combine these solutions in an end-to-end text-recognition system.
We do so by leveraging the recently introduced Maxout networks along with
hybrid HMM models that have proven useful for voice recognition. Using these
elements, we build a tunable and highly accurate recognition system that beats
state-of-the-art results on all the sub-problems for both the ICDAR 2003 and
SVT benchmark datasets.
|
1310.1822 | Error Rate Analysis of Cognitive Radio Transmissions with Imperfect
Channel Sensing | cs.IT math.IT | This paper studies the symbol error rate performance of cognitive radio
transmissions in the presence of imperfect sensing decisions. Two different
transmission schemes, namely sensing-based spectrum sharing (SSS) and
opportunistic spectrum access (OSA), are considered. In both schemes, secondary
users first perform channel sensing, albeit with possible errors. In SSS,
depending on the sensing decisions, they adapt the transmission power level and
coexist with primary users in the channel. On the other hand, in OSA, secondary
users are allowed to transmit only when the primary user activity is not
detected. Initially, for both transmission schemes, general formulations for
the optimal decision rule and error probabilities are provided for arbitrary
modulation schemes under the assumptions that the receiver is equipped with the
sensing decision and perfect knowledge of the channel fading, and that the
primary user's faded signal received at the secondary receiver has a Gaussian
mixture distribution. Subsequently, the general approach is specialized to
rectangular quadrature amplitude modulation (QAM). More specifically, the
optimal decision rule is characterized for rectangular QAM, and closed-form
expressions for the
average symbol error probability attained with the optimal detector are derived
under both transmit power and interference constraints. The effects of
imperfect channel sensing decisions, interference from the primary user and its
Gaussian mixture model, and the transmit power and interference constraints on
the error rate performance of cognitive transmissions are analyzed.
|
1310.1829 | Delineating geographical regions with networks of human interactions in
an extensive set of countries | cs.SI physics.soc-ph | Large-scale networks of human interaction, in particular country-wide
telephone call networks, can be used to redraw geographical maps by applying
algorithms of topological community detection. The geographic projections of
the emerging areas in a few recent studies on single regions have been
suggested to share two distinct properties: first, they are cohesive, and
second, they tend to closely follow socio-economic boundaries and are similar
to existing political regions in size and number. Here we use an extended set
of countries and clustering indices to quantify overlaps, providing ample
additional evidence for these observations using phone data from countries of
various scales across Europe, Asia, and Africa: France, the UK, Italy, Belgium,
Portugal, Saudi Arabia, and Ivory Coast. In our analysis we use the known
approach of partitioning country-wide networks, and an additional iterative
partitioning of each of the first level communities into sub-communities,
revealing that cohesiveness and matching of official regions can also be
observed at a second level if the spatial resolution of the data is high
enough. The method has possible policy implications for the definition of the
borders and sizes of administrative regions.
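As an illustration of the two-level partitioning idea, here is a minimal
sketch using NetworkX's Louvain implementation as a stand-in for the paper's
community detection method; the graph is a placeholder, not the call data
used in the study.

```python
# Minimal sketch of two-level community detection: partition a network,
# then re-partition each first-level community into sub-communities.
# Louvain is a stand-in for the paper's detection method.
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.les_miserables_graph()  # placeholder for a country-wide call network

first_level = louvain_communities(G, seed=42)
for i, community in enumerate(first_level):
    subgraph = G.subgraph(community)
    second_level = louvain_communities(subgraph, seed=42)
    print(f"community {i}: {len(community)} nodes, "
          f"{len(second_level)} sub-communities")
```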
|
1310.1840 | Parallel coordinate descent for the Adaboost problem | cs.LG math.OC stat.ML | We design a randomised parallel version of Adaboost based on previous studies
on parallel coordinate descent. The algorithm uses the fact that the logarithm
of the exponential loss is a function with coordinate-wise Lipschitz continuous
gradient, in order to define the step lengths. We provide the proof of
convergence for this randomised Adaboost algorithm and a theoretical
parallelisation speedup factor. We finally provide numerical examples on
learning problems of various sizes that show that the algorithm is competitive
with existing approaches, especially for large-scale problems.
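A minimal sketch of the core update, assuming the abstract's setup:
randomised (block) coordinate descent on the logarithm of the exponential
loss. The matrix `H` with `H[i, j] = y_i * h_j(x_i)`, the block size `tau`,
and the scalar `beta` standing in for the coordinate-wise Lipschitz constants
are all illustrative assumptions, not names from the paper.

```python
# Sketch: randomised coordinate descent on
#   F(alpha) = log( sum_i exp(-(H @ alpha)_i) ),
# the log of AdaBoost's exponential loss; tau coordinates are updated
# per iteration, mimicking parallel updates across processors.
import numpy as np

def rand_cd_adaboost(H, n_iters=1000, tau=8, beta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    m, n = H.shape
    alpha = np.zeros(n)
    for _ in range(n_iters):
        w = np.exp(-H @ alpha)
        w /= w.sum()                 # normalised example weights
        grad = -(H.T @ w)            # gradient of F at alpha
        coords = rng.choice(n, size=tau, replace=False)
        alpha[coords] -= grad[coords] / beta   # 1/L_j-style steps
    return alpha
```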
|
1310.1855 | Early Fire Detection Using HEP and Space-time Analysis | cs.CV cs.MM | In this article, a video-based early fire alarm system is developed by
monitoring the smoke in the scene. There are two major contributions in this
work. First, to find the best texture feature for smoke detection, a general
framework, named Histograms of Equivalent Patterns (HEP), is adopted to achieve
an extensive evaluation of various kinds of texture features. Second, the
\emph{Block based Inter-Frame Difference} (BIFD) and an improved version of
LBP-TOP are proposed and ensembled to describe the space-time characteristics
of the smoke. In order to reduce the false alarms, the Smoke History Image
(SHI) is utilized to register the recent classification results of candidate
smoke blocks. Experimental results using an SVM show that the proposed method
achieves higher accuracy and fewer false alarms than state-of-the-art
technologies.
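A toy sketch of our reading of the block-based inter-frame difference idea
(the paper's exact BIFD definition may differ; block size and threshold are
illustrative assumptions):

```python
# Toy block-based inter-frame difference: mean absolute change per block
# between consecutive grayscale frames, thresholded to flag candidate
# smoke blocks for further texture analysis.
import numpy as np

def bifd(prev_frame, frame, block=16, thresh=8.0):
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    flags = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            patch = diff[i * block:(i + 1) * block,
                         j * block:(j + 1) * block]
            flags[i, j] = patch.mean() > thresh
    return flags
```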
|
1310.1857 | Predictor-Based Tracking For Neuromuscular Electrical Stimulation | math.OC cs.SY | A new hybrid tracking controller for neuromuscular electrical stimulation is
proposed. The control scheme uses sampled measurements and is designed by
utilizing a numerical prediction of the state variables. The tracking error of
the closed-loop system converges exponentially to zero and robustness to
perturbations of the sampling schedule is exhibited. One of the novelties of
our approach is the ability to satisfy a state constraint imposed by the
physical system.
|
1310.1861 | Physical-Layer Cryptography Through Massive MIMO | cs.IT cs.CR math.IT | We propose the new technique of physical-layer cryptography based on using a
massive MIMO channel as a key between the sender and desired receiver, which
need not be secret. The goal is for low-complexity encoding and decoding by the
desired transmitter-receiver pair, whereas decoding by an eavesdropper
requires prohibitive complexity. The decoding complexity is analyzed by
mapping the massive MIMO system to a lattice. We show that the eavesdropper's
decoder for the MIMO system with M-PAM modulation is equivalent to solving
standard lattice problems that are conjectured to be of exponential complexity
for both classical and quantum computers. Hence, under the widely-held
conjecture that standard lattice problems are hard to solve in the worst-case,
the proposed encryption scheme has a more robust notion of security than that
of the most common encryption methods used today such as RSA and
Diffie-Hellman. Additionally, we show that this scheme could be used to
securely communicate without a pre-shared secret and with little computational
overhead. Thus, by exploiting the physical layer properties of the radio
channel, the massive MIMO system provides for low-complexity encryption
commensurate with the most sophisticated forms of application-layer encryption
that are currently known.
|
1310.1863 | Empowerment -- an Introduction | cs.AI cs.IT math.IT nlin.AO | This book chapter is an introduction to and an overview of the
information-theoretic, task independent utility function "Empowerment", which
is defined as the channel capacity between an agent's actions and an agent's
sensors. It quantifies how much influence and control an agent has over the
world it can perceive. This book chapter discusses the general idea behind
empowerment as an intrinsic motivation and showcases several previous
applications of empowerment to demonstrate how empowerment can be applied to
different sensor-motor configurations, and how the same formalism can lead to
different observed behaviors. Furthermore, we also present a fast approximation
for empowerment in the continuous domain.
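Since empowerment is a channel capacity, it can be computed for small
discrete action-sensor channels with the standard Blahut-Arimoto iteration;
the sketch below and its toy channel are illustrative assumptions, not code
from the chapter.

```python
# Empowerment of a discrete memoryless action -> sensor channel p(s|a),
# computed (in nats) by Blahut-Arimoto iteration over action
# distributions p(a).
import numpy as np

def empowerment(p_s_given_a, n_iters=200):
    n_actions, _ = p_s_given_a.shape
    p_a = np.full(n_actions, 1.0 / n_actions)
    for _ in range(n_iters):
        p_s = p_a @ p_s_given_a                    # sensor marginal
        post = (p_a[:, None] * p_s_given_a) / p_s  # p(a|s)
        p_a = np.exp((p_s_given_a * np.log(post + 1e-30)).sum(axis=1))
        p_a /= p_a.sum()
    p_s = p_a @ p_s_given_a
    info = p_a[:, None] * p_s_given_a * np.log(p_s_given_a / p_s + 1e-30)
    return float(info.sum())                       # I(A; S) at the optimum

toy_channel = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
print(empowerment(toy_channel))
```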
|
1310.1869 | Singular Value Decomposition of Images from Scanned Photographic Plates | cs.CV astro-ph.IM cs.CE | We approximate the m x n image A from scanned astronomical photographic
plates (from the Sofia Sky Archive Data Center) using far fewer entries than
are in the original matrix. By truncating the matrix to rank k, with k < m or
k < n, we remove redundant information or noise, in the manner of a Wiener
filter. With this approximation, a compression ratio of more than 98% is
obtained for astronomical plate images without losing essential image details.
The SVD of images from scanned photographic plates (SPP) and its use for image
compression are considered.
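A minimal sketch of the rank-k approximation step; the storage accounting is
the standard count of retained U, s, V entries and is an illustration, not
the paper's exact pipeline.

```python
# Rank-k SVD approximation of an image matrix A: keep the k largest
# singular triplets, which acts as a simple noise filter and compressor.
import numpy as np

def svd_compress(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]        # best rank-k approximation
    stored = k * (A.shape[0] + A.shape[1] + 1)  # entries kept: U, s, V parts
    compression = 1.0 - stored / A.size
    return A_k, compression

A = np.random.default_rng(0).random((512, 512))  # stand-in for a plate scan
A_k, ratio = svd_compress(A, k=10)
print(f"compression: {ratio:.1%}")
```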
|
1310.1891 | Every list-decodable code for high noise has abundant near-optimal rate
puncturings | cs.IT math.IT | We show that any q-ary code with sufficiently good distance can be randomly
punctured to obtain, with high probability, a code that is list decodable up to
radius $1 - 1/q - \epsilon$ with near-optimal rate and list sizes. Our results
imply that "most" Reed-Solomon codes are list decodable beyond the Johnson
bound, settling the long-standing open question of whether any Reed-Solomon
codes meet this criterion.
More precisely, we show that a Reed-Solomon code with random evaluation
points is, with high probability, list decodable up to radius $1 - \epsilon$
with list sizes $O(1/\epsilon)$ and rate $\Omega(\epsilon)$. As a second
corollary of our argument, we obtain improved bounds on the list decodability
of random linear codes over large fields.
Our approach exploits techniques from high dimensional probability. Previous
work used similar tools to obtain bounds on the list decodability of random
linear codes, but the bounds did not scale with the size of the alphabet. In
this paper, we use a chaining argument to deal with large alphabet sizes.
|
1310.1930 | Polytopic uncertainty for linear systems: New and old complexity results | cs.SY cs.CC math.DS | We survey the problem of deciding the stability or stabilizability of
uncertain linear systems whose region of uncertainty is a polytope. This
natural setting has applications in many fields of applied science, from
Control Theory to Systems Engineering to Biology. We focus on the algorithmic
decidability of this property when one is given a particular polytope. This
setting gives rise to several different algorithmic questions, depending on the
nature of time (discrete/continuous), the property asked
(stability/stabilizability), or the type of uncertainty (fixed/switching).
Several of these questions have been answered in the literature in the last
thirty years. We point out the ones that have remained open, and we answer all
of them, except one which we raise as an open question. In all the cases, the
results are negative in the sense that the questions are NP-hard. As a
byproduct, we obtain complexity results for several other matrix problems in
Systems and Control.
|
1310.1934 | Discriminative Features via Generalized Eigenvectors | cs.LG stat.ML | Representing examples in a way that is compatible with the underlying
classifier can greatly enhance the performance of a learning system. In this
paper we investigate scalable techniques for inducing discriminative features
by taking advantage of simple second order structure in the data. We focus on
multiclass classification and show that features extracted from the generalized
eigenvectors of the class conditional second moments lead to classifiers with
excellent empirical performance. Moreover, these features have attractive
theoretical properties, such as inducing representations that are invariant to
linear transformations of the input. We evaluate classifiers built from these
features on three different tasks, obtaining state of the art results.
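A minimal two-class sketch of the idea as we read it: generalized
eigenvectors of a pair of class-conditional second-moment matrices, solved
with SciPy, then used to project the data; the regularisation and the
selection of the top directions are illustrative choices, not the paper's
exact procedure.

```python
# Features from generalized eigenvectors of class-conditional second
# moments: solve C_a v = lambda * C_b v and project onto the leading
# directions, which maximise the ratio of class-a to class-b energy.
import numpy as np
from scipy.linalg import eigh

def geigen_features(X, y, class_a, class_b, k=10, reg=1e-6):
    Xa, Xb = X[y == class_a], X[y == class_b]
    C_a = Xa.T @ Xa / len(Xa)        # second moment of class a
    C_b = Xb.T @ Xb / len(Xb)        # second moment of class b
    d = X.shape[1]
    vals, vecs = eigh(C_a, C_b + reg * np.eye(d))
    top = vecs[:, np.argsort(vals)[-k:]]
    return X @ top
```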
|
1310.1942 | Containing Viral Spread on Sparse Random Graphs: Bounds, Algorithms, and
Experiments | math.PR cs.DM cs.SI math.CO | Viral spread on large graphs has many real-life applications such as malware
propagation in computer networks and rumor (or misinformation) spread in
Twitter-like online social networks. Although viral spread on large graphs has
been intensively analyzed on classical models such as
Susceptible-Infectious-Recovered, there still exists a lack of effective
methods in practice to contain epidemic spread once it passes a critical
threshold. Against this backdrop, we explore methods of containing viral spread
in large networks with the focus on sparse random networks. The viral
containment strategy is to partition a large network into small components and
then to ensure the sanity of all messages delivered across different
components. With such a defense mechanism in place, an epidemic spread starting
from any node is limited to only those nodes belonging to the same component as
the initial infection node. We establish both lower and upper bounds on the
costs of inspecting inter-component messages. We further propose
heuristic-based approaches to partition large input graphs into small
components. Finally, we study the performance of our proposed algorithms under
different network topologies and different edge weight models.
|
1310.1947 | Bayesian Optimization With Censored Response Data | cs.AI cs.LG stat.ML | Bayesian optimization (BO) aims to minimize a given blackbox function using a
model that is updated whenever new evidence about the function becomes
available. Here, we address the problem of BO under partially right-censored
response data, where in some evaluations we only obtain a lower bound on the
function value. The ability to handle such response data allows us to
adaptively censor costly function evaluations in minimization problems where
the cost of a function evaluation corresponds to the function value. One
important application giving rise to such censored data is the
runtime-minimizing variant of the algorithm configuration problem: finding
settings of a given parametric algorithm that minimize the runtime required for
solving problem instances from a given distribution. We demonstrate that
terminating slow algorithm runs prematurely and handling the resulting
right-censored observations can substantially improve the state of the art in
model-based algorithm configuration.
|
1310.1949 | Least Squares Revisited: Scalable Approaches for Multi-class Prediction | cs.LG stat.ML | This work provides simple algorithms for multi-class (and multi-label)
prediction in settings where both the number of examples n and the data
dimension d are relatively large. These robust and parameter-free algorithms
are essentially iterative least-squares updates and very versatile both in
theory and in practice. On the theoretical front, we present several variants
with convergence guarantees. Owing to their effective use of second-order
structure, these algorithms are substantially better than first-order methods
in many practical scenarios. On the empirical side, we present a scalable
stagewise variant of our approach, which achieves dramatic computational
speedups over popular optimization packages such as Liblinear and Vowpal Wabbit
on standard datasets (MNIST and CIFAR-10), while attaining state-of-the-art
accuracies.
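A minimal sketch of the basic building block, a one-vs-all regularised
least-squares fit via the normal equations; the paper's algorithms are
iterative and stagewise refinements of this, so treat the code as background
rather than the authors' method.

```python
# One-vs-all regularised least squares for multi-class prediction:
# fit W by solving (X^T X + lam I) W = X^T Y for one-hot targets Y,
# then predict the argmax score.
import numpy as np

def ls_multiclass_fit(X, y, n_classes, lam=1.0):
    Y = np.eye(n_classes)[y]                     # one-hot targets
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def ls_multiclass_predict(X, W):
    return np.argmax(X @ W, axis=1)
```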
|
1310.1953 | The dynamics of correlated novelties | physics.soc-ph cs.SI | One new thing often leads to another. Such correlated novelties are a
familiar part of daily life. They are also thought to be fundamental to the
evolution of biological systems, human society, and technology. By opening new
possibilities, one novelty can pave the way for others in a process that
Kauffman has called "expanding the adjacent possible". The dynamics of
correlated novelties, however, have yet to be quantified empirically or modeled
mathematically. Here we propose a simple mathematical model that mimics the
process of exploring a physical, biological or conceptual space that enlarges
whenever a novelty occurs. The model, a generalization of Polya's urn, predicts
statistical laws for the rate at which novelties happen (analogous to Heaps'
law) and for the probability distribution on the space explored (analogous to
Zipf's law), as well as signatures of the hypothesized process by which one
novelty sets the stage for another. We test these predictions on four data sets
of human activity: the edit events of Wikipedia pages, the emergence of tags in
annotation systems, the sequence of words in texts, and listening to new songs
in online music catalogues. By quantifying the dynamics of correlated
novelties, our results provide a starting point for a deeper understanding of
the ever-expanding adjacent possible and its role in biological, linguistic,
cultural, and technological evolution.
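A minimal simulation sketch of an urn model with triggering in the spirit
described above; the reinforcement parameter `rho` and triggering parameter
`nu` follow our reading of the abstract and should be treated as assumptions.

```python
# Urn model with triggering: drawing a ball adds rho copies of its
# colour (reinforcement); drawing a never-seen colour also adds nu + 1
# balls of brand-new colours (expanding the adjacent possible).
import random

def urn_with_triggering(steps=10000, rho=4, nu=3, seed=0):
    random.seed(seed)
    urn = list(range(nu + 1))            # initial adjacent possible
    next_color, seen = nu + 1, set()
    distinct = []                        # D(t): novelties up to time t
    for _ in range(steps):
        c = random.choice(urn)
        urn += [c] * rho
        if c not in seen:                # a novelty occurs
            seen.add(c)
            urn += list(range(next_color, next_color + nu + 1))
            next_color += nu + 1
        distinct.append(len(seen))
    return distinct                      # expect Heaps'-law-like growth
```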
|
1310.1964 | Named entity recognition using conditional random fields with non-local
relational constraints | cs.CL | We begin by introducing the Computer Science branch of Natural Language
Processing, then narrow our attention to its subbranch of Information
Extraction, and particularly to Named Entity Recognition, briefly discussing
its main methodological approaches. This is followed by an introduction to
state-of-the-art Conditional Random Fields in the form of linear chains.
Subsequently, the
idea of constrained inference as a way to model long-distance relationships in
a text is presented, based on an Integer Linear Programming representation of
the problem. Adding such relationships to the problem as automatically inferred
logical formulas, translatable into linear conditions, we propose to solve the
resulting more complex problem with the aid of Lagrangian relaxation, of which
some technical details are explained. Lastly, we give some experimental
results.
|
1310.1970 | The Classical-Quantum Multiple Access Channel with Conferencing Encoders
and with Common Messages | quant-ph cs.IT math-ph math.IT math.MP | We prove coding theorems for two scenarios of cooperating encoders for the
multiple access channel with two classical inputs and one quantum output. In
the first scenario (ccq-MAC with common messages), the two senders each have
their private messages, but would also like to transmit common messages. In the
second scenario (ccq-MAC with conferencing encoders), each sender has its own
set of messages, but they are allowed to use a limited amount of noiseless
classical communication amongst each other prior to encoding their messages.
This conferencing protocol may depend on each individual message they intend to
send. The two scenarios are related to each other not only in spirit - the
existence of near-optimal codes for the ccq-MAC with common messages is used
for proving the existence of near-optimal codes for the ccq-MAC with
conferencing encoders.
|
1310.1975 | ARKref: a rule-based coreference resolution system | cs.CL | ARKref is a tool for noun phrase coreference. It is a deterministic,
rule-based system that uses syntactic information from a constituent parser,
and semantic information from an entity recognition component. Its architecture
is based on the work of Haghighi and Klein (2009). ARKref was originally
written in 2009. At the time of writing, the last released version was in March
2011. This document describes that version, which is open-source and publicly
available at: http://www.ark.cs.cmu.edu/ARKref
|
1310.1976 | Feature Selection Strategies for Classifying High Dimensional
Astronomical Data Sets | astro-ph.IM cs.CV | The amount of collected data in many scientific fields is increasing, all of
them requiring a common task: extracting knowledge from massive,
multi-parametric data sets as rapidly and efficiently as possible. This is
especially true in astronomy, where synoptic sky surveys are enabling new
research frontiers in time-domain astronomy and posing several new object
classification challenges
in multi-dimensional spaces; given the high number of parameters available for
each object, feature selection is quickly becoming a crucial task in analyzing
astronomical data sets. Using data sets extracted from the ongoing Catalina
Real-Time Transient Surveys (CRTS) and the Kepler Mission we illustrate a
variety of feature selection strategies used to identify the subsets that give
the most information and the results achieved applying these techniques to
three major astronomical problems.
|
1310.2001 | Overflow Probability of Variable-length Codes with Codeword Cost | cs.IT math.IT | Lossless variable-length source coding with codeword cost is considered for
general sources. The problem setting, in which unequal costs are imposed on
code symbols, is called variable-length coding with codeword cost. In this
problem, the infimum of the average codeword cost has been determined for
general sources. On the other hand, the overflow probability, defined as the
probability of the codeword cost exceeding a threshold, has not been
considered yet. In this paper, we determine the infimum of achievable
thresholds in the
first-order sense and the second-order sense for general sources and compute it
for some special sources such as i.i.d. sources and mixed sources. A
relationship between the overflow probability of variable-length coding and the
error probability of fixed-length coding is also revealed. Our analysis is
based on the information-spectrum methods.
|
1310.2026 | Low-Complexity Interactive Algorithms for Synchronization from
Deletions, Insertions, and Substitutions | cs.IT cs.DS math.IT | Consider two remote nodes having binary sequences $X$ and $Y$, respectively.
$Y$ is an edited version of ${X}$, where the editing involves random deletions,
insertions, and substitutions, possibly in bursts. The goal is for the node
with $Y$ to reconstruct $X$ with minimal exchange of information over a
noiseless link. The communication is measured in terms of both the total number
of bits exchanged and the number of interactive rounds of communication.
This paper focuses on the setting where the number of edits is
$o(\tfrac{n}{\log n})$, where $n$ is the length of $X$. We first consider the
case where the edits are a mixture of insertions and deletions (indels), and
propose an interactive synchronization algorithm with near-optimal
communication rate and average computational complexity of $O(n)$ arithmetic
operations. The algorithm uses interaction to efficiently split the source
sequence into substrings containing exactly one deletion or insertion. Each of
these substrings is then synchronized using an optimal one-way synchronization
code based on the single-deletion correcting channel codes of Varshamov and
Tenengolts (VT codes).
We then build on this synchronization algorithm in three different ways.
First, it is modified to work with a single round of interaction. The reduction
in the number of rounds comes at the expense of higher communication, which is
quantified. Next, we present an extension to the practically important case
where the insertions and deletions may occur in (potentially large) bursts.
Finally, we show how to synchronize the sources to within a target Hamming
distance. This feature can be used to differentiate between substitution and
indel edits. In addition to theoretical performance bounds, we provide several
validating simulation results for the proposed algorithms.
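A toy sketch of the one-way primitive the algorithm leans on: the VT syndrome
of a binary string, and single-deletion correction by searching for the
unique reinsertion consistent with that syndrome (real VT decoding runs in
linear time; brute force is used here for clarity).

```python
# Varshamov-Tenengolts (VT) syndrome and toy single-deletion correction.
# All strings with the same syndrome form a code whose deletion balls are
# disjoint, so the matching reinsertion below is unique.
def vt_syndrome(x):
    n = len(x)
    return sum(i * b for i, b in enumerate(x, start=1)) % (n + 1)

def correct_single_deletion(y, syndrome, n):
    for pos in range(n):                 # brute-force reinsertion
        for bit in (0, 1):
            cand = y[:pos] + [bit] + y[pos:]
            if vt_syndrome(cand) == syndrome:
                return cand
    return None

x = [1, 0, 1, 1, 0, 0, 1]
s = vt_syndrome(x)                       # sent once, ahead of time
y = x[:2] + x[3:]                        # channel deletes one symbol
assert correct_single_deletion(y, s, len(x)) == x
```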
|
1310.2028 | Codebook-Based Opportunistic Interference Alignment | cs.IT math.IT | Opportunistic interference alignment (OIA) asymptotically achieves the
optimal degrees-of-freedom (DoF) in interfering multiple-access channels
(IMACs) in a distributed fashion, as a certain user scaling condition is
satisfied. For the multiple-input multiple-output IMAC, it was shown that the
singular value decomposition (SVD)-based beamforming at the users fundamentally
reduces the user scaling condition required to achieve any target DoF compared
to that for the single-input multiple-output IMAC. In this paper, we tackle
two practical challenges of the existing SVD-based OIA: 1) the need for full
feedforward of the selected users' beamforming weight vectors and 2) the low
rate achieved with the existing zero-forcing (ZF) receiver. We first propose a
codebook-based OIA, in which the weight vectors are chosen from a pre-defined
codebook with a finite size so that information of the weight vectors can be
sent to the serving BS with limited feedforward. We derive the codebook size
required to achieve the same user scaling condition as the SVD-based OIA case
for both Grassmannian and random codebooks. Surprisingly, it is shown that the
derived codebook size is the same for the two considered codebook approaches.
Second, we take into account an enhanced receiver at the base stations (BSs) in
pursuit of improving the achievable rate based on the ZF receiver. Assuming no
collaboration between the BSs, the interfering links between a BS and the
selected users in neighboring cells are difficult to be acquired at the
belonging BS. We propose the use of a simple minimum Euclidean distance
receiver operating with no information of the interfering links. With the help
of the OIA, we show that this new receiver asymptotically achieves the channel
capacity as the number of users increases.
|
1310.2037 | Coordinated Beamforming for Energy Efficient Transmission in Multicell
Multiuser Systems | cs.IT math.IT | In this paper we study energy efficient joint power allocation and
beamforming for coordinated multicell multiuser downlink systems. The
considered optimization problem is in a non-convex fractional form and hard to
tackle. We propose to first transform the original problem into an equivalent
optimization problem in a parametric subtractive form, by which we reach its
solution through a two-layer optimization scheme. The outer layer involves
only a one-dimensional search for the energy efficiency parameter, which can
be addressed using bisection; the key issue lies in the inner layer, where a
non-fractional sub-problem needs to be tackled. By exploiting the relationship
between the user rate and the mean square error, we then develop an iterative
algorithm to solve it. The convergence of this algorithm is proved and the
solution is further derived in closed-form. Our analysis also shows that the
proposed algorithm can be implemented in parallel with reasonable complexity.
Numerical results illustrate that our algorithm has a fast convergence and
achieves near-optimal energy efficiency. It is also observed that at the low
transmit power region, our solution almost achieves the optimal sum rate and
the optimal energy efficiency simultaneously; while at the middle-high transmit
power region, a certain sum rate loss is suffered in order to guarantee the
energy efficiency.
|
1310.2045 | A de Bruijn identity for symmetric stable laws | cs.IT math.IT math.PR | We show how some attractive information-theoretic properties of Gaussians
pass over to more general families of stable densities. We define a new score
function for symmetric stable laws, and use it to give a stable version of the
heat equation. Using this, we derive a version of the de Bruijn identity,
allowing us to write the derivative of relative entropy as an inner product of
score functions. We discuss maximum entropy properties of symmetric stable
densities.
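For reference, the classical Gaussian case that the paper generalises can be
stated as follows (a standard identity, not the paper's stable-law version):

```latex
% Classical de Bruijn identity: perturb X by an independent standard
% Gaussian Z and differentiate the differential entropy in t.
\frac{\mathrm{d}}{\mathrm{d}t}\, h\!\left(X + \sqrt{t}\,Z\right)
  = \frac{1}{2}\, J\!\left(X + \sqrt{t}\,Z\right),
\qquad
J(Y) = \mathbb{E}\!\left[\rho_Y(Y)^{2}\right],\quad
\rho_Y(y) = \frac{\partial}{\partial y}\,\log f_Y(y),
```

where J is the Fisher information and rho_Y the score function; the paper's
contribution is a new score function making an analogue of this identity hold
when Z is symmetric stable.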
|
1310.2049 | Fast Multi-Instance Multi-Label Learning | cs.LG | In many real-world tasks, particularly those involving data objects with
complicated semantics such as images and texts, one object can be represented
by multiple instances and simultaneously be associated with multiple labels.
Such tasks can be formulated as multi-instance multi-label learning (MIML)
problems, and have been extensively studied during the past few years. Existing
MIML approaches have been found useful in many applications; however, most of
them can only handle moderate-sized data. To efficiently handle large data
sets, in this paper we propose the MIMLfast approach, which first constructs a
low-dimensional subspace shared by all labels, and then trains label specific
linear models to optimize approximated ranking loss via stochastic gradient
descent. Although the MIML problem is complicated, MIMLfast is able to achieve
excellent performance by exploiting label relations with shared space and
discovering sub-concepts for complicated labels. Experiments show that the
performance of MIMLfast is highly competitive to state-of-the-art techniques,
whereas its time cost is much less; particularly, on a data set with 20K bags
and 180K instances, MIMLfast is more than 100 times faster than existing MIML
approaches. On a larger data set where none of existing approaches can return
results in 24 hours, MIMLfast takes only 12 minutes. Moreover, our approach is
able to identify the most representative instance for each label, thus
providing a chance to understand the relation between input patterns and output
label semantics.
|
1310.2050 | A State Of the Art Report on Research in Multiple RGB-D sensor Setups | cs.CV | That the Microsoft Kinect, an RGB-D sensor, transformed the gaming and end
consumer sector was anticipated by its developers. That it also made an impact
on rigorous computer vision research has probably been a surprise to the whole
community. Shortly before the commercial deployment of its successor, the
Kinect One, the research literature is filling with reviews and
state-of-the-art papers summarizing the development over the past three years.
This particular report describes significant research projects that have built
on sensor setups including two or more RGB-D sensors in one scene.
|
1310.2051 | Distributed Space-Time Coding for Full-Duplex Asynchronous Cooperative
Communications | cs.IT math.IT | In this paper, we propose two distributed linear convolutional space-time
coding (DLC-STC) schemes for full-duplex (FD) asynchronous cooperative
communications. The DLC-STC Scheme 1 is for the case of the complete loop
channel cancellation, which achieves the full asynchronous cooperative
diversity. The DLC-STC Scheme 2 is for the case of the partial loop channel
cancellation and amplification, where some loop signals are used for
self-coding instead of being treated as interference to be cancelled directly. We
show this scheme can achieve full asynchronous cooperative diversity. We then
evaluate the performance of the two schemes when loop channel information is
not accurate and present an amplifying factor control method for the DLC-STC
Scheme 2 to improve its performance with inaccurate loop channel information.
Simulation results show that the DLC-STC Scheme 1 outperforms the DLC-STC
Scheme 2 and the delay diversity scheme if perfect or high quality loop channel
information is available at the relay, while the DLC-STC Scheme 2 achieves
better performance if the loop channel information is imperfect.
|
1310.2053 | The role of RGB-D benchmark datasets: an overview | cs.CV | The advent of the Microsoft Kinect three years ago stimulated not only the
computer vision community to develop new algorithms and setups for tackling
well-known problems, but also sparked the launch of several new benchmark
datasets against which future algorithms can be compared. This review of the
literature and industry developments concludes that the current RGB-D benchmark
datasets can be useful to determine the accuracy of a variety of applications
of a single or multiple RGB-D sensors.
|
1310.2055 | Distributed Linear Convolutional Space-Time Coding for Two-Relay
Full-Duplex Asynchronous Cooperative Networks | cs.IT math.IT | In this paper, a two-relay full-duplex asynchronous cooperative network with
the amplify-and-forward (AF) protocol is considered. We propose two distributed
space-time coding schemes for the cases with and without cross-talks,
respectively. In the first case, each relay can receive the signal sent by the
other through the cross-talk link. We first study the feasibility of cross-talk
cancellation in this network and show that the cross-talk interference cannot
be removed effectively. For this reason, we design space-time codes by utilizing the
cross-talk signals instead of removing them. In the other case, the self-coding
is realized individually through the loop channel at each relay node and the
signals from the two relay nodes form a space-time code. The achievable
cooperative diversity of both cases is investigated and the conditions to
achieve full cooperative diversity are presented. Simulation results verify the
theoretical analysis.
|
1310.2059 | Distributed Coordinate Descent Method for Learning with Big Data | stat.ML cs.DC cs.LG math.OC | In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method
for solving loss minimization problems with big data. We initially partition
the coordinates (features) and assign each partition to a different node of a
cluster. At every iteration, each node picks a random subset of the coordinates
from those it owns, independently from the other computers, and in parallel
computes and applies updates to the selected coordinates based on a simple
closed-form formula. We give bounds on the number of iterations sufficient to
approximately solve the problem with high probability, and show how it depends
on the data and on the partitioning. We perform numerical experiments with a
LASSO instance described by a 3TB matrix.
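A single-machine simulation sketch of the partitioned update pattern (an
assumption for illustration, not the authors' distributed code), on a
ridge-regularised least-squares objective; the paper's ESO step-size
parameter is simplified here to plain 1/L_j steps.

```python
# Hydra-style simulation: partition coordinates across "nodes"; each
# node updates a random subset of its own coordinates with a closed-form
# step on f(w) = 0.5*||Aw - b||^2 + 0.5*lam*||w||^2.
import numpy as np

def hydra_sim(A, b, n_nodes=4, tau=2, n_iters=500, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    parts = np.array_split(rng.permutation(n), n_nodes)
    L = (A ** 2).sum(axis=0) + lam   # coordinate-wise Lipschitz constants
    w = np.zeros(n)
    r = A @ w - b                    # residual, kept up to date
    for _ in range(n_iters):
        for part in parts:           # in Hydra these run in parallel
            coords = rng.choice(part, size=min(tau, len(part)),
                                replace=False)
            for j in coords:
                g = A[:, j] @ r + lam * w[j]
                step = -g / L[j]
                w[j] += step
                r += step * A[:, j]
    return w
```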
|
1310.2063 | Active causation and the origin of meaning | q-bio.PE cs.NE nlin.AO q-bio.NC | Purpose and meaning are necessary concepts for understanding mind and
culture, but appear to be absent from the physical world and are not part of
the explanatory framework of the natural sciences. Understanding how meaning
(in the broad sense of the term) could arise from a physical world has proven
to be a tough problem. The basic scheme of Darwinian evolution produces
adaptations that only represent apparent ("as if") goals and meaning. Here I
use evolutionary models to show that a slight, evolvable extension of the basic
scheme is sufficient to produce genuine goals. The extension, targeted
modulation of mutation rate, is known to be generally present in biological
cells, and gives rise to two phenomena that are absent from the non-living
world: intrinsic meaning and the ability to initiate goal-directed chains of
causation (active causation). The extended scheme accomplishes this by
utilizing randomness modulated by a feedback loop that is itself regulated by
evolutionary pressure. The mechanism can be extended to behavioural variability
as well, and thus shows how freedom of behaviour is possible. A further
extension to communication suggests that the active exchange of intrinsic
meaning between organisms may be the origin of consciousness, which in
combination with active causation can provide a physical basis for the
phenomenon of free will.
|
1310.2066 | A Simplified Approach for Quality Management in Data Warehouse | cs.DB cs.CY | Data warehousing is continuously gaining importance as organizations are
realizing the benefits of decision-oriented databases. However, the stumbling
block to this rapid development is data quality issues arising at various
stages of data warehousing. Quality can be defined as a measure of excellence
or a state free from defects. Users appreciate quality products, and the
available literature suggests that many organizations have significant data
quality problems with substantial social and economic impacts. A
metadata-based quality system is introduced to manage the quality of data in a
data warehouse. The approach is used to analyze the quality of a data
warehouse system by checking the expected values of quality parameters against
the actual values. The proposed approach is
supported with a metadata framework that can store additional information to
analyze the quality parameters, whenever required.
|
1310.2071 | Predicting Students' Performance Using ID3 And C4.5 Classification
Algorithms | cs.CY cs.LG | An educational institution needs to have an approximate prior knowledge of
enrolled students to predict their performance in future academics. This helps
them to identify promising students and also provides them an opportunity to
pay attention to and improve those who would probably get lower grades. As a
solution, we have developed a system which can predict the performance of
students from their previous performances using concepts of data mining
techniques under Classification. We have analyzed the data set containing
information about students, such as gender, marks scored in the board
examinations of classes X and XII, marks and rank in entrance examinations and
results in first year of the previous batch of students. By applying the ID3
(Iterative Dichotomiser 3) and C4.5 classification algorithms on this data, we
have predicted the general and individual performance of freshly admitted
students in future examinations.
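A minimal sketch using scikit-learn's entropy-criterion decision tree as a
stand-in (scikit-learn implements CART with an information-gain-style
splitting rule rather than ID3/C4.5 proper, and the features below are
hypothetical):

```python
# Stand-in for the ID3/C4.5 workflow: fit an entropy-based decision tree
# on student records and predict performance for new admissions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# hypothetical columns: gender, class-X marks, class-XII marks, entrance rank
X = rng.random((200, 4))
y = (0.5 * X[:, 1] + 0.5 * X[:, 2] > 0.5).astype(int)  # synthetic pass/fail

clf = DecisionTreeClassifier(criterion="entropy", max_depth=4).fit(X, y)
print(clf.predict(X[:5]))          # predicted outcomes for five students
```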
|
1310.2079 | Mining The Relationship Between Demographic Variables And Brand
Associations | cs.CY cs.DB | This research aims to mine the relationship between demographic variables and
brand associations, and study the relative importance of these variables. The
study is conducted on fast-food restaurant chains in Jordan. The results rank
and evaluate the demographic variables in relation to the brand associations
for the selected sample. Discovering brand associations according to
demographic variables reveals many facts and linkages in the context of
Jordanian culture. Suggestions are given accordingly for marketers to benefit
from in building their strategies and directing their decisions. Also, the
data mining technique used in this study reflects a new trend for studying
and analyzing
marketing samples.
|
1310.2085 | A Robust Variational Model for Positive Image Deconvolution | cs.CV | In this paper, an iterative method for robust deconvolution with positivity
constraints is discussed. It is based on the known variational interpretation
of the Richardson-Lucy iterative deconvolution as fixed-point iteration for the
minimisation of an information divergence functional under a multiplicative
perturbation model. The asymmetric penaliser function involved in this
functional is then modified into a robust penaliser, and complemented with a
regulariser. The resulting functional gives rise to a fixed point iteration
that we call robust and regularised Richardson-Lucy deconvolution. It achieves
an image restoration quality comparable to state-of-the-art robust variational
deconvolution with a computational efficiency similar to that of the original
Richardson-Lucy method. Experiments on synthetic and real-world image data
demonstrate the performance of the proposed method.
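For context, a minimal sketch of the classical Richardson-Lucy fixed-point
iteration that the paper's robust, regularised variant builds on (the robust
penaliser and the regulariser are omitted here):

```python
# Classical Richardson-Lucy deconvolution: multiplicative fixed-point
# updates that preserve positivity of the estimate by construction.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iters=30, eps=1e-12):
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    psf_adj = psf[::-1, ::-1]            # adjoint (flipped) kernel
    for _ in range(n_iters):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_adj, mode="same")
    return estimate
```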
|
1310.2086 | An Iterative Method Applied to Correct the Actual Compressor Performance
to the Equivalent Performance under the Specified Reference Conditions | cs.SY | This paper proposes a correction method, which corrects the actual compressor
performance in real operating conditions to the equivalent performance under
specified reference conditions. The purpose is to make fair comparisons of
actual performance against design performance or reference maps under the same
operating conditions. Then the abnormal operating conditions or early failure
indications can be identified through condition monitoring, which helps to
avoid mandatory shutdown and reduces maintenance costs. The corrections are
based on an iterative scheme, which simultaneously corrects the main performance
parameters known as the polytropic head, the gas power, and the polytropic
efficiency. The excellent performance of the method is demonstrated by
performing the corrections over real industrial measurements.
|
1310.2089 | Double four-bar crank-slider mechanism dynamic balancing by
meta-heuristic algorithms | cs.AI | In this paper, a new method for dynamic balancing of double four-bar crank
slider mechanism by meta-heuristic-based optimization algorithms is proposed.
For this purpose, a proper objective function which is necessary for balancing
of this mechanism and corresponding constraints has been obtained by dynamic
modeling of the mechanism. Then PSO, ABC, BGA and HGAPSO algorithms have been
applied for minimizing the defined cost function in optimization step. The
optimization results have been studied completely by extracting the cost
function, fitness, convergence speed and runtime values of applied algorithms.
It has been shown that PSO and ABC are more efficient than BGA and HGAPSO in
terms of convergence speed and result quality. Also, a laboratory scale
experimental double four-bar crank-slider mechanism was constructed for validating
the proposed balancing method practically.
|
1310.2098 | A short note on the axiomatic requirements of uncertainty measure | cs.IT cs.AI math.IT | In this note, we argue that the axiomatic range requirement on the measure
of aggregated total uncertainty (ATU) in Dempster-Shafer theory is not
reasonable.
|
1310.2121 | Dynamics and termination cost of spatially coupled mean-field models | cond-mat.stat-mech cs.IT math.IT | This work is motivated by recent progress in information theory and signal
processing where the so-called `spatially coupled' design of systems leads to
considerably better performance. We address relevant open questions about
spatially coupled systems through the study of a simple Ising model. In
particular, we consider a chain of Curie-Weiss models that are coupled by
interactions up to a certain range. Indeed, it is well known that the pure
(uncoupled) Curie-Weiss model undergoes a first order phase transition driven
by the magnetic field, and furthermore, in the spinodal region such systems are
unable to reach equilibrium in sub-exponential time if initialized in the
metastable state. By contrast, the spatially coupled system is, instead, able
to reach the equilibrium even when initialized to the metastable state. The
equilibrium phase propagates along the chain in the form of a travelling wave.
Here we study the speed of the wave-front and the so-called `termination
cost'--- \textit{i.e.}, the conditions necessary for the propagation to occur.
We reach several interesting conclusions about optimization of the speed and
the cost.
|
1310.2125 | Retrieval of Experiments with Sequential Dirichlet Process Mixtures in
Model Space | stat.ML cs.IR stat.AP | We address the problem of retrieving relevant experiments given a query
experiment, motivated by the public databases of datasets in molecular biology
and other experimental sciences, and the need of scientists to relate to
earlier work on the level of actual measurement data. Since experiments are
inherently noisy and databases ever accumulating, we argue that a retrieval
engine should possess two particular characteristics. First, it should compare
models learnt from the experiments rather than the raw measurements themselves:
this allows incorporating experiment-specific prior knowledge to suppress noise
effects and focus on what is important. Second, it should be updated
sequentially from newly published experiments, without explicitly storing
either the measurements or the models, which is critical for saving storage
space and protecting data privacy: this promotes lifelong learning. We
formulate the retrieval as a ``supermodelling'' problem, of sequentially
learning a model of the set of posterior distributions, represented as sets of
MCMC samples, and suggest the use of Particle-Learning-based sequential
Dirichlet process mixture (DPM) for this purpose. The relevance measure for
retrieval is derived from the supermodel through the mixture representation. We
demonstrate the performance of the proposed retrieval method on simulated data
and molecular biological experiments.
|
1310.2127 | BloSEn: Blog Search Engine Based On Post Concept Clustering | cs.IR | This paper focuses on building a blog search engine that is not limited to
keyword search but includes extended search capabilities. It also incorporates
blog-post concept clustering, based on the category extracted from semantic
analysis of the blog post content. The proposed approach is titled "BloSen
(Blog Search Engine)". It involves extracting the posts from blogs and parsing
them to extract the blog elements, which are stored as fields in a document
format. An inverted index is built on the fields of the documents. Search is
performed on the index, and the requested query is processed against the
documents built from the blog posts. It currently focuses on Blogger- and
Wordpress-hosted blogs, since these two hosting services are the most popular
in the blogosphere. The proposed BloSen model is evaluated with a prototype
implementation, and the experimental results, with a cumulative user-relevance
metric value of 95.44%, confirm the efficiency of the proposed model.
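A minimal sketch of the indexing step described above: posts become field
documents, and a term-to-document map answers keyword queries (the documents
and fields below are invented for illustration).

```python
# Build an inverted index over the fields of blog-post documents and
# answer a conjunctive keyword query with a posting-list intersection.
from collections import defaultdict

docs = {
    0: {"title": "rainy day recipes", "body": "soup and fresh bread"},
    1: {"title": "travel notes", "body": "bread markets of lisbon"},
}

index = defaultdict(set)
for doc_id, fields in docs.items():
    for text in fields.values():
        for term in text.lower().split():
            index[term].add(doc_id)

def search(query):
    postings = [index.get(t, set()) for t in query.lower().split()]
    return sorted(set.intersection(*postings)) if postings else []

print(search("bread"))   # -> [0, 1]
```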
|
1310.2155 | Lower Bounds for Quantum Parameter Estimation | quant-ph cs.IT math-ph math.IT math.MP | The laws of quantum mechanics place fundamental limits on the accuracy of
measurements and therefore on the estimation of unknown parameters of a quantum
system. In this work, we prove lower bounds on the size of confidence regions
reported by any region estimator for a given ensemble of probe states and
probability of success. Our bounds are derived from a previously unnoticed
connection between the size of confidence regions and the error probabilities
of a corresponding binary hypothesis test. In group-covariant scenarios, we
find that there is an ultimate bound for any estimation scheme which depends
only on the representation-theoretic data of the probe system, and we evaluate
its asymptotics in the limit of many systems, establishing a general
"Heisenberg limit" for region estimation. We apply our results to several
examples, in particular to phase estimation, where our bounds allow us to
recover the well-known Heisenberg and shot-noise scaling.
|
1310.2169 | Efficient local behavioral change strategies to reduce the spread of
epidemics in networks | physics.soc-ph cs.SI | It has recently become established that the spread of infectious diseases
between humans is affected not only by the pathogen itself but also by changes
in behavior as the population becomes aware of the epidemic; for example,
social distancing. It is also well known that community structure (the
existence of relatively densely connected groups of vertices) in contact
networks influences the spread of disease. We propose a set of local strategies
for social distancing, based on community structure, that can be employed in
the event of an epidemic to reduce the epidemic size. Unlike most social
distancing methods, ours do not require individuals to know the disease state
(infected or susceptible, etc.) of others, and we do not make the unrealistic
assumption that the structure of the entire contact network is known. Instead,
the recommended behavior change is based only on an individual's local view of
the network. Each individual avoids contact with a fraction of his/her
contacts, using knowledge of his/her local network to decide which contacts
should be avoided. If the behavior change occurs only when an individual
becomes ill or aware of the disease, these strategies can substantially reduce
epidemic size with a relatively small cost, measured by the number of contacts
avoided.
|
1310.2182 | New Approach for Prediction Pre-cancer via Detecting Mutated in Tumor
Protein P53 | cs.CE q-bio.OT | Tumor protein P53 is believed to be involved in over half of human cancer
cases. The prediction of malignancies plays an essential role not only in the
early detection of cancer, but also in discovering effective prevention and
treatment. Until now, there has been no approach able to predict the mutations
in tumor protein P53 that cause a high proportion of human cancers, such as
breast, blood, skin, liver, lung, and bladder cancer. This research proposes a
new approach for predicting pre-cancer by detecting malignant mutations in
tumor protein P53, using bioinformatics tools such as FASTA, BLAST, and
CLUSTALW together with worldwide TP53 databases. Implementing and applying
this approach shows effective results when more specific parameters/features
are used to extract the prediction: increasing the number of filters applied
to the results obtained from the database yields a more specific diagnosis and
classification. In addition, detecting pre-cancer by predicting the mutated
tumor protein P53 can reduce a person's future cancer risk by avoiding
exposure to toxins and radiation, or by self-monitoring at older ages through
changes in food, environment, and even pace of living. This new approach to
pre-cancer prediction will also help if any treatment can be given to target
the mutated tumor protein P53. Index Terms: Normal Homology TP53 gene, Tumor
Protein P53, Oncogene Labs, GC and AT content, FASTA, BLAST, ClustalW.
|
1310.2206 | Group lifting structures for multirate filter banks I: Uniqueness of
lifting factorizations | cs.IT math.IT | Group lifting structures are introduced to provide an algebraic framework for
studying lifting factorizations of two-channel perfect reconstruction
finite-impulse-response (FIR) filter banks. The lifting factorizations
generated by a group lifting structure are characterized by Abelian groups of
lower and upper triangular lifting matrices, an Abelian group of unimodular
gain scaling matrices, and a set of base filter banks. Examples of group
lifting structures are given for linear phase lifting factorizations of the two
nontrivial classes of two-channel linear phase FIR filter banks, the whole- and
half-sample symmetric classes, including both the reversible and irreversible
cases. This covers the lifting specifications for whole-sample symmetric filter
banks in Parts 1 and 2 of the ISO/IEC JPEG 2000 still image coding standard.
The theory is used to address the uniqueness of lifting factorizations. With no
constraints on the lifting process, it is shown that lifting factorizations are
highly nonunique. When certain hypotheses developed in the paper are satisfied,
however, lifting factorizations generated by a group lifting structure are
shown to be unique. A companion paper applies the uniqueness results proven in
this paper to the linear phase group lifting structures for whole- and
half-sample symmetric filter banks.
|
1310.2208 | Group lifting structures for multirate filter banks II: Linear phase
filter banks | cs.IT math.IT | The theory of group lifting structures is applied to linear phase lifting
factorizations for the two nontrivial classes of two-channel linear phase
perfect reconstruction filter banks, the whole- and half-sample symmetric
classes. Group lifting structures defined for the reversible and irreversible
classes of whole- and half-sample symmetric filter banks are shown to satisfy
the hypotheses of the uniqueness theorem for group lifting structures. It
follows that linear phase group lifting factorizations of whole- and
half-sample symmetric filter banks are therefore independent of the
factorization methods used to construct them. These results cover the
specification of whole-sample symmetric filter banks in the ISO/IEC JPEG 2000
image coding standard.
|
1310.2217 | Lower Bounds on the Communication Complexity of Binary Local Quantum
Measurement Simulation | quant-ph cs.IT math.IT | We consider the problem of the classical simulation of quantum measurements
in the scenario of communication complexity. Regev and Toner (2007) have
presented a 2-bit protocol which simulates one particular correlation function
arising from binary projective quantum measurements on an arbitrary state, and in
particular does not preserve local averages. The question of simulating other
correlation functions using a protocol with bounded communication, or
preserving local averages, has been posed as an open one. Within this paper we
resolve it in the negative: we show that any such protocol must have unbounded
communication for some subset of executions. In particular, we show that for
any protocol, there exist inputs for which the random variable describing the
number of communicated bits has arbitrarily large variance.
|
1310.2267 | A Partial Derandomization of PhaseLift using Spherical Designs | cs.IT math.IT quant-ph | The problem of retrieving phase information from amplitude measurements alone
has appeared in many scientific disciplines over the last century. PhaseLift is
a recently introduced algorithm for phase recovery that is computationally
efficient, numerically stable, and comes with rigorous performance guarantees.
PhaseLift is optimal in the sense that the number of amplitude measurements
required for phase reconstruction scales linearly with the dimension of the
signal. However, it specifically demands Gaussian random measurement vectors -
a limitation that restricts practical utility and obscures the specific
properties of measurement ensembles that enable phase retrieval. Here we
present a partial derandomization of PhaseLift that only requires sampling from
certain polynomial size vector configurations, called t-designs. Such
configurations have been studied in algebraic combinatorics, coding theory, and
quantum information. We prove reconstruction guarantees for a number of
measurements that depends on the degree t of the design. If the degree is
allowed to grow logarithmically with the dimension, the bounds become tight
up to polylog-factors. Beyond the specific case of PhaseLift, this work
highlights the utility of spherical designs for the derandomization of data
recovery schemes.
|
1310.2273 | Semidefinite Programming Based Preconditioning for More Robust
Near-Separable Nonnegative Matrix Factorization | stat.ML cs.LG math.OC | Nonnegative matrix factorization (NMF) under the separability assumption can
provably be solved efficiently, even in the presence of noise, and has been
shown to be a powerful technique in document classification and hyperspectral
unmixing. This problem is referred to as near-separable NMF and requires that
there exists a cone spanned by a small subset of the columns of the input
nonnegative matrix approximately containing all columns. In this paper, we
propose a preconditioning based on semidefinite programming making the input
matrix well-conditioned. This in turn can improve significantly the performance
of near-separable NMF algorithms which is illustrated on the popular successive
projection algorithm (SPA). The new preconditioned SPA is provably more robust
to noise, and outperforms SPA on several synthetic data sets. We also show how
an active-set method allows us to apply the preconditioning to large-scale
real-world hyperspectral images.
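For context, a minimal sketch of the successive projection algorithm (SPA)
that the proposed preconditioning is designed to robustify; the
preconditioning itself (an SDP) is not shown.

```python
# SPA for near-separable NMF: repeatedly pick the column with the
# largest norm as a "pure" column, then project all columns onto the
# orthogonal complement of the chosen one.
import numpy as np

def spa(M, r):
    R = M.astype(float).copy()
    chosen = []
    for _ in range(r):
        j = int(np.argmax((R ** 2).sum(axis=0)))
        chosen.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)          # deflate: project out direction u
    return chosen                        # indices of extreme columns
```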
|
1310.2274 | Accounting for Secondary Uncertainty: Efficient Computation of Portfolio
Risk Measures on Multi and Many Core Architectures | cs.DC cs.CE | Aggregate Risk Analysis is a computationally intensive and a data intensive
problem, thereby making the application of high-performance computing
techniques interesting. In this paper, the design and implementation of a
parallel Aggregate Risk Analysis algorithm on multi-core CPU and many-core GPU
platforms are explored. The efficient computation of key risk measures,
including Probable Maximum Loss (PML) and the Tail Value-at-Risk (TVaR) in the
presence of both primary and secondary uncertainty for a portfolio of property
catastrophe insurance treaties is considered. Primary Uncertainty is the
uncertainty associated with whether a catastrophe event occurs or not in a
simulated year, while Secondary Uncertainty is the uncertainty in the amount of
loss when the event occurs.
A number of statistical algorithms are investigated for computing secondary
uncertainty. Numerous challenges such as loading large data onto hardware with
limited memory and organising it are addressed. The results obtained from
experimental studies are encouraging. Consider, for example, an aggregate risk
analysis involving 800,000 trials, with 1,000 catastrophic events per trial, a
million locations, and a complex contract structure taking into account
secondary uncertainty. The analysis can be performed in just 41 seconds on a
GPU, which is 24x faster than the sequential counterpart on a fast multi-core
CPU. The results indicate that GPUs can be used to efficiently accelerate
aggregate risk analysis even in the presence of secondary uncertainty.
|
1310.2279 | A Mathematical Model, Implementation and Study of a Swarm System | cs.RO | The work reported in this paper is motivated by the development of a
mathematical model for swarm systems based on macroscopic primitives. A pattern
formation and transformation model is proposed. The pattern transformation
model comprises two general methods for pattern transformation, namely a
macroscopic transformation method and a mathematical transformation method. The problem
of transformation is formally expressed and four special cases of
transformation are considered. Simulations to confirm the feasibility of the
proposed models and transformation methods are presented. Comparison between
the two transformation methods is also reported.
|
1310.2289 | Subband coding for large-scale scientific simulation data using JPEG
2000 | cs.IT cs.MM math.IT | The ISO/IEC JPEG 2000 image coding standard is a family of source coding
algorithms targeting high-resolution image communications. JPEG 2000 offers
highly scalable embedded coding features that allow one to interactively zoom
out to reduced resolution thumbnails of enormous data sets or to zoom in on
highly localized regions of interest with very economical communications and
rendering requirements. While intended for fixed-precision input data, the
implementation of the irreversible version of the standard is often done
internally in floating point arithmetic. Moreover, the standard is designed to
support high-bit-depth data. Part 2 of the standard also provides support for
three-dimensional data sets such as multicomponent or volumetric imagery. These
features make JPEG 2000 an appealing candidate for highly scalable
communications coding and visualization of two- and three-dimensional data
produced by scientific simulation software. We present results of initial
experiments applying JPEG 2000 to scientific simulation data produced by the
Parallel Ocean Program (POP) global ocean circulation model, highlighting both
the promise and the many challenges this approach holds for scientific
visualization applications.
|
1310.2290 | Modelling Complexity for Policy: Opportunities and Challenges | cs.MA cs.CY nlin.AO physics.soc-ph | This chapter reviews the purpose and use of models from the field of complex
systems and, in particular, the implications of trying to use models to
understand or make decisions within complex situations, such as policy makers
usually face. A discussion of the different dimensions along which one can
formalise situations, the different purposes for models, and the different kinds
of relationship they can have with the policy-making process is followed by an
examination of the compromises forced by the complexity of the target issues.
Several modelling approaches from complexity science are briefly described,
with notes as to their abilities and limitations. These approaches include
system dynamics, network theory, information theory, cellular automata, and
agent-based modelling. Some examples of policy models are presented and
discussed in the context of the previous analysis. Finally we conclude by
outlining some of the major pitfalls facing those wishing to use such models
for policy evaluation.
|
1310.2291 | Interactive Function Computation with Reconstruction Constraints | cs.IT math.IT | This paper investigates two-terminal interactive function computation with
reconstruction constraints. Each terminal wants to compute a (possibly
different) function of two correlated sources, but can only access one of the
sources directly. In addition to distortion constraints at the terminals, each
terminal is required to estimate the computed function value at the other
terminal in a lossy fashion, leading to the constrained reconstruction
constraint. A special case of constrained reconstruction is the common
reconstruction constraint, in which both terminals agree on the functions
computed with probability one. The terminals exchange information in multiple
rate constrained communication rounds. A characterization of the multi-round
rate-distortion region for the above problem with constrained reconstruction
constraints is provided. To gain more insights and to highlight the value of
interaction and order of communication, the rate-distortion region for
computing various functions of jointly Gaussian sources under common
reconstruction constraints is studied.
|
1310.2296 | Interactive Relay Assisted Source Coding | cs.IT math.IT | This paper investigates a source coding problem in which two terminals
communicating through a relay wish to estimate one another's source within some
distortion constraint. The relay has access to side information that is
correlated with the sources. Two different schemes based on the order of
communication, \emph{distributed source coding/delivery} and \emph{two cascaded
rounds}, are proposed and inner and outer bounds for the resulting
rate-distortion regions are provided. Examples are provided to show that
neither rate-distortion region includes the other one.
|
1310.2298 | SAT-based Preprocessing for MaxSAT (extended version) | cs.AI | State-of-the-art algorithms for industrial instances of the MaxSAT problem rely
on iterative calls to a SAT solver. Preprocessing is crucial for the
acceleration of SAT solving, and the key preprocessing techniques rely on the
application of resolution and subsumption elimination. Additionally,
satisfiability-preserving clause elimination procedures are often used. Since
MaxSAT computation typically involves a large number of SAT calls, we are
interested in whether an input instance to a MaxSAT problem can be preprocessed
up-front, i.e. prior to running the MaxSAT solver, rather than (or, in addition
to) during each iterative SAT solver call. The key requirement in this setting
is that the preprocessing has to be sound, i.e. so that the solution can be
reconstructed correctly and efficiently after the execution of a MaxSAT
algorithm on the preprocessed instance. While, as we demonstrate in this paper,
certain clause elimination procedures are sound for MaxSAT, it is well-known
that this is not the case for resolution and subsumption elimination. In this
paper we show how to adapt these preprocessing techniques to MaxSAT. To achieve
this we recast the MaxSAT problem in a recently introduced labelled-CNF
framework, and show that within the framework the preprocessing techniques can
be applied soundly. Furthermore, we show that MaxSAT algorithms restated in the
framework have a natural implementation on top of an incremental SAT solver. We
evaluate the prototype implementation of a MaxSAT algorithm WMSU1 in this
setting, demonstrate the effectiveness of preprocessing, and show overall
improvement with respect to non-incremental versions of the algorithm on some
classes of problems.
|
1310.2305 | Gain scaling for multirate filter banks | cs.IT math.IT | Eliminating two trivial degrees of freedom corresponding to the lowpass DC
response and the highpass Nyquist response in a two-channel multirate filter
bank seems simple enough. Nonetheless, the ISO/IEC JPEG 2000 image coding
standard manages to make this mundane task look totally mysterious. We reveal
the true meaning behind JPEG 2000's arcane specifications for filter bank
normalization and point out how the seemingly trivial matter of gain scaling
leads to highly nontrivial issues concerning uniqueness of lifting
factorizations.
|
1310.2306 | Robust Adaptive Control for Circadian Dynamics: Poincare Approach to
Backstepping Method | cs.SY | A mathematical model of the circadian dynamics in the form of Van der Pol
equation with an external force as a control is investigated. The combination
of the backstepping method and differential-topological techniques based on
Poincare's ideas is used. A robust model identification adaptive control for
a specific adaptation law is designed.
|
1310.2350 | The Generalized Traveling Salesman Problem solved with Ant Algorithms | cs.AI cs.NE | A well-known NP-hard problem called the Generalized Traveling Salesman
Problem (GTSP) is considered. In GTSP the nodes of a complete undirected graph
are partitioned into clusters. The objective is to find a minimum cost tour
passing through exactly one node from each cluster. An exact exponential time
algorithm and an effective meta-heuristic algorithm for the problem are
presented. The meta-heuristic proposed is a modified Ant Colony System (ACS)
algorithm called Reinforcing Ant Colony System (RACS) which introduces new
correction rules in the ACS algorithm. Computational results are reported for
many standard test problems. The proposed algorithm is competitive with the
other already proposed heuristics for the GTSP in both solution quality and
computational time.
|
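(For orientation, a minimal Python sketch of the standard ACS transition and local-update rules that RACS modifies; this is generic textbook ACS, not the paper's correction rules, and the parameter names q0, beta, rho and tau0 are conventional assumptions.)

    import numpy as np

    def acs_next_node(current, unvisited, tau, eta, q0=0.9, beta=2.0, rng=None):
        # Pseudorandom-proportional rule of Ant Colony System: with
        # probability q0 exploit the best edge, otherwise sample an edge
        # proportionally to pheromone * heuristic^beta.
        rng = rng or np.random.default_rng()
        unvisited = list(unvisited)
        scores = tau[current, unvisited] * eta[current, unvisited] ** beta
        if rng.random() < q0:
            return unvisited[int(np.argmax(scores))]
        return int(rng.choice(unvisited, p=scores / scores.sum()))

    def acs_local_update(tau, i, j, rho=0.1, tau0=1e-4):
        # Local pheromone decay applied as an ant crosses edge (i, j).
        tau[i, j] = tau[j, i] = (1.0 - rho) * tau[i, j] + rho * tau0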
1310.2357 | SurpriseMe: an integrated tool for network community structure
characterization using Surprise maximization | q-bio.MN cs.SI physics.soc-ph | Detecting communities, i.e., densely connected groups, may help to unravel the
underlying relationships among the units present in diverse biological networks
(e.g., interactome, coexpression networks, ecological networks, etc.). We
recently showed that communities can be very precisely characterized by
maximizing Surprise, a global network parameter. Here we present SurpriseMe, a
tool that integrates the outputs of seven of the best algorithms available to
estimate the maximum Surprise value. SurpriseMe also generates distance
matrices that allow one to visualize the relationships among the solutions
generated by the algorithms. We show that the communities present in small and
medium-sized networks, with up to 10,000 nodes, can be easily characterized: on
standard PC hardware, these analyses take less than an hour. Also, four of the
algorithms can quite rapidly analyze networks with up to 100,000 nodes, given
enough memory resources. Because of its performance and simplicity, SurpriseMe
is a reference tool for community structure characterization.
|
1310.2361 | Survey on Modelling Methods Applicable to Gene Regulatory Network | cs.CE | Gene Regulatory Networks (GRNs) play an important role in understanding the
cellular life cycle. A GRN indicates under which environmental conditions genes
of particular interest become over- or under-expressed. Modelling a GRN amounts
to finding the interactive relationships between genes; an interaction can be
positive or negative. For GRN inference, time series data provided by
microarray technology is used. Key factors to consider while constructing a GRN
are scalability, robustness, reliability, and maximizing the detection of true
positive interactions between genes. This paper gives a detailed technical
review of existing methods for building GRNs, along with the scope for future
work.
|
1310.2367 | Handy Annotations within Oracle 10g | cs.DB | This paper describes practical observations made during a database systems
lab. The Oracle 10g DBMS was used in the lab to execute SQL queries covering
many concepts, including Data Definition Language (DDL) commands, Data
Manipulation Language (DML) commands, views, integrity constraints, aggregate
functions, joins, and abstract types. Many problems arose during the lab
sessions; the textbooks and websites consulted to resolve them did not provide
the expected help. Nevertheless, after considerable time spent in the database
lab with Oracle 10g and numerous attempts, the expected output was finally
achieved. This paper presents the annotations that were experimentally verified
in the database lab.
|
1310.2375 | Web Usage Mining: Pattern Discovery and Forecasting | cs.DB cs.IR | Web usage mining is the automatic discovery of patterns in clickstreams and
associated data collected or generated as a result of user interactions with
one or more Web sites. This paper applies web usage mining to our college's log
files to analyze the behavioral patterns and profiles of users interacting
with a Web site. The discovered patterns are represented as clusters of pages
frequently accessed by groups of visitors with common interests. In this paper,
visitor counts and hits are also forecast to predict future access statistics.
|
1310.2381 | MDR Codes: A New Class of RAID-6 Codes with Optimal Rebuilding and
Encoding | cs.IT math.IT | As storage systems grow in size, device failures happen more frequently than
ever before. Given the commodity nature of hard drives employed, a storage
system needs to tolerate a certain number of disk failures while maintaining
data integrity, and to recover lost data with minimal interference to normal
disk I/O operations. RAID-6, which can tolerate up to two disk failures with
the minimum redundancy, is becoming widespread. However, traditional RAID-6
codes suffer from high disk I/O overhead during recovery. In this paper, we
propose a new family of RAID-6 codes, the Minimum Disk I/O Repairable (MDR)
codes, which achieve the optimal disk I/O overhead for single failure
recoveries. Moreover, we show that MDR codes can be encoded with the minimum
number of bit-wise XOR operations. Simulation results show that MDR codes save
about half of the disk read operations of traditional RAID-6 codes, and
thus can reduce the recovery time by up to 40%.
|
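(As background only, a Python sketch of the single-failure XOR rebuild that drives the disk-read cost discussed above; this is plain RAID parity on integer blocks, not the MDR construction.)

    import numpy as np

    def xor_parity(blocks):
        # P parity of a stripe: bytewise XOR of all data blocks
        # (blocks are integer arrays, e.g. dtype=np.uint8).
        p = np.zeros_like(blocks[0])
        for b in blocks:
            p ^= b
        return p

    def rebuild_single(blocks, parity, lost):
        # Recover one lost block: XOR the parity with all survivors.
        # Every surviving disk must be read, which is exactly the I/O
        # overhead that repair-efficient codes such as MDR aim to reduce.
        rec = parity.copy()
        for i, b in enumerate(blocks):
            if i != lost:
                rec ^= b
        return rec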
1310.2385 | Topological Interference Management with Alternating Connectivity: The
Wyner-Type Three User Interference Channel | cs.IT math.IT | Interference management in a three-user interference channel with alternating
connectivity with only topological knowledge at the transmitters is considered.
The network has a Wyner-type channel flavor, i.e., for each connectivity state
the receivers observe at most one interference signal in addition to their
desired signal. Degrees of freedom (DoF) upper bounds and lower bounds are
derived. The lower bounds are obtained from a scheme based on joint encoding
across the alternating states. Given a uniform distribution among the
connectivity states, it is shown that the channel has 2 + 1/9 DoF. This provides
an increase in the DoF as compared to encoding over each state separately,
which achieves 2 DoF only.
|
1310.2396 | A necessary and sufficient condition for two relations to induce the
same definable set family | cs.AI | In Pawlak rough sets, the structure of the definable set families is simple
and clear, but in generalized rough sets, the structure of the definable set
families is a bit more complex. There has been much research work focusing on
this topic. However, a fundamental issue in relation-based rough sets, namely
under what condition two relations induce the same definable set family, has not
been discussed. In this paper, based on the concept of the closure of relations, we
present a necessary and sufficient condition for two relations to induce the
same definable set family.
|
1310.2408 | Improved Bayesian Logistic Supervised Topic Models with Data
Augmentation | cs.LG cs.CL stat.AP stat.ML | Supervised topic models with a logistic likelihood have two issues that
potentially limit their practical use: 1) response variables are usually
over-weighted by document word counts; and 2) existing variational inference
methods make strict mean-field assumptions. We address these issues by: 1)
introducing a regularization constant to better balance the two parts based on
an optimization formulation of Bayesian inference; and 2) developing a simple
Gibbs sampling algorithm by introducing auxiliary Polya-Gamma variables and
collapsing out Dirichlet variables. Our augment-and-collapse sampling algorithm
has analytical forms of each conditional distribution without making any
restricting assumptions and can be easily parallelized. Empirical results
demonstrate significant improvements on prediction performance and time
efficiency.
|
1310.2409 | Discriminative Relational Topic Models | cs.LG cs.IR stat.ML | Many scientific and engineering fields involve analyzing network data. For
document networks, relational topic models (RTMs) provide a probabilistic
generative process to describe both the link structure and document contents,
and they have shown promise on predicting network structures and discovering
latent topic representations. However, existing RTMs have limitations in both
the restricted model expressiveness and incapability of dealing with imbalanced
network data. To expand the scope and improve the inference accuracy of RTMs,
this paper presents three extensions: 1) unlike the common link likelihood with
a diagonal weight matrix that allows same-topic interactions only, we
generalize it to use a full weight matrix that captures all pairwise topic
interactions and is applicable to asymmetric networks; 2) instead of doing
standard Bayesian inference, we perform regularized Bayesian inference
(RegBayes) with a regularization parameter to deal with the imbalanced link
structure issue in common real networks and improve the discriminative ability
of learned latent representations; and 3) instead of doing variational
approximation with strict mean-field assumptions, we present collapsed Gibbs
sampling algorithms for the generalized relational topic models by exploring
data augmentation without making restricting assumptions. Under the generic
RegBayes framework, we carefully investigate two popular discriminative loss
functions, namely, the logistic log-loss and the max-margin hinge loss.
Experimental results on several real network datasets demonstrate the
significance of these extensions on improving the prediction performance, and
the time efficiency can be dramatically improved with a simple fast
approximation method.
|
1310.2410 | Sparse signal recovery by $\ell_q$ minimization under restricted
isometry property | cs.IT math.IT | In the context of compressed sensing, the nonconvex $\ell_q$ minimization
with $0<q<1$ has been studied in recent years. In this paper, by generalizing
the sharp bound for $\ell_1$ minimization of Cai and Zhang, we show that the
condition $\delta_{(s^q+1)k}<\dfrac{1}{\sqrt{s^{q-2}+1}}$ in terms of
\emph{restricted isometry constant (RIC)} can guarantee the exact recovery of
$k$-sparse signals in noiseless case and the stable recovery of approximately
$k$-sparse signals in noisy case by $\ell_q$ minimization. This result is more
general than the sharp bound for $\ell_1$ minimization when the order of RIC is
greater than $2k$ and illustrates the fact that a better approximation to
$\ell_0$ minimization is provided by $\ell_q$ minimization than that provided
by $\ell_1$ minimization.
|
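(A quick consistency check, not taken from the abstract itself: setting q = 1 in the stated condition recovers the sharp Cai-Zhang bound for $\ell_1$ minimization.)

    \delta_{(s+1)k} \;<\; \frac{1}{\sqrt{s^{-1}+1}} \;=\; \sqrt{\frac{s}{s+1}}
      \;=\; \sqrt{\frac{t-1}{t}}, \qquad t = s+1,

which is the known sharp RIC bound $\delta_{tk} < \sqrt{(t-1)/t}$ of Cai and Zhang.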
1310.2418 | Linear Algorithm for Digital Euclidean Connected Skeleton | cs.CV | The skeleton is an essential shape characteristic providing a compact
representation of the studied shape. Its computation on the image grid raises
many issues. Due to the effects of discretization, the required properties of
the skeleton - thinness, homotopy to the shape, reversibility, connectivity -
may become incompatible. However, as regards practical use, the choice of a
specific skeletonization algorithm depends on the application. This makes it
possible to rank the desired properties by order of importance and favor the
most critical ones. Our goal is to build a skeleton dedicated to shape matching
for recognition. The discrete skeleton therefore has to be thin (so that it can
be represented by a graph), robust to noise, reversible (so that the initial
shape can be fully reconstructed), and homotopic to the shape. We propose a
linear-time skeletonization algorithm based on the squared Euclidean distance
map from which we extract the maximal balls and ridges. After a thinning and
pruning process, we obtain the skeleton. The proposed method is finally
compared to fairly recent methods.
|
1310.2431 | Practical Verification of Decision-Making in Agent-Based Autonomous
Systems | cs.LO cs.MA | We present a verification methodology for analysing the decision-making
component in agent-based hybrid systems. Traditionally hybrid automata have
been used to both implement and verify such systems, but hybrid automata based
modelling, programming and verification techniques scale poorly as the
complexity of discrete decision-making increases, making them unattractive in
situations where complex logical reasoning is required. In the programming of
complex systems it has, therefore, become common to separate out logical
decision-making into a separate, discrete, component. However, verification
techniques have failed to keep pace with this development. We are exploring
agent-based logical components and have developed a model checking technique
for such components which can then be composed with a separate analysis of the
continuous part of the hybrid system. Among other things this allows program
model checkers to be used to verify the actual implementation of the
decision-making in hybrid autonomous systems.
|
1310.2435 | Interference Alignment via Message-Passing | cs.IT math.IT | We introduce an iterative solution to the problem of interference alignment
(IA) over MIMO channels based on a message-passing formulation. We propose a
parameterization of the messages that enables the computation of IA precoders
by a min-sum algorithm over continuous variable spaces -- under this
parameterization, suitable approximations of the messages can be computed in
closed-form. We show that the iterative leakage minimization algorithm of
Cadambe et al. is a special case of our message-passing algorithm, obtained for
a particular schedule. Finally, we show that the proposed algorithm compares
favorably to iterative leakage minimization in terms of convergence speed, and
discuss a distributed implementation.
|
1310.2441 | Pioneers of Influence Propagation in Social Networks | cs.SI cs.DM physics.soc-ph | With the growing importance of corporate viral marketing campaigns on online
social networks, the interest in studies of influence propagation through
networks is higher than ever. In a viral marketing campaign, a firm initially
targets a small set of pioneers and hopes that they would influence a sizeable
fraction of the population by diffusion of influence through the network. In
general, any marketing campaign might fail to go viral in the first try. As
such, it would be useful to have some guide to evaluate the effectiveness of
the campaign and judge whether it is worthy of further resources, and in case
the campaign has potential, how to hit upon a good pioneer who can make the
campaign go viral. In this paper, we present a diffusion model developed by
enriching the generalized random graph (a.k.a. configuration model) to provide
insight into these questions. We offer the intuition behind the results on this
model, rigorously proved in Blaszczyszyn & Gaurav(2013), and illustrate them
here by taking examples of random networks having prototypical degree
distributions - Poisson degree distribution, which is commonly used as a kind
of benchmark, and Power Law degree distribution, which is normally used to
approximate the real-world networks. On these networks, the members are assumed
to have varying attitudes towards propagating the information. We analyze three
cases in particular: (1) Bernoulli transmissions, when a member influences
each of its friends with probability p; (2) Node percolation, when a member
influences all its friends with probability p and none with probability 1-p;
(3) Coupon-collector transmissions, when a member randomly selects one of his
friends K times with replacement. We assume that the configuration model is the
closest approximation of a large online social network, when the information
available about the network is very limited. The key insight offered by this
study from a firm's perspective is regarding how to evaluate the effectiveness
of a marketing campaign and do cost-benefit analysis by collecting relevant
statistical data from the pioneers it selects. The campaign evaluation
criterion is informed by the observation that if the parameters of the
underlying network and the campaign effectiveness are such that the campaign
can indeed reach a significant fraction of the population, then the set of good
pioneers also forms a significant fraction of the population. Therefore, in
such a case, the firms can even adopt the naive strategy of repeatedly picking
and targeting some number of pioneers at random from the population. With this
strategy, the probability of them picking a good pioneer will increase
geometrically fast with the number of tries.
|
1310.2451 | M-Power Regularized Least Squares Regression | stat.ML cs.LG math.PR | Regularization is used to find a solution that both fits the data and is
sufficiently smooth, and thereby is very effective for designing and refining
learning algorithms. But the influence of its exponent remains poorly
understood. In particular, it is unclear how the exponent of the reproducing
kernel Hilbert space~(RKHS) regularization term affects the accuracy and the
efficiency of kernel-based learning algorithms. Here we consider regularized
least squares regression (RLSR) with an RKHS regularization raised to the power
of m, where m is a variable real exponent. We design an efficient algorithm for
solving the associated minimization problem, we provide a theoretical analysis
of its stability, and we compare its advantage with respect to computational
complexity, speed of convergence and prediction accuracy to the classical
kernel ridge regression algorithm where the regularization exponent m is fixed
at 2. Our results show that the m-power RLSR problem can be solved efficiently,
and support the suggestion that one can use a regularization term that grows
significantly slower than the standard quadratic growth in the RKHS norm.
|
1310.2456 | Discrete Sparse Signals: Compressed Sensing by Combining OMP and the
Sphere Decoder | cs.IT math.IT | We study the reconstruction of discrete-valued sparse signals from
underdetermined systems of linear equations. On the one hand, classical
compressed sensing (CS) is designed to deal with real-valued sparse signals. On
the other hand, algorithms known from MIMO communications, especially the
sphere decoder (SD), are capable of reconstructing discrete-valued non-sparse
signals from well- or overdetermined systems of linear equations. Hence, a
combination of both approaches is required. We discuss strategies to include
the knowledge of the discrete nature of the signal in the reconstruction
process. For brevity, the exposition is done for combining the orthogonal
matching pursuit (OMP) with the SD; design guidelines are derived. It is shown
that by suitably combining OMP and SD an efficient low-complexity scheme for
the detection of discrete sparse signals is obtained.
|
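(A minimal Python sketch of one way to fold a finite alphabet into an OMP loop, by snapping each least-squares estimate to the nearest alphabet symbol; the actual OMP/SD combination and design guidelines of the paper are not reproduced here, and the per-iteration rounding step is an assumption for illustration.)

    import numpy as np

    def discrete_omp(A, y, alphabet, max_sparsity, tol=1e-9):
        # OMP with a per-iteration projection onto a finite alphabet.
        alphabet = np.asarray(alphabet, dtype=float)
        m, n = A.shape
        x = np.zeros(n)
        support = []
        residual = y.astype(float)
        for _ in range(max_sparsity):
            j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
            if j not in support:
                support.append(j)
            xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            # Snap each coefficient to the nearest discrete symbol.
            xs = alphabet[np.argmin(np.abs(alphabet[None, :] - xs[:, None]), axis=1)]
            x[:] = 0.0
            x[support] = xs
            residual = y - A @ x
            if np.linalg.norm(residual) < tol:
                break
        return x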
1310.2473 | Improved Decoding Algorithms for Reed-Solomon Codes | cs.IT math.IT | In coding theory, Reed-Solomon codes are one of the most well-known and
widely used classes of error-correcting codes. In this thesis we study and
compare two major strategies known for their decoding procedure, the
Peterson-Gorenstein-Zierler (PGZ) and the Berlekamp-Massey (BM) decoder, in
order to improve existing decoding algorithms and propose faster new ones. In
particular we study a modified version of the PGZ decoder, which we will call
the fast Peterson-Gorenstein-Zierler (fPGZ) decoding algorithm. This
improvement was presented in 1997 by exploiting the Hankel structure of the
syndrome matrix. In this thesis we show that the fPGZ decoding algorithm can be
seen as a particular case of the BM one. Indeed we prove that the intermediate
outcomes obtained in the implementation of fPGZ are a subset of those of the BM
decoding algorithm. In this way, we also uncover the existing relationship
between the leading principal minors of syndrome matrix and the discrepancies
computed by the BM algorithm. Finally, thanks to the study done on the
structure of the syndrome matrix and its leading principal minors, we improve
the error value computation in both the decoding strategies studied
(specifically we prove new error value formulas for the fPGZ and the BM
decoding algorithm) and moreover we state a new iterative formulation of the
PGZ decoder well suited to a parallel implementation on integrated microchips.
Thus using techniques of linear algebra we obtain a parallel decoding algorithm
for Reed-Solomon codes with an O(e) computational time complexity, where e is
the number of errors which occurred, although a fairly large number of
elementary circuit elements is needed.
|
1310.2477 | Model-free control of nonlinear power converters | cs.SY math.OC | A new "model-free" control methodology is applied to a boost power converter.
The properties of the boost converter allow one to evaluate the performance of the
model-free strategy in the case of switching nonlinear transfer functions,
regarding load variations. Our approach, which utilizes "intelligent" PI
controllers, does not require any converter model identification while ensuring
the stability and the robustness of the controlled system. Simulation results
show that, with a simple control structure, the proposed control method is
almost insensitive to fluctuations and large load variations.
|
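(A rough Python sketch of one step of an "intelligent PI" controller in the model-free control framework, based on the ultra-local model y' = F + alpha*u with F estimated online; the gains, the derivative estimate, and all variable names are assumptions for illustration, not the authors' implementation.)

    def intelligent_pi_step(y_dot_meas, u_prev, y_ref_dot, error, int_error,
                            alpha=1.0, kp=1.0, ki=0.1):
        # Ultra-local model: y' = F + alpha * u, with F unknown.
        F_hat = y_dot_meas - alpha * u_prev   # estimate F from the last step
        # Choose u so y' tracks y_ref' with PI feedback on e = y_ref - y;
        # then e' = -(kp * e + ki * int_e), stable for kp, ki > 0.
        return (y_ref_dot - F_hat + kp * error + ki * int_error) / alpha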
1310.2479 | Spatio-temporal variation of conversational utterances on Twitter | physics.soc-ph cs.CL cs.SI | Conversations reflect the existing norms of a language. Previously, we found
that utterance lengths in English fictional conversations in books and movies
have shortened over a period of 200 years. In this work, we show that this
shortening occurs even for a brief period of 3 years (September 2009-December
2012) using 229 million utterances from Twitter. Furthermore, the subset of
geographically-tagged tweets from the United States show an inverse proportion
between utterance lengths and the state-level percentage of the Black
population. We argue that shortening of utterances can be explained by the
increasing usage of jargon including coined words.
|
1310.2490 | Degrees of Freedom of Generic Block-Fading MIMO Channels without A
Priori Channel State Information | cs.IT math.IT | We study the high-SNR capacity of generic MIMO Rayleigh block-fading channels
in the noncoherent setting where neither transmitter nor receiver has a priori
channel state information but both are aware of the channel statistics. In
contrast to the well-established constant block-fading model, we allow the
fading to vary within each block with a temporal correlation that is "generic"
(in the sense used in the interference-alignment literature). We show that the
number of degrees of freedom of a generic MIMO Rayleigh block-fading channel
with $T$ transmit antennas and block length $N$ is given by $T(1-1/N)$ provided
that $T<N$ and the number of receive antennas is at least $T(N-1)/(N-T)$. A
comparison with the constant block-fading channel (where the fading is constant
within each block) shows that, for large block lengths, generic correlation
increases the number of degrees of freedom by a factor of up to four.
|
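(The "factor of up to four" can be sanity-checked against the classical noncoherent result, assumed here, that the constant block-fading channel achieves M(1 - M/N) DoF, maximized at M = N/2.)

    \frac{\left. T\left(1-\tfrac{1}{N}\right) \right|_{T=N-1}}
         {\tfrac{N}{2}\left(1-\tfrac{1}{2}\right)}
      = \frac{(N-1)^2/N}{N/4}
      = 4\left(1-\tfrac{1}{N}\right)^2
      \;\longrightarrow\; 4 \quad (N \to \infty).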
1310.2493 | Combining Ontologies with Correspondences and Link Relations: The E-SHIQ
Representation Framework | cs.AI | Combining knowledge and beliefs of autonomous peers in distributed settings
is a major challenge. In this paper we consider peers that combine ontologies
and reason jointly with their coupled knowledge. Ontologies are within the SHIQ
fragment of Description Logics. Although there are several representation
frameworks for modular Description Logics, each one makes crucial assumptions
concerning the subjectivity of peers' knowledge, the relation between the
domains over which ontologies are interpreted, the expressivity of the
constructors used for combining knowledge, and the way peers share their
knowledge. However in settings where autonomous peers can evolve and extend
their knowledge and beliefs independently from others, these assumptions may
not hold. In this article, we motivate the need for a representation
framework that allows peers to combine their knowledge in various ways,
maintaining the subjectivity of their own knowledge and beliefs, and to
reason collaboratively, jointly constructing a tableau that is distributed
among them. The paper presents the proposed E-SHIQ representation framework, the
implementation of the E-SHIQ distributed tableau reasoner, and discusses the
efficiency of this reasoner.
|
1310.2514 | Maximal Cost-Bounded Reachability Probability on Continuous-Time Markov
Decision Processes | cs.SY | In this paper, we consider multi-dimensional maximal cost-bounded
reachability probability over continuous-time Markov decision processes
(CTMDPs). Our major contributions are as follows. Firstly, we derive an
integral characterization which states that the maximal cost-bounded
reachability probability function is the least fixed point of a system of
integral equations. Secondly, we prove that the maximal cost-bounded
reachability probability can be attained by a measurable deterministic
cost-positional scheduler. Thirdly, we provide a numerical approximation
algorithm for maximal cost-bounded reachability probability. We present these
results under the setting of both early and late schedulers.
|
1310.2527 | Treating clitics with minimalist grammars | cs.CL cs.LO | We propose an extension of Stabler's version of clitics treatment for a wider
coverage of the French language. For this, we present the lexical entries
needed in the lexicon. Then, we show the recognition of complex syntactic
phenomena as (left and right) dislo- cation, clitic climbing over modal and
extraction from determiner phrase. The aim of this presentation is the
syntax-semantic interface for clitics analyses in which we will stress on
clitic climbing over verb and raising verb.
|
1310.2539 | Intrinsic filtering on Lie groups with applications to attitude
estimation | cs.SY cs.RO math.OC | This paper proposes a probabilistic approach to the problem of intrinsic
filtering of a system on a matrix Lie group with invariance properties. The
problem of an invariant continuous-time model with discrete-time measurements
is cast into a rigorous stochastic and geometric framework. Building upon the
theory of continuous-time invariant observers, we show that, as in the linear
case, the error equation is a Markov chain that does not depend on the state
estimate. Thus, when the filter's gains are held fixed, and the filter admits
almost-global convergence properties with noise turned off, the noisy error's
distribution is proved to converge to a stationary distribution, providing
insight into the mathematical theory of filtering on Lie groups. For
engineering purposes we also introduce the discrete-time Invariant Extended
Kalman Filter, for which the trusted covariance matrix is shown to
asymptotically converge, and some numerically more involved sample-based
methods as well to compute the Kalman gains. The methods are applied to
attitude estimation, allowing us to derive novel theoretical results in this
field, and illustrated through simulations on synthetic data.
|
1310.2547 | All Your Location are Belong to Us: Breaking Mobile Social Networks for
Automated User Location Tracking | cs.SI cs.CR | Many popular location-based social networks (LBSNs) support built-in
location-based social discovery with hundreds of millions of users around the
world. While user (near) realtime geographical information is essential to
enable location-based social discovery in LBSNs, the importance of user
location privacy has also been recognized by leading real-world LBSNs. To
protect user's exact geographical location from being exposed, a number of
location protection approaches have been adopted by the industry so that only
relative location information is publicly disclosed. These techniques are
assumed to be secure and are exercised on a daily basis. In this paper, we
question the safety of these location-obfuscation techniques used by existing
LBSNs. We show, for the first time, through real-world attacks that they can
all be easily defeated by an attacker with no more capability than a
regular LBSN user. In particular, by manipulating the location information fed to
the LBSN client app, an ill-intended regular user can easily deduce the exact
location information by running LBSN apps as a location oracle and performing a
series of attacking strategies. We develop an automated user location tracking
system and test it on the most popular LBSNs including Wechat, Skout and Momo.
We demonstrate its effectiveness and efficiency via a 3-week real-world
experiment with 30 volunteers. Our evaluation results show that we could
geo-locate a target with high accuracy and can readily recover users' Top 5
locations. We also propose to use a grid reference system and location
classification to mitigate the attacks. Our work shows that the current
industrial best practices on user location privacy protection are completely
broken, and it is critical to address this immediate threat.
|
1310.2561 | Characterizing Strategic Cascades on Networks | cs.SI cs.GT physics.soc-ph | Transmission of disease, spread of information and rumors, adoption of new
products, and many other network phenomena can be fruitfully modeled as
cascading processes, where actions chosen by nodes influence the subsequent
behavior of neighbors in the network graph. Current literature on cascades
tends to assume nodes choose myopically based on the state of choices already
taken by other nodes. We examine the possibility of strategic choice, where
agents representing nodes anticipate the choices of others who have not yet
decided, and take into account their own influence on such choices. Our study
employs the framework of Chierichetti et al. [2012], who (under the assumption of
myopic node behavior) investigate the scheduling of node decisions to promote
cascades of product adoptions preferred by the scheduler. We show that when
nodes behave strategically, outcomes can be extremely different. We exhibit
cases where in the strategic setting 100% of agents adopt, but in the myopic
setting only an arbitrarily small epsilon % do. Conversely, we present cases
where in the strategic setting 0% of agents adopt, but in the myopic setting
(100-epsilon)% do, for any constant epsilon > 0. Additionally, we prove some
properties of cascade processes with strategic agents, both in general and for
particular classes of graphs.
|
1310.2578 | On Minimum-time Paths of Bounded Curvature with Position-dependent
Constraints | math.OC cs.SY math.DS | We consider the problem of a particle traveling from an initial configuration
to a final configuration (given by a point in the plane along with a prescribed
velocity vector) in minimum time with non-homogeneous velocity and with
constraints on the minimum turning radius of the particle over multiple regions
of the state space. Necessary conditions for optimality of these paths are
derived to characterize the nature of optimal paths, both when the particle is
inside a region and when it crosses boundaries between neighboring regions.
These conditions are used to characterize families of optimal and nonoptimal
paths. Among the optimality conditions, we derive a "refraction" law at the
boundary of the regions that generalizes the so-called Snell's law of
refraction in optics to the case of paths with bounded curvature. Tools
employed to deduce our results include recent principles of optimality for
hybrid systems. The results are validated numerically.
|
1310.2592 | Consensus and Coherence in Fractal Networks | cs.SY | We consider first and second order consensus algorithms in networks with
stochastic disturbances. We quantify the deviation from consensus using the
notion of network coherence, which can be expressed as an $H_2$ norm of the
stochastic system. We use the setting of fractal networks to investigate the
question of whether a purely topological measure, such as the fractal
dimension, can capture the asymptotics of coherence in the large system size
limit. Our analysis for first-order systems is facilitated by connections
between first-order stochastic consensus and the global mean first passage time
of random walks. We then show how to apply similar techniques to analyze
second-order stochastic consensus systems. Our analysis reveals that two
networks with the same fractal dimension can exhibit different asymptotic
scalings for network coherence. Thus, this topological characterization of the
network does not uniquely determine coherence behavior. The question of whether
the performance of stochastic consensus algorithms in large networks can be
captured by purely topological measures, such as the spatial dimension, remains
open.
|
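(A short Python sketch of the standard spectral computation of first-order network coherence, H = (1/2N) * sum over the nonzero Laplacian eigenvalues of 1/lambda_i; this H2-norm formula is assumed from the consensus literature rather than taken from the abstract.)

    import numpy as np

    def first_order_coherence(L):
        # L: symmetric graph Laplacian of a connected network.
        n = L.shape[0]
        eigs = np.linalg.eigvalsh(L)
        nonzero = eigs[eigs > 1e-12]   # discard the lambda_1 = 0 mode
        return np.sum(1.0 / nonzero) / (2.0 * n)

    # Example: path graph on 5 nodes.
    A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
    L = np.diag(A.sum(axis=1)) - A
    print(first_order_coherence(L))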
1310.2619 | Information Relaxation is Ultradiffusive | cs.SI cs.CY physics.soc-ph | We investigate how the overall response to a piece of information (a story or
an article) evolves and relaxes as a function of time in social networks like
Reddit, Digg and Youtube. This response or popularity is measured in terms of
the number of votes/comments that the story (or article) accrued over time. We
find that the temporal evolution of popularity can be described by a universal
function whose parameters depend upon the system under consideration. Unlike
most previous studies, which empirically investigated the dynamics of voting
behavior, we also give a theoretical interpretation of the observed behavior
using ultradiffusion.
Whether it is the inter-arrival time between two consecutive votes on a story
on Reddit or the comments on a video shared on Youtube, there is always a
hierarchy of time scales in information propagation. One vote/comment might
occur almost simultaneously with the previous, whereas another vote/comment
might occur hours after the preceding one. This hierarchy of time scales leads
us to believe that the dynamical response of users to information is
ultradiffusive in nature. We show that an ultradiffusion-based stochastic
process can be used to rationalize the observed temporal evolution.
|
1310.2627 | A Sparse and Adaptive Prior for Time-Dependent Model Parameters | stat.ML cs.AI cs.LG | We consider the scenario where the parameters of a probabilistic model are
expected to vary over time. We construct a novel prior distribution that
promotes sparsity and adapts the strength of correlation between parameters at
successive timesteps, based on the data. We derive approximate variational
inference procedures for learning and prediction with this prior. We test the
approach on two tasks: forecasting financial quantities from relevant text, and
modeling language contingent on time-varying financial measurements.
|
1310.2632 | Bilinear Generalized Approximate Message Passing | cs.IT math.IT | We extend the generalized approximate message passing (G-AMP) approach,
originally proposed for high-dimensional generalized-linear regression in the
context of compressive sensing, to the generalized-bilinear case, which enables
its application to matrix completion, robust PCA, dictionary learning, and
related matrix-factorization problems. In the first part of the paper, we
derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the
sum-product belief propagation algorithm in the high-dimensional limit, where
central-limit theorem arguments and Taylor-series approximations apply, and
under the assumption of statistically independent matrix entries with known
priors. In addition, we propose an adaptive damping mechanism that aids
convergence under finite problem sizes, an expectation-maximization (EM)-based
method to automatically tune the parameters of the assumed priors, and two
rank-selection strategies. In the second part of the paper, we discuss the
specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA,
and dictionary learning, and present the results of an extensive empirical
study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our
numerical results, using both synthetic and real-world datasets, demonstrate
that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class)
while maintaining competitive runtimes and avoiding the need to tune
algorithmic parameters.
|
1310.2636 | The small-world effect is a modern phenomenon | physics.soc-ph cs.SI | The "small-world effect" is the observation that one can find a short chain
of acquaintances, often of no more than a handful of individuals, connecting
almost any two people on the planet. It is often expressed in the language of
networks, where it is equivalent to the statement that most pairs of
individuals are connected by a short path through the acquaintance network.
Although the small-world effect is well-established empirically for
contemporary social networks, we argue here that it is a relatively recent
phenomenon, arising only in the last few hundred years: for most of mankind's
tenure on Earth the social world was large, with most pairs of individuals
connected by relatively long chains of acquaintances, if at all. Our
conclusions are based on observations about the spread of diseases, which
travel over contact networks between individuals and whose dynamics can give us
clues to the structure of those networks even when direct network measurements
are not available. As an example we consider the spread of the Black Death in
14th-century Europe, which is known to have traveled across the continent in
well-defined waves of infection over the course of several years. Using
established epidemiological models, we show that such wave-like behavior can
occur only if contacts between individuals living far apart are exponentially
rare. We further show that if long-distance contacts are exponentially rare,
then the shortest chain of contacts between distant individuals is on average a
long one. The observation of the wave-like spread of a disease like the Black
Death thus implies a network without the small-world effect.
|
1310.2646 | Localized Iterative Methods for Interpolation in Graph Structured Data | cs.LG | In this paper, we present two localized graph filtering based methods for
interpolating graph signals defined on the vertices of arbitrary graphs from
only a partial set of samples. The first method is an extension of previous
work on reconstructing bandlimited graph signals from partially observed
samples. The iterative graph filtering approach very closely approximates the
solution proposed in that work, while being computationally more efficient.
As an alternative, we propose a regularization based framework in which we
define the cost of reconstruction to be a combination of smoothness of the
graph signal and the reconstruction error with respect to the known samples,
and find solutions that minimize this cost. We provide both a closed form
solution and a computationally efficient iterative solution of the optimization
problem. The experimental results on the recommendation system datasets
demonstrate the effectiveness of the proposed methods.
|
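(A minimal Python sketch of the regularization-based formulation described above: Laplacian-quadratic smoothness plus a fidelity term on the known samples, with the closed-form solution of the resulting linear system; the exact cost function and notation of the paper may differ.)

    import numpy as np

    def interpolate_graph_signal(L, samples, mu=1.0):
        # Minimize sum_{i in S} (x_i - y_i)^2 + mu * x^T L x over x.
        # Setting the gradient to zero gives (M + mu * L) x = M y,
        # where M is the diagonal sampling mask.
        n = L.shape[0]
        mask = np.zeros(n)
        y = np.zeros(n)
        for i, v in samples.items():
            mask[i], y[i] = 1.0, v
        M = np.diag(mask)
        return np.linalg.solve(M + mu * L, M @ y)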
1310.2665 | Clustering Memes in Social Media | cs.SI cs.CY physics.data-an physics.soc-ph | The increasing pervasiveness of social media creates new opportunities to
study human social behavior, while challenging our capability to analyze their
massive data streams. One of the emerging tasks is to distinguish between
different kinds of activities, for example engineered misinformation campaigns
versus spontaneous communication. Such detection problems require a formal
definition of meme, or unit of information that can spread from person to
person through the social network. Once a meme is identified, supervised
learning methods can be applied to classify different types of communication.
The appropriate granularity of a meme, however, is hardly captured from
existing entities such as tags and keywords. Here we present a framework for
the novel task of detecting memes by clustering messages from large streams of
social data. We evaluate various similarity measures that leverage content,
metadata, network features, and their combinations. We also explore the idea of
pre-clustering on the basis of existing entities. A systematic evaluation is
carried out using a manually curated dataset as ground truth. Our analysis
shows that pre-clustering and a combination of heterogeneous features yield the
best trade-off between number of clusters and their quality, demonstrating that
a simple combination based on pairwise maximization of similarity is as
effective as a non-trivial optimization of parameters. Our approach is fully
automatic, unsupervised, and scalable for real-time detection of memes in
streaming data.
|
1310.2671 | Traveling Trends: Social Butterflies or Frequent Fliers? | cs.SI cs.CY physics.soc-ph | Trending topics are the online conversations that grab collective attention
on social media. They are continually changing and often reflect exogenous
events that happen in the real world. Trends are localized in space and time as
they are driven by activity in specific geographic areas that act as sources of
traffic and information flow. Taken independently, trends and geography have
been discussed in recent literature on online social media; although, so far,
little has been done to characterize the relation between trends and geography.
Here we investigate more than eleven thousand topics that trended on Twitter in
63 main US locations during a period of 50 days in 2013. This data allows us to
study the origins and pathways of trends, how they compete for popularity at
the local level to emerge as winners at the country level, and what dynamics
underlie their production and consumption in different geographic areas. We
identify two main classes of trending topics: those that surface locally,
coinciding with three different geographic clusters (East coast, Midwest and
Southwest); and those that emerge globally from several metropolitan areas,
coinciding with the major air traffic hubs of the country. These hubs act as
trendsetters, generating topics that eventually trend at the country level, and
driving the conversation across the country. This poses an intriguing
conjecture, drawing a parallel between the spread of information and diseases:
Do trends travel faster by airplane than over the Internet?
|
1310.2686 | New Families of $p$-ary Sequences of Period $\frac{p^n-1}{2}$ With Low
Maximum Correlation Magnitude | cs.IT math.IT | Let $p$ be an odd prime such that $p \equiv 3\;{\rm mod}\;4$ and $n$ be an
odd integer. In this paper, two new families of $p$-ary sequences of period $N
= \frac{p^n-1}{2}$ are constructed from two decimated $p$-ary m-sequences $m(2t)$
and $m(dt)$, where $d = 4$ and $d = (p^n + 1)/2=N+1$. The upper bound on the
magnitude of the correlation values of any two sequences in the family is
derived using the Weil bound: the bound is $\frac{3}{\sqrt{2}}
\sqrt{N+\frac{1}{2}}+\frac{1}{2}$, and the family size is $4N$, which is four
times the period of the sequences.
|
1310.2700 | Analyzing Big Data with Dynamic Quantum Clustering | physics.data-an cs.LG physics.comp-ph | How does one search for a needle in a multi-dimensional haystack without
knowing what a needle is and without knowing if there is one in the haystack?
This kind of problem requires a paradigm shift - away from hypothesis driven
searches of the data - towards a methodology that lets the data speak for
itself. Dynamic Quantum Clustering (DQC) is such a methodology. DQC is a
powerful visual method that works with big, high-dimensional data. It exploits
variations of the density of the data (in feature space) and unearths subsets
of the data that exhibit correlations among all the measured variables. The
outcome of a DQC analysis is a movie that shows how and why sets of data-points
are eventually classified as members of simple clusters or as members of - what
we call - extended structures. This allows DQC to be successfully used in a
non-conventional exploratory mode where one searches data for unexpected
information without the need to model the data. We show how this works for big,
complex, real-world datasets that come from five distinct fields: i.e., x-ray
nano-chemistry, condensed matter, biology, seismology and finance. These
studies show how DQC excels at uncovering unexpected, small - but meaningful -
subsets of the data that contain important information. We also establish an
important new result: namely, that big, complex datasets often contain
interesting structures that will be missed by many conventional clustering
techniques. Experience shows that these structures appear frequently enough
that it is crucial to know they can exist, and that when they do, they encode
important hidden information. In short, we not only demonstrate that DQC can be
flexibly applied to datasets that present significantly different challenges,
we also show how a simple analysis can be used to look for the needle in the
haystack, determine what it is, and find what this means.
|
1310.2703 | Max-Min Energy Efficient Beamforming for Multicell Multiuser Joint
Transmission Systems | cs.IT math.IT | Energy efficient communication technology has attracted much attention due to
the explosive growth of energy consumption in current wireless communication
systems. In this letter we focus on fairness-based energy efficiency and aim to
maximize the minimum user energy efficiency in the multicell multiuser joint
beamforming system, taking both dynamic and static power consumptions into
account. This optimization problem is a non-convex fractional programming
problem and hard to tackle. In order to find its solution, the original problem
is transformed into a parameterized polynomial subtractive form by exploiting
the relationship between the user rate and the minimum mean square error, and
using the fractional programming theorem. Furthermore, an iterative algorithm
with proved convergence is developed to achieve a near-optimal performance.
Numerical results validate the effectiveness of the proposed solution and show
that our algorithm significantly outperforms the max-min rate optimization
algorithm in terms of maximizing the minimum energy efficiency.
|
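(For intuition, a generic Python sketch of the Dinkelbach-style iteration that underlies such parameterized subtractive reformulations of fractional programs; the paper's actual MMSE-based inner problem is not reproduced, and solve_inner is a placeholder the caller must supply.)

    def dinkelbach(f, g, solve_inner, lam=0.0, tol=1e-6, max_iter=50):
        # Maximize f(x)/g(x) with g(x) > 0 by repeatedly solving the
        # subtractive problem max_x f(x) - lam * g(x) and updating lam.
        x = None
        for _ in range(max_iter):
            x = solve_inner(lam)
            if abs(f(x) - lam * g(x)) < tol:   # optimality: F(lam) = 0
                break
            lam = f(x) / g(x)
        return x, lam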
1310.2717 | Low-cost photoplethysmograph solutions using the Raspberry Pi | cs.SY | Photoplethysmography is a prevalent, non-invasive heart monitoring method. In
this paper an implementation of photoplethysmography on the Raspberry Pi is
presented. Two modulation techniques are discussed which make it possible to
measure these signals with the Raspberry Pi, using an external sound card as the
A/D converter. Furthermore, it is shown how digital signal processing can
improve signal quality. The presented methods can be used in low-cost cardiac
function monitoring, in telemedicine applications, and in education as well,
since cheap and widely available hardware is used. Full documentation and
open-source software for the measurement are available at:
http://www.noise.inf.u-szeged.hu/Instruments/raspberryplet/
|
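(As a rough illustration of the kind of digital signal processing involved, a Python sketch that bandpasses a demodulated photoplethysmogram to the typical heart-rate band; the sampling rate and band edges are assumptions, and this is not the authors' published software.)

    from scipy.signal import butter, filtfilt

    def extract_ppg(signal, fs=100.0, low=0.7, high=4.0):
        # Keep ~0.7-4 Hz (roughly 42-240 bpm), assuming the signal has
        # already been demodulated and decimated to rate fs; filtfilt
        # gives zero-phase filtering so pulse timing is not distorted.
        b, a = butter(2, [low, high], btype="bandpass", fs=fs)
        return filtfilt(b, a, signal)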
1310.2743 | Case Adaptation with Qualitative Algebras | cs.AI | This paper proposes an approach for the adaptation of spatial or temporal
cases in a case-based reasoning system. Qualitative algebras are used as
spatial and temporal knowledge representation languages. The intuition behind
this adaptation approach is to apply a substitution and then repair potential
inconsistencies, thanks to belief revision on qualitative algebras. A temporal
example from the cooking domain is given. (The paper on which this extended
abstract is based was the recipient of the best paper award of the 2012
International Conference on Case-Based Reasoning.)
|