| id | title | categories | abstract |
|---|---|---|---|
1307.7389 | Complex scale-free networks with tunable power-law exponent and
clustering | physics.soc-ph cs.SI | We introduce a network evolution process motivated by the network of
citations in the scientific literature. In each iteration of the process a node
is born and directed links are created from the new node to a set of target
nodes already in the network. This set includes $m$ "ambassador" nodes and $l$
of each ambassador's descendants where $m$ and $l$ are random variables
selected from any choice of distributions $p_{l}$ and $q_{m}$. The process
mimics the tendency of authors to cite varying numbers of papers included in
the bibliographies of the other papers they cite. We show that the degree
distributions of the networks generated after a large number of iterations are
scale-free and derive an expression for the power-law exponent. In a particular
case of the model where the number of ambassadors is always the constant $m$
and the number of selected descendants from each ambassador is the constant
$l$, the power-law exponent is $(2l+1)/l$. For this example we derive
expressions for the degree distribution and clustering coefficient in terms of
$l$ and $m$. We conclude that the proposed model can be tuned to have the same
power law exponent and clustering coefficient of a broad range of the
scale-free distributions that have been studied empirically.
|
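For the constant-$(m,l)$ case, the stated exponent can be rewritten to make the tunable range explicit (this restatement is ours, not part of the abstract):

```latex
\gamma \;=\; \frac{2l+1}{l} \;=\; 2 + \frac{1}{l},
\qquad \gamma\big|_{l=1} = 3,
\qquad \gamma \to 2 \ \text{as}\ l \to \infty .
```

so varying $l$ sweeps $\gamma$ over the interval $(2,3]$, which covers most empirically reported scale-free exponents.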
1307.7398 | ROSoClingo: A ROS package for ASP-based robot control | cs.RO cs.AI | Knowledge representation and reasoning capacities are vital to cognitive
robotics because they provide higher level cognitive functions for reasoning
about actions, environments, goals, perception, etc. Although Answer Set
Programming (ASP) is well suited for modelling such functions, there has so far
been no seamless way to use ASP in a robotic environment. We address this
shortcoming and show how a recently developed reactive ASP system can be
harnessed to provide appropriate reasoning capacities within a robotic system.
To be more precise, we furnish a package integrating the reactive ASP solver
oClingo with the popular open-source robotic middleware ROS. The resulting
system, ROSoClingo, provides a generic way by which an ASP program can be used
to control the behaviour of a robot and to respond to the results of the
robot's actions.
|
1307.7401 | Properties of nonlinear noise in long, dispersion-uncompensated fiber
links | physics.optics cs.IT math.IT | We study the properties of nonlinear interference noise (NLIN) in fiber-optic
communications systems with large accumulated dispersion. Our focus is on
settling the discrepancy between the results of the Gaussian noise (GN) model
(according to which NLIN is additive Gaussian) and a recently published
time-domain analysis, which attributes drastically different properties to the
NLIN. Upon reviewing the two approaches we identify several unjustified
assumptions that are key in the derivation of the GN model, and that are
responsible for the discrepancy. We derive the true NLIN power and verify that
the NLIN is not additive Gaussian, but rather it depends strongly on the data
transmitted in the channel of interest. In addition we validate the time-domain
model numerically and demonstrate the strong dependence of the NLIN on the
interfering channels' modulation format.
|
1307.7405 | Reasoning for Moving Blocks Problem: Formal Representation and
Implementation | cs.RO cs.AI | A combined approach of Qualitative Reasoning and Probabilistic
Functions for knowledge representation is proposed. The method aims to
represent the uncertain, qualitative knowledge that is essential for executing
the moving-blocks task. An attempt to formalize commonsense knowledge is made
with the Situation Calculus language for reasoning and for representing the
robot's beliefs. The method is implemented in the Prolog programming language
and tested on a specific simulated scenario. In most cases the implementation
enables us to solve a given task, i.e., move blocks to desired positions. An
example of the robot's reasoning and the main parts of the implemented
program's code are presented.
|
1307.7411 | Towards an Efficient Discovery of the Topological Representative
Subgraphs | cs.DB | With the emergence of graph databases, the task of frequent subgraph
discovery has been extensively addressed. Although the proposed approaches in
the literature have made this task feasible, the number of discovered frequent
subgraphs is still too high to be used efficiently in any further exploration.
Feature selection for graph data is a way to reduce the high number of frequent
subgraphs based on exact or approximate structural similarity. However, current
structural similarity strategies are not efficient enough in many real-world
applications; moreover, the combinatorial nature of graphs makes them
computationally very costly. In order to select a smaller yet structurally
irredundant set of subgraphs, we propose a novel approach that mines the top-k
topological representative subgraphs among the frequent ones. Our approach can
detect hidden structural similarities that existing approaches are unable to
detect, such as the density or the diameter of the subgraph. In addition, it
can be easily extended using any user-defined structural or topological
attributes depending on the sought properties. Empirical studies on
real and synthetic graph datasets show that our approach is fast and scalable.
|
1307.7429 | Participation anticipating in elections using data mining methods | cs.CY cs.LG | Anticipating people's political behavior can considerably help
election candidates assess their chances of success and understand the public's
motivations for selecting them. In this paper, we provide a general schematic
of the architecture of a participation-anticipation system for presidential
elections using KNN, Classification Tree, and Na\"ive Bayes with the Orange
toolkit, based on CRISP-DM, which produced promising output. To test and assess
the proposed model, we conducted a case study of 100 qualified persons who
attended the 11th presidential election of the Islamic Republic of Iran and
anticipated their participation in Kohkiloye & Boyerahmad. We show that KNN
performs the anticipation and classification processes with high accuracy
compared with the two other algorithms.
|
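The core of the KNN method the paper applies can be sketched in a few lines. This is a generic illustration; the feature names and toy data below are hypothetical, not the paper's actual survey attributes:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance) -- the basic KNN decision rule."""
    d = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
    nearest = np.argsort(d)[:k]              # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Hypothetical features: (age, interest score); label 1 = will participate.
X = np.array([[20, 1.0], [25, 0.9], [60, 0.1], [65, 0.2], [22, 0.8]])
y = np.array([1, 1, 0, 0, 1])
pred = knn_predict(X, y, np.array([23, 0.85]), k=3)
```

Here the query point's three nearest neighbors are all labeled 1, so the vote is unanimous.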
1307.7432 | Data mining application for cyber space users tendency in blog writing:
a case study | cs.CY cs.LG | Blogs are a recently emerging medium that relies on information
technology and technological advances. Since the mass media in some
less-developed and developing countries are in government service and their
policies are shaped by governmental interests, blogs provide a venue for ideas
and exchanging opinions. In this paper, we present simulations based on
information obtained from 100 users and bloggers in Kohkiloye and Boyer Ahmad
Province, using the Weka 3.6 tool and the C4.5 algorithm; the resulting
decision tree anticipates users' future tendency toward blogging, and its use
in strategic areas, with more than 82% precision.
|
1307.7435 | A new approach in dynamic traveling salesman problem: a hybrid of ant
colony optimization and descending gradient | cs.NE | Nowadays, swarm intelligence-based algorithms are widely used to
optimize the dynamic traveling salesman problem (DTSP). In this paper, we use a
hybrid method of Ant Colony Optimization (ACO) and gradient descent to optimize
the DTSP; it differs from the standard ACO algorithm in its evaporation rate
and heuristic information. This approach prevents premature convergence, allows
escape from local optima, and makes it possible for the algorithm to find
better solutions. In comparison with some former methods, the proposed
gradient-descent ACO algorithm shows significantly improved route optimization.
|
1307.7447 | Wireless Information and Power Transfer in Two-Way Amplify-and-Forward
Relaying Channels | cs.IT math.IT | The proliferation of wireless networks has made ambient radio frequency
signals available around the world. Wireless information and power transfer enables devices
to recycle energy from these ambient radio frequency signals and process
information simultaneously. In this paper, we develop a wireless information
and power transfer protocol in two-way amplify-and-forward relaying channels,
where two sources exchange information via an energy harvesting relay node. The
relay node collects energy from the received signals and uses it to provide the
transmission power to forward the received signals. We analytically derive the
exact expressions of the outage probability, the ergodic capacity and the
finite-SNR diversity-multiplexing trade-off (DMT). Furthermore, the tight
closed-form upper and lower bounds of the outage probability and the ergodic
capacity are then developed. Moreover, the impact of the power splitting ratio
is also evaluated and analyzed. Finally, we show that, compared to the
non-cooperative relaying scheme, the proposed protocol is a green solution that
offers a higher transmission rate and more reliable communication without
consuming additional resources.
|
1307.7461 | Levels of Integration between Low-Level Reasoning and Task Planning | cs.RO cs.AI | We provide a systematic analysis of levels of integration between discrete
high-level reasoning and continuous low-level reasoning to address hybrid
planning problems in robotics. We identify four distinct strategies for such an
integration: (i) low-level checks are done for all possible cases in advance
and then this information is used during plan generation, (ii) low-level checks
are done exactly when they are needed during the search for a plan, (iii) first
all plans are computed and then infeasible ones are filtered, and (iv) by means
of replanning, after finding a plan, low-level checks identify whether it is
infeasible or not; if it is infeasible, a new plan is computed considering the
results of previous low-level checks. We perform experiments on hybrid
planning problems in robotic manipulation and legged locomotion domains
considering these four methods of integration, as well as some of their
combinations. We analyze the usefulness of levels of integration in these
domains, both from the point of view of computational efficiency (in time and
space) and from the point of view of plan quality relative to its feasibility.
We discuss advantages and disadvantages of each strategy in the light of
experimental results and provide some guidelines on choosing proper strategies
for a given domain.
|
1307.7466 | Integration of 3D Object Recognition and Planning for Robotic
Manipulation: A Preliminary Report | cs.AI cs.CV cs.RO | We investigate different approaches to integrating object recognition and
planning in a tabletop manipulation domain with the set of objects used in the
2012 RoboCup@Work competition. Results of our preliminary experiments show
that, with some approaches, close integration of perception and planning
improves the quality of plans, as well as the computation times of feasible
plans.
|
1307.7474 | Automatic Mammogram image Breast Region Extraction and Removal of
Pectoral Muscle | cs.CV | Currently, mammography is the most effective imaging modality used by
radiologists for the screening of breast cancer. Finding an accurate, robust
and efficient breast region segmentation technique still remains a challenging
problem in digital mammography. Extraction of the breast profile region and the
removal of pectoral muscle are essential pre-processing steps in Computer Aided
Diagnosis (CAD) system for the diagnosis of breast cancer. Primarily it allows
the search for abnormalities to be limited to the region of the breast tissue
without undue influence from the background of the mammogram. The presence of
pectoral muscle in mammograms biases detection procedures, which recommends
removing the pectoral muscle during mammogram image pre-processing. The
presence of pectoral muscle in mammograms may disturb or influence the
detection of breast cancer as the pectoral muscle and mammographic parenchymas
appear similar. The goal of breast region extraction is to reduce the image
size without losing anatomic information, which improves the accuracy of the
overall CAD system. The main objective of this study is to propose an automated
method to identify the pectoral muscle in Medio-Lateral Oblique (MLO) view
mammograms. In this paper, we propose a histogram-based 8-neighborhood
connected component labelling method for breast region extraction and removal
of the pectoral muscle.
The proposed method is evaluated by using the mean values of accuracy and
error. The comparative analysis shows that the proposed method identifies the
breast region more accurately.
|
1307.7494 | ReAct! An Interactive Tool for Hybrid Planning in Robotics | cs.AI cs.LO cs.RO | We present ReAct!, an interactive tool for high-level reasoning for cognitive
robotic applications. ReAct! enables robotic researchers to describe robots'
actions and change in dynamic domains, without having to know about the
syntactic and semantic details of the underlying formalism in advance, and
solve planning problems using state-of-the-art automated reasoners, without
having to learn about their input/output language or usage. In particular,
ReAct! can be used to represent sophisticated dynamic domains that feature
concurrency, indirect effects of actions, and state/transition constraints. It
allows for embedding externally defined calculations (e.g., checking for
collision-free continuous trajectories) into representations of hybrid domains
that require a tight integration of (discrete) high-level reasoning with
(continuous) geometric reasoning. ReAct! also enables users to solve planning
problems that involve complex goals. Such a variety of utilities is useful for
robotic researchers to work on interesting and challenging domains, ranging
from service robotics to cognitive factories. ReAct! provides sample
formalizations of some action domains (e.g., multi-agent path planning, Tower
of Hanoi), as well as dynamic simulations of plans computed by a
state-of-the-art automated reasoner (e.g., a SAT solver or an ASP solver).
|
1307.7495 | Universal Polarization | cs.IT math.IT | A method to polarize channels universally is introduced. The method is based
on combining two distinct channels in each polarization step, as opposed to
Arikan's original method of combining identical channels. This creates an equal
number of only two types of channels, one of which becomes progressively better
as the other becomes worse. The locations of the good polarized channels are
independent of the underlying channel, guaranteeing universality. Polarizing
the good channels further with Arikan's method results in universal polar codes
of rate 1/2. The method is generalized to construct codes of arbitrary rates.
It is also shown that the less noisy ordering of channels is preserved under
polarization, and thus a good polar code for a given channel will perform well
over a less noisy one.
|
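Arikan's original recursion, which this abstract contrasts against, can be illustrated numerically on a binary erasure channel (BEC), where the single-step transform has a closed form: BEC(e) pairs combine into BEC(2e - e^2) (worse) and BEC(e^2) (better). This is a standard textbook illustration, not the paper's universal construction:

```python
def polarize_bec(eps, steps):
    """Recursively apply Arikan's one-step polarization to a BEC(eps).

    Each step maps every erasure probability e to the pair
    (2e - e**2, e**2); after 'steps' rounds there are 2**steps
    synthetic channels, most of which are near-perfect or near-useless."""
    channels = [eps]
    for _ in range(steps):
        channels = [e for x in channels for e in (2 * x - x * x, x * x)]
    return channels

chs = polarize_bec(0.5, 10)             # 1024 synthetic channels from BEC(0.5)
good = sum(1 for e in chs if e < 1e-3)  # count of near-perfect channels
capacity_sum = sum(1 - e for e in chs) / len(chs)  # total capacity is preserved
```

The average capacity stays exactly 1 - eps at every step, while the individual channels drift toward the extremes; as the abstract notes, for this construction the locations of the good channels depend on the underlying channel, which is precisely what the universal method avoids.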
1307.7513 | An Approach Finding Frequent Items In Text Or Transactional Data Base By
Using BST To Improve The Efficiency Of Apriori Algorithm | cs.DB cs.DS | Data mining techniques have been widely used in various applications. A
binary search tree of item frequencies is an effective method for automatically
recognizing the most frequent, least frequent, and averagely frequent items.
This paper presents a new approach to finding frequent items; the term
"frequent item" refers to how many times an item appears in the given input.
The approach finds item sets in any order using the familiar binary search
tree, comparing items and incrementing counter variables over an existing
transactional database or text data. We also present different approaches to
frequent item sets and propose an algorithmic approach to solving the problem.
|
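A minimal sketch of the counting idea, assuming the straightforward reading of the abstract (one BST node per distinct item, holding a counter incremented on each occurrence), might look like this; the sample text is illustrative:

```python
class Node:
    def __init__(self, key):
        self.key, self.count = key, 1
        self.left = self.right = None

def insert(root, key):
    # Walk the BST; increment the counter if the item already exists,
    # otherwise attach a new node with count 1.
    if root is None:
        return Node(key)
    if key == root.key:
        root.count += 1
    elif key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def frequencies(root, out=None):
    # In-order traversal collects (item, count) pairs in sorted key order.
    if out is None:
        out = []
    if root:
        frequencies(root.left, out)
        out.append((root.key, root.count))
        frequencies(root.right, out)
    return out

text = "the cat and the dog and the bird".split()
root = None
for word in text:
    root = insert(root, word)

freqs = frequencies(root)
most = max(freqs, key=lambda kv: kv[1])   # most frequent item
least = min(freqs, key=lambda kv: kv[1])  # one of the least frequent items
```

Each lookup/update costs O(height) comparisons, which is the efficiency argument relative to rescanning the database.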
1307.7521 | Union of Low-Rank Subspaces Detector | cs.IT cs.CV math.IT | The problem of signal detection using a flexible and general model is
considered. Due to applicability and flexibility of sparse signal
representation and approximation, it has attracted a lot of attention in many
signal processing areas. In this paper, we propose a new detection method based
on sparse decomposition in a union of subspaces (UoS) model. Our proposed
detector uses a dictionary that can be interpreted as a bank of matched
subspaces. This improves the performance of signal detection, as it generalizes
matched-subspace detectors. The low-rank assumption for the desired signals
implies that the representations of these signals in terms of some proper bases
would be sparse. Our proposed detector exploits sparsity in its decision rule.
We demonstrate the high efficiency of our method on the task of voice activity
detection in speech processing.
|
1307.7533 | Stabilization of Linear Systems Over Gaussian Networks | math.OC cs.IT math.IT | The problem of remotely stabilizing a noisy linear time invariant plant over
a Gaussian relay network is addressed. The network is comprised of a sensor
node, a group of relay nodes and a remote controller. The sensor and the relay
nodes operate subject to an average transmit power constraint and they can
cooperate to communicate the observations of the plant's state to the remote
controller. The communication links between all nodes are modeled as Gaussian
channels. Necessary as well as sufficient conditions for mean-square
stabilization over various network topologies are derived. The sufficient
conditions are in general obtained using delay-free linear policies and the
necessary conditions are obtained using information theoretic tools. Different
settings where linear policies are optimal, asymptotically optimal (in certain
parameters of the system) and suboptimal have been identified. For the case
with noisy multi-dimensional sources controlled over scalar channels, it is
shown that linear time varying policies lead to minimum capacity requirements,
meeting the fundamental lower bound. For the case with noiseless sources and
parallel channels, non-linear policies which meet the lower bound have been
identified.
|
1307.7544 | On block coherence of frames | cs.IT math.IT | Block coherence of matrices plays an important role in analyzing the
performance of block compressed sensing recovery algorithms (Bajwa and Mixon,
2012). In this paper, we characterize two block coherence metrics: worst-case
and average block coherence. First, we present lower bounds on worst-case block
coherence, in both the general case and also when the matrix is constrained to
be a union of orthobases. We then present deterministic matrix constructions
based upon Kronecker products which obtain these lower bounds. We also
characterize the worst-case block coherence of random subspaces. Finally, we
present a flipping algorithm that can improve the average block coherence of a
matrix, while maintaining the worst-case block coherence of the original
matrix. We provide numerical examples which demonstrate that our proposed
deterministic matrix construction performs well in block compressed sensing.
|
1307.7545 | Multi-Objective Beamforming for Secure Communication in Systems with
Wireless Information and Power Transfer | cs.IT math.IT | In this paper, we study power allocation for secure communication in a
multiuser multiple-input single-output (MISO) downlink system with simultaneous
wireless information and power transfer. The receivers are able to harvest
energy from the radio frequency when they are idle. We propose a
multi-objective optimization problem for power allocation algorithm design
which incorporates two conflicting system objectives: total transmit power
minimization and energy harvesting efficiency maximization. The proposed
problem formulation takes into account a quality of service (QoS) requirement
for the system secrecy capacity. Our designs advocate the dual use of
artificial noise in providing secure communication and facilitating efficient
energy harvesting. The multi-objective optimization problem is non-convex and
is solved by a semidefinite programming (SDP) relaxation approach which yields
an approximate solution.
A sufficient condition for the global optimal solution is revealed and the
accuracy of the approximation is examined. To strike a balance between
computational complexity and system performance, we propose two suboptimal
power allocation schemes. Numerical results not only demonstrate the excellent
performance of the proposed suboptimal schemes compared to baseline schemes,
but also unveil an interesting trade-off between energy harvesting efficiency
and total transmit power.
|
1307.7562 | On the convergence of weighted-average consensus | math.OC cs.SY | In this note we give sufficient conditions for the convergence of the
iterative algorithm called weighted-average consensus in directed graphs. We
study the discrete-time form of this algorithm. We use standard techniques from
matrix theory to prove the main result. As a particular case one can obtain
well-known results for non-weighted average consensus. We also give a corollary
for undirected graphs.
|
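The discrete-time weighted-average consensus iteration studied in this note can be sketched in a few lines. This is a minimal illustration under assumed conditions (undirected connected graph, symmetric adjacency, small enough step size), not the note's exact directed-graph setting:

```python
import numpy as np

def weighted_average_consensus(A, w, x0, eps=0.2, iters=2000):
    """Discrete-time weighted-average consensus sketch.

    A  : symmetric adjacency matrix of an undirected, connected graph
    w  : positive node weights
    x0 : initial node values
    Each node repeats  x_i <- x_i + (eps / w_i) * sum_j A_ij (x_j - x_i).
    For a small enough eps, every state converges to the weighted
    average  sum_i w_i x0_i / sum_i w_i,  which the iteration preserves."""
    x = np.asarray(x0, dtype=float).copy()
    w = np.asarray(w, dtype=float)
    deg = A.sum(axis=1)
    for _ in range(iters):
        x = x + (eps / w) * (A @ x - deg * x)
    return x

# Toy example: a 4-node path graph with distinct node weights.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
w = np.array([1.0, 2.0, 3.0, 4.0])
x0 = np.array([4.0, 0.0, 2.0, 6.0])
x = weighted_average_consensus(A, w, x0)
target = w @ x0 / w.sum()  # the invariant weighted average (= 3.4 here)
```

The update leaves sum_i w_i x_i unchanged at every step (the pairwise terms cancel by symmetry of A), so the only stable consensus value is the weighted average of the initial states.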
1307.7569 | Community detection for networks with unipartite and bipartite structure | physics.soc-ph cs.SI q-bio.QM | Finding community structures in networks is important in network science,
technology, and applications. To date, most algorithms that aim to find
community structures only focus either on unipartite or bipartite networks. A
unipartite network consists of one set of nodes and a bipartite network
consists of two nonoverlapping sets of nodes with only links joining the nodes
in different sets. However, a third type of network exists, defined here as the
mixture network. Just like a bipartite network, a mixture network also consists
of two sets of nodes, but some nodes may simultaneously belong to two sets,
which breaks the nonoverlapping restriction of a bipartite network. The mixture
network can be considered as a general case, with unipartite and bipartite
networks viewed as its limiting cases. A mixture network can represent not only
all the unipartite and bipartite networks, but also a wide range of real-world
networks that cannot be properly represented as either unipartite or bipartite
networks in fields such as biology and social science. Based on this
observation, we first propose a probabilistic model that can find modules in
unipartite, bipartite, and mixture networks in a unified framework based on the
link community model for a unipartite undirected network [B Ball et al (2011
Phys. Rev. E 84 036103)]. We test our algorithm on synthetic networks (both
overlapping and nonoverlapping communities) and apply it to two real-world
networks: a southern women bipartite network and a human transcriptional
regulatory mixture network. The results suggest that our model performs well
for all three types of networks, is competitive with other algorithms for
unipartite or bipartite networks, and is applicable to real-world networks.
|
1307.7577 | Safe Screening With Variational Inequalities and Its Application to
LASSO | cs.LG stat.ML | Sparse learning techniques have been routinely used for feature selection as
the resulting model usually has a small number of non-zero entries. Safe
screening, which eliminates the features that are guaranteed to have zero
coefficients for a certain value of the regularization parameter, is a
technique for improving the computational efficiency. Safe screening is gaining
increasing attention since 1) solving sparse learning formulations usually has
a high computational cost especially when the number of features is large and
2) one needs to try several regularization parameters to select a suitable
model. In this paper, we propose an approach called "Sasvi" (Safe screening
with variational inequalities). Sasvi makes use of the variational inequality
that provides the sufficient and necessary optimality condition for the dual
problem. Several existing approaches for Lasso screening can be cast as
relaxed versions of the proposed Sasvi, thus Sasvi provides a stronger safe
screening rule. We further study the monotone properties of Sasvi for Lasso,
based on which a sure removal regularization parameter can be identified for
each feature. Experimental results on both synthetic and real data sets are
reported to demonstrate the effectiveness of the proposed Sasvi for Lasso
screening.
|
1307.7597 | Context-aware QR-codes | cs.IT cs.CY cs.NI math.IT | This paper describes a new model for presenting local information based on
network proximity. We present a novel mobile mashup that combines Wi-Fi
proximity measurements with QR-codes. The mashup automatically adds context
information to the content presented by QR-codes, which simplifies deployment
schemes and allows, for example, a unified presentation for all data points.
This paper describes how to combine QR-codes and network proximity
information.
|
1307.7602 | Survey on Positioning System: Sampling methods | cs.IT cs.NI math.IT | Millimeter-accuracy Ultra-Wideband (UWB) positioning systems using the Time
Difference Of Arrival (TDOA) algorithm can be utilized in military and many
other important applications. Previous research on UWB positioning systems has
achieved mm or even sub-mm accuracy. However, one bottleneck in UWB systems is
sampling high-resolution UWB signals and extracting high-resolution timing
information. In this paper, UWB positioning systems are surveyed, with a focus
on sampling methods for handling UWB signals. Among the different sampling
methods, one traditional approach is sequential sampling, which is not a
real-time method and prevents UWB positioning systems from achieving higher
precision. Another approach applies Compressed Sensing (CS) to the UWB system
to achieve sub-mm positioning accuracy. In this paper, we compare different
TDOA-based UWB
systems with different sampling methods. In particular, several CS-UWB
algorithms for UWB signal reconstruction are compared in terms of positioning
accuracy. Simulation results in 2D and 3D experiments demonstrate performance
of different algorithms including typical BCS, OMP and BP algorithms. CS-UWB is
also compared with UWB positioning system based on the sequential sampling
method.
|
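Of the CS reconstruction algorithms compared in the survey (BCS, OMP, BP), OMP is the simplest to sketch. Below is a minimal generic Orthogonal Matching Pursuit on synthetic data, not the paper's UWB-specific setup; the dictionary size and sparsity level are illustrative assumptions:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the dictionary column most
    correlated with the residual, then re-fit the selected support by
    least squares and update the residual."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

# Synthetic compressed-sensing instance: 40 measurements, 100 atoms, 3-sparse.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))
Phi /= np.linalg.norm(Phi, axis=0)   # unit-norm dictionary columns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 0.7]
y = Phi @ x_true
x_hat = omp(Phi, y, sparsity=3)
```

After the least-squares refit the residual is orthogonal to the chosen columns, so each iteration selects a new atom; in the noiseless case a correctly identified support gives exact recovery.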
1307.7622 | Distributed Energy Trading: The Multiple-Microgrid Case | math.OC cs.MA | In this paper, a distributed convex optimization framework is developed for
energy trading between islanded microgrids. More specifically, the problem
consists of several islanded microgrids that exchange energy flows by means of
an arbitrary topology. Due to scalability issues and in order to safeguard
local information on cost functions, a subgradient-based cost minimization
algorithm is proposed that converges to the optimal solution in a practical
number of iterations and with a limited communication overhead. Furthermore,
this approach allows for a very intuitive economics interpretation that
explains the algorithm iterations in terms of "supply--demand model" and
"market clearing". Numerical results are given in terms of convergence rate of
the algorithm and attained costs for different network topologies.
|
1307.7720 | Herding the Crowd: Automated Planning for Crowdsourced Planning | cs.AI cs.HC | There has been significant interest in crowdsourcing and human computation.
One subclass of human computation applications comprises those directed at tasks that
involve planning (e.g. travel planning) and scheduling (e.g. conference
scheduling). Much of this work appears outside the traditional automated
planning forums, and at the outset it is not clear whether automated planning
has much of a role to play in these human computation systems. Interestingly
however, work on these systems shows that even primitive forms of automated
oversight of the human planner help significantly improve the
effectiveness of the humans/crowd. In this paper, we will argue that the
automated oversight used in these systems can be viewed as a primitive
automated planner, and that there are several opportunities for more
sophisticated automated planning in effectively steering crowdsourced planning.
Straightforward adaptation of current planning technology is however hampered
by the mismatch between the capabilities of human workers and automated
planners. We identify two important challenges that need to be overcome before
such adaptation of planning technology can occur: (i) interpreting the inputs
of the human workers (and the requester) and (ii) steering or critiquing the
plans being produced by the human workers armed only with incomplete domain and
preference models. In this paper, we discuss approaches for handling these
challenges, and characterize existing human computation systems in terms of the
specific choices they make in handling these challenges.
|
1307.7729 | Spectral methods for network community detection and graph partitioning | physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an | We consider three distinct and well studied problems concerning network
structure: community detection by modularity maximization, community detection
by statistical inference, and normalized-cut graph partitioning. Each of these
problems can be tackled using spectral algorithms that make use of the
eigenvectors of matrix representations of the network. We show that with
certain choices of the free parameters appearing in these spectral algorithms
the algorithms for all three problems are, in fact, identical, and hence that,
at least within the spectral approximations used here, there is no difference
between the modularity- and inference-based community detection methods, or
between either and graph partitioning.
|
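One of the three spectral algorithms discussed, community detection by modularity maximization, splits a network using the leading eigenvector of the modularity matrix B = A - k k^T / (2m) (Newman's method). A minimal sketch of that bipartition step, shown on a toy graph of two triangles joined by a single edge, might look like:

```python
import numpy as np

def modularity_bipartition(A):
    """Split a network into two communities using the leading eigenvector
    of the modularity matrix  B = A - k k^T / (2m),  where k is the degree
    vector and 2m the total degree; nodes are grouped by eigenvector sign."""
    k = A.sum(axis=1)
    two_m = k.sum()
    B = A - np.outer(k, k) / two_m
    vals, vecs = np.linalg.eigh(B)          # eigendecomposition of symmetric B
    leading = vecs[:, np.argmax(vals)]      # eigenvector of the largest eigenvalue
    return np.where(leading >= 0, 1, -1)    # community labels by sign

# Two triangles (nodes 0-2 and 3-5) joined by the bridge edge (2,3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
s = modularity_bipartition(A)
```

On this example the leading eigenvector is positive on one triangle and negative on the other, recovering the obvious two-community split (the overall sign of the labels is arbitrary).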
1307.7757 | Household Electricity Consumption Data Cleansing | cs.CE | Load curve data in power systems refers to users' electrical energy
consumption data periodically collected with meters. It has become one of the
most important assets for modern power systems. Many operational decisions are
made based on the information discovered in the data. Load curve data, however,
usually suffers from corruptions caused by various factors, such as data
transmission errors or malfunctioning meters. To solve the problem, tremendous
research efforts have been made on load curve data cleansing. Most existing
approaches apply outlier detection methods from the supply side (i.e.,
electricity service providers), which may only have aggregated load data. In
this paper, we propose to seek aid from the demand side (i.e., electricity
service users). With the help of readily available knowledge on consumers'
appliances, we present a new appliance-driven approach to load curve data
cleansing. This approach utilizes data generation rules and a Sequential Local
Optimization Algorithm (SLOA) to solve the Corrupted Data Identification
Problem (CDIP). We evaluate the performance of SLOA with real-world trace data
and synthetic data. The results indicate that, compared to existing load data
cleansing methods, such as B-spline smoothing, our approach has an overall
better performance and can effectively identify consecutive corrupted data.
Experimental results also demonstrate that our method is robust in various
tests. Our method provides a highly feasible and reliable solution to an
emerging industry application.
|
1307.7770 | A Connection between Good Rate-distortion Codes and Backward DMCs | cs.IT math.IT | Let $X^n\in\mathcal{X}^n$ be a sequence drawn from a discrete memoryless
source, and let $Y^n\in\mathcal{Y}^n$ be the corresponding reconstruction
sequence that is output by a good rate-distortion code. This paper establishes
a property of the joint distribution of $(X^n,Y^n)$. It is shown that for
$D>0$, the input-output statistics of an $R(D)$-achieving rate-distortion code
converge (in normalized relative entropy) to the output-input statistics of a
discrete memoryless channel (DMC). The DMC is "backward" in that it is a
channel from the reconstruction space $\mathcal{Y}^n$ to the source space
$\mathcal{X}^n$. It is also shown that the property does not necessarily hold
when normalized relative entropy is replaced by variational distance.
|
1307.7779 | An Overview of Load Balancing in HetNets: Old Myths and Open Problems | cs.IT cs.NI math.IT | Matching the demand for resources ("load") with the supply of resources
("capacity") is a basic problem occurring across many fields of engineering,
logistics, and economics, and has been considered extensively both in the
Internet and in wireless networks. The ongoing evolution of cellular
communication networks into dense, organic, and irregular heterogeneous
networks ("HetNets") has elevated load-awareness to a central problem, and
introduces many new subtleties. This paper explains how several long-standing
assumptions about cellular networks need to be rethought in the context of a
load-balanced HetNet: we highlight these as three deeply entrenched myths that
we then dispel. We survey and compare the primary technical approaches to
HetNet load balancing: (centralized) optimization, game theory, Markov decision
processes, and the newly popular cell range expansion (a.k.a. "biasing"), and
draw design lessons for OFDMA-based cellular systems. We also identify several
open areas for future exploration.
|
1307.7793 | Multi-dimensional Parametric Mincuts for Constrained MAP Inference | cs.LG cs.AI | In this paper, we propose novel algorithms for inferring the Maximum a
Posteriori (MAP) solution of discrete pairwise random field models under
multiple constraints. We show how this constrained discrete optimization
problem can be formulated as a multi-dimensional parametric mincut problem via
its Lagrangian dual, and prove that our algorithm isolates all constraint
instances for which the problem can be solved exactly. These multiple solutions
enable us to even deal with `soft constraints' (higher order penalty
functions). Moreover, we propose two practical variants of our algorithm to
solve problems with hard constraints. We also show how our method can be
applied to solve various constrained discrete optimization problems such as
submodular minimization and shortest path computation. Experimental evaluation
using the foreground-background image segmentation problem with statistical
constraints reveals that our method is faster and its results are closer to the
ground truth labellings compared with the popular continuous relaxation based
methods.
|
1307.7795 | Protein (Multi-)Location Prediction: Using Location Inter-Dependencies
in a Probabilistic Framework | q-bio.QM cs.CE cs.LG q-bio.GN | Knowing the location of a protein within the cell is important for
understanding its function, role in biological processes, and potential use as
a drug target. Much progress has been made in developing computational methods
that predict single locations for proteins, assuming that proteins localize to
a single location. However, it has been shown that proteins localize to
multiple locations. While a few recent systems have attempted to predict
multiple locations of proteins, they typically treat locations as independent
or capture inter-dependencies by treating each location-combination present in
the training set as an individual location-class. We present a new method and a
preliminary system we have developed that directly incorporates
inter-dependencies among locations into the multiple-location-prediction
process, using a collection of Bayesian network classifiers. We evaluate our
system on a dataset of single- and multi-localized proteins. Our results,
obtained by incorporating inter-dependencies, are significantly higher than
those obtained by classifiers that do not use inter-dependencies. The
performance of our system on multi-localized proteins is comparable to a top
performing system (YLoc+), without restricting predictions to be based only on
location-combinations present in the training set.
|
1307.7796 | Emergence of scaling in human-interest dynamics | physics.soc-ph cs.SI | Human behaviors are often driven by human interests. Despite intense recent
efforts in exploring the dynamics of human behaviors, little is known about
human-interest dynamics, partly due to the extreme difficulty in accessing the
human mind from observations. However, the availability of large-scale data,
such as those from e-commerce and smart-phone communications, makes it possible
to probe into and quantify the dynamics of human interest. Using three
prototypical "big data" sets, we investigate the scaling behaviors associated
with human-interest dynamics. In particular, from the data sets we uncover
power-law scaling associated with the three basic quantities: (1) the length of
continuous interest, (2) the return time for revisiting a certain interest, and (3)
interest ranking and transition. We argue that there are three basic
ingredients underlying human-interest dynamics: preferential return to
previously visited interests, inertial effect, and exploration of new
interests. We develop a biased random-walk model, incorporating the three
ingredients, to account for the observed power-law scaling relations. Our study
represents the first attempt to understand the dynamical processes underlying
human interest, which has significant applications in science and engineering,
commerce, as well as defense, in terms of specific tasks such as recommendation
and human-behavior prediction.
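The three ingredients named above (preferential return, inertia, exploration) can be illustrated with a toy simulation. The probabilities `p_stay` and `p_new` and the visit-count weighting are illustrative assumptions, not the paper's calibrated model:

```python
import random

def simulate_interests(steps, p_new=0.1, p_stay=0.3, seed=0):
    """Toy biased random walk over interests: with probability p_stay repeat
    the current interest (inertia), with probability p_new explore a brand-new
    interest (exploration), otherwise return to a past interest chosen with
    probability proportional to its visit count (preferential return)."""
    rng = random.Random(seed)
    visits = {0: 1}            # interest id -> visit count
    current, next_id = 0, 1
    trajectory = [0]
    for _ in range(steps - 1):
        r = rng.random()
        if r < p_stay:
            nxt = current                          # inertia
        elif r < p_stay + p_new:
            nxt, next_id = next_id, next_id + 1    # exploration
        else:                                      # preferential return
            ids = list(visits)
            weights = [visits[i] for i in ids]
            nxt = rng.choices(ids, weights=weights, k=1)[0]
        visits[nxt] = visits.get(nxt, 0) + 1
        current = nxt
        trajectory.append(nxt)
    return trajectory
```

Statistics of interest lengths and return times collected from such trajectories are what one would compare against the empirical power-law scalings.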
|
1307.7800 | Efficient Energy Minimization for Enforcing Statistics | cs.CV | Energy minimization algorithms, such as graph cuts, enable the computation of
the MAP solution under certain probabilistic models such as Markov random
fields. However, for many computer vision problems, the MAP solution under the
model is not the ground truth solution. In many problem scenarios, the system
has access to certain statistics of the ground truth. For instance, in image
segmentation, the area and boundary length of the object may be known. In these
cases, we want to estimate the most probable solution that is consistent with
such statistics, i.e., satisfies certain equality or inequality constraints.
The above constrained energy minimization problem is NP-hard in general, and
is usually solved using Linear Programming formulations, which relax the
integrality constraints. This paper proposes a novel method that finds the
discrete optimal solution of such problems by maximizing the corresponding
Lagrangian dual. This method can be applied to any constrained energy
minimization problem whose unconstrained version is polynomial time solvable,
and can handle multiple, equality or inequality, and linear or non-linear
constraints. We demonstrate the efficacy of our method on the
foreground/background image segmentation problem, and show that it produces
impressive segmentation results with less error, and runs more than 20 times
faster than the state-of-the-art LP relaxation based approaches.
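The dual-maximization idea can be sketched on a toy separable energy with a single linear (area) constraint; the energy, the constraint, and the step size here are illustrative assumptions, and the actual method handles graph-cut-solvable pairwise energies:

```python
def dual_ascent(unary, area, steps=50, lr=0.5):
    """Maximize the Lagrangian dual of: minimize sum_i unary[i] * x_i
    subject to sum_i x_i == area, with x_i in {0, 1}. The inner
    (unconstrained) minimization is exact per variable because the toy
    energy is separable, mirroring how the unconstrained problem is
    assumed polynomial-time solvable (e.g. by graph cuts)."""
    lam = 0.0
    x = [0] * len(unary)
    for _ in range(steps):
        # inner minimization of (unary[i] + lam) * x_i for each i
        x = [1 if u + lam < 0 else 0 for u in unary]
        # subgradient ascent step on the dual variable
        lam += lr * (sum(x) - area)
    return x, lam
```

At convergence the multiplier `lam` prices the constraint: exactly the variables whose unary cost beats `-lam` are switched on.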
|
1307.7808 | Automated Attack Planning | cs.AI cs.CR | Penetration Testing is a methodology for assessing network security, by
generating and executing possible attacks. Doing so automatically allows for
regular and systematic testing. A key question then is how to automatically
generate the attacks. A natural way to address this issue is as an attack
planning problem. In this thesis, we are concerned with the specific context of
regular automated pentesting, and use the term "attack planning" in that sense.
The following three research directions are investigated.
First, we introduce a conceptual model of computer network attacks, based on
an analysis of the penetration testing practices. We study how this attack
model can be represented in the PDDL language. Then we describe an
implementation that integrates a classical planner with a penetration testing
tool. This allows us to automatically generate attack paths for real world
pentesting scenarios, and to validate these attacks by executing them.
Secondly, we present efficient probabilistic planning algorithms,
specifically designed for this problem, that achieve industrial-scale runtime
performance (able to solve scenarios with several hundred hosts and exploits).
These algorithms take into account the probability of success of the actions
and their expected cost (for example in terms of execution time, or network
traffic generated).
Finally, we take a different direction: instead of trying to improve the
efficiency of the solutions developed, we focus on improving the model of the
attacker. We model the attack planning problem in terms of partially observable
Markov decision processes (POMDP). This grounds penetration testing in a
well-researched formalism. POMDPs allow the modelling of information gathering
as an integral part of the problem, thus providing for the first time a means
to intelligently mix scanning actions with actual exploits.
|
1307.7809 | POMDPs Make Better Hackers: Accounting for Uncertainty in
Penetration Testing | cs.AI cs.CR | Penetration Testing is a methodology for assessing network security, by
generating and executing possible hacking attacks. Doing so automatically
allows for regular and systematic testing. A key question is how to generate
the attacks. This is naturally formulated as planning under uncertainty, i.e.,
under incomplete knowledge about the network configuration. Previous work uses
classical planning, and requires costly pre-processing that reduces this uncertainty
by extensive application of scanning methods. By contrast, we herein model the
attack planning problem in terms of partially observable Markov decision
processes (POMDP). This allows us to reason about the knowledge available, and to
intelligently employ scanning actions as part of the attack. As one would
expect, this accurate solution does not scale. We devise a method that relies
on POMDPs to find good attacks on individual machines, which are then composed
into an attack on the network as a whole. This decomposition exploits network
structure to the extent possible, making targeted approximations (only) where
needed. Evaluating this method on a suitably adapted industrial test suite, we
demonstrate its effectiveness in both runtime and solution quality.
|
1307.7810 | Accurate Decoding of Pooled Sequenced Data Using Compressed Sensing | q-bio.QM cs.CE cs.IT math.IT q-bio.GN | In order to overcome the limitations imposed by DNA barcoding when
multiplexing a large number of samples in the current generation of
high-throughput sequencing instruments, we have recently proposed a new
protocol that leverages advances in combinatorial pooling design (group
testing) [doi:10.1371/journal.pcbi.1003010]. We have also demonstrated how this
new protocol would enable de novo selective sequencing and assembly of large,
highly-repetitive genomes. Here we address the problem of decoding pooled
sequenced data obtained from such a protocol. Our algorithm employs a
synergistic combination of ideas from compressed sensing and the decoding of
error-correcting codes. Experimental results on synthetic data for the rice
genome and real data for the barley genome show that our novel decoding
algorithm enables significantly higher quality assemblies than the previous
approach.
|
1307.7811 | A Novel Combinatorial Method for Estimating Transcript Expression with
RNA-Seq: Bounding the Number of Paths | q-bio.QM cs.CE cs.DS | RNA-Seq technology offers new high-throughput ways for transcript
identification and quantification based on short reads, and has recently
attracted great interest. The problem is usually modeled by a weighted splicing
graph whose nodes stand for exons and whose edges stand for split alignments to
the exons. The task consists of finding a number of paths, together with their
expression levels, which optimally explain the coverages of the graph under
various fitness functions, such as least sum of squares. In (Tomescu et al.
RECOMB-seq 2013) we showed that under general fitness functions, if we allow a
polynomially bounded number of paths in an optimal solution, this problem can
be solved in polynomial time by a reduction to a min-cost flow program. In this
paper we further refine this problem by asking for a bounded number k of paths
that optimally explain the splicing graph. This problem becomes NP-hard in the
strong sense, but we give a fast combinatorial algorithm based on dynamic
programming for it. In order to obtain a practical tool, we implement three
optimizations and heuristics, which achieve better performance on real data,
and similar or better performance on simulated data, than state-of-the-art
tools Cufflinks, IsoLasso and SLIDE. Our tool, called Traph, is available at
http://www.cs.helsinki.fi/gsa/traph/
|
1307.7813 | A polynomial delay algorithm for the enumeration of bubbles with length
constraints in directed graphs and its application to the detection of
alternative splicing in RNA-seq data | q-bio.QM cs.CE cs.DS | We present a new algorithm for enumerating bubbles with length constraints in
directed graphs. This problem arises in transcriptomics, where the question is
to identify all alternative splicing events present in a sample of mRNAs
sequenced by RNA-seq. This is the first polynomial-delay algorithm for this
problem and we show that in practice, it is faster than previous approaches.
This enables us to deal with larger instances and therefore to discover novel
alternative splicing events, especially long ones, that were previously
overlooked by existing methods.
|
1307.7820 | Faster Algorithms for RNA-folding using the Four-Russians method | q-bio.QM cs.CE cs.DS | The secondary structure that maximizes the number of non-crossing matchings
between complementary bases of an RNA sequence of length n can be computed in
O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians
method is a technique that will reduce the running time for certain dynamic
programming algorithms by a multiplicative factor after a preprocessing step
where solutions to all smaller subproblems of a fixed size are exhaustively
enumerated and solved. Frid and Gusfield (Algo. Mol. Biol., 2010) designed an
O(\frac{n^3}{\log n}) algorithm for RNA folding using the Four-Russians
technique; in their algorithm the preprocessing is interleaved with the main
computation.
We simplify the algorithm and the analysis by doing the preprocessing once
prior to the algorithm computation. We call this the two-vector method. We also
show variants where instead of exhaustive preprocessing, we only solve the
subproblems encountered in the main algorithm once and memoize the results. We
give a simple proof of correctness and explore the practical advantages over
the earlier method. The Nussinov algorithm admits an O(n^2) time parallel
algorithm. We show a parallel algorithm using the two-vector idea that improves
the time bound to O(\frac{n^2}{\log n}).
We discuss the organization of the data structures to exploit coalesced
memory access for fast running times. The ideas to organize the data structures
also help in improving the running time of the serial algorithms. For sequences
of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds
and the two-vector serial method takes about 57 seconds on a desktop and 15
seconds on a server. Among the serial algorithms, the two-vector and memoized
versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are
faster than Nussinov by up to a factor of 20.
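For reference, the cubic-time Nussinov baseline that the Four-Russians variants accelerate can be sketched directly from its recurrence (Watson-Crick pairs only; wobble pairs omitted for brevity):

```python
def nussinov(seq):
    """Nussinov DP: maximum number of non-crossing complementary base
    pairings in an RNA sequence, computed in O(n^3) time. dp[i][j] holds
    the optimum for the subsequence seq[i..j]."""
    pairs = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]               # base i left unpaired
            for k in range(i + 1, j + 1):     # or base i paired with base k
                if (seq[i], seq[k]) in pairs:
                    left = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

The Four-Russians two-vector method of this record speeds up exactly this table computation by precomputing answers for fixed-size blocks.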
|
1307.7821 | Algorithms for the Majority Rule (+) Consensus Tree and the Frequency
Difference Consensus Tree | cs.DS cs.CE q-bio.QM | This paper presents two new deterministic algorithms for constructing
consensus trees. Given an input of k phylogenetic trees with identical leaf
label sets and n leaves each, the first algorithm constructs the majority rule
(+) consensus tree in O(kn) time, which is optimal since the input size is
Omega(kn), and the second one constructs the frequency difference consensus
tree in min(O(kn^2), O(kn (k+log^2 n))) time.
|
1307.7824 | The generalized Robinson-Foulds metric | cs.DS cs.CE q-bio.QM | The Robinson-Foulds (RF) metric is arguably the most widely used measure of
phylogenetic tree similarity, despite its well-known shortcomings: For example,
moving a single taxon in a tree can result in a tree that has maximum distance
to the original one; but the two trees are identical if we remove the single
taxon. To remedy this, we propose a natural extension of the RF metric that
does not simply count identical clades but instead also takes similar clades
into
consideration. In contrast to previous approaches, our model requires the
matching between clades to respect the structure of the two trees, a property
that the classical RF metric exhibits, too. We show that computing this
generalized RF metric is, unfortunately, NP-hard. We then present a simple
Integer Linear Program for its computation, and evaluate it by an
all-against-all comparison of 100 trees from a benchmark data set. We find that
matchings that respect the tree structure differ significantly from those that
do not, underlining the importance of this natural condition.
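As a baseline, the classical clade-counting RF distance that this record generalizes can be computed directly; the nested-tuple encoding of rooted trees is an illustrative assumption:

```python
def clades(tree):
    """Collect the leaf set of every internal node of a rooted tree
    given as nested tuples, e.g. (("a", "b"), ("c", "d"))."""
    out = set()
    def walk(node):
        if isinstance(node, str):             # a leaf
            return frozenset([node])
        leaves = frozenset().union(*(walk(child) for child in node))
        out.add(leaves)
        return leaves
    walk(tree)
    return out

def rf_distance(t1, t2):
    """Classical RF distance: the number of clades present in exactly
    one of the two trees (symmetric difference of clade sets)."""
    return len(clades(t1) ^ clades(t2))
```

The generalized metric replaces this exact-match count with a structure-respecting matching between similar clades, which is what makes its computation NP-hard.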
|
1307.7825 | Computing the Skewness of the Phylogenetic Mean Pairwise Distance in
Linear Time | q-bio.QM cs.CE cs.DS | The phylogenetic Mean Pairwise Distance (MPD) is one of the most popular
measures for computing the phylogenetic distance between a given group of
species. More specifically, for a phylogenetic tree T and for a set of species
R represented by a subset of the leaf nodes of T, the MPD of R is equal to the
average cost of all possible simple paths in T that connect pairs of nodes in
R.
Among other phylogenetic measures, the MPD is used as a tool for deciding if
the species of a given group R are closely related. To do this, it is important
to compute not only the value of the MPD for this group but also the
expectation, the variance, and the skewness of this metric. Although efficient
algorithms have been developed for computing the expectation and the variance
of the MPD, there has been no approach so far for computing the skewness of this
measure.
In the present work we describe how to compute the skewness of the MPD on a
tree T optimally, in Theta(n) time; here n is the size of the tree T. To our
knowledge, this is the first result that leads to an exact, let alone
efficient, computation of the skewness for any popular phylogenetic distance
measure.
Moreover, we show how we can compute in Theta(n) time several interesting
quantities in T that can be possibly used as building blocks for computing
efficiently the skewness of other phylogenetic measures.
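The definition in the first paragraph translates directly into code; the pairwise path-cost function and the three-leaf example are illustrative assumptions:

```python
from itertools import combinations

def mpd(dist, R):
    """Mean Pairwise Distance: the average path cost over all unordered
    pairs of species in R, where dist(u, v) gives the cost of the simple
    path connecting leaves u and v in the tree T."""
    pairs = list(combinations(sorted(R), 2))
    return sum(dist(u, v) for u, v in pairs) / len(pairs)

# toy tree with three leaves and path costs a-b = 2, a-c = 4, b-c = 4
costs = {frozenset(p): c
         for p, c in [(("a", "b"), 2), (("a", "c"), 4), (("b", "c"), 4)]}
value = mpd(lambda u, v: costs[frozenset((u, v))], {"a", "b", "c"})
```

This naive computation is quadratic in |R|; the record's contribution is computing moments of this quantity (here, the skewness) in linear time in the tree size.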
|
1307.7828 | Characterizing Compatibility and Agreement of Unrooted Trees via Cuts in
Graphs | cs.DM cs.CE q-bio.QM | Deciding whether there is a single tree -a supertree- that summarizes the
evolutionary information in a collection of unrooted trees is a fundamental
problem in phylogenetics. We consider two versions of this question: agreement
and compatibility. In the first, the supertree is required to reflect precisely
the relationships among the species exhibited by the input trees. In the
second, the supertree can be more refined than the input trees.
Tree compatibility can be characterized in terms of the existence of a
specific kind of triangulation in a structure known as the display graph.
Alternatively, it can be characterized as a chordal graph sandwich problem in a
structure known as the edge label intersection graph. Here, we show that the
latter characterization yields a natural characterization of compatibility in
terms of minimal cuts in the display graph, which is closely related to
compatibility of splits. We then derive a characterization for agreement.
|
1307.7831 | Unifying Parsimonious Tree Reconciliation | q-bio.QM cs.CE cs.DS q-bio.PE | Evolution is a process that is influenced by various environmental factors,
e.g. the interactions between different species, genes, and biogeographical
properties. Hence, it is interesting to study the combined evolutionary history
of multiple species, their genes, and the environment they live in. A common
approach to address this research problem is to describe each individual
evolution as a phylogenetic tree and construct a tree reconciliation which is
parsimonious with respect to a given event model. Unfortunately, most of the
previous approaches are designed for only one of host-parasite systems, gene
tree/species tree reconciliation, or biogeography. Hence, a method is
desirable, which addresses the general problem of mapping phylogenetic trees
and covering all varieties of coevolving systems, including e.g., predator-prey
and symbiotic relationships. To close this gap, we introduce a generalized
cophylogenetic event model considering the combinatorial complete set of local
coevolutionary events. We give a dynamic programming based heuristic for
solving the maximum parsimony reconciliation problem in time O(n^2), for two
phylogenies each with at most n leaves. Furthermore, we present an exact
branch-and-bound algorithm which uses the results from the dynamic programming
heuristic for discarding partial reconciliations. The approach has been
implemented as a Java application which is freely available from
http://pacosy.informatik.uni-leipzig.de/coresym.
|
1307.7840 | On the Matrix Median Problem | q-bio.QM cs.CE cs.DM | The Genome Median Problem is an important problem in phylogenetic
reconstruction under rearrangement models. It can be stated as follows: given
three genomes, find a fourth that minimizes the sum of the pairwise
rearrangement distances between it and the three input genomes. Recently,
Feijao and Meidanis extended the algebraic theory for genome rearrangement to
allow for linear chromosomes, thus yielding a new rearrangement model (the
algebraic model), very close to the celebrated DCJ model. In this paper, we
study the genome median problem under the algebraic model, whose complexity is
currently open, proposing a more general form of the problem, the matrix median
problem. It is known that, for any metric distance, at least one of the corners
is a 4/3-approximation of the median. Our results allow us to compute up to
three additional matrix median candidates, all of them with approximation
ratios at least as good as the best corner, when the input matrices come from
genomes. From the application point of view, it is usually more interesting to
locate medians farther from the corners. We also show a fourth median candidate
that gives better results in cases we tried. However, we do not have proven
bounds for this fourth candidate yet.
|
1307.7842 | A Fixed-Parameter Algorithm for Minimum Common String Partition with Few
Duplications | cs.DS cs.CE q-bio.QM | Motivated by the study of genome rearrangements, the NP-hard Minimum Common
String Partition problem asks, given two strings, to split both strings into
an identical set of blocks. We consider an extension of this problem to
unbalanced strings, so that some elements may not be covered by any block. We
present an efficient fixed-parameter algorithm for the parameters number k of
blocks and maximum occurrence d of a letter in either string. We then evaluate
this algorithm on bacteria genomes and synthetic data.
|
1307.7848 | An Integrated System for 3D Gaze Recovery and Semantic Analysis of Human
Attention | cs.CV | This work describes a computer vision system that enables pervasive mapping
and monitoring of human attention. The key contribution is that our methodology
enables full 3D recovery of the gaze pointer, human view frustum and associated
human centered measurements directly into an automatically computed 3D model in
real-time. We apply RGB-D SLAM and descriptor matching methodologies for the 3D
modeling, localization and fully automated annotation of ROIs (regions of
interest) within the acquired 3D model. This innovative methodology will open
new avenues for attention studies in real world environments, bringing new
potential into automated processing for human factors technologies.
|
1307.7851 | Hybrid Affinity Propagation | cs.CV | In this paper, we address the problem of managing tagged images with hybrid
summarization. We formulate this problem as finding a few image exemplars to
represent the image set semantically and visually, and solve it in a hybrid way
by exploiting both visual and textual information associated with images. We
propose a novel approach, called homogeneous and heterogeneous message
propagation ($\text{H}^\text{2}\text{MP}$). Similar to the affinity propagation
(AP) approach, $\text{H}^\text{2}\text{MP}$ reduces the conventional
\emph{vector} message propagation to \emph{scalar} message propagation to make
the algorithm more efficient. Unlike AP, which can only handle homogeneous
data, $\text{H}^\text{2}\text{MP}$ generalizes it to exploit extra
heterogeneous relations; the generalization is non-trivial, as the reduction
from vector messages to scalar messages is more challenging. The main
advantages of our
approach lie in 1) that $\text{H}^\text{2}\text{MP}$ exploits visual similarity
and, in addition, the useful information from the associated tags, including
the association relations between images and tags and the relations among
tags,
and 2) that the summary is both visually and semantically satisfactory. In
addition, our approach can also present a textual summary to a tagged image
collection, which can be used to automatically generate a textual description.
The experimental results demonstrate the effectiveness and efficiency of the
proposed approach.
|
1307.7852 | Scalable $k$-NN graph construction | cs.CV cs.LG stat.ML | The $k$-NN graph has played a central role in increasingly popular
data-driven techniques for various learning and vision tasks; yet, finding an
efficient and effective way to construct $k$-NN graphs remains a challenge,
especially for large-scale high-dimensional data. In this paper, we propose a
new approach to construct approximate $k$-NN graphs with emphasis on
efficiency and accuracy. We hierarchically and randomly divide the data points
into subsets and build an exact neighborhood graph over each subset, achieving
a base approximate neighborhood graph; we then repeat this process several
times to generate multiple neighborhood graphs, which are combined to yield a
more accurate approximate neighborhood graph. Furthermore, we propose a
neighborhood propagation scheme to further enhance the accuracy. We show both
theoretical and empirical accuracy and efficiency of our approach to $k$-NN
graph construction and demonstrate significant speed-up in dealing with large
scale visual data.
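The divide-and-combine scheme described above can be sketched as follows. The subset size, repeat count, and pruning policy are illustrative assumptions; the paper additionally uses hierarchical division and a neighborhood-propagation refinement not shown here:

```python
import math
import random

def knn_graph_approx(points, k=2, repeats=4, subset=20, seed=0):
    """Approximate k-NN graph: randomly split the points into small
    subsets, build exact neighborhood edges inside each subset, repeat
    with fresh random splits, and keep the k closest neighbours seen so
    far for each point (combining the per-round graphs)."""
    rng = random.Random(seed)
    n = len(points)
    best = {i: {} for i in range(n)}   # i -> {j: distance}
    idx = list(range(n))
    for _ in range(repeats):
        rng.shuffle(idx)
        for s in range(0, n, subset):
            block = idx[s:s + subset]
            for i in block:            # exact graph within the subset
                for j in block:
                    if i != j:
                        d = math.dist(points[i], points[j])
                        best[i][j] = min(best[i].get(j, d), d)
        for i in range(n):             # prune to the k nearest seen so far
            best[i] = dict(sorted(best[i].items(), key=lambda kv: kv[1])[:k])
    return {i: set(best[i]) for i in range(n)}
```

Each round costs only the sum of squared subset sizes, which is what makes the approach scale to large high-dimensional data.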
|
1307.7895 | Wavelet Analysis of Dynamic Behaviors of the Large Interconnected Power
System | cs.SY | In this paper, the simulation of the disturbance propagation through a large
power system is performed on the WSCC 127 bus test system. The signal frequency
analysis from several parts of the power system is performed by applying the
Wavelet Transform (WT). The results show that this approach provides the system
operators with some useful information regarding the identification of the
power system low-frequency electromechanical oscillations, the identification
of the coherent groups of generators and the insight into the speed retardation
of some parts of the power system. The ability to localize the disturbance is
based on the disturbance propagation through the power system and the
time-frequency analysis performed by using the WT is presented along with
detailed physical interpretation of the used approach.
|
1307.7897 | Energy Distribution of EEG Signals: EEG Signal Wavelet-Neural Network
Classifier | cs.NE q-bio.NC | In this paper, a wavelet-based neural network (WNN) classifier for
recognizing EEG signals is implemented and tested on three sets of EEG signals
(healthy subjects, patients with epilepsy and patients with epileptic syndrome
during the seizure). First, the Discrete Wavelet Transform (DWT) with the
Multi-Resolution Analysis (MRA) is applied to decompose the EEG signal into
resolution levels corresponding to its components (delta, theta, alpha, beta
and gamma), and Parseval's theorem is employed to extract the percentage
distribution of energy features of the EEG signal at different
resolution levels. Second, the neural network (NN) classifies these extracted
features to identify the EEG type according to the percentage distribution of
energy features. The performance of the proposed algorithm has been evaluated
using a total of 300 EEG signals. The results showed that the proposed
classifier has the ability to recognize and classify EEG signals efficiently.
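The energy-feature extraction can be sketched with a Haar wavelet. The record does not specify the mother wavelet, so Haar (the simplest DWT) is an illustrative assumption, and the signal length is assumed divisible by 2^levels:

```python
def haar_dwt(signal):
    """One Haar DWT step: returns (approximation, detail) coefficients
    for a signal of even length."""
    a = [(signal[i] + signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal), 2)]
    return a, d

def energy_distribution(signal, levels=3):
    """Percentage of total signal energy (by Parseval's theorem) in each
    detail sub-band plus the final approximation, forming the kind of
    energy-feature vector a classifier could consume."""
    total = sum(x * x for x in signal)
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(100 * sum(x * x for x in detail) / total)
    feats.append(100 * sum(x * x for x in approx) / total)
    return feats
```

Because the Haar transform is orthonormal, the sub-band energies always sum to 100% of the input energy, which is the Parseval property the feature extraction relies on.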
|
1307.7925 | Detecting Superbubbles in Assembly Graphs | cs.DS cs.CE cs.DM q-bio.QM | We introduce a new concept of a subgraph class called a superbubble for
analyzing assembly graphs, and propose an efficient algorithm for detecting it.
Most assembly algorithms utilize assembly graphs like the de Bruijn graph or
the overlap graph constructed from reads. From these graphs, many assembly
algorithms first detect simple local graph structures (motifs), such as tips
and bubbles, mainly to find sequencing errors. These motifs are easy to detect,
but they are sometimes too simple to deal with more complex errors. The
superbubble is an extension of the bubble, which is also important for
analyzing assembly graphs. Though superbubbles are much more complex than
ordinary bubbles, we show that they can be efficiently enumerated. We propose
an average-case linear time algorithm (i.e., O(n+m) for a graph with n vertices
and m edges) under a reasonable random graph model, though the worst-case time
complexity of our algorithm is quadratic (i.e., O(n(n+m))). Moreover, the
algorithm is practically very fast: Our experiments show that our algorithm
runs in reasonable time with a single CPU core even against a very large graph
of a whole human genome.
|
1307.7948 | On the accuracy of the Viterbi alignment | stat.ME cs.LG stat.CO | In a hidden Markov model, the underlying Markov chain is usually hidden.
Often, the maximum likelihood alignment (Viterbi alignment) is used as its
estimate. Although having the biggest likelihood, the Viterbi alignment can
behave very atypically by passing through states that are highly unexpected. To avoid
such situations, the Viterbi alignment can be modified by forcing it not to
pass these states. In this article, an iterative procedure for improving the
Viterbi alignment is proposed and studied. The iterative approach is compared
with a simple bunch approach where a number of states with low probability are
all replaced at the same time. It can be seen that the iterative way of
adjusting the Viterbi alignment is more efficient and it has several advantages
over the bunch approach. The same iterative algorithm for improving the Viterbi
alignment can be used in the case of peeping, that is, when it is possible to
reveal hidden states. In addition, lower bounds for classification
probabilities of the Viterbi alignment under different conditions on the model
parameters are studied.
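The Viterbi alignment being adjusted here is the standard max-product dynamic program; a minimal sketch follows (the two-state HMM in the test is an illustrative example, and probabilities are multiplied directly rather than in log space for brevity):

```python
def viterbi(obs, states, start, trans, emit):
    """Viterbi algorithm: the maximum likelihood hidden state path
    (alignment) for an observation sequence under an HMM with initial
    probabilities `start`, transition matrix `trans`, and emission
    probabilities `emit`, all given as nested dicts."""
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        cur, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] * trans[p][s])
            ptr[s] = prev
            cur[s] = V[-1][prev] * trans[prev][s] * emit[s][o]
        V.append(cur)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

The modifications studied in the record then forbid low-probability states along this path and re-run the alignment, either one state at a time (iteratively) or in a bunch.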
|
1307.7970 | Short Term Memory Capacity in Networks via the Restricted Isometry
Property | cs.IT cs.NE math.IT | Cortical networks are hypothesized to rely on transient network activity to
support short term memory (STM). In this paper we study the capacity of
randomly connected recurrent linear networks for performing STM when the input
signals are approximately sparse in some basis. We leverage results from
compressed sensing to provide rigorous non-asymptotic recovery guarantees,
quantifying the impact of the input sparsity level, the input sparsity basis,
and the network characteristics on the system capacity. Our analysis
demonstrates that network memory capacities can scale superlinearly with the
number of nodes, and in some situations can achieve STM capacities that are
much larger than the network size. We provide perfect recovery guarantees for
finite sequences and recovery bounds for infinite sequences. The latter
analysis predicts that network STM systems may have an optimal recovery length
that balances errors due to omission and recall mistakes. Furthermore, we show
that the conditions yielding optimal STM capacity can be embodied in several
network topologies, including networks with sparse or dense connectivities.
|
1307.7973 | Connecting Language and Knowledge Bases with Embedding Models for
Relation Extraction | cs.CL cs.IR cs.LG | This paper proposes a novel approach for relation extraction from free text
which is trained to jointly use information from the text and from existing
knowledge. Our model is based on two scoring functions that operate by learning
low-dimensional embeddings of words and of entities and relationships from a
knowledge base. We empirically show on New York Times articles aligned with
Freebase relations that our approach is able to efficiently use the extra
information provided by a large subset of Freebase data (4M entities, 23k
relationships) to improve over existing methods that rely on text features
alone.
|
1307.7974 | Image Tag Refinement by Regularized Latent Dirichlet Allocation | cs.IR | Tagging is nowadays the most prevalent and practical way to make images
searchable. However, in reality many manually-assigned tags are irrelevant to
image content and hence are not reliable for applications. A lot of recent
efforts have been conducted to refine image tags. In this paper, we propose to
do tag refinement from the angle of topic modeling and present a novel
graphical model, regularized Latent Dirichlet Allocation (rLDA). In the
proposed approach, tag similarity and tag relevance are jointly estimated in an
iterative manner, so that they can benefit from each other, and the multi-wise
relationships among tags are explored. Moreover, both the statistics of tags
and the visual affinities of images in the corpus are leveraged to help topic
modeling. We also analyze the superiority of our approach from the deep
structure perspective. The experiments on tag ranking and image retrieval
demonstrate the advantages of the proposed method.
|
1307.7981 | Likelihood-ratio calibration using prior-weighted proper scoring rules | stat.ML cs.LG | Prior-weighted logistic regression has become a standard tool for calibration
in speaker recognition. Logistic regression is the optimization of the expected
value of the logarithmic scoring rule. We generalize this via a parametric
family of proper scoring rules. Our theoretical analysis shows how different
members of this family induce different relative weightings over a spectrum of
applications of which the decision thresholds range from low to high. Special
attention is given to the interaction between prior weighting and proper
scoring rule parameters. Experiments on NIST SRE'12 suggest that for
applications with low false-alarm rate requirements, scoring rules tailored to
emphasize higher score thresholds may give better accuracy than logistic
regression.
|
1307.7982 | Flavor Pairing in Medieval European Cuisine: A Study in Cooking with
Dirty Data | physics.soc-ph cs.CY cs.SI physics.data-an | An important part of cooking with computers is using statistical methods to
create new, flavorful ingredient combinations. The flavor pairing hypothesis
states that culinary ingredients with common chemical flavor components combine
well to produce pleasant dishes. It has been recently shown that this design
principle is a basis for modern Western cuisine and is reversed for Asian
cuisine.
Such data-driven analysis compares the chemistry of ingredients to ingredient
sets found in recipes. However, analytics-based generation of novel flavor
profiles can only be as good as the underlying chemical and recipe data.
Incomplete, inaccurate, and irrelevant data may degrade flavor pairing
inferences. Chemical data on flavor compounds is incomplete due to the nature
of the experiments that must be conducted to obtain it. Recipe data may have
issues due to text parsing errors, imprecision in textual descriptions of
ingredients, and the fact that the same ingredient may be known by different
names in different recipes. Moreover, the process of matching ingredients in
chemical data and recipe data may be fraught with mistakes. Much of the
`dirtiness' of the data cannot be cleansed even with manual curation.
In this work, we collect a new data set of recipes from Medieval Europe
before the Columbian Exchange and investigate the flavor pairing hypothesis
historically. To investigate the role of data incompleteness and error as part
of this hypothesis testing, we use two separate chemical compound data sets
with different levels of cleanliness. Notably, the different data sets give
conflicting conclusions about the flavor pairing hypothesis in Medieval Europe.
As a contribution towards social science, we obtain inferences about the
evolution of culinary arts when many new ingredients are suddenly made
available.
|
1307.7993 | Sharp Threshold for Multivariate Multi-Response Linear Regression via
Block Regularized Lasso | cs.LG stat.ML | In this paper, we investigate a multivariate multi-response (MVMR) linear
regression problem, which contains multiple linear regression models with
differently distributed design matrices, and different regression and output
vectors. The goal is to recover the support union of all regression vectors
using $l_1/l_2$-regularized Lasso. We characterize sufficient and necessary
conditions on sample complexity \emph{as a sharp threshold} to guarantee
successful recovery of the support union. Namely, if the sample size is above
the threshold, then $l_1/l_2$-regularized Lasso correctly recovers the support
union; and if the sample size is below the threshold, $l_1/l_2$-regularized
Lasso fails to recover the support union. In particular, the threshold
precisely captures the impact of the sparsity of regression vectors and the
statistical properties of the design matrices on sample complexity. Therefore,
the threshold function also captures the advantages of joint support union
recovery using multi-task Lasso over individual support recovery using
single-task Lasso.
|
1307.8007 | Classical-Quantum Arbitrarily Varying Wiretap Channel: Ahlswede
dichotomy, Positivity, Resources, Super Activation | cs.IT math.IT quant-ph | We establish Ahlswede dichotomy for arbitrarily varying classical-quantum
wiretap channels. This means that either the deterministic secrecy capacity of
an arbitrarily varying classical-quantum wiretap channel is zero or it equals
its randomness-assisted secrecy capacity. We analyze the secrecy capacity of
arbitrarily varying classical-quantum wiretap channels when the sender and the
receiver use various resources. It turns out that having randomness, common
randomness, and correlation as resources is very helpful for achieving a
positive deterministic secrecy capacity of arbitrarily varying
classical-quantum wiretap channels. We prove the phenomenon of super-activation
for arbitrarily varying classical-quantum wiretap channels, i.e., if two
arbitrarily varying classical-quantum wiretap channels, each with zero
deterministic secrecy capacity, are used together, they allow perfectly secure
transmission.
|
1307.8012 | A Study on Classification in Imbalanced and Partially-Labelled Data
Streams | astro-ph.IM cs.LG | The domain of radio astronomy is currently facing significant computational
challenges, foremost amongst which are those posed by the development of the
world's largest radio telescope, the Square Kilometre Array (SKA). Preliminary
specifications for this instrument suggest that the final design will
incorporate between 2000 and 3000 individual 15 metre receiving dishes, which
together can be expected to produce a data rate of many TB/s. Given such a high
data rate, it becomes crucial to consider how this information will be
processed and stored to maximise its scientific utility. In this paper, we
consider one possible data processing scenario for the SKA, for the purposes of
an all-sky pulsar survey. In particular we treat the selection of promising
signals from the SKA processing pipeline as a data stream classification
problem. We consider the feasibility of classifying signals that arrive via an
unlabelled and heavily class imbalanced data stream, using currently available
algorithms and frameworks. Our results indicate that existing stream learners
exhibit unacceptably low recall on real astronomical data when used in standard
configuration; however, good false positive performance and comparable accuracy
to static learners suggests that they have definite potential as an on-line
solution to this particular big data challenge.
|
1307.8040 | Stabilization of Nonlinear Delay Systems Using Approximate Predictors
and High-Gain Observers | math.OC cs.SY | We provide a solution to the heretofore open problem of stabilization of
systems with arbitrarily long delays at the input and output of a nonlinear
system using output feedback only. The solution is global, employs the
predictor approach over the period that combines the input and output delays,
addresses nonlinear systems with sampled measurements and with control applied
using a zero-order hold, and requires that the sampling/holding periods be
sufficiently short, though not necessarily constant. Our approach considers a
class of globally Lipschitz strict-feedback systems with disturbances and
employs an appropriately constructed successive approximation of the predictor
map, a high-gain sampled-data observer, and a linear stabilizing feedback for
the delay-free system. The obtained results guarantee robustness to
perturbations of the sampling schedule and different sampling and holding
periods are considered. The approach is specialized to linear systems, where
the predictor is available explicitly.
|
1307.8049 | Optimistic Concurrency Control for Distributed Unsupervised Learning | cs.LG cs.AI cs.DC | Research on distributed machine learning algorithms has focused primarily on
one of two extremes - algorithms that obey strict concurrency constraints or
algorithms that obey few or no such constraints. We consider an intermediate
alternative in which algorithms optimistically assume that conflicts are
unlikely and if conflicts do arise a conflict-resolution protocol is invoked.
We view this "optimistic concurrency control" paradigm as particularly
appropriate for large-scale machine learning algorithms, particularly in the
unsupervised setting. We demonstrate our approach in three problem areas:
clustering, feature learning and online facility location. We evaluate our
methods via large-scale experiments in a cluster computing environment.
|
1307.8057 | Extracting Connected Concepts from Biomedical Texts using Fog Index | cs.CL cs.IR | In this paper, we establish Fog Index (FI) as a text filter to locate the
sentences in texts that contain connected biomedical concepts of interest. To
do so, we have used 24 random papers each containing four pairs of connected
concepts. For each pair, we categorize sentences based on whether they contain
both, one, or none of the concepts. We then use FI to measure the difficulty of the
sentences of each category and find that sentences containing both of the
concepts have low readability. We rank sentences of a text according to their
FI and select 30 percent of the most difficult sentences. We use an association
matrix to track the most frequent pairs of concepts in them. This matrix
reports that the first filter produces some pairs that hold almost no
connections. To remove these unwanted pairs, we use the Equally Weighted
Harmonic Mean of their Positive Predictive Value (PPV) and Sensitivity as a
second filter. Experimental results demonstrate the effectiveness of our
method.
|
1307.8060 | Extracting Information-rich Part of Texts using Text Denoising | cs.IR cs.CL | The aim of this paper is to report on a novel text reduction technique,
called Text Denoising, that highlights information-rich content when processing
a large volume of text data, especially from the biomedical domain. The core
feature of the technique, the text readability index, embodies the hypothesis
that complex text is more information-rich than the rest. When applied to tasks
like biomedical relation bearing text extraction, keyphrase indexing and
extracting sentences describing protein interactions, it is evident that the
reduced set of text produced by text denoising is more information-rich than
the rest.
|
1307.8083 | TOFEC: Achieving Optimal Throughput-Delay Trade-off of Cloud Storage
Using Erasure Codes | cs.NI cs.IR cs.PF | Our paper presents solutions using erasure coding, parallel connections to
the storage cloud, and limited chunking (i.e., dividing the object into a few
smaller segments) together to significantly improve the delay performance of
uploading and downloading data in and out of cloud storage.
TOFEC is a strategy that helps a front-end proxy adapt to the level of workload
by treating scalable cloud storage (e.g. Amazon S3) as a shared resource
requiring admission control. Under light workloads, TOFEC creates a larger
number of smaller chunks and
uses more parallel connections per file, minimizing service delay. Under heavy
workloads, TOFEC automatically reduces the level of chunking (fewer chunks with
increased size) and uses fewer parallel connections to reduce overhead,
resulting in higher throughput and preventing queueing delay. Our trace-driven
simulation results show that TOFEC's adaptation mechanism converges to an
appropriate code that provides the optimal delay-throughput trade-off without
reducing system capacity. Compared to a non-adaptive strategy optimized for
throughput, TOFEC delivers 2.5x lower latency under light workloads; compared
to a non-adaptive strategy optimized for latency, TOFEC can scale to support
over 3x as many requests.
|
1307.8084 | Combining Answer Set Programming and POMDPs for Knowledge Representation
and Reasoning on Mobile Robots | cs.AI cs.RO | For widespread deployment in domains characterized by partial observability,
non-deterministic actions and unforeseen changes, robots need to adapt sensing,
processing and interaction with humans to the tasks at hand. While robots
typically cannot process all sensor inputs or operate without substantial
domain knowledge, it is a challenge to provide accurate domain knowledge and
humans may not have the time and expertise to provide elaborate and accurate
feedback. The architecture described in this paper combines declarative
programming and probabilistic reasoning to address these challenges, enabling
robots to: (a) represent and reason with incomplete domain knowledge, resolving
ambiguities and revising existing knowledge using sensor inputs and minimal
human feedback; and (b) probabilistically model the uncertainty in sensor input
processing and navigation. Specifically, Answer Set Programming (ASP), a
declarative programming paradigm, is combined with hierarchical partially
observable Markov decision processes (POMDPs), using domain knowledge to revise
probabilistic beliefs, and using positive and negative observations for early
termination of tasks that can no longer be pursued. All algorithms are
evaluated in simulation and on mobile robots locating target objects in indoor
domains.
|
1307.8104 | Neural Network Capacity for Multilevel Inputs | cs.NE | This paper examines the memory capacity of generalized neural networks.
Hopfield networks trained with a variety of learning techniques are
investigated for their capacity both for binary and non-binary alphabets. It is
shown that the capacity can be much increased when multilevel inputs are used.
New learning strategies are proposed to increase Hopfield network capacity, and
the scalability of these methods is also examined with respect to the size of the
network. The ability to recall entire patterns from stimulation of a single
neuron is examined for the increased capacity networks.
|
1307.8136 | DeBaCl: A Python Package for Interactive DEnsity-BAsed CLustering | stat.ME cs.LG stat.ML | The level set tree approach of Hartigan (1975) provides a probabilistically
based and highly interpretable encoding of the clustering behavior of a
dataset. By representing the hierarchy of data modes as a dendrogram of the
level sets of a density estimator, this approach offers many advantages for
exploratory analysis and clustering, especially for complex and
high-dimensional data. Several R packages exist for level set tree estimation,
but their practical usefulness is limited by computational inefficiency,
absence of interactive graphical capabilities and, from a theoretical
perspective, reliance on asymptotic approximations. To make it easier for
practitioners to capture the advantages of level set trees, we have written the
Python package DeBaCl for DEnsity-BAsed CLustering. In this article we
illustrate how DeBaCl's level set tree estimates can be used for difficult
clustering tasks and interactive graphical data analysis. The package is
intended to promote the practical use of level set trees through improvements
in computational efficiency and a high degree of user customization. In
addition, the flexible algorithms implemented in DeBaCl enjoy finite sample
accuracy, as demonstrated in recent literature on density clustering. Finally,
we show the level set tree framework can be easily extended to deal with
functional data.
|
1307.8182 | POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration
Testing | cs.AI cs.CR | Penetration Testing is a methodology for assessing network security, by
generating and executing possible hacking attacks. Doing so automatically
allows for regular and systematic testing. A key question is how to generate
the attacks. This is naturally formulated as planning under uncertainty, i.e.,
under incomplete knowledge about the network configuration. Previous work uses
classical planning and requires a costly pre-process that reduces this uncertainty
through extensive application of scanning methods. By contrast, we herein model the
attack planning problem in terms of partially observable Markov decision
processes (POMDPs). This allows us to reason about the available knowledge, and to
intelligently employ scanning actions as part of the attack. As one would
expect, this accurate solution does not scale. We devise a method that relies
on POMDPs to find good attacks on individual machines, which are then composed
into an attack on the network as a whole. This decomposition exploits network
structure to the extent possible, making targeted approximations (only) where
needed. Evaluating this method on a suitably adapted industrial test suite, we
demonstrate its effectiveness in both runtime and solution quality.
|
1307.8187 | Towards Minimax Online Learning with Unknown Time Horizon | cs.LG | We consider online learning when the time horizon is unknown. We apply a
minimax analysis, beginning with the fixed horizon case, and then moving on to
two unknown-horizon settings, one that assumes the horizon is chosen randomly
according to some known distribution, and the other which allows the adversary
full control over the horizon. For the random horizon setting with restricted
losses, we derive a fully optimal minimax algorithm. And for the adversarial
horizon setting, we prove a nontrivial lower bound which shows that the
adversary obtains strictly more power than when the horizon is fixed and known.
Based on the minimax solution of the random horizon setting, we then propose a
new adaptive algorithm which "pretends" that the horizon is drawn from a
distribution from a special family, but no matter how the actual horizon is
chosen, the worst-case regret is of the optimal rate. Furthermore, our
algorithm can be combined and applied in many ways, for instance, to online
convex optimization, follow-the-perturbed-leader, the exponential weights
algorithm, and first-order bounds. Experiments show that our algorithm outperforms many
other existing algorithms in an online linear optimization setting.
|
1307.8199 | Technical Report: An MGF-based Unified Framework to Determine the Joint
Statistics of Partial Sums of Ordered i.n.d. Random Variables | cs.IT cs.PF math.IT | The joint statistics of partial sums of ordered random variables (RVs) are
often needed for the accurate performance characterization of a wide variety of
wireless communication systems. A unified analytical framework to determine the
joint statistics of partial sums of ordered independent and identically
distributed (i.i.d.) random variables was recently presented. However, the
identical distribution assumption may not be valid in several real-world
applications. With this motivation in mind, we consider in this paper the more
general case in which the random variables are independent but not necessarily
identically distributed (i.n.d.). More specifically, we extend the previous
analysis and introduce a new more general unified analytical framework to
determine the joint statistics of partial sums of ordered i.n.d. RVs. Our
mathematical formalism is illustrated with an application on the exact
performance analysis of the capture probability of generalized selection
combining (GSC)-based RAKE receivers operating over frequency-selective fading
channels with a non-uniform power delay profile. We also discuss a couple of
other sample applications of the generic results presented in this work.
|
1307.8201 | Non-homogeneous Two-Rack Model for Distributed Storage Systems | cs.IT math.IT | In the traditional two-rack distributed storage system (DSS) model, due to
the assumption that the storage capacity of each node is the same, the minimum
bandwidth regenerating (MBR) point becomes infeasible. In this paper, we design
a new non-homogeneous two-rack model by proposing a generalization of the
threshold function used to compute the tradeoff curve. We prove that by having
the nodes in the rack with higher regenerating bandwidth store more
information, all the points on the tradeoff curve, including the MBR point,
become feasible. Finally, we show how the non-homogeneous two-rack model
outperforms the traditional model in the tradeoff curve between the storage per
node and the repair bandwidth.
|
1307.8225 | A Novel Architecture for Relevant Blog Page Identification | cs.IR cs.CL | Blogs are undoubtedly the richest source of information available in
cyberspace. Blogs can be of various natures: personal blogs contain posts on
mixed issues, while domain-specific blogs contain posts on particular topics.
For this reason, blogs offer a wide variety of relevant information that is
often focused. A general search engine returns a huge collection of web pages
which may or may not contain correct answers, since the web is a repository of
information of all kinds, and a user has to go through various documents before
finding what he was originally looking for, which is a very time-consuming
process. The search can therefore be made more focused and accurate if it is
limited to the blogosphere instead of general web pages, the reason being that
blogs are more focused in terms of information. The user will thus only get
related blogs in response to his query. These results are then ranked according
to our proposed method and finally presented to the user in descending order.
|
1307.8230 | An Information Theoretic Point of View to Contention Resolution | cs.IT cs.NI math.IT | We consider a slotted wireless network in an infrastructure setup with a base
station (or an access point) and N users. The wireless channel gain between the
base station and the users is assumed to be i.i.d., and the base station seeks
to schedule the user with the highest channel gain in every slot (opportunistic
scheduling). We assume that the identity of the user with the highest channel
gain is resolved using a series of contention slots and with feedback from the
base station. In this setup, we formulate the contention resolution problem for
opportunistic scheduling as identifying a random threshold (channel gain) that
separates the best channel from the other samples. We show that the average
delay to resolve contention is related to the entropy of the random threshold.
We illustrate our formulation by studying the opportunistic splitting algorithm
(OSA) for i.i.d. wireless channel [9]. We note that the thresholds of OSA
correspond to a maximal probability allocation scheme. We conjecture that
maximal probability allocation is an entropy minimizing strategy and a delay
minimizing strategy for i.i.d. wireless channel. Finally, we discuss the
applicability of this framework to a few other network scenarios.
|
1307.8232 | Maximum-Hands-Off Control and L1 Optimality | math.OC cs.IT cs.SY math.IT | In this article, we propose a new paradigm of control, called a
maximum-hands-off control. A hands-off control is defined as a control that has
a much shorter support than the horizon length. The maximum-hands-off control
is the minimum-support (or sparsest) control among all admissible controls. We
first prove that a solution to an L1-optimal control problem gives a
maximum-hands-off control, and vice versa. This result rationalizes the use of
L1 optimality in computing a maximum-hands-off control. The solution has in
general the "bang-off-bang" property, and hence the control may be
discontinuous. We then propose an L1/L2-optimal control to obtain a continuous
hands-off control. Examples are shown to illustrate the effectiveness of the
proposed control method.
|
1307.8233 | A Prototyping Environment for Integrated Artificial Attention Systems | cs.CV | Artificial visual attention systems aim to support technical systems in
visual tasks by applying the concepts of selective attention observed in humans
and other animals. Such systems are typically evaluated against ground truth
obtained from human gaze-data or manually annotated test images. When applied
to robotics, the systems are required to be adaptable to the target system.
Here, we describe a flexible environment based on a robotic middleware layer
allowing the development and testing of attention-guided vision systems. In
such a framework, the systems can be tested with input from various sources,
different attention algorithms at the core, and diverse subsequent tasks.
|
1307.8240 | On Finding a Subset of Healthy Individuals from a Large Population | cs.IT math.IT | In this paper, we derive mutual information based upper and lower bounds on
the number of nonadaptive group tests required to identify a given number of
"non defective" items from a large population containing a small number of
"defective" items. We show that a reduction in the number of tests is
achievable compared to the approach of first identifying all the defective
items and then picking the required number of non-defective items from the
complement set. In the asymptotic regime with the population size $N
\rightarrow \infty$, to identify $L$ non-defective items out of a population
containing $K$ defective items, when the tests are reliable, our results show
that $\frac{C_s K}{1-o(1)} (\Phi(\alpha_0, \beta_0) + o(1))$ measurements are
sufficient, where $C_s$ is a constant independent of $N, K$ and $L$, and
$\Phi(\alpha_0, \beta_0)$ is a bounded function of $\alpha_0 \triangleq
\lim_{N\rightarrow \infty} \frac{L}{N-K}$ and $\beta_0 \triangleq
\lim_{N\rightarrow \infty} \frac{K} {N-K}$. Further, in the nonadaptive group
testing setup, we obtain rigorous upper and lower bounds on the number of tests
under both dilution and additive noise models. Our results are derived using a
general sparse signal model, by virtue of which, they are also applicable to
other important sparse signal based applications such as compressive sensing.
|
1307.8242 | Sparse Packetized Predictive Control for Networked Control over Erasure
Channels | cs.SY math.OC | We study feedback control over erasure channels with packet-dropouts. To
achieve robustness with respect to packet-dropouts, the controller transmits
data packets containing plant input predictions, which minimize a finite
horizon cost function. To reduce the data size of packets, we propose to adopt
sparsity-promoting optimizations, namely, $\ell_1$-$\ell_2$ and
$\ell_2$-constrained $\ell_0$ optimizations, for which efficient algorithms exist. We derive sufficient
conditions on design parameters, which guarantee (practical) stability of the
resulting feedback control systems when the number of consecutive
packet-dropouts is bounded.
|
1307.8250 | Critical Transitions in Social Network Activity | physics.soc-ph cs.SI nlin.AO physics.data-an | A large variety of complex systems in ecology, climate science, biomedicine
and engineering have been observed to exhibit tipping points, where the
internal dynamical state of the system abruptly changes. For example, such
critical transitions may result in the sudden change of ecological environments
and climate conditions. Data and models suggest that detectable warning signs
may precede some of these drastic events. This view is also corroborated by
abstract mathematical theory for generic bifurcations in stochastic multi-scale
systems. Whether the stochastic scaling laws used as warning signs are also
present in social networks that anticipate a-priori {\it unknown} events in
society is an exciting open problem, to which at present only highly
speculative answers can be given. Here, we instead provide a first step towards
tackling this formidable question by focusing on a-priori {\it known} events
and analyzing a social network data set with a focus on classical variance and
autocorrelation warning signs. Our results thus pertain to one absolutely
fundamental question: Can the stochastic warning signs known from other areas
also be detected in large-scale social network data? We answer this question
affirmatively as we find that several a-priori known events are preceded by
variance and autocorrelation growth. Our findings thus clearly establish the
necessary starting point to further investigate the relation between abstract
mathematical theory and various classes of critical transitions in social
networks.
|
1307.8269 | Introducing Access Control in Webdamlog | cs.DB | We survey recent work on the specification of an access control mechanism in
a collaborative environment. The work is presented in the context of the
WebdamLog language, an extension of datalog to a distributed context. We
discuss a fine-grained access control mechanism for intentional data based on
provenance as well as a control mechanism for delegation, i.e., for deploying
rules at remote peers.
|
1307.8279 | Tracking Extrema in Dynamic Environment using Multi-Swarm Cellular PSO
with Local Search | cs.AI cs.NE | Many real-world phenomena can be modelled as dynamic optimization problems.
In such cases, the problem environment changes dynamically and, therefore,
conventional methods are not capable of dealing with such problems. In this
paper, a novel multi-swarm cellular particle swarm optimization algorithm is
proposed, based on clustering and local search. In the proposed algorithm, the search
space is partitioned into cells, while the particles identify changes in the
search space and form clusters to create sub-swarms. Then a local search is
applied to improve the solutions in each cell. Simulation results for
static standard benchmarks and dynamic environments show the superiority of the
proposed method over other alternative approaches.
|
1307.8305 | The Planning-ahead SMO Algorithm | cs.LG | The sequential minimal optimization (SMO) algorithm and variants thereof are
the de facto standard method for solving large quadratic programs for support
vector machine (SVM) training. In this paper we propose a simple yet powerful
modification. The main emphasis is on an algorithm improving the SMO step size
by planning-ahead. The theoretical analysis ensures its convergence to the
optimum. Experiments involving a large number of datasets were carried out to
demonstrate the superiority of the new algorithm.
|
1307.8320 | OMP Based Joint Sparsity Pattern Recovery Under Communication
Constraints | cs.IT math.IT | We address the problem of joint sparsity pattern recovery based on low
dimensional multiple measurement vectors (MMVs) in resource constrained
distributed networks. We assume that distributed nodes observe sparse signals
which share the same sparsity pattern and each node obtains measurements via a
low dimensional linear operator. When the measurements are collected at
distributed nodes in a communication network, it is often required that joint
sparse recovery be performed under inherent resource constraints such as
communication bandwidth and transmit/processing power.
We present two approaches to take the communication constraints into account
while performing common sparsity pattern recovery. First, we explore the use of
a shared multiple access channel (MAC) in forwarding observations residing at
each node to a fusion center. With MAC, while the bandwidth requirement does
not depend on the number of nodes, the fusion center has access to only a
linear combination of the observations. We discuss the conditions under which
the common sparsity pattern can be estimated reliably. Second, we develop two
collaborative algorithms based on Orthogonal Matching Pursuit (OMP), to jointly
estimate the common sparsity pattern in a decentralized manner with a low
communication overhead. In the proposed algorithms, each node exploits
collaboration among neighboring nodes by sharing a small amount of information
for fusion at different stages in estimating the indices of the true support in
a greedy manner. Efficiency and effectiveness of the proposed algorithms are
demonstrated via simulations along with a comparison with the most related
existing algorithms considering the trade-off between the performance gain and
the communication overhead.
|
1307.8327 | The Likelihood Encoder for Source Coding | cs.IT math.IT | The likelihood encoder with a random codebook is demonstrated as an effective
tool for source coding. Coupled with a soft covering lemma (associated with
channel resolvability), likelihood encoders yield simple achievability proofs
for known results, such as rate-distortion theory. They also produce a
tractable analysis for secure rate-distortion theory and strong coordination.
|
1307.8371 | The Power of Localization for Efficiently Learning Linear Separators
with Noise | cs.LG cs.CC cs.DS stat.ML | We introduce a new approach for designing computationally efficient learning
algorithms that are tolerant to noise, and demonstrate its effectiveness by
designing algorithms with improved noise tolerance guarantees for learning
linear separators.
We consider both the malicious noise model and the adversarial label noise
model. For malicious noise, where the adversary can corrupt both the label and
the features, we provide a polynomial-time algorithm for learning linear
separators in $\Re^d$ under isotropic log-concave distributions that can
tolerate a nearly information-theoretically optimal noise rate of $\eta =
\Omega(\epsilon)$. For the adversarial label noise model, where the
distribution over the feature vectors is unchanged, and the overall probability
of a noisy label is constrained to be at most $\eta$, we also give a
polynomial-time algorithm for learning linear separators in $\Re^d$ under
isotropic log-concave distributions that can handle a noise rate of $\eta =
\Omega\left(\epsilon\right)$.
We show that, in the active learning model, our algorithms achieve a label
complexity whose dependence on the error parameter $\epsilon$ is
polylogarithmic. This provides the first polynomial-time active learning
algorithm for learning linear separators in the presence of malicious noise or
adversarial label noise.
|
1307.8405 | Who and Where: People and Location Co-Clustering | cs.CV | In this paper, we consider the clustering problem on images where each image
contains patches in people and location domains. We exploit the correlation
between people and location domains, and propose a semi-supervised
co-clustering algorithm to cluster images. Our algorithm updates the
correlation links at the runtime, and produces clustering in both domains
simultaneously. We conduct experiments on a manually collected dataset and a
Flickr dataset. The results show that such correlation improves the
clustering performance.
|
1307.8410 | Analysis of a Proportionally Fair and Locally Adaptive spatial Aloha in
Poisson Networks | cs.NI cs.IT math.IT math.PR | The proportionally fair sharing of the capacity of a Poisson network using
Spatial-Aloha leads to closed-form performance expressions in two extreme
cases: (1) the case without topology information, where the analysis boils down
to a parametric optimization problem leveraging stochastic geometry; (2) the
case with full network topology information, which was recently solved using
shot-noise techniques. We show that there exists a continuum of adaptive
controls between these two extremes, based on local stopping sets, which can
also be analyzed in closed form. We also show that these control schemes are
implementable, in contrast to the full information case which is not. As local
information increases, the performance levels of these schemes are shown to get
arbitrarily close to those of the full information scheme. The analytical
results are combined with discrete event simulation to provide a detailed
evaluation of the performance of this class of medium access controls.
|
1307.8430 | Fast Simultaneous Training of Generalized Linear Models (FaSTGLZ) | cs.LG stat.ML | We present an efficient algorithm for simultaneously training sparse
generalized linear models across many related problems, which may arise from
bootstrapping, cross-validation and nonparametric permutation testing. Our
approach leverages the redundancies across problems to obtain significant
computational improvements relative to solving the problems sequentially by a
conventional algorithm. We demonstrate our fast simultaneous training of
generalized linear models (FaSTGLZ) algorithm on a number of real-world
datasets, and we run otherwise computationally intensive bootstrapping and
permutation test analyses that are typically necessary for obtaining
statistically rigorous classification results and meaningful interpretation.
Code is freely available at http://liinc.bme.columbia.edu/fastglz.
|
1308.0002 | Packetized Predictive Control for Rate-Limited Networks via Sparse
Representation | cs.SY math.OC | We study a networked control architecture for linear time-invariant plants in
which an unreliable, data-rate-limited network is placed between the controller
and the plant input. To achieve robustness with respect to dropouts, the controller
transmits data packets containing plant input predictions, which minimize a
finite horizon cost function. In our formulation, we design sparse packets for
rate-limited networks by adopting an $\ell_0$ optimization, which can be
effectively solved by an orthogonal matching pursuit method. Our formulation
ensures asymptotic stability of the control loop in the presence of bounded
packet dropouts. Simulation results indicate that the proposed controller
provides sparse control packets, thereby giving bit-rate reductions for the
case of memoryless scalar coding schemes when compared to the use of, more
common, quadratic cost functions, as in linear quadratic (LQ) control.
|
1308.0029 | Hierarchical self-organization of non-cooperating individuals | physics.soc-ph cs.SI physics.bio-ph | Hierarchy is one of the most conspicuous features of numerous natural,
technological and social systems. The underlying structures are typically
complex and their most relevant organizational principle is the ordering of the
ties among the units they are made of according to a network displaying
hierarchical features. In spite of the abundant presence of hierarchy, no
quantitative theoretical interpretation of the origins of a multi-level,
knowledge-based social network exists. Here we introduce an approach which is
capable of reproducing the emergence of a multi-levelled network structure
based on the plausible assumption that the individuals (representing the nodes
of the network) can make the right estimate about the state of their changing
environment to a varying degree. Our model accounts for a fundamental feature
of knowledge-based organizations: the less capable individuals tend to follow
those who are better at solving the problems they all face. We find that
relatively simple rules lead to hierarchical self-organization and the specific
structures we obtain possess the two, perhaps most important features of
complex systems: a simultaneous presence of adaptability and stability. In
addition, the performance (success score) of the emerging networks is
significantly higher than the average expected score of the individuals without
letting them copy the decisions of the others. The results of our calculations
are in agreement with a related experiment and can be useful from the point of
view of designing the optimal conditions for constructing a given complex social
structure as well as understanding the hierarchical organization of such
biological structures of major importance as the regulatory pathways or the
dynamics of neural networks.
|
1308.0037 | Route Swarm: Wireless Network Optimization through Mobility | cs.SY cs.MA cs.NI cs.RO math.OC | In this paper, we demonstrate a novel hybrid architecture for coordinating
networked robots in sensing and information routing applications. The proposed
INformation and Sensing driven PhysIcally REconfigurable robotic network
(INSPIRE) consists of a Physical Control Plane (PCP) which commands agent
position, and an Information Control Plane (ICP) which regulates information
flow towards communication/sensing objectives. We describe an instantiation
where a mobile robotic network is dynamically reconfigured to ensure high
quality routes between static wireless nodes, which act as source/destination
pairs for information flow. The ICP commands the robots towards evenly
distributed inter-flow allocations, with intra-flow configurations that
maximize route quality. The PCP then guides the robots via potential-based
control to reconfigure according to ICP commands. This formulation, deemed
Route Swarm, decouples information flow and physical control, generating a
feedback between routing and sensing needs and robotic configuration. We
demonstrate our propositions through simulation under a realistic wireless
network regime.
|
1308.0041 | A Tractable Model for Non-Coherent Joint-Transmission Base Station
Cooperation | cs.IT cs.NI math.IT | This paper presents a tractable model for analyzing non-coherent joint
transmission base station (BS) cooperation, taking into account the irregular
BS deployment typically encountered in practice. Besides cellular-network
specific aspects such as BS density, channel fading, average path loss and
interference, the model also captures relevant cooperation mechanisms including
user-centric BS clustering and channel-dependent cooperation activation. The
locations of all BSs are modeled by a Poisson point process. Using tools from
stochastic geometry, the signal-to-interference-plus-noise ratio
($\mathtt{SINR}$) distribution with cooperation is precisely characterized in a
generality-preserving form. The result is then applied to practical design
problems of recent interest. We find that increasing the network-wide BS
density improves the $\mathtt{SINR}$, while the gains increase with the path
loss exponent. For pilot-based channel estimation, the average spectral
efficiency saturates at cluster sizes of around $7$ BSs for typical values,
irrespective of backhaul quality. Finally, it is shown that intra-cluster
frequency reuse is favorable in moderately loaded cells with generous
cooperation activation, while intra-cluster coordinated scheduling may be
better in lightly loaded cells with conservative cooperation activation.
|
1308.0047 | On Lattices and the Dualities of Information Measures | cs.IT math.IT q-bio.QM | Measures of dependence among variables, and measures of information content
and shared information have become valuable tools of multi-variable data
analysis. Information measures, like marginal entropies, mutual and
multi-information, have a number of significant advantages over more standard
statistical methods, such as their reduced sensitivity to sampling limitations
compared with statistical estimates of probability densities. There are also interesting
applications of these measures to the theory of complexity and to statistical
mechanics. Their mathematical properties and relationships are therefore of
interest at several levels.
Of the interesting relationships between common information measures, perhaps
none are as intriguing and elegant as the duality relationships based on
Möbius inversions. These inversions are directly related to the lattices
(posets) that describe these sets of variables and their multi-variable
measures. In this paper we describe extensions of the duality previously noted
by Bell to a range of measures, and show how the structure of the lattice
determines fundamental relationships of these functions. Our major result is a
set of interlinked duality relations among marginal entropies, interaction
information, and conditional interaction information. The implications of these
results include a flexible range of alternative formulations of
information-based measures, and a new set of sum rules that arise from
path-independent sums on the lattice. Our motivation is to advance the
fundamental integration of this set of ideas and relations, and to show
explicitly the ways in which all these measures are interrelated through
lattice properties. These ideas can be useful in constructing theories of
complexity, descriptions of large scale stochastic processes and systems, and
in devising algorithms and approximations for computations in multi-variable
data analysis.
|
1308.0075 | Polynomial-Phase Signal Direction-Finding & Source-Tracking with an
Acoustic Vector Sensor | stat.AP cs.IT math.IT | A new ESPRIT-based algorithm is proposed to estimate the direction-of-arrival
of an arbitrary degree polynomial-phase signal with a single acoustic vector
sensor. The proposed approach requires neither a priori knowledge of the
polynomial-phase signal's coefficients nor a priori knowledge of the
polynomial-phase signal's frequency-spectrum. A pre-processing technique is
also proposed to incorporate the single-forgetting-factor algorithm and
multiple-forgetting-factor adaptive tracking algorithm to track a
polynomial-phase signal using one acoustic vector sensor. Simulation results
verify the efficacy of the proposed direction finding and source tracking
algorithms.
|
1308.0094 | Improving Physical Layer Secrecy Using Full-Duplex Jamming Receivers | cs.IT math.IT | This paper studies secrecy rate optimization in a wireless network with a
single-antenna source, a multi-antenna destination and a multi-antenna
eavesdropper. This is an unfavorable scenario for secrecy performance as the
system is interference-limited. In the literature, assuming that the receiver
operates in half duplex (HD) mode, the aforementioned problem has been
addressed via the use of cooperating nodes that act as jammers to confound the
eavesdropper. This paper investigates an alternative solution, which assumes
the availability of a full duplex (FD) receiver. In particular, while receiving
data, the receiver transmits jamming noise to degrade the eavesdropper channel.
The proposed self-protection scheme eliminates the need for external helpers
and provides system robustness. For the case in which global channel state
information is available, we aim to design the optimal jamming covariance
matrix that maximizes the secrecy rate and mitigates loop interference
associated with the FD operation. We consider both fixed and optimal linear
receiver design at the destination, and show that the optimal jamming
covariance matrix is rank-1, and can be found via an efficient 1-D search. For
the case in which only statistical information on the eavesdropper channel is
available, the optimal power allocation is studied in terms of ergodic and
outage secrecy rates. Simulation results verify the analysis and demonstrate
substantial performance gain over conventional HD operation at the destination.
|
1308.0102 | Mutual Information-Based Planning for Informative Windowed Forecasting
of Continuous-Time Linear Systems | cs.SY cs.IT math.IT | This paper presents an expression of mutual information that defines the
information gain in planning of sensing resources when the goal is to reduce
the forecast uncertainty of some quantities of interest and the system dynamics
are described by a continuous-time linear system. The method extends the
smoother approach of [5] to handle a more general notion of verification
entity: a continuous sequence of variables over some finite time window in the future.
The expression of mutual information for this windowed forecasting case is
derived and quantified, taking advantage of underlying conditional independence
structure and utilizing the fixed-interval smoothing formula with correlated
noises. Two numerical examples on (a) simplified weather forecasting with
moving verification paths, and (b) sensor network scheduling for tracking of
multiple moving targets are considered for validation of the proposed approach.
|
1308.0104 | A Fast Eigen Solution for Homogeneous Quadratic Minimization with at
most Three Constraints | math.NA cs.IT math.IT | We propose an eigenvalue based technique to solve the Homogeneous Quadratic
Constrained Quadratic Programming problem (HQCQP) with at most 3 constraints
which arise in many signal processing problems. Semi-Definite Relaxation (SDR)
is the only known approach and is computationally intensive. We study the
performance of the proposed fast eigen approach through simulations in the
context of MIMO relays and show that the solution converges to the solution
obtained using the SDR approach with significant reduction in complexity.
|
1308.0109 | Optimal Receiver Design for Diffusive Molecular Communication With Flow
and Additive Noise | cs.IT math.IT | In this paper, we perform receiver design for a diffusive molecular
communication environment. Our model includes flow in any direction, sources of
information molecules in addition to the transmitter, and enzymes in the
propagation environment to mitigate intersymbol interference. We characterize
the mutual information between receiver observations to show how often
independent observations can be made. We derive the maximum likelihood sequence
detector to provide a lower bound on the bit error probability. We propose the
family of weighted sum detectors for more practical implementation and derive
their expected bit error probability. Under certain conditions, the performance
of the optimal weighted sum detector is shown to be equivalent to a matched
filter. Receiver simulation results show the tradeoff in detector complexity
versus achievable bit error probability, and that a slow flow in any direction
can improve the performance of a weighted sum detector.
|