| id | title | categories | abstract |
|---|---|---|---|
1204.6181
|
The conduciveness of CA-rule graphs
|
nlin.CG cs.NE
|
Given two subsets A and B of nodes in a directed graph, the conduciveness of
the graph from A to B is the ratio representing how many of the edges outgoing
from nodes in A are incoming to nodes in B. When the graph's nodes stand for
the possible solutions to certain problems of combinatorial optimization,
choosing its edges appropriately has been shown to lead to conduciveness
properties that provide useful insight into the performance of algorithms to
solve those problems. Here we study the conduciveness of CA-rule graphs, that
is, graphs whose node set is the set of all CA rules given a cell's number of
possible states and neighborhood size. We consider several different edge sets
interconnecting these nodes, both deterministic and random ones, and derive
analytical expressions for the resulting graph's conduciveness toward rules
having a fixed number of non-quiescent entries. We demonstrate that one of the
random edge sets, characterized by allowing nodes to be sparsely interconnected
across any Hamming distance between the corresponding rules, has the potential
to provide reasonable conduciveness toward the desired rules. We conjecture
that this property may underlie the best strategies known to date for
discovering complex rules to solve specific problems, all of which are
evolutionary in nature.
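As a concrete reading of the definition above, here is a minimal Python sketch (the toy graph is hypothetical) that computes the conduciveness from A to B:

```python
# Conduciveness from A to B: the fraction of edges outgoing from nodes
# in A that are incoming to nodes in B (the definition in the abstract).
def conduciveness(edges, A, B):
    outgoing = [(u, v) for (u, v) in edges if u in A]
    if not outgoing:
        return 0.0
    return sum(1 for (_, v) in outgoing if v in B) / len(outgoing)

# Hypothetical toy digraph on 4 nodes.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(conduciveness(edges, A={0, 1}, B={2, 3}))  # 3 of 4 outgoing edges -> 0.75
```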
|
1204.6233
|
Strong Backdoors to Bounded Treewidth SAT
|
cs.DS cs.AI cs.CC cs.DM math.CO
|
There are various approaches to exploiting "hidden structure" in instances of
hard combinatorial problems to allow faster algorithms than for general
unstructured or random instances. For SAT and its counting version #SAT, hidden
structure has been exploited in terms of decomposability and strong backdoor
sets. Decomposability can be considered in terms of the treewidth of a graph
that is associated with the given CNF formula, for instance by considering
clauses and variables as vertices of the graph and making each variable
adjacent to all the clauses in which it appears. On the other hand, a strong backdoor set of
a CNF formula is a set of variables such that each possible partial assignment
to this set moves the formula into a fixed class for which (#)SAT can be solved
in polynomial time.
In this paper we combine the two above approaches. In particular, we study
the algorithmic question of finding a small strong backdoor set into the class
W_t of CNF formulas whose associated graphs have treewidth at most t. The main
results are positive:
(1) There is a cubic-time algorithm that, given a CNF formula F and two
constants k,t\ge 0, either finds a strong W_t-backdoor set of size at most 2^k,
or concludes that F has no strong W_t-backdoor set of size at most k.
(2) There is a cubic-time algorithm that, given a CNF formula F, computes the
number of satisfying assignments of F or concludes that sb_t(F)>k, for any pair
of constants k,t\ge 0. Here, sb_t(F) denotes the size of a smallest strong
W_t-backdoor set of F.
The significance of our results lies in the fact that they allow us to
exploit algorithmically a hidden structure in formulas that is not accessible
by either of the two approaches (decomposability, backdoors) alone: already a
backdoor of size 1 on top of treewidth 1 (i.e., sb_1(F)=1) admits formulas of
arbitrarily large treewidth and arbitrarily large cycle cutsets.
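The paper's cubic-time algorithm is more involved; as a baseline, the strong W_t-backdoor property itself can be checked by brute force (exponential in the size of the candidate set). A Python sketch follows; note that networkx's `treewidth_min_degree` only returns an upper bound on treewidth, so a `False` answer here may be spurious, and the signed-integer CNF encoding and the example formula are assumptions for illustration:

```python
from itertools import product

import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def simplify(clauses, assignment):
    """Apply a set of literals (signed ints) to a CNF given as lists of ints."""
    out = []
    for c in clauses:
        if any(lit in assignment for lit in c):          # clause satisfied
            continue
        out.append([lit for lit in c if -lit not in assignment])
    return out

def incidence_treewidth_ub(clauses):
    """Heuristic upper bound on the treewidth of the incidence graph."""
    G = nx.Graph()
    for i, c in enumerate(clauses):
        for lit in c:
            G.add_edge(('v', abs(lit)), ('c', i))        # variable-clause edges
    if G.number_of_nodes() == 0:
        return 0
    width, _ = treewidth_min_degree(G)
    return width

def is_strong_backdoor(clauses, B, t):
    """True if every assignment to B leaves a formula with treewidth <= t."""
    for signs in product([1, -1], repeat=len(B)):
        assignment = {s * v for s, v in zip(signs, sorted(B))}
        if incidence_treewidth_ub(simplify(clauses, assignment)) > t:
            return False                                  # possibly a false negative
    return True

# Hypothetical formula: (x1 v x2) & (-x1 v x3) & (x2 v -x3)
F = [[1, 2], [-1, 3], [2, -3]]
print(is_strong_backdoor(F, B={1}, t=1))
```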
|
1204.6250
|
Feature Selection for Generator Excitation Neurocontroller Development
Using Filter Technique
|
cs.SY cs.LG
|
Essentially, the motive behind using a control system is to generate a
suitable control signal that yields the desired response of a physical
process. Control of synchronous generators has always been critical in power
system operation and control. For certain well-known reasons, power generators
are normally operated well below their steady-state stability limit, which
raises the demand for efficient and fast controllers. Artificial intelligence
has been reported to give revolutionary outcomes in the field of control
engineering. The Artificial Neural Network (ANN), a branch of artificial
intelligence, has been used for nonlinear and adaptive control, utilizing its
inherent observability. The overall performance of a neurocontroller also
depends on its input features. Selecting optimum features to train a
neurocontroller optimally is therefore critical, and both the quality and the
size of the data are of equal importance for good performance. In this work, a
filter technique is employed to select independent factors for ANN training.
|
1204.6284
|
The Network of French Legal Codes
|
cs.AI cs.SI physics.soc-ph
|
We propose an analysis of the codified Law of France as a structured system.
Fifty-two legal codes are selected on the basis of explicit legal criteria and
considered as vertices, with their mutual citations forming the edges of a
network whose properties are analyzed using graph theory. We find that a group
of 10 codes are simultaneously the most citing and the most cited by other
codes, and are also strongly connected with one another, thus forming a "rich
club" sub-graph. Three other code communities are also found that roughly
partition the legal field into distinct thematic sub-domains. The legal
interpretation of this partition opens new, untraditional lines of research.
We also conjecture that many legal systems form this new kind of network,
which shares some properties with small worlds but is far denser; we propose
to call them "concentrated worlds".
|
1204.6321
|
Efficient Video Indexing on the Web: A System that Leverages User
Interactions with a Video Player
|
cs.MM cs.DL cs.HC cs.IR
|
In this paper, we propose a user-based video indexing method that
automatically generates thumbnails of the most important scenes of an online
video stream by analyzing users' interactions with a web video player. As a
test bench to verify our idea, we have extended the YouTube video player into
the VideoSkip system. In addition, VideoSkip uses a web database (Google App
Engine) to keep a record of some important parameters, such as the timing of
basic user actions (play, pause, skip). Moreover, we implemented an algorithm
that selects representative thumbnails. Finally, we populated the system with
data from an experiment with nine users. We found that the VideoSkip system
indexes video content by leveraging implicit user interactions, such as pauses
and thirty-second skips. Our early findings point toward improvements of the
web video player and its thumbnail generation technique. The VideoSkip system
could complement content-based algorithms, in order to achieve efficient video
indexing for difficult videos, such as lectures or sports.
|
1204.6325
|
CELL: Connecting Everyday Life in an archipeLago
|
cs.HC cs.LG
|
We explore the design of a seamless broadcast communication system that
brings together the distributed community of remote secondary education
schools. In contrast to higher education, primary and secondary education
establishments should remain distributed, in order to maintain a balance of
urban and rural life in the developing and the developed world. We plan to
deploy an ambient and social interactive TV platform (physical installation,
authoring tools, interactive content) that supports social communication in a
positive way. In particular, we present the physical design and the conceptual
model of the system.
|
1204.6326
|
Background subtraction based on Local Shape
|
cs.CV
|
We present a novel approach to background subtraction that is based on the
local shape of small image regions. In our approach, an image region centered
on a pixel is modeled using the local self-similarity descriptor. We aim at
obtaining a reliable change detection based on local shape change in an image
when foreground objects are moving. The method first builds a background model
and compares the local self-similarities between the background model and the
subsequent frames to distinguish background and foreground objects.
Post-processing is then used to refine the boundaries of moving objects.
Results show that this approach is promising, as the foregrounds obtained are
complete, although they often include shadows.
|
1204.6341
|
Stochastic Ordering of Interferences in Large-scale Wireless Networks
|
cs.IT math.IT
|
Stochastic orders are binary relations defined on probability distributions
which capture intuitive notions like being larger or being more variable. This
paper introduces stochastic ordering of interference distributions in
large-scale networks modeled as point processes. Interference is the main
performance-limiting factor in most wireless networks, so it is important to
understand its statistics. Since closed-form results for the distribution of
interference in such networks are available only in limited cases, we compare
the interferences of networks using stochastic orders, even when closed-form
expressions for the interferences are not tractable. We show that the
interference from a large-scale network depends on the fading distributions
with respect to the stochastic Laplace transform order. Conditions on
path-loss models under which interferences are stochastically ordered are also
established, and stochastic orderings of interferences between different
networks are demonstrated. Monte Carlo simulations are used to supplement our
analytical results.
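For reference, the Laplace transform order invoked here is the standard one for nonnegative random variables $X$ and $Y$:

$$X \leq_{\mathrm{Lt}} Y \iff \mathbb{E}\left[e^{-sX}\right] \geq \mathbb{E}\left[e^{-sY}\right] \quad \text{for all } s > 0,$$

so the smaller variable in this order has the pointwise larger Laplace transform.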
|
1204.6346
|
Magic Sets for Disjunctive Datalog Programs
|
cs.AI cs.LO
|
In this paper, a new technique for the optimization of (partially) bound
queries over disjunctive Datalog programs with stratified negation is
presented. The technique exploits the propagation of query bindings and extends
the Magic Set (MS) optimization technique.
An important feature of disjunctive Datalog is nonmonotonicity, which calls
for nondeterministic implementations, such as backtracking search. A
distinguishing characteristic of the new method is that the optimization can be
exploited also during the nondeterministic phase. In particular, after some
assumptions have been made during the computation, parts of the program may
become irrelevant to a query under these assumptions. This allows for dynamic
pruning of the search space. In contrast, the effect of the previously defined
MS methods for disjunctive Datalog is limited to the deterministic portion of
the process. In this way, the potential performance gain by using the proposed
method can be exponential, as could be observed empirically.
The correctness of MS is established thanks to a strong relationship between
MS and unfounded sets that has not been studied in the literature before. This
knowledge allows for extending the method also to programs with stratified
negation in a natural way.
The proposed method has been implemented in DLV and various experiments have
been conducted. Experimental results on synthetic data confirm the utility of
MS for disjunctive Datalog, and they highlight the computational gain that may
be obtained by the new method w.r.t. the previously proposed MS methods for
disjunctive Datalog programs. Further experiments on real-world data show the
benefits of MS within an application scenario that has received considerable
attention in recent years, the problem of answering user queries over possibly
inconsistent databases originating from integration of autonomous sources of
information.
|
1204.6350
|
Secure Computation in a Bidirectional Relay
|
cs.IT math.IT
|
Bidirectional relaying, where a relay helps two user nodes to exchange equal
length binary messages, has been an active area of recent research. A popular
strategy involves a modified Gaussian MAC, where the relay decodes the XOR of
the two messages using the naturally-occurring sum of symbols simultaneously
transmitted by user nodes. In this work, we consider the Gaussian MAC in
bidirectional relaying with an additional secrecy constraint for protection
against an honest-but-curious relay. The constraint is that, while the relay
should decode the XOR, it should be fully ignorant of the individual messages
of the users. We exploit the symbol addition that occurs in a Gaussian MAC to
design explicit strategies that achieve perfect independence between the
received symbols and individual transmitted messages. Our results actually hold
for a more general scenario where the messages at the two user nodes come from
a finite Abelian group, and the relay must decode the sum within the group of
the two messages. We provide a lattice coding strategy and study optimal rate
versus average power trade-offs for asymptotically large dimensions.
|
1204.6362
|
A Corpus-based Evaluation of Lexical Components of a Domain-specific Text
to Knowledge Mapping Prototype
|
cs.IR cs.CL
|
The aim of this paper is to evaluate the lexical components of a Text to
Knowledge Mapping (TKM) prototype. The prototype is domain-specific, the
purpose of which is to map instructional text onto a knowledge domain. The
knowledge domain of the prototype is physics, specifically DC electrical
circuits. During development, the prototype was tested with a limited data set
from the domain. The prototype has now reached a stage where it needs to be
evaluated with a representative linguistic data set, called a corpus. A corpus
is a collection of text drawn from typical sources which can be used as a test
data set to evaluate NLP systems. As there was no available corpus for the
domain, we developed a representative corpus and annotated it with linguistic
information. The evaluation of the prototype considers one of its two main
components: the lexical knowledge base. With the corpus, the evaluation
enriches the lexical knowledge resources, such as the vocabulary and grammar
structure. This enables the prototype to parse a reasonable number of
sentences in the corpus.
|
1204.6364
|
A Corpus-based Evaluation of a Domain-specific Text to Knowledge Mapping
Prototype
|
cs.CL
|
The aim of this paper is to evaluate a Text to Knowledge Mapping (TKM)
prototype. The prototype is domain-specific, the purpose of which is to map
instructional text onto a knowledge domain. The knowledge domain is DC
electrical circuits. During development, the prototype was tested with a
limited data set from the domain. The prototype has reached a stage where it
needs to be evaluated with a representative linguistic data set, called a
corpus. A corpus is a collection of text drawn from typical sources which can
be used as a test data set to evaluate NLP systems. As there was no available
corpus for the domain, we developed and annotated a representative corpus. The
evaluation of the prototype considers two of its major components: the lexical
components and the knowledge model. Evaluation of the lexical components
enriches the lexical resources of the prototype, such as the vocabulary and
grammar structures. This enables the prototype to parse a reasonable number of
sentences in the corpus. While dealing with the lexicon was straightforward,
the identification and extraction of appropriate semantic relations was much
more involved. It was necessary, therefore, to manually develop a conceptual
structure for the domain to formulate a domain-specific framework of semantic
relations. The framework of semantic relations that resulted from this study
consists of 55 relations, of which 42 have inverse relations. We also
conducted a rhetorical analysis on the corpus to prove its representativeness
in conveying semantics. Finally, we conducted a topical and discourse analysis
on the corpus to analyze the coverage of discourse by the prototype.
|
1204.6376
|
The Landscape of Complex Networks
|
stat.ME cs.SI physics.soc-ph q-bio.MN
|
A topological landscape is introduced for networks with functions defined on
the nodes. By extending the notion of gradient flows to the network setting,
critical nodes of different indices are defined. This leads to a concise and
hierarchical representation of the network. Persistent homology from
computational topology is used to design efficient algorithms for performing
such analysis. Applications to some examples in social and biological networks
are demonstrated, which show that critical nodes carry important information
about structures and dynamics of such networks.
|
1204.6385
|
A 3D Segmentation Method for Retinal Optical Coherence Tomography Volume
Data
|
cs.CV physics.optics
|
With the introduction of spectral-domain optical coherence tomography (OCT),
much larger image datasets are routinely acquired compared to what was possible
using the previous generation of time-domain OCT. Thus, the need for 3-D
segmentation methods for processing such data is becoming increasingly
important. We present a new 3D segmentation method for retinal OCT volume
data. The method generates enhanced volume data by simultaneously using pixel
intensity, boundary position information, and intensity changes on both sides
of the border; preliminary discrete boundary points are found from all
A-scans, and a smoothed boundary surface is then obtained after removing a
small number of erroneous points. Our experiments show that this method is
efficient, accurate, and robust.
|
1204.6389
|
Rigidity and flexibility of biological networks
|
physics.bio-ph cond-mat.dis-nn cs.CE cs.CG nlin.PS q-bio.MN
|
The network approach has become a widely used tool for understanding the
behaviour of complex systems over the last decade. We start with a short
description of structural rigidity theory. A detailed account of the
combinatorial rigidity analysis of protein structures is given, as well as of
local flexibility measures of proteins and their applications in explaining
allostery and thermostability. We also briefly discuss the network aspects of
cytoskeletal tensegrity.
Finally, we show the importance of the balance between functional flexibility
and rigidity in protein-protein interaction, metabolic, gene regulatory and
neuronal networks. Our summary raises the possibility that the concepts of
flexibility and rigidity can be generalized to all networks.
|
1204.6408
|
Standing on the Shoulders of Their Peers: Success Factors for Massive
Cooperation Among Children Creating Open Source Animations and Games on Their
Smartphones
|
cs.CY cs.SI
|
We developed a website for kids where they can share new as well as remixed
animations and games, e.g., interactive music videos, which they created on
their smartphones or tablets using a visual "LEGO-style" programming
environment called Catroid. Online communities for children like our website
have unique requirements, and keeping the commitment of kids on a high level is
a continuous challenge. For instance, one key motivator for kids is the ability
to entertain their friends. Another success factor is the ability to learn from
and cooperate with other children. In this short position paper we attempt to
identify the requirements for the success of such an online community, both
from the point of view of the kids and of their parents, and to find ways to
make it attractive for both.
|
1204.6411
|
Catroid: A Mobile Visual Programming System for Children
|
cs.PL cs.CY cs.HC cs.RO
|
Catroid is a free and open source visual programming language, programming
environment, image manipulation program, and website. Catroid allows casual and
first-time users starting from age eight to develop their own animations and
games solely using their Android phones or tablets. Catroid also allows users
to wirelessly control external hardware such as Lego Mindstorms robots via
Bluetooth, Bluetooth Arduino boards, and Parrot's popular and inexpensive
AR.Drone quadcopters via WiFi.
|
1204.6415
|
A Fuzzy Model for Analogical Problem Solving
|
cs.AI
|
In this paper we develop a fuzzy model for describing the process of
Analogical Reasoning, representing its main steps as fuzzy subsets of a set of
linguistic labels characterizing the individuals' performance in each step,
and we use the Shannon-Wiener diversity index as a measure of the individuals'
abilities in analogical problem solving. This model is compared with a
stochastic model presented in the author's earlier papers, which introduces a
finite Markov chain on the steps of the process of Analogical Reasoning. A
classroom experiment is also presented to illustrate the use of our results in
practice.
|
1204.6423
|
Minimum Description Length Principle for Maximum Entropy Model Selection
|
cs.IT math.IT
|
Model selection is central to statistics, and many learning problems can be
formulated as model selection problems. In this paper, we treat the problem of
selecting a maximum entropy model given various feature subsets and their
moments, as a model selection problem, and present a minimum description length
(MDL) formulation to solve this problem. For this, we derive the normalized
maximum likelihood (NML) codelength for these models. Furthermore, we prove
that the minimax entropy principle is a special case of maximum entropy model
selection, where one assumes that the complexities of all the models are
equal. We apply our approach to the gene selection problem and present
simulation results.
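For reference, the NML codelength derived here has the standard MDL form (written for a discrete sample space):

$$\mathcal{L}_{\mathrm{NML}}(x) = -\log p\bigl(x \mid \hat{\theta}(x)\bigr) + \log \sum_{y} p\bigl(y \mid \hat{\theta}(y)\bigr),$$

where $\hat{\theta}(x)$ is the maximum likelihood estimate for data $x$; the second term is the parametric complexity of the model class, which is what differs across feature subsets.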
|
1204.6441
|
"I Wanted to Predict Elections with Twitter and all I got was this Lousy
Paper" -- A Balanced Survey on Election Prediction using Twitter Data
|
cs.CY cs.CL cs.SI physics.soc-ph
|
Predicting X from Twitter is a popular fad within the Twitter research
subculture. It seems both appealing and relatively easy. Among such studies,
electoral prediction is perhaps the most attractive, and there is now a
growing body of literature on the topic. This is not only an interesting
research problem but, above all, an extremely difficult one. However, most of
the authors seem to be more interested in claiming positive results than in
providing sound and reproducible methods. It is also especially worrisome that
many recent papers seem to acknowledge only those studies supporting the idea
of Twitter predicting elections, instead of conducting a balanced literature
review showing both sides of the matter. After reading many such papers, I
decided to write such a survey myself. Hence, in this paper, every study
relevant to electoral prediction using social media is commented on. From this
review it can be concluded that the predictive
power of Twitter regarding elections has been greatly exaggerated, and that
hard research problems still lie ahead.
|
1204.6453
|
The Role of Vertex Consistency in Sampling-based Algorithms for Optimal
Motion Planning
|
cs.RO
|
Motion planning problems have been studied by both the robotics and the
controls research communities for a long time, and many algorithms have been
developed for their solution. Among them, incremental sampling-based motion
planning algorithms, such as the Rapidly-exploring Random Trees (RRTs), and the
Probabilistic Road Maps (PRMs) have become very popular recently, owing to
their implementation simplicity and their advantages in handling
high-dimensional problems. Although these algorithms work very well in
practice, the quality of the computed solution is often not good, i.e., the
solution can be far from the optimal one. A recent variation of RRT, namely the
RRT* algorithm, bypasses this drawback of the traditional RRT algorithm, by
ensuring asymptotic optimality as the number of samples tends to infinity.
Nonetheless, the convergence rate to the optimal solution may still be slow.
This paper presents a new incremental sampling-based motion planning algorithm
based on Rapidly-exploring Random Graphs (RRG), denoted RRT# (RRT "sharp")
which also guarantees asymptotic optimality but, in addition, it also ensures
that the constructed spanning tree of the geometric graph is consistent after
each iteration. In consistent trees, the vertices which have the potential to
be part of the optimal solution have the minimum cost-to-come value. This
implies that the best possible solution is readily computed if there are some
vertices in the current graph that are already in the goal region. Numerical
results comparing the new algorithm with the RRT* algorithm are provided.
|
1204.6458
|
Active Contour with A Tangential Component
|
cs.CV
|
Conventional edge-based active contours often require the normal component of
an edge indicator function on the optimal contours to approximate zero, while
the tangential component can still be significant. In real images, the full
gradients of the edge indicator function along the object boundaries are often
small. Hence, the curve evolution of edge-based active contours can terminate
early before converging to the object boundaries with a careless contour
initialization. We propose a novel Geodesic Snakes (GeoSnakes) active contour
that requires the full gradients of the edge indicator to vanish at the optimal
solution. Moreover, the conventional curve evolution approach for minimizing
active contour energy cannot fully solve the Euler-Lagrange (EL) equation of
our GeoSnakes active contour, causing a Pseudo Stationary Phenomenon (PSP). To
address the PSP problem, we propose an auxiliary curve evolution equation,
named the equilibrium flow (EF) equation. Based on the EF and the conventional
curve evolution, we obtain a solution to the full EL equation of GeoSnakes
active contour. Experimental results validate the proposed geometrical
interpretation of the early termination problem, and they also show that the
proposed method overcomes the problem.
|
1204.6482
|
Tradeoff Analysis of Delay-Power-CSIT Quality of Dynamic BackPressure
Algorithm for Energy Efficient OFDM Systems
|
cs.SY
|
In this paper, we analyze the fundamental power-delay tradeoff in
point-to-point OFDM systems under imperfect channel state information quality
and non-ideal circuit power. We consider the dynamic back-pressure (DBP)
algorithm, where the transmitter determines the rate and power control actions
based on the instantaneous channel state information (CSIT) and the queue
state information (QSI). We model general fluid queue dynamics using a
continuous-time dynamic equation. Using the sample-path approach and renewal
theory, we decompose the average delay in terms of multiple unfinished works
along a sample path, and derive an upper bound on the average delay under DBP
power control which is asymptotically accurate in the small-delay regime. We
show that despite imperfect CSIT quality and non-ideal circuit power, the
average power (P) of the DBP policy scales with delay (D) as P = O(D exp(1/D))
in the small-delay regime. While the impacts of CSIT quality and circuit power
appear as coefficients of the scaling law, they may be significant in some
operating regimes.
|
1204.6509
|
Dissimilarity Clustering by Hierarchical Multi-Level Refinement
|
stat.ML cs.LG
|
We introduce in this paper a new way of optimizing the natural extension of
the quantization error used in k-means clustering to dissimilarity data. The
proposed method is based on hierarchical clustering analysis combined with
multi-level heuristic refinement. The method is computationally efficient and
achieves better quantization errors than the
|
1204.6512
|
An adaptive, high-order phase-space remapping for the two-dimensional
Vlasov-Poisson equations
|
math.NA cs.CE physics.comp-ph
|
The numerical solution of the high-dimensional Vlasov equation is usually
performed by particle-in-cell (PIC) methods. However, due to the well-known
numerical noise, it is challenging to use PIC methods to get a precise
description of the distribution function in phase space. To control the
numerical error, we introduce an adaptive phase-space remapping which
regularizes the particle distribution by periodically reconstructing the
distribution function on a hierarchy of phase-space grids with high-order
interpolations. The positivity of the distribution function can be preserved by
using a local redistribution technique. The method has been successfully
applied to a set of classical plasma problems in one dimension. In this paper,
we present the algorithm for the two-dimensional Vlasov-Poisson equations. An
efficient Poisson solver with infinite domain boundary conditions is used. The
parallel scalability of the algorithm on massively parallel computers will be
discussed.
|
1204.6521
|
Harnessing Folksonomies for Resource Classification
|
cs.DL cs.IR cs.SI
|
In our daily lives, organizing resources into a set of categories is a common
task. Categorization becomes more useful as the collection of resources
increases. Large collections of books, movies, and web pages, for instance, are
cataloged in libraries, organized in databases and classified in directories,
respectively. However, the usual largeness of these collections requires a vast
endeavor and an outrageous expense to organize manually.
Recent research is moving towards developing automated classifiers that
reduce the increasing costs and effort of the task. Little work has been done
analyzing the appropriateness of and exploring how to harness the annotations
provided by users on social tagging systems as a data source. Users on these
systems save resources as bookmarks in a social environment by attaching
annotations in the form of tags. It has been shown that these tags facilitate
retrieval of resources not only for the annotators themselves but also for the
whole community. Likewise, these tags provide meaningful metadata that refers
to the content of the resources.
In this thesis, we deal with the utilization of these user-provided tags in
search of the most accurate classification of resources as compared to
expert-driven categorizations. To the best of our knowledge, this is the first
research work performing actual classification experiments utilizing social
tags. By exploring the characteristics and nature of these systems and the
underlying folksonomies, this thesis sheds new light on the way of getting the
most out of social tags for the sake of automated resource classification
tasks. Therefore, we believe that the contributions in this work are of utmost
interest for future researchers in the field, as well as for the scientific
community in order to better understand these systems and further utilize the
knowledge garnered from social tags.
|
1204.6529
|
Generalising unit-refutation completeness and SLUR via nested input
resolution
|
cs.LO cs.AI
|
We introduce two hierarchies of clause-sets, SLUR_k and UC_k, based on the
classes SLUR (Single Lookahead Unit Refutation), introduced in 1995, and UC
(Unit refutation Complete), introduced in 1994.
The class SLUR, introduced in [Annexstein et al, 1995], is the class of
clause-sets for which unit-clause-propagation (denoted by r_1) detects
unsatisfiability, or where otherwise iterative assignment, avoiding obviously
false assignments by look-ahead, always yields a satisfying assignment. It is
natural to consider how to form a hierarchy based on SLUR. Such investigations
were started in [Cepek et al, 2012] and [Balyo et al, 2012]. We present what we
consider the "limit hierarchy" SLUR_k, based on generalising r_1 by r_k, that
is, using generalised unit-clause-propagation introduced in [Kullmann, 1999,
2004].
The class UC, studied in [Del Val, 1994], is the class of Unit refutation
Complete clause-sets, that is, those clause-sets for which unsatisfiability is
decidable by r_1 under any falsifying assignment. For unsatisfiable clause-sets
F, the minimum k such that r_k determines unsatisfiability of F is exactly the
"hardness" of F, as introduced in [Ku 99, 04]. For satisfiable F we use now an
extension mentioned in [Ansotegui et al, 2008]: The hardness is the minimum k
such that after application of any falsifying partial assignments, r_k
determines unsatisfiability. The class UC_k is given by the clause-sets which
have hardness <= k. We observe that UC_1 is exactly UC.
UC_k has a proof-theoretic character, due to the relations between hardness
and tree-resolution, while SLUR_k has an algorithmic character. The
correspondence between r_k and k-times nested input resolution (or tree
resolution using clause-space k+1) means that r_k has a dual nature: both
algorithmic and proof theoretic. This corresponds to a basic result of this
paper, namely SLUR_k = UC_k.
|
1204.6535
|
Citations, Sequence Alignments, Contagion, and Semantics: On Acyclic
Structures and their Randomness
|
cs.DM cs.DB
|
Datasets from several domains, such as the life sciences, the semantic web,
machine learning, and natural language processing, are naturally structured as acyclic
graphs. These datasets, particularly those in bio-informatics and computational
epidemiology, have grown tremendously over the last decade or so. Increasingly,
as a consequence, there is a need to build and evaluate various strategies for
processing acyclic structured graphs. Most of the proposed research models the
real-world acyclic structures as random graphs, i.e., they are generated by
randomly selecting a subset of edges from all possible edges. Unfortunately, the
graphs thus generated have predictable and degenerate structures, i.e., the
resulting graphs will always have almost the same degree distribution and very
short paths.
Specifically, we show that if $O(n \log n \log n)$ edges are added to a
binary tree of $n$ nodes then with probability more than $O(1/(\log n)^{1/n})$
the depth of all but $O({\log \log n} ^{\log \log n})$ vertices of the dag
collapses to 1. Experiments show that irregularity, as measured by distribution
of length of random walks from root to leaves, is also predictable and small.
The degree distribution and random walk length properties of real world graphs
from these domains are significantly different from random graphs of similar
vertex and edge size.
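The degenerate behaviour described above is easy to reproduce. A small sketch (sizes are hypothetical) adds random forward edges to a complete binary tree and inspects how shortest-path depths from the root collapse:

```python
import random
from collections import deque

def random_dag_depths(n, extra_edges, seed=0):
    """Binary tree on n nodes plus random forward edges; BFS depths from root."""
    rng = random.Random(seed)
    adj = {u: [] for u in range(n)}
    for v in range(1, n):
        adj[(v - 1) // 2].append(v)       # complete binary tree edges
    for _ in range(extra_edges):
        u, v = sorted(rng.sample(range(n), 2))
        adj[u].append(v)                  # u < v keeps the graph acyclic
    depth, queue = {0: 0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return depth

d = random_dag_depths(n=2**12, extra_edges=12 * 2**12)
print(max(d.values()), sum(1 for x in d.values() if x <= 2))
```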
|
1204.6537
|
Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to
Unveiling Traffic Anomalies
|
cs.IT cs.NI math.IT stat.ML
|
Given the superposition of a low-rank matrix plus the product of a known fat
compression matrix times a sparse matrix, the goal of this paper is to
establish deterministic conditions under which exact recovery of the low-rank
and sparse components becomes possible. This fundamental identifiability issue
arises with traffic anomaly detection in backbone networks, and subsumes
compressed sensing as well as the timely low-rank plus sparse matrix recovery
tasks encountered in matrix decomposition problems. Leveraging the ability of
$\ell_1$- and nuclear norms to recover sparse and low-rank matrices, a convex
program is formulated to estimate the unknowns. Analysis and simulations
confirm that the said convex program can recover the unknowns for sufficiently
low-rank and sparse enough components, along with a compression matrix
possessing an isometry property when restricted to operate on sparse vectors.
When the low-rank, sparse, and compression matrices are drawn from certain
random ensembles, it is established that exact recovery is possible with high
probability. First-order algorithms are developed to solve the nonsmooth convex
optimization problem with provable iteration complexity guarantees. Insightful
tests with synthetic and real network data corroborate the effectiveness of the
novel approach in unveiling traffic anomalies across flows and time, and its
ability to outperform existing alternatives.
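A natural reading of the convex program described above is the following sketch (the paper's exact formulation, constraints, and weighting may differ):

$$\min_{\mathbf{X},\,\mathbf{A}} \;\|\mathbf{X}\|_{*} + \lambda\,\|\mathbf{A}\|_{1} \quad \text{subject to} \quad \mathbf{Y} = \mathbf{X} + \mathbf{R}\mathbf{A},$$

where $\mathbf{Y}$ is the observation, $\mathbf{X}$ the low-rank component, $\mathbf{A}$ the sparse matrix, $\mathbf{R}$ the known fat compression matrix, and $\lambda > 0$ trades off rank against sparsity.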
|
1204.6549
|
Power Law Distributions of Patents as Indicators of Innovation
|
physics.soc-ph cs.SI
|
The total number of patents produced by a country (or the number of patents
produced per capita) is often used as an indicator for innovation. Here we
present evidence that the distribution of patents amongst applicants within
many OECD countries is well-described by power laws with exponents that vary
between 1.66 (Japan) and 2.37 (Poland). Using simulations based on simple
preferential attachment-type rules that generate power laws, we find we can
explain some of the variation in exponents between countries, with countries
that have larger numbers of patents per applicant generally exhibiting smaller
exponents in both the simulated and actual data. Similarly we find that the
exponents for most countries are inversely correlated with other indicators of
innovation, such as R&D intensity or the ubiquity of export baskets. This
suggests that in more advanced economies, which tend to have smaller values of
the exponent, a greater proportion of the total number of patents are filed by
large companies than in less advanced countries.
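For readers wishing to reproduce such fits, the standard continuous maximum-likelihood estimator of the exponent (for a chosen cutoff xmin; the sample counts below are made up) takes a few lines:

```python
import math

def powerlaw_exponent_mle(xs, xmin):
    """Continuous MLE: alpha = 1 + n / sum(ln(x / xmin)) over x >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Hypothetical patents-per-applicant counts:
counts = [1, 1, 1, 2, 2, 3, 5, 8, 13, 40, 90]
print(powerlaw_exponent_mle(counts, xmin=1))   # ~1.65 for this sample
```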
|
1204.6552
|
A Game-Theoretic Model Motivated by the DARPA Network Challenge
|
cs.GT cs.AI cs.MA
|
In this paper we propose a game-theoretic model to analyze events similar to
the 2009 \emph{DARPA Network Challenge}, which was organized by the Defense
Advanced Research Projects Agency (DARPA) for exploring the roles that the
Internet and social networks play in incentivizing wide-area collaborations.
The challenge was to form a group that would be the first to find the locations
of ten moored weather balloons across the United States. We consider a model in
which $N$ people (who can form groups) are located in some topology with a
fixed coverage volume around each person's geographical location. We consider
various topologies where the players can be located such as the Euclidean
$d$-dimension space and the vertices of a graph. A balloon is placed in the
space and a group wins if it is the first one to report the location of the
balloon. A larger team has a higher probability of finding the balloon, but we
assume that the prize money is divided equally among the team members. Hence
there is a competing tension to keep teams as small as possible.
\emph{Risk aversion} is the reluctance of a person to accept a bargain with
an uncertain payoff rather than another bargain with a more certain, but
possibly lower, expected payoff. In our model we consider the \emph{isoelastic}
utility function derived from the Arrow-Pratt measure of relative risk
aversion. The main aim is to analyze the structures of the groups in Nash
equilibria for our model. For the $d$-dimensional Euclidean space ($d\geq 1$)
and the class of bounded degree regular graphs we show that in any Nash
Equilibrium the \emph{richest} group (having maximum expected utility per
person) covers a constant fraction of the total volume.
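For completeness, the isoelastic utility derived from the Arrow-Pratt measure of relative risk aversion $\rho$ is the standard CRRA form

$$u(x) = \begin{cases} \frac{x^{1-\rho} - 1}{1-\rho}, & \rho \geq 0,\ \rho \neq 1, \\ \ln x, & \rho = 1, \end{cases}$$

so larger $\rho$ models players who are more reluctant to join small, high-variance teams.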
|
1204.6563
|
Parametric annealing: a stochastic search method for human pose tracking
|
cs.CV
|
Model-based methods for marker-free motion capture have a very high
computational overhead that makes them unattractive. In this paper we describe
a method that improves on existing global optimization techniques for tracking
articulated objects. Our method improves on the state-of-the-art Annealed
Particle Filter (APF) by reusing samples across annealing layers and by using
an adaptive parametric density for diffusion. We compare the proposed method
with APF on a scalable problem and study how the two methods scale with the
dimensionality, multi-modality, and range of the search. Then we perform a
sensitivity analysis on the parameters of our algorithm and show that it
tolerates a wide range of parameter settings. We also show results on tracking
human pose from the widely used HumanEva-I dataset. Our results show that the
proposed method reduces the tracking error despite using less than 50% of the
computational resources of APF. The tracked output also shows a significant
qualitative improvement over APF, as demonstrated through image and video
results.
|
1204.6564
|
A New Family of Low-Complexity STBCs for Four Transmit Antennas
|
cs.IT math.IT
|
Space-Time Block Codes (STBCs) suffer from a prohibitively high decoding
complexity unless the low-complexity decodability property is taken into
consideration in the STBC design. For this purpose, several families of STBCs
that involve a reduced decoding complexity have been proposed, notably the
multi-group decodable and the fast decodable (FD) codes. Recently, a new
family of codes that combines both of these families, namely the fast group
decodable (FGD) codes, was proposed. In this paper, we propose a new
construction scheme for rate-1 FGD codes for 2^a transmit antennas. The
proposed scheme is then applied to the case of four transmit antennas, and we
show that the new rate-1 FGD code has the lowest worst-case decoding
complexity among existing comparable STBCs. The coding gain of the new rate-1
code is optimized through constellation stretching and proved to be constant
irrespective of the underlying QAM constellation prior to normalization. Next,
we propose a new rate-2 FD STBC by multiplexing two of our rate-1 codes by
means of a unitary matrix. A compromise between rate and complexity is also
obtained by puncturing our rate-2 FD code, giving rise to a new rate-3/2 FD
code. The proposed codes are compared to existing codes in the literature, and
simulation results show that our rate-3/2 code has a lower average decoding
complexity while our rate-2 code maintains its lower average decoding
complexity in the low-SNR region. If a time-out sphere decoder is employed,
our proposed codes outperform existing codes in the high-SNR region thanks to
their lower worst-case decoding complexity.
|
1204.6583
|
A Conjugate Property between Loss Functions and Uncertainty Sets in
Classification Problems
|
stat.ML cs.LG
|
In binary classification problems, mainly two approaches have been proposed:
one is the loss function approach and the other is the uncertainty set
approach. The loss function approach is applied in major learning algorithms
such as the support vector machine (SVM) and boosting methods. The loss
function represents the penalty of the decision function on the training
samples. In the learning algorithm, the empirical mean of the loss function is
minimized to obtain the classifier. Against the backdrop of developments in
mathematical programming, learning algorithms based on loss functions are
nowadays widely applied to real-world data analysis, and the statistical
properties of such learning algorithms are well understood thanks to a large
body of theoretical work. On the other hand, the learning method using the
so-called uncertainty set is used in the hard-margin SVM, the minimax
probability machine (MPM) and the maximum margin MPM. In this learning
algorithm, the uncertainty set is first defined for each binary label based on
the training samples. Then, the best separating hyperplane between the two
uncertainty sets is employed as the decision function. This is regarded as an
extension of the maximum-margin approach. The uncertainty set approach has
been studied as an application of robust optimization in the field of
mathematical programming, but the statistical properties of learning
algorithms with uncertainty sets have not been intensively studied. In this
paper, we consider the relation between the above two approaches. We point out
that the uncertainty set is described by the level set of the conjugate of the
loss function. Based on this relation, we study the statistical properties of
learning algorithms using uncertainty sets.
|
1204.6610
|
Residual Belief Propagation for Topic Modeling
|
cs.LG cs.IR
|
Fast convergence speed is a desired property for training latent Dirichlet
allocation (LDA), especially in online and parallel topic modeling for massive
data sets. This paper presents a novel residual belief propagation (RBP)
algorithm to accelerate the convergence speed for training LDA. The proposed
RBP uses an informed scheduling scheme for asynchronous message passing, which
passes fast-convergent messages with a higher priority to influence those
slow-convergent messages at each learning iteration. Extensive empirical
studies confirm that RBP significantly reduces the training time until
convergence while achieving a much lower predictive perplexity than other
state-of-the-art training algorithms for LDA, including variational Bayes
(VB), collapsed Gibbs sampling (GS), loopy belief propagation (BP), and
residual VB (RVB).
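At its core, the informed scheduling scheme is residual-based prioritization. A minimal, generic sketch follows (this is not the paper's implementation; the `update` callback and the step budget are placeholders):

```python
import heapq
import itertools

def residual_schedule(messages, update, steps):
    """Repeatedly update the message with the largest residual (last change)."""
    counter = itertools.count()                    # tie-breaker for the heap
    heap = [(-float('inf'), next(counter), m) for m in messages]
    heapq.heapify(heap)
    for _ in range(steps):
        _, _, m = heapq.heappop(heap)              # largest residual first
        residual = update(m)                       # recompute m; return |change|
        heapq.heappush(heap, (-residual, next(counter), m))
```

For LDA, `update` would recompute one message and return the magnitude of its change, so messages that keep moving stay near the top of the queue while already-converged ones sink to the bottom.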
|
1204.6624
|
Theorems about Ergodicity and Class-Ergodicity of Chains with
Applications in Known Consensus Models
|
math.DS cs.SY eess.SY math.OC
|
In a multi-agent system, unconditional (multiple) consensus is the property of
reaching (multiple) consensus irrespective of the instant and the values at
which the states are initialized. For linear algorithms, the occurrence of
unconditional (multiple) consensus turns out to be equivalent to
(class-)ergodicity of the transition chain (A_n). For a wide class of chains,
those with the so-called balanced asymmetry property, necessary and sufficient
conditions for ergodicity and class-ergodicity are derived. The results are
employed to
analyze the limiting behavior of agents' states in the JLM model, the Krause
model, and the Cucker-Smale model. In particular, unconditional single or
multiple consensus occurs in all three models. Moreover, a necessary and
sufficient condition for unconditional consensus in the JLM model and a
sufficient condition for consensus in the Cucker-Smale model are obtained.
|
1204.6638
|
Modelling the emergence of spatial patterns of economic activity
|
cs.MA cs.SI physics.soc-ph q-fin.GN
|
Understanding how spatial configurations of economic activity emerge is
important when formulating spatial planning and economic policy. A simple model
was proposed by Simon, who assumed that firms grow at a rate proportional to
their size, and that new divisions of firms with certain probabilities relocate
to other firms or to new centres of economic activity. Simon's model produces
realistic results in the sense that the sizes of economic centres follow a Zipf
distribution, which is also observed in reality. It lacks realism in the sense
that mechanisms such as cluster formation, congestion (defined as an overly
high density of the same activities) and dependence on the spatial distribution
of external parties (clients, labour markets) are ignored.
The present paper proposes an extension of the Simon model that includes both
centripetal and centrifugal forces. Centripetal forces are included in the
sense that firm divisions are more likely to settle in locations that offer
higher accessibility to other firms. Centrifugal forces are represented by an
aversion to an overly high density of activities at the potential location. The
model is implemented as an agent-based simulation model in a simplified spatial
setting. By running both the Simon model and the extended model, comparisons
are made with respect to their effects on spatial configurations. To this end a
series of metrics are used, including the rank-size distribution and indices of
the degree of clustering and concentration.
|
1204.6653
|
Elimination of Glass Artifacts and Object Segmentation
|
cs.CV
|
Many images nowadays are captured from behind glass and may exhibit certain
stains or discrepancies caused by the glass; such images must be processed to
differentiate between the glass and the objects behind it. This research paper
proposes an algorithm to remove the damaged or corrupted part of the image,
make it consistent with the rest of the image, and segment objects behind the
glass. The damaged part is removed using the total variation inpainting
method, and segmentation is done using k-means clustering, anisotropic
diffusion and the watershed transformation. The final output is obtained by
interpolation. This algorithm can be useful for applications in which some
parts of the images are corrupted during data transmission, or where objects
need to be segmented from an image for further processing.
|
1204.6703
|
A Spectral Algorithm for Latent Dirichlet Allocation
|
cs.LG stat.ML
|
The problem of topic modeling can be seen as a generalization of the
clustering problem, in that it posits that observations are generated due to
multiple latent factors (e.g., the words in each document are generated as a
mixture of several active topics, as opposed to just one). This increased
representational power comes at the cost of a more challenging unsupervised
learning problem of estimating the topic probability vectors (the distributions
over words for each topic), when only the words are observed and the
corresponding topics are hidden.
We provide a simple and efficient learning procedure that is guaranteed to
recover the parameters for a wide class of mixture models, including the
popular latent Dirichlet allocation (LDA) model. For LDA, the procedure
correctly recovers both the topic probability vectors and the prior over the
topics, using only trigram statistics (i.e., third order moments, which may be
estimated with documents containing just three words). The method, termed
Excess Correlation Analysis (ECA), is based on a spectral decomposition of low
order moments (third and fourth order) via two singular value decompositions
(SVDs). Moreover, the algorithm is scalable since the SVD operations are
carried out on $k\times k$ matrices, where $k$ is the number of latent factors
(e.g. the number of topics), rather than in the $d$-dimensional observed space
(typically $d \gg k$).
|
1204.6725
|
OCT Segmentation Survey and Summary Reviews and a Novel 3D Segmentation
Algorithm and a Proof of Concept Implementation
|
cs.CV physics.optics
|
We overview the existing OCT work, especially the practical aspects of it. We
create a novel algorithm for 3D OCT segmentation with the goals of speed and/or
accuracy while remaining flexible in the design and implementation for future
extensions and improvements. The document at this point is a running draft
being iteratively "developed" as a progress report as the work and survey
advance. It contains the review and summarization of select OCT works, the
design and implementation of the OCTMARF experimentation application and some
results.
|
1205.0030
|
A Market for Unbiased Private Data: Paying Individuals According to
their Privacy Attitudes
|
cs.CY cs.SI physics.soc-ph
|
Since there is, in principle, no reason why third parties should not pay
individuals for the use of their data, we introduce a realistic market that
would allow these payments to be made while taking into account the privacy
attitude of the participants. And since it is usually important to use unbiased
samples to obtain credible statistical results, we examine the properties that
such a market should have and suggest a mechanism that compensates those
individuals that participate according to their risk attitudes. Equally
important, we show that this mechanism also benefits buyers, as they pay less
for the data than they would if they compensated all individuals with the same
maximum fee that the most concerned ones expect.
|
1205.0038
|
Percolation Computation in Complex Networks
|
cs.SI physics.soc-ph
|
K-clique percolation is an overlapping community finding algorithm which
extracts particular structures, comprised of overlapping cliques, from complex
networks. While it is conceptually straightforward, and can be elegantly
expressed using clique graphs, certain aspects of k-clique percolation are
computationally challenging in practice. In this paper we investigate aspects
of empirical social networks, such as the large numbers of overlapping maximal
cliques contained within them, that make clique percolation, and clique graph
representations, computationally expensive. We motivate a simple algorithm to
conduct clique percolation, and investigate its performance compared to current
best-in-class algorithms. We present improvements to this algorithm, which
allow us to perform k-clique percolation on much larger empirical datasets. Our
approaches perform much better than existing algorithms on networks exhibiting
pervasively overlapping community structure, especially for higher values of k.
However, clique percolation remains a hard computational problem; current
algorithms still scale worse than some other overlapping community finding
algorithms.
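For small graphs, networkx ships a straightforward baseline for the computation discussed above; it is useful for checking correctness, though it does not scale to the large empirical networks targeted here:

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.karate_club_graph()                     # a small standard test graph
for community in k_clique_communities(G, 3):   # k = 3: triangle percolation
    print(sorted(community))
```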
|
1205.0044
|
A Singly-Exponential Time Algorithm for Computing Nonnegative Rank
|
cs.DS cs.IR cs.LG
|
Here, we give an algorithm for deciding if the nonnegative rank of a matrix
$M$ of dimension $m \times n$ is at most $r$ which runs in time
$(nm)^{O(r^2)}$. This is the first exact algorithm that runs in time
singly-exponential in $r$. This algorithm (and earlier algorithms) are built on
methods for finding a solution to a system of polynomial inequalities (if one
exists). Notably, the best algorithms for this task run in time exponential in
the number of variables but polynomial in all of the other parameters (the
number of inequalities and the maximum degree).
Hence these algorithms motivate natural algebraic questions whose solutions
have immediate {\em algorithmic} implications: how many variables do we need
to represent the decision problem "does $M$ have nonnegative rank at most
$r$?" A naive formulation uses $nr + mr$ variables and yields an algorithm
that is exponential in $n$ and $m$ even for constant $r$. Arora, Ge, Kannan,
and Moitra (STOC 2012) recently reduced the number of variables to $2r^2 2^r$;
here we exponentially reduce the number of variables to $2r^2$, and this
yields our main algorithm. In fact, the algorithm that we obtain is nearly
optimal (under the Exponential Time Hypothesis), since an algorithm that runs
in time $(nm)^{o(r)}$ would yield a subexponential algorithm for 3-SAT.
Our main result is based on establishing a normal form for nonnegative matrix
factorization - which in turn allows us to exploit algebraic dependence among a
large collection of linear transformations with variable entries. Additionally,
we also demonstrate that nonnegative rank cannot be certified by even a very
large submatrix of $M$, and this property also follows from the intuition
gained from viewing nonnegative rank through the lens of systems of polynomial
inequalities.
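For reference, the naive formulation mentioned above asks whether the system

$$M = AW, \qquad A \in \mathbb{R}_{\geq 0}^{m \times r}, \qquad W \in \mathbb{R}_{\geq 0}^{r \times n}$$

is solvable, using the $nr + mr$ entries of $A$ and $W$ as variables; the main algorithm replaces this with an equivalent system in only $2r^2$ variables.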
|
1205.0047
|
$QD$-Learning: A Collaborative Distributed Strategy for Multi-Agent
Reinforcement Learning Through Consensus + Innovations
|
stat.ML cs.LG cs.MA math.OC math.PR
|
The paper considers a class of multi-agent Markov decision processes (MDPs),
in which the network agents respond differently (as manifested by the
instantaneous one-stage random costs) to a global controlled state and the
control actions of a remote controller. The paper investigates a distributed
reinforcement learning setup with no prior information on the global state
transition and local agent cost statistics. Specifically, with the agents'
objective consisting of minimizing a network-averaged infinite horizon
discounted cost, the paper proposes a distributed version of $Q$-learning,
$\mathcal{QD}$-learning, in which the network agents collaborate by means of
local processing and mutual information exchange over a sparse (possibly
stochastic) communication network to achieve the network goal. Under the
assumption that each agent is only aware of its local online cost data and the
inter-agent communication network is \emph{weakly} connected, the proposed
distributed scheme is almost surely (a.s.) shown to yield asymptotically the
desired value function and the optimal stationary control policy at each
network agent. The analytical techniques developed in the paper to address the
mixed time-scale stochastic dynamics of the \emph{consensus + innovations}
form, which arise as a result of the proposed interactive distributed scheme,
are of independent interest.
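Schematically, a consensus + innovations update of this kind has the following form (the notation here is assumed for illustration, not quoted from the paper):

$$Q^{t+1}_{n}(s,a) = Q^{t}_{n}(s,a) - \beta_{t} \sum_{l \in \Omega_{n}(t)} \bigl(Q^{t}_{n}(s,a) - Q^{t}_{l}(s,a)\bigr) + \alpha_{t} \Bigl(c_{n}(s,a) + \gamma \min_{a'} Q^{t}_{n}(s',a') - Q^{t}_{n}(s,a)\Bigr),$$

where $\Omega_{n}(t)$ is agent $n$'s (possibly random) neighborhood; the consensus and innovation step-size sequences $\beta_{t}$ and $\alpha_{t}$ decay at different rates, which is the source of the mixed time-scale dynamics mentioned above.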
|
1205.0076
|
Robust Distributed Routing in Dynamical Networks with Cascading Failures
|
cs.SY math.DS
|
Robustness of routing policies for networks is a central problem which is
gaining increased attention with the growing awareness of the need to
safeguard critical infrastructure networks against natural and man-made
disruptions. Routing
under limited information and the possibility of cascades through the network
adds serious challenges to this problem. This abstract considers the framework
of dynamical networks introduced in our earlier work [1,2], where the network
is modeled by a system of ordinary differential equations derived from mass
conservation laws on directed acyclic graphs with a single origin-destination
pair and a constant inflow at the origin. The rate of change of the particle
density on each link of the network equals the difference between the inflow
and the outflow on that link. The latter is modeled to depend on the current
particle density on that link through a flow function. The novel modeling
element in this paper is that every link is assumed to have finite capacity for
particle density and that the flow function is modeled to be strictly
increasing as density increases from zero up to the maximum density capacity,
and is discontinuous at the maximum density capacity, with the flow function
value being zero at that point. This feature, in particular, allows for the
possibility of spill-backs in our model. In this paper, we present our results
on resilience of such networks under distributed routing, towards perturbations
that reduce link-wise flow functions.
|
1205.0079
|
Complexity Analysis of the Lasso Regularization Path
|
stat.ML cs.LG math.OC
|
The regularization path of the Lasso can be shown to be piecewise linear,
making it possible to "follow" and explicitly compute the entire path. We
analyze in this paper this popular strategy, and prove that its worst case
complexity is exponential in the number of variables. We then oppose this
pessimistic result to an (optimistic) approximate analysis: We show that an
approximate path with at most O(1/sqrt(epsilon)) linear segments can always be
obtained, where every point on the path is guaranteed to be optimal up to a
relative epsilon-duality gap. We complete our theoretical analysis with a
practical algorithm to compute these approximate paths.
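As a quick illustration (assuming scikit-learn is available), the exact
piecewise-linear path and its kinks can be obtained with `lars_path`; the
number of linear segments is the quantity that the worst-case analysis shows
can blow up exponentially:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(100)

# alphas are the breakpoints; between two consecutive alphas the
# coefficients are affine in the regularization parameter
alphas, active, coefs = lars_path(X, y, method="lasso")
print(f"{len(alphas) - 1} linear segments from alpha={alphas[0]:.3f} down to 0")
```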
|
1205.0085
|
Spectrum Leasing via Cooperation for Enhanced Physical-Layer Secrecy
|
cs.IT math.IT
|
Spectrum leasing via cooperation refers to the possibility of primary users
leasing a portion of the spectral resources to secondary users in exchange for
cooperation. In the presence of an eavesdropper, this correspondence proposes a
novel application of this concept in which the secondary cooperation aims at
improving secrecy of the primary network by creating more interference to the
eavesdropper than to the primary receiver. To generate the interference in a
positive way, this work studies an optimal design of a beamformer at the
secondary transmitter with multiple antennas that maximizes a secrecy rate of
the primary network while satisfying a required rate for the secondary network.
Moreover, we investigate two scenarios depending upon the operation of the
eavesdropper: i) the eavesdropper treats the interference by the secondary
transmission as an additive noise (single-user decoding) and ii) the
eavesdropper tries to decode and remove the secondary signal (joint decoding).
Numerical results confirm that, for a wide range of required secondary rate
constraints, the proposed spectrum-leasing strategy increases the secrecy rate
of the primary network compared to the case of no spectrum leasing.
|
1205.0088
|
ProPPA: A Fast Algorithm for $\ell_1$ Minimization and Low-Rank Matrix
Completion
|
cs.LG math.OC
|
We propose a Projected Proximal Point Algorithm (ProPPA) for solving a class
of optimization problems. The algorithm iteratively computes the proximal point
of the last estimated solution projected onto an affine space which is itself
parallel to, and approaches, the feasible set. We provide convergence analysis
theoretically supporting the general algorithm, and then apply it to solving
$\ell_1$-minimization problems and the matrix completion problem. These
problems arise in many applications including machine learning, image and
signal processing. We compare our algorithm with the existing state-of-the-art
algorithms. Experimental results on solving these problems show that our
algorithm is very efficient and competitive.
|
1205.0110
|
Modelling spatial patterns of economic activity in the Netherlands
|
cs.MA
|
Understanding how spatial configurations of economic activity emerge is
important when formulating spatial planning and economic policy. Not only
micro-simulation and agent-based models such as UrbanSim, ILUMAS and SIMFIRMS,
but also Simon's model of hierarchical concentration have been widely applied
for this purpose. These models, however, have limitations with respect to
simulating structural changes in spatial economic systems and the impact of
proximity. The present paper proposes a model of firm development that is based
on behavioural rules such as growth, closure, spin-off and relocation. An
important aspect of the model is that locational preferences of firms are based
on agglomeration advantages, accessibility of markets and congestion, allowing
for a proper description of concentration and deconcentration tendencies. By
comparing the outcomes of the proposed model with real world data, we will
calibrate the parameters and assess how well the model predicts existing
spatial configurations. The model is implemented as an agent-based
simulation model describing firm development in the Netherlands in 21
industrial sectors from 1950 to 2004.
|
1205.0111
|
Alternatives for optimization in systems and control: convex and
non-convex approaches
|
math.OC cs.SY
|
In this presentation, we will develop a short overview of main trends of
optimization in systems and control, and from there outline some new
perspectives emerging today. More specifically, we will focus on the current
situation, where it is clear that convex and Linear Matrix Inequality (LMI)
methods have become the most common option. However, because of its vast
success, the convex approach is often the only direction considered, even
though the underlying problem is non-convex and other optimization methods
specifically equipped to handle such problems should have been used instead. We
will present key points on this topic, and as a side result we will propose a
method to produce a virtually infinite number of papers.
|
1205.0149
|
Non-Universality in Semi-Directed Barabasi-Albert Networks
|
physics.comp-ph cs.SI physics.soc-ph
|
In usual scale-free networks of Barabasi-Albert type, a newly added node
selects randomly m neighbors from the already existing network nodes,
proportionally to the number of links these had before. Then the number N(k) of
nodes with k links each decays as 1/k^gamma where gamma=3 is universal, i.e.
independent of m. Now we use a limited directedness in the construction of the
network, as a result of which the exponent gamma decreases from 3 to 2 for
increasing m.
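For reference, a sketch of the standard (undirected) construction that yields
gamma = 3 independently of m; the paper's semi-directed modification of the
attachment step is not reproduced here:

```python
import random
from collections import Counter

random.seed(1)
m, n = 3, 100000
endpoints = []                        # node j appears once per incident edge
core = list(range(m + 1))             # small complete seed graph
for i in core:
    for j in core:
        if i < j:
            endpoints += [i, j]
for new in range(m + 1, n):
    targets = set()
    while len(targets) < m:           # m distinct, degree-proportional picks
        targets.add(random.choice(endpoints))
    for t in targets:
        endpoints += [new, t]

deg = Counter(endpoints)              # occurrence count equals degree
hist = Counter(deg.values())
print([(k, hist[k]) for k in (m, 2 * m, 4 * m, 8 * m)])   # ~ k^-3 decay
```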
|
1205.0162
|
Joint Power and Resource Allocation for Block-Fading Relay-Assisted
Broadcast Channels
|
cs.IT math.IT math.OC
|
We provide the solution for optimizing the power and resource allocation over
block-fading relay-assisted broadcast channels in order to maximize the long
term average achievable rates region of the users. The problem formulation
assumes regenerative (repetition coding) decode-and-forward (DF) relaying
strategy, long-term average total transmitted power constraint, orthogonal
multiplexing of the users' messages within the channel blocks, the possibility
to
use a direct transmission (DT) mode from the base station to the user terminal
directly or a relaying (DF) transmission mode, and partial channel state
information. We show that our optimization problem can be transformed into an
equivalent "no-relaying" broadcast channel optimization problem with each
actual user substituted by two virtual users having different channel qualities
and multiplexing weights. The proposed power and resource allocation strategies
are expressed in closed-form that can be applied practically in centralized
relay-assisted wireless networks. Furthermore, we show by numerical examples
that our scheme enlarges the achievable rates region significantly.
|
1205.0181
|
Distributed Linear Precoder Optimization and Base Station Selection for
an Uplink Heterogeneous Network
|
cs.IT math.IT
|
In a heterogeneous wireless cellular network, each user may be covered by
multiple access points such as macro/pico/relay/femto base stations (BS). An
effective approach to maximize the sum utility (e.g., system throughput) in
such a network is to jointly optimize users' linear precoders as well as their
base station associations. In this paper we first show that this joint
optimization problem is NP-hard and thus is difficult to solve to global
optimality. To find a locally optimal solution, we formulate the problem as a
noncooperative game in which the users and the BSs both act as players. We
introduce a set of new utility functions for the players and show that every
Nash equilibrium (NE) of the resulting game is a stationary solution of the
original sum utility maximization problem. Moreover, we develop a best-response
type algorithm that allows the players to distributedly reach a NE of the game.
Simulation results show that the proposed distributed algorithm can effectively
relieve local BS congestion and simultaneously achieve high throughput and load
balancing in a heterogeneous network.
|
1205.0207
|
Shortest Path Set Induced Vertex Ordering and its Application to
Distributed Distance Optimal Multi-agent Formation Path Planning
|
cs.RO cs.SY
|
For the task of moving a group of indistinguishable agents on a connected
graph with unit edge lengths into an arbitrary goal formation, it was
previously shown that distance optimal paths can be scheduled to complete with
a tight convergence time guarantee, using a fully centralized algorithm. In
this study, we show that the problem formulation in fact induces a more
fundamental ordering of the vertices on the underlying graph network, which
directly leads to a more intuitive scheduling algorithm that assures the same
convergence time and runs faster. More importantly, this structure enables a
distributed scheduling algorithm once individual paths are assigned to the
agents, which was not possible before. The vertex ordering also readily extends
to more general graphs - those with non-unit capacities and edge lengths - for
which we again guarantee the convergence time until the desired formation is
achieved.
|
1205.0211
|
Non-conservative kinetic exchange model of opinion dynamics with
randomness and bounded confidence
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The concept of a bounded confidence level is incorporated in a
nonconservative kinetic exchange model of opinion dynamics model where opinions
have continuous values $\in [-1,1]$. The characteristics of the unrestricted
model, which has one parameter $\lambda$ representing conviction, undergo
drastic changes with the introduction of bounded confidence parametrised by
$\delta$. Three distinct regions are identified in the phase diagram in the
$\delta-\lambda$ plane and evidence of a first-order phase transition for
$\delta \geq 0.3$ are presented. A neutral state with all opinions equal to
zero occurs for $\lambda \leq \lambda_{c_1} \simeq 2/3$, independent of
$\delta$, while for $\lambda_{c_1} \leq \lambda \leq \lambda_{c_2}(\delta)$, an
ordered region is seen to exist where opinions of only one sign prevail. At
$\lambda_{c_2}(\delta)$, a transition to a disordered state is observed, where
individual opinions of both signs coexist and move closer to the extreme values
($\pm 1$) as $\lambda$ is increased. For confidence level $\delta < 0.3$, the
ordered phase exists for a narrow range of $\lambda$ only. The line $\delta =
0$ is apparently a line of discontinuity and this limit is discussed in some
detail.
|
1205.0213
|
Convex dwell-time characterizations for uncertain linear impulsive
systems
|
math.OC cs.SY math.CA math.DS
|
New sufficient conditions for the characterization of dwell-times for linear
impulsive systems are proposed and shown to coincide with continuous decrease
conditions of a certain class of looped-functionals, a recently introduced type
of functionals suitable for the analysis of hybrid systems. This approach
allows one to consider, in a new way, Lyapunov functions that evolve
non-monotonically along the flow of the system, thereby broadening the
admissible class of systems which may be analyzed. As a byproduct, the
particular structure of the obtained conditions makes the method easily
extendable to uncertain systems
by exploiting some convexity properties. Several examples illustrate the
approach.
|
1205.0243
|
Poultry Diseases Expert System using Dempster-Shafer Theory
|
cs.AI stat.AP
|
Based on World Health Organization (WHO) fact sheet in the 2011, outbreaks of
poultry diseases especially Avian Influenza in poultry may raise global public
health concerns due to their effect on poultry populations, their potential to
cause serious disease in people, and their pandemic potential. In this
research, we built a Poultry Diseases Expert System using Dempster-Shafer
Theory. In this Poultry Diseases Expert System we describe symptoms which
include depression; bluish comb, wattle, and face region; swollen face region;
narrowness of eyes; and balance disorders. The result of the research is that
the Poultry Diseases Expert System successfully identifies poultry diseases.
|
1205.0260
|
Littlewood Polynomials with Small $L^4$ Norm
|
math.NT cs.IT math.CO math.IT
|
Littlewood asked how small the ratio $||f||_4/||f||_2$ (where $||.||_\alpha$
denotes the $L^\alpha$ norm on the unit circle) can be for polynomials $f$
having all coefficients in $\{1,-1\}$, as the degree tends to infinity. Since
1988, the least known asymptotic value of this ratio has been $\sqrt[4]{7/6}$,
which was conjectured to be minimum. We disprove this conjecture by showing
that there is a sequence of such polynomials, derived from the Fekete
polynomials, for which the limit of this ratio is less than $\sqrt[4]{22/19}$.
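A numeric check of such ratios is straightforward: for a ±1 polynomial,
||f||_2^2 = n and ||f||_4^4 equals the sum of squared aperiodic
autocorrelations. The sketch below is an illustration, not the paper's exact
construction; it uses a quarter-shifted Legendre-symbol sequence, for which
the ratio should be near the old record value (7/6)^(1/4) ≈ 1.0393:

```python
import numpy as np

p = 4999                                   # an odd prime
t = p // 4                                 # quarter rotation of the sequence
a = np.array([pow((j + t) % p, (p - 1) // 2, p) for j in range(p)])
a = np.where(a <= 1, 1.0, -1.0)            # Legendre symbol, 0 mapped to +1

c = np.correlate(a, a, mode="full")        # all aperiodic autocorrelations
l4 = (c @ c) ** 0.25                       # ||f||_4^4 = sum_u c_u^2
l2 = p ** 0.5                              # ||f||_2^2 = n for +-1 coefficients
print(l4 / l2)                             # compare with (7/6)**0.25 ~ 1.0393
```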
|
1205.0281
|
Improving Achievable Rate for the Two-User SISO Interference Channel
with Improper Gaussian Signaling
|
cs.IT math.IT
|
This paper studies the achievable rate region of the two-user
single-input-single-output (SISO) Gaussian interference channel when improper
Gaussian signaling is applied. Under the assumption that the
interference is treated as additive Gaussian noise, we show that the user's
achievable rate can be expressed as a summation of the rate achievable by the
conventional proper Gaussian signaling, which depends on the users' input
covariances only, and an additional term, which is a function of both the
users' covariances and pseudo-covariances. The additional degree of freedom
given by the pseudo-covariance, which is conventionally set to be zero for the
case of proper Gaussian signaling, provides an opportunity to improve the
achievable rate by employing improper Gaussian signaling. Since finding the
optimal solution for the joint covariance and pseudo-covariance optimization is
difficult, we propose a sub-optimal but efficient algorithm by separately
optimizing these two sets of parameters. Numerical results show that the
proposed algorithm provides a close-to-optimal performance as compared to the
exhaustive search method, and significantly outperforms the optimal proper
Gaussian signaling and other existing improper Gaussian signaling schemes.
|
1205.0288
|
A Randomized Mirror Descent Algorithm for Large Scale Multiple Kernel
Learning
|
cs.LG stat.ML
|
We consider the problem of simultaneously learning to linearly combine a very
large number of kernels and learn a good predictor based on the learnt kernel.
When the number of kernels $d$ to be combined is very large, multiple kernel
learning methods whose computational cost scales linearly in $d$ are
intractable. We propose a randomized version of the mirror descent algorithm to
overcome this issue, under the objective of minimizing the group $p$-norm
penalized empirical risk. The key to achieve the required exponential speed-up
is the computationally efficient construction of low-variance estimates of the
gradient. We propose importance sampling based estimates, and find that the
ideal distribution samples a coordinate with a probability proportional to the
magnitude of the corresponding gradient. We show the surprising result that in
the case of learning the coefficients of a polynomial kernel, the combinatorial
structure of the base kernels to be combined allows the implementation of
sampling from this distribution to run in $O(\log(d))$ time, making the total
computational cost of the method to achieve an $\epsilon$-optimal solution to
be $O(\log(d)/\epsilon^2)$, thereby allowing our method to operate for very
large values of $d$. Experiments with simulated and real data confirm that the
new algorithm is computationally more efficient than its state-of-the-art
alternatives.
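The key primitive is easy to state in isolation: sample a single coordinate
with probability proportional to the gradient magnitude and reweight, which
keeps the estimate unbiased while controlling variance. A sketch with a fixed
stand-in gradient; the paper's O(log(d)) sampler for polynomial kernels is not
reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)             # stand-in for the true gradient
prob = np.abs(g) / np.abs(g).sum()        # ideal sampling distribution

T = 100000
idx = rng.choice(g.size, size=T, p=prob)  # one sampled coordinate per step
est = np.zeros_like(g)
np.add.at(est, idx, g[idx] / prob[idx])   # reweight to stay unbiased
est /= T

print(np.linalg.norm(est - g) / np.linalg.norm(g))  # shrinks as T grows
```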
|
1205.0312
|
Least Information Modeling for Information Retrieval
|
cs.IR cs.IT math.IT
|
We proposed a Least Information theory (LIT) to quantify meaning of
information in probability distribution changes, from which a new information
retrieval model was developed. We observed several important characteristics of
the proposed theory and derived two quantities in the IR context for document
representation. Given probability distributions in a collection as prior
knowledge, LI Binary (LIB) quantifies least information due to the binary
occurrence of a term in a document whereas LI Frequency (LIF) measures least
information based on the probability of drawing a term from a bag of words.
Three fusion methods were also developed to combine LIB and LIF quantities for
term weighting and document ranking. Experiments on four benchmark TREC
collections for ad hoc retrieval showed that LIT-based methods demonstrated
very strong performances compared to classic TF*IDF and BM25, especially for
verbose queries and hard search topics. The least information theory offers a
new approach to measuring semantic quantities of information and provides
valuable insight into the development of new IR models.
|
1205.0326
|
Performance Analysis of Decode-and-Forward Relaying in Gamma-Gamma
Fading Channels
|
cs.IT math.IT
|
Decode-and-forward (DF) cooperative communication based on free space optical
(FSO) links is studied in this letter. We analyze the performance of the DF
protocol over FSO links following the Gamma-Gamma distribution. The
cumulative distribution function (CDF) and probability density function (PDF)
of a random variable containing a mixture of Gamma-Gamma and Gaussian random
variables are derived. Using the derived CDF and PDF, the average bit error
rate of DF relaying is obtained.
|
1205.0329
|
An Adaptive Conditional Zero-Forcing Decoder with Full-diversity, Least
Complexity and Essentially-ML Performance for STBCs
|
cs.IT math.IT
|
A low complexity, essentially-ML decoding technique for the Golden code and
the 3 antenna Perfect code was introduced by Sirianunpiboon, Howard and
Calderbank. Though no theoretical analysis of the decoder was given, the
simulations showed that this decoding technique has almost maximum-likelihood
(ML) performance. Inspired by this technique, in this paper we introduce two
new low complexity decoders for Space-Time Block Codes (STBCs) - the Adaptive
Conditional Zero-Forcing (ACZF) decoder and the ACZF decoder with successive
interference cancellation (ACZF-SIC), which include as a special case the
decoding technique of Sirianunpiboon et al. We show that both ACZF and ACZF-SIC
decoders are capable of achieving full-diversity, and we give sufficient
conditions for an STBC to give full-diversity with these decoders. We then show
that the Golden code, the 3 and 4 antenna Perfect codes, the 3 antenna Threaded
Algebraic Space-Time code and the 4 antenna rate 2 code of Srinath and Rajan
are all full-diversity ACZF/ACZF-SIC decodable with complexity strictly less
than that of their ML decoders. Simulations show that the proposed decoding
method performs identically to ML decoding for all these five codes. These
STBCs, along with the proposed decoding algorithm, outperform all known codes
in terms of decoding complexity and error performance for 2, 3 and 4 transmit
antennas.
We further provide a lower bound on the complexity of full-diversity
ACZF/ACZF-SIC decoding. All the five codes listed above achieve this lower
bound and hence are optimal in terms of minimizing the ACZF/ACZF-SIC decoding
complexity. Both ACZF and ACZF-SIC decoders are amenable to sphere decoding
implementation.
|
1205.0345
|
Bounds on List Decoding Gabidulin Codes
|
cs.IT math.IT
|
An open question about Gabidulin codes is whether polynomial-time list
decoding beyond half the minimum distance is possible or not. In this
contribution, we give a lower and an upper bound on the list size, i.e., the
number of codewords in a ball around the received word. The lower bound shows
that if the radius of this ball is greater than the Johnson radius, this list
size can be exponential and hence, no polynomial-time list decoding is
possible. The upper bound on the list size uses subspace properties.
|
1205.0406
|
Minimax Classifier for Uncertain Costs
|
cs.LG
|
Many studies on cost-sensitive learning assume that a unique cost matrix
is known for a problem. However, this assumption may not hold for many
real-world problems. For example, a classifier might need to be applied in
several circumstances, each of which is associated with a different cost
matrix. Or, different human experts may have different opinions about the
costs for a given problem. Motivated by these facts, this study aims to seek
the minimax classifier over multiple cost matrices. In summary, we
theoretically prove
that, no matter how many cost matrices are involved, the minimax problem can be
tackled by solving a number of standard cost-sensitive problems and
sub-problems that involve only two cost matrices. As a result, a general
framework for achieving minimax classifier over multiple cost matrices is
suggested and justified by preliminary empirical studies.
|
1205.0411
|
Hypothesis testing using pairwise distances and associated kernels (with
Appendix)
|
cs.LG stat.ME stat.ML
|
We provide a unifying framework linking two classes of statistics used in
two-sample and independence testing: on the one hand, the energy distances and
distance covariances from the statistics literature; on the other, distances
between embeddings of distributions to reproducing kernel Hilbert spaces
(RKHS), as established in machine learning. The equivalence holds when energy
distances are computed with semimetrics of negative type, in which case a
kernel may be defined such that the RKHS distance between distributions
corresponds exactly to the energy distance. We determine the class of
probability distributions for which kernels induced by semimetrics are
characteristic (that is, for which embeddings of the distributions to an RKHS
are injective). Finally, we investigate the performance of this family of
kernels in two-sample and independence tests: we show in particular that the
energy distance most commonly employed in statistics is just one member of a
parametric family of kernels, and that other choices from this family can yield
more powerful tests.
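The equivalence can be checked numerically: with the kernel induced by the
Euclidean (negative-type) semimetric, k(x, y) = (d(x, z0) + d(y, z0) -
d(x, y))/2, the biased V-statistics satisfy energy distance = 2 MMD^2 exactly.
A sketch; the base point z0 = 0 and the Gaussian samples are arbitrary
choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
Y = rng.standard_normal((150, 3)) + 0.5

def d(A, B):                         # pairwise Euclidean distance matrix
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

energy = 2 * d(X, Y).mean() - d(X, X).mean() - d(Y, Y).mean()

z0 = np.zeros((1, 3))                # base point defining the induced kernel
def k(A, B):
    return 0.5 * (d(A, z0) + d(B, z0).T - d(A, B))

mmd2 = k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
print(energy, 2 * mmd2)              # the two values coincide
```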
|
1205.0435
|
Scalable Social Coordination using Enmeshed Queries
|
cs.DB cs.SI physics.soc-ph
|
Social coordination allows users to move beyond awareness of their friends to
efficiently coordinating physical activities with others. While specific forms
of social coordination can be seen in tools such as Evite, Meetup and Groupon,
we introduce a more general model using what we call enmeshed queries. An
enmeshed query allows users to declaratively specify an intent to coordinate by
specifying social attributes such as the desired group size and who/what/when,
and the database returns matching queries. Enmeshed queries are continuous, but
new queries (and not data) answer older queries; the variable group size also
makes enmeshed queries different from entangled queries, publish-subscribe
systems, and dating services.
We show that even offline group coordination using enmeshed queries is
NP-hard. We then introduce efficient heuristics that use selective indices such
as location and time to reduce the space of possible matches; we also add
refinements such as delayed evaluation and using the relative matchability of
users to determine search order. We describe a centralized implementation and
evaluate its performance against an optimal algorithm. We show that the
combination of not stopping prematurely (after finding a match) and delayed
evaluation results in an algorithm that finds 86% of the matches found by an
optimal algorithm, and takes an average of 40 usec per query using 1 core of a
2.5 Ghz server machine. Further, the algorithm has good latency, is reasonably
fair to large group size requests, and can be scaled to global workloads using
multiple cores and multiple servers. We conclude by describing potential
generalizations that add prices, recommendations, and data mining to basic
enmeshed queries.
|
1205.0439
|
TH*:Scalable Distributed Trie Hashing
|
cs.DS cs.DB cs.DC
|
In today's world of computers, dealing with huge amounts of data is not
unusual. The need to distribute this data in order to increase its availability
and increase the performance of accessing it is more urgent than ever. For
these reasons it is necessary to develop scalable distributed data structures.
In this paper we propose TH*, a distributed variant of the Trie Hashing data
structure. First we propose THsw, a new version of TH without the Nil node in
the digital tree (trie); this version is then adapted to a multicomputer
environment. The simulation results reveal that TH* is scalable in the sense
that it grows gracefully, one bucket at a time, to a large number of servers;
TH* also offers good storage space utilization and high query efficiency,
especially for ordering operations.
|
1205.0483
|
High availability using virtualization - 3RC
|
cs.SY
|
High availability has always been one of the main problems for a data center.
Until now, high availability has been achieved by host-per-host redundancy, a
highly
expensive method in terms of hardware and human costs. A new approach to the
problem can be offered by virtualization. Using virtualization, it is possible
to achieve a redundancy system for all the services running on a data center.
This new approach to high availability allows the running virtual machines to
be distributed over a small number of servers, by exploiting the features of
the virtualization layer: start, stop and move virtual machines between
physical hosts. The 3RC system is based on a finite state machine, providing
the possibility to restart each virtual machine over any physical host, or
reinstall it from scratch. A complete infrastructure has been developed to
install operating system and middleware in a few minutes. To virtualize the
main servers of a data center, a new procedure has been developed to migrate
physical to virtual hosts. The whole Grid data center SNS-PISA is running at
the moment in virtual environment under the high availability system.
|
1205.0537
|
A greedy-navigator approach to navigable city plans
|
physics.soc-ph cs.SI
|
We use a set of four theoretical navigability indices for street maps to
investigate the shape of the resulting street networks, if they are grown by
optimizing these indices. The indices compare the performance of simulated
navigators (having a partial information about the surroundings, like humans in
many real situations) to the performance of optimally navigating individuals.
We show that our simple greedy shortcut construction strategy generates
emergent structures that are different from real road networks, but not
inconceivable. The resulting city plans, for all navigation indices, share
common qualitative properties such as the tendency for triangular blocks to
appear, while the more quantitative features, such as degree distributions and
clustering, are characteristically different depending on the type of metrics
and routing strategies. We show that it is the type of metrics used which
determines the overall shapes characterized by structural heterogeneity, while
the routing schemes contribute to more subtle details of locality, which are
more emphasized in the case of unrestricted connections, when edge crossing is
allowed.
|
1205.0540
|
A Fitness Model for Scholarly Impact Analysis
|
stat.AP cs.DL cs.SI physics.soc-ph
|
We propose a model to analyze citation growth and influences of fitness
(competitiveness) factors in an evolving citation network. Applying the
proposed method to modeling citations to papers and scholars in the InfoVis
2004 data, a benchmark collection about a 31-year history of information
visualization, leads to findings consistent with citation distributions in
general and observations of the domain in particular. Fitness variables based
on prior impacts and the time factor have significant influences on citation
outcomes. We find considerably large effect sizes from the fitness modeling,
which suggest inevitable bias in citation analysis due to these factors. While
raw citation scores offer little insight into the growth of InfoVis,
normalization of the scores by influences of time and prior fitness offers a
reasonable depiction of the field's development. The analysis demonstrates the
proposed model's ability to produce results consistent with observed data and
to support meaningful comparison of citation scores over time.
|
1205.0541
|
Percolation threshold determines the optimal population density for
public cooperation
|
physics.soc-ph cs.SI q-bio.PE
|
While worldwide census data provide statistical evidence that firmly link the
population density with several indicators of social welfare, the precise
mechanisms underlying these observations are largely unknown. Here we study the
impact of population density on the evolution of public cooperation in
structured populations and find that the optimal density is uniquely related to
the percolation threshold of the host graph irrespective of its topological
details. We explain our observations by showing that spatial reciprocity peaks
in the vicinity of the percolation threshold, when the emergence of a giant
cooperative cluster is hindered neither by vacancy nor by invading defectors,
thus discovering an intuitive yet universal law that links the population
density with social prosperity.
|
1205.0561
|
Multi-level agent-based modeling - A literature survey
|
cs.MA
|
During the last decade, multi-level agent-based modeling has received
significant
and dramatically increasing interest. In this article we present a
comprehensive and structured review of literature on the subject. We present
the main theoretical contributions and application domains of this concept,
with an emphasis on social, flow, biological and biomedical models.
|
1205.0586
|
Enhanced Algebraic Error Control for Random Linear Network Coding
|
cs.IT math.IT
|
Error control is significant to network coding, since when unchecked, errors
greatly deteriorate the throughput gains of network coding and seriously
undermine both reliability and security of data. Two families of codes,
subspace and rank metric codes, have been used to provide error control for
random linear network coding. In this paper, we enhance the error correction
capability of these two families of codes by using a novel two-tier decoding
scheme. While the decoding of subspace and rank metric codes serves as the
second-tier decoding, we propose to perform a first-tier decoding on the packet
level by taking advantage of Hamming distance properties of subspace and rank
metric codes. This packet-level decoding can also be implemented by
intermediate nodes to reduce error propagation. To support the first-tier
decoding, we also investigate Hamming distance properties of three important
families of subspace and rank metric codes, Gabidulin codes,
Kotter--Kschischang codes, and Mahdavifar--Vardy codes. Both the two-tier
decoding scheme and the Hamming distance properties of these codes are novel to
the best of our knowledge.
|
1205.0591
|
Multi-Faceted Ranking of News Articles using Post-Read Actions
|
cs.IR
|
Personalized article recommendation is important to improve user engagement
on news sites. Existing work quantifies engagement primarily through click
rates. We argue that quality of recommendations can be improved by
incorporating different types of "post-read" engagement signals like sharing,
commenting, printing and e-mailing article links. More specifically, we propose
a multi-faceted ranking problem for recommending news articles where each facet
corresponds to a ranking problem to maximize actions of a post-read action
type. The key technical challenge is to estimate the rates of post-read action
types while mitigating the impact of enormous data sparsity; we do so through
several variations of factor models. To exploit correlations among post-read
action types we also introduce a novel variant called locally augmented tensor
(LAT) model. Through data obtained from a major news site in the US, we show
that factor models significantly outperform a few baseline IR models and the
LAT model significantly outperforms several other variations of factor models.
Our findings show that it is possible to incorporate post-read signals that are
commonly available on online news sites to improve quality of recommendations.
|
1205.0596
|
Complex Networks from Simple Rewrite Systems
|
cs.SI nlin.AO physics.soc-ph
|
Complex networks are all around us, and they can be generated by simple
mechanisms. Understanding what kinds of networks can be produced by following
simple rules is therefore of great importance. We investigate this issue by
studying the dynamics of extremely simple systems where a `writer' moves
around a network, and modifies it in a way that depends upon the writer's
surroundings. Each vertex in the network has three edges incident upon it,
which are colored red, blue and green. This edge coloring is done to provide a
way for the writer to orient its movement. We explore the dynamics of a space
of 3888 of these `colored trinet automata' systems. We find a large variety of
behaviour, ranging from the very simple to the very complex. We also discover
simple rules that generate forms which are remarkably similar to a wide range
of natural objects. We study our systems using simulations (with appropriate
visualization techniques) and analyze selected rules mathematically. We arrive
at an empirical classification scheme which reveals a lot about the kinds of
dynamics and networks that can be generated by these systems.
|
1205.0610
|
Greedy Multiple Instance Learning via Codebook Learning and Nearest
Neighbor Voting
|
cs.LG
|
Multiple instance learning (MIL) has attracted great attention recently in
machine learning community. However, most MIL algorithms are very slow and
cannot be applied to large datasets. In this paper, we propose a greedy
strategy to speed up the multiple instance learning process. Our contribution
is twofold. First, we propose a density ratio model, and show that maximizing
a density ratio function is a lower bound of the DD model under certain
conditions. Secondly, we make use of a histogram ratio between positive bags
and negative bags to represent the density ratio function and find codebooks
separately for positive bags and negative bags by a greedy strategy. For
testing, we make use of a nearest neighbor strategy to classify new bags. We
test our method on both small benchmark datasets and the large TRECVID MED11
dataset. The experimental results show that our method yields comparable
accuracy to the current state of the art, while being up to at least one order
of magnitude faster.
|
1205.0618
|
Wireless Information and Power Transfer: Architecture Design and
Rate-Energy Tradeoff
|
cs.IT math.IT
|
Simultaneous information and power transfer over the wireless channels
potentially offers great convenience to mobile users. Yet practical receiver
designs impose technical constraints on its hardware realization, as practical
circuits for harvesting energy from radio signals are not yet able to decode
the carried information directly. To make theoretical progress, we propose a
general receiver operation, namely, dynamic power splitting (DPS), which splits
the received signal with adjustable power ratio for energy harvesting and
information decoding, separately. Three special cases of DPS, namely, time
switching (TS), static power splitting (SPS) and on-off power splitting (OPS)
are investigated. The TS and SPS schemes can be treated as special cases of
OPS. Moreover, we propose two types of practical receiver architectures,
namely, separated versus integrated information and energy receivers. The
integrated receiver integrates the front-end components of the separated
receiver, thus achieving a smaller form factor. The rate-energy tradeoffs for
the two architectures are characterized by a so-called rate-energy (R-E)
region. The optimal transmission strategy is derived to achieve different
rate-energy tradeoffs. With receiver circuit power consumption taken into
account, it is shown that the OPS scheme is optimal for both receivers. For the
ideal case when the receiver circuit does not consume power, the SPS scheme is
optimal for both receivers. In addition, we study the performance for the two
types of receivers under a realistic system setup that employs practical
modulation. Our results provide useful insights to the optimal practical
receiver design for simultaneous wireless information and power transfer
(SWIPT).
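For intuition, a sketch of the R-E tradeoff under static power splitting in a
textbook AWGN model (the numbers and the ideal, circuit-free model are
illustrative assumptions, not the paper's receiver architectures):

```python
import numpy as np

P_rx = 1e-6          # received signal power (W), assumed
sigma2 = 1e-9        # receiver noise power (W), assumed
eta = 0.6            # energy-harvesting efficiency, assumed

for rho in np.linspace(0.0, 1.0, 6):           # splitting ratio to decoder
    rate = np.log2(1.0 + rho * P_rx / sigma2)  # bits/s/Hz
    harvested = eta * (1.0 - rho) * P_rx       # W
    print(f"rho={rho:.1f}  R={rate:5.2f} bits/s/Hz  Q={harvested:.2e} W")
```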
|
1205.0622
|
No-Regret Learning in Extensive-Form Games with Imperfect Recall
|
cs.GT cs.AI
|
Counterfactual Regret Minimization (CFR) is an efficient no-regret learning
algorithm for decision problems modeled as extensive games. CFR's regret bounds
depend on the requirement of perfect recall: players always remember
information that was revealed to them and the order in which it was revealed.
In games without perfect recall, however, CFR's guarantees do not apply. In
this paper, we present the first regret bound for CFR when applied to a general
class of games with imperfect recall. In addition, we show that CFR applied to
any abstraction belonging to our general class results in a regret bound not
just for the abstract game, but for the full game as well. We verify our theory
and show how imperfect recall can be used to trade a small increase in regret
for a significant reduction in memory in three domains: die-roll poker, phantom
tic-tac-toe, and Bluff.
|
1205.0626
|
Advances in the merit factor problem for binary sequences
|
math.CO cs.IT math.IT
|
The identification of binary sequences with large merit factor (small
mean-squared aperiodic autocorrelation) is an old problem of complex analysis
and combinatorial optimization, with practical importance in digital
communications engineering and condensed matter physics. We establish the
asymptotic merit factor of several families of binary sequences and thereby
prove various conjectures, explain numerical evidence presented by other
authors, and bring together within a single framework results previously
appearing in scattered form. We exhibit, for the first time, families of
skew-symmetric sequences whose asymptotic merit factor is as large as the best
known value (an algebraic number greater than 6.34) for all binary sequences;
this is interesting in light of Golay's conjecture that the subclass of
skew-symmetric sequences has asymptotically optimal merit factor. Our methods
combine Fourier analysis, estimation of character sums, and estimation of the
number of lattice points in polyhedra.
|
1205.0627
|
Rule-weighted and terminal-weighted context-free grammars have identical
expressivity
|
cs.CL
|
Two formalisms, both based on context-free grammars, have recently been
proposed as a basis for a non-uniform random generation of combinatorial
objects. The former, introduced by Denise et al, associates weights with
letters, while the latter, recently explored by Weinberg et al in the context
of random generation, associates weights to transitions. In this short note, we
use a simple modification of the Greibach Normal Form transformation algorithm,
due to Blum and Koch, to show the equivalent expressivities, in terms of their
induced distributions, of these two formalisms.
|
1205.0651
|
Generative Maximum Entropy Learning for Multiclass Classification
|
cs.IT cs.LG math.IT
|
The maximum entropy approach to classification is very well studied in applied
statistics and machine learning, and almost all the methods that exist in the
literature are discriminative in nature. In this paper, we introduce a maximum
entropy classification method with feature selection for large dimensional data
such as text datasets that is generative in nature. To tackle the curse of
dimensionality of large data sets, we employ conditional independence
assumption (Naive Bayes) and we perform feature selection simultaneously, by
enforcing a `maximum discrimination' between estimated class conditional
densities. For two class problems, in the proposed method, we use Jeffreys
($J$) divergence to discriminate the class conditional densities. To extend our
method to the multi-class case, we propose a completely new approach by
considering a multi-distribution divergence: we replace Jeffreys divergence by
Jensen-Shannon ($JS$) divergence to discriminate conditional densities of
multiple classes. In order to reduce computational complexity, we employ a
modified Jensen-Shannon divergence ($JS_{GM}$), based on AM-GM inequality. We
show that the resulting divergence is a natural generalization of Jeffreys
divergence to a multiple distributions case. As far as the theoretical
justifications are concerned, we show that when one intends to select the best
features in a generative maximum entropy approach, maximum discrimination using
$J-$divergence emerges naturally in binary classification. Performance and
comparative study of the proposed algorithms have been demonstrated on large
dimensional text and gene expression datasets that show our methods scale up
very well with large dimensional datasets.
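For reference, the two standard divergences used above take only a few lines
over discrete distributions (a sketch; the AM-GM-based JS_GM modification is
not reproduced here):

```python
import numpy as np

def kl(p, q):                      # Kullback-Leibler divergence
    return float(np.sum(p * np.log(p / q)))

def jeffreys(p, q):                # J(p, q) = KL(p||q) + KL(q||p)
    return kl(p, q) + kl(q, p)

def js(dists):                     # equally weighted Jensen-Shannon
    m = np.mean(dists, axis=0)
    return float(np.mean([kl(p, m) for p in dists]))

p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.1, 0.6, 0.3])
p3 = np.array([0.2, 0.3, 0.5])
print(jeffreys(p1, p2), js([p1, p2, p3]))
```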
|
1205.0699
|
Time-Varying Space-Only Codes for Coded MIMO
|
cs.IT math.IT
|
Multiple antenna (MIMO) devices are widely used to increase reliability and
information bit rate. Optimal error rate performance (full diversity and large
coding gain), for unknown channel state information at the transmitter and for
maximal rate, can be achieved by approximately universal space-time codes, but
comes at a price of large detection complexity, infeasible for most practical
systems. We propose a new coded modulation paradigm: error-correction outer
code with space-only but time-varying precoder (as inner code). We refer to the
latter as Ergodic Mutual Information (EMI) code. The EMI code achieves the
maximal multiplexing gain, and full diversity is proved in terms of the outage
probability. Contrary to most of the literature, our work is not based on the
elegant but difficult classical algebraic MIMO theory. Instead, the relation
between MIMO and parallel channels is exploited. The theoretical proof of full
diversity is corroborated by means of numerical simulations for many MIMO
scenarios, in terms of outage probability and word error rate of LDPC coded
systems. Full diversity and full rate at low detection complexity come at
the price of a small coding-gain loss for outer coding rates close to one, but
this loss vanishes with decreasing coding rate.
|
1205.0703
|
Paraunitary Matrices
|
cs.IT math.IT
|
Design methods for paraunitary matrices from complete orthogonal sets of
idempotents and related matrix structures are presented. These include
techniques for designing non-separable multidimensional paraunitary matrices.
Properties of the structures are obtained and proofs given. Paraunitary
matrices play a central role in signal processing, in particular in the areas
of filterbanks and wavelets.
|
1205.0724
|
Using Data Warehouse to Support Building Strategy or Forecast Business
Trend
|
cs.DB
|
The data warehousing is becoming increasingly important in terms of strategic
decision making through their capacity to integrate heterogeneous data from
multiple information sources in a common storage space, for querying and
analysis. So it can evolve into a multi-tier structure where parts of the
organization take information from the main data warehouse into their own
systems. These may include analysis databases or dependent data marts. As the
data warehouse evolves and the organization gets better at capturing
information on all interactions with the customer, the data warehouse can
track customer interactions over the whole of the customer's lifetime.
|
1205.0732
|
Discretization of a matrix in the problem of quadratic functional binary
minimization
|
cs.NE
|
We investigate the possibility of discretizing the matrix elements in the
problem of minimizing a quadratic functional with a linear term, built on a
matrix in an N-dimensional configuration space with discrete coordinates. We
show that there exists an optimal procedure for replacing the matrix elements
by integer quantities with a limited number of gradations, such that the
efficiency of the minimization is not reduced. A parameter depending on the
matrix properties is found that allows one to estimate whether the described
procedure is applicable to a given type of matrix. The computational
complexity of the algorithm and its RAM requirements are reduced by a factor
of 16, and correct use of integer elements speeds up the minimization
algorithm by orders of magnitude.
|
1205.0768
|
Optimization of Survivability Analysis for Large-Scale Engineering
Networks
|
math.OC cs.SI physics.soc-ph
|
Engineering networks fall into the category of large-scale networks with
heterogeneous nodes such as sources and sinks. The survivability analysis of
such networks requires the analysis of the connectivity of the network
components for every possible combination of faults to determine a network
response to each combination of faults. From the computational complexity point
of view, the problem belongs to the class of exponential time problems at
least. Partially, the problem complexity can be reduced by mapping the initial
topology of a complex large-scale network with multiple sources and multiple
sinks onto a set of smaller sub-topologies with multiple sources and a single
sink connected to the network of sources by a single link. In this paper, the
mapping procedure is applied to the Florida power grid.
|
1205.0790
|
Automating embedded analysis capabilities and managing software
complexity in multiphysics simulation part I: template-based generic
programming
|
cs.MS cs.CE
|
An approach for incorporating embedded simulation and analysis capabilities
in complex simulation codes through template-based generic programming is
presented. This approach relies on templating and operator overloading within
the C++ language to transform a given calculation into one that can compute a
variety of additional quantities that are necessary for many state-of-the-art
simulation and analysis algorithms. An approach for incorporating these ideas
into complex simulation codes through general graph-based assembly is also
presented. These ideas have been implemented within a set of packages in the
Trilinos framework and are demonstrated on a simple problem from chemical
engineering.
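The core trick can be sketched outside C++ as well: write the residual once
against a generic scalar type, then evaluate it with an overloaded "dual
number" type to obtain derivatives alongside values. Below is a Python sketch
of the operator-overloading idea; the paper's C++ template machinery and
Trilinos packages are not reproduced:

```python
class Dual:
    """Value plus first derivative, propagated by operator overloading."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)   # product rule
    __rmul__ = __mul__

def residual(x):               # written once, generic in its scalar type
    return 3.0 * x * x + 2.0 * x + 1.0

r = residual(Dual(2.0, 1.0))   # seed dx/dx = 1
print(r.val, r.der)            # 17.0 and d/dx = 6x + 2 = 14.0
```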
|
1205.0792
|
Exact Wavelets on the Ball
|
cs.IT astro-ph.IM math.IT
|
We develop an exact wavelet transform on the three-dimensional ball (i.e. on
the solid sphere), which we name the flaglet transform. For this purpose we
first construct an exact transform on the radial half-line using damped
Laguerre polynomials and develop a corresponding quadrature rule. Combined with
the spherical harmonic transform, this approach leads to a sampling theorem on
the ball and a novel three-dimensional decomposition which we call the
Fourier-Laguerre transform. We relate this new transform to the well-known
Fourier-Bessel decomposition and show that band-limitedness in the
Fourier-Laguerre basis is a sufficient condition to compute the Fourier-Bessel
decomposition exactly. We then construct the flaglet transform on the ball
through a harmonic tiling, which is exact thanks to the exactness of the
Fourier-Laguerre transform (from which the name flaglets is coined). The
corresponding wavelet kernels are well localised in real and Fourier-Laguerre
spaces and their angular aperture is invariant under radial translation. We
introduce a multiresolution algorithm to perform the flaglet transform rapidly,
while capturing all information at each wavelet scale in the minimal number of
samples on the ball. Our implementation of these new tools achieves
floating-point precision and is made publicly available. We perform numerical
experiments demonstrating the speed and accuracy of these libraries and
illustrate their capabilities on a simple denoising example.
|
1205.0831
|
African Trypanosomiasis Detection using Dempster-Shafer Theory
|
cs.AI stat.CO
|
The World Health Organization reports that African Trypanosomiasis affects
mostly poor populations living in remote rural areas of Africa, and that it
can be fatal if not properly treated. This paper presents Dempster-Shafer
Theory for the detection of African trypanosomiasis. Sustainable elimination
of African trypanosomiasis as a public-health problem is feasible and requires
continuous efforts and innovative approaches. In this research, we implement
Dempster-Shafer theory for detecting African trypanosomiasis and displaying
the result of the detection process. We describe eleven major symptoms, which
include fever, red urine, skin rash, paralysis, headache, bleeding around the
bite, joint pain, swollen lymph nodes, sleep disturbances, meningitis and
arthritis. To quantify the degree of belief, our approach uses
Dempster-Shafer theory to combine beliefs under conditions of uncertainty and
ignorance, and allows quantitative measurement of the belief and plausibility
in our identification result.
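A sketch of the underlying combination step, Dempster's rule over a small
frame of discernment (the hypotheses and mass values are illustrative
assumptions, not the paper's data):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule for two mass functions given as {frozenset: mass}."""
    raw, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y                 # mass assigned to conflict
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

T = frozenset({"trypanosomiasis"})
U = T | frozenset({"other"})                  # full frame (ignorance)
m_fever = {T: 0.4, U: 0.6}                    # evidence from "fever"
m_sleep = {T: 0.7, U: 0.3}                    # evidence from "sleep disturbance"
print(combine(m_fever, m_sleep))              # belief concentrates on T
```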
|
1205.0835
|
Parameter Tracking via Optimal Distributed Beamforming in an Analog
Sensor Network
|
cs.IT math.IT
|
We consider the problem of optimal distributed beamforming in a sensor
network where the sensors observe a dynamic parameter in noise and coherently
amplify and forward their observations to a fusion center (FC). The FC uses a
Kalman filter to track the parameter using the observations from the sensors,
and we show how to find the optimal gain and phase of the sensor transmissions
under both global and individual power constraints in order to minimize the
mean squared error (MSE) of the parameter estimate. For the case of a global
power constraint, a closed-form solution can be obtained. A numerical
optimization is required for individual power constraints, but the problem can
be relaxed to a semidefinite programming problem (SDP), and we show how the
optimal solution can be constructed from the solution to the SDP. Simulation
results show that compared with equal power transmission, the use of optimized
power control can significantly reduce the MSE.
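For concreteness, a sketch of the FC-side scalar Kalman filter (the dynamics,
the effective post-beamforming channel, and the noise levels are illustrative
assumptions; the paper's optimized sensor gains and phases are not
reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
a, q = 0.95, 0.1        # parameter dynamics: theta' = a*theta + w, w~N(0,q)
h, r = 1.5, 0.5         # effective channel after beamforming, noise variance

theta, est, var = 0.0, 0.0, 1.0
for t in range(200):
    theta = a * theta + rng.normal(0.0, q ** 0.5)   # true parameter
    y = h * theta + rng.normal(0.0, r ** 0.5)       # signal at the FC
    var = a * a * var + q                           # predict
    gain = var * h / (h * h * var + r)              # Kalman gain
    est = a * est + gain * (y - h * a * est)        # correct
    var = (1.0 - gain * h) * var
print(f"steady-state filter variance (MSE): {var:.4f}")
```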
|
1205.0837
|
Indexing Reverse Top-k Queries
|
cs.DB cs.CG
|
We consider the recently introduced monochromatic reverse top-k queries which
ask for, given a new tuple q and a dataset D, all possible top-k queries on D
union {q} for which q is in the result. Towards this problem, we focus on
designing indexes in two dimensions for repeated (or batch) querying, a novel
but practical consideration. We present the insight that by representing the
dataset as an arrangement of lines, a critical k-polygon can be identified and
used exclusively to respond to reverse top-k queries. We construct an index
based on this observation which has guaranteed worst-case query cost that is
logarithmic in the size of the k-polygon.
We implement our work and compare it to related approaches, demonstrating
that our index is fast in practice. Furthermore, we demonstrate through our
experiments that a k-polygon is comprised of a small proportion of the original
data, so our index structure consumes little disk space.
|
1205.0858
|
Controlled Sensing for Multihypothesis Testing
|
cs.IT math.IT
|
The problem of multiple hypothesis testing with observation control is
considered in both fixed sample size and sequential settings. In the fixed
sample size setting, for binary hypothesis testing, the optimal exponent for
the maximal error probability corresponds to the maximum Chernoff information
over the choice of controls, and a pure stationary open-loop control policy is
asymptotically optimal within the larger class of all causal control policies.
For multihypothesis testing in the fixed sample size setting, lower and upper
bounds on the optimal error exponent are derived. It is also shown through an
example with three hypotheses that the optimal causal control policy can be
strictly better than the optimal open-loop control policy. In the sequential
setting, a test, based on earlier work by Chernoff for binary hypothesis
testing, is shown to be first-order asymptotically optimal for multihypothesis
testing in a strong sense, using the notion of decision-making risk in place
of the overall probability of error. Another test is also designed to meet
hard risk constraints while retaining asymptotic optimality. The role of past
information and randomization in designing optimal control policies is
discussed.
|
1205.0908
|
Weighted Patterns as a Tool for Improving the Hopfield Model
|
cond-mat.dis-nn cs.LG cs.NE
|
We generalize the standard Hopfield model to the case when a weight is
assigned to each input pattern. The weight can be interpreted as the frequency
of the pattern occurrence at the input of the network. In the framework of the
statistical physics approach we obtain the saddle-point equation allowing us to
examine the memory of the network. In the case of unequal weights our model
does not lead to the catastrophic destruction of the memory due to its
overfilling (which is typical of the standard Hopfield model). The real memory
consists only of the patterns with weights exceeding a critical value that is
determined by the weight distribution. We obtain an algorithm allowing us to
find this critical value for an arbitrary distribution of the weights, and
analyze in detail some particular weights distributions. It is shown that the
memory decreases as compared to the case of the standard Hopfield model.
However, in our model the network can learn online without the catastrophic
destruction of the memory.
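A sketch of the weighted Hebbian rule described above (the network size and
the exponential weight distribution are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 30
xi = rng.choice([-1.0, 1.0], size=(M, N))     # random binary patterns
w = rng.exponential(1.0, size=M)              # unequal pattern weights

# weighted Hebbian couplings: W = (1/N) sum_mu w_mu xi_mu xi_mu^T
W = (w[:, None, None] * xi[:, :, None] * xi[:, None, :]).sum(axis=0) / N
np.fill_diagonal(W, 0.0)

# high-weight patterns should survive as fixed points; low-weight ones may not
for mu in (int(np.argmax(w)), int(np.argmin(w))):
    x = xi[mu].copy()
    for _ in range(20):
        x = np.where(W @ x >= 0.0, 1.0, -1.0)   # synchronous recall dynamics
    print(f"weight={w[mu]:.2f}  recall overlap={float(x @ xi[mu]) / N:+.2f}")
```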
|
1205.0910
|
On projections of arbitrary lattices
|
cs.CG cs.IT math.IT
|
In this paper we prove that given any two point lattices $\Lambda_1 \subset
\mathbb{R}^n$ and $ \Lambda_2 \subset \nobreak \mathbb{R}^{n-k}$, there is a
set of $k$ vectors $\bm{v}_i \in \Lambda_1$ such that $\Lambda_2$ is, up to
similarity, arbitrarily close to the projection of $\Lambda_1$ onto the
orthogonal complement of the subspace spanned by $\bm{v}_1, \ldots, \bm{v}_k$.
This result extends the main theorem of \cite{Sloane2} and has applications in
communication theory.
|
1205.0917
|
VIQI: A New Approach for Visual Interpretation of Deep Web Query
Interfaces
|
cs.IR cs.AI
|
Deep Web databases contain more than 90% of pertinent information of the Web.
Despite their importance, users do not profit from this treasure trove. Many
deep web services offer competitive services in terms of price, quality of
service, and facilities. As the number of services grows rapidly, users have
difficulty querying many web services at the same time. In this paper, we
envision a system where users can formulate one query using a single query
interface, and the system then translates the query to the rest of the query
interfaces. However, since interfaces are created by designers in order to be
interpreted visually by users, machines cannot interpret queries from a given
interface. We propose a new approach which emulates users' capacity for
interpretation and extracts queries from deep web query interfaces. Our
approach has shown good performance on two standard datasets.
|
1205.0919
|
ViQIE: A New Approach for Visual Query Interpretation and Extraction
|
cs.IR
|
Web services are accessed via query interfaces which hide databases
containing thousands of relevant items. From the user's side, the remote
database is a black box which accepts a query and returns results; there is
no way to access the database schema, which reflects the data and query
meanings. Hence, web services are very autonomous. Users view this autonomy
as a major drawback because they often need to combine the query capabilities
of many web services at the same time. In this work, we present a new
approach which allows users to benefit from the query capabilities of many
web services while respecting the autonomy of each service. This solution is
a new contribution to the Information Retrieval research area and has shown
good performance on two standard datasets.
|
1205.0927
|
An Energy-Efficient MIMO Algorithm with Receive Power Constraint
|
cs.IT cs.NI math.IT
|
We consider the energy-efficiency of Multiple-Input Multiple-Output (MIMO)
systems with constrained received power rather than constrained transmit power.
An Energy-Efficient Water-Filling (EEWF) algorithm that maximizes the ratio of
the transmission rate to the total transmit power has been derived. The EEWF
power allocation policy establishes a trade-off between the transmission rate
and the total transmit power under the total receive power constraint. The
static and the uncorrelated fast fading Rayleigh channels have been considered,
where the maximization is performed on the instantaneous realization of the
channel assuming perfect information at both the transmitter and the receiver
with an equal number of antennas. We show, based on Monte Carlo simulations,
that
the energy-efficiency provided by the EEWF algorithm can be more than an order
of magnitude greater than the energy-efficiency corresponding to capacity
achieving Water-Filling (WF) algorithm. We also show that the energy-efficiency
increases with both the number of antennas and the signal-to-noise ratio. The
corresponding transmission rate also increases but at a slower rate than the
Shannon capacity, while the corresponding total transmit power decreases with
the number of antennas.
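For comparison, the capacity-achieving WF baseline referred to above takes
only a few lines (a sketch; the EEWF allocation maximizes rate per watt and is
not reproduced here):

```python
import numpy as np

def waterfill(gains, p_total):
    """Allocate p_i = max(mu - 1/g_i, 0) so that sum(p_i) = p_total."""
    lo, hi = 0.0, p_total + 1.0 / gains.min()
    for _ in range(100):                     # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return p

g = np.array([2.0, 1.0, 0.25, 0.1])          # eigen-channel SNR gains
p = waterfill(g, p_total=4.0)
print(p, np.log2(1.0 + g * p).sum())         # powers and achieved rate
```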
|
1205.0960
|
Parallel clustering with CFinder
|
physics.soc-ph cs.DC cs.DS cs.SI physics.data-an
|
The amount of available data about complex systems is increasing every year,
measurements of larger and larger systems are collected and recorded. A natural
representation of such data is given by networks, whose size follows the size
of the original system. The current trend of multiple cores in computing
infrastructures calls for a parallel reimplementation of earlier methods. Here
we present the grid version of CFinder, which can locate overlapping
communities in directed, weighted or undirected networks based on the clique
percolation method (CPM). We show that the computation of the communities can
be distributed among several CPUs or computers. Although switching to the
parallel version does not necessarily lead to a gain in computing time, it
definitely makes the community structure of extremely large networks
accessible.
|
1205.0968
|
Information Complexity versus Corruption and Applications to
Orthogonality and Gap-Hamming
|
cs.CC cs.IT math.IT math.PR
|
Three decades of research in communication complexity have led to the
invention of a number of techniques to lower bound randomized communication
complexity. The majority of these techniques involve properties of large
submatrices (rectangles) of the truth-table matrix defining a communication
problem. The only technique that does not quite fit is information complexity,
which has been investigated over the last decade. Here, we connect information
complexity to one of the most powerful "rectangular" techniques: the
recently-introduced smooth corruption (or "smooth rectangle") bound. We show
that the former subsumes the latter under rectangular input distributions. We
conjecture that this subsumption holds more generally, under arbitrary
distributions, which would resolve the long-standing direct sum question for
randomized communication. As an application, we obtain an optimal $\Omega(n)$
lower bound on the information complexity---under the {\em uniform
distribution}---of the so-called orthogonality problem (ORT), which is in turn
closely related to the much-studied Gap-Hamming-Distance (GHD). The proof of
this bound is along the lines of recent communication lower bounds for GHD, but
we encounter a surprising amount of additional technical detail.
|