| id | title | categories | abstract |
|---|---|---|---|
1401.3890 | Analyzing Search Topology Without Running Any Search: On the Connection
Between Causal Graphs and h+ | cs.AI | The ignoring delete lists relaxation is of paramount importance for both
satisficing and optimal planning. In earlier work, it was observed that the
optimal relaxation heuristic h+ has amazing qualities in many classical
planning benchmarks, in particular pertaining to the complete absence of local
minima. The proofs of this are hand-made, raising the question of whether such
proofs can be carried out automatically by domain analysis techniques. In contrast to
earlier disappointing results -- the analysis method has exponential runtime
and succeeds only in two extremely simple benchmark domains -- we herein answer
this question in the affirmative. We establish connections between causal graph
structure and h+ topology. This results in low-order polynomial time analysis
methods, implemented in a tool we call TorchLight. Of the 12 domains where the
absence of local minima has been proved, TorchLight gives strong success
guarantees in 8 domains. Empirically, its analysis exhibits strong performance
in a further 2 of these domains, plus in 4 more domains where local minima may
exist but are rare. In this way, TorchLight can distinguish easy domains from
hard ones. By summarizing structural reasons for analysis failure, TorchLight
also provides diagnostic output indicating domain aspects that may cause local
minima.
|
1401.3892 | Sequential Diagnosis by Abstraction | cs.AI | When a system behaves abnormally, sequential diagnosis takes a sequence of
measurements of the system until the faults causing the abnormality are
identified, and the goal is to reduce the diagnostic cost, defined here as the
number of measurements. To propose measurement points, previous work employs a
heuristic based on reducing the entropy over a computed set of diagnoses. This
approach generally has good performance in terms of diagnostic cost, but can
fail to diagnose large systems when the set of diagnoses is too large. Focusing
on a smaller set of probable diagnoses scales the approach but generally leads
to increased average diagnostic costs. In this paper, we propose a new
diagnostic framework employing four new techniques, which scales to much larger
systems with good performance in terms of diagnostic cost. First, we propose a
new heuristic for measurement point selection that can be computed efficiently,
without requiring the set of diagnoses, once the system is modeled as a
Bayesian network and compiled into a logical form known as d-DNNF. Second, we
extend hierarchical diagnosis, a technique based on system abstraction from our
previous work, to handle probabilities so that it can be applied to sequential
diagnosis to allow larger systems to be diagnosed. Third, for the largest
systems where even hierarchical diagnosis fails, we propose a novel method that
converts the system into one that has a smaller abstraction and whose diagnoses
form a superset of those of the original system; the new system can then be
diagnosed and the result mapped back to the original system. Finally, we
propose a novel cost estimation function which can be used to choose an
abstraction of the system that is more likely to provide optimal average cost.
Experiments with ISCAS-85 benchmark circuits indicate that our approach scales
to all circuits in the suite except one that has a flat structure not
susceptible to useful abstraction.
|
1401.3893 | Most Relevant Explanation in Bayesian Networks | cs.AI | A major inference task in Bayesian networks is explaining why some variables
are observed in their particular states using a set of target variables.
Existing methods for solving this problem often generate explanations that are
either too simple (underspecified) or too complex (overspecified). In this
paper, we introduce a method called Most Relevant Explanation (MRE) which finds
a partial instantiation of the target variables that maximizes the generalized
Bayes factor (GBF) as the best explanation for the given evidence. Our study
shows that GBF has several theoretical properties that enable MRE to
automatically identify the most relevant target variables in forming its
explanation. In particular, conditional Bayes factor (CBF), defined as the GBF
of a new explanation conditioned on an existing explanation, provides a soft
measure on the degree of relevance of the variables in the new explanation in
explaining the evidence given the existing explanation. As a result, MRE is
able to automatically prune less relevant variables from its explanation. We
also show that CBF is able to capture well the explaining-away phenomenon that
is often represented in Bayesian networks. Moreover, we define two dominance
relations between the candidate solutions and use the relations to generalize
MRE to find a set of top explanations that is both diverse and representative.
Case studies on several benchmark diagnostic Bayesian networks show that MRE is
often able to find explanatory hypotheses that are not only precise but also
concise.
|
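The generalized Bayes factor that MRE maximizes can be computed by brute force on a toy model. The sketch below is illustrative only: the two-target network, its probabilities, and the variable names are invented for the example, and the actual method operates on full Bayesian networks rather than an enumerated joint.

```python
from itertools import product

# Toy joint P(T1, T2, E) over two binary targets and binary evidence.
# All numbers are invented for illustration.
def joint(t1, t2, e):
    p_t1 = 0.2 if t1 else 0.8
    p_t2 = 0.3 if t2 else 0.7
    p_e1 = {(0, 0): 0.05, (0, 1): 0.6, (1, 0): 0.7, (1, 1): 0.9}[(t1, t2)]
    return p_t1 * p_t2 * (p_e1 if e else 1.0 - p_e1)

def prob(assign, e):
    """P(e, assign), marginalizing the unassigned target variables."""
    free = [v for v in ("t1", "t2") if v not in assign]
    return sum(joint(**dict(assign, e=e, **dict(zip(free, vals))))
               for vals in product([0, 1], repeat=len(free)))

def gbf(assign, e=1):
    """GBF(x; e) = P(e | x) / P(e | complement of x)."""
    p_x = prob(assign, 1) + prob(assign, 0)
    p_e_and_x = prob(assign, e)
    p_e = prob({}, e)
    return (p_e_and_x / p_x) / ((p_e - p_e_and_x) / (1.0 - p_x))

# Candidate explanations: all non-empty partial instantiations of the targets.
candidates = ([{"t1": a} for a in (0, 1)] + [{"t2": b} for b in (0, 1)] +
              [{"t1": a, "t2": b} for a in (0, 1) for b in (0, 1)])
best = max(candidates, key=gbf)
```

With these numbers the winner is the partial instantiation `{"t2": 1}`, beating the full instantiation of both targets, which is exactly the pruning behavior the abstract describes.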
1401.3894 | Efficient Multi-Start Strategies for Local Search Algorithms | cs.LG cs.AI stat.ML | Local search algorithms applied to optimization problems often suffer from
getting trapped in a local optimum. The common solution for this deficiency is
to restart the algorithm when no progress is observed. Alternatively, one can
start multiple instances of a local search algorithm, and allocate
computational resources (in particular, processing time) to the instances
depending on their behavior. Hence, a multi-start strategy has to decide
(dynamically) when to allocate additional resources to a particular instance
and when to start new instances. In this paper we propose multi-start
strategies motivated by work on multi-armed bandit problems and Lipschitz
optimization with an unknown constant. The strategies continuously estimate the
potential performance of each algorithm instance by supposing a convergence
rate of the local search algorithm up to an unknown constant, and in every
phase allocate resources to those instances that could converge to the optimum
for a particular range of the constant. Asymptotic bounds are given on the
performance of the strategies. In particular, we prove that at most a quadratic
increase in the number of times the target function is evaluated is needed to
achieve the performance of a local search algorithm started from the attraction
region of the optimum. Experiments are provided using SPSA (Simultaneous
Perturbation Stochastic Approximation) and k-means as local search algorithms,
and the results indicate that the proposed strategies work well in practice,
and, in all cases studied, need only logarithmically more evaluations of the
target function as opposed to the theoretically suggested quadratic increase.
|
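The allocation idea can be illustrated with a simplified, bandit-flavored sketch: each local-search instance is scored by its current value plus a decaying optimism bonus, and the next search step goes to the highest-scoring instance. The two-peak objective, the fixed start points, and the `c / steps` bonus schedule are all invented for the example; the paper's actual strategies reason over a range of unknown convergence-rate constants.

```python
def f(x):
    # Toy objective: a local optimum at x=20 (value 5), the global one at x=70 (value 10).
    return max(5 - 0.2 * abs(x - 20), 10 - 0.2 * abs(x - 70), 0.0)

def climb_step(x):
    """One step of discrete hill climbing: move to the best neighbour (or stay)."""
    return max((x - 1, x, x + 1), key=f)

def multi_start(starts, budget=400, c=5.0):
    xs = list(starts)
    steps = [1] * len(xs)
    for _ in range(budget):
        # Optimistic index: current value plus a bonus that decays with usage.
        i = max(range(len(xs)), key=lambda k: f(xs[k]) + c / steps[k])
        xs[i] = climb_step(xs[i])
        steps[i] += 1
    return max(f(x) for x in xs)
```

With starts spread over both basins, the instance nearest the global peak quickly dominates the index and absorbs most of the budget, so the global value 10 is found without exhausting effort on instances stuck at the local optimum.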
1401.3895 | On the Intertranslatability of Argumentation Semantics | cs.AI | Translations between different nonmonotonic formalisms have always been an
important topic in the field, in particular to understand the
knowledge-representation capabilities those formalisms offer. We provide such
an investigation in terms of different semantics proposed for abstract
argumentation frameworks, a nonmonotonic yet simple formalism which has
received increasing interest within the last decade. Although the properties of these
different semantics are nowadays well understood, there are no explicit results
about intertranslatability. We provide such translations wrt. different
properties and also give a few novel complexity results which underlie some
negative results.
|
1401.3896 | The Opposite of Smoothing: A Language Model Approach to Ranking
Query-Specific Document Clusters | cs.IR | Exploiting information induced from (query-specific) clustering of
top-retrieved documents has long been proposed as a means for improving
precision at the very top ranks of the returned results. We present a novel
language model approach to ranking query-specific clusters by the presumed
percentage of relevant documents that they contain. While most previous cluster
ranking approaches focus on the cluster as a whole, our model utilizes also
information induced from documents associated with the cluster. Our model
substantially outperforms previous approaches for identifying clusters
containing a high relevant-document percentage. Furthermore, using the model to
produce document ranking yields precision-at-top-ranks performance that is
consistently better than that of the initial ranking upon which clustering is
performed. The performance also favorably compares with that of a
state-of-the-art pseudo-feedback-based retrieval method.
|
1401.3897 | Interpolable Formulas in Equilibrium Logic and Answer Set Programming | cs.LO cs.AI | Interpolation is an important property of classical and many non-classical
logics that has been shown to have interesting applications in computer science
and AI. Here we study the Interpolation Property for the non-monotonic
system of equilibrium logic, establishing weaker or stronger forms of
interpolation depending on the precise interpretation of the inference
relation. These results also yield a form of interpolation for ground logic
programs under the answer set semantics. For disjunctive logic programs we
also study the property of uniform interpolation that is closely related to the
concept of variable forgetting. The first-order version of equilibrium logic
has analogous Interpolation properties whenever the collection of equilibrium
models is (first-order) definable. Since this is the case for so-called safe
programs and theories, it applies to the usual situations that arise in
practical answer set programming.
|
1401.3898 | First-Order Stable Model Semantics and First-Order Loop Formulas | cs.LO cs.AI | Lin and Zhao's theorem on loop formulas states that in the propositional case
the stable model semantics of a logic program can be completely characterized
by propositional loop formulas, but this result does not fully carry over to
the first-order case. We investigate the precise relationship between the
first-order stable model semantics and first-order loop formulas, and study
conditions under which the former can be represented by the latter. In order to
facilitate the comparison, we extend the definition of a first-order loop
formula which was limited to a nondisjunctive program, to a disjunctive program
and to an arbitrary first-order theory. Based on the studied relationship we
extend the syntax of a logic program with explicit quantifiers, which allows us
to do reasoning involving non-Herbrand stable models using first-order
reasoners. Such programs can be viewed as a special class of first-order
theories under the stable model semantics, which yields more succinct loop
formulas than the general language due to their restricted syntax.
|
1401.3899 | Representing and Reasoning with Qualitative Preferences for
Compositional Systems | cs.AI | Many applications, e.g., Web service composition, complex system design, team
formation, etc., rely on methods for identifying collections of objects or
entities satisfying some functional requirement. Among the collections that
satisfy the functional requirement, it is often necessary to identify one or
more collections that are optimal with respect to user preferences over a set
of attributes that describe the non-functional properties of the collection.
We develop a formalism that lets users express the relative importance among
attributes and qualitative preferences over the valuations of each attribute.
We define a dominance relation that allows us to compare collections of objects
in terms of preferences over attributes of the objects that make up the
collection. We establish some key properties of the dominance relation. In
particular, we show that the dominance relation is a strict partial order when
the intra-attribute preference relations are strict partial orders and the
relative importance preference relation is an interval order.
We provide algorithms that use this dominance relation to identify the set of
most preferred collections. We show that under certain conditions, the
algorithms are guaranteed to return only (sound), all (complete), or at least
one (weakly complete) of the most preferred collections. We present results of
simulation experiments comparing the proposed algorithms with respect to (a)
the quality of solutions (number of most preferred solutions) produced by the
algorithms, and (b) their performance and efficiency. We also explore some
interesting conjectures suggested by the results of our experiments that relate
the properties of the user preferences, the dominance relation, and the
algorithms.
|
1401.3900 | Decidability and Undecidability Results for Propositional Schemata | cs.LO cs.AI | We define a logic of propositional formula schemata by adding to the syntax of
propositional logic indexed propositions and iterated connectives ranging over
intervals parameterized by arithmetic variables. The satisfiability problem is
shown to be undecidable for this new logic, but we introduce a very general
class of schemata, called bound-linear, for which this problem becomes
decidable. This result is obtained by reduction to a particular class of
schemata called regular, for which we provide a sound and complete terminating
proof procedure. This schemata calculus allows one to capture proof patterns
corresponding to a large class of problems specified in propositional logic. We
also show that the satisfiability problem becomes undecidable again for slight
extensions of this class, thus demonstrating that bound-linear schemata
represent a good compromise between expressivity and decidability.
|
1401.3901 | Defeasible Inclusions in Low-Complexity DLs | cs.LO cs.AI | Some of the applications of OWL and RDF (e.g. biomedical knowledge
representation and semantic policy formulation) call for extensions of these
languages with nonmonotonic constructs such as inheritance with overriding.
Nonmonotonic description logics have been studied for many years; however, no
such practical knowledge representation languages exist, due to a combination
of semantic difficulties and high computational complexity. Independently,
low-complexity description logics such as DL-lite and EL have been introduced
and incorporated in the OWL standard. Therefore, it is interesting to see
whether the syntactic restrictions characterizing DL-lite and EL bring
computational benefits to their nonmonotonic versions, too. In this paper we
extensively investigate the computational complexity of Circumscription when
knowledge bases are formulated in DL-lite_R, EL, and fragments thereof. We
identify fragments whose complexity ranges from P to the second level of the
polynomial hierarchy, as well as fragments whose complexity rises to PSPACE
and beyond.
|
1401.3902 | On the Link between Partial Meet, Kernel, and Infra Contraction and its
Application to Horn Logic | cs.AI cs.LO | Standard belief change assumes an underlying logic containing full classical
propositional logic. However, there are good reasons for considering belief
change in less expressive logics as well. In this paper we build on recent
investigations by Delgrande on contraction for Horn logic. We show that the
standard basic form of contraction, partial meet, is too strong in the Horn
case. This result stands in contrast to Delgrande's conjecture that orderly
maxichoice is the appropriate form of contraction for Horn logic. We then
define a more appropriate notion of basic contraction for the Horn case,
influenced by the convexity property holding for full propositional logic and
which we refer to as infra contraction. The main contribution of this work is a
result which shows that the construction method for Horn contraction for belief
sets based on our infra remainder sets corresponds exactly to Hansson's
classical kernel contraction for belief sets, when restricted to Horn logic.
This result is obtained via a detour through contraction for belief bases. We
prove that kernel contraction for belief bases produces precisely the same
results as the belief base version of infra contraction. The use of belief
bases to obtain this result provides evidence for the conjecture that Horn
belief change is best viewed as a hybrid version of belief set change and
belief base change. One of the consequences of the link with base contraction
is the provision of a representation result for Horn contraction for belief
sets in which a version of the Core-retainment postulate features.
|
1401.3903 | Multi-Robot Adversarial Patrolling: Facing a Full-Knowledge Opponent | cs.MA cs.RO | The problem of adversarial multi-robot patrol has gained interest in recent
years, mainly due to its immediate relevance to various security applications.
In this problem, robots are required to repeatedly visit a target area in a way
that maximizes their chances of detecting an adversary trying to penetrate
through the patrol path. When facing a strong adversary that knows the patrol
strategy of the robots, if the robots use a deterministic patrol algorithm,
then in many cases it is easy for the adversary to penetrate undetected (in
fact, in some of those cases the adversary can guarantee penetration).
Therefore this paper presents a non-deterministic patrol framework for the
robots. Since the strong adversary will take advantage of its knowledge and
try to penetrate through the patrol's weakest spot, an optimal algorithm is one
that maximizes the chances of detection at that point. We
therefore present a polynomial-time algorithm for determining an optimal patrol
under the Markovian strategy assumption for the robots, such that the
probability of detecting the adversary in the patrol's weakest spot is
maximized. We build upon this framework and describe an optimal patrol strategy
for several robotic models based on their movement abilities (directed or
undirected) and sensing abilities (perfect or imperfect), and in different
environment models - either patrol around a perimeter (closed polygon) or an
open fence (open polyline).
|
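As a minimal illustration of the "weakest spot" notion only (not the paper's optimization algorithm), one can take a fixed Markovian patrol over perimeter segments, compute its long-run visit frequencies, and read off the least-visited segment. The transition matrix below is invented for the example.

```python
def stationary(P, iters=2000):
    """Long-run visit frequencies of a Markov chain, via power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A (non-optimized) Markovian patrol over 4 perimeter segments:
# mostly move clockwise, with segment-dependent dwell probabilities.
P = [
    [0.4, 0.6, 0.0, 0.0],
    [0.0, 0.2, 0.8, 0.0],
    [0.0, 0.0, 0.1, 0.9],
    [0.7, 0.0, 0.0, 0.3],
]
pi = stationary(P)
weakest = min(range(4), key=lambda s: pi[s])   # least-visited segment
```

An optimal Markovian strategy, in the paper's sense, would choose the transition probabilities so as to maximize the detection probability at precisely this worst segment.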
1401.3905 | MAPP: a Scalable Multi-Agent Path Planning Algorithm with Tractability
and Completeness Guarantees | cs.AI | Multi-agent path planning is a challenging problem with numerous real-life
applications. Running a centralized search such as A* in the combined state
space of all units is complete and cost-optimal, but scales poorly, as the
state space size is exponential in the number of mobile units. Traditional
decentralized approaches, such as FAR and WHCA*, are faster and more scalable,
being based on problem decomposition. However, such methods are incomplete and
provide no guarantees with respect to the running time or the solution quality.
They are not necessarily able to tell in a reasonable time whether they would
succeed in finding a solution to a given instance. We introduce MAPP, a
tractable algorithm for multi-agent path planning on undirected graphs. We
present a basic version and several extensions. They have low-polynomial
worst-case upper bounds for the running time, the memory requirements, and the
length of solutions. Even though all algorithmic versions are incomplete in the
general case, each provides formal guarantees on problems it can solve. For
each version, we discuss the algorithm's completeness with respect to clearly
defined subclasses of instances. Experiments were run on realistic game grid
maps. MAPP solved 99.86% of all mobile units, 18--22% better than the
percentages solved by FAR and WHCA*. MAPP marked 98.82% of all units as provably
solvable during the first stage of plan computation. Parts of MAPP's computation
can be re-used across instances on the same map. Speed-wise, MAPP is
competitive or significantly faster than WHCA*, depending on whether MAPP
performs all computations from scratch. When data that MAPP can re-use are
preprocessed offline and readily available, MAPP is slower than the very fast
FAR algorithm by a factor of 2.18 on average. MAPP's solutions are on average
20% longer than FAR's solutions and 7--31% longer than WHCA*'s solutions.
|
1401.3906 | Making Decisions Using Sets of Probabilities: Updating, Time
Consistency, and Calibration | cs.AI cs.GT | We consider how an agent should update her beliefs when her beliefs are
represented by a set P of probability distributions, given that the agent makes
decisions using the minimax criterion, perhaps the best-studied and most
commonly-used criterion in the literature. We adopt a game-theoretic framework,
where the agent plays against a bookie, who chooses some distribution from P.
We consider two reasonable games that differ in what the bookie knows when he
makes his choice. Anomalies that have been observed before, like time
inconsistency, can be understood as arising because different games are being
played, against bookies with different information. We characterize the
important special cases in which the optimal decision rules according to the
minimax criterion amount to either conditioning or simply ignoring the
information. Finally, we consider the relationship between updating and
calibration when uncertainty is described by sets of probabilities. Our results
emphasize the key role of the rectangularity condition of Epstein and
Schneider.
|
1401.3907 | Policy Invariance under Reward Transformations for General-Sum
Stochastic Games | cs.GT cs.LG | We extend the potential-based shaping method from Markov decision processes
to multi-player general-sum stochastic games. We prove that the Nash equilibria
in a stochastic game remain unchanged after potential-based shaping is applied
to the environment. The property of policy invariance provides a possible way
of speeding convergence when learning to play a stochastic game.
|
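The shaping transformation itself is easy to state: rewards are replaced by R'(s, a, s') = R(s, a, s') + γΦ(s') − Φ(s) for an arbitrary potential Φ. As a sanity check in the simpler single-agent (MDP) setting, with a toy model invented for the example, the optimal policy is unchanged:

```python
GAMMA = 0.9
# Deterministic toy MDP: T[s][a] = (next_state, reward); state 2 is absorbing.
T = {
    0: {"a": (1, 0.0), "b": (2, 1.0)},
    1: {"a": (2, 5.0), "b": (0, 0.0)},
}

def optimal_policy(trans, shape=None):
    F = shape if shape else (lambda s, s2: 0.0)
    V = {0: 0.0, 1: 0.0, 2: 0.0}
    for _ in range(200):                       # value iteration
        for s in trans:
            V[s] = max(r + F(s, s2) + GAMMA * V[s2]
                       for s2, r in trans[s].values())
    return {s: max(trans[s], key=lambda a: trans[s][a][1]
                   + F(s, trans[s][a][0]) + GAMMA * V[trans[s][a][0]])
            for s in trans}

PHI = {0: 3.0, 1: -1.0, 2: 0.0}                 # arbitrary potential function
shaped = lambda s, s2: GAMMA * PHI[s2] - PHI[s]  # F(s,s') = gamma*Phi(s') - Phi(s)
```

The paper's contribution is extending this invariance from MDPs to the Nash equilibria of multi-player general-sum stochastic games; the single-agent check above only illustrates the mechanics.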
1401.3908 | Centrality-as-Relevance: Support Sets and Similarity as Geometric
Proximity | cs.IR cs.CL | In automatic summarization, centrality-as-relevance means that the most
important content of an information source, or a collection of information
sources, corresponds to the most central passages, considering a representation
where such notion makes sense (graph, spatial, etc.). We assess the main
paradigms, and introduce a new centrality-based relevance model for automatic
summarization that relies on the use of support sets to better estimate the
relevant content. Geometric proximity is used to compute semantic relatedness.
Centrality (relevance) is determined by considering the whole input source (and
not only local information), and by taking into account the existence of minor
topics or lateral subjects in the information sources to be summarized. The
method consists in creating, for each passage of the input source, a support
set consisting only of the most semantically related passages. Then, the
determination of the most relevant content is achieved by selecting the
passages that occur in the largest number of support sets. This model produces
extractive summaries that are generic, and language- and domain-independent.
Thorough automatic evaluation shows that the method achieves state-of-the-art
performance, both in written text, and automatically transcribed speech
summarization, including when compared to considerably more complex approaches.
|
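The support-set mechanism can be sketched in a few lines. Jaccard token overlap here stands in for the paper's geometric-proximity measure, and the passages are invented toy data:

```python
def similarity(a, b):
    """Jaccard overlap of token sets -- a stand-in for geometric proximity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def summarize(passages, k=2, n_out=1):
    n = len(passages)
    # Support set of passage i: its k most semantically related other passages.
    support = {
        i: sorted((j for j in range(n) if j != i),
                  key=lambda j: similarity(passages[i], passages[j]),
                  reverse=True)[:k]
        for i in range(n)
    }
    # Relevance of passage j: the number of support sets it occurs in.
    counts = {j: sum(j in s for s in support.values()) for j in range(n)}
    return sorted(range(n), key=lambda j: counts[j], reverse=True)[:n_out]

passages = ["the cat sat on the mat", "the cat ate the fish",
            "dogs bark loudly at night", "the fish swam in the sea"]
```

Passages tied to the dominant topic appear in many support sets and are extracted first, while the off-topic passage about dogs appears in none.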
1401.3909 | Scheduling Bipartite Tournaments to Minimize Total Travel Distance | cs.AI cs.DS | In many professional sports leagues, teams from opposing leagues/conferences
compete against one another, playing inter-league games. This is an example of
a bipartite tournament. In this paper, we consider the problem of reducing the
total travel distance of bipartite tournaments, by analyzing inter-league
scheduling from the perspective of discrete optimization. This research has
natural applications to sports scheduling, especially for leagues such as the
National Basketball Association (NBA) where teams must travel long distances
across North America to play all their games, thus consuming much time, money,
and greenhouse gas emissions. We introduce the Bipartite Traveling Tournament
Problem (BTTP), the inter-league variant of the well-studied Traveling
Tournament Problem. We prove that the 2n-team BTTP is NP-complete, but for
small values of n, a distance-optimal inter-league schedule can be generated
from an algorithm based on minimum-weight 4-cycle-covers. We apply our
theoretical results to the 12-team Nippon Professional Baseball (NPB) league in
Japan, producing a provably-optimal schedule requiring 42950 kilometres of
total team travel, a 16% reduction compared to the actual distance traveled by
these teams during the 2010 NPB season. We also develop a nearly-optimal
inter-league tournament for the 30-team NBA league, just 3.8% higher than the
trivial theoretical lower bound.
|
1401.3910 | Topological Value Iteration Algorithms | cs.AI | Value iteration is a powerful yet inefficient algorithm for Markov decision
processes (MDPs) because it puts the majority of its effort into backing up the
entire state space, which turns out to be unnecessary in many cases. In order
to overcome this problem, many approaches have been proposed. Among them, ILAO*
and variants of RTDP are state-of-the-art ones. These methods use reachability
analysis and heuristic search to avoid some unnecessary backups. However, none
of these approaches build the graphical structure of the state transitions in a
pre-processing step or use the structural information to systematically
decompose a problem, thereby generating an intelligent backup sequence of the
state space. In this paper, we present two optimal MDP algorithms. The first
algorithm, topological value iteration (TVI), detects the structure of MDPs and
backs up states based on topological sequences. It (1) divides an MDP into
strongly-connected components (SCCs), and (2) solves these components
sequentially. TVI vastly outperforms VI and other state-of-the-art algorithms
when an MDP has multiple, close-to-equal-sized SCCs. The second algorithm,
focused topological value iteration (FTVI), is an extension of TVI. FTVI
restricts its attention to connected components that are relevant for solving
the MDP. Specifically, it uses a small amount of heuristic search to eliminate
provably sub-optimal actions; this pruning allows FTVI to find smaller
connected components, thus running faster. We demonstrate that FTVI outperforms
TVI by an order of magnitude, averaged across several domains. Surprisingly,
FTVI also significantly outperforms popular heuristically-informed MDP
algorithms such as ILAO*, LRTDP, BRTDP and Bayesian-RTDP in many domains,
sometimes by as much as two orders of magnitude. Finally, we characterize the
type of domains where FTVI excels, suggesting a way to make an informed choice of
solver.
|
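The TVI decomposition can be sketched with standard tools: find the strongly-connected components of the state-transition graph, then run value iteration on each component in reverse topological order, so every component is solved with its downstream values already fixed. The toy MDP and all numbers below are invented for the example:

```python
def plain_vi(succ, states, gamma=0.95, iters=500):
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in succ:
            V[s] = max(r + gamma * V[s2] for s2, r in succ[s].values())
    return V

def sccs(succ, states):
    """Kosaraju's algorithm: SCCs in topological order of the condensation."""
    graph = {s: {s2 for s2, _ in succ.get(s, {}).values()} for s in states}
    rev = {s: set() for s in states}
    for s in states:
        for t in graph[s]:
            rev[t].add(s)
    seen, order = set(), []

    def dfs(g, s, out):
        stack = [(s, iter(g[s]))]
        seen.add(s)
        while stack:
            node, it = stack[-1]
            nxt = next((t for t in it if t not in seen), None)
            if nxt is None:
                stack.pop()
                out.append(node)       # post-order: node is finished
            else:
                seen.add(nxt)
                stack.append((nxt, iter(g[nxt])))

    for s in states:
        if s not in seen:
            dfs(graph, s, order)
    seen.clear()
    comps = []
    for s in reversed(order):          # decreasing finish time
        if s not in seen:
            comp = []
            dfs(rev, s, comp)
            comps.append(comp)
    return comps

def tvi(succ, states, gamma=0.95, iters=500):
    V = {s: 0.0 for s in states}
    for comp in reversed(sccs(succ, states)):   # solve downstream SCCs first
        for _ in range(iters):
            for s in comp:
                if s in succ:
                    V[s] = max(r + gamma * V[s2] for s2, r in succ[s].values())
    return V

# Toy MDP: succ[s][action] = (next_state, reward); state 4 is absorbing.
succ = {
    0: {"a": (1, 0.0), "b": (2, 1.0)},
    1: {"a": (0, 0.0), "b": (3, 2.0)},
    2: {"a": (3, 0.0)},
    3: {"a": (2, 0.0), "b": (4, 4.0)},
}
states = [0, 1, 2, 3, 4]
```

Both solvers reach the same fixed point; the payoff of the decomposition, per the abstract, comes when the MDP has several close-to-equal-sized SCCs, so each component converges on a much smaller state set.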
1401.3915 | Community Detection in Networks using Graph Distance | stat.ML cs.SI | The study of networks has received increased attention recently not only from
the social sciences and statistics but also from physicists, computer
scientists and mathematicians. One of the principal problems in networks is
community detection. Many algorithms have been proposed for community finding,
but most of them do not have theoretical guarantees for sparse networks and
networks close to the phase transition boundary proposed by physicists. There
are some exceptions, but all have an incomplete theoretical basis. Here we
propose an algorithm based on the graph distance of vertices in the network. We
give theoretical guarantees that our method works in identifying communities
for block models and can be extended to degree-corrected block models and
block models with the number of communities growing with the number of vertices.
Despite favorable simulation results, we are not yet able to conclude that our
method is satisfactory in the worst possible case. We illustrate on a network of
political blogs, Facebook networks and some other networks.
|
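A bare-bones version of distance-based community detection, restricted to two communities: seed with the most distant vertex pair, then assign every vertex to the nearer seed. The graph and the two-community restriction are simplifications for illustration; the paper's actual method and its block-model guarantees are more involved.

```python
from collections import deque

def bfs_dist(adj, src):
    """Graph distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def two_communities(adj):
    nodes = list(adj)
    dist = {u: bfs_dist(adj, u) for u in nodes}
    # Seed with a most distant pair of vertices, then assign by proximity.
    a, b = max(((u, v) for u in nodes for v in nodes),
               key=lambda p: dist[p[0]][p[1]])
    return {u: (0 if dist[a][u] <= dist[b][u] else 1) for u in nodes}

# Two 4-cliques joined by a single bridge edge 3-4.
adj = {i: set() for i in range(8)}
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for u in group:
        for v in group:
            if u != v:
                adj[u].add(v)
adj[3].add(4); adj[4].add(3)
labels = two_communities(adj)
```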
1401.3918 | A universal law in human mobility | physics.soc-ph cs.SI | The intrinsic factor that drives human movement has remained unclear for
decades. However, our observations from both intra-urban and inter-urban trips
demonstrate a universal law in human mobility: the probability of moving from
one location to another is inversely proportional to the population living at
locations closer than the destination. A simple rank-based model is then
presented, which is parameter-free yet predicts human flows with convincing
fidelity. Moreover, comparison with other models shows that our model is more
stable and fundamental across different spatial scales, implying a strong
correlation between human mobility and social relationships.
|
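The stated law is simple enough to sketch directly. Here rank_i(j) is taken as the total population living at locations no farther from i than j; whether to include the destination's own population and how to break distance ties are modeling choices, and the one-dimensional toy data are invented for illustration:

```python
def move_probs(locs, pops, origin):
    """P(origin -> j) proportional to 1 / rank_origin(j)."""
    d = lambda i, j: abs(locs[i] - locs[j])     # 1-D positions for simplicity
    others = [j for j in range(len(locs)) if j != origin]
    # rank of j: population at destinations no farther from the origin than j.
    rank = {j: sum(pops[k] for k in others if d(origin, k) <= d(origin, j))
            for j in others}
    w = {j: 1.0 / rank[j] for j in others}
    z = sum(w.values())
    return {j: w[j] / z for j in others}

# Positions on a line and their populations, invented for illustration.
p = move_probs(locs=[0, 1, 2, 4], pops=[50, 10, 20, 30], origin=0)
```

Nearer destinations accumulate less intervening population, get lower ranks, and therefore higher flow probabilities, with no tunable parameter anywhere.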
1401.3922 | Discrete Convexity and Stochastic Approximation for Cross-layer On-off
Transmission Control | cs.IT cs.SY math.IT | This paper considers the discrete convexity of a cross-layer on-off
transmission control problem in wireless communications. In this system, a
scheduler decides whether or not to transmit in order to optimize the long-term
quality of service (QoS) incurred by the queueing effects in the data link
layer and the transmission power consumption in the physical (PHY) layer
simultaneously. Using a Markov decision process (MDP) formulation, we show that
the optimal policy can be determined by solving a minimization problem over a
set of queue thresholds if the dynamic programming (DP) is submodular. We prove
that this minimization problem is discrete convex. In order to search for the
minimizer, we consider two discrete stochastic approximation (DSA) algorithms:
discrete simultaneous perturbation stochastic approximation (DSPSA) and
L-natural-convex stochastic approximation (L-natural-convex SA). Through
numerical studies, we show that the two DSA algorithms converge significantly
faster than the existing continuous simultaneous perturbation stochastic
approximation (CSPSA) algorithm in multi-user systems. Finally, we compare the
convergence results and complexity of two DSA and CSPSA algorithms where we
show that DSPSA achieves the best trade-off between complexity and accuracy in
multi-user systems.
|
1401.3928 | Multiply Constant-Weight Codes and the Reliability of Loop Physically
Unclonable Functions | cs.IT math.CO math.IT | We introduce the class of multiply constant-weight codes to improve the
reliability of certain physically unclonable function (PUF) responses. We extend
classical coding methods to construct multiply constant-weight codes from known
$q$-ary and constant-weight codes. Analogues of Johnson bounds are derived and
are shown to be asymptotically tight to a constant factor under certain
conditions. We also examine the rates of the multiply constant-weight codes and
interestingly, demonstrate that these rates are the same as those of
constant-weight codes of suitable parameters. Asymptotic analysis of our code
constructions is provided.
|
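One classical construction in the spirit of the abstract can be sketched concretely: expand each symbol of a q-ary code into the weight-1 binary indicator vector of length q. Every block of the result then has constant weight 1, giving a multiply constant-weight code whose binary Hamming distances are exactly twice the q-ary ones. The small ternary code below is invented for illustration:

```python
def unit_block(sym, q):
    """Symbol s in {0,...,q-1} -> the weight-1 binary vector of length q."""
    return tuple(1 if i == sym else 0 for i in range(q))

def expand(word, q):
    """q-ary word -> binary word of n blocks, each of constant weight 1."""
    return tuple(bit for sym in word for bit in unit_block(sym, q))

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

q = 3
code = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]        # a small ternary code
bin_code = [expand(w, q) for w in code]
```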
1401.3938 | Robust Modulation Technique for Diffusion-based Molecular Communication
in Nanonetworks | cs.IT math.IT | Diffusion-based molecular communication over nanonetworks is an emerging
communication paradigm that enables nanomachines to communicate by using
molecules as the information carrier. For such a communication paradigm,
Concentration Shift Keying (CSK) has been considered as one of the most
promising techniques for modulating information symbols, owing to its inherent
simplicity and practicality. Subsequent CSK-modulated information symbols,
however, may interfere with each other due to the random amount of time that
molecules of each modulated symbol take to reach the receiver nanomachine. To
alleviate the Inter-Symbol Interference (ISI) problem associated with CSK, we
propose a new modulation technique called Zebra-CSK. The proposed Zebra-CSK
adds inhibitor molecules to the CSK-modulated molecular signal to selectively
suppress ISI-causing molecules. Numerical results from our newly developed
probabilistic analytical model show that Zebra-CSK not only enhances the
capacity of the molecular channel but also reduces the symbol error probability
observed at the receiver nanomachine.
|
1401.3941 | Network Coding for $3$s$/n$t Sum-Networks | cs.IT math.IT | A sum-network is a directed acyclic network where each source independently
generates one symbol from a given field $\mathbb F$ and each terminal wants to
receive the sum $($over $\mathbb F)$ of the source symbols. For sum-networks
with two sources or two terminals, the solvability is characterized by the
connection condition of each source-terminal pair [3]. A necessary and
sufficient condition for the solvability of the $3$-source $3$-terminal
$(3$s$/3$t$)$ sum-networks was given by Shenvi and Dey [6]. However, the
general case of arbitrary sources/sinks is still open. In this paper, we
investigate the sum-network with three sources and $n$ sinks using a region
decomposition method. A sufficient and necessary condition is established for a
class of $3$s$/n$t sum-networks. As a direct application of this result, a
necessary and sufficient condition of solvability is obtained for the special
case of $3$s$/3$t sum-networks.
|
1401.3945 | Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic
Sensor Networks | cs.IT math.IT | In this paper, we consider the problem of remote vector Gaussian source
coding for a wireless acoustic sensor network. Each node receives messages from
multiple nodes in the network and decodes these messages using its own
measurement of the sound field as side information. The node's measurement and
the estimates of the source resulting from decoding the received messages are
then jointly encoded and transmitted to a neighboring node in the network. We
show that for this distributed source coding scenario, one can encode a
so-called conditional sufficient statistic of the sources instead of jointly
encoding multiple sources. We focus on the case where the node measurements are
noisy linear mixtures of the sources and the acoustic channel mixing matrices
are invertible. For this problem, we derive the rate-distortion function for
vector Gaussian sources under covariance distortion constraints.
|
1401.3973 | An Empirical Evaluation of Similarity Measures for Time Series
Classification | cs.LG cs.CV stat.ML | Time series are ubiquitous, and a measure to assess their similarity is a
core part of many computational systems. In particular, the similarity measure
is the most essential ingredient of time series clustering and classification
systems. Because of this importance, countless approaches to estimate time
series similarity have been proposed. However, there is a lack of comparative
studies using empirical, rigorous, quantitative, and large-scale assessment
strategies. In this article, we provide an extensive evaluation of similarity
measures for time series classification following the aforementioned
principles. We consider 7 different measures coming from alternative measure
`families', and 45 publicly-available time series data sets coming from a wide
variety of scientific domains. We focus on out-of-sample classification
accuracy, but in-sample accuracies and parameter choices are also discussed.
Our work is based on rigorous evaluation methodologies and includes the use of
powerful statistical significance tests to derive meaningful conclusions. The
obtained results show the equivalence, in terms of accuracy, of a number of
measures, but with one single candidate outperforming the rest. These findings,
together with the methodology followed, invite researchers in the field to
adopt more consistent evaluation criteria and to make a more informed decision
regarding the baseline measures against which new developments are compared.
|
1401.3985 | Engineering the Hardware/Software Interface for Robotic Platforms - A
Comparison of Applied Model Checking with Prolog and Alloy | cs.SE cs.RO | Robotic platforms serve different use cases ranging from experiments for
prototyping assistive applications up to embedded systems for realizing
cyber-physical systems in various domains. We are using 1:10 scale miniature
vehicles as a robotic platform to conduct research in the domain of
self-driving cars and collaborative vehicle fleets. Thus, experiments with
different sensors, e.g. ultrasonic, infrared, and rotary encoders, need to
be prepared and realized using our vehicle platform. For each setup, we need to
configure the hardware/software interface board to handle all sensors and
actuators. Therefore, we need to find a specific configuration setting for each
pin of the interface board that can handle our current hardware setup but which
is also flexible enough to support further sensors or actuators for future use
cases. In this paper, we show how to model the domain of the configuration
space for a hardware/software interface board to enable model checking for
solving the tasks of finding any, all, and the best possible pin configuration.
We present results from a formal experiment applying the declarative languages
Alloy and Prolog to guide the process of engineering the hardware/software
interface for robotic platforms on the example of a configuration complexity up
to ten pins resulting in a configuration space greater than 14.5 million
possibilities. Our results show that our domain model in Alloy performs better
compared to Prolog to find feasible solutions for larger configurations with an
average time of 0.58s. To find the best solution, our model for Prolog performs
better, taking only 1.38s for the largest desired configuration; however, this
important use case is currently not covered by the existing tools for the
hardware used as an example in this article.
|
1401.3995 | $Y$-$\Delta$ Product in 3-Way $\Delta$ and Y-Channels for Cyclic
Interference and Signal Alignment | cs.IT math.IT | In a full-duplex 3-way $\Delta$ channel, three transceivers communicate with
each other, so that a total of six messages is exchanged. In a $Y$-channel,
however, these three transceivers are connected to an intermediate full-duplex
relay. Loop-back self-interference is suppressed perfectly. The relay forwards
network-coded messages to their dedicated users by means of interference
alignment (IA) and signal alignment. A conceptual channel model with cyclic
shifts described by a polynomial ring is considered for these two related
channels. The maximally achievable rates in terms of the degrees of freedom
measure are derived. We observe that the Y-channel and the 3-way $\Delta$
channel provide a $Y$-$\Delta$ product relationship. Moreover, we briefly
discuss how this analysis relates to spatial IA and MIMO IA.
|
1401.3998 | Performance Evaluation of Bit Division Multiplexing combined with
Non-Uniform QAM | cs.IT math.IT | Broadcasting systems have to deal with channel variability in order to offer
the best spectral efficiency to the receivers. However, the transmission
parameters that maximise the spectral efficiency generally lead to a large
link unavailability. In this paper, we study analytically the trade-off between
spectral efficiency and coverage for various channel resource allocation
strategies when broadcasting two services. More precisely, we consider the
following strategies: time sharing, hierarchical modulation and bit division
multiplexing. Our main contribution is the combination of bit division
multiplexing with non-uniform QAM to improve the performance of broadcasting
systems. The results show that this scheme outperforms all the previous channel
resource allocation strategies.
|
1401.4020 | Robust Recursive State Estimation with Random Measurements Droppings | cs.SY | A recursive state estimation procedure is derived for a linear time varying
system with both parametric uncertainties and stochastic measurement droppings.
This estimator has a similar form to that of the Kalman filter with
intermittent observations, but its parameters should be adjusted when a plant
output measurement arrives. A new recursive form is derived for the
pseudo-covariance matrix of estimation errors, which plays an important role in
analyzing its asymptotic properties. Based on a Riemannian metric for positive
definite matrices, some necessary and sufficient conditions have been obtained
for the strict contractiveness of an iteration of this recursion. It has also
been proved that under some controllability and observability conditions, as
well as some weak requirements on measurement arrival probability, the gain
matrix of this recursive robust state estimator converges with probability one to
a stationary distribution. Numerical simulation results show that the
estimation accuracy of the suggested procedure is more robust against
parametric modelling errors than that of the Kalman filter.
|
1401.4023 | Asymptotic Behavior of the Pseudo-Covariance Matrix of a Robust State
Estimator with Intermittent Measurements | cs.SY cs.IT math.IT | Ergodic properties and asymptotic stationarity are investigated in this paper
for the pseudo-covariance matrix (PCM) of a recursive state estimator which is
robust against parametric uncertainties and is based on plant output
measurements that may be randomly dropped. When the measurement dropping
process is described by a Markov chain and the modified plant is both
controllable and observable, it is proved that if the dropping probability is
less than 1, this PCM converges to a stationary distribution that is
independent of its initial values. A convergence rate is also provided. In
addition, it is also shown that when the initial value of the PCM is
set to the stabilizing solution of the algebraic Riccati equation related to
the robust state estimator without measurement dropping, this PCM converges to
an ergodic process. Based on these results, two approximations are derived for
the probability distribution function of the stationary PCM, as well as a bound
of approximation errors. A numerical example is provided to illustrate the
obtained theoretical results.
|
1401.4068 | Efficient transfer entropy analysis of non-stationary neural time series | cs.IT math.IT q-bio.NC | Information theory allows us to investigate information processing in neural
systems in terms of information transfer, storage and modification. Especially
the measure of information transfer, transfer entropy, has seen a dramatic
surge of interest in neuroscience. Estimating transfer entropy from two
processes requires the observation of multiple realizations of these processes
to estimate associated probability density functions. To obtain these
observations, available estimators assume stationarity of processes to allow
pooling of observations over time. This assumption, however, is a major obstacle
to the application of these estimators in neuroscience as observed processes
are often non-stationary. As a solution, Gomez-Herrero and colleagues
theoretically showed that the stationarity assumption may be avoided by
estimating transfer entropy from an ensemble of realizations. Such an ensemble
is often readily available in neuroscience experiments in the form of
experimental trials. Thus, in this work we combine the ensemble method with a
recently proposed transfer entropy estimator to make transfer entropy
estimation applicable to non-stationary time series. We present an efficient
implementation of the approach that deals with the increased computational
demand of the ensemble method's practical application. In particular, we use a
massively parallel implementation for a graphics processing unit to handle the
most computationally demanding aspects of the ensemble method. We test the
performance and robustness of our implementation on data from simulated
stochastic processes and demonstrate the method's applicability to
magnetoencephalographic data. While we mainly evaluate the proposed method for
neuroscientific data, we expect it to be applicable in a variety of fields that
are concerned with the analysis of information transfer in complex biological,
social, and artificial systems.
|
1401.4082 | Stochastic Backpropagation and Approximate Inference in Deep Generative
Models | stat.ML cs.AI cs.LG stat.CO stat.ME | We marry ideas from deep neural networks and approximate Bayesian inference
to derive a generalised class of deep, directed generative models, endowed with
a new algorithm for scalable inference and learning. Our algorithm introduces a
recognition model to represent approximate posterior distributions, which
acts as a stochastic encoder of the data. We develop stochastic
back-propagation -- rules for back-propagation through stochastic variables --
and use this to develop an algorithm that allows for joint optimisation of the
parameters of both the generative and recognition model. We demonstrate on
several real-world data sets that the model generates realistic samples,
provides accurate imputations of missing data and is a useful tool for
high-dimensional data visualisation.
|
1401.4105 | Learning $\ell_1$-based analysis and synthesis sparsity priors using
bi-level optimization | cs.CV | We consider the analysis operator and synthesis dictionary learning problems
based on the $\ell_1$ regularized sparse representation model. We reveal
the internal relations between the $\ell_1$-based analysis model and synthesis
model. We then introduce an approach to learn both analysis operator and
synthesis dictionary simultaneously by using a unified framework of bi-level
optimization. Our aim is to learn a meaningful operator (dictionary) such that
the minimum energy solution of the analysis (synthesis)-prior based model is as
close as possible to the ground-truth. We solve the bi-level optimization
problem using the implicit differentiation technique. Moreover, we demonstrate
the effectiveness of our learning approach by applying the learned analysis
operator (dictionary) to the image denoising task and comparing its performance
with state-of-the-art methods. Under this unified framework, we can compare the
performance of the two types of priors.
|
1401.4107 | Revisiting loss-specific training of filter-based MRFs for image
restoration | cs.CV | It is now well known that Markov random fields (MRFs) are particularly
effective for modeling image priors in low-level vision. Recent years have seen
the emergence of two main approaches for learning the parameters in MRFs: (1)
probabilistic learning using sampling-based algorithms and (2) loss-specific
training based on MAP estimation. Our investigation of existing training
approaches shows that the performance of the loss-specific training has
been significantly underestimated in existing work. In this paper, we revisit
this approach and use techniques from bi-level optimization to solve it. We
show that we can get a substantial gain in the final performance by solving the
lower-level problem in the bi-level framework with high accuracy using our
newly proposed algorithm. As a result, our trained model is on par with highly
specialized image denoising algorithms and clearly outperforms
probabilistically trained MRF models. Our findings suggest that for the
loss-specific training scheme, solving the lower-level problem with higher
accuracy is beneficial. Our trained model comes along with the additional
advantage, that inference is extremely efficient. Our GPU-based implementation
takes less than 1s to produce state-of-the-art performance.
|
1401.4112 | A bi-level view of inpainting - based image compression | cs.CV | Inpainting based image compression approaches, especially linear and
non-linear diffusion models, are an active research topic for lossy image
compression. The major challenge in these compression models is to find a small
set of descriptive supporting points, which allow for an accurate
reconstruction of the original image. It turns out in practice that this is a
challenging problem even for the simplest Laplacian interpolation model. In
this paper, we revisit the Laplacian interpolation compression model and
introduce two fast algorithms, namely successive preconditioning primal dual
algorithm and the recently proposed iPiano algorithm, to solve this problem
efficiently. Furthermore, we extend the Laplacian interpolation based
compression model to a more general form, which is based on principles from
bi-level optimization. We investigate two different variants of the Laplacian
model, namely biharmonic interpolation and smoothed Total Variation
regularization. Our numerical results show that significant improvements can be
obtained from the biharmonic interpolation model, and it can recover an image
with very high quality from only 5% of the pixels.
|
1401.4126 | Lower bounds on the communication complexity of two-party (quantum)
processes | quant-ph cs.IT math.IT | The process of state preparation, its transmission and subsequent measurement
can be classically simulated through the communication of some amount of
classical information. Recently, we proved that the minimal communication cost
is the minimum of a convex functional over a space of suitable probability
distributions. It is now proved that this optimization problem is the dual of a
geometric programming maximization problem, which displays some appealing
properties. First, the number of variables grows linearly with the input size.
Second, the objective function is linear in the input parameters and the
variables. Finally, the constraints do not depend on the input parameters.
These properties imply that, once a feasible point is found, the computation of
a lower bound on the communication cost in any two-party process is linearly
complex. The studied scenario goes beyond quantum processes and includes the
communication complexity scenario introduced by Yao. We illustrate the method
by analytically deriving some non-trivial lower bounds. Finally, we conjecture
the lower bound $n 2^n$ for a noiseless quantum channel with capacity $n$
qubits. This bound can have an interesting consequence in the context of the
recent quantum-foundational debate on the reality of the quantum state.
|
1401.4127 | Cognitive Robotics: for never was a story of more woe than this | cs.RO cs.CY q-bio.NC | We are now on the verge of the next technical revolution - robots are going
to invade our lives. However, to interact with humans or to be incorporated
into a human "collective" robots have to be provided with some human-like
cognitive abilities. What does it mean? - nobody knows. But robotics research
communities are trying hard to find out a way to cope with this problem.
Meanwhile, despite abundant funding, these efforts have not led to any
meaningful result (in Europe alone, over the past ten years, Cognitive
Robotics research funding has reached 1.39 billion euros). In the
next ten years, a similar budget is going to be spent to tackle the Cognitive
Robotics problems in the frame of the Human Brain Project. There is no reason
to expect that this time the result will be different. I would like to try to
explain why I'm so unhappy about this.
|
1401.4128 | Towards the selection of patients requiring ICD implantation by
automatic classification from Holter monitoring indices | cs.LG stat.AP | The purpose of this study is to optimize the selection of prophylactic
cardioverter defibrillator implantation candidates. Currently, the main
criterion for implantation is a low Left Ventricular Ejection Fraction (LVEF)
whose specificity is relatively poor. We designed two classifiers aimed to
predict, from long term ECG recordings (Holter), whether a low-LVEF patient is
likely or not to undergo ventricular arrhythmia in the next six months. One
classifier is a single hidden layer neural network whose variables are the most
relevant features extracted from Holter recordings, and the other classifier
has a structure that capitalizes on the physiological decomposition of the
arrhythmogenic factors into three disjoint groups: the myocardial substrate,
the triggers and the autonomic nervous system (ANS). In this ad hoc network,
the features were assigned to each group; one neural network classifier per
group was designed and its complexity was optimized. The outputs of the
classifiers were fed to a single neuron that provided the required probability
estimate. The latter was thresholded for final discrimination. A dataset
composed of 186 pre-implantation 30-min Holter recordings of patients equipped
with an implantable cardioverter defibrillator (ICD) in primary prevention was
used in order to design and test this classifier. 44 out of 186 patients
underwent at least one treated ventricular arrhythmia during the six-month
follow-up period. Performances of the designed classifier were evaluated using
a cross-test strategy that consists of splitting the database into several
combinations of a training set and a test set. The average arrhythmia
prediction performances of the ad-hoc classifier are NPV = 77% $\pm$ 13% and
PPV = 31% $\pm$ 19% (Negative Predictive Value $\pm$ std, Positive Predictive
Value $\pm$ std). According to our study, improving prophylactic
ICD-implantation candidate selection by automatic classification from ECG
features may be possible, but the availability of a sizable dataset appears to
be essential to decrease the number of False Negatives.
|
1401.4134 | A conditional compression distance that unveils insights of the genomic
evolution | q-bio.GN cs.IT math.IT | We describe a compression-based distance for genomic sequences. Instead of
using the usual conjoint information content, as in the classical Normalized
Compression Distance (NCD), it uses the conditional information content. To
compute this Normalized Conditional Compression Distance (NCCD), we need a
normal conditional compressor, that we built using a mixture of static and
dynamic finite-context models. Using this approach, we measured chromosomal
distances between Hominidae primates and also between Muroidea (rat and mouse),
gaining several insights into evolution that have not so far been reported in
the literature.
|
1401.4140 | Statistics of co-occurring keywords on Twitter | physics.soc-ph cs.SI | Online social media such as the micro-blogging site Twitter has become a rich
source of real-time data on online human behaviors. Here we analyze the
occurrence and co-occurrence frequency of keywords in user posts on Twitter.
From the occurrence rate of major international brand names, we provide
examples of predictions of brand-user behaviors. From the co-occurrence rates,
we further analyze the user-perceived relationships between international brand
names and construct the corresponding relationship networks. In general the
user activity on Twitter is highly intermittent and we show that the occurrence
rate of brand names forms a highly correlated time signal.
|
1401.4143 | Convex Optimization for Binary Classifier Aggregation in Multiclass
Problems | cs.LG | Multiclass problems are often decomposed into multiple binary problems that
are solved by individual binary classifiers whose results are integrated into a
final answer. Various methods, including all-pairs (APs), one-versus-all (OVA),
and error-correcting output codes (ECOC), have been studied to decompose
multiclass problems into binary problems. However, little work has addressed
how to optimally aggregate the binary classifiers to determine a final answer
to the multiclass problem. In this paper we present a convex optimization
method for
an optimal aggregation of binary classifiers to estimate class membership
probabilities in multiclass problems. We model the class membership probability
as a softmax function that takes as input a conic combination of discrepancies
induced by the individual binary classifiers. With this model, we formulate
the regularized maximum likelihood estimation as a convex optimization problem,
which is solved by the primal-dual interior point method. Connections of our
method to large margin classifiers are presented, showing that the large margin
formulation can be considered as a limiting case of our convex formulation.
Numerical experiments on synthetic and real-world data sets demonstrate that
our method outperforms existing aggregation methods as well as direct methods,
in terms of the classification accuracy and the quality of class membership
probability estimates.
|
1401.4144 | Arguments using ontological and causal knowledge | cs.AI | We investigate an approach to reasoning about causes through argumentation.
We consider a causal model for a physical system, and look for arguments about
facts. Some arguments are meant to provide explanations of facts whereas some
challenge these explanations, and so on. At the root of the argumentation here are
causal links ({A_1, ... ,A_n} causes B) and ontological links (o_1 is_a o_2).
We present a system that provides a candidate explanation ({A_1, ... ,A_n}
explains {B_1, ... ,B_m}) by resorting to an underlying causal link
substantiated with appropriate ontological links. Argumentation is then at work
from these various explaining links. A case study is developed: the severe storm
Xynthia, which devastated part of France in 2010 with an unaccountably high
number of casualties.
|
1401.4147 | Simple Semi-Distributed Lifetime Maximizing Strategy via Power
Allocation in Collaborative Beamforming for Wireless Sensor Networks | cs.IT cs.NI math.IT | Energy-efficient communication is an important issue in wireless sensor
networks (WSNs) consisting of a large number of energy-constrained sensor nodes.
Indeed, sensor nodes have different energy budgets assigned to data
transmission at individual nodes. Therefore, without energy-aware transmission
schemes, energy can deplete from sensor nodes with smaller energy budget faster
than from the rest of the sensor nodes in WSNs. This reduces the coverage area
as well as the lifetime of WSNs. Collaborative beamforming (CB) was originally
proposed to achieve directional gain; however, it also inherently
distributes the corresponding energy consumption over the collaborative sensor
nodes. In fact, CB can be seen as a physical layer solution (versus the media
access control/network layer solution) to balance the lifetimes of individual
sensor nodes and extend the lifetime of the whole WSN. However, the
introduction of energy-aware CB schemes is critical for extending the WSN
lifetime.
In this paper, CB with power allocation (CB-PA) is developed to extend the
lifetime of a cluster of collaborative sensor nodes by balancing the individual
sensor node lifetimes. A novel strategy is proposed to utilize the residual
energy information available at each sensor node. It adjusts the energy
consumption rate at each sensor node while achieving the required average
signal-to-noise ratio (SNR) at the destination. It is a semi-distributed
strategy and it maintains average SNR. Different factors affecting the energy
consumption are studied as well. Simulation results show that CB-PA outperforms
CB with Equal Power Allocation (CB-EPA) in terms of extending the lifetime of a
cluster of collaborative nodes.
|
1401.4158 | Embodied social interaction constitutes social cognition in pairs of
humans: A minimalist virtual reality experiment | nlin.AO cs.HC cs.MA | Scientists have traditionally limited the mechanisms of social cognition to
one brain, but recent approaches claim that interaction also realizes cognitive
work. Experiments under constrained virtual settings revealed that interaction
dynamics implicitly guide social cognition. Here we show that embodied social
interaction can be constitutive of agency detection and of experiencing
another's presence. Pairs of participants moved their "avatars" along an
invisible virtual line and could make haptic contact with three identical
objects, two of which embodied the other's motions, but only one, the other's
avatar, also embodied the other's contact sensor and thereby enabled responsive
interaction. Co-regulated interactions were significantly correlated with
identifications of the other's avatar and reports of the clearest awareness of
the other's presence. These results challenge folk psychological notions about
the boundaries of mind, but make sense from evolutionary and developmental
perspectives: an extendible mind can offload cognitive work into its
environment.
|
1401.4161 | Strong converse for the classical capacity of optical quantum
communication channels | quant-ph cs.IT math.IT | We establish the classical capacity of optical quantum channels as a sharp
transition between two regimes---one which is an error-free regime for
communication rates below the capacity, and the other in which the probability
of correctly decoding a classical message converges exponentially fast to zero
if the communication rate exceeds the classical capacity. This result is
obtained by proving a strong converse theorem for the classical capacity of all
phase-insensitive bosonic Gaussian channels, a well-established model of
optical quantum communication channels, such as lossy optical fibers,
amplifiers, and free-space communication. The theorem holds under a particular
photon-number occupation constraint, which we describe in detail in the paper.
Our result bolsters the understanding of the classical capacity of these
channels and opens the path to applications, such as proving the security of
noisy quantum storage models of cryptography with optical links.
|
1401.4189 | Scalable Capacity Bounding Models for Wireless Networks | cs.IT math.IT | The framework of network equivalence theory developed by Koetter et al.
introduces a notion of channel emulation to construct noiseless networks as
upper (resp. lower) bounding models, which can be used to calculate the outer
(resp. inner) bounds for the capacity region of the original noisy network.
Based on the network equivalence framework, this paper presents scalable upper
and lower bounding models for wireless networks with potentially many nodes. A
channel decoupling method is proposed to decompose wireless networks into
decoupled multiple-access channels (MACs) and broadcast channels (BCs). The
upper bounding model, consisting of only point-to-point bit pipes, is
constructed by first extending the "one-shot" upper bounding models developed
by Calmon et al. and then integrating them with network equivalence tools. The
lower bounding model, consisting of both point-to-point and point-to-points bit
pipes, is constructed based on a two-step update of the lower bounding models
to incorporate the broadcast nature of wireless transmission. The main
advantages of the proposed methods are their simplicity and the fact that they
can be extended easily to large networks with a complexity that grows linearly
with the number of nodes. It is demonstrated that the resulting upper and lower
bounds can approach the capacity in some setups.
|
1401.4205 | Entropy analysis of word-length series of natural language texts:
Effects of text language and genre | cs.CL physics.data-an | We estimate the $n$-gram entropies of natural language texts in word-length
representation and find that these are sensitive to text language and genre. We
attribute this sensitivity to changes in the probability distribution of the
lengths of single words and emphasize the crucial role of the uniformity of
probabilities of having words with length between five and ten. Furthermore,
comparison with the entropies of shuffled data reveals the impact of word
length correlations on the estimated $n$-gram entropies.
|
1401.4208 | Epidemiological modeling of online social network dynamics | cs.SI physics.soc-ph | The last decade has seen the rise of immense online social networks (OSNs)
such as MySpace and Facebook. In this paper we use epidemiological models to
explain user adoption and abandonment of OSNs, where adoption is analogous to
infection and abandonment is analogous to recovery. We modify the traditional
SIR model of disease spread by incorporating infectious recovery dynamics such
that contact between a recovered and infected member of the population is
required for recovery. The proposed infectious recovery SIR model (irSIR model)
is validated using publicly available Google search query data for "MySpace" as
a case study of an OSN that has exhibited both adoption and abandonment phases.
The irSIR model is then applied to search query data for "Facebook," which is
just beginning to show the onset of an abandonment phase. Extrapolating the
best fit model into the future predicts a rapid decline in Facebook activity in
the next few years.
|
1401.4221 | Distortion-driven Turbulence Effect Removal using Variational Model | cs.CV | It remains a challenge to simultaneously remove geometric distortion and
space-time-varying blur in frames captured through a turbulent atmospheric
medium. To remove, or at least reduce, these effects, we propose a new scheme to
recover a latent image from observed frames by integrating a new variational
model and distortion-driven spatial-temporal kernel regression. The proposed
scheme first constructs a high-quality reference image from the observed frames
using low-rank decomposition. Then, to generate an improved registered
sequence, the reference image is iteratively optimized using a variational
model containing a new spatial-temporal regularization. The proposed fast
algorithm efficiently solves this model without the use of partial differential
equations (PDEs). Next, to reduce blur variation, distortion-driven
spatial-temporal kernel regression is carried out to fuse the registered
sequence into one image by introducing the concept of the near-stationary
patch. Applying a blind deconvolution algorithm to the fused image produces the
final output. Extensive experimental testing shows, both qualitatively and
quantitatively, that the proposed method can effectively alleviate distortion
and blur and recover details of the original scene compared to state-of-the-art
methods.
|
1401.4230 | Electricity Pooling Markets with Strategic Producers Possessing
Asymmetric Information I: Elastic Demand | cs.GT cs.SY | In the restructured electricity industry, electricity pooling markets are an
oligopoly with strategic producers possessing private information (private
production cost function). We focus on pooling markets where aggregate demand
is represented by a non-strategic agent. We consider demand to be elastic.
We propose a market mechanism that has the following features. (F1) It is
individually rational. (F2) It is budget balanced. (F3) It is price efficient,
that is, at equilibrium the price of electricity is equal to the marginal cost
of production. (F4) The energy production profile corresponding to every
non-zero Nash equilibrium of the game induced by the mechanism is a solution of
the corresponding centralized problem where the objective is the maximization
of the sum of the producers' and consumers' utilities.
We identify some open problems associated with our approach to electricity
pooling markets.
|
1401.4234 | The power of indirect social ties | cs.SI physics.soc-ph | While direct social ties have been intensely studied in the context of
computer-mediated social networks, indirect ties (e.g., friends of friends)
have seen little attention. Yet in real life, we often rely on friends of our
friends for recommendations (of good doctors, good schools, or good
babysitters), for introduction to a new job opportunity, and for many other
occasional needs. In this work we attempt to 1) quantify the strength of
indirect social ties, 2) validate it, and 3) empirically demonstrate its
usefulness for distributed applications on two examples. We quantify social
strength of indirect ties using a(ny) measure of the strength of the direct
ties that connect two people and the intuition provided by the sociology
literature. We validate the proposed metric experimentally by comparing
correlations with other direct social tie evaluators. We show via data-driven
experiments that the proposed metric for social strength can be used
successfully for social applications. Specifically, we show that it alleviates
known problems in friend-to-friend storage systems by addressing two previously
documented shortcomings: reduced set of storage candidates and data
availability correlations. We also show that it can be used for predicting the
effects of social diffusion with an accuracy of up to 93.5%.
|
1401.4236 | The Impact of Phase Fading on the Dirty Paper Channel | cs.IT math.IT | The impact of phase fading on the classical Costa dirty paper coding channel
is studied. We consider a variation of this channel model in which the
amplitude of the interference sequence is known at the transmitter while its
phase is known at the receiver. Although the capacity of this channel has
already been established, it is expressed using an auxiliary random variable
and as the solution of a maximization problem. To circumvent the difficulty of
evaluating the capacity, we derive alternative inner and outer bounds and show
that the two expressions are within a finite distance of each other. This
provides an approximate characterization of the capacity which depends only on
the channel
parameters. We consider, in particular, two distributions of the phase fading:
circular binomial and circular uniform. The first distribution models the
scenario in which the transmitter has minimal uncertainty over the phase of
the interference while the second distribution models complete uncertainty. For
circular binomial fading, we show that binning with Gaussian signaling still
approaches capacity, as in the channel without phase fading. In the case of
circular uniform fading, instead, binning with Gaussian signaling is no longer
effective and novel interference avoidance strategies are developed to approach
capacity.
|
1401.4237 | The Cognitive Compressive Sensing Problem | cs.IT math.IT math.OC | In the Cognitive Compressive Sensing (CCS) problem, a Cognitive Receiver (CR)
seeks to optimize the reward obtained by sensing an underlying $N$ dimensional
random vector, by collecting at most $K$ arbitrary projections of it. The $N$
components of the latent vector represent sub-channel states, which change
dynamically from "busy" to "idle" and vice versa, following a Markov chain that
is biased towards producing sparse vectors. To identify the optimal strategy we
formulate the Multi-Armed Bandit Compressive Sensing (MAB-CS) problem,
generalizing the popular Cognitive Spectrum Sensing model, in which the CR can
sense $K$ out of the $N$ sub-channels, as well as the typical static setting of
Compressive Sensing, in which the CR observes $K$ linear combinations of the
$N$ dimensional sparse vector. The CR opportunistic choice of the sensing
matrix should balance the desire of revealing the state of as many dimensions
of the latent vector as possible, while not exceeding the limits beyond which
the vector support is no longer uniquely identifiable.
|
1401.4248 | Complexity Analysis of Heuristic Pulse Interleaving Algorithms for
Multi-Target Tracking with Multiple Simultaneous Receive Beams | cs.SY | This paper presents heuristic algorithms for interleaved pulse scheduling
problems on multi-target tracking in pulse Doppler phased array radars that can
process multiple simultaneous received beams. The interleaved pulse scheduling
problems for element and subarray level digital beamforming architectures are
formulated as the same integer program and the asymptotic time complexities of
the algorithms are analyzed.
|
1401.4251 | Bitwise MAP Algorithm for Group Testing based on Holographic
Transformation | cs.IT math.IT | In this paper, an exact bitwise MAP (Maximum A Posteriori) estimation
algorithm for group testing problems is presented. We assume the simplest
non-adaptive group testing scenario, comprising $N$ objects with binary status
and $M$ disjunctive tests. If a group contains a positive object, the test
result for
the group is assumed to be one; otherwise, the test result becomes zero. Our
inference problem is to evaluate the posterior probabilities of the objects
from the observation of the $M$ test results and from our knowledge of the prior
probabilities for objects. The heart of the algorithm is the dual expression of
the posterior values. The derivation of the dual expression can be naturally
described based on a holographic transformation to the normal factor graph
(NFG) representing the inference problem.
|
1401.4269 | SUPER: Sparse signals with Unknown Phases Efficiently Recovered | cs.IT math.IT | Suppose ${\bf x}$ is any exactly $k$-sparse vector in $\mathbb{C}^{n}$. We
present a class of phase measurement matrices $A$ in $\mathbb{C}^{m\times n}$,
and a corresponding algorithm, called SUPER, that can resolve ${\bf x}$ up to a
global phase from intensity measurements $|A{\bf x}|$ with high probability
over $A$. Here $|A{\bf x}|$ is a vector of component-wise magnitudes of $A{\bf
x}$. The SUPER algorithm is the first to simultaneously have the following
properties: (a) it requires only ${\cal O}(k)$ (order-optimal) measurements,
(b) the computational complexity of decoding is ${\cal O}(k\log k)$ (near
order-optimal) arithmetic operations.
|
1401.4271 | R\'enyi entropies and nonlinear diffusion equations | math-ph cs.IT math.IT math.MP | Since their introduction in the early sixties, the R\'enyi entropies have
been used in many contexts, ranging from information theory to astrophysics,
turbulence phenomena and others. In this note, we enlighten the main
connections between R\'enyi entropies and nonlinear diffusion equations. In
particular, it is shown that these relationships allow one to prove various
functional inequalities in sharp form.
|
1401.4273 | Nuclear Norm Subspace Identification (N2SID) for short data batches | cs.SY | Subspace identification is revisited in the scope of nuclear norm
minimization methods. It is shown that essential structural knowledge about the
unknown data matrices in the data equation that relates Hankel matrices
constructed from input and output data can be used in the first step of the
numerical solution presented. The structural knowledge comprises the low rank
property of a matrix that is the product of the extended observability matrix
and the state sequence and the Toeplitz structure of the matrix of Markov
parameters (of the system in innovation form). The new subspace identification
method is referred to as the N2SID (twice the N of Nuclear Norm and SID for
Subspace IDentification) method. In addition to including key structural
knowledge in the solution, it integrates the subspace calculation with
minimization of a classical prediction error cost function. The nuclear norm
relaxation enables us to perform such integration while preserving convexity.
The advantages of N2SID are demonstrated in a numerical open- and closed-loop
simulation study. Here a comparison is made with another widely used SID
method, i.e. N4SID. The comparison focuses on identification with short
data batches, i.e. where the number of measurements is a small multiple of the
system order.
|
1401.4276 | Modeling Emotion Influence from Images in Social Networks | cs.SI cs.HC cs.MM | Images become an important and prevalent way to express users' activities,
opinions and emotions. In a social network, individual emotions may be
influenced by others, in particular by close friends. We focus on understanding
how users embed emotions into the images they uploaded to the social websites
and how social influence plays a role in changing users' emotions. We first
verify the existence of emotion influence in the image networks, and then
propose a probabilistic factor graph based emotion influence model to answer
the questions of "who influences whom". Employing a real network from Flickr as
experimental data, we study the effectiveness of factors in the proposed model
with in-depth data analysis. Our experiments also show that our model, by
incorporating the emotion influence, can significantly improve the accuracy
(+5%) for predicting emotions from images. Finally, a case study is used as the
anecdotal evidence to further demonstrate the effectiveness of the proposed
model.
|
1401.4312 | Super-Resolution Compressed Sensing: An Iterative Reweighted Algorithm
for Joint Parameter Learning and Sparse Signal Recovery | cs.IT math.IT | In many practical applications such as direction-of-arrival (DOA) estimation
and line spectral estimation, the sparsifying dictionary is usually
characterized by a set of unknown parameters in a continuous domain. To apply
the conventional compressed sensing to such applications, the continuous
parameter space has to be discretized to a finite set of grid points.
Discretization, however, incurs errors and leads to deteriorated recovery
performance. To address this issue, we propose an iterative reweighted method
which jointly estimates the unknown parameters and the sparse signals.
Specifically, the proposed algorithm is developed by iteratively decreasing a
surrogate function majorizing a given objective function, which results in a
gradual and interweaved iterative process to refine the unknown parameters and
the sparse signal. Numerical results show that the algorithm provides superior
performance in resolving closely-spaced frequency components.
|
1401.4313 | Robust Bayesian compressed sensing over finite fields: asymptotic
performance analysis | cs.IT math.IT | This paper addresses the topic of robust Bayesian compressed sensing over
finite fields. For stationary and ergodic sources, it provides asymptotic (in
the size of the vector to be estimated) necessary and sufficient conditions on
the number of required measurements to achieve vanishing reconstruction error
in the presence of sensing and communication noise. In all considered cases,
the necessary and sufficient conditions asymptotically coincide. Conditions on
the sparsity of the sensing matrix are established in the presence of
communication
|
1401.4335 | On the Controllability and Observability of Networked Dynamic Systems | cs.SY | Some necessary and sufficient conditions are obtained for the controllability
and observability of a networked system with linear time invariant (LTI)
dynamics. The topology of this system is fixed but arbitrary, and every
subsystem is permitted to have different dynamic input-output relations. These
conditions essentially depend only on transmission zeros of every subsystem and
the connection matrix among subsystems, which makes them attractive in the
analysis and synthesis of a large scale networked system. As an application,
these conditions are utilized to characterize systems whose steady state
estimation accuracy with the distributed predictor developed in (Zhou, 2013) is
equal to that of the lumped Kalman filter. Some necessary and sufficient
conditions on system matrices are derived for this equivalence. It has been
made clear that to guarantee this equivalence, the steady state update gain
matrix of the Kalman filter must be block diagonal.
|
1401.4337 | On the Design of Fast Convergent LDPC Codes: An Optimization Approach | cs.IT math.IT | The complexity-performance trade-off is a fundamental aspect of the design of
low-density parity-check (LDPC) codes. In this paper, we consider LDPC codes
for the binary erasure channel (BEC), use the code rate as the performance
metric, and the number of decoding iterations needed to achieve a certain
residual erasure probability as the complexity metric. We first propose a quite
accurate approximation of the
number of iterations for the BEC. Moreover, a simple but efficient utility
function corresponding to the number of iterations is developed. Using the
aforementioned approximation and the utility function, two optimization
problems w.r.t. complexity are formulated to find the code degree
distributions. We show that both optimization problems are convex. In
particular, the problem with the proposed approximation belongs to the class of
semi-infinite problems, which are computationally challenging to solve.
However, the problem with the proposed utility function falls into the class of
semi-definite programming (SDP) and thus, the global solution can be found
efficiently using available SDP solvers. Numerical results reveal the
superiority of the proposed code design compared to existing code designs from
literature.
|
1401.4381 | Intelligent Techniques for Resolving Conflicts of Knowledge in
Multi-Agent Decision Support Systems | cs.MA | This paper focuses on some of the key intelligent techniques for conflict
resolution in Multi-Agent Decision Support Systems.
|
1401.4383 | On the Hegselmann-Krause conjecture in opinion dynamics | math.DS cs.SI | We give an elementary proof of a conjecture by Hegselmann and Krause in
opinion dynamics, concerning a symmetric bounded confidence interval model: If
there is a truth and all individuals take each other seriously by a positive
amount bounded away from zero, then all truth seekers will converge to the
truth. Here truth seekers are the individuals who are attracted to the truth
by a positive amount. In the absence of truth seekers it was already shown by
Hegselmann and Krause that the opinions of the individuals converge.
|
1401.4387 | A Multiple Network Approach to Corporate Governance | q-fin.GN cs.SI physics.soc-ph | In this work, we consider Corporate Governance (CG) ties among companies from
a multiple network perspective. Such a structure naturally arises from the
close interrelation between the Shareholding Network (SH) and the Board of
Directors network (BD). In order to capture the simultaneous effects of both
networks on CG, we propose to model the CG multiple network structure via
tensor analysis. In particular, we consider the TOPHITS model, based on the
PARAFAC tensor decomposition, to show that tensor techniques can be
successfully applied in this context. By providing some empirical results from
the Italian financial market in the univariate case, we then show that a
tensor-based multiple network approach can reveal important information.
|
1401.4436 | Cause Identification from Aviation Safety Incident Reports via Weakly
Supervised Semantic Lexicon Construction | cs.CL cs.LG | The Aviation Safety Reporting System collects voluntarily submitted reports
on aviation safety incidents to facilitate research work aiming to reduce such
incidents. To effectively reduce these incidents, it is vital to accurately
identify why these incidents occurred. More precisely, given a set of possible
causes, or shaping factors, this task of cause identification involves
identifying all and only those shaping factors that are responsible for the
incidents described in a report. We investigate two approaches to cause
identification. Both approaches exploit information provided by a semantic
lexicon, which is automatically constructed via Thelen and Riloff's Basilisk
framework augmented with our linguistic and algorithmic modifications. The
first approach labels a report using a simple heuristic, which looks for the
words and phrases acquired during the semantic lexicon learning process in the
report. The second approach recasts cause identification as a text
classification problem, employing supervised and transductive text
classification algorithms to learn models from incident reports labeled with
shaping factors and using the models to label unseen reports. Our experiments
show that both the heuristic-based approach and the learning-based approach
(when given sufficient training data) outperform the baseline system
significantly.
|
1401.4446 | Humanoid Robot With Vision Recognition Control System | cs.RO | This paper presents a solution to controlling humanoid robotic systems. The
robot can be programmed to execute certain complex actions based on basic
motion primitives. The humanoid robot is programmed using a PC. The software
running on the PC can obtain at any given moment information about the state of
the robot, or it can program the robot to execute a different action, providing
the possibility of implementing a complex behavior. We want to provide the
robotic system with the ability to understand more of the external real world.
In this paper we describe a method for detecting ellipses in real-world images
using the Randomized Hough Transform with Result Clustering. Real-world images
are preprocessed (noise reduction, greyscale transform, edge detection, and
finally binarization) in order to be processed by the actual ellipse detector.
After all the ellipses are detected a post processing phase clusters the
results.
|
1401.4447 | Leaf Classification Using Shape, Color, and Texture Features | cs.CV cs.CY | Several methods to identify plants have been proposed by several researchers.
Commonly, the methods did not capture color information, because color was not
recognized as an important aspect to the identification. In this research,
shape and vein, color, and texture features were incorporated to classify a
leaf. In this case, a neural network called Probabilistic Neural network (PNN)
was used as a classifier. The experimental result shows that the method for
classification gives an average accuracy of 93.75% when tested on the Flavia
dataset, which contains 32 kinds of plant leaves. This means that the method
gives better performance compared to the original work.
|
1401.4451 | Reliable, Deniable, and Hidable Communication over Multipath Networks | cs.IT math.IT | We consider the scenario wherein Alice wants to (potentially) communicate to
the intended receiver Bob over a network consisting of multiple parallel links
in the presence of a passive eavesdropper Willie, who observes an unknown
subset of links. A primary goal of our communication protocol is to make the
communication "deniable", {\it i.e.}, Willie should not be able to {\it
reliably} estimate whether or not Alice is transmitting any {\it covert}
information to Bob. Moreover, if Alice is indeed actively communicating, her
covert messages should be information-theoretically "hidable" in the sense that
Willie's observations should not {\it leak any information} about Alice's
(potential) message to Bob -- our notion of hidability is slightly stronger
than the notion of information-theoretic strong secrecy well-studied in the
literature, and may be of independent interest. It can be shown that
deniability does not imply either hidability or (weak or strong)
information-theoretic secrecy; nor does any form of information-theoretic
secrecy imply deniability. We present matching inner and outer bounds on the
capacity for deniable and hidable communication over {\it multipath networks}.
|
1401.4472 | YASCA: A collective intelligence approach for community detection in
complex networks | cs.SI physics.soc-ph | In this paper we present an original approach for community detection in
complex networks. The approach belongs to the family of seed-centric
algorithms. However, instead of expanding communities around selected seeds as
most existing approaches do, we explore applying an ensemble clustering
approach to different network partitions derived from the ego-centered
communities computed for each selected seed. Ego-centered communities are
themselves computed by applying a recently proposed ensemble-ranking-based
approach that allows various local modularities to be combined efficiently to
guide a greedy optimisation process. Results of first experiments on real-world
networks, for which a ground-truth decomposition into communities is known,
argue for the validity of our approach.
|
1401.4473 | The Impact of the Topology on Cascading Failures in Electric Power Grids | physics.soc-ph cs.SY | Cascading failures are one of the main reasons for blackouts in power
transmission grids. The topology of a power grid, together with its operative
state determine, for the most part, the robustness of the power grid against
cascading failures. Secure electrical power supply requires, together with
careful operation, a robust design of the electrical power grid topology. This
paper investigates the impact of a power grid topology on its robustness
against cascading failures. Currently, the impact of the topology on grid
robustness is mainly assessed using purely topological approaches that fail
to capture the essence of electric power flow. This paper proposes a metric,
the effective graph resistance, that relates the topology of a power grid to
its robustness against cascading failures by deliberate attacks, while also
taking the fundamental characteristics of the electric power grid into account
such as power flow allocation according to Kirchhoff's laws. Experimental
verification shows that the proposed metric anticipates the grid robustness
accurately. The proposed metric is used to optimize a grid topology for a
higher level of robustness. To demonstrate its applicability, the metric is
applied on the IEEE 118 bus power system to improve its robustness against
cascading failures.
|
1401.4484 | Constrained Codes for Rank Modulation | cs.IT math.IT | Motivated by the rank modulation scheme, a recent work by Sala and Dolecek
explored the study of constrained codes for permutations. The constraint they
studied is inherited from the inter-cell interference phenomenon in flash
memories, where high-level cells can inadvertently increase the level of
low-level cells.
In this paper, the model studied by Sala and Dolecek is extended into two
constraints. A permutation $\sigma \in S_n$ satisfies the \emph{two-neighbor
$k$-constraint} if for all $2 \leq i \leq n-1$ either
$|\sigma(i-1)-\sigma(i)|\leq k$ or $|\sigma(i)-\sigma(i+1)|\leq k$, and it
satisfies the \emph{asymmetric two-neighbor $k$-constraint} if for all $2 \leq
i \leq n-1$, either $\sigma(i-1)-\sigma(i) < k$ or $\sigma(i+1)-\sigma(i) < k$.
We show that the capacity of the first constraint is $(1+\epsilon)/2$ in the
case that $k=\Theta(n^{\epsilon})$, and that the capacity of the second
constraint is 1 regardless of the value of $k$. We also extend our results and
study the
capacity of these two constraints combined with error-correction codes in the
Kendall's $\tau$ metric.
|
1401.4489 | An Analysis of Random Projections in Cancelable Biometrics | cs.CV cs.LG stat.ML | With increasing concerns about security, the need for highly secure physical
biometrics-based authentication systems utilizing \emph{cancelable biometric}
technologies is on the rise. Because the problem of cancelable template
generation deals with the trade-off between template security and matching
performance, many state-of-the-art algorithms successful in generating high
quality cancelable biometrics all have random projection as one of their early
processing steps. This paper therefore presents a formal analysis of why random
projections is an essential step in cancelable biometrics. By formally defining
the notion of an \textit{Independent Subspace Structure} for datasets, it can
be shown that random projection preserves the subspace structure of data
vectors generated from a union of independent linear subspaces. The bound on
the minimum number of random vectors required for this to hold is also derived
and is shown to depend logarithmically on the number of data samples, not only
in independent subspaces but in disjoint subspace settings as well. The
theoretical analysis presented is supported in detail with empirical results on
real-world face recognition datasets.
|
1401.4506 | Detection and Decoding for 2D Magnetic Recording Channels with 2D
Intersymbol Interference | cs.IT math.IT | This paper considers iterative detection and decoding on the concatenated
communication channel consisting of a two-dimensional magnetic recording (TDMR)
channel modeled by the four-grain rectangular discrete grain model (DGM)
proposed by Kavcic et al., followed by a two-dimensional intersymbol
interference (2D-ISI) channel modeled by linear convolution of the DGM model's
output with a finite-extent 2D blurring mask followed by addition of white
Gaussian noise. An iterative detection and decoding scheme combines TDMR
detection, 2D-ISI detection, and soft-in/soft-out (SISO) channel decoding in a
structure with two iteration loops. In the first loop, the 2D-ISI channel
detector exchanges log-likelihood ratios (LLRs) with the TDMR detector. In the
second loop, the TDMR detector exchanges LLRs with a serially concatenated
convolutional code (SCCC) decoder. Simulation results for the concatenated TDMR
and 2 x 2 averaging mask ISI channel with 10 dB SNR show that densities of 0.48
user bits per grain and above can be achieved, corresponding to an areal
density of about 9.6 Terabits per square inch, over the entire range of grain
probabilities in the TDMR model.
|
1401.4509 | When and By How Much Can Helper Node Selection Improve Regenerating
Codes? | cs.IT math.IT | Regenerating codes (RCs) can significantly reduce the repair-bandwidth of
distributed storage networks. Initially, the analysis of RCs was based on the
assumption that during the repair process, the newcomer does not distinguish
(among all surviving nodes) which nodes to access, i.e., the newcomer is
oblivious to the set of helpers being used. Such a scheme is termed the blind
repair (BR) scheme. Nonetheless, it is intuitive in practice that the newcomer
should choose to access only those "good" helpers. In this paper, a new
characterization of the effect of choosing the helper nodes in terms of the
storage-bandwidth tradeoff is given. Specifically, answers to the following
fundamental questions are given: Under what conditions does proactively
choosing the helper nodes improve the storage-bandwidth tradeoff? Can this
improvement be analytically quantified?
This paper answers the former question by providing a necessary and
sufficient condition under which optimally choosing good helpers strictly
improves the storage-bandwidth tradeoff. To answer the latter question, a
low-complexity helper selection solution, termed the family repair (FR) scheme,
is proposed and the corresponding storage/repair-bandwidth curve is
characterized. For example, consider a distributed storage network with 60
nodes in total that is resilient against 50 node failures. If the number of
helper nodes is 10, then the FR scheme and its variant demonstrate a 27%
reduction in the repair-bandwidth when compared to the BR
solution. This paper also proves that under some design parameters, the FR
scheme is indeed optimal among all helper selection schemes. An explicit
construction of an exact-repair code is also proposed that can achieve the
minimum-bandwidth-regenerating point of the FR scheme. The new exact-repair
code can be viewed as a generalization of the existing fractional repetition
code.
|
1401.4529 | General factorization framework for context-aware recommendations | cs.IR cs.LG | Context-aware recommendation algorithms focus on refining recommendations by
considering additional information available to the system. This topic has
gained a lot of attention recently. Among others, several factorization methods
were proposed to solve the problem, although most of them assume explicit
feedback, which strongly limits their real-world applicability. While these
algorithms apply various loss functions and optimization strategies, the
preference modeling under context is less explored due to the lack of tools
allowing for easy experimentation with various models. As context dimensions
are introduced beyond users and items, the space of possible preference models
and the importance of proper modeling largely increases.
In this paper we propose a General Factorization Framework (GFF), a single
flexible algorithm that takes the preference model as an input and computes
latent feature matrices for the input dimensions. GFF allows us to easily
experiment with various linear models on any context-aware recommendation task,
be it explicit or implicit feedback based. Its scaling properties make it
usable under real-life circumstances as well.
We demonstrate the framework's potential by exploring various preference
models on a 4-dimensional context-aware problem with contexts that are
available for almost any real-life dataset. We show in our experiments --
performed on five real-life, implicit feedback datasets -- that proper
preference modelling significantly increases recommendation accuracy, and
previously unused models outperform the traditional ones. Novel models in GFF
also outperform state-of-the-art factorization algorithms.
We also extend the method to be fully compliant to the Multidimensional
Dataspace Model, one of the most extensive data models of context-enriched
data. Extended GFF allows the seamless incorporation of information into the
fac[truncated]
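The abstract is cut off here. As an illustration of the kind of preference model such a framework can take as input, the sketch below implements a hypothetical pairwise-interaction model over four dimensions (user, item, and two context dimensions). All names, sizes, and the one-example SGD loop are illustrative assumptions, not the paper's GFF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4  # latent dimension (illustrative)
sizes = {"user": 5, "item": 6, "season": 4, "weekday": 7}
# one latent feature matrix per input dimension, as in a factorization framework
F = {d: 0.1 * rng.standard_normal((n, k)) for d, n in sizes.items()}

def predict(idx):
    """Score = sum of dot products over all pairs of dimensions,
    one of many linear preference models such a framework could accept."""
    dims = list(idx)
    s = 0.0
    for i in range(len(dims)):
        for j in range(i + 1, len(dims)):
            a, b = dims[i], dims[j]
            s += F[a][idx[a]] @ F[b][idx[b]]
    return s

def sgd_step(idx, target, lr=0.1):
    """One squared-error SGD step on a single (implicit-feedback) example."""
    err = predict(idx) - target
    # gradient of the pairwise score w.r.t. each dimension's latent vector
    grads = {a: sum(F[b][idx[b]] for b in idx if b != a) for a in idx}
    for d in idx:
        F[d][idx[d]] -= lr * err * grads[d]

example = {"user": 2, "item": 3, "season": 1, "weekday": 5}
before = predict(example)
for _ in range(50):
    sgd_step(example, 1.0)  # observed event, target preference 1
after = predict(example)
```

Fitting a single example is of course not a training procedure; the point is only how the pairwise preference model and its gradients decompose per dimension.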
|
1401.4532 | Polar Lattices for Strong Secrecy Over the Mod-$\Lambda$ Gaussian
Wiretap Channel | cs.IT math.IT | Polar lattices, which are constructed from polar codes, are provably good for
the additive white Gaussian noise (AWGN) channel. In this work, we propose a
new polar lattice construction that achieves the secrecy capacity under the
strong secrecy criterion over the mod-$\Lambda$ Gaussian wiretap channel. This
construction leads to an AWGN-good lattice and a secrecy-good lattice
simultaneously. The design methodology is mainly based on the equivalence in
terms of polarization between the $\Lambda/\Lambda'$ channel in lattice coding
and the equivalent channel derived from the chain rule of mutual information in
multilevel coding.
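The lattice construction itself is not reproduced here, but the polarization phenomenon the design rests on can be illustrated on the simplest case, the binary erasure channel, where the Bhattacharyya parameters of the synthetic channels evolve by a known closed-form recursion. This is a standard textbook illustration, not the paper's mod-$\Lambda$ construction:

```python
def polarize_bec(eps, n_levels):
    """Bhattacharyya parameters of the 2^n synthetic channels obtained by
    recursively splitting a binary erasure channel BEC(eps):
    the degraded child maps z -> 2z - z^2, the upgraded child z -> z^2."""
    zs = [eps]
    for _ in range(n_levels):
        nxt = []
        for z in zs:
            nxt.append(2 * z - z * z)  # degraded (worse) synthetic channel
            nxt.append(z * z)          # upgraded (better) synthetic channel
        zs = nxt
    return zs

zs = polarize_bec(0.5, 10)                       # 1024 synthetic channels
mean_z = sum(zs) / len(zs)                       # erasure rate is conserved
frac_good = sum(z < 1e-2 for z in zs) / len(zs)  # nearly noiseless channels
frac_bad = sum(z > 1 - 1e-2 for z in zs) / len(zs)
```

Most synthetic channels end up near 0 or near 1, which is the polarization that both polar codes and polar lattices exploit.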
|
1401.4533 | Government and Social Media: A Case Study of 31 Informational World
Cities | cs.CY cs.DL cs.SI physics.soc-ph | Social media platforms are increasingly being used by governments to foster
user interaction. Particularly in cities with enhanced ICT infrastructures
(i.e., Informational World Cities) and high internet penetration rates, social
media platforms are valuable tools for reaching high numbers of citizens. This
empirical investigation of 31 Informational World Cities will provide an
overview of social media services used for governmental purposes, of their
popularity among governments, and of their usage intensity in broadcasting
information online.
|
1401.4539 | Solving the Minimum Common String Partition Problem with the Help of
Ants | cs.AI | In this paper, we consider the problem of finding a minimum common partition
of two strings. The problem has its application in genome comparison. As it is
an NP-hard, discrete combinatorial optimization problem, we employ a
metaheuristic technique, namely, MAX-MIN ant system to solve this problem. To
achieve better efficiency we first map the problem instance into a special kind
of graph. Subsequently, we employ a MAX-MIN ant system to achieve high quality
solutions for the problem. Experimental results show the superiority of our
algorithm in comparison with the state-of-the-art algorithm in the literature.
The improvement achieved is also justified by a standard statistical test.
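For reference, the problem itself is easy to state: split both strings into the same multiset of blocks using as few blocks as possible. A brute-force ground-truth solver for tiny instances (nothing like the paper's MAX-MIN ant system, which targets realistic instance sizes) might look like:

```python
from itertools import combinations, permutations

def mcsp(x: str, y: str) -> int:
    """Minimum common string partition by exhaustive search.
    Exponential time; only usable as a reference on tiny inputs."""
    if sorted(x) != sorted(y):
        raise ValueError("strings must be anagrams of each other")
    n = len(x)
    for k in range(1, n + 1):          # try partition sizes in increasing order
        for cuts in combinations(range(1, n), k - 1):
            pts = (0,) + cuts + (n,)
            blocks = [x[pts[i]:pts[i + 1]] for i in range(k)]
            # y must be a concatenation of the same blocks in some order
            if any("".join(p) == y for p in permutations(blocks)):
                return k
    return n  # unreachable for anagrams: single characters always work
```

For example, `mcsp("abc", "cab")` is 2, realized by the blocks `"ab"` and `"c"`.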
|
1401.4543 | On the Potential of Twitter for Understanding the Tunisia of the
Post-Arab Spring | cs.SI cs.CY physics.soc-ph | Micro-blogging through Twitter has made information short and to the point,
and more importantly systematically searchable. This work is the first of a
series in which quotidian observations about Tunisia are obtained using the
micro-blogging site Twitter. Data was extracted using the open source Twitter
API v1.1. Specific tweets were obtained using functional search operators in
particular thematic hashtags, geo-location, date, time and language. The
presence of Tunisia in the international tweet stream, the language of
communication of Tunisian residents through Twitter as well as Twitter usage
across Tunisia are the center of attention of this article.
|
1401.4566 | Excess Risk Bounds for Exponentially Concave Losses | cs.LG stat.ML | The overarching goal of this paper is to derive excess risk bounds for
learning from exp-concave loss functions in passive and sequential learning
settings. Exp-concave loss functions encompass several fundamental problems in
machine learning such as squared loss in linear regression, logistic loss in
classification, and negative logarithm loss in portfolio management. In the batch
setting, we obtain sharp bounds on the performance of empirical risk
minimization performed in a linear hypothesis space and with respect to the
exp-concave loss functions. We also extend the results to the online setting
where the learner receives the training examples in a sequential manner. We
propose an online learning algorithm that is a properly modified version of
the online Newton method to obtain sharp risk bounds. Under an additional mild
assumption on the loss function, we show that in both settings we are able to
achieve an excess risk bound of $O(d\log n/n)$ that holds with a high
probability.
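A twice-differentiable $f$ is $\alpha$-exp-concave exactly when $f'' \geq \alpha (f')^2$ on its domain, since then $\exp(-\alpha f)$ has a nonpositive second derivative. The check below applies this criterion numerically to the logistic loss on a bounded interval; the bound $B$ and the grid are illustrative choices, not the paper's assumptions.

```python
import math

def logistic_derivs(w):
    """First and second derivatives of f(w) = log(1 + exp(-w))."""
    s = 1.0 / (1.0 + math.exp(-w))   # sigmoid
    return s - 1.0, s * (1.0 - s)    # f'(w), f''(w)

def is_alpha_exp_concave(alpha, B, grid=1001, tol=1e-12):
    """f is alpha-exp-concave iff f'' >= alpha * (f')^2 on the domain;
    checked numerically on a grid over [-B, B]."""
    for i in range(grid):
        w = -B + i * (2 * B) / (grid - 1)
        f1, f2 = logistic_derivs(w)
        if f2 < alpha * f1 * f1 - tol:
            return False
    return True

B = 2.0
ok_small = is_alpha_exp_concave(math.exp(-B), B)  # alpha = e^{-B} works
ok_one = is_alpha_exp_concave(1.0, B)             # alpha = 1 fails on [-B, B]
```

The condition is tight at $w = -B$, where $f''/(f')^2 = e^{-B}$ exactly, which is why the admissible exp-concavity parameter shrinks with the domain radius.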
|
1401.4575 | Various Views on the Trapdoor Channel and an Upper Bound on its Capacity | cs.IT math.IT | Two novel views are presented on the trapdoor channel. First, by deriving the
underlying iterated function system (IFS), it is shown that the trapdoor
channel with input blocks of length $n$ can be regarded as the $n$th element of
a sequence of shapes approximating a fractal. Second, an algorithm is presented
that fully characterizes the trapdoor channel and resembles the recursion of
generating all permutations of a given string. Subsequently, the problem of
maximizing an $n$-letter mutual information is considered. It is shown that
$\frac{1}{2}\log_2\left(\frac{5}{2}\right)\approx 0.6610$ bits per use is an
upper bound on the capacity of the trapdoor channel. This bound, the tightest
known, proves that feedback increases the capacity of the trapdoor channel.
|
1401.4580 | Graph eigenvectors, fundamental weights and centrality metrics for nodes
in networks | math.SP cond-mat.stat-mech cs.DM cs.SI physics.soc-ph | Several expressions for the $j$-th component $\left( x_{k}\right)_{j}$ of the
$k$-th eigenvector $x_{k}$ of a symmetric matrix $A$ belonging to eigenvalue
$\lambda_{k}$ and normalized as $x_{k}^{T}x_{k}=1$ are presented. In
particular, the expression \[ \left(
x_{k}\right)_{j}^{2}=-\frac{1}{c_{A}^{\prime}\left( \lambda_{k}\right)
}\det\left( A_{\backslash\left\{ j\right\} }-\lambda_{k}I\right) \] where
$c_{A}\left( \lambda\right) =\det\left( A-\lambda I\right) $ is the
characteristic polynomial of $A$, $c_{A}^{\prime}\left( \lambda\right)
=\frac{dc_{A}\left( \lambda\right) }{d\lambda}$ and $A_{\backslash\left\{
j\right\} }$ is obtained from $A$ by removal of row $j$ and column $j$,
suggests considering the squared eigenvector component as a graph centrality
metric for node $j$ that reflects the impact of the removal of node $j$ from
the graph at an eigenfrequency/eigenvalue $\lambda_{k}$ of a graph related
matrix (such as the adjacency or Laplacian matrix). Removal of nodes in a graph
relates to the robustness of a graph. The set of such nodal centrality metrics,
the squared eigenvector components $\left( x_{k}\right)_{j}^{2}$ of the
adjacency matrix over all eigenvalues $\lambda_{k}$ for each node $j$, is
'ideal' in the sense of being complete, \emph{almost} uncorrelated and
mathematically precisely defined and computable. Fundamental weights (column
sum of $X$) and dual fundamental weights (row sum of $X$) are introduced as
spectral metrics that condense information embedded in the orthogonal
eigenvector matrix $X$, with elements $X_{ij}=\left( x_{j}\right)_{i}$.
In addition to the criterion {\em If the algebraic connectivity is positive,
then the graph is connected}, we found an alternative condition: {\em If
$\min_{1\leq k\leq N}\left( \lambda_{k}^{2}(A)\right) =d_{\min}$, then the
graph is disconnected.}
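The displayed identity is easy to verify numerically for a small symmetric matrix with distinct eigenvalues; the matrix below is an arbitrary example, not taken from the paper.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # symmetric, with distinct eigenvalues
lam, X = np.linalg.eigh(A)       # column k of X is the unit eigenvector x_k
n = len(lam)

errs = []
for k in range(n):
    # c_A(l) = det(A - lI) = prod_i (lam_i - l), so
    # c'_A(lam_k) = -prod_{i != k} (lam_i - lam_k)
    c_prime = -np.prod([lam[i] - lam[k] for i in range(n) if i != k])
    for j in range(n):
        # A with row j and column j removed
        minor = np.delete(np.delete(A, j, axis=0), j, axis=1)
        rhs = -np.linalg.det(minor - lam[k] * np.eye(n - 1)) / c_prime
        errs.append(abs(X[j, k] ** 2 - rhs))
max_err = max(errs)  # all nine (j, k) pairs satisfy the identity
```

This is the eigenvector-eigenvalue identity specialized to the normalization $x_k^T x_k = 1$; `max_err` is at floating-point noise level.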
|
1401.4589 | miRNA and Gene Expression based Cancer Classification using Self-
Learning and Co-Training Approaches | cs.CE cs.LG | miRNA and gene expression profiles have been proved useful for classifying
cancer samples. Efficient classifiers have been recently sought and developed.
A number of attempts to classify cancer samples using miRNA/gene expression
profiles are known in the literature. However, semi-supervised learning models
have recently been used in bioinformatics to exploit the huge corpora of
publicly available data sets. Using both labeled and unlabeled sets to train
sample classifiers has not been previously considered when gene and miRNA
expression sets are used. Moreover, there is a motivation to integrate both
miRNA and gene expression for a semi-supervised cancer classification as that
provides more information on the characteristics of cancer samples. In this
paper, two semi-supervised machine learning approaches, namely self-learning
and co-training, are adapted to enhance the quality of cancer sample
classification. These approaches exploit the huge public corpora to enrich the
training data. In self-learning, miRNA- and gene-based classifiers are enhanced
independently, while in co-training both miRNA and gene expression profiles
are used simultaneously to provide different views of cancer samples. To our
knowledge, it is the first attempt to apply these learning approaches to cancer
classification. The approaches were evaluated using breast cancer,
hepatocellular carcinoma (HCC) and lung cancer expression sets. Results show up
to 20% improvement in F1-measure over Random Forests and SVM classifiers.
Co-Training also outperforms Low Density Separation (LDS) approach by around
25% improvement in F1-measure in breast cancer.
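The self-learning loop itself is simple to sketch: train on the labeled samples, pseudo-label the most confidently predicted unlabeled samples, and retrain. The toy below uses synthetic data and a nearest-centroid classifier as illustrative stand-ins for the expression sets and the Random Forest/SVM classifiers evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# two well-separated synthetic classes standing in for expression profiles
# (hypothetical toy data, not the cancer data sets used in the paper)
X = np.vstack([rng.normal(0.0, 0.5, (50, 10)),
               rng.normal(3.0, 0.5, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
labeled = [0, 1, 2, 50, 51, 52]   # only a handful of labeled samples
pool = [i for i in range(100) if i not in labeled]

def centroids(Xl, yl):
    return np.stack([Xl[yl == c].mean(axis=0) for c in (0, 1)])

def predict(cent, Xq):
    d = np.stack([np.linalg.norm(Xq - c, axis=1) for c in cent], axis=1)
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])  # labels, confidence

Xl, yl = X[labeled], y[labeled]
for _ in range(5):                       # self-learning rounds
    pred, conf = predict(centroids(Xl, yl), X[pool])
    top = np.argsort(conf)[-10:]         # most confident pseudo-labels
    Xl = np.vstack([Xl, X[pool][top]])
    yl = np.concatenate([yl, pred[top]])
    pool = [p for i, p in enumerate(pool) if i not in set(top.tolist())]

final_pred, _ = predict(centroids(Xl, yl), X)
accuracy = float((final_pred == y).mean())
```

Co-training follows the same pattern, except that two classifiers trained on different views (here, miRNA and gene expression) pseudo-label samples for each other.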
|
1401.4590 | Combining Evaluation Metrics via the Unanimous Improvement Ratio and its
Application to Clustering Tasks | cs.AI cs.LG | Many Artificial Intelligence tasks cannot be evaluated with a single quality
criterion and some sort of weighted combination is needed to provide system
rankings. A problem of weighted combination measures is that slight changes in
the relative weights may produce substantial changes in the system rankings.
This paper introduces the Unanimous Improvement Ratio (UIR), a measure that
complements standard metric combination criteria (such as van Rijsbergen's
F-measure) and indicates how robust the measured differences are to changes in
the relative weights of the individual metrics. UIR is meant to elucidate
whether a perceived difference between two systems is an artifact of how
individual metrics are weighted.
Besides discussing the theoretical foundations of UIR, this paper presents
empirical results that confirm the validity and usefulness of the metric for
the Text Clustering problem, where there is a tradeoff between precision and
recall based metrics and results are particularly sensitive to the weighting
scheme used to combine them. Remarkably, our experiments show that UIR can be
used as a predictor of how well differences between systems measured on a given
test bed will also hold in a different test bed.
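The abstract does not give UIR's formula. One plausible formulation, stated here as an assumption rather than the paper's exact definition, counts the test cases on which one system is at least as good as the other on every metric:

```python
def unanimous_improvement_ratio(scores_a, scores_b):
    """scores_*: per-test-case metric tuples, e.g. (precision, recall).
    Hypothetical formulation: normalized difference between the number of
    test cases where A is at least as good as B on *every* metric and the
    number where B is at least as good as A on every metric."""
    n = len(scores_a)
    a_ge_b = sum(all(x >= y for x, y in zip(sa, sb))
                 for sa, sb in zip(scores_a, scores_b))
    b_ge_a = sum(all(y >= x for x, y in zip(sa, sb))
                 for sa, sb in zip(scores_a, scores_b))
    return (a_ge_b - b_ge_a) / n

# A dominates B on 3 of 4 cases; on the last, the metrics trade off,
# so neither system unanimously improves on the other there
A = [(0.8, 0.7), (0.6, 0.6), (0.9, 0.5), (0.4, 0.9)]
B = [(0.7, 0.6), (0.6, 0.5), (0.8, 0.4), (0.6, 0.7)]
uir = unanimous_improvement_ratio(A, B)   # 3/4
```

Because a case only counts when *all* metrics agree, the value is invariant to how the individual metrics would be weighted, which is the robustness property the measure is meant to capture.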
|
1401.4592 | Proximity-Based Non-uniform Abstractions for Approximate Planning | cs.AI | In a deterministic world, a planning agent can be certain of the consequences
of its planned sequence of actions. Not so, however, in dynamic, stochastic
domains where Markov decision processes are commonly used. Unfortunately these
suffer from the curse of dimensionality: if the state space is a Cartesian
product of many small sets (dimensions), planning is exponential in the number
of those dimensions.
Our new technique exploits the intuitive strategy of selectively ignoring
various dimensions in different parts of the state space. The resulting
non-uniformity has strong implications, since the approximation is no longer
Markovian, requiring the use of a modified planner. We also use a spatial and
temporal proximity measure, which responds to continued planning as well as
movement of the agent through the state space, to dynamically adapt the
abstraction as planning progresses.
We present qualitative and quantitative results across a range of
experimental domains showing that an agent exploiting this novel approximation
method successfully finds solutions to the planning problem using much less
than the full state space. We assess and analyse the features of domains which
our method can exploit.
|
1401.4593 | Location-Based Reasoning about Complex Multi-Agent Behavior | cs.MA cs.AI | Recent research has shown that surprisingly rich models of human activity can
be learned from GPS (positional) data. However, most effort to date has
concentrated on modeling single individuals or statistical properties of groups
of people. Moreover, prior work focused solely on modeling actual successful
executions (and not failed or attempted executions) of the activities of
interest. We, in contrast, take on the task of understanding human
interactions, attempted interactions, and intentions from noisy sensor data in
a fully relational multi-agent setting. We use a real-world game of capture the
flag to illustrate our approach in a well-defined domain that involves many
distinct cooperative and competitive joint activities. We model the domain
using Markov logic, a statistical-relational language, and learn a theory that
jointly denoises the data and infers occurrences of high-level activities, such
as a player capturing an enemy. Our unified model combines constraints imposed
by the geometry of the game area, the motion model of the players, and by the
rules and dynamics of the game in a probabilistically and logically sound
fashion. We show that while it may be impossible to directly detect a
multi-agent activity due to sensor noise or malfunction, the occurrence of the
activity can still be inferred by considering both its impact on the future
behaviors of the people involved as well as the events that could have preceded
it. Further, we show that given a model of successfully performed multi-agent
activities, along with a set of examples of failed attempts at the same
activities, our system automatically learns an augmented model that is capable
of recognizing success and failure, as well as goals of people's actions with
high accuracy. We compare our approach with other alternatives and show that
our unified model, which takes into account not only relationships among
individual players, but also relationships among activities over the entire
length of a game, although more computationally costly, is significantly more
accurate. Finally, we demonstrate that explicitly modeling unsuccessful
attempts boosts performance on other important recognition tasks.
|
1401.4595 | Robust Local Search for Solving RCPSP/max with Durational Uncertainty | cs.AI | Scheduling problems in manufacturing, logistics and project management have
frequently been modeled using the framework of Resource Constrained Project
Scheduling Problems with minimum and maximum time lags (RCPSP/max). Due to the
importance of these problems, providing scalable solution schedules for
RCPSP/max problems is a topic of extensive research. However, all existing
methods for solving RCPSP/max assume that durations of activities are known
with certainty, an assumption that does not hold in real world scheduling
problems where unexpected external events such as manpower availability,
weather changes, etc. lead to delays or advances in completion of activities.
Thus, in this paper, our focus is on providing a scalable method for solving
RCPSP/max problems with durational uncertainty. To that end, we introduce the
robust local search method consisting of three key ideas: (a) Introducing and
studying the properties of two decision rule approximations used to compute
start times of activities with respect to dynamic realizations of the
durational uncertainty; (b) Deriving the expression for robust makespan of an
execution strategy based on decision rule approximations; and (c) A robust
local search mechanism to efficiently compute activity execution strategies
that are robust against durational uncertainty. Furthermore, we also provide
enhancements to local search that exploit temporal dependencies between
activities. Our experimental results illustrate that robust local search is
able to provide robust execution strategies efficiently.
|
1401.4596 | Unfounded Sets and Well-Founded Semantics of Answer Set Programs with
Aggregates | cs.LO cs.AI | Logic programs with aggregates (LPA) are one of the major linguistic
extensions to Logic Programming (LP). In this work, we propose a generalization
of the notions of unfounded set and well-founded semantics for programs with
monotone and antimonotone aggregates (LPAma programs). In particular, we
present a new notion of unfounded set for LPAma programs, which is a sound
generalization of the original definition for standard (aggregate-free) LP. On
this basis, we define a well-founded operator for LPAma programs, the fixpoint
of which is called well-founded model (or well-founded semantics) for LPAma
programs. The most important properties of unfounded sets and the well-founded
semantics for standard LP are retained by this generalization, notably
existence and uniqueness of the well-founded model, together with a strong
relationship to the answer set semantics for LPAma programs. We show that one
of the D-well-founded semantics, defined by Pelov, Denecker, and Bruynooghe for
a broader class of aggregates using approximating operators, coincides with the
well-founded model as defined in this work on LPAma programs. We also discuss
some complexity issues, most importantly we give a formal proof of tractable
computation of the well-founded model for LPA programs. Moreover, we prove that
for general LPA programs, which may contain aggregates that are neither
monotone nor antimonotone, deciding satisfaction of aggregate expressions with
respect to partial interpretations is coNP-complete. As a consequence, a
well-founded semantics for general LPA programs that allows for tractable
computation is unlikely to exist, which justifies the restriction on LPAma
programs. Finally, we present a prototype system extending DLV, which supports
the well-founded semantics for LPAma programs, at the time of writing the only
implemented system that does so. Experiments with this prototype show
significant computational advantages of aggregate constructs over equivalent
aggregate-free encodings.
|
1401.4597 | Dr.Fill: Crosswords and an Implemented Solver for Singly Weighted CSPs | cs.AI | We describe Dr.Fill, a program that solves American-style crossword puzzles.
From a technical perspective, Dr.Fill works by converting crosswords to
weighted CSPs, and then using a variety of novel techniques to find a solution.
These techniques include generally applicable heuristics for variable and value
selection, a variant of limited discrepancy search, and postprocessing and
partitioning ideas. Branch and bound is not used, as it was incompatible with
postprocessing and was determined experimentally to be of little practical
value. Dr.Fill's performance on crosswords from the American Crossword Puzzle
Tournament suggests that it ranks among the top fifty or so crossword solvers
in the world.
|
1401.4598 | SAS+ Planning as Satisfiability | cs.AI | Planning as satisfiability is a principal approach to planning with many
eminent advantages. The existing planning as satisfiability techniques usually
use encodings compiled from STRIPS. We introduce a novel SAT encoding scheme
(SASE) based on the SAS+ formalism. The new scheme exploits the structural
information in SAS+, resulting in an encoding that is both more compact and
efficient for planning. We prove the correctness of the new encoding by
establishing an isomorphism between the solution plans of SASE and that of
STRIPS based encodings. We further analyze the transition variables newly
introduced in SASE to explain why it accommodates modern SAT solving algorithms
and improves performance. We give empirical statistical results to support our
analysis. We also develop a number of techniques to further reduce the encoding
size of SASE, and conduct experimental studies to show the strength of each
individual technique. Finally, we report extensive experimental results to
demonstrate significant improvements of SASE over the state-of-the-art STRIPS
based encoding schemes in terms of both time and memory efficiency.
|
1401.4599 | Learning and Reasoning with Action-Related Places for Robust Mobile
Manipulation | cs.RO cs.AI | We propose the concept of Action-Related Place (ARPlace) as a powerful and
flexible representation of task-related place in the context of mobile
manipulation. ARPlace represents robot base locations not as a single position,
but rather as a collection of positions, each with an associated probability
that the manipulation action will succeed when located there. ARPlaces are
generated using a predictive model that is acquired through experience-based
learning, and take into account the uncertainty the robot has about its own
location and the location of the object to be manipulated.
When executing the task, rather than choosing one specific goal position
based only on the initial knowledge about the task context, the robot
instantiates an ARPlace, and bases its decisions on this ARPlace, which is
updated as new information about the task becomes available. To show the
advantages of this least-commitment approach, we present a transformational
planner that reasons about ARPlaces in order to optimize symbolic plans. Our
empirical evaluation demonstrates that using ARPlaces leads to more robust and
efficient mobile manipulation in the face of state estimation uncertainty on
our simulated robot.
|
1401.4600 | Exploiting Model Equivalences for Solving Interactive Dynamic Influence
Diagrams | cs.AI | We focus on the problem of sequential decision making in partially observable
environments shared with other agents of uncertain types having similar or
conflicting objectives. This problem has been previously formalized by multiple
frameworks one of which is the interactive dynamic influence diagram (I-DID),
which generalizes the well-known influence diagram to the multiagent setting.
I-DIDs are graphical models and may be used to compute the policy of an agent
given its belief over the physical state and others' models, which changes as
the agent acts and observes in the multiagent setting.
As we may expect, solving I-DIDs is computationally hard. This is
predominantly due to the large space of candidate models ascribed to the other
agents and its exponential growth over time. We present two methods for
reducing the size of the model space and stemming its exponential growth. Both
these methods involve aggregating individual models into equivalence classes.
Our first method groups together behaviorally equivalent models and selects
only those models for updating which will result in predictive behaviors that
are distinct from others in the updated model space. The second method further
compacts the model space by focusing on portions of the behavioral predictions.
Specifically, we cluster actionally equivalent models that prescribe identical
actions at a single time step. Exactly identifying the equivalences would
require us to solve all models in the initial set. We avoid this by selectively
solving some of the models, thereby introducing an approximation. We discuss
the error introduced by the approximation, and empirically demonstrate the
improved efficiency in solving I-DIDs due to the equivalences.
|
1401.4601 | Counting-Based Search: Branching Heuristics for Constraint Satisfaction
Problems | cs.AI | Designing a search heuristic for constraint programming that is reliable
across problem domains has been an important research topic in recent years.
This paper concentrates on one family of candidates: counting-based search.
Such heuristics seek to make branching decisions that preserve most of the
solutions by determining what proportion of solutions to each individual
constraint agree with that decision. Whereas most generic search heuristics in
constraint programming rely on local information at the level of the individual
variable, our search heuristics are based on more global information at the
constraint level. We design several algorithms that are used to count the
number of solutions to specific families of constraints and propose some search
heuristics exploiting such information. The experimental part of the paper
considers eight problem domains ranging from well-established benchmark puzzles
to rostering and sport scheduling. An initial empirical analysis identifies
heuristic maxSD as a robust candidate among our proposals. We then evaluate the
latter against the state of the art, including the latest generic search
heuristics, restarts, and discrepancy-based tree traversals. Experimental
results show that counting-based search generally outperforms other generic
heuristics.
|
1401.4602 | Cloning in Elections: Finding the Possible Winners | cs.GT cs.MA | We consider the problem of manipulating elections by cloning candidates. In
our model, a manipulator can replace each candidate c by several clones, i.e.,
new candidates that are so similar to c that each voter simply replaces c in
his vote with a block of these new candidates, ranked consecutively. The
outcome of the resulting election may then depend on the number of clones as
well as on how each voter orders the clones within the block. We formalize what
it means for a cloning manipulation to be successful (which turns out to be a
surprisingly delicate issue), and, for a number of common voting rules,
characterize the preference profiles for which a successful cloning
manipulation exists. We also consider the model where there is a cost
associated with producing each clone, and study the complexity of finding a
minimum-cost cloning manipulation. Finally, we compare cloning with two related
problems: the problem of control by adding candidates and the problem of
possible (co)winners when new alternatives can join.
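The basic vulnerability is easy to reproduce for plurality voting: cloning a winning candidate can split its first-place support. The tie-breaking rule below is an arbitrary choice made for the sketch, not part of the paper's model.

```python
from collections import Counter

def plurality_winner(profile):
    """Winner by most first-place votes; ties broken lexicographically
    (an arbitrary choice for this sketch)."""
    counts = Counter(ranking[0] for ranking in profile)
    best = max(counts.values())
    return min(c for c, v in counts.items() if v == best)

def clone(profile, c, orders):
    """Replace candidate c in each vote with a consecutive block of clones;
    orders[i] is voter i's ranking of the block, as in the cloning model."""
    return [[x for cand in ranking
             for x in (orders[i] if cand == c else [cand])]
            for i, ranking in enumerate(profile)]

profile = [["b", "a"]] * 5 + [["a", "b"]] * 4   # b wins plurality 5-4
# voters order the clones differently, so b's 5 first places split 3-2
orders = [["b1", "b2"]] * 3 + [["b2", "b1"]] * 2 + [["b1", "b2"]] * 4
cloned = clone(profile, "b", orders)            # now a wins with 4 first places
```

Whether such a manipulation succeeds can depend, as here, on how voters order the clones within the block, which is exactly the subtlety the paper's success criterion has to address.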
|
1401.4603 | Semantic Similarity Measures Applied to an Ontology for Human-Like
Interaction | cs.AI cs.CL | The focus of this paper is the calculation of similarity between two concepts
from an ontology for a Human-Like Interaction system. In order to facilitate
this calculation, a similarity function is proposed based on five dimensions
(sort, compositional, essential, restrictive and descriptive) constituting the
structure of ontological knowledge. The paper includes a proposal for computing
a similarity function for each dimension of knowledge. Later on, the similarity
values obtained are weighted and aggregated to obtain a global similarity
measure. In order to calculate those weights associated to each dimension, four
training methods have been proposed. The training methods differ in the element
to fit: the user, concepts or pairs of concepts, and a hybrid approach. For
evaluating the proposal, the knowledge base was fed from WordNet and extended
by using a knowledge editing toolkit (Cognos). The evaluation of the proposal
is carried out through the comparison of system responses with those given by
human test subjects, both providing a measure of the soundness of the procedure
and revealing ways in which the proposal may be improved.
|
1401.4604 | Completeness Guarantees for Incomplete Ontology Reasoners: Theory and
Practice | cs.AI cs.LO | To achieve scalability of query answering, the developers of Semantic Web
applications are often forced to use incomplete OWL 2 reasoners, which fail to
derive all answers for at least one query, ontology, and data set. The lack of
completeness guarantees, however, may be unacceptable for applications in areas
such as health care and defence, where missing answers can adversely affect the
application's functionality. Furthermore, even if an application can tolerate
some level of incompleteness, it is often advantageous to estimate how many and
what kind of answers are being lost.
In this paper, we present a novel logic-based framework that allows one to
check whether a reasoner is complete for a given query Q and ontology T---that
is, whether the reasoner is guaranteed to compute all answers to Q w.r.t. T and
an arbitrary data set A. Since ontologies and typical queries are often fixed
at application design time, our approach allows application developers to check
whether a reasoner known to be incomplete in general is actually complete for
the kinds of input relevant for the application.
We also present a technique that, given a query Q, an ontology T, and
reasoners R_1 and R_2 that satisfy certain assumptions, can be used to
determine whether, for each data set A, reasoner R_1 computes more answers to Q
w.r.t. T and A than reasoner R_2. This allows application developers to select
the reasoner that provides the highest degree of completeness for Q and T that
is compatible with the application's scalability requirements.
Our results thus provide a theoretical and practical foundation for the
design of future ontology-based information systems that maximise scalability
while minimising or even eliminating incompleteness of query answers.
|