id | title | categories | abstract
1110.2659
|
Efficient Detection of Hot Span in Information Diffusion from
Observation
|
cs.SI physics.soc-ph
|
We addressed the problem of detecting the change in behavior of information
diffusion from a small amount of observation data, where the behavior changes
were assumed to be effectively reflected in changes in the diffusion parameter
value. The problem is to detect where in time the change occurred, how long it
persisted, and how large it was. We solved this problem by searching for the
change pattern that maximizes the likelihood of generating the observed
diffusion sequences. The naive learning algorithm has to iteratively update the
pattern boundaries, each requiring optimization of diffusion parameters by the
EM algorithm, and is very inefficient. We devised a very efficient search
algorithm using the derivative of likelihood which avoids parameter value
optimization during the search. The results tested using three real world
network structures confirmed that the algorithm can efficiently identify the
correct change pattern. We further compared our algorithm with the naive method
that finds the best combination of change boundaries by an exhaustive search
through a set of randomly selected boundary candidates, and showed that the
proposed algorithm far outperforms the naive method both in terms of accuracy
and computation time.
|
1110.2704
|
An Efficient Fuzzy Clustering-Based Approach for Intrusion Detection
|
cs.DB
|
The need to increase accuracy in detecting sophisticated cyber attacks poses
a great challenge not only to the research community but also to corporations.
So far, many approaches have been proposed to cope with this threat. Among
them, data mining has brought on remarkable contributions to the intrusion
detection problem. However, the generalization ability of data mining-based
methods remains limited, and hence detecting sophisticated attacks remains a
tough task. Along this line, we present a novel method based on both clustering
and classification for developing an efficient intrusion detection system
(IDS). The key idea is to take useful information exploited from fuzzy
clustering into account for the process of building an IDS. To this aim, we
first present cornerstones to construct additional cluster features for a
training set. Then, we come up with an algorithm to generate an IDS based on
such cluster features and the original input features. Finally, we
experimentally prove that our method outperforms several well-known methods.
|
1110.2711
|
Generalized network community detection
|
physics.soc-ph cs.SI physics.data-an
|
Community structure is largely regarded as an intrinsic property of complex
real-world networks. However, recent studies reveal that networks comprise even
more sophisticated modules than classical cohesive communities. More precisely,
real-world networks can also be naturally partitioned according to common
patterns of connections between the nodes. Recently, a propagation based
algorithm has been proposed for the detection of arbitrary network modules. We
here advance the latter with a more adequate community modeling based on
network clustering. The resulting algorithm is evaluated on various synthetic
benchmark networks and random graphs. It is shown to be comparable to current
state-of-the-art algorithms, however, in contrast to other approaches, it does
not require some prior knowledge of the true community structure. To
demonstrate its generality, we further employ the proposed algorithm for
community detection in different unipartite and bipartite real-world networks,
for generalized community detection and also predictive data clustering.
|
1110.2722
|
Compressive and Noncompressive Power Spectral Density Estimation from
Periodic Nonuniform Samples
|
cs.IT math.IT
|
This paper presents a novel power spectral density estimation technique for
band-limited, wide-sense stationary signals from sub-Nyquist sampled data. The
technique employs multi-coset sampling and incorporates the advantages of
compressed sensing (CS) when the power spectrum is sparse, but applies to
sparse and nonsparse power spectra alike. The estimates are consistent
piecewise constant approximations whose resolutions (width of the piecewise
constant segments) are controlled by the periodicity of the multi-coset
sampling. We show that compressive estimates exhibit better tradeoffs among the
estimator's resolution, system complexity, and average sampling rate compared
to their noncompressive counterparts. For suitable sampling patterns,
noncompressive estimates are obtained as least squares solutions. Because of
the non-negativity of power spectra, compressive estimates can be computed by
seeking non-negative least squares solutions (provided appropriate sampling
patterns exist) instead of using standard CS recovery algorithms. This
flexibility suggests a reduction in computational overhead for systems
estimating both sparse and nonsparse power spectra because one algorithm can be
used to compute both compressive and noncompressive estimates.
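The non-negative least squares route described above can be sketched in a few lines. The measurement matrix and "spectrum" below are synthetic stand-ins (the actual multi-coset sampling operator is not reproduced here); the point is only that enforcing non-negativity turns recovery into a generic constrained least-squares problem rather than a CS recovery algorithm:

```python
import numpy as np

def nnls_pg(A, y, n_iter=5000):
    """Non-negative least squares via projected gradient descent.

    Minimizes ||A x - y||^2 subject to x >= 0 by taking gradient
    steps and clipping negative entries to zero after each step.
    """
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - grad / L, 0.0)  # project onto the non-negative orthant
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 12))                # hypothetical measurement operator
x_true = np.maximum(rng.standard_normal(12), 0)  # non-negative "power spectrum"
y = A @ x_true
x_hat = nnls_pg(A, y)
print(np.abs(x_hat - x_true).max())  # recovery error; small for this noiseless toy
```

Because the constraint set is just the non-negative orthant, one solver serves both the compressive and the noncompressive regimes, as the abstract argues.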
|
1110.2724
|
Information Transfer in Social Media
|
cs.SI physics.soc-ph stat.AP
|
Recent research has explored the increasingly important role of social media
by examining the dynamics of individual and group behavior, characterizing
patterns of information diffusion, and identifying influential individuals. In
this paper we suggest a measure of causal relationships between nodes based on
the information-theoretic notion of transfer entropy, or information transfer.
This theoretically grounded measure is based on dynamic information, captures
fine-grained notions of influence, and admits a natural, predictive
interpretation. Causal networks inferred by transfer entropy can differ
significantly from static friendship networks because most friendship links are
not useful for predicting future dynamics. We demonstrate through analysis of
synthetic and real-world data that transfer entropy reveals meaningful hidden
network structures. In addition to altering our notion of who is influential,
transfer entropy allows us to differentiate between weak influence over large
groups and strong influence over small groups.
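The transfer-entropy measure discussed above has a simple plug-in estimator when histories are restricted to one step. The sketch below is the textbook quantity T(X->Y) = I(Y_{t+1}; X_t | Y_t) in bits, not the paper's exact estimator; the delayed-copy data is synthetic:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy T(X -> Y) in bits,
    using one-step histories: I(Y_{t+1}; X_t | Y_t)."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_xyz = c / n
        p_y0x0 = sum(v for (a, b, d), v in triples.items() if b == y0 and d == x0) / n
        p_y0 = sum(v for (a, b, d), v in triples.items() if b == y0) / n
        p_y1y0 = sum(v for (a, b, d), v in triples.items() if a == y1 and b == y0) / n
        # p(y1,y0,x0) * log[ p(y1|y0,x0) / p(y1|y0) ]
        te += p_xyz * np.log2((p_xyz * p_y0) / (p_y0x0 * p_y1y0))
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)   # y copies x with a one-step delay: strong X -> Y transfer
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
print(te_xy, te_yx)  # roughly 1 bit forward, near 0 bits backward
```

The asymmetry (te_xy large, te_yx near zero) is exactly what lets the measure orient causal links that a symmetric friendship graph cannot.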
|
1110.2726
|
Combining Spatial and Temporal Logics: Expressiveness vs. Complexity
|
cs.AI
|
In this paper, we construct and investigate a hierarchy of spatio-temporal
formalisms that result from various combinations of propositional spatial and
temporal logics such as the propositional temporal logic PTL, the spatial
logics RCC-8, BRCC-8, S4u and their fragments. The obtained results give a
clear picture of the trade-off between expressiveness and computational
realisability within the hierarchy. We demonstrate how different combining
principles as well as spatial and temporal primitives can produce NP-, PSPACE-,
EXPSPACE-, 2EXPSPACE-complete, and even undecidable spatio-temporal logics out
of components that are at most NP- or PSPACE-complete.
|
1110.2728
|
An Approach to Temporal Planning and Scheduling in Domains with
Predictable Exogenous Events
|
cs.AI
|
The treatment of exogenous events in planning is practically important in
many real-world domains where the preconditions of certain plan actions are
affected by such events. In this paper we focus on planning in temporal domains
with exogenous events that happen at known times, imposing the constraint that
certain actions in the plan must be executed during some predefined time
windows. When actions have durations, handling such temporal constraints adds
an extra difficulty to planning. We propose an approach to planning in these
domains which integrates constraint-based temporal reasoning into a graph-based
planning framework using local search. Our techniques are implemented in a
planner that took part in the 4th International Planning Competition (IPC-4). A
statistical analysis of the results of IPC-4 demonstrates the effectiveness of
our approach in terms of both CPU-time and plan quality. Additional experiments
show the good performance of the temporal reasoning techniques integrated into
our planner.
|
1110.2729
|
The Power of Modeling - a Response to PDDL2.1
|
cs.AI
|
In this commentary I argue that although PDDL is a very useful standard for
the planning competition, its design does not properly consider the issue of
domain modeling. Hence, I would not advocate its use in specifying planning
domains outside of the context of the planning competition. Rather, the field
needs to explore different approaches and grapple more directly with the
problem of effectively modeling and utilizing all of the diverse pieces of
knowledge we typically have about planning domains.
|
1110.2730
|
Imperfect Match: PDDL 2.1 and Real Applications
|
cs.AI
|
PDDL was originally conceived and constructed as a lingua franca for the
International Planning Competition. PDDL2.1 embodies a set of extensions
intended to support the expression of something closer to real planning
problems. This objective has only been partially achieved, due in large part to
a deliberate focus on not moving too far from classical planning models and
solution methods.
|
1110.2731
|
PDDL 2.1: Representation vs. Computation
|
cs.AI
|
I comment on the PDDL 2.1 language and its use in the planning competition,
focusing on the choices made for accommodating time and concurrency. I also
discuss some methodological issues that have to do with the move toward more
expressive planning languages and the balance needed in planning research
between semantics and computation.
|
1110.2732
|
Proactive Algorithms for Job Shop Scheduling with Probabilistic
Durations
|
cs.AI
|
Most classical scheduling formulations assume a fixed and known duration for
each activity. In this paper, we weaken this assumption, requiring instead that
each duration can be represented by an independent random variable with a known
mean and variance. The best solutions are ones which have a high probability of
achieving a good makespan. We first create a theoretical framework, formally
showing how Monte Carlo simulation can be combined with deterministic
scheduling algorithms to solve this problem. We propose an associated
deterministic scheduling problem whose solution is proved, under certain
conditions, to be a lower bound for the probabilistic problem. We then propose
and investigate a number of techniques for solving such problems based on
combinations of Monte Carlo simulation, solutions to the associated
deterministic problem, and either constraint programming or tabu search. Our
empirical results demonstrate that a combination of the use of the associated
deterministic problem and Monte Carlo simulation results in algorithms that
scale best both in terms of problem size and uncertainty. Further experiments
point to the correlation between the quality of the deterministic solution and
the quality of the probabilistic solution as a major factor responsible for
this success.
|
1110.2733
|
Auctions with Severely Bounded Communication
|
cs.GT cs.AI
|
We study auctions with severe bounds on the communication allowed: each
bidder may only transmit t bits of information to the auctioneer. We consider
both welfare- and profit-maximizing auctions under this communication
restriction. For both measures, we determine the optimal auction and show that
the loss incurred relative to unconstrained auctions is mild. We prove
non-surprising properties of these kinds of auctions, e.g., that in optimal
mechanisms bidders simply report the interval in which their valuation lies,
as well as some surprising properties, e.g., that asymmetric auctions are
better than symmetric ones and that multi-round auctions reduce the
communication complexity only by a linear factor.
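The "report an interval" structure can be made concrete with the simplest possible encoding: uniform quantization of [0, v_max) into 2^t cells. The paper's optimal mechanisms use carefully chosen (generally non-uniform, possibly asymmetric) interval boundaries; the uniform cells here are purely illustrative:

```python
def report(valuation, t, v_max=1.0):
    """Encode a valuation in [0, v_max) as a t-bit message: the index
    of the interval (out of 2**t equal cells) containing the valuation."""
    k = min(int(valuation / v_max * 2**t), 2**t - 1)
    return format(k, f"0{t}b")  # the t bits sent to the auctioneer

def decode(bits, t, v_max=1.0):
    """Auctioneer recovers the interval [lo, hi) from the t-bit message."""
    k = int(bits, 2)
    w = v_max / 2**t
    return k * w, (k + 1) * w

msg = report(0.63, t=3)
print(msg, decode(msg, t=3))  # 3 bits identify the cell [0.625, 0.75)
```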
|
1110.2734
|
The Language of Search
|
cs.AI
|
This paper is concerned with a class of algorithms that perform exhaustive
search on propositional knowledge bases. We show that each of these algorithms
defines and generates a propositional language. Specifically, we show that the
trace of a search can be interpreted as a combinational circuit, and a search
algorithm then defines a propositional language consisting of circuits that are
generated across all possible executions of the algorithm. In particular, we
show that several versions of exhaustive DPLL search correspond to such
well-known languages as FBDD, OBDD, and a precisely-defined subset of d-DNNF.
By thus mapping search algorithms to propositional languages, we provide a
uniform and practical framework in which successful search techniques can be
harnessed for compilation of knowledge into various languages of interest, and
a new methodology whereby the power and limitations of search algorithms can be
understood by looking up the tractability and succinctness of the corresponding
propositional languages.
|
1110.2735
|
Understanding Algorithm Performance on an Oversubscribed Scheduling
Application
|
cs.AI
|
The best performing algorithms for a particular oversubscribed scheduling
application, Air Force Satellite Control Network (AFSCN) scheduling, appear to
have little in common. Yet, through careful experimentation and modeling of
performance in real problem instances, we can relate characteristics of the
best algorithms to characteristics of the application. In particular, we find
that plateaus dominate the search spaces (thus favoring algorithms that make
larger changes to solutions) and that some randomization in exploration is
critical to good performance (due to the lack of gradient information on the
plateaus). Based on our explanations of algorithm performance, we develop a new
algorithm that combines characteristics of the best performers; the new
algorithm's performance is better than the previous best. We show how hypothesis
driven experimentation and search modeling can both explain algorithm
performance and motivate the design of a new algorithm.
|
1110.2736
|
Marvin: A Heuristic Search Planner with Online Macro-Action Learning
|
cs.AI
|
This paper describes Marvin, a planner that competed in the Fourth
International Planning Competition (IPC 4). Marvin uses
action-sequence-memoisation techniques to generate macro-actions, which are
then used during search for a solution plan. We provide an overview of its
architecture and search behaviour, detailing the algorithms used. We also
empirically demonstrate the effectiveness of its features in various planning
domains; in particular, the effects on performance due to the use of
macro-actions, the novel features of its search behaviour, and the native
support of ADL and Derived Predicates.
|
1110.2737
|
Anytime Heuristic Search
|
cs.AI
|
We describe how to convert the heuristic search algorithm A* into an anytime
algorithm that finds a sequence of improved solutions and eventually converges
to an optimal solution. The approach we adopt uses weighted heuristic search to
find an approximate solution quickly, and then continues the weighted search to
find improved solutions as well as to improve a bound on the suboptimality of
the current solution. When the time available to solve a search problem is
limited or uncertain, this creates an anytime heuristic search algorithm that
allows a flexible tradeoff between search time and solution quality. We analyze
the properties of the resulting Anytime A* algorithm, and consider its
performance in three domains: sliding-tile puzzles, STRIPS planning, and
multiple sequence alignment. To illustrate the generality of this approach, we
also describe how to transform the memory-efficient search algorithm Recursive
Best-First Search (RBFS) into an anytime algorithm.
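The scheme described above (weighted search for a fast first solution, then continued search that tightens the incumbent) can be sketched as follows. The toy graph, heuristic values, and weight w are invented for illustration; a full Anytime A* would also report the suboptimality bound alongside each incumbent, which is omitted here:

```python
import heapq

def anytime_astar(graph, h, start, goal, w=3.0):
    """Anytime A* sketch: order the open list by the inflated
    f' = g + w*h, keep searching after the first solution, prune with
    the admissible bound g + h against the incumbent, and yield the
    cost of each improved solution as it is found."""
    incumbent = float("inf")
    open_ = [(w * h[start], 0, start)]
    best_g = {start: 0}
    while open_:
        f, g, u = heapq.heappop(open_)
        if g + h[u] >= incumbent:   # admissible h: node cannot improve incumbent
            continue
        if u == goal:
            incumbent = g
            yield g                 # a new, better incumbent solution
            continue
        for v, c in graph[u]:
            g2 = g + c
            if g2 < best_g.get(v, float("inf")) and g2 + h[v] < incumbent:
                best_g[v] = g2
                heapq.heappush(open_, (g2 + w * h[v], g2, v))

# Hypothetical toy graph: adjacency lists of (neighbor, cost), admissible h.
graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("g", 6)],
         "b": [("g", 2)], "g": []}
h = {"s": 3, "a": 3, "b": 2, "g": 0}
res = list(anytime_astar(graph, h, "s", "g"))
print(res)  # successive incumbent costs; the last one is optimal
```

With the inflated ordering the search reaches a (suboptimal) goal quickly, then the unweighted g + h pruning guarantees the final incumbent is optimal once the open list empties.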
|
1110.2738
|
Discovering Classes of Strongly Equivalent Logic Programs
|
cs.AI
|
In this paper we apply computer-aided theorem discovery technique to discover
theorems about strongly equivalent logic programs under the answer set
semantics. Our discovered theorems capture new classes of strongly equivalent
logic programs that can lead to new program simplification rules that preserve
strong equivalence. Specifically, with the help of computers, we discovered
exact conditions that capture the strong equivalence between a rule and the
empty set, between two rules, between two rules and one of the two rules,
between two rules and another rule, and between three rules and two of the
three rules.
|
1110.2739
|
Phase Transition for Random Quantified XOR-Formulas
|
cs.AI
|
The QXORSAT problem is the quantified version of the satisfiability problem
XORSAT in which the connective exclusive-or is used instead of the usual or. We
study the phase transition associated with random QXORSAT instances. We give a
description of this phase transition in the case of one alternation of
quantifiers, thus performing an advanced practical and theoretical study on the
phase transition of a quantified problem.
|
1110.2740
|
Cutset Sampling for Bayesian Networks
|
cs.AI
|
The paper presents a new sampling methodology for Bayesian networks that
samples only a subset of variables and applies exact inference to the rest.
Cutset sampling is a network structure-exploiting application of the
Rao-Blackwellisation principle to sampling in Bayesian networks. It improves
convergence by exploiting memory-based inference algorithms. It can also be
viewed as an anytime approximation of the exact cutset-conditioning algorithm
developed by Pearl. Cutset sampling can be implemented efficiently when the
sampled variables constitute a loop-cutset of the Bayesian network and, more
generally, when the induced width of the network's graph conditioned on the
observed sampled variables is bounded by a constant w. We demonstrate
empirically the benefit of this scheme on a range of benchmarks.
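The Rao-Blackwellisation idea behind cutset sampling is easy to see on a toy network: sample only the "cutset" variable and average exact conditionals for the rest, instead of sampling everything. The two-node network A -> B and its CPT values below are made up; the paper's loop-cutset machinery handles general networks and evidence:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Bayesian network A -> B with made-up CPTs.
p_a = 0.3                       # P(A = 1)
p_b_given_a = {0: 0.2, 1: 0.9}  # P(B = 1 | A)

exact = (1 - p_a) * p_b_given_a[0] + p_a * p_b_given_a[1]  # P(B = 1)

n = 20000
a = rng.random(n) < p_a

# Plain sampling: sample B as well, then average 0/1 indicators.
b = rng.random(n) < np.where(a, p_b_given_a[1], p_b_given_a[0])
plain = b.mean()

# Cutset-style (Rao-Blackwellised) sampling: sample only A and average
# the exact conditional P(B = 1 | A) instead of a noisy 0/1 sample.
rb = np.where(a, p_b_given_a[1], p_b_given_a[0]).mean()

print(exact, plain, rb)  # rb has lower variance than plain sampling
```

Replacing sampled values with exact conditional expectations never increases the estimator's variance, which is the convergence benefit the abstract refers to.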
|
1110.2741
|
An Algebraic Graphical Model for Decision with Uncertainties,
Feasibilities, and Utilities
|
cs.AI
|
Numerous formalisms and dedicated algorithms have been designed in the last
decades to model and solve decision making problems. Some formalisms, such as
constraint networks, can express "simple" decision problems, while others are
designed to take into account uncertainties, unfeasible decisions, and
utilities. Even in a single formalism, several variants are often proposed to
model different types of uncertainty (probability, possibility...) or utility
(additive or not). In this article, we introduce an algebraic graphical model
that encompasses a large number of such formalisms: (1) we first adapt previous
structures from Friedman, Chu and Halpern for representing uncertainty,
utility, and expected utility in order to deal with generic forms of sequential
decision making; (2) on these structures, we then introduce composite graphical
models that express information via variables linked by "local" functions,
thanks to conditional independence; (3) on these graphical models, we finally
define a simple class of queries which can represent various scenarios in terms
of observabilities and controllabilities. A natural decision-tree semantics for
such queries is completed by an equivalent operational semantics, which induces
generic algorithms. The proposed framework, called the
Plausibility-Feasibility-Utility (PFU) framework, not only provides a better
understanding of the links between existing formalisms, but it also covers yet
unpublished frameworks (such as possibilistic influence diagrams) and unifies
formalisms such as quantified boolean formulas and influence diagrams. Our
backtrack and variable elimination generic algorithms are a first step towards
unified algorithms.
|
1110.2742
|
Semantic Matchmaking as Non-Monotonic Reasoning: A Description Logic
Approach
|
cs.AI
|
Matchmaking arises when supply and demand meet in an electronic marketplace,
or when agents search for a web service to perform some task, or even when
recruiting agencies match curricula and job profiles. In such open
environments, the objective of a matchmaking process is to discover best
available offers to a given request. We address the problem of matchmaking from
a knowledge representation perspective, with a formalization based on
Description Logics. We devise Concept Abduction and Concept Contraction as
non-monotonic inferences in Description Logics suitable for modeling
matchmaking in a logical framework, and prove some related complexity results.
We also present reasonable algorithms for semantic matchmaking based on the
devised inferences, and prove that they obey some commonsense properties.
Finally, we report on the implementation of the proposed matchmaking framework,
which has been used both as a mediator in e-marketplaces and for semantic web
services discovery.
|
1110.2743
|
Solution-Guided Multi-Point Constructive Search for Job Shop Scheduling
|
cs.AI
|
Solution-Guided Multi-Point Constructive Search (SGMPCS) is a novel
constructive search technique that performs a series of resource-limited tree
searches where each search begins either from an empty solution (as in
randomized restart) or from a solution that has been encountered during the
search. A small number of these "elite" solutions is maintained during the
search. We introduce the technique and perform three sets of experiments on the
job shop scheduling problem. First, a systematic, fully crossed study of SGMPCS
is carried out to evaluate the performance impact of various parameter
settings. Second, we inquire into the diversity of the elite solution set,
showing, contrary to expectations, that a less diverse set leads to stronger
performance. Finally, we compare the best parameter setting of SGMPCS from the
first two experiments to chronological backtracking, limited discrepancy
search, randomized restart, and a sophisticated tabu search algorithm on a set
of well-known benchmark problems. Results demonstrate that SGMPCS is
significantly better than the other constructive techniques tested, though lags
behind the tabu search.
|
1110.2755
|
Efficient Tracking of Large Classes of Experts
|
cs.LG cs.IT math.IT
|
In the framework of prediction of individual sequences, sequential prediction
methods are to be constructed that perform nearly as well as the best expert
from a given class. We consider prediction strategies that compete with the
class of switching strategies that can segment a given sequence into several
blocks, and follow the advice of a different "base" expert in each block. As
usual, the performance of the algorithm is measured by the regret defined as
the excess loss relative to the best switching strategy selected in hindsight
for the particular sequence to be predicted. In this paper we construct
prediction strategies of low computational cost for the case where the set of
base experts is large. In particular we provide a method that can transform any
prediction algorithm $\mathcal{A}$ that is designed for the base class into a tracking
algorithm. The resulting tracking algorithm can take advantage of the
prediction performance and potential computational efficiency of $\mathcal{A}$ in the
sense that it can be implemented with time and space complexity only
$O(n^{\gamma} \ln n)$ times larger than that of $\mathcal{A}$, where $n$ is the time
horizon and $\gamma \ge 0$ is a parameter of the algorithm. With $\mathcal{A}$ properly
chosen, our algorithm achieves a regret bound of optimal order for $\gamma>0$,
and only $O(\ln n)$ times larger than the optimal order for $\gamma=0$ for all
typical regret bound types we examined. For example, for predicting binary
sequences with switching parameters under the logarithmic loss, our method
achieves the optimal $O(\ln n)$ regret rate with time complexity
$O(n^{1+\gamma}\ln n)$ for any $\gamma\in (0,1)$.
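The "switching strategies" being competed against can be grounded with the classical fixed-share forecaster of Herbster and Warmuth, the textbook baseline in this setting (this is not the paper's algorithm, whose contribution is handling large base classes efficiently). The two constant experts and the regime-switching sequence below are illustrative:

```python
import numpy as np

def fixed_share(preds, outcomes, alpha=0.05):
    """Fixed-share forecaster under log loss.

    preds: (T, N) array, expert i's predicted P(x_t = 1) at time t.
    outcomes: length-T binary sequence.
    Returns the forecaster's cumulative log loss (in nats)."""
    T, N = preds.shape
    w = np.full(N, 1.0 / N)
    loss = 0.0
    for t in range(T):
        p = w @ preds[t]                   # mixture prediction for x_t = 1
        p_obs = p if outcomes[t] == 1 else 1 - p
        loss += -np.log(p_obs)
        like = np.where(outcomes[t] == 1, preds[t], 1 - preds[t])
        w = w * like                       # Bayesian weight update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / N    # share step: enables tracking switches
    return loss

# Sequence that switches regime halfway; two constant-prediction experts.
outcomes = np.array([1] * 50 + [0] * 50)
preds = np.column_stack([np.full(100, 0.9), np.full(100, 0.1)])
print(fixed_share(preds, outcomes))
```

Setting alpha = 0 recovers plain exponential weighting, which adapts very slowly after the regime switch; the share step is what keeps the regret against the best switching strategy small.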
|
1110.2765
|
Multi-Issue Negotiation with Deadlines
|
cs.MA cs.AI
|
This paper studies bilateral multi-issue negotiation between self-interested
autonomous agents. Now, there are a number of different procedures that can be
used for this process; the three main ones being the package deal procedure in
which all the issues are bundled and discussed together, the simultaneous
procedure in which the issues are discussed simultaneously but independently of
each other, and the sequential procedure in which the issues are discussed one
after another. Since each of them yields a different outcome, a key problem is
to decide which one to use in which circumstances. Specifically, we consider
this question for a model in which the agents have time constraints (in the
form of both deadlines and discount factors) and information uncertainty (in
that the agents do not know the opponent's utility function). For this model, we
consider issues that are both independent and those that are interdependent and
determine equilibria for each case for each procedure. In so doing, we show
that the package deal is in fact the optimal procedure for each party. We then
go on to show that, although the package deal may be computationally more
complex than the other two procedures, it generates Pareto optimal outcomes
(unlike the other two), it has similar earliest and latest possible times of
agreement to the simultaneous procedure (which is better than the sequential
procedure), and that it (like the other two procedures) generates a unique
outcome only under certain conditions (which we define).
|
1110.2766
|
The Strategy-Proofness Landscape of Merging
|
cs.GT cs.MA
|
Merging operators aim at defining the beliefs/goals of a group of agents from
the beliefs/goals of each member of the group. Whenever an agent of the group
has preferences over the possible results of the merging process (i.e., the
possible merged bases), she can try to rig the merging process by lying about her
true beliefs/goals if this leads to a better merged base according to her point
of view. Obviously, strategy-proof operators are highly desirable in order to
guarantee equity among agents even when some of them are not sincere. In this
paper, we draw the strategy-proof landscape for many merging operators from the
literature, including model-based ones and formula-based ones. Both the general
case and several restrictions on the merging process are considered.
|
1110.2767
|
Resource Allocation Among Agents with MDP-Induced Preferences
|
cs.MA cs.AI
|
Allocating scarce resources among agents to maximize global utility is, in
general, computationally challenging. We focus on problems where resources
enable agents to execute actions in stochastic environments, modeled as Markov
decision processes (MDPs), such that the value of a resource bundle is defined
as the expected value of the optimal MDP policy realizable given these
resources. We present an algorithm that simultaneously solves the
resource-allocation and the policy-optimization problems. This allows us to
avoid explicitly representing utilities over exponentially many resource
bundles, leading to drastic (often exponential) reductions in computational
complexity. We then use this algorithm in the context of self-interested agents
to design a combinatorial auction for allocating resources. We empirically
demonstrate the effectiveness of our approach by showing that it can, in
minutes, optimally solve problems for which a straightforward combinatorial
resource-allocation technique would require the agents to enumerate up to 2^100
resource bundles and the auctioneer to solve an NP-complete problem with an
input of that size.
|
1110.2813
|
Towards Quantifying Vertex Similarity in Networks
|
cs.SI physics.soc-ph
|
Vertex similarity is a major problem in network science with a wide range of
applications. In this work we provide novel perspectives on finding
(dis)similar vertices within a network and across two networks with the same
number of vertices (graph matching). With respect to the former problem, we
propose to optimize a geometric objective which allows us to express each
vertex uniquely as a convex combination of a few extreme types of vertices. Our
method has the important advantage of supporting efficiently several types of
queries such as "which other vertices are most similar to this vertex?" by the
use of the appropriate data structures and of mining interesting patterns in
the network. With respect to the latter problem (graph matching), we propose
the generalized condition number --a quantity widely used in numerical
analysis-- $\kappa(L_G,L_H)$ of the Laplacian matrix representations of $G,H$
as a measure of graph similarity, where $G,H$ are the graphs of interest. We
show that this objective has a solid theoretical basis and propose a
deterministic and a randomized graph alignment algorithm. We evaluate our
algorithms on both synthetic and real data. We observe that our proposed
methods achieve high-quality results and provide us with significant insights
into the network structure.
|
1110.2825
|
Scaling of nestedness in complex networks
|
physics.soc-ph cond-mat.dis-nn cs.SI
|
Nestedness characterizes the linkage pattern of networked systems, indicating
the likelihood that a node is linked to the neighbors of nodes with larger
degree than its own. Networks of mutualistic relationship between distinct
groups of species in ecological communities exhibit such nestedness, which is
known to support network robustness. Despite this importance, the quantitative
characteristics of nestedness are little understood. Here we take a
graph-theoretic approach to derive the scaling properties of nestedness in
various model networks. Our results show how the heterogeneous connectivity
patterns enhance nestedness. We also find that the nestedness of bipartite
networks depends sensitively on the fraction of different types of nodes,
causing nestedness to scale differently for nodes of different types.
|
1110.2834
|
Interspecific competition underlying mutualistic networks
|
q-bio.PE cs.SI physics.soc-ph
|
The architecture of bipartite networks linking two classes of constituents is
affected by the interactions within each class. For the bipartite networks
representing the mutualistic relationship between pollinating animals and
plants, it has been known that their degree distributions are broad but often
deviate from power-law form, more significantly for plants than animals. Here
we consider a model for the evolution of the mutualistic networks and find that
their topology is strongly dependent on the asymmetry and non-linearity of the
preferential selection of mutualistic partners. Real-world mutualistic networks
analyzed in the framework of the model show that a new animal species
determines its partners not only by their attractiveness but also as a result
of the competition with pre-existing animals, which leads to the
stretched-exponential degree distributions of plant species.
|
1110.2842
|
A tail inequality for quadratic forms of subgaussian random vectors
|
math.PR cs.LG
|
We prove an exponential probability tail inequality for positive semidefinite
quadratic forms in a subgaussian random vector. The bound is analogous to one
that holds when the vector has independent Gaussian entries.
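For context, the Gaussian-entries bound referred to in the last sentence is the classical chi-square tail inequality (Laurent and Massart, 2000): for a positive semidefinite matrix $A$ and $x$ with independent standard Gaussian entries,

```latex
\Pr\!\left[\, x^{\top} A x \;>\; \operatorname{tr}(A)
  \;+\; 2\sqrt{\operatorname{tr}(A^{2})\, t}
  \;+\; 2\,\lVert A \rVert\, t \,\right] \;\le\; e^{-t},
\qquad x \sim \mathcal{N}(0, I_n),\ t \ge 0,
```

which follows because $x^{\top} A x = \sum_i \lambda_i z_i^2$ for independent standard normals $z_i$, with $\operatorname{tr}(A) = \sum_i \lambda_i$, $\operatorname{tr}(A^2) = \sum_i \lambda_i^2$, and $\lVert A\rVert = \max_i \lambda_i$. The abstract indicates that a bound of the same shape holds when $x$ is merely subgaussian, with constants depending on the subgaussian moment.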
|
1110.2855
|
Sparse Image Representation with Epitomes
|
cs.LG cs.CV stat.ML
|
Sparse coding, which is the decomposition of a vector using only a few basis
elements, is widely used in machine learning and image processing. The basis
set, also called dictionary, is learned to adapt to specific data. This
approach has proven to be very effective in many image processing tasks.
Traditionally, the dictionary is an unstructured "flat" set of atoms. In this
paper, we study structured dictionaries which are obtained from an epitome, or
a set of epitomes. The epitome is itself a small image, and the atoms are all
the patches of a chosen size inside this image. This considerably reduces the
number of parameters to learn and provides sparse image decompositions with
shift-invariance properties. We propose a new formulation and an algorithm for
learning the structured dictionaries associated with epitomes, and illustrate
their use in image denoising tasks.
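The parameter-count argument above can be made concrete: a K x K epitome with k x k patches yields (K-k+1)^2 atoms from only K^2 pixels, whereas a flat dictionary of the same size stores each atom separately. A minimal sketch (the array values are arbitrary):

```python
import numpy as np

def epitome_atoms(epitome, k):
    """Extract every k x k patch of a square epitome as a vectorized atom."""
    K = epitome.shape[0]
    atoms = [
        epitome[i:i + k, j:j + k].ravel()
        for i in range(K - k + 1)
        for j in range(K - k + 1)
    ]
    return np.array(atoms)

epi = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 epitome
D = epitome_atoms(epi, k=3)
# 16 atoms of dimension 9, parameterized by only 36 epitome pixels;
# a flat dictionary of the same size would need 16 * 9 = 144 parameters.
print(D.shape)  # (16, 9)
```

Because neighboring atoms are overlapping patches of one image, the learned atoms inherit the shift-invariance the abstract mentions.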
|
1110.2867
|
Robust Beamforming in Interference Channels with Imperfect Transmitter
Channel Information
|
cs.IT math.IT
|
We consider $K$ links operating concurrently in the same spectral band. Each
transmitter has multiple antennas, while each receiver uses a single antenna.
This setting corresponds to the multiple-input single-output interference
channel. We assume perfect channel state information at the single-user
decoding receivers whereas the transmitters only have estimates of the true
channels. The channel estimation errors are assumed to be bounded in elliptical
regions whose geometry is known at the transmitters. Robust beamforming
optimizes worst-case received power gains, and a Pareto optimal point is a
worst-case achievable rate tuple from which it is impossible to increase a
link's performance without degrading the performance of another. We
characterize the robust beamforming vectors necessary to operate at any Pareto
optimal point. Moreover, these beamforming vectors are parameterized by
$K(K-1)$ real-valued parameters. We analyze the system's spectral efficiency at
high and low signal-to-noise ratio (SNR). Zero forcing transmission achieves
full multiplexing gain at high SNR only if the estimation errors scale linearly
with inverse SNR. If the errors are SNR independent, then single-user
transmission is optimal at high SNR. At low SNR, robust maximum ratio
transmission optimizes the minimum energy per bit for reliable communication.
Numerical simulations illustrate the derived theoretical results.
|
1110.2872
|
Exchange Economy in Two-User Multiple-Input Single-Output Interference
Channels
|
cs.GT cs.IT math.IT
|
We study the conflict between two links in a multiple-input single-output
interference channel. This setting is strictly competitive and can be related
to perfectly competitive market models. In such models, general equilibrium
theory is used to determine equilibrium measures that are Pareto optimal.
First, we consider the links to be consumers that can trade goods among
themselves. The goods in our setting correspond to beamforming vectors. We
utilize the conflict representation of the consumers in the Edgeworth box, a
graphical tool that depicts the allocation of the goods for the two consumers,
to provide a closed-form solution for all Pareto optimal outcomes. Afterwards, we
model the situation between the links as a competitive market which
additionally defines prices for the goods. The equilibrium in this economy is
called Walrasian and corresponds to the prices that equate the demand to the
supply of goods. We calculate the unique Walrasian equilibrium and propose a
coordination process that is realized by an arbitrator which distributes the
Walrasian prices to the consumers. The consumers then calculate in a
decentralized manner their optimal demand corresponding to beamforming vectors
that achieve the Walrasian equilibrium. This outcome is Pareto optimal and
dominates the noncooperative outcome of the systems. Thus, based on the game
theoretic model and solution concept, an algorithm for a distributed
implementation of the beamforming problem in multiple-input single-output
interference channels is provided.
|
1110.2890
|
ELCA Evaluation for Keyword Search on Probabilistic XML Data
|
cs.DB
|
As probabilistic data management is becoming one of the main research focuses
and keyword search is turning into a more popular query means, it is natural to
think about how to support keyword queries on probabilistic XML data. For
keyword queries on deterministic XML documents, ELCA (Exclusive Lowest Common
Ancestor) semantics allows more relevant fragments rooted at the ELCAs to
appear as results and is more popular than other keyword query result
semantics (such as SLCAs).
In this paper, we investigate how to evaluate ELCA results for keyword
queries on probabilistic XML documents. After defining probabilistic ELCA
semantics in terms of possible world semantics, we propose an approach to
compute ELCA probabilities without generating possible worlds. Then we develop
an efficient stack-based algorithm that can find all probabilistic ELCA results
and their ELCA probabilities for a given keyword query on a probabilistic XML
document. Finally, we experimentally evaluate the proposed ELCA algorithm and
compare it with its SLCA counterpart in terms of result effectiveness, time
and space efficiency, and scalability.
|
1110.2897
|
Randomized Dimensionality Reduction for k-means Clustering
|
cs.DS cs.LG
|
We study the topic of dimensionality reduction for $k$-means clustering.
Dimensionality reduction encompasses the union of two approaches: \emph{feature
selection} and \emph{feature extraction}. A feature selection based algorithm
for $k$-means clustering selects a small subset of the input features and then
applies $k$-means clustering on the selected features. A feature extraction
based algorithm for $k$-means clustering constructs a small set of new
artificial features and then applies $k$-means clustering on the constructed
features. Despite the significance of $k$-means clustering as well as the
wealth of heuristic methods addressing it, provably accurate feature selection
methods for $k$-means clustering are not known. On the other hand, two provably
accurate feature extraction methods for $k$-means clustering are known in the
literature; one is based on random projections and the other is based on the
singular value decomposition (SVD).
This paper makes further progress towards a better understanding of
dimensionality reduction for $k$-means clustering. Namely, we present the first
provably accurate feature selection method for $k$-means clustering and, in
addition, we present two feature extraction methods. The first feature
extraction method is based on random projections and it improves upon the
existing results in terms of time complexity and number of features needed to
be extracted. The second feature extraction method is based on fast approximate
SVD factorizations and it also improves upon the existing results in terms of
time complexity. The proposed algorithms are randomized and provide
constant-factor approximation guarantees with respect to the optimal $k$-means
objective value.
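To make the feature-extraction route concrete, here is a small sketch (our own minimal implementation; the paper's dimension bounds and approximation guarantees are not reproduced): project the data with a scaled Gaussian random matrix, then run plain Lloyd's $k$-means on the projected features.

```python
import numpy as np

def random_projection(X, d, rng):
    """Feature extraction by random projection: map points in R^m down
    to R^d with a scaled Gaussian matrix (a standard construction)."""
    R = rng.standard_normal((X.shape[1], d)) / np.sqrt(d)
    return X @ R

def lloyd_kmeans(X, k, iters, rng):
    """Plain Lloyd's iteration, run on the projected features."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# two separated Gaussian blobs in R^50, clustered after projecting to R^5
X = np.vstack([rng.normal(0, 1, (50, 50)), rng.normal(8, 1, (50, 50))])
labels = lloyd_kmeans(random_projection(X, 5, rng), 2, 10, rng)
print(labels.shape)
```

The point of the provable results is that, for a suitable projection dimension, clustering the projected data yields a constant-factor approximation to the optimal $k$-means cost on the original data.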
|
1110.2899
|
Discovering Emerging Topics in Social Streams via Link Anomaly Detection
|
stat.ML cs.LG cs.SI physics.soc-ph
|
Detection of emerging topics is now receiving renewed interest, motivated by
the rapid growth of social networks. Conventional term-frequency-based
approaches may not be appropriate in this context, because the information
exchanged comprises not only text but also images, URLs, and videos. We focus
on the social aspect of these networks: the links between users that are
generated dynamically, intentionally or unintentionally, through replies,
mentions, and retweets. We propose a probability model of the mentioning
behaviour of a social network user, and propose to detect the emergence of a
new topic from the anomaly measured through the model. We combine the proposed
mention anomaly score with a recently proposed change-point detection technique
based on the Sequentially Discounting Normalized Maximum Likelihood (SDNML), or
with Kleinberg's burst model. Aggregating anomaly scores from hundreds of
users, we show that we can detect emerging topics only based on the
reply/mention relationships in social network posts. We demonstrate our
technique in a number of real data sets we gathered from Twitter. The
experiments show that the proposed mention-anomaly-based approaches can detect
new topics at least as early as the conventional term-frequency-based approach,
and sometimes much earlier when the keyword is ill-defined.
|
1110.2906
|
Multiple dynamical time-scales in networks with hierarchically nested
modular organization
|
physics.soc-ph cs.SI physics.bio-ph
|
Many natural and engineered complex networks have intricate mesoscopic
organization, e.g., the clustering of the constituent nodes into several
communities or modules. Often, such modularity is manifested at several
different hierarchical levels, where the clusters defined at one level appear
as elementary entities at the next higher level. Using a simple model of a
hierarchical modular network, we show that such a topological structure gives
rise to characteristic time-scale separation between dynamics occurring at
different levels of the hierarchy. This generalizes our earlier result for
simple modular networks, where fast intra-modular and slow inter-modular
processes were clearly distinguished. Investigating the process of
synchronization of oscillators in a hierarchical modular network, we show the
existence of as many distinct time-scales as there are hierarchical levels in
the system. This suggests a possible functional role of such mesoscopic
organization principle in natural systems, viz., in the dynamical separation of
events occurring at different spatial scales.
|
1110.2907
|
System Identification Using Reweighted Zero Attracting Least Absolute
Deviation Algorithm
|
cs.SY
|
In this paper, the l1 norm penalty on the filter coefficients is incorporated
in the least mean absolute deviation (LAD) algorithm to improve the performance
of the LAD algorithm. The performance of LAD, zero-attracting LAD (ZA-LAD) and
reweighted zero-attracting LAD (RZA-LAD) are evaluated for linear time varying
system identification under the non-Gaussian (alpha-stable) noise environments.
Effectiveness of the ZA-LAD type algorithms is demonstrated through computer
simulations.
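A sketch of the update rules as we read them, by analogy with the ZA-/RZA-LMS family (parameter names and exact form are ours; Gaussian noise is used below for simplicity, whereas the paper's setting is alpha-stable):

```python
import numpy as np

def rza_lad(x, d, order, mu=0.01, rho=5e-4, eps=10.0):
    """RZA-LAD sketch: a sign-error update from the LAD cost, plus a
    reweighted zero attractor that shrinks small (likely zero) taps
    hardest while barely touching large active taps."""
    w = np.zeros(order)
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # regressor, newest sample first
        e = d[n] - w @ u                      # a priori error
        w += mu * np.sign(e) * u              # LAD (sign-error) step
        w -= rho * np.sign(w) / (1.0 + eps * np.abs(w))  # reweighted attractor
    return w

rng = np.random.default_rng(2)
h = np.zeros(16); h[2], h[7] = 1.0, -0.5      # sparse system to identify
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = rza_lad(x, d, 16)
print(np.argmax(np.abs(w)))  # index of the dominant identified tap
```

Setting `rho = 0` recovers plain LAD, and dropping the `1/(1 + eps*|w|)` reweighting recovers ZA-LAD.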
|
1110.2931
|
Networking - A Statistical Physics Perspective
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Efficient networking has a substantial economic and societal impact in a
broad range of areas including transportation systems, wired and wireless
communications and a range of Internet applications. As transportation and
communication networks become increasingly more complex, the ever increasing
demand for congestion control, higher traffic capacity, quality of service,
robustness and reduced energy consumption require new tools and methods to meet
these conflicting requirements. The new methodology should serve for gaining
better understanding of the properties of networking systems at the macroscopic
level, as well as for the development of new principled optimization and
management algorithms at the microscopic level. Methods of statistical physics
seem best placed to provide new approaches as they have been developed
specifically to deal with non-linear large scale systems. This paper aims at
presenting an overview of tools and methods that have been developed within the
statistical physics community and that can be readily applied to address the
emerging problems in networking. These include diffusion processes, methods
from disordered systems and polymer physics, and probabilistic inference, which
have direct relevance to network routing, file and frequency distribution, the
exploration of network structures and vulnerability, and various other
practical networking applications.
|
1110.2980
|
The Myth of Global Science Collaboration - Collaboration patterns in
epistemic communities
|
physics.soc-ph cs.DL cs.SI
|
Scientific collaboration is often perceived as a joint global process that
involves researchers worldwide, regardless of their place of work and
residence. Globalization of science, in this respect, implies that
collaboration among scientists takes place along the lines of common topics and
irrespective of the spatial distances between the collaborators. The networks
of collaborators, termed 'epistemic communities', should thus have a
space-independent structure. This paper shows that such a notion of globalized
scientific collaboration is not supported by empirical data. It introduces a
novel approach of analyzing distance-dependent probabilities of collaboration.
The results of the analysis of six distinct scientific fields reveal that
intra-country collaboration is about 10-50 times more likely to occur than
international collaboration. Moreover, strong dependencies exist between
collaboration activity (measured in co-authorships) and spatial distance when
confined to national borders. However, the fact that distance becomes
irrelevant once collaboration is taken to the international scale suggests a
globalized science system that is strongly influenced by the gravity of local
science clusters. The similarity of the probability functions of the six
science fields analyzed suggests a universal mode of spatial governance that is
independent from the mode of knowledge creation in science.
|
1110.2995
|
On sequences of projections of the cubic lattice
|
math.CO cs.CG cs.IT math.IT
|
In this paper we study sequences of lattices which are, up to similarity,
projections of $\mathbb{Z}^{n+1}$ onto a hyperplane $\bm{v}^{\perp}$, with
$\bm{v} \in \mathbb{Z}^{n+1}$ and converge to a target lattice $\Lambda$ which
is equivalent to an integer lattice. We show a sufficient condition to
construct sequences converging at rate $O(1/ |\bm{v}|^{2/n})$ and exhibit
explicit constructions for some important families of lattices.
|
1110.3001
|
Step size adaptation in first-order method for stochastic strongly
convex programming
|
math.OC cs.LG
|
We propose a first-order method for stochastic strongly convex optimization
that attains an $O(1/n)$ rate of convergence. Our analysis shows that the
proposed method is simple, easy to implement, and, in the worst case,
asymptotically four times faster than its peers. We derive this method from
several intuitive observations generalized from existing first-order
optimization methods.
|
1110.3002
|
Are Minds Computable?
|
cs.AI
|
This essay explores the limits of Turing machines concerning the modeling of
minds and suggests alternatives to go beyond those limits.
|
1110.3005
|
Symmetry in the sequence of approximation coefficients
|
math.NT cs.IT math.DS math.HO math.IT
|
Let $\{a_n\}_1^\infty$ and $\{\theta_n\}_0^\infty$ be the sequences of
partial quotients and approximation coefficients for the continued fraction
expansion of an irrational number. We will provide a function $f$ such that
$a_{n+1} = f(\theta_{n\pm1},\theta_n)$. In tandem with a formula due to Dajani
and Kraaikamp, we will write $\theta_{n \pm 1}$ as a function of $(\theta_{n
\mp 1}, \theta_n)$, revealing an elegant symmetry in this classical sequence
and allowing for its recovery from a pair of consecutive terms.
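A concrete numeric illustration of the two sequences (for $\alpha = \sqrt{2}$, whose continued fraction is $[1;2,2,2,\dots]$; the function $f$ of the abstract is not reproduced here): build the convergents $p_n/q_n$ by the standard recurrence and form $\theta_n = q_n^2\,|\alpha - p_n/q_n|$.

```python
import math

alpha = math.sqrt(2)
a = [1] + [2] * 12                      # partial quotients a_0, a_1, ...
p = [a[0], a[1] * a[0] + 1]             # convergent numerators
q = [1, a[1]]                           # convergent denominators
for n in range(2, len(a)):
    p.append(a[n] * p[-1] + p[-2])      # p_n = a_n p_{n-1} + p_{n-2}
    q.append(a[n] * q[-1] + q[-2])      # q_n = a_n q_{n-1} + q_{n-2}

# approximation coefficients theta_n = q_n^2 * |alpha - p_n/q_n|
theta = [q[n] ** 2 * abs(alpha - p[n] / q[n]) for n in range(len(a))]
print([round(t, 4) for t in theta[-3:]])   # -> near 1/(2*sqrt(2)) ~ 0.3536
```

For this quadratic irrational the $\theta_n$ settle toward $1/(2\sqrt{2})$, and each pair of consecutive terms carries enough information to recover its neighbours, which is the symmetry the paper makes precise.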
|
1110.3017
|
Towards a Query Language for the Web of Data (A Vision Paper)
|
cs.DB cs.NI
|
Research on querying the Web of Data is still in its infancy. In this paper,
we provide an initial set of general features that we envision should be
considered in order to define a query language for the Web of Data.
Furthermore, for each of these features, we pose questions that have not been
addressed before in the context of querying the Web of Data. We believe that
addressing these questions and studying these features may guide the next 10
years of research on the Web of Data.
|
1110.3062
|
Separation Theorems for Phase-Incoherent Multiple-User Channels
|
cs.IT math.IT
|
We study the transmission of two correlated and memoryless sources $(U,V)$
over several multiple-user phase asynchronous channels. Namely, we consider a
class of phase-incoherent multiple access relay channels (MARC) with both
non-causal and causal unidirectional cooperation between encoders, referred to
as phase-incoherent unidirectional non-causal cooperative MARC (PI-UNCC-MARC),
and phase-incoherent unidirectional causal cooperative MARC (PI-UCC-MARC)
respectively. We also consider phase-incoherent interference channels (PI-IC),
and interference relay channel (PI-IRC) models in the same context. In all
cases, the input signals are assumed to undergo non-ergodic phase shifts due to
the channel. The shifts are realistically assumed to be unknown to the
transmitters and known to the receivers. Both necessary and sufficient
conditions in order to reliably send the correlated sources to the destinations
over the considered channels are derived. In particular, for all of the channel
models, we first derive an outer bound for reliable communication that is
defined with respect to the source entropy content (i.e., the triple
$(H(U|V),H(V|U),H(U,V))$). Then, using {\em separate} source and channel
coding, under specific gain conditions, we establish the same region as the
inner bound and therefore obtain tight conditions for reliable communication
for the specific channel under study. We thus establish a source-channel
separation theorem for each channel and conclude that without the knowledge of
the phase shifts at the transmitter sides, separation is optimal. It is further
conjectured that separation in general is optimal for all channel coefficients.
|
1110.3069
|
Multiterminal Source Coding under Logarithmic Loss
|
cs.IT math.IT
|
We consider the classical two-encoder multiterminal source coding problem
where distortion is measured under logarithmic loss. We provide a single-letter
characterization of the achievable rate distortion region for arbitrarily
correlated sources with finite alphabets. In doing so, we also give the rate
distortion region for the $m$-encoder CEO problem (also under logarithmic
loss). Several applications and examples are given.
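For concreteness, the logarithmic-loss distortion (as commonly defined in this setting; notation ours) lets the reconstruction $\hat{x}$ be a probability distribution over the source alphabet:

```latex
d(x, \hat{x}) \;=\; \log \frac{1}{\hat{x}(x)},
```

so the best reconstruction given an observation is the posterior distribution of the source, and the minimum expected distortion is the corresponding conditional entropy.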
|
1110.3076
|
Efficient Latent Variable Graphical Model Selection via Split Bregman
Method
|
stat.ML cs.LG
|
We consider the problem of covariance matrix estimation in the presence of
latent variables. Under suitable conditions, it is possible to learn the
marginal covariance matrix of the observed variables via a tractable convex
program, where the concentration matrix of the observed variables is decomposed
into a sparse matrix (representing the graphical structure of the observed
variables) and a low rank matrix (representing the marginalization effect of
latent variables). We present an efficient first-order method based on split
Bregman to solve the convex problem. The algorithm is guaranteed to converge
under mild conditions. We show that our algorithm is significantly faster than
the state-of-the-art algorithm on both artificial and real-world data. Applying
the algorithm to a gene expression data set involving thousands of genes, we show
that most of the correlation between observed variables can be explained by
only a few dozen latent factors.
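The tractable convex program alluded to above is of the sparse-plus-low-rank type (our notation; the paper may parameterize the regularization weights differently):

```latex
\min_{S,\,L}\; -\log\det(S-L)
  + \operatorname{tr}\!\bigl(\hat{\Sigma}\,(S-L)\bigr)
  + \lambda\,\|S\|_{1} + \gamma\,\operatorname{tr}(L)
\quad\text{s.t.}\quad S-L \succ 0,\;\; L \succeq 0,
```

where $\hat{\Sigma}$ is the sample covariance, $S$ carries the sparse graphical structure of the observed variables, and $L$ is the low-rank term induced by marginalizing out the latent variables. Split Bregman is well suited here because it decouples the smooth log-determinant piece from the $\ell_1$ and trace penalties, each of which has a cheap proximal update.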
|
1110.3088
|
Towards cross-lingual alerting for bursty epidemic events
|
cs.CL cs.IR cs.SI
|
Background: Online news reports are increasingly becoming a source for event
based early warning systems that detect natural disasters. Harnessing the
massive volume of information available from multilingual newswire presents as
many challenges as opportunities due to the patterns of reporting complex
spatiotemporal events. Results: In this article we study the problem of
utilising correlated event reports across languages. We track the evolution of
16 disease outbreaks using 5 temporal aberration detection algorithms on
text-mined events classified according to disease and outbreak country. Using
ProMED reports as a silver standard, comparative analysis of news data for 13
languages over a 129 day trial period showed improved sensitivity, F1 and
timeliness across most models using cross-lingual events. We report a detailed
case study analysis for Cholera in Angola 2010 which highlights the challenges
faced in correlating news events with the silver standard. Conclusions: The
results show that automated health surveillance using multilingual text mining
has the potential to turn low value news into high value alerts if informed
choices are used to govern the selection of models and data sources. An
implementation of the C2 alerting algorithm using multilingual news is
available at the BioCaster portal http://born.nii.ac.jp/?page=globalroundup.
|
1110.3089
|
OMG U got flu? Analysis of shared health messages for bio-surveillance
|
cs.CL cs.IR cs.SI
|
Background: Micro-blogging services such as Twitter offer the potential to
crowdsource epidemics in real-time. However, Twitter posts ('tweets') are often
ambiguous and reactive to media trends. In order to ground user messages in
epidemic response we focused on tracking reports of self-protective behaviour
such as avoiding public gatherings or increased sanitation as the basis for
further risk analysis. Results: We created guidelines for tagging
self-protective behaviour based on Jones and Salath\'e (2009)'s behaviour response
survey. Applying the guidelines to a corpus of 5283 Twitter messages related to
influenza like illness showed a high level of inter-annotator agreement (kappa
0.86). We employed supervised learning using unigrams, bigrams and regular
expressions as features with two supervised classifiers (SVM and Naive Bayes)
to classify tweets into 4 self-reported protective behaviour categories plus a
self-reported diagnosis. In addition to classification performance we report
moderately strong Spearman's Rho correlation by comparing classifier output
against WHO/NREVSS laboratory data for A(H1N1) in the USA during the 2009-2010
influenza season. Conclusions: The study adds to evidence supporting a high
degree of correlation between pre-diagnostic social media signals and
diagnostic influenza case data, pointing the way towards low cost sensor
networks. We believe that the signals we have modelled may be applicable to a
wide range of diseases.
|
1110.3091
|
What's unusual in online disease outbreak news?
|
cs.CL cs.IR cs.SI
|
Background: Accurate and timely detection of public health events of
international concern is necessary to help support risk assessment and response
and save lives. Novel event-based methods that use the World Wide Web as a
signal source offer potential to extend health surveillance into areas where
traditional indicator networks are lacking. In this paper we address the issue
of systematically evaluating online health news to support automatic alerting
using daily disease-country counts text mined by BioCaster from real-world
data. For 18 data sets produced by BioCaster, we compare 5 aberration
detection algorithms (EARS C2, C3, W2, F-statistic and EWMA) for performance
against expert moderated ProMED-mail postings. Results: We report sensitivity,
specificity, positive predictive value (PPV), negative predictive value (NPV),
mean alerts/100 days and F1, at 95% confidence interval (CI) for 287
ProMED-mail postings on 18 outbreaks across 14 countries over a 366 day period.
Results indicate that W2 had the best F1 with a slight benefit for day of week
effect over C2. In drill down analysis we indicate issues arising from the
granular choice of country-level modeling, sudden drops in reporting due to day
of week effects and reporting bias. Automatic alerting has been implemented in
BioCaster available from http://born.nii.ac.jp. Conclusions: Online health news
alerts have the potential to enhance manual analytical methods by increasing
throughput, timeliness and detection rates. Systematic evaluation of health
news aberrations is necessary to push forward our understanding of the complex
relationship between news report volumes and case numbers and to select the
best performing features and algorithms.
|
1110.3094
|
Syndromic classification of Twitter messages
|
cs.CL cs.IR cs.SI
|
Recent studies have shown strong correlation between social networking data
and national influenza rates. We expanded upon this success to develop an
automated text mining system that classifies Twitter messages in real time into
six syndromic categories based on key terms from a public health ontology.
10-fold cross validation tests were used to compare Naive Bayes (NB) and
Support Vector Machine (SVM) models on a corpus of 7431 Twitter messages. SVM
performed better than NB on 4 out of 6 syndromes. The best performing
classifiers showed moderately strong F1 scores: respiratory = 86.2 (NB);
gastrointestinal = 85.4 (SVM polynomial kernel degree 2); neurological = 88.6
(SVM polynomial kernel degree 1); rash = 86.0 (SVM polynomial kernel degree 1);
constitutional = 89.3 (SVM polynomial kernel degree 1); hemorrhagic = 89.9
(NB). The resulting classifiers were deployed together with an EARS C2
aberration detection algorithm in an experimental online system.
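As a toy illustration of the classification setup (not the paper's data, ontology, or tuned models), here is a minimal multinomial Naive Bayes over unigram + bigram features with two of the six syndromic categories:

```python
import math
from collections import Counter

def featurize(text):
    # unigram + bigram features, as in the abstract (tokenization is ours)
    toks = text.lower().split()
    return toks + [" ".join(b) for b in zip(toks, toks[1:])]

def train_nb(docs):
    # multinomial Naive Bayes with add-one smoothing: a minimal stand-in
    # for the NB/SVM classifiers compared in the paper
    counts, priors, vocab = {}, Counter(), set()
    for text, label in docs:
        priors[label] += 1
        c = counts.setdefault(label, Counter())
        for f in featurize(text):
            c[f] += 1
            vocab.add(f)
    return counts, priors, vocab

def classify(model, text):
    counts, priors, vocab = model
    total = sum(priors.values())
    def log_score(label):
        c, n = counts[label], sum(counts[label].values())
        s = math.log(priors[label] / total)
        for f in featurize(text):
            s += math.log((c[f] + 1) / (n + len(vocab)))
        return s
    return max(priors, key=log_score)

docs = [("sore throat and bad cough", "respiratory"),
        ("coughing all night runny nose", "respiratory"),
        ("stomach ache and vomiting", "gastrointestinal"),
        ("nausea vomiting diarrhea", "gastrointestinal")]
model = train_nb(docs)
print(classify(model, "bad cough and sore throat"))  # respiratory
```

The paper's SVM variants replace the generative scoring above with a discriminative decision function over the same unigram, bigram, and regular-expression features.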
|
1110.3109
|
Robust Image Analysis by L1-Norm Semi-supervised Learning
|
cs.CV cs.LG
|
This paper presents a novel L1-norm semi-supervised learning algorithm for
robust image analysis by giving new L1-norm formulation of Laplacian
regularization which is the key step of graph-based semi-supervised learning.
Since our L1-norm Laplacian regularization is defined directly over the
eigenvectors of the normalized Laplacian matrix, we successfully formulate
semi-supervised learning as an L1-norm linear reconstruction problem which can
be effectively solved with sparse coding. By working with only a small subset
of eigenvectors, we further develop a fast sparse coding algorithm for our
L1-norm semi-supervised learning. Due to the sparsity induced by sparse coding,
the proposed algorithm can deal with the noise in the data to some extent and
thus has important applications to robust image analysis, such as noise-robust
image classification and noise reduction for visual and textual bag-of-words
(BOW) models. In particular, this paper is the first attempt to obtain robust
image representation by sparse co-refinement of visual and textual BOW models.
The experimental results have shown the promising performance of the proposed
algorithm.
|
1110.3121
|
Transient fluctuation of the prosperity of firms in a network economy
|
q-bio.MN cs.CE physics.bio-ph physics.soc-ph
|
The transient fluctuation of the prosperity of firms in a network economy is
investigated with an abstract stochastic model. The model describes the profit
which firms make when they sell materials to a firm which produces a product
and the fixed cost expense to the firms to produce those materials and product.
The formulae for this model are parallel to those for population dynamics. The
swinging changes in the fluctuation in the transient state from the initial
growth to the final steady state are the consequence of a topology-dependent
time trial competition between the profitable interactions and expense. The
firm in a sparse random network economy is more likely to go bankrupt than
the limiting value of the fluctuation in the steady state would suggest, and it
risks never reaching the far less volatile steady state.
|
1110.3158
|
Efficient Incremental Breadth-Depth XML Event Mining
|
cs.DB
|
Many applications log a large amount of events continuously. Extracting
interesting knowledge from logged events is an emerging active research area in
data mining. In this context, we propose an approach for mining frequent events
and association rules from logged events in XML format. This approach is
composed of two main phases: I) constructing a novel tree structure called
Frequency XML-based Tree (FXT), which contains the frequency of events to be
mined; II) querying the constructed FXT using XQuery to discover frequent
itemsets and association rules. The FXT is constructed with a single-pass over
logged data. We implement the proposed algorithm and study various performance
issues. The performance study shows that the algorithm is efficient, for both
constructing the FXT and discovering association rules.
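A minimal stand-in for the two phases (the real FXT is a tree keyed on the events' XML structure; here we key only on an assumed `name` attribute, and phase II is reduced to a support filter rather than XQuery):

```python
import io
import xml.etree.ElementTree as ET
from collections import Counter

# phase I stand-in: single-pass frequency counting over an XML event log
log = """<log>
  <event name="login"/><event name="read"/><event name="login"/>
  <event name="write"/><event name="read"/><event name="login"/>
</log>"""

freq = Counter()
for _, elem in ET.iterparse(io.StringIO(log), events=("end",)):
    if elem.tag == "event":
        freq[elem.get("name")] += 1

# phase II stand-in: keep only events meeting a minimum support threshold
min_support = 2
frequent = {e: n for e, n in freq.items() if n >= min_support}
print(frequent)  # {'login': 3, 'read': 2}
```

The single pass over the log mirrors the FXT's one-pass construction; in the paper, frequent itemsets and association rules are then extracted by querying the stored frequencies with XQuery.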
|
1110.3177
|
On a Class of Quadratic Polynomials with no Zeros and its Application to
APN Functions
|
cs.IT math.IT
|
We show that there exists an infinite family of APN functions of the form
$F(x)=x^{2^{s}+1} + x^{2^{k+s}+2^k} + cx^{2^{k+s}+1} + c^{2^k}x^{2^k + 2^s} +
\delta x^{2^{k}+1}$, over $\gf_{2^{2k}}$, where $k$ is an even integer and
$\gcd(2k,s)=1, 3\nmid k$. This family was proposed by Lilya Budaghyan and
Claude Carlet, who show in \cite{carlet-1} that the function is
APN when there exists $c$ such that the polynomial
$y^{2^s+1}+cy^{2^s}+c^{2^k}y+1=0$ has no solutions in the field $\gf_{2^{2k}}$.
In \cite{carlet-1} they demonstrate by computer that such elements $c$ can be
found over many fields, particularly when the degree of the field is not
divisible by 3. We show that such $c$ exists when $k$ is even and $3\nmid k$
(and demonstrate why the $k$ odd case only re-describes an existing family of
APN functions). The form of these coefficients is given so that we may write
the infinite family of APN functions.
|
1110.3194
|
Controlled Total Variation regularization for inverse problems
|
cs.CV
|
This paper provides a new algorithm for solving inverse problems, based on
the minimization of the $L^2$ norm and on the control of the Total Variation.
It relaxes the role of the Total Variation in the classical Total Variation
minimization approach, which permits us to obtain better approximations of the
solutions to inverse problems. Numerical results on the deconvolution problem
show that our method outperforms some previous ones.
|
1110.3195
|
Blind Known Interference Cancellation
|
cs.IT cs.NI math.IT
|
This paper investigates interference-cancellation schemes at the receiver, in
which the original data of the interference is known a priori. Such a priori
knowledge is common in wireless relay networks. For example, a transmitting
relay could be relaying data that was previously transmitted by a node, in
which case the interference received by the node now is actually self
information. Besides the case of self information, the node could also have
overheard or received the interference data in a prior transmission by another
node. Directly removing the known interference requires an accurate estimate of
the interference channel, which may be difficult in many situations. In this
paper, we propose a novel scheme, Blind Known Interference Cancellation (BKIC),
to cancel known interference without interference channel information. BKIC
consists of two steps. The first step combines adjacent symbols to cancel the
interference, exploiting the fact that the channel coefficients are almost the
same between successive symbols. After such interference cancellation, however,
the signal of interest is also distorted. The second step recovers the signal
of interest amidst the distortion. We propose two algorithms for the critical
second step. The first algorithm (BKIC-S) is based on the principle of
smoothing. It is simple and has near optimal performance in the slow fading
scenario. The second algorithm (BKIC-RBP) is based on the principle of
real-valued belief propagation. It can achieve MAP-optimal performance with
fast convergence, and has near optimal performance even in the fast fading
scenario. Both BKIC schemes outperform the traditional self-interference
cancellation schemes with perfect initial channel information by a large
margin, while having lower complexities.
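A small numeric sketch of the first step as we read it (BPSK, constant channels, noise omitted, variable names ours): combining adjacent received symbols weighted by the known interference data removes the unknown interference channel $g$, while distorting the desired signal exactly as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
s = rng.choice([-1.0, 1.0], N)        # desired BPSK symbols (unknown)
x = rng.choice([-1.0, 1.0], N)        # known interference data
h, g = 0.9 + 0.2j, 1.3 - 0.7j         # slow-fading channels, g unknown
y = h * s + g * x                     # received signal (noise omitted)

# Step 1 (our reading of BKIC): combine adjacent symbols so the unknown
# interference channel g drops out when it is constant across symbols:
#   z_k = y_k * x_{k+1} - y_{k+1} * x_k
#       = h * (s_k * x_{k+1} - s_{k+1} * x_k)   <- g-free but distorted
z = y[:-1] * x[1:] - y[1:] * x[:-1]
expected = h * (s[:-1] * x[1:] - s[1:] * x[:-1])
print(np.allclose(z, expected))  # True
```

Step 2, recovering $s$ from the distorted combination via smoothing (BKIC-S) or real-valued belief propagation (BKIC-RBP), is not reproduced here.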
|
1110.3197
|
Non-memoryless Analog Network Coding in Two-Way Relay Channel
|
cs.IT cs.NI math.IT
|
Physical-layer Network Coding (PNC) can significantly improve the throughput
of two-way relay channels. An interesting variant of PNC is Analog Network
Coding (ANC). Almost all ANC schemes proposed to date, however, operate in a
symbol by symbol manner (memoryless) and cannot exploit the redundant
information in channel-coded packets to enhance performance. This paper
proposes a non-memoryless ANC scheme. In particular, we design a soft-input
soft-output decoder for the relay node to process the superimposed packets from
the two end nodes to yield an estimated MMSE packet for forwarding back to the
end nodes. Our decoder takes into account the correlation among different
symbols in the packets due to channel coding, and provides significantly
improved MSE performance. Our analysis shows that the SNR improvement at the
relay node is lower bounded by 1/R (R is the code rate) with the simplest LDPC
code (repeat code). The SNR improvement is also verified by numerical
simulation with LDPC code. Our results indicate that LDPC codes of different
degrees are preferred in different SNR regions. Generally speaking, smaller
degrees are preferred for lower SNRs.
|
1110.3216
|
An Enhanced Multiple Random Access Scheme for Satellite Communications
|
cs.IT cs.NI math.IT
|
In this paper, we introduce Multi-Slots Coded ALOHA (MuSCA) as a multiple
random access method for satellite communications. This scheme can be
considered as a generalization of the Contention Resolution Diversity Slotted
Aloha (CRDSA) mechanism. Instead of transmitting replicas, this system replaces
them by several parts of a single word of an error correcting code. It is also
different from Coded Slotted ALOHA (CSA) as the assumption of destructive
collisions is not adopted. In MuSCA, the entity in charge of the decoding
mechanism collects all bursts of the same user (including the interfered slots)
before decoding and implements a successive interference cancellation (SIC)
process to remove successfully decoded signals. Simulations show that the
achievable total normalized throughput exceeds 1.25 for a frame of 100 slots
and 1.4 for a frame of 500 slots, gains of 80% and 75% over CRDSA and CSA,
respectively. This paper is a first analysis of the
proposed scheme and opens several perspectives.
|
1110.3225
|
Mining Patterns in Networks using Homomorphism
|
cs.DS cs.SI physics.soc-ph
|
In recent years many algorithms have been developed for finding patterns in
graphs and networks. A disadvantage of these algorithms is that they use
subgraph isomorphism to determine the support of a graph pattern; subgraph
isomorphism is a well-known NP-complete problem. In this paper, we propose an
alternative approach which mines tree patterns in networks by using subgraph
homomorphism. The advantage of homomorphism is that it can be computed in
polynomial time, which allows us to develop an algorithm that mines tree
patterns in arbitrary graphs in incremental polynomial time. Homomorphism
however entails two problems not found when using isomorphism: (1) two patterns
of different size can be equivalent; (2) patterns of unbounded size can be
frequent. In this paper we formalize these problems and study solutions that
easily fit within our algorithm.
|
1110.3239
|
Improving parameter learning of Bayesian nets from incomplete data
|
cs.LG cs.AI stat.ML
|
This paper addresses the estimation of parameters of a Bayesian network from
incomplete data. The task is usually tackled by running the
Expectation-Maximization (EM) algorithm several times in order to obtain a high
log-likelihood estimate. We argue that choosing the maximum log-likelihood
estimate (as well as the maximum penalized log-likelihood and the maximum a
posteriori estimate) has severe drawbacks, being affected both by overfitting
and model uncertainty. Two ideas are discussed to overcome these issues: a
maximum entropy approach and a Bayesian model averaging approach. Both ideas
can be easily applied on top of EM, while the entropy idea can be also
implemented in a more sophisticated way, through a dedicated non-linear solver.
A vast set of experiments shows that these ideas produce significantly better
estimates and inferences than the traditional and widely used maximum
(penalized) log-likelihood and maximum a posteriori estimates. In particular,
if EM is adopted as optimization engine, the model averaging approach is the
best performing one; its performance is matched by the entropy approach when
implemented using the non-linear solver. The results suggest that the
applicability of these ideas is immediate (they are easy to implement and to
integrate in currently available inference engines) and that they constitute a
better way to learn Bayesian network parameters.
|
1110.3264
|
Reduced-dimension multiuser detection: detectors and performance
guarantees
|
cs.IT math.IT
|
We explore several reduced-dimension multiuser detection (RD-MUD) structures
that significantly decrease the number of required correlation branches at the
receiver front-end, while still achieving performance similar to that of the
conventional matched-filter (MF) bank. RD-MUD exploits the fact that the number
of active users is typically small relative to the total number of users in the
system and relies on ideas of analog compressed sensing to reduce the number of
correlators. We first develop a general framework for both linear and nonlinear
RD-MUD detectors. We then present theoretical performance analysis for two
specific detectors: the linear reduced-dimension decorrelating (RDD) detector,
which combines subspace projection and thresholding to determine active users
and sign detection for data recovery, and the nonlinear reduced-dimension
decision-feedback (RDDF) detector, which combines decision-feedback orthogonal
matching pursuit for active user detection and sign detection for data
recovery. The theoretical performance results for both detectors are validated
via numerical simulations.
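A minimal sketch of the nonlinear RDDF idea described above: greedy, decision-feedback selection of active users followed by sign detection for their data. The random correlation matrix, the noiseless BPSK setup, and all names (`rddf_detect`, the user indices) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rddf_detect(y, A, k):
    """Greedy OMP-style detection: pick k active users, then recover
    their +/-1 data bits by sign detection (illustrative sketch)."""
    residual = y.copy()
    active = []
    for _ in range(k):
        # correlate the residual with each user's signature column
        scores = np.abs(A.T @ residual)
        scores[active] = -np.inf              # never re-select a user
        active.append(int(np.argmax(scores)))
        # decision feedback: subtract the least-squares fit of chosen users
        sub = A[:, active]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    # sign detection for the data bits of the active users
    bits = {u: int(b) for u, b in zip(active, np.sign(coef))}
    return sorted(active), bits

rng = np.random.default_rng(0)
N, M, k = 64, 32, 3                           # users, correlators, active users
A = rng.standard_normal((M, N)) / np.sqrt(M)  # random signature projections
true_users, true_bits = [5, 20, 41], np.array([1.0, -1.0, 1.0])
y = A[:, true_users] @ true_bits              # noiseless received vector
users, bits = rddf_detect(y, A, k)
print(users)                                  # expected [5, 20, 41] here
```

With far fewer correlators (M = 32) than users (N = 64), the greedy detector still isolates the small active set, which is the dimension-reduction point of the abstract.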
|
1110.3267
|
Multi-tier Network Performance Analysis using a Shotgun Cellular System
|
cs.IT math.IT
|
This paper studies the carrier-to-interference ratio (CIR) and
carrier-to-interference-plus-noise ratio (CINR) performance at the mobile
station (MS) within a multi-tier network composed of M tiers of wireless
networks, with each tier modeled as the homogeneous n-dimensional (n-D, n=1,2,
and 3) shotgun cellular system, where the base station (BS) distribution is
given by the homogeneous Poisson point process in n-D. The CIR and CINR at the
MS in a single tier network are thoroughly analyzed to simplify the analysis of
the multi-tier network. For the multi-tier network with given system
parameters, the following are the main results of this paper: (1)
semi-analytical expressions for the tail probabilities of CIR and CINR; (2) a
closed form expression for the tail probability of CIR in the range
[1,Infinity); (3) a closed form expression for the tail probability of an
approximation to CIR in the entire range [0,Infinity); (4) a lookup table based
approach for obtaining the tail probability of CINR, and (5) the study of the
effect of shadow fading and BSs with ideal sectorized antennas on the CIR and
CINR. Based on these results, it is shown that, in a practical cellular system,
the installation of additional wireless networks (microcells, picocells and
femtocells) with low power BSs over the already existing macrocell network will
always improve the CINR performance at the MS.
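The shotgun cellular model lends itself to direct simulation: drop base stations as a homogeneous 2-D Poisson point process and estimate the CIR tail probability by Monte Carlo. The sketch below assumes unit transmit powers, no shadow fading, and illustrative density and path-loss values; it is not the paper's semi-analytical expressions.

```python
import numpy as np

def cir_tail_probability(lam, alpha, tau, trials=2000, radius=30.0, seed=1):
    """Monte Carlo estimate of P(CIR >= tau) at a mobile at the origin.
    BSs form a 2-D homogeneous PPP with density lam inside a disc;
    path-loss exponent alpha; the MS is served by the strongest BS."""
    rng = np.random.default_rng(seed)
    hits = 0
    area = np.pi * radius ** 2
    for _ in range(trials):
        n = rng.poisson(lam * area)           # Poisson number of BSs
        if n < 2:
            continue                          # need at least one interferer
        r = radius * np.sqrt(rng.random(n))   # uniform radii in the disc
        p = r ** (-alpha)                     # received powers (unit Tx power)
        carrier = p.max()                     # serve from the strongest BS
        interference = p.sum() - carrier
        if carrier >= tau * interference:
            hits += 1
    return hits / trials

print(cir_tail_probability(lam=0.05, alpha=4.0, tau=1.0))
```

Because the same seed reproduces the same point configurations, the estimated tail probability is monotonically non-increasing in the threshold tau, matching the qualitative behavior the paper analyzes in closed form.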
|
1110.3280
|
Stochastic Ordering based Carrier-to-Interference Ratio Analysis for the
Shotgun Cellular Systems
|
cs.IT math.IT
|
A simple analytical tool based on stochastic ordering is developed to compare
the distributions of carrier-to-interference ratio at the mobile station of two
cellular systems where the base stations are distributed randomly according to
certain non-homogeneous Poisson point processes. The comparison is conveniently
done by studying only the base station densities without having to solve for
the distributions of the carrier-to-interference ratio, that are often hard to
obtain.
|
1110.3315
|
Evolution of spatially embedded branching trees with interacting nodes
|
physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.SI
|
We study the evolution of branching trees embedded in Euclidean spaces with
suppressed branching of spatially close nodes. This cooperative branching
process accounts for the effect of overcrowding of nodes in the embedding space
and mimics the evolution of life processes (the so-called "tree of life") in
which a new level of complexity emerges as a short transition followed by a
long period of gradual evolution or even complete extinction. We consider the
models of branching trees in which each new node can produce up to two twigs
within a unit distance from the node in the Euclidean space, but this branching
is suppressed if the newborn node is closer than at distance $a$ from one of
the previous generation nodes. This results in an explosive (exponential)
growth in the initial period, and, after some crossover time $t_x \sim
\ln(1/a)$ for small $a$, in a slow (power-law) growth. This special point is
also a transition from "small" to "large" worlds in terms of network science. We
show that if the space is restricted, then this evolution may end by
extinction.
|
1110.3347
|
Dynamic Batch Bayesian Optimization
|
cs.LG
|
Bayesian optimization (BO) algorithms try to optimize an unknown function
that is expensive to evaluate using a minimum number of evaluations/experiments.
Most of the proposed algorithms in BO are sequential, where only one experiment
is selected at each iteration. This method can be time-inefficient when each
experiment takes a long time and more than one experiment can be run
concurrently. On the other hand, requesting a fixed-size batch of experiments at
each iteration causes performance inefficiency in BO compared to sequential
policies. In this paper, we present an algorithm that requests a batch of
experiments at each time step t, where the batch size p_t is dynamically
determined at each step. Our algorithm is based on the observation that the
experiments selected by the sequential policy can sometimes be almost
independent of each other. Our algorithm identifies such scenarios and
requests those experiments at the same time without degrading the performance.
We evaluate our proposed method using the Expected Improvement policy and the
results show substantial speedup with little impact on the performance in eight
real and synthetic benchmarks.
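For reference, the Expected Improvement acquisition used as the base sequential policy can be sketched as follows, assuming a Gaussian posterior with mean `mu` and standard deviation `sigma` at a candidate point. This is the standard closed-form EI for maximization, not the paper's dynamic-batch logic.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """EI for maximization under a Gaussian posterior N(mu, sigma^2).
    best is the incumbent best observed value; xi is an exploration margin."""
    if sigma <= 0.0:
        return max(mu - best - xi, 0.0)       # degenerate (noise-free) case
    z = (mu - best - xi) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # normal cdf
    return (mu - best - xi) * Phi + sigma * phi

# a candidate predicted above the incumbent has positive expected improvement
print(expected_improvement(mu=1.2, sigma=0.3, best=1.0) > 0.0)   # → True
```

A sequential policy picks the argmax of this score over candidates; the abstract's contribution is deciding how many such near-independent picks to issue at once.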
|
1110.3365
|
Secure Hybrid Digital-Analog Coding With Side Information at the
Receiver
|
cs.IT math.IT
|
In this work, the problem of transmitting an i.i.d Gaussian source over an
i.i.d Gaussian wiretap channel with an i.i.d Gaussian side information
available at the intended receiver is considered. The intended receiver is
assumed to have a certain minimum SNR and the eavesdropper is assumed to have a
strictly lower SNR, compared to the intended receiver. The objective is to
minimize the distortion of source reconstruction at the intended receiver. In
this work, it is shown that the source-channel separation coding scheme is
optimum in the sense of achieving minimum distortion. Two hybrid digital-analog
Wyner-Ziv coding schemes are then proposed which achieve the minimum
distortion. These secure joint source-channel coding schemes are based on the
Wyner-Ziv coding scheme and wiretap channel coding scheme when the analog
source is not explicitly quantized. The proposed secure hybrid digital-analog
schemes are analyzed under the main channel SNR mismatch. It is proven that the
proposed schemes can give a graceful degradation of distortion with SNR under
SNR mismatch, i.e., when the actual SNR is larger than the designed SNR.
|
1110.3366
|
Optimum Relay Scheme in a Secure Two-Hop Amplify and Forward Cooperative
Communication System
|
cs.IT math.IT
|
A MIMO secure two-hop wireless communication system is considered in this
paper. In this model, there are no direct links between the source-destination
and the source-eavesdropper. The problem is maximizing the secrecy capacity of
the system over all possible amplify and forward (AF) relay strategies, such
that the power consumption at the source node and the relay node is limited.
When all the nodes are equipped with a single antenna, this non-convex
optimization problem is fully characterized. When all the nodes (except the
intended receiver) are equipped with multiple antennas, the optimization
problem is characterized based on the generalized eigenvalues-eigenvectors of
the channel gain matrices.
|
1110.3382
|
Sampling Techniques in Bayesian Finite Element Model Updating
|
cs.CE
|
Recent papers in the field of Finite Element Model (FEM) updating have
highlighted the benefits of Bayesian techniques. The Bayesian approaches are
designed to deal with the uncertainties associated with complex systems, which
is the main problem in the development and updating of FEMs. This paper
highlights the complexities and challenges of implementing any Bayesian method
when the analysis involves a complicated structural dynamic model. In such
systems an analytical Bayesian formulation might not be available in an
analytic form; therefore this leads to the use of numerical methods, i.e.
sampling methods. The main challenge then is to determine an efficient sampling
of the model parameter space. In this paper, three sampling techniques, the
Metropolis-Hastings (MH) algorithm, Slice Sampling and the Hybrid Monte Carlo
(HMC) technique, are tested by updating a structural beam model. The efficiency
and limitations of each technique are investigated when the FEM updating problem
is implemented using the Bayesian approach. Both the MH and HMC techniques are
found to perform better than Slice Sampling when Young's modulus is chosen
as the updating parameter. The HMC method gives better results than the MH and
Slice Sampling techniques when the area moments of inertia and section areas
are updated.
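As an illustration of the simplest of the three samplers, here is a random-walk Metropolis-Hastings chain on a toy one-dimensional posterior. The Gaussian target is a stand-in for an actual FEM likelihood, and the step scale and parameter names are illustrative assumptions.

```python
import math, random

def metropolis_hastings(log_post, x0, steps=5000, scale=0.5, seed=42):
    """Random-walk Metropolis-Hastings over one updating parameter.
    log_post is the log posterior density (up to a constant)."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        prop = x + random.gauss(0.0, scale)          # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# toy posterior: Gaussian centered at 2.0 (e.g., a normalized modulus)
log_post = lambda x: -0.5 * (x - 2.0) ** 2
samples = metropolis_hastings(log_post, x0=0.0)
print(sum(samples[1000:]) / len(samples[1000:]))     # close to 2.0 after burn-in
```

Slice Sampling and HMC replace the blind random-walk proposal with moves informed by the density's level sets or its gradient, which is what the abstract's comparison is about.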
|
1110.3385
|
Fuzzy Inference Systems Optimization
|
cs.AI
|
This paper compares various optimization methods for fuzzy inference system
optimization. The optimization methods compared are genetic algorithm, particle
swarm optimization and simulated annealing. When these techniques were
implemented it was observed that the performance of each technique within the
fuzzy inference system classification was context dependent.
|
1110.3450
|
Regime Change: Bit-Depth versus Measurement-Rate in Compressive Sensing
|
cs.IT math.IT
|
The recently introduced compressive sensing (CS) framework enables digital
signal acquisition systems to take advantage of signal structures beyond
bandlimitedness. Indeed, the number of CS measurements required for stable
reconstruction is closer to the order of the signal complexity than the Nyquist
rate. To date, the CS theory has focused on real-valued measurements, but in
practice, measurements are mapped to bits from a finite alphabet. Moreover, in
many potential applications the total number of measurement bits is
constrained, which suggests a tradeoff between the number of measurements and
the number of bits per measurement. We study this situation in this paper and
show that there exist two distinct regimes of operation that correspond to
high/low signal-to-noise ratio (SNR). In the measurement compression (MC)
regime, a high SNR favors acquiring fewer measurements with more bits per
measurement; in the quantization compression (QC) regime, a low SNR favors
acquiring more measurements with fewer bits per measurement. A surprise from
our analysis and experiments is that in many practical applications it is
better to operate in the QC regime, even acquiring as few as 1 bit per
measurement.
|
1110.3459
|
Two-Way Training Design for Discriminatory Channel Estimation in
Wireless MIMO Systems
|
cs.IT math.IT math.OC
|
This work examines the use of two-way training in multiple-input
multiple-output (MIMO) wireless systems to discriminate the channel estimation
performances between a legitimate receiver (LR) and an unauthorized receiver
(UR). This work extends the previously proposed discriminatory channel
estimation (DCE) scheme that allows only the transmitter to send training
signals. The goal of DCE is to minimize the channel estimation error at LR
while requiring the channel estimation error at UR to remain beyond a certain
level. If the training signal is sent only by the transmitter, the performance
discrimination between LR and UR will be limited, since the training signals
help both receivers estimate their downlink channels. In this work,
we consider instead the two-way training methodology that allows both the
transmitter and LR to send training signals. In this case, the training signal
sent by LR helps the transmitter obtain knowledge of the transmitter-to-LR
channel, but does not help UR estimate its downlink channel (i.e., the
transmitter-to-UR channel). With transmitter knowledge of the estimated
transmitter-to-LR channel, artificial noise (AN) can then be embedded in the
null space of the transmitter-to-LR channel to disrupt UR's channel estimation
without severely degrading the channel estimation at LR. Based on these ideas,
two-way DCE training schemes are developed for both reciprocal and
non-reciprocal channels. The optimal power allocation between training and AN
signals is devised under both average and individual power constraints.
Numerical results are provided to demonstrate the efficacy of the proposed
two-way DCE training schemes.
|
1110.3460
|
Performance analysis and optimal selection of large mean-variance
portfolios under estimation risk
|
q-fin.PM cs.IT math.IT
|
We study the consistency of sample mean-variance portfolios of arbitrarily
high dimension that are based on Bayesian or shrinkage estimation of the input
parameters as well as weighted sampling. In an asymptotic setting where the
number of assets remains comparable in magnitude to the sample size, we provide
a characterization of the estimation risk by providing deterministic
equivalents of the portfolio out-of-sample performance in terms of the
underlying investment scenario. The previous estimates represent a means of
quantifying the amount of risk underestimation and return overestimation of
improved portfolio constructions beyond standard ones. As is well known, if
not corrected, these deviations lead to inaccurate and overly
optimistic Sharpe-based investment decisions. Our results are based on recent
contributions in the field of random matrix theory. Along with the asymptotic
analysis, the analytical framework allows us to find bias corrections improving
on the achieved out-of-sample performance of typical portfolio constructions.
Some numerical simulations validate our theoretical findings.
|
1110.3531
|
Switching Strategies for Linear Feedback Stabilization with Sparsified
State Measurements
|
math.OC cs.SY
|
In this paper, we address the problem of stabilization in continuous time
linear dynamical systems using state feedback when compressive sampling
techniques are used for state measurement and reconstruction. In [5], we
introduced the concept of using the l1 reconstruction technique, commonly used in
sparse data reconstruction, for state measurement and estimation in a discrete
time linear system. In this work, we extend the previous scenario to analyse
continuous time linear systems. We investigate the effect of switching within a
set of sparsifiers, introduced in [5], on the stability of a linear plant in
continuous time settings. Initially, we analyze the problem of stabilization in
low dimensional systems, following which we generalize the results to address
the problem of stabilization in systems of arbitrary dimensions.
|
1110.3546
|
On the Computational Complexity of Measuring Global Stability of Banking
Networks
|
q-fin.RM cs.CC cs.CE cs.DM
|
Threats on the stability of a financial system may severely affect the
functioning of the entire economy, and thus considerable emphasis is placed on
analyzing the cause and effect of such threats. The financial crises of the
current and past decades have shown that one important cause of instability in
global markets is the so-called financial contagion, namely the spreading of
instabilities or failures of individual components of the network to other,
perhaps healthier, components. This leads to a natural question of whether the
regulatory authorities could have predicted and perhaps mitigated the current
economic crisis by effective computations of some stability measure of the
banking networks. Motivated by such observations, we consider the problem of
defining and evaluating stabilities of both homogeneous and heterogeneous
banking networks against propagation of synchronous idiosyncratic shocks given
to a subset of banks. We formalize the homogeneous banking network model of
Nier et al. and its corresponding heterogeneous version, formalize the
synchronous shock propagation procedures, define two appropriate stability
measures and investigate the computational complexities of evaluating these
measures for various network topologies and parameters of interest. Our results
and proofs also shed some light on the properties of topologies and parameters
of the network that may lead to higher or lower stabilities.
|
1110.3559
|
Separation of source-network coding and channel coding in wireline
networks
|
cs.IT math.IT
|
In this paper we prove the separation of source-network coding and channel
coding in wireline networks. For the purposes of this work, a wireline network
is any network of independent, memoryless, point-to-point, finite-alphabet
channels used to transmit dependent sources either losslessly or subject to a
distortion constraint. In deriving this result, we also prove that in a general
memoryless network with dependent sources, lossless and zero-distortion
reconstruction are equivalent provided that the conditional entropy of each
source given the other sources is non-zero. Furthermore, we extend the
separation result to the case of continuous-alphabet, point-to-point channels
such as additive white Gaussian noise (AWGN) channels.
|
1110.3561
|
Minimum Complexity Pursuit
|
cs.IT math.IT
|
The fast growing field of compressed sensing is founded on the fact that if a
signal is 'simple' and has some 'structure', then it can be reconstructed
accurately with far fewer samples than its ambient dimension. Many different
plausible structures have been explored in this field, ranging from sparsity to
low-rankness and to finite rate of innovation. However, there are important
abstract questions that are yet to be answered. For instance, what are the
general abstract meanings of 'structure' and 'simplicity'? Do there exist
universal algorithms for recovering such simple structured objects from fewer
samples than their ambient dimension? In this paper, we aim to address these
two questions. Using algorithmic information theory tools such as Kolmogorov
complexity, we provide a unified method of describing 'simplicity' and
'structure'. We then explore the performance of an algorithm motivated by
Occam's Razor (called MCP, for minimum complexity pursuit) and show that it
requires $O(k\log n)$ samples to recover a signal, where $k$ and $n$
represent its complexity and ambient dimension, respectively. Finally, we
discuss more general classes of signals and provide guarantees on the
performance of MCP.
|
1110.3563
|
Network Clustering Approximation Algorithm Using One Pass Black Box
Sampling
|
cs.SI physics.soc-ph
|
Finding a good clustering of vertices in a network, where vertices in the
same cluster are more tightly connected than those in different clusters, is a
useful, important, and well-studied task. Many clustering algorithms scale
well; however, they are not designed to operate on internet-scale networks
with billions of nodes or more. We study one of the fastest and most memory
efficient algorithms possible - clustering based on the connected components in
a random edge-induced subgraph. When defining the cost of a clustering to be
its distance from such a random clustering, we show that this surprisingly
simple algorithm gives a solution that is within an expected factor of two or
three of optimal with either of two natural distance functions. In fact, this
approximation guarantee works for any problem where there is a probability
distribution on clusterings. We then examine the behavior of this algorithm in
the context of social network trust inference.
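The clustering primitive described above is simple enough to sketch in full: keep each edge independently with some probability, then return the connected components of the surviving subgraph. A minimal union-find sketch, where the retention probability and the toy graph are illustrative assumptions:

```python
import random

def random_edge_clustering(n, edges, p=0.5, seed=7):
    """One-pass clustering: keep each edge with probability p, then
    return the connected components of the sampled subgraph (union-find)."""
    random.seed(seed)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    for u, v in edges:
        if random.random() < p:             # edge survives the sampling
            parent[find(u)] = find(v)
    clusters = {}
    for v in range(n):
        clusters.setdefault(find(v), []).append(v)
    return list(clusters.values())

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(random_edge_clustering(6, edges, p=0.5))
```

Each edge is examined exactly once and only the union-find array is kept in memory, which is why this is among the fastest and most memory-efficient clustering procedures possible.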
|
1110.3564
|
Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems
|
cs.LG cs.DS cs.HC stat.ML
|
Crowdsourcing systems, in which numerous tasks are electronically distributed
to numerous "information piece-workers", have emerged as an effective paradigm
for human-powered solving of large scale problems in domains such as image
classification, data entry, optical character recognition, recommendation, and
proofreading. Because these low-paid workers can be unreliable, nearly all such
systems must devise schemes to increase confidence in their answers, typically
by assigning each task multiple times and combining the answers in an
appropriate manner, e.g. majority voting.
In this paper, we consider a general model of such crowdsourcing tasks and
pose the problem of minimizing the total price (i.e., number of task
assignments) that must be paid to achieve a target overall reliability. We give
a new algorithm for deciding which tasks to assign to which workers and for
inferring correct answers from the workers' answers. We show that our
algorithm, inspired by belief propagation and low-rank matrix approximation,
significantly outperforms majority voting and, in fact, is optimal through
comparison to an oracle that knows the reliability of every worker. Further, we
compare our approach with a more general class of algorithms which can
dynamically assign tasks. By adaptively deciding which questions to ask to the
next arriving worker, one might hope to reduce uncertainty more efficiently. We
show that, perhaps surprisingly, the minimum price necessary to achieve a
target reliability scales in the same manner under both adaptive and
non-adaptive scenarios. Hence, our non-adaptive approach is order-optimal under
both scenarios. This strongly relies on the fact that workers are fleeting and
cannot be exploited. Therefore, architecturally, our results suggest that
building a reliable worker-reputation system is essential to fully harnessing
the potential of adaptive designs.
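The majority-voting baseline that the proposed algorithm is compared against can be sketched in a few lines. Binary ±1 labels are assumed, and the task identifiers are illustrative.

```python
from collections import Counter

def majority_vote(answers):
    """Baseline aggregation for crowdsourced tasks: each task's answer is
    the most common label among its redundant assignments."""
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in answers.items()}

# three workers answer two tasks with +/-1 labels
answers = {"t1": [1, 1, -1], "t2": [-1, -1, 1]}
print(majority_vote(answers))   # → {'t1': 1, 't2': -1}
```

The abstract's point is that weighting answers by inferred worker reliability (via belief propagation and low-rank structure) significantly beats this unweighted count.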
|
1110.3566
|
Asymptotics of the number of threshold functions on a two-dimensional
rectangular grid
|
math.CO cs.IT math.IT math.LO math.NT
|
Let $m,n\ge 2$, $m\le n$. It is well-known that the number of
(two-dimensional) threshold functions on an $m\times n$ rectangular grid is
$$ t(m,n)=\frac{6}{\pi^2}(mn)^2+O(m^2n\log{n})+O(mn^2\log{\log{n}})=\frac{6}{\pi^2}(mn)^2+O(mn^2\log{m}). $$
We improve the error term by showing that
$$ t(m,n)=\frac{6}{\pi^2}(mn)^2+O(mn^2). $$
|
1110.3569
|
Dimension Reduction of Health Data Clustering
|
cs.DB
|
Current data tend to be more complex than conventional data and need
dimension reduction. Dimension reduction is important in cluster analysis: it
creates a dataset that is smaller in volume yet yields the same analytical
results as the original representation. A clustering process needs data
reduction to obtain an efficient processing time and to mitigate the curse of
dimensionality. This paper proposes a model for extracting multidimensional
data clusterings from a health database. We implemented four dimension
reduction techniques: Singular Value Decomposition (SVD), Principal Component
Analysis (PCA), Self-Organizing Map (SOM) and FastICA. The results show that
dimension reduction significantly reduces dimensionality, shortens processing
time, and increases clustering performance on several health datasets.
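One of the four reduction techniques named, PCA, can be sketched via an SVD of the centered data. This is a generic sketch on synthetic data, not the paper's pipeline.

```python
import numpy as np

def pca_reduce(X, k):
    """Project n-by-d data onto its top-k principal components,
    computed from the SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)                 # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # n-by-k reduced representation

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 10))          # synthetic 10-dimensional data
Z = pca_reduce(X, k=2)
print(Z.shape)                              # → (100, 2)
```

A clustering algorithm (e.g., k-means) then runs on the k-dimensional scores `Z` instead of the original attributes, which is where the processing-time savings come from.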
|
1110.3586
|
Period-halving Bifurcation of a Neuronal Recurrence Equation
|
cs.NE math.DS nlin.CD
|
We study the sequences generated by neuronal recurrence equations of the form
$x(n) = {\bf 1}[\sum_{j=1}^{h} a_{j} x(n-j)- \theta]$. From a neuronal
recurrence equation of memory size $h$ which describes a cycle of length
$\rho(m) \times lcm(p_0, p_1,..., p_{-1+\rho(m)})$, we construct a set of
$\rho(m)$ neuronal recurrence equations whose dynamics describe respectively
the transient of length $O(\rho(m) \times lcm(p_0, ..., p_{d}))$ and the cycle
of length $O(\rho(m) \times lcm(p_{d+1}, ..., p_{-1+\rho(m)}))$ if $0 \leq d
\leq -2+\rho(m)$ and 1 if $d=\rho(m)-1$.
This result shows the exponential time of the convergence of neuronal
recurrence equation to fixed points and the existence of the period-halving
bifurcation.
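The recurrence $x(n) = {\bf 1}[\sum_{j=1}^{h} a_{j} x(n-j)- \theta]$ can be iterated directly and its eventual cycle length read off from repeated memory states. The sketch below assumes the convention that ${\bf 1}[u]=1$ when $u\ge 0$ (and 0 otherwise); the coefficients and initial state are illustrative, not from the paper.

```python
def neuronal_sequence(a, theta, init, steps):
    """Iterate x(n) = 1[sum_j a_j * x(n-j) - theta], where 1[u] is the
    Heaviside step (1 when u >= 0, else 0); init gives x(0)..x(h-1)."""
    h = len(a)
    x = list(init)
    for n in range(h, steps):
        s = sum(a[j - 1] * x[n - j] for j in range(1, h + 1))
        x.append(1 if s - theta >= 0 else 0)
    return x

def cycle_length(seq, h):
    """Detect the eventual period by hashing sliding memory windows:
    the dynamics are determined by the last h values, so a repeated
    window means the cycle has closed."""
    seen = {}
    for n in range(len(seq) - h + 1):
        key = tuple(seq[n:n + h])
        if key in seen:
            return n - seen[key]
        seen[key] = n
    return None

seq = neuronal_sequence(a=[1, 0, -1], theta=0, init=[1, 0, 0], steps=40)
print(cycle_length(seq, h=3))   # → 1 (this toy choice reaches a fixed point)
```

A cycle of length 1 is exactly a fixed point; the paper's construction concerns memory sizes $h$ for which the transient before such convergence is exponentially long.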
|
1110.3592
|
Information, learning and falsification
|
cs.IT cs.LG math.IT stat.ML
|
There are (at least) three approaches to quantifying information. The first,
algorithmic information or Kolmogorov complexity, takes events as strings and,
given a universal Turing machine, quantifies the information content of a
string as the length of the shortest program producing it. The second, Shannon
information, takes events as belonging to ensembles and quantifies the
information resulting from observing the given event in terms of the number of
alternate events that have been ruled out. The third, statistical learning
theory, has introduced measures of capacity that control (in part) the expected
risk of classifiers. These capacities quantify the expectations regarding
future data that learning algorithms embed into classifiers.
This note describes a new method of quantifying information, effective
information, that links algorithmic information to Shannon information, and
also links both to capacities arising in statistical learning theory. After
introducing the measure, we show that it provides a non-universal analog of
Kolmogorov complexity. We then apply it to derive basic capacities in
statistical learning theory: empirical VC-entropy and empirical Rademacher
complexity. A nice byproduct of our approach is an interpretation of the
explanatory power of a learning algorithm in terms of the number of hypotheses
it falsifies, counted in two different ways for the two capacities. We also
discuss how effective information relates to information gain, Shannon and
mutual information.
|
1110.3619
|
Playing Mastermind With Constant-Size Memory
|
cs.DS cs.NE
|
We analyze the classic board game of Mastermind with $n$ holes and a constant
number of colors. A result of Chv\'atal (Combinatorica 3 (1983), 325-329)
states that the codebreaker can find the secret code with $\Theta(n / \log n)$
questions. We show that this bound remains valid if the codebreaker may only
store a constant number of guesses and answers. In addition to an intrinsic
interest in this question, our result also disproves a conjecture of Droste,
Jansen, and Wegener (Theory of Computing Systems 39 (2006), 525-544) on the
memory-restricted black-box complexity of the OneMax function class.
|
1110.3649
|
Algorithms to automatically quantify the geometric similarity of
anatomical surfaces
|
math.NA cs.CV cs.GR
|
We describe new approaches for distances between pairs of 2-dimensional
surfaces (embedded in 3-dimensional space) that use local structures and global
information contained in inter-structure geometric relationships. We present
algorithms to automatically determine these distances as well as geometric
correspondences. This is motivated by the aspiration of students of natural
science to understand the continuity of form that unites the diversity of life.
At present, scientists using physical traits to study evolutionary
relationships among living and extinct animals analyze data extracted from
carefully defined anatomical correspondence points (landmarks). Identifying and
recording these landmarks is time consuming and can be done accurately only by
trained morphologists. This renders these studies inaccessible to
non-morphologists, and causes phenomics to lag behind genomics in elucidating
evolutionary patterns. Unlike other algorithms presented for morphological
correspondences our approach does not require any preliminary marking of
special features or landmarks by the user. It also differs from other seminal
work in computational geometry in that our algorithms are polynomial in nature
and thus faster, making pairwise comparisons feasible for significantly larger
numbers of digitized surfaces. We illustrate our approach using three datasets
representing teeth and different bones of primates and humans, and show that it
leads to highly accurate results.
|
1110.3672
|
Reasoning about Actions with Temporal Answer Sets
|
cs.AI cs.LO
|
In this paper we combine Answer Set Programming (ASP) with Dynamic Linear
Time Temporal Logic (DLTL) to define a temporal logic programming language for
reasoning about complex actions and infinite computations. DLTL extends
propositional temporal logic of linear time with regular programs of
propositional dynamic logic, which are used for indexing temporal modalities.
The action language allows general DLTL formulas to be included in domain
descriptions to constrain the space of possible extensions. We introduce a
notion of Temporal Answer Set for domain descriptions, based on the usual
notion of Answer Set. Also, we provide a translation of domain descriptions
into standard ASP and we use Bounded Model Checking techniques for the
verification of DLTL constraints.
|
1110.3695
|
Geometric methods for estimation of structured covariances
|
math.OC cs.SY math.ST stat.TH
|
We consider problems of estimation of structured covariance matrices, and in
particular of matrices with a Toeplitz structure. We follow a geometric
viewpoint that is based on some suitable notion of distance. To this end, we
overview and compare several alternative metrics and divergence measures. We
advocate a specific one, which represents the Wasserstein distance between the
corresponding Gaussian distributions, and show that it coincides with the
so-called Bures/Hellinger distance between covariance matrices as well. Most
importantly, besides the physically appealing interpretation, computation of
the metric requires solving a linear matrix inequality (LMI). As a consequence,
computations scale nicely for problems involving large covariance matrices, and
linear prior constraints on the covariance structure are easy to handle. We
compare this transportation/Bures/Hellinger metric with the maximum likelihood
and the Burg methods as to their performance with regard to estimation of power
spectra with spectral lines on a representative case study from the literature.
|
1110.3711
|
Optimization strategies for parallel CPU and GPU implementations of a
meshfree particle method
|
cs.PF cs.CE
|
Much of the current focus in high performance computing (HPC) for
computational fluid dynamics (CFD) deals with grid based methods. However,
parallel implementations for new meshfree particle methods such as Smoothed
Particle Hydrodynamics (SPH) are less studied. In this work, we present
optimizations of an SPH method for both the central processing unit (CPU) and
the graphics processing unit (GPU). These optimization strategies can be further
applied to many other meshfree methods. The obtained performance for each
architecture and a comparison between the most efficient implementations for
CPU and GPU are shown.
|
1110.3717
|
A critical evaluation of network and pathway based classifiers for
outcome prediction in breast cancer
|
cs.LG q-bio.QM
|
Recently, several classifiers that combine primary tumor data, like gene
expression data, and secondary data sources, such as protein-protein
interaction networks, have been proposed for predicting outcome in breast
cancer. In these approaches, new composite features are typically constructed
by aggregating the expression levels of several genes. The secondary data
sources are employed to guide this aggregation. Although many studies claim
that these approaches improve classification performance over single gene
classifiers, the gain in performance is difficult to assess. This stems mainly
from the fact that different breast cancer data sets and validation procedures
are employed to assess the performance. Here we address these issues by
employing a large cohort of six breast cancer data sets as benchmark set and by
performing an unbiased evaluation of the classification accuracies of the
different approaches. Contrary to previous claims, we find that composite
feature classifiers do not outperform simple single gene classifiers. We
investigate the effect of (1) the number of selected features; (2) the specific
gene set from which features are selected; (3) the size of the training set and
(4) the heterogeneity of the data set on the performance of composite feature
and single gene classifiers. Strikingly, we find that randomization of
secondary data sources, which destroys all biological information in these
sources, does not result in a deterioration in performance of composite feature
classifiers. Finally, we show that when a proper correction for gene set size
is performed, the stability of single gene sets is similar to the stability of
composite feature sets. Based on these results there is currently no reason to
prefer prognostic classifiers based on composite features over single gene
classifiers for predicting outcome in breast cancer.
|
1110.3741
|
Multi-criteria Anomaly Detection using Pareto Depth Analysis
|
cs.LG cs.CV cs.DB stat.ML
|
We consider the problem of identifying patterns in a data set that exhibit
anomalous behavior, often referred to as anomaly detection. In most anomaly
detection algorithms, the dissimilarity between data samples is calculated by a
single criterion, such as Euclidean distance. However, in many cases there may
not exist a single dissimilarity measure that captures all possible anomalous
patterns. In such a case, multiple criteria can be defined, and one can test
for anomalies by scalarizing the multiple criteria using a linear combination
of them. If the relative importance of the criteria is not known in advance,
the algorithm may need to be executed multiple times with different choices of
weights in the linear combination. In this paper, we introduce a novel
non-parametric multi-criteria anomaly detection method using Pareto depth
analysis (PDA). PDA uses the concept of Pareto optimality to detect anomalies
under multiple criteria without having to run an algorithm multiple times with
different choices of weights. The proposed PDA approach scales linearly in the
number of criteria and is provably better than linear combinations of the
criteria.
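As a rough illustration of the Pareto-optimality idea behind PDA (a minimal sketch under assumed data, not the authors' implementation), the following computes the first Pareto front of dyads scored under two dissimilarity criteria; dyads dominated in every criterion fall on later fronts, which is the signal PDA exploits for anomaly scoring:

```python
# Illustrative sketch (hypothetical data, not the paper's code): computing the
# first Pareto front of dyads under two dissimilarity criteria, minimizing in
# every criterion. PDA peels off successive fronts; points that only appear on
# late (dominated) fronts are flagged as anomalous.

def pareto_front(points):
    """Return indices of non-dominated points (minimization in each criterion)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Two criteria per dyad, e.g. a Euclidean and an angular dissimilarity.
dyads = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1), (0.6, 0.7)]
print(pareto_front(dyads))  # (0.6, 0.7) is dominated by (0.5, 0.5)
```

Because the fronts are computed once for all criteria jointly, no scalarization weights need to be chosen or swept.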
|
1110.3767
|
Anti-sparse coding for approximate nearest neighbor search
|
cs.CV cs.DB cs.IR cs.IT math.IT
|
This paper proposes a binarization scheme for vectors of high dimension based
on the recent concept of anti-sparse coding, and shows its excellent
performance for approximate nearest neighbor search. Unlike other binarization
schemes, this framework allows, up to a scaling factor, the explicit
reconstruction from the binary representation of the original vector. The paper
also shows that random projections, which are used in Locality Sensitive Hashing
algorithms, are significantly outperformed by regular frames for both synthetic
and real data if the number of bits exceeds the vector dimensionality, i.e.,
when high precision is required.
|
1110.3774
|
Time-Stampless Adaptive Nonuniform Sampling for Stochastic Signals
|
cs.IT math.IT
|
In this paper, we introduce a time-stampless adaptive nonuniform sampling
(TANS) framework, in which time increments between samples are determined by a
function of the $m$ most recent increments and sample values. Since only past
samples are used in computing time increments, it is not necessary to save
sampling times (time stamps) for use in the reconstruction process. We focus on
two TANS schemes for discrete-time stochastic signals: a greedy method, and a
method based on dynamic programming. We analyze the performance of these
schemes by computing (or bounding) their trade-offs between sampling rate and
expected reconstruction distortion for autoregressive and Markovian signals.
Simulation results support the analysis of the sampling schemes. We show that,
by opportunistically adapting to local signal characteristics, TANS may lead to
improved power efficiency in some applications.
|
1110.3832
|
Distributed flow optimization and cascading effects in weighted complex
networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We investigate the effect of a specific edge weighting scheme $\sim (k_i
k_j)^{\beta}$ on distributed flow efficiency and robustness to cascading
failures in scale-free networks. In particular, we analyze a simple, yet
fundamental distributed flow model: current flow in random resistor networks.
By tuning the control parameter $\beta$ and by considering two general cases
of relative node processing capabilities as well as the effect of bandwidth, we
show the dependence of transport efficiency upon the correlations between the
topology and weights. By studying the severity of cascades for different
values of the control parameter $\beta$, we find that network resilience to
cascading overloads and network throughput are both optimal at the same value
of $\beta$ over the range of node capacities and available bandwidth.
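A minimal sketch of the flow model described here (a toy setup with an assumed path graph, not the paper's experiments): edge conductances are set to $w_{ij} = (k_i k_j)^\beta$, a unit current is injected at a source node, and the node potentials are obtained by solving the Laplacian system $Lv = b$ with the sink grounded:

```python
# Illustrative sketch (assumed toy network): current flow in a resistor network
# whose edge conductances follow the degree-based weighting w_ij = (k_i*k_j)**beta.
# We ground node t, inject unit current at s, and solve L v = b for potentials.

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def potentials(edges, n, s, t, beta):
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        w = (deg[i] * deg[j]) ** beta          # degree-correlated conductance
        L[i][i] += w; L[j][j] += w
        L[i][j] -= w; L[j][i] -= w
    # Ground node t (delete its row/column); inject unit current at s.
    keep = [k for k in range(n) if k != t]
    A = [[L[i][j] for j in keep] for i in keep]
    b = [1.0 if i == s else 0.0 for i in keep]
    v = solve(A, b)
    v.insert(t, 0.0)
    return v

# Path graph 0-1-2: both edges get conductance (1*2)**beta.
print(potentials([(0, 1), (1, 2)], n=3, s=0, t=2, beta=1.0))
```

Sweeping `beta` in such a setup changes how conductance concentrates on hub-to-hub edges, which is the mechanism the abstract relates to throughput and cascade resilience.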
|
1110.3843
|
Robustness of Information Diffusion Algorithms to Locally Bounded
Adversaries
|
cs.SI cs.DC cs.MA cs.SY math.OC physics.soc-ph
|
We consider the problem of diffusing information in networks that contain
malicious nodes. We assume that each normal node in the network has no
knowledge of the network topology other than an upper bound on the number of
malicious nodes in its neighborhood. We introduce a topological property known
as r-robustness of a graph, and show that this property provides improved
bounds on tolerating malicious behavior, in comparison to traditional concepts
such as connectivity and minimum degree. We use this topological property to
analyze the canonical problems of distributed consensus and broadcasting, and
provide sufficient conditions for these operations to succeed. Finally, we
provide a construction for r-robust graphs and show that the common
preferential-attachment model for scale-free networks produces a robust graph.
|
1110.3854
|
Consistency of community detection in networks under degree-corrected
stochastic block models
|
math.ST cs.SI physics.soc-ph stat.TH
|
Community detection is a fundamental problem in network analysis, with
applications in many diverse areas. The stochastic block model is a common tool
for model-based community detection, and asymptotic tools for checking
consistency of community detection under the block model have been recently
developed. However, the block model is limited by its assumption that all nodes
within a community are stochastically equivalent, and provides a poor fit to
networks with hubs or highly varying node degrees within communities, which are
common in practice. The degree-corrected stochastic block model was proposed to
address this shortcoming and allows variation in node degrees within a
community while preserving the overall block community structure. In this paper
we establish general theory for checking consistency of community detection
under the degree-corrected stochastic block model and compare several community
detection criteria under both the standard and the degree-corrected models. We
show which criteria are consistent under which models and constraints, as well
as compare their relative performance in practice. We find that methods based
on the degree-corrected block model, which includes the standard block model as
a special case, are consistent under a wider class of models and that
modularity-type methods require parameter constraints for consistency, whereas
likelihood-based methods do not. On the other hand, in practice, the degree
correction involves estimating many more parameters, and empirically we find it
is only worth doing if the node degrees within communities are indeed highly
variable. We illustrate the methods on simulated networks and on a network of
political blogs.
|
1110.3855
|
An Upper Bound on Broadcast Subspace Codes
|
cs.IT math.AG math.IT
|
The linear operator broadcast channel (LOBC) models the scenario of multi-rate
packet broadcasting over a network, when random network coding is applied. This
paper presents the framework of algebraic coding for LOBCs and provides a
Hamming-like upper bound on (multishot) subspace codes for LOBCs.
|
1110.3860
|
Contending Parties: A Logistic Choice Analysis of Inter- and Intra-group
Blog Citation Dynamics in the 2004 US Presidential Election
|
cs.SI physics.soc-ph stat.AP stat.OT
|
The 2004 US Presidential Election cycle marked the debut of Internet-based
media such as blogs and social networking websites as institutionally
recognized features of the American political landscape. Using a longitudinal
sample of all DNC/RNC-designated blog-citation networks we are able to test the
influence of various strategic, institutional, and balance-theoretic mechanisms
and exogenous factors such as seasonality and political events on the
propensity of blogs to cite one another over time. Capitalizing on the temporal
resolution of our data, we utilize an autoregressive network regression
framework to carry out inference for a logistic choice process. Using a
combination of deviance-based model selection criteria and simulation-based
model adequacy tests, we identify the combination of processes that best
characterizes the choice behavior of the contending blogs.
|
1110.3879
|
GTRACE-RS: Efficient Graph Sequence Mining using Reverse Search
|
cs.DB
|
The mining of frequent subgraphs from labeled graph data has been studied
extensively. Furthermore, much attention has recently been paid to frequent
pattern mining from graph sequences. A method, called GTRACE, has been proposed
to mine frequent patterns from graph sequences under the assumption that
changes in graphs are gradual. Although GTRACE mines the frequent patterns
efficiently, it still needs substantial computation time to mine the patterns
from graph sequences containing large graphs and long sequences. In this paper,
we propose a new version of GTRACE that enables efficient mining of frequent
patterns based on the principle of a reverse search. The underlying concept of
the reverse search is a general scheme for designing efficient algorithms for
hard enumeration problems. Our performance study shows that the proposed method
is efficient and scalable for mining both long and large graph sequence
patterns and is several orders of magnitude faster than the original GTRACE.
|
1110.3888
|
Handling controversial arguments by matrix
|
cs.AI
|
We introduce matrices and their blocks into Dung's theory of argumentation
frameworks. It is shown that each argumentation framework has a matrix
representation, and that the indirect attack relation and indirect defence
relation can be characterized by computing this matrix. This provides a
powerful mathematical way to determine the "controversial arguments" in an
argumentation framework. We also introduce several kinds of blocks based on
the matrix, and the various prudent semantics of argumentation frameworks can
all be determined by computing and comparing the matrices and the blocks we
have defined. In contrast with the traditional directed-graph method, the
matrix method has an important advantage: computability (it can easily be
implemented on a computer). There is therefore a promising prospect for
importing matrix theory into the study of argumentation frameworks and
related areas.
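A small sketch of the matrix idea (an assumed toy framework, not the paper's exact construction): with the attack matrix `M[i][j] = 1` iff argument `i` attacks `j`, a nonzero entry of `M^k` witnesses a length-`k` attack path, so odd-length paths correspond to (direct or indirect) attacks and even-length paths to indirect defences:

```python
# Illustrative sketch (hypothetical example): attack matrix of a Dung framework.
# Powers of M count attack paths; odd path lengths give attacks (length 1 is a
# direct attack), even path lengths give indirect defences.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def indirect(M, max_len):
    """Pairs (i, j) linked by an odd-length (attack) / even-length (defence) path."""
    attacks, defences = set(), set()
    P = M
    for length in range(1, max_len + 1):
        for i in range(len(M)):
            for j in range(len(M)):
                if P[i][j]:
                    (attacks if length % 2 else defences).add((i, j))
        P = matmul(P, M)
    return attacks, defences

# a -> b -> c: b attacks c directly; a defends c via the even-length path a->b->c.
M = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
att, dfn = indirect(M, max_len=2)
print(sorted(att), sorted(dfn))
```

An argument that both attacks and defends the same target along different paths is exactly the "controversial" case such a matrix computation exposes.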
|