| id | title | categories | abstract |
|---|---|---|---|
1104.3117
|
Estimating the State of AC Power Systems using Semidefinite Programming
|
cs.SY math.OC
|
This paper has been withdrawn by the authors
|
1104.3131
|
Global stabilization of feedforward systems under perturbations in
sampling schedule
|
math.OC cs.SY
|
For nonlinear systems that are known to be globally asymptotically
stabilizable, control over networks introduces a major challenge because of the
asynchrony in the transmission schedule. Maintaining global asymptotic
stabilization in sampled-data implementations with zero-order hold and with
perturbations in the sampling schedule is not achievable in general, but we show
in this paper that it is achievable for the class of feedforward systems. We
develop sampled-data feedback stabilizers which are not approximations of
continuous-time designs but are discontinuous feedback laws that are
specifically developed for maintaining global asymptotic stabilizability under
any sequence of sampling periods that is uniformly bounded by a certain
"maximum allowable sampling period".
|
1104.3152
|
Polyethism in a colony of artificial ants
|
cs.AI nlin.AO q-bio.PE
|
We explore self-organizing strategies for role assignment in a foraging task
carried out by a colony of artificial agents. Our strategies are inspired by
various mechanisms of division of labor (polyethism) observed in eusocial
insects like ants, termites, or bees. Specifically, we instantiate models of
caste polyethism and age or temporal polyethism to evaluate the benefits to
foraging in a dynamic environment. Our experiment is directly related to the
exploration/exploitation trade-off in machine learning.
|
1104.3160
|
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse
Vectors
|
cs.IT math.IT
|
The Compressive Sensing (CS) framework aims to ease the burden on
analog-to-digital converters (ADCs) by reducing the sampling rate required to
acquire and stably recover sparse signals. Practical ADCs not only sample but
also quantize each measurement to a finite number of bits; moreover, there is
an inverse relationship between the achievable sampling rate and the bit depth.
In this paper, we investigate an alternative CS approach that shifts the
emphasis from the sampling rate to the number of bits per measurement. In
particular, we explore the extreme case of 1-bit CS measurements, which capture
just their sign. Our results come in two flavors. First, we consider ideal
reconstruction from noiseless 1-bit measurements and provide a lower bound on
the best achievable reconstruction error. We also demonstrate that i.i.d.
random Gaussian matrices describe measurement mappings achieving, with
overwhelming probability, nearly optimal error decay. Next, we consider
reconstruction robustness to measurement errors and noise and introduce the
Binary $\epsilon$-Stable Embedding (B$\epsilon$SE) property, which
characterizes the robustness of the measurement process to sign changes. We
show that the same class of matrices that provides almost optimal noiseless
performance also enables such a robust mapping. On the practical side, we
introduce the Binary
Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from
1-bit measurements that offers state-of-the-art performance.
|
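The BIHT algorithm mentioned in the abstract above can be sketched in its commonly cited form: a gradient-style step on the sign mismatches followed by hard thresholding, with the final estimate normalized because 1-bit measurements discard amplitude. The step size, iteration count, and problem sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest."""
    out = x.copy()
    out[np.argsort(np.abs(x))[:-k]] = 0.0
    return out

def biht(A, y, k, tau=None, n_iter=100):
    """Binary Iterative Hard Thresholding (sketch).
    A : (m, n) Gaussian measurement matrix
    y : (m,) observed signs, y = sign(A @ x)
    k : assumed sparsity of the signal
    """
    m, n = A.shape
    if tau is None:
        tau = 1.0 / m                 # step size: an assumption, not from the abstract
    x = np.zeros(n)
    for _ in range(n_iter):
        # gradient-like correction on the measurements whose sign disagrees with y
        x = hard_threshold(x + tau * A.T @ (y - np.sign(A @ x)), k)
    # 1-bit measurements lose amplitude, so report a unit-norm estimate
    return x / (np.linalg.norm(x) + 1e-12)

# usage sketch on a synthetic unit-norm k-sparse signal
rng = np.random.default_rng(0)
n, m, k = 128, 512, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)
x_hat = biht(A, y, k)
```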
1104.3161
|
Robust Secure Transmission in MISO Channels Based on Worst-Case
Optimization
|
cs.IT math.IT
|
This paper studies robust transmission schemes for multiple-input
single-output (MISO) wiretap channels. Both the cases of direct transmission
and cooperative jamming with a helper are investigated with imperfect channel
state information (CSI) for the eavesdropper links. Robust transmit covariance
matrices are obtained based on worst-case secrecy rate maximization, under both
individual and global power constraints. For the case of an individual power
constraint, we show that the non-convex maximin optimization problem can be
transformed into a quasiconvex problem that can be efficiently solved with
existing methods. For a global power constraint, the joint optimization of the
transmit covariance matrices and power allocation between the source and the
helper is studied via geometric programming. We also study the robust wiretap
transmission problem for the case with a quality-of-service constraint at the
legitimate receiver. Numerical results show the advantage of the proposed
robust design. In particular, for the global power constraint scenario,
although cooperative jamming is not necessary for optimal transmission with
perfect eavesdropper's CSI, we show that robust jamming support can increase
the worst-case secrecy rate and lower the signal to interference-plus-noise
ratio at Eve in the presence of channel mismatches between the transmitters and
the eavesdropper.
|
1104.3162
|
Ubiquitousness of link-density and link-pattern communities in
real-world networks
|
physics.soc-ph cs.SI physics.data-an
|
Community structure appears to be an intrinsic property of many complex
real-world networks. However, recent work shows that real-world networks reveal
even more sophisticated modules than classical cohesive (link-density)
communities. In particular, networks can also be naturally partitioned
according to similar patterns of connectedness among the nodes, revealing
link-pattern communities. We here propose a propagation based algorithm that
can extract both link-density and link-pattern communities, without any prior
knowledge of the true structure. The algorithm was first validated on different
classes of synthetic benchmark networks with community structure, and also on
random networks. We have further applied the algorithm to different social,
information, technological and biological networks, where it indeed reveals
meaningful (composites of) link-density and link-pattern communities. The
results thus seem to imply that, similarly as link-density counterparts,
link-pattern communities appear ubiquitous in nature and design.
|
1104.3165
|
Dynamic Packet Scheduler Optimization in Wireless Relay Networks
|
cs.NI cs.SY math.OC
|
In this work, we investigate the optimal dynamic packet scheduling policy in
a wireless relay network (WRN). We model this network by two sets of parallel
queues, that represent the subscriber stations (SS) and the relay stations
(RS), with random link connectivity. An optimal policy minimizes, in stochastic
ordering sense, the process of cost function of the SS and RS queue sizes. We
prove that, in a system with symmetrical connectivity and arrival
distributions, a policy that tries to balance the lengths of all the system
queues, at every time slot, is optimal. We use stochastic dominance and
coupling arguments in our proof. We also provide a low-overhead algorithm for
optimal policy implementation.
|
1104.3179
|
Heterogeneity and Allometric Growth of Human Collaborative Tagging
Behavior
|
cs.IR cs.SI physics.soc-ph
|
Allometric growth is found in many tagging systems online. That is, the
number of new tags ($T$) is a power-law function of the active population ($P$),
i.e., $T \propto P^{\gamma}$ with $\gamma \neq 1$. According to previous studies, it is the heterogeneity in
individual tagging behavior that gives rise to allometric growth. These studies
consider the power-law distribution model with an exponent beta, regarding
1/beta as an index for heterogeneity. However, they did not discuss whether
power-law is the only distribution that leads to allometric growth, or
equivalently, whether the positive correlation between heterogeneity and
allometric growth holds in systems of distributions other than power-law. In
this paper, the authors systematically examine the growth pattern of systems of
six different distributions, and find that both power-law distribution and
log-normal distribution lead to allometric growth. Furthermore, by introducing
Shannon entropy as an indicator for heterogeneity instead of 1/beta, the
authors confirm that the positive relationship between heterogeneity and
allometric growth exists in both cases of power-law and log-normal
distributions.
|
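The scaling relation $T \propto P^{\gamma}$ in the abstract above can be estimated in the standard way: $\gamma$ is the slope of a least-squares fit in log-log space. The sketch below uses synthetic data with a known exponent; all values are illustrative, not from the paper.

```python
import numpy as np

def fit_allometric_exponent(P, T):
    """Estimate gamma and c in T = c * P**gamma by a log-log least-squares fit."""
    # np.polyfit returns highest-degree coefficient first: (slope, intercept)
    gamma, log_c = np.polyfit(np.log(P), np.log(T), 1)
    return gamma, np.exp(log_c)

# synthetic tagging system with known gamma = 1.3 (illustrative values)
rng = np.random.default_rng(1)
P = np.logspace(1, 4, 50)                                   # active population sizes
T = 2.0 * P**1.3 * np.exp(0.05 * rng.standard_normal(50))   # noisy power law
gamma, c = fit_allometric_exponent(P, T)
```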
1104.3184
|
Hidden Variables in Bipartite Networks
|
physics.data-an cond-mat.dis-nn cond-mat.stat-mech cs.SI physics.soc-ph
|
We introduce and study random bipartite networks with hidden variables. Nodes
in these networks are characterized by hidden variables which control the
appearance of links between node pairs. We derive analytic expressions for the
degree distribution, degree correlations, the distribution of the number of
common neighbors, and the bipartite clustering coefficient in these networks.
We also establish the relationship between degrees of nodes in original
bipartite networks and in their unipartite projections. We further demonstrate
how hidden variable formalism can be applied to analyze topological properties
of networks in certain bipartite network models, and verify our analytical
results in numerical simulations.
|
1104.3207
|
Common information revisited
|
cs.IT cs.DM math.CO math.IT
|
One of the main notions of information theory is the notion of mutual
information in two messages (two random variables in Shannon information theory
or two binary strings in algorithmic information theory). The mutual
information in $x$ and $y$ measures how much the transmission of $x$ can be
simplified if both the sender and the recipient know $y$ in advance. G\'acs and
K\"orner gave an example where mutual information cannot be presented as common
information (a third message easily extractable from both $x$ and $y$). Then
this question was studied in the framework of algorithmic information theory by
An. Muchnik and A. Romashchenko who found many other examples of this type. K.
Makarychev and Yu. Makarychev found a new proof of G\'acs--K\"orner results by
means of conditionally independent random variables. The question about the
difference between mutual and common information can be studied quantitatively:
for a given $x$ and $y$ we look for three messages $a$, $b$, $c$ such that $a$
and $c$ are enough to reconstruct $x$, while $b$ and $c$ are enough to
reconstruct $y$. In this paper: We state and prove (using hypercontractivity of
product spaces) a quantitative version of G\'acs--K\"orner theorem; We study
the tradeoff between $|a|$, $|b|$, $|c|$ for a random pair $(x, y)$
such that the Hamming distance between $x$ and $y$ is $\epsilon n$ (our bounds are
almost tight); We construct "the worst possible" distribution on $(x, y)$ in
terms of the tradeoff between $|a|$, $|b|$, $|c|$.
|
1104.3209
|
Broadcast Analysis for Large Cooperative Wireless Networks
|
cs.IT cs.NI math.IT
|
The capability of nodes to broadcast their message to the entire wireless
network when nodes employ cooperation is considered. We employ an asymptotic
analysis using an extended random network setting and show that the broadcast
performance strongly depends on the path loss exponent of the medium. In
particular, as the size of the random network grows, the probability of
broadcast in a one-dimensional network goes to zero for path loss exponents
larger than one, and goes to a nonzero value for path loss exponents less than
one. In two-dimensional networks, the same behavior is observed for path loss
exponents above and below two, respectively.
|
1104.3212
|
Similarity Join Size Estimation using Locality Sensitive Hashing
|
cs.DB cs.DS
|
Similarity joins are important operations with a broad range of applications.
In this paper, we study the problem of vector similarity join size estimation
(VSJ). It is a generalization of the previously studied set similarity join
size estimation (SSJ) problem and can handle more interesting cases such as
TF-IDF vectors. One of the key challenges in similarity join size estimation is
that the join size can change dramatically depending on the input similarity
threshold.
We propose a sampling-based algorithm that uses the
Locality-Sensitive Hashing (LSH) scheme. The proposed algorithm, LSH-SS, uses an
LSH index to enable effective sampling even at high thresholds. We compare the
proposed technique with random sampling and the state-of-the-art technique for
SSJ (adapted to VSJ) and demonstrate that LSH-SS offers more accurate estimates,
at both high and low similarity thresholds, with smaller variance on real-world
data sets.
|
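The abstract above does not spell out the internals of LSH-SS, but the sign-based LSH building block underlying such estimators can be sketched: random-hyperplane signatures whose bit-collision rate estimates the angle, and hence the cosine similarity, between two vectors. This is an illustrative primitive, not the authors' algorithm; all names and sizes are assumptions.

```python
import numpy as np

def simhash_signatures(X, n_bits, seed=0):
    """Random-hyperplane LSH: one sign bit per (hyperplane, vector) pair."""
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((X.shape[1], n_bits))
    return X @ H > 0

def estimated_cosine(sig_a, sig_b):
    """Bit-collision rate -> estimated angle -> estimated cosine similarity."""
    agree = np.mean(sig_a == sig_b)          # P[collision] = 1 - theta/pi
    return np.cos(np.pi * (1.0 - agree))

rng = np.random.default_rng(2)
a = rng.standard_normal(64)
b = a + 0.3 * rng.standard_normal(64)        # a near-duplicate of a
sigs = simhash_signatures(np.vstack([a, b]), n_bits=2048)
est = estimated_cosine(sigs[0], sigs[1])
true = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```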
1104.3213
|
Query Expansion Based on Clustered Results
|
cs.IR
|
Query expansion is a functionality of search engines that suggests a set of
related queries for a user-issued keyword query. Typical corpus-driven keyword
query expansion approaches return popular words in the results as expanded
queries. Using these approaches, the expanded queries may correspond to a
subset of possible query semantics, and thus miss relevant results. To handle
ambiguous queries and exploratory queries, whose result relevance is difficult
to judge, we propose a new framework for keyword query expansion: we start with
clustering the results according to user specified granularity, and then
generate expanded queries, such that one expanded query is generated for each
cluster whose result set should ideally be the corresponding cluster. We
formalize this problem and show its APX-hardness. Then we propose two efficient
algorithms named iterative single-keyword refinement and partial elimination
based convergence, respectively, which effectively generate a set of expanded
queries from clustered results that provide a classification of the original
query results. We believe our study of generating an optimal query based on the
ground truth of the query results not only has applications in query expansion,
but has significance for studying keyword search quality in general.
|
1104.3214
|
CoPhy: A Scalable, Portable, and Interactive Index Advisor for Large
Workloads
|
cs.DB
|
Index tuning, i.e., selecting the indexes appropriate for a workload, is a
crucial problem in database system tuning. In this paper, we solve index tuning
for large problem instances that are common in practice, e.g., thousands of
queries in the workload, thousands of candidate indexes and several hard and
soft constraints. Our work is the first to reveal that the index tuning problem
has a well structured space of solutions, and this space can be explored
efficiently with well known techniques from linear optimization. Experimental
results demonstrate that our approach outperforms state-of-the-art commercial
and research techniques by a significant margin (up to an order of magnitude).
|
1104.3216
|
Tuffy: Scaling up Statistical Inference in Markov Logic Networks using
an RDBMS
|
cs.DB
|
Markov Logic Networks (MLNs) have emerged as a powerful framework that
combines statistical and logical reasoning; they have been applied to many data
intensive problems including information extraction, entity resolution, and
text mining. Current implementations of MLNs do not scale to large real-world
data sets, which is preventing their widespread adoption. We present Tuffy,
which achieves scalability via three novel contributions: (1) a bottom-up
approach to grounding that allows us to leverage the full power of the
relational optimizer, (2) a novel hybrid architecture that allows us to perform
AI-style local search efficiently using an RDBMS, and (3) a theoretical insight
that shows when one can (exponentially) improve the efficiency of stochastic
local search. We leverage (3) to build novel partitioning, loading, and
parallel algorithms. We show that our approach outperforms state-of-the-art
implementations in both quality and speed on several publicly available
datasets.
|
1104.3217
|
Automatic Optimization for MapReduce Programs
|
cs.DB cs.DC
|
The MapReduce distributed programming framework has become popular, despite
evidence that current implementations are inefficient, requiring far more
hardware than traditional relational databases to complete similar tasks.
MapReduce jobs are amenable to many traditional database query optimizations
(B+Trees for selections, column-store-style techniques for projections, etc.),
but existing systems do not apply them, substantially because free-form user
code obscures the true data operation being performed. For example, a selection
in SQL is easily detected, but a selection in a MapReduce program is embedded
in Java code along with lots of other program logic. We could ask the
programmer to provide explicit hints about the program's data semantics, but
one of MapReduce's attractions is precisely that it does not ask the user for
such information. This paper covers Manimal, which automatically analyzes
MapReduce programs and applies appropriate data-aware optimizations, thereby
requiring no additional help at all from the programmer. We show that Manimal
successfully detects optimization opportunities across a range of data
operations, and that it yields speedups of up to 1,121% on previously-written
MapReduce programs.
|
1104.3219
|
On Social-Temporal Group Query with Acquaintance Constraint
|
cs.SI
|
Three essential criteria are important for activity planning, including: (1)
finding a group of attendees familiar with the initiator, (2) ensuring each
attendee in the group to have tight social relations with most of the members
in the group, and (3) selecting an activity period available for all attendees.
Therefore, this paper proposes Social-Temporal Group Query to find the activity
time and attendees with the minimum total social distance to the initiator.
Moreover, this query incorporates an acquaintance constraint to avoid finding a
group with mutually unfamiliar attendees. Efficient processing of the
social-temporal group query is very challenging. We prove that the problem is
NP-hard and formulate it with Integer Programming. We then
propose two efficient algorithms, SGSelect and STGSelect, for finding the
optimal solutions; they include effective pruning techniques and employ the
idea of pivot time slots to substantially reduce the running time.
Experimental results indicate that the proposed algorithms are much more
efficient and scalable. In the comparison of solution quality, we show that
STGSelect outperforms the algorithm that represents manual coordination by the
initiator.
|
1104.3221
|
On the geometry of higher-order variational problems on Lie groups
|
math-ph cs.SY math.DG math.MP math.OC
|
In this paper, we describe a geometric setting for higher-order Lagrangian
problems on Lie groups. Using left-trivialization of the higher-order tangent
bundle of a Lie group and an adaptation of the classical Skinner-Rusk
formalism, we deduce an intrinsic framework for this type of dynamical systems.
Interesting applications are shown along the paper, among others a geometric
derivation of the higher-order Euler-Poincar\'e equations and the optimal
control of underactuated systems whose configuration space is a Lie group.
|
1104.3248
|
Signal Classification for Acoustic Neutrino Detection
|
astro-ph.IM cs.LG physics.data-an
|
This article focuses on signal classification for deep-sea acoustic neutrino
detection. In the deep sea, the background of transient signals is very
diverse. Approaches like matched filtering are not sufficient to distinguish
between neutrino-like signals and other transient signals with similar
signature, which are forming the acoustic background for neutrino detection in
the deep-sea environment. A classification system based on machine learning
algorithms is analysed with the goal of finding a robust and effective way to
perform this task. For a well-trained model, a testing error on the level of
one percent is achieved for strong classifiers like Random Forest and Boosting
Trees using the extracted features of the signal as input and utilising dense
clusters of sensors instead of single sensors.
|
1104.3250
|
Adding noise to the input of a model trained with a regularized
objective
|
cs.AI
|
Regularization is a well studied problem in the context of neural networks.
It is usually used to improve the generalization performance when the number of
input samples is relatively small or heavily contaminated with noise. The
regularization of a parametric model can be achieved in different manners, some
of which are early stopping (Morgan and Bourlard, 1990), weight decay, and
output smoothing; these are used to avoid overfitting during the training of
the considered model. From a Bayesian point of view, many regularization techniques
correspond to imposing certain prior distributions on model parameters (Krogh
and Hertz, 1991). Using Bishop's approximation (Bishop, 1995) of the objective
function when a restricted type of noise is added to the input of a parametric
function, we derive the higher order terms of the Taylor expansion and analyze
the coefficients of the regularization terms induced by the noisy input. In
particular we study the effect of penalizing the Hessian of the mapping
function with respect to the input in terms of generalization performance. We
also show how we can control independently this coefficient by explicitly
penalizing the Jacobian of the mapping function on corrupted inputs.
|
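The Bishop-style equivalence discussed in the abstract above can be checked numerically in its simplest instance: for a linear map $f(x) = w \cdot x$ with squared loss, the expected loss under Gaussian input noise equals the clean loss plus $\sigma^2 \lVert w \rVert^2$ (the squared norm of the Jacobian, which here is just $w$), and the identity is exact in expectation. All dimensions and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, sigma = 2000, 5, 0.1
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true
w = rng.standard_normal(d)          # an arbitrary candidate parameter vector

# Monte-Carlo estimate of the expected squared loss under input noise
reps = 1000
noisy_loss = np.mean([
    np.mean(((X + sigma * rng.standard_normal(X.shape)) @ w - y) ** 2)
    for _ in range(reps)
])

# Bishop-style prediction: clean loss + sigma^2 * squared Jacobian norm
clean_loss = np.mean((X @ w - y) ** 2)
predicted = clean_loss + sigma**2 * np.sum(w**2)
```

The gap between `noisy_loss` and `predicted` shrinks with the number of Monte-Carlo repetitions; for nonlinear maps the correspondence holds only to first order in $\sigma^2$, which is where the higher-order Taylor terms studied in the paper enter.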
1104.3270
|
Affine trajectory correction for nonholonomic mobile robots
|
cs.RO
|
Planning trajectories for nonholonomic systems is difficult and
computationally expensive. When facing unexpected events, it may therefore be
preferable to deform in some way the initially planned trajectory rather than
to re-plan entirely a new one. We suggest here a method based on affine
transformations to make such deformations. This method is exact and fast: the
deformations and the resulting trajectories can be computed algebraically, in
one step, and without any trajectory re-integration. To demonstrate the
possibilities offered by this new method, we use it to derive position and
orientation correction algorithms for the general class of planar wheeled
robots and for a three-dimensional underwater vehicle. These algorithms in turn
enable more complex applications, including obstacle avoidance, feedback
control, and gap filling for sampling-based kinodynamic planners.
|
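The affine-deformation idea in the abstract above can be illustrated in its simplest planar form: a single algebraic step (a rotation-plus-scaling about the start point) that re-aims a planned trajectory at a new goal without re-integration. This is only a toy sketch; the paper's method additionally restricts the transformations so that nonholonomic feasibility is preserved.

```python
import numpy as np

def affine_endpoint_correction(traj, new_goal):
    """Apply the rotation+scaling about the start point that maps the old
    endpoint onto new_goal, computed algebraically in one step."""
    p0, p1 = traj[0], traj[-1]
    u, v = p1 - p0, new_goal - p0
    # complex-number trick: a 2-D rotation+scaling is multiplication by v/u
    z = complex(*v) / complex(*u)
    M = np.array([[z.real, -z.imag],
                  [z.imag,  z.real]])
    return p0 + (traj - p0) @ M.T

# planned trajectory from (0,0) toward (1,0), re-aimed at the goal (0,1)
t = np.linspace(0.0, 1.0, 20)
traj = np.stack([t, 0.1 * np.sin(np.pi * t)], axis=1)
corrected = affine_endpoint_correction(traj, np.array([0.0, 1.0]))
```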
1104.3300
|
The Gaussian Multiple Access Diamond Channel
|
cs.IT math.IT
|
In this paper, we study the capacity of the diamond channel. We focus on the
special case where the channels between the source node and the two relay nodes
are two separate links with finite capacities and the link from the two relay
nodes to the destination node is a Gaussian multiple access channel. We call
this model the Gaussian multiple access diamond channel. We first propose an
upper bound on the capacity. This upper bound is a single-letterization of an
$n$-letter upper bound proposed by Traskov and Kramer, and is tighter than the
cut-set bound. As for the lower bound, we propose an achievability scheme based
on sending correlated codes through the multiple access channel with
superposition structure. We then specialize this achievable rate to the
Gaussian multiple access diamond channel. Noting the similarity between the
upper and lower bounds, we provide necessary and sufficient conditions that a
Gaussian multiple access diamond channel has to satisfy for the proposed
upper and lower bounds to meet. Thus, for a Gaussian multiple access diamond
channel that satisfies these conditions, we have found its capacity.
|
1104.3344
|
Quantum Structure in Cognition: Fundamentals and Applications
|
cs.AI cs.IR quant-ph
|
Experiments in cognitive science and decision theory show that the ways in
which people combine concepts and make decisions cannot be described by
classical logic and probability theory. This has serious implications for
applied disciplines such as information retrieval, artificial intelligence and
robotics. Inspired by a mathematical formalism that generalizes quantum
mechanics, the authors have constructed a contextual framework for both concept
representation and decision making, together with quantum models that are in
strong alignment with experimental data. The results can be interpreted by
assuming the existence in human thought of a double-layered structure, a
'classical logical thought' and a 'quantum conceptual thought', the latter
being responsible for the above paradoxes and nonclassical effects. The presence
of a quantum structure in cognition is relevant, for it shows that quantum
mechanics provides not only a useful modeling tool for experimental data but
also supplies a structural model for human and artificial thought processes.
This approach has strong connections with theories formalizing meaning, such as
semantic analysis, and has also a deep impact on computer science, information
retrieval and artificial intelligence. More specifically, the links with
information retrieval are discussed in this paper.
|
1104.3345
|
Quantum Interaction Approach in Cognition, Artificial Intelligence and
Robotics
|
cs.AI cs.RO quant-ph
|
The mathematical formalism of quantum mechanics has been successfully
employed in the last years to model situations in which the use of classical
structures gives rise to problematical situations, and where typically quantum
effects, such as 'contextuality' and 'entanglement', have been recognized. This
'Quantum Interaction Approach' is briefly reviewed in this paper focusing, in
particular, on the quantum models that have been elaborated to describe how
concepts combine in cognitive science, and on the ensuing identification of a
quantum structure in human thought. We point out that these results provide
interesting insights toward the development of a unified theory for meaning and
knowledge formalization and representation. Then, we analyze the technological
aspects and implications of our approach, and a particular attention is devoted
to the connections with symbolic artificial intelligence, quantum computation
and robotics.
|
1104.3419
|
Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the
Guruswami-Sudan Algorithm
|
cs.IT math.IT
|
Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes
is based on Bounded Minimum Distance (BMD) decoders with an erasure option.
Such decoders have error/erasure tradeoff factor L=2, which means that an error
is twice as expensive as an erasure in terms of the code's minimum distance.
The Guruswami-Sudan (GS) list decoder can be considered as state of the art in
algebraic decoding of RS codes. Besides an erasure option, it allows adjusting
L to values in the range 1<L<=2. Based on previous work, we provide formulae
which allow optimal (in terms of residual codeword error probability)
exploitation of the erasure option of decoders with arbitrary L, if the decoder can be
used z>=1 times. We show that BMD decoders with z_BMD decoding trials can
result in lower residual codeword error probability than GS decoders with z_GS
trials, if z_BMD is only slightly larger than z_GS. This is of practical
interest since BMD decoders generally have lower computational complexity than
GS decoders.
|
1104.3466
|
Characterization of Random Linear Network Coding with Application to
Broadcast Optimization in Intermittently Connected Networks
|
cs.IT cs.NI math.IT
|
We address the problem of optimizing the throughput of network coded traffic
in mobile networks operating in challenging environments where connectivity is
intermittent and locally available memory space is limited. Random linear
network coding (RLNC) is shown to be equivalent (across all possible initial
conditions) to a random message selection strategy where nodes are able to
exchange buffer occupancy information during contacts. This result creates the
premises for a tractable analysis of RLNC packet spread, which is in turn used
for enhancing its throughput under broadcast. By exploiting the similarity
between channel coding and RLNC in intermittently connected networks, we show
that, quite surprisingly, network coding, when not used properly, still
significantly underutilizes network resources. We propose an enhanced
forwarding protocol that increases considerably the throughput for practical
cases, with negligible additional delay.
|
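A minimal sketch of RLNC itself, assuming coding over GF(2): each coded packet is a random XOR combination of the source packets, the coefficient vector travels with the payload, and a receiver decodes by Gaussian elimination once it has collected a full-rank set of combinations. Packet counts and sizes are illustrative; practical RLNC usually works over larger fields such as GF(256).

```python
import numpy as np

def rlnc_encode(packets, n_coded, rng):
    """Draw n_coded random GF(2) combinations of the source packets."""
    k = len(packets)
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    return coeffs, (coeffs @ packets) % 2

def rlnc_decode(coeffs, coded):
    """Gaussian elimination over GF(2); returns None while rank-deficient."""
    A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
    n, k = coeffs.shape
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, n) if A[r, col]), None)
        if piv is None:
            return None                   # not enough innovative packets yet
        A[[row, piv]] = A[[piv, row]]     # move the pivot row into place
        for r in range(n):
            if r != row and A[r, col]:
                A[r] ^= A[row]            # XOR-eliminate this column elsewhere
        row += 1
    return A[:k, k:]                      # identity block on the left => payloads

rng = np.random.default_rng(4)
packets = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)   # 4 source packets
recovered = None
while recovered is None:                  # redraw until the combinations are full rank
    coeffs, coded = rlnc_encode(packets, n_coded=8, rng=rng)
    recovered = rlnc_decode(coeffs, coded)
```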
1104.3497
|
Clean relaying aided cognitive radio under the coexistence constraint
|
cs.IT math.IT
|
We consider the interference-mitigation based cognitive radio where the
primary and secondary users can coexist at the same time and frequency bands,
under the constraint that the rate of the primary user (PU) must remain the
same with a single-user decoder. To meet such a coexistence constraint, the
relaying from the secondary user (SU) can help the PU's transmission under the
interference from the SU. However, the relayed signal in the known dirty paper
coding (DPC) based scheme is interfered by the SU's signal, and is not "clean".
In this paper, under the half-duplex constraints, we propose two new
transmission schemes aided by the clean relaying from the SU's transmitter and
receiver without interference from the SU. We name them the clean
transmitter relaying (CT) and clean transmitter-receiver relaying (CTR) aided
cognitive radio, respectively. The rate and multiplexing gain performances of
CT and CTR in fading channels with various availabilities of the channel state
information at the transmitters (CSIT) are studied. Our CT generalizes the
celebrated DPC based scheme proposed previously. With full CSIT, the
multiplexing gain of the CTR is proved to be better (or no less) than that of
the previous DPC based schemes. This is because the silent period for decoding
the PU's messages for the DPC may not be necessary in the CTR. With only the
statistics of CSIT, we further prove that the CTR outperforms the rate
performance of the previous scheme in fast Rayleigh fading channels. The
numerical examples also show that in a large class of channels, the proposed CT
and CTR provide significant rate gains over the previous scheme with small
complexity penalties.
|
1104.3510
|
Least-squares based iterative multipath super-resolution technique
|
cs.IT cs.SY math.IT
|
In this paper, we study the problem of multipath channel estimation for
direct sequence spread spectrum signals. To resolve multipath components
arriving within a short interval, we propose a new algorithm called the
least-squares based iterative multipath super-resolution (LIMS). Compared to
conventional super-resolution techniques, such as the multiple signal
classification (MUSIC) and the estimation of signal parameters via rotation
invariance techniques (ESPRIT), our algorithm has several appealing features.
In particular, even in critical situations where the conventional
super-resolution techniques are not very powerful due to limited data or the
correlation between path coefficients, the LIMS algorithm can produce
successful results. In addition, due to its iterative nature, the LIMS
algorithm is suitable for recursive multipath tracking, whereas the
conventional super-resolution techniques may not be. Through numerical
simulations, we show that the LIMS algorithm can resolve the first arrival path
among closely arriving independently faded multipaths with a much lower mean
square error than can conventional early-late discriminator based techniques.
|
1104.3513
|
An Effect of Spatial Filtering in Visualization of Coronary Arteries
Imaging
|
cs.CV cs.CE
|
At present, coronary angiography is the well known standard for the diagnosis
of coronary artery disease. Conventional coronary angiography is an invasive
procedure with a small, yet inherent risk of myocardial infarction, stroke,
potential arrhythmias, and death. Other noninvasive diagnostic tools, such as
electrocardiography, echocardiography, and nuclear imaging are now widely
available but are limited by their inability to directly visualize and quantify
coronary artery stenoses and predict the stability of plaques. Coronary
magnetic resonance angiography (MRA) is a technique that allows visualization
of the coronary arteries by noninvasive means; however, it has not yet reached
a stage where it can be used in routine clinical practice. Although coronary
MRA is a potentially useful diagnostic tool, it has limitations. Further
research should focus on improving the diagnostic resolution and accuracy of
coronary MRA. This paper helps cardiologists take a clearer look at spatially
filtered imaging of the coronary arteries.
|
1104.3556
|
How to Achieve Privacy in Bidirectional Relay Networks
|
cs.IT math.IT
|
Recent research developments show that the concept of bidirectional relaying
significantly improves the performance in wireless networks. This applies to
three-node networks, where a half-duplex relay node establishes a bidirectional
communication between two other nodes using a decode-and-forward protocol. In
this work we consider the scenario when in the broadcast phase the relay
transmits additional confidential information to one node, which should be kept
as secret as possible from the other, non-intended node. This is the
bidirectional broadcast channel with confidential messages for which we derive
the capacity-equivocation region and the secrecy capacity region. The latter
characterizes the communication scenario with perfect secrecy, where the
confidential message is completely hidden from the non-legitimated node.
|
1104.3561
|
Soft-In Soft-Out DFE and Bi-directional DFE
|
cs.IT math.IT
|
We design a soft-in soft-out (SISO) decision feedback equalizer (DFE) that
performs better than its linear counterpart in a turbo equalizer (TE) setting.
Unlike previously developed SISO-DFEs, the present DFE scheme relies on
extrinsic information formulation that directly takes into account the error
propagation effect. With this new approach, both error rate simulation and the
extrinsic information transfer (EXIT) chart analysis indicate that the proposed
SISO-DFE is superior to the well-known SISO linear equalizer (LE). This result
is in contrast with the general understanding today that the error propagation
effect of the DFE degrades the overall TE performance below that of the TE
based on a LE. We also describe a new extrinsic information combining strategy
involving the outputs of two DFEs running in opposite directions, which
exploits the error correlation between the two sets of DFE outputs. When this
method is
combined with the new DFE extrinsic information formulation, the resulting
"bidirectional" turbo-DFE achieves excellent performance-complexity tradeoffs
compared to the TE based on the BCJR algorithm or on the LE. Unlike turbo LE or
turbo DFE, the turbo BiDFE's performance does not degrade significantly as the
feedforward and feedback filter taps are constrained to be time-invariant.
|
1104.3571
|
Visualization techniques for data mining of Latur district satellite
imagery
|
cs.CE cs.CV
|
This study presents a new visualization tool for classification of satellite
imagery. Visualization of feature space allows exploration of patterns in the
image data and insight into the classification process and related uncertainty.
Visual Data Mining provides added value to image classifications as the user
can be involved in the classification process providing increased confidence in
and understanding of the results. In this study, we present a prototype
visualization tool for visual data mining (VDM) of satellite imagery. The
visualization tool is showcased in a classification study of high-resolution
imagery of Latur district in the Maharashtra state of India.
|
1104.3590
|
An efficient and principled method for detecting communities in networks
|
cs.SI cond-mat.stat-mech physics.soc-ph
|
A fundamental problem in the analysis of network data is the detection of
network communities, groups of densely interconnected nodes, which may be
overlapping or disjoint. Here we describe a method for finding overlapping
communities based on a principled statistical approach using generative network
models. We show how the method can be implemented using a fast, closed-form
expectation-maximization algorithm that allows us to analyze networks of
millions of nodes in reasonable running times. We test the method both on
real-world networks and on synthetic benchmarks and find that it gives results
competitive with previous methods. We also show that the same approach can be
used to extract nonoverlapping community divisions via a relaxation method, and
demonstrate that the algorithm is competitively fast and accurate for the
nonoverlapping problem.
|
1104.3602
|
Non-Shannon Information Inequalities in Four Random Variables
|
cs.IT math.IT
|
Any unconstrained information inequality in three or fewer random variables
can be written as a linear combination of instances of Shannon's inequality
I(A;B|C) >= 0. Such inequalities are sometimes referred to as "Shannon"
inequalities. In 1998, Zhang and Yeung gave the first example of a
"non-Shannon" information inequality in four variables. Their technique was to
add two auxiliary variables with special properties and then apply Shannon
inequalities to the enlarged list. Here we will show that the Zhang-Yeung
inequality can actually be derived from just one auxiliary variable. Then we
use their same basic technique of adding auxiliary variables to give many other
non-Shannon inequalities in four variables. Our list includes the inequalities
found by Xu, Wang, and Sun, but it is by no means exhaustive. Furthermore, some
of the inequalities obtained may be superseded by stronger inequalities that
have yet to be found. Indeed, we show that the Zhang-Yeung inequality is one of
those that is superseded. We also present several infinite families of
inequalities. This list includes some, but not all of the infinite families
found by Matus. Then we will give a description of what additional information
these inequalities tell us about entropy space. This will include a conjecture
on the maximum possible failure of Ingleton's inequality. Finally, we will
present an application of non-Shannon inequalities to network coding. We will
demonstrate how these inequalities are useful in finding bounds on the
information that can flow through a particular network called the Vamos
network.
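As a minimal numerical illustration of the elemental Shannon inequality I(A;B|C) >= 0 that the non-Shannon results above build on (a sketch of our own, not code or notation from the paper), one can check it on random joint distributions:

```python
import numpy as np

def cond_mutual_info(p):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C) for a joint pmf p[a,b,c]."""
    def H(keep):
        m = p.sum(axis=tuple(ax for ax in range(3) if ax not in keep))
        m = m[m > 0]
        return -np.sum(m * np.log2(m))
    return H((0, 2)) + H((1, 2)) - H((0, 1, 2)) - H((2,))

rng = np.random.default_rng(0)
for _ in range(100):
    p = rng.random((2, 2, 2))
    p /= p.sum()                          # random joint distribution
    assert cond_mutual_info(p) >= -1e-12  # Shannon inequality I(A;B|C) >= 0
```

Non-Shannon inequalities are exactly those valid entropy constraints that do not follow from nonnegative combinations of such terms.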
|
1104.3661
|
Interference Channel with State Information
|
cs.IT math.IT
|
In this paper, we study the state-dependent two-user interference channel,
where the state information is non-causally known at both transmitters but
unknown to either of the receivers. We first propose two coding schemes for the
discrete memoryless case: simultaneous encoding for the sub-messages in the
first one and superposition encoding in the second one, both with rate
splitting and Gel'fand-Pinsker coding. The corresponding achievable rate
regions are established. Moreover, for the Gaussian case, we focus on the
simultaneous encoding scheme and propose an \emph{active interference
cancellation} mechanism, which is a generalized dirty-paper coding technique,
to partially eliminate the state effect at the receivers. The corresponding
achievable rate region is then derived. We also propose several heuristic
schemes for some special cases: the strong interference case, the mixed
interference case, and the weak interference case. For the strong and mixed
interference case, numerical results are provided to show that active
interference cancellation significantly enlarges the achievable rate region.
For the weak interference case, flexible power splitting instead of active
interference cancellation improves the performance significantly.
|
1104.3662
|
Asymptotic Capacity of Large Relay Networks with Conferencing Links
|
cs.IT math.IT
|
In this correspondence, we consider a half-duplex large relay network, which
consists of one source-destination pair and $N$ relay nodes, each of which is
connected with a subset of the other relays via signal-to-noise ratio
(SNR)-limited out-of-band conferencing links. The asymptotic achievable rates
of two basic relaying schemes with the "$p$-portion" conferencing strategy are
studied: For the decode-and-forward (DF) scheme, we prove that the DF rate
scales as $\mathcal{O} (\log (N))$; for the amplify-and-forward (AF) scheme, we
prove that it asymptotically achieves the capacity upper bound in some
interesting scenarios as $N$ goes to infinity.
|
1104.3681
|
An Unmanned Aerial Vehicle as Human-Assistant Robotics System
|
cs.RO
|
According to the American Heritage Dictionary [1], Robotics is the science or
study of the technology associated with the design, fabrication, theory, and
application of robots. The term Hoverbot is also often used to refer to
sophisticated mechanical devices that are remotely controlled by human beings
even though these devices are not autonomous. This paper describes a remotely
controlled hoverbot built by installing a transmitter and receiver on both
sides, namely the control computer (PC) and the hoverbot, respectively. Data is
transmitted as a signal or instruction via an infrastructure network and is
converted into a command for the hoverbot that operates at a remote site.
|
1104.3727
|
A complete classification of doubly even self-dual codes of length 40
|
math.CO cs.IT math.IT
|
A complete classification of binary doubly even self-dual codes of length 40
is given. As a consequence, a classification of binary extremal self-dual codes
of length 38 is also given.
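For readers unfamiliar with the objects being classified, a minimal sketch (our own, not from the paper) verifying the two defining properties on the length-8 extended Hamming code, the smallest doubly even self-dual binary code:

```python
import itertools
import numpy as np

# Generator matrix of the [8,4] extended Hamming code over GF(2).
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]], dtype=int)

# Self-dual: every pair of generator rows is orthogonal over GF(2).
assert not (G @ G.T % 2).any()

# Doubly even: the weight of every codeword is divisible by 4.
for m in itertools.product([0, 1], repeat=4):
    c = np.array(m) @ G % 2
    assert c.sum() % 4 == 0
```

The classification in the paper enumerates, up to equivalence, all codes of length 40 with these same two properties.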
|
1104.3739
|
On a conjecture by Belfiore and Sol\'e on some lattices
|
cs.IT math.IT math.NT
|
The point of this note is to prove that the secrecy function attains its
maximum at y=1 on all known extremal even unimodular lattices. This is a
special case of a conjecture by Belfiore and Sol\'e. Further, we will give a
very simple method to verify or disprove the conjecture on any given unimodular
lattice.
|
1104.3742
|
Hue Histograms to Spatiotemporal Local Features for Action Recognition
|
cs.CV
|
Despite the recent developments in spatiotemporal local features for action
recognition in video sequences, local color information has so far been
ignored. However, color has been proved an important element to the success of
automated recognition of objects and scenes. In this paper we extend the
space-time interest point descriptor STIP to take into account the color
information on the features' neighborhood. We compare the performance of our
color-aware version of STIP (which we have called HueSTIP) with the original
one.
|
1104.3791
|
Fast matrix computations for pair-wise and column-wise commute times and
Katz scores
|
cs.SI cs.NA physics.soc-ph
|
We first explore methods for approximating the commute time and Katz score
between a pair of nodes. These methods are based on the approach of matrices,
moments, and quadrature developed in the numerical linear algebra community.
They rely on the Lanczos process and provide upper and lower bounds on an
estimate of the pair-wise scores. We also explore methods to approximate the
commute times and Katz scores from a node to all other nodes in the graph.
Here, our approach for the commute times is based on a variation of the
conjugate gradient algorithm, and it provides an estimate of all the diagonals
of the inverse of a matrix. Our technique for the Katz scores is based on
exploiting an empirical localization property of the Katz matrix. We adapt
algorithms used for personalized PageRank computation to these Katz scores and
theoretically show that this approach is convergent. We evaluate these methods
on 17 real-world graphs ranging in size from 1,000 to 1,000,000 nodes. Our
results show that our pair-wise commute time method and column-wise Katz
algorithm both have attractive theoretical properties and empirical
performance.
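As a small illustration of the quantities being approximated (our own sketch, not the paper's Lanczos- or CG-based algorithms), the exact Katz matrix on a toy graph, with one column of scores obtained by a single linear solve instead of a full inverse:

```python
import numpy as np

# Toy undirected graph: triangle 0-1-2 with a pendant node 3 attached to 2.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

alpha = 0.1  # damping; must satisfy alpha < 1/spectral_radius(A)
I = np.eye(4)
K = np.linalg.inv(I - alpha * A) - I  # Katz scores: sum_k alpha^k (A^k)_ij

# One column of scores via a single linear solve (no full inverse needed).
col0 = np.linalg.solve(I - alpha * A, I[:, 0]) - I[:, 0]
assert np.allclose(col0, K[:, 0])
assert K[0, 1] > K[0, 3]  # a neighbour scores higher than a distance-2 pair
```

The paper's methods replace the dense solve with Lanczos-based bounds (pair-wise) and sparse iterative schemes (column-wise) so that graphs with millions of nodes become tractable.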
|
1104.3792
|
A sufficient condition on monotonic increase of the number of nonzero
entries in the optimizer of L1 norm penalized least-square problem
|
stat.ML cs.LG math.NA
|
The $\ell_1$ norm based optimization is widely used in signal processing,
especially in recent compressed sensing theory. This paper studies the solution
path of the $\ell_1$ norm penalized least-square problem, whose constrained
form is known as the Least Absolute Shrinkage and Selection Operator (LASSO). A
solution path is the set of all the optimizers with respect to the evolution of
the hyperparameter (Lagrange multiplier). The study of the solution path is of
great significance for viewing and understanding the profile of the tradeoff
between the approximation and regularization terms. If the solution path of a
given problem is known, it can help us find the optimal hyperparameter under a
given criterion such as the Akaike Information Criterion. In this paper we
present a sufficient condition on the $\ell_1$ norm penalized least-square
problem. Under this sufficient condition, the number of nonzero entries in the
optimizer or solution vector increases monotonically as the hyperparameter
decreases. We also generalize the result to the often used total variation
case, where the $\ell_1$ norm is taken over the first order derivative of the
solution vector. We prove that the proposed condition has intrinsic connections
with the condition given by Donoho et al. \cite{Donoho08} and the positive cone
condition by Efron et al. \cite{Efron04}. However, the proposed condition does
not need to assume the sparsity level of the signal as required by Donoho et
al.'s condition, and is easier to verify than Efron et al.'s positive cone
condition in practical applications.
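A minimal sketch of the monotonicity phenomenon the paper studies (our own, assuming an orthonormal design, where the LASSO solution reduces to entrywise soft-thresholding and the condition trivially holds):

```python
import numpy as np

def soft_threshold(b, lam):
    """LASSO solution for an orthonormal design: entrywise shrinkage of X^T y."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

b = np.array([3.0, -2.0, 1.5, 0.5, -0.1])  # correlations X^T y (illustrative)
lams = np.linspace(3.0, 0.0, 61)           # hyperparameter, decreasing
nnz = [np.count_nonzero(soft_threshold(b, lam)) for lam in lams]

# The nonzero count never decreases as the hyperparameter decreases.
assert all(a <= c for a, c in zip(nnz, nnz[1:]))
assert nnz[0] == 0 and nnz[-1] == 5
```

The paper's contribution is a sufficient condition under which this monotone behaviour persists for general (non-orthonormal) designs, where entries can otherwise leave the active set along the path.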
|
1104.3801
|
Extended force density method and its expressions
|
cs.CE math-ph math.MP
|
The objective of this work can be divided into two parts. The first one is to
propose an extension of the force density method (FDM)(H.J. Schek, 1974), a
form-finding method for prestressed cable-net structures. The second one is to
present a review of various form-finding methods for tension structures, in the
relation to the extended FDM. In the first part, it is pointed out that the
original FDM becomes useless when applied to prestressed structures that
consist of combinations of both tension and compression members, whereas the
FDM is usually advantageous in form-finding analysis of cable-nets. To
eliminate this limitation, a functional whose stationary problem simply
represents the FDM is first proposed. Additionally, the existence of a
variational principle in the FDM is also indicated. Then, the FDM is
extensively redefined by generalizing the formulation of the functional. As a
result, the generalized functionals enable us to find the forms of tension
structures that consist of combinations of both tension and compression
members, such as tensegrities and suspended membranes with compression struts.
In the second part, the important role of three expressions used in the
description of the extended FDM is indicated: stationary problems of
functionals, the principle of virtual work, and stationary conditions using
the nabla symbol. They can be commonly found in general problems of statics,
whereas the original FDM only provides a particular form of equilibrium
equation. Then, to demonstrate the advantage of such expressions, various
form-finding methods are reviewed and compared. As a result, the common
features and the differences among various form-finding methods are examined.
Finally, to give an overview of the reviewed methods, the corresponding
expressions are shown in the form of three tables.
|
1104.3810
|
Fixed Block Compression Boosting in FM-Indexes
|
cs.DS cs.IR
|
A compressed full-text self-index occupies space close to that of the
compressed text and simultaneously allows fast pattern matching and random
access to the underlying text. Among the best compressed self-indexes, in
theory and in practice, are several members of the FM-index family. In this
paper, we describe new FM-index variants that combine nice theoretical
properties, simple implementation and improved practical performance. Our main
result is a new technique called fixed block compression boosting, which is a
simpler and faster alternative to optimal compression boosting and implicit
compression boosting used in previous FM-indexes.
|
1104.3833
|
Noise Folding in Compressed Sensing
|
cs.IT math.IT math.ST stat.TH
|
The literature on compressed sensing has focused almost entirely on settings
where the signal is noiseless and the measurements are contaminated by noise.
In practice, however, the signal itself is often subject to random noise prior
to measurement. We briefly study this setting and show that, for the vast
majority of measurement schemes employed in compressed sensing, the two models
are equivalent with the important difference that the signal-to-noise ratio is
divided by a factor proportional to p/n, where p is the dimension of the signal
and n is the number of observations. Since p/n is often large, this leads to
noise folding which can have a severe impact on the SNR.
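A quick simulation of the folding effect described above (illustrative; the matrix ensemble and sizes are our own choices): for an n x p sensing matrix with i.i.d. N(0, 1/n) entries, measuring y = A(x + w) is equivalent to y = Ax + z with z = Aw, whose per-component variance is roughly (p/n) * sigma^2:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 50, 500, 0.1
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, p))  # typical CS-style matrix

# Fold the signal noise through A and compare variances empirically.
w = rng.normal(0.0, sigma, size=(p, 2000))
ratio = (A @ w).var() / sigma**2
assert 0.7 * (p / n) < ratio < 1.3 * (p / n)  # close to p/n = 10: ~10 dB SNR loss
```

With p/n = 10 here, the folded noise is roughly ten times stronger than the original signal noise, which is the SNR penalty the abstract refers to.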
|
1104.3847
|
Collective Construction of 2D Block Structures with Holes
|
cs.CG cs.RO
|
In this paper we present algorithms for collective construction systems in
which a large number of autonomous mobile robots transport modular building
elements to construct a desired structure. We focus on building block
structures subject to some physical constraints that restrict the order in
which the blocks may be attached to the structure. Specifically, we determine a
partial ordering on the blocks such that if they are attached in accordance
with this ordering, then (i) the structure is a single, connected piece at all
intermediate stages of construction, and (ii) no block is attached between two
other previously attached blocks, since such a space is too narrow for a robot
to maneuver a block into it. Previous work has considered this problem for
building 2D structures without holes. Here we extend this work to 2D structures
with holes. We accomplish this by modeling the problem as a graph orientation
problem and describe an O(n^2) algorithm for solving it. We also describe how
this partial ordering may be used in a distributed fashion by the robots to
coordinate their actions during the building process.
|
1104.3904
|
An expert system for detecting automobile insurance fraud using social
network analysis
|
cs.AI cs.SI physics.soc-ph stat.ML
|
The article proposes an expert system for detection, and subsequent
investigation, of groups of collaborating automobile insurance fraudsters. The
system is described and examined in great detail; several technical
difficulties in detecting fraud are also considered, so that the system is
applicable in practice. As opposed to many other approaches, the system uses
networks for representation of data. Networks are the most natural
representation of such a relational domain, allowing formulation and analysis
of complex relations between entities. Fraudulent entities are found by
employing a novel assessment algorithm, \textit{Iterative Assessment
Algorithm} (\textit{IAA}), also presented in the article. Besides intrinsic
attributes of entities, the algorithm also explores the relations between
entities. The prototype was
evaluated and rigorously analyzed on real world data. Results show that
automobile insurance fraud can be efficiently detected with the proposed system
and that appropriate data representation is vital.
|
1104.3911
|
Information Exchange Limits in Cooperative MIMO Networks
|
cs.IT math.IT
|
Concurrent presence of inter-cell and intra-cell interferences constitutes a
major impediment to reliable downlink transmission in multi-cell multiuser
networks. Harnessing such interferences largely hinges on two levels of
information exchange in the network: one from the users to the base-stations
(feedback) and the other one among the base-stations (cooperation). We
demonstrate that exchanging a finite number of bits across the network, in the
form of feedback and cooperation, is adequate for achieving the optimal
capacity scaling. We also show that the average level of information exchange
is independent of the number of users in the network. This level of information
exchange is considerably less than that required by the existing coordination
strategies which necessitate exchanging infinite bits across the network for
achieving the optimal sum-rate capacity scaling. The results provided rely on a
constructive proof.
|
1104.3925
|
On the Residue Codes of Extremal Type II Z4-Codes of Lengths 32 and 40
|
math.CO cs.IT math.IT
|
In this paper, we determine the dimensions of the residue codes of extremal
Type II Z4-codes for lengths 32 and 40. We demonstrate that every binary doubly
even self-dual code of length 32 can be realized as the residue code of some
extremal Type II Z4-code. It is also shown that there is a unique extremal Type
II Z4-code of length 32 whose residue code has the smallest dimension 6 up to
equivalence. As a consequence, many new extremal Type II Z4-codes of lengths 32
and 40 are constructed.
|
1104.3927
|
Translation-based Constraint Answer Set Solving
|
cs.AI
|
We solve constraint satisfaction problems through translation to answer set
programming (ASP). Our reformulations have the property that unit-propagation
in the ASP solver achieves well defined local consistency properties like arc,
bound and range consistency. Experiments demonstrate the computational value of
this approach.
|
1104.3929
|
Understanding Exhaustive Pattern Learning
|
cs.AI cs.LG
|
Pattern learning is an important problem in Natural Language Processing
(NLP). Some exhaustive pattern learning (EPL) methods (Bod, 1992) were proved
to be flawed (Johnson, 2002), while similar algorithms (Och and Ney, 2004)
showed great advantages on other tasks, such as machine translation. In this
article, we first formalize EPL, and then show that the probability given by an
EPL model is a constant-factor approximation of the probability given by an
ensemble method that integrates an exponential number of models obtained with
various segmentations of the training data. This work for the first time
provides theoretical justification for the widely used EPL algorithm in NLP,
which was previously viewed as a flawed heuristic method. Better understanding
of EPL may lead to improved pattern learning algorithms in the future.
|
1104.3953
|
Classical vs Quantum Games: Continuous-time Evolutionary Strategy
Dynamics
|
quant-ph cs.GT cs.IT math.IT
|
This paper unifies the concepts of evolutionary games and quantum strategies.
First, we state the formulation and properties of classical evolutionary
strategies, with focus on the destinations of evolution in 2-player 2-strategy
games. We then introduce a new formalism of quantum evolutionary dynamics, and
give an example where an evolving quantum strategy yields a reward when played
against its classical counterpart.
|
1104.4013
|
On Optimal Binary One-Error-Correcting Codes of Lengths $2^m-4$ and
$2^m-3$
|
cs.IT math.IT
|
Best and Brouwer [Discrete Math. 17 (1977), 235-245] proved that
triply-shortened and doubly-shortened binary Hamming codes (which have length
$2^m-4$ and $2^m-3$, respectively) are optimal. Properties of such codes are
here studied, determining among other things parameters of certain subcodes. A
utilization of these properties makes a computer-aided classification of the
optimal binary one-error-correcting codes of lengths 12 and 13 possible; there
are 237610 and 117823 such codes, respectively (with 27375 and 17513
inequivalent extensions). This completes the classification of optimal binary
one-error-correcting codes for all lengths up to 15. Some properties of the
classified codes are further investigated. Finally, it is proved that for any
$m \geq 4$, there are optimal binary one-error-correcting codes of length
$2^m-4$ and $2^m-3$ that cannot be lengthened to perfect codes of length
$2^m-1$.
|
1104.4024
|
Palette-colouring: a belief-propagation approach
|
cond-mat.stat-mech cs.AI cs.DS math.CO
|
We consider a variation of the prototype combinatorial-optimisation problem
known as graph-colouring. Our optimisation goal is to colour the vertices of a
graph with a fixed number of colours, in a way to maximise the number of
different colours present in the set of nearest neighbours of each given
vertex. This problem, which we pictorially call "palette-colouring", has been
recently addressed as a basic example of a problem arising in the context of
distributed data storage. Even though it has not been proved to be NP-complete,
random search algorithms find the problem hard to solve. Heuristics based on a
naive belief propagation algorithm are observed to work quite well in certain
conditions. In this paper, we build upon the mentioned result, working out the
correct belief propagation algorithm, which needs to take into account the
many-body nature of the constraints present in this problem. This method
improves the naive belief propagation approach, at the cost of increased
computational effort. We also investigate the emergence of a satisfiable to
unsatisfiable "phase transition" as a function of the vertex mean degree, for
different ensembles of sparse random graphs in the large size ("thermodynamic")
limit.
|
1104.4035
|
Wireless MIMO Switching
|
cs.IT cs.NI math.IT
|
In a generic switching problem, a switching pattern consists of a one-to-one
mapping from a set of inputs to a set of outputs (i.e., a permutation). We
propose and investigate a wireless switching framework in which a multi-antenna
relay is responsible for switching traffic among a set of $N$ stations. We
refer to such a relay as a MIMO switch. With beamforming and linear detection,
the MIMO switch controls which stations are connected to which stations. Each
beamforming matrix realizes a permutation pattern among the stations. We refer
to the corresponding permutation matrix as a switch matrix. By scheduling a set
of different switch matrices, full connectivity among the stations can be
established. In this paper, we focus on "fair switching" in which equal amounts
of traffic are to be delivered for all $N(N-1)$ ordered pairs of stations. In
particular, we investigate how the system throughput can be maximized. In
general, for large $N$ the number of possible switch matrices (i.e.,
permutations) is huge, making the scheduling problem combinatorially
challenging. We show that for N=4 and 5, only a subset of $N-1$ switch matrices
need to be considered in the scheduling problem to achieve good throughput. We
conjecture that this will be the case for large $N$ as well. This conjecture,
if valid, implies that for practical purposes, fair-switching scheduling is not
an intractable problem.
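To illustrate why a small set of switch matrices can already provide full connectivity (a sketch under our own choice of cyclic-shift permutations; the throughput-optimal subset studied in the paper may differ), the N-1 powers of a single cyclic shift cover all N(N-1) ordered station pairs:

```python
import numpy as np

N = 5
P = np.roll(np.eye(N, dtype=int), 1, axis=0)  # cyclic-shift permutation matrix

# Schedule the N-1 cyclic powers P, P^2, ..., P^{N-1} as switch matrices.
schedule = [np.linalg.matrix_power(P, k) for k in range(1, N)]

covered = set()
for S in schedule:
    for i, j in zip(*np.nonzero(S)):  # S[i, j] = 1: station j is routed to i
        covered.add((int(i), int(j)))

# The schedule connects every ordered pair of distinct stations exactly once.
assert covered == {(i, j) for i in range(N) for j in range(N) if i != j}
```

Each ordered pair appears in exactly one switch matrix, so equal time shares across the schedule deliver equal traffic to all N(N-1) pairs, which is the "fair switching" requirement.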
|
1104.4053
|
On the evolution of the instance level of DL-lite knowledge bases
|
cs.AI
|
Recent papers address the issue of updating the instance level of knowledge
bases expressed in Description Logic following a model-based approach. One of
the outcomes of these papers is that the result of updating a knowledge base K
is generally not expressible in the Description Logic used to express K. In
this paper we introduce a formula-based approach to this problem, by revisiting
some research work on formula-based updates developed in the '80s, in
particular the WIDTIO (When In Doubt, Throw It Out) approach. We show that our
operator enjoys desirable properties, including that both insertions and
deletions according to such operator can be expressed in the DL used for the
original KB. Also, we present polynomial-time algorithms for the evolution of
the instance level of knowledge bases expressed in the most expressive Description
Logics of the DL-lite family.
|
1104.4056
|
Cram\'er-Rao Bound for Localization with A Priori Knowledge on Biased
Range Measurements
|
cs.IT cs.SY math.IT math.OC
|
This paper derives a general expression for the Cram\'er-Rao bound (CRB) of
wireless localization algorithms using range measurements subject to bias
corruption. Specifically, the a priori knowledge about which range measurements
are biased and the probability density functions (PDFs) of the biases are
assumed to be available. For each range measurement, the error due to
estimating the time-of-arrival of the detected signal is modeled as a Gaussian
distributed random variable with zero mean and known variance. In general, the
derived CRB expression can be evaluated numerically. An approximate CRB
expression is also derived when the bias PDF is very informative. Using these
CRB expressions, we study the impact of the bias distribution on the mean
square error (MSE) bound corresponding to the CRB. The analysis is corroborated
by numerical experiments.
|
1104.4063
|
Fast redshift clustering with the Baire (ultra) metric
|
cs.IR astro-ph.IM stat.ML
|
The Baire metric induces an ultrametric on a dataset and is of linear
computational complexity, contrasted with the standard quadratic time
agglomerative hierarchical clustering algorithm. We apply the Baire distance to
spectrometric and photometric redshifts from the Sloan Digital Sky Survey
using, in this work, about half a million astronomical objects. We want to know
how well the (more costly to determine) spectrometric redshifts can predict
the (more easily obtained) photometric redshifts, i.e. we seek to regress the
spectrometric on the photometric redshifts, and we develop a clusterwise
nearest neighbor regression procedure for this.
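A minimal sketch of the Baire (ultra)metric on redshifts written as fixed-precision strings (the values below are our own, illustrative): the distance is 2^{-k}, where k is the length of the longest common prefix, so clustering amounts to grouping by prefix in linear time:

```python
def baire_distance(x: str, y: str) -> float:
    """Baire (ultra)metric: 2^{-k}, with k the longest common prefix length."""
    if x == y:
        return 0.0
    k = 0
    for a, b in zip(x, y):
        if a != b:
            break
        k += 1
    return 2.0 ** (-k)

# Redshifts written to fixed precision (illustrative values).
a, b, c = "0.1234", "0.1278", "0.9000"
assert baire_distance(a, b) < baire_distance(a, c)  # shared prefix "0.12"
# Strong (ultrametric) triangle inequality.
assert baire_distance(a, c) <= max(baire_distance(a, b), baire_distance(b, c))
```

Because prefix grouping is a single pass over the data, the induced hierarchy is obtained in linear time, in contrast to the quadratic cost of standard agglomerative clustering.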
|
1104.4107
|
Reinforcement-Driven Spread of Innovations and Fads
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We propose kinetic models for the spread of permanent innovations and
transient fads by the mechanism of social reinforcement. Each individual can be
in one of M+1 states of awareness 0,1,2,...,M, with state M corresponding to
adopting an innovation. An individual with awareness k<M increases to k+1 by
interacting with an adopter. Starting with a single adopter, the time for an
initially unaware population of size N to adopt a permanent innovation grows as
ln(N) for M=1, and as N^{1-1/M} for M>1. The fraction of the population that
remains clueless about a transient fad after it has come and gone changes
discontinuously as a function of the fad abandonment rate lambda for M>1. The
fad dies out completely in a time that varies non-monotonically with lambda.
|
1104.4141
|
Emergent Criticality Through Adaptive Information Processing in Boolean
Networks
|
cond-mat.dis-nn cs.NE nlin.AO
|
We study information processing in populations of Boolean networks with
evolving connectivity and systematically explore the interplay between the
learning capability, robustness, the network topology, and the task complexity.
We solve a long-standing open question and find computationally that, for large
system sizes $N$, adaptive information processing drives the networks to a
critical connectivity $K_{c}=2$. For finite size networks, the connectivity
approaches the critical value with a power-law of the system size $N$. We show
that network learning and generalization are optimized near criticality, given
that the task complexity and the amount of information provided exceed
threshold values. Both
random and evolved networks exhibit maximal topological diversity near $K_{c}$.
We hypothesize that this supports efficient exploration and robustness of
solutions. This is also reflected in our observation that the variance of the
values is maximal in critical network populations. Finally, we discuss
implications of
our results for determining the optimal topology of adaptive dynamical networks
that solve computational tasks.
|
1104.4153
|
Learning invariant features through local space contraction
|
cs.AI
|
We present in this paper a novel approach for training deterministic
auto-encoders. We show that by adding a well-chosen penalty term to the
classical reconstruction cost function, we can achieve results that equal or
surpass those attained by other regularized auto-encoders as well as denoising
auto-encoders on a range of datasets. This penalty term corresponds to the
Frobenius norm of the Jacobian matrix of the encoder activations with respect
to the input. We show that this penalty term results in a localized space
contraction which in turn yields robust features on the activation layer.
Furthermore, we show how this penalty term is related to both regularized
auto-encoders and denoising encoders and how it can be seen as a link between
deterministic and non-deterministic auto-encoders. We find empirically that
this penalty helps to carve a representation that better captures the local
directions of variation dictated by the data, corresponding to a
lower-dimensional non-linear manifold, while being more invariant to the vast
majority of directions orthogonal to the manifold. Finally, we show that by
using the learned features to initialize a MLP, we achieve state of the art
classification error on a range of datasets, surpassing other methods of
pre-training.
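The Jacobian-based penalty described above has a simple closed form for a one-layer sigmoid encoder, which the following minimal sketch illustrates (the toy weights and data are hypothetical, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(W, b, x):
    """Squared Frobenius norm of the encoder Jacobian dh/dx.

    For h = sigmoid(W @ x + b), dh_j/dx_i = h_j (1 - h_j) W_ji,
    so ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * ||W_j||^2.
    """
    h = sigmoid(W @ x + b)
    return np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
b = rng.normal(size=4)
x = rng.normal(size=6)

# Check the closed form against a central-difference Jacobian.
eps = 1e-6
J = np.empty((4, 6))
for i in range(6):
    dx = np.zeros(6); dx[i] = eps
    J[:, i] = (sigmoid(W @ (x + dx) + b) - sigmoid(W @ (x - dx) + b)) / (2 * eps)
print(np.allclose(contractive_penalty(W, b, x), np.sum(J ** 2), atol=1e-6))  # True
```

In training, this scalar would be added (with a weighting hyperparameter) to the reconstruction cost.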
|
1104.4154
|
Power Allocation Based on SEP Minimization in Two-Hop Decode-and-Forward
Relay Networks
|
cs.IT math.IT
|
The problem of optimal power allocation among the relays in a two-hop
decode-and-forward cooperative relay network with independent Rayleigh fading
channels is considered. It is assumed that only the relays that decode the
source message correctly contribute to data transmission. Moreover, only the
knowledge of statistical channel state information is available. A new simple
closed-form expression for the average symbol error probability is derived.
Based on this expression, a new power allocation method that minimizes the
average symbol error probability and takes into account the constraints on the
total average power of all the relay nodes and maximum instant power of each
relay node is developed. The corresponding optimization problem is shown to be
a convex problem that can be solved using interior point methods. However, an
approximate closed-form solution is obtained and shown to be practically more
appealing due to significant complexity reduction. The accuracy of the
approximation is discussed. Moreover, the obtained closed-form solution
gives additional insights into the optimal power allocation problem. Simulation
results confirm the improved performance of the proposed power allocation
scheme as compared to other schemes.
|
1104.4155
|
Interference Mitigation for Cognitive Radio MIMO Systems Based on
Practical Precoding
|
cs.IT math.IT
|
In this paper, we propose two subspace-projection-based precoding schemes,
namely, full-projection (FP)- and partial-projection (PP)-based precoding, for
a cognitive radio multiple-input multiple-output (CR-MIMO) network to mitigate
its interference to a primary time-division-duplexing (TDD) system. The
proposed precoding schemes are capable of estimating interference channels
between CR and primary networks, and incorporating the interference from the
primary to the CR system into CR precoding via a novel sensing approach. Then,
the CR performance and resulting interference of the proposed precoding schemes
are analyzed and evaluated. By fully projecting the CR transmission onto a null
space of the interference channels, the FP-based precoding scheme can
effectively avoid interfering with the primary system while boosting CR
throughput. The PP-based scheme is able to further improve the CR throughput by
partially projecting its transmission onto the null space.
|
1104.4163
|
Data Mining: A prediction of performer or underperformer using
classification
|
cs.DB cs.IR
|
Today's students generate large data sets in which precious information lies
hidden, and data mining techniques can help uncover it. In this paper, the
Bayes classification method is applied to such data to help an institution
identify students who consistently perform well. This study can help an
institution reduce its dropout ratio to a significant degree and improve the
overall performance of the institution.
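As a rough illustration of the kind of Bayes classification the abstract refers to, here is a minimal categorical naive Bayes sketch on invented student records (all feature names, values, and data are hypothetical):

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Categorical naive Bayes: count class priors and per-feature
    value frequencies per class."""
    prior = Counter(labels)
    cond = defaultdict(Counter)  # (feature index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, y)][v] += 1
    return prior, cond, len(labels)

def predict_nb(model, row):
    """Pick the class maximizing prior * product of (smoothed)
    conditional value frequencies."""
    prior, cond, n = model
    best, best_p = None, -1.0
    for y, cy in prior.items():
        p = cy / n
        for i, v in enumerate(row):
            counts = cond[(i, y)]
            p *= (counts[v] + 1) / (cy + len(counts) + 1)  # Laplace-style smoothing
        if p > best_p:
            best, best_p = y, p
    return best

# Hypothetical records: (attendance, internal marks) -> performer/underperformer.
rows = [("high", "good"), ("high", "good"), ("low", "poor"),
        ("low", "poor"), ("high", "poor"), ("low", "good")]
labels = ["perform", "perform", "under", "under", "perform", "under"]
model = train_nb(rows, labels)
print(predict_nb(model, ("high", "good")))   # "perform"
```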
|
1104.4164
|
A Data Mining view on Class Room Teaching Language
|
cs.DB cs.IR
|
Since ancient times, educational institutions in India have relied on
classroom teaching, in which a teacher explains the material and students
understand and learn the lesson. There is no absolute scale for measuring
knowledge, but the examination score is one scale that indicates the
performance of students. It is therefore important not only that appropriate
material is taught, but also which language of instruction is chosen, that
class notes are prepared, and that attendance is taken. This study analyses the
impact of language on the presence of students in the classroom. The main idea
is to find the support, confidence, and interestingness levels for the
appropriate language and attendance in the classroom. For this purpose,
association rules are used.
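The support, confidence, and interestingness measures mentioned above can be computed directly from transaction-style attendance records. A minimal sketch, with invented data and using lift as one common interestingness measure:

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """P(consequent | antecedent) = supp(A and C) / supp(A)."""
    return (support(transactions, set(antecedent) | set(consequent))
            / support(transactions, antecedent))

def lift(transactions, antecedent, consequent):
    """One common interestingness measure: confidence / supp(C)."""
    return (confidence(transactions, antecedent, consequent)
            / support(transactions, consequent))

# Hypothetical class records: language of instruction and attendance status.
records = [{"hindi", "present"}, {"hindi", "present"}, {"hindi", "absent"},
           {"english", "present"}, {"english", "absent"}, {"english", "absent"}]
print(support(records, {"hindi"}))                  # 0.5
print(confidence(records, {"hindi"}, {"present"}))  # 2/3
```

A rule such as "hindi => present" would be kept if its support and confidence exceed chosen thresholds and its lift exceeds 1.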
|
1104.4168
|
A Meshless Method for Variational Nonrigid 2-D Shape Registration
|
cs.CV
|
We present a method for nonrigid registration of 2-D geometric shapes. Our
contribution is twofold. First, we extend the classic chamfer-matching energy
to a variational functional. Second, we introduce a meshless deformation
model that can handle significant high-curvature deformations. We represent 2-D
shapes implicitly using distance transforms, and registration error is defined
based on the shape contours' mutual distances. In addition, we model global
shape deformation as an approximation blended from local deformation fields
using a partition of unity. The global deformation field is regularized by
penalizing inconsistencies between local fields. The representation can be made
adaptive to shape's contour, leading to registration that is both flexible and
efficient. Finally, registration is achieved by minimizing a variational
chamfer-energy functional combined with the consistency regularizer. We
demonstrate the effectiveness of our method on a number of experiments.
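The classic chamfer-matching energy that the paper extends can be illustrated in its basic discrete form: the average distance from each contour point of one shape to the nearest point of the other. A brute-force sketch on invented contours (not the authors' variational functional):

```python
import numpy as np

def chamfer_energy(A, B):
    """Symmetric discrete chamfer energy between two 2-D point sets:
    mean nearest-neighbor distance from A to B plus from B to A."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Two hypothetical contours: a unit circle and a shifted copy.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
shifted = circle + np.array([0.1, 0.0])

print(chamfer_energy(circle, circle))        # 0.0 for identical contours
print(chamfer_energy(circle, shifted) > 0)   # True
```

In practice the inner minimization is precomputed with a distance transform of the target shape rather than by brute force.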
|
1104.4209
|
Modeling the clustering in citation networks
|
physics.soc-ph cs.DL cs.SI
|
For the study of citation networks, a challenging problem is modeling the
high clustering. Existing studies indicate that a promising way to model the
high clustering is a copying strategy, i.e., a paper copies the references of
its neighbour as its own references. However, this line of models severely
underestimates the number of triangles observed in real citation networks and
thus cannot model the high clustering well. In this paper, we
point out that the failure of existing models lies in that they do not capture
the connecting patterns among existing papers. By leveraging the knowledge
indicated by such connecting patterns, we further propose a new model for the
high clustering in citation networks. Experiments on two real world citation
networks, respectively from a special research area and a multidisciplinary
research area, demonstrate that our model can reproduce not only the power-law
degree distribution as traditional models but also the number of triangles, the
high clustering coefficient and the size distribution of co-citation clusters
as observed in these real networks.
|
1104.4247
|
QoS-Aware Base-Station Selections for Distributed MIMO Links in
Broadband Wireless Networks
|
cs.IT math.IT
|
We propose the QoS-aware BS-selection schemes for the distributed wireless
MIMO links, which aim at minimizing the BS usages and reducing the interfering
range, while satisfying diverse statistical delay-QoS constraints characterized
by the delay-bound violation probability and the effective capacity technique.
In particular, based on the channel state information (CSI) and QoS
requirements, a subset of BS with variable cardinality for the distributed MIMO
transmission is dynamically selected, where the selections are controlled by a
central server. For the single-user scenario, we develop two optimization
frameworks, respectively, to derive the efficient BS-selection schemes and the
corresponding resource allocation algorithms. One framework uses the
incremental BS-selection and time-sharing (IBS-TS) strategies, and the other
employs the ordered-gain based BS-selection and probabilistic transmissions
(OGBS-PT). The IBS-TS framework can yield better performance, while the scheme
developed under the OGBS-PT framework is easier to implement. For the
multi-user scenario, we propose the optimization framework applying the
priority BS-selection, block-diagonalization precoding, and probabilistic
transmission (PBS-BD-PT) techniques. We also propose the optimization framework
applying the priority BS-selection, time-division-multiple-access, and
probabilistic transmission (PBS-TDMA-PT) techniques. We derive the optimal
transmission schemes for all the aforementioned frameworks. We also conduct a
set of simulation evaluations that compare our proposed schemes with several
baseline schemes and show the impact of the delay-QoS requirements, transmit
power, and traffic loads on the performance of BS selection for distributed
MIMO systems.
|
1104.4249
|
Robustness and Contagion in the International Financial Network
|
q-fin.GN cs.SI physics.soc-ph
|
The recent financial crisis of 2008 and the 2011 indebtedness of Greece
highlight the importance of understanding the structure of the global financial
network. In this paper we set out to analyze and characterize this network, as
captured by the IMF Coordinated Portfolio Investment Survey (CPIS), in two
ways. First, through an adaptation of the "error and attack" methodology [1],
we show that the network is of the "robust-yet-fragile" type, a topology found
in a wide variety of evolved networks. We compare these results against four
common null-models, generated only from first-order statistics of the empirical
data. In addition, we suggest a fifth, log-normal model, which generates
networks that seem to match the empirical one more closely. Still, this model
does not account for several higher-order network statistics, which reinforces
the added value of the higher-order analysis. Second, using loss-given-default
dynamics [2], we model financial interdependence and potential cascading of
financial distress through the network. Preliminary simulations indicate that
default by a single relatively small country like Greece can be absorbed by the
network, but that default in combination with defaults of other PIGS countries
(Portugal, Ireland, and Spain) could lead to a massive extinction cascade in
the global economy.
|
1104.4251
|
Distributed Self-Organization Of Swarms To Find Globally
$\epsilon$-Optimal Routes To Locally Sensed Targets
|
cs.RO cs.MA cs.SY math.OC
|
The problem of near-optimal distributed path planning to locally sensed
targets is investigated in the context of large swarms. The proposed algorithm
uses only information that can be locally queried, and rigorous theoretical
results on convergence, robustness, and scalability are established, and the
effect of system parameters, such as the agent-level communication radius and
agent velocities, on global performance is analyzed. The fundamental philosophy of the
proposed approach is to percolate local information across the swarm, enabling
agents to indirectly access the global context. A gradient emerges, reflecting
the performance of agents, computed in a distributed manner via local
information exchange between neighboring agents. It is shown that to follow
near-optimal routes to a target which can only be sensed locally, and whose
location is not known a priori, the agents need to simply move towards its
"best" neighbor, where the notion of "best" is obtained by computing the
state-specific language measure of an underlying probabilistic finite state
automaton. The theoretical results are validated in high-fidelity simulation
experiments with in excess of $10^4$ agents.
|
1104.4260
|
A Robust Artificial Noise Aided Transmit Design for MISO Secrecy
|
cs.IT math.IT
|
This paper considers an artificial noise (AN) aided secrecy rate maximization
(SRM) problem for a multi-input single-output (MISO) channel overheard by
multiple single-antenna eavesdroppers. We assume that the transmitter has
perfect knowledge about the channel to the desired user but imperfect knowledge
about the channels to the eavesdroppers. Therefore, the resultant SRM problem
is formulated in the way that we maximize the worst-case secrecy rate by
jointly designing the signal covariance ${\bf W}$ and the AN covariance ${\bf
\Sigma}$. However, such a worst-case SRM problem turns out to be hard to
optimize, since it is nonconvex in ${\bf W}$ and ${\bf \Sigma}$ jointly.
Moreover, it falls into the class of semi-infinite optimization problems.
Through a careful reformulation, we show that the worst-case SRM problem can be
handled by performing a one-dimensional line search in which a sequence of
semidefinite programs (SDPs) are involved. Moreover, we also show that the
optimal ${\bf W}$ admits a rank-one structure, implying that transmit
beamforming is secrecy rate optimal under the considered scenario. Simulation
results are provided to demonstrate the robustness and effectiveness of the
proposed design compared to a non-robust AN design.
|
1104.4266
|
Ecosystem Viable Yields
|
math.OC cs.SY q-bio.PE
|
The World Summit on Sustainable Development (Johannesburg, 2002) encouraged
the application of the ecosystem approach by 2010. However, at the same Summit,
the signatory States undertook to restore and exploit their stocks at maximum
sustainable yield (MSY), a concept and practice without ecosystemic dimension,
since MSY is computed species by species, on the basis of a monospecific model.
Acknowledging this gap, we propose a definition of "ecosystem viable yields"
(EVY) as yields compatible i) with guaranteed biological safety levels for all
time and ii) with an ecosystem dynamics. Unlike MSY, this notion is based not
on equilibrium but on viability theory, which offers advantages in terms of
robustness. For a generic class of multispecies models with harvesting, we
provide explicit expressions for the EVY. We apply our approach to the
anchovy--hake couple in the Peruvian upwelling ecosystem.
|
1104.4285
|
Universally Attainable Error and Information Exponents, and Equivocation
Rate for the Broadcast Channels with Confidential Messages
|
cs.IT cs.CR math.IT
|
We show universally attainable exponents for the decoding error and the
mutual information and universally attainable equivocation rates for the
conditional entropy for the broadcast channels with confidential messages. The
error exponents are the same as those given by Korner and Sgarro for the
broadcast channels with degraded message sets.
|
1104.4290
|
Algorithms and Complexity Results for Persuasive Argumentation
|
cs.AI
|
The study of arguments as abstract entities and their interaction as
introduced by Dung (Artificial Intelligence 177, 1995) has become one of the
most active research branches within Artificial Intelligence and Reasoning. A
main issue for abstract argumentation systems is the selection of acceptable
sets of arguments. Value-based argumentation, as introduced by Bench-Capon (J.
Logic Comput. 13, 2003), extends Dung's framework. It takes into account the
relative strength of arguments with respect to some ranking representing an
audience: an argument is subjectively accepted if it is accepted with respect
to some audience, it is objectively accepted if it is accepted with respect to
all audiences. Deciding whether an argument is subjectively or objectively
accepted, respectively, are computationally intractable problems. In fact, the
problems remain intractable under structural restrictions that render the main
computational problems for non-value-based argumentation systems tractable. In
this paper we identify nontrivial classes of value-based argumentation systems
for which the acceptance problems are polynomial-time tractable. The classes
are defined by means of structural restrictions in terms of the underlying
graphical structure of the value-based system. Furthermore we show that the
acceptance problems are intractable for two classes of value-based systems that
were conjectured to be tractable by Dunne (Artificial Intelligence 171, 2007).
|
1104.4295
|
Improving digital signal interpolation: L2-optimal kernels with
kernel-invariant interpolation speed
|
cs.CV math.OC
|
Interpolation is responsible for digital signal resampling and can
significantly degrade the original signal quality if not done properly. For
many years, optimal interpolation algorithms were sought within constrained
classes of interpolation kernel functions. We derive a new family of
unconstrained L2-optimal interpolation kernels, and compare their properties to
previously known ones. Although digital images are used to illustrate this work,
our L2-optimal kernels can be applied to interpolate any digital signals.
|
1104.4296
|
Collaboration in computer science: a network science approach. Part II
|
cs.SI cs.DL physics.soc-ph
|
We represent collaboration of authors in computer science papers in terms of
both affiliation and collaboration networks and observe how these networks
evolved over time since 1960. We investigate the temporal evolution of
bibliometric properties, like size of the discipline, productivity of scholars,
and collaboration level in papers, as well as of large-scale network
properties, like reachability and average separation distance among scientists,
distribution of the number of scholar collaborators, network clustering and
network assortativity by number of collaborators.
|
1104.4298
|
Curved Gabor Filters for Fingerprint Image Enhancement
|
cs.CV
|
Gabor filters play an important role in many application areas for the
enhancement of various types of images and the extraction of Gabor features.
For the purpose of enhancing curved structures in noisy images, we introduce
curved Gabor filters which locally adapt their shape to the direction of flow.
These curved Gabor filters enable the choice of filter parameters which
increase the smoothing power without creating artifacts in the enhanced image.
In this paper, curved Gabor filters are applied to the curved ridge and valley
structure of low-quality fingerprint images. First, we combine two orientation
field estimation methods in order to obtain a more robust estimation for very
noisy images. Next, curved regions are constructed by following the respective
local orientation and they are used for estimating the local ridge frequency.
Lastly, curved Gabor filters are defined based on curved regions and they are
applied for the enhancement of low-quality fingerprint images. Experimental
results on the FVC2004 databases show improvements of this approach in
comparison to state-of-the-art enhancement methods.
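For reference, a standard (straight) Gabor kernel can be generated as below; the curved variant described in the abstract would additionally bend the oscillation along the locally estimated ridge flow. Parameter choices here are illustrative, not from the paper:

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """Standard 2-D Gabor filter: an isotropic Gaussian envelope
    multiplied by a cosine wave oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# Illustrative parameters: orientation 0, ridge frequency 0.1 cycles/pixel.
k = gabor_kernel(size=15, theta=0.0, freq=0.1, sigma=3.0)
print(k.shape)   # (15, 15)
print(k[7, 7])   # 1.0 at the center (exp(0) * cos(0))
```

Enhancement then amounts to convolving the image with a kernel whose theta, freq, and sigma are chosen per pixel from the estimated orientation field and ridge frequency.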
|
1104.4300
|
A Short Course on Frame Theory
|
cs.IT math.IT
|
A Short Course on Frame Theory.
|
1104.4302
|
Rank Minimization over Finite Fields: Fundamental Limits and
Coding-Theoretic Interpretations
|
cs.IT math.IT stat.ML
|
This paper establishes information-theoretic limits in estimating a finite
field low-rank matrix given random linear measurements of it. These linear
measurements are obtained by taking inner products of the low-rank matrix with
random sensing matrices. Necessary and sufficient conditions on the number of
measurements required are provided. It is shown that these conditions are sharp
and the minimum-rank decoder is asymptotically optimal. The reliability
function of this decoder is also derived by appealing to de Caen's lower bound
on the probability of a union. The sufficient condition also holds when the
sensing matrices are sparse - a scenario that may be amenable to efficient
decoding. More precisely, it is shown that if the $n \times n$ sensing matrices
contain, on average, $\Omega(n \log n)$ entries, the number of measurements
required is the same as that when the sensing matrices are dense and contain
entries drawn uniformly at random from the field. Analogies are drawn between
the above results and rank-metric codes in the coding theory literature. In
fact, we are also strongly motivated by understanding when minimum rank
distance decoding of random rank-metric codes succeeds. To this end, we derive
distance properties of equiprobable and sparse rank-metric codes. These
distance properties provide a precise geometric interpretation of the fact that
the sparse ensemble requires as few measurements as the dense one. Finally, we
provide a non-exhaustive procedure to search for the unknown low-rank matrix.
|
1104.4308
|
Capacity Theorems for the Fading Interference Channel with a Relay and
Feedback Links
|
cs.IT math.IT
|
Handling interference is one of the main challenges in the design of wireless
networks. One of the key approaches to interference management is node
cooperation, which can be classified into two main types: relaying and
feedback. In this work we consider simultaneous application of both cooperation
types in the presence of interference. We obtain exact characterization of the
capacity regions for Rayleigh fading and phase fading interference channels
with a relay and with feedback links, in the strong and very strong
interference regimes. Four feedback configurations are considered: (1) feedback
from both receivers to the relay, (2) feedback from each receiver to the relay
and to one of the transmitters (either corresponding or opposite), (3) feedback
from one of the receivers to the relay, (4) feedback from one of the receivers
to the relay and to one of the transmitters. Our results show that there is a
strong motivation for incorporating relaying and feedback into wireless
networks.
|
1104.4321
|
Seeking Meaning in a Space Made out of Strokes, Radicals, Characters and
Compounds
|
cs.CL
|
Chinese characters can be compared to a molecular structure: a character is
analogous to a molecule, radicals are like atoms, calligraphic strokes
correspond to elementary particles, and when characters form compounds, they
are like molecular structures. In chemistry the conjunction of all of these
structural levels produces what we perceive as matter. In language, the
conjunction of strokes, radicals, characters, and compounds produces meaning.
But when does meaning arise? We all know that radicals are, in some sense, the
basic semantic components of Chinese script, but what about strokes?
Considering the fact that many characters are made by adding individual strokes
to (combinations of) radicals, we can legitimately ask the question whether
strokes carry meaning, or not. In this talk I will present my project of
extending traditional NLP techniques to radicals and strokes, aiming to obtain
a deeper understanding of the way ideographic languages model the world.
|
1104.4370
|
The maximum disjoint paths problem on multi-relations social networks
|
cs.DS cs.SI
|
Motivated by applications to social network analysis (SNA), we study the
problem of finding the maximum number of disjoint uni-color paths in an
edge-colored graph. We show the NP-hardness and the approximability of the
problem, and both approximation and exact algorithms are proposed. Since short
paths are much more significant in SNA, we also study the length-bounded
version of the problem, in which the lengths of paths are required to be upper
bounded by a fixed integer $l$. It is shown that the problem can be solved in
polynomial time for $l=3$ and is NP-hard for $l\geq 4$. We also show that the
problem can be approximated with ratio $(l-1)/2+\epsilon$ in polynomial time
for any $\epsilon >0$. Particularly, for $l=4$, we develop an efficient
2-approximation algorithm.
|
1104.4375
|
Array independent MIMO channel models with analytical characteristics
|
cs.IT math.IT
|
The conventional analytical channel models for multiple-input multiple-output
(MIMO) wireless radio channels are array dependent. In this paper, we present
several array independent MIMO channel models that inherit the essence of
analytical models. The key idea is to decompose the physical scattering channel
into two parts using the manifold decomposition technique: one is the wavefield
independent sampling matrices depending on the antenna arrays only; the other
is the array independent physical channel that can be individually modeled in
an analytical manner. Based on this framework, we first extend the
conventional virtual channel representation (VCR), which is restricted to
uniform linear arrays (ULAs) so far, to a general version applicable to
arbitrary array configurations. Then, we present two array independent
stochastic MIMO channel models based on the proposed new VCR as well as the
Weichselberger model. These two models are good at angular power spectrum (APS)
estimation and capacity prediction, respectively. Finally, the impact of array
characteristics on channel capacity is separately investigated by studying the
condition number of the array steering matrix at fixed angles, and the results
agree well with existing conclusions. Numerical results are presented for model
validation and comparison.
|
1104.4376
|
Intent Inference and Syntactic Tracking with GMTI Measurements
|
stat.ME cs.CV cs.LG
|
In conventional target tracking systems, human operators use the estimated
target tracks to make higher level inference of the target behaviour/intent.
This paper develops syntactic filtering algorithms that assist human operators
by extracting spatial patterns from target tracks to identify
suspicious/anomalous spatial trajectories. The targets' spatial trajectories
are modeled by a stochastic context free grammar (SCFG) and a switched mode
state space model. Bayesian filtering algorithms for stochastic context free
grammars are presented for extracting the syntactic structure and illustrated
for a ground moving target indicator (GMTI) radar example. The performance of
the algorithms is tested with the experimental data collected using DRDC
Ottawa's X-band Wideband Experimental Airborne Radar (XWEAR).
|
1104.4381
|
Unraveling the Rank-Size Rule with Self-Similar Hierarchies
|
physics.soc-ph cs.SI
|
Many scientists are interested in, but puzzled by, the various inverse power
laws with a negative exponent of 1, such as the rank-size rule. The rank-size
rule is a very simple scaling law followed by many of the ubiquitous empirical
patterns observed in physical and social systems. Where there is a rank-size
distribution, there will be a hierarchy with cascade structure. However, the
equivalence relation between the rank-size rule and the hierarchical scaling
law remains to be mathematically demonstrated and empirically tested. In
this paper, theoretical derivation, mathematical experiments, and empirical
analysis are employed to show that the rank-size rule is equivalent in theory
to the hierarchical scaling law (the Nn principle). Abstracting an ordered set
of quantities in the form {1, 1/2,..., 1/k,...} from the rank-size rule, I
prove a geometric subdivision theorem of the harmonic sequence (k=1, 2, 3,...).
By the theorem, the rank-size distribution can be transformed into a
self-similar hierarchy, thus a power law can be decomposed as a pair of
exponential laws, and further the rank-size power law can be reconstructed as a
hierarchical scaling law. A number of ubiquitous empirical observations and
rules, including Zipf's law, Pareto's distribution, fractals, allometric
scaling, 1/f noise, can be unified into the hierarchical framework. The
self-similar hierarchy can provide us with a new perspective of looking at the
inverse power law of nature or even how nature works.
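The geometric subdivision of the harmonic sequence can be sketched numerically: grouping the ranked values 1/k into levels of 1, 2, 4, 8, ... items yields element counts that double per level while mean sizes roughly halve, i.e. a pair of exponential laws (an illustrative sketch, not the paper's proof):

```python
# The harmonic sequence abstracted from the rank-size rule.
values = [1.0 / k for k in range(1, 2 ** 10)]

# Subdivide into hierarchy levels of 1, 2, 4, 8, ... elements.
levels, start, width = [], 0, 1
while start + width <= len(values):
    levels.append(values[start:start + width])
    start += width
    width *= 2

counts = [len(lv) for lv in levels]
means = [sum(lv) / len(lv) for lv in levels]

# Counts double per level while mean sizes roughly halve per level,
# decomposing the power law into a pair of exponential laws.
print(counts[:5])                                 # [1, 2, 4, 8, 16]
print([means[i + 1] / means[i] for i in range(4)])
```

The mean-size ratios approach 1/2 for deeper levels, consistent with the hierarchical scaling described above.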
|
1104.4384
|
EMBANKS: Towards Disk Based Algorithms For Keyword-Search In Structured
Databases
|
cs.DB
|
In recent years, there has been a lot of interest in the field of keyword
querying relational databases. A variety of systems such as DBXplorer [ACD02],
Discover [HP02] and ObjectRank [BHP04] have been proposed. Another such system
is BANKS, which enables data and schema browsing together with keyword-based
search for relational databases. It models tuples as nodes in a graph,
connected by links induced by foreign key and other relationships. The size of
the database graph that BANKS uses is proportional to the sum of the number of
nodes and edges in the graph. Systems such as SPIN, which search on Personal
Information Networks and use BANKS as the backend, maintain a lot of
information about the users' data. Since these systems run on the user's
workstation, which has other demands on its memory, such heavy memory use is
unreasonable and should be avoided if possible. In order to alleviate this
problem, we introduce EMBANKS (acronym for External Memory BANKS), a framework
for an optimized disk-based BANKS system. The complexity of this framework
poses many questions, some of which we try to answer in this thesis. We
demonstrate that the cluster representation proposed in EMBANKS enables
in-memory processing of very large database graphs. We also present detailed
experiments that show that EMBANKS can significantly reduce database load time
and query execution times when compared to the original BANKS algorithms.
|
1104.4385
|
Convex Approaches to Model Wavelet Sparsity Patterns
|
cs.CV stat.ML
|
Statistical dependencies among wavelet coefficients are commonly represented
by graphical models such as hidden Markov trees (HMTs). However, in linear
inverse problems such as deconvolution, tomography, and compressed sensing, the
presence of a sensing or observation matrix produces a linear mixing of the
simple Markovian dependency structure. This leads to reconstruction problems
that are non-convex optimizations. Past work has dealt with this issue by
resorting to greedy or suboptimal iterative reconstruction methods. In this
paper, we propose new modeling approaches based on group-sparsity penalties
that lead to convex optimizations that can be solved exactly and efficiently.
We show that the methods we develop perform significantly better in
deconvolution and compressed sensing applications, while being as
computationally efficient as standard coefficient-wise approaches such as
lasso.
|
1104.4406
|
Sparsity based sub-wavelength imaging with partially incoherent light
via quadratic compressed sensing
|
cs.IT math.IT physics.optics
|
We demonstrate that sub-wavelength optical images borne on
partially-spatially-incoherent light can be recovered, from their far-field or
from the blurred image, given the prior knowledge that the image is sparse, and
only that. The reconstruction method relies on the recently demonstrated
sparsity-based sub-wavelength imaging. However, for
partially-spatially-incoherent light, the relation between the measurements and
the image is quadratic, yielding non-convex measurement equations that do not
conform to previously used techniques. Consequently, we demonstrate new
algorithmic methodology, referred to as quadratic compressed sensing, which can
be applied to a range of other problems involving information recovery from
partial correlation measurements, including when the correlation function has
local dependencies. Specifically for microscopy, this method can be readily
extended to white light microscopes with the additional knowledge of the light
source spectrum.
|
1104.4418
|
Internal links and pairs as a new tool for the analysis of bipartite
complex networks
|
cs.SI physics.soc-ph
|
Many real-world complex networks are best modeled as bipartite (or 2-mode)
graphs, where nodes are divided into two sets with links connecting one side to
the other. However, there is currently a lack of methods to analyze properly
such graphs as most existing measures and methods are suited to classical
graphs. A usual but limited approach consists in deriving 1-mode graphs (called
projections) from the underlying bipartite structure, though this causes an
important loss of information and raises data storage issues. We introduce here
internal links and pairs as new notions useful for such analysis: they give
insights into the information lost by projecting the bipartite graph. We
illustrate the relevance of these concepts on several real-world instances,
showing how they enable us to discriminate behaviors among various cases when
we compare them to a benchmark of random networks. Then, we show that we can
draw benefit from this concept for both modeling complex networks and storing
them in a compact format.
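To make the notion concrete, here is a minimal sketch in Python. It assumes one natural formalization (an illustrative assumption, not necessarily the paper's exact definition): a bipartite link is internal if deleting it leaves the 1-mode projection unchanged. The helper names `top_projection` and `internal_links` are hypothetical.

```python
def top_projection(edges):
    """1-mode (top) projection: two top nodes are linked iff they share
    a bottom node. Edges are (top, bottom) pairs."""
    by_bottom = {}
    for u, b in edges:
        by_bottom.setdefault(b, set()).add(u)
    proj = set()
    for tops in by_bottom.values():
        for u in tops:
            for w in tops:
                if u < w:  # assumes comparable top-node labels
                    proj.add((u, w))
    return proj

def internal_links(edges):
    """Links whose deletion does not change the top projection, i.e. the
    information they carry is redundant after projecting."""
    base = top_projection(edges)
    return [e for e in edges
            if top_projection([f for f in edges if f != e]) == base]

# Example: bottoms "x" and "y" both connect tops 1 and 2, so the links
# through "x" are redundant for the projection.
edges = [(1, "x"), (2, "x"), (1, "y"), (2, "y"), (3, "y")]
```

This brute-force check recomputes the whole projection per edge; a real implementation would only inspect the neighborhood of the affected bottom node.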
|
1104.4426
|
Phylogeny and geometry of languages from normalized Levenshtein distance
|
cs.CL q-bio.PE
|
The idea that the distance among pairs of languages can be evaluated from
lexical differences seems to have its roots in the work of the French explorer
Dumont D'Urville. He collected comparative words lists of various languages
during his voyages aboard the Astrolabe from 1826 to 1829 and, in his work
about the geographical division of the Pacific, he proposed a method to measure
the degree of relation between languages.
The method used by modern lexicostatistics, developed by Morris Swadesh
in the 1950s, measures distances from the percentage of shared cognates, which
are words with a common historical origin. The weak point of this method is
that subjective judgment plays a relevant role.
Recently, we have proposed a new automated method which is motivated by the
analogy with genetics. The new approach avoids any subjectivity and results can
be easily replicated by other scholars. The distance between two languages is
defined by considering a renormalized Levenshtein distance between pairs of
words with the same meaning and averaging on the words contained in a list. The
renormalization, which takes into account the length of the words, plays a
crucial role, and no sensible results can be found without it.
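A sketch of the distance just described (helper names are illustrative): the renormalization divides the edit distance by the length of the longer word, and the language distance averages over aligned same-meaning word pairs.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_distance(a, b):
    """Renormalize by the longer word's length, so each word pair
    contributes a distance in [0, 1] regardless of word length."""
    return levenshtein(a, b) / max(len(a), len(b))

def language_distance(words1, words2):
    """Average normalized distance over aligned same-meaning word lists."""
    pairs = list(zip(words1, words2))
    return sum(normalized_distance(a, b) for a, b in pairs) / len(pairs)
```

Without the division by `max(len(a), len(b))`, long words would dominate the average, which is the pitfall the renormalization avoids.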
In this paper we give a short review of our automated method and we
illustrate it by considering the cluster of Malagasy dialects. We show that it
sheds new light on their kinship relation and also that it furnishes a lot of
new information concerning the modalities of the settlement of Madagascar.
|
1104.4491
|
Opportunistic Wireless Relay Networks: Diversity-Multiplexing Tradeoff
|
cs.IT math.IT
|
Opportunistic analysis has traditionally relied on independence assumptions
that break down in many interesting and useful network topologies. This paper
develops techniques that expand opportunistic analysis to a broader class of
networks, proposes new opportunistic methods for several network geometries,
and analyzes them in the high-SNR regime. For each of the geometries studied in
the paper, we analyze the opportunistic DMT of several relay protocols,
including amplify-and-forward, decode-and-forward, compress-and-forward,
non-orthogonal amplify-forward, and dynamic decode-forward. Among the
highlights of the results: in a variety of multi-user single-relay networks,
simple selection strategies are developed and shown to be DMT-optimal. It is
shown that compress-forward relaying achieves the DMT upper bound in the
opportunistic multiple-access relay channel as well as in the opportunistic nxn
user network with relay. Other protocols, e.g. dynamic decode-forward, are
shown to be near optimal in several cases. Finite-precision feedback is
analyzed for the opportunistic multiple-access relay channel, the opportunistic
broadcast relay channel, and the opportunistic gateway channel, and is shown to
be almost as good as full channel state information.
|
1104.4512
|
Robust Clustering Using Outlier-Sparsity Regularization
|
stat.ML cs.LG
|
Notwithstanding the popularity of conventional clustering algorithms such as
K-means and probabilistic clustering, their clustering results are sensitive to
the presence of outliers in the data. Even a few outliers can compromise the
ability of these algorithms to identify meaningful hidden structures rendering
their outcome unreliable. This paper develops robust clustering algorithms that
not only aim to cluster the data, but also to identify the outliers. The novel
approaches rely on the infrequent presence of outliers in the data which
translates to sparsity in a judiciously chosen domain. Capitalizing on the
sparsity in the outlier domain, outlier-aware robust K-means and probabilistic
clustering approaches are proposed. Their novelty lies in identifying outliers
while effecting sparsity in the outlier domain through carefully chosen
regularization. A block coordinate descent approach is developed to obtain
iterative algorithms with convergence guarantees and small excess computational
complexity with respect to their non-robust counterparts. Kernelized versions
of the robust clustering algorithms are also developed to efficiently handle
high-dimensional data, identify nonlinearly separable clusters, or even cluster
objects that are not represented by vectors. Numerical tests on both synthetic
and real datasets validate the performance and applicability of the novel
algorithms.
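A hedged sketch of the outlier-aware K-means idea: block coordinate descent on a cost of the form sum_i ||x_i - mu_{c_i} - o_i||^2 + lam * sum_i ||o_i||, where nonzero rows of O flag outliers. The group soft-threshold on the residuals is the step that effects sparsity in the outlier domain. The step order, initialization, and thresholding rule below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def robust_kmeans(X, k, lam, iters=50):
    """Outlier-aware K-means sketch (illustrative, not the paper's method):
    alternate cluster assignment, outlier update, and center update."""
    mu = X[:k].astype(float)              # naive deterministic init (assumption)
    O = np.zeros_like(X, dtype=float)     # outlier matrix, one row per point
    for _ in range(iters):
        # 1) Assign each outlier-compensated point to its nearest center.
        c = np.linalg.norm((X - O)[:, None, :] - mu[None], axis=2).argmin(axis=1)
        # 2) Group soft-threshold residuals: rows with norm <= lam/2 get o_i = 0,
        #    so only gross outliers receive a nonzero compensation.
        r = X - mu[c]
        n = np.maximum(np.linalg.norm(r, axis=1, keepdims=True), 1e-12)
        O = r * np.maximum(0.0, 1.0 - lam / (2.0 * n))
        # 3) Update each center as the mean of its compensated points.
        for j in range(k):
            if np.any(c == j):
                mu[j] = (X - O)[c == j].mean(axis=0)
    return c, mu, O
```

On two tight clusters plus one gross outlier, the outlier's row of O is the only nonzero one, so the centers are no longer dragged toward it.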
|
1104.4521
|
A Metric Between Probability Distributions on Finite Sets of Different
Cardinalities and Applications to Order Reduction
|
cs.SY cs.IT math.IT math.OC
|
With increasing use of digital control it is natural to view control inputs
and outputs as stochastic processes assuming values over finite alphabets
rather than in a Euclidean space. As control over networks becomes increasingly
common, data compression by reducing the size of the input and output alphabets
without losing the fidelity of representation becomes relevant. This requires
us to define a notion of distance between two stochastic processes assuming
values in distinct sets, possibly of different cardinalities. If the two
processes are i.i.d., then the problem becomes one of defining a metric between
two probability distributions over distinct finite sets of possibly different
cardinalities. This is the problem addressed in the present paper. A metric is
defined in terms of a joint distribution on the product of the two sets, which
has the two given distributions as its marginals, and has minimum entropy.
Computing the metric exactly turns out to be NP-hard. Therefore an efficient
greedy algorithm is presented for finding an upper bound on the distance. The
associated order-reduction problem also turns out to be NP-hard, so again a
greedy algorithm is constructed for finding a suboptimal reduced-order
approximation. Taken
together, all the results presented here permit the approximation of an i.i.d.
process over a set of large cardinality by another i.i.d. process over a set of
smaller cardinality. In future work, attempts will be made to extend this work
to Markov processes over finite sets.
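Since the minimum-entropy coupling at the heart of this metric is NP-hard, a natural greedy heuristic is to repeatedly match the largest remaining masses of the two distributions; this always produces a valid joint distribution with the given marginals, whose entropy upper-bounds the optimum. The exact greedy rule used in the paper is an assumption here; this is a generic sketch.

```python
from math import log2

def greedy_coupling(p, q):
    """Greedy coupling: repeatedly put min of the largest remaining masses
    of p and q on the joint, preserving both marginals by construction."""
    p, q = list(p), list(q)
    joint = {}
    for _ in range(len(p) + len(q)):   # at most |p|+|q|-1 support points
        i = max(range(len(p)), key=p.__getitem__)
        j = max(range(len(q)), key=q.__getitem__)
        m = min(p[i], q[j])
        if m <= 1e-12:
            break
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        p[i] -= m
        q[j] -= m
    return joint

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as a dict of masses."""
    return -sum(v * log2(v) for v in dist.values() if v > 0)
```

For p = (1/2, 1/2) over a 2-set and q = (1/2, 1/4, 1/4) over a 3-set, the greedy coupling has support size 3 and entropy 1.5 bits, even though the sets have different cardinalities.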
|
1104.4558
|
Controlled Tripping of Overheated Lines Mitigates Power Outages
|
physics.soc-ph cs.SY math.OC
|
We study the evolution of fast blackout cascades in the model of the Polish
(transmission) power grid (2700 nodes and 3504 transmission lines). The cascade
is initiated by a sufficiently severe initial contingency tripping. It
propagates via sequential trippings of many more overheated lines, islanding
loads and generators and eventually arriving at a fixed point with the
surviving part of the system being power-flow-balanced and the rest of the
system being outaged. Utilizing an improved form of the quasi-static model for
cascade propagation introduced in our earlier study (Statistical Classification
of Cascading Failures in Power Grids, IEEE PES GM 2011), we analyze how the
severity of the cascade depends on the order of tripping overheated lines. Our
main observation is that the order of tripping has a tremendous effect on the
size of the resulting outage. Finding the "best" tripping, defined as causing
the least damage, constitutes a difficult dynamical optimization problem, whose
solution is most likely computationally infeasible. Instead, here we study
performance of a number of natural heuristics, resolving the next switching
decision based on the current state of the grid. Overall, we conclude that
controlled intentional tripping is advantageous in the situation of a fast
developing extreme emergency, as it provides significant mitigation of the
resulting damage.
|
1104.4578
|
Exploring Human Mobility Patterns Based on Location Information of US
Flights
|
physics.data-an cs.SI physics.soc-ph
|
A range of early studies have been conducted to illustrate human mobility
patterns using different tracking data, such as dollar notes, cell phones and
taxicabs. Here, we explore human mobility patterns based on massive tracking
data of US flights. Both topological and geometric properties are examined in
detail. We found that topological properties, such as traffic volume (between
airports) and degree of connectivity (of individual airports), including both
in- and out-degrees, follow a power law distribution, whereas a geometric
property such as travel length does not: the travel lengths exhibit an
exponential distribution rather than a power law with an exponential cutoff,
as previous studies had suggested. We further simulated human mobility on the
established
topologies of airports with various moving behaviors and found that the
mobility patterns are mainly attributed to the underlying binary topology of
airports and have little to do with other factors, such as moving behaviors and
geometric distances. Apart from the above findings, this study adopts the
head/tail division rule, a regularity behind any heavy-tailed distribution,
for extracting individual airports. The adoption of this rule for
data processing constitutes another major contribution of this paper.
Keywords: scaling of geographic space, head/tail division rule, power law,
geographic information, agent-based simulations
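The head/tail division rule mentioned above can be sketched as a recursive split at the mean: the minority above the mean is the head, and the split repeats on the head while it stays a small minority. The 40% stopping fraction is a common choice and an assumption here.

```python
def head_tail_breaks(values, head_frac=0.4):
    """Head/tail division for a heavy-tailed list of values: return the
    sequence of mean-value break points. Stops when the head is empty or
    is no longer a minority (fraction above head_frac, an assumption)."""
    breaks = []
    while len(values) > 1:
        mean = sum(values) / len(values)
        head = [v for v in values if v > mean]
        if not head or len(head) / len(values) > head_frac:
            break
        breaks.append(mean)
        values = head
    return breaks
```

On a heavy-tailed sample such as 80 small, 15 medium, and 5 large values, the rule yields two breaks and thus three classes, with the top class containing only the few largest airports.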
|
1104.4605
|
Compressive Network Analysis
|
stat.ML cs.DM cs.LG cs.SI physics.soc-ph
|
Modern data acquisition routinely produces massive amounts of network data.
Though many methods and models have been proposed to analyze such data, the
research of network data is largely disconnected with the classical theory of
statistical learning and signal processing. In this paper, we present a new
framework for modeling network data, which connects two seemingly different
areas: network data analysis and compressed sensing. From a nonparametric
perspective, we model an observed network using a large dictionary. In
particular, we consider the network clique detection problem and show
connections between our formulation and a new algebraic tool, namely Radon
basis pursuit in homogeneous spaces. Such a connection allows us to identify
rigorous recovery conditions for clique detection problems. Though this paper
is mainly conceptual, we also develop practical approximation algorithms for
solving empirical problems and demonstrate their usefulness on real-world
datasets.
|
1104.4607
|
Tree-Structured Random Vector Quantization for Limited-Feedback Wireless
Channels
|
cs.IT math.IT
|
We consider the quantization of a transmit beamforming vector in multiantenna
channels and of a signature vector in code division multiple access (CDMA)
systems. Assuming perfect channel knowledge, the receiver selects for a
transmitter the vector that maximizes the performance from a random vector
quantization (RVQ) codebook, which consists of independent isotropically
distributed unit-norm vectors. The quantized vector is then relayed to the
transmitter via a rate-limited feedback channel. The RVQ codebook requires an
exhaustive search to locate the selected entry. To reduce the search
complexity, we apply generalized Lloyd or $k$-dimensional (kd)-tree algorithms
to organize RVQ entries into a tree. In examples shown, the search complexity
of tree-structured (TS) RVQ can be a few orders of magnitude less than that of
the unstructured RVQ for the same performance. We also derive a performance
approximation for TS-RVQ in the large-system limit, which predicts the
performance of a moderate-size system very well.
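The tree-search idea can be sketched as follows. This is a hedged stand-in for the paper's generalized Lloyd / kd-tree constructions: real-valued codebooks, a single two-level tree, and an anchor-based partition are illustrative assumptions.

```python
import numpy as np

def rvq_codebook(n, d, seed=0):
    """RVQ codebook: independent isotropically distributed unit-norm vectors
    (real-valued here for simplicity)."""
    rng = np.random.default_rng(seed)
    C = rng.standard_normal((n, d))
    return C / np.linalg.norm(C, axis=1, keepdims=True)

def build_tree(C, groups=16, seed=1):
    """Two-level tree: assign each codeword to its most-aligned anchor
    vector, so the search can descend anchor-first."""
    anchors = rvq_codebook(groups, C.shape[1], seed)
    labels = np.abs(C @ anchors.T).argmax(axis=1)
    return anchors, labels

def tree_search(C, anchors, labels, h):
    """Descend to the best anchor, then search only that group:
    roughly groups + n/groups comparisons instead of n."""
    g = np.abs(anchors @ h).argmax()
    idx = np.flatnonzero(labels == g)
    if idx.size == 0:          # empty group: fall back to full search
        idx = np.arange(len(C))
    best = idx[np.abs(C[idx] @ h).argmax()]
    return best, idx.size
```

The tree search examines far fewer entries than the exhaustive search, at the cost of a possibly slightly smaller inner product with the channel vector, which mirrors the complexity/performance trade-off described above.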
|
1104.4612
|
New Power Estimation Methods for Highly Overloaded Synchronous CDMA
Systems
|
cs.IT math.IT
|
In CDMA systems, the received user powers vary as the users move. Thus, CDMA
receivers consist of two stages. The first stage is the
power estimator and the second one is a Multi-User Detector (MUD). Conventional
methods for estimating the user powers are suitable for under- or fully-loaded
cases (when the number of users is less than or equal to the spreading gain).
These methods fail to work for overloaded CDMA systems because of high
interference among the users. Since the bandwidth is becoming more and more
valuable, it is worth considering overloaded CDMA systems. In this paper, an
optimum user power estimation for over-loaded CDMA systems with Gaussian inputs
is proposed. We also introduce a suboptimum method with lower complexity whose
performance is very close to the optimum one. We shall show that the proposed
methods work for highly over-loaded systems (up to m(m+1)/2 users for a
system with only m chips). The performance of the proposed methods is
demonstrated by simulations. In addition, a class of signature sets is proposed
that seems to be optimum from a power estimation point of view. Additionally,
an iterative estimation for binary input CDMA systems is proposed which works
more accurately than the optimal Gaussian input method.
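A least-squares sketch of why up to m(m+1)/2 users are identifiable (hedged: this is not the paper's optimal estimator). With Gaussian inputs, the received covariance is R = sum_k p_k s_k s_k^T + sigma^2 I, which is linear in the user powers p, and an m x m symmetric covariance carries m(m+1)/2 independent entries.

```python
import numpy as np

def estimate_powers(S, R, sigma2=0.0):
    """Estimate user powers from the received covariance by least squares
    (illustrative sketch; S is m x n with one signature per column)."""
    m, n = S.shape
    iu = np.triu_indices(m)  # upper triangle: m(m+1)/2 independent equations
    # Each user contributes the rank-one term s_k s_k^T, linear in p_k.
    A = np.stack([np.outer(S[:, k], S[:, k])[iu] for k in range(n)], axis=1)
    b = (R - sigma2 * np.eye(m))[iu]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With generic signatures the linear system has full column rank whenever n <= m(m+1)/2, so an overloaded system (n > m) can still have all powers recovered exactly from the noiseless covariance.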
|
1104.4617
|
Boolean Equi-propagation for Optimized SAT Encoding
|
cs.AI cs.DS cs.LO
|
We present an approach to propagation-based solving, Boolean
equi-propagation, where constraints are modelled as propagators of information
about equalities between Boolean literals. Propagation-based solving applies
this information as a form of partial evaluation resulting in optimized SAT
encodings. We demonstrate for a variety of benchmarks that our approach results
in smaller CNF encodings and leads to speed-ups in solving times.
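One way to picture equality propagation (an illustrative sketch, not the paper's implementation) is a union-find over signed literals: constraints contribute facts of the form x = y or x = -y, and substituting canonical representatives for literals is the partial evaluation that shrinks the CNF.

```python
class LiteralUnionFind:
    """Union-find over Boolean literals, each literal a (var, sign) pair
    with sign +1 or -1. Records equalities x = y and x = -y."""
    def __init__(self):
        self.parent = {}  # var -> (parent_var, sign); v equals sign * parent

    def find(self, v):
        """Return (root, sign) with v = sign * root, path-compressing."""
        if v not in self.parent:
            self.parent[v] = (v, 1)
        p, s = self.parent[v]
        if p == v:
            return p, s
        root, rs = self.find(p)
        self.parent[v] = (root, s * rs)
        return root, s * rs

    def equate(self, lit_a, lit_b):
        """Record lit_a = lit_b, e.g. equate(('x', 1), ('y', -1)) for x = -y.
        Returns False on contradiction (would force x = -x)."""
        (va, sa), (vb, sb) = lit_a, lit_b
        ra, ta = self.find(va)
        rb, tb = self.find(vb)
        if ra == rb:
            return sa * ta == sb * tb
        self.parent[ra] = (rb, sa * ta * sb * tb)
        return True

    def canon(self, lit):
        """Canonical representative of a literal, used when re-encoding."""
        v, s = lit
        r, t = self.find(v)
        return (r, s * t)
```

After propagating x = -y and y = z, the literals x and -z share one canonical form, so clauses mentioning either collapse to the same encoding, which is how the smaller CNF arises.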
|
1104.4646
|
Local Optimality Certificates for LP Decoding of Tanner Codes
|
cs.IT math.CO math.IT
|
We present a new combinatorial characterization for local optimality of a
codeword in an irregular Tanner code. The main novelty in this characterization
is that it is based on a linear combination of subtrees in the computation
trees. These subtrees may have any degree in the local code nodes and may have
any height (even greater than the girth). We expect this new characterization
to lead to improvements in bounds for successful decoding.
We prove that local optimality in this new characterization implies
ML-optimality and LP-optimality, as one would expect. Finally, we show that it
is possible to efficiently compute a certificate for the local optimality of a
codeword given an LLR vector.
|