| id | title | categories | abstract |
|---|---|---|---|
0906.5339
|
Asymmetric Quantum Cyclic Codes
|
cs.IT cs.MS math.IT quant-ph
|
It has recently been conjectured in quantum information processing that
phase-shift errors occur with higher probability than qubit-flip errors, and
hence the former are more disturbing to quantum information than the latter.
This leads us to construct asymmetric quantum error-controlling codes to
protect quantum information over asymmetric channels, $\Pr Z \geq \Pr X$. In
this paper we
present two generic methods to derive asymmetric quantum cyclic codes using the
generator polynomials and defining sets of classical cyclic codes.
Consequently, the methods allow us to construct several families of asymmetric
quantum BCH, RS, and RM codes. Finally, the methods are used to construct
families of asymmetric subsystem codes.
|
0906.5394
|
Wireless Network Information Flow: A Deterministic Approach
|
cs.IT math.IT
|
In a wireless network with a single source and a single destination and an
arbitrary number of relay nodes, what is the maximum rate of information flow
achievable? We make progress on this long-standing problem through a two-step
approach. First we propose a deterministic channel model which captures the key
wireless properties of signal strength, broadcast and superposition. We obtain
an exact characterization of the capacity of a network with nodes connected by
such deterministic channels. This result is a natural generalization of the
celebrated max-flow min-cut theorem for wired networks. Second, we use the
insights obtained from the deterministic analysis to design a new
quantize-map-and-forward scheme for Gaussian networks. In this scheme, each
relay quantizes the received signal at the noise level and maps it to a random
Gaussian codeword for forwarding, and the final destination decodes the
source's message based on the received signal. We show that, in contrast to
existing schemes, this scheme can achieve the cut-set upper bound to within a
gap which is independent of the channel parameters. In the case of the relay
channel with a single relay as well as the two-relay Gaussian diamond network,
the gap is 1 bit/s/Hz. Moreover, the scheme is universal in the sense that the
relays need no knowledge of the values of the channel parameters to
(approximately) achieve the rate supportable by the network. We also present
extensions of the results to multicast networks, half-duplex networks and
ergodic networks.
|
0906.5397
|
Asymptotically Optimal Policies for Hard-deadline Scheduling over Fading
Channels
|
cs.IT math.IT
|
A hard-deadline, opportunistic scheduling problem in which $B$ bits must be
transmitted within $T$ time-slots over a time-varying channel is studied: the
transmitter must decide how many bits to serve in each slot based on knowledge
of the current channel but without knowledge of the channel in future slots,
with the objective of minimizing expected transmission energy. In order to
focus on the effects of delay and fading, we assume that no other packets are
scheduled simultaneously and no outage is considered. We also assume that the
scheduler can transmit at capacity over an underlying Gaussian noise
channel, so that the energy-bit relation is a Shannon-type exponential
function. No closed-form solution for the optimal policy is known for this
problem, which is naturally formulated as a finite-horizon dynamic program, but
three different policies are shown to be optimal in the limiting regimes where
$T$ is fixed and $B$ is large, $T$ is fixed and $B$ is small, and where $B$ and
$T$ are simultaneously taken to infinity. In addition, the advantage of optimal
scheduling is quantified relative to a non-opportunistic (i.e., channel-blind)
equal-bit policy.
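As a rough illustration of the finite-horizon dynamic program mentioned above (not the paper's policies; the two-state fading distribution and the unit-noise Shannon cost E(b, h) = (2^b - 1)/h are illustrative assumptions):

```python
# Minimal sketch (not the paper's policies): finite-horizon dynamic program
# for serving B bits within T slots over an i.i.d. two-state fading channel.
# With unit bandwidth and noise, sending b bits in a slot with gain h costs
# E(b, h) = (2**b - 1) / h, the Shannon-type exponential energy-bit relation.
GAINS = [(0.5, 0.5), (2.0, 0.5)]  # hypothetical (gain, probability) pairs

def energy(b, h):
    return (2 ** b - 1) / h

def solve(B, T):
    """V[t][b] = minimum expected energy to send b remaining bits in t slots."""
    INF = float("inf")
    V = [[INF] * (B + 1) for _ in range(T + 1)]
    V[0][0] = 0.0  # no slots left: feasible only if no bits remain
    for t in range(1, T + 1):
        for b in range(B + 1):
            # The scheduler sees the current gain h before choosing how many
            # bits a to serve, so the expectation is over the per-slot optimum.
            V[t][b] = sum(
                p * min(energy(a, h) + V[t - 1][b - a] for a in range(b + 1))
                for h, p in GAINS
            )
    return V

V = solve(B=4, T=3)
```

Extra slots and knowledge of the current channel state sharply reduce the expected energy relative to a one-shot transmission, which is the opportunistic-scheduling intuition behind the three limiting-regime policies.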
|
0906.5485
|
Query Significance in Databases via Randomizations
|
cs.DB cs.AI
|
Many sorts of structured data are commonly stored in a multi-relational
format of interrelated tables. Under this relational model, exploratory data
analysis can be done by using relational queries. As an example, in the
Internet Movie Database (IMDb) a query can be used to check whether the average
rank of action movies is higher than the average rank of drama movies.
We consider the problem of assessing whether the results returned by such a
query are statistically significant or just a random artifact of the structure
in the data. Our approach is based on randomizing the tables occurring in the
queries and repeating the original query on the randomized tables. It turns out
that there is no unique way of randomizing in multi-relational data. We propose
several randomization techniques, study their properties, and show how to find
out which queries or hypotheses about our data result in statistically
significant information. We give results on real and generated data and show
how the significance of some queries varies between different randomizations.
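The core idea, randomizing a table and repeating the original query, can be sketched as follows (the movie/genre data and the single permutation scheme are illustrative assumptions; the paper proposes several randomization techniques):

```python
import random

# Hypothetical IMDb-style data (not from the paper): does the average rank of
# action movies differ from that of drama movies more than chance suggests?
random.seed(0)
ranks = {m: random.uniform(0.0, 10.0) for m in range(200)}          # movie -> rank
genres = {m: "action" if m < 100 else "drama" for m in range(200)}  # movie -> genre

def query_stat(genre_map):
    """The original query: mean action rank minus mean drama rank."""
    a = [ranks[m] for m, g in genre_map.items() if g == "action"]
    d = [ranks[m] for m, g in genre_map.items() if g == "drama"]
    return sum(a) / len(a) - sum(d) / len(d)

observed = query_stat(genres)

# One possible randomization: permute the genre column, which breaks any
# rank-genre association while preserving both marginal distributions.
n_rounds, extreme = 1000, 0
for _ in range(n_rounds):
    shuffled = list(genres.values())
    random.shuffle(shuffled)
    if abs(query_stat(dict(zip(genres, shuffled)))) >= abs(observed):
        extreme += 1
p_value = (extreme + 1) / (n_rounds + 1)  # empirical significance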
|
0906.5608
|
Loading Arbitrary Knowledge Bases in Matrix Browser
|
cs.IR cs.DB
|
This paper describes work done on Matrix Browser, a recently developed
graphical user interface for exploring and navigating complex networked
information spaces. This approach presents a new way of navigating
information networks in a Windows-Explorer-like widget. The problem at hand
was how to load arbitrary knowledge bases into Matrix Browser. This was
achieved by identifying the relationships present in the knowledge bases,
forming hierarchies from this data, and exporting these hierarchies to
Matrix Browser. This paper presents a solution to this problem and reports
on the implementation work.
|
0907.0001
|
On weight distributions of perfect colorings and completely regular
codes
|
math.CO cs.IT math.IT
|
A vertex coloring of a graph is called "perfect" if for any two colors $a$
and $b$, the number of the color-$b$ neighbors of a color-$a$ vertex $x$ does
not depend on the choice of $x$, that is, depends only on $a$ and $b$ (the
corresponding partition of the vertex set is known as "equitable"). A set of
vertices is called "completely regular" if the coloring according to the
distance from this set is perfect. By the "weight distribution" of some
coloring with respect to some set we mean the information about the number of
vertices of every color at every distance from the set. We study the weight
distribution of a perfect coloring (equitable partition) of a graph with
respect to a completely regular set (in particular, with respect to a vertex if
the graph is distance-regular). We show how to compute this distribution by the
knowledge of the color composition over the set. For some partial cases of
completely regular sets, we derive explicit formulas of weight distributions.
Since any (other) completely regular set itself generates a perfect coloring,
this gives universal formulas for calculating the weight distribution of any
completely regular set from its parameters. In the case of Hamming graphs, we
prove a very simple formula for the weight enumerator of an arbitrary perfect
coloring. Keywords: completely regular code; equitable partition; partition
design; perfect coloring; perfect structure; regular partition; weight
distribution; weight enumerator.
|
0907.0002
|
On the binary codes with parameters of doubly-shortened 1-perfect codes
|
math.CO cs.IT math.IT
|
We show that any binary $(n=2^m-3, 2^{n-m}, 3)$ code $C_1$ is a part of an
equitable partition (perfect coloring) $\{C_1,C_2,C_3,C_4\}$ of the $n$-cube
with the parameters $((0,1,n-1,0)(1,0,n-1,0)(1,1,n-4,2)(0,0,n-1,1))$. The
possibility of lengthening the code $C_1$ to a 1-perfect code of length $n+2$
is then equivalent to the possibility of splitting the part $C_4$ into two
distance-3 codes or, equivalently, to the bipartiteness of the graph of
distances 1 and 2 of
$C_4$. In any case, $C_1$ is uniquely embeddable in a twofold 1-perfect code of
length $n+2$ with some structural restrictions, where by a twofold 1-perfect
code we mean that any vertex of the space is within radius 1 from exactly two
codewords.
|
0907.0049
|
No More Perfect Codes: Classification of Perfect Quantum Codes
|
quant-ph cs.IT math.IT
|
We solve the problem of the classification of perfect quantum codes. We prove
that the only nontrivial perfect quantum codes are those with the parameters .
There exist no other nontrivial perfect quantum codes.
|
0907.0067
|
A Novel Two-Staged Decision Support based Threat Evaluation and Weapon
Assignment Algorithm, Asset-based Dynamic Weapon Scheduling using Artificial
Intelligence Techniques
|
cs.AI
|
Surveillance control and reporting (SCR) systems for air threats play an
important role in the defense of a country. An SCR system handles air and
ground situation management/processing along with information fusion,
communication, coordination, simulation and other critical defense-oriented
tasks. Threat Evaluation and Weapon Assignment (TEWA) sits at the core of an
SCR system. In such a system, maximal or near-maximal utilization of
constrained resources is of extreme importance. Manual TEWA systems cannot
provide optimality because of various limitations: e.g. a surface-to-air
missile (SAM) can fire from a distance of 5 km, but manual TEWA systems are
constrained by human visual range, among other factors. Current TEWA systems
usually work on a target-by-target basis using some type of greedy algorithm,
thus affecting the optimality of the solution and failing in multi-target
scenarios. This paper presents a novel two-stage, flexible, dynamic
decision-support-based optimal threat evaluation and weapon assignment
algorithm for multi-target airborne threats.
|
0907.0075
|
XDANNG: XML based Distributed Artificial Neural Network with Globus
Toolkit
|
cs.NE
|
Artificial Neural Networks (ANN) are one of the most common AI application
fields, with direct and indirect uses in most sciences. The main goal of an
ANN is to imitate biological neural networks in order to solve scientific
problems, but the limited level of parallelism is the main shortcoming of
ANN systems in comparison with biological systems. To address this problem,
we offer an XML-based framework for implementing ANN on the Globus Toolkit
platform. Globus Toolkit is well-known management software for multipurpose
Grids. Using the Grid to simulate the neural network leads to a high degree
of parallelism in the implementation of the ANN. We use XML to improve the
flexibility and scalability of our framework.
|
0907.0204
|
Multi-Label MRF Optimization via Least Squares s-t Cuts
|
cs.CV
|
There are many applications of graph cuts in computer vision, e.g.
segmentation. We present a novel method to reformulate the NP-hard, k-way graph
partitioning problem as an approximate minimal s-t graph cut problem, for which
a globally optimal solution is found in polynomial time. Each non-terminal
vertex in the original graph is replaced by a set of ceil(log_2(k)) new
vertices. The original graph edges are replaced by new edges connecting the new
vertices to each other and to only two terminal nodes, the source s and sink t.
The weights of the new edges are obtained using a novel least squares solution
approximating the constraints of the initial k-way setup. The minimal s-t cut
labels each new vertex with a binary (s vs t) "Gray" encoding, which is then
decoded into a decimal label number that assigns each of the original vertices
to one of k classes. We analyze the properties of the approximation and present
quantitative as well as qualitative segmentation results.
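The Gray encoding/decoding step described above can be sketched as follows (the cut assignment shown is a hypothetical example; the least-squares edge-weight computation is omitted):

```python
import math

def gray_encode(n):
    """Standard binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by prefix-XOR."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

k = 5                              # number of labels in the k-way problem
n_bits = math.ceil(math.log2(k))   # new vertices per original vertex
# Each original vertex becomes n_bits new vertices; the minimal s-t cut
# assigns each one a side (s = 0, t = 1), and the resulting bit string is
# read as a Gray codeword and decoded back into a decimal label number.
bits = [0, 1, 1]                   # hypothetical cut assignment for one vertex
code = int("".join(map(str, bits)), 2)
label = gray_decode(code)
```

Adjacent Gray codewords differ in a single bit, which is the motivation for using a Gray rather than a plain binary encoding when approximating the original k-way costs.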
|
0907.0229
|
A new model of artificial neuron: cyberneuron and its use
|
cs.NE cs.LG
|
This article describes a new type of artificial neuron, called by the
authors a "cyberneuron". Unlike classical models of artificial neurons, this
type of neuron uses table substitution instead of multiplying input values
by weights. This significantly increases the information capacity of a
single neuron and also greatly simplifies the learning process. An example
of using the "cyberneuron" for the task of detecting computer viruses is
considered.
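A minimal sketch of the table-substitution idea as we read it from this abstract (the table layout and the one-pass training rule are our own illustrative assumptions, not the authors' design):

```python
# Sketch of the table-substitution idea: each input position gets a lookup
# table indexed by the input's byte value, and the neuron output is the sum
# of the looked-up entries -- no multiplications by weights. The training
# rule below (nudging addressed cells toward the target) is illustrative.
N_INPUTS, TABLE_SIZE = 4, 256

tables = [[0] * TABLE_SIZE for _ in range(N_INPUTS)]

def output(x):
    return sum(tables[i][v] for i, v in enumerate(x))

def train(samples):
    """One pass: spread the residual error over the addressed table cells."""
    for x, target in samples:
        err = target - output(x)
        for i, v in enumerate(x):
            tables[i][v] += err // N_INPUTS

# Toy 'virus signature' detection: score byte patterns as 100 (match) or 0.
data = [((0x4D, 0x5A, 0x90, 0x00), 100), ((0x01, 0x02, 0x03, 0x04), 0)]
for _ in range(20):
    train(data)
```

Unlike a classical neuron, the information capacity here grows with table size rather than with the number of weights, and learning reduces to adjusting a few addressed cells.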
|
0907.0255
|
A Cut-off Phenomenon in Location Based Random Access Games with
Imperfect Information
|
cs.IT cs.GT math.IT math.PR
|
This paper analyzes the behavior of selfish transmitters under imperfect
location information. The scenario considered is that of a wireless network
consisting of selfish nodes that are randomly distributed over the network
domain according to a known probability distribution, and that are interested
in communicating with a common sink node using common radio resources. In this
scenario, the wireless nodes do not know the exact locations of their
competitors but rather have belief distributions about these locations.
Firstly, properties of the packet success probability curve as a function of
the node-sink separation are obtained for such networks. Secondly, a
monotonicity property for the best-response strategies of selfish nodes is
identified. That is, for any given strategies of competitors of a node, there
exists a critical node-sink separation for this node such that its
best-response is to transmit when its distance to the sink node is smaller than
this critical threshold, and to back off otherwise. Finally, necessary and
sufficient conditions for a given strategy profile to be a Nash equilibrium are
provided.
|
0907.0288
|
An Iterative Fingerprint Enhancement Algorithm Based on Accurate
Determination of Orientation Flow
|
cs.CV
|
We describe an algorithm to enhance and binarize a fingerprint image. The
algorithm is based on accurate determination of orientation flow of the ridges
of the fingerprint image by computing variance of the neighborhood pixels
around a pixel in different directions. We show that an iterative algorithm
which captures the mutual interdependence of orientation flow computation,
enhancement and binarization gives very good results on poor quality images.
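The directional-variance computation described above can be sketched as follows (window size, number of quantized directions, and the synthetic image are illustrative assumptions):

```python
import math

# Sketch of the orientation-flow idea from the abstract: at each pixel,
# compute the variance of neighborhood pixels along several quantized
# directions and take the minimum-variance direction as the local ridge
# orientation (intensity is nearly constant along a ridge).

def line_variance(img, x, y, angle, half_len=4):
    vals = []
    for t in range(-half_len, half_len + 1):
        i = y + round(t * math.sin(angle))
        j = x + round(t * math.cos(angle))
        if 0 <= i < len(img) and 0 <= j < len(img[0]):
            vals.append(img[i][j])
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def orientation(img, x, y, n_dirs=8):
    angles = [k * math.pi / n_dirs for k in range(n_dirs)]
    return min(angles, key=lambda a: line_variance(img, x, y, a))

# Synthetic image with horizontal ridges: rows alternate dark/light, so
# intensity is constant along angle 0 and varies along pi/2.
img = [[(i % 2) * 255 for j in range(16)] for i in range(16)]
ridge_dir = orientation(img, 8, 8)
```

On real images this per-pixel estimate would be smoothed and fed into the iterative enhancement/binarization loop the abstract describes.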
|
0907.0328
|
Degenerate neutrality creates evolvable fitness landscapes
|
cs.NE cs.AI cs.MA
|
Understanding how systems can be designed to be evolvable is fundamental to
research in optimization, evolution, and complex systems science. Many
researchers have thus recognized the importance of evolvability, i.e. the
ability to find new variants of higher fitness, in the fields of biological
evolution and evolutionary computation. Recent studies by Ciliberti et al
(Proc. Nat. Acad. Sci., 2007) and Wagner (Proc. R. Soc. B., 2008) propose a
potentially important link between the robustness and the evolvability of a
system. In particular, it has been suggested that robustness may actually lead
to the emergence of evolvability. Here we study two design principles,
redundancy and degeneracy, for achieving robustness and we show that they have
a dramatically different impact on the evolvability of the system. In
particular, purely redundant systems are found to have very little evolvability
while systems with degeneracy, i.e. distributed robustness, can be orders of
magnitude more evolvable. These results offer insights into the general
principles for achieving evolvability and may prove to be an important step
forward in the pursuit of evolvable representations in evolutionary
computation.
|
0907.0329
|
Evidence of coevolution in multi-objective evolutionary algorithms
|
cs.NE cs.AI
|
This paper demonstrates that simple yet important characteristics of
coevolution can occur in evolutionary algorithms when only a few conditions are
met. We find that interaction-based fitness measurements such as fitness
(linear) ranking allow for a form of coevolutionary dynamics that is observed
when 1) changes are made in what solutions are able to interact during the
ranking process and 2) evolution takes place in a multi-objective environment.
This research contributes to the study of simulated evolution in at least two
ways. First, it establishes a broader relationship between coevolution and
multi-objective optimization than has been previously considered in the
literature. Second, it demonstrates that the preconditions for coevolutionary
behavior are weaker than previously thought. In particular, our model indicates
that direct cooperation or competition between species is not required for
coevolution to take place. Moreover, our experiments provide evidence that
environmental perturbations can drive coevolutionary processes; a conclusion
that mirrors arguments put forth in dual phase evolution theory. In the
discussion, we briefly consider how our results may shed light onto this and
other recent theories of evolution.
|
0907.0332
|
Survival of the flexible: explaining the recent dominance of
nature-inspired optimization within a rapidly evolving world
|
cs.NE cs.AI
|
Although researchers often comment on the rising popularity of
nature-inspired meta-heuristics (NIM), there has been a paucity of data to
directly support the claim that NIM are growing in prominence compared to other
optimization techniques. This study presents evidence that the use of NIM is
not only growing, but indeed appears to have surpassed mathematical
optimization techniques (MOT) in several important metrics related to academic
research activity (publication frequency) and commercial activity (patenting
frequency). Motivated by these findings, this article discusses some of the
possible origins of this growing popularity. I review different explanations
for NIM popularity and discuss why some of these arguments remain unsatisfying.
I argue that a compelling and comprehensive explanation should directly account
for the manner in which most NIM success has actually been achieved, e.g.
through hybridization and customization to different problem environments. By
taking a problem lifecycle perspective, this paper offers a fresh look at the
hypothesis that nature-inspired meta-heuristics derive much of their utility
from being flexible. I discuss global trends within the business environments
where optimization algorithms are applied and I speculate that highly flexible
algorithm frameworks could become increasingly popular within our diverse and
rapidly changing world.
|
0907.0334
|
The Self-Organization of Interaction Networks for Nature-Inspired
Optimization
|
cs.NE cs.AI
|
Over the last decade, significant progress has been made in understanding
complex biological systems; however, there have been few attempts to
incorporate this knowledge into nature-inspired optimization algorithms. In
this paper, we present a first attempt at incorporating some of the basic
structural properties of complex biological systems which are believed to be
necessary preconditions for system qualities such as robustness. In particular,
we focus on two important conditions missing in Evolutionary Algorithm
populations: a self-organized definition of locality and interaction epistasis.
We demonstrate that these two features, when combined, provide algorithm
behaviors not observed in the canonical Evolutionary Algorithm or in
Evolutionary Algorithms with structured populations such as the Cellular
Genetic Algorithm. The most noticeable change in algorithm behavior is an
unprecedented capacity for sustainable coexistence of genetically distinct
individuals within a single population. This capacity for sustained genetic
diversity is not imposed on the population but instead emerges as a natural
consequence of the dynamics of the system.
|
0907.0340
|
Strategic Positioning in Tactical Scenario Planning
|
cs.NE cs.AI
|
Capability planning problems are pervasive throughout many areas of human
interest with prominent examples found in defense and security. Planning
provides a unique context for optimization that has not been explored in great
detail and involves a number of interesting challenges which are distinct from
traditional optimization research. Planning problems demand solutions that can
satisfy a number of competing objectives on multiple scales related to
robustness, adaptiveness, risk, etc. The scenario method is a key approach for
planning. Scenarios can be defined for long-term as well as short-term plans.
This paper introduces computational scenario-based planning problems and
proposes ways to accommodate strategic positioning within the tactical planning
domain. We demonstrate the methodology in a resource planning problem that is
solved with a multi-objective evolutionary algorithm. Our discussion and
results highlight the fact that scenario-based planning is naturally framed
within a multi-objective setting. However, the conflicting objectives occur on
different system levels rather than within a single system alone. This paper
also contends that planning problems are of vital interest in many human
endeavors and that Evolutionary Computation may be well positioned for this
problem domain.
|
0907.0418
|
Bounding the Probability of Error for High Precision Recognition
|
cs.CV
|
We consider models for which it is important, early in processing, to
estimate some variables with high precision, but perhaps at relatively low
rates of recall. If some variables can be identified with near certainty, then
they can be conditioned upon, allowing further inference to be done
efficiently. Specifically, we consider optical character recognition (OCR)
systems that can be bootstrapped by identifying a subset of correctly
translated document words with very high precision. This "clean set" is
subsequently used as document-specific training data. While many current OCR
systems produce measures of confidence for the identity of each letter or word,
thresholding these confidence values, even at very high values, still produces
some errors.
We introduce a novel technique for identifying a set of correct words with
very high precision. Rather than estimating posterior probabilities, we bound
the probability that any given word is incorrect under very general
assumptions, using an approximate worst case analysis. As a result, the
parameters of the model are nearly irrelevant, and we are able to identify a
subset of words, even in noisy documents, of which we are highly confident. On
our set of 10 documents, we are able to identify about 6% of the words on
average without making a single error. This ability to produce word lists with
very high precision allows us to use a family of models which depends upon such
clean word lists.
|
0907.0453
|
Random DFAs are Efficiently PAC Learnable
|
cs.LG
|
This paper has been withdrawn due to an error found by Dana Angluin and Lev
Reyzin.
|
0907.0472
|
Capacity Regions and Sum-Rate Capacities of Vector Gaussian Interference
Channels
|
cs.IT math.IT
|
The capacity regions of vector, or multiple-input multiple-output, Gaussian
interference channels are established for very strong interference and aligned
strong interference. Furthermore, the sum-rate capacities are established for Z
interference, noisy interference, and mixed (aligned weak/intermediate and
aligned strong) interference. These results generalize known results for scalar
Gaussian interference channels.
|
0907.0499
|
Agent-Oriented Approach for Detecting and Managing Risks in Emergency
Situations
|
cs.AI cs.MA
|
This paper presents an agent-oriented approach to build a decision support
system aimed at helping emergency managers to detect and to manage risks. We
stress the flexibility and the adaptivity characteristics that are crucial to
build a robust and efficient system, able to resolve complex problems. The
system should be independent as much as possible from the subject of study.
To this end, an original approach based on a mechanism of perception,
representation, characterisation and assessment is proposed. The work described
here is applied to the RoboCupRescue application. Experiments and results
are provided.
|
0907.0505
|
Multi-User MISO Interference Channels with Single-User Detection:
Optimality of Beamforming and the Achievable Rate Region
|
cs.IT math.IT
|
For a multi-user interference channel with multi-antenna transmitters and
single-antenna receivers, by restricting each transmitter to Gaussian input and
each receiver to a single-user detector, computing the largest achievable rate
region amounts to solving a family of non-convex optimization problems.
Recognizing the intrinsic connection between the signal power at the intended
receiver and the interference power at the unintended receiver, the original
family of non-convex optimization problems is converted into a new family of
convex optimization problems. It is shown that, for such interference channels
with each receiver implementing single-user detection, transmitter beamforming
can achieve all boundary points of the achievable rate region.
|
0907.0507
|
Spontaneous organization leads to robustness in evolutionary algorithms
|
cs.NE cs.AI
|
The interaction networks of biological systems are known to take on several
non-random structural properties, some of which are believed to positively
influence system robustness. Researchers are only starting to understand how
these structural properties emerge; however, suggested roles for component
fitness and community development (modularity) have attracted interest from the
scientific community. In this study, we apply some of these concepts to an
evolutionary algorithm and spontaneously organize its population using
information that the population receives as it moves over a fitness landscape.
More precisely, we employ fitness and clustering based driving forces for
guiding network structural dynamics, which in turn are controlled by the
population dynamics of an evolutionary algorithm. To evaluate the effect this
has on evolution, experiments are conducted on six engineering design problems
and six artificial test functions and compared against cellular genetic
algorithms and 16 other evolutionary algorithm designs. Our results indicate
that a self-organizing topology evolutionary algorithm exhibits surprisingly
robust search behavior with promising performance observed over short and long
time scales. After a careful analysis of these results, we conclude that the
coevolution between a population and its topology represents a powerful new
paradigm for designing robust search heuristics.
|
0907.0516
|
Adaptation and Self-Organization in Evolutionary Algorithms
|
cs.NE
|
Abbreviated Abstract: The objective of Evolutionary Computation is to solve
practical problems (e.g. optimization, data mining) by simulating the
mechanisms of natural evolution. This thesis addresses several topics related
to adaptation and self-organization in evolving systems with the overall aims
of improving the performance of Evolutionary Algorithms (EAs), understanding
their relation to natural evolution, and incorporating new mechanisms for
mimicking
complex biological systems.
|
0907.0520
|
Computational Scenario-based Capability Planning
|
cs.NE cs.AI
|
Scenarios are pen-pictures of plausible futures, used for strategic planning.
The aim of this investigation is to expand the horizon of scenario-based
planning through computational models that are able to aid the analyst in the
planning process. The investigation builds upon the advances of Information and
Communication Technology (ICT) to create a novel, flexible and customizable
computational capability-based planning methodology that is practical and
theoretically sound. We will show how evolutionary computation, in particular
evolutionary multi-objective optimization, can play a central role - both as an
optimizer and as a source for innovation.
|
0907.0589
|
Generalized Collective Inference with Symmetric Clique Potentials
|
cs.AI
|
Collective graphical models exploit inter-instance associative dependence to
output more accurate labelings. However, existing models support only a very
limited kind of associativity, which restricts accuracy gains. This paper makes
two major contributions. First, we propose a general collective inference
framework that biases data instances to agree on a set of {\em properties} of
their labelings. Agreement is encouraged through symmetric clique potentials.
We show that richer properties lead to bigger gains, and present a systematic
inference
procedure for a large class of such properties. The procedure performs message
passing on the cluster graph, where property-aware messages are computed with
cluster specific algorithms. This provides an inference-only solution for
domain adaptation. Our experiments on bibliographic information extraction
illustrate significant test error reduction over unseen domains. Our second
major contribution consists of algorithms for computing outgoing messages from
clique clusters with symmetric clique potentials. Our algorithms are exact for
arbitrary symmetric potentials on binary labels and for max-like and
majority-like potentials on multiple labels. For majority potentials, we also
provide an efficient Lagrangian Relaxation based algorithm that compares
favorably with the exact algorithm. We present a 13/15-approximation algorithm
for the NP-hard Potts potential, with runtime sub-quadratic in the clique size.
In contrast, the best known previous guarantee for graphs with Potts potentials
is only 1/2. We empirically show that our method for Potts potentials is an
order of magnitude faster than the best alternatives, and our Lagrangian
Relaxation based algorithm for majority potentials beats the best applicable
heuristic -- ICM.
|
0907.0592
|
Credit Assignment in Adaptive Evolutionary Algorithms
|
cs.NE cs.AI
|
In this paper, a new method for assigning credit to search operators is
presented. Starting with the principle of optimizing search bias, search
operators are selected based on an ability to create solutions that are
historically linked to future generations. Using a novel framework for defining
performance measurements, distributing credit for performance, and the
statistical interpretation of this credit, a new adaptive method is developed
and shown to outperform a variety of adaptive and non-adaptive competitors.
|
0907.0595
|
Use of statistical outlier detection method in adaptive evolutionary
algorithms
|
cs.NE cs.AI
|
In this paper, the issue of adapting probabilities for Evolutionary Algorithm
(EA) search operators is revisited. A framework is devised for distinguishing
between measurements of performance and the interpretation of those
measurements for purposes of adaptation. Several examples of measurements and
statistical interpretations are provided. Probability value adaptation is
tested using an EA with 10 search operators against 10 test problems with
results indicating that both the type of measurement and its statistical
interpretation play significant roles in EA performance. We also find that
selecting operators based on the prevalence of outliers rather than on average
performance is able to provide considerable improvements to adaptive methods
and soundly outperforms the non-adaptive case.
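The contrast between average-based and outlier-based operator credit can be sketched as follows (the improvement samples, the smoothing, and the 2-sigma threshold are illustrative assumptions):

```python
# Sketch of the idea in the abstract: credit each search operator by how
# often it produces outlier improvements (more than two standard deviations
# above the pooled mean) rather than by its average improvement.
history = {  # operator -> fitness improvements observed for its offspring
    "op_steady": [1.0 + 0.1 * ((i % 3) - 1) for i in range(100)],
    "op_risky": [10.5 if i % 20 == 0 else 0.5 for i in range(100)],
}

pooled = [v for vals in history.values() for v in vals]
mean = sum(pooled) / len(pooled)
std = (sum((v - mean) ** 2 for v in pooled) / len(pooled)) ** 0.5

def outlier_count(vals):
    """Number of improvements exceeding the pooled mean by 2 sigma."""
    return sum(v > mean + 2 * std for v in vals)

# Selection probabilities proportional to (smoothed) outlier counts.
counts = {op: outlier_count(v) + 1 for op, v in history.items()}
total = sum(counts.values())
probs = {op: c / total for op, c in counts.items()}
```

Although both operators have roughly the same average improvement, op_risky receives most of the selection probability because only it produces outlier improvements, which is the behavior the outlier-based interpretation rewards.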
|
0907.0597
|
Network Topology and Time Criticality Effects in the Modularised Fleet
Mix Problem
|
cs.NE cs.AI
|
In this paper, we explore the interplay between network topology and time
criticality in a military logistics system. A general goal of this work (and
previous work) is to evaluate land transportation requirements or, more
specifically, how to design appropriate fleets of military general service
vehicles that are tasked with the supply and re-supply of military units
dispersed in an area of operation. The particular focus of this paper is to
gain a better understanding of how the logistics environment changes when
current Army vehicles with fixed transport characteristics are replaced by a
new generation of modularised vehicles that can be configured
task-specifically. The experimental work is conducted within a well-developed
strategic planning simulation environment which includes a scenario generation
engine for automatically sampling supply and re-supply missions and a
multi-objective meta-heuristic search algorithm (i.e. Evolutionary Algorithm)
for solving the particular scheduling and routing problems. The results
presented in this paper allow for a better understanding of how (and under what
conditions) a modularised vehicle fleet can provide advantages over the
currently implemented system.
|
0907.0598
|
Robustness and Adaptiveness Analysis of Future Fleets
|
cs.NE cs.AI
|
Making decisions about the structure of a future military fleet is a
challenging task. Several issues need to be considered such as the existence of
multiple competing objectives and the complexity of the operating environment.
A particular challenge is posed by the various types of uncertainty that the
future might hold. It is uncertain what future events might be encountered; how
fleet design decisions will influence and shape the future; and how present and
future decision makers will act based on available information, their personal
biases regarding the importance of different objectives, and their economic
preferences. In order to assist strategic decision-making, an analysis of
future fleet options needs to account for conditions in which these different
classes of uncertainty are exposed. It is important to understand what
assumptions a particular fleet is robust to, what the fleet can readily adapt
to, and what conditions present clear risks to the fleet. We call this the
analysis of a fleet's strategic positioning. This paper introduces how
strategic positioning can be evaluated using computer simulations. Our main aim
is to introduce a framework for capturing information that can be useful to a
decision maker and for defining the concepts of robustness and adaptiveness in
the context of future fleet design. We demonstrate our conceptual framework
using simulation studies of an air transportation fleet. We capture uncertainty
by employing an explorative scenario-based approach. Each scenario represents a
sampling of different future conditions, different model assumptions, and
different economic preferences. Proposed changes to a fleet are then analysed
based on their influence on the fleet's robustness, adaptiveness, and risk to
different scenarios.
|
0907.0611
|
A process planning system with feature based neural network search
strategy for aluminum extrusion die manufacturing
|
cs.NE
|
Aluminum extrusion die manufacturing is a critical task for improving
productivity and increasing competitiveness in the aluminum extrusion
industry. Efficiency requires not only consistent quality but also reductions
in time and production cost. Die manufacturing begins with die design and
process planning in order to produce a die for extruding the products a
customer requires. The efficiency of die design and process planning depends
on the knowledge and experience of die design and die manufacturing experts.
This knowledge has been formulated into a computer system called a
knowledge-based system, and it can be reused to support new die designs and
process plans. Such knowledge can be extracted directly from the die geometry,
which is composed of die features. These features are stored in a die feature
library in preparation for producing a new die. Because die geometry is
defined by the characteristics of the profile, die features from previous
similar profile design cases can be reused. This paper presents the CaseXpert
Process Planning System for die manufacturing, based on a feature-based neural
network technique. Die manufacturing cases in the case library are retrieved
using a neural network search and learning method, and are reused or revised
to build a die design and process plan when a new case is similar to previous
die manufacturing cases. The outputs of the system are die designs and
machining processes. The system has been successfully tested, demonstrating
that it can reduce planning time and produce highly consistent plans.
|
0907.0725
|
High-Rate Full-Diversity Space-Time Block Codes for Three and Four
Transmit Antennas
|
cs.IT math.IT
|
In this paper, we deal with the design of high-rate, full-diversity, low
maximum likelihood (ML) decoding complexity space-time block codes (STBCs) with
code rates of 2 and 1.5 complex symbols per channel use for multiple-input
multiple-output (MIMO) systems employing three and four transmit antennas. We
fill the empty slots of the existing STBCs from CIODs in their transmission
matrices by additional symbols and use the conditional ML decoding technique
which significantly reduces the ML decoding complexity of non-orthogonal STBCs
while ensuring full-diversity and high coding gain. First, two new schemes with
code rates of 2 and 1.5 are proposed for MIMO systems with four transmit
antennas. We show that our low-complexity rate-2 STBC outperforms the
corresponding best STBC recently proposed by Biglieri et al. for QPSK, due to
its superior coding gain while our rate-1.5 STBC outperforms the full-diversity
quasi-orthogonal STBC (QOSTBC). Then, two STBCs with code rates of 2 and 1.5
are proposed for three transmit antennas which are shown to outperform the
corresponding full-diversity QOSTBC for three transmit antennas. We prove by an
information-theoretic analysis that the capacities of new rate-2 STBCs for
three and four transmit antennas are much closer to the actual MIMO channel
capacity than the capacities of classical OSTBCs and CIODs.
|
0907.0746
|
Open Problems in Universal Induction & Intelligence
|
cs.AI cs.IT cs.LG math.IT
|
Specialized intelligent systems can be found everywhere: fingerprint,
handwriting, speech, and face recognition, spam filtering, chess and other game
programs, robots, etc. This decade, the first presumably complete mathematical
theory of artificial intelligence, based on universal
induction-prediction-decision-action, has been proposed. This
information-theoretic approach solidifies the foundations of inductive
inference and artificial intelligence. Getting the foundations right usually
marks a significant progress and maturing of a field. The theory provides a
gold standard and guidance for researchers working on intelligent algorithms.
The roots of universal induction were laid exactly half a century ago, and the
roots of universal intelligence exactly one decade ago, so it is timely to
take stock of what has been achieved and what remains to be done. Since there
are already good recent surveys, I describe the state-of-the-art only in
passing and refer the reader to the literature. This article concentrates on
the open problems in universal induction and its extension to universal
intelligence.
|
0907.0748
|
Gossip consensus algorithms via quantized communication
|
math.OC cs.SY
|
This paper considers the average consensus problem on a network of digital
links, and proposes a set of algorithms based on pairwise "gossip"
communications and updates. We study the convergence properties of such
algorithms with the goal of answering two design questions arising from the
literature: whether the agents should encode their communication with a
deterministic or a randomized quantizer, and whether (and how) they should use
exact information regarding their own states in the update.
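One classical pairwise quantized-gossip scheme (the integer floor/ceil split in the style of Kashyap et al.) can be sketched in a few lines. This is an illustrative stand-in, not necessarily the algorithm class analyzed in the paper; the function name and network (a complete graph) are made up.

```python
import numpy as np

def integer_gossip(x, steps, seed=0):
    # Pairwise integer gossip: a randomly chosen pair replaces its two
    # values by the floor/ceil split of their sum, so the total is
    # preserved exactly at every step.
    x = np.array(x, dtype=int)
    rng = np.random.default_rng(seed)
    n = len(x)
    for _ in range(steps):
        i, j = rng.choice(n, 2, replace=False)
        s = x[i] + x[j]
        x[i], x[j] = s // 2, s - s // 2
    return x

x0 = [10, 0, 7, 3, 25, 5]
x = integer_gossip(x0, steps=20000)
print(x.sum() == sum(x0), x.max() - x.min() <= 1)
```

The sum is conserved by construction, and the node values converge to within one quantization level of the average, which is the typical consensus guarantee for quantized gossip.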
|
0907.0783
|
Bayesian Multitask Learning with Latent Hierarchies
|
cs.LG
|
We learn multiple hypotheses for related tasks under a latent hierarchical
relationship between tasks. We exploit the intuition that for domain
adaptation, we wish to share classifier structure, but for multitask learning,
we wish to share covariance structure. Our hierarchical model is seen to
subsume several previously proposed multitask learning models and performs well
on three distinct real-world data sets.
|
0907.0784
|
Cross-Task Knowledge-Constrained Self Training
|
cs.LG cs.CL
|
We present an algorithmic framework for learning multiple related tasks. Our
framework exploits a form of prior knowledge that relates the output spaces of
these tasks. We present PAC learning results that analyze the conditions under
which such learning is possible. We present results on learning a shallow
parser and named-entity recognition system that exploits our framework, showing
consistent improvements over baseline methods.
|
0907.0785
|
A Bayesian Model for Discovering Typological Implications
|
cs.CL
|
A standard form of analysis for linguistic typology is the universal
implication. These implications state facts about the range of extant
languages, such as ``if objects come after verbs, then adjectives come after
nouns.'' Such implications are typically discovered by painstaking hand
analysis over a small sample of languages. We propose a computational model for
assisting at this process. Our model is able to discover both well-known
implications as well as some novel implications that deserve further study.
Moreover, through a careful application of hierarchical analysis, we are able
to cope with the well-known sampling problem: languages are not independent.
|
0907.0786
|
Search-based Structured Prediction
|
cs.LG cs.CL
|
We present Searn, an algorithm for integrating search and learning to solve
complex structured prediction problems such as those that occur in natural
language, speech, computational biology, and vision. Searn is a meta-algorithm
that transforms these complex problems into simple classification problems to
which any binary classifier may be applied. Unlike current algorithms for
structured learning that require decomposition of both the loss function and
the feature functions over the predicted structure, Searn is able to learn
prediction functions for any loss function and any class of features. Moreover,
Searn comes with a strong, natural theoretical guarantee: good performance on
the derived classification problems implies good performance on the structured
prediction problem.
|
0907.0804
|
Induction of Word and Phrase Alignments for Automatic Document
Summarization
|
cs.CL
|
Current research in automatic single document summarization is dominated by
two effective, yet naive approaches: summarization by sentence extraction, and
headline generation via bag-of-words models. While successful in some tasks,
neither of these models is able to adequately capture the large set of
linguistic devices utilized by humans when they produce summaries. One possible
explanation for the widespread use of these models is that good techniques have
been developed to extract appropriate training data for them from existing
document/abstract and document/headline corpora. We believe that future
progress in automatic summarization will be driven both by the development of
more sophisticated, linguistically informed models, as well as a more effective
leveraging of document/abstract corpora. In order to open the doors to
simultaneously achieving both of these goals, we have developed techniques for
automatically producing word-to-word and phrase-to-phrase alignments between
documents and their human-written abstracts. These alignments make explicit the
correspondences that exist in such document/abstract pairs, and create a
potentially rich data source from which complex summarization algorithms may
learn. This paper describes experiments we have carried out to analyze the
ability of humans to perform such alignments, and based on these analyses, we
describe experiments for creating them automatically. Our model for the
alignment task is based on an extension of the standard hidden Markov model,
and learns to create alignments in a completely unsupervised fashion. We
describe our model in detail and present experimental results that show that
our model is able to learn to reliably identify word- and phrase-level
alignments in a corpus of <document,abstract> pairs.
|
0907.0806
|
A Noisy-Channel Model for Document Compression
|
cs.CL
|
We present a document compression system that uses a hierarchical
noisy-channel model of text production. Our compression system first
automatically derives the syntactic structure of each sentence and the overall
discourse structure of the text given as input. The system then uses a
statistical hierarchical model of text production in order to drop
non-important syntactic and discourse constituents so as to generate coherent,
grammatical document compressions of arbitrary length. The system outperforms
both a baseline and a sentence-based compression system that operates by
sequentially simplifying all sentences in a text. Our results support the claim
that discourse knowledge plays an important role in document summarization.
|
0907.0807
|
A Large-Scale Exploration of Effective Global Features for a Joint
Entity Detection and Tracking Model
|
cs.CL
|
Entity detection and tracking (EDT) is the task of identifying textual
mentions of real-world entities in documents, extending the named entity
detection and coreference resolution task by considering mentions other than
names (pronouns, definite descriptions, etc.). Like NE tagging and coreference
resolution, most solutions to the EDT task separate out the mention detection
aspect from the coreference aspect. By doing so, these solutions are limited to
using only local features for learning. In contrast, by modeling both aspects
of the EDT task simultaneously, we are able to learn using highly complex,
non-local features. We develop a new joint EDT model and explore the utility of
many features, demonstrating their effectiveness on this task.
|
0907.0808
|
A Bayesian Model for Supervised Clustering with the Dirichlet Process
Prior
|
cs.LG
|
We develop a Bayesian framework for tackling the supervised clustering
problem, the generic problem encountered in tasks such as reference matching,
coreference resolution, identity uncertainty and record linkage. Our clustering
model is based on the Dirichlet process prior, which enables us to define
distributions over the countably infinite sets that naturally arise in this
problem. We add supervision to our model by positing the existence of a set of
unobserved random variables (we call these "reference types") that are generic
across all clusters. Inference in our framework, which requires integrating
over infinitely many parameters, is solved using Markov chain Monte Carlo
techniques. We present algorithms for both conjugate and non-conjugate priors.
We present a simple--but general--parameterization of our model based on a
Gaussian assumption. We evaluate this model on one artificial task and three
real-world tasks, comparing it against both unsupervised and state-of-the-art
supervised algorithms. Our results show that our model is able to outperform
other models across a variety of tasks and performance metrics.
|
0907.0809
|
Learning as Search Optimization: Approximate Large Margin Methods for
Structured Prediction
|
cs.LG cs.CL
|
Mappings to structured output spaces (strings, trees, partitions, etc.) are
typically learned using extensions of classification algorithms to simple
graphical structures (eg., linear chains) in which search and parameter
estimation can be performed exactly. Unfortunately, in many complex problems,
it is rare that exact search or parameter estimation is tractable. Instead of
learning exact models and searching via heuristic means, we embrace this
difficulty and treat the structured output problem in terms of approximate
search. We present a framework for learning as search optimization, and two
parameter updates with convergence theorems and bounds. Empirical evidence
shows that our integrated approach to learning and decoding can outperform
exact models at smaller computational cost.
|
0907.0914
|
A typical reconstruction limit of compressed sensing based on Lp-norm
minimization
|
cs.IT cond-mat.dis-nn math.IT math.ST stat.TH
|
We consider the problem of reconstructing an $N$-dimensional continuous
vector $\mathbf{x}$ from $P$ constraints which are generated by its linear
transformation under the assumption that the number of non-zero elements of
$\mathbf{x}$ is typically limited to $\rho N$ ($0\le \rho \le 1$). Problems of
this type can be solved by minimizing a cost function with respect to the
$L_p$-norm $||\mathbf{x}||_p=\lim_{\epsilon \to +0}\sum_{i=1}^N
|x_i|^{p+\epsilon}$, subject to the constraints under an appropriate
condition. For several $p$, we assess a typical-case limit $\alpha_c(\rho)$,
which represents a critical relation between $\alpha=P/N$ and $\rho$ for
successfully reconstructing the original vector by minimization for typical
situations in the limit $N,P \to \infty$ with $\alpha$ kept finite, utilizing
the replica method. For $p=1$, $\alpha_c(\rho)$ is considerably smaller than
its worst-case counterpart, which has been rigorously derived in the existing
information-theory literature.
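For intuition about the $p=1$ case, the reconstruction $\min ||x||_1$ subject to $Ax=b$ (basis pursuit) can be cast as a linear program. The sketch below is illustrative only: the problem sizes and Gaussian random ensemble are chosen for a quick demonstration, not taken from the paper's replica-method setup.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    # min sum(t) subject to -t <= x <= t and A x = b, variables z = [x, t];
    # the optimum of sum(t) equals the minimal L1 norm of x.
    P, N = A.shape
    c = np.hstack([np.zeros(N), np.ones(N)])
    A_ub = np.block([[np.eye(N), -np.eye(N)],
                     [-np.eye(N), -np.eye(N)]])
    A_eq = np.hstack([A, np.zeros((P, N))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

rng = np.random.default_rng(0)
N, P, k = 50, 25, 3                 # alpha = P/N = 0.5, rho = k/N = 0.06
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((P, N))
x_hat = basis_pursuit(A, A @ x0)
print(np.allclose(x_hat, x0, atol=1e-5))
```

With $\rho$ well below the typical-case threshold $\alpha_c$ at this $\alpha$, L1 minimization recovers the sparse vector exactly.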
|
0907.0931
|
Distributed Sensor Selection using a Truncated Newton Method
|
cs.IT math.IT
|
We propose a new distributed algorithm for computing a truncated Newton
method, where the main diagonal of the Hessian is computed using belief
propagation. As a case study for this approach, we examine the sensor selection
problem, a Boolean convex optimization problem. We form two distributed
algorithms. The first algorithm is a distributed version of the interior point
method by Joshi and Boyd, and the second algorithm is an order of magnitude
faster approximation. As an example application we discuss distributed anomaly
detection in networks. We demonstrate the applicability of our solution using
both synthetic data and real traffic logs collected from the Abilene Internet
backbone.
|
0907.0939
|
The Soft Cumulative Constraint
|
cs.AI
|
This research report presents an extension of the Cumulative constraint of the
Choco constraint solver, which is useful for encoding over-constrained
cumulative problems. This new global constraint uses sweep and task-interval
violation-based algorithms.
|
0907.0944
|
Spread spectrum for imaging techniques in radio interferometry
|
astro-ph.IM cs.IT math.IT
|
We consider the probe of astrophysical signals through radio interferometers
with small field of view and baselines with non-negligible and constant
component in the pointing direction. In this context, the visibilities measured
essentially identify with a noisy and incomplete Fourier coverage of the
product of the planar signals with a linear chirp modulation. In light of the
recent theory of compressed sensing and in the perspective of defining the best
possible imaging techniques for sparse signals, we analyze the related spread
spectrum phenomenon and suggest its universality relative to the sparsity
dictionary. Our results rely both on theoretical considerations related to the
mutual coherence between the sparsity and sensing dictionaries, as well as on
numerical simulations.
|
0907.1005
|
A class of structured P2P systems supporting browsing
|
cs.IR cs.DC
|
Browsing is a way of finding documents in a large amount of data which is
complementary to querying and which is particularly suitable for multimedia
documents. Locating particular documents in a very large collection of
multimedia documents, such as the ones available in peer-to-peer networks, is
a difficult task. However, current peer-to-peer systems do not support
browsing. In this report, we show how one can build a peer-to-peer system
supporting a kind of browsing. In our proposal, one must extend an existing
distributed hash table system with a few features: handling partial hash-keys
and providing appropriate routing mechanisms for these hash-keys. We give such
an algorithm for the particular case of the Tapestry distributed hash table.
This is work in progress, as no proper validation has been done yet.
|
0907.1012
|
Apply Local Clustering Method to Improve the Running Speed of Ant Colony
Optimization
|
cs.NE cs.AI
|
Ant Colony Optimization (ACO) has time complexity O(t*m*N*N); a typical
application is the Traveling Salesman Problem (TSP), where t, m, and N denote
the number of iterations, the number of ants, and the number of cities,
respectively. Reducing running time is a major research focus, and one
approach is to decrease the parameters t and N, especially N. To this end, the
following method is presented in this paper. First, we design a novel
clustering algorithm named the Special Local Clustering algorithm (SLC) and
apply it to partition all cities into compact classes, where a compact class
is one whose cities cluster tightly in a small region. Second, ACO is applied
to each class to obtain a local TSP route. Third, all local TSP routes are
joined to form a solution. Fourth, the inaccuracy introduced into the solution
by clustering is eliminated. Simulation shows that the presented method
improves the running speed of ACO by a factor of at least 200. This speedup
stems from two factors. One is that each class is small, so the parameter N is
reduced. The other is that the route length converges at every iteration step
when ACO acts on a compact class, so convergence of the route length can be
used as the termination criterion of ACO, reducing the parameter t.
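The cluster-then-route idea can be sketched as follows. Note the substitutions: plain k-means and a nearest-neighbour heuristic stand in for the paper's SLC clustering and ACO, and the final correction step is omitted.

```python
import numpy as np

def cluster_then_route(cities, n_clusters=3, seed=0):
    # Divide-and-conquer TSP heuristic: cluster the cities into compact
    # classes, route each class locally, concatenate the local routes.
    rng = np.random.default_rng(seed)
    centers = cities[rng.choice(len(cities), n_clusters, replace=False)]
    for _ in range(20):                      # plain k-means iterations
        labels = np.argmin(((cities[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(n_clusters):
            if (labels == j).any():
                centers[j] = cities[labels == j].mean(axis=0)
    tour = []
    for j in range(n_clusters):              # nearest-neighbour per class
        idx = list(np.flatnonzero(labels == j))
        if not idx:
            continue
        route = [idx.pop(0)]
        while idx:
            last = cities[route[-1]]
            nxt = min(idx, key=lambda i: np.sum((cities[i] - last) ** 2))
            idx.remove(nxt)
            route.append(nxt)
        tour.extend(route)
    return tour

cities = np.random.default_rng(1).random((30, 2))
tour = cluster_then_route(cities)
print(sorted(tour) == list(range(30)))       # every city visited exactly once
```

Because the routing subproblem only ever sees one small class at a time, the effective N in the inner solver is the class size rather than the full city count, which is the source of the speedup the abstract describes.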
|
0907.1054
|
Learning Gaussian Mixtures with Arbitrary Separation
|
cs.LG cs.DS
|
In this paper we present a method for learning the parameters of a mixture of
$k$ identical spherical Gaussians in $n$-dimensional space with an arbitrarily
small separation between the components. Our algorithm is polynomial in all
parameters other than $k$. The algorithm is based on an appropriate grid search
over the space of parameters. The theoretical analysis of the algorithm hinges
on a reduction of the problem to 1 dimension and showing that two 1-dimensional
mixtures whose densities are close in the $L^2$ norm must have similar means
and mixing coefficients. To produce such a lower bound for the $L^2$ norm in
terms of the distances between the corresponding means, we analyze the behavior
of the Fourier transform of a mixture of Gaussians in 1 dimension around the
origin, which turns out to be closely related to the properties of the
Vandermonde matrix obtained from the component means. Analysis of this matrix
together with basic function approximation results allows us to provide a lower
bound for the norm of the mixture in the Fourier domain.
In recent years much research has been aimed at understanding the
computational aspects of learning the parameters of Gaussian mixture
distributions in high dimension. To the best of our knowledge, all existing
work on learning the parameters of Gaussian mixtures assumes a minimum
separation between the components of the mixture which is an increasing
function of either the dimension of the space $n$ or the number of components
$k$. In this paper we prove the first result showing that the parameters of an
$n$-dimensional Gaussian mixture model with arbitrarily small component
separation can be learned in time polynomial in $n$.
|
0907.1061
|
Boolean Compressed Sensing and Noisy Group Testing
|
cs.IT math.IT
|
The fundamental task of group testing is to recover a small distinguished
subset of items from a large population while efficiently reducing the total
number of tests (measurements). The key contribution of this paper is in
adopting a new information-theoretic perspective on group testing problems. We
formulate the group testing problem as a channel coding/decoding problem and
derive a single-letter characterization for the total number of tests used to
identify the defective set. Although the focus of this paper is primarily on
group testing, our main result is generally applicable to other compressive
sensing models.
The single-letter characterization is shown to be order-wise tight for many
interesting noisy group testing scenarios. Specifically, we consider an
additive Bernoulli($q$) noise model where we show that, for $N$ items and $K$
defectives, the number of tests $T$ is $O(\frac{K\log N}{1-q})$ for arbitrarily
small average error probability and $O(\frac{K^2\log N}{1-q})$ for a worst case
error criterion. We also consider dilution effects whereby a defective item in
a positive pool might get diluted with probability $u$ and potentially missed.
In this case, it is shown that $T$ is $O(\frac{K\log N}{(1-u)^2})$ and
$O(\frac{K^2\log N}{(1-u)^2})$ for the average and the worst case error
criteria, respectively. Furthermore, our bounds allow us to verify existing
known bounds for noiseless group testing including the deterministic noise-free
case and approximate reconstruction with bounded distortion. Our proof of
achievability is based on random coding and the analysis of a Maximum
Likelihood Detector, and our information theoretic lower bound is based on
Fano's inequality.
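The noiseless OR-channel setting can be simulated with a random Bernoulli test design and the simple COMP decoder (any item appearing in a negative pool is declared non-defective). This is an illustrative baseline with made-up parameters, not the Maximum Likelihood detector analyzed in the paper.

```python
import numpy as np

def comp_decode(pools, outcomes):
    # COMP: every item that appears in at least one negative test is
    # declared non-defective; all remaining items are declared defective.
    possibly_defective = np.ones(pools.shape[1], dtype=bool)
    for row, positive in zip(pools.astype(bool), outcomes):
        if not positive:
            possibly_defective[row] = False
    return set(np.flatnonzero(possibly_defective))

rng = np.random.default_rng(0)
N, K = 200, 5                                 # items and defectives
defective = rng.choice(N, K, replace=False)
T = int(8 * K * np.log(N))                    # O(K log N) tests, oversampled
pools = (rng.random((T, N)) < 1.0 / K).astype(int)   # Bernoulli(1/K) design
outcomes = pools[:, defective].any(axis=1)           # noiseless OR outcomes
print(comp_decode(pools, outcomes) == set(defective))
```

The number of tests here matches the O(K log N) scaling of the average-error bound in the noiseless case; the constant 8 is an arbitrary oversampling factor to make exact recovery overwhelmingly likely in this toy run.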
|
0907.1065
|
Design of an Optimal Bayesian Incentive Compatible Broadcast Protocol
for Ad hoc Networks with Rational Nodes
|
cs.GT cs.AI cs.DC cs.NI
|
Nodes in an ad hoc wireless network incur certain costs for forwarding
packets since packet forwarding consumes the resources of the nodes. If the
nodes are rational, free packet forwarding by the nodes cannot be taken for
granted and incentive based protocols are required to stimulate cooperation
among the nodes. Existing incentive based approaches are based on the VCG
(Vickrey-Clarke-Groves) mechanism which leads to high levels of incentive
budgets and restricted applicability to only certain topologies of networks.
Moreover, the existing approaches have only focused on unicast and multicast.
Motivated by this, we propose an incentive based broadcast protocol that
satisfies Bayesian incentive compatibility and minimizes the incentive budgets
required by the individual nodes. The proposed protocol, which we call {\em
BIC-B} (Bayesian incentive compatible broadcast) protocol, also satisfies
budget balance. We also derive a necessary and sufficient condition for the
ex-post individual rationality of the BIC-B protocol. The {\em BIC-B} protocol
exhibits superior performance in comparison to a dominant strategy incentive
compatible broadcast protocol.
|
0907.1072
|
Self-Assembling Systems are Distributed Systems
|
cs.FL cs.DC cs.RO
|
In 2004, Klavins et al. introduced the use of graph grammars to describe --
and to program -- systems of self-assembly. We show that these graph grammars
can be embedded in a graph rewriting characterization of distributed systems
that was proposed by Degano and Montanari over twenty years ago. We apply this
embedding to generalize Soloveichik and Winfree's local determinism criterion
(for achieving a unique terminal assembly), from assembly systems of 4-sided
tiles that embed in the plane, to arbitrary graph assembly systems. We present
a partial converse of the embedding result, by providing sufficient conditions
under which systems of distributed processors can be simulated by graph
assembly systems topologically, in the plane, and in 3-space. We conclude by
defining a new complexity measure: "surface cost" (essentially the convex hull
of the space inhabited by agents at the conclusion of a self-assembled
computation). We show that, for growth-bounded graphs, executing a subroutine
to find a Maximum Independent Set only increases the surface cost of a
self-assembling computation by a constant factor. We obtain this complexity
bound by using the simulation results to import the distributed computing
notions of "local synchronizer" and "deterministic coin flipping" into
self-assembly.
|
0907.1099
|
Multi-User Diversity vs. Accurate Channel State Information in MIMO
Downlink Channels
|
cs.IT math.IT
|
In a multiple transmit antenna, single antenna per receiver downlink channel
with limited channel state feedback, we consider the following question: given
a constraint on the total system-wide feedback load, is it preferable to get
low-rate/coarse channel feedback from a large number of receivers or
high-rate/high-quality feedback from a smaller number of receivers? Acquiring
feedback from many receivers allows multi-user diversity to be exploited, while
high-rate feedback allows for very precise selection of beamforming directions.
We show that there is a strong preference for obtaining high-quality feedback,
and that obtaining near-perfect channel information from as many receivers as
possible provides a significantly larger sum rate than collecting a few
feedback bits from a large number of users.
|
0907.1201
|
Generating Product Systems
|
math.DS cs.IT math.IT
|
Generalizing Krieger's finite generation theorem, we give conditions for an
ergodic system to be generated by a pair of partitions, each required to be
measurable with respect to a given sub-algebra, and also required to have a
fixed size.
|
0907.1224
|
Effect of user tastes on personalized recommendation
|
physics.data-an cs.IR physics.soc-ph
|
In this paper, based on a weighted projection of the user-object bipartite
network, we study the effects of user tastes on the mass-diffusion-based
personalized recommendation algorithm, where a user's tastes or interests are
defined by the average degree of the objects they have collected. We argue
that the initial recommendation power located on the objects should be
determined by both their degree and the user's tastes. By introducing a
tunable parameter, we investigate the effect of user tastes on the
configuration of the initial recommendation power distribution. The numerical
results indicate that the presented algorithm improves the accuracy, measured
by the average ranking score. More importantly, we find that when the data is
sparse, the algorithm should give more recommendation power to the objects
whose degrees are close to the user's tastes, while when the data becomes
dense, it should assign more power to the objects whose degrees differ
significantly from the user's tastes.
|
0907.1228
|
Degree correlation effect of bipartite network on personalized
recommendation
|
physics.data-an cs.IR physics.soc-ph
|
In this paper, by introducing a new user similarity index based on the
diffusion process, we propose a modified collaborative filtering (MCF)
algorithm, which has remarkably higher accuracy than standard collaborative
filtering. In the proposed algorithm, the degree correlation between users and
objects is taken into account and embedded into the similarity index via a
tunable parameter. The numerical simulation on a benchmark data set shows that
the algorithmic accuracy of the MCF, measured by the average ranking score, is
further improved by 18.19% in the optimal case. In addition, two significant
criteria of algorithmic performance, diversity and popularity, are also taken
into account. Numerical results show that the presented algorithm can provide
more diverse and less popular recommendations, for example, when the
recommendation list contains 10 objects, the diversity, measured by the hamming
distance, is improved by 21.90%.
|
0907.1245
|
How Controlled English can Improve Semantic Wikis
|
cs.HC cs.AI
|
The motivation of semantic wikis is to make acquisition, maintenance, and
mining of formal knowledge simpler, faster, and more flexible. However, most
existing semantic wikis have a very technical interface and are restricted to a
relatively low level of expressivity. In this paper, we explain how AceWiki
uses controlled English - concretely Attempto Controlled English (ACE) - to
provide a natural and intuitive interface while supporting a high degree of
expressivity. We introduce recent improvements of the AceWiki system and user
studies that indicate that AceWiki is usable and useful.
|
0907.1255
|
From Spectrum Pooling to Space Pooling: Opportunistic Interference
Alignment in MIMO Cognitive Networks
|
cs.IT math.IT
|
We describe a non-cooperative interference alignment (IA) technique which
allows an opportunistic multiple input multiple output (MIMO) link (secondary)
to harmlessly coexist with another MIMO link (primary) in the same frequency
band. Assuming perfect channel knowledge at the primary receiver and
transmitter, capacity is achieved by transmiting along the spatial directions
(SD) associated with the singular values of its channel matrix using a
water-filling power allocation (PA) scheme. Often, power limitations lead the
primary transmitter to leave some of its SDs unused. Here, it is shown that the
opportunistic link can transmit its own data if it is possible to align the
interference produced on the primary link with such unused SDs. We provide both
a processing scheme to perform IA and a PA scheme which maximizes the
transmission rate of the opportunistic link. The asymptotes of the achievable
transmission rates of the opportunistic link are obtained in the regime of
large numbers of antennas. Using this result, it is shown that depending on the
signal-to-noise ratio and the number of transmit and receive antennas of the
primary and opportunistic links, both systems can achieve transmission rates of
the same order.
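The water-filling PA mentioned above can be sketched as follows (the standard textbook construction over parallel channels, not the paper's notation); channels whose inverse gain exceeds the water level get zero power, which is what creates the unused SDs the opportunistic link exploits:

```python
def water_filling(gains, power):
    """Water-filling over parallel channels with gains g_i:
    p_i = max(mu - 1/g_i, 0), with the water level mu chosen so
    the allocated powers sum to the total budget."""
    active = sorted(gains, reverse=True)
    while active:
        mu = (power + sum(1.0 / g for g in active)) / len(active)
        if mu - 1.0 / active[-1] >= 0:  # weakest active channel still gets power
            break
        active.pop()                    # drop it and recompute the water level
    return [max(mu - 1.0 / g, 0.0) for g in gains]
```

With a strong and a very weak channel, all power goes to the strong one and the weak direction is left unused.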
|
0907.1266
|
Distributed Random Access Algorithm: Scheduling and Congestion Control
|
cs.IT cs.NI math.IT math.PR
|
This paper provides proofs of the rate stability, Harris recurrence, and
epsilon-optimality of CSMA algorithms where the backoff parameter of each node
is based on its backlog. These algorithms require only local information and
are easy to implement.
The setup is a network of wireless nodes with a fixed conflict graph that
identifies pairs of nodes whose simultaneous transmissions conflict. The paper
studies two algorithms. The first algorithm schedules transmissions to keep up
with given arrival rates of packets. The second algorithm controls the arrivals
in addition to the scheduling and attempts to maximize the sum of the utilities
of the flows of packets at the different nodes. For the first algorithm, the
paper proves rate stability for strictly feasible arrival rates and also Harris
recurrence of the queues. For the second algorithm, the paper proves the
epsilon-optimality. Both algorithms operate with strictly local information in
the case of decreasing step sizes, and operate with the additional information
of the number of nodes in the network in the case of constant step size.
|
0907.1413
|
Privacy constraints in regularized convex optimization
|
cs.CR cs.DB cs.LG
|
This paper is withdrawn due to some errors, which are corrected in
arXiv:0912.0071v4 [cs.LG].
|
0907.1432
|
Reciprocity in Linear Deterministic Networks under Linear Coding
|
cs.IT math.IT
|
The linear deterministic model has been used recently to get a first order
understanding of many wireless communication network problems. In many of these
cases, it has been pointed out that the capacity regions of the network and its
reciprocal (where the communication links are reversed and the roles of the
sources and the destinations are swapped) are the same. In this paper, we
consider a linear deterministic communication network with multiple unicast
information flows. For this model and under the restriction to the class of
linear coding, we show that the rate regions for a network and its reciprocal
are the same. This can be viewed as a generalization of the linear
reversibility of wireline networks, already known in the network coding
literature.
|
0907.1523
|
Theoretical Performance Analysis of Eigenvalue-based Detection
|
cs.IT math.IT
|
In this paper we develop a complete analytical framework based on Random
Matrix Theory for the performance evaluation of Eigenvalue-based Detection.
While, up to now, analysis was limited to false-alarm probability, we have
obtained an analytical expression also for the probability of missed detection,
by using the theory of spiked population models. A general scenario with
multiple signals present at the same time is considered. The theoretical
results of this paper allow one to predict the error probabilities, and to set the
decision threshold accordingly, by means of a few mathematical formulae. In
this way the design of an eigenvalue-based detector is made conceptually
identical to that of a traditional energy detector. As additional results, the
paper discusses the conditions of signal identifiability for single and
multiple sources. All the analytical results are validated through numerical
simulations, covering also convergence, identifiability and non-Gaussian
practical modulations.
|
0907.1545
|
Augmenting Light Field to model Wave Optics effects
|
cs.CV
|
The ray-based 4D light field representation cannot be directly used to
analyze diffractive or phase-sensitive optical elements. In this paper, we
exploit tools from wave optics and extend the light field representation via a
novel "light field transform". We introduce a key modification to the
ray-based model to support the transform. We insert a "virtual light source",
with potentially negative valued radiance for certain emitted rays. We create a
look-up table of light field transformers of canonical optical elements. The
two key conclusions are that (i) in free space, the 4D light field completely
represents wavefront propagation via rays with real (positive as well as
negative) valued radiance and (ii) at occluders, a light field composed of
light field transformers plus insertion of (ray-based) virtual light sources
represents resultant phase and amplitude of wavefronts. For free-space
propagation, we analyze different wavefronts and coherence possibilities. For
occluders, we show that the light field transform is simply based on a
convolution followed by a multiplication operation. This formulation brings
powerful concepts from wave optics to computer vision and graphics. We show
applications in cubic-phase plate imaging and holographic displays.
|
0907.1558
|
Towards the quantification of the semantic information encoded in
written language
|
physics.soc-ph cs.CL physics.data-an
|
Written language is a complex communication signal capable of conveying
information encoded in the form of ordered sequences of words. Beyond the local
order ruled by grammar, semantic and thematic structures affect long-range
patterns in word usage. Here, we show that a direct application of information
theory quantifies the relationship between the statistical distribution of
words and the semantic content of the text. We show that there is a
characteristic scale, roughly around a few thousand words, which establishes
the typical size of the most informative segments in written language.
Moreover, we find that the words whose contributions to the overall information
are largest are the ones most closely associated with the main subjects and
topics of the text. This scenario can be explained by a model of word usage
that assumes that words are distributed along the text in domains of a
characteristic size where their frequency is higher than elsewhere. Our
conclusions are based on the analysis of a large database of written language,
diverse in subjects and styles, and thus are likely to be applicable to general
language sequences encoding complex information.
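A direct information-theoretic measure of this kind can be sketched as the per-word decomposition of the mutual information between words and text segments (my own generic construction, assuming the text is already split into parts; the paper's estimator may differ in detail):

```python
from collections import Counter
from math import log2

def word_information(parts):
    """parts: list of text segments, each a list of word tokens.
    Returns each word's contribution to the mutual information
    I(word; segment); words concentrated in few segments score high."""
    total = sum(len(p) for p in parts)
    global_freq = Counter(w for p in parts for w in p)
    contrib = Counter()
    for p in parts:
        local = Counter(p)
        for w, c in local.items():
            p_wp = c / total                 # joint P(word, part)
            p_w = global_freq[w] / total     # marginal P(word)
            p_part = len(p) / total          # marginal P(part)
            contrib[w] += p_wp * log2(p_wp / (p_w * p_part))
    return contrib
```

A word spread uniformly across segments contributes nothing, while a word confined to one segment carries positive information about where in the text it occurs.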
|
0907.1597
|
Beyond No Free Lunch: Realistic Algorithms for Arbitrary Problem Classes
|
cs.IT cs.NE math.IT
|
We show how the necessary and sufficient conditions for the NFL to apply can
be reduced to the single requirement of the set of objective functions under
consideration being closed under permutation, and quantify the extent to which
a set of objectives not closed under permutation can give rise to a performance
difference between two algorithms. Then we provide a more refined definition of
performance under which we show that revisiting algorithms are always trumped
by enumerative ones.
|
0907.1632
|
Incorporating Integrity Constraints in Uncertain Databases
|
cs.DB cs.IR
|
We develop an approach to incorporate additional knowledge, in the form of
general purpose integrity constraints (ICs), to reduce uncertainty in
probabilistic databases. While incorporating ICs improves data quality (and
hence quality of answers to a query), it significantly complicates query
processing. To overcome the additional complexity, we develop an approach to
map an uncertain relation U with ICs to another uncertain relation U', that
approximates the set of consistent worlds represented by U. Queries over U can
instead be evaluated over U' achieving higher quality (due to reduced
uncertainty in U') without additional complexity in query processing due to
ICs. We demonstrate the effectiveness and scalability of our approach to large
data-sets with complex constraints. We also present experimental results
demonstrating the utility of incorporating integrity constraints in uncertain
relations, in the context of an information extraction application.
|
0907.1721
|
Distributed Function Computation in Asymmetric Communication Scenarios
|
cs.IT math.IT
|
We consider the distributed function computation problem in asymmetric
communication scenarios, where the sink computes some deterministic function of
the data split among N correlated informants. The distributed function
computation problem is addressed as a generalization of distributed source
coding (DSC) problem. We are mainly interested in minimizing the number of
informant bits required, in the worst-case, to allow the sink to exactly
compute the function. We provide a constructive solution for this in terms of
an interactive communication protocol and prove its optimality. The proposed
protocol also allows us to compute the worst-case achievable rate-region for
the computation of any function. We define two classes of functions: lossy and
lossless. We show that, in general, the lossy functions can be computed at the
sink with fewer informant bits than in the DSC problem, while computation of the
lossless functions requires as many informant bits as the DSC problem.
|
0907.1723
|
Worst-case Compressibility of Discrete and Finite Distributions
|
cs.IT math.IT
|
In the worst-case distributed source coding (DSC) problem of [1], the smaller
cardinality of the support-set describing the correlation in informant data may
neither imply that fewer informant bits are required nor that fewer informants
need to be queried to finish the data-gathering at the sink. It is
important to formally address these observations for two reasons: first, to
develop good worst-case information measures and second, to perform meaningful
worst-case information-theoretic analysis of various distributed data-gathering
problems. Towards this goal, we introduce the notions of bit-compressibility
and informant-compressibility of support-sets. We consider DSC and distributed
function computation problems and provide results on computing the bit- and
informant-compressibility regions of the support-sets as a function of their
cardinality.
|
0907.1728
|
Role of Weak Ties in Link Prediction of Complex Networks
|
cs.IR
|
Plenty of algorithms for link prediction have been proposed and were applied
to various real networks. Among these works, the weights of links are rarely
taken into account. In this paper, we use local similarity indices to estimate
the likelihood of the existence of links in weighted networks, including Common
Neighbor, Adamic-Adar Index, Resource Allocation Index, and their weighted
versions. In both the unweighted and weighted cases, the resource allocation
index performs the best. To our surprise, the weighted indices perform worse,
which reminds us of the well-known Weak Tie Theory. Further extensive
experimental study shows that weak ties play a significant role in the link
prediction problem, and that emphasizing the contribution of weak ties can
remarkably enhance prediction accuracy.
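The three unweighted indices compared here are one-liners over an adjacency structure; a minimal sketch, representing the graph as a dict from node to its neighbor set:

```python
from math import log

def common_neighbors(g, x, y):
    """CN index: number of shared neighbors."""
    return len(g[x] & g[y])

def adamic_adar(g, x, y):
    """AA index: shared neighbors weighted by 1/log(degree)."""
    return sum(1.0 / log(len(g[z])) for z in g[x] & g[y])

def resource_allocation(g, x, y):
    """RA index: shared neighbors weighted by 1/degree, so that
    high-degree common neighbors contribute less evidence."""
    return sum(1.0 / len(g[z]) for z in g[x] & g[y])
```

Node pairs are ranked by the index value; higher scores are predicted as more likely future links.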
|
0907.1737
|
Throughput Improvement and Its Tradeoff with The Queuing Delay in the
Diamond Relay Networks
|
cs.IT math.IT
|
Diamond relay channel model, as a basic transmission model, has recently been
attracting considerable attention in wireless Ad Hoc networks. Node cooperation
and opportunistic scheduling scheme are two important techniques to improve the
performance in wireless scenarios. In this paper we consider the problem of how
to efficiently combine opportunistic scheduling and cooperative modes in
Rayleigh fading scenarios. To do so, we first compare the throughput of SRP
(Spatial Reused Pattern) and AFP (Amplify Forwarding Pattern) in the
half-duplex case with the assumption that channel side information is known to
all, and then come up with a new scheduling scheme. It will be shown that simply
switching between SRP and AFP does little to obtain the expected improvement,
because SRP is always superior to AFP on average due to its
efficient spatial reuse. To improve the throughput further, we put forward a
new processing strategy in which buffers are employed at both relays in SRP
mode. By efficiently utilizing the links with relatively higher gains, the
throughput can be greatly improved at a cost of queuing delay. Furthermore, we
shall quantitatively evaluate the queuing delay and the tradeoff between the
throughput and the additional queuing delay. Finally, to realize our developed
strategy and ensure it always runs in a stable state, we present two criteria
and an algorithm for the selection and adjustment of the switching thresholds.
|
0907.1739
|
Efficient Signal-Time Coding Design and its Application in Wireless
Gaussian Relay Networks
|
cs.IT math.IT
|
Signal-time coding, which combines the traditional encoding/modulation mode
in the signal domain with signal pulse phase modulation in the time domain, was
proposed to improve the information flow rate in relay networks. In this paper,
we mainly focus on the efficient signal-time coding design. We first derive an
explicit iterative algorithm to estimate the maximum number of available codes
given the code length of signal-time coding, and then present an iterative
construction method of codebooks. It is shown that compared with conventional
computer search, the proposed iterative construction method can reduce the
complexity greatly. Numerical results will also indicate that the newly
constructed codebook is optimal in terms of coding rate. To minimize the buffer
size needed to store the codebook while keeping a relatively high efficiency,
we shall propose a combinatorial construction method. We will then consider
applications in wireless Gaussian relay networks. It will be shown that in the
three-node network model, mixed transmission using both two-hop and direct
transmissions is not always a good option.
|
0907.1788
|
FNT-based Reed-Solomon Erasure Codes
|
cs.IT math.IT
|
This paper presents a new construction of Maximum-Distance Separable (MDS)
Reed-Solomon erasure codes based on Fermat Number Transform (FNT). Thanks to
FNT, these codes support practical coding and decoding algorithms with
complexity O(n log n), where n is the number of symbols of a codeword. An
open-source implementation shows that the encoding speed can reach 150 Mbps for
codes of length up to several tens of thousands of symbols. These codes can
serve as the basic component of the Information Dispersal Algorithm (IDA)
employed in several P2P systems.
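The core ingredient is a DFT over GF(2^16+1), where all arithmetic is exact; a naive O(n^2) sketch (the paper's codes use an O(n log n) butterfly instead, and 3 is a known primitive root of this Fermat prime):

```python
P = 2**16 + 1  # Fermat prime F4; all arithmetic is in GF(P)

def fnt(a, invert=False):
    """Naive O(n^2) Fermat Number Transform over GF(2^16+1).
    len(a) must divide 2^16; a production implementation would
    use an FFT-style butterfly for O(n log n)."""
    n = len(a)
    w = pow(3, (P - 1) // n, P)      # primitive n-th root of unity
    if invert:
        w = pow(w, P - 2, P)         # w^{-1} via Fermat's little theorem
    out = [sum(a[j] * pow(w, i * j, P) for j in range(n)) % P
           for i in range(n)]
    if invert:
        n_inv = pow(n, P - 2, P)
        out = [x * n_inv % P for x in out]
    return out
```

Encoding a Reed-Solomon codeword then amounts to evaluating the message polynomial at the roots of unity, i.e. one forward transform, and erasure decoding to an inverse transform plus interpolation.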
|
0907.1812
|
Fast search for Dirichlet process mixture models
|
cs.LG
|
Dirichlet process (DP) mixture models provide a flexible Bayesian framework
for density estimation. Unfortunately, their flexibility comes at a cost:
inference in DP mixture models is computationally expensive, even when
conjugate distributions are used. In the common case when one seeks only a
maximum a posteriori assignment of data points to clusters, we show that search
algorithms provide a practical alternative to expensive MCMC and variational
techniques. When a true posterior sample is desired, the solution found by
search can serve as a good initializer for MCMC. Experimental results show that
using these techniques it is possible to apply DP mixture models to very large
data sets.
|
0907.1814
|
Bayesian Query-Focused Summarization
|
cs.CL cs.IR cs.LG
|
We present BayeSum (for ``Bayesian summarization''), a model for sentence
extraction in query-focused summarization. BayeSum leverages the common case in
which multiple documents are relevant to a single query. Using these documents
as reinforcement for query terms, BayeSum is not afflicted by the paucity of
information in short queries. We show that approximate inference in BayeSum is
possible on large data sets and results in a state-of-the-art summarization
system. Furthermore, we show how BayeSum can be understood as a justified query
expansion technique in the language modeling for IR framework.
|
0907.1815
|
Frustratingly Easy Domain Adaptation
|
cs.LG cs.CL
|
We describe an approach to domain adaptation that is appropriate exactly in
the case when one has enough ``target'' data to do slightly better than just
using only ``source'' data. Our approach is incredibly simple, easy to
implement as a preprocessing step (10 lines of Perl!) and outperforms
state-of-the-art approaches on a range of datasets. Moreover, it is trivially
extended to a multi-domain adaptation problem, where one has data from a
variety of different domains.
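The "10 lines of Perl" alluded to is feature augmentation: each input vector is copied into a shared block plus a domain-specific block, and any off-the-shelf learner is trained on the result (a sketch from memory of the published method; details may differ):

```python
def augment(x, domain):
    """'Frustratingly easy' augmentation: map source points to
    (shared, source, 0) and target points to (shared, 0, target).
    Shared weights capture domain-general behavior; the per-domain
    copies capture domain-specific corrections."""
    zeros = [0.0] * len(x)
    if domain == "source":
        return x + x + zeros
    return x + zeros + x
```

The multi-domain extension simply adds one zero-padded block per domain.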
|
0907.1839
|
An Evolved Neural Controller for Bipedal Walking with Dynamic Balance
|
cs.NE cs.RO
|
We successfully evolved a neural network controller that produces dynamic
walking in a simulated bipedal robot with compliant actuators, a difficult
control problem. The evolutionary evaluation uses a detailed software
simulation of a physical robot. We describe: 1) a novel theoretical method to
encourage populations to evolve "around" local optima, which employs multiple
demes and fitness functions of progressively increasing difficulty, and 2) the
novel genetic representation of the neural controller.
|
0907.1888
|
Compressive Sensing for Feedback Reduction in MIMO Broadcast Channels
|
cs.IT math.IT
|
We propose a generalized feedback model and compressive sensing based
opportunistic feedback schemes for feedback resource reduction in MIMO
Broadcast Channels under the assumption that both uplink and downlink channels
undergo block Rayleigh fading. Feedback resources are shared and are
opportunistically accessed by users who are strong, i.e. users whose channel
quality information is above a certain fixed threshold. Strong users send the
same feedback information on all shared channels. They are identified by the base
station via compressive sensing. Both analog and digital feedbacks are
considered. The proposed analog & digital opportunistic feedback schemes are
shown to achieve the same sum-rate throughput as that achieved by dedicated
feedback schemes, but with feedback channels growing only logarithmically with
the number of users. Moreover, there is also a reduction in the feedback load.
In the analog feedback case, we show that the proposed scheme reduces the
feedback noise, which eventually results in better throughput, whereas in the
digital feedback case the proposed scheme in a noisy scenario achieves almost
the throughput obtained in a noiseless dedicated feedback scenario. We also show
that for a fixed budget of feedback bits, there exists a trade-off between the
number of shared channels and the accuracy of the feedback SINR thresholds.
|
0907.1916
|
Learning Equilibria in Games by Stochastic Distributed Algorithms
|
cs.GT cs.LG
|
We consider a class of fully stochastic and fully distributed algorithms,
that we prove to learn equilibria in games.
Indeed, we consider a family of stochastic distributed dynamics that we prove
to converge weakly (in the sense of weak convergence for probabilistic
processes) towards their mean-field limit, i.e. an ordinary differential
equation (ODE) in the general case. We focus then on a class of stochastic
dynamics where this ODE turns out to be related to multipopulation replicator
dynamics.
Using facts known about convergence of this ODE, we discuss the convergence
of the initial stochastic dynamics: For general games, there might be
non-convergence, but when convergence of the ODE holds, considered stochastic
algorithms converge towards Nash equilibria. For games admitting Lyapunov
functions, that we call Lyapunov games, the stochastic dynamics converge. We
prove that any ordinal potential game, and hence any potential game is a
Lyapunov game, with a multiaffine Lyapunov function. For Lyapunov games with a
multiaffine Lyapunov function, we prove that this Lyapunov function is a
super-martingale over the stochastic dynamics. This leads to a way of providing
bounds on their time of convergence by martingale arguments. This applies in
particular to many classes of games that have been considered in the literature,
including several load balancing game scenarios and congestion games.
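For intuition, one Euler step of single-population replicator dynamics, the kind of ODE these stochastic algorithms track in the mean-field limit (the multipopulation version of the abstract iterates a step like this per population):

```python
def replicator_step(x, payoff, dt=0.01):
    """x: mixed strategy (a probability vector); payoff: square matrix A.
    One Euler step of x_i' = x_i * ((A x)_i - x.A.x): actions doing
    better than average gain probability mass."""
    n = len(x)
    fit = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * fit[i] for i in range(n))
    return [x[i] + dt * x[i] * (fit[i] - avg) for i in range(n)]
```

The step preserves the simplex (probabilities keep summing to one), and in a coordination game the currently more popular action keeps gaining share.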
|
0907.1925
|
Modeling self-organizing traffic lights with elementary cellular
automata
|
nlin.CG cond-mat.stat-mech cs.AI nlin.AO
|
There have been several highway traffic models proposed based on cellular
automata. The simplest one is elementary cellular automaton rule 184. We extend
this model to city traffic with cellular automata coupled at intersections
using only rules 184, 252, and 136. The simplicity of the model offers a clear
understanding of the main properties of city traffic and its phase transitions.
We use the proposed model to compare two methods for coordinating traffic
lights: a green-wave method that tries to optimize phases according to expected
flows and a self-organizing method that adapts to the current traffic
conditions. The self-organizing method delivers considerable improvements over
the green-wave method. For low densities, the self-organizing method promotes
the formation and coordination of platoons that flow freely in four directions,
i.e. with a maximum velocity and no stops. For medium densities, the method
allows a constant usage of the intersections, exploiting their maximum flux
capacity. For high densities, the method prevents gridlocks and promotes the
formation and coordination of "free-spaces" that flow in the opposite direction
of traffic.
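Rule 184 is small enough to state completely: a car (1) advances iff the cell ahead is empty (0). A sketch of one synchronous update on a ring; note the car count is conserved, as a traffic model requires:

```python
def rule184_step(cells):
    """One update of elementary CA rule 184 on a ring of 0/1 cells.
    Truth table 10111000 for neighborhoods 111..000: a cell stays
    occupied iff its car cannot move, and becomes occupied iff the
    car to its left moves in."""
    n = len(cells)
    new = [0] * n
    for i in range(n):
        left, c, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        new[i] = 1 if (c == 1 and right == 1) or (c == 0 and left == 1) else 0
    return new
```

The city model of the abstract couples rings like this at intersections, with rules 252 and 136 playing the role of red lights holding and releasing cars.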
|
0907.1956
|
Zero-error feedback capacity via dynamic programming
|
cs.IT math.IT
|
In this paper, we study the zero-error capacity for finite state channels
with feedback when channel state information is known to both the transmitter
and the receiver. We prove that the zero-error capacity in this case can be
obtained through the solution of a dynamic programming problem. Each iteration
of the dynamic programming provides lower and upper bounds on the zero-error
capacity, and in the limit, the lower bound coincides with the zero-error
feedback capacity. Furthermore, a sufficient condition for solving the dynamic
programming problem is provided through a fixed-point equation. Analytical
solutions for several examples are provided.
|
0907.1975
|
On semifast Fourier transform algorithms
|
cs.IT math.IT
|
We consider the relations between well-known Fourier transform algorithms.
|
0907.1978
|
BPDMN: A Conservative Extension of BPMN with Enhanced Data
Representation Capabilities
|
cs.SE cs.DB
|
The design of business processes involves the usage of modeling languages,
tools and methodologies. In this paper we highlight and address a relevant
limitation of the Business Process Modeling Notation (BPMN): its weak data
representation capabilities. In particular, we extend it with data-specific
constructs derived from existing data modeling notations and adapted to blend
gracefully into BPMN diagrams. The extension has been developed taking existing
modeling languages and requirement analyses into account: we characterize our
notation using the Workflow Data Patterns and provide mappings to the main
XML-based business process languages.
|
0907.1990
|
Automated Protein Structure Classification: A Survey
|
cs.CE q-bio.BM
|
Classification of proteins based on their structure provides a valuable
resource for studying protein structure, function and evolutionary
relationships. With the rapidly increasing number of known protein structures,
manual and semi-automatic classification is becoming ever more difficult and
prohibitively slow. Therefore, there is a growing need for automated, accurate
and efficient classification methods to generate classification databases or
increase the speed and accuracy of semi-automatic techniques. Recognizing this
need, several automated classification methods have been developed. In this
survey, we overview recent developments in this area. We classify different
methods based on their characteristics and compare their methodology, accuracy
and efficiency. We then present a few open problems and explain future
directions.
|
0907.1992
|
Spectrum sensing by cognitive radios at very low SNR
|
cs.IT math.IT
|
Spectrum sensing is one of the enabling functionalities for cognitive radio
(CR) systems to operate in the spectrum white space. To protect the primary
incumbent users from interference, the CR is required to detect incumbent
signals at very low signal-to-noise ratio (SNR). In this paper, we present a
spectrum sensing technique based on correlating spectra for detection of
television (TV) broadcasting signals. The basic strategy is to correlate the
periodogram of the received signal with the a priori known spectral features of
the primary signal. We show that according to the Neyman-Pearson criterion,
this spectral correlation-based sensing technique is asymptotically optimal at
very low SNR and with a large sensing time. From the system design perspective,
we analyze the effect of the spectral features on the spectrum sensing
performance. Through the optimization analysis, we obtain useful insights on
how to choose effective spectral features to achieve reliable sensing.
Simulation results show that the proposed sensing technique can reliably detect
analog and digital TV signals at SNR as low as -20 dB.
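The detector reduces to correlating a periodogram with the known feature and comparing against a threshold; a naive sketch (O(N^2) DFT for clarity; an FFT and a Neyman-Pearson threshold would be used in practice):

```python
from math import cos, sin, pi

def periodogram(x):
    """|DFT(x)|^2 / N, computed naively from real samples."""
    N = len(x)
    pxx = []
    for k in range(N):
        re = sum(x[n] * cos(2 * pi * k * n / N) for n in range(N))
        im = -sum(x[n] * sin(2 * pi * k * n / N) for n in range(N))
        pxx.append((re * re + im * im) / N)
    return pxx

def correlation_statistic(pxx, feature):
    """Correlate the received periodogram with the a priori spectral
    feature; declare the primary signal present when this exceeds a
    threshold set for the target false-alarm probability."""
    return sum(p * f for p, f in zip(pxx, feature))
```

A feature aligned with the signal's actual spectral peaks yields a much larger statistic than a misaligned one, which is what makes the test discriminative at low SNR given enough sensing time.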
|
0907.2049
|
Strategyproof Approximation Mechanisms for Location on Networks
|
cs.GT cs.MA
|
We consider the problem of locating a facility on a network, represented by a
graph. A set of strategic agents have different ideal locations for the
facility; the cost of an agent is the distance between its ideal location and
the facility. A mechanism maps the locations reported by the agents to the
location of the facility. Specifically, we are interested in social choice
mechanisms that do not utilize payments. We wish to design mechanisms that are
strategyproof, in the sense that agents can never benefit by lying, or, even
better, group strategyproof, in the sense that a coalition of agents cannot all
benefit by lying. At the same time, our mechanisms must provide a small
approximation ratio with respect to one of two optimization targets: the social
cost or the maximum cost.
We give an almost complete characterization of the feasible truthful
approximation ratios under both target functions, for deterministic and
randomized mechanisms, and with respect to different network topologies. Our main results
are: We show that a simple randomized mechanism is group strategyproof and
gives a (2-2/n)-approximation for the social cost, where n is the number of
agents, when the network is a circle (known as a ring in the case of computer
networks); we design a novel "hybrid" strategyproof randomized mechanism that
provides a tight approximation ratio of 3/2 for the maximum cost when the
network is a circle; and we show that no randomized SP mechanism can provide an
approximation ratio better than 2-o(1) to the maximum cost even when the
network is a tree, thereby matching a trivial upper bound of two.
|
0907.2075
|
Multiresolution Elastic Medical Image Registration in Standard Intensity
Scale
|
cs.CV
|
Medical image registration is a difficult problem. Not only does a registration
algorithm need to capture both large and small scale image deformations, it
also has to deal with global and local image intensity variations. In this
paper we describe a new multiresolution elastic image registration method that
addresses these difficulties. To capture large and small
scale image deformations, we use both global and local affine transformation
algorithms. To address global and local image intensity variations, we apply an
image intensity standardization algorithm to correct image intensity
variations. This transforms image intensities into a standard intensity scale,
which allows highly accurate registration of medical images.
|
0907.2079
|
An Augmented Lagrangian Approach for Sparse Principal Component Analysis
|
math.OC cs.LG math.ST stat.AP stat.CO stat.ME stat.ML stat.TH
|
Principal component analysis (PCA) is a widely used technique for data
analysis and dimension reduction with numerous applications in science and
engineering. However, the standard PCA suffers from the fact that the principal
components (PCs) are usually linear combinations of all the original variables,
and it is thus often difficult to interpret the PCs. To alleviate this
drawback, various sparse PCA approaches were proposed in the literature [15, 6,
17, 28, 8, 25, 18, 7, 16]. Despite success in achieving sparsity, some important
properties enjoyed by the standard PCA are lost in these methods, such as the
uncorrelatedness of PCs and the orthogonality of loading vectors. Also, the total
explained variance that they attempt to maximize can be too optimistic. In this
paper we propose a new formulation for sparse PCA, aiming at finding sparse and
nearly uncorrelated PCs with orthogonal loading vectors while explaining as
much of the total variance as possible. We also develop a novel augmented
Lagrangian method for solving a class of nonsmooth constrained optimization
problems, which is well suited for our formulation of sparse PCA. We show that
it converges to a feasible point, and moreover under some regularity
assumptions, it converges to a stationary point. Additionally, we propose two
nonmonotone gradient methods for solving the augmented Lagrangian subproblems,
and establish their global and local convergence. Finally, we compare our
sparse PCA approach with several existing methods on synthetic, random, and
real data, respectively. The computational results demonstrate that the sparse
PCs produced by our approach substantially outperform those by other methods in
terms of total explained variance, correlation of PCs, and orthogonality of
loading vectors.
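For readers new to the area, the sparsity in most of the cited formulations ultimately comes from an l1 penalty, whose proximal operator is elementwise soft-thresholding (a generic illustration of how small loadings get zeroed out, not the augmented-Lagrangian method of this paper):

```python
def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1: shrink each entry toward
    zero by tau, zeroing entries whose magnitude is below tau."""
    return [(abs(x) - tau) * (1 if x > 0 else -1) if abs(x) > tau else 0.0
            for x in v]
```

Applied to a loading vector inside an iterative scheme, this is what trades a little explained variance for interpretable, sparse PCs.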
|
0907.2089
|
Fast In-Memory XPath Search over Compressed Text and Tree Indexes
|
cs.DB cs.IR
|
A large fraction of an XML document typically consists of text data. The
XPath query language allows text search via the equal, contains, and
starts-with predicates. Such predicates can efficiently be implemented using a
compressed self-index of the document's text nodes. Most queries, however,
combine predicates over the text of the document with predicates over the tree
structure. It is therefore a challenge to choose an
appropriate evaluation order for a given query, which optimally leverages the
execution speeds of the text and tree indexes. Here the SXSI system is
introduced; it stores the tree structure of an XML document using a bit array
of opening and closing brackets, and stores the text nodes of the document
using a global compressed self-index. On top of these indexes sits an XPath
query engine that is based on tree automata. The engine uses fast counting
queries of the text index in order to dynamically determine whether to evaluate
top-down or bottom-up with respect to the tree structure. The resulting system
has several advantages over existing systems: (1) on pure tree queries (without
text search) such as the XPathMark queries, the SXSI system performs on par or
better than the fastest known systems MonetDB and Qizx, (2) on queries that use
text search, SXSI outperforms the existing systems by 1--3 orders of magnitude
(depending on the size of the result set), and (3) with respect to memory
consumption, SXSI outperforms all other systems for counting-only queries.
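The bracket bit array can support tree navigation by depth counting alone; a minimal sketch of one such operation (my own helper, not SXSI's actual API; real systems answer this in constant time with rank/select structures over the bit array):

```python
def subtree_size(bits, i):
    """bits encodes a DFS of the tree: 1 = opening '(', 0 = closing ')'.
    Counts the nodes in the subtree whose opening bracket is at index i,
    by scanning to the matching close (where the excess depth returns
    to zero)."""
    depth, j = 0, i
    while True:
        depth += 1 if bits[j] else -1
        j += 1
        if depth == 0:
            return (j - i) // 2   # each node contributes one '(' and one ')'
```

For the tree `( () (()) )`, the root spans the whole array and has size 4, its second child has size 2.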
|
0907.2090
|
Some bounds on the capacity of communicating the sum of sources
|
cs.IT math.IT
|
We consider directed acyclic networks with multiple sources and multiple
terminals where each source generates one i.i.d. random process over an abelian
group and all the terminals want to recover the sum of these random processes.
The different source processes are assumed to be independent. The solvability
of such networks has been considered in previous works. In this paper we
investigate the capacity of such networks, referred to as {\it sum-networks},
and present bounds in terms of the min-cut and the numbers of sources and
terminals.
|
0907.2093
|
Distributed Opportunistic Scheduling With Two-Level Probing
|
cs.IT math.IT
|
Distributed opportunistic scheduling (DOS) is studied for wireless ad-hoc
networks in which many links contend for the channel using random access before
data transmissions. Simply put, DOS involves a process of joint channel probing
and distributed scheduling for ad-hoc (peer-to-peer) communications. Since, in
practice, link conditions are estimated with noisy observations, the
transmission rate has to be backed off from the estimated rate to avoid
transmission outages. Then, a natural question to ask is whether it is
worthwhile for the link with successful contention to perform further channel
probing to mitigate estimation errors, at the cost of additional probing. Thus
motivated, this work investigates DOS with two-level channel probing by
optimizing the tradeoff between the throughput gain from more accurate rate
estimation and the resulting additional delay. Capitalizing on optimal stopping
theory with incomplete information, we show that the optimal scheduling policy
is threshold-based and is characterized by either one or two thresholds,
depending on network settings. Necessary and sufficient conditions for both
cases are rigorously established. In particular, our analysis reveals that
performing second-level channel probing is optimal when the first-level
estimated channel condition falls in between the two thresholds. Numerical
results are provided to illustrate the effectiveness of the proposed DOS with
two-level channel probing. We also extend our study to the case with limited
feedback, where the feedback from the receiver to its transmitter takes the
form of (0,1,e).
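The two-threshold structure of the optimal policy can be sketched as follows (the thresholds here are placeholders; the paper derives them from optimal stopping theory with incomplete information):

```python
def dos_action(estimated_rate, theta_low, theta_high):
    """Sketch of the threshold-based DOS policy described above.
    theta_low and theta_high are hypothetical threshold values; the
    actual thresholds come from the optimal stopping formulation."""
    if estimated_rate >= theta_high:
        return "transmit"    # channel clearly good: stop probing, transmit
    if estimated_rate >= theta_low:
        return "probe"       # in between: perform second-level channel probing
    return "recontend"       # channel poor: give up the channel, contend again
```

The one-threshold case of the paper corresponds to `theta_low == theta_high`, where second-level probing is never optimal.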
|
0907.2209
|
Related terms search based on WordNet / Wiktionary and its application
in Ontology Matching
|
cs.IR
|
A set of ontology matching algorithms (for finding correspondences between
concepts) is based on a thesaurus that provides the source data for the
semantic distance calculations. In the current wiki era, new resources keep
springing up and can improve this kind of semantic search. In this paper a
solution based on Russian Wiktionary is compared to WordNet-based algorithms.
The metrics are evaluated on a test collection containing 353 English word
pairs with relatedness scores assigned by human evaluators. The experiment
shows that the proposed method is capable, in principle, of calculating a
semantic distance between a pair of words in any language present in Russian
Wiktionary. Computing the Wiktionary-based metric required the development of
open-source Wiktionary parser software.
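A common family of thesaurus-based relatedness measures scores word pairs by shortest-path length in the synonym graph. The toy sketch below illustrates that idea (the graph and the `1/(1+d)` scoring are illustrative assumptions, not the paper's actual metric):

```python
from collections import deque

def relatedness(graph, a, b):
    """Path-based relatedness over a thesaurus graph: 1/(1 + shortest
    path length), or 0.0 if no path exists. Illustrative only."""
    if a == b:
        return 1.0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        word, d = queue.popleft()          # d = distance from a to word
        for nxt in graph.get(word, ()):
            if nxt == b:
                return 1.0 / (2 + d)       # path length is d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return 0.0

# Tiny hypothetical synonym graph
g = {"car": ["automobile"], "automobile": ["car", "vehicle"],
     "vehicle": ["automobile"]}
print(relatedness(g, "car", "vehicle"))    # two hops -> 1/3
```

In the Wiktionary setting, the graph edges would come from the parsed synonym and related-terms sections of each entry.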
|
0907.2210
|
On the philosophy of Cram\'er-Rao-Bhattacharya Inequalities in Quantum
Statistics
|
math.PR cs.IT math.IT math.ST stat.TH
|
To any parametric family of states of a finite level quantum system we
associate a space of Fisher maps and introduce the natural notions of
Cram\'er-Rao-Bhattacharya tensor and Fisher information form. This leads us to
an abstract Cram\'er-Rao-Bhattacharya lower bound for the covariance matrix of
any finite number of unbiased estimators of parameteric functions. A number of
illustrative examples are included. Modulo technical assumptions of various
kinds, our methods can be applied to infinite-level quantum systems as well as
to parametric families of classical probability distributions on Borel spaces.
|
0907.2222
|
Network-aware Adaptation with Real-Time Channel Statistics for Wireless
LAN Multimedia Transmissions in the Digital Home
|
cs.NI cs.LG
|
This paper suggests the use of intelligent network-aware processing agents in
wireless local area network drivers to generate metrics for bandwidth
estimation based on real-time channel statistics to enable wireless multimedia
application adaptation. Various configurations in the wireless digital home are
studied and the experimental results with performance variations are presented.
|
0907.2268
|
Evaluating Methods to Rediscover Missing Web Pages from the Web
Infrastructure
|
cs.IR cs.DL
|
Missing web pages (pages that return the 404 "Page Not Found" error) are part
of the browsing experience. The manual use of search engines to rediscover
missing pages can be frustrating and unsuccessful. We compare four automated
methods for rediscovering web pages. We extract the page's title, generate the
page's lexical signature (LS), obtain the page's tags from the bookmarking
website delicious.com and generate a LS from the page's link neighborhood. We
use the output of all methods to query Internet search engines and analyze
their retrieval performance. Our results show that both LSs and titles perform
fairly well, with over 60% of the URIs returned top-ranked by Yahoo!. However, the
combination of methods improves the retrieval performance. Considering the
complexity of the LS generation, querying the title first and in case of
insufficient results querying the LSs second is the preferable setup. This
combination accounts for more than 75% top ranked URIs.
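The title-first-then-LS strategy described above can be sketched as follows. The TF-IDF weighting and the `search` callable are illustrative assumptions; the paper's exact signature-generation pipeline may differ:

```python
from collections import Counter
import math
import re

def lexical_signature(doc_text, corpus_df, corpus_size, k=5):
    """Top-k TF-IDF terms of a page -- one common way to build a lexical
    signature (hypothetical weighting, not necessarily the paper's)."""
    terms = re.findall(r"[a-z]+", doc_text.lower())
    tf = Counter(terms)
    def tfidf(term):
        df = corpus_df.get(term, 1)            # document frequency
        return tf[term] * math.log(corpus_size / df)
    return sorted(tf, key=tfidf, reverse=True)[:k]

def rediscover(title, ls_terms, search):
    """Title-first fallback: query the title, and only if that yields no
    results, query the lexical signature. `search` is a placeholder
    callable mapping a query string to a list of URIs."""
    results = search(title)
    if not results:
        results = search(" ".join(ls_terms))
    return results
```

This ordering avoids the cost of LS generation (which requires fetching a cached copy of the page) whenever the cheap title query already succeeds.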
|
0907.2309
|
Protocols and Performance Limits for Half-Duplex Relay Networks
|
cs.IT math.IT
|
In this paper, protocols for the half-duplex relay channel are introduced and
performance limits are analyzed. Relay nodes are subject to an orthogonality
constraint, which prohibits simultaneous receiving and transmitting on the same
time-frequency resource. Based upon this practical consideration, different
protocols are discussed and evaluated using a Gaussian system model. For the
considered scenarios, compress-and-forward based protocols dominate
decode-and-forward protocols for a wide range of parameters. In this paper, a
protocol with one compress-and-forward and one decode-and-forward based relay
is introduced. As in the cut-set bound, which operates in a mode where the
relays transmit alternately, both relays support each other. Furthermore, it is
shown that in practical systems random channel access provides only marginal
performance gains, if any.
|
0907.2315
|
Hard Fault Analysis of Trivium
|
cs.CR cs.IT math.IT
|
Fault analysis is a powerful attack on stream ciphers. Up to now, the main
idea of fault analysis has been to simplify the cipher system by injecting
soft faults; we call this soft fault analysis. As a hardware-oriented stream
cipher, Trivium is weak under soft fault analysis.
In this paper we consider another type of fault analysis of stream ciphers,
which simplifies the cipher system by injecting hard faults; we call it hard
fault analysis. We present the following results for such an attack on
Trivium. In Case 1, with probability no smaller than 0.2396, the attacker can
obtain 69 bits of the 80-bit key. In Case 2, with probability no smaller than
0.2291, the attacker can obtain the entire 80-bit key. In Case 3, with
probability no smaller than 0.2291, the attacker can partially solve for the
key. In Case 4, with non-negligible probability, the attacker can obtain a
simplified cipher with a smaller number of state bits and a slower
non-linearization procedure. In Case 5, with non-negligible probability, the
attacker can obtain another simplified cipher. Moreover, these five cases can
be distinguished by observing the key-stream.
|
0907.2391
|
Optimal Diversity-Multiplexing Tradeoff in Selective-Fading MIMO
Channels
|
cs.IT math.IT
|
We establish the optimal diversity-multiplexing (DM) tradeoff of coherent
time, frequency, and time-frequency selective-fading multiple-input
multiple-output (MIMO) channels and provide a code design criterion for DM
tradeoff optimality. Our results are based on the new concept of the "Jensen
channel" associated to a given selective-fading MIMO channel. While the
original problem seems analytically intractable due to the mutual information
between channel input and output being a sum of correlated random variables,
the Jensen channel is equivalent to the original channel in the sense of the DM
tradeoff and lends itself nicely to analytical treatment. We formulate a
systematic procedure for designing DM tradeoff optimal codes for general
selective-fading MIMO channels by demonstrating that the design problem can be
separated into two simpler and independent problems: the design of an inner
code, or precoder, adapted to the channel statistics (i.e., the selectivity
characteristics) and an outer code independent of the channel statistics. Our
results are supported by appealing geometric intuition, first pointed out for
the flat-fading case by Zheng and Tse, IEEE Trans. Inf. Theory, 2003.