| id | title | categories | abstract |
|---|---|---|---|
1102.0372
|
XWeB: the XML Warehouse Benchmark
|
cs.DB
|
With the emergence of XML as a standard for representing business data, new
decision support applications are being developed. These XML data warehouses
aim at supporting On-Line Analytical Processing (OLAP) operations that
manipulate irregular XML data. To ensure feasibility of these new tools,
important performance issues must be addressed. Performance is customarily
assessed with the help of benchmarks. However, decision support benchmarks do
not currently support XML features. In this paper, we introduce the XML
Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from
the relational decision support benchmark TPC-H. It is mainly composed of a
test data warehouse that is based on a unified reference model for XML
warehouses and that features XML-specific structures, and of its associated
XQuery decision support workload. XWeB's usage is illustrated by experiments on
several XML database management systems.
|
1102.0406
|
Threshold Saturation on Channels with Memory via Spatial Coupling
|
cs.IT math.IT
|
We consider spatially coupled code ensembles, a particular instance being
convolutional LDPC ensembles. It was recently shown that, for transmission over
the memoryless binary erasure channel, this coupling increases the belief
propagation threshold of the ensemble to the maximum a-posteriori threshold of
the underlying component ensemble. This paved the way for a new class of
capacity achieving low-density parity check codes. It was also shown
empirically that the same threshold saturation occurs when we consider
transmission over general binary input memoryless channels.
In this work, we report on empirical evidence which suggests that the same
phenomenon also occurs when transmission takes place over a class of channels
with memory. This is confirmed both by simulations as well as by computing EXIT
curves.
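The belief propagation (BP) thresholds underlying the saturation result are computed by density evolution. A minimal sketch for a standard (l, r)-regular LDPC ensemble on the binary erasure channel (the coupled ensembles of the paper are not reproduced here):

```python
def bp_erasure_fraction(eps, l=3, r=6, iters=10000):
    """Density evolution on the BEC: x is the erasure probability of a
    variable-to-check message for an (l, r)-regular LDPC ensemble."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (r - 1)) ** (l - 1)
        if x < 1e-12:
            break
    return x

def bp_threshold(l=3, r=6, tol=1e-3):
    """Binary-search the largest channel erasure probability for which
    the BP erasure fraction converges to zero."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bp_erasure_fraction(mid, l, r) < 1e-9:
            lo = mid
        else:
            hi = mid
    return lo

# For the (3,6)-regular ensemble the BP threshold is known to be ~0.4294.
print(bp_threshold())
```

Spatial coupling raises this BP threshold toward the MAP threshold of the underlying ensemble, which is the saturation phenomenon the abstract refers to.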
|
1102.0424
|
Design of Finite-Length Irregular Protograph Codes with Low Error Floors
over the Binary-Input AWGN Channel Using Cyclic Liftings
|
cs.IT math.IT
|
We propose a technique to design finite-length irregular low-density
parity-check (LDPC) codes over the binary-input additive white Gaussian noise
(AWGN) channel with good performance in both the waterfall and the error floor
region. The design process starts from a protograph which embodies a desirable
degree distribution. This protograph is then lifted cyclically to a certain
block length of interest. The lift is designed carefully to satisfy a certain
approximate cycle extrinsic message degree (ACE) spectrum. The target ACE
spectrum is one with extremal properties, implying a good error floor
performance for the designed code. The proposed construction results in
quasi-cyclic codes which are attractive in practice due to simple encoder and
decoder implementation. Simulation results are provided to demonstrate the
effectiveness of the proposed construction in comparison with similar existing
constructions.
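To illustrate the lifting step only (the ACE-spectrum optimization is the paper's contribution and is not reproduced), here is a minimal sketch of a cyclic lift, where each edge of the protograph is replaced by a Z x Z circulant permutation with an assumed shift value:

```python
def circulant(Z, s):
    """Z x Z identity matrix cyclically shifted by s columns."""
    return [[1 if (j - i) % Z == s else 0 for j in range(Z)] for i in range(Z)]

def cyclic_lift(base, shifts, Z):
    """Lift a 0/1 protograph matrix `base` to an (m*Z) x (n*Z) parity-check
    matrix, replacing each edge by the circulant with shift `shifts[a][b]`.
    The result is the parity-check matrix of a quasi-cyclic code."""
    m, n = len(base), len(base[0])
    H = [[0] * (n * Z) for _ in range(m * Z)]
    for a in range(m):
        for b in range(n):
            if base[a][b]:
                blk = circulant(Z, shifts[a][b] % Z)
                for i in range(Z):
                    for j in range(Z):
                        H[a * Z + i][b * Z + j] = blk[i][j]
    return H
```

In the paper the shift values are chosen so that the lifted graph meets a target ACE spectrum; here they are arbitrary inputs.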
|
1102.0454
|
Evaluation of Three Vision Based Object Perception Methods for a Mobile
Robot
|
cs.RO
|
This paper addresses object perception applied to mobile robotics. Being able
to perceive semantically meaningful objects in unstructured environments is a
key capability in order to make robots suitable to perform high-level tasks in
home environments. However, finding a solution for this task is daunting: it
requires the ability to handle the variability in image formation in a moving
camera with tight time constraints. The paper brings to attention some of the
issues with applying three state of the art object recognition and detection
methods in a mobile robotics scenario, and proposes methods to deal with
windowing/segmentation. Thus, this work aims at evaluating the state-of-the-art
in object perception in an attempt to develop a lightweight solution for mobile
robotics use/research in typical indoor settings.
|
1102.0467
|
Delays Induce an Exponential Memory Gap for Rendezvous in Trees
|
cs.DC cs.RO
|
The aim of rendezvous in a graph is the meeting of two mobile agents at some node
of an unknown anonymous connected graph. In this paper, we focus on rendezvous
in trees, and, analogously to the efforts that have been made for solving the
exploration problem with compact automata, we study the size of memory of
mobile agents that permits the rendezvous problem to be solved deterministically.
We assume that the agents are identical, and move in synchronous rounds.
We first show that if the delay between the starting times of the agents is
arbitrary, then the lower bound on memory required for rendezvous is Omega(log
n) bits, even for the line of length n. This lower bound meets a previously
known upper bound of O(log n) bits for rendezvous in arbitrary graphs of size
at most n. Our main result is a proof that the amount of memory needed for
rendezvous with simultaneous start depends essentially on the number L of
leaves of the tree, and is exponentially less impacted by the number n of
nodes. Indeed, we present two identical agents with O(log L + loglog n) bits of
memory that solve the rendezvous problem in all trees with at most n nodes and
at most L leaves. Hence, for the class of trees with polylogarithmically many
leaves, there is an exponential gap in minimum memory size needed for
rendezvous between the scenario with arbitrary delay and the scenario with
delay zero. Moreover, we show that our upper bound is optimal by proving that
Omega(log L + loglog n) bits of memory are required for rendezvous, even in
the class of trees with degrees bounded by 3.
|
1102.0485
|
Design, Implementation and Characterization of a Cooperative
Communications System
|
cs.IT cs.NI math.IT
|
Cooperative communications is a class of techniques which seek to improve
reliability and throughput in wireless systems by pooling the resources of
distributed nodes. While cooperation can occur at different network layers and
time scales, physical layer cooperation at symbol time scales offers the
largest benefit in combating losses due to fading. However, symbol level
cooperation poses significant implementation challenges, especially in
synchronizing the behaviors and carrier frequencies of distributed nodes. We
present the implementation and characterization of a complete, real-time
cooperative physical layer transceiver built on the Rice Wireless Open-Access
Research Platform (WARP). In our implementation autonomous nodes employ
physical layer cooperation without a central synchronization source, and are
capable of selecting between non-cooperative and cooperative communication per
packet. Cooperative transmissions use a distributed Alamouti space-time block
code and employ either amplify-and-forward or decode-and-forward relaying. We
also present experimental results of our transceiver's real-time performance
under a variety of topologies and propagation conditions. Our results clearly
demonstrate significant performance gains (more than 40x improvement in PER in
some topologies) provided by physical layer cooperation, even when subject to
the constraints of a real-time implementation. We also present methodologies to
isolate and understand the sources of performance bottlenecks in our design. As
with all our work on WARP, our transceiver design and experimental framework
are available via the open-source WARP repository for use by other wireless
researchers.
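The distributed Alamouti code mentioned above can be sketched in its textbook single-receive-antenna form (a hedged illustration with assumed channel values; the WARP implementation details are not reproduced):

```python
def alamouti_encode(s1, s2):
    """Slot 1: the two relays send (s1, s2); slot 2: (-conj(s2), conj(s1))."""
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at the destination. In the noiseless case the
    returned estimates equal (|h1|^2 + |h2|^2) times s1 and s2."""
    return (h1.conjugate() * r1 + h2 * r2.conjugate(),
            h2.conjugate() * r1 - h1 * r2.conjugate())

# Noiseless round trip over flat per-relay channels h1, h2 (assumed values).
h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j
s1, s2 = 1 + 1j, -1 + 1j
(a1, a2), (b1, b2) = alamouti_encode(s1, s2)
r1 = h1 * a1 + h2 * a2          # received sample in slot 1
r2 = h1 * b1 + h2 * b2          # received sample in slot 2
g = abs(h1) ** 2 + abs(h2) ** 2
e1, e2 = alamouti_combine(r1, r2, h1, h2)
```

Dividing the combined estimates by g recovers the transmitted symbols exactly in the noiseless case; with noise, the same combining yields full diversity order two.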
|
1102.0522
|
Uncertainty Relations and Sparse Signal Recovery for Pairs of General
Signal Sets
|
cs.IT math.IT
|
We present an uncertainty relation for the representation of signals in two
different general (possibly redundant or incomplete) signal sets. This
uncertainty relation is relevant for the analysis of signals containing two
distinct features each of which can be described sparsely in a suitable general
signal set. Furthermore, the new uncertainty relation is shown to lead to
improved sparsity thresholds for recovery of signals that are sparse in general
dictionaries. Specifically, our results improve on the well-known
$(1+1/d)/2$-threshold for dictionaries with coherence $d$ by up to a factor of
two. Furthermore, we provide probabilistic recovery guarantees for pairs of
general dictionaries that also allow us to understand which parts of a general
dictionary one needs to randomize over to "weed out" the sparsity patterns that
prohibit breaking the square-root bottleneck.
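The coherence $d$ and the classical $(1+1/d)/2$ threshold referenced above can be computed directly. A minimal sketch with a toy dictionary (assumed, for illustration only):

```python
import math

def coherence(atoms):
    """Mutual coherence of a dictionary: the largest absolute inner product
    between two distinct unit-norm atoms (real-valued here for simplicity)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return max(abs(dot(atoms[i], atoms[j]))
               for i in range(len(atoms)) for j in range(i + 1, len(atoms)))

# Toy dictionary in R^2: the standard basis plus one extra normalized atom.
s = 1 / math.sqrt(2)
D = [[1.0, 0.0], [0.0, 1.0], [s, s]]
d = coherence(D)
threshold = (1 + 1 / d) / 2  # classical sparsity threshold from the abstract
```

The paper's contribution is to improve on this threshold by up to a factor of two for pairs of general dictionaries.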
|
1102.0540
|
Information theory of massively parallel probe storage channels
|
cs.IT cs.IR math.IT
|
Motivated by the concept of probe storage, we study the problem of
information retrieval using a large array of N nano-mechanical probes, N ~
4000. At the nanometer scale it is impossible to avoid errors in the
positioning of the array, thus all signals retrieved by the probes of the array
at a given sampling moment are affected by the same amount of random position
jitter. Therefore a massively parallel probe storage device is an example of a
noisy communication channel with long range correlations between channel
outputs due to the global positioning errors. We find that these correlations
have a profound effect on the channel's properties. For example, it turns out
that the channel's information capacity does approach 1 bit per probe in the
limit of high signal-to-noise ratio, but the rate of the approach is only
polynomial in the channel noise strength. Moreover, any error correction code
with block size N >> 1 such that codewords correspond to the instantaneous
outputs of all the probes in the array exhibits an error floor independently of
the code rate. We illustrate this phenomenon explicitly using Reed-Solomon
codes the performance of which is easy to simulate numerically. We also discuss
capacity-achieving error correction codes for the global jitter channel and
their complexity.
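The error floor caused by global jitter can be illustrated with a toy simulation (an assumed, much-simplified model in which misalignment is all-or-nothing and i.i.d. across sampling moments):

```python
import random

def frame_error_rate(n_probes=100, n_frames=2000, p_jitter=0.01,
                     correctable=10, seed=1):
    """All probes share the same positioning error, so when jitter strikes
    it corrupts the whole array at once; a code over the N parallel outputs
    then fails regardless of its rate, producing an error floor."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_frames):
        if rng.random() < p_jitter:
            # global misalignment: every probe reads an unreliable bit
            errors = sum(rng.random() < 0.5 for _ in range(n_probes))
        else:
            errors = 0
        if errors > correctable:
            failures += 1
    return failures / n_frames

# The frame error rate stays near p_jitter no matter how strong the code is.
```

Because the errors are perfectly correlated across probes, lowering the code rate does not lower the floor, matching the abstract's claim for block codes over the instantaneous array output.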
|
1102.0603
|
Persistent Robotic Tasks: Monitoring and Sweeping in Changing
Environments
|
cs.RO math.OC
|
We present controllers that enable mobile robots to persistently monitor or
sweep a changing environment. The changing environment is modeled as a field
which grows in locations that are not within range of a robot, and decreases in
locations that are within range of a robot. We assume that the robots travel on
given closed paths. The speed of each robot along its path is controlled to
prevent the field from growing unbounded at any location. We consider the space
of speed controllers that can be parametrized by a finite set of basis
functions. For a single robot, we develop a linear program that is guaranteed
to compute a speed controller in this space to keep the field bounded, if such
a controller exists. Another linear program is then derived whose solution is
the speed controller that minimizes the maximum field value over the
environment. We extend our linear program formulation to develop a multi-robot
controller that keeps the field bounded. The multi-robot controller has the
unique feature that it does not require communication among the robots.
Simulation studies demonstrate the robustness of the controllers to modeling
errors, and to stochasticity in the environment.
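The linear-program structure can be sketched as follows, with notation assumed for illustration (not taken from the paper). Parametrizing the robot's reciprocal speed along the path by basis functions makes the time spent on any path segment linear in the coefficients $\alpha_i$, so boundedness of the field at each point $p$ becomes a linear feasibility constraint:

```latex
\text{find } \alpha \quad \text{s.t.} \quad
f(p)\int_{0}^{\Theta} \frac{\mathrm{d}\theta}{v(\theta)}
\;\le\;
c(p)\int_{V(p)} \frac{\mathrm{d}\theta}{v(\theta)}
\quad \forall p,
\qquad
\frac{1}{v(\theta)} = \sum_i \alpha_i\,\beta_i(\theta) > 0,
```

where $f(p)$ is the growth rate at $p$, $c(p)$ the consumption rate while $p$ is covered, and $V(p)$ the set of path positions from which $p$ is within range. Minimizing the maximum field value adds a linear objective over the same constraints, giving the second linear program described above.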
|
1102.0604
|
A small-world of weak ties provides optimal global integration of
self-similar modules in functional brain networks
|
physics.bio-ph cond-mat.stat-mech cs.SI physics.soc-ph q-bio.NC
|
The human brain is organized in functional modules. Such an organization
presents a basic conundrum: modules ought to be sufficiently independent to
guarantee functional specialization and sufficiently connected to bind multiple
processors for efficient information transfer. It is commonly accepted that
small-world architecture of short lengths and large local clustering may solve
this problem. However, there is intrinsic tension between shortcuts generating
small-worlds and the persistence of modularity; a global property unrelated to
local clustering. Here, we present a possible solution to this puzzle. We first
show that a modified percolation theory can define a set of hierarchically
organized modules made of strong links in functional brain networks. These
modules are "large-world" self-similar structures and, therefore, are far from
being small-world. However, incorporating weaker ties to the network converts
it into a small-world preserving an underlying backbone of well-defined
modules. Remarkably, weak ties are precisely organized as predicted by theory
maximizing information transfer with minimal wiring cost. This trade-off
architecture is reminiscent of the "strength of weak ties" crucial concept of
social networks. Such a design suggests a natural solution to the paradox of
efficient information flow in the highly modular structure of the brain.
|
1102.0629
|
Time-Varying Graphs and Social Network Analysis: Temporal Indicators and
Metrics
|
cs.SI cs.AI cs.DC physics.soc-ph
|
Most instruments - formalisms, concepts, and metrics - for the analysis of
social networks fail to capture their dynamics. Typical systems exhibit different
scales of dynamics, ranging from the fine-grain dynamics of interactions (which
recently led researchers to consider temporal versions of distance,
connectivity, and related indicators), to the evolution of network properties
over longer periods of time. This paper proposes a general approach to study
that evolution for both atemporal and temporal indicators, based respectively
on sequences of static graphs and sequences of time-varying graphs that cover
successive time-windows. All the concepts and indicators, some of which are
new, are expressed using a time-varying graph formalism.
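As a toy illustration of one temporal indicator (an assumed formalization, not the paper's): the earliest-arrival time of a temporal journey in a time-varying graph given as a list of dated contacts:

```python
def earliest_arrival(contacts, source, t_start=0):
    """contacts: iterable of (t, u, v), meaning the undirected edge {u, v}
    can be crossed at time t (taking one time unit). Returns a dict of
    earliest arrival times reachable from `source` by a temporal journey."""
    arrival = {source: t_start}
    for t, u, v in sorted(contacts):
        for a, b in ((u, v), (v, u)):
            if (arrival.get(a, float("inf")) <= t
                    and t + 1 < arrival.get(b, float("inf"))):
                arrival[b] = t + 1
    return arrival
```

Note that temporal distance is not symmetric and depends on contact ordering: an early direct contact can beat a later multi-hop journey, which is exactly why atemporal indicators miss this structure.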
|
1102.0651
|
Wikipedia information flow analysis reveals the scale-free architecture
of the Semantic Space
|
physics.soc-ph cs.IR cs.SI physics.data-an
|
In this paper we extract the topology of the semantic space in its
encyclopedic sense, measuring the semantic flow between the different
entries of the largest modern encyclopedia, Wikipedia, and thus creating a
directed complex network of semantic flows. Notably at the percolation
threshold the semantic space is characterised by scale-free behaviour at
different levels of complexity and this relates the semantic space to a wide
range of biological, social and linguistic phenomena. In particular we find
that the cluster size distribution, representing the size of different semantic
areas, is scale-free. Moreover the topology of the resulting semantic space is
scale-free in the connectivity distribution and displays small-world
properties. However its statistical properties do not allow a classical
interpretation via a generative model based on a simple multiplicative process.
After giving a detailed description and interpretation of the topological
properties of the semantic space, we introduce a stochastic model of
content-based network, based on a copy-and-mutation algorithm and on Heaps'
law, that is able to capture the main statistical properties of the analysed
semantic space, including Zipf's law for the word frequency distribution.
|
1102.0674
|
Effective Mechanism for Social Recommendation of News
|
physics.soc-ph cs.SI
|
Recommendation systems represent an important tool for news distribution on
the Internet. In this work we modify a recently proposed social recommendation
model to cope with the absence of explicit user ratings on news. The model
consists of a network of users which continually adapts in order to achieve an
efficient news traffic. To optimize the network's topology we propose different
stochastic algorithms that are scalable with respect to the network's size.
Agent-based simulations reveal the features and the performance of these
algorithms. To overcome the resultant drawbacks of each method we introduce two
improved algorithms and show that they can optimize the network's topology
almost as fast and effectively as other non-scalable methods that make use of much
more information.
|
1102.0676
|
Architecture of A Scalable Dynamic Parallel WebCrawler with High Speed
Downloadable Capability for a Web Search Engine
|
cs.IR
|
Today the World Wide Web (WWW) has become a huge ocean of information and it is
growing in size everyday. Downloading even a fraction of this mammoth data is
like sailing through a huge ocean and it is a challenging task indeed. In order
to download a large portion of data from WWW, it has become absolutely
essential to make the crawling process parallel. In this paper we offer the
architecture of a dynamic parallel Web crawler, christened as "WEB-SAILOR,"
which presents a scalable approach based on Client-Server model to speed up the
download process on behalf of a Web Search Engine in a distributed Domain-set
specific environment. WEB-SAILOR removes the possibility of overlapping of
downloaded documents by multiple crawlers without even incurring the cost of
communication overhead among several parallel "client" crawling processes.
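One standard way to guarantee overlap-free parallel crawling without inter-crawler chatter is static partitioning of the URL space; a minimal sketch (the actual WEB-SAILOR protocol is client-server based and is not reproduced here):

```python
import hashlib

def assign_crawler(url, n_crawlers):
    """Map every URL of a given domain to exactly one crawler, so no page is
    downloaded twice and crawlers never need to exchange 'seen' sets."""
    domain = url.split("/")[2] if "://" in url else url.split("/")[0]
    digest = hashlib.sha1(domain.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_crawlers
```

Hashing whole domains rather than individual URLs also keeps per-site politeness state local to one crawler, a common design choice in distributed crawlers.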
|
1102.0683
|
Volatility made observable at last
|
q-fin.CP cs.CE q-fin.ST
|
The Cartier-Perrin theorem, which was published in 1995 and is expressed in
the language of nonstandard analysis, permits, for the first time perhaps, a
clear-cut mathematical definition of the volatility of a financial asset. It
yields as a byproduct a new understanding of the means of returns, of the beta
coefficient, and of the Sharpe and Treynor ratios. New estimation techniques
from automatic control and signal processing, which were already successfully
applied in quantitative finance, lead to several computer experiments with some
quite convincing forecasts.
|
1102.0686
|
Towards an axiomatic system for Kolmogorov complexity
|
cs.IT cs.CC cs.LO math.IT math.LO
|
In [She82], it is shown that four basic functional properties are enough to
characterize plain Kolmogorov complexity, hence obtaining an axiomatic
characterization of this notion. In this paper, we try to extend this work,
both by looking at alternative axiomatic systems for plain complexity and by
considering potential axiomatic systems for other types of complexity. First we
show that the axiomatic system given by Shen cannot be weakened (at least in
any natural way). We then give an analogue of Shen's axiomatic system for
conditional complexity. In the second part of the paper, we look at
prefix-free complexity and try to construct an axiomatic system for it. We show
however that the natural analogues of Shen's axiomatic systems fail to
characterize prefix-free complexity.
|
1102.0690
|
A New Sum-Rate Outer Bound for Interference Channels with Three
Source-Destination Pairs
|
cs.IT math.IT
|
This paper derives a novel sum-rate outer bound for the general memoryless
interference channel with three users. The derivation is a generalization of
the techniques developed by Kramer and by Etkin et al for the Gaussian two-user
channel. For the three-user Gaussian channel the proposed sum-rate outer bound
outperforms known bounds for certain channel parameters.
|
1102.0694
|
A Syntactic Classification based Web Page Ranking Algorithm
|
cs.IR
|
The existing search engines sometimes give unsatisfactory search results for
lack of any categorization of the results. If there were a means to learn a
user's preferences about the search results and to rank pages according to
those preferences, the results would be more useful and accurate. In the
present paper a web page ranking algorithm is proposed based on syntactic
classification of web pages. Syntactic Classification does not bother about the
meaning of the content of a web page. The proposed approach mainly consists of
three steps: select some properties of web pages based on user's demand,
measure them, and give different weightage to each property during ranking for
different types of pages. The existence of syntactic classification is
supported by running the fuzzy c-means algorithm and a neural network
classifier on a set of web pages. The change in ranking across different
page types for the same query string is also demonstrated.
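The three-step scheme (select properties, measure them, weight them per page class) can be sketched as a weighted score; the property names and weights below are hypothetical:

```python
def rank_score(measures, class_weights):
    """measures: property -> normalized value in [0, 1] for one page.
    class_weights: property -> weight for the page's syntactic class."""
    return sum(class_weights.get(prop, 0.0) * value
               for prop, value in measures.items())

# Hypothetical example: the same page ranks differently depending on whether
# the query favors image-rich or text-rich pages.
page = {"image_fraction": 0.8, "text_fraction": 0.2, "link_density": 0.5}
image_class = {"image_fraction": 0.7, "text_fraction": 0.1, "link_density": 0.2}
text_class = {"image_fraction": 0.1, "text_fraction": 0.7, "link_density": 0.2}
```

Under the image-oriented weights this page scores 0.68; under the text-oriented weights it scores 0.32, illustrating how per-class weighting reorders results for the same query.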
|
1102.0695
|
A Domain Specific Ontology Based Semantic Web Search Engine
|
cs.IR
|
Since its emergence in the 1990s the World Wide Web (WWW) has rapidly evolved
into a huge mine of global information and it is growing in size everyday. The
presence of huge amount of resources on the Web thus poses a serious problem of
accurate search. This is mainly because today's Web is a human-readable Web
where information cannot be easily processed by machines. Highly sophisticated,
efficient keyword based search engines that have evolved today have not been
able to bridge this gap. This led to the concept of the Semantic Web,
envisioned by Tim Berners-Lee as a Web of machine-interpretable information
expressed in a machine-processable form. Based on the
semantic Web technologies we present in this paper the design methodology and
development of a semantic Web search engine which provides exact search results
for a domain specific search. This search engine is developed for an
agricultural Website which hosts agricultural information about the state of
West Bengal.
|
1102.0699
|
Explore what-if scenarios with SONG: Social Network Write Generator
|
cs.SI cs.NI physics.soc-ph
|
Online Social Networks (OSNs) have witnessed tremendous growth over the last
few years, becoming a platform for online users to communicate, exchange content
and even find employment. The emergence of OSNs has attracted researchers and
analysts and much data-driven research has been conducted. However, collecting
data-sets is non-trivial and sometimes it is difficult for data-sets to be
shared between researchers. The main contribution of this paper is a framework
called SONG (Social Network Write Generator) to generate synthetic traces of
write activity on OSNs. We build our framework based on a characterization
study of a large Twitter data-set and identifying the important factors that
need to be accounted for. We show how one can generate traces with SONG and
validate it by comparing against real data. We discuss how one can extend and
use SONG to explore different `what-if' scenarios. We build a Twitter clone
using 16 machines and Cassandra. We then show by example the usefulness of SONG
by stress-testing our implementation. We hope that SONG is used by researchers
and analysts for their own work that involves write activity.
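A toy flavor of such a generator (the factors SONG actually models come from the paper's Twitter characterization study and are not reproduced; the heavy-tailed per-user rates below are an assumption):

```python
import random

def generate_write_trace(n_users=50, horizon=1000.0, seed=7):
    """Emit a time-ordered list of (timestamp, user) write events; each user
    posts as a Poisson process whose rate is drawn from a heavy-tailed
    distribution, mimicking the skewed activity seen in OSNs."""
    rng = random.Random(seed)
    trace = []
    for user in range(n_users):
        rate = 0.01 * rng.paretovariate(1.5)  # assumed activity model
        t = rng.expovariate(rate)
        while t < horizon:
            trace.append((t, user))
            t += rng.expovariate(rate)
    trace.sort()
    return trace
```

Scaling `n_users` or the rate distribution is how a what-if scenario (e.g. "what if activity doubles?") would be expressed before replaying the trace against a backend such as the paper's Cassandra-based Twitter clone.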
|
1102.0710
|
Universal Communication over Arbitrarily Varying Channels
|
cs.IT math.IT
|
We consider the problem of universally communicating over an unknown and
arbitrarily varying channel, using feedback. The focus of this paper is on
determining the input behavior, and specifically, a prior distribution which is
used to randomly generate the codebook. We pose the problem of setting the
prior as a sequential universal prediction problem, that attempts to approach a
given target rate, which depends on the unknown channel sequence. The main
result is that, for a channel comprised of an unknown, arbitrary sequence of
memoryless channels, there is a system using feedback and common randomness
that asymptotically attains, with high probability, the capacity of the
time-averaged channel, universally for every sequence of channels. While no
prior knowledge of the channel sequence is assumed, the rate achieved meets or
exceeds the traditional arbitrarily varying channel (AVC) capacity for every
memoryless AVC defined over the same alphabets, and therefore the system
universally attains the random code AVC capacity, without knowledge of the AVC
parameters. The system we present combines rateless coding with a universal
prediction scheme for the prior. We present rough upper bounds on the rates
that can be achieved in this setting and lower bounds for the redundancies.
|
1102.0714
|
An architecture for the evaluation of intelligent systems
|
cs.AI
|
One of the main research areas in Artificial Intelligence is the coding of
agents (programs) which are able to learn by themselves in any situation. This
means that agents must be useful for purposes other than those they were
created for, such as playing chess. In this way we try to get closer
to the pristine goal of Artificial Intelligence. One of the obstacles to
deciding whether an agent is really intelligent is the measurement of its
intelligence, since there is currently no reliable way to measure it.
The purpose of this project is to create an interpreter that allows for the
execution of several environments, including those which are generated
randomly, so that an agent (a person or a program) can interact with them. Once
the interaction between the agent and the environment is over, the interpreter
will measure the intelligence of the agent according to the actions, states and
rewards the agent has undergone inside the environment during the test. As a
result we will be able to measure agents' intelligence in any possible
environment, and to make comparisons between several agents, in order to
determine which of them is the most intelligent. In order to perform the tests,
the interpreter must be able to randomly generate environments that are really
useful for measuring agents' intelligence, since not every randomly generated
environment will serve that purpose.
|
1102.0735
|
Analyzing the Impact of Visitors on Page Views with Google Analytics
|
cs.IR
|
This paper develops a flexible methodology to analyze the effect of
different explanatory variables on various dependent variables, all of which
are time series. In particular, it shows how to apply time series regression
to one of the most important primary indices in Google Analytics (page views
per visit), and how to choose the most suitable data to obtain a more
accurate result. Search engine visitors have a variety of impacts on page
views which cannot be described by a single regression. On the one hand,
referral visitors are well fitted by a linear regression with low impact. On
the other hand, direct visitors have a huge impact on page views. A higher
connection speed does not simply imply a higher impact on page views; the
content of a web page and the territory of its visitors can help connection
speed describe user behavior. Returning visitors show some similarities with
direct visitors.
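The per-segment linear fits described above boil down to ordinary least squares on two time series; a self-contained sketch with hypothetical data (not the paper's):

```python
def ols_fit(x, y):
    """Ordinary least squares for y ~ a + b * x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical daily series: referral visits vs. page views per visit.
visits = [120, 150, 90, 200, 170]
page_views = [2.1, 2.3, 1.9, 2.6, 2.4]
intercept, slope = ols_fit(visits, page_views)
```

Fitting each visitor segment (referral, direct, returning) separately, as the abstract suggests, amounts to running this regression once per segment rather than pooling them into a single model.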
|
1102.0755
|
Message and State Cooperation in a Relay Channel When the Relay Has
Strictly Causal State Information
|
cs.IT math.IT
|
A state-dependent relay channel is studied in which strictly causal channel
state information is available at the relay and no state information is
available at the source and destination. Source and relay are connected via two
unidirectional out-of-band orthogonal links of finite capacity, and a
state-dependent memoryless channel connects source and relay, on one side, and
the destination, on the other. Via the orthogonal links, the source can convey
information about the message to be delivered to the destination to the relay
while the relay can forward state information to the source. This exchange
enables cooperation between source and relay on both transmission of message
and state information to the destination. First, an achievable scheme, inspired
by noisy network coding, is proposed that exploits both message and state
cooperation. Next, based on the given achievable rate and appropriate upper
bounds, capacity results are identified for some special cases. Finally, a
Gaussian model is studied, along with corresponding numerical results that
illuminate the relative merits of state and message cooperation.
|
1102.0768
|
Message and State Cooperation in a Relay Channel When Only the Relay
Knows the State
|
cs.IT math.IT
|
A state-dependent relay channel is studied in which strictly causal channel
state information is available at the relay and no state information is
available at the source and destination. The source and the relay are connected
via two unidirectional out-of-band orthogonal links of finite capacity, and a
state-dependent memoryless channel connects the source and the relay, on one
side, and the destination, on the other. Via the orthogonal links, the source
can convey information about the message to be delivered to the destination to
the relay while the relay can forward state information to the source. This
exchange enables cooperation between the source and the relay on transmission
of message and state information to the destination. First, two achievable
schemes are proposed that exploit both message and state cooperation. It is
shown that a transmission scheme inspired by noisy network coding performs
better than a strategy based on block Markov coding and backward decoding.
Next, based on the given achievable schemes and appropriate upper bounds,
capacity results are identified for some special cases. Finally, a Gaussian
model is studied, along with corresponding numerical results that illuminate
the relative merits of state and message cooperation.
|
1102.0817
|
Natural images from the birthplace of the human eye
|
q-bio.NC cs.CV
|
Here we introduce a database of calibrated natural images publicly available
through an easy-to-use web interface. Using a Nikon D70 digital SLR camera, we
acquired about 5000 six-megapixel images of Okavango Delta of Botswana, a
tropical savanna habitat similar to where the human eye is thought to have
evolved. Some sequences of images were captured unsystematically while
following a baboon troop; others were designed to vary a single parameter
such as aperture, object distance, time of day or position on the horizon.
Images are available in the raw RGB format and in grayscale. Images are also
available in units relevant to the physiology of human cone photoreceptors,
where pixel values represent the expected number of photoisomerizations per
second for cones sensitive to long (L), medium (M) and short (S) wavelengths.
This database is distributed under a Creative Commons Attribution-Noncommercial
Unported license to facilitate research in computer vision, psychophysics of
perception, and visual neuroscience.
|
1102.0831
|
Intelligent Semantic Web Search Engines: A Brief Survey
|
cs.AI
|
The World Wide Web (WWW) allows people to share information globally from
large database repositories. The amount of information has grown to billions
of records, and searching it requires specialized tools known generically as
search engines. Although many search engines are available today, retrieving
meaningful information remains difficult. To overcome this problem and
retrieve meaningful information intelligently, semantic web technologies are
playing a major role. In this paper we present a survey of search engine
generations and the role of search engines in the intelligent web and in
semantic search technologies.
|
1102.0836
|
EigenNet: A Bayesian hybrid of generative and conditional models for
sparse learning
|
cs.LG
|
It is a challenging task to select correlated variables in a high dimensional
space. To address this challenge, the elastic net has been developed and
successfully applied to many applications. Despite its great success, the
elastic net does not explicitly use correlation information embedded in data to
select correlated variables. To overcome this limitation, we present a novel
Bayesian hybrid model, the EigenNet, that uses the eigenstructures of data to
guide variable selection. Specifically, it integrates a sparse conditional
classification model with a generative model capturing variable correlations in
a principled Bayesian framework. We reparameterize the hybrid model in the
eigenspace to avoid overfitting and to increase the computational efficiency of
its MCMC sampler. Furthermore, we provide an alternative view to the EigenNet
from a regularization perspective: the EigenNet has an adaptive
eigenspace-based composite regularizer, which naturally generalizes the
$l_{1/2}$ regularizer used by the elastic net. Experiments on synthetic and
real data show that the EigenNet significantly outperforms the lasso, the
elastic net, and the Bayesian lasso in terms of prediction accuracy, especially
when the number of training samples is smaller than the number of variables.
|
1102.0899
|
Evidence Feed Forward Hidden Markov Model: A New Type of Hidden Markov
Model
|
cs.AI cs.CV cs.LG math.NA math.PR
|
The ability to predict the intentions of people based solely on their visual
actions is a skill only performed by humans and animals. The intelligence of
current computer algorithms has not reached this level of complexity, but there
are several research efforts that are working towards it. With the number of
classification algorithms available, it is hard to determine which algorithm
works best for a particular situation. In classification of visual human intent
data, Hidden Markov Models (HMM), and their variants, are leading candidates.
The inability of HMMs to provide a probability for observation-to-observation
linkages is a significant weakness of this classification technique. If a
person is visually identifying an action of another person, they monitor
patterns in the observations. By estimating the next observation, people have
the ability to summarize the actions, and thus determine, with reasonably good
accuracy, the intention of the person performing the action. These visual cues
and linkages are important in creating intelligent algorithms for determining
human actions based on visual observations.
The Evidence Feed Forward Hidden Markov Model is a newly developed algorithm
which provides observation to observation linkages. The following research
addresses the theory behind Evidence Feed Forward HMMs, provides mathematical
proofs of the learning of their parameters to optimize the likelihood of
observations with an Evidence Feed Forward HMM, which is important in all
computational intelligence algorithms, and gives comparative examples with
standard HMMs in the classification of both visual action data and measurement
data, thus providing a strong base for Evidence Feed Forward HMMs in the
classification of many types of problems.
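For context, the standard HMM baseline against which the abstract compares can be sketched with the classical forward algorithm, which computes the likelihood of an observation sequence. This is a minimal illustrative sketch only: the Evidence Feed Forward variant's observation-to-observation linkages are not specified in the abstract and are not modeled here, and all numeric parameters are hypothetical.

```python
# Standard HMM forward algorithm (the baseline the abstract compares against).
# Illustrative sketch; the Evidence Feed Forward extension is not modeled here.

def hmm_forward(obs, init, trans, emit):
    """Return P(obs) under an HMM.
    init[i]: P(state i at t=0); trans[i][j]: P(j | i); emit[i][o]: P(o | state i)."""
    alpha = [init[i] * emit[i][obs[0]] for i in range(len(init))]
    for o in obs[1:]:
        # alpha on the right-hand side is the previous time step's vector.
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
                 for j in range(len(init))]
    return sum(alpha)

# Hypothetical two-state, two-symbol model.
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
p = hmm_forward([0, 1, 0], init, trans, emit)
```

The sequence likelihood `p` is a proper probability; the Evidence Feed Forward model additionally conditions on the previous observation, which this baseline does not.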
|
1102.0902
|
Disorder induced phase transition in kinetic models of opinion dynamics
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We propose a model of continuous opinion dynamics, where mutual interactions
can be both positive and negative. Different types of distributions for the
interactions, all characterized by a single parameter $p$ denoting the fraction
of negative interactions, are considered. Results from exact calculation of a
discrete version and numerical simulations of the continuous version of the
model indicate the existence of a universal continuous phase transition at
p=p_c below which a consensus is reached. Although the order-disorder
transition is analogous to a ferromagnetic-paramagnetic phase transition with
comparable critical exponents, the model is characterized by some distinctive
features relevant to a social system.
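A minimal Monte Carlo sketch of one discrete kinetic-exchange variant of such a model is given below. The update rule, the clipping of opinions to [-1, 1], and all parameter values are our illustrative assumptions, not the paper's exact specification; only the single parameter p (the fraction of negative interactions) mirrors the abstract.

```python
import random

def simulate(n=200, p=0.3, steps=20000, seed=1):
    """Kinetic opinion exchange: o_i <- clip(o_i + mu * o_j, -1, 1),
    where mu = -1 with probability p (negative interaction) and +1 otherwise.
    Returns the order parameter |mean opinion|. Illustrative sketch of the
    model class in the abstract; details here are our own assumptions."""
    random.seed(seed)
    o = [random.uniform(-1, 1) for _ in range(n)]
    for _ in range(steps):
        i, j = random.randrange(n), random.randrange(n)
        mu = -1.0 if random.random() < p else 1.0
        o[i] = max(-1.0, min(1.0, o[i] + mu * o[j]))
    return abs(sum(o) / n)
```

Sweeping p and plotting the order parameter would exhibit the ordered (consensus) phase at small p and the disordered phase at large p described in the abstract.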
|
1102.0918
|
Incentive Compatible Influence Maximization in Social Networks and
Application to Viral Marketing
|
cs.GT cs.SI physics.soc-ph
|
Information diffusion and influence maximization are important and
extensively studied problems in social networks. Various models and algorithms
have been proposed in the literature in the context of the influence
maximization problem. A crucial assumption in all these studies is that the
influence probabilities are known to the social planner. This assumption is
unrealistic since the influence probabilities are usually private information
of the individual agents and strategic agents may not reveal them truthfully.
Moreover, the influence probabilities could vary significantly with the type of
the information flowing in the network and the time at which the information is
propagating in the network. In this paper, we use a mechanism design approach
to elicit influence probabilities truthfully from the agents. We first work
with a simple model, the influencer model, where we assume that each user knows
the level of influence she has on her neighbors but this is private
information. In the second model, the influencer-influencee model, which is
more realistic, we determine influence probabilities by combining the
probability values reported by the influencers and influencees. In the context
of the first model, we present how VCG (Vickrey-Clarke-Groves) mechanisms could
be used for truthfully eliciting the influence probabilities. Our main
contribution is to design a scoring rule based mechanism in the context of the
influencer-influencee model. In particular, we show the incentive compatibility
of the mechanisms when the scoring rules are proper and propose a reverse
weighted scoring rule based mechanism as an appropriate mechanism to use. We
also discuss briefly the implementation of such a mechanism in viral marketing
applications.
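The incentive-compatibility argument above rests on scoring rules being proper: a forecaster maximizes her expected score by reporting her true belief. The paper's reverse weighted scoring rule is not reproduced here; instead, this sketch checks properness for the standard quadratic (Brier-type) rule, with hypothetical probabilities.

```python
def quadratic_score(report, outcome):
    """Quadratic (Brier-type) proper scoring rule: 2*q[outcome] - sum_k q_k^2."""
    return 2 * report[outcome] - sum(q * q for q in report)

def expected_score(true_p, report):
    """Expected score of a report when outcomes are drawn from true_p."""
    return sum(true_p[k] * quadratic_score(report, k) for k in range(len(true_p)))

# Hypothetical true belief over two outcomes.
true_p = [0.7, 0.3]
truthful = expected_score(true_p, [0.7, 0.3])   # report equals true belief
misreport = expected_score(true_p, [0.5, 0.5])  # any other report scores less
```

Properness means `truthful >= misreport` for every alternative report, which is the property the mechanisms in the abstract exploit to elicit influence probabilities truthfully.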
|
1102.0930
|
An Evaluation of Link Neighborhood Lexical Signatures to Rediscover
Missing Web Pages
|
cs.IR cs.DL cs.SI
|
For discovering the new URI of a missing web page, lexical signatures, which
consist of a small number of words chosen to represent the "aboutness" of a
page, have been previously proposed. However, prior methods relied on computing
the lexical signature before the page was lost, or using cached or archived
versions of the page to calculate a lexical signature. We demonstrate a system
of constructing a lexical signature for a page from its link neighborhood, that
is the "backlinks", or pages that link to the missing page. After testing
various methods, we show that one can construct a lexical signature for a
missing web page using only ten backlink pages. Further, we show that only the
first level of backlinks is useful in this effort. The text that the backlinks
use to point to the missing page is used as input for the creation of a
four-word lexical signature. That lexical signature is shown to successfully
find the target URI in over half of the test cases.
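The core step, building a k-word lexical signature from backlink anchor texts, can be sketched as a simple term-frequency ranking. The four-word length follows the abstract; the stopword list, tokenization, and tie-breaking are our illustrative choices, and the anchor texts are hypothetical.

```python
from collections import Counter

# Illustrative stopword list; the paper's actual preprocessing is not specified here.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "for", "on", "in"}

def lexical_signature(anchor_texts, k=4):
    """Build a k-word lexical signature from backlink anchor texts by
    term frequency, mirroring the idea described in the abstract."""
    counts = Counter()
    for text in anchor_texts:
        for word in text.lower().split():
            if word not in STOPWORDS:
                counts[word] += 1
    return [w for w, _ in counts.most_common(k)]

# Hypothetical anchor texts pointing at a missing page.
anchors = ["digital library research", "library research group",
           "research on digital preservation", "digital library"]
sig = lexical_signature(anchors)
```

The resulting signature would then be submitted as a search-engine query to rediscover the missing page's new URI.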
|
1102.0952
|
Pattern tree-based XOLAP rollup operator for XML complex hierarchies
|
cs.DB
|
With the rise of XML as a standard for representing business data, XML data
warehousing appears as a suitable solution for decision-support applications.
In this context, it is necessary to allow OLAP analyses on XML data cubes.
Thus, XQuery extensions are needed. To define a formal framework and allow
much-needed performance optimizations on analytical queries expressed in
XQuery, defining an algebra is desirable. However, XML-OLAP (XOLAP) algebras
from the literature still largely rely on the relational model. Hence, we
propose in this paper a rollup operator based on a pattern tree in order to
handle multidimensional XML data expressed within complex hierarchies.
|
1102.0958
|
Quantitative Stability and Optimality Conditions in Convex Semi-Infinite
and Infinite Programming
|
math.OC cs.SY
|
This paper concerns parameterized convex infinite (or semi-infinite)
inequality systems whose decision variables run over general
infinite-dimensional Banach (resp. finite-dimensional) spaces and that are
indexed by an arbitrary fixed set T . Parameter perturbations on the right-hand
side of the inequalities are measurable and bounded, and thus the natural
parameter space is $l_{\infty}(T)$. Based on advanced variational analysis, we
derive a precise formula for computing the exact Lipschitzian bound of the
feasible solution map, which involves only the system data, and then show that
this exact bound agrees with the coderivative norm of the aforementioned
mapping. On one hand, in this way we extend to the convex setting the results
of [4] developed in the linear framework under the boundedness assumption on
the system coefficients. On the other hand, in the case when the decision space
is reflexive, we succeed in removing this boundedness assumption in the general
convex case, thereby establishing results that are new even for linear infinite
and semi-infinite systems. The last part of the paper provides verifiable
necessary
optimality conditions for infinite and semi-infinite programs with convex
inequality constraints and general nonsmooth and nonconvex objectives. In this
way we extend the corresponding results of [5] obtained for programs with
linear infinite inequality constraints.
|
1102.0964
|
Structured interference-mitigation in two-hop networks
|
cs.IT math.IT
|
We consider two-hop S-R-D Gaussian networks with a source (S), a relay (R)
and a destination (D), some of which experience additive interference. This
additive interference, which renders the channels state-dependent, is either a)
experienced at the destination D and known non-causally at the source S, or b)
experienced at the relay R and known at the destination D. In both cases, one
would hope to exploit this knowledge of the channel state at some of the nodes
to obtain "clean" or interference-free channels, just as Costa's dirty-paper
coding does for one-hop channels with state non-causally known to the
transmitter. We demonstrate a scheme which achieves to within 0.5 bit of a
"clean" channel. This novel scheme is based on nested-lattice code and a
Decode-and-Forward (DF) relay. Intuitively, this strategy uses the structure
provided by nested lattice codes to cancel the "integer" (or lattice quantized)
part of the interference and treats the "residual" (or quantization noise) as
noise.
|
1102.0969
|
On the Complexity of Newman's Community Finding Approach for Biological
and Social Networks
|
physics.soc-ph cs.CC cs.DM cs.SI
|
Given a graph of interactions, a module (also called a community or cluster)
is a subset of nodes whose fitness is a function of the statistical
significance of the pairwise interactions of nodes in the module. The topic of
this paper is a model-based community finding approach, commonly referred to as
modularity clustering, that was originally proposed by Newman and has
subsequently been extremely popular in practice. Various heuristic methods are
currently employed for finding the optimal solution. However, the exact
computational complexity of this approach is still largely unknown.
To this end, we initiate a systematic study of the computational complexity
of modularity clustering. Due to the specific quadratic nature of the
modularity function, it is necessary to study its value on sparse graphs and
dense graphs separately. Our main results include a (1+\eps)-inapproximability
for dense graphs and a logarithmic approximation for sparse graphs. We make use
of several combinatorial properties of modularity to get these results. These
are the first non-trivial approximability results beyond the previously known
NP-hardness results.
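The quantity being optimized is Newman's modularity, Q = sum over communities c of (l_c/m - (d_c/2m)^2), where m is the edge count, l_c the internal edge count of c, and d_c its total degree. A minimal sketch of evaluating Q for a fixed partition (the hard part studied in the abstract is optimizing over partitions, which is not attempted here):

```python
def modularity(edges, community):
    """Newman modularity Q = sum_c (l_c/m - (d_c/(2m))^2) of an undirected
    graph. edges: list of (u, v) pairs; community: dict node -> label."""
    m = len(edges)
    internal = {}    # internal edge count per community
    degree_sum = {}  # total degree per community
    for u, v in edges:
        for node in (u, v):
            c = community[node]
            degree_sum[c] = degree_sum.get(c, 0) + 1
        if community[u] == community[v]:
            c = community[u]
            internal[c] = internal.get(c, 0) + 1
    return sum(internal.get(c, 0) / m - (degree_sum[c] / (2 * m)) ** 2
               for c in degree_sum)

# Two triangles joined by a single bridge edge, split at the bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, community)
```

Here the natural two-community split scores Q = 2*(3/7 - (7/14)^2) ≈ 0.357, illustrating the quadratic degree term that, per the abstract, forces separate treatment of sparse and dense graphs.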
|
1102.0987
|
Propagation on networks: an exact alternative perspective
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
By generating the specifics of a network structure only when needed
(on-the-fly), we derive a simple stochastic process that exactly models the
time evolution of susceptible-infectious dynamics on finite-size networks. The
small number of dynamical variables of this birth-death Markov process greatly
simplifies analytical calculations. We show how a dual analytical description,
treating large-scale epidemics with a Gaussian approximation and small
outbreaks with a branching process, provides an accurate approximation of the
distribution even for rather small networks. The approach also offers important
computational advantages and generalizes to a vast class of systems.
|
1102.1025
|
Deformed Statistics Kullback-Leibler Divergence Minimization within a
Scaled Bregman Framework
|
cond-mat.stat-mech cs.IT math-ph math.IT math.MP
|
The generalized Kullback-Leibler divergence (K-Ld) in Tsallis statistics
[constrained by the additive duality of generalized statistics (dual
generalized K-Ld)] is here reconciled with the theory of Bregman divergences
for expectations defined by normal averages, within a measure-theoretic
framework. Specifically, it is demonstrated that the dual generalized K-Ld is a
scaled Bregman divergence. The Pythagorean theorem is derived from the minimum
discrimination information-principle using the dual generalized K-Ld as the
measure of uncertainty, with constraints defined by normal averages. The
minimization of the dual generalized K-Ld, with normal averages constraints, is
shown to exhibit distinctly unique features.
|
1102.1027
|
Collective Classification of Textual Documents by Guided
Self-Organization in T-Cell Cross-Regulation Dynamics
|
cs.IR cs.AI cs.LG nlin.AO q-bio.OT
|
We present and study an agent-based model of T-Cell cross-regulation in the
adaptive immune system, which we apply to binary classification. Our method
expands an existing analytical model of T-cell cross-regulation (Carneiro et
al. in Immunol Rev 216(1):48-68, 2007) that was used to study the
self-organizing dynamics of a single population of T-Cells in interaction with
an idealized antigen presenting cell capable of presenting a single antigen.
With agent-based modeling we are able to study the self-organizing dynamics of
multiple populations of distinct T-cells which interact via antigen presenting
cells that present hundreds of distinct antigens. Moreover, we show that such
self-organizing dynamics can be guided to produce an effective binary
classification of antigens, which is competitive with existing machine learning
methods when applied to biomedical text classification. More specifically, here
we test our model on a dataset of publicly available full-text biomedical
articles provided by the BioCreative challenge (Krallinger, in The BioCreative
II.5 Challenge Overview, p. 19, 2009). We study the robustness of our model's
parameter configurations, and show that it leads to encouraging results
comparable to state-of-the-art classifiers. Our results help us understand both
T-cell cross-regulation as a general principle of guided self-organization, as
well as its applicability to document classification. Therefore, we show that
our bio-inspired algorithm is a promising novel method for biomedical article
classification and for binary document classification in general.
|
1102.1038
|
Prisoner's Dilemma on Graphs with Large Girth
|
cs.SI math.PR physics.soc-ph
|
We study the evolution of cooperation in populations where individuals play
prisoner's dilemma on a network. Every node of the network corresponds to an
individual choosing whether to cooperate or defect in a repeated game. The
players revise their actions by imitating those neighbors who have higher
payoffs. We show that when the interactions take place on graphs with large
girth, cooperation is more likely to emerge. On the flip side, in graphs with
many cycles of length 3 and 4, defection spreads more rapidly. One of the key
ideas of our analysis is that our dynamics can be seen as a perturbation of the
voter model. We write the transition kernel of the corresponding Markov chain
in terms of the pairwise correlations in the voter model. We analyze the
pairwise correlation and show that in graphs with relatively large girth,
cooperators cluster and help each other.
|
1102.1064
|
A Decade of Database Research Publications
|
cs.DL cs.DB
|
We analyze the database research publications of four major core database
technology conferences (SIGMOD, VLDB, ICDE, EDBT), two main theoretical
database conferences (PODS, ICDT) and three database journals (TODS, VLDB
Journal, TKDE) over a period of 10 years (2001 - 2010). Our analysis considers
only regular papers as we do not include short papers, demo papers, posters,
tutorials or panels into our statistics. We rank the research scholars
according to their number of publications in each conference/journal separately
and in combination. We also report on the growth in the number of research
publications and the size of the research community in the last decade.
|
1102.1101
|
Total variation regularization for fMRI-based prediction of behaviour
|
cs.CV q-bio.NC
|
While medical imaging typically provides massive amounts of data, the
extraction of relevant information for predictive diagnosis remains a difficult
challenge. Functional MRI (fMRI) data, that provide an indirect measure of
task-related or spontaneous neuronal activity, are classically analyzed in a
mass-univariate procedure yielding statistical parametric maps. This analysis
framework disregards some important principles of brain organization:
population coding, distributed and overlapping representations. Multivariate
pattern analysis, i.e., the prediction of behavioural variables from brain
activation patterns better captures this structure. To cope with the high
dimensionality of the data, the learning method has to be regularized. However,
the spatial structure of the image is not taken into account in standard
regularization methods, so that the extracted features are often hard to
interpret. More informative and interpretable results can be obtained with the
l_1 norm of the image gradient, a.k.a. its Total Variation (TV), as
regularization. We apply for the first time this method to fMRI data, and show
that TV regularization is well suited to the purpose of brain mapping while
being a powerful tool for brain decoding. Moreover, this article presents the
first use of TV regularization for classification.
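The Total Variation regularizer mentioned above is the l1 norm of the image gradient. As a hedged illustration (the anisotropic finite-difference discretization is one common choice; the paper's fMRI pipeline is not reproduced), the penalty itself can be computed as:

```python
def total_variation(img):
    """Anisotropic total variation: l1 norm of horizontal and vertical
    finite differences of a 2-D array given as a list of lists."""
    tv = 0.0
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                tv += abs(img[r][c + 1] - img[r][c])
            if r + 1 < rows:
                tv += abs(img[r + 1][c] - img[r][c])
    return tv

flat = [[1.0] * 4 for _ in range(4)]                # constant image: TV = 0
step = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]     # one vertical edge
```

Because a constant region contributes nothing and only edges are penalized, TV regularization favors piecewise-constant weight maps, which is why the abstract argues it yields more interpretable brain maps.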
|
1102.1103
|
Compound Outage Probability and Capacity of a Class of Fading MIMO
Channels with Channel Distribution Uncertainty
|
cs.IT math.IT
|
Outage probability and capacity of a class of block-fading MIMO channels are
considered with partial channel distribution information. Specifically, the
channel or its distribution are not known but the latter is known to belong to
a class of distributions where each member is within a certain distance
(uncertainty) from a nominal distribution. Relative entropy is used as a
measure of distance between distributions. Compound outage probability defined
as min (over the transmit signal distribution) -max (over the channel
distribution class) outage probability is introduced and investigated. This
generalizes the standard outage probability to the case of partial channel
distribution information. Compound outage probability characterization (via
one-dimensional convex optimization), its properties and approximations are
given. It is shown to have two-regime behavior: when the nominal outage
probability decreases (e.g. by increasing the SNR), the compound outage first
decreases linearly down to a certain threshold (related to relative entropy
distance) and then only logarithmically (i.e. very slowly), so that no
significant further decrease is possible. The compound outage depends on the
relative entropy distance and the nominal outage only, all other details
(nominal fading and noise distributions) being irrelevant. The transmit signal
distribution optimized for the nominal channel distribution is shown to be also
optimal for the whole class of distributions. The effect of swapping the
distributions in relative entropy is investigated and an error floor effect is
established. The compound outage probability under Lp distance constraint is
also investigated. The obtained results hold for a generic channel model
(arbitrary nominal fading and noise distributions).
|
1102.1107
|
Robust Distributed Routing in Dynamical Flow Networks - Part I: Locally
Responsive Policies and Weak Resilience
|
cs.SY math.CA math.DS math.OC nlin.AO
|
Robustness of distributed routing policies is studied for dynamical flow
networks, with respect to adversarial disturbances that reduce the link flow
capacities. A dynamical flow network is modeled as a system of ordinary
differential equations derived from mass conservation laws on a directed
acyclic graph with a single origin-destination pair and a constant inflow at
the origin. Routing policies regulate the way the inflow at a non-destination
node gets split among its outgoing links as a function of the current particle
density, while the outflow of a link is modeled to depend on the current
particle density on that link through a flow function. The dynamical flow
network is called partially transferring if the total inflow at the destination
node is asymptotically bounded away from zero, and its weak resilience is
measured as the minimum sum of the link-wise magnitude of all disturbances that
make it not partially transferring. The weak resilience of a dynamical flow
network with arbitrary routing policy is shown to be upper-bounded by the
network's min-cut capacity, independently of the initial flow conditions.
Moreover, a class of distributed routing policies that rely exclusively on
local information on the particle densities, and are locally responsive to
that, is shown to yield such maximal weak resilience. These results imply that
locality constraints on the information available to the routing policies do
not cause loss of weak resilience. Some fundamental properties of dynamical
flow networks driven by locally responsive distributed policies are analyzed in
detail, including global convergence to a unique limit flow.
|
1102.1111
|
Treelicious: a System for Semantically Navigating Tagged Web Pages
|
cs.IR
|
Collaborative tagging has emerged as a popular and effective method for
organizing and describing pages on the Web. We present Treelicious, a system
that allows hierarchical navigation of tagged web pages. Our system enriches
the navigational capabilities of standard tagging systems, which typically
exploit only popularity and co-occurrence data. We describe a prototype that
leverages the Wikipedia category structure to allow a user to semantically
navigate pages from the Delicious social bookmarking service. In our system a
user can perform an ordinary keyword search and browse relevant pages but is
also given the ability to broaden the search to more general topics and narrow
it to more specific topics. We show that Treelicious indeed provides an
intuitive framework that allows for improved and effective discovery of
knowledge.
|
1102.1115
|
Adaptive Resource Allocation in Jamming Teams Using Game Theory
|
cs.GT cs.IT cs.SY math.IT math.OC
|
In this work, we study the problem of power allocation and adaptive
modulation in teams of decision makers. We consider the special case of two
teams with each team consisting of two mobile agents. Agents belonging to the
same team communicate over wireless ad hoc networks, and they try to split
their available power between the tasks of communication and jamming the nodes
of the other team. The agents have constraints on their total energy and
instantaneous power usage. The cost function adopted is the difference between
the rates of erroneously transmitted bits of each team. We model the adaptive
modulation problem as a zero-sum matrix game which in turn gives rise to a
continuous kernel game to handle power control. Based on the communications
model, we present sufficient conditions on the physical parameters of the
agents for the existence of a pure strategy saddle-point equilibrium (PSSPE).
|
1102.1140
|
Ranking-Based Black-Box Complexity
|
cs.NE cs.CC cs.DS
|
Randomized search heuristics such as evolutionary algorithms, simulated
annealing, and ant colony optimization are a broadly used class of
general-purpose algorithms. Analyzing them via classical methods of theoretical
computer science is a growing field. While several strong runtime analysis
results have appeared in the last 20 years, a powerful complexity theory for
such algorithms is yet to be developed. We enrich the existing notions of
black-box complexity by the additional restriction that not the actual
objective values, but only the relative quality of the previously evaluated
solutions may be taken into account by the black-box algorithm. Many randomized
search heuristics belong to this class of algorithms.
We show that the new ranking-based model gives more realistic complexity
estimates for some problems. For example, the class of all binary-value
functions has a black-box complexity of $O(\log n)$ in the previous black-box
models, but has a ranking-based complexity of $\Theta(n)$.
For the class of all OneMax functions, we present a ranking-based black-box
algorithm that has a runtime of $\Theta(n / \log n)$, which shows that the
OneMax problem does not become harder with the additional ranking-basedness
restriction.
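The ranking-based restriction can be illustrated with a small oracle wrapper that reveals only comparisons between queried solutions, never objective values, together with a hill climber on OneMax that uses nothing but those comparisons. This is an informal sketch of the model, not the paper's formal definition; parameters are our choices.

```python
import random

class RankingOracle:
    """Black box that reveals only the relative quality of queried
    solutions, never their objective values (the restriction in the abstract)."""
    def __init__(self, f):
        self._f = f          # hidden objective; never exposed to the algorithm
        self.queries = 0
    def better(self, x, y):
        """True if x is at least as good as y; values stay hidden."""
        self.queries += 1
        return self._f(x) >= self._f(y)

def onemax_hillclimb(n=12, budget=500, seed=0):
    """(1+1)-style hill climber on OneMax using only rank comparisons."""
    random.seed(seed)
    oracle = RankingOracle(lambda bits: sum(bits))
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(budget):
        # standard bit-flip mutation with rate 1/n
        y = [bit ^ 1 if random.random() < 1.0 / n else bit for bit in x]
        if oracle.better(y, x):
            x = y
    return x
```

Since the hill climber consults only `better`, it belongs to the ranking-based class; the abstract's point is that for OneMax this restriction does not increase the black-box complexity beyond Θ(n / log n).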
|
1102.1165
|
Achievable Rate Region for Multiple Access Channel with Correlated
Channel States and Cooperating Encoders
|
cs.IT math.IT
|
In this paper, a two-user discrete memoryless multiple-access channel
(DM-MAC) with correlated channel states, each known at one of the encoders is
considered, in which each encoder transmits independent messages and tries to
cooperate with the other one. To consider cooperating encoders, it is assumed
that each encoder strictly-causally receives and learns the other encoder's
transmitted symbols and tries to cooperate with the other encoder by
transmitting its message. Next, we study this channel in a special case: we
assume that the common part of both states is known at both encoders, and hence
the encoders use this opportunity to obtain a better rate region. For these
scenarios, an achievable
rate region is derived based on a combination of block-Markov encoding and
Gel'fand-Pinsker coding techniques. Furthermore, the achievable rate region is
established for the Gaussian channel, and it is shown that the capacity region
is achieved in certain circumstances.
|
1102.1167
|
Seats at the table: the network of the editorial boards in information
and library science
|
cs.DL cs.SI physics.soc-ph
|
The structural properties of the network generated by the editorial
activities of the members of the boards of "Information Science & Library
Science" journals are explored through network analysis techniques. The crossed
presence of scholars on editorial boards, the phenomenon called interlocking
editorship, is considered a proxy of the similarity of editorial policies. The
evidence supports the idea that this group of journals is better described as a
set of only relatively connected subfields. In particular, two main subfields
are identified, consisting of research-oriented journals devoted respectively
to LIS and MIS. The links between these two subsets are weak. Around these two
subsets there are a lot of (relatively) isolated professional journals or
journals characterized more by their subject-matter content than by their focus
on information flows. It is possible to suggest that this configuration of the
network may be the consequence of the youthfulness of Information Science &
Library Science, which has not yet permitted scholars to reach a general
consensus on research aims, methods and instruments.
|
1102.1168
|
Interlocking editorship. A network analysis of the links between
economic journals
|
cs.DL cs.SI physics.soc-ph
|
The exploratory analysis developed in this paper relies on the hypothesis
that each editor possesses some power in the definition of the editorial policy
of her journal. Consequently if the same scholar sits on the board of editors
of two journals, those journals could have some common elements in their
editorial policies. The proximity of the editorial policies of two scientific
journals can be assessed by the number of common editors sitting on their
boards. A database of all editors of ECONLIT journals is used. The structure of
the network generated by interlocking editorship is explored by applying the
instruments of network analysis. Evidence has been found of a compact network
containing different components. This is interpreted as the result of a
plurality of perspectives about the appropriate methods for the investigation
of problems and the construction of theories within the domain of economics.
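The basic construction, projecting board membership onto a weighted journal-journal network where edge weight counts shared editors, can be sketched as follows. Journal and editor names are hypothetical, not drawn from the ECONLIT data.

```python
from itertools import combinations

def interlock_network(boards):
    """Interlocking-editorship projection: weighted journal-journal network
    where the weight of an edge is the number of editors shared by the two
    boards. boards: dict journal -> set of editor names."""
    weights = {}
    for j1, j2 in combinations(sorted(boards), 2):
        shared = len(boards[j1] & boards[j2])
        if shared:
            weights[(j1, j2)] = shared
    return weights

# Hypothetical editorial boards.
boards = {
    "J. Econ A": {"smith", "lee", "garcia"},
    "J. Econ B": {"lee", "garcia", "chen"},
    "J. Econ C": {"patel"},
}
net = interlock_network(boards)
```

Standard network-analysis instruments (components, centrality, clustering) are then applied to `net`; in this toy example the third journal is isolated, the situation the abstract interprets as methodological plurality.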
|
1102.1173
|
Multi-Parameter Tikhonov Regularization
|
math.NA cs.SY math.OC
|
We study multi-parameter Tikhonov regularization, i.e., with multiple
penalties. Such models are useful when the sought-for solution exhibits several
distinct features simultaneously. Two choice rules, i.e., discrepancy principle
and balancing principle, are studied for choosing an appropriate
(vector-valued) regularization parameter, and some theoretical results are
presented. In particular, the consistency of the discrepancy principle as well
as convergence rate are established, and an a posteriori error estimate for the
balancing principle is established. Also two fixed point algorithms are
proposed for computing the regularization parameter by the latter rule.
Numerical results for several nonsmooth multi-parameter models are presented,
which show clearly their superior performance over their single-parameter
counterparts.
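As a hedged scalar illustration of the discrepancy principle studied above (the paper treats vector-valued, multi-penalty parameters; this toy has a single penalty and a closed-form solution): for the model x(alpha) = b/(1+alpha), the principle picks alpha so that the residual norm equals the noise level delta, which can be found by bisection on the monotone residual.

```python
def discrepancy_alpha(b_norm, delta, tol=1e-12):
    """Choose the Tikhonov parameter alpha by the discrepancy principle for
    the scalar toy model x(alpha) = b/(1+alpha): find alpha such that the
    residual ||x(alpha) - b|| = alpha/(1+alpha) * ||b|| equals delta.
    Requires 0 < delta < b_norm. One-parameter illustration only."""
    residual = lambda a: a / (1.0 + a) * b_norm - delta
    lo, hi = 0.0, 1.0
    while residual(hi) < 0:        # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:           # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = discrepancy_alpha(b_norm=2.0, delta=0.5)
```

The closed form here is alpha = delta/(||b|| - delta) = 1/3, so the bisection result can be checked directly; the fixed-point algorithms of the paper play the analogous role for the balancing principle.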
|
1102.1182
|
Phase transition in the detection of modules in sparse networks
|
cond-mat.stat-mech cs.LG cs.SI physics.soc-ph
|
We present an asymptotically exact analysis of the problem of detecting
communities in sparse random networks. Our results are also applicable to
detection of functional modules, partitions, and colorings in noisy planted
models. Using a cavity method analysis, we unveil a phase transition from a
region where the original group assignment is undetectable to one where
detection is possible. In some cases, the detectable region splits into an
algorithmically hard region and an easy one. Our approach naturally translates
into a practical algorithm for detecting modules in sparse networks, and
learning the parameters of the underlying model.
|
1102.1227
|
Exact recoverability from dense corrupted observations via $L_1$
minimization
|
cs.IT math.IT math.ST stat.TH
|
This paper confirms a surprising phenomenon first observed by Wright
\textit{et al.} \cite{WYGSM_Face_2009_J} \cite{WM_denseError_2010_J} under a
different setting: given $m$ highly corrupted measurements $y = A_{\Omega
\bullet} x^{\star} + e^{\star}$, where $A_{\Omega \bullet}$ is a submatrix
whose rows are selected uniformly at random from rows of an orthogonal matrix
$A$ and $e^{\star}$ is an unknown sparse error vector whose nonzero entries may
be unbounded, we show that with high probability $\ell_1$-minimization can
recover the sparse signal of interest $x^{\star}$ exactly from only $m = C
\mu^2 k (\log n)^2$ measurements, where $k$ is the number of nonzero components
of
$x^{\star}$ and $\mu = n \max_{ij} A_{ij}^2$, even if nearly 100% of the
measurements are corrupted. We further guarantee that stable recovery is
possible when measurements are polluted by both gross sparse and small dense
errors: $y = A_{\Omega \bullet} x^{\star} + e^{\star}+ \nu$ where $\nu$ is the
small dense noise with bounded energy. Numerous simulation results under
various settings are also presented to verify the validity of the theory as
well as to illustrate the promising potential of the proposed framework.
|
1102.1231
|
Cramer-Rao Bound for Blind Channel Estimators in Redundant Block
Transmission Systems
|
cs.IT math.IT
|
In this paper, we derive the Cramer-Rao bound (CRB) for blind channel
estimation in redundant block transmission systems, a lower bound for the mean
squared error of any blind channel estimators. The derived CRB is valid for any
full-rank linear redundant precoder, including both zero-padded (ZP) and
cyclic-prefixed (CP) precoders. A simple form of CRBs for multiple complex
parameters is also derived and presented which facilitates the CRB derivation
of the problem of interest. A comparison is made between the derived CRBs and
performances of existing subspace-based blind channel estimators for both CP
and ZP systems. Numerical results show that there is still some room for
performance improvement of blind channel estimators.
|
1102.1232
|
Asymptotic Spectral Efficiency of the Uplink in Spatially Distributed
Wireless Networks With Multi-Antenna Base Stations
|
cs.IT math.IT
|
The spectral efficiency of a representative uplink of a given length, in
interference-limited, spatially-distributed wireless networks with hexagonal
cells, simple power control, and multiantenna linear Minimum-Mean-Square-Error
receivers is found to approach an asymptote as the numbers of base-station
antennas N and wireless nodes go to infinity. An approximation for the
area-averaged spectral efficiency of a representative link (averaged over the
spatial base-station and mobile distributions), for Poisson distributed base
stations, is also provided. For large N, in the interference-limited regime,
the area-averaged spectral efficiency is primarily a function of the product
of N and the ratio of base-station to wireless-node densities,
indicating that it is possible to scale such networks by linearly increasing
the product of the number of base-station antennas and the relative density of
base stations to wireless nodes, with wireless-node density. The results are
useful for designers of wireless systems with high inter-cell interference
because they provide simple expressions for spectral efficiency as a function of
tangible system parameters like base-station and wireless-node densities, and
number of antennas. These results were derived by combining infinite random
matrix theory and stochastic geometry.
|
1102.1247
|
Randomness and dependencies extraction via polarization, with
applications to Slepian-Wolf coding and secrecy
|
cs.IT math.IT
|
The polarization phenomenon for a single source is extended to a framework
with multiple correlated sources. It is shown that, in addition to extracting
the randomness of the source, the polar transform takes the original arbitrary
dependencies to extremal dependencies. Polar coding schemes for the
Slepian-Wolf problem and for secret key generation are then proposed based on
this phenomenon. In particular, constructions of secret keys achieving the
secrecy capacity and compression schemes achieving the Slepian-Wolf capacity
region are obtained with a complexity of $O(n \log (n))$.
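The workhorse behind the $O(n \log n)$ complexity is the polar transform itself. As a minimal sketch (single-source binary case; the multi-source extension described above builds on the same transform), the Arıkan butterfly recursion can be written as:

```python
import numpy as np

def polar_transform(u):
    """Compute x = u F^{(tensor) n} over GF(2), with F = [[1,0],[1,1]],
    via the butterfly recursion in O(N log N); N = len(u) must be a
    power of two."""
    x = np.array(u, dtype=np.uint8) % 2
    N = len(x)
    step = 1
    while step < N:
        for i in range(0, N, 2 * step):
            # combine adjacent blocks: (a, b) -> (a XOR b, b)
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x
```

Since $F^2 = I$ over GF(2), the transform is its own inverse, which gives an easy correctness check: applying it twice returns the input.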
|
1102.1249
|
Compressible Distributions for High-dimensional Statistics
|
math.ST cs.IT math.IT stat.TH
|
We develop a principled way of identifying probability distributions whose
independent and identically distributed (iid) realizations are compressible,
i.e., can be well-approximated as sparse. We focus on Gaussian random
underdetermined linear regression (GULR) problems, where compressibility is
known to ensure the success of estimators exploiting sparse regularization. We
prove that many distributions revolving around maximum a posteriori (MAP)
interpretation of sparse regularized estimators are in fact incompressible, in
the limit of large problem sizes. A highlight is the Laplace distribution and
$\ell^{1}$ regularized estimators such as the Lasso and Basis Pursuit
denoising. To establish this result, we identify non-trivial undersampling
regions in GULR where the simple least squares solution almost surely
outperforms an oracle sparse solution, when the data is generated from the
Laplace distribution. We provide simple rules of thumb to characterize classes
of compressible (respectively incompressible) distributions based on their
second and fourth moments. Generalized Gaussians and generalized Pareto
distributions serve as running examples for concreteness.
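A quick empirical illustration of this dichotomy (a sketch, not the paper's formal criterion): compare the relative best k-term approximation error of iid Laplace samples against heavy-tailed symmetric Pareto samples, alongside the normalized fourth-moment statistic that the rules of thumb are based on. The shape parameter and the 1% truncation level are illustrative choices.

```python
import numpy as np

def rel_kterm_error(x, k):
    """Relative l2 error of the best k-term approximation of x."""
    tail = np.sort(np.abs(x))[:-k]          # all but the k largest magnitudes
    return np.linalg.norm(tail) / np.linalg.norm(x)

def kurtosis(x):
    """Normalized fourth moment E[X^4] / (E[X^2])^2."""
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

rng = np.random.default_rng(0)
n, k = 100_000, 1_000                       # keep only 1% of the entries
laplace = rng.laplace(size=n)
# symmetric heavy-tailed sample: random signs on Pareto (Lomax) draws
pareto = rng.choice([-1, 1], size=n) * rng.pareto(2.5, size=n)

err_lap, err_par = rel_kterm_error(laplace, k), rel_kterm_error(pareto, k)
kur_lap, kur_par = kurtosis(laplace), kurtosis(pareto)
```

The heavy-tailed sample is far better approximated by its largest 1% of entries, and its empirical fourth-moment statistic is correspondingly much larger (the Laplace distribution has kurtosis 6).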
|
1102.1256
|
Stochastic Optimal Multi-Modes Switching with a Viscosity Solution
Approach
|
math.OC cs.SY math.PR
|
We consider the problem of optimal multi-mode switching in finite horizon,
when the state of the system and the switching cost functions are arbitrary
($g_{ij}(t,x)\geq 0$). We show existence of the optimal strategy and, via a
verification theorem, characterize when the optimal strategy is finite.
Finally, when the state of the system is a Markov process, we show that the
vector of value functions of the optimal problem is the unique viscosity
solution to the system of $m$ variational partial differential inequalities
with inter-connected obstacles.
|
1102.1261
|
Symmetry in behavior of complex social systems - discussion of models of
crowd evacuation organized in agreement with symmetry conditions
|
cs.MA
|
Evacuation scenarios for a football stadium are discussed as models realizing
ordered states, described as movements of individuals according to fields of
displacements calculated for the given scenario. The symmetry of the
evacuation space is taken into account in the calculation of the displacement
field: the displacements related to every point of this space are expressed
in the coordinate frame best adapted to the given symmetry space group,
namely the set of basis vectors of an irreducible representation of that
group. In the presented model, the speeds of individuals have the same
magnitude at every point. As results, the evacuation times and the average
forces acting on individuals during the evacuation are given; both parameters
are calculated in a simulation procedure and compared with the same
parameters obtained without symmetry considerations. A new program (using a
modified Helbing model) has been elaborated and is presented in this work to
carry out the simulation tasks.
|
1102.1265
|
Sphere decoding complexity exponent for decoding full rate codes over
the quasi-static MIMO channel
|
cs.IT math.IT
|
In the setting of quasi-static multiple-input multiple-output (MIMO)
channels, we consider the high signal-to-noise ratio (SNR) asymptotic
complexity required by the sphere decoding (SD) algorithm for decoding a large
class of full rate linear space-time codes. With SD complexity having random
fluctuations induced by the random channel, noise and codeword realizations,
the introduced SD complexity exponent manages to concisely describe the
computational reserves required by the SD algorithm to achieve arbitrarily
close to optimal decoding performance. Bounds and exact expressions for the SD
complexity exponent are obtained for the decoding of large families of codes
with arbitrary performance characteristics. For the particular example of
decoding the recently introduced threaded cyclic division algebra (CDA) based
codes -- the only currently known explicit designs that are uniformly optimal
with respect to the diversity multiplexing tradeoff (DMT) -- the SD complexity
exponent is shown to take a particularly concise form as a non-monotonic
function of the multiplexing gain. To date, the SD complexity exponent also
describes the minimum known complexity of any decoder that can provably achieve
a gap to maximum likelihood (ML) performance which vanishes in the high SNR
limit.
|
1102.1292
|
Modeling Dynamic Swarms
|
cs.CV
|
This paper proposes the problem of modeling video sequences of dynamic swarms
(DS). We define DS as a large layout of stochastically repetitive spatial
configurations of dynamic objects (swarm elements) whose motions exhibit local
spatiotemporal interdependency and stationarity, i.e., the motions are similar
in any small spatiotemporal neighborhood. Examples of DS abound in nature,
e.g., herds of animals and flocks of birds. To capture the local spatiotemporal
properties of the DS, we present a probabilistic model that learns both the
spatial layout of swarm elements and their joint dynamics that are modeled as
linear transformations. To this end, a spatiotemporal neighborhood is
associated with each swarm element, in which local stationarity is enforced
both spatially and temporally. We assume that the prior on the swarm dynamics
is distributed according to an MRF in both space and time. Embedding this model
in a MAP framework, we iterate between learning the spatial layout of the swarm
and its dynamics. We learn the swarm transformations using ICM, which iterates
between estimating these transformations and updating their distribution in the
spatiotemporal neighborhoods. We demonstrate the validity of our method by
conducting experiments on real video sequences. Real sequences of birds, geese,
robot swarms, and pedestrians demonstrate the applicability of our model to
real-world data.
|
1102.1324
|
Refinement of Operator-valued Reproducing Kernels
|
cs.LG math.FA
|
This paper studies the construction of a refinement kernel for a given
operator-valued reproducing kernel such that the vector-valued reproducing
kernel Hilbert space of the refinement kernel contains that of the given one as
a subspace. The study is motivated from the need of updating the current
operator-valued reproducing kernel in multi-task learning when underfitting or
overfitting occurs. Numerical simulations confirm that the established
refinement kernel method is able to meet this need. Various characterizations
are provided based on feature maps and vector-valued integral representations
of operator-valued reproducing kernels. Concrete examples of refining
translation invariant and finite Hilbert-Schmidt operator-valued reproducing
kernels are provided. Other examples include refinement of Hessian of
scalar-valued translation-invariant kernels and transformation kernels.
Existence and properties of operator-valued reproducing kernels preserved
during the refinement process are also investigated.
|
1102.1345
|
Introducing a New Mechanism for Construction of an Efficient Search
Model
|
cs.IR
|
Search engines have become indispensable tools for retrieving information
from the WWW. Web researchers have introduced many algorithms to improve
search engines based on different features: some algorithms are domain
related, some concern Web page ranking, some concern efficiency, and so on.
We introduce an algorithm that is both multi-domain and efficiency related.
In this paper, we provide multilevel indexing on top of an Index Based
Acyclic Graph (IBAG), which supports multiple ontologies and reduces search
time. The IBAG contains only domain-related pages and is constructed from a
Relevant Page Graph (RPaG). We also provide a comparative study of the time
complexity of the various models.
|
1102.1379
|
Structural and functional networks in complex systems with delay
|
cond-mat.dis-nn cs.SI physics.soc-ph
|
Functional networks of complex systems are obtained from the analysis of the
temporal activity of their components, and are often used to infer their
unknown underlying connectivity. We obtain the equations relating topology and
function in a system of diffusively delay-coupled elements in complex networks.
We solve exactly the resulting equations in motifs (directed structures of
three nodes), and in directed networks. The mean-field solution for directed
uncorrelated networks shows that the clusterization of the activity is
dominated by the in-degree of the nodes, and that the locking frequency
decreases with increasing average degree. We find that the exponent of a power
law degree distribution of the structural topology, b, is related to the
exponent of the associated functional network as a =1/(2-b), for b < 2.
|
1102.1398
|
Efficient Bayesian Social Learning on Trees
|
cs.SI cs.GT cs.MA
|
We consider a set of agents who are attempting to iteratively learn the
'state of the world' from their neighbors in a social network. Each agent
initially receives a noisy observation of the true state of the world. The
agents then repeatedly 'vote' and observe the votes of some of their peers,
from which they gain more information. The agents' calculations are Bayesian
and aim to myopically maximize the expected utility at each iteration.
This model, introduced by Gale and Kariv (2003), is a natural approach to
learning on networks. However, it has been criticized, chiefly because the
agents' decision rule appears to become computationally intractable as the
number of iterations advances. For instance, a dynamic programming approach
(part of this work) has running time that is exponentially large in \min(n,
(d-1)^t), where n is the number of agents.
We provide a new algorithm to perform the agents' computations on locally
tree-like graphs. Our algorithm uses the dynamic cavity method to drastically
reduce computational effort. Let d be the maximum degree and t be the iteration
number. The computational effort needed per agent is exponential only in O(td)
(note that the number of possible information sets of a neighbor at time t is
itself exponential in td).
Under appropriate assumptions on the rate of convergence, we deduce that each
agent is only required to spend polylogarithmic (in 1/\eps) computational
effort to approximately learn the true state of the world with error
probability \eps, on regular trees of degree at least five. We provide
numerical and other evidence to justify our assumption on convergence rate.
We extend our results in various directions, including loopy graphs. Our
results indicate efficiency of iterative Bayesian social learning in a wide
range of situations, contrary to widely held beliefs.
|
1102.1407
|
Stable Parallel Looped Systems -- A New Theoretical Framework for the
Evolution of Order
|
cs.NE nlin.AO
|
The objective of the paper is to identify laws and mechanisms that allow the
creation of more order from disorder using natural means i.e., without the help
of conscious beings. While this is not possible for the collection of all
dynamical systems as it violates the second law of thermodynamics, I show that
this is possible within a special subset called stable parallel looped (SPL)
dynamical systems. I identify a new infinite family of physical and chemical
dynamical SPL systems, which are (a) easy to create naturally and (b) easy to
merge, link and combine to create dynamical systems of any specified
complexity. Within SPL systems, I propose a special collection of designs
called active material-energy looped systems using which it is possible to
generate large-scale ordered chemical networks, like the metabolic networks, in
a reliable, repeatable, iterative and natural manner. The resulting SPL systems
provide a new theoretical framework for the problem of origin of life.
|
1102.1441
|
Generating Probability Distributions using Multivalued Stochastic Relay
Circuits
|
cs.IT cs.DM math.IT
|
The problem of random number generation dates back to von Neumann's work in
1951. Since then, many algorithms have been developed for generating unbiased
bits from complex correlated sources as well as for generating arbitrary
distributions from unbiased bits. An equally interesting, but less studied
aspect is the structural component of random number generation as opposed to
the algorithmic aspect. That is, given a network structure imposed by nature or
physical devices, how can we build networks that generate arbitrary probability
distributions in an optimal way? In this paper, we study the generation of
arbitrary probability distributions in multivalued relay circuits, a
generalization in which relays can take on any of N states and the logical
'and' and 'or' are replaced with 'min' and 'max' respectively. Previous work
was done on two-state relays. We generalize these results, describing a duality
property and networks that generate arbitrary rational probability
distributions. We prove that these networks are robust to errors and design a
universal probability generator which takes input bits and outputs arbitrary
binary probability distributions.
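To make the min/max semantics concrete, here is a small illustrative sketch (not the paper's constructions): the exact output distribution of a series-parallel network of independent uniform N-state relays, where series composition takes the min and parallel composition takes the max of the relay states.

```python
from fractions import Fraction
from itertools import product

def network_distribution(f, n_relays, N):
    """Exact output distribution of a relay network: each of the n_relays
    relays is independently uniform over {0, ..., N-1}, and f maps the
    tuple of relay states to the network output."""
    p = Fraction(1, N ** n_relays)
    dist = {}
    for states in product(range(N), repeat=n_relays):
        v = f(states)
        dist[v] = dist.get(v, Fraction(0)) + p
    return dist

# two relays in series (min), placed in parallel (max) with a third relay
net = lambda s: max(min(s[0], s[1]), s[2])
dist = network_distribution(net, n_relays=3, N=3)
```

For this network the output is 0 exactly when min(s0, s1) = 0 and s2 = 0, i.e. with probability (5/9)(1/3) = 5/27, which the exact enumeration reproduces.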
|
1102.1462
|
Diversity of MMSE MIMO Receivers
|
cs.IT math.IT
|
In most MIMO systems, the family of waterfall error curves, calculated at
different spectral efficiencies, are asymptotically parallel at high SNR. In
other words, most MIMO systems exhibit a single diversity value for all fixed
rates. The MIMO MMSE receiver does not follow this pattern and exhibits a
varying diversity in its family of error curves. This work analyzes this
interesting behavior of the MMSE MIMO receiver and produces the MMSE MIMO
diversity at all rates. The diversity of the quasi-static flat-fading MIMO
channel consisting of any arbitrary number of transmit and receive antennas is
fully characterized, showing that full spatial diversity is possible if and
only if the rate is within a certain bound which is a function of the number of
antennas. For other rates, the available diversity is fully characterized. At
sufficiently low rates, the MMSE receiver has a diversity similar to the
maximum likelihood receiver (maximal diversity), while at high rates it
performs similarly to the zero-forcing receiver (minimal diversity). Linear
receivers are also studied in the context of the MIMO multiple access channel
(MAC). Then, the quasi-static frequency selective MIMO channel is analyzed
under zero-padding (ZP) and cyclic-prefix (CP) block transmissions and MMSE
reception, and lower and upper bounds on diversity are derived. For the special
case of SIMO under CP, it is shown that the above-mentioned bounds are tight.
|
1102.1465
|
An Introduction to Artificial Prediction Markets for Classification
|
stat.ML cs.LG math.ST stat.TH
|
Prediction markets are used in real life to predict outcomes of interest such
as presidential elections. This paper presents a mathematical theory of
artificial prediction markets for supervised learning of conditional
probability estimators. The artificial prediction market is a novel method for
fusing the prediction information of features or trained classifiers, where the
fusion result is the contract price on the possible outcomes. The market can be
trained online by updating the participants' budgets using training examples.
Inspired by the real prediction markets, the equations that govern the market
are derived from simple and reasonable assumptions. Efficient numerical
algorithms are presented for solving these equations. The obtained artificial
prediction market is shown to be a maximum likelihood estimator. It generalizes
linear aggregation, present in boosting and random forest, as well as logistic
regression and some kernel methods. Furthermore, the market mechanism allows
the aggregation of specialized classifiers that participate only on specific
instances. Experimental comparisons show that the artificial prediction markets
often outperform random forest and implicit online learning on synthetic data
and real UCI datasets. Moreover, an extensive evaluation for pelvic and
abdominal lymph node detection in CT data shows that the prediction market
improves AdaBoost's detection rate from 79.6% to 81.2% at 3 false
positives/volume.
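The flavor of such a market can be conveyed with a toy budget-weighted aggregation rule (a hedged sketch; the paper derives its own market equations and budget updates, which differ): the contract price is the budget-weighted average of the participants' probability estimates, and after the outcome is revealed each budget is rescaled by how well that participant predicted it relative to the price.

```python
import numpy as np

def market_price(budgets, probs):
    """Contract price for each outcome: the budget-weighted average of the
    participants' probability estimates; probs has shape
    (participants, outcomes)."""
    return budgets @ probs / budgets.sum()

def update_budgets(budgets, probs, outcome):
    """Multiplicative update: reward participants that put more probability
    on the realized outcome than the market price did."""
    price = market_price(budgets, probs)
    return budgets * probs[:, outcome] / price[outcome]

budgets = np.array([1.0, 1.0, 2.0])
probs = np.array([[0.9, 0.1],
                  [0.5, 0.5],
                  [0.2, 0.8]])
new_budgets = update_budgets(budgets, probs, outcome=0)
```

A pleasant property of this toy rule is that the total budget is conserved: sum_i b_i p_i(y) / price(y) = sum_i b_i, so training only redistributes budget toward better predictors.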
|
1102.1466
|
Distributed Throughput-optimal Scheduling in Ad Hoc Wireless Networks
|
cs.IT math.IT
|
In this paper, we propose a distributed throughput-optimal ad hoc wireless
network scheduling algorithm, which is motivated by the celebrated simplex
algorithm for solving linear programming (LP) problems. The scheduler stores a
sparse set of basic schedules, and chooses the max-weight basic schedule for
transmission in each time slot. At the same time, the scheduler tries to update
the set of basic schedules by searching for a new basic schedule in a
throughput increasing direction. We show that both of the above procedures can
be achieved in a distributed manner. Specifically, we propose an average
consensus based link contending algorithm to implement the distributed max
weight scheduling. Further, we show that the basic schedule update can be
implemented using a CSMA mechanism similar to the one proposed by Jiang et
al. Compared to the optimal distributed scheduler in Jiang's paper,
where schedules change in a random walk fashion, our algorithm has a better
delay performance by achieving faster schedule transitions in the steady state.
The performance of the algorithm is finally confirmed by simulation results.
|
1102.1475
|
Security Embedding Codes
|
cs.IT math.IT
|
This paper considers the problem of simultaneously communicating two
messages, a high-security message and a low-security message, to a legitimate
receiver, referred to as the security embedding problem. An
information-theoretic formulation of the problem is presented. A coding scheme
that combines rate splitting, superposition coding, nested binning and channel
prefixing is considered and is shown to achieve the secrecy capacity region of
the channel in several scenarios. Specifying these results to both scalar and
independent parallel Gaussian channels (under an average individual
per-subchannel power constraint), it is shown that the high-security message
can be embedded into the low-security message at full rate (as if the
low-security message does not exist) without incurring any loss on the overall
rate of communication (as if both messages are low-security messages).
Extensions to the wiretap channel II setting of Ozarow and Wyner are also
considered, where it is shown that "perfect" security embedding can be achieved
by an encoder that uses a two-level coset code.
|
1102.1480
|
Joint Decoding of LDPC Codes and Finite-State Channels via
Linear-Programming
|
cs.IT math.IT
|
This paper considers the joint-decoding (JD) problem for finite-state
channels (FSCs) and low-density parity-check (LDPC) codes. In the first part,
the linear-programming (LP) decoder for binary linear codes is extended to JD
of binary-input FSCs. In particular, we provide a rigorous definition of LP
joint-decoding pseudo-codewords (JD-PCWs) that enables evaluation of the
pairwise error probability between codewords and JD-PCWs in AWGN. This leads
naturally to a provable upper bound on decoder failure probability. If the
channel is a finite-state intersymbol interference channel, then the joint LP
decoder also has the maximum-likelihood (ML) certificate property and all
integer-valued solutions are codewords. In this case, the performance loss
relative to ML decoding can be explained completely by fractional-valued
JD-PCWs. After deriving these results, we discovered that some elements were
equivalent to earlier work by Flanagan on LP receivers.
In the second part, we develop an efficient iterative solver for the joint LP
decoder discussed in the first part. In particular, we extend the approach of
iterative approximate LP decoding, proposed by Vontobel and Koetter and
analyzed by Burshtein, to this problem. By taking advantage of the dual-domain
structure of the JD-LP, we obtain a convergent iterative algorithm for joint LP
decoding whose structure is similar to BCJR-based turbo equalization (TE). The
result is a joint iterative decoder whose per-iteration complexity is similar
to that of TE but whose performance is similar to that of joint LP decoding.
The main advantage of this decoder is that it appears to provide the
predictability of joint LP decoding and superior performance with the
computational complexity of TE. One expected application is coding for magnetic
storage where the required block-error rate is extremely low and system
performance is difficult to verify by simulation.
|
1102.1487
|
Rumor Evolution in Social Networks
|
physics.soc-ph cs.SI
|
Social networks are a main channel of rumor spreading. Previous studies have
concentrated on static rumor spreading, in which the content of the rumor is
invariable during the whole spreading process. Indeed, a rumor evolves
constantly as it spreads, growing shorter, more concise, more easily grasped
and told. In an early psychological experiment, researchers found that about
70% of the details in a rumor were lost in the first 6 mouth-to-mouth
transmissions \cite{TPR}. Based on these facts, we investigate rumor
spreading on social networks where the content of the rumor is modified by
individuals with a certain probability. In this scenario, individuals have
two choices: to forward or to modify. As a forwarder, an individual
disseminates the rumor directly to its neighbors. As a modifier, conversely,
an individual revises the rumor before spreading it out. When the rumor
spreads on social networks, for instance scale-free networks and small-world
networks, the majority of individuals are actually infected by a
multiply-revised version of the rumor if the modifiers dominate the network.
Our observation indicates that the original rumor may lose its influence in
the spreading process; similarly, true information may turn into a rumor as
well. Our results suggest that rumor evolution is not a negligible question,
and may provide a better understanding of the generation and destruction of
rumors.
|
1102.1497
|
Belief Propagation for Error Correcting Codes and Lossy Compression
Using Multilayer Perceptrons
|
cs.IT math.IT physics.data-an
|
The belief propagation (BP) based algorithm is investigated as a potential
decoder both for error correcting codes and for lossy compression, which are
based on non-monotonic tree-like multilayer perceptron encoders. We discuss
whether the BP can give practical algorithms in these schemes. The BP
implementations in these kinds of fully connected networks unfortunately show
strong limitations, while the theoretical results seem somewhat promising.
Instead, the BP-based algorithms reveal that the solution space might have a
rich and complex structure.
|
1102.1498
|
On Rate-Splitting by a Secondary Link in Multiple Access Primary Network
|
cs.IT math.IT
|
An achievable rate region is obtained for a primary multiple access network
coexisting with a secondary link of one transmitter and a corresponding
receiver. The rate region depicts the sum primary rate versus the secondary
rate and is established assuming that the secondary link performs
rate-splitting. The achievable rate region is the union of two types of
achievable rate regions. The first type is a rate region established assuming
that the secondary receiver cannot decode any primary signal, whereas the
second is established assuming that the secondary receiver can decode the
signal of one primary receiver. The achievable rate region is determined first
assuming a discrete memoryless channel (DMC), and then the results are applied to a
Gaussian channel. In the Gaussian channel, the performance of rate-splitting is
characterized for the two types of rate regions. Moreover, a necessary and
sufficient condition to determine which primary signal the secondary
receiver can decode without degrading the range of primary achievable sum rates
is provided. When this condition is satisfied by a certain primary user, the
secondary receiver can decode its signal and achieve larger rates without
reducing the primary achievable sum rates from the case in which it does not
decode any primary signal. It is also shown that the probability of having at
least one primary user satisfying this condition grows with the primary signal
to noise ratio.
|
1102.1502
|
On the Statistics and Predictability of Go-Arounds
|
cs.SY
|
This paper takes an empirical approach to identify operational factors at
busy airports that may predate go-around maneuvers. Using four years of data
from San Francisco International Airport, we begin our investigation with a
statistical approach to investigate which features of airborne, ground
operations (e.g., number of inbound aircraft, number of aircraft taxiing from
gate, etc.) or weather are most likely to fluctuate, relative to nominal
operations, in the minutes immediately preceding a missed approach. We analyze
these findings both in terms of their implication on current airport operations
and discuss how the antecedent factors may affect NextGen. Finally, as a means
to assist air traffic controllers, we draw upon techniques from the machine
learning community to develop a preliminary alert system for go-around
prediction.
|
1102.1503
|
Peer-to-Peer Multimedia Sharing based on Social Norms
|
cs.MM cs.SI
|
Empirical data shows that, in the absence of incentives, a peer participating
in a Peer-to-Peer (P2P) network tends to free-ride. Most solutions for
providing incentives in P2P networks are based on direct reciprocity, which are
not appropriate for most P2P multimedia sharing networks due to the unique
features exhibited by such networks: large populations of anonymous agents
interacting infrequently, asymmetric interests of peers, network errors, and
multiple concurrent transactions. In this paper, we design and rigorously
analyze a new family of incentive protocols that utilizes indirect reciprocity
which is based on the design of efficient social norms. In the proposed P2P
protocols, the social norms consist of a social strategy, which represents the
rule prescribing to the peers when they should or should not provide content to
other peers, and a reputation scheme, which rewards or punishes peers depending
on whether they comply or not with the social strategy. We first define the
concept of a sustainable social norm, under which no peer has an incentive to
deviate. We then formulate the problem of designing optimal social norms, which
selects the social norm that maximizes the network performance among all
sustainable social norms. We then prove that it is in the self-interest of
peers to contribute their content to the network rather than to free-ride.
We also investigate the impact of various punishment schemes on the social
welfare, as well as how the optimal social norms should be designed if
altruistic and malicious peers are active in the network. Our results show that
optimal social norms are capable of providing significant improvements in the
sharing efficiency of multimedia P2P networks.
|
1102.1507
|
Generalized Measures of Information Transfer
|
physics.data-an cs.IT math.DS math.IT
|
Transfer entropy provides a general tool for analyzing the magnitudes and
directions---but not the \emph{kinds}---of information transfer in a system. We
extend transfer entropy in two complementary ways. First, we distinguish
state-dependent from state-independent transfer, based on whether a source's
influence depends on the state of the target. Second, for multiple sources, we
distinguish between unique, redundant, and synergistic transfer. The new
measures are demonstrated on several systems that extend examples from previous
literature.
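For concreteness, the baseline transfer entropy that these measures refine can be estimated with a simple plug-in estimator. A minimal sketch for binary time series with one-step histories (the function name and the estimator choice are ours, not the paper's):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE_{X->Y} in bits with one-step histories:
    sum over (y_{t+1}, y_t, x_t) of
    p * log2[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_src = Counter(zip(y[:-1], x[:-1]))    # (y_t, x_t)
    pairs_self = Counter(zip(y[1:], y[:-1]))    # (y_{t+1}, y_t)
    singles = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_src[(y0, x0)]
        p_self = pairs_self[(y1, y0)] / singles[y0]
        te += (c / n) * np.log2(p_full / p_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 20_000)
y = np.concatenate([[0], x[:-1]])   # y copies x with a one-step delay
```

Since y_{t+1} is a deterministic copy of x_t, and is unpredictable from y_t alone, the estimate is close to 1 bit in the forward direction and near 0 in the reverse direction.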
|
1102.1536
|
Evolutionary multiobjective optimization of the multi-location
transshipment problem
|
cs.AI math.OC
|
We consider a multi-location inventory system where inventory choices at each
location are centrally coordinated. Lateral transshipments are allowed as
recourse actions within the same echelon in the inventory system to reduce
costs and improve service level. However, this transshipment process usually
causes undesirable lead times. In this paper, we propose a multiobjective model
of the multi-location transshipment problem which addresses optimizing three
conflicting objectives: (1) minimizing the aggregate expected cost, (2)
maximizing the expected fill rate, and (3) minimizing the expected
transshipment lead times. We apply an evolutionary multiobjective optimization
approach using the strength Pareto evolutionary algorithm (SPEA2), to
approximate the optimal Pareto front. Simulation with a wide choice of model
parameters shows the different trade-offs between the conflicting objectives.
|
1102.1552
|
Multiuser Diversity in Downlink Channels: When does the Feedback Cost
Outweigh the Spectral Efficiency Gain?
|
cs.IT math.IT
|
In this paper, we perform a cost-benefit analysis of multiuser diversity in
single antenna broadcast channels. It is well known that multiuser diversity
can be beneficial but there is a significant cost associated with acquiring
instantaneous CSI. We perform a cost-benefit analysis of multiuser diversity
for two types of CSI feedback methods, dedicated feedback and SNR-dependent
feedback, quantifying how many users should feedback CSI from a net throughput
perspective. Dedicated feedback, in which orthogonal resources are allocated to
each user, has significant feedback cost and this limits the amount of
available multiuser diversity that can be used. The SNR-dependent feedback
method, in which only users with SNR above a threshold attempt to feed back,
has relatively much smaller feedback cost, and this allows all of the available
multiuser diversity to be used. Next, we study the effect of single user
multiantenna techniques, which reduce the SNR variation, on the number of
feedback users necessary. It is seen that a broadcast channel using single
user multiantenna techniques should reduce the number of feedback users with
the spatial dimension.
|
1102.1609
|
Exact Minimum-Repair-Bandwidth Cooperative Regenerating Codes for
Distributed Storage Systems
|
cs.IT cs.DC math.IT
|
In order to provide high data reliability, distributed storage systems
disperse data with redundancy to multiple storage nodes. Regenerating codes are
a new class of erasure codes that introduce redundancy for the purpose of
improving data repair performance in distributed storage. Most studies on
regenerating codes focus on single-failure recovery, but it is
not uncommon to see two or more node failures at the same time in large storage
networks. To exploit the opportunity of repairing multiple failed nodes
simultaneously, a cooperative repair mechanism, in the sense that the nodes to
be repaired can exchange data among themselves, is investigated. A lower bound
on the repair-bandwidth for cooperative repair is derived and a construction of
a family of exact cooperative regenerating codes matching this lower bound is
presented.
|
1102.1621
|
Recovery of Sparsely Corrupted Signals
|
cs.IT math.IT
|
We investigate the recovery of signals exhibiting a sparse representation in
a general (i.e., possibly redundant or incomplete) dictionary that are
corrupted by additive noise admitting a sparse representation in another
general dictionary. This setup covers a wide range of applications, such as
image inpainting, super-resolution, signal separation, and recovery of signals
that are impaired by, e.g., clipping, impulse noise, or narrowband
interference. We present deterministic recovery guarantees based on a novel
uncertainty relation for pairs of general dictionaries and we provide
corresponding practicable recovery algorithms. The recovery guarantees we find
depend on the signal and noise sparsity levels, on the coherence parameters of
the involved dictionaries, and on the amount of prior knowledge about the
signal and noise support sets.
|
1102.1660
|
ATC Taskload Inherent to the Geometry of Stochastic 4-D Trajectory Flows
with Flight Technical Errors
|
cs.SY
|
A method to quantify the probabilistic controller taskload inherent to
maintaining aircraft adherence to 4-D trajectories within flow corridors is
presented. An Ornstein-Uhlenbeck model of the aircraft motion and a Poisson
model of the flow scheduling are introduced along with reasonable numerical
values of the model parameters. Analytic expressions are derived for the
taskload probability density functions for basic functional elements of the
flow structure. Monte Carlo simulations are performed for these basic
functional elements and the controller taskload probabilities are exhibited.
|
1102.1691
|
Schema Redescription in Cellular Automata: Revisiting Emergence in
Complex Systems
|
nlin.CG cs.AI cs.FL cs.NE q-bio.QM
|
We present a method to eliminate redundancy in the transition tables of
Boolean automata: schema redescription with two symbols. One symbol is used to
capture redundancy of individual input variables, and another to capture
permutability in sets of input variables: fully characterizing the canalization
present in Boolean functions. Two-symbol schemata explain aspects of the
behaviour of automata networks that the characterization of their emergent
patterns does not capture. We use our method to compare two well-known cellular
automata for the density classification task: the human engineered CA GKL, and
another obtained via genetic programming (GP). We show that despite having very
different collective behaviour, these rules are very similar. Indeed, GKL is a
special case of GP. Therefore, we demonstrate that it is more feasible to
compare cellular automata via schema redescriptions of their rules, than by
looking at their emergent behaviour, leading us to question the tendency in
complexity research to pay much more attention to emergent patterns than to
local interactions.
|
1102.1745
|
Restructuring in Combinatorial Optimization
|
cs.DS cs.AI math.CO math.OC
|
The paper addresses a new class of combinatorial problems which consist in
restructuring of solutions (as structures) in combinatorial optimization. Two
main features of the restructuring process are examined: (i) a cost of the
restructuring, (ii) a closeness to a goal solution. This problem corresponds to
redesign (improvement, upgrade) of modular systems or solutions. The
restructuring approach is described and illustrated for the following
combinatorial optimization problems: knapsack problem, multiple choice problem,
assignment problem, spanning tree problems. Examples illustrate the
restructuring processes.
|
1102.1747
|
Graph Coalition Structure Generation
|
cs.DS cs.AI cs.CC cs.GT cs.MA
|
We give the first analysis of the computational complexity of {\it coalition
structure generation over graphs}. Given an undirected graph $G=(N,E)$ and a
valuation function $v:2^N\rightarrow\RR$ over the subsets of nodes, the problem
is to find a partition of $N$ into connected subsets, that maximises the sum of
the components' values. This problem is generally NP--complete; in particular,
it is hard for a defined class of valuation functions which are {\it
independent of disconnected members}---that is, two nodes have no effect on
each other's marginal contribution to their vertex separator. Nonetheless, for
all such functions we provide bounds on the complexity of coalition structure
generation over general and minor free graphs. Our proof is constructive and
yields algorithms for solving corresponding instances of the problem.
Furthermore, we derive polynomial time bounds for acyclic, $K_{2,3}$ and $K_4$
minor free graphs. However, as we show, the problem remains NP--complete for
planar graphs, and hence, for any $K_k$ minor free graphs where $k\geq 5$.
Moreover, our hardness result holds for a particular subclass of valuation
functions, termed {\it edge sum}, where the value of each subset of nodes is
simply determined by the sum of given weights of the edges in the induced
subgraph.
|
1102.1753
|
Predictors of short-term decay of cell phone contacts in a large scale
communication network
|
cs.SI physics.soc-ph stat.ML
|
Under what conditions is an edge present in a social network at time t likely
to decay or persist by some future time t + Delta(t)? Previous research
addressing this issue suggests that the network range of the people involved in
the edge, the extent to which the edge is embedded in a surrounding structure,
and the age of the edge all play a role in edge decay. This paper uses weighted
data from a large-scale social network built from cell-phone calls in an 8-week
period to determine the importance of edge weight for the decay/persistence
process. In particular, we study the relative predictive power of directed
weight, embeddedness, newness, and range (measured as outdegree) with respect
to edge decay and assess the effectiveness with which a simple decision tree
and logistic regression classifier can accurately predict whether an edge that
was active in one time period continues to be so in a future time period. We
find that directed edge weight, weighted reciprocity and time-dependent
measures of edge longevity are highly predictive of whether we classify an edge
as persistent or decayed, relative to the other types of factors at the dyad
and neighborhood level.
|
1102.1782
|
On network coding for acyclic networks with delays
|
cs.IT math.IT
|
Problems related to network coding for acyclic, instantaneous networks (where
the edges of the acyclic graph representing the network are assumed to have
zero-delay) have been extensively dealt with in the recent past. The most
prominent of these problems include (a) the existence of network codes that
achieve maximum rate of transmission, (b) efficient network code constructions,
and (c) field size issues. In practice, however, networks have transmission
delays. In network coding theory, such networks with transmission delays are
generally abstracted by assuming that their edges have integer delays. Note
that using enough memory at the nodes of an acyclic network with integer delays
can effectively simulate instantaneous behavior, which is probably why prior
work has focused primarily on acyclic instantaneous networks. In this
work, we elaborate on issues ((a), (b) and (c) above) related to network coding
for acyclic networks with integer delays, which have till now mostly been
overlooked. We show that the delays associated with the edges of the network
cannot be ignored, and in fact turn out to be advantageous, disadvantageous or
immaterial, depending on the topology of the network and the problem
considered, i.e., (a), (b), or (c). In the process, we also show that for a single-source
multicast problem in acyclic networks (instantaneous and with delays), the
network coding operations at each node can simply be limited to storing old
symbols and coding them over a binary field. Therefore, operations over
elements of larger fields are unnecessary in the network, the trade-off being
that enough memory exists at the nodes and at the sinks, and that the sinks
have more processing power.
|
1102.1789
|
Extreme events on complex networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
We study the extreme events taking place on complex networks. The transport
on networks is modelled using random walks, and we compute the probability of
the occurrence and recurrence of extreme events on the network. We show that
nodes with a smaller number of links are more prone to extreme events than
those with a larger number of links. We obtain analytical estimates and verify
them with numerical simulations. These estimates are shown to be robust even
when random walkers follow the shortest paths on the network. The results suggest a revision of
design principles and can be used as an input for designing the nodes of a
network so as to smoothly handle an extreme event.
|
1102.1803
|
Proposing LT based Search in PDM Systems for Better Information
Retrieval
|
cs.IR cs.AI
|
PDM systems contain and manage large amounts of data, but the search mechanism
of most systems is not intelligent enough to process users' natural-language
queries to extract the desired information. Currently available search
mechanisms in almost all PDM systems are inefficient, relying on the old
approach of entering relevant information into the respective fields of search
forms to retrieve specific information from attached repositories. Targeting
this issue, thorough research was conducted in the fields of PDM systems and
language technology. Concerning PDM systems, the research provides detailed
information about PDM and PDM systems. Concerning language technology, it
helps in implementing a search mechanism for PDM systems that finds the
information users need by analyzing their natural-language requests. The goal
of this research was to support the field of PDM with the proposal of a
conceptual model for implementing natural-language-based search. The proposed
conceptual model has been designed and partially implemented in the form of a
prototype. This paper describes the proposal in detail: the main concept, the
implementation designs, and the developed prototype of the proposed approach.
The implemented prototype is compared with the respective functions of
existing PDM systems, i.e., Windchill and CIM, to evaluate its effectiveness
against the targeted challenges.
|
1102.1808
|
From Machine Learning to Machine Reasoning
|
cs.AI cs.LG
|
A plausible definition of "reasoning" could be "algebraically manipulating
previously acquired knowledge in order to answer a new question". This
definition covers first-order logical inference or probabilistic inference. It
also includes much simpler manipulations commonly used to build large learning
systems. For instance, we can build an optical character recognition system by
first training a character segmenter, an isolated character recognizer, and a
language model, using appropriate labeled training sets. Adequately
concatenating these modules and fine tuning the resulting system can be viewed
as an algebraic operation in a space of models. The resulting model answers a
new question, that is, converting the image of a text page into a computer
readable text.
This observation suggests a conceptual continuity between algebraically rich
inference systems, such as logical or probabilistic inference, and simple
manipulations, such as the mere concatenation of trainable learning systems.
Therefore, instead of trying to bridge the gap between machine learning systems
and sophisticated "all-purpose" inference mechanisms, we can instead
algebraically enrich the set of manipulations applicable to training systems,
and build reasoning capabilities from the ground up.
|
1102.1820
|
Optimal Synthesis for Nonholonomic Vehicles With Constrained Side
Sensors
|
cs.RO
|
We present a complete characterization of shortest paths to a goal position
for a vehicle with unicycle kinematics and a limited range sensor, constantly
keeping a given landmark in sight. Previous work on this subject studied the
optimal paths in case of a frontal, symmetrically limited Field--Of--View
(FOV). In this paper we provide a generalization to the case of arbitrary FOVs,
including the case that the direction of motion is not an axis of symmetry for
the FOV, and even that it is not contained in the FOV. The provided solution is
of particular relevance to applications using side-scanning, such as
underwater sonar-based surveying and navigation.
|
1102.1889
|
Ologs: a categorical framework for knowledge representation
|
cs.LO cs.AI math.CT
|
In this paper we introduce the olog, or ontology log, a category-theoretic
model for knowledge representation (KR). Grounded in formal mathematics, ologs
can be rigorously formulated and cross-compared in ways that other KR models
(such as semantic networks) cannot. An olog is similar to a relational database
schema; in fact an olog can serve as a data repository if desired. Unlike
database schemas, which are generally difficult to create or modify, ologs are
designed to be user-friendly enough that authoring or reconfiguring an olog is
a matter of course rather than a difficult chore. It is hoped that learning to
author ologs is much simpler than learning a database definition language,
despite their similarity. We describe ologs carefully and illustrate with many
examples. As an application we show that any primitive recursive function can
be described by an olog. We also show that ologs can be aligned or connected
together into a larger network using functors. The various methods of
information flow and institutions can then be used to integrate local and
global world-views. We finish by providing several different avenues for future
research.
|
1102.1929
|
Suppressing Epidemics with a Limited Amount of Immunization Units
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The way diseases spread through schools, epidemics through countries, and
viruses through the Internet is crucial in determining their risk. Although
each of these threats has its own characteristics, its underlying network
determines the spreading. To restrain the spreading, a widely used approach is
the fragmentation of these networks through immunization, so that epidemics
cannot spread. Here we develop an immunization approach based on optimizing the
susceptible size, which outperforms the best known strategy based on immunizing
the highest-betweenness links or nodes. We find that the network's
vulnerability can be significantly reduced, demonstrating this on three
different real networks: the global flight network, a school friendship
network, and the Internet. In all cases, we find that not only is the average
infection probability significantly suppressed, but also for the most relevant
case of a small and limited number of immunization units the infection
probability can be reduced by up to 55%.
|
1102.1959
|
Distributed Uplink Resource Allocation in Cognitive Radio Networks --
Part I: Equilibria and Algorithms for Power Allocation
|
cs.IT math.IT
|
Spectrum management has been identified as a crucial step towards enabling
the technology of a cognitive radio network (CRN). Most of the current works
dealing with spectrum management in the CRN focus on a single task of the
problem, e.g., spectrum sensing, spectrum decision, spectrum sharing or
spectrum mobility. In this two-part paper, we argue that for certain network
configurations, jointly performing several tasks of the spectrum management
improves the spectrum efficiency. Specifically, our aim is to study the uplink
resource management problem in a CRN where there exist multiple cognitive users
(CUs) and access points (APs). The CUs, in order to maximize their uplink
transmission rates, have to associate to a suitable AP (spectrum decision), and
to share the channels used by this AP with other CUs (spectrum sharing). These
tasks are clearly interdependent, and the problem of how they should be carried
out efficiently and in a distributed manner is still open in the literature.
|
1102.1960
|
Averaged Iterative Water-Filling Algorithm: Robustness and Convergence
|
cs.IT math.IT
|
The convergence properties of the Iterative water-filling (IWF) based
algorithms have been derived in the ideal situation where the transmitters in
the network are able to obtain the exact value of the interference plus noise
(IPN) experienced at the corresponding receivers in each iteration of the
algorithm. However, these algorithms are not robust, because they diverge when
there is a time-varying estimation error of the IPN, a situation that arises
in real communication systems. In this correspondence, we propose an algorithm
that possesses convergence guarantees in the presence of various forms of such
time-varying error. Moreover, we also show by simulation that in scenarios
where the interference is strong, the conventional IWF diverges while our
proposed algorithm still converges.
|
1102.1963
|
On quantum limit of optical communications: concatenated codes and
joint-detection receivers
|
quant-ph cs.IT math.IT
|
When classical information is sent over a channel with quantum-state
modulation alphabet, such as the free-space optical (FSO) channel, attaining
the ultimate (Holevo) limit to channel capacity requires the receiver to make
joint measurements over long codeword blocks. In recent work, we showed a
receiver for a pure-state channel that can attain the ultimate capacity by
applying a single-shot optical (unitary) transformation on the received
codeword state followed by simultaneous (but separable) projective measurements
on the single-modulation-symbol state spaces. In this paper, we study the
ultimate tradeoff between photon efficiency and spectral efficiency for the FSO
channel. Based on our general results for the pure-state quantum channel, we
show some of the first concrete examples of codes and laboratory-realizable
joint-detection optical receivers that can achieve fundamentally higher
(superadditive) channel capacity than receivers that physically detect each
modulation symbol one at a time, as is done by all conventional (coherent or
direct-detection) optical receivers.
|
1102.1965
|
Distributed Uplink Resource Allocation in Cognitive Radio Networks --
Part II: Equilibria and Algorithms for Joint Access Point Selection and Power
Allocation
|
cs.IT math.IT
|
In the first part of this paper, we have studied solely the spectrum sharing
aspect of the above problem, and proposed algorithms for the CUs in the single
AP network to efficiently share the spectrum. In this second part of the paper,
we build upon our previous understanding of the single AP network, and
formulate the joint spectrum decision and spectrum sharing problem in a
multiple AP network into a non-cooperative game, in which the feasible strategy
of a player contains a discrete variable (the AP/spectrum decision) and a
continuous vector (the power allocation among multiple channels). The structure
of the game is hence very different from most non-cooperative spectrum
management games proposed in the literature. We provide a characterization of the
Nash Equilibrium (NE) of this game, and present a set of novel algorithms that
allow the CUs to distributively and efficiently select the suitable AP and
share the channels with other CUs. Finally, we study the properties of the
proposed algorithms as well as their performance via extensive simulations.
|
1102.1985
|
What Stops Social Epidemics?
|
cs.SI physics.soc-ph
|
Theoretical progress in understanding the dynamics of spreading processes on
graphs suggests the existence of an epidemic threshold below which no epidemics
form and above which epidemics spread to a significant fraction of the graph.
We have observed information cascades on the social media site Digg that spread
fast enough for one initial spreader to infect hundreds of people, yet end up
affecting only 0.1% of the entire network. We find that two effects, previously
studied in isolation, combine cooperatively to drastically limit the final size
of cascades on Digg. First, because of the highly clustered structure of the
Digg network, most people who are aware of a story have been exposed to it via
multiple friends. This structure lowers the epidemic threshold while moderately
slowing the overall growth of cascades. In addition, we find that the mechanism
for social contagion on Digg points to a fundamental difference between
information spread and other contagion processes: despite multiple
opportunities for infection within a social group, people are less likely to
become spreaders of information with repeated exposure. The consequences of
this mechanism become more pronounced for more clustered graphs. Ultimately,
this effect severely curtails the size of social epidemics on Digg.
|
1102.2017
|
Synthesis of Mechanism for single- and hybrid-tasks using Differential
Evolution
|
cs.CE
|
The optimal dimensional synthesis for planar mechanisms using differential
evolution (DE) is demonstrated. Four examples are included: in the first case,
the synthesis of a mechanism for hybrid-tasks, considering path generation,
function generation, and motion generation, is carried out. The second and
third cases pertain to path generation, with and without prescribed timing.
Finally, the synthesis of an Ackermann mechanism is reported. The order defect
problem is solved by manipulating individuals instead of penalizing or
discretizing the search space for the parameters. A technique that consists in
applying a transformation in order to satisfy the Grashof and crank conditions
to generate an initial elitist population is introduced. As a result, the
evolutionary algorithm increases its efficiency.
|