| id | title | categories | abstract |
|---|---|---|---|
1109.6618
|
Multiple-Goal Heuristic Search
|
cs.AI
|
This paper presents a new framework for anytime heuristic search where the
task is to achieve as many goals as possible within the allocated resources. We
show the inadequacy of traditional distance-estimation heuristics for tasks of
this type and present alternative heuristics that are more appropriate for
multiple-goal search. In particular, we introduce the marginal-utility
heuristic, which estimates the cost and the benefit of exploring a subtree
below a search node. We developed two methods for online learning of the
marginal-utility heuristic. One is based on local similarity of the partial
marginal utility of sibling nodes, and the other generalizes marginal-utility
over the state feature space. We apply our adaptive and non-adaptive
multiple-goal search algorithms to several problems, including focused
crawling, and show their superiority over existing methods.
|
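The marginal-utility idea above can be illustrated with a toy best-first search that spends a fixed expansion budget on the nodes whose estimated utility is highest. This is a sketch under our own assumptions, not the paper's algorithm: the graph, the budget accounting, and the `utility` lookup are all hypothetical stand-ins for the learned marginal-utility heuristic.

```python
import heapq

def multiple_goal_search(graph, start, goals, budget, utility):
    """Best-first search that expands nodes in order of estimated
    utility (a stand-in for marginal utility: expected goals per unit
    cost) until the node-expansion budget runs out.  Returns the set
    of goals collected within the budget."""
    found = set()
    frontier = [(-utility(start), start)]  # max-heap via negated utility
    visited = {start}
    while frontier and budget > 0:
        _, node = heapq.heappop(frontier)
        budget -= 1                        # one resource unit per expansion
        if node in goals:
            found.add(node)
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (-utility(nxt), nxt))
    return found
```

With a utility estimate that favors the goal-dense branch, a small budget already collects both goals; a distance-based heuristic aimed at a single nearest goal would not prioritize the subtree this way.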
1109.6621
|
FluCaP: A Heuristic Search Planner for First-Order MDPs
|
cs.AI
|
We present a heuristic search algorithm for solving first-order Markov
Decision Processes (FOMDPs). Our approach combines first-order state
abstraction that avoids evaluating states individually, and heuristic search
that avoids evaluating all states. Firstly, in contrast to existing systems,
which start by propositionalizing the FOMDP and then perform state
abstraction on its propositionalized version, we apply state abstraction
directly to the FOMDP, avoiding propositionalization. This kind of abstraction
is referred to as first-order state abstraction. Secondly, guided by an
admissible heuristic, the search is restricted to those states that are
reachable from the initial state. We demonstrate the usefulness of the above
techniques for solving FOMDPs with a system, referred to as FluCaP (formerly,
FCPlanner), that entered the probabilistic track of the 2004 International
Planning Competition (IPC2004) and demonstrated an advantage over other
planners on the problems represented in first-order terms.
|
1109.6638
|
The Statistical Inefficiency of Sparse Coding for Images (or, One Gabor
to Rule them All)
|
cs.CV cs.AI
|
Sparse coding is a proven principle for learning compact representations of
images. However, sparse coding by itself often leads to very redundant
dictionaries. With images, this often takes the form of similar edge detectors
which are replicated many times at various positions, scales and orientations.
An immediate consequence of this observation is that the estimation of the
dictionary components is not statistically efficient. We propose a factored
model in which factors of variation (e.g. position, scale and orientation) are
untangled from the underlying Gabor-like filters. There is so much redundancy
in sparse codes for natural images that our model requires only a single
dictionary element (a Gabor-like edge detector) to outperform standard sparse
coding. Our model scales naturally to arbitrary-sized images while achieving
much greater statistical efficiency during learning. We validate this claim
with a number of experiments showing, in part, superior compression of
out-of-sample data using a sparse coding dictionary learned with only a single
image.
|
1109.6642
|
Encoding dynamics for multiscale community detection: Markov time
sweeping for the Map equation
|
physics.soc-ph cs.IT cs.SI math.IT
|
The detection of community structure in networks is intimately related to
finding a concise description of the network in terms of its modules. This
notion has been recently exploited by the Map equation formalism (M. Rosvall
and C.T. Bergstrom, PNAS, 105(4), pp.1118--1123, 2008) through an
information-theoretic description of the process of coding inter- and
intra-community transitions of a random walker in the network at stationarity.
However, a thorough study of the relationship between the full Markov dynamics
and the coding mechanism is still lacking. We show here that the original Map
coding scheme, which is both block-averaged and one-step, neglects the internal
structure of the communities and introduces an upper scale, the `field-of-view'
limit, in the communities it can detect. As a consequence, Map is well tuned to
detect clique-like communities but can lead to undesirable overpartitioning
when communities are far from clique-like. We show that a signature of this
behavior is a large compression gap: the Map description length is far from its
ideal limit. To address this issue, we propose a simple dynamic approach that
introduces time explicitly into the Map coding through the analysis of the
weighted adjacency matrix of the time-dependent multistep transition matrix of
the Markov process. The resulting Markov time sweeping induces a dynamical
zooming across scales that can reveal (potentially multiscale) community
structure above the field-of-view limit, with the relevant partitions indicated
by a small compression gap.
|
1109.6646
|
A Non-MDS Erasure Code Scheme For Storage Applications
|
cs.IT cs.DC cs.NI math.IT
|
This paper investigates the use of redundancy and self repairing against node
failures in distributed storage systems, using various strategies. In the
replication method, access to one replica node is sufficient to reconstruct
a lost node, while in MDS erasure-coded systems, which are optimal in terms of
redundancy-reliability tradeoff, a single node failure is repaired after
recovering the entire stored data. Moreover, regenerating codes yield a
tradeoff curve between storage capacity and repair bandwidth. The current paper
aims at investigating a new storage code. Specifically, we propose a non-MDS
(2k, k) code that tolerates any three node failures; more importantly, it is
shown that, using our code, a single node failure can be repaired through
access to only three nodes.
|
1109.6665
|
Distributed and Cascade Lossy Source Coding with a Side Information
"Vending Machine"
|
cs.IT math.IT
|
Source coding with a side information "vending machine" is a recently
proposed framework in which the statistical relationship between the side
information and the source, instead of being given and fixed as in the
classical Wyner-Ziv problem, can be controlled by the decoder. This control
action is selected by the decoder based on the message encoded by the source
node. Unlike conventional settings, the message can thus carry not only
information about the source to be reproduced at the decoder, but also control
information aimed at improving the quality of the side information. In this
paper, the analysis of the trade-offs between rate, distortion and cost
associated with the control actions is extended from the previously studied
point-to-point set-up to two basic multiterminal models. First, a distributed
source coding model is studied, in which two encoders communicate over
rate-limited links to a decoder, whose side information can be controlled. The
control actions are selected by the decoder based on the messages encoded by
both source nodes. For this set-up, inner bounds are derived on the
rate-distortion-cost region for both cases in which the side information is
available causally and non-causally at the decoder. These bounds are shown to
be tight under specific assumptions, including the scenario in which the
sequence observed by one of the nodes is a function of the source observed by
the other and the side information is available causally at the decoder. Then,
a cascade scenario in which three nodes are connected in a cascade and the last
node has controllable side information, is also investigated. For this model,
the rate-distortion-cost region is derived for general distortion requirements
and under the assumption of causal availability of side information at the last
node.
|
1109.6698
|
Improving Recommendation Quality by Merging Collaborative Filtering and
Social Relationships
|
cs.SI cs.IR physics.soc-ph
|
Matrix Factorization techniques have been successfully applied to raise the
quality of suggestions generated by Collaborative Filtering Systems (CFSs).
Traditional CFSs based on Matrix Factorization operate on the ratings provided
by users and have been recently extended to incorporate demographic aspects
such as age and gender. In this paper we propose to merge CFS based on Matrix
Factorization and information regarding social friendships in order to provide
users with more accurate suggestions and rankings on items of their interest.
The proposed approach has been evaluated on a real-life online social network;
the experimental results show an improvement over existing CFSs. A detailed
comparison with related literature is also presented.
|
1109.6714
|
A Novel Two-stage Entropy-based Robust Cooperative Spectrum Sensing
Scheme with Two-bit Decision in Cognitive Radio
|
cs.IT math.IT
|
Spectrum sensing is a key problem in cognitive radio. However, traditional
detectors become ineffective when noise uncertainty is severe. It is shown that
the entropy of Gaussian white noise is constant in the frequency domain, and a
robust detector based on the entropy of spectrum amplitude was proposed. In
this paper a novel detector is proposed based on the entropy of spectrum power
density, and its performance is better than the previous scheme with less
computational complexity. Furthermore, to improve the reliability of the
detection, a two-stage entropy-based cooperative spectrum sensing scheme using
two-bit decision is proposed, and simulation results show its superior
performance with relatively low computational complexity.
|
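The entropy test in this abstract can be sketched in a few lines. This is an illustrative toy, not the paper's detector: the bin count, threshold, and synthetic spectra are our assumptions; the idea shown is only that a flat noise spectrum spreads power across histogram bins (high entropy) while a narrowband signal concentrates it (low entropy).

```python
import math

def spectrum_entropy(power, bins=10):
    """Shannon entropy of the histogram of spectral power values.
    Flat, noise-like spectra spread mass over many bins (high entropy);
    a strong narrowband signal concentrates it (low entropy)."""
    lo, hi = min(power), max(power)
    width = (hi - lo) / bins or 1.0       # guard against a constant spectrum
    counts = [0] * bins
    for p in power:
        counts[min(int((p - lo) / width), bins - 1)] += 1
    n = len(power)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def detect(power, threshold):
    """Declare 'signal present' when spectral entropy drops below threshold."""
    return spectrum_entropy(power) < threshold
```

Because the entropy of the noise-only histogram is essentially constant, the threshold does not need to track the (uncertain) noise power, which is the robustness property the abstract appeals to.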
1109.6717
|
Lamarckism and mechanism synthesis: approaching constrained optimization
with ideas from biology
|
math.OC cs.NE
|
Nonlinear constrained optimization problems are encountered in many
scientific fields. To utilize the huge calculation power of current computers,
many mathematical models are also recast as optimization problems. Most of them
have constraint conditions which need to be handled. Borrowing biological
concepts, a study is carried out on handling the constraints in the
synthesis of a four-bar mechanism. Biologically regarding the constraint
condition as a form of selection on the characteristics of a population, four new
algorithms are proposed, and a new explanation is given for the penalty method.
Using these algorithms, three cases are tested in differential-evolution based
programs. Better, or comparable, results show that the presented algorithms
and methodology may serve as general means of constraint handling in
optimization problems.
|
1109.6719
|
Separation Number and Generalized Clustering Coefficient in Small World
Networks based on String Formalism
|
physics.soc-ph cs.SI
|
We reformulated the string formalism given by Aoyama, using the adjacency
matrix of a network, and introduced a series of generalized clustering
coefficients based on it. Furthermore, we numerically evaluated the Milgram
condition proposed in that article in order to explore $q$-th degrees of
separation in scale-free networks. In the present article, we apply the
reformulation to small-world networks and numerically evaluate the Milgram
condition; in particular, the separation number of small-world networks and
its relation to cycle structures are discussed. Considering the number of
non-zero elements of the adjacency matrix, the average path length, and the
Milgram condition, we show that the proposed formalism is effective for
analyzing the six degrees of separation, and especially the relation between
the separation number and cycle structures in a network.
This analysis of small-world networks shows that a sort of power law holds
between $M_n$, which is a key quantity in the Milgram condition, and the
generalized clustering coefficients. This property of small-world networks
stands in contrast to that of scale-free networks.
|
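The adjacency-matrix quantities this abstract builds on are standard walk counts. As a minimal sketch (not the Aoyama string formalism itself, whose generalized coefficients are more elaborate): powers of the adjacency matrix count walks, and the ratio of closed length-3 walks to open length-2 paths gives the ordinary global clustering coefficient.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, p):
    """(A^p)[i][j] is the number of length-p walks from node i to node j."""
    M = A
    for _ in range(p - 1):
        M = matmul(M, A)
    return M

def global_clustering(A):
    """trace(A^3) counts closed length-3 walks (six per triangle); the
    off-diagonal entries of A^2 count open length-2 paths.  Their ratio
    is the global clustering coefficient (transitivity)."""
    n = len(A)
    A2, A3 = matpow(A, 2), matpow(A, 3)
    closed = sum(A3[i][i] for i in range(n))
    paths = sum(A2[i][j] for i in range(n) for j in range(n) if i != j)
    return closed / paths if paths else 0.0
```

A triangle is fully clustered, a chain has no triangles at all, and higher powers of the same matrix are exactly the cycle-structure counts the abstract relates to the separation number.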
1109.6726
|
A Fuzzy Co-Clustering approach for Clickstream Data Pattern
|
cs.IR
|
Web usage mining is a very important tool for extracting hidden business
intelligence from large databases. The extracted information provides
organizations with the ability to act more effectively to improve
their businesses and increase sales. Co-clustering is a powerful
bipartition technique which identifies groups of users associated with groups
of web pages. These associations are quantified to reveal the users' interest
in the different clusters of web pages. In this paper, a Fuzzy Co-Clustering
algorithm is proposed for clickstream data to identify subsets of users with
similar navigational behavior/interest over a subset of web pages of a website.
Targeting user groups for various promotional activities is an important
aspect of marketing practice. Experiments are conducted on a real dataset to
demonstrate the efficiency of the proposed algorithm. The results and findings
of this algorithm could be used to enhance marketing strategy for direct
marketing, advertisements for web-based businesses, and so on.
|
1109.6757
|
New entropic uncertainty relations for prime power dimensions
|
quant-ph cs.IT math.IT
|
We consider the question of entropic uncertainty relations for prime power
dimensions. In order to improve upon such uncertainty relations for higher
dimensional quantum systems, we derive a tight lower bound on the entropy
of multiple probability distributions under the constraint that the sum of the
collision probabilities of all the distributions is fixed. This is a purely
classical information-theoretic result; however, an interesting result
by Larsen \cite{Larsen90} allows us to connect it to an entropic uncertainty
relation.
|
1109.6776
|
Behaviors of $\phi$-exponential distributions in Wasserstein geometry
and an evolution equation
|
math.MG cs.IT math.IT
|
A $\phi$-exponential distribution is a generalization of an exponential
distribution associated to functions $\phi$ in an appropriate class, and the
space of $\phi$-exponential distributions has a dually flat structure. We study
features of the space of $\phi$-exponential distributions, such as the
convexity in Wasserstein geometry and the stability under an evolution
equation. From this study, we provide new characterizations of the space of
Gaussian measures and the space of $q$-Gaussian measures.
|
1109.6838
|
Distributed Air Traffic Control : A Human Safety Perspective
|
cs.MA cs.AI
|
The issues in air traffic control have so far been addressed with the intent
to improve resource utilization and achieve an optimized solution with respect
to the fuel consumption of aircraft and efficient usage of the available
airspace with minimal congestion-related losses under various dynamic
constraints. The focus has almost always been on smarter management of
traffic to increase profits, while human safety, though achieved in the
process, has, we believe, remained less seriously attended to. This has become
all the more important given that we have overburdened and overstressed air
traffic controllers managing hundreds of airports and thousands of aircraft
per day. We propose a multiagent-system-based distributed approach to handling
air traffic that ensures complete human (passenger) safety without removing
any humans (ground controllers) from the loop, thereby also retaining the
earlier advantages in the new solution. The detailed design of the agent
system, which can easily be interfaced with the existing environment, is
described. Based on our initial findings from simulations, we strongly believe
the system is capable of handling the nuances involved, and is extendable and
customizable at any later point in time.
|
1109.6840
|
A Novel comprehensive method for real time Video Motion Detection
Surveillance
|
cs.CV
|
This article describes a comprehensive system for surveillance and monitoring
applications. The development of an efficient real time video motion detection
system is motivated by their potential for deployment in the areas where
security is the main concern. The paper presents a platform for real time video
motion detection and subsequent generation of an alarm condition as set by the
parameters of the control system. The prototype consists of a mobile platform
mounted with RF camera which provides continuous feedback of the environment.
The received visual information is then analyzed by user for appropriate
control action, thus enabling the user to operate the system from a remote
location. The system is also equipped with the ability to process the image of
an object and generate control signals which are automatically transmitted to
the mobile platform to track the object.
|
1109.6841
|
Learning Dependency-Based Compositional Semantics
|
cs.AI
|
Suppose we want to build a system that answers a natural language question by
representing its semantics as a logical form and computing the answer given a
structured database of facts. The core part of such a system is the semantic
parser that maps questions to logical forms. Semantic parsers are typically
trained from examples of questions annotated with their target logical forms,
but this type of annotation is expensive.
Our goal is to learn a semantic parser from question-answer pairs instead,
where the logical form is modeled as a latent variable. Motivated by this
challenging learning problem, we develop a new semantic formalism,
dependency-based compositional semantics (DCS), which has favorable linguistic,
statistical, and computational properties. We define a log-linear distribution
over DCS logical forms and estimate the parameters using a simple procedure
that alternates between beam search and numerical optimization. On two standard
semantic parsing benchmarks, our system outperforms all existing
state-of-the-art systems, despite using no annotated logical forms.
|
1109.6845
|
Optimal Power Allocation for Two-Way Decode-and-Forward OFDM Relay
Networks
|
cs.IT cs.NI math.IT
|
This paper presents a novel two-way decode-and-forward (DF) relay strategy
for Orthogonal Frequency Division Multiplexing (OFDM) relay networks. This DF
relay strategy employs multi-subcarrier joint channel coding to leverage
frequency selective fading, and thus can achieve a higher data rate than the
conventional per-subcarrier DF relay strategies. We further propose a
low-complexity, optimal power allocation strategy to maximize the data rate of
the proposed relay strategy. Simulation results suggest that our strategy
obtains a substantial gain over the per-subcarrier DF relay strategies, and
also outperforms the amplify-and-forward (AF) relay strategy over a wide
signal-to-noise ratio (SNR) range.
|
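For orientation on subcarrier power allocation in OFDM: the textbook building block is water-filling over the per-subcarrier channel gains. This is a generic sketch, not the paper's low-complexity allocation for its joint-coding two-way DF strategy; the gains and total power below are hypothetical.

```python
def water_filling(gains, total_power, iters=100):
    """Classic water-filling: maximize sum_i log2(1 + g_i * p_i) subject
    to sum_i p_i = total_power, p_i >= 0.  The optimum is
    p_i = max(0, mu - 1/g_i) for a water level mu, found here by bisection
    on the monotone function mu -> total power used."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Stronger subcarriers (larger g_i) get more power, which is the frequency-selective effect any per-subcarrier or joint-coding strategy has to exploit.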
1109.6862
|
Video OCR for Video Indexing
|
cs.IR cs.MM
|
Video OCR is a technique that can greatly help to locate topics of
interest in video via the automatic extraction and reading of captions and
annotations. Text in video can provide key indexing information, and
recognizing such text for search applications is critical. The major
difficulties in character recognition for video are degraded and deformed
characters, low-resolution characters, and very complex backgrounds. To tackle
this problem, preprocessing of the text image plays a vital role. Most OCR
engines work on binary images, so finding a binarization procedure that yields
the desired result is important. An accurate binarization process minimizes
the error rate of video OCR.
|
1109.6874
|
#h00t: Censorship Resistant Microblogging
|
cs.CR cs.SI
|
Microblogging services such as Twitter are an increasingly important way to
communicate, both for individuals and for groups through the use of hashtags
that denote topics of conversation. However, groups can be easily blocked from
communicating through blocking of posts with the given hashtags. We propose
#h00t, a system for censorship resistant microblogging. #h00t presents an
interface that is much like Twitter, except that hashtags are replaced with
very short hashes (e.g., 24 bits) of the group identifier. Naturally, with such
short hashes, hashtags from different groups may collide and #h00t users will
actually seek to create collisions. By encrypting all posts with keys derived
from the group identifiers, #h00t client software can filter out other groups'
posts while making such filtering difficult for the adversary. In essence, by
leveraging collisions, groups can tunnel their posts in other groups' posts. A
censor could not block a given group without also blocking the other groups
with colliding hashtags. We evaluate the feasibility of #h00t through traces
collected from Twitter, showing that a single modern computer has enough
computational throughput to encrypt every tweet sent through Twitter in real
time. We also use these traces to analyze the bandwidth and anonymity tradeoffs
that would come with different variations on how group identifiers are encoded
and hashtags are selected to purposefully collide with one another.
|
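The tag-collision mechanism described above can be sketched with standard primitives. This is our own illustration, not the #h00t implementation: the key derivation and group names are hypothetical, and for brevity posts are authenticated with a MAC rather than encrypted; only the 24-bit-tag and key-derived filtering mechanics follow the abstract.

```python
import hashlib
import hmac

def tag(group_id):
    """24-bit public hashtag: distinct groups may (deliberately) collide."""
    return hashlib.sha256(group_id.encode()).digest()[:3]

def post(group_id, text):
    """A post carries only the short tag plus a MAC under the group key.
    (A real client would also encrypt `text`; omitted here.)"""
    key = hashlib.sha256(b"key:" + group_id.encode()).digest()
    mac = hmac.new(key, text.encode(), hashlib.sha256).hexdigest()
    return {"tag": tag(group_id), "text": text, "mac": mac}

def belongs_to(msg, group_id):
    """Client-side filtering: fetch everything with a matching tag, then
    keep only posts that verify under this group's key."""
    if msg["tag"] != tag(group_id):
        return False
    key = hashlib.sha256(b"key:" + group_id.encode()).digest()
    expected = hmac.new(key, msg["text"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(msg["mac"], expected)
```

A censor sees only the 24-bit tag, so blocking one group's tag also blocks every other group that chose a colliding identifier, which is the cost the scheme imposes on censorship.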
1109.6880
|
Explanation-Based Auditing
|
cs.DB
|
To comply with emerging privacy laws and regulations, it has become common
for applications like electronic health records systems (EHRs) to collect
access logs, which record each time a user (e.g., a hospital employee) accesses
a piece of sensitive data (e.g., a patient record). Using the access log, it is
easy to answer simple queries (e.g., Who accessed Alice's medical record?), but
this often does not provide enough information. In addition to learning who
accessed their medical records, patients will likely want to understand why
each access occurred. In this paper, we introduce the problem of generating
explanations for individual records in an access log. The problem is motivated
by user-centric auditing applications, and it also provides a novel approach to
misuse detection. We develop a framework for modeling explanations which is
based on a fundamental observation: For certain classes of databases, including
EHRs, the reason for most data accesses can be inferred from data stored
elsewhere in the database. For example, if Alice has an appointment with Dr.
Dave, this information is stored in the database, and it explains why Dr. Dave
looked at Alice's record. Large numbers of data accesses can be explained using
general forms called explanation templates. Rather than requiring an
administrator to manually specify explanation templates, we propose a set of
algorithms for automatically discovering frequent templates from the database
(i.e., those that explain a large number of accesses). We also propose
techniques for inferring collaborative user groups, which can be used to
enhance the quality of the discovered explanations. Finally, we have evaluated
our proposed techniques using an access log and data from the University of
Michigan Health System. Our results demonstrate that in practice we can provide
explanations for over 94% of data accesses in the log.
|
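The appointment example in this abstract reduces to a join between the access log and the rest of the database. A minimal sketch with hypothetical toy tables (the paper's templates are discovered automatically; here one template is hard-coded):

```python
# Toy database state: which clinicians have appointments with which patients.
appointments = {("dr_dave", "alice"), ("dr_kim", "bob")}

# Access log entries: (user, patient whose record was accessed).
access_log = [
    ("dr_dave", "alice"),   # explained by an appointment
    ("dr_kim", "bob"),      # explained by an appointment
    ("dr_dave", "bob"),     # no supporting record -> flag for review
]

def explain(access):
    """One explanation template: 'user U accessed patient P's record
    because U has an appointment with P'."""
    user, patient = access
    if (user, patient) in appointments:
        return f"{user} has an appointment with {patient}"
    return None

# Accesses with no explanation are candidates for misuse auditing.
unexplained = [a for a in access_log if explain(a) is None]
```

User-centric auditing reports the explanation strings to patients; misuse detection focuses the auditor's attention on the `unexplained` remainder.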
1109.6881
|
Human-powered Sorts and Joins
|
cs.DB
|
Crowdsourcing markets like Amazon's Mechanical Turk (MTurk) make it possible
to task people with small jobs, such as labeling images or looking up phone
numbers, via a programmatic interface. MTurk tasks for processing datasets with
humans are currently designed with significant reimplementation of common
workflows and ad-hoc selection of parameters such as price to pay per task. We
describe how we have integrated crowds into a declarative workflow engine
called Qurk to reduce the burden on workflow designers. In this paper, we focus
on how to use humans to compare items for sorting and joining data, two of the
most common operations in DBMSs. We describe our basic query interface and the
user interface of the tasks we post to MTurk. We also propose a number of
optimizations, including task batching, replacing pairwise comparisons with
numerical ratings, and pre-filtering tables before joining them, which
dramatically reduce the overall cost of running sorts and joins on the crowd.
In an experiment joining two sets of images, we reduce the overall cost from
$67 in a naive implementation to about $3, without substantially affecting
accuracy or latency. In an end-to-end experiment, we reduced cost by a factor
of 14.5.
|
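One of the optimizations named above, replacing pairwise comparisons with numerical ratings, can be sketched directly: instead of O(n^2) comparison tasks, each item receives a few independent scores and items are sorted by mean rating. The rating data below is hypothetical; Qurk's actual task generation and pricing are not modeled.

```python
from collections import defaultdict
from statistics import mean

def crowd_sort(items, ratings):
    """ratings: (item, score) judgments collected from crowd workers.
    Sort items best-first by mean numerical rating, needing only O(n)
    rating tasks instead of O(n^2) pairwise comparison tasks."""
    by_item = defaultdict(list)
    for item, score in ratings:
        by_item[item].append(score)
    return sorted(items, key=lambda it: mean(by_item[it]), reverse=True)
```

The trade-off the paper measures is exactly this: ratings are far cheaper than comparisons, at some risk to accuracy when items are close in quality.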
1109.6882
|
Verifying Computations with Streaming Interactive Proofs
|
cs.DB
|
When computation is outsourced, the data owner would like to be assured that
the desired computation has been performed correctly by the service provider.
In theory, proof systems can give the necessary assurance, but prior work is
not sufficiently scalable or practical. In this paper, we develop new proof
protocols for verifying computations which are streaming in nature: the
verifier (data owner) needs only logarithmic space and a single pass over the
input, and after observing the input follows a simple protocol with a prover
(service provider) that takes logarithmic communication spread over a
logarithmic number of rounds. These ensure that the computation is performed
correctly: that the service provider has not made any errors or missed out some
data. The guarantee is very strong: even if the service provider deliberately
tries to cheat, there is only vanishingly small probability of doing so
undetected, while a correct computation is always accepted. We first observe
that some theoretical results can be modified to work with streaming verifiers,
showing that there are efficient protocols for problems in the complexity
classes NP and NC. Our main results then seek to bridge the gap between theory
and practice by developing usable protocols for a variety of problems of
central importance in streaming and database processing. All these problems
require linear space in the traditional streaming model, and therefore our
protocols demonstrate that adding a prover can exponentially reduce the effort
needed by the verifier. Our experimental results show that our protocols are
practical and scalable.
|
1109.6883
|
A Moving-Object Index for Efficient Query Processing with Peer-Wise
Location Privacy
|
cs.DB
|
With the growing use of location-based services, location privacy attracts
increasing attention from users, industry, and the research community. While
considerable effort has been devoted to inventing techniques that prevent
service providers from knowing a user's exact location, relatively little
attention has been paid to enabling so-called peer-wise privacy--the protection
of a user's location from unauthorized peer users. This paper identifies an
important efficiency problem in existing peer-privacy approaches that simply
apply a filtering step to identify users that are located in a query range, but
that do not want to disclose their location to the querying peer. To solve this
problem, we propose a novel, privacy-policy enabled index called the PEB-tree
that seamlessly integrates location proximity and policy compatibility. We
propose efficient algorithms that use the PEB-tree for processing privacy-aware
range and kNN queries. Extensive experiments suggest that the PEB-tree enables
efficient query processing.
|
1109.6884
|
ERA: Efficient Serial and Parallel Suffix Tree Construction for Very
Long Strings
|
cs.DB
|
The suffix tree is a data structure for indexing strings. It is used in a
variety of applications such as bioinformatics, time series analysis,
clustering, text editing and data compression. However, when the string and the
resulting suffix tree are too large to fit into the main memory, most existing
construction algorithms become very inefficient. This paper presents a
disk-based suffix tree construction method, called Elastic Range (ERa), which
works efficiently with very long strings that are much larger than the
available memory. ERa partitions the tree construction process horizontally and
vertically and minimizes I/Os by dynamically adjusting the horizontal
partitions independently for each vertical partition, based on the evolving
shape of the tree and the available memory. Where appropriate, ERa also groups
vertical partitions together to amortize the I/O cost. We developed a serial
version; a parallel version for shared-memory and shared-disk multi-core
systems; and a parallel version for shared-nothing architectures. ERa indexes
the entire human genome in 19 minutes on an ordinary desktop computer. For
comparison, the fastest existing method needs 15 minutes using 1024 CPUs on an
IBM BlueGene supercomputer.
|
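For readers unfamiliar with the data structure: a suffix index answers substring queries by storing every suffix of the string. The naive O(n^2)-space trie below is for illustration only; ERa's contribution is doing the equivalent job out-of-core for strings far larger than memory.

```python
def build_suffix_trie(s):
    """Naive suffix trie: insert every suffix of s character by character.
    Quadratic in the worst case -- fine for a demo, hopeless for a genome."""
    s += "$"                           # unique terminator
    root = {}
    for i in range(len(s)):
        node = root
        for ch in s[i:]:
            node = node.setdefault(ch, {})
    return root

def contains(trie, pattern):
    """A string contains `pattern` iff `pattern` is a prefix of some suffix,
    i.e. iff walking the trie along `pattern` never falls off."""
    node = trie
    for ch in pattern:
        if ch not in node:
            return False
        node = node[ch]
    return True
```

Real suffix trees share edges (linear space) and, as the abstract describes, serious construction is dominated by I/O behavior, not by this insertion logic.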
1109.6885
|
Fast Updates on Read-Optimized Databases Using Multi-Core CPUs
|
cs.DB
|
Read-optimized columnar databases use differential updates to handle writes
by maintaining a separate write-optimized delta partition which is periodically
merged with the read-optimized and compressed main partition. This merge
process introduces significant overheads and unacceptable downtimes in update
intensive systems, aspiring to combine transactional and analytical workloads
into one system. In the first part of the paper, we report data analyses of 12
SAP Business Suite customer systems. In the second half, we present an
optimized merge process reducing the merge overhead of current systems by a
factor of 30. Our linear-time merge algorithm exploits the underlying high
compute and bandwidth resources of modern multi-core CPUs with
architecture-aware optimizations and efficient parallelization. This enables
compressed in-memory column stores to handle the transactional update rate
required by enterprise applications, while keeping properties of read-optimized
databases for analytic-style queries.
|
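The merge step this abstract optimizes can be shown in miniature. This sketch assumes a sorted-dictionary-encoded main column, which is a common columnar layout; the values and column contents are hypothetical, and none of the paper's parallelization or architecture-aware optimizations appear here.

```python
def merge_partitions(main_dict, main_col, delta_values):
    """Merge a dictionary-compressed main column with an uncompressed
    write-optimized delta: build the union dictionary, re-map the old
    codes (insertions can shift every code), then append encoded delta
    rows.  This re-mapping over the full column is what makes the merge
    expensive and worth optimizing."""
    new_dict = sorted(set(main_dict) | set(delta_values))
    code = {v: i for i, v in enumerate(new_dict)}
    new_col = [code[main_dict[c]] for c in main_col]     # re-encode main
    new_col += [code[v] for v in delta_values]           # append delta
    return new_dict, new_col
```

Note how inserting "banana" shifts the code of every value sorting after it, so the whole main column must be rewritten, a linear-time pass the paper accelerates on multi-core CPUs.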
1109.6886
|
A Data-Based Approach to Social Influence Maximization
|
cs.DB
|
Influence maximization is the problem of finding a set of users in a social
network, such that by targeting this set, one maximizes the expected spread of
influence in the network. Most of the literature on this topic has focused
exclusively on the social graph, overlooking historical data, i.e., traces of
past action propagations. In this paper, we study influence maximization from a
novel data-based perspective. In particular, we introduce a new model, which we
call credit distribution, that directly leverages available propagation traces
to learn how influence flows in the network and uses this to estimate expected
influence spread. Our approach also learns the different levels of
influenceability of users, and it is time-aware in the sense that it takes the
temporal nature of influence into account. We show that influence maximization
under the credit distribution model is NP-hard and that the function that
defines expected spread under our model is submodular. Based on these, we
develop an approximation algorithm for solving the influence maximization
problem that at once enjoys high accuracy compared to the standard approach,
while being several orders of magnitude faster and more scalable.
|
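Because expected spread under the credit-distribution model is submodular, the standard greedy algorithm applies with its usual (1 - 1/e) approximation guarantee. A generic sketch (the toy coverage-style `spread` below stands in for the paper's trace-learned estimator):

```python
def greedy_max(candidates, k, spread):
    """Greedy maximization of a monotone submodular set function:
    repeatedly add the candidate with the largest marginal gain in
    `spread` until k seeds are chosen."""
    seeds = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in seeds),
                   key=lambda c: spread(seeds | {c}) - spread(seeds))
        seeds.add(best)
    return seeds
```

Swapping in a spread estimator learned from propagation traces, instead of simulating a diffusion model over the bare social graph, is exactly the paper's data-based change of perspective.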
1110.0020
|
Causes of Ineradicable Spurious Predictions in Qualitative Simulation
|
cs.AI
|
It was recently proved that a sound and complete qualitative simulator does
not exist, that is, as long as the input-output vocabulary of the
state-of-the-art QSIM algorithm is used, there will always be input models
which cause any simulator with a coverage guarantee to make spurious
predictions in its output. In this paper, we examine whether a meaningfully
expressive restriction of this vocabulary is possible so that one can build a
simulator with both the soundness and completeness properties. We prove several
negative results: All sound qualitative simulators, employing subsets of the
QSIM representation which retain the operating region transition feature, and
support at least the addition and constancy constraints, are shown to be
inherently incomplete. Even when the simulations are restricted to run in a
single operating region, a constraint vocabulary containing just the addition,
constancy, derivative, and multiplication relations makes the construction of
sound and complete qualitative simulators impossible.
|
1110.0023
|
Properties and Applications of Programs with Monotone and Convex
Constraints
|
cs.AI
|
We study properties of programs with monotone and convex constraints. We
extend to these formalisms concepts and results from normal logic programming.
They include the notions of strong and uniform equivalence with their
characterizations, tight programs and Fages' Lemma, program completion and loop
formulas. Our results provide an abstract account of properties of some recent
extensions of logic programming with aggregates, especially the formalism of
lparse programs. They imply a method to compute stable models of lparse
programs by means of off-the-shelf solvers of pseudo-boolean constraints, which
is often much faster than the smodels system.
|
1110.0024
|
How the Landscape of Random Job Shop Scheduling Instances Depends on the
Ratio of Jobs to Machines
|
cs.AI
|
We characterize the search landscape of random instances of the job shop
scheduling problem (JSP). Specifically, we investigate how the expected values
of (1) backbone size, (2) distance between near-optimal schedules, and (3)
makespan of random schedules vary as a function of the job to machine ratio
(N/M). For the limiting cases N/M approaches 0 and N/M approaches infinity we
provide analytical results, while for intermediate values of N/M we perform
experiments. We prove that as N/M approaches 0, backbone size approaches 100%,
while as N/M approaches infinity the backbone vanishes. In the process we show
that as N/M approaches 0 (resp. N/M approaches infinity), simple priority rules
almost surely generate an optimal schedule, providing theoretical evidence of
an "easy-hard-easy" pattern of typical-case instance difficulty in job shop
scheduling. We also draw connections between our theoretical results and the
"big valley" picture of JSP landscapes.
|
1110.0026
|
Preference-based Search using Example-Critiquing with Suggestions
|
cs.AI
|
We consider interactive tools that help users search for their most preferred
item in a large collection of options. In particular, we examine
example-critiquing, a technique for enabling users to incrementally construct
preference models by critiquing example options that are presented to them. We
present novel techniques for improving the example-critiquing technology by
adding suggestions to its displayed options. Such suggestions are calculated
based on an analysis of the user's current preference model and their potential
hidden preferences. We evaluate the performance of our model-based suggestion
techniques with both synthetic and real users. Results show that such
suggestions are highly attractive to users and can stimulate them to express
more preferences to improve the chance of identifying their most preferred item
by up to 78%.
|
1110.0027
|
Anytime Point-Based Approximations for Large POMDPs
|
cs.AI
|
The Partially Observable Markov Decision Process has long been recognized as
a rich framework for real-world planning and control problems, especially in
robotics. However, exact solutions in this framework are typically
computationally intractable for all but the smallest problems. A well-known
technique for speeding up POMDP solving involves performing value backups at
specific belief points, rather than over the entire belief simplex. The
efficiency of this approach, however, depends greatly on the selection of
points. This paper presents a set of novel techniques for selecting informative
belief points which work well in practice. The point selection procedure is
combined with point-based value backups to form an effective anytime POMDP
algorithm called Point-Based Value Iteration (PBVI). The first aim of this
paper is to introduce this algorithm and present a theoretical analysis
justifying the choice of belief selection technique. The second aim of this
paper is to provide a thorough empirical comparison between PBVI and other
state-of-the-art POMDP methods, in particular the Perseus algorithm, in an
effort to highlight their similarities and differences. Evaluation is performed
using both standard POMDP domains and realistic robotic tasks.
|
1110.0028
|
Solving Factored MDPs with Hybrid State and Action Variables
|
cs.AI
|
Efficient representations and solutions for large decision problems with
continuous and discrete variables are among the most important challenges faced
by the designers of automated decision support systems. In this paper, we
describe a novel hybrid factored Markov decision process (MDP) model that
allows for a compact representation of these problems, and a new hybrid
approximate linear programming (HALP) framework that permits their efficient
solutions. The central idea of HALP is to approximate the optimal value
function by a linear combination of basis functions and optimize its weights by
linear programming. We analyze both theoretical and computational aspects of
this approach, and demonstrate its scale-up potential on several hybrid
optimization problems.
|
1110.0029
|
Combination Strategies for Semantic Role Labeling
|
cs.AI
|
This paper introduces and analyzes a battery of inference models for the
problem of semantic role labeling: one based on constraint satisfaction, and
several strategies that model the inference as a meta-learning problem using
discriminative classifiers. These classifiers are developed with a rich set of
novel features that encode proposition and sentence-level information. To our
knowledge, this is the first work that: (a) performs a thorough analysis of
learning-based inference models for semantic role labeling, and (b) compares
several inference strategies in this context. We evaluate the proposed
inference strategies in the framework of the CoNLL-2005 shared task using only
automatically-generated syntactic information. The extensive experimental
evaluation and analysis indicates that all the proposed inference strategies
are successful - they all outperform the current best results reported in the
CoNLL-2005 evaluation exercise - but each of the proposed approaches has its
advantages and disadvantages. Several important traits of a state-of-the-art
SRL combination strategy emerge from this analysis: (i) individual models
should be combined at the granularity of candidate arguments rather than at the
granularity of complete solutions; (ii) the best combination strategy uses an
inference model based on learning; and (iii) the learning-based inference
benefits from max-margin classifiers and global feedback.
|
1110.0061
|
Learning image transformations without training examples
|
cs.LG cs.CV
|
The use of image transformations is essential for efficient modeling and
learning of visual data. But the class of relevant transformations is large:
affine transformations, projective transformations, elastic deformations, ...
the list goes on. Therefore, learning these transformations, rather than hand
coding them, is of great conceptual interest. To the best of our knowledge, all
the related work so far has been concerned with either supervised or weakly
supervised learning (from correlated sequences, video streams, or
image-transform pairs). In this paper, on the contrary, we present a simple
method for learning affine and elastic transformations when no examples of
these transformations are explicitly given, and no prior knowledge of space
(such as ordering of pixels) is included either. The system has only access to
a moderately large database of natural images arranged in no particular order.
|
1110.0073
|
Hamming Compressed Sensing
|
cs.IT math.IT
|
Compressed sensing (CS) and 1-bit CS cannot directly recover quantized
signals and require time-consuming recovery. In this paper, we introduce
\textit{Hamming compressed sensing} (HCS) that directly recovers a k-bit
quantized signal of dimensional $n$ from its 1-bit measurements via invoking
$n$ times of Kullback-Leibler divergence based nearest neighbor search.
Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes
considerably less (linear) recovery time and requires substantially fewer
measurements ($\mathcal O(\log n)$). Moreover, HCS recovery can accelerate the
subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of
HCS for general signals and "HCS+dequantizer" recovery error bound for sparse
signals. Extensive numerical simulations verify the appealing accuracy,
robustness, efficiency and consistency of HCS.
|
1110.0084
|
Wireless Bidirectional Relaying and Latin Squares
|
cs.IT math.IT
|
The design of modulation schemes for the physical layer network-coded two way
relaying scenario is considered with the protocol which employs two phases:
Multiple access (MA) Phase and Broadcast (BC) Phase. It was observed by
Koike-Akino et al. that adaptively changing the network coding map used at the
relay according to the channel conditions greatly reduces the impact of
multiple access interference which occurs at the relay during the MA Phase and
all these network coding maps should satisfy a requirement called the {\it
exclusive law}. We highlight the issues associated with the scheme proposed by
Koike-Akino et al. and propose a scheme which solves these issues. We show that
every network coding map that satisfies the exclusive law is representable by a
Latin Square and conversely, and this relationship can be used to get the
network coding maps satisfying the exclusive law. Using the structural
properties of the Latin Squares for a given set of parameters, the problem of
finding all the required maps is reduced to finding a small set of maps for
$M-$PSK constellations. This is achieved using the notions of isotopic and
transposed Latin Squares. Even though the completability of a partially filled
$M \times M$ Latin Square using $M$ symbols is an open problem, two specific
cases where such a completion is always possible are identified and explicit
construction procedures are provided. The Latin Squares constructed using the
first procedure help reduce the total number of network coding maps
used. The second procedure helps in the construction of certain Latin Squares
for $M$-PSK signal set from the Latin squares obtained for $M/2$-PSK signal
set.
|
1110.0100
|
Long Distance Continuous-Variable Quantum Key Distribution with a
Gaussian Modulation
|
quant-ph cs.IT math.IT
|
We designed high-efficiency error-correcting codes that allow the extraction of
an errorless secret key in a continuous-variable quantum key distribution protocol
using a Gaussian modulation of coherent states and a homodyne detection. These
codes are available for a wide range of signal-to-noise ratios on an AWGN
channel with a binary modulation and can be combined with a multidimensional
reconciliation method proven secure against arbitrary collective attacks. This
improved reconciliation procedure considerably extends the secure range of a
continuous-variable quantum key distribution with a Gaussian modulation, giving
a secret key rate of about 10^{-3} bit per pulse at a distance of 120 km for
reasonable physical parameters.
|
1110.0105
|
Multi-Agent Programming Contest 2011 - The Python-DTU Team
|
cs.MA
|
We provide a brief description of the Python-DTU system, including the
overall design, the tools and the algorithms that we plan to use in the agent
contest.
|
1110.0107
|
Learning to relate images: Mapping units, complex cells and simultaneous
eigenspaces
|
cs.CV cs.AI nlin.AO stat.ML
|
A fundamental operation in many vision tasks, including motion understanding,
stereopsis, visual odometry, or invariant recognition, is establishing
correspondences between images or between images and data from other
modalities. We present an analysis of the role that multiplicative interactions
play in learning such correspondences, and we show how learning and inferring
relationships between images can be viewed as detecting rotations in the
eigenspaces shared among a set of orthogonal matrices. We review a variety of
recent multiplicative sparse coding methods in light of this observation. We
also review how the squaring operation performed by energy models and by models
of complex cells can be thought of as a way to implement multiplicative
interactions. This suggests that the main utility of including complex cells in
computational models of vision may be that they can encode relations, not
invariances.
|
1110.0124
|
Capacity Bounds for State-Dependent Broadcast Channels
|
cs.IT math.IT
|
In this paper, we derive information-theoretic performance limits for three
classes of two-user state-dependent discrete memoryless broadcast channels,
with noncausal side-information at the encoder. The first class of channels
comprises a sender broadcasting two independent messages to two non-cooperating
receivers; for channels of the second class, each receiver is given the message
it need not decode; and the third class comprises channels where the sender is
constrained to keep each message confidential from the unintended receiver. We
derive inner bounds for all the three classes of channels. For the first and
second class of channels, we discuss the rate penalty on the achievable region
for having to deal with side-information. For channels of third class, we
characterize the rate penalties for having to deal not only with
side-information, but also to satisfy confidentiality constraints. We then
derive outer bounds, where we present an explicit characterization of sum-rate
bounds for the first and third class of channels. For channels of the second
class, we show that our outer bounds are within a fixed gap away from the
achievable rate region, where the gap is independent of the distribution
characterizing this class of channels. The channel models presented in this
paper are useful variants of the classical broadcast channel, and provide
fundamental building blocks for cellular downlink communications with
side-information, such as fading in the wireless medium, interference caused by
neighboring nodes in the network, etc., at the encoder; two-way relay
communications; and secure wireless broadcasting.
|
1110.0169
|
Robust artificial neural networks and outlier detection. Technical
report
|
math.OC cs.CV cs.NA cs.NE math.NA stat.ME
|
Large outliers break down linear and nonlinear regression models. Robust
regression methods allow one to filter out the outliers when building a model.
By replacing the traditional least squares criterion with the least trimmed
squares criterion, in which half of the data is treated as potential outliers, one
can fit accurate regression models to strongly contaminated data.
High-breakdown methods have become very well established in linear regression,
but have only recently been applied to non-linear regression. In this
work, we examine the problem of fitting artificial neural networks to
contaminated data using least trimmed squares criterion. We introduce a
penalized least trimmed squares criterion which prevents unnecessary removal of
valid data. Training of ANNs leads to a challenging non-smooth global
optimization problem. We compare the efficiency of several derivative-free
optimization methods in solving it, and show that our approach identifies the
outliers correctly when ANNs are used for nonlinear regression.
|
1110.0194
|
Rate-Dependent Analysis of the Asymptotic Behavior of Channel
Polarization
|
cs.IT math.IT
|
For a binary-input memoryless symmetric channel $W$, we consider the
asymptotic behavior of the polarization process in the large block-length
regime when transmission takes place over $W$. In particular, we study the
asymptotics of the cumulative distribution $\mathbb{P}(Z_n \leq z)$, where
$\{Z_n\}$ is the Bhattacharyya process defined from $W$, and its dependence on
the rate of transmission. On the basis of this result, we characterize the
asymptotic behavior, as well as its dependence on the rate, of the block error
probability of polar codes using the successive cancellation decoder. This
refines the original bounds by Ar{\i}kan and Telatar. Our results apply to
general polar codes based on $\ell \times \ell$ kernel matrices.
We also provide lower bounds on the block error probability of polar codes
using the MAP decoder. The MAP lower bound and the successive cancellation
upper bound coincide when $\ell=2$, but there is a gap for $\ell>2$.
|
1110.0207
|
Analysing complexity of XML Schemas in geospatial web services
|
cs.DB
|
XML Schema is the language used to define the structure of messages exchanged
between OGC-based web service clients and providers. The size of these schemas
has been growing with time, reaching a state that makes its understanding and
effective application a hard task. A first step to cope with this situation is
to provide different ways to measure the complexity of the schemas. In this
regard, we present in this paper an analysis of the complexity of XML schemas
in OGC web services. We use a group of metrics found in the literature and
introduce new metrics to measure size and/or complexity of these schemas. The
use of adequate metrics allows us to quantify the complexity, quality and other
properties of the schemas, which can be very useful in different scenarios.
|
1110.0209
|
Dealing with large schema sets in mobile SOS-based applications
|
cs.DB
|
Although the adoption of OGC Web Services for server, desktop and web
applications has been successful, its penetration in mobile devices has been
slow. One of the main reasons is the performance problems associated with XML
processing as it consumes a lot of memory and processing time, which are scarce
resources in a mobile device. In this paper we propose an algorithm to generate
efficient code for XML data binding for mobile SOS-based applications. The
algorithm takes advantage of the fact that individual implementations use only
some portions of the standards' schemas, which allows the simplification of
large XML schema sets in an application-specific manner by using a subset of
XML instance files conforming to these schemas.
|
1110.0214
|
Eclectic Extraction of Propositional Rules from Neural Networks
|
cs.LG cs.AI cs.CV cs.NE
|
Artificial Neural Networks are among the most popular algorithms for supervised
learning. However, Neural Networks have the well-known drawback of being "Black
Box" learners that are not comprehensible to users. This lack of transparency
makes them unsuitable for many high-risk tasks, such as medical diagnosis, that
require a rational justification for making a decision. Rule Extraction
methods attempt to curb this limitation by extracting comprehensible rules from
a trained Network. Many such extraction algorithms have been developed over the
years with their respective strengths and weaknesses. They have been broadly
categorized into three types based on their approach to use internal model of
the Network. Eclectic Methods are hybrid algorithms that combine the other
approaches to attain more performance. In this paper, we present an Eclectic
method called HERETIC. Our algorithm uses Inductive Decision Tree learning
combined with information of the neural network structure for extracting
logical rules. Experiments and theoretical analysis show HERETIC to be better
in terms of speed and performance.
|
1110.0215
|
Completion Time in Broadcast Channel and Interference Channel
|
cs.IT math.IT
|
In a multi-user channel, completion time refers to the number of channel uses
required for users, each with some given fixed bit pool, to complete the
transmission of all their data bits. This paper extends the information
theoretic formulation of multi-access completion time to broadcast channel and
interference channel, enabling us to obtain the so-called completion time
region (CTR), which, analogous to capacity region, characterizes all possible
trade-offs between users' completion times. Specifically, for Gaussian
broadcast channel (GBC) and Gaussian interference channel (GIC) in the
strong/very strong regime, the exact CTR is obtained. For GIC in the weak/mixed
regime, an achievable CTR based on the Etkin-Tse-Wang scheme and an outer-bound
are obtained.
|
1110.0235
|
The Stanford RNA Mapping Database for sharing and visualizing RNA
structure mapping experiments
|
q-bio.BM cs.DB
|
We have established an RNA Mapping Database (RMDB) to enable a new generation
of structural, thermodynamic, and kinetic studies from quantitative
single-nucleotide-resolution RNA structure mapping (freely available at
http://rmdb.stanford.edu). Chemical and enzymatic mapping is a rapid, robust,
and widespread approach to RNA characterization. Since its recent coupling with
high-throughput sequencing techniques, accelerated software pipelines, and
large-scale mutagenesis, the volume of mapping data has greatly increased, and
there is a critical need for a database to enable sharing, visualization, and
meta-analyses of these data. Through its on-line front-end, the RMDB allows
users to explore single-nucleotide-resolution chemical accessibility data in
heat-map, bar-graph, and colored secondary structure graphics; to leverage
these data to generate secondary structure hypotheses; and to download the data
in standardized and computer-friendly files, including the RDAT and
community-consensus SNRNASM formats. At the time of writing, the database
houses 38 entries, describing 2659 RNA sequences and comprising 355,084 data
points, and is growing rapidly.
|
1110.0244
|
Analysis of Laser & Detector Placement in MIMO Multimode Optical Fiber
Systems
|
physics.optics cs.IT math.IT
|
Multimode fibers (MMFs) offer a cost-effective connection solution for small
and medium length networks. However, data rates through multimode fibers are
traditionally limited by modal dispersion. Signal processing and Multiple-Input
Multiple-Output (MIMO) have been shown to be effective at combating these
limitations, but device design for the specific purpose of MIMO in MMFs is
still an open issue. This paper utilizes a statistical field propagation model
for MMFs to aid the analysis and designs of MMF laser and detector arrays, and
aims to improve data rates of the fiber. Simulations reveal that optimal device
designs could possess 2-3 times the data carrying capacity of suboptimal ones.
|
1110.0248
|
A Behavioral Distance for Fuzzy-Transition Systems
|
cs.AI
|
In contrast to the existing approaches to bisimulation for fuzzy systems, we
introduce a behavioral distance to measure the behavioral similarity of states
in a nondeterministic fuzzy-transition system. This behavioral distance is
defined as the greatest fixed point of a suitable monotonic function and
provides a quantitative analogue of bisimilarity. The behavioral distance has
the important property that two states are at zero distance if and only if they
are bisimilar. Moreover, for any given threshold, we find that states with
behavioral distances bounded by the threshold are equivalent. In addition, we
show that two system combinators---parallel composition and product---are
non-expansive with respect to our behavioral distance, which makes
compositional verification possible.
|
1110.0252
|
Universal Codes for the Gaussian MAC via Spatial Coupling
|
cs.IT math.IT
|
We consider transmission of two independent and separately encoded sources
over a two-user binary-input Gaussian multiple-access channel. The channel
gains are assumed to be unknown at the transmitter and the goal is to design an
encoder-decoder pair that achieves reliable communication for all channel gains
where this is theoretically possible. We call such a system \emph{universal}
with respect to the channel gains.
Kudekar et al. recently showed that terminated low-density parity-check
convolutional codes (a.k.a. spatially-coupled low-density parity-check
ensembles) have belief-propagation thresholds that approach their maximum
a-posteriori thresholds. This was proven for binary erasure channels and shown
empirically for binary memoryless symmetric channels. It was conjectured that
the principle of spatial coupling is very general and the phenomenon of
threshold saturation applies to a very broad class of graphical models. In this
work, we derive an area theorem for the joint decoder and empirically show that
threshold saturation occurs for this problem. As a result, we demonstrate
near-universal performance for this problem using the proposed
spatially-coupled coding system.
|
1110.0264
|
Face Recognition using Optimal Representation Ensemble
|
cs.CV
|
Recently, the face recognizers based on linear representations have been
shown to deliver state-of-the-art performance. In real-world applications,
however, face images usually suffer from expressions, disguises and random
occlusions. The problematic facial parts undermine the validity of the
linear-subspace assumption and thus the recognition performance deteriorates
significantly. In this work, we address the problem in a
learning-inference-mixed fashion. By observing that the linear-subspace
assumption is more reliable on certain face patches rather than on the holistic
face, some Bayesian Patch Representations (BPRs) are randomly generated and
interpreted according to the Bayes' theory. We then train an ensemble model
over the patch-representations by minimizing the empirical risk w.r.t the
"leave-one-out margins". The obtained model is termed Optimal Representation
Ensemble (ORE), since it guarantees the optimality from the perspective of
Empirical Risk Minimization. To handle the unknown patterns in test faces, a
robust version of BPR is proposed by taking the non-face category into
consideration. Equipped with the Robust-BPRs, the inference ability of ORE is
increased dramatically and several record-breaking accuracies (99.9% on Yale-B
and 99.5% on AR) and desirable efficiencies (below 20 ms per face in Matlab)
are achieved. It also outperforms other modular heuristics on faces with
random occlusions, extreme expressions and disguises. Furthermore, to
accommodate immense BPR sets, a boosting-like algorithm is also derived. The
boosted model, a.k.a. Boosted-ORE, obtains performance similar to its prototype.
Besides the empirical superiorities, two desirable features of the proposed
methods, namely, the training-determined model-selection and the
data-weight-free boosting procedure, are also theoretically verified.
|
1110.0279
|
Coding-Theoretic Methods for Sparse Recovery
|
cs.IT cs.DM math.IT
|
We review connections between coding-theoretic objects and sparse learning
problems. In particular, we show how seemingly different combinatorial objects
such as error-correcting codes, combinatorial designs, spherical codes,
compressed sensing matrices and group testing designs can be obtained from one
another. The reductions enable one to translate upper and lower bounds on the
parameters attainable by one object to another. We survey some of the
well-known reductions in a unified presentation, and bring some existing gaps
to attention. New reductions are also introduced; in particular, we bring up
the notion of minimum "L-wise distance" of codes and show that this notion
closely captures the combinatorial structure of RIP-2 matrices. Moreover, we
show how this weaker variation of the minimum distance is related to
combinatorial list-decoding properties of codes.
|
1110.0289
|
Repr\'esentation de donn\'ees et m\'etadonn\'ees dans une biblioth\`eque
virtuelle pour une ad\'equation avec l'usager et les outils de glanage ou
moissonnage scientifique
|
cs.IR
|
The vehicles for official knowledge, as well as university libraries, suffer
from an increasingly visible lack of interest. This is due to the advent of
fully digital practices. By studying the psychological and cognitive models in
information retrieval initiated in the 1980s, it is possible to use these
theories and apply them practically to the Information Retrieval System, taking
into account the requirements of virtual libraries. New metadata standards,
along with modern reference-management tools, should help automate the process
of scientific research. We offer a practical implementation of the
given theories to test them when they are applied to the information retrieval
in computer sciences. This case under study will highlight good practices in
gleaning and harvesting scientific literature.
|
1110.0305
|
Significant communities in large sparse networks
|
physics.soc-ph cs.SI
|
Researchers use community-detection algorithms to reveal large-scale
organization in biological and social networks, but community detection is
useful only if the communities are significant and not a result of noisy data.
To assess the statistical significance of the network communities, or the
robustness of the detected structure, one approach is to perturb the network
structure by removing links and measure how much the communities change.
However, perturbing sparse networks is challenging because they are inherently
sensitive; they shatter easily if links are removed. Here we propose a simple
method to perturb sparse networks and assess the significance of their
communities. We generate resampled networks by adding extra links based on
local information, then we aggregate the information from multiple resampled
networks to find a coarse-grained description of significant clusters. In
addition to testing our method on benchmark networks, we use our method on the
sparse network of the European Court of Justice (ECJ) case law, to detect
significant and insignificant areas of law. We use our significance analysis to
draw a map of the ECJ case law network that reveals the relations between the
areas of law.
|
1110.0336
|
OntologyNavigator: WEB 2.0 scalable ontology based CLIR portal to IT
scientific corpus for researchers
|
cs.IR cs.DL cs.HC
|
This work presents the architecture used in the ongoing OntologyNavigator
project. It is a research tool that helps advanced learners find relevant IT
papers when creating scientific bibliographies. The purpose is to use an IT
domain representation as educational research software for researchers. We use an
ontology based on the ACM's Computing Classification System in order to find
scientific papers directly related to the new researcher's domain without any
formal request. An ontology translation into French is automatically proposed and
can be based on Web 2.0 enhanced by a community of users. A visualization and
navigation model is proposed to make it more accessible and examples are given
to show the interface of the tool. This model offers the possibility of cross
language query. Users interact deeply with the translation by providing
alternative translations of node labels. Users also enrich the ontology
node labels with implicit descriptors.
|
1110.0347
|
Accelerating consensus on co-evolving networks: the effect of committed
individuals
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Social networks are not static but rather constantly evolve in time. One of
the elements thought to drive the evolution of social network structure is
homophily - the need for individuals to connect with others who are similar to
them. In this paper, we study how the spread of a new opinion, idea, or
behavior on such a homophily-driven social network is affected by the changing
network structure. In particular, using simulations, we study a variant of the
Axelrod model on a network with a homophilic rewiring rule imposed. First, we
find that the presence of homophilic rewiring within the network, in general,
impedes the reaching of consensus in opinion, as the time to reach consensus
diverges exponentially with network size $N$. We then investigate whether the
introduction of committed individuals who are rigid in their opinion on a
particular issue, can speed up the convergence to consensus on that issue. We
demonstrate that as committed agents are added, beyond a critical value of the
committed fraction, the consensus time growth becomes logarithmic in network
size $N$. Furthermore, we show that slight changes in the interaction rule can
produce strikingly different results in the scaling behavior of $T_c$. However,
the benefit gained by introducing committed agents is qualitatively preserved
across all the interaction rules we consider.
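The qualitative effect of committed agents can be sketched with a much-simplified binary opinion model; this is a voter-style toy on a complete graph, not the authors' Axelrod variant with homophilic rewiring, and `consensus_time` with all its parameters is an illustrative assumption:

```python
import random

def consensus_time(n=100, committed_frac=0.1, max_steps=200_000, seed=0):
    """Toy binary opinion dynamics with committed agents on a complete graph.

    A much-simplified stand-in for the paper's Axelrod variant: committed
    agents hold opinion 1 and never update; every other agent repeatedly
    copies the opinion of a random peer. Returns the step at which global
    consensus on opinion 1 is reached, or None within the step budget.
    """
    rng = random.Random(seed)
    n_committed = int(committed_frac * n)
    opinions = [1] * n_committed + [0] * (n - n_committed)
    for step in range(1, max_steps + 1):
        i = rng.randrange(n_committed, n)   # only non-committed agents update
        j = rng.randrange(n)
        while j == i:
            j = rng.randrange(n)
        opinions[i] = opinions[j]           # copy a randomly chosen peer
        if all(o == 1 for o in opinions):
            return step
    return None
```

Because the committed agents make all-ones the only absorbing state, the toy model always reaches consensus; sweeping `committed_frac` is the natural experiment suggested by the abstract.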
|
1110.0376
|
Common Organizing Mechanisms in Ecological and Socio-economic Networks
|
physics.soc-ph cs.SI physics.data-an q-bio.PE
|
Previous work has shown that species interacting in an ecosystem and actors
transacting in an economic context may have notable similarities in behavior.
However, the specific mechanism that may underlie similarities in nature and
human systems has not been analyzed. Building on stochastic food-web models, we
propose a parsimonious bipartite-cooperation model that reproduces the key
features of mutualistic networks - degree distribution, nestedness and
modularity -- for both ecological networks and socio-economic networks. Our
analysis uses two diverse networks. Mutually-beneficial interactions between
plants and their pollinators, and cooperative economic exchanges between
designers and their contractors. We find that these mutualistic networks share
a key hierarchical ordering of their members, along with an exponential
constraint in the number and type of partners they can cooperate with. We use
our model to show that slight changes in the interaction constraints can
produce either extremely nested or random structures, revealing that these
constraints play a key role in the evolution of mutualistic networks. This
could also encourage a new systematic approach to study the functional and
structural properties of networks. The surprising correspondence across
mutualistic networks suggests their broad representativeness and their
potential role in the productive organization of exchange systems, both
ecological and social.
|
1110.0378
|
Exact Dynamic Support Tracking with Multiple Measurement Vectors using
Compressive MUSIC
|
cs.IT math.IT
|
Dynamic tracking of sparse targets has been one of the important topics in
array signal processing. Recently, compressed sensing (CS) approaches have been
extensively investigated as a new tool for this problem using partial support
information obtained by exploiting temporal redundancy. However, most of these
approaches are formulated under single measurement vector compressed sensing
(SMV-CS) framework, where the performance guarantees hold only in a
probabilistic manner. The main contribution of this paper is to allow
\textit{deterministic} tracking of time varying supports with multiple
measurement vectors (MMV) by exploiting multi-sensor diversity. In particular,
we show that a novel compressive MUSIC (CS-MUSIC) algorithm with optimized
partial support selection not only allows removal of the inaccurate portion of
the previous support estimate but also enables addition of newly emerged parts
of the unknown support. Numerical results confirm the theory.
|
1110.0381
|
Synchronicity, Instant Messaging and Performance among Financial Traders
|
physics.soc-ph cs.SI physics.data-an q-bio.PE
|
Successful animal systems often manage risk through synchronous behavior that
spontaneously arises without leadership. In critical human systems facing risk,
such as financial markets or military operations, our understanding of the
benefits associated with synchronicity is nascent but promising. Building on
previous work illuminating commonalities between ecological and human systems,
we compare the activity patterns of individual financial traders with the
simultaneous activity of other traders---an individual and spontaneous
characteristic we call synchronous trading. Additionally, we examine the
association of synchronous trading with individual performance and
communication patterns. Analyzing empirical data on day traders'
second-to-second trading and instant messaging, we find that the higher the
traders' synchronous trading, the less likely they lose money at the end of the
day. We also find that the daily instant messaging patterns of traders are
closely associated with their level of synchronous trading. This suggests that
synchronicity and vanguard technology may help cope with risky decisions in
complex systems and furnish new prospects for achieving collective and
individual goals.
|
1110.0413
|
Group Lasso with Overlaps: the Latent Group Lasso approach
|
stat.ML cs.LG
|
We study a norm for structured sparsity which leads to sparse linear
predictors whose supports are unions of predefined overlapping groups of
variables. We call the obtained formulation latent group Lasso, since it is
based on applying the usual group Lasso penalty on a set of latent variables. A
detailed analysis of the norm and its properties is presented and we
characterize conditions under which the set of groups associated with latent
variables are correctly identified. We motivate and discuss the delicate choice
of weights associated with each group, and illustrate this approach on simulated
data and on the problem of breast cancer prognosis from gene expression data.
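For intuition, the latent group Lasso norm for two overlapping groups can be evaluated by brute force over the split of the shared coordinate; `latent_group_norm_2groups` is a hypothetical helper for illustration, not the paper's estimator:

```python
import math

def latent_group_norm_2groups(w, g1, g2, grid=10001):
    """Latent group Lasso norm for two groups sharing one coordinate.

    Omega(w) = min ||v1||_2 + ||v2||_2 over decompositions w = v1 + v2 with
    supp(v1) in g1 and supp(v2) in g2 (unit group weights; w assumed
    supported on g1 union g2). Only the shared coordinate's split is free,
    so a 1-D grid search suffices. Illustration only, not the paper's
    estimator.
    """
    shared = sorted(set(g1) & set(g2))
    assert len(shared) == 1, "this sketch assumes exactly one shared index"
    s = shared[0]
    a = sum(w[i] ** 2 for i in g1 if i != s)   # energy exclusive to group 1
    b = sum(w[i] ** 2 for i in g2 if i != s)   # energy exclusive to group 2
    ws = w[s]
    best = float("inf")
    for k in range(grid):                      # split ws between the groups
        t = ws * k / (grid - 1)
        best = min(best, math.sqrt(a + t * t) + math.sqrt(b + (ws - t) ** 2))
    return best
```

For `w = [3, 4, 0]` with groups `[0, 1]` and `[1, 2]`, the optimal decomposition assigns everything to the first group, giving norm 5; when the two groups carry disjoint energy, the group norms simply add.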
|
1110.0425
|
Hybrid Codes Needed for Coordination over the Point-to-Point Channel
|
cs.IT math.IT
|
We consider a new fundamental question regarding the point-to-point
memoryless channel. The source-channel separation theorem indicates that random
codebook construction for lossy source compression and channel coding can be
independently constructed and paired to achieve optimal performance for
coordinating a source sequence with a reconstruction sequence. But what if we
want the channel input to also be coordinated with the source and
reconstruction? Such situations arise in network communication problems, where
the correlation inherent in the information sources can be used to correlate
channel inputs.
Hybrid codes have been shown to be useful in a number of network
communication problems. In this work we highlight their advantages over purely
digital codebook construction by applying them to the point-to-point setting,
coordinating both the channel input and the reconstruction with the source.
|
1110.0428
|
A Power Efficient Sensing/Communication Scheme: Joint
Source-Channel-Network Coding by Using Compressive Sensing
|
cs.IT math.IT
|
We propose a joint source-channel-network coding scheme, based on compressive
sensing principles, for wireless networks with AWGN channels (that may include
multiple access and broadcast), with sources exhibiting temporal and spatial
dependencies. Our goal is to provide a reconstruction of sources within an
allowed distortion level at each receiver. We perform joint source-channel
coding at each source by randomly projecting source values to a lower
dimensional space. We consider sources that satisfy the restricted eigenvalue
(RE) condition as well as more general sources for which the randomness of the
network allows a mapping to lower dimensional spaces. Our approach relies on
using analog random linear network coding. The receiver uses compressive
sensing decoders to reconstruct sources. Our key insight is the fact that
compressive sensing and analog network coding both preserve the source
characteristics required for compressive sensing decoding.
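The key insight can be sketched end to end: the composition of a random projection with random linear network mixing is itself a valid compressive-sensing measurement matrix. All names, sizes, and the choice of OMP as the decoder are illustrative assumptions (noise omitted for clarity):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Pipeline sketch: sparse source -> random projection Phi (joint
# source-channel coding) -> analog random linear network coding G ->
# compressive-sensing decoding of the composed matrix G @ Phi.
rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)  # sparse source
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random projection at the source
G = rng.normal(size=(m, m))                 # analog network coding mixing
y = G @ (Phi @ x)                           # received measurements
x_hat = omp(G @ Phi, y, k)                  # standard CS decoding suffices
```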
|
1110.0477
|
Distributed Evolutionary Graph Partitioning
|
cs.NE cs.DC
|
We present a novel distributed evolutionary algorithm, KaFFPaE, to solve the
Graph Partitioning Problem, which makes use of KaFFPa (Karlsruhe Fast Flow
Partitioner). The use of our multilevel graph partitioner KaFFPa provides new
effective crossover and mutation operators. By combining these with a scalable
communication protocol we obtain a system that is able to improve the best
known partitioning results for many inputs in a very short amount of time. For
example, in Walshaw's well known benchmark tables we are able to improve or
recompute 76% of entries for the tables with 1%, 3% and 5% imbalance.
|
1110.0517
|
Distance Preserving Graph Simplification
|
cs.SI
|
Large graphs are difficult to represent, visualize, and understand. In this
paper, we introduce "gate graph" - a new approach to perform graph
simplification. A gate graph provides a simplified topological view of the
original graph. Specifically, we construct a gate graph from a large graph so
that for any "non-local" vertex pair (distance higher than some threshold) in
the original graph, their shortest-path distance can be recovered by
consecutive "local" walks through the gate vertices in the gate graph. We
perform a theoretical investigation on the gate-vertex set discovery problem.
We characterize its computational complexity and reveal the upper bound of
minimum gate-vertex set using VC-dimension theory. We propose an efficient
mining algorithm to discover a gate-vertex set with guaranteed logarithmic
bound. We further present a fast technique for pruning redundant edges in a
gate graph. The detailed experimental results using both real and synthetic
graphs demonstrate the effectiveness and efficiency of our approach.
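As a toy check of the recovery property (a hypothetical sketch, not the paper's gate-vertex mining algorithm): given a gate-vertex set, the distance of a non-local pair is recovered by a shortest path over local hops, each of length at most the threshold, between gate vertices:

```python
import heapq
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src (adj: node -> neighbors)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def gate_distance(adj, gates, tau, u, v):
    """Recover d(u, v) for a non-local pair via local walks through gates.

    Builds an auxiliary weighted graph on {u, v} plus the gate vertices,
    with an edge of weight d(a, b) whenever 0 < d(a, b) <= tau, then runs
    Dijkstra from u to v. Toy illustration only.
    """
    nodes = [u, v] + [g for g in gates if g not in (u, v)]
    dist_from = {a: bfs_dist(adj, a) for a in nodes}
    best = {a: float("inf") for a in nodes}
    best[u] = 0
    heap = [(0, u)]
    while heap:
        du, a = heapq.heappop(heap)
        if du > best[a]:
            continue
        if a == v:
            return du
        for b in nodes:
            hop = dist_from[a].get(b, float("inf"))
            if 0 < hop <= tau and du + hop < best[b]:
                best[b] = du + hop
                heapq.heappush(heap, (du + hop, b))
    return float("inf")
```

On a path graph 0-1-...-7 with threshold 2, the gate set {2, 4, 6} recovers the exact distance for every non-local pair, e.g. d(0, 7) = 7 via 0-2-4-6-7.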
|
1110.0532
|
Strange Beta: An Assistance System for Indoor Rock Climbing Route
Setting Using Chaotic Variations and Machine Learning
|
cs.AI cs.HC stat.AP
|
This paper applies machine learning and the mathematics of chaos to the task
of designing indoor rock-climbing routes. Chaotic variation has been used to
great advantage in music and dance, but the challenges here are quite
different, beginning with the representation. We present a formalized system
for transcribing rock climbing problems, then describe a variation generator
that is designed to support human route-setters in designing new and
interesting climbing problems. This variation generator, termed Strange Beta,
combines chaos and machine learning, using the former to introduce novelty and
the latter to smooth transitions in a manner that is consistent with the style
of the climbs. This entails parsing the domain-specific natural language that
rock climbers use to describe routes and movement and then learning the
patterns in the results. We validated this approach with a pilot study in a
small university rock climbing gym, followed by a large blinded study in a
commercial climbing gym, in cooperation with experienced climbers and expert
route setters. The results show that Strange Beta can help a human setter
produce routes that are at least as good as, and in some cases better than,
those produced in the traditional manner.
|
1110.0535
|
Modeling the adoption of innovations in the presence of geographic and
media influences
|
cs.SI nlin.AO physics.soc-ph
|
While there has been much work examining the effects of social network
structure on innovation adoption, models to date have lacked important features
such as meta-populations reflecting real geography or influence from mass media
forces. In this article, we show that these features are crucial to producing
more accurate predictions of social contagion and technology adoption at the city
level. Using data from the adoption of the popular micro-blogging platform,
Twitter, we present a model of adoption on a network that places friendships in
real geographic space and exposes individuals to mass media influence. We show
that homophily, both amongst individuals with similar propensities to adopt a
technology and in geographic location, is critical to reproducing features of
real spatiotemporal adoption. Furthermore, we estimate that mass media was
responsible for increasing Twitter's user base two- to four-fold. To reflect
this strength, we extend traditional contagion models to include an endogenous
mass media agent that responds to those adopting an innovation as well as
influencing agents to adopt themselves.
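To make the endogenous-media idea concrete, here is a minimal mean-field sketch; the function, parameter values, and the linear media response are illustrative assumptions, not the paper's fitted network model:

```python
def adoption_curve(beta=0.3, media_gain=0.0, a0=0.01, steps=15):
    """Mean-field sketch of adoption with an endogenous mass-media agent.

    a is the adopted fraction. Peer contagion exerts pressure beta * a; the
    media agent broadcasts at a rate proportional to current adoption
    (media_gain * a), mimicking media that responds to adopters. All names
    and parameter values are illustrative assumptions.
    """
    a = a0
    curve = [a]
    for _ in range(steps):
        pressure = min(1.0, beta * a + media_gain * a)  # peers + media
        a = a + (1 - a) * pressure                       # non-adopters convert
        curve.append(a)
    return curve

no_media = adoption_curve(media_gain=0.0)
with_media = adoption_curve(media_gain=0.3)   # endogenous media switched on
```

Even in this crude sketch, switching the media agent on visibly accelerates adoption over the same horizon, the qualitative effect the abstract quantifies for Twitter.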
|
1110.0543
|
A high performance scientific cloud computing environment for materials
simulations
|
physics.comp-ph cond-mat.mtrl-sci cs.CE
|
We describe the development of a scientific cloud computing (SCC) platform
that offers high performance computation capability. The platform consists of a
scientific virtual machine prototype containing a UNIX operating system and
several materials science codes, together with essential interface tools (an
SCC toolset) that offers functionality comparable to local compute clusters. In
particular, our SCC toolset provides automatic creation of virtual clusters for
parallel computing, including tools for execution and monitoring performance,
as well as efficient I/O utilities that enable seamless connections to and from
the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud
(EC2). We present benchmarks for prototypical scientific applications and
demonstrate performance comparable to local compute clusters. To facilitate
code execution and provide user-friendly access, we have also integrated cloud
computing capability in a JAVA-based GUI. Our SCC platform may be an
alternative to traditional HPC resources for materials science or quantum
chemistry applications.
|
1110.0560
|
Easily Computed Lower Bounds on the Information Rate of Intersymbol
Interference Channels
|
cs.IT math.IT
|
Provable lower bounds are presented for the information rate I(X; X+S+N)
where X is the symbol drawn independently and uniformly from a finite-size
alphabet, S is a discrete-valued random variable (RV) and N is a Gaussian RV.
It is well known that with S representing the precursor intersymbol
interference (ISI) at the decision feedback equalizer (DFE) output, I(X; X+S+N)
serves as a tight lower bound for the symmetric information rate (SIR) as well
as capacity of the ISI channel corrupted by Gaussian noise. When evaluated on a
number of well-known finite-ISI channels, these new bounds provide a level of
tightness against the SIR very similar to that of the conjectured lower bound of
Shamai and Laroia at all signal-to-noise ratio (SNR) ranges, while actually
being tighter when examined closely at high SNRs. The new lower bounds are
obtained in two steps: first, a "mismatched" mutual information function is
introduced and shown to be a lower bound on I(X; X+S+N). Second, this
function is further bounded from below by an expression that can be computed
easily via a few single-dimensional integrations with a small computational
load.
|
1110.0564
|
Diversity Order Vs Rate in an AWGN Channel
|
cs.IT cs.NI math.IT
|
We study the diversity order vs rate of an additive white Gaussian noise
(AWGN) channel in the whole capacity region. We show that for discrete input as
well as for continuous input, Gallager's upper bounds on error probability have
exponential diversity in low and high rate region but only subexponential in
the mid-rate region. For the best available lower bounds and for the practical
codes one observes exponential diversity throughout the capacity region.
However, we also show that the performance of practical codes is close to Gallager's
upper bounds and the mid-rate subexponential diversity has a bearing on the
performance of the practical codes. Finally we show that the upper bounds with
Gaussian input provide good approximation throughout the capacity region even
for finite constellation.
|
1110.0578
|
Open Input: A New Way for Websites to Grow
|
cs.HC cs.CY cs.SI
|
Regardless of current web 2.0 and 3.0 trends, there are still many
websites made in web 1.0 style. These websites have fixed pages that are
editable only by the owner, not by the community. This is acceptable in many
cases, but it is neither a modern nor an engaging approach. Are there ways to
bring these sites closer to life? This paper is devoted to the open input
technique, a way for websites of the web 1.0 era to grow and to build a
community. The idea of open input, in general, is that anybody on the web can
add information to any section of the website, even without registering on
that website. People can add news, billboard announcements, testimonials,
questions, pictures, videos, etc. - whatever the site owner permits. We have
tested this idea in practice and obtained positive results confirming that
open input is a viable approach to collaboration on the web.
|
1110.0585
|
Discriminately Decreasing Discriminability with Learned Image Filters
|
cs.CV
|
In machine learning and computer vision, input images are often filtered to
increase data discriminability. In some situations, however, one may wish to
purposely decrease discriminability of one classification task (a "distractor"
task), while simultaneously preserving information relevant to another (the
task-of-interest): For example, it may be important to mask the identity of
persons contained in face images before submitting them to a crowdsourcing site
(e.g., Mechanical Turk) when labeling them for certain facial attributes.
Another example is inter-dataset generalization: when training on a dataset
with a particular covariance structure among multiple attributes, it may be
useful to suppress one attribute while preserving another so that a trained
classifier does not learn spurious correlations between attributes. In this
paper we present an algorithm that finds optimal filters to give high
discriminability to one task while simultaneously giving low discriminability
to a distractor task. We present results showing the effectiveness of the
proposed technique on both simulated data and natural face images.
|
1110.0593
|
Two Projection Pursuit Algorithms for Machine Learning under
Non-Stationarity
|
cs.LG cs.AI
|
This thesis derives, tests and applies two linear projection algorithms for
machine learning under non-stationarity. The first finds a direction in a
linear space along which a data set is maximally non-stationary. The second aims
to robustify two-way classification against non-stationarity. The algorithm is
tested on a key application scenario, namely Brain Computer Interfacing.
|
1110.0594
|
Practical Wireless Network Coding and Decoding Methods for Multiple
Unicast Transmissions
|
cs.IT math.IT
|
We propose a simple yet effective wireless network coding and decoding
technique. It utilizes spatial diversity through cooperation between nodes
which carry out distributed encoding operations dictated by generator matrices
of linear block codes. For this purpose, we make use of greedy codes over the
binary field and show that desired diversity orders can be flexibly assigned to
nodes in a multiple unicast network, contrary to the previous findings in the
literature. Furthermore, we present the optimal detection rule for the given
model that accounts for intermediate node errors and suggest a network decoder
using the sum-product algorithm. The proposed sum-product detector exhibits
near optimal performance.
|
1110.0623
|
On the Parameterized Complexity of Default Logic and Autoepistemic Logic
|
cs.CC cs.AI
|
We investigate the application of Courcelle's Theorem and the logspace
version of Elberfeld et al. in the context of the implication problem for
propositional sets of formulae, the extension existence problem for default
logic, as well as the expansion existence problem for autoepistemic logic and
obtain fixed-parameter time and space efficient algorithms for these problems.
On the other hand, we exhibit, for each of the above problems, families of
instances of a very simple structure that, for a wide range of different
parameterizations, do not have efficient fixed-parameter algorithms (even in
the sense of the large class XPnu), unless P=NP.
|
1110.0624
|
Autonomous Agents Coordination: Action Languages meet CLP(FD) and Linda
|
cs.LO cs.AI cs.PL
|
The paper presents a knowledge representation formalism, in the form of a
high-level Action Description Language for multi-agent systems, where
autonomous agents reason and act in a shared environment. Agents are
autonomously pursuing individual goals, but are capable of interacting through
a shared knowledge repository. In their interactions through shared portions of
the world, the agents deal with problems of synchronization and concurrency;
the action language allows the description of strategies to ensure a consistent
global execution of the agents' autonomously derived plans. A distributed
planning problem is formalized by providing the declarative specifications of
the portion of the problem pertaining to a single agent. Each of these
specifications is executable by a stand-alone CLP-based planner. The
coordination among agents exploits a Linda infrastructure. The proposal is
validated in a prototype implementation developed in SICStus Prolog.
To appear in Theory and Practice of Logic Programming (TPLP).
|
1110.0631
|
Well-Definedness and Efficient Inference for Probabilistic Logic
Programming under the Distribution Semantics
|
cs.AI cs.LO cs.PL
|
The distribution semantics is one of the most prominent approaches for the
combination of logic programming and probability theory. Many languages follow
this semantics, such as Independent Choice Logic, PRISM, pD, Logic Programs
with Annotated Disjunctions (LPADs) and ProbLog. When a program contains
function symbols, the distribution semantics is well-defined only if the set
of explanations for a query is finite and so is each explanation.
Well-definedness is usually either explicitly imposed or is achieved by
severely limiting the class of allowed programs. In this paper we identify a
larger class of programs for which the semantics is well-defined together with
an efficient procedure for computing the probability of queries. Since LPADs
offer the most general syntax, we present our results for them, but our results
are applicable to all languages under the distribution semantics. We present
the algorithm "Probabilistic Inference with Tabling and Answer subsumption"
(PITA) that computes the probability of queries by transforming a probabilistic
program into a normal program and then applying SLG resolution with answer
subsumption. PITA has been implemented in XSB and tested on six domains: two
with function symbols and four without. The execution times are compared with
those of ProbLog, cplint, and CVE; PITA was almost always able to solve larger
problems in a shorter time, on domains both with and without function symbols.
|
1110.0641
|
Identifying relationships between drugs and medical conditions: winning
experience in the Challenge 2 of the OMOP 2010 Cup
|
stat.ML cs.CV stat.AP
|
There is a growing interest in using longitudinal observational databases
to detect drug safety signals. In this paper we present a novel method, which we
used online during the OMOP Cup. We consider homogeneous ensembling, which is
based on random re-sampling (also known as bagging), as the main innovation
compared to previous publications in the related field. This study is based
on a very large simulated database of 10 million patient records, which
was created by the Observational Medical Outcomes Partnership (OMOP). Compared
to the traditional classification problem, the given data are unlabelled. The
objective of this study is to discover hidden associations between drugs and
conditions. The main idea of the approach, which we used during the OMOP Cup,
is to compare the numbers of observed and expected patterns. This comparison may
be organised in several different ways, and the outcomes (base learners) may be
quite different as well. It is proposed to construct the final decision
function as an ensemble of the base learners. Our method was formally
recognised by the organisers of the OMOP Cup as a top performing method for
Challenge 2.
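The observed-vs-expected idea combined with bagging can be sketched on synthetic pairs; the function names and the particular O/E ratio are illustrative assumptions, not the winning method's exact scoring rule:

```python
import random
from collections import Counter

def oe_scores(records):
    """Observed-over-expected score for each (drug, condition) pair.

    records is a list of (drug, condition) co-occurrence events; the
    expected count under independence is drug_count * cond_count / total.
    """
    obs = Counter(records)
    drugs = Counter(d for d, _ in records)
    conds = Counter(c for _, c in records)
    n = len(records)
    return {(d, c): obs[(d, c)] / (drugs[d] * conds[c] / n)
            for d, c in obs}

def bagged_scores(records, n_bags=50, seed=0):
    """Homogeneous ensembling: average O/E scores over bootstrap resamples."""
    rng = random.Random(seed)
    totals = Counter()
    for _ in range(n_bags):
        sample = [records[rng.randrange(len(records))] for _ in records]
        for pair, score in oe_scores(sample).items():
            totals[pair] += score
    return {pair: s / n_bags for pair, s in totals.items()}
```

Pairs that co-occur more often than their marginals predict score above 1; bagging stabilizes the ranking against resampling noise, which is the ensembling step the abstract highlights.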
|
1110.0678
|
Interference Alignment and Neutralization in a Cognitive 3-User
MAC-Interference Channel: Degrees of Freedom
|
cs.IT math.IT
|
A network consisting of a point-to-point (P2P) link and a multiple access
channel (MAC) sharing the same medium is considered. The resulting interference
network, with three transmitters and two receivers, is studied from a degrees of
freedom (DoF) perspective, with and without cognition. Several cognition
variants are examined. Namely, the setup is studied with (1) no cognitive
transmitters, (2) a cognitive P2P transmitter, (3) one cognitive MAC
transmitter, and (4) with two cognitive MAC transmitters. It is shown that
having a cognitive P2P transmitter does not bring any DoF gain to the network.
This is obtained by showing that the DoF of the first two cases, (1) and (2),
is 1. However, it is shown that a cognitive MAC transmitter is more beneficial,
since the latter two cases, (3) and (4), have 3/2 DoF. The achievability of 3/2
DoF is guaranteed by using a combination of interference neutralization and
interference alignment.
|
1110.0693
|
The Complexity of Rooted Phylogeny Problems
|
cs.CC cs.CE
|
Several computational problems in phylogenetic reconstruction can be
formulated as restrictions of the following general problem: given a formula in
conjunctive normal form where the literals are rooted triples, is there a
rooted binary tree that satisfies the formula? If the formulas do not contain
disjunctions, the problem becomes the famous rooted triple consistency problem,
which can be solved in polynomial time by an algorithm of Aho, Sagiv,
Szymanski, and Ullman. If the clauses in the formulas are restricted to
disjunctions of negated triples, Ng, Steel, and Wormald showed that the problem
remains NP-complete. We systematically study the computational complexity of
the problem for all such restrictions of the clauses in the input formula. For
certain restricted disjunctions of triples we present an algorithm that has
sub-quadratic running time and is asymptotically as fast as the fastest known
algorithm for the rooted triple consistency problem. We also show that any
restriction of the general rooted phylogeny problem that does not fall into our
tractable class is NP-complete, using known results about the complexity of
Boolean constraint satisfaction problems. Finally, we present a pebble game
argument that shows that the rooted triple consistency problem (and also all
generalizations studied in this paper) cannot be solved by Datalog.
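The polynomial-time algorithm of Aho, Sagiv, Szymanski, and Ullman for the disjunction-free case (rooted triple consistency) can be sketched as follows; the nested-list tree encoding is an illustrative choice:

```python
def build_tree(leaves, triples):
    """Aho-Sagiv-Szymanski-Ullman BUILD for rooted triple consistency.

    triples are (a, b, c) meaning ab|c: the lowest common ancestor of a and
    b lies strictly below that of a and c. Returns a rooted tree as nested
    lists (an illustrative encoding), or None if no tree displays all triples.
    """
    if len(leaves) == 1:
        return leaves[0]
    parent = {x: x for x in leaves}          # union-find over current leaves
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    for a, b, c in triples:                  # ab|c forces a, b together
        if a in parent and b in parent and c in parent:
            parent[find(a)] = find(b)
    comps = {}
    for x in leaves:
        comps.setdefault(find(x), []).append(x)
    if len(comps) == 1:
        return None                          # leaves cannot be separated
    children = []
    for comp in comps.values():
        sub = [t for t in triples if all(x in comp for x in t)]
        child = build_tree(comp, sub)
        if child is None:
            return None
        children.append(child)
    return children
```

For the consistent input {ab|c, cd|a} BUILD returns the tree ((a,b),(c,d)); for the contradictory pair {ab|c, bc|a} the leaves form a single component and the algorithm reports inconsistency.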
|
1110.0704
|
Hierarchical Composable Optimization of Web Pages
|
cs.IR
|
The process of creating modern Web media experiences is challenged by the
need to adapt the content and presentation choices to dynamic real-time
fluctuations of user interest across multiple audiences. We introduce FAME - a
Framework for Agile Media Experiences - which addresses this scalability
problem. FAME allows media creators to define abstract page models that are
subsequently transformed into real experiences through algorithmic
experimentation. FAME's page models are hierarchically composed of simple
building blocks, mirroring the structure of most Web pages. They are resolved
into concrete page instances by pluggable algorithms which optimize the pages
for specific business goals. Our framework allows retrieving dynamic content
from multiple sources, defining the experimentation's degrees of freedom, and
constraining the algorithmic choices. It offers an effective separation of
concerns in the media creation process, enabling multiple stakeholders with
profoundly different skills to apply their crafts and perform their duties
independently, composing and reusing each other's work in modular ways.
|
1110.0718
|
Directed information and Pearl's causal calculus
|
cs.IT cs.LG cs.SY math.IT
|
Probabilistic graphical models are a fundamental tool in statistics, machine
learning, signal processing, and control. When such a model is defined on a
directed acyclic graph (DAG), one can assign a partial ordering to the events
occurring in the corresponding stochastic system. Based on the work of Judea
Pearl and others, these DAG-based "causal factorizations" of joint probability
measures have been used for characterization and inference of functional
dependencies (causal links). This mostly expository paper focuses on several
connections between Pearl's formalism (and in particular his notion of
"intervention") and information-theoretic notions of causality and feedback
(such as causal conditioning, directed stochastic kernels, and directed
information). As an application, we show how conditional directed information
can be used to develop an information-theoretic version of Pearl's "back-door"
criterion for identifiability of causal effects from passive observations. This
suggests that the back-door criterion can be thought of as a causal analog of
statistical sufficiency.
|
1110.0725
|
A Survey of Distributed Data Aggregation Algorithms
|
cs.DC cs.DS cs.IR cs.NI
|
Distributed data aggregation is an important task, allowing the decentralized
determination of meaningful global properties, which can then be used to direct
the execution of other applications. These values result from the distributed
computation of functions like COUNT, SUM, and AVERAGE. Application examples
include determining the network size, total storage capacity, average load,
majorities, and many others.
In the last decade, many different approaches have been proposed, with
different trade-offs in terms of accuracy, reliability, message and time
complexity. Due to the considerable amount and variety of aggregation
algorithms, it can be difficult and time consuming to determine which
techniques will be more appropriate to use in specific settings, justifying the
existence of a survey to aid in this task.
This work reviews the state of the art on distributed data aggregation
algorithms, providing three main contributions. First, it formally defines the
concept of aggregation, characterizing the different types of aggregation
functions. Second, it succinctly describes the main aggregation techniques,
organizing them in a taxonomy. Finally, it provides some guidelines toward the
selection and use of the most relevant techniques, summarizing their principal
characteristics.
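As a concrete taste of one family of techniques the survey covers, here is a minimal pairwise-gossip sketch of the AVERAGE function (an illustrative toy, not any specific surveyed algorithm):

```python
import random

def gossip_average(values, rounds=2000, seed=0):
    """Pairwise-gossip sketch of distributed AVERAGE (illustrative only).

    Two random nodes repeatedly replace their values with their mean; the
    sum is conserved, so every node converges to the global average with
    no central coordinator.
    """
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            x[i] = x[j] = (x[i] + x[j]) / 2   # one gossip exchange
    return x

est = gossip_average([10.0, 0.0, 2.0, 4.0, 9.0])  # true average is 5.0
```

Because each exchange preserves the sum while shrinking the spread, every node's value converges to the global mean; the trade-offs among such schemes (accuracy, reliability, message and time complexity) are exactly what the survey's taxonomy organizes.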
|
1110.0748
|
Compress-Forward without Wyner-Ziv Binning for the One-Way and Two-Way
Relay Channels
|
cs.IT math.IT
|
We consider the role of Wyner-Ziv binning in compress-forward for relay
channels. In the one-way relay channel, we analyze a compress-forward scheme
without Wyner-Ziv binning but with joint decoding of both the message and
compression index. It achieves the same rate as the original compress-forward
scheme with binning and successive decoding. Therefore, binning helps reduce
decoding complexity by allowing successive decoding, but has no impact on
achievable rate for the one-way relay channel. On the other hand, no binning
simplifies relay operation. By extending compress-forward without binning to
the two-way relay channel, we can achieve a larger rate region than the
original compress-forward scheme when the channel is asymmetric for the two
users. Binning and successive decoding limit the compression rate to match the
weaker of the channels from the relay to the two users, whereas without binning, this
restriction no longer applies. Compared with noisy network coding,
compress-forward without binning achieves the same rate region in certain
Gaussian channel configurations, and it has much less delay. This work is a
step toward understanding the role of Wyner-Ziv binning in compress-forward
relaying.
|
1110.0751
|
Power-law weighted networks from local attachments
|
physics.soc-ph cs.SI
|
This letter introduces a mechanism for constructing, through a process of
distributed decision-making, substrates for the study of collective dynamics on
extended power-law weighted networks with both a desired scaling exponent and a
fixed clustering coefficient. The analytical results show that the connectivity
distribution converges to the scaling behavior often found in social and
engineering systems. To illustrate the proposed framework, we
generate network substrates that resemble steady-state properties of the
empirical citation distributions of (i) publications indexed by the Institute
for Scientific Information from 1981 to 1997; (ii) patents granted by the U.S.
Patent and Trademark Office from 1975 to 1999; and (iii) opinions written by
the Supreme Court and the cases they cite from 1754 to 2002.
|
1110.0784
|
Optimal rotation of a qubit under dynamic measurement and velocity
control
|
quant-ph cs.SY math.OC
|
In this article we explore a modification in the problem of controlling the
rotation of a two level quantum system from an initial state to a final state
in minimum time. Specifically, we consider the case where the qubit is being
weakly monitored, under the assumption that both the measurement strength and
the angular velocity are control signals.
This modification alters the dynamics significantly and enables the
exploitation of the measurement backaction to assist in achieving the control
objective. The proposed method yields a significant speedup in achieving the
desired state transfer compared to previous approaches. These results are
demonstrated via numerical solutions for an example problem on a single qubit.
|
1110.0819
|
Analytical Forms for Most Likely Matrices Derived from Incomplete
Information
|
cs.IT math.IT
|
Consider a rectangular matrix describing some type of communication or
transportation between a set of origins and a set of destinations, or a
classification of objects by two attributes. The problem is to infer the
entries of the matrix from limited information in the form of constraints,
generally the sums of the elements over various subsets of the matrix, such as
rows, columns, etc., or from bounds on these sums, down to individual elements.
Such problems are routinely addressed by applying the maximum entropy method to
compute the matrix numerically, but in this paper we derive analytical,
closed-form solutions. For the most complicated cases we consider the solution
depends on the root of a non-linear equation, for which we provide an
analytical approximation in the form of a power series. Some of our solutions
extend to 3-dimensional matrices. Besides being valid for matrices of arbitrary
size, the analytical solutions exhibit many of the appealing properties of
maximum entropy, such as precise use of the available data, intuitive behavior
with respect to changes in the constraints, and logical consistency.
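As a concrete special case of the setting above, when the only constraints are the row and column sums, the maximum entropy solution has a well-known closed form: the outer product of the marginals divided by the grand total. A minimal sketch (the function name is ours):

```python
def max_entropy_matrix(row_sums, col_sums):
    """Maximum-entropy matrix given only row and column sums.
    With no further constraints the solution factorizes:
    M[i][j] = row_sums[i] * col_sums[j] / total."""
    total = sum(row_sums)
    assert abs(total - sum(col_sums)) < 1e-9, "marginals must agree"
    return [[r * c / total for c in col_sums] for r in row_sums]

M = max_entropy_matrix([6.0, 4.0], [5.0, 3.0, 2.0])
# the rows of M sum to 6 and 4; the columns sum to 5, 3 and 2
```

The harder cases in the paper (bounds, element-level constraints) do not factorize this way and lead to the nonlinear root problems the authors approximate by power series.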
|
1110.0872
|
Non-Gaussian Scale Space Filtering with 2 by 2 Matrix of Linear Filters
|
cs.CV
|
Construction of a scale space with a convolution filter has been studied
extensively in the past. It has been proven that the only convolution kernel
that satisfies the scale space requirements is a Gaussian type. In this paper,
we consider a matrix of convolution filters introduced in [1] as a building
kernel for a scale space, and show that we can construct a non-Gaussian scale
space with a $2\times 2$ matrix of filters. The paper derives sufficient
conditions for the matrix of filters to be a scale space kernel, and
presents some numerical demonstrations.
|
1110.0879
|
Linearized Additive Classifiers
|
cs.CV cs.AI cs.LG
|
We revisit the additive model learning literature and adapt a penalized
spline formulation due to Eilers and Marx to train additive classifiers
efficiently. We also propose two new embeddings based on two classes of
orthogonal bases with orthogonal derivatives, which can also be used to
efficiently learn additive classifiers. This paper follows the popular theme in
the current literature where kernel SVMs are learned much more efficiently
using an approximate embedding and a linear machine. In this paper we show
that spline bases are especially well suited for learning additive models
because of their sparsity structure and the ease of computing the embedding,
which enables one to train these models in an online manner, without incurring
the memory overhead of precomputing and storing the embeddings. We show
interesting connections between the B-Spline basis and the histogram
intersection kernel, and show that for a particular choice of regularization
and degree of the B-Splines, our proposed learning algorithm closely
approximates the histogram intersection kernel SVM. This enables one to learn
additive models with almost no memory overhead compared to a fast linear
solver, such as LIBLINEAR, while being only 5-6X slower on average. On two
large scale image classification datasets, MNIST and Daimler Chrysler
pedestrians, the proposed additive classifiers are as accurate as the kernel
SVM, while being two orders of magnitude faster to train.
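The histogram intersection kernel that the spline embedding is said to approximate is itself a one-liner; a sketch for reference (the paper's B-spline embedding is not reproduced here):

```python
def histogram_intersection_kernel(x, y):
    """Histogram intersection kernel: K(x, y) = sum_i min(x_i, y_i),
    commonly used to compare histogram features in image classification."""
    return sum(min(a, b) for a, b in zip(x, y))
```

For example, on the histograms `[1, 2, 3]` and `[3, 2, 1]` the kernel value is `min(1,3) + min(2,2) + min(3,1) = 4`.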
|
1110.0881
|
Partition Function Expansion on Region-Graphs and Message-Passing
Equations
|
cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT
|
Disordered and frustrated graphical systems are ubiquitous in physics,
biology, and information science. For models on complete graphs or random
graphs, deep understanding has been achieved through the mean-field replica and
cavity methods. But finite-dimensional `real' systems remain very
challenging because of the abundance of short loops and strong local
correlations. A statistical mechanics theory is constructed in this paper for
finite-dimensional models based on the mathematical framework of partition
function expansion and the concept of region-graphs. Rigorous expressions for
the free energy and grand free energy are derived. Message-passing equations on
the region-graph, such as belief-propagation and survey-propagation, are also
derived rigorously.
|
1110.0886
|
Two-User Interference Channels with Local Views: On Capacity Regions of
TDM-Dominating Policies
|
cs.IT math.IT
|
We study the capacity regions of two-user interference channels where
transmitters base their transmission schemes on local views of the channel
state. Under the local view model, each transmitter knows only a subset of the
four channel gains, which may be mismatched from the other transmitter.
We consider a set of seven local views, and find that for five out of the
seven local views, TDM is sufficient to achieve the qualified notion of
capacity region for the linear deterministic interference channel which
approximates the Gaussian interference channel. For these five local views, the
qualified capacity result implies that no policy can achieve a rate point
outside the TDM region without inducing a corner case of sub-TDM performance in
another channel state. The common trait shared by the two remaining local views
- those with the potential to outperform TDM - is transmitter knowledge of the
outgoing interference link accompanied by some common knowledge of state,
emphasizing their importance in creating opportunities to coordinate usage of
more advanced schemes.
Our conclusions are extended to bounded gap characterizations of the capacity
region for the Gaussian interference channel.
|
1110.0895
|
Robust inversion via semistochastic dimensionality reduction
|
cs.CE cs.NA
|
We consider a class of inverse problems where it is possible to aggregate the
results of multiple experiments. This class includes problems where the forward
model is the solution operator to linear ODEs or PDEs. The tremendous size of
such problems motivates dimensionality reduction techniques based on randomly
mixing experiments. These techniques break down, however, when robust
data-fitting formulations are used, which are essential in cases of missing
data, unusually large errors, and systematic features in the data unexplained
by the forward model. We survey robust methods within a statistical framework,
and propose a semistochastic optimization approach that allows dimensionality
reduction. The efficacy of the methods is demonstrated for a large-scale
seismic inverse problem using the robust Student's t-distribution, where a
useful synthetic velocity model is recovered in the extreme scenario of 60%
data missing at random. The semistochastic approach achieves this recovery
using 20% of the effort required by a direct robust approach.
|
1110.0897
|
Block-Orthogonal Space-Time Code Structure and Its Impact on QRDM
Decoding Complexity Reduction
|
cs.IT math.IT
|
Full-rate space time codes (STC) with rate = number of transmit antennas have
high multiplexing gain, but high decoding complexity even when decoded using
reduced-complexity decoders such as sphere or QRDM decoders. In this paper, we
introduce a new code property of STC called block-orthogonal property, which
can be exploited by QR-decomposition-based decoders to achieve significant
decoding complexity reduction without performance loss. We show that such a
complexity-reduction principle can benefit existing algebraic codes such as
Perfect and DjABBA codes due to their inherent (but previously undiscovered)
block-orthogonal property. In addition, we construct and optimize new full-rate
BOSTC (Block-Orthogonal STC) that further maximize the QRDM complexity
reduction potential. Simulation results of bit error rate (BER) performance
against decoding complexity show that the new BOSTC outperforms all previously
known codes as long as the QRDM decoder operates in reduced-complexity mode,
and the code exhibits a desirable complexity saturation property.
|
1110.0911
|
Estimates on the Size of Symbol Weight Codes
|
cs.IT math.IT
|
The study of codes for power line communication has garnered much interest
over the past decade. Various types of codes such as permutation codes,
frequency permutation arrays, and constant composition codes have been proposed
over the years. In this work we study a type of code called the bounded symbol
weight codes which was first introduced by Versfeld et al. in 2005, and a
related family of codes that we term constant symbol weight codes. We provide
new upper and lower bounds on the size of bounded symbol weight and constant
symbol weight codes. We also give direct and recursive constructions of codes
for certain parameters.
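Under the usual definition in this literature (which we assume here), the symbol weight of a word is the largest number of occurrences of any single symbol in it; bounded symbol weight codes cap this quantity, and constant symbol weight codes fix it exactly. A sketch:

```python
from collections import Counter

def symbol_weight(word):
    """Symbol weight of a word: the largest number of times any single
    symbol occurs in it (assumed definition, following the power line
    coding literature)."""
    return max(Counter(word).values())

def has_bounded_symbol_weight(code, r):
    """Check that every codeword has symbol weight at most r."""
    return all(symbol_weight(w) <= r for w in code)
```

For instance, `symbol_weight("aab")` is 2, while a permutation codeword such as `(0, 1, 2)` has symbol weight 1, the minimum possible.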
|
1110.0957
|
Dictionary Learning for Deblurring and Digital Zoom
|
cs.LG cs.CV
|
This paper proposes a novel approach to image deblurring and digital zooming
using sparse local models of image appearance. These models, where small image
patches are represented as linear combinations of a few elements drawn from
some large set (dictionary) of candidates, have proven well adapted to several
image restoration tasks. A key to their success has been to learn dictionaries
adapted to the reconstruction of small image patches. In contrast, recent works
have proposed instead to learn dictionaries which are not only adapted to data
reconstruction, but also tuned for a specific task. We introduce here such an
approach to deblurring and digital zoom, using pairs of blurry/sharp (or
low-/high-resolution) images for training, as well as an effective stochastic
gradient algorithm for solving the corresponding optimization task. Although
this learning problem is not convex, once the dictionaries have been learned,
the sharp/high-resolution image can be recovered via convex optimization at
test time. Experiments with synthetic and real data demonstrate the
effectiveness of the proposed approach, leading to state-of-the-art performance
for non-blind image deblurring and digital zoom.
|
1110.0983
|
Self-organizing magnetic beads for biomedical applications
|
physics.bio-ph cs.CE physics.flu-dyn
|
In the field of biomedicine magnetic beads are used for drug delivery and to
treat hyperthermia. Here we propose to use self-organized bead structures to
isolate circulating tumor cells using lab-on-chip technologies. Typically blood
flows past microposts functionalized with antibodies for circulating tumor
cells. Creating these microposts with interacting magnetic beads makes it
possible to tune the geometry in size, position and shape. We developed a
simulation tool that combines micromagnetics and discrete particle dynamics, in
order to design micropost arrays made of interacting beads. The simulation
takes into account the viscous drag of the blood flow, magnetostatic
interactions between the magnetic beads and gradient forces from external
aligned magnets. We developed a particle-particle particle-mesh method for
effective computation of the magnetic force and torque acting on the particles.
|
1110.0995
|
A tunable cancer cell filter using magnetic beads: cellular and fluid
dynamic simulations
|
physics.flu-dyn cs.CE physics.bio-ph
|
In the field of biomedicine magnetic beads are used for drug delivery and to
treat hyperthermia. Here we propose to use self-organized bead structures to
isolate circulating tumor cells using lab-on-chip technologies. Typically blood
flows past microposts functionalized with antibodies for circulating tumor
cells. Creating these microposts with interacting magnetic beads makes it
possible to tune the geometry in size, position and shape. We develop a
simulation tool that combines micromagnetics, discrete particle dynamics and
fluid dynamics, in order to design micropost arrays made of interacting beads.
For the simulation of blood flow we use the Lattice-Boltzmann method with
immersed elastic blood cell models. Parallelization distributes large fluid and
particle dynamic simulations over available resources to reduce overall
calculation time.
|
1110.0999
|
Generalization Strategies for the Verification of Infinite State Systems
|
cs.LO cs.AI cs.SE
|
We present a method for the automated verification of temporal properties of
infinite state systems. Our verification method is based on the specialization
of constraint logic programs (CLP) and works in two phases: (1) in the first
phase, a CLP specification of an infinite state system is specialized with
respect to the initial state of the system and the temporal property to be
verified, and (2) in the second phase, the specialized program is evaluated by
using a bottom-up strategy. The effectiveness of the method strongly depends on
the generalization strategy which is applied during the program specialization
phase. We consider several generalization strategies obtained by combining
techniques already known in the field of program analysis and program
transformation, and we also introduce some new strategies. Then, through many
verification experiments, we evaluate the effectiveness of the generalization
strategies we have considered. Finally, we compare the implementation of our
specialization-based verification method to other constraint-based model
checking tools. The experimental results show that our method is competitive
with the methods used by those other tools. To appear in Theory and Practice of
Logic Programming (TPLP).
|
1110.1016
|
Engineering Benchmarks for Planning: the Domains Used in the
Deterministic Part of IPC-4
|
cs.AI
|
In a field of research about general reasoning mechanisms, it is essential to
have appropriate benchmarks. Ideally, the benchmarks should reflect possible
applications of the developed technology. In AI Planning, researchers more and
more tend to draw their testing examples from the benchmark collections used in
the International Planning Competition (IPC). In the organization of (the
deterministic part of) the fourth IPC, IPC-4, the authors therefore invested
significant effort to create a useful set of benchmarks. They come from five
different (potential) real-world applications of planning: airport ground
traffic control, oil derivative transportation in pipeline networks,
model-checking safety properties, power supply restoration, and UMTS call
setup. Adapting and preparing such an application for use as a benchmark in the
IPC involves, at the same time, inevitable (often drastic) simplifications, as well
as careful choice between, and engineering of, domain encodings. For the first
time in the IPC, we used compilations to formulate complex domain features in
simple languages such as STRIPS, rather than just dropping the more interesting
problem constraints in the simpler language subsets. The article explains and
discusses the five application domains and their adaptation to form the PDDL
test suites used in IPC-4. We summarize known theoretical results on structural
properties of the domains, regarding their computational complexity and
provable properties of their topology under the h+ function (an idealized
version of the relaxed plan heuristic). We present new (empirical) results
illuminating properties such as the quality of the most widespread heuristic
functions (planning graph, serial planning graph, and relaxed plan), the growth
of propositional representations over instance size, and the number of actions
available to achieve each fact; we discuss these data in conjunction with the
best results achieved by the different kinds of planners participating in
IPC-4.
|
1110.1038
|
Using Genetic Algorithm in the Evolutionary Design of Sequential Logic
Circuits
|
cs.NE
|
Evolvable hardware (EHW) is a set of techniques based on the idea of
combining reconfigurable hardware systems with evolutionary algorithms. In
other words, EHW has two parts: the reconfigurable hardware and the
evolutionary algorithm, where the configurations are under the control of the
evolutionary algorithm. This paper suggests a method to design and optimize
synchronous sequential circuits. A genetic algorithm (GA) was applied as the
evolutionary algorithm. In this approach, cell arrays were used to build the
input combinational logic circuit of each D flip-flop (DFF), as well as the
output combinational logic circuit. The obtained results show that our method
can reduce the average number of generations by limiting the search space.
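The paper's circuit encoding is not reproduced here, but the GA driving such a search can be sketched generically (population size, operators, and the toy one-max fitness below are illustrative assumptions):

```python
import random

def genetic_search(fitness, genome_len, pop=30, gens=100, pmut=0.05, seed=1):
    """Minimal GA loop of the kind used to evolve circuit configurations:
    tournament selection, one-point crossover, and bit-flip mutation over
    fixed-length bitstrings encoding candidate circuits."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(gens):
        def pick():
            # binary tournament selection
            a, b = rng.sample(popn, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            nxt.append([g ^ (rng.random() < pmut) for g in child])  # mutation
        popn = nxt
    return max(popn, key=fitness)

# toy fitness: maximize the number of ones in the genome
best = genetic_search(sum, 16)
```

In the paper's setting the fitness would instead score how well the decoded circuit reproduces the target state-transition behavior.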
|
1110.1073
|
Active Learning with Multiple Views
|
cs.LG
|
Active learners alleviate the burden of labeling large amounts of data by
detecting and asking the user to label only the most informative examples in
the domain. We focus here on active learning for multi-view domains, in which
there are several disjoint subsets of features (views), each of which is
sufficient to learn the target concept. In this paper we make several
contributions. First, we introduce Co-Testing, which is the first approach to
multi-view active learning. Second, we extend the multi-view learning framework
by also exploiting weak views, which are adequate only for learning a concept
that is more general/specific than the target concept. Finally, we empirically
show that Co-Testing outperforms existing active learners on a variety of
real-world domains such as wrapper induction, Web page classification,
advertisement
removal, and discourse tree parsing.
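The core query-selection idea behind Co-Testing can be sketched in a few lines: learn one hypothesis per view and ask the user to label the contention points, i.e. the unlabeled examples on which the views disagree (the toy threshold models below are assumptions for illustration, not the paper's learners):

```python
def contention_points(unlabeled, view1_model, view2_model):
    """Co-Testing query heuristic sketch: an example is a contention
    point when the two view-specific hypotheses disagree on it; such
    points are the most informative ones to send to the labeler."""
    return [x for x in unlabeled if view1_model(x) != view2_model(x)]

# toy views: view 1 looks only at feature 0, view 2 only at feature 1
v1 = lambda x: x[0] > 0.5
v2 = lambda x: x[1] > 0.5
pool = [(0.9, 0.8), (0.9, 0.1), (0.2, 0.7), (0.1, 0.2)]
queries = contention_points(pool, v1, v2)  # the two mismatched examples
```

Since each view is assumed sufficient for the target concept, a disagreement guarantees that at least one view-specific hypothesis errs on that example, which is what makes contention points informative.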
|