| id | title | categories | abstract |
|---|---|---|---|
1204.2311
|
Robust Nonnegative Matrix Factorization via $L_1$ Norm Regularization
|
cs.LG cs.CV stat.ML
|
Nonnegative Matrix Factorization (NMF) is a widely used technique in many
applications such as face recognition, motion segmentation, etc. It
approximates the nonnegative data in an original high dimensional space with a
linear representation in a low dimensional space by using the product of two
nonnegative matrices. In many applications data are often partially corrupted
with large additive noise. When the positions of noise are known, some existing
variants of NMF can be applied by treating these corrupted entries as missing
values. However, the positions are often unknown in many real world
applications, which prevents the usage of traditional NMF or other existing
variants of NMF. This paper proposes a Robust Nonnegative Matrix Factorization
(RobustNMF) algorithm that explicitly models the partial corruption as large
additive noise, without requiring information about the positions of the noise. In
practice, large additive noise can be used to model outliers. In particular,
the proposed method jointly approximates the clean data matrix with the product
of two nonnegative matrices and estimates the positions and values of
outliers/noise. An efficient iterative optimization algorithm with a solid
theoretical justification has been proposed to learn the desired matrix
factorization. Experimental results demonstrate the advantages of the proposed
algorithm.
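As a rough illustration of the joint estimation described above (not the paper's exact update rules), a minimal NumPy sketch that alternates multiplicative updates for the nonnegative factors with soft-thresholding of the residual to estimate the sparse outlier matrix; all parameter names are illustrative:

```python
import numpy as np

def robust_nmf(X, rank, lam=0.1, n_iter=200, eps=1e-9):
    """Sketch of robust NMF: X ~ W @ H + S, with S a sparse outlier matrix.

    Alternates multiplicative updates for W, H on the outlier-corrected
    data with soft-thresholding of the residual to estimate S.
    """
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    S = np.zeros((m, n))
    for _ in range(n_iter):
        R = np.clip(X - S, 0, None)           # outlier-corrected data
        W *= (R @ H.T) / (W @ H @ H.T + eps)  # multiplicative updates keep
        H *= (W.T @ R) / (W.T @ W @ H + eps)  # W, H nonnegative
        E = X - W @ H                         # residual
        S = np.sign(E) * np.maximum(np.abs(E) - lam, 0.0)  # soft-threshold -> sparse S
    return W, H, S
```

By construction the soft-thresholding step leaves every entry of the final residual X − WH − S within the threshold lam.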
|
1204.2321
|
Derivation of Upper Bounds on Optimization Time of Population-Based
Evolutionary Algorithm on a Function with Fitness Plateaus Using Elitism
Levels Traverse Mechanism
|
cs.NE cs.AI
|
In this article a tool for the analysis of population-based EAs is used to
derive asymptotic upper bounds on the optimization time of the algorithm
solving the Royal Roads problem, a test function with plateaus of fitness. In
addition, the limiting distribution of a certain subset of the population is
approximated.
|
1204.2331
|
Compression with Actions
|
cs.IT math.IT
|
We consider the setting where actions can be used to modify a state sequence
before compression. The minimum rate needed to losslessly describe the optimal
modified sequence is characterized when the state sequence is either
non-causally or causally available at the action encoder. The achievability is
closely related to the optimal channel coding strategy for channel with states.
We also extend the analysis to the lossy case.
|
1204.2335
|
Automated Generation of Cross-Domain Analogies via Evolutionary
Computation
|
cs.NE nlin.AO
|
Analogy plays an important role in creativity, and is extensively used in
science as well as art. In this paper we introduce a technique for the
automated generation of cross-domain analogies based on a novel evolutionary
algorithm (EA). Unlike existing work in computational analogy-making restricted
to creating analogies between two given cases, our approach, for a given case,
is capable of creating an analogy along with the novel analogous case itself.
Our algorithm is based on the concept of "memes", which are units of culture,
or knowledge, undergoing variation and selection under a fitness measure, and
represents evolving pieces of knowledge as semantic networks. Using a fitness
function based on Gentner's structure mapping theory of analogies, we
demonstrate the feasibility of spontaneously generating semantic networks that
are analogous to a given base network.
|
1204.2336
|
Feature Extraction Methods for Color Image Similarity
|
cs.CV
|
Many user-interactive image retrieval systems have been proposed, and various
approaches attempt to make them user friendly, but most systems do not meet
user specifications: the proposed methods implement basic techniques, and even
the improved methods fall short of what users expect. In this paper we
concentrate on image retrieval. Early user-interactive systems relied on basic
concepts, did not meet user specifications, and failed to attract users, which
has motivated considerable research interest in recent years into friendlier,
more interactive methods. In the proposed system we focus on the retrieval of
images within a large image collection based on color projections, and
different mathematical approaches are introduced and applied for the retrieval
of images. Before applying the proposed methods, images are subgrouped using
threshold values. In this paper R, G, and B color combinations are considered
for the retrieval of images; the proposed methods are implemented and results
are included. From the results it is observed that we obtain efficient
retrieval compared to previous and existing methods.
|
1204.2356
|
Self-Adaptive Surrogate-Assisted Covariance Matrix Adaptation Evolution
Strategy
|
cs.NE
|
This paper presents a novel mechanism to adapt surrogate-assisted
population-based algorithms. This mechanism is applied to ACM-ES, a recently
proposed surrogate-assisted variant of CMA-ES. The resulting algorithm,
saACM-ES, adjusts online the lifelength of the current surrogate model (the
number of CMA-ES generations before learning a new surrogate) and the surrogate
hyper-parameters. Both heuristics significantly improve the quality of the
surrogate model, yielding a significant speed-up of saACM-ES compared to the
ACM-ES and CMA-ES baselines. The empirical validation of saACM-ES on the
BBOB-2012 noiseless testbed demonstrates the efficiency of the proposed approach
and its scalability with respect to the problem dimension and the population
size, reaching new best results on some of the benchmark problems.
|
1204.2358
|
Collaborative Representation based Classification for Face Recognition
|
cs.CV
|
By coding a query sample as a sparse linear combination of all training
samples and then classifying it by evaluating which class leads to the minimal
coding residual, sparse representation based classification (SRC) leads to
interesting results for robust face recognition. It is widely believed that the
l1-norm sparsity constraint on coding coefficients plays a key role in the
success of SRC, while its use of all training samples to collaboratively
represent the query sample is largely ignored. In this paper we discuss how SRC
works, and show that the collaborative representation mechanism used in SRC is
much more crucial to its success in face classification. SRC is a special
case of collaborative representation based classification (CRC), which has
various instantiations by applying different norms to the coding residual and
coding coefficient. More specifically, the l1 or l2 norm characterization of
coding residual is related to the robustness of CRC to outlier facial pixels,
while the l1 or l2 norm characterization of coding coefficient is related to
the degree of discrimination of facial features. Extensive experiments were
conducted to verify the face recognition accuracy and efficiency of CRC with
different instantiations.
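The l2-regularized instantiation of CRC reduces to a ridge regression over all training samples followed by a class-wise residual comparison. A minimal sketch under that reading (variable names are ours, and SRC would replace the ridge solve with an l1 solver):

```python
import numpy as np

def crc_classify(y, X, labels, lam=0.01):
    """Collaborative representation classification with an l2 constraint.

    X: (d, n) matrix whose columns are training samples, labels: class of
    each column, y: (d,) query. Returns the predicted class label.
    """
    n = X.shape[1]
    # Ridge solution over ALL training samples: alpha = (X^T X + lam I)^-1 X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best, best_r = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        resid = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        # Regularized residual: small coding residual AND large class coefficients win
        r = resid / (np.linalg.norm(alpha[mask]) + 1e-12)
        if r < best_r:
            best, best_r = c, r
    return best
```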
|
1204.2385
|
Vision-Based Cooperative Estimation of Averaged 3D Target Pose under
Imperfect Visibility
|
cs.SY
|
This paper investigates vision-based cooperative estimation of a 3D target
object pose for visual sensor networks. In our previous works, we presented an
estimation mechanism called networked visual motion observer achieving
averaging of local pose estimates in real time. This paper extends the
mechanism so that it works even in the presence of cameras not viewing the
target due to the limited view angles and obstructions in order to fully take
advantage of the networked vision system. Then, we analyze the averaging
performance attained by the proposed mechanism and clarify a relation between
the feedback gains in the algorithm and the performance. Finally, we
demonstrate the effectiveness of the algorithm through simulation.
|
1204.2401
|
Controlling complex networks: How much energy is needed?
|
physics.soc-ph cs.SI cs.SY
|
The outstanding problem of controlling complex networks is relevant to many
areas of science and engineering, and has the potential to generate
technological breakthroughs as well. We address the physically important issue
of the energy required for achieving control by deriving and validating scaling
laws for the lower and upper energy bounds. These bounds represent a reasonable
estimate of the energy cost associated with control, and provide a step forward
from the current research on controllability toward ultimate control of complex
networked dynamical systems.
|
1204.2420
|
Variational Principle underlying Scale Invariant Social Systems
|
stat.AP cs.SI physics.soc-ph
|
MaxEnt's variational principle, in conjunction with Shannon's logarithmic
information measure, yields only exponential functional forms in
straightforward fashion. In this communication we show how to overcome this
limitation via the incorporation, into the variational process, of suitable
dynamical information. As a consequence, we are able to formulate a somewhat
generalized Shannonian Maximum Entropy approach which provides a unifying
"thermodynamic-like" explanation for the scale-invariant phenomena observed in
social contexts, such as city-population distributions. We confirm the MaxEnt
predictions by means of numerical experiments with random walkers, and compare
them with some empirical data.
|
1204.2422
|
Scale-invariance underlying the logistic equation and its social
applications
|
stat.AP cs.SI physics.soc-ph
|
On the basis of dynamical principles we derive the Logistic Equation (LE),
widely employed (among multiple applications) in the simulation of population
growth, and demonstrate that scale-invariance and a mean-value constraint are
necessary and sufficient conditions for obtaining it. We also generalize the LE
to multi-component systems and show that the above dynamical mechanisms underlie
a large number of scale-free processes. Examples are presented regarding city
populations, diffusion in complex networks, and the popularity of technological
products, all of them obeying the multi-component logistic equation in either a
stochastic or a deterministic way. To assess the predictive power of our
formalism, we advance a prediction, regarding the next 60 months, for the
number of users of the three main web browsers (Explorer, Firefox and Chrome),
a contest popularly referred to as the "Browser Wars".
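The abstract does not reproduce the equation itself; as a reference point, here is a minimal sketch of the standard single-component deterministic logistic equation, dN/dt = rN(1 − N/K), integrated with a forward-Euler step (parameter names are illustrative, not the paper's notation):

```python
import numpy as np

def logistic_trajectory(n0, r, K, steps, dt=0.1):
    """Forward-Euler integration of dN/dt = r N (1 - N/K):
    growth is near-exponential for N << K and saturates at the capacity K."""
    n = n0
    out = [n]
    for _ in range(steps):
        n += dt * r * n * (1 - n / K)
        out.append(n)
    return np.array(out)
```

Starting below K, the trajectory increases monotonically and saturates at the carrying capacity.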
|
1204.2428
|
Performance Analysis of Spectrum Sensing With Multiple Status Changes in
Primary User Traffic
|
cs.IT cs.NI math.IT
|
In this letter, the impact of primary user traffic with multiple status
changes on the spectrum sensing performance is analyzed. Closed-form
expressions for the probabilities of false alarm and detection are derived.
Numerical results show that the multiple status changes of the primary user
cause considerable degradation in the sensing performance. This degradation
depends on the number of changes, the primary user traffic model, the primary
user traffic intensity and the signal-to-noise ratio of the received signal.
Numerical results also show that the amount of degradation decreases when the
number of changes increases, and converges to a minimum sensing performance due
to the limited sensing period and primary holding time.
|
1204.2433
|
Decode-and-Forward Based Differential Modulation for Cooperative
Communication System with Unitary and Non-Unitary Constellations
|
cs.IT math.IT
|
In this paper, we derive a maximum likelihood (ML) decoder of the
differential data in a decode-and-forward (DF) based cooperative communication
system utilizing uncoded transmissions. This decoder is applicable to
complex-valued unitary and non-unitary constellations suitable for differential
modulation. The ML decoder helps in improving the diversity of the DF based
differential cooperative system using an erroneous relaying node. We also
derive a piecewise linear (PL) decoder of the differential data transmitted in
the DF based cooperative system. The proposed PL decoder significantly reduces
the decoding complexity as compared to the proposed ML decoder without any
significant degradation in the receiver performance. Existing ML and PL
decoders of the differentially modulated uncoded data in the DF based
cooperative communication system are only applicable to binary modulated
signals like binary phase shift keying (BPSK) and binary frequency shift keying
(BFSK), whereas, the proposed decoders are applicable to complex-valued unitary
and non-unitary constellations suitable for differential modulation under
uncoded transmissions. We derive a closed-form expression of the uncoded average
symbol error rate (SER) of the proposed PL decoder with M-PSK constellation in
a cooperative communication system with a single relay and one
source-destination pair. An approximate average SER by ignoring higher order
noise terms is also derived for this set-up. It is analytically shown on the
basis of the derived approximate SER that the proposed PL decoder provides full
diversity of second order. In addition, we also derive approximate SER of the
differential DF system with multiple relays at asymptotically high
signal-to-noise ratio of the source-relay links.
|
1204.2435
|
Spectral Shape of Doubly-Generalized LDPC Codes: Efficient and Exact
Evaluation
|
cs.IT math.IT
|
This paper analyzes the asymptotic exponent of the weight spectrum for
irregular doubly-generalized LDPC (D-GLDPC) codes. In the process, an efficient
numerical technique for its evaluation is presented, involving the solution of
a 4 x 4 system of polynomial equations. The expression is consistent with
previous results, including the case where the normalized weight or stopping
set size tends to zero. The spectral shape is shown to admit a particularly
simple form in the special case where all variable nodes are repetition codes
of the same degree, a case which includes Tanner codes; for this case it is
also shown how certain symmetry properties of the local weight distribution at
the CNs induce a symmetry in the overall weight spectral shape function.
Finally, using these new results, weight and stopping set size spectral shapes
are evaluated for some example generalized and doubly-generalized LDPC code
ensembles.
|
1204.2447
|
On Capacity Regions of Discrete Asynchronous Multiple Access Channels
|
cs.IT math.IT
|
A general formalization is given for asynchronous multiple access channels
which admits different assumptions on delays. This general framework allows the
analysis of so far unexplored models leading to new interesting capacity
regions. In particular, a single letter characterization is given for the
capacity region in case of 3 senders, 2 synchronous with each other and the
third not synchronous with them.
|
1204.2477
|
A Simple Explanation of A Spectral Algorithm for Learning Hidden Markov
Models
|
stat.ME cs.LG stat.ML
|
A simple linear algebraic explanation of the algorithm in "A Spectral
Algorithm for Learning Hidden Markov Models" (COLT 2009). Most of the content
is in Figure 2; the text just makes everything precise in four nearly-trivial
claims.
|
1204.2518
|
Distributed Function Computation with Confidentiality
|
cs.IT cs.CR math.IT
|
A set of terminals observe correlated data and seek to compute functions of
the data using interactive public communication. At the same time, it is
required that the value of a private function of the data remains concealed
from an eavesdropper observing this communication. In general, the private
function and the functions computed by the nodes can be all different. We show
that a class of functions are securely computable if and only if the
conditional entropy of the data given the value of the private function is greater than
the least rate of interactive communication required for a related
multiterminal source-coding task. A single-letter formula is provided for this
rate in special cases.
|
1204.2523
|
Concept Modeling with Superwords
|
stat.ML cs.CL cs.IR cs.LG
|
In information retrieval, a fundamental goal is to transform a document into
concepts that are representative of its content. The term "representative" is
in itself challenging to define, and various tasks require different
granularities of concepts. In this paper, we aim to model concepts that are
sparse over the vocabulary, and that flexibly adapt their content based on
other relevant semantic information such as textual structure or associated
image features. We explore a Bayesian nonparametric model based on nested beta
processes that allows for inferring an unknown number of strictly sparse
concepts. The resulting model provides an inherently different representation
of concepts than a standard LDA (or HDP) based topic model, and allows for
direct incorporation of semantic features. We demonstrate the utility of this
representation on multilingual blog data and the Congressional Record.
|
1204.2541
|
Employing Subsequence Matching in Audio Data Processing
|
cs.SD cs.DB
|
We overview current problems of audio retrieval and time-series subsequence
matching. We discuss the usage of subsequence matching approaches in audio data
processing, especially in the automatic speech recognition (ASR) area, and we aim
at improving the performance of the retrieval process. To overcome problems known
from the time-series area, like the occurrence of implementation bias and data
bias, we present a Subsequence Matching Framework as a tool for fast
prototyping, building, and testing similarity search subsequence matching
applications. The framework is built on top of MESSIF (Metric Similarity Search
Implementation Framework) and thus the subsequence matching algorithms can
exploit advanced similarity indexes in order to significantly increase their
query processing performance. To prove our concept we provide a design of
query-by-example spoken term detection type of application with the usage of
phonetic posteriograms and subsequence matching approach.
|
1204.2577
|
Reduced-Complexity Column-Layered Decoding and Implementation for LDPC
Codes
|
cs.IT math.IT
|
Layered decoding is well appreciated in Low-Density Parity-Check (LDPC)
decoder implementation since it can achieve effectively high decoding
throughput with low computation complexity. This work, for the first time,
addresses low complexity column-layered decoding schemes and VLSI architectures
for multi-Gb/s applications. At first, the Min-Sum algorithm is incorporated
into the column-layered decoding. Then algorithmic transformations and
judicious approximations are explored to minimize the overall computation
complexity. Compared to the original column-layered decoding, the new approach
can reduce the computation complexity in check node processing for high-rate
LDPC codes by up to 90% while maintaining the fast convergence speed of layered
decoding. Furthermore, a relaxed pipelining scheme is presented to enable very
high clock speed for VLSI implementation. Equipped with these new techniques,
an efficient decoder architecture for quasi-cyclic LDPC codes is developed and
implemented with 0.13um CMOS technology. It is shown that a decoding throughput
of nearly 4 Gb/s at a maximum of 10 iterations can be achieved for a (4096, 3584)
LDPC code. Hence, this work has facilitated practical applications of
column-layered decoding and particularly made it very attractive in high-speed,
high-rate LDPC decoder implementation.
|
1204.2581
|
Modeling Relational Data via Latent Factor Blockmodel
|
cs.DS cs.LG stat.ML
|
In this paper we address the problem of modeling relational data, which
appear in many applications such as social network analysis, recommender
systems and bioinformatics. Previous studies either consider latent-feature-based
models while disregarding local structure in the network, or focus exclusively
on capturing the local structure of objects with latent blockmodels without
coupling it with the latent characteristics of objects. To combine the
benefits of the previous work, we propose a novel model that can simultaneously
incorporate the effect of latent features and covariates if any, as well as the
effect of latent structure that may exist in the data. To achieve this, we
model the relation graph as a function of both latent feature factors and
latent cluster memberships of objects to collectively discover globally
predictive intrinsic properties of objects and capture latent block structure
in the network to improve prediction performance. We also develop an
optimization transfer algorithm based on the generalized EM-style strategy to
learn the latent factors. We prove the efficacy of our proposed model through
the link prediction task and cluster analysis task, and extensive experiments
on the synthetic data and several real world datasets suggest that our proposed
LFBM model outperforms the other state of the art approaches in the evaluated
tasks.
|
1204.2587
|
Upper Bounds on the Capacity of Binary Channels with Causal Adversaries
|
cs.IT cs.CR math.IT
|
In this work we consider the communication of information in the presence of
a causal adversarial jammer. In the setting under study, a sender wishes to
communicate a message to a receiver by transmitting a codeword $(x_1,...,x_n)$
bit-by-bit over a communication channel. The sender and the receiver do not
share common randomness. The adversarial jammer can view the transmitted bits
$x_i$ one at a time, and can change up to a $p$-fraction of them. However, the
decisions of the jammer must be made in a causal manner. Namely, for each bit
$x_i$ the jammer's decision on whether to corrupt it or not must depend only on
$x_j$ for $j \leq i$. This is in contrast to the "classical" adversarial
jamming situations in which the jammer has no knowledge of $(x_1,...,x_n)$, or
knows $(x_1,...,x_n)$ completely. In this work, we present upper bounds (that
hold under both the average and maximal probability of error criteria) on the
capacity which hold for both deterministic and stochastic encoding schemes.
|
1204.2588
|
Probabilistic Latent Tensor Factorization Model for Link Pattern
Prediction in Multi-relational Networks
|
cs.SI cs.LG stat.ML
|
This paper aims at the problem of link pattern prediction in collections of
objects connected by multiple relation types, where each type may play a
distinct role. While common link analysis models are limited to single-type
link prediction, we attempt here to capture the correlations among different
relation types and reveal the impact of various relation types on performance
quality. For that, we define the overall relations between object pairs as a
\textit{link pattern} which consists of the interaction pattern and connection
structure in the network, and then use tensor formalization to jointly model
and predict the link patterns, which we refer to as \textit{Link Pattern
Prediction} (LPP) problem. To address the issue, we propose a Probabilistic
Latent Tensor Factorization (PLTF) model by introducing another latent factor
for multiple relation types and provide a hierarchical Bayesian treatment of
the proposed probabilistic model to avoid overfitting for solving the LPP
problem. To learn the proposed model we develop an efficient Markov Chain Monte
Carlo sampling method. Extensive experiments are conducted on several real
world datasets and demonstrate significant improvements over several existing
state-of-the-art methods.
|
1204.2601
|
Detecting lateral genetic material transfer
|
cs.NE cs.AI q-bio.GN
|
Bioinformatic methods to detect lateral gene transfer events are mainly based
on characteristics of functional coding DNA. In this paper, we propose the use
of DNA traits that do not depend on protein-coding requirements. We introduce
several semilocal variables that depend on the DNA primary sequence and that
reflect thermodynamic as well as physico-chemical magnitudes able to tell apart
the genomes of different organisms. After combining these variables in a neural
classifier, we obtain results whose resolving power goes as far as detecting
the exchange of genomic material between bacteria that are phylogenetically
close.
|
1204.2606
|
Privacy via the Johnson-Lindenstrauss Transform
|
cs.DS cs.CY cs.DB cs.SI
|
Suppose that party A collects private information about its users, where each
user's data is represented as a bit vector. Suppose that party B has a
proprietary data mining algorithm that requires estimating the distance between
users, such as clustering or nearest neighbors. We ask if it is possible for
party A to publish some information about each user so that B can estimate the
distance between users without being able to infer any private bit of a user.
Our method involves projecting each user's representation into a random,
lower-dimensional space via a sparse Johnson-Lindenstrauss transform and then
adding Gaussian noise to each entry of the lower-dimensional representation. We
show that the method preserves differential privacy---where the more privacy is
desired, the larger the variance of the Gaussian noise. Further, we show how to
approximate the true distances between users via only the lower-dimensional,
perturbed data. Finally, we consider other perturbation methods such as
randomized response and draw comparisons to sketch-based methods. While the
goal of releasing user-specific data to third parties is broader than
preserving distances, this work shows that distance computation with privacy is
an achievable goal.
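A minimal sketch of the general recipe described above, with a dense random-sign matrix standing in for the paper's sparse Johnson-Lindenstrauss transform; the noise scale and the bias-corrected distance estimator below are illustrative, not the paper's calibrated differential-privacy parameters:

```python
import numpy as np

def private_release(data, k, sigma, rng):
    """Project each user's bit vector with a random +-1/sqrt(k) matrix
    (a dense stand-in for a sparse JL transform), then add Gaussian noise
    to every entry of the low-dimensional representation."""
    d = data.shape[1]
    P = rng.choice([-1.0, 1.0], size=(d, k)) / np.sqrt(k)
    return data @ P + rng.normal(0.0, sigma, size=(data.shape[0], k))

def estimate_sq_dist(z_i, z_j, k, sigma):
    """Estimate the original squared distance from the perturbed data:
    the noisy projected distance minus the expected noise contribution."""
    return np.sum((z_i - z_j) ** 2) - 2 * k * sigma ** 2
```

In expectation the random projection preserves squared distances, and subtracting 2kσ² removes the bias introduced by the added noise.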
|
1204.2609
|
Stochastic Feature Mapping for PAC-Bayes Classification
|
cs.LG
|
Probabilistic generative modeling of data distributions can potentially
exploit hidden information which is useful for discriminative classification.
This observation has motivated the development of approaches that couple
generative and discriminative models for classification. In this paper, we
propose a new approach to couple generative and discriminative models in a
unified framework based on PAC-Bayes risk theory. We first derive the
model-parameter-independent stochastic feature mapping from a practical MAP
classifier operating on generative models. Then we construct a linear
stochastic classifier equipped with the feature mapping, and derive the
explicit PAC-Bayes risk bounds for such a classifier for both supervised and
semi-supervised learning. Minimizing the risk bound, using an EM-like iterative
procedure, results in a new posterior over hidden variables (E-step) and the
update rules of model parameters (M-step). The derivation of the posterior is
always feasible due to the way the feature mapping is incorporated and the
explicit form of the risk bound. The derived posterior allows the tuning of
generative models, and subsequently of the feature mappings, for better
classification. The derived update rules of the model parameters are the same
as those of the uncoupled models, since the feature mapping is
model-parameter-independent. Our experiments show that coupling the generative
data model and the discriminative classifier via a stochastic feature mapping
in this framework leads to a general classification tool with state-of-the-art
performance.
|
1204.2610
|
A Novel Framework using Elliptic Curve Cryptography for Extremely Secure
Transmission in Distributed Privacy Preserving Data Mining
|
cs.DB cs.CR
|
Privacy-preserving data mining is a method that ensures the privacy of
individual information during mining. The most important task involves
retrieving information from multiple distributed databases. Once in the data
warehouse, the data can be used by mining algorithms to retrieve confidential
information. The proposed framework has two major tasks: secure transmission
and preserving the privacy of confidential information during mining. Secure
transmission is handled using elliptic curve cryptography, while data
distortion provides privacy preservation, ensuring a highly secure environment.
|
1204.2611
|
Recovery from Linear Measurements with Complexity-Matching Universal
Signal Estimation
|
cs.IT math.IT
|
We study the compressed sensing (CS) signal estimation problem where an input
signal is measured via a linear matrix multiplication under additive noise.
While this setup usually assumes sparsity or compressibility in the input
signal during recovery, the signal structure that can be leveraged is often not
known a priori. In this paper, we consider universal CS recovery, where the
statistics of a stationary ergodic signal source are estimated simultaneously
with the signal itself. Inspired by Kolmogorov complexity and minimum
description length, we focus on a maximum a posteriori (MAP) estimation
framework that leverages universal priors to match the complexity of the
source. Our framework can also be applied to general linear inverse problems
where more measurements than in CS might be needed. We provide theoretical
results that support the algorithmic feasibility of universal MAP estimation
using a Markov chain Monte Carlo implementation, which is computationally
challenging. We incorporate some techniques to accelerate the algorithm while
providing comparable and in many cases better reconstruction quality than
existing algorithms. Experimental results show the promise of universality in
CS, particularly for low-complexity sources that do not exhibit standard
sparsity or compressibility.
|
1204.2637
|
Solution regions in the parameter space of a 3-RRR decoupled robot for a
prescribed workspace
|
cs.RO
|
This paper proposes a new design method to determine the feasible set of
parameters of translational or position/orientation decoupled parallel robots
for a prescribed singularity-free workspace of regular shape. The suggested
method uses Groebner bases to define the singularities and the cylindrical
algebraic decomposition to characterize the set of parameters. It makes it
possible to generate all the robot designs. A 3-RRR decoupled robot is used to
validate the proposed design method.
|
1204.2649
|
Multiuser Switched Diversity Scheduling Schemes
|
cs.IT math.IT
|
Multiuser switched-diversity scheduling schemes were recently proposed in
order to overcome the heavy feedback requirements of conventional opportunistic
scheduling schemes by applying a threshold-based, distributed, and ordered
scheduling mechanism. The main idea behind these schemes is that slight
reduction in the prospected multiuser diversity gains is an acceptable
trade-off for great savings in terms of required channel-state-information
feedback messages. In this work, we characterize the achievable rate region of
multiuser switched diversity systems and compare it with the rate region of
full feedback multiuser diversity systems. We also propose a novel proportional
fair multiuser switched-based scheduling scheme and demonstrate that it can
be optimized using a practical and distributed method to obtain the feedback
thresholds. We finally demonstrate by numerical examples that
switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the
ultimate network capacity of full feedback systems in Rayleigh fading
conditions.
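A toy sketch of the threshold-based, ordered selection idea behind switched-diversity scheduling (one common convention; the exact switching rule and fallback used in the paper may differ):

```python
def switched_scheduler(snrs, threshold):
    """Poll users in a fixed order and schedule the first whose channel SNR
    exceeds the threshold; only that much feedback is needed. If no user
    qualifies, fall back to the best user (a common convention)."""
    for i, s in enumerate(snrs):
        if s >= threshold:
            return i
    return max(range(len(snrs)), key=lambda i: snrs[i])
```

The threshold trades a slight loss of multiuser diversity gain for far fewer channel-state-information feedback messages, since polling stops at the first acceptable user.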
|
1204.2651
|
Cooperative Cognitive Networks: Optimal, Distributed and Low-Complexity
Algorithms
|
cs.IT math.IT
|
This paper considers the cooperation between a cognitive system and a primary
system where multiple cognitive base stations (CBSs) relay the primary user's
(PU) signals in exchange for more opportunity to transmit their own signals.
The CBSs use amplify-and-forward (AF) relaying and coordinated beamforming to
relay the primary signals and transmit their own signals. The objective is to
minimize the overall transmit power of the CBSs given the rate requirements of
the PU and the cognitive users (CUs). We show that the relaying matrices have
unit rank and perform two functions: Matched filter receive beamforming and
transmit beamforming. We then develop two efficient algorithms to find the
optimal solution. The first one has a linear convergence rate and is suitable for
distributed implementation, while the second one enjoys superlinear convergence
but requires centralized processing. Further, we derive the beamforming vectors
for the linear conventional zero-forcing (CZF) and prior zero-forcing (PZF)
schemes, which provide much simpler solutions. Simulation results demonstrate
the improvement in terms of outage performance due to the cooperation between
the primary and cognitive systems.
|
1204.2660
|
Efficient Iterative Decoding of LDPC in the Presence of Strong Phase
Noise
|
cs.IT math.IT
|
In this paper we propose a new efficient message-passing algorithm for
decoding LDPC codes transmitted over a channel with strong phase noise. The
algorithm performs approximate Bayesian inference on a factor graph
representation of the joint posterior of the channel and code. The approximate
inference is based on an improved canonical model for the messages of the
sum-product algorithm, and a
method for clustering the messages using the directional statistics framework.
The proposed canonical model includes treatment for phase slips which can limit
the performance of tracking algorithms. We show simulation results and
complexity analysis for the proposed algorithm, demonstrating its superiority
over some current state-of-the-art algorithms.
|
1204.2677
|
The Geographic Flow of Music
|
cs.SI cs.CY physics.soc-ph
|
The social media website last.fm provides a detailed snapshot of what its
users in hundreds of cities listen to each week. After suitably normalizing
this data, we use it to test three hypotheses related to the geographic flow of
music. The first is that although many of the most popular artists are listened
to around the world, music preferences are closely related to nationality,
language, and geographic location. We find support for this hypothesis, with a
couple of minor, yet interesting, exceptions. Our second hypothesis is that
some cities are consistently early adopters of new music (and early to snub
stale music). To test this hypothesis, we adapt a method previously used to
detect the leadership networks present in flocks of birds. We find empirical
support for the claim that a similar leadership network exists among cities,
and this finding is the main contribution of the paper. Finally, we test the
hypothesis that large cities tend to be ahead of smaller cities; we find only
weak support for this hypothesis.
|
1204.2692
|
Asynchronous Physical-layer Network Coding Scheme for Two-way OFDM Relay
|
cs.IT math.IT
|
In two-way OFDM relay, carrier frequency offsets (CFOs) between relay and
terminal nodes introduce severe intercarrier interference (ICI) which degrades
the performance of traditional physical-layer network coding (PLNC). Moreover,
the traditional algorithm for computing the a posteriori probability in the
presence of ICI would incur prohibitive computational complexity at the relay
node. In this paper, we propose a two-step asynchronous PLNC scheme at the
relay to mitigate
the effect of CFOs. In the first step, we intend to reconstruct the ICI
component, in which the space-alternating generalized expectation-maximization
(SAGE) algorithm is used to jointly estimate the needed parameters. In the
second step, a channel-decoding and network-coding scheme is proposed to
transform the received signal into the XOR of two terminals' transmitted
information using the reconstructed ICI. It is shown that the proposed scheme
greatly mitigates the impact of CFOs with relatively low computational
complexity in two-way OFDM relay.
|
1204.2712
|
Learning to Rank Query Recommendations by Semantic Similarities
|
cs.AI cs.HC cs.IR
|
Logs of the interactions with a search engine show that users often
reformulate their queries. Examining these reformulations shows that
recommendations that sharpen the focus of a query are helpful, like those based
on expansions of the original queries. But it also shows that queries that
express some topical shift with respect to the original query can help users
access the information they need more rapidly. We propose a method to identify,
from the query logs of past users, queries that either focus or shift the
initial query topic. This method combines various click-based, topic-based and
session-based ranking strategies and uses supervised learning in order to
maximize the semantic similarities between the query and the recommendations,
while at the same time diversifying them. We evaluate our method using the
query/click logs of a Japanese web search engine and we show that the
combination of the three methods proposed is significantly better than any of
them taken individually.
|
1204.2713
|
Enabling Semantic Analysis of User Browsing Patterns in the Web of Data
|
cs.AI cs.HC cs.IR
|
A useful step towards better interpretation and analysis of the usage
patterns is to formalize the semantics of the resources that users are
accessing in the Web. We focus on this problem and present an approach for the
semantic formalization of usage logs, which lays the basis for effective
techniques of querying expressive usage patterns. We also present a query
answering approach, which is useful to find in the logs expressive patterns of
usage behavior via formulation of semantic and temporal-based constraints. We
have processed over 30 thousand user browsing sessions extracted from usage
logs of DBpedia and Semantic Web Dog Food. All these events are formalized
semantically using respective domain ontologies and RDF representations of the
Web resources being accessed. We show the effectiveness of our approach through
experimental results, providing in this way an exploratory analysis of the way
users browse the Web of Data.
|
1204.2715
|
Collaboratively Patching Linked Data
|
cs.IR cs.DL cs.HC
|
Today's Web of Data is noisy. Linked Data often needs extensive preprocessing
to enable efficient use of heterogeneous resources. While consistent and valid
data provides the key to efficient data processing and aggregation, we are
facing two main challenges: (1) identifying erroneous facts and tracking
their origins in dynamically connected datasets is a difficult task, and (2)
efforts in the curation of deficient facts in Linked Data are exchanged rather
rarely. Since erroneous data is often duplicated and (re-)distributed by
mashup applications, it is not only the responsibility of a few original
publishers to keep their data tidy; it becomes a mission for all distributors
and consumers of Linked Data too. We present a new
approach to expose and to reuse patches on erroneous data to enhance and to add
quality information to the Web of Data. The feasibility of our approach is
demonstrated by the example of a collaborative game that patches statements in
DBpedia data and provides notifications for relevant changes.
|
1204.2718
|
Leveraging Usage Data for Linked Data Movie Entity Summarization
|
cs.AI cs.HC cs.IR
|
Novel research in the field of Linked Data focuses on the problem of entity
summarization. This field addresses the problem of ranking features according
to their importance for the task of identifying a particular entity. Besides
enabling a more human-friendly presentation, these summarizations can play a
central role for semantic search engines and semantic recommender systems.
Current approaches attempt to apply entity summarization based on patterns
that are inherent to the data under consideration.
  The approach proposed in this paper focuses on the movie domain. It utilizes
usage data in order to support measuring the similarity between movie entities.
Using this similarity it is possible to determine the k-nearest neighbors of an
entity. This leads to the idea that features that entities share with their
nearest neighbors can be considered as significant or important for these
entities. Additionally, we introduce a downgrading factor (similar to TF-IDF)
in order to overcome the high number of commonly occurring features. We
exemplify the approach based on a movie-ratings dataset that has been linked to
Freebase entities.
|
1204.2731
|
How do Ontology Mappings Change in the Life Sciences?
|
cs.DB
|
Mappings between related ontologies are increasingly used to support data
integration and analysis tasks. Changes in the ontologies also require the
adaptation of ontology mappings. So far, the evolution of ontology mappings has
received little attention, although ontologies change continuously, especially
in the life sciences. We therefore analyze how mappings between popular life
science ontologies evolve for different match algorithms. We also evaluate
which semantic ontology changes primarily affect the mappings. We further
investigate alternatives to predict or estimate the degree of future mapping
changes based on previous ontology and mapping transitions.
|
1204.2741
|
Simultaneous Object Detection, Tracking, and Event Recognition
|
cs.CV cs.AI
|
The common internal structure and algorithmic organization of object
detection, detection-based tracking, and event recognition facilitates a
general approach to integrating these three components. This supports
multidirectional information flow between these components allowing object
detection to influence tracking and event recognition and event recognition to
influence tracking and object detection. The performance of the combination can
exceed the performance of the components in isolation. This can be done with
linear asymptotic complexity.
|
1204.2742
|
Video In Sentences Out
|
cs.CV cs.AI
|
We present a system that produces sentential descriptions of video: who did
what to whom, and where and how they did it. Action class is rendered as a
verb, participant objects as noun phrases, properties of those objects as
adjectival modifiers in those noun phrases, spatial relations between those
participants as prepositional phrases, and characteristics of the event as
prepositional-phrase adjuncts and adverbial modifiers. Extracting the
information needed to render these linguistic entities requires an approach to
event recognition that recovers object tracks, the track-to-role assignments,
and changing body posture.
|
1204.2765
|
A practical approach to language complexity: a Wikipedia case study
|
cs.CL physics.data-an physics.soc-ph
|
In this paper we present statistical analysis of English texts from
Wikipedia. We try to address the issue of language complexity empirically by
comparing the simple English Wikipedia (Simple) to comparable samples of the
main English Wikipedia (Main). Simple is supposed to use a more simplified
language with a limited vocabulary, and editors are explicitly requested to
follow this guideline, yet in practice the vocabulary richness of both samples
is at the same level. Detailed analysis of longer units (n-grams of words and
part of speech tags) shows that the language of Simple is less complex than
that of Main primarily due to the use of shorter sentences, as opposed to
drastically simplified syntax or vocabulary. Comparing the two language
varieties by the Gunning readability index supports this conclusion. We also
report on the topical dependence of language complexity, e.g. that the language
is more advanced in conceptual articles compared to person-based (biographical)
and object-based articles. Finally, we investigate the relation between
conflict and language complexity by analyzing the content of the talk pages
associated with controversial and peacefully developing articles, concluding that
controversy has the effect of reducing language complexity.
|
1204.2775
|
Capacity Pre-Log of Noncoherent SIMO Channels via Hironaka's Theorem
|
cs.IT math.IT
|
We find the capacity pre-log of a temporally correlated Rayleigh block-fading
SIMO channel in the noncoherent setting. It is well known that for block-length
L and rank of the channel covariance matrix equal to Q, the capacity pre-log in
the SISO case is given by 1-Q/L. Here, Q/L can be interpreted as the pre-log
penalty incurred by channel uncertainty. Our main result reveals that, by
adding only one receive antenna, this penalty can be reduced to 1/L and can,
hence, be made to vanish in the large-L limit, even if Q/L remains constant as
L goes to infinity. Intuitively, even though the SISO channels between the
transmit antenna and the two receive antennas are statistically independent,
the transmit signal induces enough statistical dependence between the
corresponding receive signals for the second receive antenna to be able to
resolve the uncertainty associated with the first receive antenna's channel and
thereby make the overall system appear coherent. The proof of our main theorem
is based on a deep result from algebraic geometry known as Hironaka's Theorem
on the Resolution of Singularities.
|
1204.2801
|
Seeing Unseeability to See the Unseeable
|
cs.CV cs.AI cs.RO
|
We present a framework that allows an observer to determine occluded portions
of a structure by finding the maximum-likelihood estimate of those occluded
portions consistent with visible image evidence and a consistency model. Doing
this requires determining which portions of the structure are occluded in the
first place. Since each process relies on the other, we determine a solution to
both problems in tandem. We extend our framework to determine confidence of
one's assessment of which portions of an observed structure are occluded, and
the estimate of that occluded structure, by determining the sensitivity of
one's assessment to potential new observations. We further extend our framework
to determine a robotic action whose execution would allow a new observation
that would maximally increase one's confidence.
|
1204.2804
|
Estimating the Prevalence of Deception in Online Review Communities
|
cs.SI cs.CL cs.CY
|
Consumers' purchase decisions are increasingly influenced by user-generated
online reviews. Accordingly, there has been growing concern about the potential
for posting "deceptive opinion spam" -- fictitious reviews that have been
deliberately written to sound authentic, to deceive the reader. But while this
practice has received considerable public attention and concern, relatively
little is known about the actual prevalence, or rate, of deception in online
review communities, and less still about the factors that influence it.
We propose a generative model of deception which, in conjunction with a
deception classifier, we use to explore the prevalence of deception in six
popular online review communities: Expedia, Hotels.com, Orbitz, Priceline,
TripAdvisor, and Yelp. We additionally propose a theoretical model of online
reviews based on economic signaling theory, in which consumer reviews diminish
the inherent information asymmetry between consumers and producers by acting
as a signal of a product's true, unknown quality. We find that deceptive
opinion spam is a growing problem overall, but with different growth rates
across communities. These rates, we argue, are driven by the different
signaling costs associated with deception for each review community, e.g.,
posting requirements. When measures are taken to increase signaling cost, e.g.,
filtering reviews written by first-time reviewers, deception prevalence is
effectively reduced.
|
1204.2837
|
Watersheds, waterfalls, on edge or node weighted graphs
|
cs.CV cs.DM
|
We present an algebraic approach to the watershed adapted to edge or node
weighted graphs. Starting with the flooding adjunction, we introduce the
flooding graphs, for which node and edge weights may be deduced one from the
other. Each node-weighted or edge-weighted graph may be transformed into a
flooding graph, showing that there is no superiority in using one or the other,
both being equivalent. We then introduce pruning operators that extract
subgraphs of increasing steepness. As steepness increases, the number of
never-ascending paths becomes smaller and smaller. This reduces the watershed
zone,
where catchment basins overlap. A last pruning operator called scissor
associates to each node outside the regional minima one and only one edge. The
catchment basins of this new graph do not overlap and form a watershed
partition. Again, with increasing steepness, the number of distinct
watershed partitions contained in a graph becomes smaller and smaller.
Ultimately, for natural images, an infinite steepness leads to a unique
solution, as it is unlikely that two absolutely identical non-ascending paths
of infinite steepness connect a node with two distinct minima. It happens that
non-ascending paths of a given steepness are the geodesics of lexicographic
distance functions of a given depth. This makes it possible to extract the
watershed partitions as skeletons by zone of influence of the minima for such
lexicographic distances. The waterfall hierarchy is obtained by a sequence of
operations. The first constructs the minimum spanning forest which spans an
initial watershed partition. The contraction of the trees into one node
produces a reduced graph which may be submitted to the same treatment. The
process is iterated until only one region remains. The union of the edges of
all forests produced constitutes a minimum spanning tree of the initial graph.
|
1204.2847
|
Segmentation Similarity and Agreement
|
cs.CL
|
We propose a new segmentation evaluation metric, called segmentation
similarity (S), that quantifies the similarity between two segmentations as the
proportion of boundaries that are not transformed when comparing them using
edit distance, essentially using edit distance as a penalty function and
scaling penalties by segmentation size. We propose several adapted
inter-annotator agreement coefficients that use S and are suitable for
segmentation. We show that S is configurable enough to suit a wide variety of
segmentation evaluations, and is an improvement upon the state of the art. We
also propose using inter-annotator agreement coefficients to evaluate automatic
segmenters in terms of human performance.
|
1204.2857
|
Synthesis of Minimal Error Control Software
|
cs.SY cs.SC
|
Software implementations of controllers for physical systems are at the core
of many embedded systems. The design of controllers uses the theory of
dynamical systems to construct a mathematical control law that ensures that the
controlled system has certain properties, such as asymptotic convergence to an
equilibrium point, while optimizing some performance criteria. However, owing
to quantization errors arising from the use of fixed-point arithmetic, the
implementation of this control law can only guarantee practical stability:
under the actions of the implementation, the trajectories of the controlled
system converge to a bounded set around the equilibrium point, and the size of
the bounded set is proportional to the error in the implementation. The problem
of verifying whether a controller implementation achieves practical stability
for a given bounded set has been studied before. In this paper, we change the
emphasis from verification to automatic synthesis. Using synthesis, the need
for formal verification can be considerably reduced thereby reducing the design
time as well as design cost of embedded control software.
We give a methodology and a tool to synthesize embedded control software that
is Pareto optimal w.r.t. both performance criteria and practical stability
regions. Our technique is a combination of static analysis to estimate
quantization errors for specific controller implementations and stochastic
local search over the space of possible controllers using particle swarm
optimization. The effectiveness of our technique is illustrated using examples
of various standard control systems: in most examples, we achieve controllers
with performance close to LQR-LQG but with implementation errors, and hence
regions of practical stability, several times smaller.
|
1204.2912
|
Non-sparse Linear Representations for Visual Tracking with Online
Reservoir Metric Learning
|
cs.CV
|
Most sparse linear representation-based trackers need to solve a
computationally expensive L1-regularized optimization problem. To address this
problem, we propose a visual tracker based on non-sparse linear
representations, which admit an efficient closed-form solution without
sacrificing accuracy. Moreover, in order to capture the correlation information
between different feature dimensions, we learn a Mahalanobis distance metric in
an online fashion and incorporate the learned metric into the optimization
problem for obtaining the linear representation. We show that online metric
learning using proximity comparison significantly improves the robustness of
the tracking, especially on those sequences exhibiting drastic appearance
changes. Furthermore, in order to prevent the unbounded growth in the number of
training samples for the metric learning, we design a time-weighted reservoir
sampling method to maintain and update limited-sized foreground and background
sample buffers for balancing sample diversity and adaptability. Experimental
results on challenging videos demonstrate the effectiveness and robustness of
the proposed tracker.
|
1204.2922
|
Secret Key Agreement Using Correlated Sources over the Generalized
Multiple Access Channel
|
cs.IT math.IT
|
A secret key agreement setup between three users is considered in which each
of the users 1 and 2 intends to share a secret key with user 3, and users 1 and
2 are eavesdroppers with respect to each other. The three users observe i.i.d.
outputs of correlated sources and there is a generalized discrete memoryless
multiple access channel (GDMMAC) from users 1 and 2 to user 3 for communication
between the users. The secret key agreement is established using the correlated
sources and the GDMMAC. In this setup, inner and outer bounds of the secret key
capacity region are investigated. Moreover, for a special case where the
channel inputs and outputs and the sources form Markov chains in some order,
the secret key capacity region is derived. A Gaussian case is also considered
in this setup.
|
1204.2927
|
Diversity versus Channel Knowledge at Finite Block-Length
|
cs.IT math.IT
|
We study the maximal achievable rate R*(n, \epsilon) for a given block-length
n and block error probability \epsilon over Rayleigh block-fading channels in
the noncoherent setting and in the finite block-length regime. Our results show
that for a given block-length and error probability, R*(n, \epsilon) is not
monotonic in the channel's coherence time, but there exists a rate-maximizing
coherence time that optimally trades between diversity and the cost of estimating
the channel.
|
1204.2980
|
Realizable Rate Distortion Function and Bayesian Filtering Theory
|
cs.IT math.FA math.IT math.PR
|
The relation between rate distortion function (RDF) and Bayesian filtering
theory is discussed. The relation is established by imposing a causal or
realizability constraint on the reconstruction conditional distribution of the
RDF, leading to the definition of a causal RDF. Existence of the optimal
reconstruction distribution of the causal RDF is shown using the topology of
weak convergence of probability measures. The optimal non-stationary causal
reproduction conditional distribution of the causal RDF is derived in closed
form; it is given by a set of recursive equations which are computed backward
in time. The realization of causal RDF is described via the source-channel
matching approach, while an example is briefly discussed to illustrate the
concepts.
|
1204.2991
|
Collective Intelligence 2012: Proceedings
|
cs.SI
|
This volume holds the proceedings of the Collective Intelligence 2012
conference in Cambridge, Massachusetts. It contains the full papers, poster
papers, and plenary abstracts.
Collective intelligence has existed at least as long as humans have, because
families, armies, countries, and companies have all - at least sometimes -
acted collectively in ways that seem intelligent. But in the last decade or so
a new kind of collective intelligence has emerged: groups of people and
computers, connected by the Internet, collectively doing intelligent things.
For example, Google technology harvests knowledge generated by millions of
people creating and linking web pages and then uses this knowledge to answer
queries in ways that often seem amazingly intelligent. Or in Wikipedia,
thousands of people around the world have collectively created a very large and
high quality intellectual product with almost no centralized control, and
almost all as volunteers! These early examples of Internet-enabled collective
intelligence are not the end of the story but just the beginning. And in order
to understand the possibilities and constraints of these new kinds of
intelligence, we need a new interdisciplinary field.
|
1204.2994
|
Image Restoration with Signal-dependent Camera Noise
|
cs.CV stat.AP
|
This article describes a fast iterative algorithm for image denoising and
deconvolution with signal-dependent observation noise. We use an optimization
strategy based on variable splitting that adapts traditional Gaussian
noise-based restoration algorithms to account for the observed image being
corrupted by mixed Poisson-Gaussian noise and quantization errors.
|
1204.2995
|
Analytic Methods for Optimizing Realtime Crowdsourcing
|
cs.SI cs.HC physics.soc-ph
|
Realtime crowdsourcing research has demonstrated that it is possible to
recruit paid crowds within seconds by managing a small, fast-reacting worker
pool. Realtime crowds enable crowd-powered systems that respond at interactive
speeds: for example, cameras, robots and instant opinion polls. So far, these
techniques have mainly been proof-of-concept prototypes: research has not yet
attempted to understand how they might work at large scale or optimize their
cost/performance trade-offs. In this paper, we use queueing theory to analyze
the retainer model for realtime crowdsourcing, in particular its expected wait
time and cost to requesters. We provide an algorithm that allows requesters to
minimize their cost subject to performance requirements. We then propose and
analyze three techniques to improve performance: push notifications, shared
retainer pools, and precruitment, which involves recalling retainer workers
before a task actually arrives. An experimental validation finds that
precruited workers begin a task 500 milliseconds after it is posted, delivering
results below the one-second cognitive threshold for an end-user to stay in
flow.
|
1204.3010
|
Optimal box-covering algorithm for fractal dimension of complex networks
|
physics.comp-ph cs.SI physics.soc-ph
|
The self-similarity of complex networks is typically investigated through
computational algorithms whose primary task is to cover the structure with a
minimal number of boxes. Here we introduce a box-covering algorithm that
not only outperforms previous ones, but also finds optimal solutions. For the
two benchmark cases tested, namely, the E. coli and the WWW networks, our
results show that the improvement can be rather substantial, reaching up to 15%
in the case of the WWW network.
|
1204.3040
|
Tractable Answer-Set Programming with Weight Constraints: Bounded
Treewidth is not Enough
|
cs.LO cs.AI cs.CC
|
Cardinality constraints or, more generally, weight constraints are well
recognized as an important extension of answer-set programming. Clearly, all
common algorithmic tasks related to programs with cardinality or weight
constraints - like checking the consistency of a program - are intractable.
Many intractable problems in the area of knowledge representation and reasoning
have been shown to become linear-time tractable if the treewidth of the
programs or formulas under consideration is bounded by some constant. The goal
of this paper is to apply the notion of treewidth to programs with cardinality
or weight constraints and to identify tractable fragments. It will turn out
that the straightforward application of treewidth to this class of programs
does not suffice to obtain tractability. However, by imposing further
restrictions, tractability can be achieved.
|
1204.3046
|
The DoF Region of the Multiple-Antenna Time Correlated Interference
Channel with Delayed CSIT
|
cs.IT math.IT
|
We consider the time-correlated multiple-antenna interference channel where
the transmitters have (i) delayed channel state information (CSI) obtained from
a latency-prone feedback channel as well as (ii) imperfect current CSIT,
obtained e.g. from prediction on the basis of these past channel samples. We
derive the degrees of freedom (DoF) region for the two-user multiple-antenna
interference channel under such conditions. The proposed DoF achieving scheme
exploits a particular combination of the space-time alignment protocol designed
for fully outdated CSIT feedback channels (initially developed for the
broadcast channel by Maddah-Ali et al., later extended to the interference
channel by Vaze et al. and Ghasemi et al.) together with the use of simple
zero-forcing (ZF) precoders. The essential ingredient lies in the quantization
and feedback of the residual interference left after the application of the
initial imperfect ZF precoder. Our focus is on the MISO setting, although
extensions to certain MIMO cases are also considered.
|
1204.3057
|
Asymptotically good binary linear codes with asymptotically good
self-intersection spans
|
cs.IT math.CO math.IT
|
If C is a binary linear code, let C^2 be the linear code spanned by
intersections of pairs of codewords of C. We construct an asymptotically good
family of binary linear codes such that, for C ranging in this family, the C^2
also form an asymptotically good family. For this we use algebraic-geometry
codes, concatenation, and a fair amount of bilinear algebra.
More precisely, the two main ingredients used in our construction are, first,
a description of the symmetric square of an odd degree extension field in terms
only of field operations of small degree, and second, a recent result of
Garcia-Stichtenoth-Bassa-Beelen on the number of points of curves on such an
odd degree extension field.
|
1204.3069
|
An Outer Bound for the Memoryless Two-user Interference Channel with
General Cooperation
|
cs.IT math.IT
|
The interference channel models a wireless network where several
source-destination pairs compete for the same resources. When nodes transmit
simultaneously, the destinations experience interference. This paper considers a
4-node network, where two nodes are sources and the other two are destinations.
All nodes are full-duplex and cooperate to mitigate interference. A sum-rate
outer bound is derived, which is shown to unify a number of previously derived
outer bounds for special cases of cooperation. The approach is shown to extend
to cooperative interference networks with more than two source-destination
pairs and for any partial sum-rate. How the derived bound relates to similar
bounds for channel models including cognitive nodes, i.e., nodes that have
non-causal knowledge of the messages of some other node, is also discussed.
Finally, the bound is evaluated for the Gaussian noise channel and used to
compare different modes of cooperation.
|
1204.3074
|
Time-Critical Influence Maximization in Social Networks with
Time-Delayed Diffusion Process
|
cs.SI physics.soc-ph
|
Influence maximization is a problem of finding a small set of highly
influential users, also known as seeds, in a social network such that the
spread of influence under certain propagation models is maximized. In this
paper, we consider time-critical influence maximization, in which one wants to
maximize influence spread within a given deadline. Since timing is considered
in the optimization, we also extend the Independent Cascade (IC) model and the
Linear Threshold (LT) model to incorporate the time delay aspect of influence
diffusion among individuals in social networks. We show that time-critical
influence maximization under the time-delayed IC and LT models maintains
desired properties such as submodularity, which allows a greedy approximation
algorithm to achieve an approximation ratio of $1-1/e$. To overcome the
inefficiency of the greedy algorithm, we design two heuristic algorithms: the
first one is based on a dynamic programming procedure that computes exact
influence in tree structures and directed acyclic subgraphs, while the second
one converts the problem to one in the original models and then applies
existing fast heuristic algorithms to it. Our simulation results demonstrate
that our algorithms achieve the same level of influence spread as the greedy
algorithm while running a few orders of magnitude faster, and they also
outperform existing fast heuristics that disregard the deadline constraint and
delays in diffusion.
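The greedy step the abstract relies on (submodular maximization with the $1-1/e$ guarantee) can be sketched as follows; the graph, propagation probability, and Monte-Carlo spread estimator below are toy choices of mine, using the basic IC model rather than the authors' time-delayed variants.

```python
import random

def ic_spread(graph, seeds, p=0.3, trials=200):
    """Monte-Carlo estimate of expected spread under the basic IC model."""
    rng = random.Random(0)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_seeds(graph, k):
    """Greedy maximization of a monotone submodular spread function;
    this is what yields the 1 - 1/e approximation ratio."""
    seeds = set()
    for _ in range(k):
        best = max((u for u in graph if u not in seeds),
                   key=lambda u: ic_spread(graph, seeds | {u}))
        seeds.add(best)
    return seeds

graph = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: [], 5: [4]}
print(greedy_seeds(graph, 2))
```

The inner Monte-Carlo estimation is exactly the inefficiency the paper's heuristics are designed to avoid.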
|
1204.3097
|
Technical Report: Observability of a Linear System under Sparsity
Constraints
|
cs.IT math.IT math.OC
|
Consider an n-dimensional linear system where it is known that there are at
most k<n non-zero components in the initial state. The observability problem,
that is the recovery of the initial state, for such a system is considered. We
obtain sufficient conditions on the number of the available observations to be
able to recover the initial state exactly for such a system. Both deterministic
and stochastic setups are considered for system dynamics. In the former
setting, the system matrices are known deterministically, whereas in the latter
setting, all of the matrices are picked from a randomized class of matrices.
The main message is that one does not need a full set of n observations to
uniquely identify the initial state of the linear system, even when the
observations are picked randomly, provided the initial condition is known to be
sparse.
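The idea can be illustrated with a brute-force toy example (the system, dimensions, and numbers are my own; the paper's actual sufficient conditions are not reproduced here): a 1-sparse initial state in a 4-dimensional system is recovered from only 2 scalar observations by least squares over all candidate supports.

```python
import numpy as np
from itertools import combinations

# Toy LTI system x_{t+1} = A x_t with scalar output y_t = c^T A^t x_0.
rng = np.random.default_rng(1)
n, k, m = 4, 1, 2                   # state dim, sparsity, observations (m < n)
A = rng.standard_normal((n, n))
c = rng.standard_normal(n)

# Stacked observability rows for the first m output samples.
O = np.vstack([c @ np.linalg.matrix_power(A, t) for t in range(m)])

x0 = np.zeros(n)
x0[2] = 1.5                         # k-sparse ground-truth initial state
y = O @ x0

# Brute force over all size-k supports: least squares on each, keep best fit.
best, best_res = None, np.inf
for S in combinations(range(n), k):
    cols = list(S)
    sol, *_ = np.linalg.lstsq(O[:, cols], y, rcond=None)
    res = np.linalg.norm(O[:, cols] @ sol - y)
    if res < best_res:
        best_res = res
        best = np.zeros(n)
        best[cols] = sol

print(best)                         # recovers x0 despite m = 2 < n = 4
```

The correct support gives zero residual because y lies in the span of the corresponding columns of O; for generic random A and c, no other size-k support does.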
|
1204.3100
|
Modular design of jointly optimal controllers and forwarding policies
for wireless control
|
math.OC cs.SY
|
We consider the joint design of packet forwarding policies and controllers
for wireless control loops where sensor measurements are sent to the controller
over an unreliable and energy-constrained multi-hop wireless network. For fixed
sampling rate of the sensor, the co-design problem separates into two
well-defined and independent subproblems: transmission scheduling for
maximizing the deadline-constrained reliability and optimal control under
packet loss. We develop optimal and implementable solutions for these
subproblems and show that the optimally co-designed system can be efficiently
found. Numerical examples highlight the many trade-offs involved and
demonstrate the power of our approach.
|
1204.3114
|
On the Role of Mobility for Multi-message Gossip
|
cs.SI cs.IT cs.NI math.IT
|
We consider information dissemination in a large $n$-user wireless network in
which $k$ users wish to share a unique message with all other users. Each of
the $n$ users only has knowledge of its own contents and state information;
this corresponds to a one-sided push-only scenario. The goal is to disseminate
all messages efficiently, hopefully achieving an order-optimal spreading rate
over unicast wireless random networks. First, we show that a random-push
strategy -- where a user sends its own or a received packet at random -- is
order-wise suboptimal in a random geometric graph: specifically,
$\Omega(\sqrt{n})$ times slower than optimal spreading. It is known that this
gap can be closed if each user has "full" mobility, since this effectively
creates a complete graph. We instead consider velocity-constrained mobility
where at each time slot the user moves locally using a discrete random walk
with velocity $v(n)$ that is much lower than full mobility. We propose a simple
two-stage dissemination strategy that alternates between individual message
flooding ("self promotion") and random gossiping. We prove that this scheme
achieves a close to optimal spreading rate (within only a logarithmic gap) as
long as the velocity is at least $v(n)=\omega(\sqrt{\log n/k})$. The key
insight is that the mixing property introduced by the partial mobility helps
users to spread in space within a relatively short period compared to the
optimal spreading time, which macroscopically mimics message dissemination over
a complete graph.
|
1204.3167
|
An Analytical Framework for Multi-Cell Cooperation via Stochastic
Geometry and Large Deviations
|
cs.IT math.IT
|
Multi-cell cooperation (MCC) is an approach for mitigating inter-cell
interference in dense cellular networks. Existing studies on MCC performance
typically rely on either over-simplified Wyner-type models or complex
system-level simulations. The promising theoretical results (typically using
Wyner models) seem to materialize neither in complex simulations nor in
practice. To more accurately investigate the theoretical performance of MCC,
this paper models an entire plane of interfering cells as a Poisson random
tessellation. The base stations (BSs) are then clustered using a regular
lattice, whereby BSs in the same cluster mitigate mutual interference by
beamforming with perfect channel state information. Techniques from stochastic
geometry and large deviation theory are applied to analyze the outage
probability as a function of the mobile locations, scattering environment, and
the average number of cooperating BSs per cluster, L. For mobiles near the
centers of BS clusters, it is shown that as L increases, outage probability
diminishes sub-exponentially if scattering is sparse, and following a power law
with an exponent proportional to the signal diversity order if scattering is
rich. For randomly located mobiles, regardless of scattering, outage
probability is shown to scale with increasing L following a power law with an
exponent no larger than 0.5. These results confirm analytically that
cluster-edge mobiles are the bottleneck for network coverage and provide a
plausible analytic framework for more realistic analysis of other multi-cell
techniques.
|
1204.3198
|
The failure of the law of brevity in two New World primates. Statistical
caveats
|
q-bio.NC cs.CL
|
Parallels of Zipf's law of brevity, the tendency of more frequent words to be
shorter, have been found in bottlenose dolphins and Formosan macaques. Although
these findings suggest that behavioral repertoires are shaped by a general
principle of compression, common marmosets and golden-backed uakaris do not
exhibit the law. However, we argue that the law may be impossible or difficult
to detect statistically in a given species if the repertoire is too small, a
problem that could be affecting golden-backed uakaris, and show that the law is
present in a subset of the repertoire of common marmosets. We suggest that the
visibility of the law will depend on the subset of the repertoire under
consideration or the repertoire size.
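The brevity-law test amounts to checking for a negative rank correlation between type frequency and type length/duration. A minimal sketch with a self-contained Kendall tau (the repertoire numbers are illustrative, not the marmoset or uakari data):

```python
def kendall_tau(xs, ys):
    """Plain O(n^2) Kendall rank correlation (ties contribute zero)."""
    n = len(xs)
    num = sum(
        (1 if (xs[i] - xs[j]) * (ys[i] - ys[j]) > 0 else
         -1 if (xs[i] - xs[j]) * (ys[i] - ys[j]) < 0 else 0)
        for i in range(n) for j in range(i + 1, n)
    )
    return num / (n * (n - 1) / 2)

# Hypothetical call repertoire: (frequency, duration in s); values are made up.
repertoire = [(120, 0.2), (80, 0.3), (40, 0.5), (15, 0.6), (5, 0.9)]
freq = [f for f, _ in repertoire]
dur = [d for _, d in repertoire]

print(kendall_tau(freq, dur))   # -1.0: more frequent calls are shorter
```

With a repertoire of only 5 types, as here, a correlation this extreme can still fail conventional significance thresholds, which is precisely the small-repertoire detectability problem the abstract raises.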
|
1204.3210
|
FullSWOF: A software for overland flow simulation / FullSWOF : un
logiciel pour la simulation du ruissellement
|
math.NA cs.CE cs.NA math.AP
|
Overland flow on agricultural fields may have some undesirable effects such
as soil erosion, flood and pollutant transport. To better understand this
phenomenon and limit its consequences, we developed a code using
state-of-the-art numerical methods: FullSWOF (Full Shallow Water equations for
Overland Flow), an object oriented code written in C++. It has been made
open-source and can be downloaded from
http://www.univ-orleans.fr/mapmo/soft/FullSWOF/. The model is based on the
classical Shallow Water (SW) system (also called the Saint-Venant system).
Numerical difficulties come from the numerous dry/wet transitions and the
highly-variable topography encountered inside a field. The model includes
runon and rainfall inputs,
infiltration (modified Green-Ampt equation), friction (Darcy-Weisbach and
Manning formulas). First, we present the numerical method for the resolution of
the Shallow Water equations integrated in FullSWOF_2D (the two-dimensional
version). This method is based on a hydrostatic reconstruction scheme, coupled
with a semi-implicit friction term treatment. FullSWOF_2D has been previously
validated using analytical solutions from the SWASHES library (Shallow Water
Analytic Solutions for Hydraulic and Environmental Studies). Finally,
FullSWOF_2D is run on a real topography measured on a runoff plot located in
Thies (Senegal). Simulation results are compared with measured data. This
experimental benchmark demonstrates the capabilities of FullSWOF to simulate
adequately overland flow. FullSWOF could also be used for other environmental
issues, such as river floods and dam-breaks.
|
1204.3221
|
Neuroevolution Results in Emergence of Short-Term Memory for
Goal-Directed Behavior
|
cs.NE cs.AI nlin.AO
|
Animals behave adaptively in environments with multiple competing goals.
Understanding the mechanisms underlying such goal-directed behavior remains a
challenge for neuroscience as well as for adaptive systems research. To address
this problem we developed an evolutionary model of adaptive behavior in a
multigoal stochastic environment. The proposed neuroevolutionary algorithm is
based on neuron duplication as the basic mechanism for developing the agent's
recurrent neural network. Simulation results demonstrate that in the course of
evolution agents acquire the ability to store short-term memory and, therefore,
to use it in behavioral strategies with alternative actions. We found that
evolution discovered two mechanisms for short-term memory. The first mechanism
is the integration of sensory signals and ongoing internal neural activity,
resulting in the emergence of cell groups specialized in alternative actions.
The second mechanism is slow neurodynamic processes that make it possible to
encode the previous behavioral choice.
|
1204.3223
|
Intelligent Database Flexible Querying System by Approximate Query
Processing
|
cs.DB
|
Database flexible querying is an alternative to classic querying for users.
The use of Formal Concept Analysis (FCA) makes it possible to provide
approximate answers beyond those returned by a classic Database Management
System (DBMS). Some applications do not need exact answers. However, flexible
querying can be expensive in response time. This cost is even more significant
when the flexible querying requires the calculation of aggregate functions
("Sum", "Avg", "Count", "Var", etc.). In this paper, we propose an approach
that tries to solve this problem by using Approximate Query Processing (AQP).
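As a generic illustration of AQP, the sketch below uses uniform sampling with scale-up, one standard technique, and is not necessarily the approach developed in the paper:

```python
import random

def approx_aggregates(rows, frac=0.1):
    """Sampling-based AQP: evaluate aggregates on a uniform sample and
    scale Sum and Count back up by the inverse sampling fraction."""
    rng = random.Random(0)
    k = max(1, int(len(rows) * frac))
    sample = rng.sample(rows, k)
    scale = len(rows) / k
    s = sum(sample)
    return {"Sum": s * scale, "Count": k * scale, "Avg": s / k}

rows = list(range(10000))           # exact: Sum = 49995000, Avg = 4999.5
est = approx_aggregates(rows, frac=0.05)
print(est)
```

Scanning 5% of the rows gives answers within sampling error of the exact aggregates, which is the response-time trade-off AQP exploits for flexible queries.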
|
1204.3230
|
Information, Community, and Action: How Nonprofit Organizations Use
Social Media
|
cs.CY cs.SI
|
The rapid diffusion of "microblogging" services such as Twitter is ushering
in a new era of possibilities for organizations to communicate with and engage
their core stakeholders and the general public. To enhance understanding of the
communicative functions microblogging serves for organizations, this study
examines the Twitter utilization practices of the 100 largest nonprofit
organizations in the United States. The analysis reveals three key
functions of microblogging updates: "information," "community," and "action."
Though the informational use of microblogging is extensive, nonprofit
organizations are better at using Twitter to strategically engage their
stakeholders via dialogic and community-building practices than they have been
with traditional websites. The adoption of social media appears to have
engendered new paradigms of public engagement.
Keywords: microblogging; Twitter; social media; stakeholder relations;
organizational communication; organization-public relations; nonprofit
organizations
|
1204.3238
|
Reliable communication over non-binary insertion/deletion channels
|
cs.IT math.IT
|
We consider the problem of reliable communication over non-binary
insertion/deletion channels where symbols are randomly deleted from or inserted
in the transmitted sequence and all symbols are corrupted by additive white
Gaussian noise. To this end, we utilize the inherent redundancy achievable in
non-binary symbol sets by first expanding the symbol set and then allocating
part of the bits associated with each symbol to watermark symbols. The
watermark sequence, known at the receiver, is then used by a forward-backward
algorithm to provide soft information for an outer code which decodes the
transmitted sequence. Through numerical results and discussions, we evaluate
the performance of the proposed solution and show that it leads to significant
system ability to detect and correct insertions/deletions. We also provide
estimates of the maximum achievable information rates of the system, compare
them with the available bounds, and construct practical codes capable of
approaching these limits.
|
1204.3251
|
Plug-in martingales for testing exchangeability on-line
|
cs.LG stat.ME
|
A standard assumption in machine learning is the exchangeability of data,
which is equivalent to assuming that the examples are generated from the same
probability distribution independently. This paper is devoted to testing the
assumption of exchangeability on-line: the examples arrive one by one, and
after receiving each example we would like to have a valid measure of the
degree to which the assumption of exchangeability has been falsified. Such
measures are provided by exchangeability martingales. We extend known
techniques for constructing exchangeability martingales and show that our new
method is competitive with the martingales introduced before. Finally we
investigate the performance of our testing method on two benchmark datasets,
USPS and Statlog Satellite data; for the former, the known techniques give
satisfactory results, but for the latter our new, more flexible method becomes
necessary.
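The mechanism can be sketched end to end; the sketch below uses the classical power martingale rather than the plug-in construction the paper introduces, and the data streams are synthetic Gaussians of my own choosing.

```python
import random

def conformal_pvalues(stream, score=abs):
    """Smoothed conformal p-values; under exchangeability they are i.i.d. U(0,1)."""
    rng = random.Random(0)
    seen, pvals = [], []
    for z in stream:
        seen.append(z)
        scores = [score(x) for x in seen]
        s_new = scores[-1]
        gt = sum(1 for s in scores if s > s_new)
        eq = sum(1 for s in scores if s == s_new)
        pvals.append((gt + rng.random() * eq) / len(scores))
    return pvals

def power_martingale(pvals, eps=0.5):
    """Power martingale M_n = prod_i eps * p_i^(eps - 1); a large value
    of M_n is evidence against exchangeability."""
    m, path = 1.0, []
    for p in pvals:
        m *= eps * max(p, 1e-12) ** (eps - 1)
        path.append(m)
    return path

rng = random.Random(1)
steady = [rng.gauss(0, 1) for _ in range(200)]                  # exchangeable
drifted = steady[:100] + [rng.gauss(3, 1) for _ in range(100)]  # change point

m_steady = power_martingale(conformal_pvalues(steady))[-1]
m_drift = power_martingale(conformal_pvalues(drifted))[-1]
print(m_steady, m_drift)   # the drifted stream yields a far larger final value
```

After the change point the conformal p-values become systematically small, so the martingale grows, providing exactly the valid on-line measure of falsified exchangeability described above.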
|
1204.3255
|
Lower Complexity Bounds for Lifted Inference
|
cs.AI
|
One of the big challenges in the development of probabilistic relational (or
probabilistic logical) modeling and learning frameworks is the design of
inference techniques that operate on the level of the abstract model
representation language, rather than on the level of ground, propositional
instances of the model. Numerous approaches for such "lifted inference"
techniques have been proposed. While it has been demonstrated that these
techniques will lead to significantly more efficient inference on some specific
models, there are only very recent and still quite restricted results that show
the feasibility of lifted inference on certain syntactically defined classes of
models. Lower complexity bounds that imply some limitations for the feasibility
of lifted inference on more expressive model classes were established early on
in (Jaeger 2000). However, it is not immediate that these results also apply to
the type of modeling languages that currently receive the most attention, i.e.,
weighted, quantifier-free formulas. In this paper we extend these earlier
results, and show that under the assumption that NETIME =/= ETIME, there is no
polynomial lifted inference algorithm for knowledge bases of weighted,
quantifier- and function-free formulas. Further strengthening earlier results,
this is also shown to hold for approximate inference, and for knowledge bases
not containing the equality predicate.
|
1204.3256
|
Optimizing the Medium Access Control in Multi-hop Wireless Networks
|
cs.IT cs.NI math.IT
|
We study the problem of geometric optimization of medium access control in
multi-hop wireless network. We discuss the optimal placements of simultaneous
transmitters in the network and our general framework allows us to evaluate the
performance gains of highly managed medium access control schemes that would be
required to implement these placements. In a wireless network consisting of
randomly distributed nodes, our performance metric is the optimum
transmission range that achieves the best tradeoff between the progress
of packets in desired directions towards their respective destinations and the
total number of transmissions required to transport packets to their
destinations. We evaluate an ALOHA-based scheme where simultaneous transmitters
are dispatched according to a uniform Poisson distribution and compare it with
various grid pattern based schemes where simultaneous transmitters are
positioned in specific regular patterns. Our results show that optimizing the
medium access control in a multi-hop network should take into account
parameters such as the signal-to-interference ratio threshold and the
attenuation coefficient. For instance, at typical values of these parameters,
the best scheme is based on a triangular grid pattern and, under a no-fading
channel model, its optimal transmission range and network capacity exceed
those achievable with the ALOHA-based scheme by factors of two and three,
respectively. Later on, we also identify the optimal medium
access control schemes when signal-to-interference ratio threshold and
attenuation coefficient approach the extreme values and discuss how fading
impacts the performance of all schemes we evaluate in this article.
|
1204.3259
|
Combinatorial Evolution and Forecasting of Communication Protocol ZigBee
|
cs.NI cs.SY math.OC
|
The article addresses combinatorial evolution and forecasting of
communication protocol for wireless sensor networks (ZigBee). Morphological
tree structure (a version of and-or tree) is used as a hierarchical model for
the protocol. Three generations of ZigBee protocol are examined. A set of
protocol change operations is generated and described. The change operations
are used as items for forecasting based on combinatorial problems (e.g.,
clustering, knapsack problem, multiple choice knapsack problem). Two kinds of
preliminary forecasts for the examined communication protocol are considered:
(i) direct expert (expert judgment) based forecast, (ii) computation of the
forecast(s) (usage of multicriteria decision making and combinatorial
optimization problems). Finally, aggregation of the obtained preliminary
forecasts is considered (two aggregation strategies are used).
|
1204.3261
|
Investigating operation of the Internet in orbit: Five years of
collaboration around CLEO
|
cs.NI astro-ph.IM cs.SY
|
The Cisco router in Low Earth Orbit (CLEO) was launched into space as an
experimental secondary payload onboard the UK Disaster Monitoring Constellation
(UK-DMC) satellite in September 2003. The UK-DMC satellite is one of an
increasing number of DMC satellites in orbit that rely on the Internet Protocol
(IP) for command and control and for delivery of data from payloads. The DMC
satellites, built by Surrey Satellite Technology Ltd (SSTL), have imaged the
effects of Hurricane Katrina, the Indian Ocean Tsunami, and other events for
disaster relief under the International Space and Major Disasters Charter. It
was possible to integrate the Cisco mobile access router into the UK-DMC
satellite as a result of the DMC satellites' adoption of existing commercial
networking standards, using IP over Frame Relay over standard High-Level Data
Link Control, or HDLC (ISO 13239) on standard serial interfaces. This approach
came from work onboard SSTL's earlier UoSAT-12 satellite.
|
1204.3284
|
Observer design for nonlinear triangular systems with unobservable
linearization
|
math.OC cs.SY
|
The paper deals with the observer design problem for a wide class of
triangular time-varying nonlinear systems, with unobservable linearization.
Sufficient conditions are derived for the existence of a Luenberger-type
observer, when it is a priori known that the initial state of the system
belongs to a given nonempty bounded subset of the state space. For the general
case, the state estimation is exhibited by means of a switching sequence of
time-varying dynamics.
|
1204.3337
|
Approximation of Points on Low-Dimensional Manifolds Via Random Linear
Projections
|
cs.IT cs.DS math.IT
|
This paper considers the approximate reconstruction of points, x \in R^D,
which are close to a given compact d-dimensional submanifold, M, of R^D using a
small number of linear measurements of x. In particular, it is shown that a
number of measurements of x which is independent of the extrinsic dimension D
suffices for highly accurate reconstruction of a given x with high probability.
Furthermore, it is also proven that all vectors, x, which are sufficiently
close to M can be reconstructed with uniform approximation guarantees when the
number of linear measurements of x depends logarithmically on D. Finally, the
proofs of these facts are constructive: A practical algorithm for
manifold-based signal recovery is presented in the process of proving the two
main results mentioned above.
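A hedged numerical sketch of the phenomenon (nearest-neighbor search over a sampled manifold, my own toy construction, not the authors' algorithm): a point near a circle embedded in R^100 is recovered from only 6 random linear measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
D, m = 100, 6                       # ambient dimension, number of measurements

# A circle (a d = 1 submanifold) embedded in R^D via a random orthonormal 2-frame.
Q, _ = np.linalg.qr(rng.standard_normal((D, 2)))
def manifold(theta):
    return Q @ np.array([np.cos(theta), np.sin(theta)])

Phi = rng.standard_normal((m, D)) / np.sqrt(m)   # random Gaussian measurements

theta_true = 1.234
x = manifold(theta_true) + 1e-3 * rng.standard_normal(D)  # close to the manifold
y = Phi @ x                                               # m << D observations

# Reconstruct by nearest neighbor over a dense sampling of the projected manifold.
grid = np.linspace(0.0, 2 * np.pi, 4000)
samples = np.stack([manifold(t) for t in grid])           # (4000, D)
residuals = np.linalg.norm(samples @ Phi.T - y, axis=1)
x_hat = samples[np.argmin(residuals)]

print(np.linalg.norm(x_hat - x))   # small, with only m = 6 of D = 100 dimensions
```

The point is that m here depends on the intrinsic dimension d of the manifold, not on the extrinsic dimension D, which is the regime the two main results characterize.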
|
1204.3341
|
Patterns of Social Influence in a Network of Situated Cognitive Agents
|
cs.SI cs.AI physics.soc-ph
|
This paper presents the results of computational experiments on the effects
of social influence on individual and systemic behavior of situated cognitive
agents in a product-consumer environment. Paired experiments were performed
with identical initial conditions to compare social agents with non-social
agents. Experiment results show that social agents are more productive in
consuming available products, both in terms of aggregate unit consumption and
aggregate utility. But this comes at a cost of individual average utility per
unit consumed. In effect, social interaction achieved higher productivity by
'lowering the standards' of individual consumers. While still at an early stage
of development, such an agent-based model laboratory is shown to be an
effective research tool to investigate rich collective behavior in the context
of demanding cognitive tasks.
|
1204.3342
|
What "Crowdsourcing" Obscures: Exposing the Dynamics of Connected Crowd
Work during Disaster
|
cs.SI cs.CY
|
The aim of this paper is to demonstrate that the current understanding of
crowdsourcing may not be broad enough to capture the diversity of crowd work
during disasters, or specific enough to highlight the unique dynamics of
information organizing by the crowd in that context. In making this argument,
this paper first unpacks the crowdsourcing term, examining its roots in open
source development and outsourcing business models, and tying it to related
concepts of human computation and collective intelligence. The paper then
attempts to characterize several examples of crowd work during disasters using
current definitions of crowdsourcing and existing models for human computation
and collective intelligence, exposing a need for future research towards a
framework for understanding crowd work.
|
1204.3343
|
Broadcast Search in Innovation Contests: Case for Hybrid Models
|
cs.SI cs.CY
|
Organizations use broadcast search to identify new avenues of innovation.
Research on innovation contests provides insights on why excellent ideas are
created in a broadcast search. However, there is little research on how
excellent ideas are selected. Drawing from the brainstorming literature we find
that the selection of excellent ideas needs further investigation. We propose
that a hybrid model may lead to selection of better ideas. The hybrid model is
a broadcast search approach that exploits the strengths of different actors and
procedures in idea generation and the selection phase.
|
1204.3348
|
Symmetry Breaking Constraints: Recent Results
|
cs.AI cs.CC
|
Symmetry is an important issue in many combinatorial problems. One way of
dealing with symmetry is to add constraints that eliminate symmetric solutions.
We survey recent results in this area, focusing especially on two common and
useful cases: symmetry breaking constraints for row and column symmetry, and
symmetry breaking constraints for eliminating value symmetry.
|
1204.3352
|
Collaborative Development in Wikipedia
|
cs.SI
|
Using 16,068 articles in Wikipedia's Medicine Wikiproject, we study the
relationship between collaboration and quality. We assess whether certain
collaborative patterns are associated with information quality in terms of
self-evaluated quality and article viewership. We find that the number of
contributors has a curvilinear relationship to information quality, more
contributors improving quality but only up to a certain point. The other
articles that an article's collaborators work on also influence the quality of
an information artifact, creating an interdependent network of artifacts and
contributors.
Finally, we see evidence of a recursive relationship between information
quality and contributor activity, but that this recursive relationship
attenuates over time.
|
1204.3353
|
Collective Cognitive Authority: Expertise Location via Social Labeling
|
cs.SI cs.HC
|
The problem of knowing who knows what is multi-faceted. Knowledge and
expertise lie on a spectrum and one's expertise in one topic area may have
little bearing on one's knowledge in a disparate topic area. In addition, we
continue to learn new things over time. Each of us sees but a sliver of our
acquaintances' and co-workers' areas of expertise. By making explicit and
visible many individual perceptions of cognitive authority, this work shows
that a group can know what its members know about in a relatively efficient and
inexpensive manner.
|
1204.3362
|
Event based classification of Web 2.0 text streams
|
cs.IR
|
Web 2.0 applications like Twitter or Facebook create a continuous stream of
information. This demands new ways of analysis in order to offer insight into
this stream right at the moment of the creation of the information, because
lots of this data is only relevant within a short period of time. To address
this problem real time search engines have recently received increased
attention. They take into account the continuous flow of information
differently than traditional web search by incorporating temporal and social
features, that describe the context of the information during its creation.
Standard approaches, where data is first stored and then processed from
persistent storage, suffer from latency. We want to address the fluent and
rapid nature of text streams by providing an event based approach that analyses
directly the stream of information. In a first step we want to define the
difference between real time search and traditional search to clarify the
demands in modern text filtering. In a second step we want to show how event
based features can be used to support the tasks of real time search engines.
Using the example of Twitter we present in this paper a way how to combine an
event based approach with text mining and information filtering concepts in
order to classify incoming information based on stream features. We calculate
stream-dependent features and feed them into a neural network in order to
classify the text streams. We show the discriminative capabilities of event
based features as the foundation for a real time search engine.
|
1204.3367
|
Crowdsourcing Gaze Data Collection
|
cs.SI cs.HC
|
Knowing where people look is a useful tool in many image and video
applications. However, traditional gaze tracking hardware is expensive and
requires local study participants, so acquiring gaze location data from a large
number of participants is very problematic. In this work we propose a
crowdsourced method for acquisition of gaze direction data from a virtually
unlimited number of participants, using a robust self-reporting mechanism (see
Figure 1). Our system collects temporally sparse but spatially dense
points-of-attention in any visual information. We apply our approach to an
existing video data set and demonstrate that we obtain results similar to
traditional gaze tracking. We also explore the parameter ranges of our method,
and collect gaze tracking data for a large set of YouTube videos.
|
1204.3374
|
Social Aspects of Virtual Teams
|
cs.SI cs.CY physics.soc-ph
|
There has been a transformation from individual work to team work in the last
few decades (Ilgen, 1999), and many organizations use teams for many activities
done by individuals in the past (Boyett & Conn, 1992; Katzenbach & Smith,
1993). In recent years, there has been a renewed interest in computer-mediated
groups because of the increases in globalization of business operations leading
to geographically dispersed executives and decision makers. However, what seems
to be lacking is some focus in terms of problem settings and corresponding
tools to support collaborative decision making. The research question of this
study deals with the dynamics of virtual teams' members. A model, suggesting
that team dynamics can increase the teams' output, is presented, and a
methodology to examine the model is illustrated. An experiment was performed,
in which subjects, who were grouped into teams, had to share information in
order to complete a task. The findings indicate that the social aspect of the
virtual team's discussion is more negative than the social aspect of the
face-to-face team's discussion, and that the virtual team's output is inferior
to the face-to-face team's output. The virtual team is a common way of working
nowadays, and with the growing use of Internet applications and firms'
globalization it will expand in the future. Thus, the importance of the
theoretical and practical implementation of the research will be discussed.
|
1204.3375
|
Galaxysearch - Discovering the Knowledge of Many by Using Wikipedia as a
Meta-Searchindex
|
cs.SI
|
We propose a dynamic map of knowledge generated from Wikipedia pages and the
Web URLs contained therein. GalaxySearch provides answers to the questions we
don't know how to ask, by constructing a semantic network of the most relevant
pages in Wikipedia related to a search term. This search graph is constructed
based on the Wikipedia bidirectional link structure, the most recent edits on
the pages, the importance of the page, and the article quality; search results
are then ranked by the centrality of their network position. GalaxySearch
provides the results in three related ways: (1) WikiSearch - identifying the
most prominent Wikipedia pages and Weblinks for a chosen topic, (2) WikiMap -
creating a visual temporal map of the changes in the semantic network generated
by the search results over the lifetime of the returned Wikipedia articles, and
(3) WikiPulse - finding the most recent and most relevant changes and updates
about a topic.
|
1204.3379
|
A New Low-Complexity Decodable Rate-1 Full-Diversity 4 x 4 STBC with
Nonvanishing Determinants
|
cs.IT math.IT
|
Space-time coding techniques have become commonplace in wireless
communication standards as they provide an effective way to mitigate the fading
phenomena inherent in wireless channels. However, the use of Space-Time Block
Codes (STBCs) increases significantly the optimal detection complexity at the
receiver unless the low complexity decodability property is taken into
consideration in the STBC design. In this letter we propose a new
low-complexity decodable rate-1 full-diversity 4 x 4 STBC. We provide an
analytical proof that the proposed code has the Non-Vanishing-Determinant (NVD)
property, a property that can be exploited through the use of adaptive
modulation which changes the transmission rate according to the wireless
channel quality. We compare the proposed code to existing low-complexity
decodable rate-1 full-diversity 4 x 4 STBCs in terms of performance over
quasi-static Rayleigh fading channels, detection complexity and Peak-to-Average
Power Ratio (PAPR). Our code is found to provide the best performance and the
smallest PAPR which is that of the used QAM constellation at the expense of a
slight increase in detection complexity w.r.t. certain previous codes but this
will only penalize the proposed code for high-order QAM constellations.
|
1204.3384
|
Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive
Fountain Codes
|
cs.IT math.IT
|
This paper proposes a novel scheme, based on progressive fountain codes, for
broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive
resolution levels of images/video are unequally protected when transmitted
using the proposed progressive fountain codes. With progressive fountain codes
applied in the broadcast scheme, the resolutions of images (JPEG 2000) or
videos (MJPEG 2000) received by different users adapt automatically to their
channel qualities, i.e. users with good channel quality can receive
high-resolution images/video while users with poor channel quality may receive
low-resolution images/video. Finally, the
performance of the proposed scheme is evaluated with the MJPEG 2000 broadcast
prototype.
|
1204.3388
|
A Novel Construction of Multi-group Decodable Space-Time Block Codes
|
cs.IT math.IT
|
Complex Orthogonal Design (COD) codes are known to have the lowest detection
complexity among Space-Time Block Codes (STBCs). However, the rate of square
COD codes decreases exponentially with the number of transmit antennas. The
Quasi-Orthogonal Design (QOD) codes emerged to provide a compromise between
rate and complexity as they offer higher rates compared to COD codes at the
expense of an increase of decoding complexity through partially relaxing the
orthogonality conditions. The QOD codes were then generalized with the so
called g-symbol and g-group decodable STBCs where the number of orthogonal
groups of symbols is no longer restricted to two as in the QOD case. However,
the adopted approach for the construction of such codes is based on sufficient
but not necessary conditions which may limit the achievable rates for any
number of orthogonal groups. In this paper, we limit ourselves to the case of
Unitary Weight (UW)-g-group decodable STBCs for 2^a transmit antennas where the
weight matrices are required to be single thread matrices with non-zero entries
in {1,-1,j,-j} and address the problem of finding the highest achievable rate
for any number of orthogonal groups. This special type of weight matrices
guarantees full symbol-wise diversity and subsumes a wide range of existing
codes in the literature. We show that in this case an exhaustive search can be
applied to find the maximum achievable rates for UW-g-group decodable STBCs
with g>1. For this purpose, we extend our previously proposed approach for
constructing UW-2-group decodable STBCs based on necessary and sufficient
conditions to the case of UW-g-group decodable STBCs in a recursive manner.
|
1204.3391
|
Rateless Codes with Progressive Recovery for Layered Multimedia Delivery
|
cs.IT cs.MM math.IT
|
This paper proposes a novel approach, based on unequal error protection, to
enhance rateless codes with progressive recovery for layered multimedia
delivery. With a parallel encoding structure, the proposed Progressive Rateless
codes (PRC) assign unequal redundancy to each layer in accordance with its
importance. Each output symbol contains information from all layers, and thus
the stream layers can be recovered progressively at the expected received
ratios of output symbols. Furthermore, the dependency between layers is
naturally considered. The performance of the PRC is evaluated and compared with
some related UEP approaches. Results show that our PRC approach provides better
recovery performance with lower overhead both theoretically and numerically.
|
1204.3401
|
Collective Intelligence in Humans: A Literature Review
|
cs.CY cs.SI
|
This literature review focuses on collective intelligence in humans. A
keyword search was performed on the Web of Knowledge and selected papers were
reviewed in order to reveal themes relevant to collective intelligence. Three
levels of abstraction were identified in discussion about the phenomenon: the
micro-level, the macro-level and the level of emergence. Recurring themes in
the literature were categorized under the above-mentioned framework and
directions for future research were identified.
|
1204.3432
|
Converging to the Chase - a Tool for Finite Controllability
|
cs.DB
|
We solve a problem, stated in [CGP10], showing that Sticky Datalog, defined
in the cited paper as an element of the Datalog\pm project, has the finite
controllability property. In order to do that, we develop a technique, which we
believe can have further applications, of approximating Chase(D, T), for a
database instance D and some sets of tuple generating dependencies T, by an
infinite sequence of finite structures, all of them being models of T.
|
1204.3436
|
Explaining Adaptation in Genetic Algorithms With Uniform Crossover: The
Hyperclimbing Hypothesis
|
cs.NE cs.AI
|
The hyperclimbing hypothesis is a hypothetical explanation for adaptation in
genetic algorithms with uniform crossover (UGAs). Hyperclimbing is an
intuitive, general-purpose, non-local search heuristic applicable to discrete
product spaces with rugged or stochastic cost functions. The strength of this
heuristic lies in its insusceptibility to local optima when the cost function is
deterministic, and its tolerance for noise when the cost function is
stochastic. Hyperclimbing works by decimating a search space, i.e. by
iteratively fixing the values of small numbers of variables. The hyperclimbing
hypothesis holds that UGAs work by implementing efficient hyperclimbing. Proof
of concept for this hypothesis comes from the use of a novel analytic technique
involving the exploitation of algorithmic symmetry. We have also obtained
experimental results that show that a simple tweak inspired by the
hyperclimbing hypothesis dramatically improves the performance of a UGA on
large, random instances of MAX-3SAT and the Sherrington Kirkpatrick Spin
Glasses problem.
|
1204.3453
|
There is No Deadline - Time Evolution of Wikipedia Discussions
|
cs.CY cs.SI physics.soc-ph
|
Wikipedia articles are by definition never finished: at any moment their
content can be edited, or discussed in the associated talk pages. In this study
we analyse the evolution of these discussions to unveil patterns of collective
participation along the temporal dimension, and to shed light on the process of
content creation on different topics. At a micro-scale, we investigate peaks in
the discussion activity and we observe a non-trivial relationship with edit
activity. At a larger scale, we introduce a measure to account for how fast
discussions grow in complexity, and we find speeds that span three orders of
magnitude for different articles. Our analysis should help the community in
tasks such as early detection of controversies and assessment of discussion
maturity.
|
1204.3457
|
The Effects of Prediction Market Design and Price Elasticity on Trading
Performance of Users: An Experimental Analysis
|
cs.SI q-fin.GN
|
We employ a 2x3 factorial experiment to study two central factors in the
design of prediction markets (PMs) for idea evaluation: the overall design of
the PM, and the elasticity of market prices set by a market maker. The results
show that 'multi-market designs', in which each contract is traded on a
separate PM, lead to significantly higher trading performance than
'single-markets' that handle all contracts on one PM. Price elasticity has no
direct effect on
trading performance, but a significant interaction effect with market design
implies that the performance difference between the market designs is highest
in settings of moderate price elasticity. We contribute to the emerging
research stream of PM design through an unprecedented experiment which compares
current market designs.
|
1204.3458
|
The logic of quantum mechanics - Take II
|
quant-ph cs.CL cs.LO math.CT math.LO
|
We put forward a new take on the logic of quantum mechanics, following
Schroedinger's point of view that it is composition which makes quantum theory
what it is, rather than its particular propositional structure due to the
existence of superpositions, as proposed by Birkhoff and von Neumann. This
gives rise to an intrinsically quantitative kind of logic, which truly deserves
the name `logic' in that it also models meaning in natural language, the latter
being the origin of logic, that it supports automation, the most prominent
practical use of logic, and that it supports probabilistic inference.
|
1204.3463
|
Effects of Social Influence on the Wisdom of Crowds
|
cs.SI physics.soc-ph
|
Wisdom of crowds refers to the phenomenon that the aggregate prediction or
forecast of a group of individuals can be surprisingly more accurate than most
individuals in the group, and sometimes more accurate than any of the
individuals comprising it. This article models the impact of social influence
on the wisdom
of crowds. We build a minimalistic representation of individuals as Brownian
particles coupled by means of social influence. We demonstrate that the model
can reproduce results of a previous empirical study. This allows us to draw
more fundamental conclusions about the role of social influence: In particular,
we show that the question of whether social influence has a positive or
negative net effect on the wisdom of crowds is ill-defined. Instead, it is the
starting configuration of the population, in terms of its diversity and
accuracy, that directly determines how beneficial social influence actually is.
The article further examines the scenarios under which social influence
promotes or impairs the wisdom of crowds.
|
1204.3471
|
Cloudpress 2.0: A MapReduce Approach for News Retrieval on the Cloud
|
cs.DC cs.IR
|
In this era of the Internet, the amount of news articles added every minute
of everyday is humongous. As a result of this explosive amount of news
articles, news retrieval systems are required to process the news articles
frequently and intensively. The news retrieval systems in use today are not
capable of coping with these data-intensive computations. Cloudpress
2.0 presented here, is designed and implemented to be scalable, robust and
fault tolerant. It is designed in such a way that, all the processes involved
in news retrieval such as fetching, pre-processing, indexing, storing and
summarizing, exploit MapReduce paradigm and use the power of the Cloud
computing. It uses novel approaches for parallel processing, for storing the
news articles in a distributed database, and for visualizing them in 3D. It
uses Lucene-based indexing for efficient and fast retrieval. It
also includes a novel query expansion feature for searching the news articles.
Cloudpress 2.0 also allows on-the-fly, extractive summarization of news
articles based on the input query.
|