| id | title | categories | abstract |
|---|---|---|---|
1403.7455 | Hybrid Approach to English-Hindi Name Entity Transliteration | cs.CL | Machine translation (MT) research in Indian languages is still in its
infancy. Not much work has been done on proper transliteration of named
entities in this domain. In this paper we address this issue. We have used the
English-Hindi language pair for our experiments and have adopted a hybrid
approach. We first process English words using a rule-based approach that
extracts individual phonemes from the words, and then apply a statistical
approach that converts each English phoneme into its equivalent Hindi phoneme
and, in turn, the corresponding Hindi word. With this approach we attain
83.40% accuracy.
|
1403.7465 | Shiva: A Framework for Graph Based Ontology Matching | cs.AI | Corporations have long been looking for knowledge sources that can provide a
structured description of data and can focus on meaning and shared
understanding; structures that can accommodate open-world assumptions and are
flexible enough to incorporate and recognize more than one name for an entity;
a source whose major purpose is to facilitate human communication and
interoperability. Clearly, databases fail to provide these features, and
ontologies have emerged as an alternative, but corporations working in the
same domain tend to build different ontologies. The problem arises when they
want to share their data and knowledge, so we need tools to merge ontologies
into one. This task is termed ontology matching. It is an emerging area, and
we still have a long way to go toward an ideal matcher that produces good
results. In this paper we present a framework for matching ontologies using
graphs.
|
1403.7471 | Approximate Decentralized Bayesian Inference | cs.LG | This paper presents an approximate method for performing Bayesian inference
in models with conditional independence over a decentralized network of
learning agents. The method first employs variational inference on each
individual learning agent to generate a local approximate posterior; the agents
then transmit their local posteriors to the other agents in the network; and
finally each agent combines its set of received local posteriors. The key insight in
this work is that, for many Bayesian models, approximate inference schemes
destroy symmetry and dependencies in the model that are crucial to the correct
application of Bayes' rule when combining the local posteriors. The proposed
method addresses this issue by including an additional optimization step in the
combination procedure that accounts for these broken dependencies. Experiments
on synthetic and real data demonstrate that the decentralized method provides
advantages in computational performance and predictive test likelihood over
previous batch and distributed methods.
|
1403.7481 | Indexing large genome collections on a PC | cs.CE q-bio.GN q-bio.QM | Motivation: The availability of thousands of individual genomes of one species
should boost rapid progress in personalized medicine or understanding of the
interaction between genotype and phenotype, to name a few applications. A key
operation useful in such analyses is aligning sequencing reads against a
collection of genomes, which is costly with the use of existing algorithms due
to their large memory requirements.
Results: We present MuGI, the Multiple Genome Index, which reports all
occurrences of a given pattern, in both exact and approximate matching models,
against a collection of thousand(s) of genomes. Its unique feature is a small
index size, fitting in a standard computer with 16--32\,GB, or even 8\,GB, of
RAM, for the 1000GP collection of 1092 diploid human genomes. The solution is
also fast. For example, exact matching queries are handled in an average time
of 39\,$\mu$s, and queries with up to 3 mismatches in 373\,$\mu$s, on the test
PC with an index size of 13.4\,GB. For a smaller index, occupying 7.4\,GB in
memory, the respective times grow to 76\,$\mu$s and 917\,$\mu$s.
Availability: Software and Supplementary material:
\url{http://sun.aei.polsl.pl/mugi}.
|
1403.7532 | Opportunistic Spectrum Sharing using Dumb Basis Patterns: The
Line-of-Sight Interference Scenario | cs.IT math.IT | We investigate a spectrum-sharing system with non-severely faded mutual
interference links, where both the secondary-to-primary and
primary-to-secondary channels have a Line-of-Sight (LoS) component. Based on a
Rician model for the LoS channels, we show, analytically and numerically, that
LoS interference hinders the achievable secondary-user capacity. This is caused
by the poor dynamic range of the interference channels' fluctuations when a
dominant LoS component exists. In order to improve the capacity of such a system,
we propose the usage of an Electronically Steerable Parasitic Array Radiator
(ESPAR) antenna at the secondary terminals. An ESPAR antenna requires a single
RF chain and has a reconfigurable radiation pattern that is controlled by
assigning arbitrary weights to M orthonormal basis radiation patterns. By
viewing these orthonormal patterns as multiple virtual dumb antennas, we
randomly vary their weights over time creating artificial channel fluctuations
that can perfectly eliminate the undesired impact of LoS interference. Because
the proposed scheme uses a single RF chain, it is well suited for compact and
low-cost mobile terminals.
|
1403.7543 | A sparse Kaczmarz solver and a linearized Bregman method for online
compressed sensing | math.OC cs.CV cs.IT math.IT math.NA | An algorithmic framework to compute sparse or minimal-TV solutions of linear
systems is proposed. The framework includes both the Kaczmarz method and the
linearized Bregman method as special cases and also several new methods such as
a sparse Kaczmarz solver. The algorithmic framework has a variety of
applications and is especially useful for problems in which the linear
measurements are slow and expensive to obtain. We present examples for online
compressed sensing, TV tomographic reconstruction and radio interferometry.
|
1403.7550 | DimmWitted: A Study of Main-Memory Statistical Analytics | cs.DB cs.LG math.OC stat.ML | We perform the first study of the tradeoff space of access methods and
replication to support statistical analytics using first-order methods executed
in the main memory of a Non-Uniform Memory Access (NUMA) machine. Statistical
analytics systems differ from conventional SQL-analytics in the amount and
types of memory incoherence they can tolerate. Our goal is to understand
tradeoffs in accessing the data in row- or column-order and at what granularity
one should share the model and data for a statistical task. We study this new
tradeoff space, and discover there are tradeoffs between hardware and
statistical efficiency. We argue that our tradeoff study may provide valuable
information for designers of analytics engines: for each system we consider,
our prototype engine can run at least one popular task at least 100x faster. We
conduct our study across five architectures using popular models including
SVMs, logistic regression, Gibbs sampling, and neural networks.
|
1403.7588 | Scalable Robust Matrix Recovery: Frank-Wolfe Meets Proximal Methods | math.OC cs.CV cs.NA stat.ML | Recovering matrices from compressive and grossly corrupted observations is a
fundamental problem in robust statistics, with rich applications in computer
vision and machine learning. In theory, under certain conditions, this problem
can be solved in polynomial time via a natural convex relaxation, known as
Compressive Principal Component Pursuit (CPCP). However, all existing provable
algorithms for CPCP suffer from superlinear per-iteration cost, which severely
limits their applicability to large scale problems. In this paper, we propose
provable, scalable and efficient methods to solve CPCP with (essentially)
linear per-iteration cost. Our method combines classical ideas from Frank-Wolfe
and proximal methods. In each iteration, we mainly exploit Frank-Wolfe to
update the low-rank component with rank-one SVD and exploit the proximal step
for the sparse term. Convergence results and implementation details are also
discussed. We demonstrate the scalability of the proposed approach with
promising numerical experiments on visual data.
|
1403.7591 | Building A Large Concept Bank for Representing Events in Video | cs.MM cs.CV cs.IR | Concept-based video representation has proven to be effective in complex
event detection. However, existing methods either manually design concepts or
directly adopt concept libraries not specifically designed for events. In this
paper, we propose to build Concept Bank, the largest concept library consisting
of 4,876 concepts specifically designed to cover 631 real-world events. To
construct the Concept Bank, we first gather a comprehensive event collection
from WikiHow, a collaborative writing project that aims to build the world's
largest manual for any possible How-To event. For each event, we then search
Flickr and discover relevant concepts from the tags of the returned images. We
train a Multiple Kernel Linear SVM for each discovered concept as a concept
detector in Concept Bank. We organize the concepts into a five-layer tree
structure, in which the higher-level nodes correspond to the event categories
while the leaf nodes are the event-specific concepts discovered for each event.
Based on this tree ontology, we develop a semantic matching method to select
relevant concepts for each textual event query, and then apply the
corresponding concept detectors to generate concept-based video
representations. We use TRECVID Multimedia Event Detection 2013 and Columbia
Consumer Video open source event definitions and videos as our test sets and
show very promising results on two video event detection tasks: event modeling
over concept space and zero-shot event retrieval. To the best of our knowledge,
this is the largest concept library covering the largest number of real-world
events.
|
1403.7595 | Information Filtering on Coupled Social Networks | cs.SI physics.soc-ph | In this paper, based on coupled social networks (CSN), we propose a hybrid
algorithm to nonlinearly integrate both the social and behavioral information
of online users. The filtering algorithm, built on the coupled social networks,
considers the effects of both social influence and personalized preference.
Experimental results on two real datasets, \emph{Epinions} and
\emph{Friendfeed}, show that the hybrid pattern not only provides more accurate
recommendations, but also enlarges the recommendation coverage under a global
metric. Further empirical analyses demonstrate that mutual reinforcement and
the rich-club phenomenon can also be found in coupled social networks, where
the same individuals occupy the core positions of the online system. This work
may shed some light on the in-depth understanding of the structure and
function of coupled social networks.
|
1403.7598 | A Non-cooperative Differential Game Model for Frequency Reuse based
Channel Allocation in Satellite Networks | cs.IT cs.NI math.IT | In this paper, the channel resource allocation problem for LEO mobile
satellite systems is investigated, and a new dynamic channel resource
allocation scheme based on differential games is proposed. The optimal channel
resources allocated to each satellite beam are formulated as a Nash
equilibrium. It is proved that the optimal channel resource allocation can be
achieved and that the differential-game-based scheme is applicable and
acceptable. Numerical results show that system performance can be improved
under the proposed scheme.
|
1403.7654 | Where Businesses Thrive: Predicting the Impact of the Olympic Games on
Local Retailers through Location-based Services Data | cs.SI physics.soc-ph | The Olympic Games are an important sporting event with notable consequences
for the general economic landscape of the host city. Traditional economic
assessments focus on the aggregated impact of the event on the national income,
but fail to provide micro-scale insights on why local businesses will benefit
from the increased activity during the Games. In this paper we provide a novel
approach to modeling the impact of the Olympic Games on local retailers by
analyzing a dataset mined from a large location-based social service,
Foursquare. We hypothesize that the spatial positioning of businesses as well
as the mobility trends of visitors are primary indicators of whether retailers
will see their popularity rise during the event. To confirm this we formulate a
retail winners prediction task in the context of which we evaluate a set of
geographic and mobility metrics. We find that the proximity to stadiums, the
diversity of activity in the neighborhood, the nearby area sociability, as well
as the probability of customer flows from and to event places such as stadiums
and parks are all vital factors. Through supervised learning techniques we
demonstrate that the success of businesses hinges on a combination of both
geographic and mobility factors. Our results suggest that location-based social
networks, where crowdsourced information about the dynamic interaction of users
with urban spaces becomes publicly available, present an alternative medium to
assess the economic impact of large scale events in a city.
|
1403.7657 | The Call of the Crowd: Event Participation in Location-based Social
Services | cs.SI physics.soc-ph | Understanding the social and behavioral forces behind event participation is
not only interesting from the viewpoint of social science, but also has
important applications in the design of personalized event recommender systems.
This paper takes advantage of data from a widely used location-based social
network, Foursquare, to analyze event patterns in three metropolitan cities. We
put forward several hypotheses on the motivating factors of user participation
and confirm that social aspects play a major role in determining the likelihood
that a user will participate in an event. While an explicit social filtering signal
accounting for whether friends are attending dominates the factors, the
popularity of an event proves to also be a strong attractor. Further, we
capture an implicit social signal by performing random walks in a high
dimensional graph that encodes the place type preferences of friends and that
proves especially suited to identify relevant niche events for users. Our
findings on the extent to which the various temporal, spatial and social
aspects underlie users' event preferences lead us to further hypothesize that a
combination of factors better models users' event interests. We verify this
through a supervised learning framework. We show that for one in three users in
London and one in five users in New York and Chicago it identifies the exact
event the user would attend among the pool of suggestions.
|
1403.7663 | Dynamical Systems on Networks: A Tutorial | nlin.AO cond-mat.dis-nn cond-mat.stat-mech cs.SI physics.soc-ph | We give a tutorial for the study of dynamical systems on networks. We focus
especially on "simple" situations that are tractable analytically, because they
can be very insightful and provide useful springboards for the study of more
complicated scenarios. We briefly motivate why examining dynamical systems on
networks is interesting and important, and we then give several fascinating
examples and discuss some theoretical results. We also briefly discuss
dynamical systems on dynamical (i.e., time-dependent) networks, overview
software implementations, and give an outlook on the field.
|
1403.7679 | Coded Distributed Diversity: A Novel Distributed Reception Technique for
Wireless Communication Systems | cs.IT math.IT | In this paper, we consider a distributed reception scenario where a
transmitter broadcasts a signal to multiple geographically separated receive
nodes over fading channels, and each node forwards a few bits representing a
processed version of the received signal to a fusion center. The fusion center
then tries to decode the transmitted signal based on the forwarded information
from the receive nodes and possible channel state information. We show that
there is a strong connection between the problem of minimizing a symbol error
probability at the fusion center in distributed reception and channel coding in
coding theory. This connection allows us to design a unified framework for
coded distributed diversity reception. We focus on linear block codes, such as
simplex codes or first-order Reed-Muller codes, that achieve the Griesmer bound
with equality to maximize the diversity gain. Due to its simple structure, no
complex offline optimization process is needed to design the coding structure
at the receive nodes for the proposed coded diversity technique. The proposed
technique can support a wide array of distributed reception scenarios, i.e.,
arbitrary $M$-ary symbol transmission at the transmitter and received signal
processing with multiple bits at the receive nodes. Numerical studies show that
the proposed coded diversity technique can achieve practical symbol error rates
even with moderate signal-to-noise ratios and numbers of receive nodes.
|
1403.7682 | Downlink Analysis for a Heterogeneous Cellular Network | cs.IT math.IT | In this paper, a comprehensive study of the downlink performance in a
heterogeneous cellular network (or hetnet) is conducted. A general hetnet model
is considered, consisting of an arbitrary number of open-access and
closed-access tiers of base stations (BSs) arranged according to independent
homogeneous Poisson point processes. The BSs of each tier have a constant
transmission power, random fading coefficients with an arbitrary distribution,
and an arbitrary path-loss exponent in the power-law path-loss model. For such a
system, analytical characterizations for the coverage probability and average
rate at an arbitrary mobile-station (MS), and average per-tier load are derived
for both the max-SINR connectivity and nearest-BS connectivity models. Using
stochastic ordering, interesting properties and simplifications for the hetnet
downlink performance are derived by relating these two connectivity models to
the maximum instantaneous received power (MIRP) connectivity model and the
maximum biased received power (MBRP) connectivity models, respectively,
providing good insights about the hetnets and the downlink performance in these
complex networks. Furthermore, the results also demonstrate the effectiveness
and analytical tractability of the stochastic geometric approach to study the
hetnet performance.
|
1403.7683 | Approximate Matrix Multiplication with Application to Linear Embeddings | math.ST cs.IT math.IT stat.ML stat.TH | In this paper, we study the problem of approximately computing the product of
two real matrices. In particular, we analyze a dimensionality-reduction-based
approximation algorithm due to Sarlos [1], introducing the notion of nuclear
rank as the ratio of the nuclear norm over the spectral norm. The presented
bound has improved dependence with respect to the approximation error (as
compared to previous approaches), whereas the subspace -- on which we project
the input matrices -- has dimensions proportional to the maximum of their
nuclear rank and it is independent of the input dimensions. In addition, we
provide an application of this result to linear low-dimensional embeddings.
Namely, we show that any Euclidean point-set with bounded nuclear rank is
amenable to projection onto a number of dimensions that is independent of the
input dimensionality, while achieving additive error guarantees.
|
1403.7691 | Mobile Conductance in Sparse Networks and Mobility-Connectivity Tradeoff | cs.NI cs.SI | In this paper, our recently proposed mobile-conductance-based analytical
framework is extended to sparse settings, thus offering a unified tool for
analyzing information spreading in mobile networks. A penalty factor is
identified for information spreading in sparse networks as compared to the
connected scenario, which is then intuitively interpreted and verified by
simulations. With the analytical results obtained, the mobility-connectivity
tradeoff is quantitatively analyzed to determine how much mobility may be
exploited to make up for network connectivity deficiency.
|
1403.7697 | MIMO Beamforming in Millimeter-Wave Directional Wi-Fi | cs.IT math.IT | Beamforming is indispensable in the operation of 60-GHz millimeter-wave
directional multi-gigabit Wi-Fi. The simple power method and its extensions
enable the transmitting and receiving antenna arrays to form a beam for a
single spatial stream. To further improve the spectral efficiency in future
60-GHz directional Wi-Fi, the alternating least squares (ALS) algorithm can
form multiple beams between the transmitter and receiver for
multiple-input-multiple-output (MIMO) operation. For both the shared and the
split MIMO architectures, the ALS beamforming algorithm can operate in both
frequency-flat and frequency-selective channels. In the split architecture,
MIMO beamforming approximately maximizes the capacity of the beam-formed MIMO
channel.
|
1403.7714 | Asymptotically-Optimal Motion Planning using Lower Bounds on Cost | cs.RO | Many path-finding algorithms on graphs such as A* are sped up by using a
heuristic function that gives lower bounds on the cost to reach the goal.
Aiming to apply similar techniques to speed up sampling-based motion-planning
algorithms, we use effective lower bounds on the cost between configurations to
tightly estimate the cost-to-go. We then use these estimates in an anytime
asymptotically-optimal algorithm which we call Motion Planning using Lower
Bounds (MPLB). MPLB is based on the Fast Marching Trees (FMT*) algorithm
recently presented by Janson and Pavone. An advantage of our approach is that
in many cases (especially as the number of samples grows) the weight of
collision detection in the computation is almost negligible with respect to
nearest-neighbor calls. We prove that MPLB performs no more collision-detection
calls than an anytime version of FMT*. Additionally, we demonstrate in
simulations that for certain scenarios, the algorithmic tools presented here
enable efficiently producing low-cost paths while spending only a small
fraction of the running time on collision detection.
|
1403.7720 | Irregular Fractional Repetition Code Optimization for Heterogeneous
Cloud Storage | cs.IT math.IT | This paper presents a flexible irregular model for heterogeneous cloud
storage systems and investigates how the cost of repairing failed nodes can be
minimized. The fractional repetition code, originally designed for minimizing
repair bandwidth for homogeneous storage systems, is generalized to the
irregular fractional repetition code, which is adaptable to heterogeneous
environments. The code structure and the associated storage allocation can be
obtained by solving an integer linear programming problem. For moderate-sized
networks, a heuristic algorithm is proposed and shown to be near-optimal by
computer simulations.
|
1403.7726 | Relevant Feature Selection Model Using Data Mining for Intrusion
Detection System | cs.CR cs.LG | Network intrusions have become a significant threat in recent years as a
result of the increased reliance on computer networks for critical systems.
Intrusion detection systems (IDS) have been widely deployed as a defense
measure for computer networks. Features extracted from network traffic can be
used as signs to detect anomalies. However, with the huge amount of network
traffic, the collected data contain irrelevant and redundant features that
reduce the detection rate of the IDS, consume a large amount of system
resources, and slow down the training and testing process of the IDS. In this paper, a new
feature selection model is proposed; this model can effectively select the most
relevant features for intrusion detection. Our goal is to build a lightweight
intrusion detection system by using a reduced features set. Deleting irrelevant
and redundant features helps to build a faster training and testing process, to
have less resource consumption as well as to maintain high detection rates. The
effectiveness and the feasibility of our feature selection model were verified
by several experiments on the KDD intrusion detection dataset. The experimental
results show that our model not only yields high detection rates but also
speeds up the detection process.
|
1403.7729 | Multi-Resource Parallel Query Scheduling and Optimization | cs.DB | Scheduling query execution plans is a particularly complex problem in
shared-nothing parallel systems, where each site consists of a collection of
local time-shared (e.g., CPU(s) or disk(s)) and space-shared (e.g., memory)
resources and communicates with remote sites by message-passing. Earlier work
on parallel query scheduling employs either (a) one-dimensional models of
parallel task scheduling, effectively ignoring the potential benefits of
resource sharing, or (b) models of globally accessible resource units, which
are appropriate only for shared-memory architectures, since they cannot capture
the affinity of system resources to sites. In this paper, we develop a general
approach capturing the full complexity of scheduling distributed,
multi-dimensional resource units for all forms of parallelism within and across
queries and operators. We present a level-based list scheduling heuristic
algorithm for independent query tasks (i.e., physical operator pipelines) that
is provably near-optimal for given degrees of partitioned parallelism (with a
worst-case performance ratio that depends on the number of time-shared and
space-shared resources per site and the granularity of the clones). We also
propose extensions to handle blocking constraints in logical operator (e.g.,
hash-join) pipelines and bushy query plans as well as on-line task arrivals
(e.g., in a dynamic or multi-query execution environment). Experiments with our
scheduling algorithms implemented on top of a detailed simulation model verify
their effectiveness compared to existing approaches in a realistic setting.
Based on our analytical and experimental results, we revisit the open problem
of designing efficient cost models for parallel query optimization and propose
a solution that captures all the important parameters of parallel execution.
|
1403.7735 | Optimal Cooperative Cognitive Relaying and Spectrum Access for an Energy
Harvesting Cognitive Radio: Reinforcement Learning Approach | cs.NI cs.IT cs.LG math.IT | In this paper, we consider a cognitive setting in the context of
cooperative communications, where the cognitive radio (CR) user is assumed to
be a self-organized relay for the network. Both the CR user and the primary
user (PU) are assumed to be energy harvesters. The CR user cooperatively
relays some of the undelivered packets of the PU. Specifically, the CR user
stores a fraction of the undelivered primary packets in a relaying queue (buffer). It
manages the flow of the undelivered primary packets to its relaying queue using
the appropriate actions over time slots. Moreover, it has the decision of
choosing the used queue for channel accessing at idle time slots (slots where
the PU's queue is empty). It is assumed that one data packet transmission
dissipates one energy packet. The optimal policy changes according to the
primary and CR users arrival rates to the data and energy queues as well as the
channels connectivity. The CR user saves energy for the PU by taking the
responsibility of relaying the undelivered primary packets. It optimally
organizes its own energy packets to maximize its payoff as time progresses.
|
1403.7737 | Sharpened Error Bounds for Random Sampling Based $\ell_2$ Regression | cs.LG cs.NA stat.ML | Given a data matrix $X \in R^{n\times d}$ and a response vector $y \in
R^{n}$, with $n>d$, it costs $O(n d^2)$ time and $O(n d)$ space to solve the
least squares regression (LSR) problem. When $n$ and $d$ are both large,
exactly solving the LSR problem is very expensive. When $n \gg d$, one feasible
approach to speeding up LSR is to randomly embed $y$ and all columns of $X$
into a smaller subspace $R^c$; the induced LSR problem has the same number of
columns but far fewer rows, and it can be solved in $O(c d^2)$ time
and $O(c d)$ space.
We discuss in this paper two random-sampling-based methods for solving LSR
more efficiently. Previous work showed that leverage-score-based sampling for
LSR achieves $1+\epsilon$ accuracy when $c \geq O(d \epsilon^{-2} \log
d)$. In this paper we sharpen this error bound, showing that $c = O(d \log d +
d \epsilon^{-1})$ is enough for achieving $1+\epsilon$ accuracy. We also show
that when $c \geq O(\mu d \epsilon^{-2} \log d)$, the uniform sampling based
LSR attains a $2+\epsilon$ bound with positive probability.
|
1403.7746 | Multi-label Ferns for Efficient Recognition of Musical Instruments in
Recordings | cs.LG cs.SD | In this paper we introduce multi-label ferns, and apply this technique for
automatic classification of musical instruments in audio recordings. We compare
the performance of our proposed method to a set of binary random ferns, using
jazz recordings as input data. Our main results are much faster classification
and a higher F-score. We also achieve a substantial reduction in model size.
|
1403.7752 | Auto-encoders: reconstruction versus compression | cs.NE cs.IT cs.LG math.IT | We discuss the similarities and differences between training an auto-encoder
to minimize the reconstruction error, and training the same auto-encoder to
compress the data via a generative model. Minimizing a codelength for the data
using an auto-encoder is equivalent to minimizing the reconstruction error plus
some correcting terms which have an interpretation as either a denoising or
contractive property of the decoding function. These terms are related but not
identical to those used in denoising or contractive auto-encoders [Vincent et
al. 2010, Rifai et al. 2011]. In particular, the codelength viewpoint fully
determines an optimal noise level for the denoising criterion.
|
1403.7755 | On the Construction of Optimal Asymmetric Quantum Codes | cs.IT math.IT | Constacyclic codes are important classes of linear codes that have been
applied to the construction of quantum codes. Six new families of asymmetric
quantum codes derived from constacyclic codes are constructed in this paper.
Moreover, the constructed asymmetric quantum codes are optimal and different
from the codes available in the literature.
|
1403.7766 | Enhancing Automated Decision Support across Medical and Oral Health
Domains with Semantic Web Technologies | cs.AI cs.IR | Research has shown that the general health and oral health of an individual
are closely related. Accordingly, current practice of isolating the information
base of medical and oral health domains can be dangerous and detrimental to the
health of the individual. However, technical issues such as heterogeneous data
collection and storage formats, limited sharing of patient information and lack
of decision support over the shared information are the principal reasons for
the current state of affairs. To address these issues, the following research
investigates the development and application of a cross-domain ontology and
rules to build an evidence-based and reusable knowledge base consisting of the
inter-dependent conditions from the two domains. Through example implementation
of the knowledge base in Protege, we demonstrate the effectiveness of our
approach in reasoning over and providing decision support for cross-domain
patient information.
|
1403.7772 | Twitter in Academic Conferences: Usage, Networking and Participation
over Time | cs.SI cs.CY physics.soc-ph | Twitter is often referred to as a backchannel for conferences. While the main
conference takes place in a physical setting, attendees and virtual attendees
socialize, introduce new ideas or broadcast information by microblogging on
Twitter. In this paper we analyze the scholars' Twitter use in 16 Computer
Science conferences over a timespan of five years. Our primary finding is that
over the years there are increasing differences with respect to conversation
use and information use in Twitter. We studied the interaction network between
users to understand whether assumptions about the structure of the
conversations hold over time and between different types of interactions, such
as retweets, replies, and mentions. While `people come and people go', we want
to understand what makes people stay with the conference on Twitter. By casting
the problem as a classification task, we find different factors that contribute
to the continuing participation of users in the online Twitter conference
activity. These results have implications for research communities to implement
strategies for continuous and active participation among members.
|
1403.7774 | Study and Capacity Evaluation of SISO, MISO and MIMO RF Wireless
Communication Systems | cs.NI cs.IT math.IT | Wireless communication systems have evolved through several generations, from
SISO systems to MIMO systems. Bandwidth is an important constraint in wireless
communication, and high data transmission rates are essential for services such
as triple play, i.e. data, voice and video. At the user end, the capacity
determines the quality of the communication system. This paper compares
different RF wireless communication systems, namely SISO, MISO, SIMO and MIMO,
on the basis of capacity. Wireless communication has evolved from 2G and 3G to
4G, and companies are competing to build networks with ever more capacity so
that data rates can be increased and customers can benefit more. The ultimate
goal of wireless communication systems is to enable global personal and
multimedia communication without capacity constraints.
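The SISO-versus-MIMO capacity comparison discussed in this abstract can be sketched with the standard Shannon formulas. This is a minimal illustration, not the paper's evaluation: the SNR value, antenna counts and Rayleigh-fading assumption below are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def siso_capacity(snr):
    """Shannon capacity of a single-antenna AWGN link, bits/s/Hz."""
    return np.log2(1.0 + snr)

def mimo_ergodic_capacity(nt, nr, snr, trials=2000):
    """Ergodic capacity of an nt x nr Rayleigh-fading MIMO channel with
    equal power allocation: E[log2 det(I + (snr/nt) * H H^H)]."""
    caps = []
    for _ in range(trials):
        h = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        det = np.linalg.det(np.eye(nr) + (snr / nt) * h @ h.conj().T)
        caps.append(np.log2(det.real))
    return float(np.mean(caps))

snr = 10.0  # linear SNR (10 dB), an illustrative choice
print(f"SISO:     {siso_capacity(snr):.2f} bits/s/Hz")
print(f"2x2 MIMO: {mimo_ergodic_capacity(2, 2, snr):.2f} bits/s/Hz")
print(f"4x4 MIMO: {mimo_ergodic_capacity(4, 4, snr):.2f} bits/s/Hz")
```

The roughly min(nt, nr)-fold growth of capacity with antenna count is the motivation for the MIMO generations the abstract surveys.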
|
1403.7783 | Extraction of Line Word Character Segments Directly from Run Length
Compressed Printed Text Documents | cs.CV | Segmentation of a text-document into lines, words and characters, which is
considered to be the crucial pre-processing stage in Optical Character
Recognition (OCR) is traditionally carried out on uncompressed documents,
although most of the documents in real life are available in compressed form,
for reasons such as transmission and storage efficiency. However, this
implies that the compressed image must first be decompressed, which demands
additional computing resources. This limitation has motivated us to take up
research in document image analysis using compressed documents. In this paper,
we propose a new way to carry out segmentation at line, word and character
level in run-length compressed printed text documents. We extract the
horizontal projection profile curve from the compressed file and using the
local minima points perform line segmentation. However, tracing vertical
information, which leads to tracking words and characters in a run-length
compressed file, is not straightforward. Therefore, we propose a novel technique for
carrying out simultaneous word and character segmentation by popping out column
runs from each row in an intelligent sequence. The proposed algorithms have
been validated with 1101 text-lines, 1409 words and 7582 characters from a
data-set of 35 noise- and skew-free compressed documents of Bengali, Kannada and
English Scripts.
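The projection-profile step described in this abstract can be sketched directly on run-length data. This is a minimal sketch under an assumed RLE convention (each row stored as alternating white/black run lengths, starting with a white run); the sample rows are invented for illustration.

```python
def projection_profile(rle_rows):
    """Foreground pixel count per row, computed directly from run-length
    data: sum the black runs (odd-indexed, assuming each row starts with
    a white run) without decompressing the image."""
    return [sum(row[1::2]) for row in rle_rows]

def line_boundaries(profile):
    """Split at zero-profile rows (the minima between text lines).
    Returns (start, end) row index pairs, end exclusive."""
    lines, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i
        elif v == 0 and start is not None:
            lines.append((start, i))
            start = None
    if start is not None:
        lines.append((start, len(profile)))
    return lines

# Two "text lines" separated by a blank row, in a 10-column page.
rle = [
    [2, 5, 3],   # row 0: 2 white, 5 black, 3 white
    [1, 7, 2],   # row 1
    [10],        # row 2: all white (gap between lines)
    [3, 4, 3],   # row 3
]
profile = projection_profile(rle)
print(profile)                    # [5, 7, 0, 4]
print(line_boundaries(profile))   # [(0, 2), (3, 4)]
```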
|
1403.7790 | Optimal Two Player LQR State Feedback With Varying Delay | math.OC cs.SY | This paper presents an explicit solution to a two player distributed LQR
problem in which communication between controllers occurs across a
communication link with varying delay. We extend known dynamic programming
methods to accommodate this varying delay, and show that under suitable
assumptions, the optimal control actions are linear in their information, and
that the resulting controller has piecewise linear dynamics dictated by the
current effective delay regime.
|
1403.7792 | Swarm Intelligence Based Algorithms: A Critical Analysis | math.OC cs.NE nlin.AO | Many optimization algorithms have been developed by drawing inspiration from
swarm intelligence (SI). These SI-based algorithms can have some advantages
over traditional algorithms. In this paper, we carry out a critical analysis of
these SI-based algorithms by analyzing their ways to mimic evolutionary
operators. We also analyze the ways of achieving exploration and exploitation
in algorithms by using mutation, crossover and selection. In addition, we also
look at algorithms using dynamic systems, self-organization and Markov chain
framework. Finally, we provide some discussions and topics for further
research.
|
1403.7793 | True Global Optimality of the Pressure Vessel Design Problem: A
Benchmark for Bio-Inspired Optimisation Algorithms | math.OC cs.NE nlin.AO | The pressure vessel design problem is a well-known design benchmark for
validating bio-inspired optimization algorithms. However, its global optimality
is not clear and there has been no mathematical proof put forward. In this
paper, a detailed mathematical analysis of this problem is provided that proves
that 6059.714335048436 is the global minimum. The Lagrange multiplier method is
also used as an alternative proof and this method is extended to find the
global optimum of a cantilever beam design problem.
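The reported global minimum can be checked by evaluating the widely used pressure-vessel cost function at the active-constraint design point. This sketch assumes the standard benchmark formulation (coefficients 0.6224, 1.7781, 3.1661, 19.84; thickness constraint x1 >= 0.0193*x3; volume requirement 1296000 in^3), which we take to be the one analyzed in the paper.

```python
from math import pi

def cost(x1, x2, x3, x4):
    """Standard pressure vessel design cost: x1 = shell thickness,
    x2 = head thickness, x3 = inner radius, x4 = cylinder length."""
    return (0.6224 * x1 * x3 * x4
            + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4
            + 19.84 * x1**2 * x3)

# Candidate optimum: thicknesses are multiples of 0.0625 in; x3 and x4
# follow from making the thickness and volume constraints active.
x1, x2 = 0.8125, 0.4375
x3 = x1 / 0.0193                                          # g1 active
x4 = (1296000 - (4.0 / 3.0) * pi * x3**3) / (pi * x3**2)  # volume active

print(f"x3 = {x3:.7f}, x4 = {x4:.7f}")
print(f"cost = {cost(x1, x2, x3, x4):.9f}")  # ~6059.714335
```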
|
1403.7795 | Bio-Inspired Computation: Success and Challenges of IJBIC | math.OC cs.NE | It is now five years since the launch of the International Journal of
Bio-Inspired Computation (IJBIC). At the same time, significant new progress
has been made in the area of bio-inspired computation. This review paper
summarizes the success and achievements of IJBIC in the past five years, and
also highlights the challenges and key issues for further research.
|
1403.7802 | Capacity Analysis of LTE-Advanced HetNets with Reduced Power Subframes
and Range Expansion | cs.IT cs.NI math.IT | The time-domain inter-cell interference coordination techniques specified in
the LTE Rel. 10 standard improve the throughput of picocell-edge users by
protecting them from macrocell interference. On the other hand, they also
degrade the aggregate capacity of the macrocell because the macro base station
(MBS) does not transmit data during certain subframes known as almost blank
subframes. MBS data transmission using reduced power subframes was
standardized in LTE Rel. 11, which can improve the macrocell capacity without
causing high interference to the nearby picocells. In order to get maximum
benefit from the reduced power subframes, setting the key system parameters,
such as the amount of power reduction, carries critical importance. Using
stochastic geometry, this paper lays down a theoretical foundation for the
performance evaluation of heterogeneous networks with reduced power subframes
and range expansion bias. The analytic expressions for average capacity and 5th
percentile throughput are derived as a function of transmit powers, node
densities, and interference coordination parameters in a heterogeneous network
scenario, and are validated through Monte Carlo simulations. Joint optimization
of range expansion bias, power reduction factor, scheduling thresholds, and
duty cycle of reduced power subframes are performed to study the trade-offs
between aggregate capacity of a cell and fairness among the users. To validate
our analysis, we also compare the stochastic geometry based theoretical results
with the real MBS deployment (in the city of London) and the hexagonal-grid
model. Our analysis shows that with optimum parameter settings, the LTE Rel. 11
with reduced power subframes can provide substantially better performance than
the LTE Rel. 10 with almost blank subframes, in terms of both aggregate
capacity and fairness.
|
1403.7806 | Unbiased Black-Box Complexities of Jump Functions | cs.NE | We analyze the unbiased black-box complexity of jump functions with small,
medium, and large sizes of the fitness plateau surrounding the optimal
solution.
Among other results, we show that when the jump size is $(1/2 -
\varepsilon)n$, that is, only a small constant fraction of the fitness values
is visible, then the unbiased black-box complexities for arities $3$ and higher
are of the same order as those for the simple \textsc{OneMax} function. Even
for the extreme jump function, in which all but the two fitness values $n/2$
and $n$ are blanked out, polynomial-time mutation-based (i.e., unary unbiased)
black-box optimization algorithms exist. This is quite surprising given that
for the extreme jump function almost the whole search space (all but a
$\Theta(n^{-1/2})$ fraction) is a plateau of constant fitness.
To prove these results, we introduce new tools for the analysis of unbiased
black-box complexities, for example, selecting the new parent individual not by
comparing the fitnesses of the competing search points, but by taking into
account the (empirical) expected fitnesses of their offspring.
|
1403.7827 | Motif-based success scores in coauthorship networks are highly sensitive
to author name disambiguation | physics.soc-ph cs.DL cs.SI | Following the work of Krumov et al. [Eur. Phys. J. B 84, 535 (2011)] we
revisit the question whether the usage of large citation datasets allows for
the quantitative assessment of social (by means of coauthorship of
publications) influence on the progression of science. Using a more
comprehensive and well-curated dataset containing the publications in the
journals of the American Physical Society during the whole 20th century we find
that the measure chosen in the original study, a score based on small induced
subgraphs, has to be used with caution, since the obtained results are highly
sensitive to the exact implementation of the author disambiguation task.
|
1403.7841 | Proceedings 1st International Workshop on Synthesis of Continuous
Parameters | cs.SC cs.FL cs.SY | This volume contains the proceedings of the 1st International Workshop on
Synthesis of Continuous Parameters (SynCoP'14). The workshop was held in
Grenoble, France on April 6th, 2014, as a satellite event of the 17th European
Joint Conferences on Theory and Practice of Software (ETAPS'14).
SynCoP aims at bringing together researchers working on parameter synthesis
for systems with continuous variables, where the parameters consist of a
(usually dense) set of constant values. Synthesis problems for such parameters
arise for real-time, hybrid or probabilistic systems in a large variety of
application domains. A parameter could be, e.g., a delay in a real-time system,
or a reaction rate in a biological cell model. The objective of the synthesis
problem is to identify suitable parameters to achieve desired behavior, or to
verify the behavior for a given range of parameter values.
This volume contains seven contributions: two invited talks and five regular
papers.
|
1403.7846 | Distributed Channel Quantization for Two-User Interference Networks | cs.IT math.IT | We introduce conferencing-based distributed channel quantizers for two-user
interference networks where interference signals are treated as noise. Compared
with the conventional distributed quantizers where each receiver quantizes its
own channel independently, the proposed quantizers allow multiple rounds of
feedback communication in the form of conferencing between receivers. We take
the network outage probabilities of sum rate and minimum rate as performance
measures and consider quantizer design in the transmission strategies of time
sharing and interference transmission. First, we propose distributed quantizers
that achieve the optimal network outage probability of sum rate for both time
sharing and interference transmission strategies with an average feedback rate
of only two bits per channel state. Then, for the time sharing strategy, we
propose a distributed quantizer that achieves the optimal network outage
probability of minimum rate with finite average feedback rate; conventional
quantizers require infinite rate to achieve the same performance. For the
interference transmission strategy, a distributed quantizer that can approach
the optimal network outage probability of minimum rate closely is also
proposed. Numerical simulations confirm that our distributed quantizers based
on conferencing outperform the conventional ones.
|
1403.7851 | Adaptive Linear Programming Decoding of Polar Codes | cs.IT math.IT | Polar codes are high density parity check codes and hence the sparse factor
graph, instead of the parity check matrix, has been used to practically
represent an LP polytope for LP decoding. Although LP decoding on this polytope
has the ML-certificate property, it performs poorly over a BAWGN channel. In
this paper, we propose modifications to adaptive cut generation based LP
decoding techniques and apply the modified-adaptive LP decoder to short
blocklength polar codes over a BAWGN channel. The proposed decoder provides
significant FER performance gain compared to the previously proposed LP decoder
and its performance approaches that of ML decoding at high SNRs. We also
present an algorithm to obtain a smaller factor graph from the original sparse
factor graph of a polar code. This reduced factor graph preserves the small
check node degrees needed to represent the LP polytope in practice. We show
that the fundamental polytope of the reduced factor graph can be obtained from
the projection of the polytope represented by the original sparse factor graph
and the frozen bit information. Thus, the LP decoding time complexity is
decreased without changing the FER performance by using the reduced factor
graph representation.
|
1403.7869 | Application des techniques d'ench\`eres dans les r\'eseaux de radio
cognitive | cs.NI cs.MA | The rapid proliferation of standards and radio services in recent years
caused the problem of spectrum scarcity. The main objective of Cognitive Radio
(CR) is to facilitate access to radio spectrum. Our contribution in this paper
is the use of auctions to solve the problem of spectrum congestion in the
context of CR, for that, we will combine the theory of auctions with
multi-agent systems. Our approach has shown that it is preferable to use the
Sealed-bid Auction with dynamic programming because this method has many
advantages over other methods.
|
1403.7870 | Optimized Training Design for Wireless Energy Transfer | cs.IT math.IT | Radio-frequency (RF) enabled wireless energy transfer (WET), as a promising
solution to provide cost-effective and reliable power supplies for
energy-constrained wireless networks, has drawn growing interests recently. To
overcome the significant propagation loss over distance, employing
multi-antennas at the energy transmitter (ET) to more efficiently direct
wireless energy to desired energy receivers (ERs), termed \emph{energy
beamforming}, is an essential technique for enabling WET. However, the
achievable gain of energy beamforming crucially depends on the available
channel state information (CSI) at the ET, which needs to be acquired
practically. In this paper, we study the design of an efficient channel
acquisition method for a point-to-point multiple-input multiple-output (MIMO)
WET system by exploiting the channel reciprocity, i.e., the ET estimates the
CSI via dedicated reverse-link training from the ER. Considering the limited
energy availability at the ER, the training strategy should be carefully
designed so that the channel can be estimated with sufficient accuracy, and yet
without consuming excessive energy at the ER. To this end, we propose to
maximize the \emph{net} harvested energy at the ER, which is the average
harvested energy offset by that used for channel training. An optimization
problem is formulated for the training design over MIMO Rician fading channels,
including the subset of ER antennas to be trained, as well as the training time
and power allocated. Closed-form solutions are obtained for some special
scenarios, based on which useful insights are drawn on when training should be
employed to improve the net transferred energy in MIMO WET systems.
|
1403.7876 | Correlation Filters with Limited Boundaries | cs.CV | Correlation filters take advantage of specific properties in the Fourier
domain allowing them to be estimated efficiently: O(NDlogD) in the frequency
domain, versus O(D^3 + ND^2) spatially where D is signal length, and N is the
number of signals. Recent extensions to correlation filters, such as MOSSE,
have reignited interest in their use in the vision community due to their
robustness and attractive computational properties. In this paper we
demonstrate, however, that this computational efficiency comes at a cost.
Specifically, we demonstrate that only a 1/D proportion of shifted examples is
unaffected by boundary effects, which has a dramatic effect on
detection/tracking performance. In this paper, we propose a novel approach to
correlation filter estimation that: (i) takes advantage of inherent
computational redundancies in the frequency domain, and (ii) dramatically
reduces boundary effects. Impressive object tracking and detection results are
presented in terms of both accuracy and computational efficiency.
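The frequency-domain estimation this abstract refers to can be illustrated with a MOSSE-style single-channel filter. This is a sketch of the classical closed form H* = Σ G_i ⊙ conj(F_i) / (Σ F_i ⊙ conj(F_i) + λ), not the boundary-aware estimator proposed in the paper; the image size, target location and λ are arbitrary choices.

```python
import numpy as np

def train_correlation_filter(images, targets, lam=1e-2):
    """MOSSE-style ridge-regression filter in the Fourier domain.
    Each element-wise division replaces an O(D^3) spatial solve."""
    num, den = 0.0, lam
    for x, g in zip(images, targets):
        F = np.fft.fft2(x)
        G = np.fft.fft2(g)
        num = num + G * np.conj(F)
        den = den + F * np.conj(F)
    return num / den

def detect(H, x):
    """Correlation response map; its peak locates the target."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(x)))

rng = np.random.default_rng(1)
img = rng.standard_normal((32, 32))
target = np.zeros((32, 32))
target[10, 12] = 1.0  # desired response: a peak at (10, 12)

H = train_correlation_filter([img], [target])
resp = detect(H, img)
print(np.unravel_index(np.argmax(resp), resp.shape))  # peak near (10, 12)
```

On the training image the response reproduces the desired peak almost exactly, which is the attractive computational property the abstract mentions; the boundary effects it analyzes arise because this formulation implicitly treats all circular shifts of the image as training examples.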
|
1403.7877 | ROML: A Robust Feature Correspondence Approach for Matching Objects in A
Set of Images | cs.CV | Feature-based object matching is a fundamental problem for many applications
in computer vision, such as object recognition, 3D reconstruction, tracking,
and motion segmentation. In this work, we consider simultaneously matching
object instances in a set of images, where both inlier and outlier features are
extracted. The task is to identify the inlier features and establish their
consistent correspondences across the image set. This is a challenging
combinatorial problem, and the problem complexity grows exponentially with the
image number. To this end, we propose a novel framework, termed ROML, to
address this problem. ROML optimizes simultaneously a partial permutation
matrix (PPM) for each image, and feature correspondences are established by the
obtained PPMs. Two of our key contributions are summarized as follows. (1) We
formulate the problem as rank and sparsity minimization for PPM optimization,
and treat simultaneous optimization of multiple PPMs as a regularized consensus
problem in the context of distributed optimization. (2) We use the ADMM method
to solve the thus formulated ROML problem, in which a subproblem associated
with a single PPM optimization appears to be a difficult integer quadratic
program (IQP). We prove that under widely applicable conditions, this IQP is
equivalent to a linear sum assignment problem (LSAP), which can be efficiently
solved to an exact solution. Extensive experiments on rigid/non-rigid object
matching, matching instances of a common object category, and common object
localization show the efficacy of our proposed method.
|
1403.7879 | Triadic motifs in the dependence networks of virtual societies | physics.soc-ph cs.SI | In friendship networks, individuals have different numbers of friends, and
the closeness or intimacy between an individual and her friends is
heterogeneous. Using a statistical filtering method to identify relationships
about who depends on whom, we construct dependence networks (which are
directed) from weighted friendship networks of avatars in more than two hundred
virtual societies of a massively multiplayer online role-playing game (MMORPG).
We investigate the evolution of triadic motifs in dependence networks. Several
metrics show that the virtual societies evolved through a transient stage in
the first two to three weeks and reached a relatively stable stage. We find
that the unidirectional loop motif (${\rm{M}}_9$) is underrepresented and does
not appear, open motifs are also underrepresented, while other closed motifs are
overrepresented. We also find that, for most motifs, the overall level
difference of the three avatars in the same motif is significantly lower than
average, whereas the sum of ranks is only slightly larger than average. Our
findings show that avatars' social status plays an important role in the
formation of triadic motifs.
|
1403.7883 | Multiple-Access Relay Wiretap Channel | cs.IT math.IT | In this paper, we investigate the effects of an additional trusted relay node
on the secrecy of multiple-access wiretap channel (MAC-WT) by considering the
model of multiple-access relay wiretap channel (MARC-WT). More specifically,
first, we investigate the discrete memoryless MARC-WT. Three inner bounds (with
respect to decode-forward (DF), noise-forward (NF) and compress-forward (CF)
strategies) on the secrecy capacity region are provided. Second, we investigate
the degraded discrete memoryless MARC-WT, and present an outer bound on the
secrecy capacity region of this degraded model. Finally, we investigate the
Gaussian MARC-WT, and find that the NF and CF strategies help to enhance
Tekin-Yener's achievable secrecy rate region of Gaussian MAC-WT. Moreover, we
find that if the noise variance of the transmitters-relay channel is smaller
than that of the transmitters-receiver channel, the DF strategy may also
enhance Tekin-Yener's achievable secrecy rate region of Gaussian MAC-WT, and it
may perform even better than the NF and CF strategies.
|
1403.7890 | Sparse K-Means with $\ell_{\infty}/\ell_0$ Penalty for High-Dimensional
Data Clustering | stat.ML cs.LG stat.ME | Sparse clustering, which aims to find a proper partition of an extremely
high-dimensional data set with redundant noise features, has attracted
more and more interest in recent years. The existing studies commonly solve
the problem in a framework of maximizing the weighted feature contributions
subject to a $\ell_2/\ell_1$ penalty. Nevertheless, this framework has two
serious drawbacks: One is that the solution of the framework unavoidably
involves a considerable portion of redundant noise features in many situations,
and the other is that the framework neither offers an intuitive explanation of
why it can select relevant features nor leads to any theoretical
guarantee for feature selection consistency.
In this article, we attempt to overcome those drawbacks through developing a
new sparse clustering framework which uses a $\ell_{\infty}/\ell_0$ penalty.
First, we introduce new concepts on optimal partitions and noise features for
the high-dimensional data clustering problems, based on which the previously
known framework can be intuitively explained in principle. Then, we apply the
suggested $\ell_{\infty}/\ell_0$ framework to formulate a new sparse k-means
model with the $\ell_{\infty}/\ell_0$ penalty ($\ell_0$-k-means for short). We
propose an efficient iterative algorithm for solving the $\ell_0$-k-means. To
deeply understand the behavior of $\ell_0$-k-means, we prove that the solution
yielded by the $\ell_0$-k-means algorithm has feature selection consistency
whenever the data matrix is generated from a high-dimensional Gaussian mixture
model. Finally, we provide experiments with both synthetic data and the Allen
Developing Mouse Brain Atlas data to support that the proposed $\ell_0$-k-means
exhibits better noise feature detection capacity than the previously known
sparse k-means with the $\ell_2/\ell_1$ penalty ($\ell_1$-k-means for short).
|
1403.7899 | Identifying User Behavior in domain-specific Repositories | cs.DL cs.IR | This paper presents an analysis of the user behavior of two different
domain-specific repositories. The web analytic tool etracker was used to gain a
first overall insight into the user behavior of these repositories. Moreover,
we extended our work to describe an Apache web log analysis approach which
focuses on the identification of user behavior. To this end, the user traffic
within our systems is visualized using chord diagrams. We found that
recommendations are used frequently and that users rarely combine searching with
faceting or filtering.
|
1403.7920 | Computing the dimension of ideals in group algebras, with an application
to coding theory | cs.IT math.IT math.RA | The problem of computing the dimension of a left/right ideal in a group
algebra F[G] of a finite group G over a field F is considered. The ideal
dimension is related to the rank of a matrix originating from a regular
left/right representation of G; in particular, when F[G] is semisimple, the
dimension of a principal ideal is equal to the rank of the matrix representing
a generator. From this observation, a bound and an efficient algorithm to
compute the dimension of an ideal in a group ring are established. Since group
codes are ideals in finite group rings, the algorithm allows efficient
computation of their dimension.
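For the cyclic group G = Z_n over F_2, the regular representation of a generator is a circulant matrix, so the observation above reduces the ideal (group code) dimension to a GF(2) matrix rank. This is a minimal sketch of that rank computation; the Hamming-code generator used as the example is a standard illustration, not taken from the paper.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are integer bitmasks,
    via bitwise Gaussian elimination."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lsb = pivot & -pivot  # lowest set bit serves as pivot column
        rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def ideal_dimension(generator, n):
    """Dimension of the principal ideal generated by `generator`
    (coefficient list) in F2[Z_n] = F2[x]/(x^n - 1): the rank of the
    circulant matrix whose rows are the cyclic shifts of the generator."""
    rows = []
    for s in range(n):
        mask = 0
        for i, c in enumerate(generator):
            if c:
                mask |= 1 << ((i + s) % n)
        rows.append(mask)
    return gf2_rank(rows)

# g(x) = 1 + x + x^3 in F2[Z_7] generates the [7,4] Hamming code,
# so the ideal dimension is 7 - deg(gcd(g, x^7 - 1)) = 4.
print(ideal_dimension([1, 1, 0, 1], 7))  # 4
```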
|
1403.7923 | Using perceptually defined music features in music information retrieval | cs.IR cs.SD | In this study, the notion of perceptual features is introduced for describing
general music properties based on human perception. This is an attempt at
rethinking the concept of features, in order to understand the underlying human
perception mechanisms. Instead of using concepts from music theory such as
tones, pitches, and chords, a set of nine features describing overall
properties of the music was selected. They were chosen from qualitative
measures used in psychology studies and motivated from an ecological approach.
The selected perceptual features were rated in two listening experiments using
two different data sets. They were modeled both from symbolic (MIDI) and audio
data using different sets of computational features. Ratings of emotional
expression were predicted using the perceptual features. The results indicate
that (1) at least some of the perceptual features are reliable estimates; (2)
emotion ratings could be predicted by a small combination of perceptual
features with an explained variance up to 90%; (3) the perceptual features
could only to a limited extent be modeled using existing audio features. The
results also clearly indicated that a small number of dedicated features were
superior to a 'brute force' model using a large number of general audio
features.
|
1403.7928 | Integrated Data Acquisition, Storage, Retrieval and Processing Using the
COMPASS DataBase (CDB) | cs.DB | We present a complex data handling system for the COMPASS tokamak, operated
by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (Compass
DataBase), integrates different data sources as an assortment of data
acquisition hardware and software from different vendors is used. Based on
widely available open source technologies wherever possible, CDB is vendor and
platform independent and it can be easily scaled and distributed. The data is
directly stored and retrieved using a standard NAS (Network Attached Storage),
hence independent of the particular technology; the description of the data
(the metadata) is recorded in a relational database. Database structure is
general and enables the inclusion of multi-dimensional data signals in multiple
revisions (no data is overwritten). This design is inherently distributed as
the work is off-loaded to the clients. Both NAS and database can be implemented
and optimized for fast local access as well as secure remote access. CDB is
implemented in Python language; bindings for Java, C/C++, IDL and Matlab are
provided. Independent data acquisition systems as well as nodes managed by
FireSignal [2] are all integrated using CDB. An automated data post-processing
server is a part of CDB. Based on dependency rules, the server executes, in
parallel if possible, prescribed post-processing tasks.
|
1403.7933 | Additive codes over $GF(4)$ from circulant graphs | math.CO cs.DM cs.IT math.IT | In $2006$, Danielsen and Parker \cite{DP} proved that every self-dual
additive code over $GF(4)$ is equivalent to a graph code. Graphs are thus an
important tool for searching for (proposed) optimal codes. In this paper, we
introduce a new method of searching for (proposed) optimal additive codes from
circulant graphs.
|
1403.7948 | Structure of conflict graphs in constraint alignment problems and
algorithms | cs.DS cs.CE | We consider the constrained graph alignment problem which has applications in
biological network analysis. Given two input graphs $G_1=(V_1,E_1),
G_2=(V_2,E_2)$, a pair of vertex mappings induces an {\it edge conservation} if
the vertex pairs are adjacent in their respective graphs. %In general terms The
goal is to provide a one-to-one mapping between the vertices of the input
graphs in order to maximize edge conservation. However the allowed mappings are
restricted since each vertex from $V_1$ (resp. $V_2$) is allowed to be mapped
to at most $m_1$ (resp. $m_2$) specified vertices in $V_2$ (resp. $V_1$). Most
of the results in this paper deal with the case $m_2=1$, which has attracted the
most attention in the related literature. We formulate the problem as a maximum
independent set problem in a related {\em conflict graph} and investigate
structural properties of this graph in terms of forbidden subgraphs. We are
interested, in particular, in excluding certain wheels, fans, cliques or claws
(all terms are defined in the paper), which corresponds to excluding certain
cycles, paths, cliques or independent sets in the neighborhood of each vertex.
Then, we investigate algorithmic consequences of some of these properties,
which illustrates the potential of this approach and opens new horizons for
further work. In particular, this approach allows us to reinterpret a known
polynomial case in terms of conflict graph and to improve known approximation
and fixed-parameter tractability results through efficiently solving the
maximum independent set problem in conflict graphs. Some of our new
approximation results involve approximation ratios that are function of the
optimal value, in particular its square root; this kind of results cannot be
achieved for maximum independent set in general graphs.
|
1403.7970 | Direct design of LPV feedback controllers: technical details and
numerical examples | cs.SY | The paper contains technical details of recent results developed by the
author, regarding the design of LPV controllers directly from experimental
data. Two numerical examples are also presented, about control of the Duffing
oscillator and control of a two-degree-of-freedom manipulator.
|
1403.7985 | Relative generalized Hamming weights of one-point algebraic geometric
codes | cs.IT math.AG math.IT | Security of linear ramp secret sharing schemes can be characterized by the
relative generalized Hamming weights of the involved codes. In this paper we
elaborate on the implication of these parameters and we devise a method to
estimate their value for general one-point algebraic geometric codes. As it is
demonstrated, for Hermitian codes our bound is often tight. Furthermore, for
these codes the relative generalized Hamming weights are often much larger than
the corresponding generalized Hamming weights.
|
1403.8003 | Probabilistic Intra-Retinal Layer Segmentation in 3-D OCT Images Using
Global Shape Regularization | cs.CV | With the introduction of spectral-domain optical coherence tomography (OCT),
resulting in a significant increase in acquisition speed, the fast and accurate
segmentation of 3-D OCT scans has become evermore important. This paper
presents a novel probabilistic approach that models the appearance of retinal
layers as well as the global shape variations of layer boundaries. Given an OCT
scan, the full posterior distribution over segmentations is approximately
inferred using a variational method enabling efficient probabilistic inference
in terms of computationally tractable model components: Segmenting a full 3-D
volume takes around a minute. Accurate segmentations demonstrate the benefit of
using global shape regularization: We segmented 35 fovea-centered 3-D volumes
with an average unsigned error of 2.46 $\pm$ 0.22 {\mu}m as well as 80 normal
and 66 glaucomatous 2-D circular scans with errors of 2.92 $\pm$ 0.53 {\mu}m
and 4.09 $\pm$ 0.98 {\mu}m respectively. Furthermore, we utilized the inferred
posterior distribution to rate the quality of the segmentation, point out
potentially erroneous regions and discriminate normal from pathological scans.
No pre- or postprocessing was required and we used the same set of parameters
for all data sets, underlining the robustness and out-of-the-box nature of our
approach.
|
1403.8024 | Replica Analysis and Approximate Message Passing Decoder for
Superposition Codes | cs.IT cond-mat.dis-nn math.IT | Superposition codes are efficient for the Additive White Gaussian Noise
channel. We provide here a replica analysis of the performance of these codes
for large signals. We also consider a Bayesian Approximate Message Passing
decoder based on a belief-propagation approach, and discuss its performance
using the density evolution technique. Our main findings are: 1) for the sizes
we can access, the message-passing decoder outperforms other decoders studied
in the literature; 2) its performance is limited by a sharp phase transition;
and 3) while these codes reach capacity as $B$ (a crucial parameter in the
code) increases, the performance of the message-passing decoder worsens as the
phase transition goes to lower rates.
|
1403.8034 | Keep Your Friends Close and Your Facebook Friends Closer: A Multiplex
Network Approach to the Analysis of Offline and Online Social Ties | cs.SI physics.soc-ph | Social media allow for an unprecedented amount of interaction between people
online. A fundamental aspect of human social behavior, however, is the tendency
of people to associate themselves with like-minded individuals, forming
homogeneous social circles both online and offline. In this work, we apply a
new model that allows us to distinguish between social ties of varying
strength, and to observe evidence of homophily with regard to politics, music,
health, residential sector & year in college, within the online and offline
social network of 74 college students. We present a multiplex network approach
to social tie strength, here applied to mobile communication data - calls, text
messages, and co-location, allowing us to dimensionally identify relationships
by considering the number of communication channels utilized between students.
We find that strong social ties are characterized by maximal use of
communication channels, while weak ties by minimal use. We are able to identify
75% of close friendships, 90% of weaker ties, and 90% of Facebook friendships
as compared to reported ground truth. We then show that stronger ties exhibit
greater profile similarity than weaker ones. Apart from high homogeneity in
social circles with respect to political and health aspects, we observe strong
homophily driven by music, residential sector and year in college. Despite
Facebook friendship being highly dependent on residence and year, exposure to
less homogeneous content can be found in the online rather than the offline
social circles of students, most notably in political and music aspects.
|
1403.8042 | Optimal Power Allocation for Three-phase Bidirectional DF Relaying with
Fixed Rates | cs.IT math.IT | Wireless systems that carry delay-sensitive information (such as speech
and/or video signals) typically transmit with fixed data rates, but may
occasionally suffer from transmission outages caused by the random nature of
the fading channels. If the transmitter has instantaneous channel state
information (CSI) available, it can compensate for a significant portion of
these outages by utilizing power allocation. In this paper, we consider optimal
power allocation for a conventional dual-hop bidirectional decode-and-forward
(DF) relaying system with a three-phase transmission protocol. The proposed
strategy minimizes the average power consumed by the end nodes and the relay,
subject to some maximum allowable system outage probability (OP), or
equivalently, minimizes the system OP while meeting average power constraints
at the end nodes and the relay. We show that in the proposed power allocation
scheme, the end nodes and the relay adjust their output powers to the minimum
level required to avoid outages, but will sometimes be silent, in order to
conserve power and prolong their lifetimes. For the proposed scheme, the end
nodes use the instantaneous CSI of their respective source-relay links and the
relay uses the instantaneous CSI of both links.
|
1403.8046 | Chemlambda, universality and self-multiplication | cs.AI math.GT math.LO | We present chemlambda (or the chemical concrete machine), an artificial
chemistry with the following properties: (a) is Turing complete, (b) has a
model of decentralized, distributed computing associated to it, (c) works at
the level of individual (artificial) molecules, subject to reversible, but
otherwise deterministic interactions with a small number of enzymes, (d)
encodes information in the geometrical structure of the molecules and not in
their numbers, (e) all interactions are purely local in space and time. This is
part of a larger project to create computing, artificial chemistry and
artificial life in a distributed context, using topological and graphical
languages.
|
1403.8067 | Robust Subspace Recovery via Bi-Sparsity Pursuit | cs.CV | Successful applications of sparse models in computer vision and machine
learning imply that in many real-world applications, high dimensional data is
distributed in a union of low dimensional subspaces. Nevertheless, the
underlying structure may be affected by sparse errors and/or outliers. In this
paper, we propose a bi-sparse model as a framework to analyze this problem and
provide a novel algorithm to recover the union of subspaces in presence of
sparse corruptions. We further show the effectiveness of our method by
experiments on both synthetic data and real-world vision data.
|
1403.8084 | Privacy Tradeoffs in Predictive Analytics | cs.CR cs.LG | Online services routinely mine user data to predict user preferences, make
recommendations, and place targeted ads. Recent research has demonstrated that
several private user attributes (such as political affiliation, sexual
orientation, and gender) can be inferred from such data. Can a
privacy-conscious user benefit from personalization while simultaneously
protecting her private attributes? We study this question in the context of a
rating prediction service based on matrix factorization. We construct a
protocol of interactions between the service and users that has remarkable
optimality properties: it is privacy-preserving, in that no inference algorithm
can succeed in inferring a user's private attribute with a probability better
than random guessing; it has maximal accuracy, in that no other
privacy-preserving protocol improves rating prediction; and, finally, it
involves a minimal disclosure, as the prediction accuracy strictly decreases
when the service reveals less information. We extensively evaluate our protocol
using several rating datasets, demonstrating that it successfully blocks the
inference of gender, age and political affiliation, while incurring less than
5% decrease in the accuracy of rating prediction.
|
1403.8093 | The Lossy Common Information of Correlated Sources | cs.IT math.IT | The two most prevalent notions of common information (CI) are due to Wyner
and Gacs-Korner and both the notions can be stated as two different
characteristic points in the lossless Gray-Wyner region. Although the
information theoretic characterizations for these two CI quantities can be
easily evaluated for random variables with infinite entropy (e.g., continuous
random variables), their operational significance is applicable only to the
lossless framework. The primary objective of this paper is to generalize these
two CI notions to the lossy Gray-Wyner network, which hence extends the
theoretical foundation to general sources and distortion measures. We begin by
deriving a single letter characterization for the lossy generalization of
Wyner's CI, defined as the minimum rate on the shared branch of the Gray-Wyner
network, maintaining minimum sum transmit rate when the two decoders
reconstruct the sources subject to individual distortion constraints. To
demonstrate its use, we compute the CI of bivariate Gaussian random variables
for the entire regime of distortions. We then similarly generalize Gacs and
Korner's definition to the lossy framework. The latter half of the paper
focuses on studying the tradeoff between the total transmit rate and receive
rate in the Gray-Wyner network. We show that this tradeoff yields a contour of
points on the surface of the Gray-Wyner region, which passes through both the
Wyner and Gacs-Korner operating points, and thereby provides a unified
framework to understand the different notions of CI. We further show that this
tradeoff generalizes the two notions of CI to the excess sum transmit rate and
receive rate regimes, respectively.
|
1403.8098 | Hyperspectral image superresolution: An edge-preserving convex
formulation | cs.CV physics.data-an stat.ML | Hyperspectral remote sensing images (HSIs) are characterized by having a low
spatial resolution and a high spectral resolution, whereas multispectral images
(MSIs) are characterized by low spectral and high spatial resolutions. These
complementary characteristics have stimulated active research in the inference
of images with high spatial and spectral resolutions from HSI-MSI pairs.
In this paper, we formulate this data fusion problem as the minimization of a
convex objective function containing two data-fitting terms and an
edge-preserving regularizer. The data-fitting terms are quadratic and account
for blur, different spatial resolutions, and additive noise; the regularizer, a
form of vector Total Variation, promotes aligned discontinuities across the
reconstructed hyperspectral bands.
The optimization described above is rather hard, owing to its
non-diagonalizable linear operators, to the non-quadratic and non-smooth nature
of the regularizer, and to the very large size of the image to be inferred. We
tackle these difficulties by tailoring the Split Augmented Lagrangian Shrinkage
Algorithm (SALSA)---an instance of the Alternating Direction Method of
Multipliers (ADMM)---to this optimization problem. By using a convenient
variable splitting and by exploiting the fact that HSIs generally "live" in a
low-dimensional subspace, we obtain an effective algorithm that yields
state-of-the-art results, as illustrated by experiments.
|
1403.8118 | E-Generalization Using Grammars | cs.LO cs.AI cs.FL | We extend the notion of anti-unification to cover equational theories and
present a method based on regular tree grammars to compute a finite
representation of E-generalization sets. We present a framework to combine
Inductive Logic Programming and E-generalization that includes an extension of
Plotkin's lgg theorem to the equational case. We demonstrate the potential
power of E-generalization by three example applications: computation of
suggestions for auxiliary lemmas in equational inductive proofs, computation of
construction laws for given term sequences, and learning of screen editor
command sequences.
|
1403.8122 | Performance of Selection Combining for Differential Amplify-and-Forward
Relaying Over Time-Varying Channels | cs.IT math.IT | Selection combining (SC) at the destination for differential
amplify-and-forward (AF) relaying is attractive as it does not require channel
state information, in contrast to semi maximum-ratio-combining (semi-MRC),
while delivering comparable performance. Performance analysis of the SC scheme was
recently reported but only for the case of slow-fading channels. This paper
provides an exact average bit-error-rate (BER) of the SC scheme over a general
case of time-varying Rayleigh fading channels when DBPSK modulation is used
together with non-coherent detection at the destination. The presented
analysis is thoroughly verified with simulation results in various fading
scenarios. It is shown that the performance of the system is related to the
auto-correlation values of the channels. It is also shown that the performance
of the SC method is very close to that of the semi-MRC method and the existence
of an error floor at high signal-to-noise ratio region is inevitable in both
methods. The obtained BER analysis for the SC method can also be used to
approximate the BER performance of the MRC method, whose exact analytical
evaluation in time-varying channels appears to be difficult.
|
1403.8128 | Performance of Differential Amplify-and-Forward Relaying in Multi-Node
Wireless Communications | cs.IT math.IT | This paper is concerned with the performance of differential
amplify-and-forward (D-AF) relaying for multi-node wireless communications over
time-varying Rayleigh fading channels. A first-order auto-regressive model is
utilized to characterize the time-varying nature of the channels. Based on the
second-order statistical properties of the wireless channels, a new set of
combining weights is proposed for signal detection at the destination.
An expression for the pair-wise error probability (PEP) is provided and used to
obtain the approximate total average bit error probability (BER). It is shown that
the performance of the system is related to the auto-correlation of the direct
and cascaded channels and an irreducible error floor exists at high
signal-to-noise ratio (SNR). The new weights lead to a better performance when
compared to the conventional combining scheme. Computer simulation is carried
out in different scenarios to support the analysis.
|
1403.8130 | Selection Combining for Differential Amplify-and-Forward Relaying Over
Rayleigh-Fading Channel | cs.IT math.IT | This paper proposes and analyses selection combining (SC) at the destination
for differential amplify-and-forward (D-AF) relaying over slow Rayleigh-fading
channels. The selection combiner chooses the link with the maximum magnitude of
the decision variable to be used for non-coherent detection of the transmitted
symbols. Therefore, in contrast to maximum ratio combining (MRC), no
channel information is needed at the destination. The exact average
bit-error-rate (BER) of the proposed SC is derived and verified with simulation
results. It is also shown that the performance of the SC method is very close
to that of the MRC method, albeit with lower complexity.
|
1403.8144 | Coding for Random Projections and Approximate Near Neighbor Search | cs.LG cs.DB cs.DS stat.CO | This technical note compares two coding (quantization) schemes for random
projections in the context of sub-linear time approximate near neighbor search.
The first scheme is based on uniform quantization while the second scheme
utilizes a uniform quantization plus a uniformly random offset (which has been
popular in practice). The prior work compared the two schemes in the context of
similarity estimation and training linear classifiers, with the conclusion that
the step of random offset is not necessary and may hurt the performance
(depending on the similarity level). The task of near neighbor search is
related to similarity estimation but with important distinctions and requires
its own study. In this paper, we demonstrate that in the context of near neighbor
search, the step of random offset is not needed either and may hurt the
performance (sometimes significantly so, depending on the similarity and other
parameters).
|
1404.0027 | An efficient GPU acceptance-rejection algorithm for the selection of the
next reaction to occur for Stochastic Simulation Algorithms | cs.CE cs.DC | Motivation: The Stochastic Simulation Algorithm (SSA) is widely used in the
field of systems biology. This approach needs many realizations to establish
statistical results on the system under study. It is very computationally
demanding, and with the advent of large models this burden is increasing. Hence
parallel implementations of the SSA are needed to address these needs.
At the very heart of the SSA is the selection of the next reaction to occur
at each time step, and to the best of our knowledge all implementations are
based on an inverse transformation method. However, this method involves a
random number of steps to select this next reaction and is poorly amenable to a
parallel implementation.
Results: Here, we introduce a parallel acceptance-rejection algorithm to
select the K next reactions to occur. This algorithm uses a deterministic
number of steps, a property well suited to a parallel implementation. It is
simple and small, accurate and scalable. We propose a Graphics Processing Unit
(GPU) implementation and validate our algorithm with simulated propensity
distributions and the propensity distribution of a large model of yeast iron
metabolism. We show that our algorithm can handle thousands of selections of
the next reaction to occur in parallel on the GPU, paving the way to massive SSA.
Availability: We present our GPU-AR algorithm that focuses on the very heart
of the SSA. We do not embed our algorithm within a full implementation in order
to remain pedagogical and to allow its rapid integration into existing software.
We hope that it will enable stochastic modelers to implement our algorithm with
the benefits of their own optimizations.
|
1404.0039 | Asynchronous Transmission of Wireless Multicast System with Genetic
Joint Antennas Selection | cs.NI cs.IT math.IT | Optimal antenna selection for multicast transmission can
significantly reduce the number of antennas and achieve lower complexity with
performance close to that of exhaustive search. An
asynchronous multicast transmission mechanism based on genetic antenna
selection is proposed. The computational complexity of the genetic antenna
selection algorithm remains moderate as the total number of antennas
increases, compared with the optimum search algorithm. Symbol error rate (SER)
and capacity of our mechanism are analyzed and simulated, and the simulation
results demonstrate that our proposed mechanism can achieve better SER and
sub-maximum channel capacity in wireless multicast systems.
|
1404.0046 | Approximation Schemes for Many-Objective Query Optimization | cs.DB | The goal of multi-objective query optimization (MOQO) is to find query plans
that realize a good compromise between conflicting objectives such as
minimizing execution time and minimizing monetary fees in a Cloud scenario. A
previously proposed exhaustive MOQO algorithm needs hours to optimize even
simple TPC-H queries. This is why we propose several approximation schemes for
MOQO that generate guaranteed near-optimal plans in seconds where exhaustive
optimization takes hours.
We integrated all MOQO algorithms into the Postgres optimizer and present
experimental results for TPC-H queries; we extended the Postgres cost model and
optimize for up to nine conflicting objectives in our experiments. The proposed
algorithms are based on a formal analysis of typical cost functions that occur
in the context of MOQO. We identify properties that hold for a broad range of
objectives and can be exploited for the design of future MOQO algorithms.
|
1404.0058 | Short Term Electricity Load Forecasting on Varying Levels of Aggregation | stat.AP cs.SI | We propose a simple empirical scaling law that describes load forecasting
accuracy at different levels of aggregation. The model is justified based on a
simple decomposition of individual consumption patterns. We show that for
different forecasting methods and horizons, aggregating more customers improves
the relative forecasting performance up to a specific point. Beyond this point,
no more improvement in relative performance can be obtained.
|
1404.0061 | Short Message Noisy Network Coding with Rate Splitting | cs.IT math.IT | A short message noisy network coding with rate splitting (SNNC-RS) encoding
strategy is presented. It has been shown by Hou and Kramer that mixed
cooperative strategies in which relays in favorable positions perform
decode-and-forward (DF) and the rest of the relays perform short message noisy
network coding (SNNC) can outperform noisy network coding (NNC). Our proposed
strategy further improves the rate performance of such a mixed SNNC-DF
cooperative strategy. In the proposed scheme, superposition coding is
incorporated into the SNNC encoding in order to facilitate partial interference
cancellation at DF relays, thereby increasing the overall rate. To demonstrate
gains of the proposed SNNC-RS strategy, the achievable rate is analyzed for the
discrete memoryless two-relay network with one DF relay and one SNNC-RS relay
and compared to the case without rate-splitting. The obtained rate is evaluated
in the Gaussian two-relay network and gains over the rate achieved without rate
splitting are demonstrated.
|
1404.0062 | On redundancy of memoryless sources over countable alphabets | cs.IT math.IT | The minimum average number of bits needed to describe a random variable is its
entropy, assuming knowledge of the underlying statistics. On the other hand,
universal compression supposes that the distribution of the random variable,
while unknown, belongs to a known set $\cal P$ of distributions. Such universal
descriptions for the random variable are agnostic to the identity of the
distribution in $\cal P$. But because they are not matched exactly to the
underlying distribution of the random variable, the average number of bits they
use is higher, and the excess over the entropy used is the "redundancy". This
formulation is fundamental to problems not just in compression, but also
estimation and prediction and has a wide variety of applications from language
modeling to insurance.
In this paper, we study the redundancy of universal encodings of strings
generated by independent identically distributed (iid) sampling from a set
$\cal P$ of distributions over a countable support. We first show that if
describing a single sample from $\cal P$ incurs finite redundancy, then $\cal
P$ is tight but that the converse does not always hold. If a single sample can
be described with finite worst-case-regret (a more stringent formulation than
redundancy above), then it is known that length-$n$ iid samples incur only a
diminishing (in $n$) redundancy per symbol as $n$ increases. However, we show
it is possible that a collection $\cal P$ incurs finite redundancy, yet
description of length-$n$ iid samples incurs a constant redundancy per symbol
encoded. We then show a sufficient condition on $\cal P$ such that length-$n$
iid samples will incur diminishing redundancy per symbol encoded.
|
1404.0067 | Topics in social network analysis and network science | physics.soc-ph cs.SI | This chapter introduces statistical methods used in the analysis of social
networks and in the rapidly evolving parallel field of network science.
Although several instances of social network analysis in health services
research have appeared recently, the majority involve only the most basic
methods and thus scratch the surface of what might be accomplished.
Cutting-edge methods using relevant examples and illustrations in health
services research are provided.
|
1404.0074 | Quantum Turing automata | quant-ph cs.FL cs.IT math.IT | A denotational semantics of quantum Turing machines having a quantum control
is defined in the dagger compact closed category of finite dimensional Hilbert
spaces. Using the Moore-Penrose generalized inverse, a new additive trace is
introduced on the restriction of this category to isometries, which trace is
carried over to directed quantum Turing machines as monoidal automata. The
Joyal-Street-Verity Int construction is then used to extend this structure to a
reversible bidirectional one.
|
1404.0077 | Effective dimension in some general metric spaces | cs.CC cs.IT math.IT | We introduce the concept of effective dimension for a wide class of metric
spaces that are not required to have a computable measure. Effective dimension
was defined by Lutz in (Lutz 2003) for Cantor space and has also been extended
to Euclidean space. Lutz's effectivization uses the concepts of gale and
supergale; our extension of Hausdorff dimension to other metric spaces is also
based on a supergale characterization of dimension, which in practice avoids an
extra quantifier present in the classical definition of dimension that is based
on Hausdorff measure and therefore allows effectivization for small
time-bounds.
We present here the concept of constructive dimension and its
characterization in terms of Kolmogorov complexity, for which we extend the
concept of Kolmogorov complexity to any metric space defining the Kolmogorov
complexity of a point at a certain precision. Further research directions are
indicated.
|
1404.0084 | A Calculus of Located Entities | cs.PL cs.CE | We define BioScapeL, a stochastic pi-calculus in 3D-space. A novel aspect of
BioScapeL is that entities have programmable locations. The programmer can
specify a particular location at which to place an entity, or a location relative
to the current location of the entity. The motivation for the extension comes
from the need to describe the evolution of populations of biochemical species
in space, while keeping a sufficiently high level description, so that
phenomena like diffusion, collision, and confinement can remain part of the
semantics of the calculus. Combined with the random diffusion movement
inherited from BioScape, programmable locations allow us to capture the
assemblies of configurations of polymers, oligomers, and complexes such as
microtubules or actin filaments.
Further new aspects of BioScapeL include random translation and scaling.
Random translation is instrumental in describing the location of new entities
relative to the old ones. For example, when a cell secretes a hydronium ion,
the ion should be placed at a given distance from the originating cell, but in
a random direction. Additionally, scaling allows us to capture at a high level
events such as division and growth; for example, daughter cells after mitosis
have half the size of the mother cell.
|
1404.0086 | Using HMM in Strategic Games | cs.GT cs.IR cs.LG | In this paper we describe an approach to resolve strategic games in which
players can assume different types throughout the game. Our goal is to infer which
type the opponent is adopting at each moment so that we can increase the
player's odds. To achieve that we use Markov games combined with hidden Markov
model. We discuss a hypothetical example of a tennis game whose solution can be
applied to any game with similar characteristics.
|
1404.0091 | Interestingness a Unifying Paradigm Bipolar Function Composition | cs.IR | Interestingness is an important criterion by which we judge knowledge
discovery. But, interestingness has escaped all attempts to capture its
intuitive meaning into a concise and comprehensive form. A unifying paradigm is
formulated by function composition. We claim that composition is bipolar, i.e.
composition of exactly two functions, whose two semantic poles are relevance
and unexpectedness. The paradigm generality is demonstrated by case studies of
new interestingness functions, examples of known functions that fit the
framework, and counter-examples for which the paradigm points out the
lacking pole.
|
1404.0097 | Haplotype Assembly: An Information Theoretic View | cs.IT math.IT | This paper studies the haplotype assembly problem from an information
theoretic perspective. A haplotype is a sequence of nucleotide bases on a
chromosome, often conveniently represented by a binary string, that differ from
the bases in the corresponding positions on the other chromosome in a
homologous pair. Information about the order of bases in a genome is readily
inferred using short reads provided by high-throughput DNA sequencing
technologies. In this paper, the recovery of the target pair of haplotype
sequences using short reads is rephrased as a joint source-channel coding
problem. Two messages, representing haplotypes and chromosome memberships of
reads, are encoded and transmitted over a channel with erasures and errors,
where the channel model reflects salient features of high-throughput
sequencing. The focus of this paper is on the required number of reads for
reliable haplotype reconstruction, and both the necessary and sufficient
conditions are presented with order-wise optimal bounds.
|
1404.0099 | Venture: a higher-order probabilistic programming platform with
programmable inference | cs.AI cs.PL stat.CO stat.ML | We describe Venture, an interactive virtual machine for probabilistic
programming that aims to be sufficiently expressive, extensible, and efficient
for general-purpose use. Like Church, probabilistic models and inference
problems in Venture are specified via a Turing-complete, higher-order
probabilistic language descended from Lisp. Unlike Church, Venture also
provides a compositional language for custom inference strategies built out of
scalable exact and approximate techniques. We also describe four key aspects of
Venture's implementation that build on ideas from probabilistic graphical
models. First, we describe the stochastic procedure interface (SPI) that
specifies and encapsulates primitive random variables. The SPI supports custom
control flow, higher-order probabilistic procedures, partially exchangeable
sequences and ``likelihood-free'' stochastic simulators. It also supports
external models that do inference over latent variables hidden from Venture.
Second, we describe probabilistic execution traces (PETs), which represent
execution histories of Venture programs. PETs capture conditional dependencies,
existential dependencies and exchangeable coupling. Third, we describe
partitions of execution histories called scaffolds that factor global inference
problems into coherent sub-problems. Finally, we describe a family of
stochastic regeneration algorithms for efficiently modifying PET fragments
contained within scaffolds. Stochastic regeneration yields linear runtime scaling in
cases where many previous approaches scaled quadratically. We show how to use
stochastic regeneration and the SPI to implement general-purpose inference
strategies such as Metropolis-Hastings, Gibbs sampling, and blocked proposals
based on particle Markov chain Monte Carlo and mean-field variational inference
techniques.
|
1404.0101 | Quantization for Uplink Transmissions in Two-tier Networks with
Femtocells | cs.NI cs.IT math.IT | We propose two novel schemes to improve the sum-rate of a two-tier network
with femtocell where the backhaul uplink and downlink connecting the Base
Stations have limited capacity. The backhaul links are exploited to transport
the information in order to improve the decoding of the macrocell and femtocell
messages. In the first scheme, Quantize-and-Forward, the Femto Base Station
(FBS) quantizes what it receives and forwards it to the Macro Base Station
(MBS). Two quantization methods are considered: Elementary Quantization and
Wyner-Ziv Quantization. The second scheme is called Decode-and-Forward with
Quantized Side Information (DFQSI), to be distinguished from the conventional
Decode-and-Forward (DF) scheme. The DFQSI scheme exploits the
backhaul downlink to quantize and send the information about the message in the
macrocell to the FBS to help it better decode the message, cancel it and decode
the message in the femtocell. The results show that there are interesting
scenarios in which the proposed techniques offer considerable gains in terms of
maximal sum rate and max-min rate.
|
1404.0103 | Comparative Resilience Notions and Vertex Attack Tolerance of Scale-Free
Networks | cs.SI physics.soc-ph | We are concerned with an appropriate mathematical measure of resilience in
the face of targeted node attacks for arbitrary degree networks, and
subsequently comparing the resilience of different scale-free network models
with the proposed measure. We strongly motivate our resilience measure termed
\emph{vertex attack tolerance} (VAT), which is denoted mathematically as
$\tau(G) = \min_{S \subset V} \frac{|S|}{|V-S-C_{max}(V-S)|+1}$, where
$C_{max}(V-S)$ is the largest connected component in $V-S$. We attempt a
thorough comparison of VAT with several existing resilience notions:
conductance, vertex expansion, integrity, toughness, tenacity and scattering
number. Our comparisons indicate that for arbitrary degree distributions VAT
is the only measure that fully captures both the major \emph{bottlenecks} of a
network and the resulting \emph{component size distribution} upon targeted node
attacks (both captured in a manner proportional to the size of the attack set).
For the case of $d$-regular graphs, we prove that $\tau(G) \le d\Phi(G)$, where
$\Phi(G)$ is the conductance of the graph $G$. Conductance and expansion are
well-studied measures of robustness and bottlenecks in the case of regular
graphs but fail to capture resilience in the case of highly heterogeneous
degree graphs. Regarding comparison of different scale-free graph models, our
experimental results indicate that PLOD graphs with degree distributions
identical to BA graphs of the same size exhibit consistently better vertex
attack tolerance than the BA type graphs, although both graph types appear
asymptotically resilient for BA generative parameter $m = 2$. BA graphs with $m
= 1$ also appear to lack resilience, not only exhibiting very low VAT values,
but also great transparency in the identification of the vulnerable node sets,
namely the hubs, consistent with well known previous work.
|
1404.0106 | Traffic Monitoring Using M2M Communication | cs.CV | This paper presents an intelligent traffic monitoring system using wireless
vision sensor network that captures and processes the real-time video image to
obtain the traffic flow rate and vehicle speeds along different urban roadways.
This system will display the traffic states on the front roadways that can
guide the drivers to select the right way and avoid potential traffic
congestions. On the other hand, it will also monitor the vehicle speeds and
store the vehicle details, for those breaking the roadway speed limits, in its
database. The real-time traffic data is processed by the Personal Computer (PC)
at the sub-roadway station, and the traffic flow rate data is transmitted via
email to the Arduino 3G at the main roadway station, where the data is
extracted and the traffic flow rate is displayed.
|
1404.0138 | Efficient Algorithms and Error Analysis for the Modified Nystrom Method | cs.LG | Many kernel methods suffer from high time and space complexities and are thus
prohibitive in big-data applications. To tackle the computational challenge,
the Nystr\"om method has been extensively used to reduce time and space
complexities by sacrificing some accuracy. The Nystr\"om method speeds up
computation by constructing an approximation of the kernel matrix using only a
few columns of the matrix. Recently, a variant of the Nystr\"om method called
the modified Nystr\"om method has demonstrated significant improvement over the
standard Nystr\"om method in approximation accuracy, both theoretically and
empirically.
In this paper, we propose two algorithms that make the modified Nystr\"om
method practical. First, we devise a simple column selection algorithm with a
provable error bound. Our algorithm is more efficient and easier to implement
than and nearly as accurate as the state-of-the-art algorithm. Second, with the
selected columns at hand, we propose an algorithm that computes the
approximation in lower time complexity than the approach in the previous work.
Furthermore, we prove that the modified Nystr\"om method is exact under certain
conditions, and we establish a lower error bound for the modified Nystr\"om
method.
|
1404.0142 | Information-Theoretic Bounds for Performance of Resource-Constrained
Communication Systems | cs.IT cs.SY math.IT math.OC | Resource-constrained systems are prevalent in communications. Such a system
is composed of many components but only some of them can be allocated with
resources such as time slots. According to the amount of information about the
system, algorithms are employed to allocate resources and the overall system
performance depends on the result of resource allocation. We do not always have
complete information, and thus, the system performance may not be satisfactory.
In this work, we propose a general model for the resource-constrained
communication systems. We draw the relationship between system information and
performance and derive the performance bounds for the optimal algorithm for the
system. This gives the expected performance corresponding to the available
information, and we can determine if we should put more efforts to collect more
accurate information before actually constructing an algorithm for the system.
Several examples of applications in communications to the model are also given.
|
1404.0163 | Gender Asymmetries in Reality and Fiction: The Bechdel Test of Social
Media | cs.SI cs.CY physics.soc-ph | The subjective nature of gender inequality motivates the analysis and
comparison of data from real and fictional human interaction. We present a
computational extension of the Bechdel test: A popular tool to assess if a
movie contains a male gender bias, by looking for two female characters who
discuss something other than a man. We provide the tools to quantify Bechdel
scores for both genders, and we measure them in movie scripts and large
datasets of dialogues between users of MySpace and Twitter. Comparing movies
and users of social media, we find that movies and Twitter conversations have a
consistent male bias, which does not appear when analyzing MySpace.
Furthermore, the narrative of Twitter is closer to the movies that do not pass
the Bechdel test than to those that pass it.
We link the properties of movies and the users that share trailers of those
movies. Our analysis reveals some particularities of movies that pass the
Bechdel test: Their trailers are less popular, female users are more likely to
share them than male users, and users that share them tend to interact less
with male users. Based on our datasets, we define gender independence
measurements to analyze the gender biases of a society, as manifested through
digital traces of online behavior. Using the profile information of Twitter
users, we find larger gender independence for urban users in comparison to
rural ones. Additionally, the asymmetry between genders is larger for parents
and lower for students. Gender asymmetry varies across US states, increasing
with higher average income and latitude. This points to the relation between
gender inequality and social, economical, and cultural factors of a society,
and how gender roles exist in both fictional narratives and public online
dialogues.
|
1404.0173 | A Recursive Method for Enumeration of Costas Arrays | cs.IT math.IT | In this paper, we propose a recursive method for finding Costas arrays that
relies on a particular formation of Costas arrays from similar patterns of
smaller size. By using such an idea, the proposed algorithm is able to
dramatically reduce the computational burden (when compared to the exhaustive
search), and at the same time, still can find all possible Costas arrays of
given size. Similar to exhaustive search, the proposed method can be
conveniently implemented in parallel computing. The efficiency of the method is
discussed based on theoretical and numerical results.
|
1404.0195 | Extension theorems for self-dual codes over rings and new binary
self-dual codes | cs.IT math.IT | In this work, extension theorems are generalized to self-dual codes over
rings and as applications many new binary self-dual extremal codes are found
from self-dual codes over F_2^m+uF_2^m for m = 1, 2. The duality- and
distance-preserving Gray maps from F_4+uF_4 to (F_2+uF_2)^2 and (F_4)^2 are used to
obtain self-dual codes whose binary Gray images are [64,32,12]-extremal
self-dual. An F_2+uF_2-extension is used and as binary images, 178 extremal
binary self-dual codes of length 68 with new weight enumerators are obtained.
In particular, the first examples of codes with gamma = 3 and many codes with
the rare gamma = 4 and gamma = 6 parameters are obtained. In addition, two
hundred fifty doubly even self-dual [96,48,16] codes with new weight enumerators are
obtained from four-circulant codes over F_4 + uF_4. New extremal doubly even
binary codes of lengths 80 and 88 are also found by the F_2+uF_2-lifts of
binary four-circulant codes, and a corresponding result about 3-designs is
stated.
|
1404.0200 | Household Electricity Demand Forecasting -- Benchmarking
State-of-the-Art Methods | cs.LG stat.AP | The increasing use of renewable energy sources with variable output, such as
solar photovoltaic and wind power generation, calls for Smart Grids that
effectively manage flexible loads and energy storage. The ability to forecast
consumption at different locations in distribution systems will be a key
capability of Smart Grids. The goal of this paper is to benchmark
state-of-the-art methods for forecasting electricity demand on the household
level across different granularities and time scales in an explorative way,
thereby revealing potential shortcomings and finding promising directions for
future research in this area. We apply a number of forecasting methods
including ARIMA, neural networks, and exponential smoothing using several
strategies for training data selection, in particular day type and sliding
window based strategies. We consider forecasting horizons ranging between 15
minutes and 24 hours. Our evaluation is based on two data sets containing the
power usage of individual appliances at second time granularity collected over
the course of several months. The results indicate that forecasting accuracy
varies significantly depending on the choice of forecasting methods/strategy
and the parameter configuration. Measured by the Mean Absolute Percentage Error
(MAPE), the considered state-of-the-art forecasting methods rarely beat
corresponding persistence forecasts. Overall, we observed MAPEs in the range
between 5 and >100%. The average MAPE for the first data set was ~30%, while it
was ~85% for the other data set. These results show substantial room for improvement.
Based on the identified trends and experiences from our experiments, we
contribute a detailed discussion of promising future research.
|
1404.0218 | Sparse Model Uncertainties in Compressed Sensing with Application to
Convolutions and Sporadic Communication | cs.IT math.IT | The success of the compressed sensing paradigm has shown that a substantial
reduction in sampling and storage complexity can be achieved in certain linear
and non-adaptive estimation problems. It is therefore an advisable strategy for
noncoherent information retrieval in, for example, sporadic blind and
semi-blind communication and sampling problems. But, the conventional model is
not practical here since the compressible signals have to be estimated from
samples taken solely on the output of an un-calibrated system which is unknown
during measurement but often compressible. Conventionally, one has either to
operate at suboptimal sampling rates or the recovery performance substantially
suffers from the dominance of model mismatch. In this work we discuss such
types of estimation problems and focus on bilinear inverse problems. We link this
problem to the recovery of low-rank and sparse matrices and establish stable
low-dimensional embeddings of the uncalibrated receive signals, thereby also
addressing efficient communication-oriented methods like universal random
demodulation. As an example, we investigate sparse convolutions in more
detail, serving as a basic communication channel model. Using some recent
results from additive combinatorics, we show that such signals can be
efficiently low-rate sampled by semi-blind methods. Finally, we present a
further application of these results in the field of phase retrieval from
intensity Fourier measurements.
|
1404.0237 | Design of Symbolic Controllers for Networked Control Systems | cs.SY | Networked Control Systems (NCS) are distributed systems where plants,
sensors, actuators and controllers communicate over shared networks. Non-ideal
behaviors of the communication network include variable sampling/transmission
intervals and communication delays, packet losses, communication constraints
and quantization errors. NCS have been the object of intensive study in the
last few years. However, due to the inherent complexity of NCS, current
literature focuses on a subset of these non-idealities and mostly considers
stability and stabilizability problems. Recent technology advances need
different and more complex control objectives to be considered. In this paper
we present first a general model of NCS, including most relevant non-idealities
of the communication network; then, we propose a symbolic model approach to the
control design with objectives expressed in terms of non-deterministic
transition systems. The presented results are based on recent advances in
symbolic control design of continuous and hybrid systems. An example in the
context of robot motion planning with remote control is included, showing the
effectiveness of the proposed approach.
|
1404.0255 | A Case Where Interference Does Not Affect The Channel Dispersion | cs.IT math.IT | In 1975, Carleial presented a special case of an interference channel in
which the interference does not reduce the capacity of the constituent
point-to-point Gaussian channels. In this work, we show that if the
inequalities in the conditions that Carleial stated are strict, the dispersions
are similarly unaffected. More precisely, in this work, we characterize the
second-order coding rates of the Gaussian interference channel in the strictly
very strong interference regime. In other words, we characterize the speed of
convergence of rates of optimal block codes towards a boundary point of the
(rectangular) capacity region. These second-order rates are expressed in terms
of the average probability of error and variances of some modified information
densities which coincide with the dispersion of the (single-user) Gaussian
channel. We thus conclude that the dispersions are unaffected by interference
in this channel model.
|
1404.0265 | On Minimizing the Maximum Broadcast Decoding Delay for Instantly
Decodable Network Coding | cs.IT cs.NI math.IT | In this paper, we consider the problem of minimizing the maximum broadcast
decoding delay experienced by all the receivers of generalized instantly
decodable network coding (IDNC). Unlike the sum decoding delay, the maximum
decoding delay as a definition of delay for IDNC allows a more equitable
distribution of the delays between the different receivers and thus a better
Quality of Service (QoS). In order to solve this problem, we first derive the
expressions for the probability distributions of maximum decoding delay
increments. Given these expressions, we formulate the problem as a maximum
weight clique problem in the IDNC graph. Although this problem is known to be
NP-hard, we design a greedy algorithm to perform effective packet selection.
Through extensive simulations, we compare the sum decoding delay and the max
decoding delay experienced when applying the policies to minimize the sum
decoding delay [1] and our policy to reduce the max decoding delay. Simulations
results show that our policy achieves a good balance among all the delay aspects
in all situations and outperforms the sum decoding delay policy in minimizing
the sum decoding delay when the channel conditions become harsher.
They also show that our definition of delay significantly improves the number of
served receivers when they are subject to strict delay constraints.
|
1404.0267 | The diffusion dynamics of choice: From durable goods markets to fashion
first names | physics.soc-ph cs.SI | Goods, styles, ideologies are adopted by society through various mechanisms.
In particular, adoption driven by innovation is extensively studied by
marketing economics. Mathematical models are currently used to forecast the
sales of innovative goods. Inspired by the theory of diffusion processes
developed for marketing economics, we propose, for the first time, a predictive
framework for the mechanism of fashion, which we apply to first names. Analyses
of French, Dutch and US national databases validate our modelling approach for
thousands of first names, covering, on average, more than 50% of the yearly
incidence in each database. In these cases, it is thus possible to forecast how
popular the first names will become and when they will run out of fashion.
Furthermore, we uncover a clear distinction between popularity and fashion:
less popular names, typically not included in studies of fashion, may be driven
by fashion, as well.
|
1404.0273 | Lattice Codes for Many-to-One Interference Channels With and Without
Cognitive Messages | cs.IT math.IT | A new achievable rate region is given for the Gaussian cognitive many-to-one
interference channel. The proposed novel coding scheme is based on the
compute-and-forward approach with lattice codes. Using the idea of decoding
sums of codewords, our scheme improves considerably upon the conventional
coding schemes which treat interference as noise or decode messages
simultaneously. Our strategy also extends directly to the usual many-to-one
interference channels without cognitive messages. Compared to the usual
compute-and-forward scheme where a fixed lattice is used for the code
construction, the novel scheme employs scaled lattices and also encompasses key
ingredients of the existing schemes for the cognitive interference channel.
With this new component, our scheme achieves a larger rate region in general.
For some symmetric channel settings, new constant gap or capacity results are
established, which are independent of the number of users in the system.
|