| id | title | categories | abstract |
|---|---|---|---|
1209.2295
|
Multimodal diffusion geometry by joint diagonalization of Laplacians
|
cs.CV cs.AI
|
We construct an extension of diffusion geometry to multiple modalities
through joint approximate diagonalization of Laplacian matrices. This naturally
extends classical data analysis tools based on spectral geometry, such as
diffusion maps and spectral clustering. We provide several synthetic and real
examples of manifold learning, retrieval, and clustering demonstrating that the
joint diffusion geometry frequently better captures the inherent structure of
multi-modal data. We also show that many previous attempts to construct
multimodal spectral clustering can be seen as particular cases of joint
approximate diagonalization of the Laplacians.
|
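As a loose illustration of the idea (not the paper's actual joint approximate diagonalization algorithm, which optimizes an off-diagonality criterion over all Laplacians), the sketch below couples two modalities by diagonalizing the averaged Laplacian and using that basis as a shared approximate eigenbasis. The adjacency matrices and node count are invented for the example:

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

# Two modalities observed on the same four nodes (toy adjacencies).
W1 = np.array([[0, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
W2 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 1],
               [0, 1, 0, 1],
               [0, 1, 1, 0]], dtype=float)

L1, L2 = laplacian(W1), laplacian(W2)

# Crude coupling: one orthonormal basis U from the averaged Laplacian,
# reused as a shared (approximate) eigenbasis for both modalities.
vals, U = np.linalg.eigh(0.5 * (L1 + L2))

def offdiag_mass(L, U):
    """How far U is from exactly diagonalizing L (0 = exact)."""
    M = U.T @ L @ U
    return np.linalg.norm(M - np.diag(np.diag(M)))
```

The joint eigenvalues `vals` play the role of a multimodal diffusion spectrum, and `offdiag_mass(L1, U)` quantifies the approximation error that a joint-diagonalization method trades off across modalities.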
1209.2322
|
On firm specific characteristics of pharmaceutical generics and
incentives to permanence under fuzzy conditions
|
cs.AI
|
The aim of this paper is to develop a methodology for analysing, from a
microeconomic perspective, the incentives for entry, permanence and exit in the
market for pharmaceutical generics under fuzzy conditions. In
an empirical application of our proposed methodology, the potential towards
permanence of labs with different characteristics has been estimated. The case
we deal with is set in an open market where global players diversify into
different national markets of pharmaceutical generics. Risk issues are
significantly important in deterring decision makers from expanding in the
generic pharmaceutical business. However, not all players are affected in the
same way and/or to the same extent. Small, non-diversified generics labs are in
the worst position. We have highlighted that the expected NPV and the number of
generics in the portfolio of a pharmaceutical lab are important variables, but
that it is also important to consider the degree of diversification. Labs with
a higher potential for diversification across markets have an advantage over
smaller labs. We have described a fuzzy decision support system based on the
Mamdani model in order to determine the incentives for a laboratory to remain
in the market both when it is stable and when it is growing.
|
1209.2341
|
Leveraging Sentiment to Compute Word Similarity
|
cs.IR cs.CL
|
In this paper, we introduce a new WordNet based similarity metric, SenSim,
which incorporates sentiment content (i.e., degree of positive or negative
sentiment) of the words being compared to measure the similarity between them.
The proposed metric is based on the hypothesis that knowing the sentiment is
beneficial in measuring the similarity. To verify this hypothesis, we measure
and compare the annotator agreement for 2 annotation strategies: 1) sentiment
information of a pair of words is considered while annotating and 2) sentiment
information of a pair of words is not considered while annotating.
Inter-annotator correlation scores show that the agreement is better when the
two annotators consider sentiment information while assigning a similarity
score to a pair of words. We use this hypothesis to measure the similarity
between a pair of words. Specifically, we represent each word as a vector
containing sentiment scores of all the content words in the WordNet gloss of
the sense of that word. These sentiment scores are derived from a sentiment
lexicon. We then measure the cosine similarity between the two vectors. We
perform both intrinsic and extrinsic evaluation of SenSim and compare the
performance with other widely used WordNet similarity metrics.
|
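A minimal sketch of the vector construction described above, with a made-up sentiment lexicon and toy glosses standing in for a real lexicon and WordNet (all words, glosses and scores here are illustrative assumptions):

```python
import math

# Hypothetical sentiment lexicon with scores in [-1, 1].
LEXICON = {"happy": 0.8, "joy": 0.9, "pleasure": 0.7,
           "sad": -0.8, "grief": -0.9, "pain": -0.7, "feeling": 0.1}

# Toy "glosses": content words of one sense of each headword.
GLOSS = {
    "delight": ["joy", "pleasure", "feeling"],
    "gladness": ["happy", "joy", "feeling"],
    "sorrow": ["sad", "grief", "pain"],
}

def sentiment_vector(word, vocab):
    """Vector of sentiment scores of the gloss content words."""
    gloss = GLOSS[word]
    return [LEXICON[w] if w in gloss else 0.0 for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

vocab = sorted(LEXICON)
sim = lambda a, b: cosine(sentiment_vector(a, vocab),
                          sentiment_vector(b, vocab))
```

With these toy glosses, `sim("delight", "gladness")` exceeds `sim("delight", "sorrow")`, reflecting the hypothesis that shared sentiment content raises similarity.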
1209.2352
|
Feature Specific Sentiment Analysis for Product Reviews
|
cs.IR cs.CL
|
In this paper, we present a novel approach to identify feature specific
expressions of opinion in product reviews with different features and mixed
emotions. The objective is realized by identifying a set of potential features
in the review and extracting opinion expressions about those features by
exploiting their associations. Capitalizing on the view that more closely
associated words come together to express an opinion about a certain feature,
dependency parsing is used to identify relations between the opinion
expressions. The system learns the set of significant relations to be used by
dependency parsing and a threshold parameter which allows us to merge closely
associated opinion expressions. The data requirement is minimal as this is a
one time learning of the domain independent parameters. The associations are
represented in the form of a graph which is partitioned to finally retrieve the
opinion expression describing the user specified feature. We show that the
system achieves a high accuracy across all domains and performs at par with
state-of-the-art systems despite its data limitations.
|
1209.2355
|
Counterfactual Reasoning and Learning Systems
|
cs.LG cs.AI cs.IR math.ST stat.TH
|
This work shows how to leverage causal inference to understand the behavior
of complex learning systems interacting with their environment and predict the
consequences of changes to the system. Such predictions allow both humans and
algorithms to select changes that improve both the short-term and long-term
performance of such systems. This work is illustrated by experiments carried
out on the ad placement system associated with the Bing search engine.
|
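A core tool in this line of work is counterfactual estimation from logged randomized data via inverse propensity scoring: reweight logged rewards by the ratio of target-policy to logging-policy probabilities. The two-action setup, probabilities, and reward model below are invented for illustration, not taken from the paper:

```python
import random

random.seed(0)

def log_data(n=20000):
    """Logs from a randomized policy: (action, propensity, reward)."""
    logs = []
    for _ in range(n):
        a = 0 if random.random() < 0.7 else 1
        prop = 0.7 if a == 0 else 0.3
        reward = (0.2 if a == 0 else 0.5) + random.gauss(0, 0.1)
        logs.append((a, prop, reward))
    return logs

def ips_estimate(logs, target_probs):
    """Inverse-propensity estimate of the target policy's mean reward."""
    return sum(target_probs[a] / p * r for a, p, r in logs) / len(logs)

logs = log_data()
# Counterfactual question: what if we always played action 1?
est = ips_estimate(logs, target_probs={0: 0.0, 1: 1.0})
```

The estimate recovers the mean reward of the hypothetical policy (about 0.5 here) without ever deploying it, which is the sense in which logged experiments let one "predict the consequences of changes to the system".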
1209.2388
|
On the Complexity of Bandit and Derivative-Free Stochastic Convex
Optimization
|
cs.LG math.OC stat.ML
|
The problem of stochastic convex optimization with bandit feedback (in the
learning community) or without knowledge of gradients (in the optimization
community) has received much attention in recent years, in the form of
algorithms and performance upper bounds. However, much less is known about the
inherent complexity of these problems, and there are few lower bounds in the
literature, especially for nonlinear functions. In this paper, we investigate
the attainable error/regret in the bandit and derivative-free settings, as a
function of the dimension d and the available number of queries T. We provide a
precise characterization of the attainable performance for strongly-convex and
smooth functions, which also implies a non-trivial lower bound for more general
problems. Moreover, we prove that in both the bandit and derivative-free
setting, the required number of queries must scale at least quadratically with
the dimension. Finally, we show that on the natural class of quadratic
functions, it is possible to obtain a "fast" O(1/T) error rate in terms of T,
under mild assumptions, even without having access to gradients. To the best of
our knowledge, this is the first such rate in a derivative-free stochastic
setting, and holds despite previous results which seem to imply the contrary.
|
1209.2400
|
Identification of Fertile Translations in Medical Comparable Corpora: a
Morpho-Compositional Approach
|
cs.CL
|
This paper defines a method for bilingual lexicon extraction in the biomedical
domain from comparable corpora. The method is based on compositional translation and
exploits morpheme-level translation equivalences. It can generate translations
for a large variety of morphologically constructed words and can also generate
'fertile' translations. We show that fertile translations increase the overall
quality of the extracted lexicon for English to French translation.
|
1209.2419
|
The role of caretakers in disease dynamics
|
physics.soc-ph cs.SI nlin.AO q-bio.PE
|
One of the key challenges in modeling the dynamics of contagion phenomena is
to understand how the structure of social interactions shapes the time course
of a disease. Complex network theory has provided significant advances in this
context. However, awareness of an epidemic in a population typically yields
behavioral changes that correspond to changes in the network structure on which
the disease evolves. This feedback mechanism has not been investigated in
depth. For example, one would intuitively expect susceptible individuals to
avoid infected individuals. However, doctors treating patients or parents
tending sick children may also increase their contact with the infected, in an
effort to speed up recovery, thereby exposing themselves to higher risks of
infection. We study the role of these caretaker links in an adaptive network
model where individuals react to a disease by increasing or decreasing the
amount of contact they make with infected individuals. We find that pure
avoidance, with only a few caretaker links, is the best strategy for curtailing
an SIS disease in networks that possess a large topological variability. In
more homogeneous networks, disease prevalence is decreased for low
concentrations of caretakers whereas a high prevalence emerges if caretaker
concentration passes a well defined critical value.
|
1209.2433
|
Correlations between Google search data and Mortality Rates
|
stat.AP cs.IR
|
Inspired by correlations recently discovered between Google search data and
financial markets, we show correlations between Google search data and
mortality rates. Words with negative connotations may be associated with
increased mortality rates, while words with positive connotations may be
associated with decreased mortality rates; statistical methods were employed to
investigate this further.
|
1209.2434
|
Query Complexity of Derivative-Free Optimization
|
stat.ML cs.LG
|
This paper provides lower bounds on the convergence rate of Derivative Free
Optimization (DFO) with noisy function evaluations, exposing a fundamental and
unavoidable gap between the performance of algorithms with access to gradients
and those with access to only function evaluations. However, there are
situations in which DFO is unavoidable, and for such situations we propose a
new DFO algorithm that is proved to be near optimal for the class of strongly
convex objective functions. A distinctive feature of the algorithm is that it
uses only Boolean-valued function comparisons, rather than function
evaluations. This makes the algorithm useful in an even wider range of
applications, such as optimization based on paired comparisons from human
subjects, for example. We also show that regardless of whether DFO is based on
noisy function evaluations or Boolean-valued function comparisons, the
convergence rate is the same.
|
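To illustrate optimization from Boolean-valued comparisons alone, here is a noise-free ternary-search sketch for a one-dimensional unimodal function. The paper's algorithm handles noisy comparisons and strongly convex functions in higher dimensions, which this toy version does not attempt; the objective and interval are arbitrary:

```python
def compare(f, x, y):
    """Boolean oracle: is f(x) < f(y)? (noise-free in this sketch)."""
    return f(x) < f(y)

def comparison_minimize(f, lo, hi, iters=60):
    """Minimize a unimodal function on [lo, hi] using only pairwise
    comparisons, never function values (noise-free ternary search)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if compare(f, m1, m2):
            hi = m2      # minimum lies in [lo, m2]
        else:
            lo = m1      # minimum lies in [m1, hi]
    return (lo + hi) / 2

xstar = comparison_minimize(lambda x: (x - 1.3) ** 2, -5, 5)
```

Because only the outcomes of comparisons are used, the same loop could be driven by paired human judgments instead of a computable `f`.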
1209.2476
|
Local Dimension of Complex Networks
|
physics.soc-ph cs.SI physics.data-an
|
Dimensionality is one of the most important properties of complex physical
systems. However, only recently has this concept been considered in the context
of complex networks. In this paper we further develop the previously introduced
definitions of dimension in complex networks by presenting a new method to
characterize the dimensionality of individual nodes. The methodology consists
in obtaining patterns of dimensionality at different scales for each node,
which can be used to detect regions with distinct dimensional structures as
well as borders. We also apply this technique to power grid networks, showing,
quantitatively, that the continental European power grid is substantially more
planar than the network covering the western states of the US, which presents a
topological dimension higher than its intrinsic embedding space dimension.
Local dimension also successfully reveals how distinct regions of a network
topology spread along the degrees of freedom when the network is embedded in a
metric space.
|
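A node-level notion of dimension can be illustrated by the growth rate of balls around a node: if N(r) nodes lie within graph distance r, the local dimension is the log-log slope of N(r) versus r. A sketch on a square grid, where the bulk slope should approach 2 (the grid size and radii are arbitrary choices, and the estimator is a simplification of the paper's multiscale patterns):

```python
import math
from collections import deque

def ball_sizes(adj, source, rmax):
    """N(r): number of nodes within graph distance r of source."""
    dist = {source: 0}
    q = deque([source])
    while q:                       # breadth-first search
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [sum(1 for d in dist.values() if d <= r) for r in range(1, rmax + 1)]

# Build a 21x21 grid graph.
n = 21
adj = {(i, j): [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]
       for i in range(n) for j in range(n)}

sizes = ball_sizes(adj, (10, 10), 8)
# Local dimension estimate: slope of log N(r) between r = 4 and r = 8.
local_dim = math.log(sizes[7] / sizes[3]) / math.log(8 / 4)
```

Repeating this at boundary nodes gives a smaller slope, which is how patterns of dimensionality can mark borders between regions.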
1209.2486
|
On sampling social networking services
|
stat.AP cs.SI
|
This article aims at summarizing the existing methods for sampling social
networking services and proposing a faster confidence interval for related
sampling methods. It also includes comparisons of common network sampling
techniques.
|
1209.2493
|
WikiSent : Weakly Supervised Sentiment Analysis Through Extractive
Summarization With Wikipedia
|
cs.IR cs.CL
|
This paper describes a weakly supervised system for sentiment analysis in the
movie review domain. The objective is to classify a movie review into a
polarity class, positive or negative, based on those sentences bearing opinion
on the movie alone. The irrelevant text, not directly related to the reviewer
opinion on the movie, is left out of analysis. Wikipedia incorporates the world
knowledge of movie-specific features in the system which is used to obtain an
extractive summary of the review, consisting of the reviewer's opinions about
the specific aspects of the movie. This filters out the concepts which are
irrelevant or objective with respect to the given movie. The proposed system,
WikiSent, does not require any labeled data for training. The only weak
supervision arises out of the usage of resources like WordNet, Part-of-Speech
Tagger and Sentiment Lexicons by virtue of their construction. WikiSent
achieves a considerable accuracy improvement over the baseline and has a better
or comparable accuracy to the existing semi-supervised and unsupervised systems
in the domain, on the same dataset. We also perform a general movie review
trend analysis using WikiSent to find the trend in movie-making and the public
acceptance in terms of movie genre, year of release and polarity.
|
1209.2495
|
TwiSent: A Multistage System for Analyzing Sentiment in Twitter
|
cs.IR cs.CL
|
In this paper, we present TwiSent, a sentiment analysis system for Twitter.
Based on the topic searched, TwiSent collects tweets pertaining to it and
categorizes them into the different polarity classes positive, negative and
objective. However, analyzing micro-blog posts has many inherent challenges
compared to the other text genres. Through TwiSent, we address the problems of
1) Spams pertaining to sentiment analysis in Twitter, 2) Structural anomalies
in the text in the form of incorrect spellings, nonstandard abbreviations,
slangs etc., 3) Entity specificity in the context of the topic searched and 4)
Pragmatics embedded in text. The system performance is evaluated on manually
annotated gold standard data and on an automatically annotated tweet set based
on hashtags. It is a common practice to show the efficacy of a supervised
system on an automatically annotated dataset. However, we show that such a
system achieves lower classification accuracy when tested on a generic Twitter
dataset. We also show that our system performs much better than an existing
system.
|
1209.2501
|
Performance Evaluation of Predictive Classifiers For Knowledge Discovery
From Engineering Materials Data Sets
|
cs.LG
|
In this paper, naive Bayesian and C4.5 decision tree classifiers (DTC) are
successively applied in materials informatics to classify engineering
materials into different classes for the selection of materials that suit the
input design specifications. The classifiers are analyzed individually, and
their performance is evaluated with confusion-matrix predictive parameters and
standard measures; the classification results are analyzed on different
classes of materials. A comparison of the classifiers shows that the naive
Bayesian classifier is more accurate than the C4.5 DTC. The knowledge
discovered by the naive Bayesian classifier can be employed for decision
making in materials selection in manufacturing industries.
|
1209.2515
|
Wavelet Based Image Coding Schemes : A Recent Survey
|
cs.CV
|
A variety of new and powerful algorithms have been developed for image
compression over the years. Among them, wavelet-based image compression
schemes have gained much popularity due to their overlapping nature, which
reduces the blocking artifacts common in JPEG compression, and their
multiresolution character, which leads to superior energy compaction with
high-quality reconstructed images. This paper provides a detailed survey of
some of the popular wavelet coding techniques such as the Embedded Zerotree
Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree (SPIHT) coding, the
Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding
with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding
techniques like the Wavelet Difference Reduction (WDR) and the Adaptive Scanned
Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency
Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder
(EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run
(SR) coding and the recent Geometric Wavelet (GW) coding are also discussed.
Based on the review, recommendations and discussions are presented for
algorithm development and implementation.
|
1209.2541
|
Absence of epidemic thresholds in a growing adaptive network
|
physics.soc-ph cs.SI nlin.AO
|
The structure of social contact networks strongly influences the dynamics of
epidemic diseases. In particular the scale-free structure of real-world social
networks allows unlikely diseases with low infection rates to spread and become
endemic. However, particularly for potentially fatal diseases, the impact of
the disease on the social structure also cannot be neglected, leading to a
complex interplay. Here, we consider the growth of a network by preferential
attachment from which nodes are simultaneously removed due to an SIR epidemic.
We show that increased infectiousness increases the prevalence of the disease
and simultaneously causes a transition from scale-free to exponential topology.
Although a transition to a degree distribution with finite variance takes
place, the network still exhibits no epidemic threshold in the thermodynamic
limit. We illustrate these results using agent-based simulations and
analytically tractable approximation schemes.
|
1209.2542
|
Joint Detection/Decoding Algorithms for Nonbinary LDPC Codes over ISI
Channels
|
cs.IT math.IT
|
This paper is concerned with the application of nonbinary low-density
parity-check (NB-LDPC) codes to binary input inter-symbol interference (ISI)
channels. Two low-complexity joint detection/decoding algorithms are proposed.
One is referred to as max-log-MAP/X-EMS algorithm, which is implemented by
exchanging soft messages between the max-log-MAP detector and the extended
min-sum (EMS) decoder. The max-log-MAP/X-EMS algorithm is applicable to general
NB-LDPC codes. The other one, referred to as Viterbi/GMLGD algorithm, is
designed in particular for majority-logic decodable NB-LDPC codes. The
Viterbi/GMLGD algorithm works in an iterative manner by exchanging
hard-decisions between the Viterbi detector and the generalized majority-logic
decoder (GMLGD). As a by-product, a variant of the original EMS algorithm is
proposed, which is referred to as the \mu-EMS algorithm. In the \mu-EMS algorithm,
the messages are truncated according to an adaptive threshold, resulting in a
more efficient algorithm. Simulation results show that the max-log-MAP/X-EMS
algorithm performs as well as the traditional iterative detection/decoding
algorithm based on the BCJR algorithm and the QSPA, but with lower complexity.
The complexity can be further reduced for majority-logic decodable NB-LDPC
codes by executing the Viterbi/GMLGD algorithm with a performance degradation
within one dB. Simulation results also confirm that the \mu-EMS algorithm
requires lower computational loads than the EMS algorithm with a fixed
threshold. These algorithms provide good candidates for trade-offs between
performance and complexity.
|
1209.2548
|
Training a Feed-forward Neural Network with Artificial Bee Colony Based
Backpropagation Method
|
cs.NE cs.AI
|
Back-propagation algorithm is one of the most widely used and popular
techniques to optimize the feed forward neural network training. Nature
inspired meta-heuristic algorithms also provide derivative-free solution to
optimize complex problem. Artificial bee colony algorithm is a nature inspired
meta-heuristic algorithm, mimicking the foraging or food source searching
behaviour of bees in a bee colony and this algorithm is implemented in several
applications for an improved optimized outcome. The proposed method in this
paper includes an improved artificial bee colony algorithm based
back-propagation neural network training method for fast and improved
convergence rate of the hybrid neural network learning method. The results are
compared with those of the genetic-algorithm-based back-propagation method,
another hybridized procedure of its kind. The analysis is performed over
standard data sets and demonstrates the efficiency of the proposed method in
terms of convergence speed and rate.
|
1209.2602
|
Internal joint forces in dynamics of a 3-PRP planar parallel robot
|
cs.RO
|
Recursive matrix relations for the complete dynamics of a 3-PRP planar
parallel robot are established in this paper. Three identical planar legs
connecting to the moving platform are located in the same vertical plane.
Knowing the motion of the platform, we develop first the inverse kinematical
problem and determine the positions, velocities and accelerations of the robot.
Further, the inverse dynamic problem is solved using an approach based on the
principle of virtual work. Finally, some graphs of simulation for the input
powers of three actuators and the internal joint forces are obtained.
|
1209.2620
|
Probabilities on Sentences in an Expressive Logic
|
cs.LO cs.AI cs.LG math.LO math.PR
|
Automated reasoning about uncertain knowledge has many applications. One
difficulty when developing such systems is the lack of a completely
satisfactory integration of logic and probability. We address this problem
directly. Expressive languages like higher-order logic are ideally suited for
representing and reasoning about structured knowledge. Uncertain knowledge can
be modeled by using graded probabilities rather than binary truth-values. The
main technical problem studied in this paper is the following: Given a set of
sentences, each having some probability of being true, what probability should
be ascribed to other (query) sentences? A natural wish-list, among others, is
that the probability distribution (i) is consistent with the knowledge base,
(ii) allows for a consistent inference procedure and in particular (iii)
reduces to deductive logic in the limit of probabilities being 0 and 1, (iv)
allows (Bayesian) inductive reasoning and (v) learning in the limit and in
particular (vi) allows confirmation of universally quantified
hypotheses/sentences. We translate this wish-list into technical requirements
for a prior probability and show that probabilities satisfying all our criteria
exist. We also give explicit constructions and several general
characterizations of probabilities that satisfy some or all of the criteria and
various (counter) examples. We also derive necessary and sufficient conditions
for extending beliefs about finitely many sentences to suitable probabilities
over all sentences, and in particular least dogmatic or least biased ones. We
conclude with a brief outlook on how the developed theory might be used and
approximated in autonomous reasoning agents. Our theory is a step towards a
globally consistent and empirically satisfactory unification of probability and
logic.
|
1209.2641
|
C-PASS-PC: A Cloud-driven Prototype of Multi-Center Proactive
Surveillance System for Prostate Cancer
|
cs.CE
|
Currently there are many clinical trials using paper case report forms as the
primary data collection tool. Cloud computing platforms offer great potential
for increasing efficiency through a web-based data collection interface,
especially for large-scale multi-center trials. Traditionally, clinical and
biological data for multi-center trials are stored in one dedicated,
centralized database system running at a data coordinating center (DCC). This
paper presents C-PASS-PC, a cloud-driven prototype of multi-center proactive
surveillance system for prostate cancer. The prototype is developed in PHP,
JQuery and CSS with an Oracle backend in a local Web server and database server
and deployed on Google App Engine (GAE) and Google Cloud SQL-MySQL. The
deploying process is fast and easy to follow. The C-PASS-PC prototype can be
accessed through an SSL-enabled web browser. Our approach proves the concept
that cloud computing platforms such as GAE are a suitable and flexible solution
in the near future for multi-center clinical trials.
|
1209.2647
|
Shadow Theory, data model design for data integration
|
cs.DB
|
For data integration in information ecosystems, semantic heterogeneity is a
known difficulty. In this paper, we propose Shadow Theory as the philosophical
foundation to address this issue. It is based on the notion of shadows in
Plato's Allegory of the Cave. What we can observe are just shadows, and
meanings of shadows are mental entities that only exist in viewers' cognitive
structures. Using an enterprise customer data integration example, we propose
six design principles and an algebra to support the required operations.
|
1209.2657
|
Sparse Representation of Astronomical Images
|
math-ph cs.CV math.MP
|
Sparse representation of astronomical images is discussed. It is shown that a
significant gain in sparsity is achieved when particular mixed dictionaries are
used for approximating these types of images with greedy selection strategies.
Experiments are conducted to confirm: i) effectiveness at producing sparse
representations; ii) competitiveness with respect to the time required to
process large images. The latter is a consequence of the suitability of the
proposed dictionaries for approximating images in partitions of small blocks.
This feature makes it possible to apply the effective greedy selection
technique Orthogonal Matching Pursuit, up to some block size. For blocks
exceeding that size, a refinement of the original Matching Pursuit approach is
considered. The resulting method is termed Self Projected Matching Pursuit,
because it is shown to be effective for implementing, via Matching Pursuit
itself, the optional back-projection intermediate steps in that approach.
|
1209.2660
|
Review of strategies for a comprehensive simulation in sputtering
devices
|
cs.CE physics.plasm-ph
|
The development of sputtering facilities, at the moment, is mainly pursued
through experimental tests, or simply by expertise in the field, and relies
much less on numerical simulation of the process environment. This leads to
great efforts and empirically, roughly optimized solutions: in fact, the
simulation of these devices, at the state of the art, is quite good at
predicting the behavior of single steps of the overall deposition process, but
a full integration among the tools simulating the various phenomena involved
in a sputtering device still seems some way off. We summarize here the techniques and codes already
available for problems of interest in sputtering facilities, and we try to
outline the possible features of a comprehensive simulation framework. This
framework should be able to integrate the single paradigms, dealing with
aspects going from the plasma environment up to the distribution and properties
of the deposited film, not only on the surface of the substrate, but also on
the walls of the process chamber.
|
1209.2672
|
New Crosstalk Avoidance Codes Based on a Novel Pattern Classification
|
cs.IT math.IT
|
The crosstalk delay associated with global on-chip interconnects becomes more
severe in deep submicron technology, and hence can greatly affect the overall
system performance. Based on a delay model proposed by Sotiriadis et al.,
transition patterns over a bus can be classified according to their delays.
Using this classification, crosstalk avoidance codes (CACs) have been proposed
to alleviate the crosstalk delays by restricting the transition patterns on a
bus. In this paper, we first propose a new classification of transition
patterns, and then devise a new family of CACs based on this classification. In
comparison to the previous classification, our classification has more classes
and the delays of its classes do not overlap, both leading to more accurate
control of delays. Our new family of CACs includes some previously proposed
codes as well as new codes with reduced delays and improved throughput. Thus,
this new family of crosstalk avoidance codes provides a wider variety of
tradeoffs between bus delay and efficiency. Finally, since our analytical
approach to the classification and CACs treats the technology-dependent
parameters as variables, our approach can be easily adapted to a wide variety
of technologies.
|
1209.2673
|
Conditional validity of inductive conformal predictors
|
cs.LG
|
Conformal predictors are set predictors that are automatically valid in the
sense of having coverage probability equal to or exceeding a given confidence
level. Inductive conformal predictors are a computationally efficient version
of conformal predictors satisfying the same property of validity. However,
inductive conformal predictors have only been known to control unconditional
coverage probability. This paper explores various versions of conditional
validity and various ways to achieve them using inductive conformal predictors
and their modifications.
|
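A minimal sketch of an inductive (split) conformal predictor achieving the unconditional validity the abstract starts from: fit on one split, compute nonconformity scores on a calibration split, and use their quantile to form prediction intervals. The toy data and origin-constrained least-squares learner are arbitrary stand-ins:

```python
import math
import random

random.seed(1)

def make_data(n):
    """Toy 1-D regression data: y = 2x + Gaussian noise."""
    out = []
    for _ in range(n):
        x = random.uniform(0, 1)
        out.append((x, 2 * x + random.gauss(0, 0.1)))
    return out

data = make_data(300)
train, calib, test = data[:100], data[100:200], data[200:]

# Any learner works; the validity guarantee does not depend on its quality.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# Nonconformity score: absolute residual on the calibration split.
scores = sorted(abs(y - slope * x) for x, y in calib)

eps = 0.1                                      # target miscoverage
k = math.ceil((1 - eps) * (len(scores) + 1))   # conformal rank
q = scores[min(k, len(scores)) - 1]

# Prediction interval for a new x is [slope*x - q, slope*x + q].
coverage = sum(abs(y - slope * x) <= q for x, y in test) / len(test)
```

The empirical coverage lands near the 90% target; the paper's conditional variants ask that this hold within subpopulations, not just on average.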
1209.2678
|
Bad Communities with High Modularity
|
cs.SI physics.data-an physics.soc-ph
|
In this paper we discuss some problematic aspects of Newman's modularity
function QN. Given a graph G, the modularity of G can be written as QN = Qf - Q0,
where Qf is the intracluster edge fraction of G and Q0 is the expected
intracluster edge fraction of the null model, i.e., a randomly connected graph
with the same expected degree distribution as G. It follows that the maximization
of QN must accommodate two factors pulling in opposite directions: Qf favors a
small number of clusters and Q0 favors many balanced (i.e., with approximately
equal degrees) clusters. In certain cases the Q0 term can cause overestimation
of the true cluster number; this is the opposite of the well-known
underestimation effect caused by the "resolution limit" of modularity. We illustrate
the overestimation effect by constructing families of graphs with a "natural"
community structure which, however, does not maximize modularity. In fact, we
prove that we can always find a graph G with a "natural clustering" V of G and
another, balanced clustering U of G such that (i) the pair (G; U) has higher
modularity than (G; V) and (ii) V and U are arbitrarily different.
|
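For concreteness, QN = Qf - Q0 can be computed directly from the definitions above. A sketch on two triangles joined by a bridge, with the obvious two-cluster assignment (the graph is chosen for illustration):

```python
from collections import defaultdict

def modularity(edges, clusters):
    """QN = Qf - Q0 for an undirected graph and a node -> cluster map."""
    m = len(edges)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Qf: fraction of edges with both endpoints in the same cluster.
    qf = sum(1 for u, v in edges if clusters[u] == clusters[v]) / m
    # Q0: expected intracluster fraction under the degree-preserving
    # null model, a sum over clusters of (cluster degree / 2m)^2.
    dsum = defaultdict(int)
    for node, c in clusters.items():
        dsum[c] += degree[node]
    q0 = sum((d / (2 * m)) ** 2 for d in dsum.values())
    return qf - q0

# Two triangles joined by a single bridge edge; the "natural" clustering.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
clusters = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q_natural = modularity(edges, clusters)
```

Here Qf = 6/7 and Q0 = 1/2, so QN = 6/7 - 1/2; putting all six nodes in one cluster gives QN = 0, matching the intuition that the two-triangle split is the better clustering.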
1209.2684
|
NetSimile: A Scalable Approach to Size-Independent Network Similarity
|
cs.SI physics.soc-ph stat.AP
|
Given a set of k networks, possibly with different sizes and no overlaps in
nodes or edges, how can we quickly assess similarity between them, without
solving the node-correspondence problem? Analogously, how can we extract a
small number of descriptive, numerical features from each graph that
effectively serve as the graph's "signature"? Having such features will enable
a wealth of graph mining tasks, including clustering, outlier detection,
visualization, etc.
We propose NetSimile -- a novel, effective, and scalable method for solving
the aforementioned problem. NetSimile has the following desirable properties:
(a) It gives similarity scores that are size-invariant. (b) It is scalable,
being linear on the number of edges for "signature" vector extraction. (c) It
does not need to solve the node-correspondence problem. We present extensive
experiments on numerous synthetic and real graphs from disparate domains, and
show NetSimile's superiority over baseline competitors. We also show how
NetSimile enables several mining tasks such as clustering, visualization,
discontinuity detection, network transfer learning, and re-identification
across networks.
|
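A reduced sketch of the signature idea: compute a few per-node structural features (NetSimile itself uses seven, including egonet-based ones), aggregate each feature column into moments to get a fixed-length vector, and compare graphs by Canberra distance between signatures. The three features and the two tiny graphs below are simplifications chosen for the example:

```python
import math

def node_features(adj):
    """Per-node features: degree, clustering coefficient, mean neighbor
    degree (a simplified subset of NetSimile's feature set)."""
    feats = []
    for u in adj:
        nbrs = adj[u]
        deg = len(nbrs)
        links = sum(1 for i, a in enumerate(nbrs)
                    for b in nbrs[i + 1:] if b in adj[a])
        cc = 2 * links / (deg * (deg - 1)) if deg > 1 else 0.0
        mnd = sum(len(adj[v]) for v in nbrs) / deg if deg else 0.0
        feats.append((deg, cc, mnd))
    return feats

def signature(adj):
    """Aggregate each feature column into (mean, std): a fixed-length,
    size-invariant graph signature."""
    sig = []
    for col in zip(*node_features(adj)):
        mu = sum(col) / len(col)
        sig += [mu, math.sqrt(sum((x - mu) ** 2 for x in col) / len(col))]
    return sig

def canberra(a, b):
    """Canberra distance between two signature vectors."""
    return sum(abs(x - y) / (abs(x) + abs(y))
               for x, y in zip(a, b) if abs(x) + abs(y) > 0)

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

Because signatures have fixed length regardless of graph size, no node correspondence is needed, and extraction is linear in the number of edges for degree-based features.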
1209.2688
|
Molecular Communication Between Two Populations of Bacteria
|
cs.IT math.IT q-bio.QM
|
Molecular communication is an expanding body of research. Recent advances in
biology have encouraged using genetically engineered bacteria as the main
component in molecular communication. This has stimulated a new line of
research that attempts to study molecular communication among bacteria from an
information-theoretic point of view. Due to high randomness in the individual
behavior of the bacterium, reliable communication between two bacteria is
almost impossible. Therefore, we recently proposed that a population of
bacteria in a cluster is considered as a node capable of molecular transmission
and reception. This proposition enables us to form a reliable node out of many
unreliable bacteria. The bacteria inside a node sense the environment and
respond accordingly. In this paper, we study the communication between two
nodes, one acting as the transmitter and the other as the receiver. We consider
the case in which the information is encoded in the concentration of molecules
by the transmitter. The molecules produced by the bacteria in the transmitter
node propagate in the environment via the diffusion process. The receiver
node then decodes the information from the concentration sensed by its
bacteria. The randomness in the communication is caused by both the error in
the molecular production at the transmitter and the reception of molecules at
the receiver. We study the theoretical limits of the information transfer rate
in such a setup versus the number of bacteria per node. Finally, we consider
M-ary modulation schemes and study the achievable rates and their error
probabilities.
|
1209.2693
|
Regret Bounds for Restless Markov Bandits
|
cs.LG math.OC stat.ML
|
We consider the restless Markov bandit problem, in which the state of each
arm evolves according to a Markov process independently of the learner's
actions. We suggest an algorithm that after $T$ steps achieves
$\tilde{O}(\sqrt{T})$ regret with respect to the best policy that knows the
distributions of all arms. No assumptions on the Markov chains are made except
that they are irreducible. In addition, we show that index-based policies are
necessarily suboptimal for the considered problem.
|
1209.2696
|
Visual Tracking with Similarity Matching Ratio
|
cs.CV cs.RO
|
This paper presents a novel approach to visual tracking: Similarity Matching
Ratio (SMR). The traditional approach of tracking is minimizing some measures
of the difference between the template and a patch from the frame. This
approach is vulnerable to outliers and drastic appearance changes, and
extensive research has focused on making it more tolerant to them. However,
this often results in longer, corrective algorithms that do not solve the
original problem. This paper proposes a novel formulation of the tracking
problem, SMR, which turns the differences into a probability measure. Only
pixel differences below a threshold count towards deciding the match; the
rest are ignored. This approach makes the SMR tracker robust to outliers and
to points that dramatically change appearance. The SMR tracker is tested on
challenging video sequences and achieves state-of-the-art performance.
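The abstract does not give the exact SMR formula; a minimal sketch of the
stated idea on 1-D pixel lists, with the threshold value an illustrative
assumption:

```python
def smr(template, patch, thresh=20):
    """Similarity Matching Ratio sketch: the fraction of pixels whose
    absolute difference falls below `thresh`.  Larger differences are
    ignored rather than accumulated, which suppresses outliers."""
    assert len(template) == len(patch)
    hits = sum(1 for t, p in zip(template, patch) if abs(t - p) < thresh)
    return hits / len(template)

template = [10, 200, 50, 120]
clean    = [12, 198, 55, 118]    # small noise everywhere
occluded = [12, 198, 255, 255]   # half the pixels drastically changed

print(smr(template, clean))     # 1.0
print(smr(template, occluded))  # 0.5 -- outliers ignored, not penalised
```

A sum-of-squared-differences measure would be dominated by the two occluded
pixels; the ratio simply stops counting them.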
|
1209.2717
|
Comparison Study for Clonal Selection Algorithm and Genetic Algorithm
|
cs.NE
|
Two metaheuristic algorithms, namely Artificial Immune Systems (AIS) and
Genetic Algorithms (GA), are classified as computational systems inspired by
theoretical immunology and genetics mechanisms. In this work we examine the
comparative performance of the two algorithms. A particular selection
algorithm, the Clonal Selection Algorithm (CLONALG), which is a subset of
Artificial Immune Systems, and a Genetic Algorithm are tested on certain
benchmark functions. It is shown that, depending on the type of function, the
Clonal Selection Algorithm and the Genetic Algorithm each outperform the
other.
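A minimal clonal-selection sketch on a standard benchmark, the sphere
function (this is not de Castro and Von Zuben's full CLONALG; the population
size, clone count and mutation schedule are illustrative assumptions):

```python
import random

def clonal_minimize(f, dim=2, pop=20, clones=5, gens=200, seed=0):
    """Clonal-selection sketch: keep the better half of the population,
    clone each kept candidate, and hypermutate the clones -- with
    better-ranked candidates mutated less, mimicking affinity-
    proportional mutation."""
    rng = random.Random(seed)
    P = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        nxt = P[:pop // 2]                    # elitism: keep the better half
        for rank, cand in enumerate(P[:pop // 2]):
            step = 0.1 * (rank + 1)           # worse rank => larger mutation
            for _ in range(clones):
                nxt.append([x + rng.gauss(0, step) for x in cand])
        nxt.sort(key=f)
        P = nxt[:pop]
    return P[0]

sphere = lambda x: sum(v * v for v in x)  # a common benchmark function
best = clonal_minimize(sphere)
print(sphere(best) < 1e-2)  # converges near the optimum
```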
|
1209.2755
|
Relaxing the Gaussian AVC
|
cs.IT math.IT
|
The arbitrarily varying channel (AVC) is a conservative way of modeling an
unknown interference, and the corresponding capacity results are pessimistic.
We reconsider the Gaussian AVC by relaxing the classical model and thereby
weakening the adversarial nature of the interference. We examine three
different relaxations. First, we show how a very small amount of common
randomness between transmitter and receiver is sufficient to achieve the rates
of fully randomized codes. Second, akin to the dirty paper coding problem, we
study the impact of an additional interference known to the transmitter. We
provide partial capacity results that differ significantly from the standard
AVC. Third, we revisit a Gaussian MIMO AVC in which the interference is
arbitrary but of limited dimension.
|
1209.2759
|
Multi-track Map Matching
|
cs.LG cs.DS stat.AP
|
We study algorithms for matching user tracks, consisting of time-ordered
location points, to paths in the road network. Previous work has focused on the
scenario where the location data is linearly ordered and consists of fairly
dense and regular samples. In this work, we consider the \emph{multi-track map
matching}, where the location data comes from different trips on the same
route, each with very sparse samples. This captures the realistic scenario
where users repeatedly travel on regular routes and samples are sparsely
collected, either due to energy consumption constraints or because samples are
only collected when the user actively uses a service. In the multi-track
problem, the total set of combined locations is only partially ordered, rather
than globally ordered as required by previous map-matching algorithms. We
propose two methods, the iterative projection scheme and the graph Laplacian
scheme, to solve the multi-track problem by using a single-track map-matching
subroutine. We also propose a boosting technique which may be applied to either
approach to improve the accuracy of the estimated paths. In addition, in order
to deal with variable sampling rates in single-track map matching, we propose a
method based on a particular regularized cost function that can be adapted for
different sampling rates and measurement errors. We evaluate the effectiveness
of our techniques for reconstructing tracks under several different
configurations of sampling error and sampling rate.
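The core operation of an iterative projection scheme of this kind is
projecting pooled sparse samples onto a current path estimate and ordering
them by arclength; the sketch below shows that projection step under that
reading (the toy route, tracks and projection rule are illustrative, not the
paper's algorithm):

```python
import math

def arclength_on_path(path, p):
    """Project point p onto polyline `path` and return the distance along
    the path of the closest projection."""
    best, s_before = None, 0.0
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        dx, dy = x2 - x1, y2 - y1
        L = math.hypot(dx, dy)
        # clamp the projection parameter to stay on the segment
        t = max(0.0, min(1.0, ((p[0] - x1) * dx + (p[1] - y1) * dy) / L**2))
        proj = (x1 + t * dx, y1 + t * dy)
        d = math.hypot(p[0] - proj[0], p[1] - proj[1])
        if best is None or d < best[0]:
            best = (d, s_before + t * L)
        s_before += L
    return best[1]

path = [(0, 0), (10, 0), (10, 10)]   # current route estimate
track1 = [(1, 0.2), (10.1, 6)]       # two sparse trips on the same route
track2 = [(5, -0.3), (9.8, 9)]
pooled = sorted(track1 + track2, key=lambda p: arclength_on_path(path, p))
print(pooled)  # [(1, 0.2), (5, -0.3), (10.1, 6), (9.8, 9)]
```

The pooled, partially ordered samples become globally ordered along the
route, after which a single-track map-matching subroutine can be applied.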
|
1209.2784
|
Minimax Multi-Task Learning and a Generalized Loss-Compositional
Paradigm for MTL
|
cs.LG stat.ML
|
Since its inception, the modus operandi of multi-task learning (MTL) has been
to minimize the task-wise mean of the empirical risks. We introduce a
generalized loss-compositional paradigm for MTL that includes a spectrum of
formulations as a subfamily. One endpoint of this spectrum is minimax MTL: a
new MTL formulation that minimizes the maximum of the tasks' empirical risks.
Via a certain relaxation of minimax MTL, we obtain a continuum of MTL
formulations spanning minimax MTL and classical MTL. The full paradigm itself
is loss-compositional, operating on the vector of empirical risks. It
incorporates minimax MTL, its relaxations, and many new MTL formulations as
special cases. We show theoretically that minimax MTL tends to avoid worst case
outcomes on newly drawn test tasks in the learning to learn (LTL) test setting.
The results of several MTL formulations on synthetic and real problems in the
MTL and LTL test settings are encouraging.
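The loss-compositional view can be illustrated with a simple convex blend
between the task-wise mean and the maximum; note that this blend is an
illustrative stand-in, not the paper's actual relaxation:

```python
def composed_risk(risks, alpha):
    """Loss-compositional sketch: alpha=0 gives classical MTL (mean of
    the task-wise empirical risks), alpha=1 gives minimax MTL (the
    maximum risk); intermediate alpha blends the two endpoints."""
    mean_r = sum(risks) / len(risks)
    return (1 - alpha) * mean_r + alpha * max(risks)

risks = [0.1, 0.2, 0.7]           # empirical risks of three tasks
print(composed_risk(risks, 0.0))  # mean: ~0.333
print(composed_risk(risks, 1.0))  # max: 0.7
```

Increasing alpha places more weight on the worst-performing task, which is
the mechanism by which a minimax-style objective avoids worst-case outcomes.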
|
1209.2790
|
Improving Energy Efficiency in Femtocell Networks: A Hierarchical
Reinforcement Learning Framework
|
cs.LG
|
This paper investigates energy efficiency for two-tier femtocell networks
through combining game theory and stochastic learning. With the Stackelberg
game formulation, a hierarchical reinforcement learning framework is applied to
study the joint average utility maximization of macrocells and femtocells
subject to the minimum signal-to-interference-plus-noise-ratio requirements.
The macrocells behave as the leaders and the femtocells are followers during
the learning procedure. At each time step, the leaders commit to dynamic
strategies based on the best responses of the followers, while the followers
compete against each other with no further information but the leaders'
strategy information. In this paper, we propose two learning algorithms to
schedule each cell's stochastic power levels, led by the macrocells.
Numerical experiments are presented to validate the proposed studies and show
that the two learning algorithms substantially improve the energy efficiency of
the femtocell networks.
|
1209.2794
|
Protecting Oracle PL/SQL source code from a DBA user
|
cs.DB
|
In this paper we present a new way to prevent a DBA user from executing DDL
statements on specific PL/SQL procedures in an Oracle database. Nowadays DBA
users have access to a lot of data and source code even if they do not have
legal permission to see or modify it. With this method we can disable the
ability to execute DDL and DML statements on specific PL/SQL procedures for
every Oracle database user, even one with the DBA role. Oracle gives
developers the possibility to wrap PL/SQL procedures, functions and packages,
but those wrapped scripts can be unwrapped using third-party tools. The
scripts that we have developed analyze all database sessions, and if they
detect a DML or DDL statement from an unauthorized user against a procedure,
function or package that should be protected, execution of the statement is
denied. Furthermore, these scripts do not allow a DBA user to drop or disable
the scripts themselves. In other words, by managing sessions prior to the
execution of a statement from a DBA user, we can prevent the execution of
statements that target our scripts.
|
1209.2816
|
Hierarchical Digital Image Inpainting Using Wavelets
|
cs.CV
|
Inpainting is the technique of reconstructing unknown or damaged portions of
an image in a visually plausible way. An inpainting algorithm automatically
fills the damaged region of an image using the information available in the
undamaged region. Propagating structure and texture information becomes a
challenge as the size of the damaged area increases. In this paper, a
hierarchical inpainting algorithm using wavelets is proposed. The
hierarchical method tries to keep the mask size small, while wavelets help in
handling the high-pass structure information and low-pass texture information
separately. The performance of the proposed algorithm is evaluated under
several conditions, and the results are compared with existing methods such
as interpolation, diffusion and exemplar techniques.
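For reference, the diffusion baseline that the proposed method is compared
against can be sketched in a few lines; the grid, neighbourhood and iteration
count below are arbitrary illustrative choices:

```python
def diffuse_inpaint(img, mask, iters=100):
    """Diffusion-inpainting baseline: repeatedly replace each masked
    (damaged) pixel with the average of its 4-neighbours, propagating
    known intensities into the hole."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                if mask[i][j]:  # damaged pixel
                    nbrs = [img[i + di][j + dj]
                            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= i + di < h and 0 <= j + dj < w]
                    img[i][j] = sum(nbrs) / len(nbrs)
    return img

img = [[100, 100, 100],
       [100,   0, 100],   # the centre pixel is damaged
       [100, 100, 100]]
mask = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
out = diffuse_inpaint(img, mask)
print(out[1][1])  # 100.0 -- the hole is filled from its surroundings
```

Diffusion reproduces smooth regions well but blurs structure in large holes,
which is the weakness hierarchical and exemplar methods address.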
|
1209.2817
|
Preferential Attachment in the Interaction between Dynamically Generated
Interdependent Networks
|
physics.soc-ph cond-mat.stat-mech cs.SI q-fin.RM
|
We generalize the scale-free network model of Barab\'asi and Albert [Science
286, 509 (1999)] by proposing a class of stochastic models for scale-free
interdependent networks in which interdependent nodes are not randomly
connected but rather are connected via preferential attachment (PA). Each
network grows through the continuous addition of new nodes, and new nodes in
each network attach preferentially and simultaneously to (a) well-connected
nodes within the same network and (b) well-connected nodes in other networks.
We present analytic solutions for the power-law exponents as functions of the
number of links both between networks and within networks. We show that a
cross-clustering coefficient vs. size of network $N$ follows a power law. We
illustrate the models using selected examples from the Internet and finance.
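A degree-only sketch of the growth process (edge lists are omitted; the seed
size and attachment counts are illustrative assumptions):

```python
import random

def grow_coupled_pa(n, m_in=1, m_out=1, seed=0):
    """Two networks grown simultaneously with preferential attachment:
    each new node attaches to well-connected nodes both within its own
    network (m_in links) and in the other network (m_out links).
    deg[net][node] tracks degrees."""
    rng = random.Random(seed)
    deg = {0: {0: 1, 1: 1}, 1: {0: 1, 1: 1}}   # two seed nodes per network
    for t in range(2, n):
        new_deg = {0: 0, 1: 0}
        for net in (0, 1):
            for target_net, k in ((net, m_in), (1 - net, m_out)):
                pool = list(deg[target_net])
                weights = [deg[target_net][v] for v in pool]  # PA: prob ~ degree
                for v in rng.choices(pool, weights=weights, k=k):
                    new_deg[net] += 1
                    deg[target_net][v] += 1
        for net in (0, 1):
            deg[net][t] = new_deg[net]
    return deg

deg = grow_coupled_pa(2000)
avg = sum(deg[0].values()) / len(deg[0])
print(max(deg[0].values()) > 5 * avg)  # heavy tail: hubs far above the mean
```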
|
1209.2820
|
Conditions for a Monotonic Channel Capacity
|
cs.IT math.IT
|
Motivated by results in optical communications, where the performance can
degrade dramatically if the transmit power is sufficiently increased, the
channel capacity is characterized for various kinds of memoryless vector
channels. It is proved that for all static point-to-point channels, the channel
capacity is a nondecreasing function of power. As a consequence, maximizing the
mutual information over all input distributions with a certain power is for
such channels equivalent to maximizing it over the larger set of input
distributions with upperbounded power. For interference channels such as
optical wavelength-division multiplexing systems, the primary channel capacity
is always nondecreasing with power if all interferers transmit with identical
distributions as the primary user. Also, if all input distributions in an
interference channel are optimized jointly, then the achievable sum-rate
capacity is again nondecreasing. The results generalize to the channel
capacity as a function of a wide class of costs, not only power.
|
1209.2868
|
Spatio-Temporal Small Worlds for Decentralized Information Retrieval in
Social Networking
|
cs.SI cs.IR physics.soc-ph
|
We discuss foundations and options for alternative, agent-based information
retrieval (IR) approaches in Social Networking, especially Decentralized and
Mobile Social Networking scenarios. In addition to usual semantic contexts,
these approaches make use of long-term social and spatio-temporal contexts in
order to satisfy conscious as well as unconscious information needs according
to Human IR heuristics. Using a large Twitter dataset, we examine these
approaches and especially the question of how far spatio-temporal contexts
can act as a conceptual bracket implying social and semantic cohesion, giving
rise to the concept of Spatio-Temporal Small Worlds.
|
1209.2873
|
Extraction of hidden information by efficient community detection in
networks
|
physics.data-an cs.SI physics.bio-ph physics.soc-ph q-bio.MN
|
Currently, we are overwhelmed by a deluge of experimental data, and network
physics has the potential to become an invaluable method to increase our
understanding of large interacting datasets. However, this potential is often
unrealized for two reasons: uncovering the hidden community structure of a
network, known as community detection, is difficult, and further, even if one
has an idea of this community structure, it is not a priori obvious how to
efficiently use this information. Here, to address both of these issues, we
first identify the optimal community structure of given networks in terms of
modularity by utilizing a recently introduced community detection method.
Second, we develop an approach to use this community information to extract
hidden information from a network. When applied to a protein-protein
interaction network, the proposed method outperforms current state-of-the-art
methods that use only the local information of a network. The method is
generally applicable to networks from many areas.
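The modularity score being optimized can be computed directly from a
partition; a minimal sketch on a toy graph of two triangles joined by a
single bridge edge:

```python
def modularity(adj, communities):
    """Newman-Girvan modularity Q = sum_c (e_c/m - (d_c/2m)^2), where
    e_c is the number of intra-community edges, d_c the total degree of
    community c, and m the total number of edges."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    label = {v: c for c, nodes in enumerate(communities) for v in nodes}
    Q = 0.0
    for c, nodes in enumerate(communities):
        e_c = sum(1 for v in nodes for u in adj[v] if label[u] == c) / 2
        d_c = sum(len(adj[v]) for v in nodes)
        Q += e_c / m - (d_c / (2 * m)) ** 2
    return Q

# two triangles joined by the bridge edge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
good = modularity(adj, [{0, 1, 2}, {3, 4, 5}])   # the natural split
bad = modularity(adj, [{0, 1, 3}, {2, 4, 5}])    # a split across the bridge
print(round(good, 4))   # 0.3571
print(good > bad)       # True
```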
|
1209.2883
|
Control Design for Markov Chains under Safety Constraints: A Convex
Approach
|
cs.SY math.OC
|
This paper focuses on the design of time-invariant memoryless control
policies for fully observed controlled Markov chains, with a finite state
space. Safety constraints are imposed through a pre-selected set of forbidden
states. A state is qualified as safe if it is not a forbidden state and the
probability of it transitioning to a forbidden state is zero. The main
objective is to obtain control policies whose closed loop generates the maximal
set of safe recurrent states, which may include multiple recurrent classes. A
design method is proposed that relies on a finitely parametrized convex program
inspired on entropy maximization principles. A numerical example is provided
and the adoption of additional constraints is discussed.
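The abstract's safety definition can be sketched directly for a fixed policy,
i.e. a given closed-loop transition matrix (the example chain below is
illustrative):

```python
def safe_states(P, forbidden):
    """Per the definition above: a state is safe if it is not forbidden
    and has zero probability of transitioning into a forbidden state.
    P is the transition matrix of the closed loop under some fixed
    policy."""
    n = len(P)
    return {s for s in range(n)
            if s not in forbidden and all(P[s][t] == 0 for t in forbidden)}

# 4 states; state 3 is forbidden and state 2 can reach it in one step,
# so only states 0 and 1 qualify as safe.
P = [[0.5, 0.5, 0.0, 0.0],
     [0.5, 0.5, 0.0, 0.0],
     [0.0, 0.5, 0.4, 0.1],
     [0.0, 0.0, 0.0, 1.0]]
print(sorted(safe_states(P, {3})))  # [0, 1]
```

The paper's design problem is the converse: choose the policy (and hence P)
so that the set of safe recurrent states is as large as possible.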
|
1209.2887
|
Decoding of Subspace Codes, a Problem of Schubert Calculus over Finite
Fields
|
cs.IT math.IT
|
Schubert calculus provides algebraic tools to solve enumerative problems.
There have been several applied problems in systems theory, linear algebra and
physics which were studied by means of Schubert calculus. The method is most
powerful when the base field is algebraically closed. In this article we first
review some of the successes Schubert calculus had in the past. Then we show
how the problem of decoding of subspace codes used in random network coding can
be formulated as a problem in Schubert calculus. Since for this application the
base field has to be assumed to be a finite field new techniques will have to
be developed in the future.
|
1209.2894
|
Layered Subspace Codes for Network Coding
|
cs.IT math.IT
|
Subspace codes were introduced by K\"otter and Kschischang for error control
in random linear network coding. In this paper, a layered type of subspace
codes is considered, which can be viewed as a superposition of multiple
component subspace codes. Exploiting the layered structure, we develop two
decoding algorithms for these codes. The first algorithm operates by separately
decoding each component code. The second algorithm is similar to the successive
interference cancellation (SIC) algorithm for conventional superposition
coding, and further permits an iterative version. We show that both algorithms
decode not only deterministically up to but also probabilistically beyond the
error-correction capability of the overall code. Finally we present possible
applications of layered subspace codes in several network coding scenarios.
|
1209.2903
|
A Novel Approach of Harris Corner Detection of Noisy Images using
Adaptive Wavelet Thresholding Technique
|
cs.CV
|
In this paper we propose a corner detection method for obtaining features
needed to track and recognize objects within a noisy image. Corner detection
in noisy images is a challenging task in image processing, since natural
images often get corrupted by noise during acquisition and transmission.
Because corner detection on such noisy images does not provide the desired
results, de-noising is required first; an adaptive wavelet thresholding
approach is applied for this purpose.
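A sketch of the wavelet thresholding step: the paper uses an adaptive,
data-driven threshold, while this illustration applies soft thresholding with
a fixed value t for clarity:

```python
def soft_threshold(coeffs, t):
    """Soft thresholding of wavelet detail coefficients: zero anything
    inside [-t, t] (presumed noise) and shrink the rest toward zero by
    t, before reconstructing the de-noised image."""
    out = []
    for c in coeffs:
        if c > t:
            out.append(c - t)
        elif c < -t:
            out.append(c + t)
        else:
            out.append(0.0)
    return out

print(soft_threshold([5.0, -0.5, 2.0, -3.0], 1.0))  # [4.0, 0.0, 1.0, -2.0]
```

Corner detection (e.g. the Harris response) is then run on the reconstructed,
de-noised image rather than on the raw noisy one.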
|
1209.2910
|
Community Detection in the Labelled Stochastic Block Model
|
cs.SI cs.LG math.PR physics.soc-ph
|
We consider the problem of community detection from observed interactions
between individuals, in the context where multiple types of interaction are
possible. We use labelled stochastic block models to represent the observed
data, where labels correspond to interaction types. Focusing on a two-community
scenario, we conjecture a threshold for the problem of reconstructing the
hidden communities in a way that is correlated with the true partition. To
substantiate the conjecture, we prove that the given threshold correctly
identifies a transition on the behaviour of belief propagation from insensitive
to sensitive. We further prove that the same threshold corresponds to the
transition in a related inference problem on a tree model from infeasible to
feasible. Finally, numerical results using belief propagation for community
detection give further support to the conjecture.
|
1209.2918
|
A new class of metrics for spike trains
|
cs.IT cs.NE math.IT q-bio.NC
|
The distance between a pair of spike trains, quantifying the differences
between them, can be measured using various metrics. Here we introduce a new
class of spike train metrics, inspired by the Pompeiu-Hausdorff distance, and
compare them with existing metrics. Some of our new metrics (the modulus-metric
and the max-metric) have characteristics that are qualitatively different
from those of classical metrics like the van Rossum distance or the Victor &
Purpura
distance. The modulus-metric and the max-metric are particularly suitable for
measuring distances between spike trains where information is encoded in
bursts, but the number and the timing of spikes inside a burst do not carry
information. The modulus-metric does not depend on any parameters and can be
computed using a fast algorithm, in a time that depends linearly on the number
of spikes in the two spike trains. We also introduce localized versions of the
new metrics, which could have the biologically-relevant interpretation of
measuring the differences between spike trains as they are perceived at a
particular moment in time by a neuron receiving these spike trains.
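For comparison, the classical Pompeiu-Hausdorff distance that inspires the
new metrics can be computed directly on spike-time lists (this is the
classical distance, not the paper's modulus-metric or max-metric):

```python
def hausdorff_spike_distance(a, b):
    """Pompeiu-Hausdorff distance between two spike trains given as
    lists of spike times: the largest distance from any spike in one
    train to its nearest spike in the other train."""
    def directed(x, y):
        return max(min(abs(s - t) for t in y) for s in x)
    return max(directed(a, b), directed(b, a))

# small jitter inside a burst barely moves the distance ...
print(hausdorff_spike_distance([1.0, 2.0, 10.0], [1.5, 2.5, 10.0]))  # 0.5
# ... but an extra distant spike dominates it
print(hausdorff_spike_distance([1.0, 2.0], [1.0, 2.0, 50.0]))        # 48.0
```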
|
1209.2946
|
Technical Report: CSVM Ecosystem
|
cs.CE cs.DS q-bio.QM
|
The CSVM format is derived from the CSV format and allows the storage of
tabular-like data with a limited but extensible amount of metadata. This
approach can help computer scientists because all the information needed to
subsequently use the data is included in the CSVM file; it is particularly
well suited to handling RAW data in many scientific fields and to serving as
a canonical format. The use of CSVM has shown that it greatly facilitates:
data management independently of databases; data exchange; the integration of
RAW data into dataflows or calculation pipelines; and the search for best
practices in RAW data management. The efficiency of this format is closely
related to its plasticity: a generic frame is given for all kinds of data,
and CSVM parsers do not make any interpretation of data types. That task is
done by the application layer, so it is possible to use the same format and
the same parser code for many purposes. In this document, implementations of
the CSVM format developed over ten years in different laboratories are
presented. Some programming examples are also shown: a Python toolkit for
using, manipulating and querying the format is available. A first
specification of the format (CSVM-1) is now defined, as well as some
derivatives such as CSVM dictionaries used for data interchange. CSVM is an
open format and can serve as a support for Open Data and for the long-term
conservation of RAW or unpublished data.
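A sketch of a CSVM-style reader; since the CSVM-1 specification is not
reproduced here, the '#' metadata prefix and 'key: value' syntax below are
assumptions for illustration only:

```python
import csv

def parse_csvm(text, meta_prefix="#"):
    """CSVM-style reader sketch: leading comment lines carry
    'key: value' metadata, the remainder is plain CSV.  No type
    interpretation is done -- that is left to the application layer,
    as in the CSVM philosophy."""
    meta, body = {}, []
    for line in text.splitlines():
        if line.startswith(meta_prefix):
            key, _, value = line[len(meta_prefix):].partition(":")
            meta[key.strip()] = value.strip()
        else:
            body.append(line)
    return meta, list(csv.reader(body))

sample = "# instrument: NMR-600\n# operator: lab-A\ntime,signal\n0,1.2\n1,3.4\n"
meta, rows = parse_csvm(sample)
print(meta["instrument"])  # NMR-600
print(rows[0])             # ['time', 'signal']
```

All values stay as strings; the same parser can therefore serve very
different kinds of RAW data.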
|
1209.2948
|
Cultural Algorithm Toolkit for Multi-objective Rule Mining
|
cs.NE cs.AI
|
A cultural algorithm is a kind of evolutionary algorithm inspired by societal
evolution, composed of a belief space, a population space and a protocol that
enables the exchange of knowledge between them. Knowledge created in the
population space is accepted into the belief space, and this collective
knowledge is combined to influence the decisions of the individual agents in
solving problems. Classification rules fall under descriptive knowledge
discovery in data mining and are among the forms of knowledge most sought by
users, since they are highly comprehensible. The rules have certain
properties that make them useful as actionable knowledge, and they are
evaluated using these properties, namely the rule metrics. In the current
study a Cultural Algorithm Toolkit for Classification Rule Mining (CAT-CRM)
is proposed, which allows the user to control three different sets of
parameters: evolutionary parameters, rule parameters and agent parameters. It
can therefore be used for experimenting with an evolutionary system, a rule
mining system or an agent-based social system. Results of experiments
conducted to observe the effect of different numbers and types of metrics on
the performance of the algorithm on benchmark data sets are reported.
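Two of the standard rule metrics such a toolkit would evaluate, support and
confidence, can be sketched directly (the toy data and rule encoding below
are illustrative):

```python
def rule_metrics(data, antecedent, consequent):
    """Two classic rule metrics: support = P(A and C), confidence =
    P(C | A).  Rows are dicts; the antecedent and consequent are
    attribute -> value dicts."""
    def matches(row, cond):
        return all(row.get(k) == v for k, v in cond.items())
    n = len(data)
    n_a = sum(1 for r in data if matches(r, antecedent))
    n_ac = sum(1 for r in data
               if matches(r, antecedent) and matches(r, consequent))
    return n_ac / n, (n_ac / n_a if n_a else 0.0)

data = [{"outlook": "sunny", "play": "no"},
        {"outlook": "sunny", "play": "no"},
        {"outlook": "rainy", "play": "yes"},
        {"outlook": "sunny", "play": "yes"}]
# rule: outlook = sunny  =>  play = no
sup, conf = rule_metrics(data, {"outlook": "sunny"}, {"play": "no"})
print(sup, conf)  # 0.5 0.6666666666666666
```

A multi-objective rule miner scores each candidate rule on several such
metrics at once rather than on a single fitness value.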
|
1209.3026
|
Losing My Revolution: How Many Resources Shared on Social Media Have
Been Lost?
|
cs.DL cs.IR
|
Social media content has grown exponentially in recent years, and the role
of social media has evolved from just narrating life events to actually shaping
them. In this paper we explore how many resources shared in social media are
still available on the live web or in public web archives. By analyzing six
different event-centric datasets of resources shared in social media in the
period from June 2009 to March 2012, we found about 11% lost and 20% archived
after just a year and an average of 27% lost and 41% archived after two and a
half years. Furthermore, we found a nearly linear relationship between time of
sharing of the resource and the percentage lost, with a slightly less linear
relationship between time of sharing and archiving coverage of the resource.
From this model we conclude that after the first year of publishing, nearly 11%
of shared resources will be lost and after that we will continue to lose 0.02%
per day.
|
1209.3047
|
SINR Statistics of Correlated MIMO Linear Receivers
|
cs.IT math.IT
|
Linear receivers offer a low complexity option for multi-antenna
communication systems. Therefore, understanding the outage behavior of the
corresponding SINR is important in a fading mobile environment. In this paper
we introduce a large deviations method, valid nominally for a large number M of
antennas, which provides the probability density of the SINR of Gaussian
channel MIMO Minimum Mean Square Error (MMSE) and zero-forcing (ZF) receivers,
with arbitrary transmission power profiles and in the presence of receiver
antenna correlations. This approach extends the Gaussian approximation of the
SINR, valid for large M asymptotically close to the center of the distribution,
obtaining the non-Gaussian tails of the distribution. Our methodology allows us
to calculate the SINR distribution to next-to-leading order (O(1/M)) and
showcase the deviations from approximations that have appeared in the
literature (e.g. the Gaussian or the generalized Gamma distribution). We also
analytically evaluate the outage probability, as well as the uncoded
bit-error-rate. We find that our approximation is quite accurate even for the
smallest antenna arrays (2x2).
|
1209.3054
|
Database Semantics
|
cs.DB math.CT
|
This paper, the first step to connect relational databases with systems
consequence (Kent: "System Consequence" 2009), is concerned with the semantics
of relational databases. It aims to study system consequence in the
logical/semantic system of relational databases. The paper, which was inspired
by and which extends a recent set of papers on the theory of relational
database systems (Spivak: "Functorial Data Migration" 2012), is linked with
work on the Information Flow Framework (IFF) [http://suo.ieee.org/IFF/]
connected with the ontology standards effort (SUO), since relational databases
naturally embed into first order logic. The database semantics discussed here
is concerned with the conceptual level of database architecture. We offer both
an intuitive and technical discussion. Corresponding to the notions of primary
and foreign keys, relational database semantics takes two forms: a
distinguished form where entities are distinguished from relations, and a
unified form where relations and entities coincide. The distinguished form
corresponds to the theory presented in (Spivak: "Simplicial databases"
2009)[arXiv:0904.2012]. The unified form, a special case of the distinguished
form, corresponds to the theory presented in (Spivak: "Functorial Data
Migration" 2012). A later paper will discuss various formalisms of relational
databases, such as relational algebra and first order logic, and will complete
the description of the relational database logical environment.
|
1209.3056
|
Parametric Local Metric Learning for Nearest Neighbor Classification
|
cs.LG
|
We study the problem of learning local metrics for nearest neighbor
classification. Most previous works on local metric learning learn a number of
local unrelated metrics. While this "independence" approach delivers
increased flexibility, its downside is the considerable risk of overfitting.
We
present a new parametric local metric learning method in which we learn a
smooth metric matrix function over the data manifold. Using an approximation
error bound of the metric matrix function we learn local metrics as linear
combinations of basis metrics defined on anchor points over different regions
of the instance space. We constrain the metric matrix function by imposing on
the linear combinations manifold regularization which makes the learned metric
matrix function vary smoothly along the geodesics of the data manifold. Our
metric learning method has excellent performance both in terms of predictive
power and scalability. We experimented with several large-scale classification
problems, tens of thousands of instances, and compared it with several state of
the art metric learning methods, both global and local, as well as to SVM with
automatic kernel selection, all of which it outperforms in a significant
manner.
|
1209.3089
|
Pattern Detection with Rare Item-set Mining
|
cs.SE cs.DB
|
The discovery of new and interesting patterns in large datasets, known as
data mining, draws more and more interest as the quantities of available data
are exploding. Data mining techniques may be applied to different domains and
fields such as computer science, health sector, insurances, homeland security,
banking and finance, etc. In this paper we are interested in the discovery of
a specific category of patterns, known as rare and non-present patterns. We
present a novel approach towards the discovery of non-present patterns using
rare item-set mining.
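A brute-force sketch of the distinction between rare and non-present
item-sets (exhaustive enumeration up to pairs, fine for illustration but not
the paper's mining algorithm):

```python
from itertools import combinations

def rare_itemsets(transactions, max_support):
    """Enumerate item-sets whose support is positive but below
    max_support (rare), and those that never occur at all
    (non-present)."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    rare, absent = [], []
    for size in (1, 2):
        for iset in combinations(items, size):
            sup = sum(1 for t in transactions if set(iset) <= t) / n
            if sup == 0:
                absent.append(iset)
            elif sup < max_support:
                rare.append(iset)
    return rare, absent

tx = [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"a", "b"}]
rare, absent = rare_itemsets(tx, max_support=0.5)
print(rare)    # [('c',), ('a', 'c')]
print(absent)  # [('b', 'c')]
```

Classical frequent-itemset miners prune exactly these low-support candidates,
which is why rare and non-present patterns call for dedicated techniques.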
|
1209.3105
|
Spectrum Leasing and Cooperative Resource Allocation in Cognitive OFDMA
Networks
|
cs.IT math.IT
|
This paper considers a cooperative OFDMA-based cognitive radio network where
the primary system leases some of its subchannels to the secondary system for a
fraction of time in exchange for the secondary users (SUs) assisting the
transmission of primary users (PUs) as relays. Our aim is to determine the
cooperation strategies among the primary and secondary systems so as to
maximize the sum-rate of SUs while maintaining quality-of-service (QoS)
requirements of PUs. We formulate a joint optimization problem of PU
transmission mode selection, SU (or relay) selection, subcarrier assignment,
power control, and time allocation. By applying the dual method, this mixed integer
programming problem is decomposed into parallel per-subcarrier subproblems,
with each determining the cooperation strategy between one PU and one SU. We
show that, on each leased subcarrier, the optimal strategy is to let an SU
exclusively act as a relay or transmit for itself. This result is
fundamentally different from conventional spectrum leasing in single-channel
systems, where an SU must transmit a fraction of time for itself if it helps
the PU's transmission. We then propose a subgradient-based algorithm to find
the
asymptotically optimal solution to the primal problem in polynomial time.
Simulation results demonstrate that the proposed algorithm can significantly
enhance the network performance.
|
1209.3113
|
Detection and Classification of Viewer Age Range Smart Signs at TV
Broadcast
|
cs.CV
|
In this paper, the detection and classification of Viewer Age Range Smart
Signs, designed by the Radio and Television Supreme Council of Turkey to give
age-range information to TV viewers, are realized. This makes automatic
detection in the broadcast possible, enabling the manufacture of TV receivers
that are sensitive to these signs. The most important step in this process is
pattern recognition. Since the symbols to be identified are circular, various
circle detection techniques can be employed. In our study, two different
circle segmentation methods for still images are first analyzed, and their
advantages and drawbacks are discussed. A popular neural network structure,
the Multilayer Perceptron, is employed for classification. Afterwards, the
same procedures are carried out for streaming video. All of the steps
described above are realized on a standard PC.
|
1209.3117
|
Development of an e-learning system incorporating semantic web
|
cs.CY cs.IR
|
E-learning is efficient, task-relevant, just-in-time learning that has grown
out of the learning requirements of a new and dynamically changing world. The
term
Semantic Web covers the steps to create a new WWW architecture that augments
the content with formal semantics enabling better possibilities of navigation
through the cyberspace and its contents. In this paper, we present the Semantic
Web-Based model for our e-learning system taking into account the learning
environment at Saudi Arabian universities. The proposed system is mainly based
on ontology-based descriptions of content, context and structure of the
learning materials. It further provides flexible and personalized access to
these learning materials. The framework has been validated by an interview
based qualitative method.
|
1209.3126
|
Beyond Stemming and Lemmatization: Ultra-stemming to Improve Automatic
Text Summarization
|
cs.IR cs.CL
|
In Automatic Text Summarization, preprocessing is an important phase to
reduce the space of textual representation. Classically, stemming and
lemmatization have been widely used for normalizing words. However, even using
normalization on large texts, the curse of dimensionality can disturb the
performance of summarizers. This paper describes a new method for normalization
of words to further reduce the space of representation. We propose to reduce
each word to its initial letters, as a form of Ultra-stemming. The results show
that Ultra-stemming not only preserves the content of the summaries produced
with this representation, but often dramatically improves system performance.
Summaries of trilingual corpora were evaluated automatically with Fresa. The
results confirm an increase in performance regardless of the summarization
system used.
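The core transformation described here is simple; a minimal sketch of the idea follows, where the tokenizer and the choice of prefix length `n` are illustrative assumptions rather than the paper's exact setup:

```python
import re

def ultra_stem(text, n=1):
    """Reduce each word to its first n letters (Ultra-stemming sketch).

    The \\w+ tokenizer and default n=1 are assumptions for illustration.
    """
    words = re.findall(r"\w+", text.lower())
    return [w[:n] for w in words]

print(ultra_stem("Stemming and lemmatization normalize words"))
# → ['s', 'a', 'l', 'n', 'w']
```

Collapsing words this aggressively merges many distinct terms into one representation symbol, which is exactly the dimensionality reduction the abstract describes.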
|
1209.3129
|
Analog readout for optical reservoir computers
|
cs.ET cs.LG cs.NE physics.optics
|
Reservoir computing is a new, powerful and flexible machine learning
technique that is easily implemented in hardware. Recently, by using a
time-multiplexed architecture, hardware reservoir computers have reached
performance comparable to digital implementations. Operating speeds allowing
for real-time information processing have been reached using optoelectronic
systems. At present the main performance bottleneck is the readout layer, which
uses slow, digital postprocessing. We have designed an analog readout suitable
for time-multiplexed optoelectronic reservoir computers, capable of working in
real time. The readout has been built and tested experimentally on a standard
benchmark task. Its performance is better than non-reservoir methods, with
ample room for further improvement. The present work thereby overcomes one of
the major limitations for the future development of hardware reservoir
computers.
|
1209.3137
|
Diophantine Approach to Blind Interference Alignment of Homogeneous
K-user 2x1 MISO Broadcast Channels
|
cs.IT math.IT
|
Although the sufficient condition for a blindly interference-aligned (BIA)
2-user 2x1 broadcast channel (BC) in homogeneous fading to achieve its maximal
4/3 DoF is well understood, its counterpart for the general K-user 2x1 MISO BC
in homogeneous block fading to achieve the corresponding 2K/(K+1) DoF
remains unsolved and is thus the focus of this paper. An interference channel
is said to be BIA-feasible if it achieves its maximal DoF only via BIA. In this
paper, we cast this general feasibility problem in the framework of finding
integer solutions for a system of linear Diophantine equations. By assuming
independent user links each of the same coherence time and by studying the
solvability of the Diophantine system, we derive the sufficient and necessary
conditions on the K users' fading block offsets to ensure the BIA feasibility
of the K-user BC. If the K offsets are independent and uniformly distributed
over a coherence block, we further prove that 11 users are enough for one
to find, with 95% certainty, 3 users among them that form a BIA-feasible
3-user 2x1 BC.
|
1209.3150
|
Agent-based Exploration of Wirings of Biological Neural Networks:
Position Paper
|
cs.NE q-bio.NC
|
Understanding the human central nervous system depends on knowledge of its
wiring; however, gaps remain in that knowledge due to technical difficulties.
While some information is emerging from human experiments, medical research
lacks simulation models that put current findings together into a global
picture and generate hypotheses to guide future experiments. Agent-based
modeling and simulation (ABMS) is a strong candidate for such a simulation
model. In this position paper, we discuss the current status of "neural
wiring" and "ABMS in biological systems". In particular, we argue that the
ABMS context provides the features required for exploration of biological
neural wiring.
|
1209.3286
|
Music Recommendation System for Million Song Dataset Challenge
|
cs.IR cs.SI
|
In this paper a system that took 8th place in the Million Song Dataset
challenge is described. Given the full listening history of 1 million users and
half of the listening history of 110,000 users, participants were to predict
the missing half. The system proposed here uses a memory-based collaborative
filtering approach with user-based similarity. A MAP@500 score of 0.15037 was
achieved.
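The memory-based, user-similarity approach can be sketched as follows; the cosine-on-sets similarity and the tie-breaking rule are illustrative assumptions, not the contestant's exact system:

```python
from collections import defaultdict
from math import sqrt

def recommend(histories, target, k=500):
    """User-based memory CF sketch: score unseen songs by
    similarity-weighted votes from other users.

    histories maps user -> set of songs listened to.
    """
    t = histories[target]
    scores = defaultdict(float)
    for u, songs in histories.items():
        if u == target:
            continue
        overlap = len(t & songs)
        if overlap == 0:
            continue
        # Cosine similarity between binary listening vectors.
        sim = overlap / (sqrt(len(t)) * sqrt(len(songs)))
        for s in songs - t:  # only songs the target has not heard
            scores[s] += sim
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [s for s, _ in ranked[:k]]

histories = {
    "a": {"s1", "s2"},
    "b": {"s1", "s2", "s3"},
    "c": {"s4"},
}
print(recommend(histories, "a"))
# → ['s3']
```

Returning the top-500 ranked list matches the MAP@500 evaluation used in the challenge.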
|
1209.3300
|
Normal Factor Graphs as Probabilistic Models
|
cs.IT math.IT
|
We present a new probabilistic modelling framework based on the recent notion
of normal factor graph (NFG). We show that the proposed NFG models and their
transformations unify some existing models such as factor graphs, convolutional
factor graphs, and cumulative distribution networks. The two subclasses of the
NFG models, namely the constrained and generative models, exhibit a duality in
their dependence structure. Transformation of NFG models further extends the
power of this modelling framework. We point out the well-known NFG
representations of parity and generator realizations of a linear code as
generative and constrained models, and comment on a more prevailing duality in
this context. Finally, we address the algorithmic aspect of computing the
exterior function of NFGs and the inference problem on NFGs.
|
1209.3307
|
Natural emergence of clusters and bursts in network evolution
|
physics.soc-ph cond-mat.stat-mech cs.SI nlin.AO
|
Network models with preferential attachment, where new nodes are injected
into the network and form links with existing nodes proportional to their
current connectivity, have been well studied for some time. Extensions have
been introduced where nodes attach proportionally to arbitrary fitness
functions. However, in these models, attaching to a node always increases the
ability of that node to gain more links in the future. We study network growth
where nodes attach proportionally to the clustering coefficients, or local
densities of triangles, of existing nodes. Attaching to a node typically lowers
its clustering coefficient, in contrast to preferential attachment or
rich-get-richer models. This simple modification naturally leads to a variety
of rich phenomena, including aging, non-Poissonian bursty dynamics, and
community formation. This theoretical model shows that complex network
structure can be generated without artificially imposing multiple dynamical
mechanisms and may reveal potentially overlooked mechanisms present in complex
systems.
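A toy version of this growth rule can be written directly; the choice of m new links per node and the uniform fallback when all clustering coefficients are zero are assumptions the abstract does not specify:

```python
import random

def clustering_coeff(adj, v):
    """Fraction of pairs of v's neighbors that are themselves linked."""
    nbrs = list(adj[v])
    d = len(nbrs)
    if d < 2:
        return 0.0
    links = sum(1 for i in range(d) for j in range(i + 1, d)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (d * (d - 1))

def grow(n, m=2, seed=0):
    """Grow a network by attaching each new node to m existing nodes
    chosen proportionally to their clustering coefficients (sketch)."""
    rng = random.Random(seed)
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # seed triangle
    for new in range(3, n):
        nodes = list(adj)
        w = [clustering_coeff(adj, v) for v in nodes]
        if sum(w) == 0:          # fallback: uniform attachment
            w = [1.0] * len(nodes)
        targets = set()
        while len(targets) < m:  # m distinct targets
            targets.add(rng.choices(nodes, weights=w)[0])
        adj[new] = set(targets)
        for t in targets:
            adj[t].add(new)
    return adj

g = grow(50)
```

Note the contrast with preferential attachment: here, attaching to a node typically adds a neighbor that is not connected to the node's other neighbors, lowering its clustering coefficient and thus its future attractiveness.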
|
1209.3312
|
Stable Manifold Embeddings with Structured Random Matrices
|
cs.IT math.DG math.IT
|
The fields of compressed sensing (CS) and matrix completion have shown that
high-dimensional signals with sparse or low-rank structure can be effectively
projected into a low-dimensional space (for efficient acquisition or
processing) when the projection operator achieves a stable embedding of the
data by satisfying the Restricted Isometry Property (RIP). It has also been
shown that such stable embeddings can be achieved for general Riemannian
submanifolds when random orthoprojectors are used for dimensionality reduction.
Due to computational costs and system constraints, the CS community has
recently explored the RIP for structured random matrices (e.g., random
convolutions, localized measurements, deterministic constructions). The main
contribution of this paper is to show that any matrix satisfying the RIP (i.e.,
providing a stable embedding for sparse signals) can be used to construct a
stable embedding for manifold-modeled signals by randomizing the column signs
and paying reasonable additional factors in the number of measurements. We
demonstrate this result with several new constructions for stable manifold
embeddings using structured matrices. This result allows advances in efficient
projection schemes for sparse signals to be immediately applied to manifold
signal models.
|
1209.3318
|
Hessian Schatten-Norm Regularization for Linear Inverse Problems
|
math.OC cs.CV cs.NA
|
We introduce a novel family of invariant, convex, and non-quadratic
functionals that we employ to derive regularized solutions of ill-posed linear
inverse imaging problems. The proposed regularizers involve the Schatten norms
of the Hessian matrix, computed at every pixel of the image. They can be viewed
as second-order extensions of the popular total-variation (TV) semi-norm since
they satisfy the same invariance properties. Meanwhile, by taking advantage of
second-order derivatives, they avoid the staircase effect, a common artifact of
TV-based reconstructions, and perform well for a wide range of applications. To
solve the corresponding optimization problems, we propose an algorithm that is
based on a primal-dual formulation. A fundamental ingredient of this algorithm
is the projection of matrices onto Schatten norm balls of arbitrary radius.
This operation is performed efficiently based on a direct link we provide
between vector projections onto $\ell_q$ norm balls and matrix projections onto
Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed
methods through experimental results on several inverse imaging problems with
real and simulated data.
|
1209.3330
|
Predator confusion is sufficient to evolve swarming behavior
|
q-bio.PE cs.NE nlin.AO q-bio.NC
|
Swarming behaviors in animals have been extensively studied due to their
implications for the evolution of cooperation, social cognition, and
predator-prey dynamics. An important goal of these studies is discerning which
evolutionary pressures favor the formation of swarms. One hypothesis is that
swarms arise because the presence of multiple moving prey in swarms causes
confusion for attacking predators, but it remains unclear how important this
selective force is. Using an evolutionary model of a predator-prey system, we
show that predator confusion provides a sufficient selection pressure to evolve
swarming behavior in prey. Furthermore, we demonstrate that the evolutionary
effect of predator confusion on prey could in turn exert pressure on the
structure of the predator's visual field, favoring the frontally oriented,
high-resolution visual systems commonly observed in predators that feed on
swarming animals. Finally, we provide evidence that when prey evolve swarming
in response to predator confusion, there is a change in the shape of the
functional response curve describing the predator's consumption rate as prey
density increases. Thus, we show that a relatively simple perceptual
constraint--predator confusion--could have pervasive evolutionary effects on
prey behavior, predator sensory mechanisms, and the ecological interactions
between predators and prey.
|
1209.3331
|
Outage-based ergodic link adaptation for fading channels with delayed
CSIT
|
cs.IT math.IT
|
Link adaptation, in which the transmission data rate is dynamically adjusted
according to channel variation, is often used to deal with the time-varying
nature of the wireless channel. When channel state information at the
transmitter (CSIT) is delayed by more than the channel coherence time due to
feedback delay, however, the benefit of link adaptation can be lost if this
delay is not taken into account. One way to deal with such delay is to predict
the current channel quality from the available observations, but this
inevitably results in prediction error. In this paper, an algorithm that takes
a different viewpoint is proposed. Using the conditional CDF of the current
channel given the observations, the outage probability can be computed for each
value of the transmission rate $R$. Assuming that the transmission block error
rate (BLER) is dominated by the outage probability, the expected throughput can
also be computed, and $R$ can be chosen to maximize it. The proposed scheme is
designed to be optimal if the channel is ergodic, and it is shown to
considerably outperform conventional schemes in a certain Rayleigh fading
channel model.
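The rate-selection rule amounts to a one-dimensional maximization of expected throughput. A toy sketch follows; the Rayleigh outage model and SNR value are illustrative stand-ins for the paper's conditional CDF, not its actual channel model:

```python
from math import exp

def outage_prob(r, snr=10.0):
    """Toy outage model: P_out(R) = P[log2(1 + snr*h) < R] for
    h ~ Exp(1) (Rayleigh power fading). Illustrative assumption only."""
    return 1.0 - exp(-(2.0 ** r - 1.0) / snr)

def best_rate(rates, p_out):
    """Pick the rate R maximizing expected throughput R * (1 - P_out(R)),
    i.e. the outage-based selection rule described in the abstract."""
    return max(rates, key=lambda r: r * (1.0 - p_out(r)))

rates = [0.5 * i for i in range(1, 17)]  # candidate rates 0.5 .. 8.0
r_star = best_rate(rates, outage_prob)
```

The maximizer balances the two effects: raising R increases the payoff of a successful block but also raises the outage probability.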
|
1209.3332
|
High-throughput Execution of Hierarchical Analysis Pipelines on Hybrid
Cluster Platforms
|
cs.DC cs.SY
|
We propose, implement, and experimentally evaluate a runtime middleware to
support high-throughput execution on hybrid cluster machines of large-scale
analysis applications. A hybrid cluster machine consists of computation nodes
which have multiple CPUs and general purpose graphics processing units (GPUs).
Our work targets scientific analysis applications in which datasets are
processed in application-specific data chunks, and the processing of a data
chunk is expressed as a hierarchical pipeline of operations. The proposed
middleware system combines a bag-of-tasks style execution with coarse-grain
dataflow execution. Data chunks and associated data processing pipelines are
scheduled across cluster nodes using a demand driven approach, while within a
node operations in a given pipeline instance are scheduled across CPUs and
GPUs. The runtime system implements several optimizations, including
performance aware task scheduling, architecture aware process placement, data
locality conscious task assignment, and data prefetching and asynchronous data
copy, to maximize utilization of the aggregate computing power of CPUs and GPUs
and minimize data copy overheads. The application and performance benefits of
the runtime middleware are demonstrated using an image analysis application,
which is employed in a brain cancer study, on a state-of-the-art hybrid cluster
in which each node has two 6-core CPUs and three GPUs. Our results show that
implementing and scheduling application data processing as a set of fine-grain
operations provides more opportunities for runtime optimizations and attains
better performance than a coarser-grain, monolithic implementation. The
proposed runtime system achieves high-throughput processing of large
datasets: we were able to process an image dataset consisting of 36,848
4Kx4K-pixel image tiles at a rate of about 150 tiles/second on 100 nodes.
|
1209.3344
|
Combining Schemes for Hybrid ARQ with Interference-Aware Successive
Decoding
|
cs.IT math.IT
|
For decades, cellular networks have greatly evolved to support high data
rates over reliable communication. Hybrid automatic-repeat-request (ARQ) is one
of the techniques to make such improvement possible. However, this advancement
is reduced at the cell edge where interference is not negligible. In order to
overcome the challenge at the cell edge, the concept of interference-aware
receiver has been recently proposed in which both desired and interference
signals are successively decoded, called interference-aware successive decoding
(IASD). Although IASD is the advanced receiver technology, interference signals
are out of the mobile station's control so that they cannot be requested by the
mobile station. For this reason, this paper proposes new combining schemes for
the IASD receiver, which operate with hybrid ARQ at the bit level or at the
symbol level. In addition, this paper compares the memory requirements of the
proposed combining schemes and analyzes the impact of discrete modulation on
them. Simulation results demonstrate the superiority of the proposed combining
schemes and show the improvement in terms of the number of transmissions.
|
1209.3352
|
Thompson Sampling for Contextual Bandits with Linear Payoffs
|
cs.LG cs.DS stat.ML
|
Thompson Sampling is one of the oldest heuristics for multi-armed bandit
problems. It is a randomized algorithm based on Bayesian ideas, and has
recently generated significant interest after several studies demonstrated it
to have better empirical performance compared to the state-of-the-art methods.
However, many questions regarding its theoretical performance remained open. In
this paper, we design and analyze a generalization of Thompson Sampling
algorithm for the stochastic contextual multi-armed bandit problem with linear
payoff functions, when the contexts are provided by an adaptive adversary. This
is among the most important and widely studied versions of the contextual
bandits problem. We provide the first theoretical guarantees for the contextual
version of Thompson Sampling. We prove a high probability regret bound of
$\tilde{O}(d^{3/2}\sqrt{T})$ (or $\tilde{O}(d\sqrt{T \log(N)})$), which is the
best regret bound achieved by any computationally efficient algorithm available
for this problem in the current literature, and is within a factor of
$\sqrt{d}$ (or $\sqrt{\log(N)}$) of the information-theoretic lower bound for
this problem.
|
1209.3353
|
Further Optimal Regret Bounds for Thompson Sampling
|
cs.LG cs.DS stat.ML
|
Thompson Sampling is one of the oldest heuristics for multi-armed bandit
problems. It is a randomized algorithm based on Bayesian ideas, and has
recently generated significant interest after several studies demonstrated it
to have better empirical performance compared to state-of-the-art methods.
In this paper, we provide a novel regret analysis for Thompson Sampling that
simultaneously proves both the optimal problem-dependent bound of
$(1+\epsilon)\sum_i \frac{\ln T}{\Delta_i}+O(\frac{N}{\epsilon^2})$ and the
first near-optimal problem-independent bound of $O(\sqrt{NT\ln T})$ on the
expected regret of this algorithm. Our near-optimal problem-independent bound
solves a COLT 2012 open problem of Chapelle and Li. The optimal
problem-dependent regret bound for this problem was first proven recently by
Kaufmann et al. [ALT 2012]. Our novel martingale-based analysis techniques are
conceptually simple, easily extend to distributions other than the Beta
distribution, and also extend to the more general contextual bandits setting
[Manuscript, Agrawal and Goyal, 2012].
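For the basic Bernoulli bandit with Beta priors, the algorithm analyzed in these two abstracts can be sketched in a few lines; the simulation setup (true arm probabilities, horizon) is illustrative:

```python
import random

def thompson_sampling(arms, T, seed=0):
    """Beta-Bernoulli Thompson Sampling sketch: sample a mean estimate
    from each arm's Beta(successes+1, failures+1) posterior, then pull
    the arm with the largest sample. `arms` holds the true (simulated)
    success probabilities; returns the pull count per arm."""
    rng = random.Random(seed)
    n = len(arms)
    succ, fail, pulls = [0] * n, [0] * n, [0] * n
    for _ in range(T):
        samples = [rng.betavariate(succ[i] + 1, fail[i] + 1) for i in range(n)]
        i = max(range(n), key=lambda j: samples[j])
        reward = 1 if rng.random() < arms[i] else 0
        succ[i] += reward
        fail[i] += 1 - reward
        pulls[i] += 1
    return pulls

pulls = thompson_sampling([0.2, 0.8], T=2000)
```

Because the posterior sampling step concentrates on the empirically better arm while still occasionally exploring, the better arm accumulates the vast majority of pulls over a long horizon.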
|
1209.3358
|
Computation in Multicast Networks: Function Alignment and Converse
Theorems
|
cs.IT math.IT
|
The classical problem in network coding theory considers communication over
multicast networks. Multiple transmitters send independent messages to multiple
receivers which decode the same set of messages. In this work, computation over
multicast networks is considered: each receiver decodes an identical function
of the original messages. For a countably infinite class of two-transmitter
two-receiver single-hop linear deterministic networks, the computing capacity
is characterized for a linear function (modulo-2 sum) of Bernoulli sources.
Inspired by the geometric concept of interference alignment in networks, a new
achievable coding scheme called function alignment is introduced. A new
converse theorem is established that is tighter than cut-set based and
genie-aided bounds. Computation (vs. communication) over multicast networks
requires additional analysis to account for multiple receivers sharing a
network's computational resources. We also develop a network decomposition
theorem which identifies elementary parallel subnetworks that can constitute an
original network without loss of optimality. The decomposition theorem provides
a conceptually-simpler algebraic proof of achievability that generalizes to
$L$-transmitter $L$-receiver networks.
|
1209.3366
|
Implement Blind Interference Alignment over Homogeneous 3-user 2x1
Broadcast Channel
|
cs.IT math.IT
|
This paper first studies the homogeneous 3-user 2x1 broadcast channel (BC)
with no CSIT. We show a sufficient condition for it to achieve the optimal 3/2
degrees of freedom (DoF) using Blind Interference Alignment (BIA); BIA refers
to interference alignment methods that require no CSIT. We then study the 2x1
broadcast network with K>=3 homogeneous single-antenna users whose coherence
time offsets are independently and uniformly distributed. We show that, if
K>=11, the two-antenna transmitter can find, with more than 95% certainty,
three users that form a BIA-feasible 3-user BC and achieve the optimal 3/2 DoF.
|
1209.3394
|
Distribution of the largest eigenvalue for real Wishart and Gaussian
random matrices and a simple approximation for the Tracy-Widom distribution
|
cs.IT math.IT math.ST stat.TH
|
We derive efficient recursive formulas giving the exact distribution of the
largest eigenvalue for finite dimensional real Wishart matrices and for the
Gaussian Orthogonal Ensemble (GOE). Comparing the exact distribution with
the limiting distribution of large random matrices, we also find that the
Tracy-Widom law can be approximated by a properly scaled and shifted Gamma
distribution with great accuracy for the values of common interest in
statistical applications.
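Such a Gamma approximation amounts to moment matching. The sketch below fits a shifted Gamma to given first three moments; the Tracy-Widom TW1 moment values used in the example are approximate assumed values, and the paper's exact fitting recipe may differ:

```python
from math import sqrt

def shifted_gamma_fit(mean, var, skew):
    """Moment-match alpha + Gamma(k, theta) to a target mean, variance,
    and skewness, using Gamma's moments: mean k*theta, variance
    k*theta^2, skewness 2/sqrt(k)."""
    k = 4.0 / skew ** 2       # from skewness = 2/sqrt(k)
    theta = sqrt(var / k)     # from variance = k*theta^2
    alpha = mean - k * theta  # from mean = k*theta + alpha
    return k, theta, alpha

# Approximate moments of the Tracy-Widom TW1 law (assumed values).
k, theta, alpha = shifted_gamma_fit(-1.2065, 1.6078, 0.2935)
```

By construction, the fitted shifted Gamma reproduces the three target moments exactly, which is what makes the approximation accurate in the bulk of the distribution.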
|
1209.3411
|
A Computational Model of the Effects of Drug Addiction on Neural
Population Dynamics
|
q-bio.NC cs.SI
|
Reward processing and derangements thereof, such as drug addiction, involve
the coordinated activity of many brain areas. Prior work has identified many
behavioral, molecular biological and single neuron changes throughout the
mesocorticolimbic system that reflect and drive addictive behavior.
Subpopulations in the ventral tegmental area (VTA) encode positive reward
prediction error, negative reward prediction error, and the magnitude of the
reward. Phasic activity in VTA dopaminergic neurons correlates with hedonic
value. Tonic activity of groups in the dorsomedial prefrontal cortex (dmPFC)
can encode antidepressant states. However, little is known about how drug
addiction might affect population encoding across larger brain regions. Here,
we compare the information content associated with network patterns in naive,
acutely intoxicated and chronically addicted states in a plastic attractor
network. We found that addiction decreases the network's ability to store and
discriminate among patterns of activity. Altered dopaminergic tone flattens the
energy landscape and decreases the entropy associated with each network
pattern. Altered dmPFC activity produces signal-to-noise deficits similar to
computational models of schizophrenia. Our results provide a conceptual
framework for interpreting altered neural population dynamics in
psychopathological states based on information theory. They also suggest a view
of the subtypes of depression as on a continuum of combinations of cortical and
subcortical dysfunction. This suggests that patients who suffer from depression
with psychotic features will have more cortical than mesolimbic dysfunction.
Furthermore, our framework can be applied to other psychiatric illnesses and so
may help us, in general, quantitatively understand psychiatric illnesses as
disorders in the representation and processing of information by distributed
brain networks.
|
1209.3416
|
Distributed Resource Allocation Algorithm Design for Multi-Cell Networks
Based on Advanced Decomposition Theory
|
cs.IT math.IT
|
In this letter, we investigate the resource allocation for downlink
multi-cell coordinated OFDMA wireless networks, in which power allocation and
subcarrier scheduling are jointly optimized. Aiming at maximizing the weighted
sum of the minimal user rates (WSMR) of coordinated cells under individual
power constraints at each base station, an effective distributed resource
allocation algorithm using a modified decomposition method is proposed, which
is suitable for practical implementation due to its low complexity and fast
convergence speed. Simulation results demonstrate that the proposed
decentralized algorithm provides substantial throughput gains with lower
computational cost compared to existing schemes.
|
1209.3419
|
Tractable Optimization Problems through Hypergraph-Based Structural
Restrictions
|
cs.AI
|
Several variants of the Constraint Satisfaction Problem have been proposed
and investigated in the literature for modelling those scenarios where
solutions are associated with some given costs. Within these frameworks
computing an optimal solution is an NP-hard problem in general; yet, when
restricted over classes of instances whose constraint interactions can be
modelled via (nearly-)acyclic graphs, this problem is known to be solvable in
polynomial time. In this paper, larger classes of tractable instances are
singled out, by discussing solution approaches based on exploiting hypergraph
acyclicity and, more generally, structural decomposition methods, such as
(hyper)tree decompositions.
|
1209.3433
|
A Hajj And Umrah Location Classification System For Video Crowded Scenes
|
cs.CV cs.CY cs.LG
|
In this paper, a new automatic system for classifying ritual locations in
diverse Hajj and Umrah video scenes is investigated. This challenging subject
has mostly been ignored in the past, one reason being the lack of realistic
annotated video datasets. The HUER dataset is defined to model six different
Hajj and Umrah ritual locations [26].
The proposed system consists of four main phases: preprocessing,
segmentation, feature extraction, and location classification. Shot boundary
detection and background/foreground segmentation algorithms are applied to
prepare the input video scenes for the KNN, ANN, and SVM classifiers. The
system improves on state-of-the-art results for Hajj and Umrah location
classification and successfully recognizes the six Hajj rituals with more than
90% accuracy. The various experiments demonstrate promising results.
|
1209.3460
|
Expander-like Codes based on Finite Projective Geometry
|
cs.IT math.IT
|
We present a novel error-correcting code and decoding algorithm whose
construction is similar to that of expander codes. The code is based on a
bipartite graph derived from the subsumption relations of finite projective
geometry, with Reed-Solomon codes as component codes. We use a modified version
of the well-known Zemor decoding algorithm for expander codes to decode our
codes. By deriving geometric bounds rather than eigenvalue bounds, we prove
that for practical values of the code rate, the random error correction
capability of our codes is much better than that derived for previously
studied graph codes, including Zemor's bound. MATLAB simulations further reveal
that the average-case performance of this code is 10 times better than these
geometric bounds in almost 99% of the test cases. By exploiting the
symmetry of projective space lattices, we have designed a corresponding decoder
that has optimal throughput. The decoder design has been prototyped on Xilinx
Virtex 5 FPGA. The codes are designed for potential applications in secondary
storage media. As an application, we also discuss usage of these codes to
improve the burst error correction capability of CD-ROM decoder.
|
1209.3487
|
A framework for large-scale distributed AI search across disconnected
heterogeneous infrastructures
|
cs.DC cs.AI math.CO
|
We present a framework for a large-scale distributed eScience Artificial
Intelligence search. Our approach is generic and can be used for many different
problems. Unlike many other approaches, we do not require dedicated machines,
homogeneous infrastructure or the ability to communicate between nodes. We give
special consideration to the robustness of the framework, minimising the loss
of effort even after total loss of infrastructure, and allowing easy
verification of every step of the distribution process. In contrast to most
eScience applications, the input data and specification of the problem is very
small, being easily given in a paragraph of text. The unique challenges our
framework tackles are related to the combinatorial explosion of the space that
contains the possible solutions and the robustness of long-running
computations. Not only is the time required to finish the computations unknown,
but also the resource requirements may change during the course of the
computation. We demonstrate the applicability of our framework by using it to
solve a challenging and hitherto open problem in computational mathematics. The
results demonstrate that our approach easily scales to computations of a size
that would have been impossible to tackle in practice just a decade ago.
|
1209.3505
|
Cognitive Energy Harvesting and Transmission from a Network Perspective
|
cs.IT math.IT
|
Wireless networks can be self-sustaining by harvesting energy from
radio-frequency (RF) signals. Building on classic cognitive radio networks, we
propose a novel method for network coexistence in which mobiles from a
secondary network, called secondary transmitters (STs), either harvest energy
from transmissions by nearby transmitters of a primary network, called primary
transmitters (PTs), or transmit information if all PTs are sufficiently far
away; STs store harvested energy in rechargeable batteries with finite capacity
and use all available energy for subsequent transmission once their batteries
are fully charged. In this model, each PT is centered at a guard zone and a
harvesting zone, which are disks of given radii; an ST harvests energy if it
lies in some harvesting zone, transmits fixed-power signals if it is outside
all guard zones, and idles otherwise. Based on this model, the spatial
throughput of the secondary network is maximized using a stochastic-geometry
model in which PTs and STs are modeled as independent homogeneous Poisson point
processes (HPPPs), under outage constraints for the coexisting networks, and is
obtained in simple closed form. The result shows that the maximum secondary
throughput decreases linearly with growing PT density, and the optimal ST
density is inversely proportional to the derived transmission probability for
STs.
|
1209.3549
|
Nash Equilibria for Stochastic Games with Asymmetric Information-Part 1:
Finite Games
|
cs.GT cs.SY
|
A model of stochastic games where multiple controllers jointly control the
evolution of the state of a dynamic system but have access to different
information about the state and action processes is considered. The asymmetry
of information among the controllers makes it difficult to compute or
characterize Nash equilibria. Using common information among the controllers,
the game with asymmetric information is shown to be equivalent to another game
with symmetric information. Further, under certain conditions, a Markov state
is identified for the equivalent symmetric information game and its Markov
perfect equilibria are characterized. This characterization provides a backward
induction algorithm to find Nash equilibria of the original game with
asymmetric information in pure or behavioral strategies. Each step of this
algorithm involves finding Bayesian Nash equilibria of a one-stage Bayesian
game. The class of Nash equilibria of the original game that can be
characterized in this backward manner are named common information based Markov
perfect equilibria.
|
1209.3573
|
A short note on the kissing number of the lattice in Gaussian wiretap
coding
|
cs.CR cs.IT math.IT
|
We show that on an $n=24m+8k$-dimensional even unimodular lattice, if the
shortest vector length is $\geq 2m$, then as the number of vectors of length
$2m$ decreases, the secrecy gain increases. We will also prove a similar result
on general unimodular lattices. Furthermore, assuming the conjecture by
Belfiore and Sol\'e, we will calculate the difference between inverses of
secrecy gains as the number of vectors varies. Finally, we will show by an
example that there exist two lattices in the same dimension with the same
shortest vector length and the same kissing number, but different secrecy
gains.
|
1209.3590
|
Information Retrieval From Internet Applications For Digital Forensic
|
cs.CR cs.IR cs.SI
|
Advanced internet technologies providing services like e-mail, social
networking, online banking, online shopping etc., have made day-to-day
activities simple and convenient. Increasing dependency on the internet,
convenience, and decreasing cost of electronic devices have resulted in
frequent use of online services. However, increased indulgence over the
internet has also accelerated the pace of digital crimes. The increase in
number and complexity of digital crimes has caught the attention of forensic
investigators. The Digital Investigators are faced with the challenge of
gathering accurate digital evidence from as many sources as possible. In this
paper, an attempt was made to recover digital evidence from a system's RAM in
the form of information about the most recent browsing session of the user.
Four different applications were chosen and the experiment was conducted across
two browsers. It was found that crucial information about the target user,
such as user names and passwords, was recoverable.
|
1209.3600
|
Output Feedback H_2 Model Matching for Decentralized Systems with Delays
|
cs.SY math.OC
|
This paper gives a new solution to the output feedback H_2 model matching
problem for a large class of delayed information sharing patterns. Existing
methods for such problems typically reduce the decentralized problem to a
centralized problem of higher state dimension. In contrast, the controller
given in this paper is constructed from the solutions to the centralized
control and estimation Riccati equations for the original system. The problem
is solved by decomposing the controller into two components. One is
centralized, but delayed, while the other is decentralized with finite impulse
response (FIR). It is then shown that the optimal controller can be constructed
through a combination of centralized spectral factorization and quadratic
programming.
|
1209.3607
|
Some refined results on convergence of curvelet transform
|
cs.IT math.IT
|
This article presents a proof that the M-term non-linear approximation, in a
curvelet frame, of functions that are C^3 apart from C^3 edges has squared
L^2 approximation error bounded by M^(-2).
|
1209.3650
|
A survey on social network sites' functional features
|
cs.HC cs.SI
|
Though social network sites (SNS) are among the most popular sites on the
Web, there is no formal study of their functional features. This paper
introduces a comprehensive list of them. Then, it shows how these features are
supported by top 16 social network platforms. Results show some universal
features, such as comments support, public sharing of contents, system
notifications and profile pages with avatars. A strong tendency in using
external services for authentication and contact recognition has been found,
which is quite significant in top SNS. Most popular content types include text,
pictures and video. The home page is the site for publishing content and
following activities, whilst profile pages mainly include owner's contacts and
content lists.
|
1209.3672
|
1-Bit Matrix Completion
|
math.ST cs.IT math.IT stat.TH
|
In this paper we develop a theory of matrix completion for the extreme case
of noisy 1-bit observations. Instead of observing a subset of the real-valued
entries of a matrix M, we obtain a small number of binary (1-bit) measurements
generated according to a probability distribution determined by the real-valued
entries of M. The central question we ask is whether or not it is possible to
obtain an accurate estimate of M from this data. In general this would seem
impossible, but we show that the maximum likelihood estimate under a suitable
constraint returns an accurate estimate of M when ||M||_{\infty} <= \alpha, and
rank(M) <= r. If the log-likelihood is a concave function (e.g., the logistic
or probit observation models), then we can obtain this maximum likelihood
estimate by optimizing a convex program. In addition, we show that if
instead of recovering M we simply wish to obtain an estimate of the
distribution generating the 1-bit measurements, then we can eliminate the
requirement that ||M||_{\infty} <= \alpha. For both cases, we provide lower
bounds showing that these estimates are near-optimal. We conclude with a suite
of experiments that both verify the implications of our theorems and
illustrate some of the practical applications of 1-bit matrix completion. In
particular, we compare our program to standard matrix completion methods on
movie rating data in which users submit ratings from 1 to 5. In order to use
our program, we quantize this data to a single bit, but we allow the standard
matrix completion program to have access to the original ratings (from 1 to 5).
Surprisingly, the approach based on binary data performs significantly better.
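The maximum-likelihood idea in this abstract can be sketched in a few lines. The toy below is an illustrative assumption, not the paper's solver: it fits a rank-1 factorization u v^T by plain gradient ascent on the logistic log-likelihood of 1-bit observations, omitting the nuclear-norm and infinity-norm constraints that the paper's convex program enforces.

```python
import math
import random

def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

random.seed(0)
n = 6
# Ground-truth rank-1 matrix M with entries in {-1, +1}.
u_true = [1.0 if i % 2 == 0 else -1.0 for i in range(n)]
v_true = [1.0] * n
M = [[u_true[i] * v_true[j] for j in range(n)] for i in range(n)]

# One binary (1-bit) observation per entry under the logistic model.
Y = [[1 if random.random() < sigmoid(M[i][j]) else -1 for j in range(n)]
     for i in range(n)]

# Gradient ascent on the logistic log-likelihood of a rank-1 model u v^T.
u = [random.uniform(-0.1, 0.1) for _ in range(n)]
v = [random.uniform(-0.1, 0.1) for _ in range(n)]
lr = 0.1
for _ in range(2000):
    gu, gv = [0.0] * n, [0.0] * n
    for i in range(n):
        for j in range(n):
            # d/dm log sigmoid(y*m) = y * sigmoid(-y*m), with m = u_i v_j.
            g = Y[i][j] * sigmoid(-Y[i][j] * u[i] * v[j])
            gu[i] += g * v[j]
            gv[j] += g * u[i]
    u = [ui + lr * gi / n for ui, gi in zip(u, gu)]
    v = [vj + lr * gj / n for vj, gj in zip(v, gv)]

M_hat = [[u[i] * v[j] for j in range(n)] for i in range(n)]
agree = sum((M_hat[i][j] > 0) == (M[i][j] > 0)
            for i in range(n) for j in range(n)) / (n * n)
print("fraction of entries with the correct sign:", agree)
```

Even from a single noisy bit per entry, the rank-1 structure lets the fit recover the sign pattern of M far better than the raw observations do.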
|
1209.3686
|
Active Learning for Crowd-Sourced Databases
|
cs.LG cs.DB
|
Crowd-sourcing has become a popular means of acquiring labeled data for a
wide variety of tasks where humans are more accurate than computers, e.g.,
labeling images, matching objects, or analyzing sentiment. However, relying
solely on the crowd is often impractical even for data sets with thousands of
items, due to the time and cost constraints of acquiring human input (which
costs pennies and takes minutes per label). In this paper, we propose algorithms for
integrating machine learning into crowd-sourced databases, with the goal of
allowing crowd-sourcing applications to scale, i.e., to handle larger datasets
at lower costs. The key observation is that, in many of the above tasks, humans
and machine learning algorithms can be complementary, as humans are often more
accurate but slow and expensive, while algorithms are usually less accurate,
but faster and cheaper.
Based on this observation, we present two new active learning algorithms to
combine humans and algorithms together in a crowd-sourced database. Our
algorithms are based on the theory of non-parametric bootstrap, which makes our
results applicable to a broad class of machine learning models. Our results, on
three real-life datasets collected with Amazon's Mechanical Turk, and on 15
well-known UCI data sets, show that our methods on average ask humans to label
one to two orders of magnitude fewer items to achieve the same accuracy as a
baseline that labels random images, and two to eight times fewer questions than
previous active learning schemes.
|
1209.3694
|
Submodularity in Batch Active Learning and Survey Problems on Gaussian
Random Fields
|
cs.LG cs.AI cs.DS
|
Many real-world datasets can be represented in the form of a graph whose edge
weights designate similarities between instances. A discrete Gaussian random
field (GRF) model is a finite-dimensional Gaussian process (GP) whose prior
covariance is the inverse of a graph Laplacian. Minimizing the trace of the
predictive covariance Sigma (V-optimality) on GRFs has proven successful in
batch active learning classification problems with budget constraints. However,
its worst-case bound has been missing. We show that the V-optimality on GRFs as
a function of the batch query set is submodular and hence its greedy selection
algorithm guarantees a (1-1/e) approximation ratio. Moreover, GRF models
satisfy the absence-of-suppressor (AofS) condition. For active survey problems,
we propose a similar survey criterion which minimizes 1'(Sigma)1. In practice,
the V-optimality criterion performs better than GPs with mutual-information-gain
criteria and allows nonuniform costs for different nodes.
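The greedy step that the submodularity result justifies can be sketched concretely. The path graph, regularizer d, and noise level below are illustrative assumptions, not the paper's setup: we form a GRF prior covariance Sigma = (L + dI)^{-1} and repeatedly pick the query node whose rank-1 posterior update most reduces trace(Sigma) (the V-optimality objective).

```python
def inverse(A):
    """Invert a small dense matrix by Gauss-Jordan elimination."""
    m = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(m)]
         for i, row in enumerate(A)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(m):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[m:] for row in M]

# Laplacian of a 5-node path graph (toy example).
n = 5
L = [[0.0] * n for _ in range(n)]
for i in range(n - 1):
    L[i][i] += 1.0
    L[i + 1][i + 1] += 1.0
    L[i][i + 1] -= 1.0
    L[i + 1][i] -= 1.0

d, noise = 0.1, 0.01                     # regularizer and query noise
Sigma = inverse([[L[i][j] + (d if i == j else 0.0) for j in range(n)]
                 for i in range(n)])
trace_before = sum(Sigma[i][i] for i in range(n))

chosen = []
for _ in range(2):                       # batch of two queries
    best_q, best_gain = None, -1.0
    for q in range(n):
        if q in chosen:
            continue
        # Trace reduction from observing node q (rank-1 update).
        gain = sum(Sigma[i][q] ** 2 for i in range(n)) / (Sigma[q][q] + noise)
        if gain > best_gain:
            best_q, best_gain = q, gain
    chosen.append(best_q)
    c = Sigma[best_q][best_q] + noise
    col = [Sigma[i][best_q] for i in range(n)]
    Sigma = [[Sigma[i][j] - col[i] * col[j] / c for j in range(n)]
             for i in range(n)]

trace_after = sum(Sigma[i][i] for i in range(n))
print("greedy queries:", chosen,
      "trace:", round(trace_before, 3), "->", round(trace_after, 3))
```

The (1-1/e) guarantee of the abstract applies to exactly this kind of greedy loop, since each trace reduction is a marginal gain of a submodular set function.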
|
1209.3702
|
Multiple-Input Multiple-Output Two-Way Relaying: A Space-Division
Approach
|
cs.IT math.IT
|
We propose a novel space-division based network-coding scheme for
multiple-input multiple-output (MIMO) two-way relay channels (TWRCs), in which
two multi-antenna users exchange information via a multi-antenna relay. In the
proposed scheme, the overall signal space at the relay is divided into two
subspaces. In one subspace, the spatial streams of the two users have nearly
orthogonal directions, and are completely decoded at the relay. In the other
subspace, the signal directions of the two users are nearly parallel, and
linear functions of the spatial streams are computed at the relay, following
the principle of physical-layer network coding (PNC). Based on the recovered
messages and message-functions, the relay generates and forwards network-coded
messages to the two users. We show that, at high signal-to-noise ratio (SNR),
the proposed scheme achieves the asymptotic sum rate capacity of MIMO TWRCs
within (1/2)log(5/4) ≈ 0.161 bits per user-antenna for any antenna configuration
and channel realization. We perform large-system analysis to derive the average
sum-rate of the proposed scheme over Rayleigh-fading MIMO TWRCs. We show that
the average asymptotic sum rate gap to the capacity upper bound is at most
0.053 bits per relay-antenna. It is demonstrated that the proposed scheme
significantly outperforms the existing schemes.
|
1209.3728
|
Linear Precoding Designs for Amplify-and-Forward Multiuser Two-Way Relay
Systems
|
cs.IT math.IT
|
Two-way relaying can improve spectral efficiency in two-user cooperative
communications. It also has great potential in multiuser systems. A major
problem of designing a multiuser two-way relay system (MU-TWRS) is transceiver
or precoding design to suppress co-channel interference. This paper aims to
study linear precoding designs for a cellular MU-TWRS where a multi-antenna
base station (BS) conducts bi-directional communications with multiple mobile
stations (MSs) via a multi-antenna relay station (RS) with amplify-and-forward
relay strategy. The design goal is to optimize uplink performance, including
total mean-square error (Total-MSE) and sum rate, while maintaining individual
signal-to-interference-plus-noise ratio (SINR) requirement for downlink
signals. We show that the BS precoding design with the RS precoder fixed can be
converted to a standard second-order cone program (SOCP), and the optimal
solution is obtained efficiently. The RS precoding design with the BS precoder
fixed, on the other hand, is non-convex and we present an iterative algorithm
to find a local optimal solution. Then, the joint BS-RS precoding is obtained
by solving the BS precoding and the RS precoding alternately. Comprehensive
simulation is conducted to demonstrate the effectiveness of the proposed
precoding designs.
|
1209.3733
|
Cascade Failures from Distributed Generation in Power Grids
|
physics.soc-ph cs.SY
|
Power grids are nowadays experiencing a transformation due to the
introduction of Distributed Generation based on Renewable Sources. Unlike
classical Distributed Generation, where local power sources
mitigate anomalous user consumption peaks, Renewable Sources introduce in the
grid intrinsically erratic power inputs. By introducing a simple schematic (but
realistic) model for power grids with stochastic distributed generation, we
study the effects of erratic sources on the robustness of several IEEE power
grid test networks with up to 2000 buses. We find that increasing the
penetration of erratic sources causes the grid to fail with a sharp transition.
We compare these results with the case of failures caused by naturally
increasing power demand.
|
1209.3734
|
RIO: Minimizing User Interaction in Ontology Debugging
|
cs.AI
|
Efficient ontology debugging is a cornerstone for many activities in the
context of the Semantic Web, especially when automatic tools produce (parts of)
ontologies such as in the field of ontology matching. The best currently known
interactive debugging systems rely upon some meta information in terms of fault
probabilities, which can speed up the debugging procedure in the good case, but
can also have negative impact on the performance in the bad case. The problem
is that assessment of the meta information is only possible a posteriori.
Consequently, as long as the actual fault is unknown, there is always some risk
of suboptimal interactive diagnoses discrimination. As an alternative, one
might prefer to rely on a tool which pursues a no-risk strategy. In this case,
however, possibly well-chosen meta information cannot be exploited, resulting
again in inefficient debugging actions. In this work we present a reinforcement
learning strategy that continuously adapts its behavior depending on the
performance achieved and minimizes the risk of using low-quality meta
information. Therefore, this method is suitable for application scenarios where
reliable a priori fault estimates are difficult to obtain. Using problematic
ontologies in the field of ontology matching, we show that the proposed
risk-aware query strategy outperforms both active learning approaches and
no-risk strategies on average in terms of required amount of user interaction.
|
1209.3737
|
Key to Network Controllability
|
physics.soc-ph cond-mat.stat-mech cs.SI q-bio.MN
|
Liu et al. recently derived the minimum number of driver nodes needed to obtain
full structural controllability over a directed network. Driver nodes are
unmatched nodes, from which there are directed paths to all matched nodes.
Their most important assertion is that a system's controllability is to a great
extent encoded by the underlying network's degree distribution, $P(k_{in},
k_{out})$. Is the controllability of a network decided almost completely by the
immediate neighbourhood of a node, while even slightly distant nodes play no
role at all? Motivated by the above question, in this communication, we argue
that an effective understanding of controllability in directed networks can be
reached using distance based measures of closeness centrality and betweenness
centrality and may not require the knowledge of local connectivity measures
like in-degree and out-degree.
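As a concrete illustration of the matching construction by Liu et al. that this abstract discusses: make an out-copy and an in-copy of every node, compute a maximum bipartite matching over the directed edges, and the nodes whose in-copies stay unmatched are the driver nodes. The toy star network and the simple Kuhn-style augmenting-path matcher below are illustrative assumptions, not code from either paper.

```python
def max_matching(adj, n):
    """Augmenting-path bipartite matching; adj[u] lists right-side nodes."""
    match_right = {}                       # right node -> matched left node

    def augment(u, seen):
        for v in adj.get(u, []):
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    for u in range(n):
        augment(u, set())
    return match_right

# Directed star: node 0 points to nodes 1..4.
n = 5
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)

matched = max_matching(adj, n)             # in-copies that received an edge
drivers = [v for v in range(n) if v not in matched]
print("driver nodes:", drivers)            # -> driver nodes: [0, 2, 3, 4]
```

The star needs four driver nodes (the hub, which nothing points to, plus all but one leaf), even though its degree distribution is very simple, which is the kind of observation the abstract's question probes.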
|
1209.3756
|
Incomplete Information in RDF
|
cs.DB
|
We extend RDF with the ability to represent property values that exist, but
are unknown or partially known, using constraints. Following ideas from the
incomplete information literature, we develop a semantics for this extension of
RDF, called RDFi, and study SPARQL query evaluation in this framework.
|
1209.3761
|
Generalized Canonical Correlation Analysis for Disparate Data Fusion
|
stat.ML cs.LG
|
Manifold matching works to identify embeddings of multiple disparate data
spaces into the same low-dimensional space, where joint inference can be
pursued. It is an enabling methodology for fusion and inference from multiple
and massive disparate data sources. In this paper we focus on a method called
Canonical Correlation Analysis (CCA) and its generalization Generalized
Canonical Correlation Analysis (GCCA), which belong to the more general Reduced
Rank Regression (RRR) framework. We present an efficiency investigation of CCA
and GCCA under different training conditions for a particular text document
classification task.
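The CCA computation named above can be sketched via its alternating-least-squares (power iteration) formulation, which is consistent with the Reduced Rank Regression view the abstract mentions: each step regresses one view's projection on the other view and renormalizes. The two-view synthetic data, noise level, and iteration count are illustrative assumptions.

```python
import random

def solve2(A, b):
    """Solve a 2x2 linear system A x = b by the explicit inverse formula."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

random.seed(0)
n = 200
X, Y = [], []
for _ in range(n):
    z = random.gauss(0, 1)                 # shared latent variable
    X.append([z + 0.1 * random.gauss(0, 1), random.gauss(0, 1)])
    Y.append([random.gauss(0, 1), z + 0.1 * random.gauss(0, 1)])

# Center each view.
for D in (X, Y):
    for j in range(2):
        m = sum(r[j] for r in D) / n
        for r in D:
            r[j] -= m

def gram(D):
    return [[sum(r[i] * r[j] for r in D) for j in range(2)] for i in range(2)]

def xty(D, u):
    return [sum(r[i] * ui for r, ui in zip(D, u)) for i in range(2)]

Cxx, Cyy = gram(X), gram(Y)
a, b = [1.0, 0.0], [1.0, 0.0]
for _ in range(50):
    u = [r[0] * a[0] + r[1] * a[1] for r in X]
    b = solve2(Cyy, xty(Y, u))             # regress Xa on Y
    s = sum((r[0] * b[0] + r[1] * b[1]) ** 2 for r in Y) ** 0.5
    b = [bi / s for bi in b]
    v = [r[0] * b[0] + r[1] * b[1] for r in Y]
    a = solve2(Cxx, xty(X, v))             # regress Yb on X
    s = sum((r[0] * a[0] + r[1] * a[1]) ** 2 for r in X) ** 0.5
    a = [ai / s for ai in a]

u = [r[0] * a[0] + r[1] * a[1] for r in X]
v = [r[0] * b[0] + r[1] * b[1] for r in Y]
rho = sum(ui * vi for ui, vi in zip(u, v))
print("leading canonical correlation:", round(rho, 3))
```

The iteration converges to the leading canonical pair, aligning both views along the shared latent variable; GCCA extends the same idea to more than two views.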
|