| id | title | categories | abstract |
|---|---|---|---|
1306.4363 | Social Network Dynamics in a Massive Online Game: Network Turnover,
Non-densification, and Team Engagement in Halo Reach | cs.SI physics.data-an physics.soc-ph | Online multiplayer games are a popular form of social interaction, used by
hundreds of millions of individuals. However, little is known about the social
networks within these online games, or how they evolve over time. Understanding
human social dynamics within massive online games can shed new light on social
interactions in general and inform the development of more engaging systems.
Here, we study a novel, large friendship network, inferred from nearly 18
billion social interactions over 44 weeks between 17 million individuals in the
popular online game Halo: Reach. This network is one of the largest, most
detailed temporal interaction networks studied to date, and provides a novel
perspective on the dynamics of online friendship networks, as opposed to mere
interaction graphs. Initially, this network exhibits strong structural turnover
and decays rapidly from a peak size. In the following period, however, both
network size and turnover stabilize, producing a dynamic structural
equilibrium. In contrast to other studies, we find that the Halo friendship
network is non-densifying: both the mean degree and the average pairwise
distance are stable, suggesting that densification cannot occur when
maintaining friendships is costly. Finally, players with greater long-term
engagement exhibit stronger local clustering, suggesting a group-level social
engagement process. These results demonstrate the utility of online games for
studying social networks, shed new light on empirical temporal graph patterns,
and clarify the claims of universality of network densification.
|
1306.4391 | On the Fundamental Limits of Recovering Tree Sparse Vectors from Noisy
Linear Measurements | cs.IT math.IT math.ST stat.ML stat.TH | Recent breakthrough results in compressive sensing (CS) have established that
many high dimensional signals can be accurately recovered from a relatively
small number of non-adaptive linear observations, provided that the signals
possess a sparse representation in some basis. Subsequent efforts have shown
that the performance of CS can be improved by exploiting additional structure
in the locations of the nonzero signal coefficients during inference, or by
utilizing some form of data-dependent adaptive measurement focusing during the
sensing process. To our knowledge, our own previous work was the first to
establish the potential benefits that can be achieved when fusing the notions
of adaptive sensing and structured sparsity -- that work examined the task of
support recovery from noisy linear measurements, and established that an
adaptive sensing strategy specifically tailored to signals that are tree-sparse
can significantly outperform adaptive and non-adaptive sensing strategies that
are agnostic to the underlying structure. In this work we establish fundamental
performance limits for the task of support recovery of tree-sparse signals from
noisy measurements, in settings where measurements may be obtained either
non-adaptively (using a randomized Gaussian measurement strategy motivated by
initial CS investigations) or by any adaptive sensing strategy. Our main
results here imply that the adaptive tree sensing procedure analyzed in our
previous work is nearly optimal, in the sense that no other sensing and
estimation strategy can perform fundamentally better for identifying the
support of tree-sparse signals.
|
1306.4401 | Voter models with contrarian agents | physics.soc-ph cs.SI | In the voter and many other opinion formation models, agents are assumed to
behave as congregators (also called the conformists); they are attracted to the
opinions of others. In this study, I investigate linear extensions of the voter
model with contrarian agents. An agent is either a congregator or a contrarian
and holds a binary opinion. I investigate three models that differ in the
behavior of the contrarian toward other agents. In model 1, contrarians mimic
the opinions of other contrarians and oppose (i.e., try to select the opinion
opposite to) those of congregators. In model 2, contrarians mimic the opinions
of congregators and oppose those of other contrarians. In model 3, contrarians
oppose anybody. In all models, congregators are assumed to be attracted to the
opinions of anybody. I show that in all three models, even a small number of
contrarians prevents the entire population from reaching consensus. I also obtain the equilibrium
distributions using the van Kampen small-fluctuation approximation and the
Fokker-Planck equation for the case of many contrarians and a single
contrarian, respectively. I show that the fluctuation around the symmetric
coexistence equilibrium is much larger in model 2 than in models 1 and 3 when
contrarians are rare.
|
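The contrarian dynamics described in this row can be illustrated with a very short simulation. The following is a toy Python sketch of model 3 (contrarians oppose anybody) on a complete graph; the update rule is an assumed pairwise imitation/opposition step for illustration, not the paper's exact formulation:

```python
import random

def simulate(n=100, n_contrarian=5, steps=20000, seed=0):
    """Toy voter model with contrarian agents (model 3: contrarians oppose
    anybody) on a complete graph. Opinions are bipolar, +1 or -1."""
    rng = random.Random(seed)
    opinion = [rng.choice([-1, 1]) for _ in range(n)]
    contrarian = [i < n_contrarian for i in range(n)]   # first agents are contrarians
    for _ in range(steps):
        i = rng.randrange(n)            # agent being updated
        j = rng.randrange(n)            # randomly chosen other agent
        if i == j:
            continue
        # congregators copy the other agent's opinion; contrarians adopt the opposite
        opinion[i] = -opinion[j] if contrarian[i] else opinion[j]
    return opinion

ops = simulate()
frac_up = sum(1 for o in ops if o == 1) / len(ops)
print(f"fraction holding opinion +1: {frac_up:.2f}")
```

With `n_contrarian=0` this reduces to the ordinary voter model, which eventually reaches consensus; with even a few contrarians the population keeps fluctuating around coexistence, in line with the abstract's claim.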
1306.4410 | Joint estimation of sparse multivariate regression and conditional
graphical models | stat.ML cs.LG | The multivariate regression model is a natural generalization of the classical
univariate regression model for fitting multiple responses. In this paper, we
propose a high-dimensional multivariate conditional regression model for
constructing sparse estimates of the multivariate regression coefficient matrix
that accounts for the dependency structure among the multiple responses. The
proposed method decomposes the multivariate regression problem into a series of
penalized conditional log-likelihoods of each response conditioned on the
covariates and the other responses. It allows simultaneous estimation of the sparse
regression coefficient matrix and the sparse inverse covariance matrix.
Asymptotic selection consistency and normality are established for
diverging dimensions of the covariates and number of responses. The
effectiveness of the proposed method is also demonstrated in a variety of
simulated examples as well as an application to the Glioblastoma multiforme
cancer data.
|
1306.4411 | Event-Object Reasoning with Curated Knowledge Bases: Deriving Missing
Information | cs.AI | The broader goal of our research is to formulate answers to why and how
questions with respect to knowledge bases, such as AURA. One issue we face when
reasoning with many available knowledge bases is that at times needed
information is missing. Examples of this include partially missing information
about next sub-event, first sub-event, last sub-event, result of an event,
input to an event, destination of an event, and raw material involved in an
event. In many cases one can recover part of the missing knowledge through
reasoning. In this paper we give a formal definition of how such missing
information can be recovered and then give an Answer Set Programming (ASP) implementation of it. We then
discuss the implication of this with respect to answering why and how
questions.
|
1306.4414 | Symbol and Bit Mapping Optimization for Physical-Layer Network Coding
with Pulse Amplitude Modulation | cs.IT math.IT | In this paper, we consider a two-way relay network in which two users
exchange messages through a single relay using a physical-layer network coding
(PNC) based protocol. The protocol comprises two phases of communication. In
the multiple access (MA) phase, two users transmit their modulated signals
concurrently to the relay, and in the broadcast (BC) phase, the relay
broadcasts a network-coded (denoised) signal to both users. Nonbinary and
binary network codes are considered for uniform and nonuniform pulse amplitude
modulation (PAM) adopted in the MA phase, respectively. We examine the effect
of different choices of symbol mapping (i.e., mapping from the denoised signal
to the modulation symbols at the relay) and bit mapping (i.e., mapping from the
modulation symbols to the source bits at the user) on the system error-rate
performance. A general optimization framework is proposed to determine the
optimal symbol/bit mappings with joint consideration of noisy transmissions in
both communication phases. Complexity-reduction techniques are developed for
solving the optimization problems. It is shown that the optimal symbol/bit
mappings depend on the signal-to-noise ratio (SNR) of the channel and the
modulation scheme. A general strategy for choosing good symbol/bit mappings is
also presented based on a high-SNR analysis, which suggests using a symbol
mapping that aligns the error patterns in both communication phases and Gray
and binary bit mappings for uniform and nonuniform PAM, respectively.
|
1306.4418 | Structure Based Extended Resolution for Constraint Programming | cs.AI | Nogood learning is a powerful approach to reducing search in Constraint
Programming (CP) solvers. The current state of the art, called Lazy Clause
Generation (LCG), uses resolution to derive nogoods expressing the reasons for
each search failure. Such nogoods can prune other parts of the search tree,
producing exponential speedups on a wide variety of problems. Nogood learning
solvers can be seen as resolution proof systems. The stronger the proof system,
the faster it can solve a CP problem. It has recently been shown that the proof
system used in LCG is at least as strong as general resolution. However,
stronger proof systems such as \emph{extended resolution} exist. Extended
resolution allows for literals expressing arbitrary logical concepts over
existing variables to be introduced and can allow exponentially smaller proofs
than general resolution. The primary problem in using extended resolution is to
figure out exactly which literals are useful to introduce. In this paper, we
show that we can use the structural information contained in a CP model in
order to introduce useful literals, and that this can translate into
significant speedups on a range of problems.
|
1306.4427 | Multidimensional User Data Model for Web Personalization | cs.IR | Personalization is applied to a great extent in many systems. This paper
presents a multi-dimensional user data model and its application in web search.
Online and offline activities of the user are tracked to create the user
model. The main phases are identification of relevant documents and the
representation of relevance and similarity of the documents. The concepts
Keywords, Topics, URLs and clusters are used in the implementation. The
algorithms for profiling, grading and clustering the concepts in the user model
and algorithm for determining the personalized search results by re-ranking the
results in a search bank are presented in this paper. Simple experiments for
evaluation of the model and their results are described.
|
1306.4447 | Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data
from Machine Learning Classifiers | cs.CR cs.LG stat.ML | Machine Learning (ML) algorithms are used to train computers to perform a
variety of complex tasks and improve with experience. Computers learn how to
recognize patterns, make unintended decisions, or react to a dynamic
environment. Certain trained machines may be more effective than others because
they are based on more suitable ML algorithms or because they were trained
through superior training sets. Although ML algorithms are known and publicly
released, training sets may not be reasonably ascertainable and, indeed, may be
guarded as trade secrets. While much research has been performed on the
privacy of the elements of training sets, in this paper we focus our attention
on ML classifiers and on the statistical information that can be unconsciously
or maliciously revealed from them. We show that it is possible to infer
unexpected but useful information from ML classifiers. In particular, we build
a novel meta-classifier and train it to hack other classifiers, obtaining
meaningful information about their training sets. This kind of information
leakage can be exploited, for example, by a vendor to build more effective
classifiers or to simply acquire trade secrets from a competitor's apparatus,
potentially violating its intellectual property rights.
|
1306.4460 | Implementing a Wall-In Building Placement in StarCraft with Declarative
Programming | cs.AI | In real-time strategy games like StarCraft, skilled players often block the
entrance to their base with buildings to prevent the opponent's units from
getting inside. This technique, called "walling-in", is a vital part of a
player's skill set, allowing them to survive early aggression. However, current
artificial players (bots) do not possess this skill, due to the numerous
inconveniences that surface when implementing it in imperative languages like
C++ or Java. In this text, written as a guide for bot programmers, we address
the problem of finding an appropriate building placement that blocks the
entrance to the player's base, and present a ready-to-use declarative solution
employing the paradigm of answer set programming (ASP). We also encourage the
readers to experiment with different declarative approaches to this problem.
|
1306.4478 | Finite Element Based Tracking of Deforming Surfaces | cs.CV cs.GR | We present an approach to robustly track the geometry of an object that
deforms over time from a set of input point clouds captured from a single
viewpoint. The deformations we consider are caused by applying forces to known
locations on the object's surface. Our method combines the use of prior
information on the geometry of the object modeled by a smooth template and the
use of a linear finite element method to predict the deformation. This allows
the accurate reconstruction of both the observed and the unobserved sides of
the object. We present tracking results for noisy low-quality point clouds
acquired by either a stereo camera or a depth camera, and simulations with
point clouds corrupted by different error terms. We show that our method is
also applicable to large non-linear deformations.
|
1306.4479 | Robust State and fault Estimation of Linear Discrete Time Systems with
Unknown Disturbances | cs.SY | This paper presents a new robust fault and state estimation method, based on
a recursive least-squares filter, for linear stochastic systems with unknown
disturbances. The novel elements of the algorithm are: a simple, easily
implementable, square root method which is shown to solve the numerical
problems affecting the unknown input filter algorithm and related information
filter and smoothing algorithms; an iterative framework, where information and
covariance filters and smoothing are sequentially run in order to estimate the
state and fault. This method provides a direct estimate of the state and fault
in a single block with a simple formulation. A numerical example is given in
order to illustrate the performance of the proposed filter.
|
1306.4495 | Uplink Performance of Time-Reversal MRC in Massive MIMO Systems Subject
to Phase Noise | cs.IT math.IT | Multi-user multiple-input multiple-output (MU-MIMO) cellular systems with an
excess of base station (BS) antennas (Massive MIMO) offer unprecedented
multiplexing gains and radiated energy efficiency. Oscillator phase noise is
introduced in the transmitter and receiver radio frequency chains and severely
degrades the performance of communication systems. We study the effect of
oscillator phase noise in frequency-selective Massive MIMO systems with
imperfect channel state information (CSI). In particular, we consider two
distinct operation modes, namely when the phase noise processes at the $M$ BS
antennas are identical (synchronous operation) and when they are independent
(non-synchronous operation). We analyze a linear and low-complexity
time-reversal maximum-ratio combining (TR-MRC) reception strategy. For both
operation modes we derive a lower bound on the sum-capacity and we compare
their performance. Based on the derived achievable sum-rates, we show that with
the proposed receive processing an $O(\sqrt{M})$ array gain is achievable. Due
to the phase noise drift the estimated effective channel becomes progressively
outdated. Therefore, phase noise effectively limits the length of the interval
used for data transmission and the number of scheduled users. The derived
achievable rates provide insights into the optimum choice of the data interval
length and the number of scheduled users.
|
1306.4514 | Towards Compact and Frequency-Tunable Antenna Solutions for MIMO
Transmission with a Single RF Chain | cs.IT math.IT | Recently, a technique called beam-space MIMO has been demonstrated as an
effective approach for transmitting multiple signals while using a single
RF-chain. In this work, we present novel design considerations and a compact
antenna solution to stimulate the deployment of beam-space MIMO in future
wireless applications. Targeting integration in small wireless devices, the
novel antenna is made of a single integrated radiator rather than an array of
physically-separated dipoles. It also drastically simplifies the implementation
of variable loads and DC bias circuits for BPSK modulated signals, and does not
require any external reconfigurable matching circuit. Finally, we show that
this antenna system could be reconfigured by dynamic adjustment of terminating
loads to preserve its beam-space multiplexing capabilities over a 1:2 tuning
range, thereby promoting the convergence of MIMO and dynamic spectrum
allocation via reduced-complexity hardware. A prototype achieving
single-RF-chain multiplexing at a fixed frequency is designed and measured,
showing excellent agreement between simulations and measurements.
|
1306.4532 | Verifying the Steane code with Quantomatic | quant-ph cs.AI cs.LO | In this paper we give a partially mechanized proof of the correctness of
Steane's 7-qubit error correcting code, using the tool Quantomatic. To the best
of our knowledge, this represents the largest and most complicated verification
task yet carried out using Quantomatic.
|
1306.4534 | Exploiting Cellular Data for Disease Containment and Information
Campaigns Strategies in Country-Wide Epidemics | cs.SI physics.soc-ph | Human mobility is one of the key factors at the basis of the spreading of
diseases in a population. Containment strategies are usually devised on
movement scenarios based on coarse-grained assumptions. Mobile phone data
provide a unique opportunity for building models and defining strategies based
on very precise information about the movement of people in a region or in a
country. Another very important aspect is the underlying social structure of a
population, which might play a fundamental role in devising information
campaigns to promote vaccination and preventive measures, especially in
countries with a strong family (or tribal) structure.
In this paper we analyze a large-scale dataset describing the mobility and
the call patterns of a large number of individuals in Ivory Coast. We present a
model that describes how diseases spread across the country by exploiting
mobility patterns of people extracted from the available data. Then, we
simulate several epidemic scenarios and evaluate mechanisms to contain the
epidemic spreading of diseases, based on the information about people mobility
and social ties, also gathered from the phone call data. More specifically, we
find that restricting mobility does not delay the occurrence of an endemic
state and that an information campaign based on one-to-one phone conversations
among members of social groups might be an effective countermeasure.
|
1306.4549 | Sigma-Delta quantization of sub-Gaussian frame expansions and its
application to compressed sensing | cs.IT math.IT math.NA | Suppose that the collection $\{e_i\}_{i=1}^m$ forms a frame for $\R^k$, where
each entry of the vector $e_i$ is a sub-Gaussian random variable. We consider
expansions in such a frame, which are then quantized using a Sigma-Delta
scheme. We show that an arbitrary signal in $\mathbb{R}^k$ can be recovered from its
quantized frame coefficients up to an error which decays root-exponentially in
the oversampling rate $m/k$. Here the quantization scheme is assumed to be
chosen appropriately depending on the oversampling rate and the quantization
alphabet can be coarse. The result holds with high probability on the draw of
the frame uniformly for all signals. The crux of the argument is a bound on the
extreme singular values of the product of a deterministic matrix and a
sub-Gaussian frame. For fine quantization alphabets, we leverage this bound to
show polynomial error decay in the context of compressed sensing. Our results
extend previous results for structured deterministic frame expansions and
Gaussian compressed sensing measurements.
|
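The quantizer referenced in this row is the standard greedy first-order Sigma-Delta recursion. A minimal numpy sketch follows; note it reconstructs with a plain pseudo-inverse rather than the decoders that achieve the stated root-exponential decay, so it only illustrates the quantize-then-reconstruct pipeline:

```python
import numpy as np

def sigma_delta(y, delta=0.1):
    """Greedy first-order Sigma-Delta quantizer: q_i rounds (y_i + u_{i-1}) to
    the nearest multiple of delta, and the state update u_i = u_{i-1} + y_i - q_i
    keeps |u_i| <= delta/2 (the noise-shaping property)."""
    u = 0.0
    q = np.empty_like(y)
    for i, yi in enumerate(y):
        q[i] = delta * np.round((yi + u) / delta)
        u += yi - q[i]
    return q

rng = np.random.default_rng(0)
k, m = 4, 200                                  # oversampling rate m/k = 50
E = rng.standard_normal((m, k))                # Gaussian, hence sub-Gaussian, frame
x = rng.standard_normal(k)
x /= np.linalg.norm(x)

q = sigma_delta(E @ x)                         # quantized frame coefficients
x_hat = np.linalg.pinv(E) @ q                  # naive linear reconstruction
print("reconstruction error:", np.linalg.norm(x - x_hat))
```

Even this naive decoder benefits from the bounded quantizer state: each coefficient error is at most `delta`, so the reconstruction error shrinks as the frame grows.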
1306.4552 | A Novel Lowest Density MDS Array Code | cs.IT math.IT | In this paper we introduce a novel MDS array code with lowest density. In
contrast to existing codes, this one has no restrictions on the size or the
number of erasures it can correct. It is based on a simple matrix construction
involving totally nonsingular matrices. We also introduce a simple decoding
algorithm based on the structure of the code.
|
1306.4592 | Time Efficient Approach To Offline Hand Written Character Recognition
Using Associative Memory Net | cs.NE cs.CV | In this paper, an efficient offline handwritten character recognition
algorithm is proposed based on an Associative Memory Net (AMN). The AMN used in
this work is basically auto-associative. The implementation is carried out
completely in the 'C' language. To make the system perform at its best with
minimal computation time, a parallel algorithm is also developed using the
OpenMP API. The characters are mainly English alphabets (small (26), capital (26))
collected from the system (52) and from different persons (52). The characters
collected from the system are used to train the AMN, and the characters collected
from different persons are used to test the recognition ability of the net. A
detailed analysis showed that the network recognizes the handwritten characters
with an average recognition rate of 72.20%; in the best case, it recognizes them
with 88.5% accuracy. The developed network consumes 3.57 s on average in the
serial implementation and 1.16 s on average in the parallel implementation using
OpenMP.
|
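An auto-associative memory net of the kind this row mentions can be sketched with Hebbian outer-product storage and a thresholded recall step. The glyph data below is hypothetical (random bipolar vectors standing in for scanned characters), and this illustrates the general AMN idea only, not the paper's 'C'/OpenMP implementation:

```python
import numpy as np

def train_amn(patterns):
    """Auto-associative memory net: Hebbian outer-product storage of bipolar
    (+1/-1) patterns, with the self-connections (diagonal) zeroed."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe):
    """One synchronous recall step: threshold the weighted sums back to +/-1."""
    out = np.sign(W @ probe)
    out[out == 0] = 1.0
    return out

# Hypothetical training data: random bipolar vectors standing in for 5x5
# character glyphs flattened to 25 dimensions (the paper's scanned English
# alphabets are not reproduced here).
rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(2, 25))
W = train_amn(patterns)

noisy = patterns[0].copy()
noisy[:3] *= -1.0                     # corrupt three "pixels"
restored = recall(W, noisy)
print("clean pattern restored:", np.array_equal(restored, patterns[0]))
```

The net stores patterns in the weight matrix rather than as explicit templates, which is why a corrupted probe can still settle back onto the stored glyph.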
1306.4598 | Analysis of roles and groups in blogosphere | cs.SI physics.soc-ph | In this paper, different roles of users in social media are introduced,
taking into consideration their strength of influence and their degrees of
cooperativeness. The identified roles are used to analyze the characteristics
of groups of strongly connected entities. Different classes of groups, based on
the distribution of the roles of the users belonging to them, are presented and
discussed.
|
1306.4606 | Keyphrase Cloud Generation of Broadcast News | cs.IR | This paper describes an enhanced automatic keyphrase extraction method
applied to broadcast news. The keyphrase extraction process is used to create a
concept level for each news item, on top of the words resulting from a speech
recognition system output and news indexation, and it contributes to the
generation of a tag/keyphrase cloud of the top news included in a Multimedia
Monitoring Solution system for TV and radio news/programs, running daily and
monitoring 12 TV channels and 4 radio stations.
|
1306.4608 | Hourly Traffic Prediction of News Stories | cs.IR | The task of predicting news story popularity from several news sources
has become a challenge of great importance for both news producers and readers.
In this paper, we investigate methods for automatically predicting the number
of clicks on a news story during one hour. Our approach is a combination of
additive regression and bagging applied over a M5P regression tree using a
logarithmic scale (log10). The features included are social-based (social
network metadata from Facebook), content-based (automatically extracted
keyphrases, and stylometric statistics from news titles), and time-based. In
the 1st Sapo Data Challenge we obtained a mean relative error of 11.99%, which
placed us 4th out of 26 participants.
|
1306.4621 | English Character Recognition using Artificial Neural Network | cs.NE | This work focuses on the development of an offline handwritten English
character recognition algorithm based on an Artificial Neural Network (ANN).
The ANN implemented in this work has a single output neuron, which indicates
whether the tested character belongs to a particular cluster or not. The
implementation is carried out completely in the 'C' language. Ten sets of
English alphabets (small: 26, capital: 26) were used to train the ANN, and 5
sets of English alphabets were used to test the network. The characters were
collected from different persons over a duration of about 25 days. The
algorithm was tested with 5 capital-letter and 5 small-letter sets. The results
showed that the algorithm recognized English alphabet patterns with a maximum
accuracy of 92.59% and a False Rejection Rate (FRR) of 0%.
|
1306.4622 | Solution to Quadratic Equation Using Genetic Algorithm | cs.NE | Solving the quadratic equation is of intrinsic interest, as it is the
simplest nonlinear equation. A novel approach for solving the quadratic
equation based on Genetic Algorithms (GAs) is presented. GAs are a technique
for solving problems that require optimization. Trial solutions are generated
by this method. Many examples have been worked out, and in most cases we find
the exact solution. We discuss the effect of different parameters on the
performance of the developed algorithm. The results are drawn after rigorous
testing on different equations.
|
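A minimal sketch of the approach in this row, under assumed GA operators that the abstract does not specify (tournament selection, arithmetic crossover, Gaussian mutation, one elite kept per generation):

```python
import random

def ga_quadratic(a, b, c, pop_size=60, generations=200, seed=0):
    """Toy GA for a*x^2 + b*x + c = 0: real-valued chromosomes, fitness is the
    residual |a*x^2 + b*x + c| (lower is better)."""
    rng = random.Random(seed)

    def fitness(x):
        return abs(a * x * x + b * x + c)

    def pick(pop):
        # tournament of size 2
        i, j = rng.randrange(pop_size), rng.randrange(pop_size)
        return pop[i] if fitness(pop[i]) < fitness(pop[j]) else pop[j]

    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = [min(pop, key=fitness)]                     # elitism
        while len(nxt) < pop_size:
            w = rng.random()
            child = w * pick(pop) + (1 - w) * pick(pop)   # arithmetic crossover
            if rng.random() < 0.2:
                child += rng.gauss(0.0, 0.5)              # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

root = ga_quadratic(1, -5, 6)    # x^2 - 5x + 6 = 0 has exact roots 2 and 3
print(round(root, 4))
```

The GA converges to whichever root the selection pressure happens to favor; running with different seeds recovers both roots.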
1306.4623 | The Academic Social Network | cs.SI cs.DL physics.soc-ph | Through their academic publications, authors form a social network. Instead
of sharing casual thoughts and photos (as in Facebook),
authors pick co-authors and reference papers written by other authors. Thanks
to various efforts (such as Microsoft Libra and DBLP), the data necessary for
analyzing the academic social network is becoming more available on the
Internet. What type of information and queries would be useful for users to
find out, beyond the search queries already available from services such as
Google Scholar? In this paper, we explore this question by defining a variety
of ranking metrics on different entities: authors, publication venues and
institutions. We go beyond traditional metrics such as paper counts, citations
and h-index. Specifically, we define metrics such as influence, connections and
exposure for authors. An author gains influence by receiving more citations,
but also citations from influential authors. An author increases his/her
connections by co-authoring with other authors, especially with other authors
who themselves have high connections. An author receives exposure by publishing in
selective venues where publications received high citations in the past, and
the selectivity of these venues also depends on the influence of the authors
who publish there. We discuss the computation aspects of these metrics, and
similarity between different metrics. With additional information of
author-institution relationships, we are able to study institution rankings
based on the corresponding authors' rankings for each type of metric as well as
different domains. We are prepared to demonstrate these ideas with a web site
(http://pubstat.org) built from millions of publications and authors.
|
1306.4626 | Activity clocks: spreading dynamics on temporal networks of human
contact | physics.soc-ph cs.SI nlin.AO | Dynamical processes on time-varying complex networks are key to understanding
and modeling a broad variety of processes in socio-technical systems. Here we
focus on empirical temporal networks of human proximity and we aim at
understanding the factors that, in simulation, shape the arrival time
distribution of simple spreading processes. Abandoning the notion of wall-clock
time in favour of node-specific clocks based on activity exposes robust
statistical patterns in the arrival times across different social contexts.
Using randomization strategies and generative models constrained by data, we
show that these patterns can be understood in terms of heterogeneous
inter-event time distributions coupled with heterogeneous numbers of events per
edge. We also show, both empirically and by using a synthetic dataset, that
significant deviations from the above behavior can be caused by the presence of
edge classes with strong activity correlations.
|
1306.4629 | Non-Correlated Character Recognition using Artificial Neural Network | cs.NE cs.CV | This paper investigates a method for handwritten English character
recognition using an Artificial Neural Network (ANN). This work has been done
in an offline environment for non-correlated characters, which do not possess
any linear relationships among them. We test whether a particular character
belongs to a cluster or not. The implementation is carried out in the Matlab
environment and successfully tested. Fifty-two sets of English alphabets are
used to train the ANN and test the network. The algorithms are tested with 26
capital letters and 26 small letters. The testing results showed that the
proposed ANN-based algorithm achieves a maximum recognition rate of 85%.
|
1306.4631 | Table of Content detection using Machine Learning | cs.LG cs.DL cs.IR | Table of content (TOC) detection has drawn attention nowadays because it
plays an important role in the digitization of multipage documents. A book is
generally a multipage document, so it becomes necessary to detect the
table-of-content page for easy navigation of the document and to make retrieval
of the desired data faster. Table of content pages follow different layouts and
different ways of presenting the contents of the document, such as chapter,
section, and subsection. This paper introduces a new method to detect the table
of content using a machine learning technique with different features. The main
aim of detecting table-of-content pages is to structure the document according
to its contents.
|
1306.4633 | A Fuzzy Based Approach to Text Mining and Document Clustering | cs.LG cs.IR | Fuzzy logic deals with degrees of truth. In this paper, we have shown how to
apply fuzzy logic in text mining in order to perform document clustering. We
took an example of document clustering where the documents had to be clustered
into two categories. The method involved cleaning up the text and stemming of
words. Then, we chose m features which differ significantly in their
word frequencies (WF), normalized by document length, between documents
belonging to these two clusters. The documents to be clustered were represented
as a collection of m normalized WF values. Fuzzy c-means (FCM) algorithm was
used to cluster these documents into two clusters. After the FCM execution
finished, the documents in the two clusters were analysed for the values of
their respective m features. It was known that documents belonging to a
document type, say X, tend to have higher WF values for some particular
features. If the documents belonging to a cluster had higher WF values for
those same features, then that cluster was said to represent X. By fuzzy logic,
we not only get the cluster name, but also the degree to which a document
belongs to a cluster.
|
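The pipeline in this row centers on the standard fuzzy c-means updates. Below is a compact numpy sketch with a hypothetical two-feature mini-corpus (the rows stand in for documents represented by normalized word frequencies of chosen discriminative words; the paper's real documents and features are not reproduced here):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: alternate centroid updates (weighted by U**m) with
    membership updates; each row of U holds a document's degrees of membership
    in the c clusters and sums to 1. Here m is the fuzzifier, not the feature
    count from the abstract."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

# Hypothetical mini-corpus: each row is a document described by two normalized
# word-frequency features; the first three rows lean one way, the last three
# the other.
docs = np.array([[0.90, 0.10], [0.80, 0.20], [0.85, 0.10],
                 [0.10, 0.90], [0.20, 0.80], [0.15, 0.90]])
U = fuzzy_c_means(docs)
labels = U.argmax(axis=1)
print("memberships:\n", np.round(U, 2))
```

As the abstract emphasizes, the payoff over hard clustering is the membership matrix `U`: each document gets a degree of membership in every cluster, not just a label.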
1306.4635 | Towards Multistage Design of Modular Systems | cs.AI cs.SY | The paper describes multistage design of composite (modular) systems (i.e.,
design of a system trajectory). This design process consists of the following:
(i) definition of a set of time/logical points; (ii) modular design of the
system for each time/logical point (e.g., on the basis of combinatorial
synthesis as hierarchical morphological design or multiple choice problem) to
obtain several system solutions; (iii) selection of the system solution for
each time/logical point while taking into account their quality and the quality
of compatibility between neighboring selected system solutions (here,
combinatorial synthesis is used as well). Mainly, the examined time/logical
points are based on a time chain. In addition, two complicated cases are
considered: (a) the examined logical points are based on a tree-like structure,
(b) the examined logical points are based on a digraph. Numerical examples
illustrate the approach.
|
1306.4650 | Stochastic Majorization-Minimization Algorithms for Large-Scale
Optimization | stat.ML cs.LG math.OC | Majorization-minimization algorithms consist of iteratively minimizing a
majorizing surrogate of an objective function. Because of its simplicity and
its wide applicability, this principle has been very popular in statistics and
in signal processing. In this paper, we intend to make this principle scalable.
We introduce a stochastic majorization-minimization scheme which is able to
deal with large-scale or possibly infinite data sets. When applied to convex
optimization problems under suitable assumptions, we show that it achieves an
expected convergence rate of $O(1/\sqrt{n})$ after $n$ iterations, and of
$O(1/n)$ for strongly convex functions. Equally important, our scheme almost
surely converges to stationary points for a large class of non-convex problems.
We develop several efficient algorithms based on our framework. First, we
propose a new stochastic proximal gradient method, which experimentally matches
state-of-the-art solvers for large-scale $\ell_1$-logistic regression. Second,
we develop an online DC programming algorithm for non-convex sparse estimation.
Finally, we demonstrate the effectiveness of our approach for solving
large-scale structured matrix factorization problems.
|
1306.4653 | Multiarmed Bandits With Limited Expert Advice | cs.LG | We solve the COLT 2013 open problem of \citet{SCB} on minimizing regret in
the setting of advice-efficient multiarmed bandits with expert advice. We give
an algorithm for the setting of K arms and N experts out of which we are
allowed to query and use only M experts' advices in each round, which has a
regret bound of $\tilde{O}\left(\sqrt{\frac{\min\{K, M\} N}{M} T}\right)$ after $T$
rounds. We also prove that any algorithm for this problem must have expected
regret at least $\tilde{\Omega}\left(\sqrt{\frac{\min\{K, M\} N}{M} T}\right)$, thus
showing that our upper bound is nearly tight.
|
1306.4672 | A Novel Approach for Intelligent Robot Path Planning | cs.RO | Robot path planning is one of the challenging fields in the area of
robotics research. In this paper, we propose a novel algorithm to find a path
between starting and ending positions for an intelligent system. An intelligent
system is considered to be a device/robot having an antenna connected with
sensor-detector system. The proposed algorithm is based on Neural Network
training concept. The considered neural network is adaptive to the knowledge
bases. However, implementation of this algorithm is somewhat expensive due to
the hardware it requires. Detailed analysis shows that the resulting path
produced by this algorithm is efficient.
|
1306.4714 | Penetration Testing == POMDP Solving? | cs.AI cs.CR | Penetration Testing is a methodology for assessing network security, by
generating and executing possible attacks. Doing so automatically allows for
regular and systematic testing without a prohibitive amount of human labor. A
key question then is how to generate the attacks. This is naturally formulated
as a planning problem. Previous work (Lucangeli et al. 2010) used classical
planning and hence ignores all the incomplete knowledge that characterizes
hacking. More recent work (Sarraute et al. 2011) makes strong independence
assumptions for the sake of scaling, and lacks a clear formal concept of what
the attack planning problem actually is. Herein, we model that problem in terms
of partially observable Markov decision processes (POMDP). This grounds
penetration testing in a well-researched formalism, highlighting important
aspects of this problem's nature. POMDPs make it possible to model information gathering
as an integral part of the problem, thus providing for the first time a means
to intelligently mix scanning actions with actual exploits.
|
1306.4721 | On Localization of A Non-Cooperative Target with Non-Coherent Binary
Detectors | cs.IT math.IT | Localization of a non-cooperative target with binary detectors is considered.
A general expression for the Fisher information for estimation of target
location and power is developed. This general expression is then used to derive
closed-form approximations for the Cramer-Rao bound for the case of
non-coherent detectors. Simulations show that the approximations are quite
consistent with the exact bounds.
|
1306.4724 | Computer simulation based parameter selection for resistance exercise | cs.CV cs.HC | In contrast to most scientific disciplines, sports science research has been
characterized by comparatively little effort investment in the development of
relevant phenomenological models. Scarcer yet is the application of said models
in practice. We present a framework which allows resistance training
practitioners to employ a recently proposed neuromuscular model in actual
training program design. The first novelty concerns the monitoring aspect of
coaching. A method for extracting training performance characteristics from
loosely constrained video sequences, effortlessly and with minimal human input,
using computer vision is described. The extracted data is subsequently used to
fit the underlying neuromuscular model. This is achieved by solving an inverse
dynamics problem corresponding to a particular exercise. Lastly, a computer
simulation of hypothetical training bouts, using athlete-specific capability
parameters, is used to predict the effected adaptation and changes in
performance. The software described here allows the practitioner to manipulate
hypothetical training parameters and immediately see their effect on predicted
adaptation for a specific athlete. Thus, this work presents a holistic view of
the monitoring-assessment-adjustment loop.
|
1306.4727 | On the second Hamming weight of some Reed-Muller type codes | cs.IT math.AC math.IT | We study affine cartesian codes, which are a Reed-Muller type of evaluation
codes, where polynomials are evaluated at the cartesian product of n subsets of
a finite field F_q. These codes appeared recently in a work by H. Lopez, C.
Renteria-Marquez and R. Villareal and, in a generalized form, in a work by O.
Geil and C. Thomsen. Using methods from Gr\"obner basis theory we determine the
second Hamming weight (also called next-to-minimal weight) for particular cases
of affine cartesian codes and also some higher Hamming weights of this type of
code.
|
1306.4746 | Felzenszwalb-Baum-Welch: Event Detection by Changing Appearance | cs.CV | We propose a method which can detect events in videos by modeling the change
in appearance of the event participants over time. This method makes it
possible to detect events which are characterized not by motion, but by the
changing state of the people or objects involved. This is accomplished by using
object detectors as output models for the states of a hidden Markov model
(HMM). The method allows an HMM to model the sequence of poses of the event
participants over time, and is effective for poses of humans and inanimate
objects. The ability to use existing object-detection methods as part of an
event model makes it possible to leverage ongoing work in the object-detection
community. A novel training method uses an EM loop to simultaneously learn the
temporal structure and object models automatically, without the need to specify
either the individual poses to be modeled or the frames in which they occur.
The E-step estimates the latent assignment of video frames to HMM states, while
the M-step estimates both the HMM transition probabilities and state output
models, including the object detectors, which are trained on the weighted
subset of frames assigned to their state. A new dataset was gathered because
little work has been done on events characterized by changing object pose, and
suitable datasets are not available. Our method produced results superior to
those of comparison systems on this dataset.
|
1306.4748 | New Analysis of Manifold Embeddings and Signal Recovery from Compressive
Measurements | cs.IT math.IT | Compressive Sensing (CS) exploits the surprising fact that the information
contained in a sparse signal can be preserved in a small number of compressive,
often random linear measurements of that signal. Strong theoretical guarantees
have been established concerning the embedding of a sparse signal family under
a random measurement operator and on the accuracy to which sparse signals can
be recovered from noisy compressive measurements. In this paper, we address
similar questions in the context of a different modeling framework. Instead of
sparse models, we focus on the broad class of manifold models, which can arise
in both parametric and non-parametric signal families. Using tools from the
theory of empirical processes, we improve upon previous results concerning the
embedding of low-dimensional manifolds under random measurement operators. We
also establish both deterministic and probabilistic instance-optimal bounds in
$\ell_2$ for manifold-based signal recovery and parameter estimation from noisy
compressive measurements. In line with analogous results for sparsity-based CS,
we conclude that much stronger bounds are possible in the probabilistic
setting. Our work supports the growing evidence that manifold-based models can
be used with high accuracy in compressive signal processing.
|
1306.4753 | Galerkin Methods for Complementarity Problems and Variational
Inequalities | cs.LG cs.AI math.OC | Complementarity problems and variational inequalities arise in a wide variety
of areas, including machine learning, planning, game theory, and physical
simulation. In all of these areas, to handle large-scale problem instances, we
need fast approximate solution methods. One promising idea is Galerkin
approximation, in which we search for the best answer within the span of a
given set of basis functions. Bertsekas proposed one possible Galerkin method
for variational inequalities. However, this method can exhibit two problems in
practice: its approximation error is worse than might be expected based on the
ability of the basis to represent the desired solution, and each iteration
requires a projection step that is not always easy to implement efficiently.
So, in this paper, we present a new Galerkin method with improved behavior: our
new error bounds depend directly on the distance from the true solution to the
subspace spanned by our basis, and the only projections we require are onto the
feasible region or onto the span of our basis.
|
1306.4754 | On Finite Block-Length Quantization Distortion | cs.IT math.IT | We investigate the upper and lower bounds on the quantization distortions for
independent and identically distributed sources in the finite block-length
regime. Based on the convex optimization framework of the rate-distortion
theory, we derive a lower bound on the quantization distortion under finite
block-length, which is shown to be greater than the asymptotic distortion given
by the rate-distortion theory. We also derive two upper bounds on the
quantization distortion based on random quantization codebooks, which can
achieve any distortion above the asymptotic one. Moreover, we apply the new
upper and lower bounds to two types of sources, the discrete binary symmetric
source and the continuous Gaussian source. For the binary symmetric source, we
obtain the closed-form expressions of the upper and lower bounds. For the
Gaussian source, we propose a computational tractable method to numerically
compute the upper and lower bounds, for both bounded and unbounded quantization
codebooks. Numerical results show that the gap between the upper and lower
bounds is small for reasonable block length and hence the bounds are tight.
|
1306.4755 | Hybrid Group Decoding for Scalable Video over MIMO-OFDM Downlink Systems | cs.IT math.IT | We propose a scalable video broadcasting scheme over MIMO-OFDM systems. The
scalable video source layers are channel encoded and modulated into independent
signal streams, which are then transmitted from the allocated antennas in
certain time-frequency blocks. Each receiver employs the successive group
decoder to decode the signal streams of interest by treating other signal
streams as interference. The transmitter performs adaptive coding and
modulation, and transmission antenna and subcarrier allocation, based on the
rate feedback from the receivers. We also propose a hybrid receiver that
switches between the successive group decoder and the MMSE decoder depending on
the rate. Extensive simulations are provided to demonstrate the performance
gain of the proposed group-decoding-based scalable video broadcasting scheme
over the one based on the conventional MMSE decoding.
|
1306.4758 | Analysing Word Importance for Image Annotation | cs.IR cs.CV | Image annotation automatically provides several keywords for a given image,
based on various tags that describe its contents, which is useful in image
retrieval. Various researchers are working on text-based and content-based
image annotation [7,9]. In traditional image annotation approaches, annotation
words are treated equally, without considering the real-world importance of
each word. In this context, this work annotates images with keywords based on
their frequency count and word correlation. Moreover, this work proposes an
approach to compute an importance score for candidate keywords that have the
same frequency count.
|
1306.4774 | Repair Locality with Multiple Erasure Tolerance | cs.IT math.IT | In distributed storage systems, erasure codes with locality $r$ are preferred
because a coordinate can be recovered by accessing at most $r$ other
coordinates, which in turn greatly reduces the disk I/O complexity for small
$r$. However, the local repair may be ineffective when some of the $r$
coordinates accessed for recovery are also erased.
To overcome this problem, we propose the $(r,\delta)_c$-locality providing
$\delta -1$ local repair options for a coordinate. Consequently, the repair
locality $r$ can tolerate $\delta-1$ erasures in total. We derive an upper
bound on the minimum distance $d$ for any linear $[n,k]$ code with information
$(r,\delta)_c$-locality. For general parameters, we prove existence of the
codes that attain this bound when $n\geq k(r(\delta-1)+1)$, implying tightness
of this bound. Although the locality $(r,\delta)$ defined by Prakash et al
provides the same level of locality and local repair tolerance as our
definition, codes with $(r,\delta)_c$-locality are proved to have more
advantage in the minimum distance. In particular, we construct a class of codes
with all symbol $(r,\delta)_c$-locality where the gain in minimum distance is
$\Omega(\sqrt{r})$ and the information rate is close to 1.
|
1306.4793 | Evolving Boolean Regulatory Networks with Epigenetic Control | cs.NE q-bio.MN | The significant role of epigenetic mechanisms within natural systems has
become increasingly clear. This paper uses a recently presented abstract,
tunable Boolean genetic regulatory network model to explore aspects of
epigenetics. It is shown how dynamically controlling transcription via a DNA
methylation-inspired mechanism can be selected for by simulated evolution under
various single and multiple cell scenarios. Further, it is shown that the
effects of such control can be inherited without detriment to fitness.
|
1306.4807 | Nonlinear continuous integral-derivative observer | cs.SY math.DS | In this paper, a high-order nonlinear continuous integral-derivative observer
is presented based on finite-time stability and singular perturbation
technique. The proposed integral-derivative observer can not only obtain the
multiple integrals of a signal, but can also estimate the derivatives.
Conditions are given ensuring finite-time stability for the presented
integral-derivative observer, and the stability and robustness in time domain
are analysed. The merits of the presented integral-derivative observer include
its synchronous estimation of integrals and derivatives, finite-time stability,
ease of parameter selection, sufficient rejection of stochastic noise, and
almost no drift phenomenon.
analysis and simulations.
|
1306.4849 | A generalization of bounds for cyclic codes, including the HT and BS
bounds | cs.IT math.CO math.IT | We use the algebraic structure of cyclic codes and some properties of the
discrete Fourier transform to give a reformulation of several classical bounds
for the distance of cyclic codes, by extending techniques of linear algebra. We
propose a bound, whose computational complexity is polynomial bounded, which is
a generalization of the Hartmann-Tzeng bound and the Betti-Sala bound. In the
majority of computed cases, our bound is the tightest among all known
polynomial-time bounds, including the Roos bound.
|
1306.4883 | Fault-Tolerant Control of a 2 DOF Helicopter (TRMS System) Based on
H_infinity | cs.SY | In this paper, fault-tolerant control of a 2 DOF helicopter (TRMS system)
based on H-infinity is presented. The introductory part of the paper presents
Fault-Tolerant Control (FTC), the first part presents a description of the
mathematical model of the TRMS, and in the last part a polytopic Unknown Input
Observer (UIO) is synthesized using equalities and LMIs. This UIO is used to
observe the faults and then compensate for them; it is also shown how to design
a fault-tolerant control strategy for this particular class of non-linear
systems.
|
1306.4886 | Supervised Topical Key Phrase Extraction of News Stories using
Crowdsourcing, Light Filtering and Co-reference Normalization | cs.CL cs.IR | Fast and effective automated indexing is critical for search and personalized
services. Key phrases that consist of one or more words and represent the main
concepts of the document are often used for the purpose of indexing. In this
paper, we investigate the use of additional semantic features and
pre-processing steps to improve automatic key phrase extraction. These features
include the use of signal words and freebase categories. Some of these features
lead to significant improvements in the accuracy of the results. We also
experimented with two forms of document pre-processing that we call light
filtering and co-reference normalization. Light filtering removes sentences
from the document that are judged peripheral to its main content.
Co-reference normalization unifies several written forms of the same named
entity into a unique form. We also needed a "Gold Standard" - a set of labeled
documents for training and evaluation. While the subjective nature of key
phrase selection precludes a true "Gold Standard", we used Amazon's Mechanical
Turk service to obtain a useful approximation. Our data indicates that the
biggest improvements in performance were due to shallow semantic features, news
categories, and rhetorical signals (nDCG 78.47% vs. 68.93%). The inclusion of
deeper semantic features such as Freebase sub-categories was not beneficial by
itself, but in combination with pre-processing, did cause slight improvements
in the nDCG scores.
|
1306.4890 | Key Phrase Extraction of Lightly Filtered Broadcast News | cs.CL cs.IR | This paper explores the impact of light filtering on automatic key phrase
extraction (AKE) applied to Broadcast News (BN). Key phrases are words and
expressions that best characterize the content of a document. Key phrases are
often used to index the document or as features in further processing. This
makes improvements in AKE accuracy particularly important. We hypothesized that
filtering out marginally relevant sentences from a document would improve AKE
accuracy. Our experiments confirmed this hypothesis. Elimination of as little
as 10% of the document sentences leads to a 2% improvement in AKE precision and
recall. Our AKE is built on the MAUI toolkit, which follows a supervised learning
approach. We trained and tested our AKE method on a gold standard made of 8 BN
programs containing 110 manually annotated news stories. The experiments were
conducted within a Multimedia Monitoring Solution (MMS) system for TV and radio
news/programs, running daily, and monitoring 12 TV and 4 radio channels.
|
1306.4895 | PMU-based Voltage Instability Detection through Linear Regression | cs.SY | Timely recognition of voltage instability is crucial to allow for effective
control and protection interventions. Phasor measurements units (PMUs) can be
utilized to provide high sampling rate time-synchronized voltage and current
phasors suitable for wide-area voltage instability detection. However, PMU data
contains unwanted measurement errors and noise, which may affect the results of
applications using these measurements for voltage instability detection. The
aim of this article is to revisit a sensitivities calculation to detect voltage
instability by applying a method utilizing linear regression for preprocessing
PMU data. The methodology is validated using both real-time
hardware-in-the-loop simulation and real PMU measurements from a Norwegian
network.
|
1306.4905 | From-Below Approximations in Boolean Matrix Factorization: Geometry and
New Algorithm | cs.NA cs.LG | We present new results on Boolean matrix factorization and a new algorithm
based on these results. The results emphasize the significance of
factorizations that provide from-below approximations of the input matrix.
While the previously proposed algorithms do not consider the possibly different
significance of different matrix entries, our results help measure such
significance and suggest where to focus when computing factors. An experimental
evaluation of the new algorithm on both synthetic and real data demonstrates
its good performance in terms of good coverage by the first k factors as well
as a small number of factors needed for exact decomposition and indicates that
the algorithm outperforms the available ones in these terms. We also propose
future research topics.
|
1306.4908 | Recognition of Named-Event Passages in News Articles | cs.CL cs.IR | We extend the concept of Named Entities to Named Events - commonly occurring
events such as battles and earthquakes. We propose a method for finding
specific passages in news articles that contain information about such events
and report our preliminary evaluation results. Collecting "Gold Standard" data
presents many problems, both practical and conceptual. We present a method for
obtaining such data using the Amazon Mechanical Turk service.
|
1306.4925 | A Multi-Engine Approach to Answer Set Programming | cs.AI cs.LO | Answer Set Programming (ASP) is a truly-declarative programming paradigm
proposed in the area of non-monotonic reasoning and logic programming, that has
been recently employed in many applications. The development of efficient ASP
systems is, thus, crucial. Having in mind the task of improving the solving
methods for ASP, there are two usual ways to reach this goal: $(i)$ extending
state-of-the-art techniques and ASP solvers, or $(ii)$ designing a new ASP
solver from scratch. An alternative to these trends is to build on top of
state-of-the-art solvers, and to apply machine learning techniques for choosing
automatically the "best" available solver on a per-instance basis.
In this paper we pursue this latter direction. We first define a set of
cheap-to-compute syntactic features that characterize several aspects of ASP
programs. Then, we apply classification methods that, given the features of the
instances in a {\sl training} set and the solvers' performance on these
instances, inductively learn algorithm selection strategies to be applied to a
{\sl test} set. We report the results of a number of experiments considering
solvers and different training and test sets of instances taken from the ones
submitted to the "System Track" of the 3rd ASP Competition. Our analysis shows
that, by applying machine learning techniques to ASP solving, it is possible to
obtain very robust performance: our approach can solve more instances compared
with any solver that entered the 3rd ASP Competition. (To appear in Theory and
Practice of Logic Programming (TPLP).)
|
1306.4934 | On the Corner Points of the Capacity Region of a Two-User Gaussian
Interference Channel | cs.IT math.IT | This work considers the corner points of the capacity region of a two-user
Gaussian interference channel (GIC). In a two-user GIC, the rate pairs where
one user transmits its data at the single-user capacity (without interference),
and the other at the largest rate for which reliable communication is still
possible are called corner points. This paper relies on existing outer bounds
on the capacity region of a two-user GIC that are used to derive informative
bounds on the corner points of the capacity region. The new bounds refer to a
weak two-user GIC (i.e., when both cross-link gains in standard form are
positive and below 1), and a refinement of these bounds is obtained for the
case where the transmission rate of one user is within $\varepsilon > 0$ of the
single-user capacity. The bounds on the corner points are asymptotically tight
as the transmitted powers tend to infinity, and they are also useful for the
case of moderate SNR and INR. Upper and lower bounds on the gap (denoted by
$\Delta$) between the sum-rate and the maximal achievable total rate at the two
corner points are derived. This is followed by an asymptotic analysis analogous
to the study of the generalized degrees of freedom (where the SNR and INR
scalings are coupled such that $\frac{\log(\text{INR})}{\log(\text{SNR})} =
\alpha \geq 0$), leading to an asymptotic characterization of this gap which is
exact for the whole range of $\alpha$. The upper and lower bounds on $\Delta$
are asymptotically tight in the sense that they achieve the exact asymptotic
characterization. Improved bounds on $\Delta$ are derived for finite SNR and
INR, and their improved tightness is exemplified numerically.
|
1306.4947 | Machine Teaching for Bayesian Learners in the Exponential Family | cs.LG | What if there is a teacher who knows the learning goal and wants to design
good training data for a machine learner? We propose an optimal teaching
framework aimed at learners who employ Bayesian models. Our framework is
expressed as an optimization problem over teaching examples that balance the
future loss of the learner and the effort of the teacher. This optimization
problem is in general hard. In the case where the learner employs conjugate
exponential family models, we present an approximate algorithm for finding the
optimal teaching set. Our algorithm optimizes the aggregate sufficient
statistics, then unpacks them into actual teaching examples. We give several
examples to illustrate our framework.
|
1306.4949 | Minimizing Convergence Error in Multi-Agent Systems via Leader
Selection: A Supermodular Optimization Approach | cs.SY | In a leader-follower multi-agent system (MAS), the leader agents act as
control inputs and influence the states of the remaining follower agents. The
rate at which the follower agents converge to their desired states, as well as
the errors in the follower agent states prior to convergence, are determined by
the choice of leader agents. In this paper, we study leader selection in order
to minimize convergence errors experienced by the follower agents, which we
define as a norm of the distance between the follower agents' intermediate
states and the convex hull of the leader agent states. By introducing a novel
connection to random walks on the network graph, we show that the convergence
error has an inherent supermodular structure as a function of the leader set.
Supermodularity enables development of efficient discrete optimization
algorithms that directly approximate the optimal leader set, provide provable
performance guarantees, and do not rely on continuous relaxations. We formulate
two leader selection problems within the supermodular optimization framework,
namely, the problem of selecting a fixed number of leader agents in order to
minimize the convergence error, as well as the problem of selecting the
minimum-size set of leader agents to achieve a given bound on the convergence
error. We introduce algorithms for approximating the optimal solution to both
problems in static networks, dynamic networks with known topology
distributions, and dynamic networks with unknown and unpredictable topology
distributions. Our approach is shown to provide significantly lower convergence
errors than existing random and degree-based leader selection methods in a
numerical study.
|
1306.4966 | Determining Points on Handwritten Mathematical Symbols | cs.CV cs.CY | In a variety of applications, such as handwritten mathematics and diagram
labelling, it is common to have symbols of many different sizes in use and for
the writing not to follow simple baselines. In order to understand the scale
and relative positioning of individual characters, it is necessary to identify
the location of certain expected features. These are typically identified by
particular points in the symbols, for example, the baseline of a lower case "p"
would be identified by the lowest part of the bowl, ignoring the descender. We
investigate how to find these special points automatically so they may be used
in a number of problems, such as improving two-dimensional mathematical
recognition and in handwriting neatening, while preserving the original style.
|
1306.4999 | Safeguarding E-Commerce against Advisor Cheating Behaviors: Towards More
Robust Trust Models for Handling Unfair Ratings | cs.SI cs.AI | In electronic marketplaces, after each transaction buyers will rate the
products provided by the sellers. To decide the most trustworthy sellers to
transact with, buyers rely on trust models to leverage these ratings to
evaluate the reputation of sellers. Although the high effectiveness of
different trust models for handling unfair ratings has been claimed by their
designers, it has recently been argued that these models are vulnerable to more
intelligent attacks, and there is an urgent demand that the robustness of the
existing trust models has to be evaluated in a more comprehensive way. In this
work, we classify the existing trust models into two broad categories and
propose an extendable e-marketplace testbed to evaluate their robustness
against different unfair rating attacks comprehensively. Besides highlighting
that the robustness of the existing trust models for handling unfair ratings is
far from what was claimed, we further propose and validate a novel
combination mechanism for the existing trust models, Discount-then-Filter, to
notably enhance their robustness against the investigated attacks.
|
1306.5018 | Information embedding and the triple role of control | cs.IT math.IT | We consider the problem of information embedding where the encoder modifies a
white Gaussian host signal in a power-constrained manner to encode a message,
and the decoder recovers both the embedded message and the modified host
signal. This partially extends the recent work of Sumszyk and Steinberg to the
continuous-alphabet Gaussian setting. Through a control-theoretic lens, we
observe that the problem is a minimalist example of what is called the "triple
role" of control actions. We show that a dirty-paper-coding strategy achieves
the optimal rate for perfect recovery of the modified host and the message for
any message rate. For imperfect recovery of the modified host, by deriving
bounds on the minimum mean-square error (MMSE) in recovering the modified host
signal, we show that DPC-based strategies are guaranteed to attain, within a
uniform constant factor of 16, the optimal weighted sum of the power required
for host signal modification and the MMSE in the modified host signal
reconstruction, for all weights and all message rates. When specialized to the
zero-rate case, our results provide the tightest known lower bounds on the
asymptotic costs for the vector version of a famous open problem in
decentralized control: the Witsenhausen counterexample. Numerically, this
tighter bound helps us characterize the asymptotically optimal costs for the
vector Witsenhausen problem to within a factor of 1.3 for all problem
parameters, improving on the earlier best known bound of 2.
|
1306.5039 | On Quantum Algorithm for Binary Search and Its Computational Complexity | quant-ph cs.IT math.IT | A new quantum algorithm for a search problem and its computational complexity
are discussed. For a search problem over 2^n objects, we show that our
algorithm runs in polynomial time.
|
1306.5042 | Identifying Influential Spreaders by Weighted LeaderRank | physics.soc-ph cs.SI physics.data-an | Identifying influential spreaders is crucial for understanding and
controlling spreading processes on social networks. By assigning
degree-dependent weights to the links attached to the ground node, we
propose a variant of a recent ranking algorithm named LeaderRank [L. Lv et
al., PLoS ONE 6 (2011) e21202]. According to simulations on the standard
SIR model, the weighted LeaderRank performs better than LeaderRank in three
aspects: (i) it identifies more influential spreaders, (ii) it is more
tolerant of noisy data, and (iii) it is more robust to intentional
attacks.
|
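The ground-node construction behind this abstract can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: the exponent `alpha`, the exact weighting rule for the ground-node links, and the uniform spreading over ordinary links are all assumptions.

```python
# Sketch of a weighted-LeaderRank-style ranking: a ground node is linked to
# every node, its links carry degree-dependent weights deg(v)**alpha, and
# scores flow along links until convergence. Details are assumed, not taken
# from the paper.

def weighted_leaderrank(adj, alpha=1.0, iters=200):
    """adj: dict mapping node -> set of neighbours (undirected network)."""
    nodes = list(adj)
    deg = {v: len(adj[v]) for v in nodes}
    g = "_ground"                                 # ground node linked to every node
    w = {v: deg[v] ** alpha for v in nodes}       # degree-dependent ground-link weights
    w_total = sum(w.values())
    scores = {v: 1.0 for v in nodes}
    scores[g] = 0.0
    for _ in range(iters):
        new = {}
        for v in nodes:
            # the ground node spreads its score proportionally to link weights;
            # ordinary neighbours spread theirs uniformly over their deg+1 links
            s = scores[g] * w[v] / w_total
            s += sum(scores[u] / (deg[u] + 1) for u in adj[v])
            new[v] = s
        new[g] = sum(scores[v] / (deg[v] + 1) for v in nodes)
        scores = new
    share = scores[g] / len(nodes)                # redistribute the ground node's score
    return {v: scores[v] + share for v in nodes}
```

On a star network, the hub ends up with the highest score, and the total score is conserved across iterations.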
1306.5044 | Multi-Agent Consensus With Relative-State-Dependent Measurement Noises | cs.SY | In this note, the distributed consensus corrupted by relative-state-dependent
measurement noises is considered. Each agent can measure or receive its
neighbors' state information with random noises, whose intensity is a vector
function of agents' relative states. Using the structure of this
interaction and tools from stochastic differential equations, we develop
several small-consensus-gain theorems giving sufficient conditions, in terms of
the control gain, the number of agents, and the noise intensity function, to
ensure mean square (m.s.) and almost sure (a.s.) consensus, and we quantify the
convergence rate and the steady-state error. In particular, for the case with
homogeneous communication and control channels, a necessary and sufficient
condition on the control gain to ensure m.s. consensus is given, and it is
shown that the control gain is independent of the specific network topology,
depending only on the number of nodes and the noise coefficient constant. For
symmetric measurement models, the almost sure convergence rate is estimated via
the Law of the Iterated Logarithm for Brownian motions.
|
1306.5053 | Breaking Symmetry with Different Orderings | cs.AI cs.CC | We can break symmetry by eliminating solutions within each symmetry class.
For instance, the Lex-Leader method eliminates all but the smallest solution in
the lexicographical ordering. Unfortunately, the Lex-Leader method is
intractable in general. We prove that, under modest assumptions, we cannot
reduce the worst case complexity of breaking symmetry by using other orderings
on solutions. We also prove that a common type of symmetry, where rows and
columns in a matrix of decision variables are interchangeable, is intractable
to break when we use two promising alternatives to the lexicographical
ordering: the Gray code ordering (which uses a different ordering on
solutions), and the Snake-Lex ordering (which is a variant of the
lexicographical ordering that re-orders the variables). Nevertheless, we show
experimentally that using other orderings like the Gray code to break symmetry
can be beneficial in practice as they may better align with the objective
function and branching heuristic.
|
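The Gray code ordering on solutions discussed in the abstract above can be made concrete with a short sketch. The helper names are hypothetical, and symmetry breaking itself is not implemented; the sketch only shows how the two orderings rank bit-vector solutions differently, so minimising over a symmetry class can pick different representatives.

```python
# Two orderings on bit-vector solutions: ordinary lexicographic rank versus
# rank in the reflected binary Gray code sequence (where consecutive
# solutions differ in exactly one bit).

def lex_rank(bits):
    """Rank of a bit-vector in the ordinary lexicographic ordering."""
    return int("".join(map(str, bits)), 2)

def gray_rank(bits):
    """Rank of a bit-vector in the reflected binary Gray code ordering."""
    n = int("".join(map(str, bits)), 2)
    rank = 0
    while n:                  # invert the Gray encoding g = i ^ (i >> 1)
        rank ^= n
        n >>= 1
    return rank
```

For two bits, the Gray code ordering visits 00, 01, 11, 10, so the vector [1, 0] is last under `gray_rank` but second-to-last under `lex_rank`.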
1306.5056 | Class Proportion Estimation with Application to Multiclass Anomaly
Rejection | stat.ML cs.LG | This work addresses two classification problems that fall under the heading
of domain adaptation, wherein the distributions of training and testing
examples differ. The first problem studied is that of class proportion
estimation, which is the problem of estimating the class proportions in an
unlabeled testing data set given labeled examples of each class. Compared to
previous work on this problem, our approach has the novel feature that it does
not require labeled training data from one of the classes. This property allows
us to address the second domain adaptation problem, namely, multiclass anomaly
rejection. Here, the goal is to design a classifier that has the option of
assigning a "reject" label, indicating that the instance did not arise from a
class present in the training data. We establish consistent learning strategies
for both of these domain adaptation problems, which to our knowledge are the
first of their kind. We also implement the class proportion estimation
technique and demonstrate its performance on several benchmark data sets.
|
1306.5070 | 3-SAT Problem: A New Memetic-PSO Algorithm | cs.AI cs.NE | The 3-SAT problem is of great importance to many technical and scientific
applications. This paper presents a new hybrid evolutionary algorithm for
solving this satisfiability problem. Because 3-SAT has a huge search space and
is NP-hard, deterministic approaches are not applicable in this context, which
makes evolutionary approaches, and particle swarm optimization (PSO) in
particular, effective for this kind of problem. We introduce a new evolutionary
optimization technique that combines PSO, a memetic algorithm, and local
search; mixing these heuristics lets their advantages compound and yields
better outcomes. Finally, we test the proposed algorithm on benchmarks used by
other available algorithms. The results show that our method reaches suitable
solutions in reasonable time and outperforms existing approaches such as the
pure genetic algorithm and some of its verified variants.
|
1306.5093 | Performance Analysis and Design of Maximum Ratio Combining in
Channel-Aware MIMO Decision Fusion | cs.IT math.IT | In this paper we present a theoretical performance analysis of the maximum
ratio combining (MRC) rule for channel-aware decision fusion over
multiple-input multiple-output (MIMO) channels for (conditionally) dependent
and independent local decisions. The system probabilities of false alarm and
detection conditioned on the channel realization are derived in closed form and
an approximated threshold choice is given. Furthermore, the channel-averaged
(CA) performances are evaluated in terms of the CA system probabilities of
false alarm and detection and the area under the receiver operating
characteristic (ROC) through the closed form of the conditional moment
generating function (MGF) of the MRC statistic, along with Gauss-Chebyshev (GC)
quadrature rules. Furthermore, we derive the deflection coefficients in closed
form, which are used for sensor threshold design. Finally, all the results are
confirmed through Monte Carlo simulations.
|
1306.5096 | Computer Aided ECG Analysis - State of the Art and Upcoming Challenges | cs.CV | In this paper we present current achievements in computer aided ECG analysis
and their applicability in the real-world medical diagnosis process. Most of
the current work covers the problems of removing noise, detecting heartbeats
and rhythm-based analysis. There have been some advances in detecting
particular ECG segments and classifying beats, but with limited evaluation and
without clinical approval. This paper surveys the state-of-the-art advances in
these areas to date. Beyond this short computer science and signal processing
literature review, the paper covers future challenges in ECG signal morphology
analysis derived from a review of the medical literature. The paper concludes
with the gaps identified in current advances and testing, upcoming challenges
for future research, and a suggested bullseye test for evaluating morphology
analysis.
|
1306.5098 | Wisdom of Crowds Algorithm for Stock Market Predictions | cs.SI physics.soc-ph | In this paper we present a mathematical model for collaborative filtering
implementation in stock market predictions. In the popular literature,
collaborative filtering, also known as the Wisdom of Crowds, assumes that a
group has greater knowledge than any individual, while each individual can
improve the group's performance through a specific information input. There are
commercially available tools for collaborative stock market predictions and
patent-protected web-based software solutions, but the mathematics behind those
algorithms is not disclosed in the literature, so the presented model and its
algorithmic implementation are the main contributions of this work.
|
1306.5099 | SVM-based personal identification system using Electrocardiograms | cs.SY | This paper presents a new algorithm for personal identification from
Electrocardiograms (ECG), based on morphological descriptors and Hermite
Polynomial Expansion coefficients (HPEc). After preprocessing, we extracted ten
morphological descriptors, divided into homogeneous groups (amplitude, surface
interval and slope), and sixty Hermite Polynomial Expansion coefficients (HPEc)
from each heartbeat. For classification, we employed binary Support Vector
Machines with a Gaussian kernel and adopted a particular strategy: we first
classified the groups of morphological descriptors separately, then combined
them in one system. We also classified the Hermite Polynomial Expansion
coefficients on their own and then associated them with all groups of
morphological descriptors in a single system in order to improve overall
performance. We tested our algorithm on 18 different healthy signals of the
MIT-BIH database. Analyzing the groups separately showed that the best
recognition performance is 96.45% for all morphological descriptors, and our
experiments showed that the proposed hybrid approach reaches an overall maximum
of 98.97%.
|
1306.5109 | Complex Morlet Wavelet Analysis of the DNA Frequency Chaos Game Signal
and Revealing Specific Motifs of Introns in C.elegans | cs.SY q-bio.GN | Nowadays, the study of introns is becoming a very promising field in
genomics. Even though they play a role in the dynamic regulation of genes and
in an organism's evolution, introns have not attracted as much attention as
exons, especially from digital signal processing researchers. We therefore
focus on the analysis of C.elegans introns. In this paper, we propose complex
Morlet wavelet analysis to investigate the characterization of introns in
C.elegans genes. However, tracking the change in the frequency response of
gene sequences over time is hindered by their representation as strings of
characters, which can only be overcome by assigning numerical values to each
of the DNA characters. This operation defines the so-called "DNA coding
approach". In this context, we propose a new coding technique based on the
Frequency Chaos Game Representation (FCGR) that we name the "Frequency Chaos
Game Signal" (FCGS). The complex Morlet wavelet analysis applied to the
C.elegans FCGS reveals a very distinctive texture, and visual interpretation
of the colour scalograms proves to be an efficient tool for revealing
significant information about intronic sequences.
|
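The FCGR coding that this abstract builds on starts from the classical Chaos Game Representation, which can be sketched as follows. The corner assignment below is one common convention, and the paper's FCGR/FCGS construction adds k-mer frequency information on top of this and may differ in detail.

```python
# Classical Chaos Game Representation (CGR) of a DNA string: each
# nucleotide pulls the current point halfway toward its assigned corner
# of the unit square, so the visited points encode the sequence.

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_trajectory(seq, start=(0.5, 0.5)):
    """Return the list of CGR points visited while reading seq."""
    x, y = start
    points = []
    for base in seq.upper():
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0   # midpoint toward the corner
        points.append((x, y))
    return points
```

A grid histogram of the resulting point cloud gives the FCGR, which can then be read out as a one-dimensional signal for wavelet analysis.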
1306.5111 | Low-Density Parity-Check Codes From Transversal Designs With Improved
Stopping Set Distributions | cs.IT cs.DM math.CO math.IT | This paper examines the construction of low-density parity-check (LDPC) codes
from transversal designs based on sets of mutually orthogonal Latin squares
(MOLS). By transferring the concept of configurations in combinatorial designs
to the level of Latin squares, we thoroughly investigate the occurrence and
avoidance of stopping sets for the arising codes. Stopping sets are known to
determine the decoding performance over the binary erasure channel and should
be avoided for small sizes. Based on large sets of simple-structured MOLS, we
derive powerful constraints for the choice of suitable subsets, leading to
improved stopping set distributions for the corresponding codes. We focus on
LDPC codes with column weight 4, but the results are also applicable for the
construction of codes with higher column weights. Finally, we show that a
subclass of the presented codes has quasi-cyclic structure which allows
low-complexity encoding.
|
1306.5151 | Fine-Grained Visual Classification of Aircraft | cs.CV | This paper introduces FGVC-Aircraft, a new dataset containing 10,000 images
of aircraft spanning 100 aircraft models, organised in a three-level hierarchy.
At the finer level, differences between models are often subtle but always
visually measurable, making visual recognition challenging but possible. A
benchmark is obtained by defining corresponding classification tasks and
evaluation protocols, and baseline results are presented. The construction of
this dataset was made possible by the work of aircraft enthusiasts, a strategy
that can extend to the study of a number of other object classes. Compared to the
domains usually considered in fine-grained visual classification (FGVC), for
example animals, aircraft are rigid and hence less deformable. They, however,
present other interesting modes of variation, including purpose, size,
designation, structure, historical style, and branding.
|
1306.5158 | Scenario Analysis, Decision Trees and Simulation for Cost Benefit
Analysis of the Cargo Screening Process | cs.CE stat.AP | In this paper we present our ideas for conducting a cost benefit analysis by
using three different methods: scenario analysis, decision trees and
simulation. Then we introduce our case study and examine these methods in a
real world situation. We show how these tools can be used and what the results
are for each of them. Our aim is to conduct a comparison of these different
probabilistic methods of estimating costs for port security risk assessment
studies. Methodologically, we are trying to understand the limits of all the
tools mentioned above by focusing on rare events.
|
1306.5160 | Towards modelling cost and risks of infrequent events in the cargo
screening process | cs.CE | We introduce a simulation model of the port of Calais with a focus on the
operation of immigration controls. Our aim is to compare the cost and benefits
of different screening policies. Methodologically, we are trying to understand
the limits of discrete event simulation of rare events. When will they become
'too rare' for simulation to give meaningful results?
|
1306.5166 | A variant of the multi-agent rendezvous problem | cs.MA cs.CG cs.DS cs.RO math.PR | The classical multi-agent rendezvous problem asks for a deterministic
algorithm by which $n$ points scattered in a plane can move about at constant
speed and merge at a single point, assuming each point can use only the
locations of the others it sees when making decisions and that the visibility
graph as a whole is connected. In time complexity analyses of such algorithms,
only the number of rounds of computation required are usually considered, not
the amount of computation done per round. In this paper, we consider
$\Omega(n^2 \log n)$ points distributed independently and uniformly at random
in a disc of radius $n$ and, assuming each point can not only see but also, in
principle, communicate with others within unit distance, seek a randomised
merging algorithm which asymptotically almost surely (a.a.s.) runs in time
O(n), in other words in time linear in the radius of the disc rather than in
the number of points. Under a precise set of assumptions concerning the
communication capabilities of neighboring points, we describe an algorithm
which a.a.s. runs in time O(n) provided the number of points is $o(n^3)$.
Several questions are posed for future work.
|
1306.5170 | Clinical Relationships Extraction Techniques from Patient Narratives | cs.IR cs.CL | The Clinical E-Science Framework (CLEF) project was used to extract important
information from medical texts by building a system for the purposes of
clinical research, evidence-based healthcare and genotype-meets-phenotype
informatics. The system is divided into two parts. The first part concerns the
identification of relationships between clinically important entities in the
text; full parses and domain-specific grammars were used in several approaches
to extracting the relationships. In the second part of the system, statistical
machine learning (ML) approaches are applied to relationship extraction. A
corpus of oncology narratives hand-annotated with clinical relationships is
used to train and test a system designed and implemented with supervised
machine learning (ML) approaches. Many features extracted from these texts are
used by the classifier to build a model, and multiple supervised machine
learning algorithms can be applied for relationship extraction. The effects of
adding features, changing the size of the corpus, and changing the type of
algorithm on relationship extraction are examined. Keywords: text mining;
information extraction; NLP; entities; relations.
|
1306.5173 | On the Hardnesses of Several Quantum Decoding Problems | quant-ph cs.IT math.IT | We classify the time complexities of three important decoding problems for
quantum stabilizer codes. First, regardless of the channel model, quantum
bounded distance decoding is shown to be NP-hard, mirroring the 1978 result of
Berlekamp, McEliece and van Tilborg for classical binary linear codes. Then over the
depolarizing channel, the decoding problems for finding a most likely error and
for minimizing the decoding error probability are also shown to be NP-hard. Our
results indicate that finding a polynomial-time decoding algorithm for general
stabilizer codes may be impossible, but this, on the other hand, strengthens
the foundation of quantum code-based cryptography.
|
1306.5204 | Is the Sample Good Enough? Comparing Data from Twitter's Streaming API
with Twitter's Firehose | cs.SI physics.soc-ph | Twitter is a social media giant famous for the exchange of short,
140-character messages called "tweets". In the scientific community, the
microblogging site is known for openness in sharing its data. It provides a
glance into its millions of users and billions of tweets through a "Streaming
API" which provides a sample of all tweets matching some parameters preset by
the API user. The API service has been used by many researchers, companies, and
governmental institutions that want to extract knowledge in accordance with a
diverse array of questions pertaining to social media. The essential drawback
of the Twitter API is the lack of documentation concerning what and how much
data users get. This leads researchers to question whether the sampled data is
a valid representation of the overall activity on Twitter. In this work we
embark on answering this question by comparing data collected using Twitter's
sampled API service with data collected using the full, albeit costly, Firehose
stream that includes every single published tweet. We compare both datasets
using common statistical metrics as well as metrics that allow us to compare
topics, networks, and locations of tweets. The results of our work will help
researchers and practitioners understand the implications of using the
Streaming API.
|
1306.5215 | Epistemology of Modeling and Simulation: How can we gain Knowledge from
Simulations? | cs.GL cs.AI | Epistemology is the branch of philosophy that deals with gaining knowledge.
It is closely related to ontology, the branch that deals with questions like
"What is real?" and "What do we know?", as ontology provides these components. When
using modeling and simulation, we usually imply that we are doing so to either
apply knowledge, in particular when we are using them for training and
teaching, or that we want to gain new knowledge, for example when doing
analysis or conducting virtual experiments. This paper looks at the history of
science to give a context to better cope with the question, how we can gain
knowledge from simulation. It addresses aspects of computability and the
general underlying mathematics, and applies the findings to validation and
verification and development of federations. As simulations are understood as
computable executable hypotheses, validation can be understood as hypothesis
testing and theory building. The mathematical framework allows furthermore
addressing some challenges when developing federations and the potential
introduction of contradictions when composing different theories, as they are
represented by the federated simulation systems.
|
1306.5219 | On the Heisenberg principle at macroscopic scales: understanding
classical negative information. Towards a general physical theory of
information | cs.IT math.IT q-bio.NC | With the aid of a toy model, the Monty Hall Problem (MHP), the
counterintuitive and theoretically problematic concept of negative information
in classical systems can be well understood. It is shown that, like its quantum
counterpart, classical local mutual information obtained through a
measurement can be expressed as the difference between the information gained
from the evidence and the negative information generated by the
inefficiency of the measurement itself; a novel local Shannon metric, the
transfer information content, is defined as this difference, which is negative
if the measurement generates more disturbance than evidence, i.e.,
generates a classical measurement back-action. This metric is valid for both
classical and quantum measurements, and it is proposed as a starting point
towards a general physical theory of information. This information-disturbance
trade-off in classical measurements is a kind of Heisenberg principle at
macroscopic scales, and we propose, as further work, to incorporate this
result into the already existing generalized uncertainty principles in the
field of quantum gravity.
|
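The toy model in this abstract, the Monty Hall Problem, can be checked by exact enumeration. The sketch below is not the paper's information-theoretic analysis; it only verifies the underlying probabilities, treating the host's door-opening as the "measurement" whose outcome shifts the posterior.

```python
# Exact Monty Hall win probabilities by enumerating every car placement,
# initial pick and host choice, weighting host choices uniformly when
# several doors qualify.

from fractions import Fraction
from itertools import product

def monty_hall_win_prob(switch):
    """Exact win probability for the switch or stay strategy."""
    wins, total = Fraction(0), Fraction(0)
    for car, pick in product(range(3), repeat=2):
        # the host opens a door that is neither the pick nor the car;
        # when several doors qualify, each is assumed equally likely
        options = [d for d in range(3) if d not in (pick, car)]
        for host in options:
            weight = Fraction(1, len(options))
            if switch:
                final = next(d for d in range(3) if d not in (pick, host))
            else:
                final = pick
            if final == car:
                wins += weight
            total += weight
    return wins / total
```

Switching wins with probability 2/3 and staying with probability 1/3, which is the asymmetry the measurement-based analysis has to account for.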
1306.5226 | Global registration of multiple point clouds using semidefinite
programming | cs.CV cs.NA math.NA math.OC | Consider $N$ points in $\mathbb{R}^d$ and $M$ local coordinate systems that
are related through unknown rigid transforms. For each point we are given
(possibly noisy) measurements of its local coordinates in some of the
coordinate systems. Alternatively, for each coordinate system, we observe the
coordinates of a subset of the points. The problem of estimating the global
coordinates of the $N$ points (up to a rigid transform) from such measurements
comes up in distributed approaches to molecular conformation and sensor network
localization, and also in computer vision and graphics.
The least-squares formulation of this problem, though non-convex, has a well
known closed-form solution when $M=2$ (based on the singular value
decomposition). However, no closed form solution is known for $M\geq 3$.
In this paper, we demonstrate how the least-squares formulation can be
relaxed into a convex program, namely a semidefinite program (SDP). By setting
up connections between the uniqueness of this SDP and results from rigidity
theory, we prove conditions for exact and stable recovery for the SDP
relaxation. In particular, we prove that the SDP relaxation can guarantee
recovery under more adversarial conditions compared to earlier proposed
spectral relaxations, and derive error bounds for the registration error
incurred by the SDP relaxation.
We also present results of numerical experiments on simulated data to confirm
the theoretical findings. We empirically demonstrate that (a) unlike the
spectral relaxation, the relaxation gap is mostly zero for the semidefinite
program (i.e., we are able to solve the original non-convex least-squares
problem) up to a certain noise threshold, and (b) the semidefinite program
performs significantly better than spectral and manifold-optimization methods,
particularly at large noise levels.
|
1306.5229 | A Physical-layer Rateless Code for Wireless Channels | cs.IT math.IT | In this paper, we propose a physical-layer rateless code for wireless
channels. A novel rateless encoding scheme is developed to overcome the high
error floor problem caused by the low-density generator matrix (LDGM)-like
encoding scheme in conventional rateless codes. This is achieved by providing
each symbol with approximately equal protection in the encoding process. An
extrinsic information transfer (EXIT) chart based optimization approach is
proposed to obtain a robust check node degree distribution, which can achieve
near-capacity performances for a wide range of signal to noise ratios (SNR).
Simulation results show that, under the same channel conditions and
transmission overheads, the bit-error-rate (BER) performance of the proposed
scheme considerably outperforms the existing rateless codes in additive white
Gaussian noise (AWGN) channels, particularly at low BER regions.
|
1306.5263 | Discriminative Training: Learning to Describe Video with Sentences, from
Video Described with Sentences | cs.CV cs.CL | We present a method for learning word meanings from complex and realistic
video clips by discriminatively training (DT) positive sentential labels
against negative ones, and then use the trained word models to generate
sentential descriptions for new video. This new work is inspired by recent work
which adopts a maximum likelihood (ML) framework to address the same problem
using only positive sentential labels. The new method, like the ML-based one,
is able to automatically determine which words in the sentence correspond to
which concepts in the video (i.e., ground words to meanings) in a weakly
supervised fashion. While both DT and ML yield comparable results with
sufficient training data, DT outperforms ML significantly with smaller training
sets because it can exploit negative training labels to better constrain the
learning problem.
|
1306.5268 | Static and Dynamic Aspects of Scientific Collaboration Networks | cs.SI cs.DL physics.soc-ph | Collaboration networks arise when we map the connections between scientists
which are formed through joint publications. These networks thus display the
social structure of academia, and also allow conclusions about the structure of
scientific knowledge. Using the computer science publication database DBLP, we
compile relations between authors and publications as graphs and proceed with
examining and quantifying collaborative relations with graph-based methods. We
review standard properties of the network and rank authors and publications by
centrality. Additionally, we detect communities with modularity-based
clustering and compare the resulting clusters to a ground-truth based on
conferences and thus topical similarity. In a second part, we are the first to
combine DBLP network data with data from the Dagstuhl Seminars: We investigate
whether seminars of this kind, as social and academic events designed to
connect researchers, leave a visible track in the structure of the
collaboration network. Our results suggest that such single events are not
influential enough to change the network structure significantly. However, the
network structure seems to influence a participant's decision to accept or
decline an invitation.
|
1306.5277 | Weight distribution of two classes of cyclic codes with respect to two
distinct order elements | cs.IT math.IT math.NT | Cyclic codes are an interesting type of linear codes and have wide
applications in communication and storage systems due to their efficient
encoding and decoding algorithms. Cyclic codes have been studied for many
years, but their weight distributions are known only in a few cases. In this
paper, let $\Bbb F_r$ be an extension of a finite field $\Bbb F_q$ and $r=q^m$,
we determine the weight distribution of the cyclic codes $\mathcal C=\{c(a, b):
a, b \in \Bbb F_r\},$ $$c(a, b)=(\mbox {Tr}_{r/q}(ag_1^0+bg_2^0), \ldots, \mbox
{Tr}_{r/q}(ag_1^{n-1}+bg_2^{n-1})), g_1, g_2\in \Bbb F_r,$$ in the following
two cases: (1) $\ord(g_1)=n, n|r-1$ and $g_2=1$; (2) $\ord(g_1)=n$,
$g_2=g_1^2$, $\ord(g_2)=\frac n 2$, $m=2$ and $\frac{2(r-1)}n|(q+1)$.
|
1306.5279 | Affect Control Processes: Intelligent Affective Interaction using a
Partially Observable Markov Decision Process | cs.HC cs.AI | This paper describes a novel method for building affectively intelligent
human-interactive agents. The method is based on a key sociological insight
that has been developed and extensively verified over the last twenty years,
but has yet to make an impact in artificial intelligence. The insight is that
resource bounded humans will, by default, act to maintain affective
consistency. Humans have culturally shared fundamental affective sentiments
about identities, behaviours, and objects, and they act so that the transient
affective sentiments created during interactions confirm the fundamental
sentiments. Humans seek and create situations that confirm or are consistent
with, and avoid and suppress situations that disconfirm or are inconsistent
with, their culturally shared affective sentiments. This "affect control
principle" has been shown to be a powerful predictor of human behaviour. In
this paper, we present a probabilistic and decision-theoretic generalisation of
this principle, and we demonstrate how it can be leveraged to build affectively
intelligent artificial agents. The new model, called BayesAct, can maintain
multiple hypotheses about sentiments simultaneously as a probability
distribution, and can make use of an explicit utility function to make
value-directed action choices. This allows the model to generate affectively
intelligent interactions with people by learning about their identity,
predicting their behaviours using the affect control principle, and taking
actions that are simultaneously goal-directed and affect-sensitive. We
demonstrate this generalisation with a set of simulations. We then show how our
model can be used as an emotional "plug-in" for artificially intelligent
systems that interact with humans in two different settings: an exam practice
assistant (tutor) and an assistive device for persons with a cognitive
disability.
|
1306.5288 | Efficiently Estimating Motif Statistics of Large Networks | cs.SI physics.soc-ph | Exploring statistics of locally connected subgraph patterns (also known as
network motifs) has helped researchers better understand the structure and
function of biological and online social networks (OSNs). Nowadays the massive
size of some critical networks -- often stored in already overloaded relational
databases -- effectively limits the rate at which nodes and edges can be
explored, making it a challenge to accurately discover subgraph statistics. In
this work, we propose sampling methods to accurately estimate subgraph
statistics from as few queried nodes as possible. We present sampling
algorithms that efficiently and accurately estimate subgraph properties of
massive networks. Our algorithms require no pre-computation or complete network
topology information. At the same time, we provide theoretical guarantees of
convergence. We perform experiments using widely known data sets, and show that
for the same accuracy, our algorithms require an order of magnitude fewer
queries (samples) than the current state-of-the-art algorithms.
|
1306.5291 | Throughput of Large One-hop Wireless Networks with General Fading | cs.IT math.IT | Consider $n$ source-destination pairs randomly located in a shared wireless
medium, resulting in interference between different transmissions. All wireless
links are modeled by independently and identically distributed (i.i.d.) random
variables, indicating that the dominant channel effect is the random fading
phenomenon. We characterize the throughput of one-hop communication in such a
network. First, we present a closed-form expression for throughput scaling of a
heuristic strategy, for a completely general channel power distribution. This
heuristic strategy is based on activating the source-destination pairs with the
best direct links, and forcing the others to be silent. Then, we present the
results for several common examples, namely, Gamma (Nakagami-$m$ fading),
Weibull, Pareto, and Log-normal channel power distributions. Finally -- by
proposing an upper bound on throughput of all possible strategies for
super-exponential distributions -- we prove that the aforementioned heuristic
method is order-optimal for Nakagami-$m$ fading.
|
1306.5293 | New Approach of Estimating PSNR-B For De-blocked Images | cs.CV | Measurement of image quality is very crucial to many image processing
applications. Quality metrics are used to measure the quality of improvement in
the images after they are processed and compared with the original images.
Compression is one of the applications where it is required to monitor the
quality of the decompressed or decoded image. JPEG compression is the most
prevalent lossy compression technique for image codecs, but it suffers from
blocking artifacts. Various deblocking filters are used to reduce these
artifacts; we also study the efficiency of deblocking filters in improving
visual signals degraded by compression-induced blocking artifacts. Objective
quality metrics such as PSNR, SSIM, and PSNR-B for analyzing the quality of
deblocked images are studied. We introduce a new approach to PSNR-B, called
modified PSNR-B, for analyzing the quality of deblocked images. Simulation
results show that the modified PSNR-B gives even better results than existing
well-known blockiness-specific indices.
|
1306.5296 | Design and Implementation of an Unmanned Vehicle using a GSM Network
without Microcontrollers | cs.SY | In the recent past, wireless-controlled vehicles have been used extensively in
areas such as unmanned rescue missions and unmanned military combat. The major
disadvantage of these wireless unmanned robots is that they typically rely on
RF circuits for maneuvering and control, and RF circuits suffer from drawbacks
such as a limited frequency (working) range and limited control. To overcome
these problems, a few papers have described methods that use the GSM network
and the DTMF function of a cell phone to control a robotic vehicle. Although
this paper uses the same underlying technology of the GSM network and a
DTMF-based mobile phone, it shows the construction of a circuit that uses only
4 bits of wireless data communication to control the motion of the vehicle
without any microcontroller. This improvement considerably reduces circuit
complexity and the manpower needed for software development, as the circuit
built using this system does not require any form of programming. Moreover,
practical results showed an appreciable degree of accuracy and
user-friendliness without the use of any microcontroller.
|
1306.5299 | Secret key generation from Gaussian sources using lattice hashing | cs.IT math.IT | We propose a simple yet complete lattice-based scheme for secret key
generation from Gaussian sources in the presence of an eavesdropper, and show
that it achieves strong secret key rates within 1/2 nat of the optimal in the
case of "degraded" source models. The novel ingredient of our scheme is a
lattice-hashing technique, based on the notions of flatness factor and channel
intrinsic randomness. The proposed scheme does not require dithering.
|
1306.5305 | Benchmarking Practical RRM Algorithms for D2D Communications in LTE
Advanced | cs.IT cs.NI math.IT | Device-to-device (D2D) communication integrated into cellular networks is a
means to take advantage of the proximity of devices and allow for reusing
cellular resources and thereby to increase the user bitrates and the system
capacity. However, when D2D (in the 3rd Generation Partnership Project also
called Long Term Evolution (LTE) Direct) communication in cellular spectrum is
supported, there is a need to revisit and modify the existing radio resource
management (RRM) and power control (PC) techniques to realize the potential of
the proximity and reuse gains and to limit the interference at the cellular
layer. In this paper, we examine the performance of the flexible LTE PC tool
box and benchmark it against a utility optimal iterative scheme. We find that
the open loop PC scheme of LTE performs well for cellular users both in terms
of the used transmit power levels and the achieved
signal-to-interference-and-noise-ratio (SINR) distribution. However, the
performance of the D2D users as well as the overall system throughput can be
boosted by the utility optimal scheme, because the utility maximizing scheme
takes better advantage of both the proximity and the reuse gains. Therefore, in
this paper we propose a hybrid PC scheme, in which cellular users employ the
open loop path compensation method of LTE, while D2D users use the utility
optimizing distributed PC scheme. In order to protect the cellular layer, the
hybrid scheme allows for limiting the interference caused by the D2D layer at
the cost of having a small impact on the performance of the D2D layer. To
ensure feasibility, we limit the number of iterations to a practically feasible
level. We make the point that the hybrid scheme is not only near optimal, but
it also allows for a distributed implementation for the D2D users, while
preserving the LTE PC scheme for the cellular users.
|
1306.5308 | Cognitive Interpretation of Everyday Activities: Toward Perceptual
Narrative Based Visuo-Spatial Scene Interpretation | cs.AI cs.CV cs.HC cs.RO | We position a narrative-centred computational model for high-level knowledge
representation and reasoning in the context of a range of assistive
technologies concerned with "visuo-spatial perception and cognition" tasks. Our
proposed narrative model encompasses aspects such as \emph{space, events,
actions, change, and interaction} from the viewpoint of commonsense reasoning
and learning in large-scale cognitive systems. The broad focus of this paper is
on the domain of "human-activity interpretation" in smart environments, ambient
intelligence etc. In the backdrop of a "smart meeting cinematography" domain,
we position the proposed narrative model, preliminary work on perceptual
narrativisation, and the immediate outlook on constructing general-purpose
open-source tools for perceptual narrativisation.
ACM Classification: I.2 Artificial Intelligence: I.2.0 General -- Cognitive
Simulation, I.2.4 Knowledge Representation Formalisms and Methods, I.2.10
Vision and Scene Understanding: Architecture and control structures, Motion,
Perceptual reasoning, Shape, Video analysis
General keywords: cognitive systems; human-computer interaction; spatial
cognition and computation; commonsense reasoning; spatial and temporal
reasoning; assistive technologies
|
1306.5323 | The Geometry of Fusion Inspired Channel Design | cs.IT math.IT | This paper is motivated by the problem of integrating multiple sources of
measurements. We consider two multiple-input-multiple-output (MIMO) channels, a
primary channel and a secondary channel, with dependent input signals. The
primary channel carries the signal of interest, and the secondary channel
carries a signal that shares a joint distribution with the primary signal. The
problem of particular interest is designing the secondary channel matrix, when
the primary channel matrix is fixed. We formulate the problem as an
optimization problem, in which the optimal secondary channel matrix maximizes
an information-based criterion. An analytical solution is provided in a special
case. Two fast-to-compute algorithms, one extrinsic and the other intrinsic,
are proposed to approximate the optimal solutions in general cases. In
particular, the intrinsic algorithm exploits the geometry of the unit sphere, a
manifold embedded in Euclidean space. The performances of the proposed
algorithms are examined through a simulation study. A discussion of the choice
of dimension for the secondary channel is given.
|
1306.5326 | Cryptanalysis of a non-commutative key exchange protocol | cs.IT cs.CR math.IT | In the papers by Alvarez et al. and Pathak and Sanghi a non-commutative based
public key exchange is described. A similar version of it has also been
patented (US7184551). In this paper we present a polynomial time attack that
breaks the variants of the protocol presented in the two papers. Moreover we
show that breaking the patented cryptosystem US7184551 can be easily reduced to
factoring. We also give some examples to show how efficiently the attack works.
|
1306.5338 | Active influence in dynamical models of structural balance in social
networks | cs.SI physics.soc-ph | We consider a nonlinear dynamical system on a signed graph, which can be
interpreted as a mathematical model of social networks in which the links can
have both positive and negative connotations. In accordance with a concept from
social psychology called structural balance, the negative links play a key role
in both the structure and dynamics of the network. Recent research has shown
that in a nonlinear dynamical system modeling the time evolution of
"friendliness levels" in the network, two opposing factions emerge from almost
any initial condition. Here we study active external influence in this
dynamical model and show that any agent in the network can achieve any desired
structurally balanced state from any initial condition by perturbing its own
local friendliness levels. Based on this result, we also introduce a new
network centrality measure for signed networks. The results are illustrated in
an international relations network using United Nations voting record data from
1946 to 2008 to estimate friendliness levels amongst various countries.
|
1306.5349 | Song-based Classification techniques for Endangered Bird Conservation | cs.LG | The work presented in this paper is part of a global framework whose
long-term goal is to design a wireless sensor network able to support the
observation of a population of endangered birds. We present the first stage,
in which we conducted a knowledge discovery approach on a sample of acoustic
data. We use MFCC features extracted from bird songs and exploit two knowledge
discovery techniques: one relies on clustering-based approaches and highlights
the homogeneity in the songs of the species; the other, based on predictive
modeling, demonstrates the good performance of various machine learning
techniques for the identification process. The knowledge elicited provides
promising results for considering a widespread study and for deriving
guidelines for designing a first version of the automatic approach to data
collection based on acoustic sensors.
|
1306.5350 | Error Correction for NOR Memory Devices with Exponentially Distributed
Read Noise | cs.IT math.IT | The scaling of high density NOR Flash memory devices with multi level cell
(MLC) hits a reliability wall because of the relatively high intrinsic bit
error rate (IBER). Chip makers offer two solutions to meet the
output bit error rate (OBER) specification: either partial coverage with an
error correction code (ECC), or data storage in single-level cells (SLC) with a
significant increase in die cost. NOR flash memory allows writing
information in small portions; therefore, full error protection becomes
costly due to the high required redundancy, e.g. $\sim$50%. This is very different
from NAND flash memory, which writes large chunks of information at once; NAND
ECC requires just $\sim$10% redundancy. This paper analyzes a novel
error protection scheme applicable to NOR storage of one byte. The method does
not require any redundant cells, but assumes a 5th program level. The information
is mapped to states in a 4-dimensional space separated by a minimal
Manhattan distance equal to 2. This code preserves the information capacity: one
byte occupies four memory cells. We demonstrate the OBER $\sim$ IBER$^{3/2}$
scaling law, where IBER is calculated for the 4-level MLC memory. As an
example, the 4-level MLC with IBER $\sim10^{-9}$, which is unacceptable for
high density products, can be converted to OBER $\sim10^{-12}$. We assume that
the IBER is determined by the exponentially distributed read noise. This is the
case for NOR Flash memory devices, since the exponential tails are typical for
the random telegraph signal (RTS) noise and for most of the charge loss, charge
gain, and charge sharing data losses.
|
1306.5358 | Monotonicity of a relative R\'enyi entropy | math-ph cs.IT math.FA math.IT math.MP quant-ph | We show that a recent definition of relative R\'enyi entropy is monotone
under completely positive, trace preserving maps. This proves a recent
conjecture of M\"uller-Lennert et al.
|