| id | title | categories | abstract |
|---|---|---|---|
1403.2668 | Revealing effective classifiers through network comparison | physics.soc-ph cs.SI physics.data-an | The ability to compare complex systems can provide new insight into the
fundamental nature of the processes captured in ways that are otherwise
inaccessible to observation. Here, we introduce the $n$-tangle method to
directly compare two networks for structural similarity, based on the
distribution of edge density in network subgraphs. We demonstrate that this
method can efficiently introduce comparative analysis into network science and
open the road for many new applications. For example, we show how the
construction of a phylogenetic tree across animal taxa according to their
social structure can reveal commonalities in the behavioral ecology of the
populations, or how students create similar networks according to
university size. Our method can be expanded to study a multitude of additional
properties, such as network classification, changes during time evolution,
convergence of growth models, and detection of structural changes during
damage.
|
1403.2702 | On the efficiency of transmission strategies for broadcast channels
using finite size constellations | cs.IT math.IT | In this paper, achievable rate regions are derived for the power-constrained
two-user Gaussian broadcast channel using finite-dimensional constellations.
Various transmission strategies are studied, namely superposition coding (SC)
and superposition modulation (SM) and compared to standard schemes such as time
sharing (TS). The maximal achievable rate regions for SM and SC strategies are
obtained by optimizing over both the joint probability distribution and over
the positions of constellation symbols. The improvement in achievable rates for
each scheme of increasing complexity is evaluated in terms of SNR savings for a
given target achievable rate and/or the percentage of gain in achievable rates for
one user with reference to a classical scenario.
|
1403.2708 | Beyond network structure: How heterogenous susceptibility modulates the
spread of epidemics | physics.soc-ph cs.SI q-bio.PE | The compartmental models used to study epidemic spreading often assume the
same susceptibility for all individuals, and are therefore agnostic about the
effects that differences in susceptibility can have on epidemic spreading. Here
we show that--for the SIS model--differential susceptibility can make networks
more vulnerable to the spread of diseases when the correlation between a node's
degree and susceptibility is positive, and less vulnerable when this
correlation is negative. Moreover, we show that networks become more likely to
contain a pocket of infection when individuals are more likely to connect with
others that have similar susceptibility (the network is segregated). These
results show that the failure to include differential susceptibility in
epidemic models can lead to a systematic over- or underestimation of fundamental
epidemic parameters when the structure of the networks is not independent from
the susceptibility of the nodes or when there are correlations between the
susceptibility of connected individuals.
|
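As a concrete illustration of the mechanism described in the 1403.2708 abstract above, the following is a minimal SIS simulation sketch in which each node carries its own susceptibility. The network model, parameter values and the positive degree-susceptibility coupling are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: symmetric Erdos-Renyi adjacency matrix, no self-loops.
n, p_edge = 500, 0.02
A = rng.random((n, n)) < p_edge
A = np.triu(A, 1)
A = (A | A.T).astype(float)

# Heterogeneous susceptibility, here positively correlated with degree
# (an illustrative assumption, not the paper's exact construction).
degree = A.sum(axis=1)
susceptibility = 0.1 + 0.5 * degree / max(degree.max(), 1.0)

beta, mu, steps = 0.05, 0.2, 200        # base transmission rate, recovery rate
infected = rng.random(n) < 0.05         # initial seed of infected nodes

for _ in range(steps):
    pressure = A @ infected.astype(float)                      # infected neighbours per node
    p_infect = 1.0 - (1.0 - beta * susceptibility) ** pressure
    new_inf = (~infected) & (rng.random(n) < p_infect)
    recovered = infected & (rng.random(n) < mu)
    infected = (infected | new_inf) & ~recovered

print("endemic infected fraction ~", infected.mean())
```

Flipping the sign of the coupling between degree and susceptibility in this sketch is a quick way to probe the vulnerability effect the abstract describes.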
1403.2732 | The Bursty Dynamics of the Twitter Information Network | cs.SI physics.soc-ph stat.ML | In online social media systems users are not only posting, consuming, and
resharing content, but also creating new and destroying existing connections in
the underlying social network. While each of these two types of dynamics has
individually been studied in the past, much less is known about the connection
between the two. How does user information posting and seeking behavior
interact with the evolution of the underlying social network structure?
Here, we study ways in which network structure reacts to users posting and
sharing content. We examine the complete dynamics of the Twitter information
network, where users post and reshare information while they also create and
destroy connections. We find that the dynamics of network structure can be
characterized by steady rates of change, interrupted by sudden bursts.
Information diffusion in the form of cascades of post re-sharing often creates
such sudden bursts of new connections, which significantly change users' local
network structure. These bursts transform users' networks of followers to
become structurally more cohesive as well as more homogeneous in terms of
follower interests. We also explore the effect of the information content on
the dynamics of the network and find evidence that the appearance of new topics
and real-world events can lead to significant changes in edge creations and
deletions. Lastly, we develop a model that quantifies the dynamics of the
network and the occurrence of these bursts as a function of the information
spreading through the network. The model can successfully predict which
information diffusion events will lead to bursts in network dynamics.
|
1403.2739 | Sufficient statistics for linear control strategies in decentralized
systems with partial history sharing | math.OC cs.SY | In decentralized control systems with linear dynamics, quadratic cost, and
Gaussian disturbance (also called decentralized LQG systems) linear control
strategies are not always optimal. Nonetheless, linear control strategies are
appealing due to analytic and implementation simplicity. In this paper, we
investigate decentralized LQG systems with partial history sharing information
structure and identify finite dimensional sufficient statistics for such
systems. Unlike prior work on decentralized LQG systems, we do not assume
partial nestedness or quadratic invariance. Our approach is based on the
common information approach of Nayyar \emph{et al}, 2013 and exploits the
linearity of the system dynamics and control strategies. To illustrate our
methodology, we identify sufficient statistics for linear strategies in
decentralized systems where controllers communicate over a strongly connected
graph with finite delays, and for decentralized systems consisting of coupled
subsystems with control sharing or one-sided one step delay sharing information
structures.
|
1403.2740 | A consistent model for cardiac deformation estimation under abnormal
ventricular muscle conditions | cs.CE cs.NA | Deformation modeling of cardiac muscle is an important issue in the field of
cardiac analysis. For this reason, many approaches have been developed to best
estimate the cardiac muscle deformation, and to obtain a practical model to use
in diagnostic procedures. But there are some conditions, such as in the case of
myocardial infarction, in which the regular modeling approaches are not useful.
In this work, using a point-wise approach to deformation estimation, we try
to estimate the deformation under some abnormal conditions of the cardiac muscle.
First, the endocardial and epicardial contour points are ordered with respect
to the center of gravity of the endocardial contour, and boundary point displacement
vectors are extracted. Then, to solve the governing equation of deformation,
which is an elliptic equation, we apply boundary conditions in accordance with
the computed displacement vectors, and the Finite Element Method (FEM) is then
used to solve the governing equation. Using the obtained displacement field
through the cardiac muscle, a strain map is extracted to show the mechanical
behavior of the cardiac muscle. To validate the proposed algorithm in the case
of infarcted muscle, a non-homogeneous ring is modeled in ANSYS under a uniform
time-varying internal pressure, which is the case in real cardiac muscle
deformation; the proposed algorithm is then implemented in MATLAB and the
results for this problem are extracted.
|
1403.2763 | Aggregate Estimation Over Dynamic Hidden Web Databases | cs.DB | Many databases on the web are "hidden" behind (i.e., accessible only through)
their restrictive, form-like, search interfaces. Recent studies have shown that
it is possible to estimate aggregate query answers over such hidden web
databases by issuing a small number of carefully designed search queries
through the restrictive web interface. A problem with these existing works,
however, is that they all assume the underlying database to be static, while
most real-world web databases (e.g., Amazon, eBay) are frequently updated. In
this paper, we study the novel problem of estimating/tracking aggregates over
dynamic hidden web databases while adhering to the stringent query-cost
limitation they enforce (e.g., at most 1,000 search queries per day).
Theoretical analysis and extensive real-world experiments demonstrate the
effectiveness of our proposed algorithms and their superiority over baseline
solutions (e.g., the repeated execution of algorithms designed for static web
databases).
|
1403.2779 | Erasure codes with simplex locality | cs.IT math.IT | We focus on erasure codes for distributed storage. The distributed storage
setting imposes locality requirements because of easy repair demands on the
decoder. We first establish the characterization of various locality properties
in terms of the generator matrix of the code. These lead to bounds on locality
and notions of optimality. We then examine the locality properties of a family
of non-binary codes with simplex structure. We investigate their optimality and
design several easy repair decoding methods. In particular, we show that any
correctable erasure pattern can be solved by easy repair.
|
1403.2787 | Principles of scientific research team formation and evolution | physics.soc-ph astro-ph.IM cs.DL cs.SI | Research teams are the fundamental social unit of science, and yet there is
currently no model that describes their basic property: size. In most fields
teams have grown significantly in recent decades. We show that this is partly
due to the change in the character of team-size distribution. We explain these
changes with a comprehensive yet straightforward model of how teams of
different sizes emerge and grow. This model accurately reproduces the evolution
of the empirical team-size distribution over a period of 50 years. The modeling
reveals that there are two modes of knowledge production. The first and more
fundamental mode employs relatively small, core teams. Core teams form by a
Poisson process and produce a Poisson distribution of team sizes in which
larger teams are exceedingly rare. The second mode employs extended teams,
which started as core teams, but subsequently accumulated new members
proportional to the past productivity of their members. Given time, this mode
gives rise to a power-law tail of large teams (10-1000 members), which features
in many fields today. Based on this model we construct an analytical functional
form that allows the contribution of different modes of authorship to be
determined directly from the data and is applicable to any field. The model
also offers a solid foundation for studying other social aspects of science,
such as productivity and collaboration.
|
1403.2802 | Learning Deep Face Representation | cs.CV cs.LG | Face representation is a crucial step of face recognition systems. An optimal
face representation should be discriminative, robust, compact, and very
easy-to-implement. While numerous hand-crafted and learning-based
representations have been proposed, considerable room for improvement is still
present. In this paper, we present a very easy-to-implement deep learning
framework for face representation. Our method is based on a new structure of deep
network (called Pyramid CNN). The proposed Pyramid CNN adopts a
greedy-filter-and-down-sample operation, which enables the training procedure
to be very fast and computation-efficient. In addition, the structure of
Pyramid CNN can naturally incorporate feature sharing across multi-scale face
representations, increasing the discriminative ability of resulting
representation. Our basic network is capable of achieving high recognition
accuracy ($85.8\%$ on the LFW benchmark) with only an 8-dimensional representation. When
extended to feature-sharing Pyramid CNN, our system achieves the
state-of-the-art performance ($97.3\%$) on the LFW benchmark. We also introduce a
new benchmark of realistic face images from social networks and validate that our
proposed representation has a good ability to generalize.
|
1403.2821 | Towards an Agent-Oriented Modeling and Evaluation Approach For Vehicular
Systems Security | cs.SE cs.MA | Agent technology is a software paradigm that permits the implementation of large and
complex distributed applications. In order to assist the development of
multi-agent systems, agent-oriented methodologies (AOM) have been created in
recent years to support the modeling of increasingly complex applications in many
different domains. By defining in a non-ambiguous way the concepts used in a
specific domain, meta-modeling may represent a step towards such
interoperability. In the transport domain, this paper proposes an agent-oriented
meta-model that provides rigorous concepts for conducting transportation system
problem modeling. The aim is to allow analysts to produce a transportation
system model that precisely captures the knowledge of an organization so that
an agent-oriented requirements specification of the system-to-be and its
operational corporate environment can be derived from it. To this end, we
extend and adapt an existing meta-model, Extended Gaia, to build a meta-model
and an adequate model for transportation problems. Our new agent-oriented
meta-model aims to allow the analyst to model and specify any transportation
system as a multi-agent system. Based on the proposed meta-model, we propose
an approach for modeling and evaluating the transportation system based on
Stochastic Activity Network (SAN) components. The proposed process is based on
seven steps, from the recognition phase to the quantitative analysis phase. These
analyses are based on dependability models built using the Stochastic Activity
Network formalism. A real case study of an urban public
transportation system has been conducted to show the benefits of the approach.
|
1403.2835 | Compressive Signal Processing with Circulant Sensing Matrices | cs.IT math.IT | Compressive sensing achieves effective dimensionality reduction of signals,
under a sparsity constraint, by means of a small number of random measurements
acquired through a sensing matrix. In a signal processing system, the problem
arises of processing the random projections directly, without first
reconstructing the signal. In this paper, we show that circulant sensing
matrices allow us to perform a variety of classical signal processing tasks such
as filtering, interpolation, registration, transforms, and so forth, directly
in the compressed domain and in an exact fashion, \emph{i.e.}, without relying
on estimators as proposed in the existing literature. The advantage of the
techniques presented in this paper is to enable direct
measurement-to-measurement transformations, without the need of costly recovery
procedures.
|
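The property that makes the compressed-domain processing of the 1403.2835 abstract above possible is that circulant matrices are all diagonalized by the DFT and therefore commute. A tiny numpy check of that algebraic fact is sketched below; it deliberately ignores the row subsampling and the estimator-free processing machinery of the actual paper, and all names and values are illustrative.

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(1)
n = 64

x = rng.standard_normal(n)     # signal
c = rng.standard_normal(n)     # random kernel defining the sensing operator
h = rng.standard_normal(n)     # filter impulse response

Phi = circulant(c)             # full circulant "sensing" matrix (no row subsampling here)
H = circulant(h)               # circular-convolution filtering matrix

# Both matrices are diagonalized by the same DFT, hence they commute:
# filtering the measurements equals measuring the filtered signal.
print(np.allclose(H @ (Phi @ x), Phi @ (H @ x)))   # True
```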
1403.2837 | HPS: a hierarchical Persian stemming method | cs.CL | In this paper, a novel hierarchical Persian stemming approach based on the
Part-Of-Speech of the word in a sentence is presented. The implemented stemmer
includes hash tables and several deterministic finite automata in its different
levels of hierarchy for removing the prefixes and suffixes of the words. We had
two intentions in using hash tables in our method. The first one is that the
DFAs do not support some special words, so a hash table can partly solve the
addressed problem. The second goal is to speed up the implemented stemmer by
omitting the time that the deterministic finite automata need. Because of the
hierarchical organization, this method is fast and flexible enough. Our
experiments on test sets from the Hamshahri collection and security news (istna.ir)
show that our method has an average accuracy of 95.37%, which improves further
when the method is used on a test set with common topics.
|
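A minimal sketch of the lookup-then-strip hierarchy described in the 1403.2837 abstract above is given below. The exception table and suffix list are invented placeholders rather than the paper's linguistic resources, and a plain ordered suffix scan stands in for the deterministic finite automata.

```python
# The lookup table and suffix list below are invented placeholders, and a plain
# greedy suffix scan stands in for the DFAs of the actual stemmer.
EXCEPTIONS = {"مردم": "مردم"}                       # hash table: irregular forms by direct lookup
SUFFIXES = sorted(["ها", "های", "هایی", "تر", "ترین", "ی"], key=len, reverse=True)

def stem(word: str) -> str:
    if word in EXCEPTIONS:                          # level 1: hash-table lookup
        return EXCEPTIONS[word]
    for suf in SUFFIXES:                            # level 2: longest-suffix stripping
        if word.endswith(suf) and len(word) > len(suf) + 1:
            return word[: -len(suf)]
    return word

print(stem("کتابها"))                               # -> "کتاب" (illustrative only)
```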
1403.2842 | Application of Particle Swarm Optimization to Microwave Tapered
Microstrip Lines | cs.NE | Application of metaheuristic algorithms has been of continued interest in the
field of electrical engineering because of their powerful features. In this
work, a special design is carried out for a tapered transmission line used for matching
an arbitrary real load to a 50{\Omega} line. The problem at hand is to match
this arbitrary load to the 50{\Omega} line using a three-section tapered
transmission line with impedances in decreasing order from the load. So the
problem becomes optimizing an equation with three unknowns with various
conditions. The optimized values are obtained using Particle Swarm
Optimization. It can easily be shown that PSO is very strong in solving this
kind of multiobjective optimization problem.
|
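To make the optimization setup of the 1403.2842 abstract above concrete, here is a generic global-best particle swarm optimizer over the three section impedances. The objective function is a stand-in (geometric stepping between the load and the 50 Ohm line plus an ordering penalty), not the paper's electromagnetic matching model, and the load value is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

Z0, ZL = 50.0, 200.0          # source line and (illustrative) real load impedances

def objective(z):
    # Stand-in cost, NOT the paper's electromagnetic model: reward impedances
    # that step geometrically from ZL down to Z0 and are in decreasing order.
    z1, z2, z3 = z
    ideal = ZL * (Z0 / ZL) ** (np.arange(1, 4) / 4.0)   # geometric stepping
    penalty = 1e3 * max(0.0, z2 - z1) + 1e3 * max(0.0, z3 - z2)
    return float(np.sum((z - ideal) ** 2)) + penalty

# Plain global-best PSO.
n_particles, n_iter, dim = 30, 200, 3
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(Z0, ZL, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, Z0, ZL)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("optimized section impedances:", np.round(gbest, 1))
```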
1403.2848 | Delineation of Techniques to implement on the enhanced proposed model
using data mining for protein sequence classification | cs.DB cs.CE | In the post-genomic era, with the advent of new technologies, a huge amount of
complex molecular data are generated with high throughput. The management of
this biological data is definitely a challenging task due to the complexity and
heterogeneity of the data for discovering new knowledge. Issues like managing noisy
and incomplete data need to be dealt with. The use of data mining in the
biological domain has seen notable success. Discovering new knowledge
from biological data is a major challenge for data mining techniques. The
novelty of the proposed model is its combined use of intelligent techniques to
classify the protein sequence faster and more efficiently. The use of FFT, a fuzzy
classifier, a string-weighted algorithm, the gram encoding method, a neural network
model and a rough set classifier in a single model, each in an appropriate place,
can enhance the quality of the classification system. Thus the primary challenge
is to identify and classify the large protein sequences in a very fast and easy
yet intelligent way to decrease the time complexity and space complexity.
|
1403.2850 | Fat-tailed fluctuations in the size of organizations: the role of social
influence | physics.soc-ph cs.SI | Organizational growth processes have consistently been shown to exhibit a
fatter-than-Gaussian growth-rate distribution in a variety of settings. Long
periods of relatively small changes are interrupted by sudden changes in all
size scales. Such extreme events can have important consequences for
the development of biological and socio-economic systems. Existing models do
not derive this aggregated pattern from agent actions at the micro level. We
develop an agent-based simulation model on a social network. We take our
point of departure from a model by Schwarzkopf et al. on a scale-free network. We
reproduce the fat-tailed pattern out of internal dynamics alone, and also find
that it is robust with respect to network topology. Thus, the social network
and the local interactions are a prerequisite for generating the pattern, but
not the network topology itself. We further extend the model with a parameter
$\delta$ that weights the relative fraction of an individual's neighbours
belonging to a given organization, representing a contextual aspect of social
influence. In the lower limit of this parameter, the fraction is irrelevant and
choice of organization is random. In the upper limit of the parameter, the
largest fraction quickly dominates, leading to a winner-takes-all situation. We
recover the real pattern as an intermediate case between these two extremes.
|
1403.2871 | Shape-Based Plagiarism Detection for Flowchart Figures in Texts | cs.CV cs.IR | Plagiarism is a well-known phenomenon in the academic arena. Copying
other people's work is considered a serious offence that needs to be checked. There
are many plagiarism detection systems, such as Turnitin, that have been
developed to provide these checks. Most, if not all, discard the figures and
charts before checking for plagiarism. Discarding the figures and charts
results in loopholes that people can take advantage of. That means people can
plagiarize figures and charts easily without the current plagiarism systems
detecting it. There are very few papers which talk about flowchart plagiarism
detection. Therefore, there is a need to develop a system that will detect
plagiarism in figures and charts. This paper presents a method for detecting
flow chart figure plagiarism based on shape-based image processing and
multimedia retrieval. The method managed to retrieve flowcharts with ranked
similarity according to different matching sets.
|
1403.2877 | A survey of dimensionality reduction techniques | stat.ML cs.LG q-bio.QM | Experimental life sciences like biology or chemistry have seen in recent
decades an explosion of the data available from experiments. Laboratory
instruments become more and more complex and report hundreds or thousands of
measurements for a single experiment, and therefore the statistical methods face
challenging tasks when dealing with such high dimensional data. However, much
of the data is highly redundant and can be efficiently brought down to a much
smaller number of variables without a significant loss of information. The
mathematical procedures making this reduction possible are called
dimensionality reduction techniques; they have widely been developed by fields
like Statistics or Machine Learning, and are currently a hot research topic. In
this review we categorize the plethora of dimension reduction techniques
available and give the mathematical insight behind them.
|
1403.2895 | Indoor 3D Video Monitoring Using Multiple Kinect Depth-Cameras | cs.CV | This article describes the design and development of a system for remote
indoor 3D monitoring using an undetermined number of Microsoft(R) Kinect
sensors. In the proposed client-server system, the Kinect cameras can be
connected to different computers, addressing this way the hardware limitation
of one sensor per USB controller. The reason behind this limitation is the high
bandwidth needed by the sensor, which becomes also an issue for the distributed
system TCP/IP communications. Since traffic volume is too high, 3D data has to
be compressed before it can be sent over the network. The solution consists in
self-coding the Kinect data into RGB images and then using a standard multimedia
codec to compress color maps. Information from different sources is collected
into a central client computer, where point clouds are transformed to
reconstruct the scene in 3D. An algorithm is proposed to conveniently merge the
skeletons detected locally by each Kinect, so that monitoring of people is
robust to self- and inter-user occlusions. Final skeletons are labeled and
trajectories of every joint can be saved for event reconstruction or further
analysis.
|
1403.2902 | A Novel Antenna Selection Scheme for Spatially Correlated Massive MIMO
Uplinks with Imperfect Channel Estimation | cs.IT math.IT | We propose a new antenna selection scheme for a massive MIMO system with a
single user terminal and a base station with a large number of antennas. We
consider a practical scenario where there is a realistic correlation among the
antennas and imperfect channel estimation at the receiver side. The proposed
scheme exploits the sparsity of the channel matrix for the effective selection
of a limited number of antennas. To this end, we compute a sparse channel
matrix by minimising the mean squared error. This optimisation problem is then
solved by the well-known orthogonal matching pursuit algorithm. Widely used
models for spatial correlation among the antennas and channel estimation errors
are considered in this work. Simulation results demonstrate that, when the
impacts of spatial correlation and imperfect channel estimation are introduced, the
proposed scheme can significantly reduce the complexity of the
receiver without degrading the system performance compared to maximum
ratio combining.
|
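The 1403.2902 abstract above reduces antenna selection to a sparse recovery problem solved with orthogonal matching pursuit. The sketch below shows a generic OMP routine on synthetic data; the mapping from the paper's correlated-channel, MSE-based formulation to this toy dictionary is deliberately simplified away, and every name and dimension is an illustrative assumption.

```python
import numpy as np

def omp(A, y, k):
    """Generic orthogonal matching pursuit: greedily pick k columns of A."""
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = -np.inf                     # do not reselect an index
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    return sorted(support)

rng = np.random.default_rng(3)
m, n, k = 32, 128, 5                                # measurements, candidate antennas, antennas kept
A = rng.standard_normal((m, n)) / np.sqrt(m)        # stand-in dictionary / channel-related matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x + 0.01 * rng.standard_normal(m)           # noisy observation

selected = omp(A, y, k)                             # indices playing the role of selected antennas
print("selected:", selected, " true support:", sorted(np.flatnonzero(x)))
```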
1403.2906 | UAV Route Planning for Maximum Target Coverage | cs.RO cs.NE | Utilization of Unmanned Aerial Vehicles (UAVs) in military and civil
operations is getting popular. One of the challenges in effectively tasking
these expensive vehicles is planning the flight routes to monitor the targets.
In this work, we aim to develop an algorithm which produces routing plans for a
limited number of UAVs to cover the maximum number of targets considering their
flight range. The proposed solution for this practical optimization problem is
designed by modifying the Max-Min Ant System (MMAS) algorithm. To evaluate the
success of the proposed method, an alternative approach, based on the Nearest
Neighbour (NN) heuristic, has been developed as well. The results showed the
success of the proposed MMAS method by increasing the number of covered targets
compared to the solution based on the NN heuristic.
|
1403.2912 | Nonuniform Fuchsian codes for noisy channels | cs.IT math.IT | We develop a new transmission scheme for additive white Gaussian noise (AWGN)
channels based on Fuchsian groups from rational quaternion algebras. The
structure of the proposed Fuchsian codes is nonlinear and nonuniform, hence
conventional decoding methods based on linearity and symmetry do not apply.
Previously, only brute force decoding methods with complexity that is linear in
the code size existed for general nonuniform codes. However, the properly
discontinuous character of the action of the Fuchsian groups on the complex
upper half-plane translates into decoding complexity that is logarithmic in the
code size via a recently introduced point reduction algorithm.
|
1403.2923 | Adaptive Representations for Tracking Breaking News on Twitter | cs.IR cs.NE | Twitter is often the most up-to-date source for finding and tracking breaking
news stories. Therefore, there is considerable interest in developing filters
for tweet streams in order to track and summarize stories. This is a
non-trivial text analytics task as tweets are short, and standard retrieval
methods often fail as stories evolve over time. In this paper we examine the
effectiveness of adaptive mechanisms for tracking and summarizing breaking news
stories. We evaluate the effectiveness of these mechanisms on a number of
recent news events for which manually curated timelines are available.
Assessments based on ROUGE metrics indicate that adaptive approaches are
best suited for tracking evolving stories on Twitter.
|
1403.2933 | Efficiently inferring community structure in bipartite networks | cs.SI physics.data-an physics.soc-ph q-bio.QM stat.ML | Bipartite networks are a common type of network data in which there are two
types of vertices, and only vertices of different types can be connected. While
bipartite networks exhibit community structure like their unipartite
counterparts, existing approaches to bipartite community detection have
drawbacks, including implicit parameter choices, loss of information through
one-mode projections, and lack of interpretability. Here we solve the community
detection problem for bipartite networks by formulating a bipartite stochastic
block model, which explicitly includes vertex type information and may be
trivially extended to $k$-partite networks. This bipartite stochastic block
model yields a projection-free and statistically principled method for
community detection that makes clear assumptions and parameter choices and
yields interpretable results. We demonstrate this model's ability to
efficiently and accurately find community structure in synthetic bipartite
networks with known structure and in real-world bipartite networks with unknown
structure, and we characterize its performance in practical contexts.
|
1403.2941 | People Like Us: Mining Scholarly Data for Comparable Researchers | cs.DL cs.SI | We present the problem of finding comparable researchers for any given
researcher. This problem has many motivations. Firstly, know thyself. The
answers of where we stand among research community and who we are most alike
may not be easily found by existing evaluations of ones' research mainly based
on citation counts. Secondly, there are many situations where one needs to find
comparable researchers e.g., for reviewing peers, constructing programming
committees or compiling teams for grants. It is often done through an ad hoc
and informal basis. Utilizing the large scale scholarly data accessible on the
web, we address the problem of automatically finding comparable researchers. We
propose a standard to quantify the quality of research output, via the quality
of publishing venues. We represent a researcher as a sequence of her
publication records, and develop a framework of comparison of researchers by
sequence matching. Several variations of comparisons are considered including
matching by quality of publication venue and research topics, and performing
prefix matching. We evaluate our methods on a large corpus and demonstrate the
effectiveness of our methods through examples. In the end, we identify several
promising directions for further work.
|
1403.2950 | Cancer Prognosis Prediction Using Balanced Stratified Sampling | cs.LG | High accuracy in cancer prediction is important to improve the quality of the
treatment and to improve the rate of survivability of patients. As the data
volume is increasing rapidly in healthcare research, the analytical
challenge doubles. The use of an effective sampling technique in
classification algorithms always yields good prediction accuracy. The SEER
public use cancer database provides various prominent class labels for
prognosis prediction. The main objective of this paper is to find the effect of
sampling techniques in classifying the prognosis variable and propose an ideal
sampling method based on the outcome of the experimentation. In the first phase
of this work the traditional random sampling and stratified sampling techniques
have been used. At the next level the balanced stratified sampling with
variations as per the choice of the prognosis class labels have been tested.
Much of the initial time has been focused on performing the pre-processing of
the SEER data set. The classification model for experimentation has been built
using the breast cancer, respiratory cancer and mixed cancer data sets with
three traditional classifiers namely Decision Tree, Naive Bayes and K-Nearest
Neighbor. The three prognosis factors survival, stage and metastasis have been
used as class labels for experimental comparisons. The results show a steady
increase in the prediction accuracy of the balanced stratified model as the sample
size increases, whereas the traditional approach fluctuates before reaching optimum
results.
|
1403.2958 | An Approach for Normalizing Fuzzy Relational Databases Based on Join
Dependency | cs.DB | Fuzziness in databases is used to denote uncertain or incomplete data.
Relational Databases stress on the nature of the data to be certain. This
certainty based data is used as the basis of the normalization approach
designed for traditional relational databases. But real world data may not
always be certain, thereby making it necessary to design an approach for
normalization that deals with fuzzy data. This paper focuses on the approach
for designing the fifth normal form (5NF) based on join dependencies for fuzzy
data. The basis of join dependency for fuzzy relational databases is derived
from basic relational database concepts. As a join dependency implies a
multivalued dependency by symmetry, the proof of join dependency based
normalization is stated from the perspective of multivalued dependency based
normalization on fuzzy relational databases.
|
1403.2980 | 3D Well-composed Polyhedral Complexes | cs.CV | A binary three-dimensional (3D) image $I$ is well-composed if the boundary
surface of its continuous analog is a 2D manifold. Since 3D images are not
often well-composed, there are several voxel-based methods ("repairing"
algorithms) for turning them into well-composed ones but these methods either
do not guarantee the topological equivalence between the original image and its
corresponding well-composed one or involve sub-sampling the whole image.
In this paper, we present a method to locally "repair" the cubical complex
$Q(I)$ (embedded in $\mathbb{R}^3$) associated to $I$ to obtain a polyhedral
complex $P(I)$ homotopy equivalent to $Q(I)$ such that the boundary of every
connected component of $P(I)$ is a 2D manifold. The reparation is performed via
a new codification system for $P(I)$ under the form of a 3D grayscale image
that allows an efficient access to cells and their faces.
|
1403.3005 | NetworKit: A Tool Suite for Large-scale Complex Network Analysis | cs.SI cs.DC physics.soc-ph | We introduce NetworKit, an open-source software package for analyzing the
structure of large complex networks. Appropriate algorithmic solutions are
required to handle increasingly common large graph data sets containing up to
billions of connections. We describe the methodology applied to develop
scalable solutions to network analysis problems, including techniques like
parallelization, heuristics for computationally expensive problems, efficient
data structures, and modular software architecture. Our goal for the software
is to package results of our algorithm engineering efforts and put them into
the hands of domain experts. NetworKit is implemented as a hybrid combining the
kernels written in C++ with a Python front end, enabling integration into the
Python ecosystem of tested tools for data analysis and scientific computing.
The package provides a wide range of functionality (including common and novel
analytics algorithms and graph generators) and does so via a convenient
interface. In an experimental comparison with related software, NetworKit shows
the best performance on a range of typical analysis tasks.
|
1403.3011 | The influence of persuasion in opinion formation and polarization | physics.soc-ph cs.SI | We present a model that explores the influence of persuasion in a population
of agents with positive and negative opinion orientations. The opinion of each
agent is represented by an integer number $k$ that expresses its level of
agreement on a given issue, from totally against $k=-M$ to totally in favor
$k=M$. Same-orientation agents persuade each other with probability $p$,
becoming more extreme, while opposite-orientation agents become more moderate
as they reach a compromise with probability $q$. The population initially
evolves to (a) a polarized state for $r=p/q>1$, where opinions' distribution is
peaked at the extreme values $k=\pm M$, or (b) a centralized state for $r<1$,
with most opinions around $k=\pm 1$. When $r \gg 1$, polarization lasts for a
time that diverges as $r^M \ln N$, where $N$ is the population's size. Finally,
an extremist consensus ($k=M$ or $-M$) is reached in a time that scales as
$r^{-1}$ for $r \ll 1$.
|
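A minimal Monte Carlo sketch of the persuasion/compromise dynamics summarized in the 1403.3011 abstract above is given below, for a fully mixed population. The exact microscopic update rule (in particular what happens at k = +/-1) is one plausible reading, not necessarily the paper's, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# One plausible reading of the update rules; details (e.g. behaviour at k = +/-1)
# may differ from the paper's exact model.
M, N, p, q, steps = 3, 1000, 0.1, 0.05, 200_000
levels = np.concatenate([np.arange(-M, 0), np.arange(1, M + 1)])   # no neutral state
k = rng.choice(levels, size=N)

def toward_extreme(ki):
    return min(ki + 1, M) if ki > 0 else max(ki - 1, -M)

def toward_center(ki):
    if ki == 1:                          # crossing the centre flips orientation
        return -1
    if ki == -1:
        return 1
    return ki - 1 if ki > 0 else ki + 1

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    if np.sign(k[i]) == np.sign(k[j]):   # same orientation: persuasion with prob. p
        if rng.random() < p:
            k[i], k[j] = toward_extreme(k[i]), toward_extreme(k[j])
    elif rng.random() < q:               # opposite orientation: compromise with prob. q
        k[i], k[j] = toward_center(k[i]), toward_center(k[j])

vals, counts = np.unique(k, return_counts=True)
print(dict(zip(vals.tolist(), counts.tolist())))    # final opinion distribution
```

With r = p/q > 1, as here, the histogram piles up at the extreme values, matching the polarized regime described in the abstract.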
1403.3021 | Image reconstruction from limited range projections using orthogonal
moments | cs.CV math.NA | A set of orthonormal polynomials is proposed for image reconstruction from
projection data. The relationship between the projection moments and image
moments is discussed in detail, and some interesting properties are
demonstrated. Simulation results are provided to validate the method and to
compare its performance with previous works.
|
1403.3022 | Efficient Legendre moment computation for grey level images | cs.CV math.NA | Legendre orthogonal moments have been widely used in the field of image
analysis. Because their computation by a direct method is very time-consuming,
recent efforts have been devoted to the reduction of computational complexity.
Nevertheless, the existing algorithms are mainly focused on binary images. We
propose here a new fast method for computing the Legendre moments, which is not
only suitable for binary images but also for grey-level images. We first set up the
recurrence formula of one-dimensional (1D) Legendre moments by using the
recursive property of Legendre polynomials. As a result, the 1D Legendre
moments of order p, Lp = Lp(0), can be expressed as a linear combination of
Lp-1(1) and Lp-2(0). Based on this relationship, the 1D Legendre moments Lp(0)
is thus obtained from the array of L1(a) and L0(a) where a is an integer number
less than p. To further decrease the computational complexity, an algorithm, in
which no multiplication is required, is used to compute these quantities. The
method is then extended to the calculation of the two-dimensional Legendre
moments Lpq. We show that the proposed method is more efficient than the direct
method.
|
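For reference, the quantity that the 1403.3022 abstract above speeds up can be written down directly: the 2D Legendre moments of a grey-level image mapped onto [-1, 1] x [-1, 1]. The brute-force sketch below only defines that target quantity; the paper's recursive, multiplication-free scheme is not reproduced, and the sampling convention is an assumption.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moments(image, max_order):
    """Brute-force 2D Legendre moments L_pq of a grey-level image mapped onto
    [-1, 1] x [-1, 1]; this only defines the quantity, with none of the
    paper's recursive speed-ups."""
    N, M = image.shape
    x = -1.0 + (2.0 * np.arange(N) + 1.0) / N       # pixel-centre coordinates
    y = -1.0 + (2.0 * np.arange(M) + 1.0) / M
    L = np.zeros((max_order + 1, max_order + 1))
    for p in range(max_order + 1):
        Pp = legval(x, [0] * p + [1])               # Legendre polynomial P_p on the grid
        for q in range(max_order + 1):
            Pq = legval(y, [0] * q + [1])
            norm = (2 * p + 1) * (2 * q + 1) / (N * M)
            L[p, q] = norm * (Pp[:, None] * Pq[None, :] * image).sum()
    return L

img = np.random.default_rng(5).random((32, 32))
print(np.round(legendre_moments(img, 3), 4))        # L[0, 0] is the mean grey level
```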
1403.3036 | Capacity Bounds for a Class of Interference Relay Channels | cs.IT math.IT | The capacity of a class of Interference Relay Channels (IRC) - the Injective
Semideterministic IRC where the relay can only observe one of the sources - is
investigated. We first derive a novel outer bound and two inner bounds which
are based on a careful use of each of the available cooperative strategies
together with the adequate interference decoding technique. The outer bound
extends Telatar and Tse's work while the inner bounds contain several known
results in the literature as special cases. Our main result is the
characterization of the capacity region of the Gaussian class of IRCs studied
within a fixed number of bits per dimension (constant gap). The proof relies on
the use of the different cooperative strategies in specific SNR regimes due to
the complexity of the schemes. As a matter of fact, this issue reveals the
complex nature of the Gaussian IRC where the combination of a single coding
scheme for the Gaussian relay and interference channel may not lead to a good
coding scheme for this problem, even when the focus is only on capacity to
within a constant gap over all possible fading statistics.
|
1403.3057 | Evaluation of Image Segmentation and Filtering With ANN in the Papaya
Leaf | cs.NE cs.CV | Precision agriculture is an area that lacks cheap technology. The refinement
of the production system brings large advantages to the producer, and the use of
images makes monitoring a cheaper methodology. Macronutrient monitoring
can determine the health and vulnerability of the plant at specific stages.
In this paper, a method based on computational intelligence is analyzed for
image segmentation in the identification of symptoms of plant
nutrient deficiency. Artificial neural networks are evaluated for image
segmentation and filtering; several variations of parameters and the insertion of
impulsive noise were evaluated as well. Satisfactory results are achieved with
artificial neural networks for segmentation, even with high noise levels.
|
1403.3060 | Non linear Prediction of Antitubercular Activity Of Oxazolines and
Oxazoles derivatives Making Use of Compact TS-Fuzzy models Through Clustering
with orthogonal least square technique and Fuzzy identification system | cs.CE | The prediction of uncertain and predictive nonlinear systems is an important
and challenging problem. Fuzzy logic models are often a good choice to describe
such systems; however, in many cases these soon become complex. Commonly, too
little effort is put into descriptor selection and into the creation of suitable
local rules. Moreover, commonly no model reduction is applied, although this may
simplify the model by removing redundant data. This paper suggests a combined
method that deals with these issues in order to create compact Takagi-Sugeno
(TS) models that can be effectively used to represent complex predictive
systems. A new fuzzy clustering method is proposed for the identification
of compact TS-fuzzy models. The most relevant consequent variables of the TS
model are chosen by an orthogonal least squares technique based on the
obtained clusters. For the selection of the relevant antecedent (scheduling)
variables, a new method has been developed based on Fisher's interclass
separability criterion. This complete approach is demonstrated by means of
Oxazolines and Oxazoles derivatives as antituberculosis agents for a nonlinear
regression benchmark. The results are compared with results obtained by a
neuro-fuzzy (i.e., ANFIS) algorithm and advanced fuzzy clustering techniques
(i.e., the FMID toolbox).
|
1403.3061 | A Comparative Study of Audio Compression Based on Compressed Sensing and
Sparse Fast Fourier Transform (SFFT): Performance and Challenges | cs.IT math.IT | Audio compression has become one of the basic multimedia technologies.
Choosing an efficient compression scheme that is capable of preserving the
signal quality while providing a high compression ratio is desirable in the
different standards worldwide. In this paper we study the application of two
highly acclaimed sparse signal processing algorithms, namely Compressed
Sensing (CS) and the Sparse Fast Fourier Transform, to audio compression. In
addition, we present a Sparse Fast Fourier Transform (SFFT)-based framework to
compress audio signals. This scheme embeds the K largest frequency indices as
part of the transmitted signal and thus saves on the bandwidth required for
transmission.
|
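The keep-the-K-largest-frequencies idea in the 1403.3061 abstract above can be sketched with a dense FFT as a stand-in for a true sparse FFT; the toy signal, the value of K and the way the indices are "embedded" (here simply stored alongside the values) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "audio": a couple of tones plus noise, real-valued.
fs = 8000
t = np.arange(fs) / fs
x = (np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
     + 0.05 * rng.standard_normal(t.size))

K = 16                                   # number of frequency bins kept
X = np.fft.rfft(x)
idx = np.argsort(np.abs(X))[-K:]         # indices of the K largest-magnitude bins
payload = (idx, X[idx])                  # what would be encoded/transmitted

# Receiver side: rebuild the sparse spectrum and invert.
X_hat = np.zeros_like(X)
X_hat[payload[0]] = payload[1]
x_hat = np.fft.irfft(X_hat, n=x.size)

snr = 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2))
print(f"reconstruction SNR: {snr:.1f} dB, keeping {K}/{X.size} spectrum bins")
```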
1403.3077 | Set-Membership Adaptive Constant Modulus Algorithm with a Generalized
Sidelobe Canceler and Dynamic Bounds for Beamforming | cs.IT math.IT | In this work, we propose an adaptive set-membership constant modulus (SM-CM)
algorithm with a generalized sidelobe canceler (GSC) structure for blind
beamforming. We develop a stochastic gradient (SG) type algorithm based on the
concept of SM filtering for adaptive implementation. The filter weights are
updated only if the constraint cannot be satisfied. In addition, we also
propose an extension of two schemes of time-varying bounds for beamforming with
a GSC structure and incorporate parameter and interference dependence to
characterize the environment, which improves the tracking performance of the
proposed algorithm in dynamic scenarios. A convergence analysis of the proposed
adaptive SM filtering techniques is carried out. Simulation results show that
the proposed adaptive SM-CM-GSC algorithm with dynamic bounds achieves superior
performance to previously reported methods at a reduced update rate.
|
1403.3080 | Statistical Decision Making for Optimal Budget Allocation in Crowd
Labeling | cs.LG math.OC stat.ML | In crowd labeling, a large amount of unlabeled data instances are outsourced
to a crowd of workers. Workers will be paid for each label they provide, but
the labeling requester usually has only a limited budget. Since
data instances have different levels of labeling difficulty and workers have
different reliability, it is desirable to have an optimal policy to allocate
the budget among all instance-worker pairs such that the overall labeling
accuracy is maximized. We consider categorical labeling tasks and formulate the
budget allocation problem as a Bayesian Markov decision process (MDP), which
simultaneously conducts learning and decision making. Using the dynamic
programming (DP) recurrence, one can obtain the optimal allocation policy.
However, DP quickly becomes computationally intractable when the size of the
problem increases. To solve this challenge, we propose a computationally
efficient approximate policy, called optimistic knowledge gradient policy. Our
MDP is a quite general framework, which applies to both pull crowdsourcing
marketplaces with homogeneous workers and push marketplaces with heterogeneous
workers. It can also incorporate the contextual information of instances when
they are available. The experiments on both simulated and real data show that
the proposed policy achieves a higher labeling accuracy than other existing
policies at the same budget level.
|
1403.3083 | A Novel Method to Extract Rocks from Mars Images | cs.CV | In this paper, a novel method is proposed to extract rocks from Martian
surface images by using a data field. It models the interaction between two
pixels of an image in the context of imagery characteristics. First,
foreground rocks are differentiated from background information by binarizing
the image on roughly partitioned images. Second, foreground rocks are grouped into
clusters by locating the centers and edges of clusters in the data field via
hierarchical grids. Third, the target rocks are discovered for the Mars
Exploration Rover (MER) to keep healthy paths. The experiment with images
taken by MER shows the proposed method is practical and promising.
|
1403.3084 | Emerging archetypes in massive artificial societies for literary
purposes using genetic algorithms | cs.AI | The creation of fictional stories is a very complex task that usually implies
a creative process where the author has to combine characters, conflicts and
plots to create an engaging narrative. This work presents a simulated
environment with hundreds of characters that allows the study of coherent and
interesting literary archetypes (or behaviours), plots and sub-plots. We will
use this environment to perform a study about the number of profiles
(parameters that define the personality of a character) needed to create two
emergent scenes of archetypes: "natality control" and "revenge". A Genetic
Algorithm (GA) will be used to find the fittest number of profiles and
parameter configuration that enables the existence of the desired archetypes
(played by the characters without their explicit knowledge). The results show
that parametrizing this complex system is possible and that these kinds of
archetypes can emerge in the given environment.
|
1403.3100 | Engaging with Massive Online Courses | cs.SI physics.soc-ph stat.ML | The Web has enabled one of the most visible recent developments in
education---the deployment of massive open online courses. With their global
reach and often staggering enrollments, MOOCs have the potential to become a
major new mechanism for learning. Despite this early promise, however, MOOCs
are still relatively unexplored and poorly understood.
In a MOOC, each student's complete interaction with the course materials
takes place on the Web, thus providing a record of learner activity of
unprecedented scale and resolution. In this work, we use such trace data to
develop a conceptual framework for understanding how users currently engage
with MOOCs. We develop a taxonomy of individual behavior, examine the different
behavioral patterns of high- and low-achieving students, and investigate how
forum participation relates to other parts of the course.
We also report on a large-scale deployment of badges as incentives for
engagement in a MOOC, including randomized experiments in which the
presentation of badges was varied across sub-populations. We find that making
badges more salient produced increases in forum engagement.
|
1403.3109 | Sparse Recovery with Linear and Nonlinear Observations: Dependent and
Noisy Data | cs.IT cs.LG math.IT math.ST stat.TH | We formulate sparse support recovery as a salient set identification problem
and use information-theoretic analyses to characterize the recovery performance
and sample complexity. We consider a very general model where we are not
restricted to linear models or specific distributions. We state non-asymptotic
bounds on recovery probability and a tight mutual information formula for
sample complexity. We evaluate our bounds for applications such as sparse
linear regression and explicitly characterize effects of correlation or noisy
features on recovery performance. We show improvements upon previous work and
identify gaps between the performance of recovery algorithms and fundamental
information.
|
1403.3115 | Memory Capacity of Neural Networks using a Circulant Weight Matrix | cs.NE | This paper presents results on the memory capacity of a generalized feedback
neural network using a circulant matrix. Children are capable of learning soon
after birth, which indicates that the neural networks of the brain have prior
learnt capacity that is a consequence of the regular structures in the brain's
organization. Motivated by this idea, we consider the capacity of circulant
matrices as weight matrices in a feedback network.
|
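A toy version of the object studied in the 1403.3115 abstract above, a feedback network whose weight matrix is circulant, is sketched below with a crude fixed-point probe of its memory. How the paper actually constructs the kernel and quantifies capacity is not reproduced here; the kernel, size and probe are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(7)
n = 32

# Feedback (Hopfield-style) network whose weight matrix is circulant, generated
# from a single random kernel row; zero self-connections.
W = circulant(rng.standard_normal(n))
np.fill_diagonal(W, 0.0)

def update(x):
    y = np.sign(W @ x)
    y[y == 0] = 1.0
    return y

# Crude capacity probe: how many random bipolar patterns are fixed points of
# the synchronous feedback dynamics x <- sign(W x)?
trials, fixed = 2000, 0
for _ in range(trials):
    x = rng.choice([-1.0, 1.0], size=n)
    fixed += np.array_equal(update(x), x)
print(f"{fixed}/{trials} random probes are fixed points")
```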
1403.3117 | Distributed Estimation using Bayesian Consensus Filtering | math.OC cs.IT math.IT math.PR | We present the Bayesian consensus filter (BCF) for tracking a moving target
using a networked group of sensing agents and achieving consensus on the best
estimate of the probability distributions of the target's states. Our BCF
framework can incorporate nonlinear target dynamic models, heterogeneous
nonlinear measurement models, non-Gaussian uncertainties, and higher-order
moments of the locally estimated posterior probability distribution of the
target's states obtained using Bayesian filters. If the agents combine their
estimated posterior probability distributions using a logarithmic opinion pool,
then the sum of Kullback--Leibler divergences between the consensual
probability distribution and the local posterior probability distributions is
minimized. Rigorous stability and convergence results for the proposed BCF
algorithm with single or multiple consensus loops are presented. Communication
of probability distributions and computational methods for implementing the BCF
algorithm are discussed along with a numerical example.
|
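The fusion step named in the 1403.3117 abstract above, the logarithmic opinion pool, has a compact form for discrete distributions: the pooled posterior is the normalized weighted geometric mean of the agents' posteriors. A minimal sketch with made-up posteriors follows; the Bayesian filtering and networked consensus iterations of the full BCF are omitted.

```python
import numpy as np

def log_opinion_pool(posteriors, weights):
    """Combine agents' discrete posteriors with a logarithmic opinion pool:
    p(x) proportional to prod_i p_i(x)^{w_i}.  `posteriors` is (n_agents, n_states)."""
    logp = weights[:, None] * np.log(posteriors)
    pooled = np.exp(logp.sum(axis=0))
    return pooled / pooled.sum()

# Three agents, five possible target states, uniform consensus weights (made up).
posteriors = np.array([
    [0.05, 0.10, 0.60, 0.20, 0.05],
    [0.10, 0.15, 0.50, 0.20, 0.05],
    [0.02, 0.08, 0.70, 0.15, 0.05],
])
weights = np.full(3, 1.0 / 3.0)
print(np.round(log_opinion_pool(posteriors, weights), 3))
```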
1403.3118 | Parallel WiSARD object tracker: a ram-based tracking system | cs.CV | This paper proposes the Parallel WiSARD Object Tracker (PWOT), a new object
tracker based on the WiSARD weightless neural network that is robust against
quantization errors. Object tracking in video is an important and challenging
task in many applications. Difficulties can arise due to weather conditions,
target trajectory and appearance, occlusions, lighting conditions and noise.
Tracking is a high-level application and requires the object location frame by
frame in real time. This paper proposes a fast hybrid image segmentation
(threshold and edge detection) in the YCbCr color model and a parallel RAM-based
discriminator that improves efficiency when quantization errors occur. The
original WiSARD training algorithm was changed to allow the tracking.
|
1403.3126 | Signaling in sensor networks for sequential detection | cs.SY | Sequential detection problems in sensor networks are considered. The true
state of nature/true hypothesis is modeled as a binary random variable $H$ with
known prior distribution. There are $N$ sensors making noisy observations about
the hypothesis; $\mathcal{N} =\{1,2,\ldots,N\}$ denotes the set of sensors.
Sensor $i$ can receive messages from a subset $\mathcal{P}^i \subset
\mathcal{N}$ of sensors and send a message to a subset $\mathcal{C}^i \subset
\mathcal{N}$. Each sensor is faced with a stopping problem. At each time $t$,
based on the observations it has taken so far and the messages it may have
received, sensor $i$ can decide to stop and communicate a binary decision to
the sensors in $\mathcal{C}^i$, or it can continue taking observations and
receiving messages. After sensor $i$'s binary decision has been sent, it
becomes inactive. Sensors incur operational costs (cost of taking observations,
communication costs etc.) while they are active. In addition, the system incurs
a terminal cost that depends on the true hypothesis $H$, the sensors' binary
decisions and their stopping times. The objective is to determine decision
strategies for all sensors to minimize the total expected cost.
|
1403.3142 | ARSENAL: Automatic Requirements Specification Extraction from Natural
Language | cs.CL cs.SE | Requirements are informal and semi-formal descriptions of the expected
behavior of a complex system from the viewpoints of its stakeholders
(customers, users, operators, designers, and engineers). However, for the
purpose of design, testing, and verification for critical systems, we can
transform requirements into formal models that can be analyzed automatically.
ARSENAL is a framework and methodology for systematically transforming natural
language (NL) requirements into analyzable formal models and logic
specifications. These models can be analyzed for consistency and
implementability. The ARSENAL methodology is specialized to individual domains,
but the approach is general enough to be adapted to new domains.
|
1403.3148 | Heat kernel based community detection | cs.SI cs.DS physics.soc-ph | The heat kernel is a particular type of graph diffusion that, like the
much-used personalized PageRank diffusion, is useful in identifying a community
near a starting seed node. We present the first deterministic, local
algorithm to compute this diffusion and use that algorithm to study the
communities that it produces. Our algorithm is formally a relaxation method for
solving a linear system to estimate the matrix exponential in a degree-weighted
norm. We prove that this algorithm stays localized in a large graph and has a
worst-case constant runtime that depends only on the parameters of the
diffusion, not the size of the graph. Our experiments on real-world networks
indicate that the communities produced by this method have better conductance
than those produced by PageRank, although they take slightly longer to compute
on large graphs. On a real-world community identification task, the heat kernel
communities perform better than those from the PageRank diffusion.
|
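The diffusion used in the 1403.3148 abstract above can be illustrated on a toy two-block graph: seed a single node, apply the heat-kernel operator exp(t(P - I)) built from the random-walk matrix P, and rank nodes by the resulting score. The dense matrix exponential below is a stand-in for the paper's local relaxation algorithm; the graph, the value of t and the omitted sweep-cut step are all illustrative simplifications.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(8)

# Toy graph: two 20-node blocks, dense within and sparse across blocks.
b = 20
prob = np.block([[np.full((b, b), 0.30), np.full((b, b), 0.02)],
                 [np.full((b, b), 0.02), np.full((b, b), 0.30)]])
A = rng.random(prob.shape) < prob
A = np.triu(A, 1)
A = (A | A.T).astype(float)

d = np.maximum(A.sum(axis=1), 1.0)
P = A / d                                   # P[i, j] = A[i, j] / d_j: column-stochastic walk matrix

t, seed = 5.0, 0
s = np.zeros(2 * b)
s[seed] = 1.0
h = expm(t * (P - np.eye(2 * b))) @ s       # heat-kernel diffusion e^{-t} e^{tP} applied to the seed

print("top-10 nodes by heat-kernel score:", np.argsort(-h)[:10])  # mostly the seed's block
```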
1403.3155 | Spectral Unmixing via Data-guided Sparsity | cs.CV | Hyperspectral unmixing, the process of estimating a common set of spectral
bases and their corresponding composite percentages at each pixel, is an
important task for hyperspectral analysis, visualization and understanding.
From an unsupervised learning perspective, this problem is very
challenging---both the spectral bases and their composite percentages are
unknown, making the solution space too large. To reduce the solution space,
many approaches have been proposed by exploiting various priors. In practice,
these priors can easily lead to an unsuitable solution. This is because
they are achieved by applying an identical strength of constraints to all the
factors, which does not hold in practice. To overcome this limitation, we
propose a novel sparsity based method by learning a data-guided map to describe
the individual mixed level of each pixel. Through this data-guided map, the
$\ell_{p}(0<p<1)$ constraint is applied in an adaptive manner. Such
implementation not only meets the practical situation, but also guides the
spectral bases toward the pixels under highly sparse constraint. What's more,
an elegant optimization scheme as well as its convergence proof have been
provided in this paper. Extensive experiments on several datasets also
demonstrate that the data-guided map is feasible, and high quality unmixing
results could be obtained by our method.
|
1403.3159 | Iterative Detection for Compressive Sensing: Turbo CS | cs.IT math.IT | We consider compressive sensing as a source coding method for signal
transmission. We concatenate a convolutional coding system with 1-bit
compressive sensing to obtain a serial concatenated system model for sparse
signal transmission over an AWGN channel. The proposed source/channel decoder,
which we refer to as turbo CS, is robust against channel noise and its signal
reconstruction performance at the receiver increases considerably through
iterations. We show 12 dB improvement with six turbo CS iterations compared to
a non-iterative concatenated source/channel decoder.
|
1403.3185 | Sentiment Analysis by Using Fuzzy Logic | cs.IR cs.CL | How can a product or service be reasonably evaluated by anyone in the
shortest time? A million-dollar question, but it has a simple answer:
sentiment analysis. Sentiment analysis is consumers' reviews of products and
services, which helps both producers and consumers (stakeholders) to take
effective and efficient decisions within the shortest period of time. Producers
can gain better knowledge of their products and services through sentiment
analysis (e.g., positive and negative comments, or consumers' likes and dislikes),
which will help them to know their products' status (e.g., product limitations or
market status). Consumers can gain better knowledge of the products and services
they are interested in through sentiment analysis (e.g., positive and negative
comments, or consumers' likes and dislikes), which will help them to know the
status of the products they desire (e.g., product limitations or market status). For more
precise specification of the sentiment values, fuzzy logic can be introduced.
Therefore, sentiment analysis with the help of fuzzy logic (which deals with
reasoning and gives views closer to the exact sentiment values) will help
producers, consumers or any interested person to take effective
decisions according to their product or service interest.
|
1403.3196 | Secure Beamforming For MIMO Broadcasting With Wireless Information And
Power Transfer | cs.IT math.IT | This paper considers a basic MIMO information-energy (I-E) broadcast system,
where a multi-antenna transmitter transmits information and energy
simultaneously to a multi-antenna information receiver and a dual-functional
multi-antenna energy receiver which is also capable of decoding information.
Due to the open nature of wireless medium and the dual purpose of information
and energy transmission, secure information transmission while ensuring
efficient energy harvesting is a critical issue for such a broadcast system.
Assuming that physical layer security techniques are applied to the system to
ensure secure transmission from the transmitter to the information receiver, we
study beamforming design to maximize the achievable secrecy rate subject to a
total power constraint and an energy harvesting constraint. First, based on
semidefinite relaxation, we propose global optimal solutions to the secrecy
rate maximization (SRM) problem in the single-stream case and a specific
full-stream case where the difference of Gram matrices of the channel matrices
is positive semidefinite. Then, we propose a simple iterative algorithm named
inexact block coordinate descent (IBCD) algorithm to tackle the SRM problem in
the general case with an arbitrary number of streams. We prove that the IBCD
algorithm monotonically converges to a Karush-Kuhn-Tucker (KKT) solution to
the SRM problem. Furthermore, we extend the IBCD algorithm to the joint
beamforming and artificial noise design problem. Finally, simulations are
performed to validate the performance of the proposed beamforming algorithms.
|
1403.3228 | Fractal multi-level organisation of human groups in a virtual world | physics.soc-ph cs.SI | Humans are fundamentally social. They have progressively dominated their
environment by the strength and creativity provided by and within their
grouping. It is well recognised that human groups are highly structured, and
the anthropological literature has loosely classified them according to their
size and function, such as support cliques, sympathy groups, bands, cognitive
groups, tribes, linguistic groups and so on. Recently, combining data on human
grouping patterns in a comprehensive and systematic study, Zhou et al.
identified a quantitative discrete hierarchy of group sizes with a preferred
scaling ratio close to $3$, which was later confirmed for hunter-gatherer
groups and for other mammalian societies. Using high precision large scale
Internet-based social network data, we extend these early findings on a very
large data set. We analyse the organisational structure of a complete,
multi-relational, large social multiplex network of a human society consisting
of about 400,000 players of a massive multiplayer online game for which we
know all about the group memberships of every player. Remarkably, the online
players exhibit the same type of structured hierarchical layers as the
societies studied by anthropologists, where each of these layers is three to
four times the size of the lower layer. Our findings suggest that the
hierarchical organisation of human society is deeply nested in human
psychology.
|
1403.3251 | Numerical Investigations on Hatching Process Strategies for Powder Bed
Based Additive Manufacturing using an Electron Beam | cs.CE | This paper uses numerical simulations to investigate hatching process
strategies for additive manufacturing using an electron beam. The underlying
physical model and the corresponding three dimensional thermal free surface
lattice Boltzmann method of the simulation software are briefly presented. The
simulation software has already been validated on the basis of experiments up
to 1.2 kW beam power by hatching a cuboid with a basic process strategy,
whereby the results are classified into `porous', `good' and `uneven',
depending on their relative density and top surface smoothness. In this paper
we study the limitations of this basic process strategy in terms of higher beam
powers and scan velocities to exploit the future potential of high power
electron beam guns up to 10 kW. Subsequently, we introduce modified process
strategies, which circumvent these restrictions, to build the part as fast as
possible under the restriction of a fully dense part with a smooth top surface.
These process strategies are suitable to reduce the build time and costs,
maximize the beam power usage and therefore use the potential of high power
electron beam guns.
|
1403.3286 | FAUST$^2$: Formal Abstractions of Uncountable-STate STochastic processes | cs.SY | FAUST$^2$ is a software tool that generates formal abstractions of (possibly
non-deterministic) discrete-time Markov processes (dtMP) defined over
uncountable (continuous) state spaces. A dtMP model is specified in MATLAB and
abstracted as a finite-state Markov chain or Markov decision processes. The
abstraction procedure runs in MATLAB and employs parallel computations and fast
manipulations based on vector calculus. The abstract model is formally put in
relationship with the concrete dtMP via a user-defined maximum threshold on the
approximation error introduced by the abstraction procedure. FAUST$^2$ allows
exporting the abstract model to well-known probabilistic model checkers, such
as PRISM or MRMC. Alternatively, it can handle internally the computation of
PCTL properties (e.g. safety or reach-avoid) over the abstract model, and
refine the outcomes over the concrete dtMP via a quantified error that depends
on the abstraction procedure and the given formula. The toolbox is available at
http://sourceforge.net/projects/faust2/
|
1403.3297 | Channel Capacity Analysis of MIMO System in Correlated Nakagami-m Fading
Environment | cs.IT math.IT | We consider Vertical Bell Laboratories Layered Space-Time (V-BLAST) systems
in correlated multiple-input multiple-output (MIMO) Nakagami-m fading channels
with equal power allocated to each transmit antenna, and we assume that
channel state information (CSI) is available only at the receiver. For
practical applications, a study of the V-BLAST MIMO system in a correlated
environment is necessary. In this paper, we present a detailed study of the
channel capacity under correlated and uncorrelated channel conditions and
validate the results with the appropriate mathematical relations.
|
1403.3298 | The role of network embeddedness on the selection of collaboration
partners: An agent-based model with empirical validation | physics.soc-ph cs.SI | We use a data-driven agent-based model to study the core-periphery structure
of two collaboration networks, R&D alliances between firms and co-authorship
relations between scientists. To characterize the network embeddedness of
agents, we introduce a coreness value, obtained from a weighted $k$-core
decomposition. We study the change of these coreness values when collaborations
with newcomers or established agents are formed. Our agent-based model is able
to reproduce the empirical coreness differences of collaboration partners and
to explain why we observe a change in partner selection for agents with high
network embeddedness.
|
1403.3300 | Limiting Behavior of LQ Deterministic Infinite Horizon Nash Games with
Symmetric Players as the Number of Players goes to Infinity | cs.GT cs.SY math.OC | A Linear Quadratic Deterministic Continuous Time Game with many symmetric
players is considered and the Linear Feedback Nash strategies are studied as
the number of players goes to infinity. We show that under some conditions the
limit of the solutions exists and can be used to approximate the case with a
finite but large number of players. It is shown that in the limit each player
acts as if he were faced with one player only, who represents the average
behavior of the others.
|
1403.3304 | A Spatial Data Model for Moving Object Databases | cs.DB | Moving Object Databases will play a significant role in Geospatial Information
Systems, as they allow users to model continuous movements of entities in the
database and to perform spatio-temporal analysis. Representing and querying
moving objects requires an algebra with a comprehensive framework of User
Defined Types together with a set of functions on those types. Moreover, in
real-world applications moving objects move along constrained environments
such as transportation networks, so an additional algebra for modeling
networks is needed as well. These algebras can be inserted in any data model
if their designs are based on available standards such as those of the Open
Geospatial Consortium, which provide a common model for existing DBMSs. In this paper, we
focus on extending a spatial data model for constrained moving objects. Static
and moving geometries in our model are based on Open Geospatial Consortium
standards. We also extend Structured Query Language for retrieving, querying,
and manipulating spatio-temporal data related to moving objects as a simple and
expressive query language. Finally as a proof of concept, we implement a
generator to generate data for moving objects constrained by a transportation
network. Such a generator primarily aims at traffic planning applications.
|
1403.3305 | Noise Facilitation in Associative Memories of Exponential Capacity | cs.NE | Recent advances in associative memory design through structured pattern sets
and graph-based inference algorithms have allowed reliable learning and recall
of an exponential number of patterns. Although these designs correct external
errors in recall, they assume neurons that compute noiselessly, in contrast to
the highly variable neurons in brain regions thought to operate associatively
such as hippocampus and olfactory cortex.
Here we consider associative memories with noisy internal computations and
analytically characterize performance. As long as the internal noise level is
below a specified threshold, the error probability in the recall phase can be
made exceedingly small. More surprisingly, we show that internal noise actually
improves the performance of the recall phase while the pattern retrieval
capacity remains intact, i.e., the number of stored patterns does not reduce
with noise (up to a threshold). Computational experiments lend additional
support to our theoretical analysis. This work suggests a functional benefit to
noisy neurons in biological neuronal networks.
|
1403.3312 | Optimal number of users in Co-operative spectrum sensing in WRAN using
Cyclo-Stationary Detector | cs.NI cs.IT math.IT | Cognitive radio allows unlicensed users to access licensed frequency bands
through dynamic spectrum access so as to reduce spectrum scarcity. This
requires intelligent spectrum sensing techniques. This paper investigates the
use of cyclo-stationary detector and performance evaluation for Digital Video
Broadcast-Terrestrial (DVB-T) signals. Generally, DVB-T is specified in IEEE
802.22 standard in the VHF and UHF TV broadcasting spectrum. Simulation results
show that implementing co-operative spectrum sensing helps in better utilization
of resources. The paper further proposes to find the optimal number of users in
a given scenario so as to optimize the detection probability, and makes use of
the particle swarm optimization (PSO) technique to find an optimum value of the
threshold.
|
1403.3320 | Numerical Approaches for Linear Left-invariant Diffusions on SE(2),
their Comparison to Exact Solutions, and their Applications in Retinal
Imaging | math.NA cs.CV | Left-invariant PDE-evolutions on the roto-translation group $SE(2)$ (and
their resolvent equations) have been widely studied in the fields of cortical
modeling and image analysis. They include hypo-elliptic diffusion (for contour
enhancement) proposed by Citti & Sarti, and Petitot, and they include the
direction process (for contour completion) proposed by Mumford. This paper
presents a thorough study and comparison of the many numerical approaches,
which, remarkably, is missing in the literature. Existing numerical approaches
can be classified into 3 categories: Finite difference methods, Fourier based
methods (equivalent to $SE(2)$-Fourier methods), and stochastic methods (Monte
Carlo simulations). There are also 3 types of exact solutions to the
PDE-evolutions that were derived explicitly (in the spatial Fourier domain) in
previous works by Duits and van Almsick in 2005. Here we provide an overview of
these 3 types of exact solutions and explain how they relate to each of the 3
numerical approaches. We compute the relative errors of all numerical
approaches with respect to the exact solutions; the Fourier based methods show
the best performance, with the smallest relative errors. We also provide an
improvement of Mathematica
algorithms for evaluating Mathieu-functions, crucial in implementations of the
exact solutions. Furthermore, we include an asymptotical analysis of the
singularities within the kernels and we propose a probabilistic extension of
underlying stochastic processes that overcomes the singular behavior in the
origin of time-integrated kernels. Finally, we show retinal imaging
applications of combining left-invariant PDE-evolutions with invertible
orientation scores.
|
1403.3339 | Capacity of a Nonlinear Optical Channel with Finite Memory | cs.IT math.IT physics.optics | The channel capacity of a nonlinear, dispersive fiber-optic link is
revisited. To this end, the popular Gaussian noise (GN) model is extended with
a parameter to account for the finite memory of realistic fiber channels. This
finite-memory model is harder to analyze mathematically but, in contrast to
previous models, it is valid also for nonstationary or heavy-tailed input
signals. For uncoded transmission and standard modulation formats, the new
model gives the same results as the regular GN model when the memory of the
channel is about 10 symbols or more. These results confirm previous results
that the GN model is accurate for uncoded transmission. However, when coding is
considered, the results obtained using the finite-memory model are very
different from those obtained by previous models, even when the channel memory
is large. In particular, the peaky behavior of the channel capacity, which has
been reported for numerous nonlinear channel models, appears to be an artifact
of applying models derived for independent input in a coded (i.e., dependent)
scenario.
|
1403.3342 | The Potential Benefits of Filtering Versus Hyper-Parameter Optimization | stat.ML cs.LG | The quality of an induced model by a learning algorithm is dependent on the
quality of the training data and the hyper-parameters supplied to the learning
algorithm. Prior work has shown that improving the quality of the training data
(i.e., by removing low quality instances) or tuning the learning algorithm
hyper-parameters can significantly improve the quality of an induced model. A
comparison of the two methods is lacking though. In this paper, we estimate and
compare the potential benefits of filtering and hyper-parameter optimization.
Estimating the potential benefit gives an overly optimistic estimate but also
empirically demonstrates an approximation of the maximum potential benefit of
each method. We find that, while both significantly improve the induced model,
improving the quality of the training set has a greater potential effect than
hyper-parameter optimization.
|
1403.3344 | Collective attention in the age of (mis)information | cs.SI cs.CY physics.soc-ph | In this work we study, on a sample of 2.3 million individuals, how Facebook
users consumed different information at the edge of political discussion and
news during the last Italian electoral competition. Pages are categorized,
according to their topics and the communities of interests they pertain to, in
a) alternative information sources (diffusing topics that are neglected by
science and mainstream media); b) online political activism; and c) mainstream
media. We show that attention patterns are similar despite the different
qualitative nature of the information, meaning that unsubstantiated claims
(mainly conspiracy theories) reverberate for as long as other information.
Finally, we categorize users according to their interaction patterns among the
different topics and measure how a sample of this social ecosystem (1279 users)
responded to the injection of 2788 false information posts. Our analysis
reveals that users who interact predominantly with alternative
information sources (i.e. more exposed to unsubstantiated claims) are more
prone to interact with false claims.
|
1403.3351 | Semantic Unification: A sheaf theoretic approach to natural language | cs.CL | Language is contextual and sheaf theory provides a high level mathematical
framework to model contextuality. We show how sheaf theory can model the
contextual nature of natural language and how gluing can be used to provide a
global semantics for a discourse by putting together the local logical
semantics of each sentence within the discourse. We introduce a presheaf
structure corresponding to a basic form of Discourse Representation Structures.
Within this setting, we formulate a notion of semantic unification --- gluing
meanings of parts of a discourse into a coherent whole --- as a form of
sheaf-theoretic gluing. We illustrate this idea with a number of examples where
it can be used to represent resolutions of anaphoric references. We also discuss
multivalued gluing, described using a distributions functor, which can be used
to represent situations where multiple gluings are possible, and where we may
need to rank them using quantitative measures.
Dedicated to Jim Lambek on the occasion of his 90th birthday.
|
1403.3369 | Controlling Recurrent Neural Networks by Conceptors | cs.NE | The human brain is a dynamical system whose extremely complex sensor-driven
neural processes give rise to conceptual, logical cognition. Understanding the
interplay between nonlinear neural dynamics and concept-level cognition remains
a major scientific challenge. Here I propose a mechanism of neurodynamical
organization, called conceptors, which unites nonlinear dynamics with basic
principles of conceptual abstraction and logic. It becomes possible to learn,
store, abstract, focus, morph, generalize, de-noise and recognize a large
number of dynamical patterns within a single neural system; novel patterns can
be added without interfering with previously acquired ones; neural noise is
automatically filtered. Conceptors help explain how conceptual-level
information processing emerges naturally and robustly in neural systems, and
remove a number of roadblocks in the theory and applications of recurrent
neural networks.
|
1403.3371 | Spectral Correlation Hub Screening of Multivariate Time Series | stat.OT cs.LG stat.AP | This chapter discusses correlation analysis of stationary multivariate
Gaussian time series in the spectral or Fourier domain. The goal is to identify
the hub time series, i.e., those that are highly correlated with a specified
number of other time series. We show that Fourier components of the time series
at different frequencies are asymptotically statistically independent. This
property permits independent correlation analysis at each frequency,
alleviating the computational and statistical challenges of high-dimensional
time series. To detect correlation hubs at each frequency, an existing
correlation screening method is extended to the complex numbers to accommodate
complex-valued Fourier components. We characterize the number of hub
discoveries at specified correlation and degree thresholds in the regime of
increasing dimension and fixed sample size. The theory specifies appropriate
thresholds to apply to sample correlation matrices to detect hubs and also
allows statistical significance to be attributed to hub discoveries. Numerical
results illustrate the accuracy of the theory and the usefulness of the
proposed spectral framework.
|
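As an editorial illustration of the per-frequency correlation screening idea summarized in entry 1403.3371 above, the following minimal Python sketch splits each series into blocks, treats the block DFTs as approximately independent complex samples at every frequency, and flags series whose correlation magnitude with at least `delta` other series exceeds `rho`. The block count and both thresholds are illustrative assumptions; the paper derives principled thresholds and attributes statistical significance to the discoveries.

```python
import numpy as np

def spectral_hub_screen(X, n_blocks=8, rho=0.9, delta=2):
    """Illustrative per-frequency correlation hub screening.

    X is an (n_samples x p_series) real array. Each series is split into
    n_blocks blocks whose DFTs serve as approximately independent
    complex-valued samples at every frequency bin.
    """
    n, p = X.shape
    L = n // n_blocks
    blocks = X[:L * n_blocks].reshape(n_blocks, L, p)
    F = np.fft.rfft(blocks - blocks.mean(axis=1, keepdims=True), axis=1)
    hubs = {}
    for k in range(F.shape[1]):
        Z = F[:, k, :]                                       # n_blocks x p complex samples
        Zc = Z - Z.mean(axis=0)
        C = Zc.conj().T @ Zc                                 # p x p complex covariance
        d = np.sqrt(np.real(np.diag(C)))
        R = np.abs(C) / np.maximum(np.outer(d, d), 1e-12)    # correlation magnitudes
        np.fill_diagonal(R, 0.0)
        degree = (R > rho).sum(axis=1)                       # number of strong partners
        hubs[k] = np.where(degree >= delta)[0].tolist()
    return hubs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((512, 20))
    X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(512)       # one correlated pair
    print("hubs at frequency bin 5:", spectral_hub_screen(X, rho=0.95, delta=1)[5])
```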
1403.3376 | Massive MIMO performance evaluation based on measured propagation data | cs.IT math.IT | Massive MIMO, also known as very-large MIMO or large-scale antenna systems,
is a new technique that potentially can offer large network capacities in
multi-user scenarios. With a massive MIMO system, we consider the case where a
base station equipped with a large number of antenna elements simultaneously
serves multiple single-antenna users in the same time-frequency resource. So
far, investigations are mostly based on theoretical channels with independent
and identically distributed (i.i.d.) complex Gaussian coefficients, i.e.,
i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in
channels measured in real propagation environments. Channel measurements were
performed at 2.6 GHz using a virtual uniform linear array (ULA) which has a
physically large aperture, and a practical uniform cylindrical array (UCA)
which is more compact in size, both having 128 antenna ports. Based on
measurement data, we illustrate channel behavior of massive MIMO in three
representative propagation conditions, and evaluate the corresponding
performance. The investigation shows that the measured channels, for both array
types, allow us to achieve performance close to that in i.i.d. Rayleigh
channels. It is concluded that in real propagation environments we have
characteristics that can allow for efficient use of massive MIMO, i.e., the
theoretical advantages of this new technology can also be harvested in real
channels.
|
1403.3378 | Box Drawings for Learning with Imbalanced Data | stat.ML cs.LG | The vast majority of real world classification problems are imbalanced,
meaning there are far fewer data from the class of interest (the positive
class) than from other classes. We propose two machine learning algorithms to
handle highly imbalanced classification problems. The classifiers constructed
by both methods are created as unions of parallel axis rectangles around the
positive examples, and thus have the benefit of being interpretable. The first
algorithm uses mixed integer programming to optimize a weighted balance between
positive and negative class accuracies. Regularization is introduced to improve
generalization performance. The second method uses an approximation in order to
assist with scalability. Specifically, it follows a \textit{characterize then
discriminate} approach, where the positive class is characterized first by
boxes, and then each box boundary becomes a separate discriminative classifier.
This method has the computational advantages that it can be easily
parallelized, and considers only the relevant regions of feature space.
|
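To make the "characterize then discriminate" idea of entry 1403.3378 above concrete, here is a minimal, simplified sketch: the positive class is characterized by axis-parallel boxes obtained from per-cluster coordinate-wise extremes, and a point is predicted positive if it falls inside the union of boxes. The clustering step and fixed margin are illustrative assumptions; the paper tunes each box boundary discriminatively (via mixed integer programming or per-boundary classifiers), which is omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_boxes(X_pos, n_boxes=3, margin=0.0):
    """Characterize positives by axis-parallel boxes: cluster them and take the
    per-cluster coordinate-wise min/max, expanded by an optional margin."""
    labels = KMeans(n_clusters=n_boxes, n_init=10).fit_predict(X_pos)
    boxes = []
    for c in range(n_boxes):
        pts = X_pos[labels == c]
        boxes.append((pts.min(axis=0) - margin, pts.max(axis=0) + margin))
    return boxes

def predict(boxes, X):
    """A point is labeled positive iff it lies inside the union of boxes."""
    inside = np.zeros(len(X), dtype=bool)
    for lo, hi in boxes:
        inside |= np.all((X >= lo) & (X <= hi), axis=1)
    return inside

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_pos = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
    boxes = fit_boxes(X_pos, n_boxes=2, margin=0.1)
    print("fraction of positives recalled:", predict(boxes, X_pos).mean())
```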
1403.3427 | Explicit Matrices with the Restricted Isometry Property: Breaking the
Square-Root Bottleneck | math.FA cs.IT math.CO math.IT | Matrices with the restricted isometry property (RIP) are of particular
interest in compressed sensing. To date, the best known RIP matrices are
constructed using random processes, while explicit constructions are notorious
for performing at the "square-root bottleneck," i.e., they only accept sparsity
levels on the order of the square root of the number of measurements. The only
known explicit matrix which surpasses this bottleneck was constructed by
Bourgain, Dilworth, Ford, Konyagin and Kutzarova. This chapter provides three
contributions to further the groundbreaking work of Bourgain et al.: (i) we
develop an intuition for their matrix construction and underlying proof
techniques; (ii) we prove a generalized version of their main result; and (iii)
we apply this more general result to maximize the extent to which their matrix
construction surpasses the square-root bottleneck.
|
1403.3434 | A New Event-Driven Cooperative Receding Horizon Controller for
Multi-agent Systems in Uncertain Environments | cs.SY math.OC | In previous work, a Cooperative Receding Horizon (CRH) controller was
developed for solving cooperative multi-agent problems in uncertain
environments. In this paper, we overcome several limitations of this
controller, including potential instabilities in the agent trajectories and
poor performance due to inaccurate estimation of a reward-to-go function. We
propose an event-driven CRH controller to solve the maximum reward collection
problem (MRCP) where multiple agents cooperate to maximize the total reward
collected from a set of stationary targets in a given mission space. Rewards
are non-increasing functions of time and the environment is uncertain with new
targets detected by agents at random time instants. The controller sequentially
solves optimization problems over a planning horizon and executes the control
for a shorter action horizon, where both are defined by certain events
associated with new information becoming available. In contrast to the earlier
CRH controller, we reduce the originally infinite-dimensional feasible control
set to a finite set at each time step. We prove some properties of this new
controller and include simulation results showing its improved performance
compared to the original one.
|
1403.3438 | Neighborhood Selection for Thresholding-based Subspace Clustering | stat.ML cs.IT math.IT | Subspace clustering refers to the problem of clustering high-dimensional data
points into a union of low-dimensional linear subspaces, where the number of
subspaces, their dimensions and orientations are all unknown. In this paper, we
propose a variation of the recently introduced thresholding-based subspace
clustering (TSC) algorithm, which applies spectral clustering to an adjacency
matrix constructed from the nearest neighbors of each data point with respect
to the spherical distance measure. The new element resides in an individual and
data-driven choice of the number of nearest neighbors. Previous performance
results for TSC, as well as for other subspace clustering algorithms based on
spectral clustering, come in terms of an intermediate performance measure,
which does not address the clustering error directly. Our main analytical
contribution is a performance analysis of the modified TSC algorithm (as well
as the original TSC algorithm) in terms of the clustering error directly.
|
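For entry 1403.3438 above, a minimal sketch of the underlying thresholding-based subspace clustering (TSC) pipeline: normalize points to the unit sphere, keep each point's q largest absolute inner products as its neighbors, and run spectral clustering on the resulting adjacency matrix. The fixed, global q used here is an illustrative assumption; the paper's contribution is precisely a data-driven, per-point choice of the number of neighbors.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def tsc(X, n_clusters, q=5):
    """Thresholding-based subspace clustering sketch with a fixed neighbor count q."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # project points onto the sphere
    C = np.abs(Xn @ Xn.T)                               # |cosine| similarities
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    for i in range(len(C)):
        nbrs = np.argsort(C[i])[-q:]                    # q nearest neighbors in spherical distance
        A[i, nbrs] = C[i, nbrs]
    A = np.maximum(A, A.T)                              # symmetrize the adjacency matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    U1, U2 = rng.standard_normal((6, 2)), rng.standard_normal((6, 2))
    X = np.vstack([(U1 @ rng.standard_normal((2, 40))).T,   # 40 points near subspace 1
                   (U2 @ rng.standard_normal((2, 40))).T])  # 40 points near subspace 2
    print(tsc(X, n_clusters=2, q=8))
```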
1403.3448 | Coloring Large Complex Networks | cs.SI cs.DS math.CO physics.soc-ph | Given a large social or information network, how can we partition the
vertices into sets (i.e., colors) such that no two vertices linked by an edge
are in the same set, while minimizing the number of sets used? Despite the
obvious practical importance of graph coloring, existing works have not
systematically investigated or designed methods for large complex networks. In
this work, we develop a unified framework for coloring large complex networks
that consists of two main coloring variants that effectively balance the
tradeoff between accuracy and efficiency. Using this framework as a fundamental
basis, we propose coloring methods designed for the scale and structure of
complex networks. In particular, the methods leverage triangles,
triangle-cores, and other egonet properties and their combinations. We
systematically compare the proposed methods across a wide range of networks
(e.g., social, web, biological networks) and find a significant improvement
over previous approaches in nearly all cases. Additionally, the solutions
obtained are nearly optimal and sometimes provably optimal for certain classes
of graphs (e.g., collaboration networks). We also propose a parallel algorithm
for the problem of coloring neighborhood subgraphs and make several key
observations. Overall, the coloring methods are shown to be (i) accurate with
solutions close to optimal, (ii) fast and scalable for large networks, and
(iii) flexible for use in a variety of applications.
|
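As a small illustration of the structure-aware heuristics described in entry 1403.3448 above, the sketch below runs a greedy coloring that visits vertices in decreasing triangle-count order. The specific ordering is an illustrative assumption, not the paper's exact method, which combines several egonet properties and a parallel neighborhood-coloring variant.

```python
import networkx as nx

def greedy_color_by_triangles(G):
    """Greedy coloring with vertices ordered by triangle count, then degree."""
    tri = nx.triangles(G)
    order = sorted(G.nodes(), key=lambda v: (tri[v], G.degree(v)), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in G.neighbors(v) if u in color}
        c = 0
        while c in used:          # smallest color unused by already-colored neighbors
            c += 1
        color[v] = c
    return color

if __name__ == "__main__":
    G = nx.karate_club_graph()
    coloring = greedy_color_by_triangles(G)
    print("colors used:", 1 + max(coloring.values()))
```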
1403.3460 | Scalable and Robust Construction of Topical Hierarchies | cs.LG cs.CL cs.DB cs.IR | Automated generation of high-quality topical hierarchies for a text
collection is a dream problem in knowledge engineering with many valuable
applications. In this paper a scalable and robust algorithm is proposed for
constructing a hierarchy of topics from a text collection. We divide and
conquer the problem using a top-down recursive framework, based on a tensor
orthogonal decomposition technique. We solve a critical challenge to perform
scalable inference for our newly designed hierarchical topic model. Experiments
with various real-world datasets illustrate its ability to generate robust,
high-quality hierarchies efficiently. Our method reduces the time of
construction by several orders of magnitude, and its robust feature renders it
possible for users to interactively revise the hierarchy.
|
1403.3461 | Aspects of Favorable Propagation in Massive MIMO | cs.IT math.IT | Favorable propagation, defined as mutual orthogonality among the
vector-valued channels to the terminals, is one of the key properties of the
radio channel that is exploited in Massive MIMO. However, there has been little
work that studies this topic in detail. In this paper, we first show that
favorable propagation offers the most desirable scenario in terms of maximizing
the sum-capacity. One useful proxy for whether propagation is favorable or not
is the channel condition number. However, this proxy is not good for the case
where the norms of the channel vectors may not be equal. For this case, to
evaluate how favorable the propagation offered by the channel is, we propose a
``distance from favorable propagation'' measure, which is the gap between the
sum-capacity and the maximum capacity obtained under favorable propagation.
Secondly, we examine how favorable the channels can be for two extreme
scenarios: i.i.d. Rayleigh fading and uniform random line-of-sight (UR-LoS).
Both environments offer (nearly) favorable propagation. Furthermore, to analyze
the UR-LoS model, we propose an urns-and-balls model. This model is simple and
explains the singular value spread characteristic of the UR-LoS model well.
|
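For entry 1403.3461 above, the "distance from favorable propagation" can be illustrated directly: it is the gap between the sum-capacity bound attained when the user channels are mutually orthogonal and the actual sum-capacity of the channel matrix. The sketch below assumes an uplink with equal-power users at per-user SNR rho; these modeling choices are illustrative simplifications.

```python
import numpy as np

def distance_from_favorable_propagation(G, rho=1.0):
    """Gap (bits/channel use) between the favorable-propagation capacity bound
    and the actual sum-capacity of an M x K channel matrix G (K single-antenna
    users, M base-station antennas), at per-user SNR rho."""
    K = G.shape[1]
    gram = G.conj().T @ G
    _, logdet = np.linalg.slogdet(np.eye(K) + rho * gram)
    sum_capacity = logdet / np.log(2)
    favorable = np.sum(np.log2(1.0 + rho * np.linalg.norm(G, axis=0) ** 2))
    return favorable - sum_capacity        # non-negative by Hadamard's inequality

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, K = 100, 10
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    print("i.i.d. Rayleigh gap:", distance_from_favorable_propagation(H))
```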
1403.3465 | A Survey of Algorithms and Analysis for Adaptive Online Learning | cs.LG | We present tools for the analysis of Follow-The-Regularized-Leader (FTRL),
Dual Averaging, and Mirror Descent algorithms when the regularizer
(equivalently, prox-function or learning rate schedule) is chosen adaptively
based on the data. Adaptivity can be used to prove regret bounds that hold on
every round, and also allows for data-dependent regret bounds as in
AdaGrad-style algorithms (e.g., Online Gradient Descent with adaptive
per-coordinate learning rates). We present results from a large number of prior
works in a unified manner, using a modular and tight analysis that isolates the
key arguments in easily re-usable lemmas. This approach strengthens previously
known FTRL analysis techniques to produce bounds as tight as those achieved by
potential functions or primal-dual analysis. Further, we prove a general and
exact equivalence between an arbitrary adaptive Mirror Descent algorithm and a
corresponding FTRL update, which allows us to analyze any Mirror Descent
algorithm in the same framework. The key to bridging the gap between Dual
Averaging and Mirror Descent algorithms lies in an analysis of the
FTRL-Proximal algorithm family. Our regret bounds are proved in the most
general form, holding for arbitrary norms and non-smooth regularizers with
time-varying weight.
|
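To ground the adaptive-regularizer idea of entry 1403.3465 above, here is a minimal sketch of one of its simplest instances: online gradient descent with AdaGrad-style per-coordinate learning rates, applied to squared loss. The learning-rate constant and the loss are illustrative assumptions; the survey itself treats the general FTRL, Dual Averaging and Mirror Descent family.

```python
import numpy as np

class AdaGradPerCoordinate:
    """Online gradient descent with adaptive per-coordinate learning rates."""

    def __init__(self, dim, eta=0.5, eps=1e-8):
        self.w = np.zeros(dim)
        self.g2 = np.zeros(dim)          # accumulated squared gradients per coordinate
        self.eta, self.eps = eta, eps

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        g = (self.predict(x) - y) * x                            # squared-loss gradient
        self.g2 += g ** 2
        self.w -= self.eta * g / (np.sqrt(self.g2) + self.eps)   # per-coordinate step

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    learner = AdaGradPerCoordinate(dim=5)
    w_star = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    for _ in range(500):
        x = rng.standard_normal(5)
        learner.update(x, float(w_star @ x))
    print(np.round(learner.w, 2))        # should approach w_star
```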
1403.3495 | Analyzing Large Biological Datasets with an Improved Algorithm for MIC | cs.DB cs.CE | A computational framework that utilizes traditional similarity measures for
mining significant relationships in biological annotations was recently
proposed by Tatiana V. Karpinets et al. [2]. In this paper, an improved
approximation algorithm for MIC (the maximal information coefficient), named
IAMIC, is suggested to improve this framework for discovering hidden
regularities between biological annotations. IAMIC is an enhanced algorithm
for approximating MIC, a novel similarity coefficient with both generality and
equitability, which makes it particularly appropriate for data exploration. We
show that IAMIC is also applicable for identifying the associations between
biological annotations.
|
1403.3515 | Concept Trees: Building Dynamic Concepts from Semi-Structured Data using
Nature-Inspired Methods | cs.IR | This paper describes a method for creating structure from heterogeneous
sources, as part of an information database, or more specifically, a 'concept
base'. Structures called 'concept trees' can grow from the semi-structured
sources when consistent sequences of concepts are presented. They might be
considered to be dynamic databases, possibly a variation on the distributed
Agent-Based or Cellular Automata models, or even related to Markov models.
Semantic comparison of text is required, but the trees can be built more, from
automatic knowledge and statistical feedback. This reduced model might also be
attractive for security or privacy reasons, as not all of the potential data
gets saved. The construction process maintains the key requirement of
generality, allowing it to be used as part of a generic framework. The nature
of the method also means that some level of optimisation or normalisation of
the information will occur. This gives comparisons with databases or
knowledge-bases, but a database system would firstly model its environment or
datasets and then populate the database with instance values. The concept base
deals with a more uncertain environment and therefore cannot fully model it
beforehand. The model itself therefore evolves over time. Similar to databases,
it also needs a good indexing system, where the construction process provides
memory and indexing structures. These allow for more complex concepts to be
automatically created, stored and retrieved, possibly as part of a more
cognitive model. There are also some arguments, or more abstract ideas, for
merging physical-world laws into these automatic processes.
|
1403.3522 | An inertial forward-backward algorithm for monotone inclusions | cs.CV cs.NA math.NA math.OC | In this paper, we propose an inertial forward backward splitting algorithm to
compute a zero of the sum of two monotone operators, with one of the two
operators being co-coercive. The algorithm is inspired by the accelerated
gradient method of Nesterov, but can be applied to a much larger class of
problems including convex-concave saddle point problems and general monotone
inclusions. We prove convergence of the algorithm in a Hilbert space setting
and show that several recently proposed first-order methods can be obtained as
special cases of the general algorithm. Numerical results show that the
proposed algorithm converges faster than existing methods, while keeping the
computational cost of each iteration basically unchanged.
|
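For entry 1403.3522 above, a minimal sketch of the generic inertial forward-backward update applied to a lasso toy problem (smooth data-fit term plus l1 penalty, whose proximal map is soft-thresholding). The constant inertia parameter and step size used here are illustrative assumptions, not the paper's exact parameter rules.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the 'backward' step for the l1 term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_forward_backward(A, b, tau, alpha=0.5, n_iter=300):
    """Minimize 0.5*||Ax - b||^2 + tau*||x||_1 by inertial forward-backward splitting."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth gradient
    step = 1.0 / L
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        y = x + alpha * (x - x_prev)           # inertial extrapolation
        grad = A.T @ (A @ y - b)               # forward (gradient) step
        x_prev, x = x, soft_threshold(y - step * grad, step * tau)  # backward (prox) step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.round(inertial_forward_backward(A, b, tau=0.5)[:8], 2))
```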
1403.3533 | Quantum linear network coding as one-way quantum computation | quant-ph cs.IT math.IT | Network coding is a technique to maximize communication rates within a
network, in communication protocols for simultaneous multi-party transmission
of information. Linear network codes are examples of such protocols in which
the local computations performed at the nodes in the network are limited to
linear transformations of their input data (represented as elements of a ring,
such as the integers modulo 2). The quantum linear network coding protocols of
Kobayashi et al [arXiv:0908.1457 and arXiv:1012.4583] coherently simulate
classical linear network codes, using supplemental classical communication. We
demonstrate that these protocols correspond in a natural way to
measurement-based quantum computations with graph states over qudits
[arXiv:quant-ph/0301052, arXiv:quant-ph/0603226, and arXiv:0704.1263] having a
structure directly related to the network.
|
1403.3568 | Modeling Social Dynamics in a Collaborative Environment | physics.soc-ph cs.CY cs.SI physics.data-an | Wikipedia is a prime example of today's value production in a collaborative
environment. Using this example, we model the emergence, persistence and
resolution of severe conflicts during collaboration by coupling opinion
formation with article editing in a bounded confidence dynamics. The complex
social behavior involved in editing articles is implemented as a minimal model
with two basic elements; (i) individuals interact directly to share information
and convince each other, and (ii) they edit a common medium to establish their
own opinions. Opinions of the editors and that represented by the article are
characterised by a scalar variable. When the pool of editors is fixed, three
regimes can be distinguished: (a) a stable mainstream article opinion is
continuously contested by editors with extremist views and there is slow
convergence towards consensus, (b) the article oscillates between editors with
extremist views, reaching consensus relatively fast at one of the extremes, and
(c) the extremist editors are converted very fast to the mainstream opinion and
the article has an erratic evolution. When editors are renewed with a certain
rate, a dynamical transition occurs between different kinds of edit wars, which
qualitatively reflect the dynamics of conflicts as observed in real Wikipedia
data.
|
1403.3579 | On Projection-Based Model Reduction of Biochemical Networks-- Part I:
The Deterministic Case | math.OC cs.SY | This paper addresses the problem of model reduction for dynamical system
models that describe biochemical reaction networks. Inherent in such models are
properties such as stability, positivity and network structure. Ideally these
properties should be preserved by model reduction procedures, although
traditional projection based approaches struggle to do this. We propose a
projection based model reduction algorithm which uses generalised block
diagonal Gramians to preserve structure and positivity. Two algorithms are
presented: one provides more accurate reduced order models, while the second
provides reduced order models that are easier to simulate. The results are
illustrated through
numerical examples.
|
1403.3583 | Threshold Analysis of Non-Binary Spatially-Coupled LDPC Codes with
Windowed Decoding | cs.IT math.IT | In this paper we study the iterative decoding threshold performance of
non-binary spatially-coupled low-density parity-check (NB-SC-LDPC) code
ensembles for both the binary erasure channel (BEC) and the binary-input
additive white Gaussian noise channel (BIAWGNC), with particular emphasis on
windowed decoding (WD). We consider both (2,4)-regular and (3,6)-regular
NB-SC-LDPC code ensembles constructed using protographs and compute their
thresholds using protograph versions of NB density evolution and NB extrinsic
information transfer analysis. For these code ensembles, we show that WD of
NB-SC-LDPC codes, which provides a significant decrease in latency and
complexity compared to decoding across the entire parity-check matrix, results
in a negligible decrease in the near-capacity performance for a sufficiently
large window size W on both the BEC and the BIAWGNC. Also, we show that
NB-SC-LDPC code ensembles exhibit gains in the WD threshold compared to the
corresponding block code ensembles decoded across the entire parity-check
matrix, and that the gains increase as the finite field size q increases.
Moreover, from the viewpoint of decoding complexity, we see that (3,6)-regular
NB-SC-LDPC codes are particularly attractive due to the fact that they achieve
near-capacity thresholds even for small q and W.
|
1403.3594 | Sparse Polynomial Interpolation Codes and their decoding beyond half the
minimal distance | cs.SC cs.IT math.IT | We present algorithms performing sparse univariate polynomial interpolation
with errors in the evaluations of the polynomial. Based on the initial work by
Comer, Kaltofen and Pernet [Proc. ISSAC 2012], we define the sparse polynomial
interpolation codes and state that their minimal distance is precisely the
length divided by twice the sparsity. At ISSAC 2012, we have given a decoding
algorithm for as much as half the minimal distance and a list decoding
algorithm up to the minimal distance. Our new polynomial-time list decoding
algorithm uses sub-sequences of the received evaluations indexed by a linear
progression, allowing the decoding for a larger radius, that is, more errors in
the evaluations while returning a list of candidate sparse polynomials. We
quantify this improvement for all typically small values of number of terms and
number of errors, and provide a worst case asymptotic analysis of this
improvement. For instance, for sparsity T = 5 with up to 10 errors we can list
decode in polynomial-time from 74 values of the polynomial with unknown terms,
whereas our earlier algorithm required 2T (E + 1) = 110 evaluations. We then
propose two variations of these codes in characteristic zero, where appropriate
choices of values for the variable yield a much larger minimal distance: the
length minus twice the sparsity.
|
1403.3602 | Spontaneous expression classification in the encrypted domain | cs.CV cs.CR | To date, most facial expression analysis has been based on posed image
databases and is carried out without being able to protect the identity of the
subjects whose expressions are being recognised. In this paper, we propose and
implement a system for classifying facial expressions of images in the
encrypted domain based on a Paillier cryptosystem implementation of Fisher
Linear Discriminant Analysis and k-nearest neighbour (FLDA + kNN). We present
results of experiments carried out on a recently developed natural visible and
infrared facial expression (NVIE) database of spontaneous images. To the best
of our knowledge, this is the first system that will allow the recognition of
encrypted spontaneous facial expressions by a remote server on behalf of a
client.
|
1403.3610 | Making Risk Minimization Tolerant to Label Noise | cs.LG | In many applications, the training data, from which one needs to learn a
classifier, is corrupted with label noise. Many standard algorithms such as SVM
perform poorly in presence of label noise. In this paper we investigate the
robustness of risk minimization to label noise. We prove a sufficient condition
on a loss function for the risk minimization under that loss to be tolerant to
uniform label noise. We show that the $0-1$ loss, sigmoid loss, ramp loss and
probit loss satisfy this condition though none of the standard convex loss
functions satisfy it. We also prove that, by choosing a sufficiently large
value of a parameter in the loss function, the sigmoid loss, ramp loss and
probit loss can be made tolerant to non-uniform label noise also if we can
assume the classes to be separable under noise-free data distribution. Through
extensive empirical studies, we show that risk minimization under the $0-1$
loss, the sigmoid loss and the ramp loss has much better robustness to label
noise when compared to the SVM algorithm.
|
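For entry 1403.3610 above, a minimal sketch of empirical risk minimization under the bounded sigmoid loss for a linear classifier, trained on labels with 20% uniform flips. The optimizer, the beta parameter and the toy data are illustrative assumptions; the sketch only shows that a bounded loss of this shape degrades gracefully under label noise, as the abstract argues.

```python
import numpy as np

def fit_linear_sigmoid_loss(X, y, beta=2.0, lr=0.1, n_iter=500):
    """Minimize the average sigmoid loss 1 / (1 + exp(beta * y * w.x)) by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        m = y * (X @ w)                        # margins y_i * w^T x_i
        s = 1.0 / (1.0 + np.exp(beta * m))     # per-example sigmoid loss values
        grad = X.T @ (-beta * s * (1.0 - s) * y) / len(y)
        w -= lr * grad
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((400, 2))
    y = np.sign(X @ np.array([2.0, -1.0]))
    y_noisy = np.where(rng.random(400) < 0.2, -y, y)   # 20% uniform label noise
    w = fit_linear_sigmoid_loss(X, y_noisy)
    print("accuracy on clean labels:", np.mean(np.sign(X @ w) == y))
```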
1403.3616 | Predictability of extreme events in social media | physics.soc-ph cs.SI physics.data-an | It is part of our daily social-media experience that seemingly ordinary items
(videos, news, publications, etc.) unexpectedly gain an enormous amount of
attention. Here we investigate how unexpected these events are. We propose a
method that, given some information on the items, quantifies the predictability
of events, i.e., the potential of identifying in advance the most successful
items defined as the upper bound for the quality of any prediction based on the
same information. Applying this method to different data, ranging from views in
YouTube videos to posts in Usenet discussion groups, we invariantly find that
the predictability increases for the most extreme events. This indicates that,
despite the inherently stochastic collective dynamics of users, efficient
prediction is possible for the most extreme events.
|
1403.3628 | Mixed-norm Regularization for Brain Decoding | cs.LG | This work investigates the use of mixed-norm regularization for sensor
selection in Event-Related Potential (ERP) based Brain-Computer Interfaces
(BCI). The classification problem is cast as a discriminative optimization
framework where sensor selection is induced through the use of mixed-norms.
This framework is extended to the multi-task learning situation where several
similar classification tasks related to different subjects are learned
simultaneously. In this case, multi-task learning helps mitigate the data
scarcity issue, yielding more robust classifiers. For this purpose, we have
introduced a regularizer that induces both sensor selection and classifier
similarities. The different regularization approaches are compared on three ERP
datasets showing the interest of mixed-norm regularization in terms of sensor
selection. The multi-task approaches are evaluated when a small number of
learning examples are available, yielding significant performance
improvements, especially for subjects performing poorly.
|
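For entry 1403.3628 above, the sensor-selection effect of a mixed norm can be shown with a tiny sketch of the l2,1 penalty and its proximal operator, with the classifier weights arranged as one row per sensor: rows whose l2 norm falls below the threshold are zeroed out entirely, switching that sensor off. The threshold value below is an illustrative choice.

```python
import numpy as np

def l21_norm(W):
    """Mixed l2,1 norm: sum over sensors (rows) of the l2 norm of each row."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def prox_l21(W, t):
    """Proximal operator of t * ||W||_{2,1}: group soft-thresholding of rows."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale

if __name__ == "__main__":
    W = np.array([[0.9, -0.4],     # informative sensor
                  [0.05, 0.02],    # nearly irrelevant sensor
                  [1.5, 0.3]])
    print("l2,1 norm:", round(l21_norm(W), 3))
    print(prox_l21(W, t=0.2))      # the small middle row is set exactly to zero
```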
1403.3665 | A Low-Complexity Algorithm for Throughput Maximization in Wireless
Powered Communication Networks | cs.IT math.IT | This paper investigates a wireless powered communication network (WPCN) under
the harvest-then-transmit protocol, where a hybrid access point (AP) with a
constant power supply replenishes the passive user nodes by wireless power
transfer in the downlink; each user node then transmits independent
information to the hybrid AP in a time division multiple access (TDMA) scheme
in the uplink. The sum-throughput maximization and min-throughput maximization
problems are considered in this paper. The optimal time allocation for
sum-throughput maximization is derived based on Jensen's inequality, which
provides more insight into the design of WPCNs. A low-complexity fixed-point
iteration algorithm is proposed for the min-throughput maximization problem,
which offers much lower computational complexity than the state-of-the-art
algorithm. Simulation results confirm the effectiveness of the proposed
algorithms.
|
1403.3668 | Language Heedless of Logic - Philosophy Mindful of What? Failures of
Distributive and Absorption Laws | cs.CL | Much of philosophical logic and all of philosophy of language make empirical
claims about the vernacular natural language. They presume semantics under
which `and' and `or' are related by the dually paired distributive and
absorption laws. However, at least one of each pair of laws fails in the
vernacular. `Implicature'-based auxiliary theories associated with the
programme of H.P. Grice do not prove remedial. Conceivable alternatives that
might replace the familiar logics as descriptive instruments are briefly noted:
(i) substructural logics and (ii) meaning composition in linear algebras over
the reals, occasionally constrained by norms of classical logic. Alternative
(ii) locates the problem in violations of one of the idempotent laws. Reasons
for a lack of curiosity about elementary and easily testable implications of
the received theory are considered. The concept of `reflective equilibrium' is
critically examined for its role in reconciling normative desiderata and
descriptive commitments.
|
1403.3678 | The Effect of Saturation on Belief Propagation Decoding of LDPC Codes | cs.IT math.IT | We consider the effect of LLR saturation on belief propagation decoding of
low-density parity-check codes. Saturation occurs universally in practice and
is known to have a significant effect on error floor performance. Our focus is
on threshold analysis and stability of density evolution.
We analyze the decoder for certain low-density parity-check code ensembles
and show that belief propagation decoding generally degrades gracefully with
saturation. Stability of density evolution is, on the other hand, rather
strongly affected by saturation and the asymptotic qualitative effect of
saturation is similar to reduction of variable node degree by one.
|
1403.3683 | Removal and Contraction Operations in $n$D Generalized Maps for
Efficient Homology Computation | cs.CV | In this paper, we show that contraction operations preserve the homology of
$n$D generalized maps, under some conditions. Removal and contraction
operations are used to propose an efficient algorithm that computes homology
generators of $n$D generalized maps. Its principle consists in simplifying a
generalized map as much as possible by using removal and contraction
operations. We obtain a generalized map having the same homology as the
initial one, while the number of cells decreases significantly.
Keywords: $n$D Generalized Maps; Cellular Homology; Homology Generators;
Contraction and Removal Operations.
|
1403.3707 | Learning the Latent State Space of Time-Varying Graphs | cs.SI cs.LG physics.soc-ph stat.ML | From social networks to Internet applications, a wide variety of electronic
communication tools are producing streams of graph data, where the nodes
represent users and the edges represent the contacts between them over time.
This has led to an increased interest in mechanisms to model the dynamic
structure of time-varying graphs. In this work, we develop a framework for
learning the latent state space of a time-varying email graph. We show how the
framework can be used to find subsequences that correspond to global real-time
events in the Email graph (e.g. vacations, breaks, etc.). These events
impact the underlying graph process to make its characteristics non-stationary.
Within the framework, we compare two different representations of the temporal
relationships; discrete vs. probabilistic. We use the two representations as
inputs to a mixture model to learn the latent state transitions that correspond
to important changes in the Email graph structure over time.
|
1403.3710 | Saving Energy in Mobile Devices for On-Demand Multimedia Streaming -- A
Cross-Layer Approach | cs.MM cs.IT math.IT | This paper proposes a novel energy-efficient multimedia delivery system
called EStreamer. First, we study the relationship between buffer size at the
client, burst-shaped TCP-based multimedia traffic, and energy consumption of
wireless network interfaces in smartphones. Based on the study, we design and
implement EStreamer for constant bit rate and rate-adaptive streaming.
EStreamer can improve battery lifetime by 3x, 1.5x and 2x while streaming over
Wi-Fi, 3G and 4G respectively.
|
1403.3724 | VESICLE: Volumetric Evaluation of Synaptic Interfaces using Computer
vision at Large Scale | cs.CV cs.CE q-bio.QM | An open challenge problem at the forefront of modern neuroscience is to
obtain a comprehensive mapping of the neural pathways that underlie human brain
function; an enhanced understanding of the wiring diagram of the brain promises
to lead to new breakthroughs in diagnosing and treating neurological disorders.
Inferring brain structure from image data, such as that obtained via electron
microscopy (EM), entails solving the problem of identifying biological
structures in large data volumes. Synapses, which are a key communication
structure in the brain, are particularly difficult to detect due to their small
size and limited contrast. Prior work in automated synapse detection has relied
upon time-intensive biological preparations (post-staining, isotropic slice
thicknesses) in order to simplify the problem.
This paper presents VESICLE, the first known approach designed for mammalian
synapse detection in anisotropic, non-post-stained data. Our methods explicitly
leverage biological context, and the results exceed existing synapse detection
methods in terms of accuracy and scalability. We provide two different
approaches - one a deep learning classifier (VESICLE-CNN) and one a lightweight
Random Forest approach (VESICLE-RF) to offer alternatives in the
performance-scalability space. Addressing this synapse detection challenge
enables the analysis of high-throughput imaging data soon expected to reach
petabytes of data, and provide tools for more rapid estimation of brain-graphs.
Finally, to facilitate community efforts, we developed tools for large-scale
object detection, and demonstrated this framework to find $\approx$ 50,000
synapses in 60,000 $\mu m ^3$ (220 GB on disk) of electron microscopy data.
|
1403.3740 | Interference Alignment with Partial CSI Feedback in MIMO Cellular
Networks | cs.IT math.IT | Interference alignment (IA) is a linear precoding strategy that can achieve
optimal capacity scaling at high SNR in interference networks. However, most
existing IA designs require full channel state information (CSI) at the
transmitters, which would lead to significant CSI signaling overhead. There are
two techniques, namely CSI quantization and CSI feedback filtering, to reduce
the CSI feedback overhead. In this paper, we consider IA processing with CSI
feedback filtering in MIMO cellular networks. We introduce a novel metric,
namely the feedback dimension, to quantify the first order CSI feedback cost
associated with the CSI feedback filtering. The CSI feedback filtering poses
several important challenges in IA processing. First, there is a hidden partial
CSI knowledge constraint in IA precoder design which cannot be handled using
conventional IA design methodology. Furthermore, existing results on the
feasibility conditions of IA cannot be applied due to the partial CSI
knowledge. Finally, it is very challenging to find out how much CSI feedback is
actually needed to support IA processing. We shall address the above challenges
and propose a new IA feasibility condition under partial CSIT knowledge in MIMO
cellular networks. Based on this, we consider the CSI feedback profile design
subject to the degrees of freedom requirements, and we derive closed-form
trade-off results between the CSI feedback cost and IA performance in MIMO
cellular networks.
|
1403.3741 | Near-optimal Reinforcement Learning in Factored MDPs | stat.ML cs.LG | Any reinforcement learning algorithm that applies to all Markov decision
processes (MDPs) will suffer $\Omega(\sqrt{SAT})$ regret on some MDP, where $T$
is the elapsed time and $S$ and $A$ are the cardinalities of the state and
action spaces. This implies $T = \Omega(SA)$ time to guarantee a near-optimal
policy. In many settings of practical interest, due to the curse of
dimensionality, $S$ and $A$ can be so enormous that this learning time is
unacceptable. We establish that, if the system is known to be a \emph{factored}
MDP, it is possible to achieve regret that scales polynomially in the number of
\emph{parameters} encoding the factored MDP, which may be exponentially smaller
than $S$ or $A$. We provide two algorithms that satisfy near-optimal regret
bounds in this context: posterior sampling reinforcement learning (PSRL) and an
upper confidence bound algorithm (UCRL-Factored).
|
1403.3758 | Big Data Analytics - Retour vers le Futur 3; De Statisticien à Data
Scientist | math.ST cs.DB stat.TH | The rapid evolution of information systems managing more and more voluminous
data has caused profound paradigm shifts in the job of statistician, becoming
successively data miner, bioinformatician and now data scientist. Without the
sake of completeness and after having illustrated these successive mutations,
this article briefly introduced the new research issues that quickly rise in
Statistics, and more generally in Mathematics, in order to integrate the
characteristics: volume, variety and velocity, of big data.
|
1403.3759 | Parallel Interleaver Design for a High Throughput HSPA+/LTE
Multi-Standard Turbo Decoder | cs.IT cs.AR cs.DC math.IT | To meet the evolving data rate requirements of emerging wireless
communication technologies, many parallel architectures have been proposed to
implement high throughput turbo decoders. However, concurrent memory
reading/writing in parallel turbo decoding architectures leads to severe memory
conflict problem, which has become a major bottleneck for high throughput turbo
decoders. In this paper, we propose a flexible and efficient VLSI architecture
to solve the memory conflict problem for highly parallel turbo decoders
targeting multi-standard 3G/4G wireless communication systems. To demonstrate
the effectiveness of the proposed parallel interleaver architecture, we
implemented an HSPA+/LTE/LTE-Advanced multi-standard turbo decoder with a 45nm
CMOS technology. The implemented turbo decoder consists of 16 Radix-4 MAP
decoder cores, and the chip core area is 2.43 mm^2. When clocked at 600 MHz,
this turbo decoder can achieve a maximum decoding throughput of 826 Mbps in the
HSPA+ mode and 1.67 Gbps in the LTE/LTE-Advanced mode, exceeding the peak data
rate requirements for both standards.
|