| id | title | categories | abstract |
|---|---|---|---|
1301.2354 | A New Approach for Solving Singular Systems in Topology Optimization
Using Krylov Subspace Methods | cs.CE math.NA | In topology optimization, design parameters that make no contribution to the
objective function vanish. This causes the stiffness matrix to become
singular. We show that a local optimal solution is obtained by the Conjugate
Residual Method (CRM) and the Conjugate Gradient Method (CGM) even if the
stiffness matrix becomes singular. We prove that CGM converges to a local
optimal solution in that case. Computer simulation shows that CGM gives the
same solutions as CRM for a cantilever beam problem.
|
1301.2362 | Quasi-SLCA based Keyword Query Processing over Probabilistic XML Data | cs.DB | The probabilistic threshold query is one of the most common queries in
uncertain databases, where a result satisfying the query must also have a
probability that meets the threshold requirement. In this paper, we
investigate probabilistic threshold keyword queries (PrTKQ) over XML data,
which have not been studied before. We first introduce the notion of
quasi-SLCA and use it to
represent results for a PrTKQ with the consideration of possible world
semantics. Then we design a probabilistic inverted (PI) index that can be used
to quickly return the qualified answers and filter out the unqualified ones
based on our proposed lower/upper bounds. After that, we propose two efficient
and comparable algorithms: a baseline algorithm and a PI index-based
algorithm. To accelerate the algorithms, we also exploit a probability
density function. An empirical study using real and synthetic data sets has verified
the effectiveness and the efficiency of our approaches.
|
1301.2369 | Combinatorial and approximative analyses in a spatially random division
process | cond-mat.stat-mech cs.DM cs.SI math-ph math.MP nlin.AO | Fat-tailed frequency distributions commonly arise for spatial
characteristics such as the fragment size and mass of glass, the areas
enclosed by city roads, and the pore size/volume in random packings. In order to give a new
analytical approach for the distributions, we consider a simple model which
constructs a fractal-like hierarchical network based on random divisions of
rectangles. The stochastic process forms a Markov chain and corresponds to
directional random walks with splitting into four particles. We derive a
combinatorial analytical form and its continuous approximation for the
distribution of rectangle areas, and numerically show a good fitting with the
actual distribution in the averaging behavior of the divisions.
|
1301.2375 | Context-based Diversification for Keyword Queries over XML Data | cs.DB | While keyword queries empower ordinary users to search vast amounts of data,
their ambiguity makes them difficult to answer effectively, especially when
they are short and vague. To address this
challenging problem, in this paper we propose an approach that automatically
diversifies XML keyword search based on its different contexts in the XML data.
Given a short and vague keyword query and XML data to be searched, we first
derive keyword search candidates of the query using a classical feature
selection model. Then we design an effective XML keyword search
diversification model to measure the quality of each candidate. After that,
three efficient algorithms are proposed to evaluate the possible generated
query candidates representing the diversified search intentions, from which we
can find and return the top-$k$ qualified query candidates that are most
relevant to the given keyword query while covering a maximal number of
distinct results. Finally, a comprehensive evaluation on real and synthetic datasets
demonstrates the effectiveness of our proposed diversification model and the
efficiency of our algorithms.
|
1301.2378 | Query-driven Frequent Co-occurring Term Extraction over Relational Data
using MapReduce | cs.DB | In this paper we study how to efficiently compute \textit{frequent
co-occurring terms} (FCT) in the results of a keyword query in parallel using
the popular MapReduce framework. Taking as input a keyword query q and an
integer k, an FCT query reports the k terms that are not in q, but appear most
frequently in the results of the keyword query q over multiple joined
relations. The returned terms of FCT search can be used to do query expansion
and query refinement for traditional keyword search. Unlike single-platform
FCT search methods, our proposed approach can efficiently answer an FCT query
in parallel using the MapReduce paradigm, without pre-computing the results
of the original keyword query. In this work, we produce the final FCT search
results with two MapReduce jobs: the first extracts the statistical
information of the data, and the second calculates the total frequency of
each term based on the output of the first job. In both MapReduce jobs, we
balance the load of the mappers and the computation of the reducers as much
as possible.
Analytical and experimental evaluations demonstrate the efficiency and
scalability of our proposed approach using TPC-H benchmark datasets with
different sizes.
|
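The two-job pipeline described in the abstract above can be sketched in plain Python as a toy simulation of the MapReduce data flow (an illustrative assumption, not the paper's actual implementation; `fct_query`, the sample `results`, and the single-machine reduce step are all hypothetical):

```python
from collections import Counter
from itertools import chain

def fct_query(results, q, k):
    """Toy two-job sketch of frequent co-occurring term (FCT) extraction.

    results: the results of keyword query q, each a list of terms.
    q: the set of query keywords (excluded from the output).
    k: number of co-occurring terms to report.
    """
    # "Job 1": map each result to (term, 1) pairs, skipping query terms.
    pairs = chain.from_iterable(
        ((t, 1) for t in r if t not in q) for r in results)
    # "Job 2": reduce by summing the counts per term, then take the top k.
    counts = Counter()
    for term, one in pairs:
        counts[term] += one
    return [t for t, _ in counts.most_common(k)]

results = [["db", "xml", "index"], ["db", "query", "index"], ["db", "xml"]]
print(fct_query(results, q={"db"}, k=2))
```

In a real MapReduce deployment the two loops would run as separate mapper and reducer tasks, with the framework handling the shuffle between them.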
1301.2405 | Dating medieval English charters | stat.AP cs.CL | Deeds, or charters, dealing with property rights, provide a continuous
documentation which can be used by historians to study the evolution of social,
economic and political changes. This study is concerned with charters (written
in Latin) dating from the tenth through early fourteenth centuries in England.
Of these, at least one million were left undated, largely due to administrative
changes introduced by William the Conqueror in 1066. Correctly dating such
charters is of vital importance in the study of English medieval history. This
paper is concerned with computer-automated statistical methods for dating such
document collections, with the goal of reducing the considerable efforts
required to date them manually and of improving the accuracy of assigned dates.
Proposed methods are based on such data as the variation over time of word and
phrase usage, and on measures of distance between documents. The extensive (and
dated) Documents of Early England Data Set (DEEDS) maintained at the University
of Toronto was used for this purpose.
|
1301.2427 | Analysis of the LTE Access Reservation Protocol for Real-Time Traffic | cs.IT cs.NI math.IT | LTE is increasingly seen as a system for serving real-time Machine-to-Machine
(M2M) communication needs. The asynchronous M2M user access in LTE is obtained
through a two-phase access reservation protocol (contention and data phase).
Existing analysis related to these protocols is based on the following
assumptions: (1) there are sufficient resources in the data phase for all
detected contention tokens, and (2) the base station is able to detect
collisions, i.e., tokens activated by multiple users. These assumptions are not
always applicable to LTE - specifically, (1) due to the variable amount of
available data resources caused by variable load, and (2) detection of
collisions in the contention phase may not be possible. All of this affects
transmission of real-time M2M traffic, where data packets have to be sent
within a deadline and may have only one contention opportunity. We analyze the
features of the two-phase LTE reservation protocol and derive its throughput,
i.e., the number of successful transmissions in the data phase, when
assumptions (1) and (2) do not hold.
|
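The effect of dropping assumptions (1) and (2) in the abstract above can be illustrated with a toy Monte Carlo simulation of the two-phase protocol (a hypothetical sketch, not the paper's analytical model; the function name, parameters, and grant policy are illustrative assumptions):

```python
import random

def data_phase_throughput(n_users, n_preambles, n_data_slots, trials, rng):
    """Toy simulation of the two-phase access reservation protocol.
    Assumption (2) is dropped: the base station cannot detect collisions,
    so it grants data slots to every activated preamble, and a collided
    preamble wastes its slot. Assumption (1) is dropped too: only
    n_data_slots grants are available per frame."""
    total = 0
    for _ in range(trials):
        # Contention phase: each user picks a preamble uniformly at random.
        counts = {}
        for _ in range(n_users):
            p = rng.randrange(n_preambles)
            counts[p] = counts.get(p, 0) + 1
        # Data phase: grant slots to detected preambles in random order.
        detected = list(counts)
        rng.shuffle(detected)
        granted = detected[:n_data_slots]
        # Only singleton (non-collided) preambles yield a successful send.
        total += sum(counts[p] == 1 for p in granted)
    return total / trials

print(data_phase_throughput(10, 16, 8, 2000, random.Random(7)))
```

Sweeping `n_users` against a fixed `n_data_slots` reproduces the qualitative throughput degradation the analysis quantifies.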
1301.2444 | TEI and LMF crosswalks | cs.CL | The present paper explores various arguments in favour of making the Text
Encoding Initiative (TEI) guidelines an appropriate serialisation for ISO
standard 24613:2008 (LMF, Lexical Markup Framework). It also identifies the
issues that would have to be resolved in order to reach an appropriate
implementation of these ideas, in particular in terms of informational
coverage. We show how the customisation facilities offered by the TEI
guidelines can provide an adequate background, not only to cover missing
components within the current Dictionary chapter of the TEI guidelines, but
also to allow specific lexical projects to deal with local constraints. We
expect this proposal to be a basis for a future ISO project in the context of
the ongoing revision of LMF.
|
1301.2464 | Time as a limited resource: Communication Strategy in Mobile Phone
Networks | physics.soc-ph cs.SI physics.data-an | We used a large database of 9 billion calls from 20 million mobile users to
examine the relationships between aggregated time spent on the phone, personal
network size, tie strength and the way in which users distributed their limited
time across their network (disparity). Compared to those with smaller networks,
those with large networks did not devote proportionally more time to
communication and had on average weaker ties (as measured by time spent
communicating). Further, there were not substantially different levels of
disparity between individuals, in that mobile users tend to distribute their
time very unevenly across their network, with a large proportion of calls going
to a small number of individuals. Together, these results suggest that there
are time constraints which limit tie strength in large personal networks, and
that even high levels of mobile communication do not fundamentally alter the
disparity of time allocation across networks.
|
1301.2466 | Determining token sequence mistakes in responses to questions with open
text answer | cs.CL cs.CY | When students learn the grammar of a new language, a teacher must routinely check
student's exercises for grammatical correctness. The paper describes a method
of automatically detecting and reporting grammar mistakes, regarding an order
of tokens in the response. It could report extra tokens, missing tokens and
misplaced tokens. The method is useful when teaching language, where order of
tokens is important, which includes most formal languages and some natural ones
(like English). The method has been implemented in the CorrectWriting
question-type plug-in for the widely used learning management system Moodle.
|
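A rough sketch of this kind of token-order checking can be built on Python's standard `difflib` (an illustrative assumption, not the CorrectWriting plug-in's actual algorithm; `token_mistakes` and its classification rules are hypothetical):

```python
from difflib import SequenceMatcher

def token_mistakes(correct, response):
    """Classify tokens in a student's response as extra, missing or
    misplaced by diffing it against the correct token sequence."""
    sm = SequenceMatcher(a=correct, b=response, autojunk=False)
    extra, missing = [], []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("delete", "replace"):
            missing += correct[i1:i2]       # expected but not matched here
        if op in ("insert", "replace"):
            extra += response[j1:j2]        # present but not expected here
    # A token reported as both missing and extra is in the wrong place.
    misplaced = [t for t in extra if t in missing]
    extra = [t for t in extra if t not in misplaced]
    missing = [t for t in missing if t not in misplaced]
    return {"extra": extra, "missing": missing, "misplaced": misplaced}

print(token_mistakes("I can swim well".split(), "I swim can well".split()))
```

On the example above the word "swim" is matched on both sides of the diff, so it is reported as misplaced rather than as an extra/missing pair.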
1301.2479 | Weight Distribution of a Class of Cyclic Codes with Arbitrary Number of
Zeros | cs.IT math.IT | Cyclic codes have been widely used in digital communication systems and
consumer electronics, as they have efficient encoding and decoding algorithms.
The weight distribution of cyclic codes has been an important topic of study
for many years. It is in general hard to determine the weight distribution of
linear codes. In this paper, a class of cyclic codes with any number of zeros
are described and their weight distributions are determined.
|
1301.2497 | Update-Efficient Regenerating Codes with Minimum Per-Node Storage | cs.IT cs.DM math.IT | Regenerating codes provide an efficient way to recover data at failed nodes
in distributed storage systems. It has been shown that regenerating codes can
be designed to minimize the per-node storage (called MSR) or minimize the
communication overhead for regeneration (called MBR). In this work, we propose
a new encoding scheme for [n,d] error-correcting MSR codes that generalizes
our earlier work on error-correcting regenerating codes. We show that by
choosing a suitable diagonal matrix, any generator matrix of the [n,{\alpha}]
Reed-Solomon (RS) code can be integrated into the encoding matrix. Hence, MSR
codes with the least update complexity can be found. An efficient decoding
scheme is also proposed that utilizes the [n,{\alpha}] RS code to perform data
reconstruction. The proposed decoding scheme has better error correction
capability and incurs the least number of node accesses when errors are
present.
|
1301.2498 | Modeling complex systems by Generalized Factor Analysis | cs.SY | We propose a new modeling paradigm for large dimensional aggregates of
stochastic systems by Generalized Factor Analysis (GFA) models. These models
describe the data as the sum of a flocking plus an uncorrelated idiosyncratic
component. The flocking component describes a sort of collective orderly motion
which admits a much simpler mathematical description than the whole ensemble
while the idiosyncratic component describes weakly correlated noise. We first
discuss static GFA representations and characterize in a rigorous way the
properties of the two components. The extraction of the dynamic flocking
component is discussed for time-stationary linear systems and for a simple
class of separable random fields.
|
1301.2533 | A Novel Analytical Method for Evolutionary Graph Theory Problems | cs.GT cs.SI q-bio.PE | Evolutionary graph theory studies the evolutionary dynamics of populations
structured on graphs. A central problem is determining the probability that a
small number of mutants overtake a population. Currently, Monte Carlo
simulations are used for estimating such fixation probabilities on general
directed graphs, since no good analytical methods exist. In this paper, we
introduce a novel deterministic framework for computing fixation probabilities
for strongly connected, directed, weighted evolutionary graphs under neutral
drift. We show how this framework can also be used to calculate the expected
number of mutants at a given time step (even if we relax the assumption that
the graph is strongly connected), how it can extend to other related models
(e.g. voter model), how our framework can provide non-trivial bounds for
fixation probability in the case of an advantageous mutant, and how it can be
used to find a non-trivial lower bound on the mean time to fixation. We provide
various experimental results determining fixation probabilities and expected
number of mutants on different graphs. Among these, we show that our method
consistently outperforms Monte Carlo simulations in speed by several orders of
magnitude. Finally we show how our approach can provide insight into synaptic
competition in neurology.
|
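For contrast with the deterministic framework in the abstract above, the Monte Carlo baseline it replaces can be sketched as a neutral Moran process on a directed graph (an illustrative simulation; the update rule shown is one common variant of the process, not necessarily the paper's exact model, and all names are assumptions):

```python
import random

def fixation_probability(adj, start, trials=3000, rng=None):
    """Monte Carlo estimate of the fixation probability of a single
    neutral mutant under a Moran process on a directed graph.

    adj: dict mapping each node to its list of out-neighbours.
    start: node initially occupied by the mutant.
    """
    rng = rng or random.Random()
    nodes = list(adj)
    fixed = 0
    for _ in range(trials):
        mutants = {start}
        while 0 < len(mutants) < len(nodes):
            # Neutral drift: every individual reproduces at the same rate.
            parent = rng.choice(nodes)
            child = rng.choice(adj[parent])
            if parent in mutants:
                mutants.add(child)
            else:
                mutants.discard(child)
        fixed += len(mutants) == len(nodes)
    return fixed / trials

# Directed 3-cycle: a regular graph, so neutral fixation prob is ~1/N = 1/3.
cycle = {0: [1], 1: [2], 2: [0]}
est = fixation_probability(cycle, start=0, rng=random.Random(1))
```

The cost of driving the Monte Carlo error down is exactly what makes the paper's deterministic computation attractive.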
1301.2542 | Enhancing the retrieval performance by combing the texture and edge
features | cs.CV cs.IR | In this paper, a new algorithm based on geometrical moments and local
binary patterns (LBP) for content-based image retrieval (CBIR) is proposed.
In geometrical moments, each vector is compared with all the other vectors
for edge map generation. The same concept is utilized in the LBP calculation,
which generates nine LBP patterns from a given 3x3 pattern. Finally, nine LBP
histograms are calculated which are used as a feature vector for image
retrieval. Moments are important features used in recognition of different
types of images. Two experiments have been carried out to prove the worth of
our algorithm. The results show a significant improvement in terms of their
evaluation measures as compared to LBP and other
existing transform domain techniques.
|
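A minimal sketch of the standard 3x3 LBP computation and the histogram feature it feeds (generic single-pattern LBP as commonly defined, not the paper's nine-pattern variant; function names and the bit ordering are illustrative assumptions):

```python
import numpy as np

def lbp_code(patch):
    """Standard 8-neighbour local binary pattern for one 3x3 patch:
    threshold each neighbour against the centre pixel and pack the
    resulting bits (clockwise from the top-left) into one byte."""
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    centre = patch[1, 1]
    bits = [int(patch[r, c] >= centre) for r, c in order]
    return sum(b << i for i, b in enumerate(bits))

def lbp_histogram(image):
    """256-bin histogram of LBP codes over all interior pixels; such
    histograms serve as the feature vector for retrieval."""
    h, w = image.shape
    codes = [lbp_code(image[r - 1:r + 2, c - 1:c + 2])
             for r in range(1, h - 1) for c in range(1, w - 1)]
    return np.bincount(codes, minlength=256)

patch = np.array([[9, 1, 9],
                  [1, 5, 1],
                  [9, 1, 9]])
print(lbp_code(patch))  # 85: the four corner bits (0, 2, 4, 6) are set
```

Image similarity is then scored by comparing the two images' LBP histograms, e.g. with a chi-squared or L1 distance.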
1301.2556 | Information field theory | astro-ph.IM cs.IT math.IT physics.data-an stat.ML | Non-linear image reconstruction and signal analysis deal with complex inverse
problems. To tackle such problems in a systematic way, I present information
field theory (IFT) as a means of Bayesian, data based inference on spatially
distributed signal fields. IFT is a statistical field theory, which permits the
construction of optimal signal recovery algorithms even for non-linear and
non-Gaussian signal inference problems. IFT algorithms exploit spatial
correlations of the signal fields and benefit from techniques developed to
investigate quantum and statistical field theories, such as Feynman diagrams,
re-normalisation calculations, and thermodynamic potentials. The theory can be
used in many areas, and applications in cosmology and numerics are presented.
|
1301.2561 | Modeling complex systems with adaptive networks | cs.SI nlin.AO physics.soc-ph | Adaptive networks are a novel class of dynamical networks whose topologies
and states coevolve. Many real-world complex systems can be modeled as adaptive
networks, including social networks, transportation networks, neural networks
and biological networks. In this paper, we introduce fundamental concepts and
unique properties of adaptive networks through a brief, non-comprehensive
review of recent literature on mathematical/computational modeling and analysis
of such networks. We also report our recent work on several applications of
computational adaptive network modeling and analysis to real-world problems,
including temporal development of search and rescue operational networks,
automated rule discovery from empirical network evolution data, and cultural
integration in a corporate merger.
|
1301.2603 | Robust subspace clustering | cs.LG cs.IT math.IT math.OC math.ST stat.ML stat.TH | Subspace clustering refers to the task of finding a multi-subspace
representation that best fits a collection of points taken from a
high-dimensional space. This paper introduces an algorithm inspired by sparse
subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern
Recognition, CVPR (2009) 2790-2797] to cluster noisy data, and develops some
novel theory demonstrating its correctness. In particular, the theory uses
ideas from geometric functional analysis to show that the algorithm can
accurately recover the underlying subspaces under minimal requirements on their
orientation, and on the number of samples per subspace. Synthetic as well as
real data experiments complement our theoretical study, illustrating our
approach and demonstrating its effectiveness.
|
1301.2604 | Structure and function in flow networks | physics.soc-ph cs.SI | This Letter presents a unified approach for the fundamental relationship
between structure and function in flow networks by solving analytically the
voltages in a resistor network, transforming the network structure to an
effective all-to-all topology, and then measuring the resultant flows.
Moreover, it defines a way to study the structural resilience of the graph and
to detect possible communities.
|
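The first step of the approach above, solving for the voltages in a resistor network, is a standard graph-Laplacian linear system; a minimal numpy sketch of that textbook Kirchhoff solve (not the Letter's full method; names and the examples are illustrative):

```python
import numpy as np

def voltages(adj, source, sink, current=1.0):
    """Solve Kirchhoff's equations L v = b for a resistor network whose
    conductances are given by the symmetric adjacency matrix `adj`,
    with unit current injected at `source` and removed at `sink`."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian
    b = np.zeros(len(adj))
    b[source], b[sink] = current, -current
    v = np.linalg.pinv(lap) @ b                   # pseudoinverse: L is singular
    return v - v[sink]                            # reference: v[sink] = 0

# Path 0-1-2 with unit resistors: effective resistance 0 -> 2 is 2 ohms,
# read off as v[source] - v[sink] for a unit injected current.
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
v = voltages(path, 0, 2)
print(round(float(v[0]), 6))
```

Repeating this solve for all source-sink pairs yields the flows of the effective all-to-all topology the Letter constructs.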
1301.2609 | Learning to Optimize Via Posterior Sampling | cs.LG | This paper considers the use of a simple posterior sampling algorithm to
balance between exploration and exploitation when learning to optimize actions
such as in multi-armed bandit problems. The algorithm, also known as Thompson
Sampling, offers significant advantages over the popular upper confidence bound
(UCB) approach, and can be applied to problems with finite or infinite action
spaces and complicated relationships among action rewards. We make two
theoretical contributions. The first establishes a connection between posterior
sampling and UCB algorithms. This result lets us convert regret bounds
developed for UCB algorithms into Bayesian regret bounds for posterior
sampling. Our second theoretical contribution is a Bayesian regret bound for
posterior sampling that applies broadly and can be specialized to many model
classes. This bound depends on a new notion we refer to as the eluder
dimension, which measures the degree of dependence among action rewards.
Compared to Bayesian regret bounds for UCB algorithms on specific model classes,
our general bound matches the best available for linear models and is stronger
than the best available for generalized linear models. Further, our analysis
provides insight into performance advantages of posterior sampling, which are
highlighted through simulation results that demonstrate performance surpassing
recently proposed UCB algorithms.
|
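A minimal Beta-Bernoulli instance of the posterior sampling algorithm discussed above can be sketched as follows (an illustrative sketch of the general idea for a two-armed Bernoulli bandit; the function name, priors, and arm probabilities are assumptions, not from the paper):

```python
import random

def thompson_bandit(arm_probs, horizon, rng):
    """Beta-Bernoulli Thompson Sampling: sample a mean-reward estimate
    for each arm from its posterior and play the arm with the best draw."""
    n = len(arm_probs)
    wins, losses = [1] * n, [1] * n        # Beta(1, 1) uniform priors
    total = 0
    for _ in range(horizon):
        # Posterior sampling step: one draw per arm, play the argmax.
        draws = [rng.betavariate(wins[a], losses[a]) for a in range(n)]
        arm = max(range(n), key=draws.__getitem__)
        reward = rng.random() < arm_probs[arm]
        wins[arm] += reward                # update the played arm's posterior
        losses[arm] += 1 - reward
        total += reward
    return total

rng = random.Random(0)
total = thompson_bandit([0.2, 0.8], horizon=2000, rng=rng)
```

Because exploration comes from posterior uncertainty rather than an explicit bonus term, the cumulative reward quickly concentrates on the 0.8 arm, well above the ~1000 a uniform-random policy would earn over 2000 rounds.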
1301.2613 | An Analytical Framework for Heterogeneous Partial Feedback Design in
Heterogeneous Multicell OFDMA Networks | cs.IT math.IT | The inherent heterogeneous structure resulting from user densities and large
scale channel effects motivates heterogeneous partial feedback design in
heterogeneous networks. In such emerging networks, a distributed scheduling
policy which enjoys multiuser diversity as well as maintains fairness among
users is favored for individual user rate enhancement and guarantees. For a
system employing the cumulative distribution function based scheduling, which
satisfies the two above mentioned desired features, we develop an analytical
framework to investigate heterogeneous partial feedback in a general
OFDMA-based heterogeneous multicell employing the best-M partial feedback
strategy. Exact sum rate analysis is first carried out and closed-form
expressions are obtained by a novel decomposition of the probability density
function of the selected user's signal-to-interference-plus-noise ratio. To
draw further insight, we perform asymptotic analysis using extreme value theory
to examine the effect of partial feedback on the randomness of multiuser
diversity, show the asymptotic optimality of best-1 feedback, and derive an
asymptotic approximation for the sum rate in order to determine the minimum
required partial feedback.
|
1301.2628 | Robust Text Detection in Natural Scene Images | cs.CV cs.IR cs.LG | Text detection in natural scene images is an important prerequisite for many
content-based image analysis tasks. In this paper, we propose an accurate and
robust method for detecting texts in natural scene images. A fast and effective
pruning algorithm is designed to extract Maximally Stable Extremal Regions
(MSERs) as character candidates using the strategy of minimizing regularized
variations. Character candidates are grouped into text candidates by the
single-link clustering algorithm, where the distance weights and threshold of the
clustering algorithm are learned automatically by a novel self-training
distance metric learning algorithm. The posterior probabilities of text
candidates corresponding to non-text are estimated with a character
classifier; text candidates with high probabilities are then eliminated and
finally texts are identified with a text classifier. The proposed system is
evaluated on the ICDAR 2011 Robust Reading Competition dataset; the f measure
is over 76% and is significantly better than the state-of-the-art performance
of 71%. Experimental results on a publicly available multilingual dataset also
show that our proposed method can outperform the other competitive method with
an f-measure increase of over 9 percent. Finally, we have set up an online demo
of our proposed scene text detection system at
http://kems.ustb.edu.cn/learning/yin/dtext.
|
1301.2629 | Upper-Bounding the Capacity of Relay Communications - Part I | cs.IT math.IT | This paper focuses on the capacity of point-to-point relay communications
wherein the transmitter is assisted by an intermediate relay. We detail the
mathematical models of the cutset bound and the amplify-and-forward (AF)
relaying strategy. We present the upper-bound capacity of each relaying
strategy from an information-theoretic viewpoint and also in networks with
Gaussian channels. We exemplify various outer-region capacities of the
addressed strategies with two different case studies. The results show that
in low signal-to-noise ratio (SNR) environments the cutset bound outperforms
the amplify-and-forward strategy.
|
1301.2634 | Blind source separation methods for deconvolution of complex signals in
cancer biology | q-bio.QM cs.CE q-bio.GN | Two blind source separation methods (Independent Component Analysis and
Non-negative Matrix Factorization), developed initially for signal processing
in engineering, found recently a number of applications in analysis of
large-scale data in molecular biology. In this short review, we present the
common idea behind these methods, describe ways of implementing and applying
them and point out to the advantages compared to more traditional statistical
approaches. We focus more specifically on the analysis of gene expression in
cancer. The review is finalized by listing available software implementations
for the methods described.
|
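One of the two methods in the review above, NMF, can be sketched with plain multiplicative updates (a generic Lee-Seung-style sketch under illustrative parameters, not any of the software implementations the review lists):

```python
import numpy as np

def nmf(X, k, iters=2000, seed=0):
    """Multiplicative-update NMF: factor a non-negative matrix X into
    W (mixing weights) times H (k non-negative source signals), the kind
    of deconvolution applied to gene expression matrices."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update the sources
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update the mixing weights
    return W, H

# A non-negative matrix of exact rank 2 should be reconstructed closely.
rng = np.random.default_rng(1)
X = rng.random((6, 2)) @ rng.random((2, 5))
W, H = nmf(X, k=2)
print(float(np.linalg.norm(X - W @ H)))
```

In the gene-expression setting the rows of `H` play the role of deconvolved source signals and the columns of `W` give each sample's non-negative loading on them.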
1301.2638 | Computational Intelligence for Deepwater Reservoir Depositional
Environments Interpretation | cs.NE physics.geo-ph | Predicting oil recovery efficiency of a deepwater reservoir is a challenging
task. One approach to characterize a deepwater reservoir and to predict its
producibility is by analyzing its depositional information. This research
proposes a deposition-based stratigraphic interpretation framework for
deepwater reservoir characterization. In this framework, one critical task is
the identification and labeling of the stratigraphic components in the
reservoir, according to their depositional environments. This interpretation
process is labor intensive and can produce different results depending on the
stratigrapher who performs the analysis. To relieve the stratigrapher's workload
and to produce more consistent results, we have developed a novel methodology
to automate this process using various computational intelligence techniques.
Using a well log data set, we demonstrate that the developed methodology and
the designed workflow can produce finite state transducer models that interpret
deepwater reservoir depositional environments adequately.
|
1301.2648 | A New Distributed Localization Method for Sensor Networks | cs.IT cs.DC cs.NI math.IT | This paper studies the problem of determining the sensor locations in a large
sensor network using relative distance (range) measurements only. Our work
follows from a seminal paper by Khan et al. [1] where a distributed algorithm,
known as DILOC, for sensor localization is given using the barycentric
coordinate. A main limitation of the DILOC algorithm is that all sensor nodes
must be inside the convex hull of the anchor nodes. In this paper, we consider
a general sensor network without the convex hull assumption, which incurs
challenges in determining the sign pattern of the barycentric coordinate. A
criterion is developed to address this issue based on available distance
measurements. Also, a new distributed algorithm is proposed to guarantee the
asymptotic localization of all localizable sensor nodes.
|
1301.2655 | Functional Regularized Least Squares Classification with Operator-valued
Kernels | cs.LG stat.ML | Although operator-valued kernels have recently received increasing interest
in various machine learning and functional data analysis problems such as
multi-task learning or functional regression, little attention has been paid to
the understanding of their associated feature spaces. In this paper, we explore
the potential of adopting an operator-valued kernel feature space perspective
for the analysis of functional data. We then extend the Regularized Least
Squares Classification (RLSC) algorithm to cover situations where there are
multiple functions per observation. Experiments on a sound recognition problem
show that the proposed method outperforms the classical RLSC algorithm.
|
1301.2656 | Multiple functional regression with both discrete and continuous
covariates | stat.ML cs.LG | In this paper we present a nonparametric method for extending functional
regression methodology to the situation where more than one functional
covariate is used to predict a functional response. Borrowing the idea from
Kadri et al. (2010a), the method, which supports mixed discrete and continuous
explanatory variables, is based on estimating a function-valued function in
reproducing kernel Hilbert spaces by virtue of positive operator-valued
kernels.
|
1301.2659 | A Triclustering Approach for Time Evolving Graphs | cs.LG cs.SI stat.ML | This paper introduces a novel technique to track structures in time evolving
graphs. The method is based on a parameter free approach for three-dimensional
co-clustering of the source vertices, the target vertices and the time. All
these features are simultaneously segmented in order to build time segments and
clusters of vertices whose edge distributions are similar and evolve in the
same way over the time segments. The main novelty of this approach is that
the time segments are directly inferred from the evolution of the edge
distribution between the vertices, thus not requiring the user to make an a
priori discretization. Experiments conducted on a synthetic dataset illustrate
the good behaviour of the technique, and a study of a real-life dataset shows
the potential of the proposed approach for exploratory data analysis.
|
1301.2678 | Verification of Agent-Based Artifact Systems | cs.MA cs.AI cs.LO | Artifact systems are a novel paradigm for specifying and implementing
business processes described in terms of interacting modules called artifacts.
Artifacts consist of data and lifecycles, accounting respectively for the
relational structure of the artifacts' states and their possible evolutions
over time. In this paper we put forward artifact-centric multi-agent systems, a
novel formalisation of artifact systems in the context of multi-agent systems
operating on them. Differently from the usual process-based models of services,
the semantics we give explicitly accounts for the data structures on which
artifact systems are defined. We study the model checking problem for
artifact-centric multi-agent systems against specifications written in a
quantified version of temporal-epistemic logic expressing the knowledge of the
agents in the exchange. We begin by noting that the problem is undecidable in
general. We then identify two noteworthy restrictions, one syntactical and one
semantical, that enable us to find bisimilar finite abstractions and therefore
reduce the model checking problem to the instance on finite models. Under these
assumptions we show that the model checking problem for these systems is
EXPSPACE-complete. We then introduce artifact-centric programs, compact and
declarative representations of the programs governing both the artifact system
and the agents. We show that, while these in principle generate infinite-state
systems, under natural conditions their verification problem can be solved on
finite abstractions that can be effectively computed from the programs. Finally
we exemplify the theoretical results of the paper through a mainstream
procurement scenario from the artifact systems literature.
|
1301.2683 | BliStr: The Blind Strategymaker | cs.AI cs.LG cs.LO | BliStr is a system that automatically develops strategies for E prover on a
large set of problems. The main idea is to interleave (i) iterated
low-timelimit local search for new strategies on small sets of similar easy
problems with (ii) higher-timelimit evaluation of the new strategies on all
problems. The accumulated results of the global higher-timelimit runs are used
to define and evolve the notion of "similar easy problems", and to control the
selection of the next strategy to be improved. The technique was used to
significantly strengthen the set of E strategies used by the MaLARea, PS-E,
E-MaLeS, and E systems in the CASC@Turing 2012 competition, particularly in the
Mizar division. Similar improvement was obtained on the problems created from
the Flyspeck corpus.
|
1301.2696 | Reduced-Rank Space-Time Interference Suppression with Joint Iterative
Least Squares Algorithms for Spread Spectrum Systems | cs.IT math.IT | This paper presents novel adaptive space-time reduced-rank interference
suppression least squares algorithms based on joint iterative optimization of
parameter vectors. The proposed space-time reduced-rank scheme consists of a
joint iterative optimization of a projection matrix that performs
dimensionality reduction and an adaptive reduced-rank parameter vector that
yields the symbol estimates. The proposed techniques do not require singular
value decomposition (SVD) and automatically find the best set of basis vectors for
reduced-rank processing. We present least squares (LS) expressions for the
design of the projection matrix and the reduced-rank parameter vector and we
conduct an analysis of the convergence properties of the LS algorithms. We then
develop recursive least squares (RLS) adaptive algorithms for their
computationally efficient estimation and an algorithm for automatically
adjusting the rank of the proposed scheme. A convexity analysis of the LS
algorithms is carried out along with the development of a proof of convergence
for the proposed algorithms. Simulations for a space-time interference
suppression application with a DS-CDMA system show that the proposed scheme
outperforms in convergence and tracking the state-of-the-art reduced-rank
schemes at a comparable complexity.
|
1301.2697 | Adaptive Reduced-Rank Equalization Algorithms Based on Alternating
Optimization Design Techniques for Multi-Antenna Systems | cs.IT math.IT | This paper presents a novel adaptive reduced-rank {multi-input multi-output}
(MIMO) equalization scheme and algorithms based on alternating optimization
design techniques for MIMO spatial multiplexing systems. The proposed
reduced-rank equalization structure consists of a joint iterative optimization
of two equalization stages, namely, a transformation matrix that performs
dimensionality reduction and a reduced-rank estimator that retrieves the
desired transmitted symbol. The proposed reduced-rank architecture is
incorporated into an equalization structure that allows both decision feedback
and linear schemes for mitigating the inter-antenna and inter-symbol
interference. We develop alternating least squares (LS) expressions for the
design of the transformation matrix and the reduced-rank estimator along with
computationally efficient alternating recursive least squares (RLS) adaptive
estimation algorithms. We then present an algorithm for automatically adjusting
the model order of the proposed scheme. An analysis of the LS algorithms is
carried out along with sufficient conditions for convergence and a proof of
convergence of the proposed algorithms to the reduced-rank Wiener filter.
Simulations show that the proposed equalization algorithms outperform the
existing reduced-rank and full-rank algorithms, while requiring a comparable
computational cost.
|
1301.2698 | Evaluating community structure in large network with random walks | cs.SI physics.soc-ph | Community structure is one of the most important properties of networks. Most
community-detection algorithms are unsuitable for large networks because they
are too time-consuming: many real networks have millions or even billions of
nodes, and in such cases algorithms running in time O(n^2 log n) or worse are
impractical. What is needed are linear or approximately linear time algorithms.
In response to this need, we propose a quick method for evaluating community
structure in networks and then put forward a local community algorithm, based
on random walks, that runs in nearly linear time. Our community-evaluation
measure yields results that differ from those of previously used measures such
as the Newman modularity. On small benchmark networks, our algorithm is
slightly less accurate than more complex algorithms, but it offers a large
advantage in running time on large, and especially very large, networks.
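As an illustration of the random-walk idea behind such local algorithms (a toy sketch, not the authors' method), nodes that a short walk from a seed node reaches with high probability tend to lie in the seed's community:

```python
def walk_scores(adj, seed, t=4):
    """Exactly propagate a t-step random-walk distribution from `seed`;
    high-probability nodes are candidates for the seed's community."""
    p = {seed: 1.0}
    for _ in range(t):
        nxt = {}
        for node, mass in p.items():
            share = mass / len(adj[node])       # uniform step to a neighbor
            for v in adj[node]:
                nxt[v] = nxt.get(v, 0.0) + share
        p = nxt
    return p

# Two triangles joined by one bridge edge (2-3): a walk from node 0
# should stay mostly inside its own triangle {0, 1, 2}.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
scores = walk_scores(adj, seed=0)
```

Ranking nodes by these scores and sweeping a cutoff gives a simple local community candidate set.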
|
1301.2710 | A High-Order Sliding Mode Observer: Torpedo Guidance Application | cs.SY | Guiding a torpedo is a hard task because of the smooth nonlinear dynamics of
the system and the extreme external disturbances. Torpedo guidance relies on
speed and position control. The control approach most widely used for
electromechanical systems is sliding mode control (SMC), which has proved
effective in many studies. SMC is robust to disturbances and model
uncertainties; however, it requires a sharp discontinuous control, which
induces the chattering phenomenon. Measuring the angular velocity is difficult
because of the high level of disturbances, so a sliding mode observer can
estimate the velocity in place of a sensor. This article deals with torpedo
guidance by SMC to reach the desired path quickly and with high precision.
Simulation results show that this control strategy and observer attain
excellent control performance with no chattering.
|
1301.2711 | A Sliding Mode Multimodel Control for a Sensorless Photovoltaic System | cs.SY hep-ex | In this work we present a new control scheme, combining sliding mode control
with a nonlinear sliding mode observer (both widely used in tracking
problems), for a sensorless photovoltaic panel. The panel system takes as its
set point the sun's position at every second of the day over a period of five
years; the tracker, using a sliding mode multimodel controller and a sliding
mode observer, follows these positions to keep the sun's rays orthogonal to
the photovoltaic cell, which maximizes energy production. After sunset, the
tracker returns to its initial (sunrise) position. Experimental measurements
show that this autonomous dual-axis sun tracker increases power production by
over 40%.
|
1301.2714 | A Sliding Mode-Multimodel Control with Sliding Mode Observer for a
Sensorless Pumping System | cs.SY | This work deals with the design of a sliding mode observer with a
multi-surface sliding mode multimodel control (SM-MMC) for a mechanical
sensorless pumping system. The observer is designed to estimate the speed and
the mechanical position of the DC motor operating in the process. Robustness
tests validated by simulation show the effectiveness of the sliding mode
observer associated with this control approach (SM-MMC).
|
1301.2715 | Binocular disparity as an explanation for the moon illusion | cs.CV physics.pop-ph | We present another explanation for the moon illusion, the phenomenon in which
the moon looks larger near the horizon than near the zenith. In our model of
the moon illusion, the sky is considered a spatially-contiguous and
geometrically-smooth surface. When an object such as the moon breaks the
contiguity of the surface, instead of perceiving the object as appearing
through a hole in the surface, humans perceive an occlusion of the surface.
Binocular vision dictates that the moon is distant, but the occlusion percept
contradicts this, dictating instead that the moon is closer than the sky. To
resolve the contradiction, the brain distorts the projections of the
moon to increase the binocular disparity, which results in an increase in the
perceived size of the moon. The degree of distortion depends upon the apparent
distance to the sky, which is influenced by the surrounding objects and the
condition of the sky. As the apparent distance to the sky decreases, the
illusion becomes stronger. At the horizon, apparent distance to the sky is
minimal, whereas at the zenith, few distance cues are present, causing
difficulty with distance estimation and weakening the illusion.
|
1301.2722 | Distributed Consensus Formation Through Unconstrained Gossiping | cs.SY math.OC | Gossip algorithms are widely used to solve the distributed consensus problem,
but issues can arise when nodes receive multiple signals either at the same
time or before they are able to finish processing their current workload.
Specifically, a node may assume a new state that represents a linear
combination of all received signals, even if such a state makes no sense in the
problem domain. As a solution to this problem, we introduce the notion of
conflict resolution for gossip algorithms and prove that their application
leads to a valid consensus state when the underlying communication network
possesses certain properties. We also introduce a methodology based on
absorbing Markov chains for analyzing gossip algorithms that make use of these
conflict resolution algorithms. This technique allows us to calculate both the
probabilities of converging to a specific consensus state and the time that
such convergence is expected to take. Finally, we make use of simulation to
validate our methodology and explore the temporal behavior of gossip algorithms
as the size of the network, the number of states per node, and the network
density increase.
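The absorbing-Markov-chain machinery referred to above can be sketched as follows (toy transition matrices, not from the paper): with transient-to-transient block Q and transient-to-absorbing block R, the fundamental matrix N = (I - Q)^{-1} yields both the absorption (consensus) probabilities B = NR and the expected number of steps to consensus t = N·1.

```python
import numpy as np

# Toy absorbing chain with two transient (non-consensus) states and two
# absorbing (consensus) states; each row of the block matrix [Q R] sums to 1.
Q = np.array([[0.2, 0.3],
              [0.4, 0.1]])   # transient -> transient
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])   # transient -> absorbing

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
B = N @ R                          # probability of reaching each consensus state
t = N @ np.ones(2)                 # expected updates until consensus
```

Each row of B sums to one (the chain is absorbed with probability 1), and t gives the state-dependent expected convergence time.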
|
1301.2725 | Robust High Dimensional Sparse Regression and Matching Pursuit | stat.ML cs.IT cs.LG math.IT math.ST stat.TH | We consider high dimensional sparse regression, and develop strategies able
to deal with arbitrary -- possibly, severe or coordinated -- errors in the
covariance matrix $X$. These may come from corrupted data, persistent
experimental errors, or malicious respondents in surveys/recommender systems,
etc. Such non-stochastic error-in-variables problems are notoriously difficult
to treat, and as we demonstrate, the problem is particularly pronounced in
high-dimensional settings where the primary goal is {\em support recovery} of
the sparse regressor. We develop algorithms for support recovery in sparse
regression, when some number $n_1$ out of $n+n_1$ total covariate/response
pairs are {\it arbitrarily (possibly maliciously) corrupted}. We are interested
in understanding how many outliers, $n_1$, we can tolerate, while identifying
the correct support. To the best of our knowledge, neither standard outlier
rejection techniques, nor recently developed robust regression algorithms (that
focus only on corrupted response variables), nor recent algorithms for dealing
with stochastic noise or erasures, can provide guarantees on support recovery.
Perhaps surprisingly, we also show that the natural brute force algorithm that
searches over all subsets of $n$ covariate/response pairs, and all subsets of
possible support coordinates in order to minimize regression error, is
remarkably poor, unable to correctly identify the support with even $n_1 =
O(n/k)$ corrupted points, where $k$ is the sparsity. This is true even in the
basic setting we consider, where all authentic measurements and noise are
independent and sub-Gaussian. In this setting, we provide a simple algorithm --
no more computationally taxing than OMP -- that gives stronger performance
guarantees, recovering the support with up to $n_1 = O(n/(\sqrt{k} \log p))$
corrupted points, where $p$ is the dimension of the signal to be recovered.
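For reference, here is a minimal sketch of plain OMP, the baseline whose computational cost the proposed algorithm matches (illustrative synthetic data; this is not the paper's robust variant):

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal Matching Pursuit: greedily pick the k columns of X most
    correlated with the residual, refitting by least squares each round."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(X.T @ residual)))
        support.append(j)
        beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ beta
    return support, beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
y = 2.0 * X[:, 3] - 1.5 * X[:, 11]     # noiseless 2-sparse signal
support, beta = omp(X, y, k=2)
```

In this clean, uncorrupted regime OMP recovers the support; the paper's point is what happens once some covariate/response pairs are adversarially corrupted.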
|
1301.2750 | Load-aware Channel Selection for 802.11 WLANs with Limited Measurement | cs.NI cs.IT cs.PF math.IT | It is known that load-unaware channel selection in 802.11 networks results in
high levels of interference and can significantly reduce network throughput.
In current implementations, the only way to determine the traffic load on a
channel is to measure that channel for a certain duration of time. Therefore,
to find the best channel with the minimum load, all channels have to be
measured, which is costly and can cause unacceptable communication
interruptions between the AP and the stations. In this paper, we propose a
learning-based approach that aims to find the channel with the minimum load by
measuring only a limited number of channels. Our method uses Gaussian Process
Regression to accurately track the traffic load on each channel based on
previously measured loads. We confirm the performance of our algorithm using
experimental data, and show that the time consumed by load measurement can be
reduced by up to 46% compared to the case where all channels are monitored.
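A minimal Gaussian Process regression sketch in the spirit of the proposed load tracker (the RBF kernel, hyperparameters and load values below are illustrative assumptions, not from the paper):

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=1.0, noise=0.1):
    """GP regression with an RBF kernel: posterior mean of the channel load
    at x_test, given noisy past measurements at x_train."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

# Hypothetical per-channel load measurements taken at past times t;
# predict the load now (t = 5) without spending time on a fresh measurement.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
load = np.array([0.30, 0.35, 0.45, 0.50, 0.55])   # fraction of airtime busy
pred = gp_predict(t, load, np.array([5.0]))
```

Repeating this per channel and measuring only the channels whose predictions are most uncertain or most promising is the general flavor of such limited-measurement schemes.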
|
1301.2774 | Crowd Labeling: a survey | cs.AI | Recently, there has been a burst in the number of research projects on human
computation via crowdsourcing. Multiple choice (or labeling) questions could be
referred to as a common type of problem which is solved by this approach. As an
application, crowd labeling is applied to find true labels for large machine
learning datasets. Since crowds are not necessarily experts, the labels they
provide are rather noisy and erroneous. This challenge is usually resolved by
collecting multiple labels for each sample, and then aggregating them to
estimate the true label. Although the mechanism leads to high-quality labels,
it is not cost-effective. As a result, efforts are currently being made to
maximize the accuracy of the estimated true labels while fixing the number of
acquired labels.
This paper surveys methods to aggregate redundant crowd labels in order to
estimate unknown true labels. It presents a unified statistical latent model
where the differences among popular methods in the field correspond to
different choices for the parameters of the model. Afterwards, algorithms to
make inference on these models will be surveyed. Moreover, adaptive methods
which iteratively collect labels based on the previously collected labels and
estimated models will be discussed. In addition, this paper compares the
surveyed methods and provides guidelines for the future work required to
address the current open issues.
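The baseline aggregation that the surveyed latent-variable models refine is simple majority voting, sketched here on hypothetical crowd labels:

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate redundant crowd labels by majority voting: the most common
    label per item is taken as the estimated true label."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_per_item.items()}

# Hypothetical noisy labels from three workers per document.
crowd = {"doc1": ["spam", "spam", "ham"],
         "doc2": ["ham", "ham", "ham"],
         "doc3": ["spam", "ham", "spam"]}
estimated = majority_vote(crowd)
```

The statistical models in the survey improve on this by weighting workers according to their inferred reliability instead of treating every vote equally.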
|
1301.2785 | A comparison of SVM and RVM for Document Classification | cs.IR cs.LG | Document classification is the task of assigning a new, unclassified document
to one of a predefined set of classes. Content-based document classification
uses the content of the document, with some weighting criteria, to assign it
to one of the predefined classes. It is a major task in library science,
electronic document management systems and information science. This paper
investigates document classification using two different techniques: (1) the
Support Vector Machine (SVM) and (2) the Relevance Vector Machine (RVM). SVM
is a supervised machine learning technique that can be used for
classification. In its basic form, SVM represents the data instances in a
vector space and tries to separate the distinct classes by the widest possible
gap (a maximum-margin hyperplane). RVM, on the other hand, uses a
probabilistic measure to define this separation and applies Bayesian inference
to obtain a succinct solution, so it needs significantly fewer basis
functions. Experimental studies on three standard text classification datasets
reveal that although RVM takes more training time, its classification accuracy
is much better than that of SVM.
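The maximum-margin idea behind SVM can be illustrated with a small subgradient-descent sketch on the regularized hinge loss (an illustrative Pegasos-style toy, not the paper's experimental setup; the bias term is omitted since the toy data is separable through the origin):

```python
import numpy as np

def train_hinge(X, y, lam=0.01, epochs=200):
    """Subgradient descent on the regularized hinge loss, the objective
    behind SVM's maximum-margin separator. Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)     # standard decaying step size
            w *= (1.0 - eta * lam)    # shrink toward zero (regularization)
            if yi * (w @ xi) < 1:     # margin violated: pull hyperplane
                w += eta * yi * xi
    return w

X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w = train_hinge(X, y)
pred = np.sign(X @ w)
```

RVM replaces this margin-based objective with a Bayesian model that prunes most basis functions, which is why it yields sparser solutions at the cost of training time.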
|
1301.2811 | Cutting Recursive Autoencoder Trees | cs.CL cs.AI | Deep Learning models enjoy considerable success in Natural Language
Processing. While deep architectures produce useful representations that lead
to improvements in various tasks, they are often difficult to interpret. This
makes the analysis of learned structures particularly difficult. In this paper,
we rely on empirical tests to see whether a particular structure makes sense.
We present an analysis of the Semi-Supervised Recursive Autoencoder, a
well-known model that produces structural representations of text. We show that
for certain tasks, the structure of the autoencoder can be significantly
reduced without loss of classification accuracy and we evaluate the produced
structures using human judgment.
|
1301.2820 | Clustering Learning for Robotic Vision | cs.CV | We present the clustering learning technique applied to multi-layer
feedforward deep neural networks. We show that this unsupervised learning
technique can compute network filters in only a few minutes and with a much
reduced set of parameters. The goal of this paper is to promote the technique
for general-purpose robotic vision systems. We report its use in static image
datasets and object tracking datasets. We show that networks trained with
clustering learning can outperform large networks trained for many hours on
complex datasets.
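The core recipe, learning filters as cluster centroids of image patches, might be sketched as follows (random data stands in for real patches, and details such as whitening are omitted; this is an assumption-laden sketch, not the authors' exact pipeline):

```python
import numpy as np

def kmeans_filters(patches, k=4, iters=20, seed=0):
    """Learn k filters as the k-means centroids of mean-subtracted patches,
    the core step of the clustering-learning recipe (whitening omitted)."""
    rng = np.random.default_rng(seed)
    patches = patches - patches.mean(axis=1, keepdims=True)  # remove DC
    centroids = patches[rng.choice(len(patches), size=k, replace=False)]
    for _ in range(iters):
        dist = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = dist.argmin(axis=1)          # nearest-centroid assignment
        for j in range(k):
            members = patches[assign == j]
            if len(members):                  # keep old centroid if empty
                centroids[j] = members.mean(axis=0)
    return centroids

rng = np.random.default_rng(1)
patches = rng.standard_normal((200, 16))      # stand-in for 4x4 image patches
filters = kmeans_filters(patches, k=4)
```

The learned centroids are then reshaped into convolutional filters for the first network layer, which is what makes the procedure so fast compared to gradient-based training.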
|
1301.2831 | Beyond consistent reconstructions: optimality and sharp bounds for
generalized sampling, and application to the uniform resampling problem | math.NA cs.IT math.IT | Generalized sampling is a recently developed linear framework for sampling
and reconstruction in separable Hilbert spaces. It allows one to recover any
element in any finite-dimensional subspace given finitely many of its samples
with respect to an arbitrary frame. Unlike more common approaches for this
problem, such as the consistent reconstruction technique of Eldar et al, it
leads to completely stable numerical methods possessing both guaranteed
stability and accuracy.
The purpose of this paper is twofold. First, we give a complete and formal
analysis of generalized sampling, the main result of which being the derivation
of new, sharp bounds for the accuracy and stability of this approach. Such
bounds improve those given previously, and result in a necessary and sufficient
condition, the stable sampling rate, which guarantees a priori a good
reconstruction. Second, we address the topic of optimality. Under some
assumptions, we show that generalized sampling is an optimal, stable
reconstruction. Correspondingly, whenever these assumptions hold, the stable
sampling rate is a universal quantity. In the final part of the paper we
illustrate our results by applying generalized sampling to the so-called
uniform resampling problem.
|
1301.2840 | Unsupervised Feature Learning for low-level Local Image Descriptors | cs.CV cs.LG stat.ML | Unsupervised feature learning has shown impressive results for a wide range
of input modalities, in particular for object classification tasks in computer
vision. Using a large amount of unlabeled data, unsupervised feature learning
methods are utilized to construct high-level representations that are
discriminative enough for subsequently trained supervised classification
algorithms. However, it has never been \emph{quantitatively} investigated yet
how well unsupervised learning methods can find \emph{low-level
representations} for image patches without any additional supervision. In this
paper we examine the performance of pure unsupervised methods on a low-level
correspondence task, a problem that is central to many Computer Vision
applications. We find that a special type of Restricted Boltzmann Machines
(RBMs) performs comparably to hand-crafted descriptors. Additionally, a simple
binarization scheme produces compact representations that perform better than
several state-of-the-art descriptors.
|
1301.2851 | Efficient algorithm to study interconnected networks | physics.comp-ph cond-mat.stat-mech cs.SI physics.soc-ph | Interconnected networks have been shown to be much more vulnerable to random
and targeted failures than isolated ones, raising several interesting questions
regarding the identification and mitigation of their risk. The paradigm to
address these questions is the percolation model, where the resilience of the
system is quantified by the dependence of the size of the largest cluster on
the number of failures. Numerically, the major challenge is the identification
of this cluster and the calculation of its size. Here, we propose an efficient
algorithm to tackle this problem. We show that the algorithm scales as O(N log
N), where N is the number of nodes in the network, a significant improvement
compared to O(N^2) for a greedy algorithm, which permits studying much larger
networks. Our new strategy can be applied to any network topology and
distribution of interdependencies, as well as any sequence of failures.
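Although the paper's algorithm handles interdependent networks, the basic bookkeeping, tracking the largest cluster as links are added back (the reverse of a failure sequence) with union-find, can be sketched as follows (toy single-layer network, not the paper's full method):

```python
class UnionFind:
    """Union-find with path halving and union by size: near-constant-time
    merges, which is what makes O(N log N)-style scaling possible."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.largest = 1

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.largest = max(self.largest, self.size[ra])

# Rebuild a 6-node network edge by edge and record the largest cluster
# after each addition; reversing this list gives the failure curve.
uf = UnionFind(6)
sizes = []
for a, b in [(0, 1), (2, 3), (1, 2), (4, 5), (3, 4)]:
    uf.union(a, b)
    sizes.append(uf.largest)
```

For interconnected networks the same structure is applied while respecting the dependency links between layers.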
|
1301.2857 | SpeedRead: A Fast Named Entity Recognition Pipeline | cs.CL | Online content analysis employs algorithmic methods to identify entities in
unstructured text. Both machine learning and knowledge-base approaches lie at
the foundation of contemporary named entity extraction systems. However,
progress in deploying these approaches at web scale has been hampered by the
computational cost of NLP over massive text corpora. We present SpeedRead
(SR), a named entity recognition pipeline that runs at least 10 times faster
than the Stanford NLP pipeline. The pipeline consists of a high-performance
Penn Treebank-compliant tokenizer, a near-state-of-the-art part-of-speech
(POS) tagger and a knowledge-based named entity recognizer.
|
1301.2860 | Rateless Resilient Network Coding Against Byzantine Adversaries | cs.IT math.IT | This paper considers rateless network error correction codes for reliable
multicast in the presence of adversarial errors. Most existing network error
correction codes are designed for a given network capacity and maximum number
of errors known a priori to the encoder and decoder. However, in certain
practical settings it may be necessary to operate without such a priori
knowledge. We present rateless coding schemes for two adversarial models, where
the source sends more redundancy over time, until decoding succeeds. The first
model assumes there is a secret channel between the source and the destination
that the adversaries cannot overhear. The rate of the channel is negligible
compared to the main network. In the second model, instead of a secret channel,
the source and destination share random secrets independent of the input
information. The amount of secret information required is negligible compared
to the amount of information sent. Both schemes are optimal in that decoding
succeeds with high probability when the total amount of information received by
the sink satisfies the cut set bound with respect to the amount of message and
error information. The schemes are distributed, polynomial-time and
end-to-end, in that all nodes other than the source and destination carry out
classical random linear network coding.
|
1301.2866 | Generalized Multiscale Finite Element Methods (GMsFEM) | math.NA cs.CE cs.NA math.AP | In this paper, we propose a general approach called Generalized Multiscale
Finite Element Method (GMsFEM) for performing multiscale simulations for
problems without scale separation over a complex input space. As in multiscale
finite element methods (MsFEMs), the main idea of the proposed approach is to
construct a small dimensional local solution space that can be used to generate
an efficient and accurate approximation to the multiscale solution with a
potentially high dimensional input parameter space. In the proposed approach,
we present a general procedure to construct the offline space that is used for
a systematic enrichment of the coarse solution space in the online stage. The
enrichment in the online stage is performed based on a spectral decomposition
of the offline space. In the online stage, for any input parameter, a
multiscale space is constructed to solve the global problem on a coarse grid.
The online space is constructed via a spectral decomposition of the offline
space and by choosing the eigenvectors corresponding to the largest
eigenvalues. The computational saving is due to the fact that the construction
of the online multiscale space for any input parameter is fast and this space
can be re-used for solving the forward problem with any forcing and boundary
condition. Compared with the other approaches where global snapshots are used,
the local approach that we present in this paper allows us to eliminate
unnecessary degrees of freedom on a coarse-grid level. We present various
examples in the paper and some numerical results to demonstrate the
effectiveness of our method.
|
1301.2884 | Wavelet-based Scale Saliency | cs.CV | Both pixel-based scale saliency (PSS) and basis projection methods focus on
multiscale analysis of data content and structure. Their theoretical relations
and practical combination have been discussed previously; however, no models
have since been proposed for calculating scale saliency on basis-projected
descriptors. This paper extends those ideas into mathematical models and
implements them as wavelet-based scale saliency (WSS). While PSS uses
pixel-value descriptors, WSS treats wavelet sub-bands as basis descriptors.
The paper discusses different wavelet descriptors: the discrete wavelet
transform (DWT), the discrete wavelet packet transform (DWPT), the quaternion
wavelet transform (QWT) and the best-basis quaternion wavelet packet transform
(QWPTBB). WSS saliency maps for the different descriptors are generated and
compared against other saliency methods by both quantitative and qualitative
means. Quantitative results (ROC curves, AUC values and NSS values) are
collected from simulations on the Bruce and Kootstra image databases, with
human eye-tracking data as ground truth. Furthermore, qualitative visual
results of the saliency maps are analyzed and compared against each other as
well as against the eye-tracking data included in the databases.
|
1301.2907 | Conditions on the generator for forging ElGamal signature | cs.CR cs.IT math.IT | This paper describes new conditions on parameter selection that lead to an
efficient algorithm for forging ElGamal digital signature. Our work is inspired
by Bleichenbacher's ideas.
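For context, here is a toy ElGamal signature over a deliberately tiny, insecure prime; forging means producing a pair (r, s) that passes `verify` without knowledge of x. The parameters fix notation only and do not reproduce the paper's conditions on the generator:

```python
# Toy ElGamal signature (insecure parameters, for notation only).
# Public: prime p, generator g, key y = g^x mod p. Private: x.
p, g = 23, 5
x = 6                      # private key
y = pow(g, x, p)           # public key

def sign(h, k):
    """Sign a message hash h with a nonce k coprime to p - 1."""
    r = pow(g, k, p)
    s = (pow(k, -1, p - 1) * (h - x * r)) % (p - 1)
    return r, s

def verify(h, r, s):
    """A forgery is any (r, s) passing this check without knowledge of x."""
    return 0 < r < p and pow(g, h, p) == (pow(y, r, p) * pow(r, s, p)) % p

r, s = sign(h=7, k=3)
ok = verify(7, r, s)
```

Attacks in the Bleichenbacher line exploit structure in g (and missing range checks on r) to satisfy the verification congruence directly.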
|
1301.2935 | Novel Subcarrier-pair based Opportunistic DF Protocol for Cooperative
Downlink OFDMA | cs.IT cs.NI math.IT | A novel subcarrier-pair based opportunistic DF protocol is proposed for
cooperative downlink OFDMA transmission aided by a decode-and-forward (DF)
relay. Specifically, user message bits are transmitted in two consecutive
equal-duration time slots. A subcarrier in the first slot can be paired with a
subcarrier in the second slot for the DF relay-aided transmission to a user. In
particular, the source and the relay can transmit simultaneously to implement
beamforming at the subcarrier in the second slot for the relay-aided
transmission. Each unpaired subcarrier in either the first or second slot is
used by the source for direct transmission to a user without the relay's
assistance. The sum rate maximized resource allocation (RA) problem is
addressed for this protocol under a total power constraint. It is shown that
the novel protocol leads to a maximum sum rate greater than or equal to that
for a benchmark one, which does not allow the source to implement beamforming
at the subcarrier in the second slot for the relay-aided transmission. Then, a
polynomial-complexity RA algorithm is developed to find an (at least
approximately) optimum resource allocation (i.e., source/relay power,
subcarrier pairing and assignment to users) for either the proposed or
benchmark protocol. Numerical experiments illustrate that the novel protocol
can lead to a much greater sum rate than the benchmark one.
|
1301.2941 | Power minimization for OFDM Transmission with Subcarrier-pair based
Opportunistic DF Relaying | cs.IT cs.NI math.IT | This paper develops a sum-power minimized resource allocation (RA) algorithm
subject to a sum-rate constraint for cooperative orthogonal frequency division
modulation (OFDM) transmission with subcarrier-pair based opportunistic
decode-and-forward (DF) relaying. The improved DF protocol first proposed in
[1] is used with optimized subcarrier pairing. Instrumental to the RA algorithm
design is appropriate definition of variables to represent source/relay power
allocation, subcarrier pairing and transmission-mode selection elegantly, so
that after continuous relaxation, the dual method and the Hungarian algorithm
can be used to find an (at least approximately) optimum RA with polynomial
complexity. Moreover, the bisection method is used to speed up the search for
the optimum Lagrange multiplier in the dual method. Numerical results are
shown to illustrate the power-reduction benefit of the improved DF protocol
with optimized subcarrier pairing.
|
1301.2944 | Competing of Sznajd and voter dynamics in the Watts-Strogatz network | physics.soc-ph cs.SI physics.comp-ph | We investigate the Watts-Strogatz network with the clustering coefficient C
dependent on the rewiring probability. The network is an area of two opposite
contact processes, where nodes can be in two states, S or D. One of the
processes is governed by the Sznajd dynamics: if there are two connected nodes
in D-state, all their neighbors become D with probability p. For the opposite
process it is sufficient to have only one neighbor in state S; this transition
occurs with probability 1. The concentration of S-nodes changes abruptly at a
given value of the probability p. The result is that for small p, in clustered
networks the activation of S-nodes prevails. This result is
explained by a comparison of two limit cases: the Watts-Strogatz network
without rewiring, where C=0.5, and the Bethe lattice where C=0.
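A minimal simulation of the two competing processes on the no-rewiring limit (a ring lattice with k = 4 neighbors, where C = 0.5) might look like this; the asynchronous update scheme below is an illustrative simplification, not the authors' exact protocol:

```python
import random

def step(adj, state, p):
    """One asynchronous update of the two competing processes: a D-D edge
    converts its joint neighborhood to D with probability p (Sznajd-like);
    a node with at least one S neighbor becomes S (voter-like)."""
    u = random.choice(list(adj))
    for v in adj[u]:
        if state[u] == "D" and state[v] == "D" and random.random() < p:
            for w in set(adj[u]) | set(adj[v]):
                state[w] = "D"
            break
    u = random.choice(list(adj))
    if any(state[v] == "S" for v in adj[u]):
        state[u] = "S"

n = 20
# Ring lattice with k = 4 neighbors: the no-rewiring limit, C = 0.5.
adj = {i: [(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n]
       for i in range(n)}
random.seed(2)
state = {i: random.choice("SD") for i in range(n)}
for _ in range(500):
    step(adj, state, p=0.3)
conc_S = sum(s == "S" for s in state.values()) / n
```

Sweeping p and the rewiring probability, and recording the final S concentration, reproduces the kind of abrupt transition the abstract describes.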
|
1301.2952 | The Anatomy of a Scientific Rumor | physics.soc-ph cond-mat.stat-mech cs.SI | The announcement of the discovery of a Higgs boson-like particle at CERN will
be remembered as one of the milestones of the scientific endeavor of the 21st
century. In this paper we present a study of information spreading processes on
Twitter before, during and after the announcement of the discovery of a new
particle with the features of the elusive Higgs boson on 4th July 2012. We
report evidence for non-trivial spatio-temporal patterns in user activities at
the individual and global levels, such as tweeting, re-tweeting and replying to
existing tweets. We provide a possible explanation for the observed
time-varying dynamics of user activities during the spreading of this
scientific "rumor". We model the information spreading in the corresponding
network of individuals who posted a tweet related to the Higgs boson discovery.
Finally, we show that we are able to reproduce the global behavior of about
500,000 individuals with remarkable accuracy.
|
1301.2957 | Characters and patterns of communities in networks | cs.SI physics.soc-ph | In this paper, we propose some new notions to characterize and analyze the
communities. The new notions are general characters of the communities or local
structures of networks. At first, we introduce the notions of internal
dominating set and external dominating set of a community. We show that most
communities in real networks have a small internal dominating set and a small
external dominating set, and that the internal dominating set of a community
keeps much of the information of the community. Secondly, based on the notions
of the internal dominating set and the external dominating set, we define an
internal slope (ISlope, for short) and an external slope (ESlope, for short) to
measure the internal heterogeneity and external heterogeneity of a community
respectively. We show that the internal slope (ISlope) of a community largely
determines the structure of the community, that most communities in real
networks are heterogeneous, meaning that most of the communities have a
core/periphery structure, and that both ISlopes and ESlopes (reflecting the
structure of communities) of all the communities of a network approximately
follow a normal distribution. Therefore, the typical values of both the
ISlopes and ESlopes of all the communities of a given network lie in a narrow
interval, and only a small number of communities have ISlopes or ESlopes
outside the range of typical values for the network. Finally, we show that all
the communities of the real networks we studied exhibit a three-degree
separation phenomenon, that is, the average distance between communities is
approximately 3, implying a general property of true communities for many real
networks, and that, for many real networks, good community-finding algorithms
find communities that amplify the clustering coefficients of the networks.
|
1301.2959 | New elements for a network (including brain) general theory during
learning period | nlin.AO cs.NE nlin.CD | This study deals with the evolution of the so called 'intelligent' networks
(insect society without leader, cells of an organism, brain,...) during their
learning period. First we summarize briefly the Version 2 (published in
French), whose the main characteristics are: 1) A network connected to its
environment is considered as immersed into an information field created by this
environment which so dictates to it the learning constraints. 2) The used
formalism draws one's inspiration from the one of the Quantum field theory
(Principle of stationary action, gauge fields, invariance by symmetry
transformations,...). 3) We obtain Lagrange equations whose solutions describe
the network evolution during the whole learning period. 4) Then, while
proceeding with the same formalism inspiration, we suggest other study ways
capable of evolving the knowledge in the considered scope. In a second part,
after a reminder of the points to be improved, we exhibit the Version 5 which
brings, we think, relevant improvements. Indeed: 5) We consider the weighted
averages of the variables; this introduces probabilities. 6) We define two
observables (L, the average information flux, and A, the activity of the network) which
could be measured and thus compared with experimental results. 7) We find that
L, the weighted average of information flows, is an invariant. 8) Finally, we
propose two expressions for the conactance, from which we deduce the
corresponding Lagrange equations which have to be solved to know the evolution
of the considered weighted averages. But, at the present stage, we think that
we can progress only by carrying out experiments (see projects such as the
Human Brain Project) and discovering invariants and symmetries that would
allow us, as in physics, to classify networks and, above all, to better understand the
connections between them. Indeed, and this is one of the future research
directions we propose, the underlying problem is to understand how, after their
learning period, several networks can connect together to produce, in the brain
case for instance, what we call mental states.
|
1301.2995 | Measuring Cultural Dynamics Through the Eurovision Song Contest | physics.soc-ph cs.SI physics.data-an | Measuring culture and its dynamics through surveys has important limitations,
but the emerging field of computational social science allows us to overcome
them by analyzing large-scale datasets. In this article, we study cultural
dynamics through the votes in the Eurovision song contest, which are decided by
a crowd-based scheme in which viewers vote through mobile phone messages.
Taking into account asymmetries and imperfect perception of culture, we measure
cultural relations among European countries in terms of cultural affinity. We
propose the Friend-or-Foe coefficient, a metric to measure voting biases among
participants of a Eurovision contest. We validate how this metric represents
cultural affinity through its relation with known cultural distances, and
through numerical analysis of biased Eurovision contests. We apply this metric
to the historical set of Eurovision contests from 1975 to 2012, finding new
patterns of stronger modularity than using votes alone. Furthermore, we define
a measure of polarization that, when applied to empirical data, shows a sharp
increase within EU countries during 2010 and 2011. We empirically validate the
relation between this polarization and economic indicators in the EU, showing
how political decisions influence both the economy and the way citizens relate
to the culture of other EU members.
|
1301.3003 | On the Vector Linear Solvability of Networks and Discrete Polymatroids | cs.IT math.IT | We consider the vector linear solvability of networks over a field
$\mathbb{F}_q.$ It is well known that a scalar linear solution over
$\mathbb{F}_q$ exists for a network if and only if the network is
\textit{matroidal} with respect to a \textit{matroid} representable over
$\mathbb{F}_q.$ A \textit{discrete polymatroid} is the multi-set analogue of a
matroid. In this paper, a \textit{discrete polymatroidal} network is defined
and it is shown that a vector linear solution over a field $\mathbb{F}_q$
exists for a network if and only if the network is discrete polymatroidal with
respect to a discrete polymatroid representable over $\mathbb{F}_q.$ An
algorithm to construct networks starting from a discrete polymatroid is
provided. Every representation over $\mathbb{F}_q$ for the discrete
polymatroid, results in a vector linear solution over $\mathbb{F}_q$ for the
constructed network. Examples which illustrate the construction algorithm are
provided, in which the resulting networks admit a vector linear solution but no
scalar linear solution over $\mathbb{F}_q.$
|
1301.3021 | Accurate detection of moving targets via random sensor arrays and
Kerdock codes | math.NA cs.IT math.IT | The detection and parameter estimation of moving targets is one of the most
important tasks in radar. Arrays of randomly distributed antennas have been
popular for this purpose for about half a century. Yet, surprisingly little
rigorous mathematical theory exists for random arrays that addresses
fundamental questions such as how many targets can be recovered, at what
resolution, at which noise level, and with which algorithm. In a different line
of research in radar, mathematicians and engineers have invested significant
effort into the design of radar transmission waveforms which satisfy various
desirable properties. In this paper we bring these two seemingly unrelated
areas together. Using tools from compressive sensing we derive a theoretical
framework for the recovery of targets in the azimuth-range-Doppler domain via
random antenna arrays. In one manifestation of our theory we use Kerdock codes
as transmission waveforms and exploit some of their peculiar properties in our
analysis. Our paper provides two main contributions: (i) We derive the first
rigorous mathematical theory for the detection of moving targets using random
sensor arrays. (ii) The transmitted waveforms satisfy a variety of properties
that are very desirable and important from a practical viewpoint. Thus our
approach does not just lead to useful theoretical insights, but is also of
practical importance. Various extensions of our results are derived and
numerical simulations confirming our theory are presented.
|
1301.3106 | Topological Interference Management through Index Coding | cs.IT math.IT | This work studies linear interference networks, both wired and wireless, with
no channel state information at the transmitters (CSIT) except a coarse
knowledge of the end-to-end one-hop topology of the network that only allows a
distinction between weak (zero) and significant (non-zero) channels and no
further knowledge of the channel coefficients' realizations. The network
capacity (wired) and DoF (wireless) are found to be bounded above by the
capacity of an index coding problem for which the antidote graph is the
complement of the given interference graph. The problems are shown to be
equivalent under linear solutions. An interference alignment perspective is
then used to translate the existing index coding solutions into the wired
network capacity and wireless network DoF solutions, as well as to find new and
unified solutions to different classes of all three problems.
|
1301.3118 | A parallel implementation of a derivative pricing model incorporating
SABR calibration and probability lookup tables | cs.DC cs.CE q-fin.CP | We describe a high performance parallel implementation of a derivative
pricing model, within which we introduce a new parallel method for the
calibration of the industry standard SABR (stochastic-\alpha \beta \rho)
stochastic volatility model using three strike inputs. SABR calibration
involves a non-linear three dimensional minimisation and parallelisation is
achieved by incorporating several assumptions unique to the SABR class of
models. Our calibration method is based on principles of surface intersection,
guarantees convergence to a unique solution and operates by iteratively
refining a two dimensional grid with local mesh refinement. As part of our
pricing model we additionally present a fast parallel iterative algorithm for
the creation of dynamically sized cumulative probability lookup tables that are
able to cap maximum estimated linear interpolation error. We optimise
performance for probability distributions that exhibit clustering of linear
interpolation error. We also make an empirical assessment of error propagation
through our pricing model as a result of changes in accuracy parameters within
the pricing model's multiple algorithmic steps. Algorithms are implemented on a
GPU (graphics processing unit) using Nvidia's Fermi architecture. The pricing
model targets the evaluation of spread options using copula methods, however
the presented algorithms can be applied to a wider class of financial
instruments.
|
1301.3120 | An edge density definition of overlapping and weighted graph communities | physics.soc-ph cs.SI | Community detection in networks refers to the process of seeking strongly
internally connected groups of nodes which are weakly externally connected. In
this work, we introduce and study a community definition based on internal edge
density. Beginning with the simple concept that edge density equals number of
edges divided by maximal number of edges, we apply this definition to a variety
of node and community arrangements to show that our definition yields sensible
results. Our community definition is equivalent to that of the Absolute Potts
Model community detection method (Phys. Rev. E 81, 046114 (2010)), and the
performance of that method validates the usefulness of our definition across a
wide variety of network types. We discuss how this definition can be extended
to weighted graphs and multigraphs, and how the definition is capable of handling
overlapping communities and local algorithms. We further validate our
definition against the recently proposed Affiliation Graph Model
(arXiv:1205.6228 [cs.SI]) and show that we can precisely solve these
benchmarks. More than proposing an end-all community definition, we explain how
studying the detailed properties of community definitions is important in order
to validate that definitions do not have negative analytic properties. We urge
that community definitions be separated from community detection algorithms and
propose that community definitions be further evaluated by criteria such as
these.
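The abstract's opening definition (edge density equals number of edges divided by maximal number of edges) can be sketched directly; the function and variable names below are our own illustration, not the paper's code:

```python
def internal_edge_density(edges, community):
    """Internal edge density of a community: number of edges inside the
    community divided by the maximal possible number, n*(n-1)/2."""
    nodes = set(community)
    n = len(nodes)
    if n < 2:
        return 0.0
    internal = sum(1 for u, v in edges if u in nodes and v in nodes)
    return internal / (n * (n - 1) // 2)

# A 4-node clique plus one weakly attached node.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(internal_edge_density(edges, [0, 1, 2, 3]))     # 1.0 (a clique)
print(internal_edge_density(edges, [0, 1, 2, 3, 4]))  # 0.7 (7 of 10 edges)
```

Including the peripheral node 4 lowers the density, which is the behavior a density-based community definition relies on.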
|
1301.3154 | Coherent Quantum Filtering for Physically Realizable Linear Quantum
Plants | quant-ph cs.SY math.OC math.PR | The paper is concerned with a problem of coherent (measurement-free)
filtering for physically realizable (PR) linear quantum plants. The state
variables of such systems satisfy canonical commutation relations and are
governed by linear quantum stochastic differential equations, dynamically
equivalent to those of an open quantum harmonic oscillator. The problem is to
design another PR quantum system, connected unilaterally to the output of the
plant and playing the role of a quantum filter, so as to minimize a mean square
discrepancy between the dynamic variables of the plant and the output of the
filter. This coherent quantum filtering (CQF) formulation is a simplified
feedback-free version of the coherent quantum LQG control problem which remains
open despite recent studies. The CQF problem is transformed into a constrained
covariance control problem which is treated by using the Frechet
differentiation of an appropriate Lagrange function with respect to the
matrices of the filter.
|
1301.3174 | Loss Visibility Optimized Real-time Video Transmission over MIMO Systems | cs.IT cs.MM math.IT | The structured nature of video data motivates introducing video-aware
decisions that make use of this structure for improved video transmission over
wireless networks. In this paper, we introduce an architecture for real-time
video transmission over multiple-input multiple-output (MIMO) wireless
communication systems using loss visibility side information. We quantify the
perceptual importance of a packet through the packet loss visibility and use
the loss visibility distribution to provide a notion of relative packet
importance. To jointly achieve video quality and low latency, we define the
optimization objective function as the throughput weighted by the loss
visibility of each packet, a proxy for the total perceptual value of successful
packets per unit time. We solve the problem of mapping video packets to MIMO
subchannels and adapting per-stream rates to maximize the proposed objective.
We show that the solution enables jointly reaping gains in terms of improved
video quality and lower latency. Optimized packet-stream mapping enables
transmission of more relevant packets over more reliable streams while unequal
modulation opportunistically increases the transmission rate on the stronger
streams to enable low latency delivery of high priority packets. We extend the
solution to capture codebook-based limited feedback and MIMO mode adaptation.
Results show that the composite quality and throughput gains are significant
under full channel state information as well as limited feedback. Tested on
H.264-encoded video sequences, for a 4x4 MIMO with 3 spatial streams, the
proposed architecture achieves 8 dB power reduction for the same video quality
and supports 2.4x higher throughput due to unequal modulation. Furthermore, the
gains are achieved at the expense of a few bits of cross-layer overhead rather
than a complex cross-layer design.
|
1301.3192 | Matrix Approximation under Local Low-Rank Assumption | cs.LG stat.ML | Matrix approximation is a common tool in machine learning for building
accurate prediction models for recommendation systems, text mining, and
computer vision. A prevalent assumption in constructing matrix approximations
is that the partially observed matrix is of low-rank. We propose a new matrix
approximation model where we assume instead that the matrix is only locally of
low-rank, leading to a representation of the observed matrix as a weighted sum
of low-rank matrices. We analyze the accuracy of the proposed local low-rank
modeling. Our experiments show improvements in prediction accuracy in
recommendation tasks.
|
1301.3193 | Learning Graphical Model Parameters with Approximate Marginal Inference | cs.LG cs.CV | Likelihood-based learning of graphical models faces challenges of
computational complexity and robustness to model mis-specification. This paper
studies methods that fit parameters directly to maximize a measure of the
accuracy of predicted marginals, taking into account both model and inference
approximations at training time. Experiments on imaging problems suggest
marginalization-based learning performs better than likelihood-based
approximations on difficult problems where the model being fit is approximate
in nature.
|
1301.3195 | Audio Classical Composer Identification by Deep Neural Network | cs.NE cs.IR | Audio Classical Composer Identification (ACC) is an important problem in
Music Information Retrieval (MIR) which aims at identifying the composer for
audio classical music clips. The famous annual competition, Music Information
Retrieval Evaluation eXchange (MIREX), also takes it as one of the four
training&testing tasks. We built a hybrid model based on Deep Belief Network
(DBN) and Stacked Denoising Autoencoder (SDA) to identify the composer from
audio signals. Due to copyright restrictions, the sponsors of MIREX cannot
publish their data set, so we built a comparable data set to test our model. We
achieved an accuracy of 76.26% on our data set, which is better than some pure
models and shallow models. We believe our method is promising even though we
tested it on a different data set, since our data set is comparable in size to that of MIREX. We also
found that samples from different classes become farther away from each other
when transformed by more layers in our model.
|
1301.3214 | The Manifold of Human Emotions | cs.CL | Sentiment analysis predicts the presence of positive or negative emotions in
a text document. In this paper, we consider higher dimensional extensions of
the sentiment concept, which represent a richer set of human emotions. Our
approach goes beyond previous work in that our model contains a continuous
manifold rather than a finite set of human emotions. We investigate the
resulting model, compare it to psychological observations, and explore its
predictive capabilities.
|
1301.3220 | A Low-Complexity Encoding of Quasi-Cyclic Codes Based on Galois Fourier
Transform | cs.IT math.IT | The encoding complexity of a general (en,ek) quasi-cyclic code is
O[(e^2)(n-k)k]. This paper presents a novel low-complexity encoding algorithm
for quasi-cyclic (QC) codes based on matrix transformation. First, a message
vector is encoded into a transformed codeword in the transform domain. Then,
the transmitted codeword is obtained from the transformed codeword by the
inverse Galois Fourier transform. For binary QC codes, a simple and fast
mapping is required to post-process the transformed codeword such that the
transmitted codeword is binary as well. The complexity of our proposed encoding
algorithm is O[e(n-k)k] symbol operations for non-binary codes and
O[ek(n-k)(log_2 e)] bit operations for binary codes. These complexities are
much lower than their traditional counterpart O[(e^2)(n-k)k]. For example, our
complexity of encoding a 64-ary (4095,2160) QC code is only 1.59% of that of
traditional encoding, and our complexities of encoding the binary (4095, 2160)
and (8176, 7154) QC codes are respectively 9.52% and 1.77% of those of
traditional encoding. We also study the application of our low-complexity
encoding algorithm to one of the most important subclasses of QC codes, namely
QC-LDPC codes, especially when their parity-check matrices are rank deficient.
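The quoted complexity ratios can be sanity-checked: the new-to-traditional ratio is 1/e for non-binary codes and log2(e)/e for binary codes. The values e = 63 for the (4095, 2160) code and e = 511 for the (8176, 7154) code are our inference from the quoted percentages, not stated explicitly in the abstract:

```python
import math

def nonbinary_ratio(e):
    # O(e(n-k)k) / O(e^2(n-k)k) = 1/e
    return 1 / e

def binary_ratio(e):
    # O(e k(n-k) log2 e) / O(e^2(n-k)k) = log2(e)/e
    return math.log2(e) / e

print(f"{nonbinary_ratio(63):.2%}")  # 1.59%, matching the quoted figure
print(f"{binary_ratio(511):.2%}")    # ~1.76%, close to the quoted 1.77%
```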
|
1301.3224 | Efficient Learning of Domain-invariant Image Representations | cs.LG | We present an algorithm that learns representations which explicitly
compensate for domain mismatch and which can be efficiently realized as linear
classifiers. Specifically, we form a linear transformation that maps features
from the target (test) domain to the source (training) domain as part of
training the classifier. We optimize both the transformation and classifier
parameters jointly, and introduce an efficient cost function based on
misclassification loss. Our method combines several features previously
unavailable in a single algorithm: multi-class adaptation through
representation learning, ability to map across heterogeneous feature spaces,
and scalability to large datasets. We present experiments on several image
datasets that demonstrate improved accuracy and computational advantages
compared to previous approaches.
|
1301.3226 | The Expressive Power of Word Embeddings | cs.LG cs.CL stat.ML | We seek to better understand the difference in quality of the several
publicly released embeddings. We propose several tasks that help to distinguish
the characteristics of different embeddings. Our evaluation of sentiment
polarity and synonym/antonym relations shows that embeddings are able to
capture surprisingly nuanced semantics even in the absence of sentence
structure. Moreover, benchmarking the embeddings shows great variance in
quality and characteristics of the semantics captured by the tested embeddings.
Finally, we show the impact of varying the number of dimensions and the
resolution of each dimension on the effective useful features captured by the
embedding space. Our contributions highlight the importance of embeddings for
NLP tasks and the effect of their quality on the final results.
|
1301.3235 | Robust control of quantum gates via sequential convex programming | quant-ph cs.SY | Resource tradeoffs can often be established by solving an appropriate robust
optimization problem for a variety of scenarios involving constraints on
optimization variables and uncertainties. Using an approach based on sequential
convex programming, we demonstrate that quantum gate transformations can be
made substantially robust against uncertainties while simultaneously using
limited resources of control amplitude and bandwidth. Achieving such a high
degree of robustness requires a quantitative model that specifies the range and
character of the uncertainties. Using a model of a controlled one-qubit system
for illustrative simulations, we identify robust control fields for a universal
gate set and explore the tradeoff between the worst-case gate fidelity and the
field fluence. Our results demonstrate that, even for this simple model, there
exists a rich variety of control design possibilities. In addition, we study the
effect of noise represented by a stochastic uncertainty model.
|
1301.3248 | Sparse Recovery with Coherent Tight Frames via Analysis Dantzig Selector
and Analysis LASSO | cs.IT math.IT math.NA | This article considers recovery of signals that are sparse or approximately
sparse in terms of a (possibly) highly overcomplete and coherent tight frame
from undersampled data corrupted with additive noise. We show that the properly
constrained $l_1$-analysis, called analysis Dantzig selector, stably recovers a
signal which is nearly sparse in terms of a tight frame provided that the
measurement matrix satisfies a restricted isometry property adapted to the
tight frame. As a special case, we consider the Gaussian noise. Further, under
a sparsity scenario, with high probability, the recovery error from noisy data
is within a log-like factor of the minimax risk over the class of vectors which
are at most $s$-sparse in terms of the tight frame. Similar results are shown
for the analysis LASSO.
The above two algorithms provide guarantees only for noise that is bounded or
bounded with high probability (for example, Gaussian noise). However, when the
underlying measurements are corrupted by sparse noise, these algorithms perform
suboptimally. We demonstrate robust methods for reconstructing signals that are
nearly sparse in terms of a tight frame in the presence of bounded noise
combined with sparse noise. The analysis in this paper is based on the
restricted isometry property adapted to a tight frame, which is a natural
extension to the standard restricted isometry property.
|
1301.3258 | New variant of ElGamal signature scheme | cs.CR cs.IT math.IT | In this paper, a new variant of the ElGamal signature scheme is presented and
its security is analyzed. We also give, for its theoretical interest, a general form
of the signature equation.
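For context, here is a toy sketch of the classic ElGamal signature scheme that the paper varies; the abstract does not specify the new variant, so this is only the textbook baseline, with illustratively small parameters (real deployments use large primes):

```python
import random
from math import gcd

p, g = 467, 2      # public prime and generator (toy-sized)
x = 127            # private key
y = pow(g, x, p)   # public key

def sign(h):
    """Sign a message hash h: pick ephemeral k coprime to p-1,
    then r = g^k mod p and s = k^{-1}(h - x*r) mod (p-1)."""
    while True:
        k = random.randrange(2, p - 1)
        if gcd(k, p - 1) == 1:
            break
    r = pow(g, k, p)
    s = (h - x * r) * pow(k, -1, p - 1) % (p - 1)
    return r, s

def verify(h, r, s):
    """Accept iff g^h == y^r * r^s (mod p)."""
    if not (0 < r < p):
        return False
    return pow(g, h, p) == (pow(y, r, p) * pow(r, s, p)) % p

r, s = sign(100)
print(verify(100, r, s))  # True
print(verify(101, r, s))  # False: the signature binds the hash
```

The verification identity holds because y^r * r^s = g^(xr + ks) and s is chosen so that xr + ks ≡ h (mod p-1).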
|
1301.3323 | Auto-pooling: Learning to Improve Invariance of Image Features from
Image Sequences | cs.CV cs.LG | Learning invariant representations from images is one of the hardest
challenges facing computer vision. Spatial pooling is widely used to create
invariance to spatial shifting, but it is restricted to convolutional models.
In this paper, we propose a novel pooling method that can learn soft clustering
of features from image sequences. It is trained to improve the temporal
coherence of features, while keeping the information loss at minimum. Our
method does not use spatial information, so it can be used with
non-convolutional models too. Experiments on images extracted from natural
videos showed that our method can cluster similar features together. When
trained on convolutional features, auto-pooling outperformed traditional
spatial pooling on an image classification task, even though it does not use
the spatial topology of features.
|
1301.3342 | Barnes-Hut-SNE | cs.LG cs.CV stat.ML | The paper presents an O(N log N)-implementation of t-SNE -- an embedding
technique that is commonly used for the visualization of high-dimensional data
in scatter plots and that normally runs in O(N^2). The new implementation uses
vantage-point trees to compute sparse pairwise similarities between the input
data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm
used by astronomers to perform N-body simulations - to approximate the forces
between the corresponding points in the embedding. Our experiments show that
the new algorithm, called Barnes-Hut-SNE, leads to substantial computational
advantages over standard t-SNE, and that it makes it possible to learn
embeddings of data sets with millions of objects.
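The vantage-point tree the abstract mentions can be sketched as follows; this is our illustrative exact nearest-neighbor version, not the approximate sparse-similarity search used in Barnes-Hut-SNE itself:

```python
import random
import statistics

def dist(a, b):
    """Euclidean distance between two points given as tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build(points):
    """Recursively split points by the median distance to a vantage point."""
    if not points:
        return None
    vp, rest = points[0], points[1:]
    if not rest:
        return (vp, 0.0, None, None)
    dists = [dist(vp, p) for p in rest]
    mu = statistics.median(dists)
    inner = [p for p, d in zip(rest, dists) if d < mu]
    outer = [p for p, d in zip(rest, dists) if d >= mu]
    return (vp, mu, build(inner), build(outer))

def nearest(node, q, best=None):
    """Return (distance, point) of q's nearest neighbor in the tree."""
    if node is None:
        return best
    vp, mu, inner, outer = node
    d = dist(q, vp)
    if best is None or d < best[0]:
        best = (d, vp)
    near, far = (inner, outer) if d < mu else (outer, inner)
    best = nearest(near, q, best)
    if abs(d - mu) < best[0]:  # the far side can only help within this margin
        best = nearest(far, q, best)
    return best

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
q = (0.5, 0.5)
print(nearest(build(pts), q)[0] == min(dist(p, q) for p in pts))  # True
```

The pruning rule uses the triangle inequality: points beyond the median radius are at least |d - mu| away from the query, so whole subtrees can be skipped.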
|
1301.3347 | Multi-agent learning using Fictitious Play and Extended Kalman Filter | cs.MA cs.LG math.OC stat.ML | Decentralised optimisation tasks are important components of multi-agent
systems. These tasks can be interpreted as n-player potential games: therefore
game-theoretic learning algorithms can be used to solve decentralised
optimisation tasks. Fictitious play is the canonical example of these
algorithms. Nevertheless, fictitious play implicitly assumes that players have
stationary strategies. We present a novel variant of fictitious play where
players predict their opponents' strategies using Extended Kalman filters and
use their predictions to update their strategies.
We show that in 2 by 2 games with at least one pure Nash equilibrium and in
potential games where players have two available actions, the proposed
algorithm converges to the pure Nash equilibrium. The performance of the
proposed algorithm was empirically tested, in two strategic form games and an
ad-hoc sensor network surveillance problem. The proposed algorithm performs
better than the classic fictitious play algorithm in these games and therefore
improves the performance of game-theoretical learning in decentralised
optimisation.
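As background, classic fictitious play (the baseline the paper modifies) can be sketched in a 2x2 coordination game, where it converges to a pure Nash equilibrium; the payoff matrix and belief seeding are our illustrative choices:

```python
# Both players share this coordination payoff matrix: matching actions pay 1.
PAYOFF = [[1, 0],
          [0, 1]]

def best_response(opponent_counts):
    """Best-respond to the empirical frequency of the opponent's actions."""
    expected = [sum(PAYOFF[a][b] * opponent_counts[b] for b in range(2))
                for a in range(2)]
    return max(range(2), key=lambda a: expected[a])

# counts[i][a] = times player i has been observed playing action a;
# beliefs are seeded with one fictitious play of action 0 by each player.
counts = [[1, 0], [1, 0]]
for _ in range(50):
    a0 = best_response(counts[1])  # player 0 responds to player 1's history
    a1 = best_response(counts[0])  # and vice versa
    counts[0][a0] += 1
    counts[1][a1] += 1
print((a0, a1))  # (0, 0): play settles on a pure Nash equilibrium
```

The paper's variant replaces these raw empirical frequencies with Extended Kalman filter predictions of the opponents' strategies.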
|
1301.3369 | Self-synchronizing pulse position modulation with error tolerance | cs.IT math.CO math.IT | Pulse position modulation (PPM) is a popular signal modulation technique
which creates M-ary data by means of the position of a pulse within a time
interval. While PPM and its variations have great advantages in many contexts,
this type of modulation is vulnerable to loss of synchronization, potentially
causing a severe error floor or throughput penalty even when little or no noise
is assumed. Another disadvantage is that this type of modulation typically
offers no error correction mechanism of its own, making it sensitive to
intersymbol interference and environmental noise. In this paper we propose a
coding theoretic variation of PPM that allows for significantly more efficient
symbol and frame synchronization as well as strong error correction. The
proposed scheme can be divided into a synchronization layer and a modulation
layer. This makes our technique compatible with major existing techniques such
as standard PPM, multipulse PPM, and expurgated PPM, in that the scheme
can be realized by adding a simple synchronization layer to one of these
standard techniques. We also develop a generalization of expurgated PPM suited
for the modulation layer of the proposed self-synchronizing modulation scheme.
This generalized PPM can also be used as stand-alone error-correcting PPM with
a larger number of available symbols.
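Standard M-ary PPM, the starting point of the proposed scheme, can be sketched as follows (our illustrative code, not the paper's self-synchronizing construction); the last line hints at why a one-slot misalignment corrupts decoding:

```python
M = 8  # alphabet size; each frame carries log2(M) = 3 bits

def ppm_modulate(symbol, M=M):
    """Encode a symbol as a frame with a single pulse at that position."""
    frame = [0] * M
    frame[symbol] = 1
    return frame

def ppm_demodulate(frame):
    """Decode by locating the slot with the largest received energy."""
    return max(range(len(frame)), key=lambda i: frame[i])

frame = ppm_modulate(5)
print(frame)                  # [0, 0, 0, 0, 0, 1, 0, 0]
print(ppm_demodulate(frame))  # 5
# A one-slot timing error shifts every decoded symbol:
print(ppm_demodulate(frame[1:] + frame[:1]))  # 4, not 5
```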
|
1301.3375 | On the Identifiability of Overcomplete Dictionaries via the Minimisation
Principle Underlying K-SVD | cs.IT math.IT | This article gives theoretical insights into the performance of K-SVD, a
dictionary learning algorithm that has gained significant popularity in
practical applications. The particular question studied here is when a
dictionary $\Phi\in \mathbb{R}^{d \times K}$ can be recovered as local minimum
of the minimisation criterion underlying K-SVD from a set of $N$ training
signals $y_n =\Phi x_n$. A theoretical analysis of the problem leads to two
types of identifiability results assuming the training signals are generated
from a tight frame with coefficients drawn from a random symmetric
distribution. First, we give asymptotic results showing that, in expectation, the
generating dictionary can be recovered exactly as a local minimum of the K-SVD
criterion if the coefficient distribution exhibits sufficient decay. Second,
based on the asymptotic results it is demonstrated that given a finite number
of training samples $N$, such that $N/\log N = O(K^3d)$, except with
probability $O(N^{-Kd})$ there is a local minimum of the K-SVD criterion within
distance $O(KN^{-1/4})$ to the generating dictionary.
|
1301.3385 | Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in
DeSTIN | cs.CV | This paper presents a basic enhancement to the DeSTIN deep learning
architecture by replacing the explicitly calculated transition tables that are
used to capture temporal features with a simpler, more scalable mechanism. This
mechanism uses feedback of state information to cluster over a space comprised
of both the spatial input and the current state. The resulting architecture
achieves state-of-the-art results on the MNIST classification benchmark.
|
1301.3388 | Confluently Persistent Sets and Maps | cs.DS cs.DB | Ordered sets and maps play important roles as index structures in relational
data models. When a shared index in a multi-user system is modified
concurrently, the current state of the index will diverge into multiple
versions containing the local modifications performed in each workflow. The
confluent persistence problem arises when versions should be melded in commit
and refresh operations so that modifications performed by different users
become merged.
Confluently Persistent Sets and Maps are functional binary search trees that
support efficient set operations both when operands are disjoint and when they
are overlapping. Treap properties with hash values as priorities are maintained
and with hash-consing of nodes a unique representation is provided.
Non-destructive set merge algorithms that skip inspection of equal subtrees and
a conflict detecting meld algorithm based on set merges are presented. The meld
algorithm is used in commit and refresh operations. With m modifications in one
flow and n items in total, the expected cost of the operations is O(m
log(n/m)).
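The treap-with-hash-priorities idea can be sketched as a purely functional insert; this is our illustrative reconstruction (hash-consing and the meld algorithms are omitted), showing that the tree shape is independent of insertion order:

```python
import hashlib

class Node:
    """Immutable treap node; the key's hash serves as its heap priority."""
    __slots__ = ("key", "prio", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.prio = int(hashlib.sha256(repr(key).encode()).hexdigest(), 16)
        self.left, self.right = left, right

def insert(node, key):
    """Return a new root, sharing untouched subtrees with the old version."""
    if node is None:
        return Node(key)
    if key == node.key:
        return node
    if key < node.key:
        left = insert(node.left, key)
        if left.prio > node.prio:  # rotate right to restore heap order
            return Node(left.key, left.left, Node(node.key, left.right, node.right))
        return Node(node.key, left, node.right)
    right = insert(node.right, key)
    if right.prio > node.prio:     # rotate left to restore heap order
        return Node(right.key, Node(node.key, node.left, right.left), right.right)
    return Node(node.key, node.left, right)

def keys(node):
    return [] if node is None else keys(node.left) + [node.key] + keys(node.right)

def shape(node):
    return None if node is None else (node.key, shape(node.left), shape(node.right))

t1, t2 = None, None
for k in [3, 1, 4, 1, 5, 9, 2, 6]:
    t1 = insert(t1, k)
for k in [9, 6, 5, 4, 3, 2, 1]:
    t2 = insert(t2, k)
print(keys(t1))                # [1, 2, 3, 4, 5, 6, 9]
print(shape(t1) == shape(t2))  # True: unique representation per key set
```

Because priorities are deterministic functions of the keys, equal key sets always yield structurally identical trees, which is what makes subtree-skipping set merges and hash-consing possible.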
|
1301.3389 | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | cs.NA cs.LG | Non-negative matrix factorization (NMF) has become a popular machine learning
approach to many problems in text mining, speech and image processing,
bio-informatics and seismic data analysis to name a few. In NMF, a matrix of
non-negative data is approximated by the low-rank product of two matrices with
non-negative entries. In this paper, the approximation quality is measured by
the Kullback-Leibler divergence between the data and its low-rank
reconstruction. The existence of the simple multiplicative update (MU)
algorithm for computing the matrix factors has contributed to the success of
NMF. Despite the availability of algorithms showing faster convergence, MU
remains popular due to its simplicity. In this paper, a diagonalized Newton
algorithm (DNA) is proposed showing faster convergence while the implementation
remains simple and suitable for high-rank problems. The DNA algorithm is
applied to various publicly available data sets, showing a substantial speed-up
on modern hardware.
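The multiplicative update (MU) baseline the paper compares against is the classic Lee-Seung rule for the KL objective; a minimal pure-Python sketch (our own illustration, not the paper's diagonalized Newton algorithm):

```python
import math
import random

def kl_div(V, W, H):
    """Generalized KL divergence D(V || WH) for lists-of-lists."""
    d = 0.0
    for i, row in enumerate(V):
        for j, v in enumerate(row):
            wh = sum(W[i][a] * H[a][j] for a in range(len(H)))
            d += v * math.log(v / wh) - v + wh
    return d

def mu_step(V, W, H):
    """One multiplicative update of H, then of W (Lee-Seung, KL objective)."""
    m, n, r = len(V), len(V[0]), len(H)
    def ratios():  # elementwise V / (W H), using the current W and H
        return [[V[i][j] / sum(W[i][a] * H[a][j] for a in range(r))
                 for j in range(n)] for i in range(m)]
    R = ratios()
    H = [[H[a][j] * sum(W[i][a] * R[i][j] for i in range(m))
                  / sum(W[i][a] for i in range(m))
          for j in range(n)] for a in range(r)]
    R = ratios()  # recompute with the updated H
    W = [[W[i][a] * sum(H[a][j] * R[i][j] for j in range(n))
                  / sum(H[a][j] for j in range(n))
          for a in range(r)] for i in range(m)]
    return W, H

random.seed(1)
V = [[random.uniform(0.5, 2.0) for _ in range(5)] for _ in range(4)]
W = [[random.uniform(0.1, 1.0) for _ in range(2)] for _ in range(4)]
H = [[random.uniform(0.1, 1.0) for _ in range(5)] for _ in range(2)]
d0 = kl_div(V, W, H)
for _ in range(30):
    W, H = mu_step(V, W, H)
print(kl_div(V, W, H) < d0)  # True: MU never increases the divergence
```

MU's appeal is exactly this simplicity; the paper's contribution is a Newton-style update that converges faster while staying comparably simple.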
|
1301.3391 | Feature grouping from spatially constrained multiplicative interaction | cs.LG | We present a feature learning model that learns to encode relationships
between images. The model is defined as a Gated Boltzmann Machine, which is
constrained such that hidden units that are nearby in space can gate each
other's connections. We show how frequency/orientation "columns" as well as
topographic filter maps follow naturally from training the model on image
pairs. The model also helps explain why square-pooling models yield feature
groups with similar grouping properties. Experimental results on synthetic
image transformations show that spatially constrained gating is an effective
way to reduce the number of parameters and thereby to regularize a
transformation-learning model.
|
1301.3452 | Exponential communication gap between weak and strong classical
simulations of quantum communication | quant-ph cs.IT math.IT | The most trivial way to simulate classically the communication of a quantum
state is to transmit the classical description of the quantum state itself.
However, this requires an infinite amount of classical communication if the
simulation is exact. A more intriguing and potentially less demanding strategy
would encode the full information about the quantum state into the probability
distribution of the communicated variables, so that this information is never
sent in any single shot. This kind of simulation is called weak, as opposed to
strong simulations, where the quantum state is communicated in individual
shots. In this paper, we introduce a bounded-error weak protocol for simulating
the communication of an arbitrary number of qubits and a subsequent two-outcome
measurement consisting of an arbitrary pure state projector and its complement.
This protocol requires an amount of classical communication independent of the
number of qubits and proportional to Delta^{-1}, where Delta is the error and a
free parameter of the protocol. Conversely, a bounded-error strong protocol
requires an amount of classical communication growing exponentially with the
number of qubits for a fixed error. Our result improves a previous protocol,
based on the Johnson-Lindenstrauss lemma, with communication cost scaling as
Delta^{-2} log Delta^{-1}.
|
1301.3457 | A Geometric Descriptor for Cell-Division Detection | cs.CV | We describe a method for cell-division detection based on a geometry-driven
descriptor that can be represented as a five-layer processing network, based
mainly on wavelet filtering and a test for mirror symmetry between pairs of
pixels. After the centroids of the descriptors are computed for a sequence of
frames, the two-step piecewise constant function that best fits the sequence
of centroids determines the frame where the division occurs.
|
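The change-point fit described in the abstract above can be sketched as follows. This is an illustrative reconstruction (the 1-D least-squares formulation and the function name are assumptions, not taken from the paper):

```python
def best_change_point(values):
    """Fit a two-step piecewise constant function to a 1-D sequence by
    least squares. Returns (k, sse): the index where the second constant
    segment begins and the total squared error of the best fit."""
    n = len(values)
    best = (1, float("inf"))
    for k in range(1, n):  # candidate start of the second segment
        left, right = values[:k], values[k:]
        m1 = sum(left) / len(left)    # mean of the first segment
        m2 = sum(right) / len(right)  # mean of the second segment
        sse = sum((v - m1) ** 2 for v in left) + sum((v - m2) ** 2 for v in right)
        if sse < best[1]:
            best = (k, sse)
    return best
```

Applied to a sequence of centroid coordinates, the returned index would mark the frame where the division is detected.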
1301.3461 | Factorized Topic Models | cs.LG cs.CV cs.IR | In this paper we present a modification to a latent topic model, which makes
the model exploit supervision to produce a factorized representation of the
observed data. The structured parameterization separates variance that is
shared between classes from variance that is private to each class by the
introduction of a new prior over the topic space. The approach allows for a
more efficient inference and provides an intuitive interpretation of the data
in terms of an informative signal together with structured noise. The
factorized representation is shown to enhance inference performance for image,
text, and video classification.
|
1301.3468 | Boltzmann Machines and Denoising Autoencoders for Image Denoising | stat.ML cs.CV cs.LG | Image denoising based on a probabilistic model of local image patches has
been employed by various researchers, and recently a deep (denoising)
autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as
a good model for this. In this paper, we propose that another popular family of
models in the field of deep learning, called Boltzmann machines, can perform
image denoising as well as, or in certain cases of high noise levels, better
than denoising autoencoders. We empirically evaluate the two models on three
different sets of images with different types and levels of noise. Throughout
the experiments we also examine the effect of the depth of the models. The
experiments confirmed our claim and revealed that the performance can be
improved by adding more hidden layers, especially when the level of noise is
high.
|
1301.3476 | Pushing Stochastic Gradient towards Second-Order Methods --
Backpropagation Learning with Transformations in Nonlinearities | cs.LG cs.CV stat.ML | Recently, we proposed to transform the outputs of each hidden neuron in a
multi-layer perceptron network to have zero output and zero slope on average,
and use separate shortcut connections to model the linear dependencies instead.
We continue the work by firstly introducing a third transformation to normalize
the scale of the outputs of each hidden neuron, and secondly by analyzing the
connections to second order optimization methods. We show that the
transformations make a simple stochastic gradient behave closer to second-order
optimization methods and thus speed up learning. This is shown both in theory
and with experiments. The experiments on the third transformation show that
while it further increases the speed of learning, it can also hurt performance
by converging to a worse local optimum, where both the inputs and outputs of
many hidden neurons are close to zero.
|
1301.3485 | A Semantic Matching Energy Function for Learning with Multi-relational
Data | cs.LG | Large-scale relational learning becomes crucial for handling the huge amounts
of structured data generated daily in many application domains, ranging from
computational biology and information retrieval to natural language processing.
In this paper, we present a new neural network architecture designed to embed
multi-relational graphs into a flexible continuous vector space in which the
original data is kept and enhanced. The network is trained to encode the
semantics of these graphs in order to assign high probabilities to plausible
components. We empirically show that it reaches competitive performance in link
prediction on standard datasets from the literature.
|
1301.3488 | Various improvements to text fingerprinting | cs.DS cs.DM cs.IR | Let s = s_1 .. s_n be a text (or sequence) on a finite alphabet \Sigma of
size \sigma. A fingerprint in s is the set of distinct characters appearing in
one of its substrings. The problem considered here is to compute the set {\cal
F} of all fingerprints of all substrings of s in order to answer efficiently
certain questions on this set. A substring s_i .. s_j is a maximal location for
a fingerprint f in {\cal F} (denoted by <i,j>) if the alphabet of s_i .. s_j is f and
s_{i-1} and s_{j+1}, if defined, are not in f. The set of maximal locations in s is
{\cal L} (it is easy to see that |{\cal L}| \leq n \sigma). Two maximal
locations <i,j> and <k,l> such that s_i .. s_j = s_k .. s_l are named {\em
copies}, and the quotient set of {\cal L} according to the copy relation is
denoted by {\cal L}_C. We present new exact and approximate efficient
algorithms and data structures for the following three problems: (1) to compute
{\cal F}; (2) given f as a set of distinct characters in \Sigma, to answer if f
represents a fingerprint in {\cal F}; (3) given f, to find all maximal
locations of f in s.
|
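The fingerprint set {\cal F} defined in the abstract above can be enumerated naively. This is an illustrative sketch of the definition only, not the paper's efficient algorithm:

```python
def all_fingerprints(s):
    """Enumerate the fingerprint set F of a string s: every set of
    distinct characters appearing in some substring of s.
    Naive O(n^2) enumeration over all substring start/end positions."""
    F = set()
    for i in range(len(s)):
        seen = set()
        for j in range(i, len(s)):
            seen.add(s[j])           # alphabet of substring s[i..j]
            F.add(frozenset(seen))   # record it as a fingerprint
    return F
```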
1301.3509 | Kidney Exchange in Dynamic Sparse Heterogenous Pools | cs.DS cs.SI | Current kidney exchange pools are of moderate size and thin, as they consist
of many highly sensitized patients. Creating a thicker pool can be done by
waiting for many pairs to arrive. We analyze a simple class of matching
algorithms that search periodically for allocations. We find that if only 2-way
cycles are conducted, in order to gain a significant amount of matches over the
online scenario (matching each time a new incompatible pair joins the pool) the
waiting period should be "very long". If 3-way cycles are also allowed we find
regimes in which waiting for a short period also increases the number of
matches considerably. Finally, a significant increase of matches can be
obtained by using even one non-simultaneous chain while still matching in an
online fashion. Our theoretical findings and data-driven computational
experiments lead to policy recommendations.
|
1301.3516 | Learnable Pooling Regions for Image Classification | cs.CV cs.LG | Biologically inspired, from the early HMAX model to Spatial Pyramid Matching,
pooling has played an important role in visual recognition pipelines. Spatial
pooling, by grouping of local codes, equips these methods with a certain degree
of robustness to translation and deformation while preserving important spatial
information. Despite the predominance of this approach in current recognition
systems, we have seen little progress to fully adapt the pooling strategy to
the task at hand. This paper proposes a model for learning a task-dependent
pooling scheme -- including previously proposed hand-crafted pooling schemes as
a particular instantiation. In our work, we investigate the role of different
regularization terms showing that the smooth regularization term is crucial to
achieve strong performance using the presented architecture. Finally, we
propose an efficient and parallel method to train the model. Our experiments
show improved performance over hand-crafted pooling schemes on the CIFAR-10 and
CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on
the latter.
|
1301.3524 | How good is the Electricity benchmark for evaluating concept drift
adaptation | cs.LG | In this correspondence, we will point out a problem with testing adaptive
classifiers on autocorrelated data. In such a case random change alarms may
boost the accuracy figures. Hence, we cannot be sure if the adaptation is
working well.
|
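The effect described in the abstract above can be demonstrated on synthetic autocorrelated labels (an assumed stand-in for the Electricity data, not the actual benchmark): a trivial classifier that always predicts the previous label scores high accuracy without learning anything, which is why change alarms can inflate accuracy figures.

```python
import random

def persistence_accuracy(labels):
    """Accuracy of the trivial 'predict the previous label' classifier."""
    hits = sum(a == b for a, b in zip(labels, labels[1:]))
    return hits / (len(labels) - 1)

random.seed(0)
# Autocorrelated binary stream: the label flips with probability 0.05,
# so consecutive labels agree about 95% of the time.
labels = [0]
for _ in range(9999):
    labels.append(labels[-1] ^ (random.random() < 0.05))
```

On such a stream the persistence baseline approaches 95% accuracy, so any adaptive classifier must be compared against it rather than against chance.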
1301.3527 | Block Coordinate Descent for Sparse NMF | cs.LG cs.NA | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data
analysis. An important variant is the sparse NMF problem which arises when we
explicitly require the learnt features to be sparse. A natural measure of
sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms,
such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based
on intuitive attributes that such measures need to satisfy. This is in contrast
to computationally cheaper alternatives such as the plain L$_1$ norm. However,
present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow
and other formulations for sparse NMF have been proposed such as those based on
L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm
sparsity constraints while not sacrificing computation time. We present
experimental evidence on real-world datasets that shows our new algorithm
performs an order of magnitude faster compared to the current state-of-the-art
solvers optimizing the mixed norm and is suitable for large-scale datasets.
|
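One common L$_1$/L$_2$ mixed-norm sparsity measure consistent with the discussion above is Hoyer's measure; whether this exact normalization is the one optimized in the paper is an assumption, so the sketch below is illustrative only:

```python
import math

def hoyer_sparsity(x):
    """Hoyer's L1/L2 sparsity of a nonzero vector x.
    Equals 1 for a vector with a single nonzero entry and 0 when all
    entries have equal magnitude."""
    n = len(x)
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1)
```

The measure interpolates smoothly between the two extremes, which is one of the intuitive attributes that mixed norms satisfy and the plain L$_1$ norm does not.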
1301.3528 | An Efficient Sufficient Dimension Reduction Method for Identifying
Genetic Variants of Clinical Significance | q-bio.GN cs.LG stat.ML | Faster and cheaper next-generation sequencing technologies will generate
unprecedentedly massive, high-dimensional genomic and epigenomic variation
data. In the near future, a routine part of the medical record will include
sequenced genomes. A fundamental question is how to efficiently extract genomic
and epigenomic variants of clinical utility, which will provide information for
optimal wellness and intervention strategies. The traditional paradigm for
identifying variants of clinical validity is to test the association of the
variants. However, significantly associated genetic variants may or may not be
useful for the diagnosis and prognosis of diseases. An alternative to
association studies for finding genetic variants of predictive utility is to
systematically search for variants that contain sufficient information for
phenotype prediction. To achieve this, we introduce the concepts of sufficient
dimension reduction (SDR) and the coordinate hypothesis, which project the
original high-dimensional data to a very low-dimensional space while preserving
all information on response phenotypes. We then formulate the clinically
significant genetic variant discovery problem as a sparse SDR problem and
develop algorithms that can select significant genetic variants from up to ten
million or even more predictors, with the aid of dividing the SDR for the whole
genome into a number of sub-SDR problems defined for genomic regions. The
sparse SDR is in turn formulated as a sparse optimal scoring problem, but with
a penalty that can remove row vectors from the basis matrix. To speed up
computation, we develop a modified alternating direction method of multipliers
to solve the sparse optimal scoring problem, which can easily be implemented in
parallel. To illustrate its application, the proposed method is applied to
simulation data and the NHLBI's Exome Sequencing Project dataset.
|
1301.3530 | The Neural Representation Benchmark and its Evaluation on Brain and
Machine | cs.NE cs.CV cs.LG q-bio.NC | A key requirement for the development of effective learning representations
is their evaluation and comparison to representations we know to be effective.
In natural sensory domains, the community has viewed the brain as a source of
inspiration and as an implicit benchmark for success. However, it has not been
possible to test representational learning algorithms directly against
the representations contained in neural systems. Here, we propose a new
benchmark for visual representations on which we have directly tested the
neural representation in multiple visual cortical areas in macaque (utilizing
data from [Majaj et al., 2012]), and on which any computer vision algorithm
that produces a feature space can be tested. The benchmark measures the
effectiveness of the neural or machine representation by computing the
classification loss on the ordered eigendecomposition of a kernel matrix
[Montavon et al., 2011]. In our analysis we find that the neural representation
in visual area IT is superior to visual area V4. In our analysis of
representational learning algorithms, we find that three-layer models approach
the representational performance of V4 and the algorithm in [Le et al., 2012]
surpasses the performance of V4. Impressively, we find that a recent supervised
algorithm [Krizhevsky et al., 2012] achieves performance comparable to that of
IT for an intermediate level of image variation difficulty, and surpasses IT at
a higher difficulty level. We believe this result represents a major milestone:
it is the first learning algorithm we have found that exceeds our current
estimate of IT representation performance. We hope that this benchmark will
assist the community in matching the representational performance of visual
cortex and will serve as an initial rallying point for further correspondence
between representations derived in brains and machines.
|
1301.3533 | Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint | cs.NE cs.LG stat.ML | Deep Belief Networks (DBN) have been successfully applied on popular machine
learning tasks. Specifically, when applied on hand-written digit recognition,
DBNs have achieved accuracy rates of approximately 98.8%. In an effort to
optimize the data representation achieved by the DBN and maximize their
descriptive power, recent advances have focused on inducing sparse constraints
at each layer of the DBN. In this paper we present a theoretical approach for
sparse constraints in the DBN using the mixed norm for both non-overlapping and
overlapping groups. We explore how these constraints affect the classification
accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES)
and provide initial estimations of their usefulness by altering different
parameters such as the group size and overlap percentage.
|