| id | title | categories | abstract |
|---|---|---|---|
1203.3879
|
Powerline Communications Channel Modelling Methodology Based on
Statistical Features
|
cs.IT math.IT
|
This paper proposes a new channel modelling method for powerline
communications networks based on the multipath profile in the time domain. The
new channel model is developed to be applied in a range of Powerline
Communications (PLC) research topics such as impulse noise modelling,
deployment and coverage studies, and communications theory analysis. To develop
the methodology, channels are categorised according to their propagation
distance and power delay profile. The statistical multipath parameters such as
path arrival time, magnitude and interval for each category are analyzed to
build the model. Each generated channel based on the proposed statistical model
represents a different realisation of a PLC network. Simulation results in
the time and frequency domains show that the proposed statistical modelling
method, which integrates the impact of network topology, presents PLC channel
features similar to those of the underlying transmission line theory model.
Furthermore, two potential application scenarios are described to show the
channel model is applicable to capacity analysis and correlated impulse noise
modelling for PLC networks.
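Purely as an illustration of the kind of statistical channel generator the abstract describes, the sketch below draws one random multipath realisation from assumed distributions (exponential inter-arrival times, exponentially decaying magnitudes with random polarity) and computes its frequency response; all distributions and parameter values are hypothetical, not the paper's fitted statistics.

```python
import numpy as np

def random_plc_channel(mean_interval_ns=50.0, decay_per_us=2.0,
                       n_paths=15, fs_hz=100e6, n_taps=2048, rng=None):
    """Draw one hypothetical multipath PLC channel realisation.

    Path inter-arrival times are exponential (Poisson arrivals), path
    magnitudes decay exponentially with delay, and polarities are random.
    All parameter values are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(rng)
    intervals = rng.exponential(mean_interval_ns * 1e-9, size=n_paths)
    delays = np.cumsum(intervals)                      # path arrival times [s]
    mags = np.exp(-decay_per_us * delays * 1e6)        # exponential decay with delay
    mags *= rng.choice([-1.0, 1.0], size=n_paths)      # random path polarity

    h = np.zeros(n_taps)                               # discrete impulse response
    taps = np.minimum((delays * fs_hz).astype(int), n_taps - 1)
    np.add.at(h, taps, mags)

    H = np.fft.rfft(h)                                 # frequency response
    freqs = np.fft.rfftfreq(n_taps, d=1.0 / fs_hz)
    return h, freqs, H

h, freqs, H = random_plc_channel(rng=0)
print(f"paths span {np.count_nonzero(h)} taps; |H| at DC = {abs(H[0]):.3f}")
```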
|
1203.3887
|
Learning loopy graphical models with latent variables: Efficient methods
and guarantees
|
stat.ML cs.AI cs.LG math.ST stat.TH
|
The problem of structure estimation in graphical models with latent variables
is considered. We characterize conditions for tractable graph estimation and
develop efficient methods with provable guarantees. We consider models where
the underlying Markov graph is locally tree-like, and the model is in the
regime of correlation decay. For the special case of the Ising model, the
number of samples $n$ required for structural consistency of our method scales
as $n=\Omega(\theta_{\min}^{-\delta\eta(\eta+1)-2}\log p)$, where $p$ is the
number of variables, $\theta_{\min}$ is the minimum edge potential, $\delta$ is
the depth (i.e., distance from a hidden node to the nearest observed nodes),
and $\eta$ is a parameter which depends on the bounds on node and edge
potentials in the Ising model. Necessary conditions for structural consistency
under any algorithm are derived and our method nearly matches the lower bound
on sample requirements. Further, the proposed method is practical to implement
and provides flexibility to control the number of latent variables and the
cycle lengths in the output graph.
|
1203.3935
|
Distributed Cooperative Q-learning for Power Allocation in Cognitive
Femtocell Networks
|
cs.LG cs.GT
|
In this paper, we propose a distributed reinforcement learning (RL) technique
called distributed power control using Q-learning (DPC-Q) to manage the
interference caused by the femtocells on macro-users in the downlink. The DPC-Q
leverages Q-learning to identify a sub-optimal power allocation pattern that
strives to maximize the femtocell capacity while guaranteeing the macrocell
capacity level in an underlay cognitive setting. We propose two different
approaches for the DPC-Q algorithm, namely independent and cooperative. In
the former, femtocells learn independently from each other while in the latter,
femtocells share some information during learning in order to enhance their
performance. Simulation results show that the independent approach is capable
of mitigating the interference generated by the femtocells on macro-users.
Moreover, the results show that cooperation enhances the performance of the
femtocells in terms of speed of convergence, fairness and aggregate femtocell
capacity.
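A minimal tabular Q-learning loop of the kind DPC-Q builds on, shown for a single femtocell choosing among discrete power levels; the state space, reward stub, and parameters below are hypothetical placeholders rather than the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_powers = 8, 4            # hypothetical discretisation
Q = np.zeros((n_states, n_powers))
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def step(state, power):
    """Toy environment stub: returns (reward, next_state).
    A real DPC-Q agent would compute the reward from femto/macro capacities."""
    reward = -abs(power - state % n_powers)      # placeholder reward
    return reward, rng.integers(n_states)

state = 0
for _ in range(5000):
    # epsilon-greedy action selection over discrete power levels
    power = rng.integers(n_powers) if rng.random() < eps else int(Q[state].argmax())
    reward, nxt = step(state, power)
    # standard Q-learning temporal-difference update
    Q[state, power] += alpha * (reward + gamma * Q[nxt].max() - Q[state, power])
    state = nxt

print("greedy power level per state:", Q.argmax(axis=1))
```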
|
1203.3946
|
Preserving Co-Location Privacy in Geo-Social Networks
|
cs.SI cs.CY physics.soc-ph
|
The number of people on social networks has grown exponentially. Users share
very large volumes of personal information and content every day. This
content can be tagged with geo-spatial and temporal coordinates that some
users may consider sensitive. While there is clearly a demand for users to
share this information with each other, there is also substantial demand for
greater control over the conditions under which their information is shared.
Content published in a geo-aware social network (GeoSN) often involves
multiple users and is often accessible to multiple users, without the
publisher being aware of the privacy preferences of those users. This makes
it difficult for GeoSN users to control which information about them is
available and to whom. Thus, the lack of means to protect users' privacy
deters people who are concerned about privacy issues. This paper addresses a
particular privacy threat that occurs in GeoSNs: the co-location privacy
threat. It concerns the availability of information about the presence of
multiple users in the same location at given times, against their will. The
challenge addressed is that of supporting privacy while still enabling useful
services.
|
1203.3951
|
Path Planning Algorithm for Extinguishing Forest Fires
|
cs.RO
|
One of the major impacts of climate change is the destruction of forests.
Forests are destroyed in many ways, but the majority of the loss is due to
wild forest fires. In this paper we present a path planning algorithm for
extinguishing fires which uses Wireless Sensor and Actor Networks (WSANs) for
detecting fires. Since most of the work on forest fires is based on Wireless
Sensor Networks (WSNs), and a considerable body of work already covers
coverage, message transmission, node deployment, and battery depletion of
sensor nodes in WSNs, we focus on the path planning approach that moves the
Actor to the target area where the fire has occurred and extinguishes it. An
incremental approach is presented to determine the successive moves of the
Actor in environments with and without obstacles. This is done by comparing
the determined moves with target location readings obtained from the sensors
until the Actor reaches the target area to extinguish the fire.
|
1203.3967
|
Control Complexity in Bucklin, Fallback, and Plurality Voting: An
Experimental Approach
|
cs.GT cs.CC cs.MA
|
Walsh [Wal10, Wal09], Davies et al. [DKNW10, DKNW11], and Narodytska et al.
[NWX11] studied various voting systems empirically and showed that they can
often be manipulated effectively, despite their manipulation problems being
NP-hard. Such an experimental approach is sorely missing for NP-hard control
problems, where control refers to attempts to tamper with the outcome of
elections by adding/deleting/partitioning either voters or candidates. We
experimentally tackle NP-hard control problems for Bucklin and fallback voting.
Among natural voting systems with efficient winner determination, fallback
voting is currently known to display the broadest resistance to control in
terms of NP-hardness, and Bucklin voting has been shown to behave almost as
well in terms of control resistance [ER10, EPR11, EFPR11]. We also investigate
control resistance experimentally for plurality voting, one of the first voting
systems analyzed with respect to electoral control [BTT92, HHR07]. Our findings
indicate that NP-hard control problems can often be solved effectively in
practice. Moreover, our experiments allow a more fine-grained analysis and
comparison, across various control scenarios, vote distribution models, and
voting systems, than merely stating NP-hardness for all these control problems.
|
1203.4008
|
Adaptive Network Coding for Scheduling Real-time Traffic with Hard
Deadlines
|
cs.SY cs.NI
|
We study adaptive network coding (NC) for scheduling real-time traffic over a
single-hop wireless network. To meet the hard deadlines of real-time traffic,
it is critical to strike a balance between maximizing the throughput and
minimizing the risk that the entire block of coded packets may not be decodable
by the deadline. Thus motivated, we explore adaptive NC, where the block size
is adapted based on the remaining time to the deadline, by casting this
sequential block size adaptation problem as a finite-horizon Markov decision
process. One interesting finding is that the optimal block size and its
corresponding action space monotonically decrease as the deadline approaches,
and the optimal block size is bounded by the "greedy" block size. These unique
structures make it possible to narrow down the search space of dynamic
programming, building on which we develop a monotonicity-based backward
induction algorithm (MBIA) that can solve for the optimal block size in
polynomial time. Since channel erasure probabilities would be time-varying in a
mobile network, we further develop a joint real-time scheduling and channel
learning scheme with adaptive NC that can adapt to channel dynamics. We also
generalize the analysis to multiple flows with hard deadlines and long-term
delivery ratio constraints, devise a low-complexity online scheduling algorithm
integrated with the MBIA, and then establish its asymptotic
throughput-optimality. In addition to analysis and simulation results, we
perform high fidelity wireless emulation tests with real radio transmissions to
demonstrate the feasibility of the MBIA in finding the optimal block size in
real time.
|
1203.4009
|
Scilab and SIP for Image Processing
|
cs.MS cs.CV
|
This paper is an overview of Image Processing and Analysis using Scilab, a
free prototyping environment for numerical calculations similar to Matlab. We
demonstrate the capabilities of SIP -- the Scilab Image Processing Toolbox --
which extends Scilab with many functions to read and write images in over 100
major file formats, including PNG, JPEG, BMP, and TIFF. It also provides
routines for image filtering, edge detection, blurring, segmentation, shape
analysis, and image recognition. Basic directions to install Scilab and SIP
are given, along with a mini-tutorial on Scilab. Three practical examples of
image analysis are presented, in increasing degrees of complexity, showing how
advanced image analysis techniques become uncomplicated in this environment.
|
1203.4011
|
Understanding Sampling Style Adversarial Search Methods
|
cs.AI
|
UCT has recently emerged as an exciting new adversarial reasoning technique
based on cleverly balancing exploration and exploitation in a Monte-Carlo
sampling setting. It has been particularly successful in the game of Go but the
reasons for its success are not well understood and attempts to replicate its
success in other domains such as Chess have failed. We provide an in-depth
analysis of the potential of UCT in domain-independent settings, in cases where
heuristic values are available, and of the effect of enhancing random playouts
to more informed playouts between two weak minimax players. To provide further
insights, we develop synthetic game tree instances and discuss interesting
properties of UCT, both empirically and analytically.
|
1203.4031
|
FEAST Eigenvalue Solver v3.0 User Guide
|
cs.MS cs.CE physics.chem-ph physics.comp-ph
|
The FEAST eigensolver package is a free high-performance numerical library
for solving Hermitian and non-Hermitian eigenvalue problems and obtaining
all the eigenvalues and (right/left) eigenvectors within a given search
interval or arbitrary contour in the complex plane. Its originality lies in a
new transformative numerical approach to traditional eigenvalue algorithm
design - the FEAST algorithm. The FEAST eigensolver combines simplicity and
efficiency, and it offers many important capabilities for achieving high
performance, robustness, accuracy, and scalability on parallel architectures.
FEAST is both a comprehensive library package and easy-to-use software. It
includes flexible reverse communication interfaces and ready-to-use predefined
interfaces for dense, banded and sparse systems. The current version v3.0 of
the FEAST package can address both Hermitian and non-Hermitian eigenvalue
problems (real symmetric, real non-symmetric, complex Hermitian, complex
symmetric, or complex general systems) on both shared-memory and
distributed-memory architectures (i.e. it contains both the FEAST-SMP and
FEAST-MPI packages). This user's guide provides instructions for installation
and setup, a detailed description of the FEAST interfaces, and a large number
of examples.
|
1203.4040
|
New decoding scheme for LDPC codes based on simple product code
structure
|
cs.IT math.IT
|
In this paper, a new decoding scheme for low-density parity-check (LDPC)
codes using the concept of simple product code structure is proposed based on
combining two independently received soft-decision data for the same codeword.
LDPC codes act as horizontal codes of the product codes and simple algebraic
codes are used as vertical codes to help decoding of the LDPC codes. The
decoding capability of the proposed decoding scheme is defined and analyzed
using the parity-check matrices of the vertical codes, and in particular the
combined decodability is derived for the case of single parity-check (SPC) and
Hamming codes being used as vertical codes. It is also shown that the proposed
decoding scheme achieves much better error-correcting capability in the high
signal-to-noise ratio (SNR) region with low additional decoding complexity,
compared with a conventional decoding scheme.
|
1203.4043
|
Your Facebook Deactivated Friend or a Cloaked Spy (Extended Abstract)
|
cs.SI cs.CY physics.soc-ph
|
With over 750 million active users, Facebook is the most famous social
networking website. One particular aspect of Facebook widely discussed in the
news and heavily researched in academic circles is the privacy of its users. In
this paper we introduce a zero-day privacy loophole in Facebook. We call this
the deactivated friend attack. The concept of the attack is very similar to
cloaking in Star Trek, and its seriousness can be estimated from the fact
that once the attacker is a friend of the victim, it is highly probable the
attacker has indefinite access to the victim's private information in a
cloaked way. We demonstrate the impact of the attack by showing the ease of
gaining the trust of Facebook users and being befriended online. With targeted
friend requests we were able to add over 4300 users and maintain access to
their Facebook profile information for at least 261 days. No user was able to
unfriend us during this time due to cloaking and short de-cloaking sessions.
The short de-cloaking sessions were enough to get updates about the victims.
We also provide several solutions for the loophole, which range from
mitigation to a permanent solution.
|
1203.4049
|
The geometry of low-rank Kalman filters
|
math.OC cs.SY
|
An important property of the Kalman filter is that the underlying Riccati
flow is a contraction for the natural metric of the cone of symmetric positive
definite matrices. The present paper studies the geometry of a low-rank version
of the Kalman filter. The underlying Riccati flow evolves on the manifold of
fixed rank symmetric positive semidefinite matrices. Contraction properties of
the low-rank flow are studied by means of a suitable metric recently introduced
by the authors.
|
1203.4070
|
An ADMM Algorithm for Solving l_1 Regularized MPC
|
cs.SY math.OC
|
We present an Alternating Direction Method of Multipliers (ADMM) algorithm
for solving optimization problems with an l_1 regularized least-squares cost
function subject to recursive equality constraints. The considered optimization
problem has applications in control, for example in l_1 regularized MPC. The
ADMM algorithm is easy to implement, converges fast to a solution of moderate
accuracy, and enables separation of the optimization problem into sub-problems
that may be solved in parallel. We show that the most costly step of the
proposed ADMM algorithm is equivalent to solving an LQ regulator problem with
an extra linear term in the cost function, a problem that can be solved
efficiently using a Riccati recursion. We apply the ADMM algorithm to an
example of l_1 regularized MPC. The numerical examples confirm fast convergence
to moderate accuracy and a linear complexity in the MPC prediction horizon.
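A minimal sketch of ADMM applied to the plain l_1 regularized least-squares problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, which is the core of the cost considered here; the recursive equality constraints and the Riccati-based solve from the paper are not included, and lam and rho are placeholder values.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding, the proximal operator of kappa*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (no MPC dynamics here).
    x-update: linear solve; z-update: soft threshold; u: scaled dual variable."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)   # would be factored once in practice
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10); x_true[[1, 5]] = [2.0, -3.0]
b = A @ x_true + 0.01 * rng.normal(size=30)
print("recovered nonzeros:", np.flatnonzero(np.abs(admm_lasso(A, b)) > 1e-3))
```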
|
1203.4111
|
Reducing the Arity in Unbiased Black-Box Complexity
|
cs.NE
|
We show that for all $1<k \leq \log n$ the $k$-ary unbiased black-box
complexity of the $n$-dimensional OneMax function class is $O(n/k)$. This
indicates that the power of higher arity operators is much stronger than what
the previous $O(n/\log k)$ bound by Doerr et al. (Faster black-box algorithms
through higher arity operators, Proc. of FOGA 2011, pp. 163--172, ACM, 2011)
suggests.
The key to this result is an encoding strategy, which might be of independent
interest. We show that, using $k$-ary unbiased variation operators only, we may
simulate an unrestricted memory of size $O(2^k)$ bits.
|
1203.4153
|
Optimal Investment Under Transaction Costs
|
q-fin.PM cs.SY
|
We investigate how and when to diversify capital over assets, i.e., the
portfolio selection problem, from a signal processing perspective. To this end,
we first construct portfolios that achieve the optimal expected growth in
i.i.d. discrete-time two-asset markets under proportional transaction costs. We
then extend our analysis to cover markets having more than two stocks. The
market is modeled by a sequence of price relative vectors with arbitrary
discrete distributions, which can also be used to approximate a wide class of
continuous distributions. To achieve the optimal growth, we use threshold
portfolios, where we introduce a recursive update to calculate the expected
wealth. We then demonstrate that under the threshold rebalancing framework, the
achievable set of portfolios elegantly forms an irreducible Markov chain under
mild technical conditions. We evaluate the corresponding stationary
distribution of this Markov chain, which provides a natural and efficient
method to calculate the cumulative expected wealth. Subsequently, the
corresponding parameters are optimized yielding the growth optimal portfolio
under proportional transaction costs in i.i.d. discrete-time two-asset markets.
As a widely known financial problem, we next solve optimal portfolio selection
in discrete-time markets constructed by sampling continuous-time Brownian
markets. For the case that the underlying discrete distributions of the price
relative vectors are unknown, we provide a maximum likelihood estimator that is
also incorporated in the optimization framework in our simulations.
|
1203.4156
|
Optimal Investment Under Transaction Costs: A Threshold Rebalanced
Portfolio Approach
|
q-fin.PM cs.SY
|
We study optimal investment in a financial market having a finite number of
assets from a signal processing perspective. We investigate how an investor
should distribute capital over these assets and when he should reallocate the
distribution of the funds over these assets to maximize the cumulative wealth
over any investment period. In particular, we introduce a portfolio selection
algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset
discrete-time markets where the market levies proportional transaction costs in
buying and selling stocks. We achieve this using "threshold rebalanced
portfolios", where trading occurs only if the portfolio breaches certain
thresholds. Under the assumption that the relative price sequences have a
log-normal distribution, as in the Black-Scholes model, we evaluate the expected
wealth under proportional transaction costs and find the threshold rebalanced
portfolio that achieves the maximal expected cumulative wealth over any
investment period. Our derivations can be readily extended to markets having
more than two stocks, where these extensions are pointed out in the paper. As
predicted from our derivations, we significantly improve the achieved wealth
over portfolio selection algorithms from the literature on historical data
sets.
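An illustrative simulation of a fixed-band threshold rebalanced portfolio with proportional transaction costs, for one risky asset held against a constant-value asset; the price model, threshold band, and cost rate are placeholders, whereas the paper derives the wealth-maximizing thresholds analytically.

```python
import numpy as np

def threshold_rebalance(prices, target=0.5, band=0.05, cost=0.002, wealth0=1.0):
    """Trade only when the risky asset's portfolio fraction leaves
    [target - band, target + band]; pay proportional costs on the trade.
    Illustrative only: parameters are not the paper's optimized values."""
    cash_like = wealth0 * (1 - target)          # value held in the constant asset
    risky = wealth0 * target / prices[0]        # units of the risky asset
    for t in range(1, len(prices)):
        value = risky * prices[t] + cash_like
        frac = risky * prices[t] / value
        if abs(frac - target) > band:           # threshold breached: rebalance
            desired_value = target * value
            trade_value = desired_value - risky * prices[t]
            fee = cost * abs(trade_value)       # proportional transaction cost
            risky += trade_value / prices[t]
            cash_like -= trade_value + fee
    return risky * prices[-1] + cash_like

rng = np.random.default_rng(1)
# geometric-Brownian-motion-like sample path for the risky asset
rets = rng.normal(loc=0.0003, scale=0.01, size=1000)
prices = 100.0 * np.exp(np.cumsum(rets))
print(f"final wealth: {threshold_rebalance(prices):.4f}")
```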
|
1203.4157
|
Quartile Clustering: A quartile based technique for Generating
Meaningful Clusters
|
cs.DB
|
Clustering is one of the main tasks in exploratory data analysis and
descriptive statistics, where the main objective is partitioning observations
into groups. Clustering has a broad range of applications in varied domains
like climate, business, information retrieval, biology, and psychology, to
name a few. A variety of methods and algorithms have been developed for
clustering tasks in the last few decades. We observe that most of these
algorithms define a cluster in terms of attribute values, density, distance,
etc. However, these definitions fail to attach a clear meaning or semantics to
the generated clusters. We argue that clusters with understandable and
distinct semantics defined in terms of quartiles/halves are more appealing to
business analysts than clusters defined by data boundaries or prototypes. On
the same premise, we propose a new algorithm named the quartile clustering
technique. Through a series of experiments we establish the efficacy of this
algorithm. We demonstrate that the quartile clustering technique adds clear
meaning to each of the clusters compared to K-means. We use the DB index to
measure the goodness of the clusters and show that our method is comparable
to EM (Expectation Maximization), PAM (Partition Around Medoids) and K-means.
We have explored its capability in detecting outliers and the benefit of the
added semantics. We discuss some of the limitations of its present form and
also provide a rough direction for addressing the issue of merging the
generated clusters.
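An illustrative reading of quartile-based cluster semantics: each observation is labelled by the quartile into which each of its attributes falls, so a label such as ('Q4', 'Q1') is directly interpretable; this is a simplified sketch, not the authors' exact algorithm.

```python
import numpy as np

def quartile_clusters(X):
    """Label each row by the quartile (Q1-Q4) into which each attribute falls.
    The resulting labels carry direct semantics, e.g. ('Q4', 'Q1') means
    'top quartile of attribute 0, bottom quartile of attribute 1'.
    Simplified illustration, not the paper's full procedure."""
    X = np.asarray(X, dtype=float)
    # column-wise quartile boundaries (25th, 50th, 75th percentiles)
    qs = np.percentile(X, [25, 50, 75], axis=0)
    labels = []
    for row in X:
        # searchsorted over the 3 boundaries gives a bin index 0..3 per column
        bins = [int(np.searchsorted(qs[:, j], row[j])) for j in range(X.shape[1])]
        labels.append(tuple(f"Q{b + 1}" for b in bins))
    return labels

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))
for row, lab in zip(X, quartile_clusters(X)):
    print(np.round(row, 2), "->", lab)
```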
|
1203.4160
|
A Novel Robust Approach to Least Squares Problems with Bounded Data
Uncertainties
|
cs.SY
|
In this correspondence, we introduce a minimax regret criterion for least
squares problems with bounded data uncertainties and solve it using
semi-definite programming. We investigate a robust minimax least squares
approach that minimizes a worst-case difference regret. The regret is defined
as the difference between a squared data error and the smallest attainable
squared data error of a least squares estimator. We then propose a robust
regularized least squares approach to the regularized least squares problem
under data uncertainties by using a similar framework. We show that the
unstructured and structured robust least squares problems and the robust
regularized least squares problem can all be put in certain semi-definite
programming forms. Through several simulations, we demonstrate the merits of
the proposed algorithms with respect to the well-known alternatives in the
literature.
|
1203.4163
|
Outlier Detection Techniques for SQL and ETL Tuning
|
cs.DB
|
The RDBMS is at the heart of both OLTP and OLAP applications. For both
types of applications, thousands of queries expressed in SQL are executed on a
daily basis. All commercial DBMS engines capture various attributes about
these executed queries in system tables. These queries need to conform to best
practices and need to be tuned to ensure optimal performance. While checklists
and tools are used to enforce this, a black-box profiling technique such as
outlier detection is not typically employed to gain a summary-level
understanding of the queries. This is the motivation of the paper: such
profiling not only points out inefficiencies built into the system, but also
has the potential to reveal evolving best practices and inappropriate usage.
This can reduce latency in information flow and improve utilization of
hardware and software capacity. In this paper we start by formulating the
problem. We explore four outlier detection techniques. We apply these
techniques over a rich corpus of production queries and analyze the results.
We also explore the benefit of an ensemble approach. We conclude with future
courses of action. We have used the same philosophy for the optimization of
extraction, transform, load (ETL) jobs in one of our previous works; a brief
introduction is given in section four.
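As a small illustration of query profiling by outlier detection, the sketch below flags queries whose logged elapsed time deviates from the mean by more than a z-score threshold; the metric, data, and threshold are hypothetical, and the four techniques compared in the paper are not reproduced.

```python
import numpy as np

def zscore_outliers(values, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations
    from the mean. Placeholder technique; the paper compares four methods
    and an ensemble, which are not shown here."""
    values = np.asarray(values, dtype=float)
    mu, sigma = values.mean(), values.std()
    if sigma == 0:
        return np.array([], dtype=int)
    z = np.abs(values - mu) / sigma
    return np.flatnonzero(z > threshold)

# hypothetical per-query elapsed times (seconds) pulled from system tables
elapsed = np.array([0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 42.5, 1.2, 0.9, 1.1])
print("outlier query indices:", zscore_outliers(elapsed, threshold=2.5))
```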
|
1203.4168
|
Linear MMSE-Optimal Turbo Equalization Using Context Trees
|
cs.SY
|
Formulations of the turbo equalization approach to iterative equalization and
decoding vary greatly when channel knowledge is either partially or completely
unknown. Maximum a posteriori probability (MAP) and minimum mean square error
(MMSE) approaches leverage channel knowledge to make explicit use of soft
information (priors over the transmitted data bits) in a manner that is
distinctly nonlinear, appearing either in a trellis formulation (MAP) or inside
an inverted matrix (MMSE). To date, nearly all adaptive turbo equalization
methods either estimate the channel or use a direct adaptation equalizer in
which estimates of the transmitted data are formed from an expressly linear
function of the received data and soft information, with this latter
formulation being most common. We study a class of direct adaptation turbo
equalizers that are both adaptive and nonlinear functions of the soft
information from the decoder. We introduce piecewise linear models based on
context trees that can adaptively approximate the nonlinear dependence of the
equalizer on the soft information such that it can choose both the partition
regions as well as the locally linear equalizer coefficients in each region
independently, with computational complexity that remains of the order of a
traditional direct adaptive linear equalizer. This approach is guaranteed to
asymptotically achieve the performance of the best piecewise linear equalizer
and we quantify the MSE performance of the resulting algorithm and the
convergence of its MSE to that of the linear minimum MSE estimator as the depth
of the context tree and the data length increase.
|
1203.4176
|
SignsWorld; Deeping Into the Silence World and Hearing Its Signs (State
of the Art)
|
cs.CL cs.CV
|
Automatic speech processing systems are employed more and more often in real
environments. Although the underlying speech technology is mostly language
independent, differences between languages with respect to their structure and
grammar have a substantial effect on recognition performance. In this paper,
we present a review of the latest developments in sign language recognition
research in general and in Arabic sign language (ArSL) in particular. This
paper also presents a general framework, called SignsWorld, for improving
communication between the deaf community and hearing people. The overall goal
of the SignsWorld project is to develop a vision-based technology for
recognizing and translating continuous Arabic sign language (ArSL).
|
1203.4184
|
The Initial Conditions of the Universe from Constrained Simulations
|
astro-ph.CO cs.AI
|
I present a new approach to recover the primordial density fluctuations and
the cosmic web structure underlying a galaxy distribution. The method is based
on sampling Gaussian fields which are compatible with a galaxy distribution and
a structure formation model. This is achieved by splitting the inversion
problem into two Gibbs-sampling steps: the first being a Gaussianisation step
transforming a distribution of point sources at Lagrangian positions -which are
not a priori given- into a linear alias-free Gaussian field. This step is based
on Hamiltonian sampling with a Gaussian-Poisson model. The second step consists
of a likelihood comparison in which the set of matter tracers at the initial
conditions is constrained by the galaxy distribution and the assumed structure
formation model. For computational reasons, second-order Lagrangian Perturbation
Theory is used. However, the presented approach is flexible enough to adopt any
structure formation model. A semi-analytic halo-model based galaxy mock catalog
is used to demonstrate that the recovered initial conditions are close to
unbiased with respect to the actual ones from the corresponding N-body
simulation down to scales of about 5 Mpc/h. The cross-correlation between them
shows a substantial gain of information, being at k ~ 0.3 h/Mpc more than
doubled. In addition the initial conditions are extremely well Gaussian
distributed and the power-spectra follow the shape of the linear power-spectrum
being very close to the actual one from the simulation down to scales of k ~ 1
h/Mpc.
|
1203.4204
|
Clustering Using Isoperimetric Number of Trees
|
cs.CV
|
In this paper we propose a graph-based data clustering algorithm which is
based on exact clustering of a minimum spanning tree in terms of a minimum
isoperimetry criterion. We show that our basic clustering algorithm runs in
$O(n \log n)$ time, and with post-processing in $O(n^2)$ (worst-case) time,
where $n$ is the size of the data set. We also show that our generalized graph
model, which also allows the use of potentials at vertices, can be used to
extract more detailed information such as the {\it outlier profile} of the
data set. In
this direction we show that our approach can be used to define the concept of
an outlier-set in a precise way and we propose approximation algorithms for
finding such sets. We also provide a comparative performance analysis of our
algorithm with other related ones and we show that the new clustering algorithm
(without the outlier extraction procedure) behaves quite effectively even on
hard benchmarks and handmade examples.
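For intuition about tree-based clustering, the sketch below builds a Euclidean minimum spanning tree and cuts its longest edges to obtain k components; this simplified variant does not implement the paper's minimum isoperimetry criterion or its outlier profile.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import squareform, pdist

def mst_cut_clusters(X, k):
    """Cluster points by cutting the (k-1) longest MST edges.
    Simplified stand-in for tree-based clustering; the paper instead cuts
    the tree according to a minimum isoperimetry criterion."""
    dists = squareform(pdist(X))                 # dense pairwise distances
    mst = minimum_spanning_tree(dists).toarray() # (n, n), nonzero = tree edges
    rows, cols = np.nonzero(mst)
    order = np.argsort(mst[rows, cols])[::-1]    # tree edges, longest first
    for i in range(k - 1):                       # remove the k-1 longest edges
        mst[rows[order[i]], cols[order[i]]] = 0.0
    _, labels = connected_components(mst, directed=False)
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
print("cluster sizes:", np.bincount(mst_cut_clusters(X, k=2)))
```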
|
1203.4206
|
Low Complexity Turbo-Equalization: A Clustering Approach
|
cs.SY
|
We introduce a low complexity approach to iterative equalization and
decoding, or "turbo equalization", that uses clustered models to better match
the nonlinear relationship that exists between likelihood information from a
channel decoder and the symbol estimates that arise in soft-input channel
equalization. The introduced clustered turbo equalizer uses piecewise linear
models to capture the nonlinear dependency of the linear minimum mean square
error (MMSE) symbol estimate on the symbol likelihoods produced by the channel
decoder and maintains a computational complexity that is only linear in the
channel memory. By partitioning the space of likelihood information from the
decoder, based on either hard or soft clustering, and using locally-linear
adaptive equalizers within each clustered region, the performance gap between
the linear MMSE equalizer and low-complexity, LMS-based linear turbo equalizers
can be dramatically narrowed.
|
1203.4209
|
A New Analysis of an Adaptive Convex Mixture: A Deterministic Approach
|
cs.SY
|
We introduce a new analysis of an adaptive mixture method that combines
outputs of two constituent filters running in parallel to model an unknown
desired signal. This adaptive mixture is shown to achieve the mean square error
(MSE) performance of the best constituent filter, and in some cases outperforms
both, in the steady-state. However, the MSE analysis of this mixture in the
steady-state and during the transient regions uses approximations and relies on
statistical models on the underlying signals and systems. Hence, such an
analysis may not be useful or valid for signals generated by various real life
systems that show high degrees of nonstationarity, limit cycles and, in many
cases, that are even chaotic. To this end, we perform the transient and the
steady-state analysis of this adaptive mixture in a "strong" deterministic
sense without any approximations in the derivations or statistical assumptions
on the underlying signals such that our results are guaranteed to hold. In
particular, we relate the time-accumulated squared estimation error of this
adaptive mixture at any time to the time-accumulated squared estimation error
of the optimal convex mixture of the constituent filters directly tuned to the
underlying signal in an individual sequence manner.
|
1203.4238
|
Do Linguistic Style and Readability of Scientific Abstracts affect their
Virality?
|
cs.SI cs.CL cs.DL
|
Reactions to textual content posted in an online social network show
different dynamics depending on the linguistic style and readability of the
submitted content. Do similar dynamics exist for responses to scientific
articles? Our intuition, supported by previous research, suggests that the
success of a scientific article depends on its content, rather than on its
linguistic style. In this article, we examine a corpus of scientific abstracts
and three forms of associated reactions: article downloads, citations, and
bookmarks. Through a class-based psycholinguistic analysis and readability
indices tests, we show that certain stylistic and readability features of
abstracts clearly concur in determining the success and viral capability of a
scientific article.
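As one concrete example of a readability index of the kind such studies rely on, the sketch below computes the Flesch Reading Ease score with a crude vowel-group syllable counter; the tokenisation and syllable heuristics are simplistic and only indicate the shape of the computation.

```python
import re

def count_syllables(word):
    """Very rough syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / max(1, len(sentences)))
            - 84.6 * (syllables / max(1, len(words))))

abstract = ("We examine scientific abstracts. Readability features concur "
            "in determining the success of an article.")
print(f"Flesch Reading Ease: {flesch_reading_ease(abstract):.1f}")
```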
|
1203.4280
|
Reconstruction of hidden 3D shapes using diffuse reflections
|
physics.optics cs.CV
|
We analyze multi-bounce propagation of light in an unknown hidden volume and
demonstrate that the reflected light contains sufficient information to recover
the 3D structure of the hidden scene. We formulate the forward and inverse
theory of secondary and tertiary scattering reflection using ideas from energy
front propagation and tomography. We show that using careful choice of
approximations, such as Fresnel approximation, greatly simplifies this problem
and the inversion can be achieved via a backpropagation process. We provide a
theoretical analysis of the invertibility, uniqueness and choices of
space-time-angle dimensions using synthetic examples. We show that a 2D streak
camera can be used to discover and reconstruct hidden geometry. Using a 1D
high-speed time-of-flight camera, we show that our method can be used to
recover 3D shapes of objects "around the corner".
|
1203.4287
|
Parameter Learning in PRISM Programs with Continuous Random Variables
|
cs.AI
|
Probabilistic Logic Programming (PLP), exemplified by Sato and Kameya's
PRISM, Poole's ICL, De Raedt et al.'s ProbLog and Vennekens et al.'s LPAD,
combines statistical and logical knowledge representation and inference.
Inference in these languages is based on enumerative construction of proofs
over logic programs. Consequently, these languages permit very limited use of
random variables with continuous distributions. In this paper, we extend PRISM
with Gaussian random variables and linear equality constraints, and consider
the problem of parameter learning in the extended language. Many statistical
models such as finite mixture models and Kalman filters can be encoded in
extended PRISM. Our EM-based learning algorithm uses a symbolic inference
procedure that represents sets of derivations without enumeration. This permits
us to learn the distribution parameters of extended PRISM programs with
discrete as well as Gaussian variables. The learning algorithm naturally
generalizes the ones used for PRISM and Hybrid Bayesian Networks.
|
1203.4311
|
Estimation with a helper who knows the interference
|
cs.IT math.IT
|
We consider the problem of estimating a signal corrupted by independent
interference with the assistance of a cost-constrained helper who knows the
interference causally or noncausally. When the interference is known causally,
we characterize the minimum distortion incurred in estimating the desired
signal. In the noncausal case, we present a general achievable scheme for
discrete memoryless systems and novel lower bounds on the distortion for the
binary and Gaussian settings. Our Gaussian setting coincides with that of
assisted interference suppression introduced by Grover and Sahai. Our lower
bound for this setting is based on the relation recently established by Verd\'u
between divergence and minimum mean squared error. We illustrate with a few
examples that this lower bound can improve on those previously developed. Our
bounds also allow us to characterize the optimal distortion in several
interesting regimes. Moreover, we show that causal and noncausal estimation are
not equivalent for this problem. Finally, we consider the case where the
desired signal is also available at the helper. We develop new lower bounds for
this setting that improve on those previously developed, and characterize the
optimal distortion up to a constant multiplicative factor for some regimes of
interest.
|
1203.4345
|
Robust Filtering and Smoothing with Gaussian Processes
|
cs.SY cs.AI cs.RO stat.ML
|
We propose a principled algorithm for robust Bayesian filtering and smoothing
in nonlinear stochastic dynamic systems when both the transition function and
the measurement function are described by non-parametric Gaussian process (GP)
models. GPs are gaining increasing importance in signal processing, machine
learning, robotics, and control for representing unknown system functions by
posterior probability distributions. This modern way of "system identification"
is more robust than finding point estimates of a parametric function
representation. In this article, we present a principled algorithm for robust
analytic smoothing in GP dynamic systems, which are increasingly used in
robotics and control. Our numerical evaluations demonstrate the robustness of
the proposed approach in situations where other state-of-the-art Gaussian
filters and smoothers can fail.
|
1203.4349
|
Onboard Flight Control of a Small Quadrotor Using Single Strapdown
Optical Flow Sensor
|
cs.RO
|
This paper considers onboard control of a small-sized quadrotor using a
strapdown embedded optical flow sensor which is conventionally used for desktop
mice. The vehicle considered in this paper can carry only a few dozen grams of
payload, therefore conventional camera-based optical flow methods are not
applicable. We present hovering control of the small-sized quadrotor using a
single-chip optical flow sensor, implemented on an 8-bit microprocessor without
external sensors or communication with a ground control station. Detailed
description of all the system components is provided along with evaluation of
the accuracy. Experimental results from flight tests are validated with the
ground-truth data provided by a high-accuracy reference system.
|
1203.4355
|
Real-time Image-based 6-DOF Localization in Large-Scale Environments
|
cs.CV cs.RO
|
We present a real-time approach for image-based localization within large
scenes that have been reconstructed offline using structure from motion (Sfm).
From monocular video, our method continuously computes a precise 6-DOF camera
pose, by efficiently tracking natural features and matching them to 3D points
in the Sfm point cloud. Our main contribution lies in efficiently interleaving
a fast keypoint tracker that uses inexpensive binary feature descriptors with a
new approach for direct 2D-to-3D matching. The 2D-to-3D matching avoids the
need for online extraction of scale-invariant features. Instead, offline we
construct an indexed database containing multiple DAISY descriptors per 3D
point extracted at multiple scales. The key to the efficiency of our method
lies in invoking DAISY descriptor extraction and matching sparingly during
localization, and in distributing this computation over a window of successive
frames. This enables the algorithm to run in real-time, without fluctuations in
the latency over long durations. We evaluate the method in large indoor and
outdoor scenes. Our algorithm runs at over 30 Hz on a laptop and at 12 Hz on a
low-power, mobile computer suitable for onboard computation on a quadrotor
micro aerial vehicle.
|
1203.4358
|
On optimum parameter modulation-estimation from a large deviations
perspective
|
cs.IT math.IT
|
We consider the problem of jointly optimum modulation and estimation of a
real-valued random parameter, conveyed over an additive white Gaussian noise
(AWGN) channel, where the performance metric is the large deviations behavior
of the estimator, namely, the exponential decay rate (as a function of the
observation time) of the probability that the estimation error would exceed a
certain threshold. Our basic result is in providing an exact characterization
of the fastest achievable exponential decay rate, among all possible
modulator-estimator (transmitter-receiver) pairs, where the modulator is
limited only in the signal power, but not in bandwidth. This exponential rate
turns out to be given by the reliability function of the AWGN channel. We also
discuss several ways to achieve this optimum performance, and one of them is
based on quantization of the parameter, followed by optimum channel coding and
modulation, which gives rise to a separation-based transmitter, if one views
this setting from the perspective of joint source-channel coding. This is in
spite of the fact that, in general, when error exponents are considered, the
source-channel separation theorem does not hold true. We also discuss several
observations, modifications and extensions of this result in several
directions, including other channels, and the case of multidimensional
parameter vectors. One of our findings concerning the latter, is that there is
an abrupt threshold effect in the dimensionality of the parameter vector: below
a certain critical dimension, the probability of excess estimation error may
still decay exponentially, but beyond this value, it must converge to unity.
|
1203.4380
|
Analyzing closed frequent itemsets with convex polytopes
|
cs.DB
|
Frequent itemsets form a polytope and can be found and analyzed with Linear
Programming.
|
1203.4385
|
Optimal Rate and Maximum Erasure Probability LDPC Codes in Binary
Erasure Channel
|
cs.IT math.IT
|
In this paper, we present a novel way of solving the main problem of
designing capacity-approaching irregular low-density parity-check (LDPC)
code ensembles over the binary erasure channel (BEC). The proposed method is
much simpler, faster, more accurate and more practical than other methods. Our
method does not use any relaxation or approximate solution, unlike previous
works, and it finds the optimal answer for any given check node degree
distribution. The proposed method was implemented and works well in practice
with polynomial time complexity. As a result, we present some degree
distributions whose rates are close to capacity, with maximum erasure
probability and maximum code rate.
|
1203.4416
|
On Training Deep Boltzmann Machines
|
cs.NE cs.AI cs.LG
|
The deep Boltzmann machine (DBM) has been an important development in the
quest for powerful "deep" probabilistic models. To date, simultaneous or joint
training of all layers of the DBM has been largely unsuccessful with existing
training methods. We introduce a simple regularization scheme that encourages
the weight vectors associated with each hidden unit to have similar norms. We
demonstrate that this regularization can be easily combined with standard
stochastic maximum likelihood to yield an effective training strategy for the
simultaneous training of all layers of the deep Boltzmann machine.
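A guess at what a norm-tying regularizer of this kind could look like (penalizing the deviation of each hidden unit's weight-vector norm from the mean norm), together with its gradient; this is only one plausible form and is not taken from the paper, nor is its combination with stochastic maximum likelihood reproduced here.

```python
import numpy as np

def norm_similarity_penalty(W, lam=0.01):
    """One plausible regularizer encouraging the columns of W (one weight
    vector per hidden unit) to have similar norms: penalize the squared
    deviation of each column norm from the mean norm. Hypothetical form,
    not the paper's exact scheme."""
    norms = np.linalg.norm(W, axis=0)            # per-hidden-unit weight norms
    return lam * np.sum((norms - norms.mean()) ** 2)

def norm_similarity_grad(W, lam=0.01):
    """Gradient of the penalty w.r.t. W, to be added to the stochastic
    maximum likelihood gradient during training."""
    norms = np.linalg.norm(W, axis=0) + 1e-12
    coeff = 2.0 * lam * (norms - norms.mean())   # d penalty / d norm_j
    return W / norms * coeff                     # chain rule through ||W_j||

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 500)) * (0.5 + np.arange(500) / 500.0)
print("penalty:", round(norm_similarity_penalty(W), 2))
print("gradient shape:", norm_similarity_grad(W).shape)
```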
|
1203.4422
|
Semi-Supervised Single- and Multi-Domain Regression with Multi-Domain
Training
|
stat.ML cs.LG
|
We address the problems of multi-domain and single-domain regression based on
distinct and unpaired labeled training sets for each of the domains and a large
unlabeled training set from all domains. We formulate these problems as a
Bayesian estimation with partial knowledge of statistical relations. We propose
a worst-case design strategy and study the resulting estimators. Our analysis
explicitly accounts for the cardinality of the labeled sets and includes the
special cases in which one of the labeled sets is very large or, in the other
extreme, completely missing. We demonstrate our estimators in the context of
removing expressions from facial images and in the context of audio-visual word
recognition, and provide comparisons to several recently proposed multi-modal
learning algorithms.
|
1203.4475
|
Automation of Mobile Pick and Place Robotic System for Small Food
Industry
|
cs.ET cs.RO
|
The use of robotics in the food industry has become more popular in recent
years. The trend seems set to continue as long as robotics technology meets
the diverse and challenging needs of food producers. Rapid developments in
digital computers and control systems technologies have had a significant
impact on robotics, as on other engineering fields. By utilizing new hardware
and software tools, the design of these complex systems, which need strong
integration of distinct disciplines, is no longer as difficult as in the past.
Therefore, the purpose of this paper is to design and implement a
microcontroller-based, reliable and high-performance robotic system for a
food/biscuit manufacturing line. We propose the design of a vehicle. The robot
is capable of picking up an unbaked biscuit tray and placing it into the
furnace, and then, after baking, it picks the biscuit tray up from the
furnace. A special gripper is designed to pick and place the biscuit tray
flexibly.
|
1203.4487
|
Recommender systems in industrial contexts
|
cs.IR
|
This thesis consists of four parts: - An analysis of the core functions and
the prerequisites for recommender systems in an industrial context: we identify
four core functions for recommendation systems: Help to Decide, Help to
Compare, Help to Explore, Help to Discover. The implementation of these
functions has implications for the choices at the heart of algorithmic
recommender systems. - A state of the art, which deals with the main techniques
used in automated recommendation systems: the two most commonly used
algorithmic methods, the K-Nearest-Neighbor (KNN) methods and the fast
factorization methods, are detailed. The state of the art also presents purely
content-based methods, hybridization techniques, and the classical performance
metrics used to evaluate recommender systems. This state of the art then gives
an overview of several systems, both from academia and industry (Amazon, Google
...). - An analysis of the performance and implications of a recommendation
system developed during this thesis: this system, Reperio, is a hybrid
recommender engine using KNN methods. We study the performance of the KNN
methods, including the impact of the similarity functions used. Then we study
the performance of the KNN method in critical use cases in cold-start
situations. - A methodology for analyzing the performance of recommender
systems in an industrial context: this methodology assesses the added value of
algorithmic strategies and recommendation systems according to the core
functions.
|
1203.4494
|
Can an Ad-hoc ontology Beat a Medical Search Engine? The Chronious
Search Engine case
|
cs.IR cs.DL
|
Chronious is an Open, Ubiquitous and Adaptive Chronic Disease Management
Platform for Chronic Obstructive Pulmonary Disease (COPD), Chronic Kidney
Disease (CKD) and Renal Insufficiency. It consists of several modules: an
ontology-based literature search engine, a rule-based decision support system,
remote sensors interacting with lifestyle interfaces (PDA, monitor
touch-screen) and a machine learning module. All these modules interact with
each other to allow the monitoring of two types of chronic diseases and to
help clinicians make care decisions. This paper illustrates how the ontology
search engine was created and fed, and how some comparative tests indicated
that the ontology-based approach gives better results, on some estimation
parameters, than the main reference web search engine.
|
1203.4523
|
On the Equivalence between Herding and Conditional Gradient Algorithms
|
cs.LG math.OC stat.ML
|
We show that the herding procedure of Welling (2009) takes exactly the form
of a standard convex optimization algorithm--namely a conditional gradient
algorithm minimizing a quadratic moment discrepancy. This link enables us to
invoke convergence results from convex optimization and to consider faster
alternatives for the task of approximating integrals in a reproducing kernel
Hilbert space. We study the behavior of the different variants through
numerical simulations. The experiments indicate that while we can improve over
herding on the task of approximating integrals, the original herding algorithm
tends to approach more often the maximum entropy distribution, shedding more
light on the learning bias behind herding.
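A small numerical illustration of herding viewed as a greedy, conditional-gradient-like procedure: candidate points are selected one at a time so that the running mean of their kernel features tracks an empirical mean embedding; the finite feature map, bandwidth, and candidate grid are arbitrary choices for illustration.

```python
import numpy as np

def rbf_features(x, centers, gamma=2.0):
    """Finite-dimensional feature map: RBF evaluations at fixed centers.
    Stand-in for an RKHS feature map; gamma and centers are arbitrary."""
    return np.exp(-gamma * (x[:, None] - centers[None, :]) ** 2)

rng = np.random.default_rng(0)
centers = np.linspace(-3, 3, 25)
data = rng.normal(size=500)                       # samples defining the target
mu = rbf_features(data, centers).mean(axis=0)     # empirical mean embedding

candidates = np.linspace(-3, 3, 201)              # search set for herding
Phi = rbf_features(candidates, centers)           # features of each candidate

w = mu.copy()                                     # herding weight vector
picked, running = [], np.zeros_like(mu)
for t in range(1, 21):
    i = int(np.argmax(Phi @ w))                   # greedy (conditional-gradient) step
    picked.append(candidates[i])
    w = w + mu - Phi[i]                           # standard herding update
    running += (Phi[i] - running) / t             # running mean of picked features

print("first picked points:", np.round(picked[:5], 2))
print("moment discrepancy:", round(float(np.linalg.norm(mu - running)), 4))
```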
|
1203.4544
|
Quantum Codes from Toric Surfaces
|
math.AG cs.IT math.IT
|
A theory for constructing quantum error correcting codes from toric surfaces
by the Calderbank-Shor-Steane method is presented. In particular we study the
method on toric Hirzebruch surfaces. The results are obtained by constructing a
dualizing differential form for the toric surface and by using the cohomology
and the intersection theory of toric varieties. In earlier work the author
developed methods to construct linear error correcting codes from toric
varieties and derive the code parameters using the cohomology and the
intersection theory on toric varieties. This method is generalized here
to construct linear codes suitable for constructing quantum codes by the
Calderbank-Shor-Steane method. Essential for the theory is the existence and
the application of a dualizing differential form on the toric surface. A.R.
Calderbank, P.W. Shor and A.M. Steane produced stabilizer codes from linear
codes containing their dual codes. These two constructions are merged to obtain
results for toric surfaces. Similar merging has been done for algebraic curves
with different methods by A. Ashikhmin, S. Litsyn and M.A. Tsfasman.
|
1203.4580
|
Sparsity Constrained Nonlinear Optimization: Optimality Conditions and
Algorithms
|
cs.IT math.IT math.OC
|
This paper treats the problem of minimizing a general continuously
differentiable function subject to sparsity constraints. We present and analyze
several different optimality criteria which are based on the notions of
stationarity and coordinate-wise optimality. These conditions are then used to
derive three numerical algorithms aimed at finding points satisfying the
resulting optimality criteria: the iterative hard thresholding method and the
greedy and partial sparse-simplex methods. The first algorithm is essentially a
gradient projection method while the remaining two algorithms are of coordinate
descent type. The theoretical convergence of these methods and their relations
to the derived optimality conditions are studied. The algorithms and results
are illustrated by several numerical examples.
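A minimal sketch of the iterative hard thresholding method mentioned in the abstract, specialised to the sparsity-constrained least-squares objective f(x) = 0.5*||Ax - b||^2; the step size and sparsity level are placeholders, and the greedy and partial sparse-simplex methods are not shown.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def iht(A, b, s, step=None, n_iter=300):
    """Iterative hard thresholding for min 0.5*||Ax - b||^2 s.t. ||x||_0 <= s.
    A gradient step followed by projection onto the sparsity constraint."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = hard_threshold(x - step * grad, s)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 80)) / np.sqrt(40)
x_true = np.zeros(80); x_true[[3, 17, 60]] = [1.5, -2.0, 1.0]
b = A @ x_true
print("recovered support:", np.flatnonzero(iht(A, b, s=3)))
```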
|
1203.4583
|
Multi-Antenna System Design with Bright Transmitters and Blind Receivers
|
cs.IT math.IT
|
This paper considers a scenario for multi-input multi-output (MIMO)
communication systems when perfect channel state information at the transmitter
(CSIT) is given while the equivalent channel state information at the receiver
(CSIR) is not available. Such an assumption is valid for the downlink
multi-user MIMO systems with linear precoders that depend on channels to all
receivers. We propose a concept called dual systems with zero-forcing designs
based on the duality principle, originally proposed to relate Gaussian
multi-access channels (MACs) and Gaussian broadcast channels (BCs). For the
two-user N*2 MIMO BC with N antennas at the transmitter and two antennas at
each of the receivers, we design a downlink interference cancellation (IC)
transmission scheme using the dual of uplink MAC systems employing IC methods.
The transmitter simultaneously sends two precoded Alamouti codes, one for each
user. Each receiver can zero-force the unintended user's Alamouti codes and
decouple its own data streams using two simple linear operations independent of
CSIR. Analysis shows that the proposed scheme achieves a diversity gain of
2(N-1) for equal energy constellations with short-term power and rate
constraints. Power allocation between two users can also be performed, and it
improves the array gain but not the diversity gain. Numerical results
demonstrate that the bit error rate of the downlink IC scheme has a substantial
gain compared to the block diagonalization method, which requires global
channel information at each node.
|
1203.4587
|
High Speed Compressed Sensing Reconstruction in Dynamic Parallel MRI
Using Augmented Lagrangian and Parallel Processing
|
cs.IT cs.DS math.IT
|
Magnetic Resonance Imaging (MRI) is one of the fields that the compressed
sensing theory is well utilized to reduce the scan time significantly leading
to faster imaging or higher resolution images. It has been shown that a small
fraction of the overall measurements are sufficient to reconstruct images with
the combination of compressed sensing and parallel imaging. Various
reconstruction algorithms have been proposed for compressed sensing, among which
Augmented Lagrangian based methods have been shown to often perform better than
others for many different applications. In this paper, we propose new Augmented
Lagrangian based solutions to the compressed sensing reconstruction problem
with analysis and synthesis prior formulations. We also propose a computational
method which makes use of properties of the sampling pattern to significantly
improve the speed of the reconstruction for the proposed algorithms in
Cartesian sampled MRI. The proposed algorithms are shown to outperform earlier
methods especially for the case of dynamic MRI for which the transfer function
tends to be a very large matrix and significantly ill-conditioned. It is also
demonstrated that the proposed algorithm can be accelerated much further than
other methods in case of a parallel implementation with graphics processing
units (GPUs).
|
1203.4592
|
Remarks on low weight codewords of generalized affine and projective
Reed-Muller codes
|
cs.IT math.IT
|
We propose new results on low weight codewords of affine and projective
generalized Reed-Muller codes. In the affine case we prove that if the size of
the working finite field is large compared to the degree of the code, the low
weight codewords are products of affine functions. Then in the general case we
study some types of codewords and prove that they cannot be second, thirds or
fourth weight depending on the hypothesis. In the projective case the second
distance of generalized Reed-Muller codes is estimated, namely a lower bound
and an upper bound of this weight are given.
|
1203.4597
|
A Novel Training Algorithm for HMMs with Partial and Noisy Access to the
States
|
cs.LG stat.ML
|
This paper proposes a new estimation algorithm for the parameters of an HMM
so as to best account for the observed data. In this model, in addition to the
observation sequence, we have \emph{partial} and \emph{noisy} access to the
hidden state sequence as side information. This access can be seen as "partial
labeling" of the hidden states. Furthermore, we model possible mislabeling in
the side information in a joint framework and derive the corresponding EM
updates accordingly. In our simulations, we observe that using this side
information, we considerably improve the state recognition performance, up to
70%, with respect to the "achievable margin" defined by the baseline
algorithms. Moreover, our algorithm is shown to be robust to the training
conditions.
|
1203.4598
|
Adaptive Mixture Methods Based on Bregman Divergences
|
cs.LG
|
We investigate adaptive mixture methods that linearly combine outputs of $m$
constituent filters running in parallel to model a desired signal. We use
"Bregman divergences" and obtain certain multiplicative updates to train the
linear combination weights under an affine constraint or without any
constraints. We use unnormalized relative entropy and relative entropy to
define two different Bregman divergences that produce an unnormalized
exponentiated gradient update and a normalized exponentiated gradient update on
the mixture weights, respectively. We then carry out the mean and the
mean-square transient analysis of these adaptive algorithms when they are used
to combine outputs of $m$ constituent filters. We illustrate the accuracy of
our results and demonstrate the effectiveness of these updates for sparse
mixture systems.
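A small sketch of the normalized exponentiated gradient update for combining the outputs of m = 2 constituent filters on the probability simplex; the constituent "filters" below are trivial stand-ins and the learning rate is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, m = 2000, 2
eta = 0.5                                   # EG learning rate (arbitrary)
w = np.full(m, 1.0 / m)                     # mixture weights on the simplex

# toy desired signal and two crude constituent "filters" (stand-ins)
d = np.sin(0.05 * np.arange(T)) + 0.1 * rng.normal(size=T)
y = np.vstack([np.roll(d, 1), 0.5 * d + 0.1 * rng.normal(size=T)])  # (m, T)

mse = 0.0
for t in range(T):
    d_hat = w @ y[:, t]                     # combined estimate
    e = d[t] - d_hat                        # estimation error
    # normalized exponentiated gradient update of the mixture weights
    w = w * np.exp(eta * e * y[:, t])
    w = w / w.sum()
    mse += e * e

print("final weights:", np.round(w, 3), " avg squared error:", round(mse / T, 4))
```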
|
1203.4605
|
Arabic Keyphrase Extraction using Linguistic knowledge and Machine
Learning Techniques
|
cs.CL
|
In this paper, a supervised learning technique for extracting keyphrases of
Arabic documents is presented. The extractor is supplied with linguistic
knowledge to enhance its efficiency instead of relying only on statistical
information such as term frequency and distance. During analysis, an annotated
Arabic corpus is used to extract the required lexical features of the document
words. The knowledge also includes syntactic rules based on part of speech tags
and allowed word sequences to extract the candidate keyphrases. In this work,
the abstract form of Arabic words is used instead of their stem form to
represent the candidate terms. The abstract form hides most of the inflections
found in Arabic words. The paper introduces new keyphrase features based on
linguistic knowledge, to capture the titles and subtitles of a document. A
simple ANOVA test is used to evaluate the validity of the selected features.
Then, the learning model is built using LDA (Linear Discriminant Analysis) and
the training documents. Although the presented system is trained using
documents
in the IT domain, experiments carried out show that it has a significantly
better performance than the existing Arabic extractor systems, where precision
and recall values reach double their corresponding values in the other systems
especially for lengthy and non-scientific articles.
|
1203.4626
|
Active sequential hypothesis testing
|
cs.IT math.IT math.OC math.ST stat.TH
|
Consider a decision maker who is responsible for dynamically collecting
observations so as to enhance his information about an underlying phenomenon of
interest in a speedy manner while accounting for the penalty of wrong
declaration. Due to the sequential nature of the problem, the decision maker
relies on his current information state to adaptively select the most
``informative'' sensing action among the available ones. In this paper, using
results in dynamic programming, lower bounds for the optimal total cost are
established. The lower bounds characterize the fundamental limits on the
maximum achievable information acquisition rate and the optimal reliability.
Moreover, upper bounds are obtained via an analysis of two heuristic policies
for dynamic selection of actions. It is shown that the first proposed heuristic
achieves asymptotic optimality, where the notion of asymptotic optimality, due
to Chernoff, implies that the relative difference between the total cost
achieved by the proposed policy and the optimal total cost approaches zero as
the penalty of wrong declaration (hence the number of collected samples)
increases. The second heuristic is shown to achieve asymptotic optimality only
in a limited setting such as the problem of a noisy dynamic search. However, by
considering the dependency on the number of hypotheses, under a technical
condition, this second heuristic is shown to achieve a nonzero information
acquisition rate, establishing a lower bound for the maximum achievable rate
and error exponent. In the case of a noisy dynamic search with size-independent
noise, the obtained nonzero rate and error exponent are shown to be maximum.
|
1203.4627
|
Truthfulness, Proportional Fairness, and Efficiency
|
cs.GT cs.DS cs.MA
|
How does one allocate a collection of resources to a set of strategic agents
in a fair and efficient manner without using money? For in many scenarios it is
not feasible to use money to compensate agents for otherwise unsatisfactory
outcomes. This paper studies this question, looking at both fairness and
efficiency measures.
We employ the proportionally fair solution, which is a well-known fairness
concept for money-free settings. But although finding a proportionally fair
solution is computationally tractable, it cannot be implemented in a truthful
fashion. Consequently, we seek approximate solutions. We give several truthful
mechanisms which achieve proportional fairness in an approximate sense. We use
a strong notion of approximation, requiring the mechanism to give each agent a
good approximation of its proportionally fair utility. In particular, one of
our mechanisms provides a better and better approximation factor as the minimum
demand for every good increases. A motivating example is provided by the
massive privatization auction in the Czech Republic in the early 90s.
With regard to efficiency, prior work has shown a lower bound of 0.5 on the
approximation factor of any swap-dictatorial mechanism approximating a social
welfare measure even for the two agents and multiple goods case. We surpass
this lower bound by designing a non-swap-dictatorial mechanism for this case.
Interestingly, the new mechanism builds on the notion of proportional fairness.
|
1203.4642
|
Why Watching Movie Tweets Won't Tell the Whole Story?
|
cs.SI physics.soc-ph
|
Data from Online Social Networks (OSNs) are providing analysts with an
unprecedented access to public opinion on elections, news, movies etc. However,
caution must be taken to determine whether and how much of the opinion
extracted from OSN user data is indeed reflective of the opinion of the larger
online population. In this work we study this issue in the context of movie
reviews on Twitter and compare the opinion of Twitter users with that of the
online population of IMDb and Rotten Tomatoes. We introduce new metrics to show
that the Twitter users can be characteristically different from general users,
both in their rating and their relative preference for Oscar-nominated and
non-nominated movies. Additionally, we investigate whether such data can truly
predict a movie's box-office success.
|
1203.4685
|
A Local Approach for Identifying Clusters in Networks
|
cs.SI physics.soc-ph
|
Graph clustering is a fundamental problem that has been extensively studied
both in theory and practice. The problem has been defined in several ways in
the literature, and most of them have been proven to be NP-hard. Due to their high
practical relevance, several heuristics for graph clustering have been
introduced which constitute a central tool for coping with NP-completeness, and
are used in applications of clustering ranging from computer vision, to data
analysis, to learning. There exist many methodologies for this problem; however,
most of them are global in nature and are unlikely to scale well for very large
networks. In this paper, we propose two scalable local approaches for
identifying the clusters in any network. We further extend one of these
approaches for discovering the overlapping clusters in these networks. Some
experimentation results obtained for the proposed approaches are also
presented.
|
1203.4693
|
On the Stability of Contention Resolution Diversity Slotted ALOHA
|
cs.IT math.IT
|
In this paper a Time Division Multiple Access (TDMA) based Random Access (RA)
channel with Successive Interference Cancellation (SIC) is considered for a
finite user population and a reliable retransmission mechanism on the basis of
Contention Resolution Diversity Slotted ALOHA (CRDSA). A general mathematical
model based on Markov Chains is derived which makes it possible to predict the
stability regions of SIC-RA channels, the expected delays in equilibrium and
the selection of parameters for a stable channel configuration. Furthermore, the
model enables the estimation of the average time before reaching instability.
The presented model is verified against simulations, and numerical results are
provided comparing the stability of CRDSA with that of traditional Slotted
ALOHA (SA). The results show that CRDSA has not only a high gain over SA in
terms of throughput but also superior stability.
|
1203.4732
|
A Unifying Framework to Characterize the Power of a Language to Express
Relations
|
cs.DB
|
In this extended abstract we provide a unifying framework that can be used to
characterize and compare the expressive power of query languages for different
database models. The framework is based upon the new idea of a valid partition,
that is, a partition of the elements of a given database, where each class of
the partition is composed of elements that cannot be separated (distinguished)
according to some level of information contained in the database. We describe
two applications of this new framework, first by deriving a new syntactic
characterization of the expressive power of relational algebra which is
equivalent to the one given by Paredaens, and subsequently by studying the
expressive power of a simple graph-based data model.
|
1203.4746
|
Sublinear Time, Approximate Model-based Sparse Recovery For All
|
cs.IT math.IT
|
We describe a probabilistic, {\it sublinear} runtime, measurement-optimal
system for model-based sparse recovery problems through dimensionality
reducing, {\em dense} random matrices. Specifically, we obtain a linear sketch
$u\in \mathbb{R}^M$ of a vector $x\in \mathbb{R}^N$ in high dimensions through a
matrix $\Phi \in \mathbb{R}^{M\times N}$ $(M<N)$. We assume this vector can be
well approximated by $K$ non-zero coefficients (i.e., it is $K$-sparse). In
addition, the nonzero coefficients of $x$ can obey additional structure
constraints such as matroid, totally unimodular, or knapsack constraints, which
we dub model-based sparsity. We construct the dense measurement matrix using a
probabilistic method so that it satisfies the so-called restricted isometry
property in the $\ell_2$-norm. While recovery using such matrices is
measurement-optimal as they require the smallest sketch sizes
$M = O(K \log(N/K))$, the existing algorithms require superlinear runtime
$\Omega(N\log(N/K))$, with the exception of Porat and Strauss, which requires
$O(\beta^5\epsilon^{-3}K(N/K)^{1/\beta})$, $\beta \in \mathbb{Z}_{+}$, but
provides only an $\ell_1/\ell_1$ approximation guarantee. In contrast, our
approach features $O\big(\max \lbrace L K \log^{O(1)} N, ~L K^2 \log^2
(N/K) \rbrace\big)$ complexity, where $L \in \mathbb{Z}_{+}$ is a design
parameter independent of $N$, requires a smaller sketch size, can accommodate
model sparsity, and provides a stronger $\ell_2/\ell_1$ guarantee. Our system
applies to "for all" sparse signals and is robust against bounded perturbations
in $u$ as well as perturbations on $x$ itself.
|
1203.4764
|
On the Design of a Novel Joint Network-Channel Coding Scheme for the
Multiple Access Relay Channel
|
cs.IT math.IT
|
This paper proposes a novel joint non-binary network-channel code for the
Time-Division Decode-and-Forward Multiple Access Relay Channel (TD-DF-MARC),
where the relay linearly combines -- over a non-binary finite field -- the
coded sequences from the source nodes. A method based on an EXIT chart analysis
is derived for selecting the best coefficients of the linear combination.
Moreover, it is shown that for different setups of the system, different
coefficients should be chosen in order to improve the performance. This
conclusion contrasts with previous works where a random selection was
considered. Monte Carlo simulations show that the proposed scheme outperforms,
in terms of its gap to the outage probabilities, the previously published joint
network-channel coding approaches. Besides, this gain is achieved by using very
short-length codewords, which makes the scheme particularly attractive for
low-latency applications.
|
1203.4788
|
Very Short Literature Survey From Supervised Learning To Surrogate
Modeling
|
cs.LG
|
The past century was the era of linear systems. Either systems (especially
industrial ones) were simple and (quasi)linear, or linear approximations were
accurate enough. Moreover, a profusion of computing devices only became
available in the closing decades of the century; before then, the lack of
computational resources made it difficult to evaluate the available studies of
nonlinear systems. Both of these conditions have now changed: systems are
highly complex, and pervasive computational power is cheap and easy to obtain.
For this new era, a branch of supervised learning known as surrogate modeling
(meta-modeling, surface modeling) has been devised, aimed at answering the new
needs of the modeling realm. This short literature survey introduces surrogate
modeling to readers who are familiar with the concepts of supervised learning.
The necessity, challenges, and visions of the topic are considered.
|
1203.4810
|
Estimating a Random Walk First-Passage Time from Noisy or Delayed
Observations
|
cs.IT math.IT stat.OT
|
A random walk (or a Wiener process), possibly with drift, is observed in a
noisy or delayed fashion. The problem considered in this paper is to estimate
the first time \tau the random walk reaches a given level. Specifically, the
p-moment (p\geq 1) optimization problem \inf_\eta \ex|\eta-\tau|^p is
investigated where the infimum is taken over the set of stopping times that are
defined on the observation process.
When there is no drift, optimal stopping rules are characterized for both
types of observations. When there is a drift, upper and lower bounds on
\inf_\eta \ex|\eta-\tau|^p are established for both types of observations. The
bounds are tight in the large-level regime for noisy observations and in the
large-level-large-delay regime for delayed observations. Notably, for noisy
observations there exists an asymptotically optimal stopping rule that is a
function of a single observation.
Simulation results are provided that corroborate the validity of the results
for non-asymptotic settings.
|
1203.4844
|
Practical Coding Schemes for Cognitive Overlay Radios
|
cs.IT math.IT
|
We develop practical coding schemes for the cognitive overlay radios as
modeled by the cognitive interference channel, a variation of the classical two
user interference channel where one of the transmitters has knowledge of both
messages. Inspired by information theoretical results, we develop a coding
strategy for each of the three parameter regimes where capacity is known. A key
feature of the capacity achieving schemes in these regimes is the joint
decoding of both users' codewords, which we accomplish by performing a
posteriori probability calculation over a combined trellis. The schemes are
shown to perform close to the capacity limit with low error rate.
|
1203.4855
|
Texture Classification Approach Based on Combination of Edge &
Co-occurrence and Local Binary Pattern
|
cs.CV cs.AI
|
Texture classification is a problem that has received much attention from
computer scientists since the late 90s. If texture classification is done
correctly and accurately, it can be used in many applications such as pattern
recognition, object tracking, and shape recognition. So far, many methods have
been offered to solve this problem. Nearly all of these methods try to extract
and define features that separate the different texture labels well. This
article offers an approach that processes texture images based on the Local
Binary Pattern and Gray Level Co-occurrence Matrix, followed by edge detection,
and finally classifies the images using statistical features extracted from
them. Although this approach is general and could be used in different
applications, the method has been tested on stone textures and the results have
been compared with some previous approaches to demonstrate the quality of the
proposed approach.
|
1203.4865
|
Successive Refinement with Decoder Cooperation and its Channel Coding
Duals
|
cs.IT math.IT
|
We study cooperation in multi-terminal source coding models involving
successive refinement. Specifically, we study the case of a single encoder and
two decoders, where the encoder provides a common description to both the
decoders and a private description to only one of the decoders. The decoders
cooperate via cribbing, i.e., the decoder with access only to the common
description is allowed to observe, in addition, a deterministic function of the
reconstruction symbols produced by the other. We characterize the fundamental
performance limits in the respective settings of non-causal, strictly-causal
and causal cribbing. We use a new coding scheme, referred to as Forward
Encoding and Block Markov Decoding, which is a variant of one recently used by
Cuff and Zhao for coordination via implicit communication. Finally, we use the
insight gained to introduce and solve some dual channel coding scenarios
involving Multiple Access Channels with cribbing.
|
1203.4867
|
Multi-hop Analog Network Coding: An Amplify-and-Forward Approach
|
cs.IT math.IT
|
In this paper, we study the performance of an amplify-and-forward (AF) based
analog network coding (ANC) relay scheme in a multi-hop wireless network under
individual power constraints. In the first part, a unicast scenario is
considered. The problem of finding the maximum achievable rate is formulated as
an optimization problem. Rather than solving this non-concave maximization
problem, we derive upper and lower bounds for the optimal rate. A cut-set like
upper bound is obtained in a closed form for a layered relay network. A
pseudo-optimal AF scheme is developed for a two-hop parallel network, which is
different from the conventional scheme with all amplification gains chosen as
the maximum possible values. The conditions under which either the novel scheme
or the conventional one achieves a rate within half a bit of the upper bound
are found. Then we provide an AF-based multi-hop ANC scheme with the two
schemes for a layered relay network. It is demonstrated that the lower bound of
the optimal rate can asymptotically achieve the upper bound when the network is
in the generalized high-SNR regime. In the second part, the optimal rate region
for a two-hop multiple access channel (MAC) via AF relays is investigated. In a
similar manner, we first derive an outer bound for it and then focus on
designing low complexity AF-based ANC schemes for different scenarios. Several
examples are given, and the numerical results indicate that the achievable rate
regions of the ANC schemes are close to the outer bound.
|
1203.4870
|
Variational Bayesian algorithm for quantized compressed sensing
|
cs.IT math.IT
|
Compressed sensing (CS) concerns the recovery of high-dimensional signals from
their low-dimensional linear measurements under a sparsity prior, and digital
quantization of the measurement data is inevitable in practical implementations
of CS algorithms. In the existing literature, the quantization error is modeled
typically as additive noise and the multi-bit and 1-bit quantized CS problems
are dealt with separately using different treatments and procedures. In this
paper, a novel variational Bayesian inference based CS algorithm is presented,
which unifies the multi- and 1-bit CS processing and is applicable to various
cases of noiseless/noisy environment and unsaturated/saturated quantizer. By
decoupling the quantization error from the measurement noise, the quantization
error is modeled as a random variable and estimated jointly with the signal
being recovered. Such a novel characterization of the quantization error
results in superior performance of the algorithm which is demonstrated by
extensive simulations in comparison with state-of-the-art methods for both
multi-bit and 1-bit CS problems.
|
1203.4874
|
A Co-Prime Blur Scheme for Data Security in Video Surveillance
|
cs.CV
|
This paper presents a novel Coprime Blurred Pair (CBP) model for visual
data-hiding for security in camera surveillance. While most previous approaches
have focused on completely encrypting the video stream, we introduce a spatial
encryption scheme by blurring the image/video contents to create a CBP. Our
goal is to obscure detail in public video streams by blurring while allowing
behavior to be recognized and to quickly deblur the stream so that details are
available if behavior is recognized as suspicious. We create a CBP by blurring
the same latent image with two unknown kernels. The two kernels are coprime
when mapped to bivariate polynomials in the z domain. To deblur the CBP we
first use the coprime constraint to approximate the kernels and sample the
bivariate CBP polynomials in one dimension on the unit circle. At each sample
point, we factor the 1D polynomial pair and compose the results into a 2D
kernel matrix. Finally, we compute the inverse Fast Fourier Transform (FFT) of
the kernel matrices to recover the coprime kernels and then the latent video
stream. It is therefore only possible to deblur the video stream if a user has
access to both streams. To improve the practicability of our algorithm, we
implement our algorithm using a graphics processing unit (GPU) to decrypt the
blurred video streams in real-time, and extensive experimental results
demonstrate that our new scheme can effectively protect sensitive identity
information in surveillance videos and faithfully reconstruct the unblurred
video stream when two blurred sequences are available.
|
1203.4875
|
Spontaneous Symmetry Breaking in Interdependent Networked Game
|
physics.soc-ph cs.SI
|
Spatial evolutionary games have traditionally assumed that players interact
with neighbors on a single network, which is isolated and not influenced by
other systems. We introduce a simple game model into interdependent networks
composed of two networks, and show that when the interdependent factor $\alpha$
is smaller than a particular value $\alpha_C$, homogeneous cooperation can be
guaranteed. However, as the interdependent factor exceeds $\alpha_C$,
spontaneous symmetry breaking of the fraction of cooperators emerges between
the different networks. In addition, our results are well predicted by the
strategy-couple pair approximation method.
|
1203.4881
|
Computational Complexity Analysis of Multi-Objective Genetic Programming
|
cs.NE
|
The computational complexity analysis of genetic programming (GP) has been
started recently by analyzing simple (1+1) GP algorithms for the problems ORDER
and MAJORITY. In this paper, we study how taking the complexity as an
additional criterion influences the runtime behavior. We consider
generalizations of ORDER and MAJORITY and present a computational complexity
analysis of (1+1) GP using multi-criteria fitness functions that take into
account the original objective and the complexity of a syntax tree as a
secondary measure. Furthermore, we study the expected time until
population-based multi-objective genetic programming algorithms have computed
the Pareto front when taking the complexity of a syntax tree as an equally
important objective.
|
1203.4882
|
Large-System Analysis of Joint User Selection and Vector Precoding with
Zero-Forcing Transmit Beamforming for MIMO Broadcast Channels
|
cs.IT math.IT
|
Multiple-input multiple-output (MIMO) broadcast channels (BCs) (MIMO-BCs)
with perfect channel state information (CSI) at the transmitter are considered.
As joint user selection (US) and vector precoding (VP) (US-VP) schemes with
zero-forcing transmit beamforming (ZF-BF), US with continuous VP (CVP) (US-CVP)
and data-dependent US (DD-US) are investigated. The replica method, developed
in statistical physics, is used to analyze the energy penalties for the two
US-VP schemes in the large-system limit, where the number of users, the number
of selected users, and the number of transmit antennas tend to infinity with
their ratios kept constant. Four observations are obtained in the large-system
limit: First, the assumptions of replica symmetry (RS) and 1-step replica
symmetry breaking (1RSB) for DD-US can provide acceptable approximations for
low and moderate system loads, respectively. Secondly, DD-US outperforms CVP
with random US in terms of the energy penalty for low-to-moderate system loads.
Thirdly, the asymptotic energy penalty of DD-US is indistinguishable from that
of US-CVP for low system loads. Finally, a greedy algorithm for DD-US proposed
in the authors' previous work can achieve nearly optimal performance for
low-to-moderate system loads.
|
1203.4903
|
Distance Queries from Sampled Data: Accurate and Efficient
|
cs.DS cs.DB math.ST stat.TH
|
Distance queries are a basic tool in data analysis. They are used for
detection and localization of change for the purpose of anomaly detection,
monitoring, or planning. Distance queries are particularly useful when data
sets such as measurements, snapshots of a system, content, traffic matrices,
and activity logs are collected repeatedly.
Random sampling, which can be efficiently performed over streamed or
distributed data, is an important tool for scalable data analysis. The sample
constitutes an extremely flexible summary, which naturally supports domain
queries and scalable estimation of statistics, which can be specified after the
sample is generated. The effectiveness of a sample as a summary, however,
hinges on the estimators we have.
We derive novel estimators for estimating $L_p$ distance from sampled data.
Our estimators apply with the most common weighted sampling schemes: Poisson
Probability Proportional to Size (PPS) and its fixed sample size variants. They
also apply when the samples of different data sets are independent or
coordinated. Our estimators are admissible (Pareto optimal in terms of
variance) and have compelling properties.
We study the performance of our Manhattan and Euclidean distance ($p=1,2$)
estimators on diverse datasets, demonstrating scalability and accuracy even
when a small fraction of the data is sampled. Our work, for the first time,
facilitates effective distance estimation over sampled data.
|
1203.4924
|
A Flexible Channel Coding Approach for Short-Length Codewords
|
cs.IT math.IT
|
This letter introduces a novel channel coding design framework for
short-length codewords that permits balancing the tradeoff between the bit
error rate floor and waterfall region by modifying a single real-valued
parameter. The proposed approach is based on combining convolutional coding
with a $q$-ary linear combination and unequal energy allocation, the latter
being controlled by the aforementioned parameter. EXIT charts are used to shed
light on the convergence characteristics of the associated iterative decoder,
which is described in terms of factor graphs. Simulation results show that the
proposed scheme is able to adjust its end-to-end error rate performance
efficiently and easily, in contrast to previous approaches that require a
full code redesign when the error rate requirements of the application change.
Simulations also show that, at mid-range bit-error rates, there is a small
performance penalty with respect to the previous approaches. However, the EXIT
chart analysis and the simulation results suggest that for very low bit-error
rates the proposed system will exhibit lower error floors than previous
approaches.
|
1203.4930
|
Kernels for linear time invariant system identification
|
cs.SY
|
In this paper, we study the problem of identifying the impulse response of a
linear time invariant (LTI) dynamical system from the knowledge of the input
signal and a finite set of noisy output observations. We adopt an approach
based on regularization in a Reproducing Kernel Hilbert Space (RKHS) that takes
into account both continuous and discrete time systems. The focus of the paper
is on designing spaces that are well suited for temporal impulse response
modeling. To this end, we construct and characterize general families of
kernels that incorporate system properties such as stability, relative degree,
absence of oscillatory behavior, smoothness, or delay. In addition, we discuss
the possibility of automatically searching over these classes by means of
kernel learning techniques, so as to capture different modes of the system to
be identified.
|
1203.4933
|
Reduplicated MWE (RMWE) helps in improving the CRF based Manipuri POS
Tagger
|
cs.CL
|
This paper gives a detailed overview of the modified feature selection in
CRF (Conditional Random Field) based Manipuri POS (Part of Speech) tagging.
Feature selection is so important in CRF that the better the features, the
better the outputs. This work is an attempt, or an experiment, to make the
previous work more efficient. Multiple new features are tried when running the
CRF, and the Reduplicated Multiword Expression (RMWE) is then tried as an
additional feature. The CRF is run with RMWE because Manipuri is rich in RMWEs,
and identifying them becomes one of the necessities for improving the POS
tagging results. The new CRF system shows a Recall of 78.22%, Precision of
73.15% and F-measure of 75.60%. Identifying RMWEs and considering them as a
feature improves this to a Recall of 80.20%, Precision of 74.31% and F-measure
of 77.14%.
|
1203.5028
|
Hybridizing PSM and RSM Operator for Solving NP-Complete Problems:
Application to Travelling Salesman Problem
|
cs.NE
|
In this paper, we present a new mutation operator, Hybrid Mutation (HPRM),
for a genetic algorithm that generates high quality solutions to the Traveling
Salesman Problem (TSP). The Hybrid Mutation operator constructs an offspring
from a pair of parents by hybridizing two mutation operators, PSM and RSM. The
efficiency of the HPRM is compared against some existing mutation operators,
namely Reverse Sequence Mutation (RSM) and Partial Shuffle Mutation (PSM), on
BERLIN52, an instance from TSPLIB. Experimental results show that the new mutation
operator is better than the RSM and PSM.
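A minimal sketch of the two constituent mutation operators and one possible way to hybridize them follows; the exact composition order used by HPRM is an assumption made purely for illustration.

    import random

    def reverse_sequence_mutation(tour):
        # RSM: reverse the cities between two random cut points
        i, j = sorted(random.sample(range(len(tour)), 2))
        tour[i:j + 1] = reversed(tour[i:j + 1])
        return tour

    def partial_shuffle_mutation(tour, p=0.1):
        # PSM: swap each city with a random position with probability p
        for i in range(len(tour)):
            if random.random() < p:
                j = random.randrange(len(tour))
                tour[i], tour[j] = tour[j], tour[i]
        return tour

    def hybrid_mutation(tour, p=0.1):
        # One possible hybridization: apply PSM first, then RSM, to the same offspring
        return reverse_sequence_mutation(partial_shuffle_mutation(tour, p))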
|
1203.5037
|
On the Convergence Speed of Turbo Demodulation with Turbo Decoding
|
cs.IT math.IT
|
Iterative processing is widely adopted nowadays in modern wireless receivers
for advanced channel codes like turbo and LDPC codes. Extension of this
principle with an additional iterative feedback loop to the demapping function
has proven to provide substantial error performance gain. However, the adoption
of iterative demodulation with turbo decoding is constrained by the additional
implied implementation complexity, heavily impacting latency and power
consumption. In this paper, we analyze the convergence speed of these combined
two iterative processes in order to determine the exact required number of
iterations at each level. Extrinsic information transfer (EXIT) charts are used
for a thorough analysis at different modulation orders and code rates. An
original iteration scheduling is proposed, reducing the number of demapping
iterations by two with a reasonable performance loss of less than 0.15 dB. Analyzing and
normalizing the computational and memory access complexity, which directly
impact latency and power consumption, demonstrates the considerable gains of
the proposed scheduling and the promising contributions of the proposed
analysis.
|
1203.5051
|
Analysing Temporally Annotated Corpora with CAVaT
|
cs.CL
|
We present CAVaT, a tool that performs Corpus Analysis and Validation for
TimeML. CAVaT is an open source, modular checking utility for statistical
analysis of features specific to temporally-annotated natural language corpora.
It provides reporting, highlights salient links between a variety of general
and time-specific linguistic features, and also validates a temporal annotation
to ensure that it is logically consistent and sufficiently annotated. Uniquely,
CAVaT provides analysis specific to TimeML-annotated temporal information.
TimeML is a standard for annotating temporal information in natural language
text. In this paper, we present the reporting part of CAVaT, and then its
error-checking ability, including the workings of several novel TimeML document
verification methods. This is followed by the execution of some example tasks
using the tool to show relations between times, events, signals and links. We
also demonstrate inconsistencies in a TimeML corpus (TimeBank) that have been
detected with CAVaT.
|
1203.5055
|
Using Signals to Improve Automatic Classification of Temporal Relations
|
cs.CL
|
Temporal information conveyed by language describes how the world around us
changes through time. Events, durations and times are all temporal elements
that can be viewed as intervals. These intervals are sometimes temporally
related in text. Automatically determining the nature of such relations is a
complex and unsolved problem. Some words can act as "signals" which suggest a
temporal ordering between intervals. In this paper, we use these signal words
to improve the accuracy of a recent approach to classification of temporal
links.
|
1203.5060
|
USFD2: Annotating Temporal Expresions and TLINKs for TempEval-2
|
cs.CL
|
We describe the University of Sheffield system used in the TempEval-2
challenge, USFD2. The challenge requires the automatic identification of
temporal entities and relations in text. USFD2 identifies and anchors temporal
expressions, and also attempts two of the four temporal relation assignment
tasks. A rule-based system picks out and anchors temporal expressions, and a
maximum entropy classifier assigns temporal link labels, based on features that
include descriptions of associated temporal signal words. USFD2 identified
temporal expressions successfully, and correctly classified their type in 90%
of cases. Determining the relation between an event and time expression in the
same sentence was performed at 63% accuracy, the second highest score in this
part of the challenge.
|
1203.5062
|
An Annotation Scheme for Reichenbach's Verbal Tense Structure
|
cs.CL
|
In this paper we present RTMML, a markup language for the tenses of verbs and
temporal relations between verbs. There is a richness to tense in language that
is not fully captured by existing temporal annotation schemata. Following
Reichenbach we present an analysis of tense in terms of abstract time points,
with the aim of supporting automated processing of tense and temporal relations
in language. This allows for precise reasoning about tense in documents, and
the deduction of temporal relations between the times and verbal events in a
discourse. We define the syntax of RTMML, and demonstrate the markup in a range
of situations.
|
1203.5066
|
A Corpus-based Study of Temporal Signals
|
cs.CL
|
Automatic temporal ordering of events described in discourse has been of
great interest in recent years. Event orderings are conveyed in text via
various linguistic mechanisms including the use of expressions such as "before",
"after" or "during" that explicitly assert a temporal relation -- temporal
signals. In this paper, we investigate the role of temporal signals in temporal
relation extraction and provide a quantitative analysis of these expressions
in the TimeBank annotated corpus.
|
1203.5073
|
USFD at KBP 2011: Entity Linking, Slot Filling and Temporal Bounding
|
cs.CL
|
This paper describes the University of Sheffield's entry in the 2011 TAC KBP
entity linking and slot filling tasks. We chose to participate in the
monolingual entity linking task, the monolingual slot filling task and the
temporal slot filling tasks. We set out to build a framework for
experimentation with knowledge base population. This framework was created, and
applied to multiple KBP tasks. We demonstrated that our proposed framework is
effective and suitable for collaborative development efforts, as well as useful
in a teaching environment. Finally we present results that, while very modest,
provide improvements an order of magnitude greater than our 2010 attempt.
|
1203.5076
|
Massively Increasing TIMEX3 Resources: A Transduction Approach
|
cs.CL
|
Automatic annotation of temporal expressions is a research challenge of great
interest in the field of information extraction. Gold standard
temporally-annotated resources are limited in size, which makes research using
them difficult. Standards have also evolved over the past decade, so not all
temporally annotated data is in the same format. We vastly increase available
human-annotated temporal expression resources by converting older format
resources to TimeML/TIMEX3. This task is difficult due to differing annotation
methods. We present a robust conversion tool and a new, large temporal
expression resource. Using this, we evaluate our conversion process by using it
as training data for an existing TimeML annotation tool, achieving a 0.87 F1
measure -- better than any system in the TempEval-2 timex recognition exercise.
|
1203.5078
|
Kernel Density Feature Points Estimator for Content-Based Image
Retrieval
|
cs.CV
|
Research is taking place to find effective algorithms for content-based image
representation and description. There is a substantial amount of algorithms
available that use visual features (color, shape, texture). Shape features have
attracted so much attention from researchers that there are many shape
representation and description algorithms in the literature. These shape image
representation and description algorithms are usually not application
independent or robust, making them undesirable for generic shape description.
This paper presents an object shape representation using Kernel Density Feature
Points Estimator (KDFPE). In this method, the density of feature points within
defined rings around the centroid of the image is obtained. The KDFPE is then
applied to the vector of the image. KDFPE is invariant to translation, scale
and rotation. This method of image representation shows improved retrieval rate
when compared to the Density Histogram Feature Points (DHFP) method. An
analytical study is done to justify our method, which is compared with DHFP to
demonstrate its robustness.
|
1203.5084
|
A Data Driven Approach to Query Expansion in Question Answering
|
cs.CL cs.IR
|
Automated answering of natural language questions is an interesting and
useful problem to solve. Question answering (QA) systems often perform
information retrieval at an initial stage. Information retrieval (IR)
performance, provided by engines such as Lucene, places a bound on overall
system performance. For example, no answer bearing documents are retrieved at
low ranks for almost 40% of questions.
In this paper, answer texts from previous QA evaluations held as part of the
Text REtrieval Conferences (TREC) are paired with queries and analysed in an
attempt to identify performance-enhancing words. These words are then used to
evaluate the performance of a query expansion method.
Data driven extension words were found to help in over 70% of difficult
questions. These words can be used to improve and evaluate query expansion
methods. Simple blind relevance feedback (RF) was correctly predicted as
unlikely to help overall performance, and a possible explanation is provided
for its low value in IR for QA.
|
1203.5086
|
"Selfish" algorithm for optimizing the network survivability analysis
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
In Nature, the primary goal of any network is to survive. This is less
obvious for engineering networks (electric power, gas, water, transportation
systems, etc.) that are expected to operate under normal conditions most of the
time. As a result, the ability of a network to withstand massive sudden damage
caused by adverse events (or survivability) has not been among traditional
goals in the network design. Reality, however, calls for the adjustment of
design priorities. As modern networks develop toward increasing their size,
complexity, and integration, the likelihood of adverse events increases too due
to technological development, climate change, and activities in the political
arena among other factors. Under such circumstances, a network failure has an
unprecedented effect on lives and economy. To mitigate the impact of adverse
events on the network operability, the survivability analysis must be conducted
at the early stage of the network design. Such analysis requires the
development of new analytical and computational tools. Computational analysis
of network survivability is at least an exponential-time problem. The
current paper describes a new algorithm, in which the reduction of the
computational complexity is achieved by mapping an initial network topology
with multiple sources and sinks onto a set of simpler smaller topologies with
multiple sources and a single sink. Steps for further reducing the time and
space expenses of computations are also discussed.
|
1203.5124
|
Parallel Matrix Factorization for Binary Response
|
cs.LG stat.AP
|
Predicting user affinity to items is an important problem in applications
like content optimization, computational advertising, and many more. While
bilinear random effect models (matrix factorization) provide state-of-the-art
performance when minimizing RMSE through a Gaussian response model on explicit
ratings data, applying it to imbalanced binary response data presents
additional challenges that we carefully study in this paper. Data in many
applications usually consist of users' implicit responses, which are often binary
-- clicking an item or not; the goal is to predict click rates, which is often
combined with other measures to calculate utilities to rank items at runtime of
the recommender systems. Because of the implicit nature, such data are usually
much larger than explicit rating data and often have an imbalanced distribution
with a small fraction of click events, making accurate click rate prediction
difficult. In this paper, we address two problems. First, we show previous
techniques to estimate bilinear random effect models with binary data are less
accurate compared to our new approach based on adaptive rejection sampling,
especially for imbalanced response. Second, we develop a parallel bilinear
random effect model fitting framework using Map-Reduce paradigm that scales to
massive datasets. Our parallel algorithm is based on a "divide and conquer"
strategy coupled with an ensemble approach. Through experiments on the
benchmark MovieLens data, a small Yahoo! Front Page data set, and a large
Yahoo! Front Page data set that contains 8M users and 1B binary observations,
we show that careful handling of binary response as well as identifiability
issues are needed to achieve good performance for click rate prediction, and
that the proposed adaptive rejection sampler and the partitioning as well as
ensemble techniques significantly improve model performance.
|
1203.5126
|
Online detection of temporal communities in evolving networks by
estrangement confinement
|
cs.SI cond-mat.stat-mech physics.soc-ph
|
Temporal communities result from a consistent partitioning of nodes across
multiple snapshots of an evolving complex network that can help uncover how
dense clusters in a network emerge, combine, split and decay with time. Current
methods for finding communities in a single snapshot are not straightforwardly
generalizable to finding temporal communities since the quality functions used
for finding static communities have highly degenerate landscapes, and the
eventual partition chosen among the many partitions of similar quality is
highly sensitive to small changes in the network. To reliably detect temporal
communities we need not only to find a good community partition in a given
snapshot but also ensure that it bears some similarity to the partition(s)
found in immediately preceding snapshots. We present a new measure of partition
distance called "estrangement" motivated by the inertia of inter-node
relationships which, when incorporated into the measurement of partition
quality, facilitates the detection of meaningful temporal communities.
Specifically, we propose the estrangement confinement method, which postulates
that neighboring nodes in a community prefer to continue to share community
affiliation as the network evolves. Constraining estrangement enables us to
find meaningful temporal communities at various degrees of temporal smoothness
in diverse real-world datasets. Specifically, we study the evolution of voting
behavior of senators in the United States Congress, the evolution of proximity
in human mobility datasets, and the detection of evolving communities in
synthetic networks that are otherwise hard to find. Estrangement confinement
thus provides a principled approach to uncovering temporal communities in
evolving networks.
|
1203.5128
|
Acceleration of the shiftable O(1) algorithm for bilateral filtering and
non-local means
|
cs.CV cs.DC
|
A direct implementation of the bilateral filter [1] requires O(\sigma_s^2)
operations per pixel, where \sigma_s is the (effective) width of the spatial
kernel. A fast implementation of the bilateral filter was recently proposed in
[2] that required O(1) operations per pixel with respect to \sigma_s. This was
done by using trigonometric functions for the range kernel of the bilateral
filter, and by exploiting their so-called shiftability property. In particular,
a fast implementation of the Gaussian bilateral filter was realized by
approximating the Gaussian range kernel using raised cosines. Later, it was
demonstrated in [3] that this idea could be extended to a larger class of
filters, including the popular non-local means filter [4]. As already observed
in [2], a flip side of this approach was that the run time depended on the
width \sigma_r of the range kernel. For an image with (local) intensity
variations in the range [0,T], the run time scaled as O(T^2/\sigma^2_r) with
\sigma_r. This made it difficult to implement narrow range kernels,
particularly for images with large dynamic range. We discuss this problem in
this note, and propose some simple steps to accelerate the implementation in
general, and for small \sigma_r in particular.
[1] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color
images", Proc. IEEE International Conference on Computer Vision, 1998.
[2] K.N. Chaudhury, Daniel Sage, and M. Unser, "Fast O(1) bilateral filtering
using trigonometric range kernels", IEEE Transactions on Image Processing,
2011.
[3] K.N. Chaudhury, "Constant-time filtering using shiftable kernels", IEEE
Signal Processing Letters, 2011.
[4] A. Buades, B. Coll, and J.M. Morel, "A review of image denoising
algorithms, with a new one", Multiscale Modeling and Simulation, 2005.
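A small sketch of the raised-cosine idea referenced above, approximating the Gaussian range kernel exp(-t^2 / (2 sigma_r^2)) by [cos(gamma t)]^N with gamma = 1/(sigma_r sqrt(N)); the parameter choices are illustrative assumptions and do not reproduce the accelerations proposed in this note.

    import numpy as np

    def raised_cosine_range_kernel(t, sigma_r, N=10):
        # [cos(t / (sigma_r * sqrt(N)))]^N tends to exp(-t^2 / (2 sigma_r^2)) as N grows;
        # N must grow with (T / sigma_r)^2 for a faithful fit, which is the source of
        # the run-time dependence on sigma_r discussed above.
        gamma = 1.0 / (sigma_r * np.sqrt(N))
        return np.cos(gamma * t) ** N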
|
1203.5156
|
A New Low-Complexity Selected Mapping Scheme Using Cyclic Shifted IFFT
for PAPR Reduction in OFDM Systems
|
cs.IT math.IT
|
In this paper, a new peak-to-average power ratio (PAPR) reduction scheme for
orthogonal frequency division multiplexing (OFDM) is proposed based on the
selected mapping (SLM) scheme. The proposed SLM scheme generates alternative
OFDM signal sequences by cyclically shifting the connections in each subblock
at an intermediate stage of inverse fast Fourier transform (IFFT). Compared
with the conventional SLM scheme, the proposed SLM scheme achieves similar PAPR
reduction performance with much lower computational complexity and no bit error
rate (BER) degradation. The performance of the proposed SLM scheme is verified
through numerical analysis. Also, it is shown that the proposed SLM scheme has
the lowest computational complexity among the existing low-complexity SLM
schemes exploiting the signals at an intermediate stage of IFFT.
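For context, a minimal sketch of the selection step shared by SLM-type schemes: generate several alternative time-domain sequences and keep the one with the smallest PAPR. The candidates below come from random phase rotations and full IFFTs (the conventional scheme); the proposed method instead produces them by cyclically shifting subblock connections at an intermediate IFFT stage, which is not reproduced here.

    import numpy as np

    def papr_db(x):
        # Peak-to-average power ratio of a time-domain OFDM symbol, in dB
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    def slm_select(freq_symbols, num_candidates=8):
        # Conventional SLM baseline: one full IFFT per phase-rotated candidate
        best, best_papr = None, np.inf
        for _ in range(num_candidates):
            phases = np.where(np.random.rand(len(freq_symbols)) < 0.5, 1.0, -1.0)
            candidate = np.fft.ifft(freq_symbols * phases)
            cand_papr = papr_db(candidate)
            if cand_papr < best_papr:
                best, best_papr = candidate, cand_papr
        return best, best_papr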
|
1203.5158
|
Evolutionary Events in a Mathematical Sciences Research Collaboration
Network
|
physics.soc-ph cs.DL cs.SI math.HO
|
This study examines long-term trends and shifting behavior in the
collaboration network of mathematics literature, using a subset of data from
Mathematical Reviews spanning 1985-2009. Rather than modeling the network
cumulatively, this study traces the evolution of the "here and now" using
fixed-duration sliding windows. The analysis uses a suite of common network
diagnostics, including the distributions of degrees, distances, and clustering,
to track network structure. Several random models that take these diagnostics
as parameters help tease them apart as factors from the values of others.
behaviors are consistent over the entire interval, but most diagnostics
indicate that the network's structural evolution is dominated by occasional
dramatic shifts in otherwise steady trends. These behaviors are not distributed
evenly across the network; stark differences in evolution can be observed
between two major subnetworks, loosely thought of as "pure" and "applied",
which approximately partition the aggregate. The paper characterizes two major
events along the mathematics network trajectory and discusses possible
explanatory factors.
|
1203.5161
|
Effect of correlations on network controllability
|
physics.soc-ph cond-mat.stat-mech cs.SI cs.SY math.OC
|
A dynamical system is controllable if by imposing appropriate external
signals on a subset of its nodes, it can be driven from any initial state to
any desired state in finite time. Here we study the impact of various network
characteristics on the minimal number of driver nodes required to control a
network. We find that clustering and modularity have no discernible impact, but
the symmetries of the underlying matching problem can produce linear, quadratic
or no dependence on degree correlation coefficients, depending on the nature of
the underlying correlations. The results are supported by numerical simulations
and help narrow the observed gap between the predicted and the observed number
of driver nodes in real networks.
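For reference, the minimal number of driver nodes in such studies is typically obtained from a maximum matching of the directed network; the following sketch shows that standard matching-based computation (using networkx), under the assumption that this is the counting used here, and does not reproduce the correlation analysis of the paper.

    import networkx as nx

    def num_driver_nodes(directed_edges, n):
        # Build the bipartite representation of the directed network and compute
        # a maximum matching; unmatched nodes must be driven directly.
        B = nx.Graph()
        B.add_nodes_from((("out", u) for u in range(n)), bipartite=0)
        B.add_nodes_from((("in", v) for v in range(n)), bipartite=1)
        B.add_edges_from((("out", u), ("in", v)) for u, v in directed_edges)
        top = [node for node in B if node[0] == "out"]
        matching = nx.algorithms.bipartite.hopcroft_karp_matching(B, top_nodes=top)
        return max(n - len(matching) // 2, 1)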
|
1203.5181
|
$k$-MLE: A fast algorithm for learning statistical mixture models
|
cs.LG stat.ML
|
We describe $k$-MLE, a fast and efficient local search algorithm for learning
finite statistical mixtures of exponential families such as Gaussian mixture
models. Mixture models are traditionally learned using the
expectation-maximization (EM) soft clustering technique that monotonically
increases the incomplete (expected complete) likelihood. Given prescribed
mixture weights, the hard clustering $k$-MLE algorithm iteratively assigns data
to the most likely weighted component and updates the component models using
Maximum Likelihood Estimators (MLEs). Using the duality between exponential
families and Bregman divergences, we prove that the local convergence of the
complete likelihood of $k$-MLE follows directly from the convergence of a dual
additively weighted Bregman hard clustering. The inner loop of $k$-MLE can be
implemented using any $k$-means heuristic like the celebrated Lloyd's batched
or Hartigan's greedy swap updates. We then show how to update the mixture
weights by minimizing a cross-entropy criterion, which amounts to updating the weights
by taking the relative proportion of cluster points, and reiterate the mixture
parameter update and mixture weight update processes until convergence. Hard EM
is interpreted as a special case of $k$-MLE when both the component update and
the weight update are performed successively in the inner loop. To initialize
$k$-MLE, we propose $k$-MLE++, a careful initialization of $k$-MLE guaranteeing
probabilistically a global bound on the best possible complete likelihood.
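A stripped-down sketch of one hard-assignment iteration for an isotropic Gaussian mixture with fixed weights, to convey the flavor of the k-MLE inner loop; this simplification (and the variance floor) are assumptions made for illustration and are not the authors' full algorithm.

    import numpy as np

    def k_mle_step(X, means, variances, weights):
        # Assign each point to its most likely weighted component...
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        loglik = (np.log(weights) - 0.5 * d2 / variances
                  - 0.5 * X.shape[1] * np.log(variances))
        labels = loglik.argmax(axis=1)
        # ...then refit each component by its maximum likelihood estimate.
        for k in range(len(weights)):
            pts = X[labels == k]
            if len(pts):
                means[k] = pts.mean(axis=0)
                variances[k] = max(((pts - means[k]) ** 2).mean(), 1e-8)
        return means, variances, labels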
|
1203.5184
|
A Universal Model of Commuting Networks
|
math.ST cs.SI physics.soc-ph stat.TH
|
We test a recently proposed model of commuting networks on 80 case studies
from different regions of the world (Europe and United-States) and with
geographic units of different sizes (municipality, county, region). The model
takes as input the number of commuters coming in and out of each geographic
unit and generates the matrix of commuting flows between the geographic units.
We show that the single parameter of the model, which rules the compromise
between the influence of the distance and job opportunities, follows a
universal law that depends only on the average surface of the geographic units.
We verified that the law derived from a part of the case studies yields
accurate results on other case studies. We also show that our model
significantly outperforms the two other approaches proposing a universal
commuting model (Balcan et al. (2009); Simini et al. (2012)), particularly when
the geographic units are small (e.g. municipalities).
|
1203.5188
|
Semi-Automatically Extracting FAQs to Improve Accessibility of Software
Development Knowledge
|
cs.SE cs.CL cs.IR
|
Frequently asked questions (FAQs) are a popular way to document software
development knowledge. As creating such documents is expensive, this paper
presents an approach for automatically extracting FAQs from sources of software
development discussion, such as mailing lists and Internet forums, by combining
techniques of text mining and natural language processing. We apply the
approach to popular mailing lists and carry out a survey among software
developers to show that it is able to extract high-quality FAQs that may be
further improved by experts.
|
1203.5218
|
Coteries, Social Circles and Hamlets Close Communities: A Study of
Acquaintance Networks
|
cs.SI cs.DM math.CO physics.soc-ph
|
In the analysis of social networks many relatively loose and heuristic
definitions of 'community' abound. In this paper the concept of closely knit
communities is studied, as defined by the property that every pair of its
members is either a pair of neighbors or has at least one common neighbor, where the
neighboring relationship is based on some more or less durable and stable
acquaintance or contact relation. In this paper these are studied in the form
of graphs or networks of diameter two (2-clubs). Their structure can be
characterized by investigating shortest spanning trees and girth leading to a
typology containing just three or, in combination, six types of close
communities.
|
1203.5244
|
Second weight codewords of generalized Reed-Muller codes
|
math.NT cs.IT math.IT
|
In this paper we give the second weight codewords of the generalized
Reed-Muller code of order r and length $q^m$.
|
1203.5255
|
Post-Editing Error Correction Algorithm for Speech Recognition using
Bing Spelling Suggestion
|
cs.CL
|
ASR, short for Automatic Speech Recognition, is the process of converting
spoken speech into text that can be manipulated by a computer. Although ASR has
several applications, it is still erroneous and imprecise, especially if used in
a harsh environment where the input speech is of low quality. This paper
proposes a post-editing ASR error correction method and algorithm based on
Bing's online spelling suggestion. In this approach, the ASR recognized output
text is spell-checked using Bing's spelling suggestion technology to detect and
correct misrecognized words. More specifically, the proposed algorithm breaks
down the ASR output text into several word-tokens that are submitted as search
queries to the Bing search engine. A returned spelling suggestion implies that a
query is misspelled; and thus it is replaced by the suggested correction;
otherwise, no correction is performed and the algorithm continues with the next
token until all tokens get validated. Experiments carried out on various
speeches in different languages indicated a successful decrease in the number
of ASR errors and an improvement in the overall error correction rate. Future
research can improve upon the proposed algorithm, for instance by parallelizing
it to take advantage of multiprocessor computers.
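A bare-bones sketch of the token-by-token validation loop described above; bing_spelling_suggestion is a hypothetical callable standing in for the Bing spelling-suggestion service, not an actual API name.

    def correct_asr_output(text, bing_spelling_suggestion):
        # Submit each token as a query; replace it only if a suggestion is returned.
        corrected = []
        for token in text.split():
            suggestion = bing_spelling_suggestion(token)
            corrected.append(suggestion if suggestion else token)
        return " ".join(corrected)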
|
1203.5262
|
ASR Context-Sensitive Error Correction Based on Microsoft N-Gram Dataset
|
cs.CL
|
At the present time, computers are employed to solve complex tasks and
problems ranging from simple calculations to intensive digital image processing
and intricate algorithmic optimization problems to computationally-demanding
weather forecasting problems. ASR, short for Automatic Speech Recognition, is
yet another type of computational problem whose purpose is to recognize human
spoken speech and convert it into text that can be processed by a computer.
Although ASR has many versatile and pervasive real-world applications, it is
still relatively erroneous and not perfectly solved, as it is prone to produce
spelling errors in the recognized text, especially if the ASR system is
operating in a noisy environment, its vocabulary size is limited, and its input
speech is of bad or low quality. This paper proposes a post-editing ASR error
correction method based on the Microsoft N-Gram dataset for detecting and correcting
spelling errors generated by ASR systems. The proposed method comprises an
error detection algorithm for detecting word errors; a candidate corrections
generation algorithm for generating correction suggestions for the detected
word errors; and a context-sensitive error correction algorithm for selecting
the best candidate for correction. The virtue of using the Microsoft N-Gram
dataset is that it contains real-world data and word sequences extracted from
the web, which can mimic a comprehensive dictionary of words with a large and
all-inclusive vocabulary. Experiments conducted on numerous speeches, performed
by different speakers, showed a remarkable reduction in ASR errors. Future
research can improve upon the proposed algorithm, for instance by parallelizing
it to take advantage of multiprocessor and distributed systems.
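A minimal sketch of the context-sensitive selection step: score each candidate correction in its surrounding context and keep the most probable one. Here ngram_prob is a hypothetical callable standing in for a Microsoft N-Gram lookup, not an actual API.

    def choose_best_candidate(left_context, candidates, right_context, ngram_prob):
        # Pick the candidate whose surrounding word sequence scores highest.
        return max(candidates,
                   key=lambda c: ngram_prob(left_context + [c] + right_context))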
|
1203.5324
|
Improving an Hybrid Literary Book Recommendation System through Author
Ranking
|
cs.IR cs.DL
|
Literary reading is an important activity for individuals and choosing to
read a book can be a long-term commitment, making book choice an important task
for book lovers and public library users. In this paper we present a hybrid
recommendation system to help readers decide which book to read next. We study
book and author recommendation in a hybrid recommendation setting and test our
approach on the LitRec data set. Our proposed hybrid book recommendation
approach combines two item-based collaborative filtering algorithms to predict
books and authors that the user will like. Author predictions are expanded
into a book list that is subsequently aggregated with the former list generated
through the initial collaborative recommender. Finally, the resulting book list
is used to yield the top-n book recommendations. By means of various
experiments, we demonstrate that author recommendation can improve overall book
recommendation.
|
1203.5325
|
Exact-Repair Minimum Bandwidth Regenerating Codes Based on Evaluation of
Linearized Polynomials
|
cs.IT math.IT
|
In this paper, we propose two new constructions of exact-repair minimum
bandwidth regenerating (exact-MBR) codes. Both constructions obtain the encoded
symbols by first treating the message vector over GF(q) as a linearized
polynomial and then evaluating it over an extension field GF(q^m). The
evaluation points are chosen so that the encoded symbols at any node are
conjugates of each other, while corresponding symbols of different nodes are
linearly dependent with respect to GF(q). These properties ensure that data
repair can be carried out over the base field GF(q), instead of matrix
inversion over the extension field required by some existing exact-MBR codes.
To the best of our knowledge, this approach is novel in the construction of
exact-MBR codes. One of our constructions leads to exact-MBR codes with
arbitrary parameters. These exact-MBR codes have higher data reconstruction
complexities but lower data repair complexities than their counterparts based
on the product-matrix approach; hence they may be suitable for applications
that need a small number of data reconstructions but a large number of data
repairs.
|