id | title | categories | abstract |
|---|---|---|---|
1404.3378 | Complexity theoretic limitations on learning DNF's | cs.LG cs.CC | Using the recently developed framework of [Daniely et al, 2014], we show that
under a natural assumption on the complexity of refuting random K-SAT formulas,
learning DNF formulas is hard. Furthermore, the same assumption implies the
hardness of learning intersections of $\omega(\log(n))$ halfspaces,
agnostically learning conjunctions, as well as virtually all (distribution
free) learning problems that were previously shown hard (under complexity
assumptions).
|
1404.3389 | Mean-Field Games for Marriage | math.OC cs.GT cs.SY math.DS math.PR | This article examines mean-field games for marriage. The results support the
argument that optimizing the long-term well-being through effort and social
feeling state distribution (mean-field) will help to stabilize marriage.
However, if the cost of effort is very high, the couple fluctuates in a bad
feeling state or the marriage breaks down. We then examine the influence of
society on a couple using mean field sentimental games. We show that, in
mean-field equilibrium, the optimal effort is always higher than the one-shot
optimal effort. We illustrate numerically the influence of the couple's network
on their feeling states and their well-being.
|
1404.3394 | Decentralized and Collaborative Subspace Pursuit: A
Communication-Efficient Algorithm for Joint Sparsity Pattern Recovery with
Sensor Networks | cs.IT math.IT | In this paper, we consider the problem of joint sparsity pattern recovery in
a distributed sensor network. The sparse multiple measurement vector signals
(MMVs) observed by all the nodes are assumed to have a common (but unknown)
sparsity pattern. To accurately recover the common sparsity pattern in a
decentralized manner with a low communication overhead of the network, we
develop an algorithm named decentralized and collaborative subspace pursuit
(DCSP). In DCSP, each node is required to perform three kinds of operations per
iteration: 1) estimate the local sparsity pattern by finding the subspace that
its measurement vector most probably lies in; 2) share its local sparsity
pattern estimate with one-hop neighboring nodes; and 3) update the final
sparsity pattern estimate by majority vote based fusion of all the local
sparsity pattern estimates obtained in its neighborhood. The convergence of
DCSP is proved and its communication overhead is quantitatively analyzed. We
also propose another decentralized algorithm named generalized DCSP (GDCSP) by
allowing more information exchange among neighboring nodes to further improve
the accuracy of sparsity pattern recovery at the cost of increased
communication overhead. Experimental results show that, 1) compared with
existing decentralized algorithms, DCSP provides much better accuracy of
sparsity pattern recovery at a comparable communication cost; and 2) the
accuracy of GDCSP is very close to that of centralized processing.
|
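The per-iteration fusion step (3) above is plain majority voting; here is a minimal sketch with an invented 5-node neighborhood and length-6 support (an illustration, not the authors' implementation):

```python
import numpy as np

def majority_vote_fusion(local_estimates):
    """Fuse binary sparsity-pattern estimates (one row per node in the
    neighborhood): an index is declared active when more than half of
    the nodes marked it active."""
    n_nodes = local_estimates.shape[0]
    votes = local_estimates.sum(axis=0)
    return (votes > n_nodes / 2).astype(int)

# Hypothetical neighborhood of 5 nodes voting on a length-6 support.
estimates = np.array([
    [1, 0, 1, 0, 0, 1],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 0],
])
fused = majority_vote_fusion(estimates)
print(fused)  # indices 0 and 2 carry a majority -> [1 0 1 0 0 0]
```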
1404.3411 | Achievable Secrecy Rates over MIMOME Gaussian Channels with GMM Signals
in Low-Noise Regime | cs.IT math.IT | We consider a wiretap multiple-input multiple-output multiple-eavesdropper
(MIMOME) channel, where agent Alice aims at transmitting a secret message to
agent Bob, while leaking no information on it to an eavesdropper agent Eve. We
assume that Alice has more antennas than both Bob and Eve, and that she has
only statistical knowledge of the channel towards Eve. We focus on the
low-noise regime, and assess the secrecy rates that are achievable when the
secret message determines the distribution of a multivariate Gaussian mixture
model (GMM) from which a realization is generated and transmitted over the
channel. In particular, we show that if Eve has fewer antennas than Bob, secret
transmission is always possible at low-noise. Moreover, we show that in the
low-noise limit the secrecy capacity of our scheme coincides with its
unconstrained capacity, by providing a class of covariance matrices that allows
this limit to be attained without the need for wiretap coding.
|
1404.3415 | Generalized version of the support vector machine for binary
classification problems: supporting hyperplane machine | cs.LG stat.ML | This paper proposes a generalized version of the SVM for binary
classification problems under an arbitrary transformation x -> y. An approach
similar to the classic SVM method is used, and the problem is explained in
detail. Various formulations of the primal and dual problems are proposed, and
for one of the most important cases the formulae are derived in detail. A
simple computational example is demonstrated, and the algorithm and its
implementation are presented in the Octave language.
|
1404.3418 | Active Learning for Undirected Graphical Model Selection | stat.ML cs.IT math.IT math.ST stat.TH | This paper studies graphical model selection, i.e., the problem of estimating
a graph of statistical relationships among a collection of random variables.
Conventional graphical model selection algorithms are passive, i.e., they
require all the measurements to have been collected before processing begins.
We propose an active learning algorithm that uses junction tree representations
to adapt future measurements based on the information gathered from prior
measurements. We prove that, under certain conditions, our active learning
algorithm requires fewer scalar measurements than any passive algorithm to
reliably estimate a graph. A range of numerical results validates our theory
and demonstrates the benefits of active learning.
|
1404.3435 | Web Search of New Linearized Medical Drug Leads | cs.IR | The Web is a potentially huge source of medical drug leads. But despite the
significant amount of multi-dimensional information about drugs, commercial
search engines currently accept only linear keyword strings as inputs. This
work uses linearized fragments of molecular structures as knowledge
representation units to serve as inputs to search engines. It is shown that
quite arbitrary fragments are surprisingly free of ambiguity, yielding
relatively small result sets that are both manageable and rich in novel
potential drug leads.
|
1404.3438 | Constant Delay and Constant Feedback Moving Window Network Coding for
Wireless Multicast: Design and Asymptotic Analysis | cs.IT math.IT | A major challenge of wireless multicast is to be able to support a large
number of users while simultaneously maintaining low delay and low feedback
overhead. In this paper, we develop a joint coding and feedback scheme named
Moving Window Network Coding with Anonymous Feedback (MWNC-AF) that
successfully addresses this challenge. In particular, we show that our scheme
simultaneously achieves both a constant decoding delay and a constant feedback
overhead, irrespective of the number of receivers $n$, without sacrificing
either throughput or reliability. We explicitly characterize the asymptotic
decay rate of the tail of the delay distribution, and prove that transmitting a
fixed amount of information bits into the MWNC-AF encoder buffer in each
time-slot (called "constant data injection process") achieves the fastest decay
rate, thus showing how to obtain delay optimality in a large deviation sense.
We then investigate the average decoding delay of MWNC-AF, and show that when
the traffic load approaches the capacity, the average decoding delay under the
constant injection process is at most one half of that under a Bernoulli
injection process. In addition, we prove that the per-packet encoding and
decoding complexity of MWNC-AF both scale as $O(\log n)$ with the number of
receivers $n$. Our simulations further underscore the performance of our scheme
through comparisons with other schemes and show that the delay, encoding and
decoding complexity are low even for a large number of receivers, demonstrating
the efficiency, scalability, and ease of implementability of MWNC-AF.
|
1404.3439 | Anytime Hierarchical Clustering | stat.ML cs.IR cs.LG | We propose a new anytime hierarchical clustering method that iteratively
transforms an arbitrary initial hierarchy on the configuration of measurements
along a sequence of trees that, as we prove, must terminate for a fixed data
set in a chain of nested partitions satisfying a natural homogeneity
requirement.
Each recursive step re-edits the tree so as to improve a local measure of
cluster homogeneity that is compatible with a number of commonly used (e.g.,
single, average, complete) linkage functions. As an alternative to the standard
batch algorithms, we present numerical evidence to suggest that appropriate
adaptations of this method can yield decentralized, scalable algorithms
suitable for distributed/parallel computation of clustering hierarchies and
online tracking of clustering trees applicable to large, dynamically changing
databases and anomaly detection.
|
1404.3442 | Optimal versus Nash Equilibrium Computation for Networked Resource
Allocation | cs.GT cs.DM cs.SY math.CO | Motivated by emerging resource allocation and data placement problems such as
web caches and peer-to-peer systems, we consider and study a class of resource
allocation problems over a network of agents (nodes). In this model, nodes can
store only a limited number of resources while accessing the remaining ones
through their closest neighbors. We consider this problem under both
optimization and game-theoretic frameworks. In the case of optimal resource
allocation we will first show that when there are only k=2 resources, the
optimal allocation can be found efficiently in O(n^2\log n) steps, where n
denotes the total number of nodes. However, for k>2 this problem becomes
NP-hard with no polynomial time approximation algorithm with a performance
guarantee better than 1+1/102k^2, even under metric access costs. We then
provide a 3-approximation algorithm for the optimal resource allocation which
runs only in linear time O(n). Subsequently, we look at this problem under a
selfish setting formulated as a noncooperative game and provide a
3-approximation algorithm for obtaining its pure Nash equilibria under metric
access costs. We then establish an equivalence between the set of pure Nash
equilibria and flip-optimal solutions of the Max-k-Cut problem over a specific
weighted complete graph. Using this reduction, we show that finding the
lexicographically smallest Nash equilibrium for k > 2 is NP-hard, and provide an
algorithm to find it in O(n^3 2^n) steps. While the reduction to weighted
Max-k-Cut suggests that finding a pure Nash equilibrium using best response
dynamics might be PLS-hard, it allows us to use tools from quadratic
programming to devise more systematic algorithms towards obtaining Nash
equilibrium points.
|
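To make the game-theoretic setting concrete, here is a best-response-dynamics sketch under simplifying assumptions that go beyond the abstract: each node stores exactly one of the k resources and pays the metric distance to the nearest holder of every other resource. The function name and the example are invented for illustration only.

```python
def best_response_dynamics(dist, k, alloc, max_rounds=100):
    """Round-robin best response: each node switches to the resource
    that strictly lowers its own cost, given everyone else's choice.
    Stops when no node wants to deviate (a pure Nash equilibrium)."""
    n = len(dist)

    def cost(i, a):
        # Node i pays, for each resource it does not store locally,
        # the distance to the nearest node that stores it.
        total = 0.0
        for r in range(k):
            if a[i] == r:
                continue
            holders = [j for j in range(n) if j != i and a[j] == r]
            total += min(dist[i][j] for j in holders) if holders else float("inf")
        return total

    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            best = min(range(k),
                       key=lambda r: cost(i, alloc[:i] + [r] + alloc[i + 1:]))
            if cost(i, alloc[:i] + [best] + alloc[i + 1:]) < cost(i, alloc):
                alloc[i] = best
                changed = True
        if not changed:
            break
    return alloc

# Hypothetical example: 4 nodes on a line, k = 2 resources, all start with 0.
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
result = best_response_dynamics(dist, 2, [0, 0, 0, 0])
print(result)  # both resources end up stored somewhere in the network
```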
1404.3447 | Group homomorphisms as error correcting codes | cs.IT math.GR math.IT | We investigate the minimum distance of the error correcting code formed by
the homomorphisms between two finite groups $G$ and $H$. We prove some general
structural results on how the distance behaves with respect to natural group
operations, such as passing to subgroups and quotients, and taking products.
Our main result is a general formula for the distance when $G$ is solvable or
$H$ is nilpotent, in terms of the normal subgroup structure of $G$ as well as
the prime divisors of $|G|$ and $|H|$. In particular, we show that in the above
case, the distance is independent of the subgroup structure of $H$. We
complement this by showing that, in general, the distance depends on the
subgroup structure of $G$.
|
1404.3448 | Solving The Longest Overlap Region Problem for Noncoding DNA Sequences
with GPU | cs.DC cs.CE | Early hardware limitations of GPUs (lack of synchronization primitives and
limited memory caching mechanisms) could make GPU-based computation
inefficient, but new biotechnologies now bring more opportunities to
bioinformatics and biological engineering. Our paper introduces a way to solve
the longest overlap region problem for non-coding DNA sequences using the
Compute Unified Device Architecture (CUDA) platform on an Intel(R) Core(TM)
i3-3110M quad-core machine. Compared to a standard CPU implementation, the
CUDA results show that longest overlap region recognition for noncoding DNA is
an efficient approach to high-performance bioinformatics applications: the GPU
implementation achieves more than a 20x speedup over the serial CPU
implementation. We believe our method gives a cost-efficient solution to the
bioinformatics community for the longest overlap region recognition problem
and other related fields.
|
1404.3456 | A Way For Accelerating The DNA Sequence Reconstruction Problem By CUDA | cs.DC cs.CE | Traditionally, the shotgun method is used to cut a DNA sequence into pieces,
and the original sequence must then be reconstructed from those pieces; this
is the widely used approach to DNA assembly. Emerging DNA sequencing
technologies open up more opportunities for molecular biology. This paper
introduces a new method to improve the efficiency of DNA sequence
reconstruction using a suffix array built on the CUDA programming model.
Experimental results on an Intel(R) Core(TM) i3-3110K quad-core CPU and an
NVIDIA GeForce 610M GPU show that constructing the suffix array on the GPU is
a more efficient approach, running more than 20 times faster than the serial
CPU implementation. We believe our method gives a cost-efficient solution to
the bioinformatics community.
|
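The data structure at the heart of the paper above is the suffix array: the lexicographically sorted order of all suffixes of a string. A naive sequential sketch (the paper parallelizes the construction on CUDA; this comparison-sort version is only for illustration):

```python
def suffix_array(s):
    """Return suffix start positions sorted by the suffixes themselves.
    Naive O(n^2 log n): each comparison may inspect O(n) characters."""
    return sorted(range(len(s)), key=lambda i: s[i:])

# Classic example: suffixes of "banana" sorted are
# a(5), ana(3), anana(1), banana(0), na(4), nana(2).
print(suffix_array("banana"))  # [5, 3, 1, 0, 4, 2]
```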
1404.3458 | Novel Polynomial Basis and Its Application to Reed-Solomon Erasure Codes | cs.IT math.IT | In this paper, we present a new basis of polynomial over finite fields of
characteristic two and then apply it to the encoding/decoding of Reed-Solomon
erasure codes. The proposed polynomial basis allows $h$-point polynomial
evaluation to be computed in $O(h\log_2(h))$ finite field operations with
small leading constant. As compared with the canonical polynomial basis, the
proposed basis improves the arithmetic complexity of addition, multiplication,
and the determination of polynomial degree from $O(h\log_2(h)\log_2\log_2(h))$
to $O(h\log_2(h))$. Based on this basis, we then develop the encoding and
erasure decoding algorithms for the $(n=2^r,k)$ Reed-Solomon codes. Thanks to
the efficiency of the transform based on the polynomial basis, the encoding can be
completed in $O(n\log_2(k))$ finite field operations, and the erasure decoding
in $O(n\log_2(n))$ finite field operations. To the best of our knowledge, this
is the first approach supporting Reed-Solomon erasure codes over
characteristic-2 finite fields while achieving a complexity of $O(n\log_2(n))$,
in both additive and multiplicative complexities. As the complexity leading
factor is small, the algorithms are advantageous in practical applications.
|
1404.3461 | A 2D based Partition Strategy for Solving Ranking under Team Context
(RTP) | cs.DB | In this paper, we propose a 2D based partition method for solving the problem
of Ranking under Team Context (RTC) on datasets without a priori knowledge. We
first map the data into 2D space using its minimum and maximum values among
all dimensions. Then we construct window queries taking the current team
context into account. Moreover, during the query mapping procedure, we can
pre-prune some tuples that cannot be top ranked. This pre-classification step
defers processing of those tuples and saves cost while still providing correct
solutions. Experiments show that our algorithm performs correctly and is
especially effective on large datasets.
|
1404.3482 | On the hardness of the decoding and the minimum distance problems for
rank codes | cs.CC cs.IT math.IT | In this paper we give a randomized reduction for the Rank Syndrome Decoding
problem and Rank Minimum Distance problem for rank codes. Our results are based
on an embedding from linear codes equipped with the Hamming distance onto
linear codes over an extension field equipped with the rank metric. We prove
that if both of these problems for the rank metric are in ZPP =
RP$\cap$coRP, then we would have NP = ZPP. We also give complexity results
for the respective approximation problems in the rank metric.
|
1404.3497 | Using Wireless Network Coding to Replace a Wired with Wireless Backhaul | cs.IT math.IT | Cellular networks are evolving towards dense deployment of small cells. This
in turn demands flexible and efficient backhauling solutions. A viable solution
that reuses the same spectrum is wireless backhaul where the Small Base Station
(SBS) acts as a relay. In this paper we consider a reference system that uses
wired backhaul and each Mobile Station (MS) in the small cell has its uplink
and downlink rates defined. The central question is: if we remove the wired
backhaul, how much extra power should the wireless backhaul use in order to
support the same uplink/downlink rates? We introduce the idea of
wireless-emulated wire (WEW), based on two-way relaying and network coding.
Furthermore, in a scenario where two SBSs are served simultaneously, WEW gives
rise to new communication strategies, partially inspired by the private/public
messages of the Han-Kobayashi scheme for the interference channel. We formulate
and solve the associated optimization problems. The proposed approach provides
a convincing argument that two-way communication is the proper context to
design and optimize wireless backhauling solutions.
|
1404.3520 | A Theoretical Assessment of Solution Quality in Evolutionary Algorithms
for the Knapsack Problem | cs.NE | Evolutionary algorithms are well suited for solving the knapsack problem.
Some empirical studies claim that evolutionary algorithms can produce good
solutions to the 0-1 knapsack problem. Nonetheless, few rigorous investigations
address the quality of solutions that evolutionary algorithms may produce for
the knapsack problem. The current paper focuses on a theoretical investigation
of three types of (N+1) evolutionary algorithms that exploit bitwise mutation,
truncation selection, plus different repair methods for the 0-1 knapsack
problem. It assesses the solution quality in terms of the approximation ratio.
Our work indicates that the solution produced by pure strategy and mixed
strategy evolutionary algorithms is arbitrarily bad. Nevertheless, the
evolutionary algorithm using helper objectives may produce 1/2-approximation
solutions to the 0-1 knapsack problem.
|
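To make the setup concrete, here is a minimal (1+1) EA sketch with bitwise mutation and a greedy value/weight repair operator. The paper analyzes (N+1) populations, truncation selection, and several repair methods, so this collapses to N=1 and one repair choice purely for illustration:

```python
import random

def ea_knapsack(values, weights, capacity, generations=2000, seed=0):
    """(1+1) EA for 0-1 knapsack: flip each bit with prob 1/n, repair
    infeasible children greedily, keep the child if it is no worse."""
    rng = random.Random(seed)
    n = len(values)

    def repair(x):
        # Drop selected items with the worst value/weight ratio until feasible.
        x = list(x)
        while sum(w for w, b in zip(weights, x) if b) > capacity:
            worst = min((i for i in range(n) if x[i]),
                        key=lambda i: values[i] / weights[i])
            x[worst] = 0
        return x

    def fitness(x):
        return sum(v for v, b in zip(values, x) if b)

    parent = repair([rng.randint(0, 1) for _ in range(n)])
    for _ in range(generations):
        child = repair([b ^ (rng.random() < 1.0 / n) for b in parent])
        if fitness(child) >= fitness(parent):  # selection keeps the better one
            parent = child
    return parent, fitness(parent)

# Tiny instance: optimum is items {1, 2} with value 10 + 12 = 22, weight 5.
sol, best = ea_knapsack([6, 10, 12], [1, 2, 3], capacity=5)
print(sol, best)
```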
1404.3525 | Distributed Asynchronous Optimization Framework for the MISO
Interference Channel | cs.IT math.IT | We study the distributed optimization of transmit strategies in a
multiple-input, single-output (MISO) interference channel (IFC). Existing
distributed algorithms rely on strictly synchronized update steps by the
individual users. They require a global synchronization mechanism and
potentially suffer from the synchronization penalty caused by, e.g., backhaul
communication delays and fixed update sequences. We establish a general
optimization framework that allows asynchronous update steps. The users perform
their computations at arbitrary instants of time, and do not wait for
information that has been sent to them. Based on certain bounds on the amount
of asynchronism that is present in the execution of the algorithm, we are able
to characterize its convergence. As illustrated by our numerical results, the
proposed algorithm can alleviate communication overloads and is not
excessively slowed down by either communication delays or differences in the
computation intervals.
|
1404.3538 | Proceedings of The 38th Annual Workshop of the Austrian Association for
Pattern Recognition (\"OAGM), 2014 | cs.CV | The 38th Annual Workshop of the Austrian Association for Pattern Recognition
(\"OAGM) will be held at IST Austria, on May 22-23, 2014. The workshop provides
a platform for researchers and industry to discuss traditional and new areas of
computer vision. This year the main topic is: Pattern Recognition:
interdisciplinary challenges and opportunities.
|
1404.3543 | Recover Canonical-View Faces in the Wild with Deep Neural Networks | cs.CV | Face images in the wild undergo large intra-personal variations, such as
poses, illuminations, occlusions, and low resolutions, which cause great
challenges to face-related applications. This paper addresses this challenge by
proposing a new deep learning framework that can recover the canonical view of
face images. It dramatically reduces the intra-person variances, while
maintaining the inter-person discriminativeness. Unlike the existing face
reconstruction methods that were either evaluated in controlled 2D environment
or employed 3D information, our approach directly learns the transformation
from the face images with a complex set of variations to their canonical views.
At the training stage, to avoid the costly process of labeling canonical-view
images from the training set by hand, we have devised a new measurement to
automatically select or synthesize a canonical-view image for each identity. As
an application, this face recovery approach is used for face verification.
Facial features are learned from the recovered canonical-view face images by
using a facial component-based convolutional neural network. Our approach
achieves the state-of-the-art performance on the LFW dataset.
|
1404.3580 | Joint Estimation and Localization in Sensor Networks | cs.MA cs.NI cs.RO cs.SY | This paper addresses the problem of collaborative tracking of dynamic targets
in wireless sensor networks. A novel distributed linear estimator, which is a
version of a distributed Kalman filter, is derived. We prove that the filter is
mean square consistent in the case of static target estimation. When large
sensor networks are deployed, it is common that the sensors do not have good
knowledge of their locations, which affects the target estimation procedure.
Unlike most existing approaches for target tracking, we investigate the
performance of our filter when the sensor poses need to be estimated by an
auxiliary localization procedure. The sensors are localized via a distributed
Jacobi algorithm from noisy relative measurements. We prove strong convergence
guarantees for the localization method and in turn for the joint localization
and target estimation approach. The performance of our algorithms is
demonstrated in simulation on environmental monitoring and target tracking
tasks.
|
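The distributed Jacobi localization step described above admits a compact sketch: each non-anchor node repeatedly resets its position estimate to the average of (neighbor estimate minus measured offset). The data layout and the tiny example are invented for illustration, not taken from the paper:

```python
import numpy as np

def jacobi_localization(edges, meas, n, anchors, iters=200):
    """Distributed Jacobi iteration for localization from relative
    measurements. meas[(i, j)] is a (possibly noisy) measurement of
    x_j - x_i, so node i's estimate from neighbor j is x_j - meas."""
    dim = len(next(iter(anchors.values())))
    x = np.zeros((n, dim))
    for i, p in anchors.items():
        x[i] = p
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(iters):
        x_new = x.copy()
        for i in range(n):
            if i in anchors:
                continue  # anchors keep their known positions
            acc = []
            for j in nbrs[i]:
                m = meas[(i, j)] if (i, j) in meas else -meas[(j, i)]
                acc.append(x[j] - m)
            x_new[i] = np.mean(acc, axis=0)
        x = x_new
    return x

# Tiny 1D example: three nodes at 0, 1, 2; node 0 is an anchor.
edges = [(0, 1), (1, 2)]
meas = {(0, 1): np.array([1.0]), (1, 2): np.array([1.0])}
est = jacobi_localization(edges, meas, n=3, anchors={0: np.array([0.0])})
print(est.ravel())  # converges toward [0, 1, 2]
```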
1404.3581 | Random forests with random projections of the output space for high
dimensional multi-label classification | stat.ML cs.LG | We adapt the idea of random projections applied to the output space, so as to
enhance tree-based ensemble methods in the context of multi-label
classification. We show how learning time complexity can be reduced without
affecting computational complexity and accuracy of predictions. We also show
that random output space projections may be used in order to reach different
bias-variance tradeoffs, over a broad panel of benchmark problems, and that
this may lead to improved accuracy while reducing significantly the
computational burden of the learning stage.
|
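The key fact behind output-space projection is Johnson-Lindenstrauss-style distance preservation: trees grown on a few random projections of the label matrix see nearly the same output geometry as trees grown on all labels, at a fraction of the cost. A minimal numpy sketch (the dimensions and sparsity level are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 100, 500, 50  # samples, original label dim, projected dim
Y = (rng.random((n, d)) < 0.05).astype(float)  # sparse multi-label matrix

# Gaussian random projection of the output space: pairwise distances
# between label vectors are approximately preserved, so impurity-based
# splits computed on the m projected scores behave almost like splits
# computed on all d labels.
G = rng.standard_normal((d, m)) / np.sqrt(m)
Yp = Y @ G

orig = np.linalg.norm(Y[0] - Y[1])
proj = np.linalg.norm(Yp[0] - Yp[1])
print(orig, proj)  # the two distances should be close
```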
1404.3591 | Hybrid Conditional Gradient - Smoothing Algorithms with Applications to
Sparse and Low Rank Regularization | math.OC cs.LG stat.ML | We study a hybrid conditional gradient - smoothing algorithm (HCGS) for
solving composite convex optimization problems which contain several terms over
a bounded set. Examples of these include regularization problems with several
norms as penalties and a norm constraint. HCGS extends conditional gradient
methods to cases with multiple nonsmooth terms, in which standard conditional
gradient methods may be difficult to apply. The HCGS algorithm borrows
techniques from smoothing proximal methods and requires first-order
computations (subgradients and proximity operations). Unlike proximal methods,
HCGS benefits from the advantages of conditional gradient methods, which render
it more efficient on certain large scale optimization problems. We demonstrate
these advantages with simulations on two matrix optimization problems:
regularization of matrices with combined $\ell_1$ and trace norm penalties; and
a convex relaxation of sparse PCA.
|
1404.3596 | Face Detection with a 3D Model | cs.CV | This paper presents a part-based face detection approach where the spatial
relationship between the face parts is represented by a hidden 3D model with
six parameters. The computational complexity of the search in the six
dimensional pose space is addressed by proposing meaningful 3D pose candidates
by image-based regression from detected face keypoint locations. The 3D pose
candidates are evaluated using a parameter sensitive classifier based on
difference features relative to the 3D pose. A compatible subset of candidates
is then obtained by non-maximal suppression. Experiments on two standard face
detection datasets show that the proposed 3D model based approach obtains
results comparable to or better than state of the art.
|
1404.3606 | PCANet: A Simple Deep Learning Baseline for Image Classification? | cs.CV cs.LG cs.NE | In this work, we propose a very simple deep learning network for image
classification which comprises only the very basic data processing components:
cascaded principal component analysis (PCA), binary hashing, and block-wise
histograms. In the proposed architecture, PCA is employed to learn multistage
filter banks. It is followed by simple binary hashing and block histograms for
indexing and pooling. This architecture is thus named as a PCA network (PCANet)
and can be designed and learned extremely easily and efficiently. For
comparison and better understanding, we also introduce and study two simple
variations to the PCANet, namely the RandNet and LDANet. They share the same
topology of PCANet but their cascaded filters are either selected randomly or
learned from LDA. We have tested these basic networks extensively on many
benchmark visual datasets for different tasks, such as LFW for face
verification, MultiPIE, Extended Yale B, AR, FERET datasets for face
recognition, as well as MNIST for hand-written digits recognition.
Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with
the state of the art features, either prefixed, highly hand-crafted or
carefully learned (by DNNs). Even more surprisingly, it sets new records for
many classification tasks in Extended Yale B, AR, FERET datasets, and MNIST
variations. Additional experiments on other public datasets also demonstrate
the potential of the PCANet serving as a simple but highly competitive baseline
for texture classification and object recognition.
|
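A one-stage, single-channel sketch of the PCANet pipeline (PCA filter bank, binary hashing, histogram), with invented sizes; the actual network cascades two PCA stages and uses block-wise overlapping histograms:

```python
import numpy as np

def pca_filters(images, k=5, n_filters=4):
    """Learn a PCANet-style filter bank: collect all k x k patches,
    remove each patch's mean, and take the top principal components
    of the patch matrix as convolution filters."""
    patches = []
    for img in images:
        H, W = img.shape
        for r in range(H - k + 1):
            for c in range(W - k + 1):
                p = img[r:r + k, c:c + k].ravel()
                patches.append(p - p.mean())
    X = np.array(patches)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # principal directions
    return Vt[:n_filters].reshape(n_filters, k, k)

def binary_hash_features(img, filters):
    """Convolve (valid mode), binarize each response at 0, pack the bits
    of all filter maps into one integer map, and histogram the codes."""
    k = filters.shape[1]
    H, W = img.shape
    maps = np.zeros((H - k + 1, W - k + 1), dtype=int)
    for bit, f in enumerate(filters):
        resp = np.array([[np.sum(img[r:r + k, c:c + k] * f)
                          for c in range(W - k + 1)]
                         for r in range(H - k + 1)])
        maps += (resp > 0).astype(int) << bit
    return np.bincount(maps.ravel(), minlength=2 ** len(filters))

rng = np.random.default_rng(0)
imgs = [rng.random((12, 12)) for _ in range(10)]
filters = pca_filters(imgs)
feat = binary_hash_features(imgs[0], filters)
print(feat.shape)  # (16,): one bin per 4-bit hash code
```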
1404.3610 | Targeting HIV-related Medication Side Effects and Sentiment Using
Twitter Data | cs.SI cs.CL cs.IR | We present a descriptive analysis of Twitter data. Our study focuses on
extracting the main side effects associated with HIV treatments. The crux of
our work was the identification of personal tweets referring to HIV. We
summarize our results in an infographic aimed at the general public. In
addition, we present a measure of user sentiment based on hand-rated tweets.
|
1404.3626 | Optimal Power Flow as a Polynomial Optimization Problem | math.OC cs.SY | Formulating the alternating current optimal power flow (ACOPF) as a
polynomial optimization problem makes it possible to solve large instances in
practice and to guarantee asymptotic convergence in theory.
|
1404.3637 | A Game-Theoretic Framework for Decentralized Cooperative Data Exchange
using Network Coding | cs.IT math.IT | In this paper, we introduce a game theoretic framework for studying the
problem of minimizing the delay of instantly decodable network coding (IDNC)
for cooperative data exchange (CDE) in decentralized wireless networks. In this
configuration, clients cooperate with each other to recover the erased packets
without a central controller. Game theory is employed herein as a tool for
improving the distributed solution by overcoming the need for a central
controller or additional signaling in the system. We model the session by
self-interested players in a non-cooperative potential game. The utility
functions are designed such that increasing individual payoff results in a
collective behavior achieving both a desirable system performance in a shared
network environment and the Nash bargaining solution. Three games are
developed: the first aims to reduce the completion time, the second the
maximum decoding delay, and the third the sum decoding delay. We improve
these formulations to include a punishment policy upon collision occurrence
and to achieve the Nash bargaining solution. Through extensive simulations, our
framework is tested against the best performance that could be found in the
conventional point-to-multipoint (PMP) recovery process in numerous cases:
first we simulate the problem with complete information. We, then, simulate
with incomplete information and finally we test it in lossy feedback scenario.
Numerical results show that our formulation with complete information largely
outperforms the conventional PMP scheme in most situations and achieves a lower
delay. They also show that the completion time formulation with incomplete
information also outperforms the conventional PMP.
|
1404.3638 | Approximate MMSE Estimator for Linear Dynamic Systems with Gaussian
Mixture Noise | cs.SY | In this work we propose an approximate Minimum Mean-Square Error (MMSE)
filter for linear dynamic systems with Gaussian Mixture noise. The proposed
estimator tracks each component of the Gaussian Mixture (GM) posterior with an
individual filter and minimizes the trace of the covariance matrix of the bank
of filters, as opposed to minimizing the MSE of individual filters in the
commonly used Gaussian sum filter (GSF). Hence, the spread of means in the
proposed method is smaller than that of GSF which makes it more robust to
removing components. Consequently, lower complexity reduction schemes can be
used with the proposed filter without losing estimation accuracy and precision.
This is supported through simulations on synthetic data as well as experimental
data related to an indoor localization system. Additionally, we show that in
two limit cases the state estimation provided by our proposed method converges
to that of GSF, and we provide simulation results supporting this in other
cases.
|
1404.3656 | Methods for Ordinal Peer Grading | cs.LG cs.IR | MOOCs have the potential to revolutionize higher education with their wide
outreach and accessibility, but they require instructors to come up with
scalable alternatives to traditional student evaluation. Peer grading -- having
students assess each other -- is a promising approach to tackling the problem
of evaluation at scale, since the number of "graders" naturally scales with the
number of students. However, students are not trained in grading, which means
that one cannot expect the same level of grading skills as in traditional
settings. Drawing on broad evidence that ordinal feedback is easier to provide
and more reliable than cardinal feedback, it is therefore desirable to allow
peer graders to make ordinal statements (e.g. "project X is better than project
Y") and not require them to make cardinal statements (e.g. "project X is a
B-"). Thus, in this paper we study the problem of automatically inferring
student grades from ordinal peer feedback, as opposed to existing methods that
require cardinal peer feedback. We formulate the ordinal peer grading problem
as a type of rank aggregation problem, and explore several probabilistic models
under which to estimate student grades and grader reliability. We study the
applicability of these methods using peer grading data collected from a real
class -- with instructor and TA grades as a baseline -- and demonstrate the
efficacy of ordinal feedback techniques in comparison to existing cardinal peer
grading methods. Finally, we compare these peer-grading techniques to
traditional evaluation techniques.
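The paper's specific probabilistic models are not detailed in the abstract, but one classical choice for this kind of rank aggregation is the Bradley-Terry model fitted by minorization-maximization. The sketch below is illustrative only (the data and setup are hypothetical, not the authors'): it infers latent "grades" from ordinal pairwise statements.

```python
import numpy as np

def bradley_terry(n_items, comparisons, iters=200):
    """Fit Bradley-Terry strengths from ordinal comparisons.

    comparisons: list of (winner, loser) index pairs.
    Returns a strength vector p (normalized to sum to 1); a larger
    entry means the item is preferred more often. Note: an item that
    never wins gets strength 0 (the degenerate MLE).
    """
    wins = np.zeros(n_items)          # W_i: total wins of item i
    n = np.zeros((n_items, n_items))  # n_ij: times i and j were compared
    for w, l in comparisons:
        wins[w] += 1
        n[w, l] += 1
        n[l, w] += 1
    p = np.ones(n_items) / n_items
    for _ in range(iters):
        # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
        denom = np.zeros(n_items)
        for i in range(n_items):
            for j in range(n_items):
                if i != j and n[i, j] > 0:
                    denom[i] += n[i, j] / (p[i] + p[j])
        p = wins / np.maximum(denom, 1e-12)
        p /= p.sum()                  # fix the scale (BT is scale-free)
    return p

# Three hypothetical "projects": 0 beats 1 and 2; 1 beats 2.
scores = bradley_terry(3, [(0, 1), (0, 2), (1, 2), (0, 1)])
print(scores.argsort()[::-1])  # inferred preference order: item 0 first
```

The recovered ordering can then be mapped onto a grade scale; grader reliability would require a richer model than this sketch.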
|
1404.3659 | Avoiding Undesired Choices Using Intelligent Adaptive Systems | cs.AI | We propose a number of heuristics that can be used for identifying when
intransitive choice behaviour is likely to occur in choice situations. We also
suggest two methods for avoiding undesired choice behaviour, namely transparent
communication and adaptive choice-set generation. We believe that these two
methods can help avoid decision biases that lead to choices which are often
regretted.
|
1404.3666 | Unitary Query for the $M \times L \times N$ MIMO Backscatter RFID
Channel | cs.IT math.IT | A MIMO backscatter RFID system consists of three operational ends: the query
end (with $M$ reader transmitting antennas), the tag end (with $L$ tag
antennas) and the receiving end (with $N$ reader receiving antennas). Such an
$M \times L \times N$ setting in RFID can bring spatial diversity and has been
studied for space-time coding (STC) at the tag end. Current understanding of the query end is that
it is only an energy provider for the tag and query signal designs cannot
improve the performance. However, we propose a novel \textit{unitary query}
scheme, which creates time diversity \emph{within channel coherent time} and
can yield \emph{significant} performance improvements. To overcome the
difficulty of evaluating the performance when the unitary query is employed at
the query end and STC is employed at the tag end, we derive a new measure based
on the ranks of certain carefully constructed matrices. The measure implies
that the unitary query has superior performance. Simulations show that the
unitary query can bring $5-10$ dB gain in mid SNR regimes. In addition, the
unitary query can also improve the performance of single-antenna tags
significantly, allowing the use of low-complexity, small-size single-antenna
tags for high performance. This improvement is unachievable for single-antenna
tags when the conventional uniform query is employed.
|
1404.3675 | On Backdoors To Tractable Constraint Languages | cs.AI cs.CC | In the context of CSPs, a strong backdoor is a subset of variables such that
every complete assignment yields a residual instance guaranteed to have a
specified property. If the property allows efficient solving, then a small
strong backdoor provides a reasonable decomposition of the original instance
into easy instances. An important challenge is the design of algorithms that
can quickly find a small strong backdoor if one exists. We present a systematic
study of the parameterized complexity of backdoor detection when the target
property is a restricted type of constraint language defined by means of a
family of polymorphisms. In particular, we show that under the weak assumption
that the polymorphisms are idempotent, the problem is unlikely to be FPT when
the parameter is either r (the constraint arity) or k (the size of the
backdoor) unless P = NP or FPT = W[2]. When the parameter is k+r, however, we
are able to identify large classes of languages for which the problem of
finding a small backdoor is FPT.
|
1404.3677 | Decoding Delay Controlled Reduction of Completion Time in Instantly
Decodable Network Coding | cs.IT math.IT | For several years, the completion time and the decoding delay problems in
Instantly Decodable Network Coding (IDNC) were considered separately and were
thought to completely act against each other. Recently, some works aimed to
balance the effects of these two important IDNC metrics but none of them
studied a further optimization of one by controlling the other. In this paper,
we study the effect of controlling the decoding delay to reduce the completion
time below its currently best known solution in persistent erasure channels. We
first derive the decoding-delay-dependent expressions of the users' and overall
completion times. Although using such expressions to find the optimal overall
completion time is NP-hard, we design two novel heuristics that minimize the
probability of increasing the maximum of these decoding-delay-dependent
completion time expressions after each transmission through a layered control
of their decoding delays. We then extend our study to the limited feedback
scenario. Simulation results show that our new algorithms achieve both a lower
mean completion time and mean decoding delay compared to the best known
heuristic for completion time reduction. The gap in performance becomes
significant for harsh erasure scenarios.
|
1404.3697 | The configuration multi-edge model: Assessing the effect of fixing node
strengths on weighted network magnitudes | physics.soc-ph cond-mat.stat-mech cs.SI | Complex networks grow subject to structural constraints which affect their
measurable properties. Assessing the effect that such constraints impose on
their observables is thus a crucial aspect to be taken into account in their
analysis. To this end, we examine the effect of fixing the strength sequence in
multi-edge networks on several network observables such as degrees, disparity,
average neighbor properties and weight distribution using an ensemble approach.
We provide a general method to calculate any desired weighted network metric
and we show that several features detected in real data could be explained
solely by structural constraints. We thus justify the need for analytical null
models to be used as a basis for assessing the relevance of features found in real
data represented in weighted network form.
|
1404.3702 | Upgrade of A Robot Workstation for Positioning of Measuring Objects on
CMM | cs.RO | In order to decrease the measuring cycle time on the coordinate measuring
machine (CMM), a robot workstation for the positioning of measuring objects was
created. The application of a simple 5-axis industrial robot enables the
positioning of the objects within the working space of CMM and measuring of
different surfaces on the same object without human intervention. In this
article an upgrade of an existing robot workstation through different design
measures is shown. The main goal of this upgrade is to improve the measuring
accuracy of the complex robot-CMM system.
|
1404.3706 | Using industrial robot to manipulate the measured object in CMM | cs.RO | Coordinate measuring machines (CMMs) are widely used to check dimensions of
manufactured parts, especially in automotive industry. The major obstacles in
automation of these measurements are fixturing and clamping assemblies, which
are required in order to position the measured object within the CMM. This
paper describes how an industrial robot can be used to manipulate the measured
object within the CMM work space, in order to enable automation of complex
geometry measurement.
|
1404.3708 | Inferring Social Status and Rich Club Effects in Enterprise
Communication Networks | cs.SI cs.AI physics.soc-ph | Social status, defined as the relative rank or position that an individual
holds in a social hierarchy, is known to be among the most important motivating
forces in social behaviors. In this paper, we consider the notion of status
from the perspective of a position or title held by a person in an enterprise.
We study the intersection of social status and social networks in an
enterprise. We study whether enterprise communication logs can help reveal how
social interactions and individual status manifest themselves in social
networks. To that end, we use two enterprise datasets with three communication
channels --- voice call, short message, and email --- to demonstrate the
social-behavioral differences among individuals with different status. We have
several interesting findings and based on these findings we also develop a
model to predict social status. On the individual level, high-status
individuals are more likely to span structural holes by linking to
people in parts of the enterprise networks that are otherwise not well
connected to one another. On the community level, the principles of homophily,
social balance, and clique theory generally indicate a "rich club" maintained by
high-status individuals, in the sense that this community is much more
connected, balanced and dense. Our model can predict social status of
individuals with 93% accuracy.
|
1404.3722 | Design of Policy-Aware Differentially Private Algorithms | cs.DB cs.CR | The problem of designing error optimal differentially private algorithms is
well studied. Recent work applying differential privacy to real-world settings
has used variants of differential privacy that appropriately modify the notion
of neighboring databases. The problem of designing error optimal algorithms for
such variants of differential privacy is open. In this paper, we show a novel
transformational equivalence result that can turn the problem of query
answering under differential privacy with a modified notion of neighbors to one
of query answering under standard differential privacy, for a large class of
neighbor definitions.
We utilize the Blowfish privacy framework that generalizes differential
privacy. Blowfish uses a {\em policy graph} to instantiate different notions of
neighboring databases. We show that the error incurred when answering a
workload $\mathbf{W}$ on a database $\mathbf{x}$ under a Blowfish policy graph
$G$ is identical to the error required to answer a transformed workload
$f_G(\mathbf{W})$ on database $g_G(\mathbf{x})$ under standard differential
privacy, where $f_G$ and $g_G$ are linear transformations based on $G$. Using
this result, we develop error efficient algorithms for releasing histograms and
multidimensional range queries under different Blowfish policies. We believe
the tools we develop will be useful for finding mechanisms to answer many other
classes of queries with low error under other policy graphs.
|
1404.3733 | Quantum Information Complexity and Amortized Communication | quant-ph cs.CC cs.IT math.IT | We define a new notion of information cost for quantum protocols, and a
corresponding notion of quantum information complexity for bipartite quantum
channels, and then investigate the properties of such quantities. These are the
fully quantum generalizations of the analogous quantities for bipartite
classical functions that have found many applications recently, in particular
for proving communication complexity lower bounds. Our definition is strongly
tied to the quantum state redistribution task.
Previous attempts have been made to define such a quantity for quantum
protocols, with particular applications in mind; our notion differs from these
in many respects. First, it directly provides a lower bound on the quantum
communication cost, independent of the number of rounds of the underlying
protocol. Secondly, we provide an operational interpretation for quantum
information complexity: we show that it is exactly equal to the amortized
quantum communication complexity of a bipartite channel on a given state. This
generalizes a result of Braverman and Rao to quantum protocols, and even
strengthens the classical result in a bounded round scenario. Also, this
provides an analogue of the Schumacher source compression theorem for
interactive quantum protocols, and answers a question raised by Braverman.
We also discuss some potential applications to quantum communication
complexity lower bounds by specializing our definition for classical functions
and inputs. Building on work of Jain, Radhakrishnan and Sen, we provide new
evidence suggesting that the bounded round quantum communication complexity of
the disjointness function is \Omega (n/M + M), for M-message protocols. This
would match the best known upper bound.
|
1404.3757 | Inheritance patterns in citation networks reveal scientific memes | cs.SI cs.DL physics.soc-ph | Memes are the cultural equivalent of genes that spread across human culture
by means of imitation. What makes a meme and what distinguishes it from other
forms of information, however, is still poorly understood. Our analysis of
memes in the scientific literature reveals that they are governed by a
surprisingly simple relationship between frequency of occurrence and the degree
to which they propagate along the citation graph. We propose a simple
formalization of this pattern and we validate it with data from close to 50
million publication records from the Web of Science, PubMed Central, and the
American Physical Society. Evaluations relying on human annotators, citation
network randomizations, and comparisons with several alternative approaches
confirm that our formula is accurate and effective, without a dependence on
linguistic or ontological knowledge and without the application of arbitrary
thresholds or filters.
|
1404.3759 | Meta-evaluation of comparability metrics using parallel corpora | cs.CL | Metrics for measuring the comparability of corpora or texts need to be
developed and evaluated systematically. Applications based on a corpus, such as
training Statistical MT systems in specialised narrow domains, require finding
a reasonable balance between the size of the corpus and its consistency, with
controlled and benchmarked levels of comparability for any newly added
sections. In this article we propose a method that can meta-evaluate
comparability metrics by calculating monolingual comparability scores
separately on the 'source' and 'target' sides of parallel corpora. The range of
scores on the source side is then correlated (using Pearson's r coefficient)
with the range of 'target' scores; the higher the correlation, the more
reliable the metric. The intuition is that a good metric should yield the
same distance between different domains in different languages. Our method
gives consistent results for the same metrics on different data sets, which
indicates that it is reliable and can be used for metric comparison or for
optimising settings of parametrised metrics.
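The proposed meta-evaluation reduces to a Pearson correlation between per-domain comparability scores computed independently on the two sides of a parallel corpus. A toy sketch (the scores and domains below are hypothetical, not from the paper's data sets):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical comparability scores for the same domain pairs, measured
# separately on the 'source' and 'target' sides of a parallel corpus.
source_scores = [0.91, 0.55, 0.73, 0.32]
target_scores = [0.88, 0.58, 0.70, 0.35]
r = pearson_r(source_scores, target_scores)
print(f"meta-evaluation score r = {r:.3f}")  # close to 1 => reliable metric
```

A metric whose source-side and target-side scores correlate strongly preserves the same inter-domain distances across languages, which is exactly the reliability notion the method formalizes.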
|
1404.3766 | Distributed Approximate Message Passing for Compressed Sensing | cs.DC cs.IT math.IT | In this paper, an efficient distributed approach for implementing the
approximate message passing (AMP) algorithm, named distributed AMP (DAMP), is
developed for compressed sensing (CS) recovery in sensor networks with the
sparsity K unknown. In the proposed DAMP, distributed sensors do not have to
use or know the entire global sensing matrix, and the burden of computation and
storage for each sensor is reduced. To reduce communications among the sensors,
a new data query algorithm, called global computation for AMP (GCAMP), is
proposed. The proposed GCAMP based DAMP approach has exactly the same recovery
solution as the centralized AMP algorithm, which is proved theoretically in the
paper. The performance of the DAMP approach is evaluated in terms of the
communication cost saved by using GCAMP. For comparison purposes, the
thresholding algorithm (TA), a well-known distributed Top-K algorithm, is modified so that
it also leads to the same recovery solution as the centralized AMP. Numerical
results demonstrate that the GCAMP based DAMP outperforms the Modified TA based
DAMP, and reduces the communication cost significantly.
|
1404.3785 | Reducing the Barrier to Entry of Complex Robotic Software: a MoveIt!
Case Study | cs.RO | Developing robot agnostic software frameworks involves synthesizing the
disparate fields of robotic theory and software engineering while
simultaneously accounting for a large variability in hardware designs and
control paradigms. As the capabilities of robotic software frameworks increase,
the setup difficulty and learning curve for new users also increase. If the
entry barriers for configuring and using the software on robots are too high,
even the most powerful of frameworks are useless. A growing need exists in
robotic software engineering to aid users in getting started with, and
customizing, the software framework as necessary for particular robotic
applications. In this paper a case study is presented for the best practices
found for lowering the barrier of entry in the MoveIt! framework, an
open-source tool for mobile manipulation in ROS, that allows users to 1)
quickly get basic motion planning functionality with minimal initial setup, 2)
automate its configuration and optimization, and 3) easily customize its
components. A graphical interface that assists the user in configuring MoveIt!
is the cornerstone of our approach, coupled with the use of an existing
standardized robot model for input, automatically generated robot-specific
configuration files, and a plugin-based architecture for extensibility. These
best practices are summarized into a set of barrier-to-entry design principles
applicable to other robotic software. The approaches for lowering the entry
barrier are evaluated by usage statistics, a user survey, and compared against
our design objectives for their effectiveness to users.
|
1404.3788 | Data Modeling with Large Random Matrices in a Cognitive Radio Network
Testbed: Initial Experimental Demonstrations with 70 Nodes | cs.IT math.IT | This short paper reports some initial experimental demonstrations of the
theoretical framework: the massive amount of data in the large-scale cognitive
radio network can be naturally modeled as (large) random matrices. In
particular, using experimental data we will demonstrate that the empirical
spectral distribution of the large sample covariance matrix---a Hermitian
random matrix---agrees with its theoretical distribution (the Marchenko-Pastur law).
On the other hand, the eigenvalues of the large data matrix ---a non-Hermitian
random matrix---are experimentally found to follow the single ring law, a
theoretical result that has been discovered relatively recently. To our best
knowledge, our paper is the first such attempt, in the context of large-scale
wireless network, to compare theoretical predictions with experimental
findings.
|
1404.3808 | Robust Dynamic State Feedback Guaranteed Cost Control of Nonlinear
Systems using Copies of Plant Nonlinearities | cs.SY | This paper presents a systematic approach to the design of a robust dynamic
state feedback controller using copies of the plant nonlinearities, which is
based on the use of integral quadratic constraints (IQCs) and minimax LQR control. The approach combines a
linear state feedback guaranteed cost controller and copies of the plant
nonlinearities to form a robust nonlinear controller.
|
1404.3811 | A strong restricted isometry property, with an application to phaseless
compressed sensing | cs.IT math.IT math.NA | The many variants of the restricted isometry property (RIP) have proven to be
crucial theoretical tools in the fields of compressed sensing and matrix
completion. The study of extending compressed sensing to accommodate phaseless
measurements naturally motivates a strong notion of restricted isometry
property (SRIP), which we develop in this paper. We show that if $A \in
\mathbb{R}^{m\times n}$ satisfies SRIP and phaseless measurements $|Ax_0| = b$
are observed about a $k$-sparse signal $x_0 \in \mathbb{R}^n$, then minimizing
the $\ell_1$ norm subject to $ |Ax| = b $ recovers $x_0$ up to multiplication
by a global sign. Moreover, we establish that the SRIP holds for the random
Gaussian matrices typically used for standard compressed sensing, implying that
phaseless compressed sensing is possible from $O(k \log (n/k))$ measurements
with these matrices via $\ell_1$ minimization over $|Ax| = b$. Our analysis
also yields an erasure robust version of the Johnson-Lindenstrauss Lemma.
|
1404.3839 | Towards Understanding Cyberbullying Behavior in a Semi-Anonymous Social
Network | cs.SI physics.soc-ph | Cyberbullying has emerged as an important and growing social problem, wherein
people use online social networks and mobile phones to bully victims with
offensive text, images, audio and video on a 24/7 basis. This paper studies
negative user behavior in the Ask.fm social network, a popular new site that
has led to many cases of cyberbullying, some leading to suicidal behavior. We
examine the occurrence of negative words in Ask.fm's question+answer profiles
along with the social network of likes of questions+answers. We also examine
properties of users with cutting behavior in this social network.
|
1404.3840 | Surpassing Human-Level Face Verification Performance on LFW with
GaussianFace | cs.CV cs.LG stat.ML | Face verification remains a challenging problem in very complex conditions
with large variations such as pose, illumination, expression, and occlusions.
This problem is exacerbated when we rely unrealistically on a single training
data source, which is often insufficient to cover the intrinsically complex
face variations. This paper proposes a principled multi-task learning approach
based on Discriminative Gaussian Process Latent Variable Model, named
GaussianFace, to enrich the diversity of training data. In comparison to
existing methods, our model exploits additional data from multiple
source-domains to improve the generalization performance of face verification
in an unknown target-domain. Importantly, our model can adapt automatically to
complex data distributions, and therefore can well capture complex face
variations inherent in multiple sources. Extensive experiments demonstrate the
effectiveness of the proposed model in learning from diverse data sources and
generalizing to unseen domains. Specifically, our algorithm achieves an
accuracy of 98.52% on the well-known and
challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the
human-level performance in face verification (97.53%) on LFW is surpassed.
|
1404.3862 | Optimizing the CVaR via Sampling | stat.ML cs.AI cs.LG | Conditional Value at Risk (CVaR) is a prominent risk measure that is being
used extensively in various domains. We develop a new formula for the gradient
of the CVaR in the form of a conditional expectation. Based on this formula, we
propose a novel sampling-based estimator for the CVaR gradient, in the spirit
of the likelihood-ratio method. We analyze the bias of the estimator, and prove
the convergence of a corresponding stochastic gradient descent algorithm to a
local CVaR optimum. Our method makes it possible to consider CVaR optimization in new
domains. As an example, we consider a reinforcement learning application, and
learn a risk-sensitive controller for the game of Tetris.
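The paper's gradient formula is not reproduced in the abstract, but the sampling-based flavor of the approach can be illustrated with the standard empirical CVaR estimator (a sketch of the risk measure itself, not the authors' gradient estimator): for tail level alpha, CVaR is estimated as the mean of the worst alpha-fraction of sampled losses.

```python
def empirical_cvar(losses, alpha):
    """Estimate CVaR_alpha: the expected loss in the worst alpha tail.

    losses: sampled loss values; alpha: tail probability in (0, 1].
    """
    losses = sorted(losses, reverse=True)        # worst losses first
    k = max(1, int(round(alpha * len(losses))))  # tail sample count
    return sum(losses[:k]) / k

samples = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(empirical_cvar(samples, 0.2))  # mean of the two worst losses: 9.5
```

A likelihood-ratio gradient estimator of the kind the paper proposes would reweight such tail samples by the score function of the sampling policy.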
|
1404.3881 | Collision Tolerant Packet Scheduling for Underwater Acoustic
Localization | cs.IT cs.NI math.IT | This article considers the joint problem of packet scheduling and
self-localization in an underwater acoustic sensor network where sensor nodes
are distributed randomly in an operating area. In terms of packet scheduling,
our goal is to minimize the localization time, and to do so we consider two
packet transmission schemes, namely a collision-free scheme (CFS), and a
collision-tolerant scheme (CTS). The required localization time is formulated
for these schemes, and through analytical results and numerical examples their
performances are shown to be generally comparable. However, when the packet
duration is short (as is the case for a localization packet), and the operating
area is large (above 3km in at least one dimension), the collision-tolerant
scheme requires a smaller localization time than the collision-free scheme.
After gathering enough measurements, an iterative Gauss-Newton algorithm is
employed by each sensor node for self-localization, and the Cramér-Rao lower
bound is evaluated as a benchmark. Although CTS consumes more energy for packet
transmission, it provides a better localization accuracy. Additionally, in this
scheme the anchor nodes work independently of each other, and can operate
asynchronously which leads to a simplified implementation.
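To make the Gauss-Newton self-localization step concrete, here is a minimal 2-D sketch with noiseless ranges to known anchors (the geometry, initial guess, and iteration count are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def gauss_newton_localize(anchors, ranges, x0, iters=20):
    """Estimate a 2-D position from range measurements to known anchors.

    anchors: (m, 2) anchor positions; ranges: (m,) measured distances;
    x0: initial position guess.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                    # (m, 2)
        d = np.linalg.norm(diff, axis=1)      # predicted ranges
        r = d - ranges                        # residuals
        J = diff / d[:, None]                 # Jacobian of d w.r.t. x
        # Gauss-Newton step: least-squares solve J dx ~= r, then x <- x - dx
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x - dx
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # noiseless measurements
est = gauss_newton_localize(anchors, ranges, x0=[5.0, 5.0])
print(est)  # converges to (3, 4) with noiseless ranges
```

With noisy ranges the same iteration returns the nonlinear least-squares estimate, whose accuracy the paper benchmarks against the Cramér-Rao lower bound.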
|
1404.3884 | Performance Analysis and Coherent Guaranteed Cost Control for Uncertain
Quantum Systems | quant-ph cs.SY | This paper presents several results on performance analysis for a class of
uncertain linear quantum systems subject to either quadratic or non-quadratic
perturbations in the system Hamiltonian. Also, coherent guaranteed cost
controllers are designed for the uncertain quantum systems to achieve improved
control performance. The coherent controller is realized by adding a control
Hamiltonian to the quantum system and its performance is demonstrated by an
example.
|
1404.3905 | Tensor completion in hierarchical tensor representations | math.NA cs.IT math.IT | Compressed sensing extends from the recovery of sparse vectors from
undersampled measurements via efficient algorithms to the recovery of matrices
of low rank from incomplete information. Here we consider a further extension
to the reconstruction of tensors of low multi-linear rank in recently
introduced hierarchical tensor formats from a small number of measurements.
Hierarchical tensors are a flexible generalization of the well-known Tucker
representation, which have the advantage that the number of degrees of freedom
of a low rank tensor does not scale exponentially with the order of the tensor.
While corresponding tensor decompositions can be computed efficiently via
successive applications of (matrix) singular value decompositions, some
important properties of the singular value decomposition do not extend from the
matrix to the tensor case. This results in major computational and theoretical
difficulties in designing and analyzing algorithms for low rank tensor
recovery. For instance, a canonical analogue of the tensor nuclear norm is
NP-hard to compute in general, which is in stark contrast to the matrix case.
In this book chapter we consider versions of iterative hard thresholding
schemes adapted to hierarchical tensor formats. A variant builds on methods
from Riemannian optimization and uses a retraction mapping from the tangent
space of the manifold of low rank tensors back to this manifold. We provide
first partial convergence results based on a tensor version of the restricted
isometry property (TRIP) of the measurement map. Moreover, an estimate of the
number of measurements is provided that ensures the TRIP of a given tensor rank
with high probability for Gaussian measurement maps.
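The SVD-based projection underlying such iterative hard thresholding schemes can be sketched in the matrix case; the hierarchical tensor variants apply this kind of truncation to successive matricizations. This is an illustrative building block, not the chapter's algorithm:

```python
import numpy as np

def hard_threshold_rank(X, r):
    """Project a matrix onto the set of matrices of rank at most r
    via truncated SVD (the best rank-r approximation in Frobenius norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
# A rank-2 matrix plus tiny perturbation; truncation recovers the low-rank part.
L = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
X2 = hard_threshold_rank(L + 1e-8 * rng.standard_normal((6, 5)), 2)
print(np.linalg.matrix_rank(X2))  # 2: the projection has rank at most 2
```

An iterative hard thresholding scheme alternates a gradient step on the measurement residual with this projection; in the Riemannian variant the projection is replaced by a retraction onto the low-rank manifold.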
|
1404.3925 | Complexity of Grammar Induction for Quantum Types | cs.CL math.CT | Most categorical models of meaning use a functor from the syntactic category
to the semantic category. When semantic information is available, the problem
of grammar induction can therefore be defined as finding preimages of the
semantic types under this forgetful functor, lifting the information flow from
the semantic level to a valid reduction at the syntactic level. We study the
complexity of grammar induction, and show that for a variety of type systems,
including pivotal and compact closed categories, the grammar induction problem
is NP-complete. Our approach could be extended to linguistic type systems such
as autonomous or bi-closed categories.
|
1404.3933 | Scalable Matting: A Sub-linear Approach | cs.CV | Natural image matting, which separates foreground from background, is a very
important intermediate step in recent computer vision algorithms. However, it
is severely underconstrained and difficult to solve. State-of-the-art
approaches include matting by graph Laplacian, which significantly improves the
underconstrained nature by reducing the solution space. However, matting by
graph Laplacian is still very difficult to solve and gets much harder as the
image size grows: current iterative methods slow down as $\mathcal{O}\left(n^2
\right)$ in the resolution $n$. This creates uncomfortable practical limits on
the resolution of images that we can matte. The current literature mitigates the
problem, but existing methods all remain super-linear in complexity. We expose properties
of the problem that remain heretofore unexploited, demonstrating that an
optimization technique originally intended to solve PDEs can be adapted to take
advantage of this knowledge to solve the matting problem, not heuristically,
but exactly and with sub-linear complexity. This makes ours the most efficient
matting solver currently known by a very wide margin, and finally allows matting
to be practical and scalable in the future as consumer photos exceed many
dozens of megapixels, and also relieves matting from being a bottleneck for
vision algorithms that depend on it.
|
1404.3945 | A Game Theoretic Approach to Minimize the Completion Time of Network
Coded Cooperative Data Exchange | cs.IT cs.GT math.IT | In this paper, we introduce a game theoretic framework for studying the
problem of minimizing the completion time of instantly decodable network coding
(IDNC) for cooperative data exchange (CDE) in decentralized wireless networks.
In this configuration, clients cooperate with each other to recover the erased
packets without a central controller. Game theory is employed herein as a tool
for improving the distributed solution by overcoming the need for a central
controller or additional signaling in the system. We model the session by
self-interested players in a non-cooperative potential game. The utility
function is designed such that increasing individual payoff results in a
collective behavior achieving both a desirable system performance in a shared
network environment and the Pareto optimal solution. Through extensive
simulations, our approach is compared to the best performance that could be
found in the conventional point-to-multipoint (PMP) recovery process. Numerical
results show that our formulation largely outperforms the conventional PMP
scheme in most practical situations and achieves a lower delay.
|
1404.3959 | Is it morally acceptable for a system to lie to persuade me? | cs.CY cs.CL | Given the fast rise of increasingly autonomous artificial agents and robots,
a key acceptability criterion will be the possible moral implications of their
actions. In particular, intelligent persuasive systems (systems designed to
influence humans via communication) constitute a highly sensitive topic because
of their intrinsically social nature. Still, ethical studies in this area are
rare and tend to focus on the output of the required action. Instead, this work
focuses on the persuasive acts themselves (e.g. "is it morally acceptable that
a machine lies or appeals to the emotions of a person to persuade her, even if
for a good end?"). Exploiting a behavioral approach, based on human assessment
of moral dilemmas -- i.e. without any prior assumption of underlying ethical
theories -- this paper reports on a set of experiments. These experiments
address the type of persuader (human or machine), the strategies adopted
(purely argumentative, appeal to positive emotions, appeal to negative
emotions, lie) and the circumstances. Findings display no differences due to
the agent, mild acceptability for persuasion and reveal that truth-conditional
reasoning (i.e. argument validity) is a significant dimension affecting
subjects' judgment. Some implications for the design of intelligent persuasive
systems are discussed.
|
1404.3984 | Nonparametric Infinite Horizon Kullback-Leibler Stochastic Control | cs.SY | We present two nonparametric approaches to Kullback-Leibler (KL) control, or
the linearly-solvable Markov decision problem (LMDP), based on Gaussian processes
(GPs) and Nystr\"{o}m approximation. Compared to recently developed parametric
methods, the proposed data-driven frameworks feature accurate function
approximation and efficient on-line operations. Theoretically, we derive the
mathematical connection of KL control based on dynamic programming with earlier
work in control theory which relies on information theoretic dualities for the
infinite time horizon case. Algorithmically, we give explicit optimal control
policies in nonparametric forms, and propose on-line update schemes with
budgeted computational costs. Numerical results demonstrate the effectiveness
and usefulness of the proposed frameworks.
|
1404.3991 | Spiralet Sparse Representation | cs.CV | This is the first report on Working Paper WP-RFM-14-01. The potential and
capability of sparse representations are well known. However, their
(multivariate variable) vectorial form, which is completely fine in many fields
and disciplines, results in removal and filtering of important "spatial"
relations that are implicitly carried by two-dimensional [or multi-dimensional]
objects, such as images. In this paper, a new approach, called spiralet sparse
representation, is proposed in order to develop an augmented representation and
therefore a modified sparse representation and theory, which is capable of
preserving the data associated with the spatial relations.
|
1404.3992 | Assessing the Quality of MT Systems for Hindi to English Translation | cs.CL | Evaluation plays a vital role in checking the quality of MT output. It is
done either manually or automatically. Manual evaluation is very time consuming
and subjective, hence automatic metrics are used most of the time. This
paper evaluates the translation quality of different MT engines for
Hindi-English translation (Hindi data is provided as input and English is
obtained as output) using various automatic metrics such as BLEU and METEOR.
Further, a comparison of the automatic evaluation results with human rankings
is also given.
|
1404.3997 | Lossless Coding of Correlated Sources with Actions | cs.IT math.IT | This work studies the problem of distributed compression of correlated
sources with an action-dependent joint distribution. This class of problems is,
in fact, an extension of the Slepian-Wolf model, but where cost-constrained
actions taken by the encoder or the decoder affect the generation of one of the
sources. The purpose of this work is to study the implications of actions on
the achievable rates.
In particular, two cases where transmission occurs over a rate-limited link
are studied; case A for actions taken at the decoder and case B where actions
are taken at the encoder. A complete single-letter characterization of the set
of achievable rates is given in both cases. Furthermore, a network coding setup
is investigated for the case where actions are taken at the encoder. The
sources are generated at different nodes of the network and are required at a
set of terminal nodes, yet transmission occurs over a general, acyclic,
directed network. For this setup, generalized cut-set bounds are derived, and a
full characterization of the set of achievable rates using single-letter
expressions is provided. For this scenario, random linear network coding is
proved to be optimal, even though this is not a classical multicast problem.
Additionally, two binary examples are investigated and demonstrate how actions
taken at different nodes of the system have a significant effect on the
achievable rate region in comparison to a naive time-sharing strategy.
|
1404.4032 | Recovery of Coherent Data via Low-Rank Dictionary Pursuit | stat.ME cs.IT cs.LG math.IT math.ST stat.TH | The recently established RPCA method provides us a convenient way to restore
low-rank matrices from grossly corrupted observations. While elegant in theory
and powerful in reality, RPCA may not be an ultimate solution to the low-rank
matrix recovery problem. Indeed, its performance may not be perfect even when
data are strictly low-rank. This is because conventional RPCA ignores the
clustering structures of the data, which are ubiquitous in modern applications.
As the number of clusters grows, the coherence of the data keeps increasing, and
accordingly, the recovery performance of RPCA degrades. We show that the
challenges raised by coherent data (i.e., the data with high coherence) could
be alleviated by Low-Rank Representation (LRR), provided that the dictionary in
LRR is configured appropriately. More precisely, we mathematically prove that
if the dictionary itself is low-rank then LRR is immune to the coherence
parameter which increases with the underlying cluster number. This provides an
elementary principle for dealing with coherent data. Subsequently, we devise a
practical algorithm to obtain proper dictionaries in unsupervised environments.
Our extensive experiments on randomly generated matrices verify our claims.
|
1404.4038 | Discovering and Exploiting Entailment Relationships in Multi-Label
Learning | cs.LG | This work presents a sound probabilistic method for enforcing adherence of
the marginal probabilities of a multi-label model to automatically discovered
deterministic relationships among labels. In particular we focus on discovering
two kinds of relationships among the labels. The first one concerns pairwise
positive entailment: pairs of labels where the presence of one implies the
presence of the other in all instances of a dataset. The second concerns
exclusion: sets of labels that do not coexist in the same instances of the
dataset. These relationships are represented with a Bayesian network. Marginal
probabilities are entered as soft evidence in the network and adjusted through
probabilistic inference. Our approach offers robust improvements in mean
average precision compared to the standard binary relevance approach across all
12 datasets involved in our experiments. The discovery process helps
interesting implicit knowledge to emerge, which could be useful in itself.
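The two discovery steps described in the abstract can be sketched on a toy binary label matrix (a minimal illustration; the function names and toy data are assumptions, not the paper's implementation):

```python
# Hypothetical sketch of discovering label relationships from a binary
# label matrix: rows are instances, columns are labels.

def find_entailments(Y, labels):
    """Pairs (a, b) where presence of a implies presence of b in every row."""
    pairs = []
    for i, a in enumerate(labels):
        for j, b in enumerate(labels):
            if i != j and all(row[j] == 1 for row in Y if row[i] == 1):
                pairs.append((a, b))
    return pairs

def find_exclusions(Y, labels):
    """Pairs of labels that never co-occur in the same row."""
    pairs = []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if not any(row[i] == 1 and row[j] == 1 for row in Y):
                pairs.append((labels[i], labels[j]))
    return pairs

# Toy dataset with labels A, B, C.
Y = [[1, 1, 0],
     [0, 1, 0],
     [1, 1, 0],
     [0, 0, 1]]
labels = ["A", "B", "C"]

print(find_entailments(Y, labels))  # pairs (a, b) with "a implies b"
print(find_exclusions(Y, labels))   # pairs that never co-occur
```

In the paper these discovered relationships are then encoded in a Bayesian network; the brute-force scan above only illustrates the discovery step.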
|
1404.4067 | An effective AHP-based metaheuristic approach to solve supplier
selection problem | cs.NE | The supplier selection problem, which consists of selecting the best supplier
from a group of pre-specified candidates, is a Multi-Criteria Decision Making
(MCDM) problem involving both qualitative and quantitative attributes. A
fundamental issue is to achieve a trade-off between such quantifiable and
unquantifiable attributes so as to reach the best solution. This article
portrays a metaheuristic-based optimization model to solve this NP-complete
problem. Initially the Analytic Hierarchy Process (AHP) is implemented to
generate an initial feasible solution of the problem. Thereafter a Simulated
Annealing (SA) algorithm is exploited to improve the quality of the obtained
solution. The Taguchi robust design method is then employed to address the
critical issue of parameter selection for the SA technique. In order
to verify the proposed methodology, numerical results based on real industry
data are presented.
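The two-stage idea of starting from an initial feasible solution (the AHP step) and refining it with simulated annealing can be sketched as follows (a hedged illustration on a made-up cost vector; the problem encoding, parameters, and names are assumptions, not the paper's model):

```python
import math
import random

def simulated_annealing(cost, initial, neighbor, T0=1.0, cooling=0.95,
                        steps=200, seed=0):
    """Refine an initial solution by accepting worse moves with a
    temperature-dependent probability (Metropolis criterion)."""
    rng = random.Random(seed)
    current, best = initial, initial
    T = T0
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = cost(cand) - cost(current)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            current = cand
        if cost(current) < cost(best):
            best = current
        T *= cooling  # geometric cooling schedule
    return best

# Toy supplier-selection stand-in: each index is a candidate supplier,
# lower cost is better; index 0 stands in for the AHP initial solution.
costs = [7.0, 3.0, 5.0, 1.0, 6.0]
cost = lambda i: costs[i]
neighbor = lambda i, rng: rng.randrange(len(costs))

best = simulated_annealing(cost, initial=0, neighbor=neighbor)
print(best)
```

In the paper the SA parameters themselves are tuned with the Taguchi robust design method; here they are simply fixed for illustration.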
|
1404.4078 | Modeling Massive Amount of Experimental Data with Large Random Matrices
in a Real-Time UWB-MIMO System | cs.IT math.IT | The aim of this paper is to study data modeling for massive datasets. Large
random matrices are used to model the massive amount of data collected from our
experimental testbed. This testbed was developed for a real-time
ultra-wideband, multiple-input multiple-output (UWB-MIMO) system. The
empirical spectral density is the relevant information we seek. Treating this
UWB-MIMO system as a black box, we aim to model its output as a large
statistical system whose outputs can be described by (large) random matrices.
This model is general enough to allow for the study of non-linear and
non-Gaussian phenomena. The good agreement between the theoretical
predictions and the empirical findings validates the correctness of our
suggested data model.
|
1404.4088 | Ensemble Classifiers and Their Applications: A Review | cs.LG | An ensemble classifier refers to a group of individual classifiers that are
cooperatively trained on a data set in a supervised classification problem. In
this paper we present a review of commonly used ensemble classifiers in the
literature. Some ensemble classifiers have also been developed targeting
specific applications; we present such application-driven ensemble classifiers
as well.
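The basic idea behind an ensemble classifier can be sketched with a minimal majority-vote combiner (the toy "classifiers" below are hand-written functions standing in for trained models; names and data are illustrative):

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine individual classifiers by taking the most common prediction."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three toy "classifiers" that disagree on some inputs.
clf1 = lambda x: "pos" if x > 0 else "neg"
clf2 = lambda x: "pos" if x > 1 else "neg"
clf3 = lambda x: "pos" if x > -1 else "neg"

print(majority_vote([clf1, clf2, clf3], 0.5))  # two of three vote "pos"
```

Voting is only the simplest combination rule; the surveyed literature also covers bagging, boosting, and stacking, which differ in how the individual classifiers are trained and weighted.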
|
1404.4089 | On the Role of Canonicity in Bottom-up Knowledge Compilation | cs.AI | We consider the problem of bottom-up compilation of knowledge bases, which is
usually predicated on the existence of a polytime function for combining
compilations using Boolean operators (usually called an Apply function). While
such a polytime Apply function is known to exist for certain languages (e.g.,
OBDDs) and not exist for others (e.g., DNNF), its existence for certain
languages remains unknown. Among the latter is the recently introduced language
of Sentential Decision Diagrams (SDDs), for which a polytime Apply function
exists for unreduced SDDs, but remains unknown for reduced ones (i.e. canonical
SDDs). We resolve this open question in this paper and consider some of its
theoretical and practical implications. Some of the findings we report question
the common wisdom on the relationship between bottom-up compilation, language
canonicity and the complexity of the Apply function.
|
1404.4095 | Multi-borders classification | stat.ML cs.LG | The number of possible methods of generalizing binary classification to
multi-class classification increases exponentially with the number of class
labels. Often, the best method of doing so will be highly problem dependent.
Here we present classification software in which the partitioning of
multi-class classification problems into binary classification problems is
specified using a recursive control language.
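The recursive partitioning of a multi-class problem into binary ones can be sketched as building a binary tree over class labels, where each internal node corresponds to one binary classifier (a hard-coded halving split stands in for the paper's control language; names are illustrative):

```python
def partition(classes):
    """Split a list of class labels into a binary tree of label groups;
    each internal node would be handled by one binary classifier."""
    if len(classes) <= 1:
        return classes
    mid = len(classes) // 2
    return [partition(classes[:mid]), partition(classes[mid:])]

tree = partition(["a", "b", "c", "d"])
print(tree)
```

A recursive control language generalizes this: instead of always halving, any nested grouping of the labels can be specified, changing which binary problems get solved.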
|
1404.4104 | Sparse Bilinear Logistic Regression | math.OC cs.CV cs.LG | In this paper, we introduce the concept of sparse bilinear logistic
regression for decision problems involving explanatory variables that are
two-dimensional matrices. Such problems are common in computer vision,
brain-computer interfaces, style/content factorization, and parallel factor
analysis. The underlying optimization problem is bi-convex; we study its
solution and develop an efficient algorithm based on block coordinate descent.
We provide a theoretical guarantee for global convergence and estimate the
asymptotic convergence rate using the Kurdyka-{\L}ojasiewicz inequality. A
range of experiments with simulated and real data demonstrate that sparse
bilinear logistic regression outperforms current techniques in several
important applications.
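The bilinear predictor at the heart of the model can be sketched directly: for a matrix input X, the logit is u^T X v with vector parameters u and v (a hedged illustration; the sparsity penalties and block-coordinate-descent solver are omitted, and all names are assumptions):

```python
import math

def bilinear_logit(u, X, v):
    """Compute the bilinear form u^T X v for a matrix-valued input X."""
    return sum(u[i] * X[i][j] * v[j]
               for i in range(len(u)) for j in range(len(v)))

def predict_proba(u, X, v):
    """Logistic probability for the bilinear logit."""
    return 1.0 / (1.0 + math.exp(-bilinear_logit(u, X, v)))

X = [[1.0, 0.0],
     [0.0, 2.0]]
p = predict_proba(u=[1.0, 0.0], X=X, v=[1.0, 0.0])
print(p)  # logit = u^T X v = 1.0
```

The bi-convexity mentioned in the abstract is visible here: the logit is linear in u for fixed v and linear in v for fixed u, which is what makes block coordinate descent natural.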
|
1404.4105 | Sparse Compositional Metric Learning | cs.LG cs.AI stat.ML | We propose a new approach for metric learning by framing it as learning a
sparse combination of locally discriminative metrics that are inexpensive to
generate from the training data. This flexible framework allows us to naturally
derive formulations for global, multi-task and local metric learning. The
resulting algorithms have several advantages over existing methods in the
literature: a much smaller number of parameters to be estimated and a
principled way to generalize learned metrics to new testing data points. To
analyze the approach theoretically, we derive a generalization bound that
justifies the sparse combination. Empirically, we evaluate our algorithms on
several datasets against state-of-the-art metric learning methods. The results
are consistent with our theoretical findings and demonstrate the superiority of
our approach in terms of classification performance and scalability.
|
1404.4108 | Representation as a Service | cs.LG | Consider a Machine Learning Service Provider (MLSP) designed to rapidly
create highly accurate learners for a never-ending stream of new tasks. The
challenge is to produce task-specific learners that can be trained from few
labeled samples, even if tasks are not uniquely identified, and the number of
tasks and input dimensionality are large. In this paper, we argue that the MLSP
should exploit knowledge from previous tasks to build a good representation of
the environment it is in, and more precisely, that useful representations for
such a service are ones that minimize generalization error for a new hypothesis
trained on a new task. We formalize this intuition with a novel method that
minimizes an empirical proxy of the intra-task small-sample generalization
error. We present several empirical results showing state-of-the-art
performance on single-task transfer, multitask learning, and the full lifelong
learning problem.
|
1404.4114 | Structured Stochastic Variational Inference | cs.LG | Stochastic variational inference makes it possible to approximate posterior
distributions induced by large datasets quickly using stochastic optimization.
The algorithm relies on the use of fully factorized variational distributions.
However, this "mean-field" independence approximation limits the fidelity of
the posterior approximation, and introduces local optima. We show how to relax
the mean-field approximation to allow arbitrary dependencies between global
parameters and local hidden variables, producing better parameter estimates by
reducing bias, sensitivity to local optima, and sensitivity to hyperparameters.
|
1404.4120 | Harvest-Then-Cooperate: Wireless-Powered Cooperative Communications | cs.IT math.IT | In this paper, we consider a wireless-powered cooperative communication
network consisting of one hybrid access-point (AP), one source, and one relay.
In contrast to conventional cooperative networks, the source and relay in the
considered network have no embedded energy supply. They need to rely on the
energy harvested from the signals broadcasted by the AP for their cooperative
information transmission. Based on this three-node reference model, we propose
a harvest-then-cooperate (HTC) protocol, in which the source and relay harvest
energy from the AP in the downlink and work cooperatively in the uplink for the
source's information transmission. Considering a delay-limited transmission
mode, the approximate closed-form expression for the average throughput of the
proposed protocol is derived over Rayleigh fading channels. Subsequently, this
analysis is extended to the multi-relay scenario, where the approximate
throughput of the HTC protocol with two popular relay selection schemes is
derived. The asymptotic analyses for the throughput performance of the
considered schemes at high signal-to-noise ratio are also provided. All
theoretical results are validated by numerical simulations. The impacts of the
system parameters, such as time allocation, relay number, and relay position,
on the throughput performance are extensively investigated.
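A heavily simplified Monte Carlo of the harvest-then-transmit idea over Rayleigh fading can be sketched as follows. This is not the paper's three-node source-relay model: it is a single-hop stand-in with made-up parameters (harvesting efficiency eta, time split tau, target rate), shown only to illustrate how delay-limited throughput couples the time allocation to the outage probability:

```python
import math
import random

def sim_throughput(tau=0.5, eta=0.6, P=1.0, noise=0.01, rate=1.0,
                   trials=5000, seed=1):
    """Delay-limited throughput (1 - tau) * rate * (1 - outage prob.),
    estimated by Monte Carlo over Rayleigh fading."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        h = rng.expovariate(1.0)          # |h|^2 is exponential for Rayleigh
        energy = eta * P * h * tau        # harvested during the downlink phase
        snr = (energy / (1 - tau)) * h / noise
        if math.log2(1 + snr) >= rate:    # no outage at the target rate
            ok += 1
    return (1 - tau) * rate * ok / trials

print(sim_throughput())
```

Increasing tau harvests more energy but leaves less time for the uplink, which is why the time allocation studied in the paper matters for throughput.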
|
1404.4157 | Phase Precoding for the Compute-and-Forward Protocol | cs.IT math.IT | Compute-and-forward (CoF) is a relaying protocol that uses algebraic
structured codes to harness the interference and remove the noise in wireless
networks. We propose the use of phase precoders at the transmitters of a
network, where relays apply CoF strategy. We define the {\em phase precoded
computation rate} and show that it is greater than the original computation
rate of the CoF protocol. We further give a new low-complexity method for finding
network equations. We finally show that the proposed precoding scheme increases
the degrees-of-freedom (DoF) of the CoF protocol. This overcomes the limitations on
the DoF of the CoF protocol, recently presented by Niesen and Whiting. Using
tools from Diophantine approximation and algebraic geometry, we prove the
existence of a phase precoder that approaches the maximum DoF when the number
of transmitters tends to infinity.
|
1404.4163 | Multiplicative weights in monotropic games | cs.GT cs.MA math.OC | We introduce a new class of population games that we call monotropic; these
are games characterized by the presence of a unique globally neutrally stable
Nash equilibrium. Monotropic games generalize strictly concave potential games
and zero sum games with a unique minimax solution. Within the class of
monotropic games, we study a multiplicative weights dynamic. We show that,
depending on a parameter called the learning rate, the multiplicative weights
dynamic converges globally from the interior to the unique equilibrium of
monotropic games, but may also induce chaotic behavior if the learning rate is
not carefully chosen.
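One step of a discrete-time multiplicative-weights dynamic can be sketched as follows (a minimal illustration of the update rule itself; the payoff vector, learning rate eta, and function names are assumptions, not the paper's notation):

```python
import math

def mw_step(x, payoffs, eta):
    """One multiplicative-weights update of a population state x over
    strategies: reweight by exp(eta * payoff), then renormalize."""
    weights = [xi * math.exp(eta * p) for xi, p in zip(x, payoffs)]
    total = sum(weights)
    return [w / total for w in weights]

x = [0.5, 0.5]
x2 = mw_step(x, payoffs=[1.0, 0.0], eta=0.1)
print(x2)  # mass shifts toward the higher-payoff strategy
```

The learning-rate caveat from the abstract applies to iterating this map: small eta can converge to the equilibrium, while large eta can overshoot and behave chaotically.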
|
1404.4164 | Time-Frequency Packing for High Capacity Coherent Optical Links | cs.IT math.IT | We consider realistic long-haul optical links, with linear and nonlinear
impairments, and investigate the application of time-frequency packing with
low-order constellations as a possible solution to increase the spectral
efficiency. A detailed comparison with available techniques from the literature
is also performed. We show that this technique represents a feasible
solution to overcome the relevant theoretical and technological issues related
to this spectral efficiency increase and could be more effective than the
simple adoption of high-order modulation formats.
|
1404.4171 | Dropout Training for Support Vector Machines | cs.LG | Dropout and other feature noising schemes have shown promising results in
controlling over-fitting by artificially corrupting the training data. Though
extensive theoretical and empirical studies have been performed for generalized
linear models, little work has been done for support vector machines (SVMs),
one of the most successful approaches for supervised learning. This paper
presents dropout training for linear SVMs. To deal with the intractable
expectation of the non-smooth hinge loss under corrupting distributions, we
develop an iteratively re-weighted least square (IRLS) algorithm by exploring
data augmentation techniques. Our algorithm iteratively minimizes the
expectation of a re-weighted least square problem, where the re-weights have
closed-form solutions. Similar ideas are applied to develop a new IRLS
algorithm for the expected logistic loss under corrupting distributions. Our
algorithms offer insights on the connection and difference between the hinge
loss and logistic loss in dropout training. Empirical results on several real
datasets demonstrate the effectiveness of dropout training on significantly
boosting the classification accuracy of linear SVMs.
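The quantity that makes this problem hard, the expected hinge loss under dropout corruption, can be approximated by brute force (a naive Monte Carlo sketch for a linear model; the paper's IRLS algorithm handles this expectation analytically, and all names and data here are illustrative):

```python
import random

def expected_hinge(w, x, y, p_drop=0.5, n_samples=2000, seed=0):
    """Monte Carlo estimate of E[max(0, 1 - y * w . x~)] where x~ is x
    under dropout corruption with inverse scaling (unbiased features)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        xt = [xi * (0.0 if rng.random() < p_drop else 1.0 / (1 - p_drop))
              for xi in x]
        margin = y * sum(wi * xti for wi, xti in zip(w, xt))
        total += max(0.0, 1.0 - margin)
    return total / n_samples

loss = expected_hinge(w=[1.0, -1.0], x=[2.0, 0.5], y=1)
print(loss)
```

Even when the clean margin is comfortably above 1, the corruption pushes some samples below the margin, so the expected hinge loss is positive; this is the regularization effect that dropout training exploits.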
|
1404.4175 | MEG Decoding Across Subjects | stat.ML cs.LG q-bio.NC | Brain decoding is a data analysis paradigm for neuroimaging experiments that
is based on predicting the stimulus presented to the subject from the
concurrent brain activity. In order to make inference at the group level, a
straightforward but sometimes unsuccessful approach is to train a classifier on
the trials of a group of subjects and then to test it on unseen trials from new
subjects. The extreme difficulty is related to the structural and functional
variability across the subjects. We call this approach "decoding across
subjects". In this work, we address the problem of decoding across subjects for
magnetoencephalographic (MEG) experiments and we provide the following
contributions: first, we formally describe the problem and show that it belongs
to a machine learning sub-field called transductive transfer learning (TTL).
Second, we propose to use a simple TTL technique that accounts for the
differences between train data and test data. Third, we propose the use of
ensemble learning, and specifically of stacked generalization, to address the
variability across subjects within train data, with the aim of producing more
stable classifiers. On a face vs. scramble task MEG dataset of 16 subjects, we
compare the standard approach of not modelling the differences across subjects,
to the proposed one of combining TTL and ensemble learning. We show that the
proposed approach is consistently more accurate than the standard one.
|
1404.4181 | Prediction of Transformed (DCT) Video Coding Residual for Video
Compression | cs.IT cs.MM math.IT | Video compression has been investigated by means of analysis-synthesis, and
more particularly by means of inpainting. The first part of our approach has
been to develop the inpainting of DCT coefficients in an image. This has shown
good results for image compression, without however surpassing today's
compression standards such as JPEG. We then looked at integrating the same
approach into a video coder, in particular the widely used H.264/AVC standard
coder; the same approach can also be used in the framework of HEVC. The originality of this
work consists in cancelling at the coder, then automatically restoring, at the
decoder, some well chosen DCT residual coefficients. For this purpose, we have
developed a restoration model of transformed coefficients. By using a total
variation based model, we derive conditions for the reconstruction of
transformed coefficients that have been suppressed or altered. The main purpose
here, in a video coding context, is to improve the rate-distortion performance
of existing coders. To this end DCT restoration is used as an additional
prediction step to the spatial prediction of the transformed coefficients,
based on an image regularization process. The method has been successfully
tested with the H.264/AVC video codec standard.
|
1404.4191 | The Dynamics of Emotional Chats with Bots: Experiment and Agent-Based
Simulations | cs.SI physics.soc-ph | Quantitative research of emotions in psychology and machine-learning methods
for extracting emotion components from text messages open an avenue for
physical science to explore the nature of stochastic processes in which
emotions play a role, e.g., in human dynamics online. Here, we investigate the
occurrence of collective behavior of users that is induced by chats with
emotional Bots. We consider Bots designed in an experimental
environment. Furthermore, using the agent-based modeling approach, the activity
of these experimental Bots is simulated within a social network of interacting
emotional agents. Quantitative analysis of time series carrying emotional
messages by agents suggests temporal correlations and persistent fluctuations
with clustering according to emotion similarity. All data used in this study
are fully anonymized.
|
1404.4258 | An Analysis of State-Relevance Weights and Sampling Distributions on
L1-Regularized Approximate Linear Programming Approximation Accuracy | cs.AI | Recent interest in the use of $L_1$ regularization in value
function approximation includes Petrik et al.'s introduction of
$L_1$-Regularized Approximate Linear Programming (RALP). RALP is unique among
$L_1$-regularized approaches in that it approximates the optimal value function
using off-policy samples. Additionally, it produces policies which outperform
those of previous methods, such as LSPI. RALP's value function approximation
quality is affected heavily by the choice of state-relevance weights in the
objective function of the linear program, and by the distribution from which
samples are drawn; however, there has been no discussion of these
considerations in the previous literature. In this paper, we discuss and
explain the effects of choices in the state-relevance weights and sampling
distribution on approximation quality, using both theoretical and experimental
illustrations. The results not only provide insight into these effects, but
also provide intuition about the types of MDPs which are especially well suited
for approximation with RALP.
|
1404.4273 | List decoding group homomorphisms between supersolvable groups | cs.IT cs.CC math.IT | We show that the set of homomorphisms between two supersolvable groups can be
locally list decoded up to the minimum distance of the code, extending the
results of Dinur et al., who studied the case where the groups are abelian.
Moreover, when specialized to the abelian case, our proof is more streamlined
and gives a better constant in the exponent of the list size. The constant is
improved from about 3.5 million to 105.
|
1404.4274 | Managing Change in Graph-structured Data Using Description Logics (long
version with appendix) | cs.AI cs.LO | In this paper, we consider the setting of graph-structured data that evolves
as a result of operations carried out by users or applications. We study
different reasoning problems, which range from ensuring the satisfaction of a
given set of integrity constraints after a given sequence of updates, to
deciding the (non-)existence of a sequence of actions that would take the data
to an (un)desirable state, starting either from a specific data instance or
from an incomplete description of it. We consider an action language in which
actions are finite sequences of conditional insertions and deletions of nodes
and labels, and use Description Logics for describing integrity constraints and
(partial) states of the data. We then formalize the above data management
problems as a static verification problem and several planning problems. We
provide algorithms and tight complexity bounds for the formalized problems,
both for an expressive DL and for a variant of DL-Lite.
|
1404.4275 | A Bitcoin system with no mining and no history transactions: Build a
compact Bitcoin system | cs.CE cs.CR q-fin.GN | We give an explicit definition of decentralization and show that
decentralization is almost impossible at the current stage, and that Bitcoin is
the first truly noncentralized currency in monetary history. We propose a new
framework of noncentralized cryptocurrency system with an assumption of the
existence of a weak adversary for a bank alliance. It abandons the mining
process and blockchain, and removes history transactions from data
synchronization. We propose a consensus algorithm named Converged Consensus for
a noncentralized cryptocurrency system.
|
1404.4282 | Modeling the wind circulation around mills with a Lagrangian stochastic
approach | cs.CE | This work introduces modeling methodology and numerical studies related
to a Lagrangian stochastic approach applied to the computation of the wind
circulation around mills. We adapt the Lagrangian stochastic downscaling method
that we have introduced in [3] and [4] to the atmospheric boundary layer and we
introduce here a Lagrangian version of the actuator disc methods to take the
mills into account. We present our numerical method and numerical experiments
in the case of non-rotating and rotating actuator disc models. We also present
some features of our numerical method, in particular the computation of the
probability distribution of the wind in the wake zone, as a byproduct of the
fluid particle model and the associated PDF method.
|
1404.4286 | Case study: Data Mining of Associate Degree Accepted Candidates by
Modular Method | cs.DB | For about 10 years, the University of Applied Science and Technology (UAST)
in Iran has admitted students to discontinuous associate degree programs via a
modular method, with almost 100,000 students accepted every year. Although the
original aim of holding such courses was to improve the scientific and skill
levels of employees, over time a considerable group of unemployed people has
become interested in participating in these courses. Accordingly, in this
paper, we mine and analyze sample data of candidates accepted in the 2008 and
2009 modular courses using unsupervised and supervised learning paradigms. In
the first step, using the unsupervised paradigm, we grouped (clustered) the set
of accepted candidates based on their student status and labeled the data sets
with three classes, each of which reflects the educational and student status
of the accepted candidates. In the second step, using supervised and
unsupervised algorithms, we generated predictive models on the 2008 data sets.
Then, by comparing the performance of the generated models, we selected the
association-rule prediction model, from which some rules were
extracted. Finally, this model is applied to a test set that includes accepted
candidates of the following course; by evaluating the results, the correctness
and confidence of the obtained results can be assessed.
|
1404.4304 | Automated Classification of Airborne Laser Scanning Point Clouds | cs.CE cs.AI | Making sense of the physical world has always been at the core of mapping. Up
until recently, this has always depended on using the human eye. Using
airborne lasers, it has become possible to quickly "see" more of the world in
many more dimensions. The resulting enormous point clouds serve as data sources
for applications far beyond the original mapping purposes, ranging from flood
protection and forestry to threat mitigation. In order to process these large
quantities of data, novel methods are required. In this contribution, we
develop models to automatically classify ground cover and soil types. Using the
logic of machine learning, we critically review the advantages of supervised
and unsupervised methods. Focusing on decision trees, we improve accuracy by
including beam vector components and using a genetic algorithm. We find that
our approach delivers consistently high quality classifications, surpassing
classical methods.
|
1404.4314 | An Empirical Comparison of Parsing Methods for Stanford Dependencies | cs.CL | Stanford typed dependencies are a widely desired representation of natural
language sentences, but parsing is one of the major computational bottlenecks
in text analysis systems. In light of the evolving definition of the Stanford
dependencies and developments in statistical dependency parsing algorithms,
this paper revisits the question of Cer et al. (2010): what is the tradeoff
between accuracy and speed in obtaining Stanford dependencies in particular? We
also explore the effects of input representations on this tradeoff:
part-of-speech tags, the novel use of an alternative dependency representation
as input, and distributional representations of words. We find that direct
dependency parsing is a more viable solution than it was found to be in the
past. An accompanying software release can be found at:
http://www.ark.cs.cmu.edu/TBSD
|
1404.4316 | Generic Object Detection With Dense Neural Patterns and Regionlets | cs.CV | This paper addresses the challenge of establishing a bridge between deep
convolutional neural networks and conventional object detection frameworks for
accurate and efficient generic object detection. We introduce Dense Neural
Patterns, short for DNPs, which are dense local features derived from
discriminatively trained deep convolutional neural networks. DNPs can be easily
plugged into conventional detection frameworks in the same way as other dense
local features(like HOG or LBP). The effectiveness of the proposed approach is
demonstrated with the Regionlets object detection framework. It achieved 46.1%
mean average precision on the PASCAL VOC 2007 dataset, and 44.1% on the PASCAL
VOC 2010 dataset, which dramatically improves the original Regionlets approach
without DNPs.
|
1404.4326 | Open Question Answering with Weakly Supervised Embedding Models | cs.CL cs.LG | Building computers able to answer questions on any subject is a long standing
goal of artificial intelligence. Promising progress has recently been achieved
by methods that learn to map questions to logical forms or database queries.
Such approaches can be effective but at the cost of either large amounts of
human-labeled data or by defining lexicons and grammars tailored by
practitioners. In this paper, we instead take the radical approach of learning
to map questions to vectorial feature representations. By mapping answers into
the same space one can query any knowledge base independent of its schema,
without requiring any grammar or lexicon. Our method is trained with a new
optimization procedure consisting of stochastic gradient descent followed by a
fine-tuning step that uses the weak supervision provided by blending automatically
and collaboratively generated resources. We empirically demonstrate that our
model can capture meaningful signals from its noisy supervision leading to
major improvements over Paralex, the only existing method able to be trained on
similar weakly labeled data.
|
1404.4350 | Kalman meets Shannon | cs.IT math.IT math.OC | We consider the problem of communicating the state of a dynamical system via
a Shannon Gaussian channel. The receiver, which acts as both a decoder and
estimator, observes the noisy measurement of the channel output and makes an
optimal estimate of the state of the dynamical system in the minimum mean
square sense. The transmitter observes a possibly noisy measurement of the
state of the dynamical system. These measurements are then used to encode the
message to be transmitted over a noisy Gaussian channel, where a per sample
power constraint is imposed on the transmitted message. Thus, we get a mixed
problem of Shannon's source-channel coding problem and a sort of Kalman
filtering problem. We first consider the problem of communication with full
state measurements at the transmitter and show that optimal linear encoders
need not have memory and that optimal linear decoders have an order at most
equal to the state dimension. We also give explicitly the structure of the
optimal linear filters. For the case where the transmitter has access to noisy
measurements of the state, we derive a separation principle for the optimal
communication scheme, where the transmitter needs a filter with an order of at
most the dimension of the state of the dynamical system. The results are
derived for first-order linear dynamical systems, but may be extended to MIMO
systems with arbitrary order.
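A memoryless linear scheme of the kind the abstract describes can be simulated for a scalar state: scale the state to meet the per-sample power constraint, transmit over an AWGN channel, and form the linear MMSE estimate at the receiver. The gains below are illustrative choices matched to the stationary regime, not the paper's derived optimal filters:

```python
import numpy as np

# Scalar first-order state over a Gaussian channel with a memoryless
# linear encoder and a linear MMSE decoder (toy parameter values).
rng = np.random.default_rng(1)
a, P, N0 = 0.9, 1.0, 0.5        # state pole, power limit, channel noise var
sx2 = 1.0 / (1 - a**2)          # stationary state variance (unit process noise)
alpha = np.sqrt(P / sx2)        # memoryless encoder gain meeting E[u^2] <= P
x, err2 = 0.0, []
for _ in range(5000):
    x = a * x + rng.standard_normal()                    # state dynamics
    y = alpha * x + np.sqrt(N0) * rng.standard_normal()  # Gaussian channel
    xhat = (alpha * sx2 / (alpha**2 * sx2 + N0)) * y     # linear MMSE estimate
    err2.append((x - xhat) ** 2)
mse = float(np.mean(err2))
```

The empirical MSE should approach the stationary value `sx2 * N0 / (P + N0)`, well below the prior variance `sx2`, illustrating the gain from communication even without encoder memory.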
|
1404.4351 | Stable Graphical Models | cs.LG stat.ML | Stable random variables are motivated by the central limit theorem for
densities with (potentially) unbounded variance and can be thought of as
natural generalizations of the Gaussian distribution to skewed and heavy-tailed
phenomena. In this paper, we introduce stable graphical (SG) models, a class
of multivariate stable densities that can also be represented as Bayesian
networks whose edges encode linear dependencies between random variables. One
major hurdle to the extensive use of stable distributions is the lack of a
closed-form analytical expression for their densities. This makes penalized
maximum-likelihood based learning computationally demanding. We establish
theoretically that the Bayesian information criterion (BIC) can asymptotically
be reduced to the computationally more tractable minimum dispersion criterion
(MDC) and develop StabLe, a structure learning algorithm based on MDC. We use
simulated datasets for five benchmark network topologies to empirically
demonstrate how StabLe improves upon ordinary least squares (OLS) regression.
We also apply StabLe to microarray gene expression data for lymphoblastoid
cells from 727 individuals belonging to eight global population groups. We
establish that StabLe improves test set performance relative to OLS via
ten-fold cross-validation. Finally, we develop SGEX, a method for quantifying
differential expression of genes between different population groups.
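A two-node toy instance of an SG model, a linear edge driven by alpha-stable innovations, can be generated directly with SciPy; the edge weight and stability index below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.stats import levy_stable

# Toy linear Bayesian network with alpha-stable noise: each node is a
# linear function of its parents plus heavy-tailed stable innovations.
rng = np.random.default_rng(5)
n, alpha = 2000, 1.5                 # samples; alpha < 2 gives heavy tails
e1 = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
e2 = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
x1 = e1                              # root node
x2 = 0.8 * x1 + e2                   # child: linear dependence on x1
# A naive OLS fit of the edge weight; the closed-form density needed for
# penalized maximum likelihood is unavailable, which is what motivates
# the dispersion-based criterion in the abstract.
ols = float(np.sum(x1 * x2) / np.sum(x1 * x1))
```

Note that the samples have unbounded variance for `alpha < 2`, so second-moment-based diagnostics on this data are unreliable; that is precisely the regime SG models target.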
|
1404.4356 | Phase transition in kinetic exchange opinion models with independence | physics.soc-ph cond-mat.stat-mech cs.SI | In this work we study the critical behavior of a three-state ($+1$, $-1$,
$0$) opinion model with independence. Each agent acts independently with
probability $q$, i.e., he/she can choose his/her opinion regardless of the
opinions of the other agents. On the other hand, with the complementary
probability $1-q$ the agent interacts with a randomly chosen individual through
a kinetic exchange. Our analytical and numerical results show that the
independence mechanism acts as a noise that induces an order-disorder
transition at critical points $q_{c}$ that depend on the individuals'
flexibility. For a special value of this flexibility the system undergoes a
transition to an absorbing state with all opinions $0$.
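One plausible microscopic realization of such a model can be simulated directly. The specific two-body update below (`o_i <- sign(o_i + o_j)`) is an illustrative kinetic-exchange rule, not necessarily the exact interaction kernel the paper analyzes:

```python
import numpy as np

# Monte Carlo sketch of a three-state (+1, 0, -1) opinion model with
# independence probability q (toy update rule and parameter values).
rng = np.random.default_rng(2)
N, q, sweeps = 500, 0.1, 100
o = rng.choice([-1, 0, 1], size=N)
for _ in range(sweeps * N):
    i = rng.integers(N)
    if rng.random() < q:                  # independent: pick any opinion
        o[i] = rng.choice([-1, 0, 1])
    else:                                 # kinetic exchange with random partner
        j = rng.integers(N)
        o[i] = int(np.sign(o[i] + o[j]))
order = abs(o.sum()) / N                  # order parameter in [0, 1]
```

Sweeping `q` and recording `order` would trace out the order-disorder transition at $q_c$ described in the abstract.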
|
1404.4386 | Probabilistic Data Association-Feedback Particle Filter for Multiple
Target Tracking Applications | math.PR cs.SY math.OC | This paper is concerned with the problem of tracking single or multiple
targets with multiple non-target specific observations (measurements). For such
filtering problems with data association uncertainty, a novel feedback
control-based particle filter algorithm is introduced. The algorithm is
referred to as the probabilistic data association-feedback particle filter
(PDA-FPF). The proposed filter is shown to represent a generalization to the
nonlinear non-Gaussian case of the classical Kalman filter-based probabilistic
data association filter (PDAF). One remarkable conclusion is that the proposed
PDA-FPF algorithm retains the innovation error-based feedback structure of the
classical PDAF algorithm, even in the nonlinear non-Gaussian case. The
theoretical results are illustrated with the aid of numerical examples
motivated by multiple target tracking applications.
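The probabilistic data association idea, folding each measurement's association probability into the update, can be sketched with an ordinary bootstrap particle filter. This is a simplification: the paper's feedback particle filter uses a control-based update rather than importance weights, and all parameter values here are illustrative:

```python
import numpy as np

# PDA-style likelihood inside a bootstrap particle filter: each particle's
# weight mixes a clutter-only hypothesis with every measurement being
# target-originated (scalar target, one clutter point per scan).
rng = np.random.default_rng(3)
Np, T = 500, 30
a, Q, R = 0.95, 0.1, 0.2        # dynamics gain, process and measurement noise
p_d, clutter = 0.9, 0.5         # detection probability, clutter density (toy)
x_true = 1.0
particles = rng.standard_normal(Np)
for _ in range(T):
    x_true = a * x_true + np.sqrt(Q) * rng.standard_normal()
    zs = [x_true + np.sqrt(R) * rng.standard_normal(),  # target measurement
          rng.uniform(-5, 5)]                           # clutter measurement
    particles = a * particles + np.sqrt(Q) * rng.standard_normal(Np)
    lik = (1 - p_d) * clutter * np.ones(Np)             # missed-detection term
    for z in zs:                                        # association mixture
        lik += p_d * np.exp(-(z - particles) ** 2 / (2 * R)) \
               / np.sqrt(2 * np.pi * R)
    w = lik / lik.sum()
    particles = particles[rng.choice(Np, size=Np, p=w)]  # resample
x_hat = float(particles.mean())
```

The mixture likelihood is the essential PDA ingredient; the feedback particle filter of the abstract replaces the weight-and-resample step with an innovation-error feedback control on each particle.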
|
1404.4388 | Partially Observed, Multi-objective Markov Games | math.OC cs.AI cs.GT | The intent of this research is to generate a set of non-dominated policies
from which one of two agents (the leader) can select a most preferred policy to
control a dynamic system that is also affected by the control decisions of the
other agent (the follower). The problem is described by an infinite horizon,
partially observed Markov game (POMG). At each decision epoch, each agent
knows: its past and present states, its past actions, and noise corrupted
observations of the other agent's past and present states. The actions of each
agent are determined at each decision epoch based on these data. The leader
considers multiple objectives in selecting its policy. The follower considers a
single objective in selecting its policy with complete knowledge of and in
response to the policy selected by the leader. This leader-follower assumption
allows the POMG to be transformed into a specially structured, partially
observed Markov decision process (POMDP). This POMDP is used to determine the
follower's best response policy. A multi-objective genetic algorithm (MOGA) is
used to create the next generation of leader policies based on the fitness
measures of each leader policy in the current generation. Computing a fitness
measure for a leader policy requires a value determination calculation, given
the leader policy and the follower's best response policy. The policies from
which the leader can select a most preferred policy are the non-dominated
policies of the final generation of leader policies created by the MOGA. An
example is presented that illustrates how these results can be used to support
a manager of a liquid egg production process (the leader) in selecting a
sequence of actions to best control this process over time, given that there is
an attacker (the follower) who seeks to contaminate the liquid egg production
process with a chemical or biological toxin.
|
1404.4391 | Control of Robotic Mobility-On-Demand Systems: a Queueing-Theoretical
Perspective | cs.RO cs.MA | In this paper we present and analyze a queueing-theoretical model for
autonomous mobility-on-demand (MOD) systems where robotic, self-driving
vehicles transport customers within an urban environment and rebalance
themselves to ensure acceptable quality of service throughout the entire
network. We cast an autonomous MOD system within a closed Jackson network model
with passenger loss. It is shown that an optimal rebalancing algorithm
minimizing the number of (autonomously) rebalancing vehicles and keeping
vehicle availabilities balanced throughout the network can be found by solving
a linear program. The theoretical insights are used to design a robust,
real-time rebalancing algorithm, which is applied to a case study of New York
City. The case study shows that the current taxi demand in Manhattan can be met
with about 8,000 robotic vehicles (roughly 60% of the size of the current taxi
fleet). Finally, we extend our queueing-theoretical setup to include congestion
effects, and we study the impact of autonomously rebalancing vehicles on
overall congestion. Collectively, this paper provides a rigorous approach to
the problem of system-wide coordination of autonomously driving vehicles, and
provides one of the first characterizations of the sustainability benefits of
robotic transportation networks.
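The linear-program view of rebalancing can be sketched as a transportation LP: move empty vehicles between stations at minimum travel cost so that every station's surplus or deficit is cancelled. This toy three-station instance is not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Minimum-cost rebalancing: decision variable f[i, j] is the empty-vehicle
# flow from station i to j; net outflow at each station must equal its excess.
cost = np.array([[0., 1., 2.],
                 [1., 0., 1.],
                 [2., 1., 0.]])          # travel times between 3 stations
excess = np.array([4., -1., -3.])        # vehicles above (+) / below (-) need
n = len(excess)
c = cost.ravel()
A_eq, b_eq = [], []
for i in range(n):                       # outflow - inflow = excess at i
    row = np.zeros((n, n))
    row[i, :] += 1.0
    row[:, i] -= 1.0
    A_eq.append(row.ravel())
    b_eq.append(excess[i])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
flows = res.x.reshape(n, n)
```

Here the optimum ships one vehicle to station 1 and three to station 2, for a total cost of 7; the real-time algorithm in the paper re-solves this kind of program as demand evolves.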
|
1404.4400 | Strong Divergence of Reconstruction Procedures for the Paley-Wiener
Space $\mathcal{PW}^1_\pi$ and the Hardy Space $\mathcal{H}^1$ | cs.IT math.IT | Previous results on certain sampling series have left open whether divergence
occurs only for certain subsequences or, in fact, in the limit. Here we prove that
divergence occurs in the limit.
We consider three canonical reconstruction methods for functions in the
Paley-Wiener space $\mathcal{PW}^1_\pi$. For each of these we prove an instance
when the reconstruction diverges in the limit. This is a much stronger
statement than previous results that provide only $\limsup$ divergence. We also
address reconstruction for functions in the Hardy space $\mathcal{H}^1$ and
show that for any subsequence of the natural numbers there exists a function in
$\mathcal{H}^1$ for which reconstruction diverges in $\limsup$. For two of
these sampling series we show that when divergence occurs, the sampling series
has strong oscillations so that the maximum and the minimum tend to positive
and negative infinity. Our results are of interest in functional analysis
because they go beyond the type of result that can be obtained using the
Banach-Steinhaus Theorem. We discuss practical implications of this work; in
particular the work shows that methods using specially chosen subsequences of
reconstructions cannot yield convergence for the Paley-Wiener Space
$\mathcal{PW}^1_\pi$.
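For orientation, the truncated Shannon sampling series is the standard example of the kind of reconstruction at issue (the abstract does not specify which three methods the paper treats, so this is context rather than the paper's construction):

```latex
% Truncated sampling series for a bandlimited f:
\[
  (S_N f)(t) \;=\; \sum_{k=-N}^{N} f(k)\,
  \frac{\sin\bigl(\pi(t-k)\bigr)}{\pi(t-k)} .
\]
% Strong divergence means the limit itself (not merely the limsup) of the
% peak value blows up for some f in the space:
\[
  \lim_{N\to\infty} \; \max_{t\in\mathbb{R}} \bigl|(S_N f)(t)\bigr| \;=\; \infty .
\]
```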
|
1404.4412 | Efficient Nonnegative Tucker Decompositions: Algorithms and Uniqueness | cs.LG cs.CV stat.ML | Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction
of nonnegative parts-based and physically meaningful latent components from
high-dimensional tensor data while preserving the natural multilinear structure
of data. However, as the data tensor often has multiple modes and is
large-scale, existing NTD algorithms suffer from a very high computational
complexity in terms of both storage and computation time, which has been one
major obstacle for practical applications of NTD. To overcome these
disadvantages, we show how low (multilinear) rank approximation (LRA) of
tensors is able to significantly simplify the computation of the gradients of
the cost function, upon which a family of efficient first-order NTD algorithms
is developed. Besides dramatically reducing the storage complexity and running
time, the new algorithms are quite flexible and robust to noise because any
well-established LRA approaches can be applied. We also show how nonnegativity
combined with sparsity substantially improves the uniqueness property and
partially alleviates the curse of dimensionality of the Tucker decompositions.
Simulation results on synthetic and real-world data justify the validity and
high efficiency of the proposed NTD algorithms.
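The LRA step that the abstract leverages can be sketched with a truncated HOSVD: compress the data tensor to a small core and factor matrices, from which gradients can be formed cheaply. This shows only the compression, not the paper's NTD updates themselves:

```python
import numpy as np

# Truncated HOSVD: a low multilinear-rank approximation of a 3-way tensor.
rng = np.random.default_rng(4)
X = rng.random((20, 30, 40))             # nonnegative data tensor (toy)
ranks = (5, 5, 5)

def unfold(T, mode):
    """Mode-m matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# leading left singular vectors of each unfolding
U = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
     for m, r in enumerate(ranks)]
# core tensor: contract X with U_m^T along every mode
G = X
for m, Um in enumerate(U):
    G = np.moveaxis(np.tensordot(Um.T, np.moveaxis(G, m, 0), axes=1), 0, m)
# reconstruct the low multilinear-rank approximation
Xhat = G
for m, Um in enumerate(U):
    Xhat = np.moveaxis(np.tensordot(Um, np.moveaxis(Xhat, m, 0), axes=1), 0, m)
rel_err = float(np.linalg.norm(Xhat - X) / np.linalg.norm(X))
```

Gradient computations that would otherwise touch the full 20x30x40 tensor can instead work with the 5x5x5 core and the three thin factors, which is the source of the storage and runtime savings the abstract claims.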
|
1404.4420 | Random Matrix Systems with Block-Based Behavior and Operator-Valued
Models | math.PR cs.IT math.IT | A model to estimate the asymptotic isotropic mutual information of a
multiantenna channel is considered. Using block-based dynamics and the angle
diversity of the system, we derive what may be thought of as the
operator-valued version of the Kronecker correlation model. This model turns
out to be more flexible than the classical version, as it incorporates both an
arbitrary channel correlation and the correlation produced by the asymptotic
antenna patterns. A method to calculate the asymptotic isotropic mutual
information of the system is established using operator-valued free probability
tools. A particular case is considered in which we start with explicit Cauchy
transforms and all the computations are done with diagonal matrices, which makes
the implementation simpler and more efficient.
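For reference, the scalar Cauchy transform that these computations generalize is (the operator-valued version replaces the complex argument by a matrix-valued one):

```latex
% Cauchy (Stieltjes) transform of a probability measure mu on the reals:
\[
  G_\mu(z) \;=\; \int_{\mathbb{R}} \frac{1}{z - t}\, \mathrm{d}\mu(t),
  \qquad z \in \mathbb{C}^{+} .
\]
% The spectral distribution, and hence log-det functionals such as the
% mutual information, can be recovered from the boundary values of G_mu.
```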
|