id | title | categories | abstract |
|---|---|---|---|
1402.2664 | Network-Based Vertex Dissolution | cs.DM cs.DS cs.SI math.CO | We introduce a graph-theoretic vertex dissolution model that applies to a
number of redistribution scenarios such as gerrymandering in political
districting or work balancing in an online situation. The central aspect of our
model is the deletion of certain vertices and the redistribution of their load
to neighboring vertices in a completely balanced way.
We investigate how the underlying graph structure, the knowledge of which
vertices should be deleted, and the relation between old and new vertex loads
influence the computational complexity of the underlying graph problems. Our
results establish a clear borderline between tractable and intractable cases.
|
1402.2667 | On Zeroth-Order Stochastic Convex Optimization via Random Walks | cs.LG stat.ML | We propose a method for zeroth order stochastic convex optimization that
attains the suboptimality rate of $\tilde{\mathcal{O}}(n^{7}T^{-1/2})$ after
$T$ queries for a convex bounded function $f:{\mathbb R}^n\to{\mathbb R}$. The
method is based on a random walk (the \emph{Ball Walk}) on the epigraph of the
function. The randomized approach circumvents the problem of gradient
estimation, and appears to be less sensitive to noisy function evaluations
compared to noiseless zeroth order methods.
|
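The abstract above describes optimizing a function by random-walking on its epigraph. A minimal sketch of that idea follows; this is not the paper's exact Ball Walk algorithm or its rate analysis, and the proposal shape, step radius, and ceiling `t_max` are illustrative assumptions. The only oracle access is to function values, so no gradients are ever estimated.

```python
import numpy as np

def ball_walk_minimize(f, x0, t_max, radius=0.25, steps=5000, seed=0):
    """Random walk on the epigraph {(x, t): f(x) <= t <= t_max} of f.

    Tracks the lowest observed value of f; a rough zeroth-order
    optimizer that never queries gradients."""
    rng = np.random.default_rng(seed)
    x, t = np.asarray(x0, float), t_max        # start inside the epigraph
    best = f(x)
    for _ in range(steps):
        # propose a uniform step in a box (a simplification of the ball proposal)
        step = rng.uniform(-radius, radius, size=x.size + 1)
        x_new, t_new = x + step[:-1], t + step[-1]
        fx_new = f(x_new)                      # the only oracle access: f values
        if fx_new <= t_new <= t_max:           # accept only inside the epigraph
            x, t, best = x_new, t_new, min(best, fx_new)
    return best

# Hypothetical example: minimize ||x||^2 starting from f(x0) = 1.28.
best = ball_walk_minimize(lambda x: float(x @ x), x0=[0.8, 0.8], t_max=2.0)
```

Because accepted moves must stay under the epigraph ceiling, the walk drifts toward low function values even though every evaluation may be noisy in the paper's setting.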
1402.2671 | Aggregate Characterization of User Behavior in Twitter and Analysis of
the Retweet Graph | cs.SI physics.soc-ph | Most previous analysis of Twitter user behavior is focused on individual
information cascades and the social followers graph. We instead study aggregate
user behavior and the retweet graph with a focus on quantitative descriptions.
We find that the lifetime tweet distribution is a type-II discrete Weibull
stemming from a power law hazard function, the tweet rate distribution,
although asymptotically power law, exhibits a lognormal cutoff over finite
sample intervals, and the inter-tweet interval distribution is power law with
exponential cutoff. The retweet graph is small-world and scale-free, like the
social graph, but is less disassortative and has much stronger clustering.
These differences are consistent with it better capturing the real-world social
relationships of and trust between users. Beyond just understanding and
modeling human communication patterns and social networks, applications for
alternative, decentralized microblogging systems, both predicting real-world
performance and detecting spam, are discussed.
|
1402.2673 | Real-Time Hand Shape Classification | cs.CV | The problem of hand shape classification is challenging since a hand is
characterized by a large number of degrees of freedom. Numerous shape
descriptors have been proposed and applied over the years to estimate and
classify hand poses in reasonable time. In this paper we discuss our parallel
framework for hand shape classification applicable in real-time applications.
We show how the number of gallery images influences the
classification accuracy and execution time of the parallel algorithm. We
present the speedup and efficiency analyses that prove the efficacy of the
parallel implementation. Notably, different methods can be used at each step
of our parallel framework. Here, we combine the shape contexts with the
appearance-based techniques to enhance the robustness of the algorithm and to
increase the classification score. An extensive experimental study proves the
superiority of the proposed approach over existing state-of-the-art methods.
|
1402.2676 | Ranking via Robust Binary Classification and Parallel Parameter
Estimation in Large-Scale Data | stat.ML cs.DC cs.LG stat.CO | We propose RoBiRank, a ranking algorithm that is motivated by observing a
close connection between evaluation metrics for learning to rank and loss
functions for robust classification. The algorithm shows a very competitive
performance on standard benchmark datasets against other representative
algorithms in the literature. On the other hand, in large scale problems where
explicit feature vectors and scores are not given, our algorithm can be
efficiently parallelized across a large number of machines; for a task that
requires 386,133 x 49,824,519 pairwise interactions between items to be ranked,
our algorithm finds solutions that are of dramatically higher quality than
those found by a state-of-the-art competitor algorithm, given the same amount
of wall-clock time for computation.
|
1402.2681 | Packing and Padding: Coupled Multi-index for Accurate Image Retrieval | cs.CV | In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low
discriminative power, so false positive matches are prevalent. Apart from
the information loss during quantization, another cause is that the SIFT
feature only describes the local gradient distribution. To address this
problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform
feature fusion at indexing level. Basically, complementary features are coupled
into a multi-dimensional inverted index. Each dimension of c-MI corresponds to
one kind of feature, and the retrieval process votes for images similar in both
SIFT and other feature spaces. Specifically, we exploit the fusion of a local
color feature into c-MI. While the precision of visual matching is greatly
enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation
of SIFT and color features significantly reduces the impact of false positive
matches.
Extensive experiments on several benchmark datasets demonstrate that c-MI
improves the retrieval accuracy significantly, while consuming only half of the
query time compared to the baseline. Importantly, we show that c-MI is well
complementary to many prior techniques. Assembling these methods, we have
obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench
datasets, respectively, which compare favorably with the state of the art.
|
1402.2695 | Visualizing Digital Collections | cs.DL cs.IR | Data visualizations can greatly enhance search in digital collections by
providing information about the scope and context of a collection and allowing
users to more easily browse and explore the contents. This article discusses
the benefits of incorporating visualizations into digital collections based on
the experiences of the Cold War International History Project (CWIHP) in
developing a user-friendly tool for searching and visualizing the project's
complex set of historical documents. The paper concludes with a tutorial on
using the free Library of Congress tool Viewshare to create visualizations
based on real data from the CWIHP Digital Archive.
|
1402.2696 | Information Based Complexity of Networks | nlin.AO cs.SI | A review article of various complexity measures of networks.
|
1402.2699 | On the Security of Trustee-based Social Authentications | cs.CR cs.SI | Recently, authenticating users with the help of their friends (i.e.,
trustee-based social authentication) has been shown to be a promising backup
authentication mechanism. A user in this system is associated with a few
trustees that were selected from the user's friends. When the user wants to
regain access to the account, the service provider sends different verification
codes to the user's trustees. The user must obtain at least k (i.e., recovery
threshold) verification codes from the trustees before being directed to reset
his or her password.
In this paper, we provide the first systematic study of the security of
trustee-based social authentications. Specifically, we first introduce a novel
framework of attacks, which we call forest fire attacks. In these attacks, an
attacker initially obtains a small number of compromised users, and then the
attacker iteratively attacks the remaining users by exploiting trustee-based
social authentications. Then, we construct a probabilistic model to formalize
the threats of forest fire attacks and their costs for attackers. Moreover, we
introduce various defense strategies. Finally, we apply our framework to
extensively evaluate various concrete attack and defense strategies using three
real-world social network datasets. Our results have strong implications for
the design of more secure trustee-based social authentications.
|
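The iterative spread described in the abstract above can be sketched as a simple fixpoint computation. The trustee lists, recovery threshold, and seed set below are hypothetical, and a realistic model, as the paper notes, would also account for per-attempt success probabilities and attacker costs.

```python
def forest_fire(trustees, seeds, k):
    """Iteratively compromise every user who has at least k compromised
    trustees, starting from an initial seed set of compromised users."""
    compromised = set(seeds)
    changed = True
    while changed:                        # repeat until no new user falls
        changed = False
        for user, tlist in trustees.items():
            if user not in compromised and \
               sum(t in compromised for t in tlist) >= k:
                compromised.add(user)     # enough trustees leaked codes
                changed = True
    return compromised

# Hypothetical social network: each user lists three trustees.
trustees = {
    "alice": ["s1", "s2", "bob"],
    "bob":   ["alice", "s1", "carol"],
    "carol": ["dave", "eve", "frank"],
}
owned = forest_fire(trustees, seeds={"s1", "s2"}, k=2)
```

Here compromising the two seeds cascades to "alice" (two compromised trustees) and then to "bob", while "carol", whose trustees are untouched, survives, which is exactly the fire-spreading dynamic the attack's name suggests.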
1402.2704 | Sex as Gibbs Sampling: a probability model of evolution | q-bio.PE cs.NE | We show that evolutionary computation can be implemented as standard
Markov-chain Monte-Carlo (MCMC) sampling. With some care, `genetic algorithms'
can be constructed that are reversible Markov chains that satisfy detailed
balance; it follows that the stationary distribution of populations is a Gibbs
distribution in a simple factorised form. For some standard and popular
nonparametric probability models, we exhibit Gibbs-sampling procedures that are
plausible genetic algorithms. At mutation-selection equilibrium, a population
of genomes is analogous to a sample from a Bayesian posterior, and the genomes
are analogous to latent variables. We suggest this is a general, tractable, and
insightful formulation of evolutionary computation in terms of standard machine
learning concepts and techniques.
In addition, we show that evolutionary processes in which selection acts by
differences in fecundity are not reversible, and also that it is not possible
to construct reversible evolutionary models in which each child is produced by
only two parents.
|
1402.2707 | Analysis of Non-Coherent Joint-Transmission Cooperation in Heterogeneous
Cellular Networks | cs.IT math.IT | Base station (BS) cooperation is set to play a key role in managing
interference in dense heterogeneous cellular networks (HCNs). Non-coherent
joint transmission (JT) is particularly appealing due to its low complexity,
smaller overhead, and ability for load balancing. However, a general analysis
of this technique is difficult mostly due to the lack of tractable models. This
paper addresses this gap and presents a tractable model for analyzing
non-coherent JT in HCNs, while incorporating key system parameters such as
user-centric BS clustering and channel-dependent cooperation activation.
Assuming all BSs of each tier follow a stationary Poisson point process, the
coverage probability for non-coherent JT is derived. Using the developed model,
it is shown that for small cooperative clusters of small-cell BSs, non-coherent
JT by small cells provides spectral efficiency gains without significantly
increasing cell load. Further, when cooperation is aggressively triggered,
intra-cluster frequency reuse within small cells is favorable over
intra-cluster coordinated scheduling.
|
1402.2708 | Game theoretic controller synthesis for multi-robot motion planning Part
I : Trajectory based algorithms | cs.MA cs.GT cs.SY math.OC | We consider a class of multi-robot motion planning problems where each robot
is associated with multiple objectives and decoupled task specifications. The
problems are formulated as an open-loop non-cooperative differential game. A
distributed anytime algorithm is proposed to compute a Nash equilibrium of the
game. The following properties are proven: (i) the algorithm asymptotically
converges to the set of Nash equilibria; (ii) for scalar cost functionals, the
price of stability equals one; (iii) in the worst case, the computational
complexity and communication cost are linear in the number of robots.
|
1402.2720 | Noise Analysis for Lensless Compressive Imaging | cs.CV | We analyze the signal to noise ratio (SNR) in a recently proposed lensless
compressive imaging architecture. The architecture consists of a sensor of a
single detector element and an aperture assembly of an array of aperture
elements, each of which has a programmable transmittance. This lensless
compressive imaging architecture can be used in conjunction with compressive
sensing to capture images in a compressed form of compressive measurements. In
this paper, we perform noise analysis of this lensless compressive imaging
architecture and compare it with pinhole aperture imaging and lens aperture
imaging. We will show that the SNR in the lensless compressive imaging is
independent of the image resolution, while that in either pinhole aperture
imaging or lens aperture imaging decreases as the image resolution increases.
Consequently, the SNR in the lensless compressive imaging can be much higher if
the image resolution is large enough.
|
1402.2733 | An efficient algorithm for the entropy rate of a hidden Markov model
with unambiguous symbols | cs.IT math.IT | We demonstrate an efficient formula to compute the entropy rate $H(\mu)$ of a
hidden Markov process with $q$ output symbols where at least one symbol is
unambiguously received. Using an approximation of $H(\mu)$ to the first $N$
terms, we give an $O(Nq^3)$ algorithm to compute the entropy rate of the hidden
Markov model. We use the algorithm to estimate the entropy rate when the
parameters of the hidden Markov model are unknown. In the case of $q = 2$, the
process is the output of the Z-channel, and we use this fact to give bounds on
the capacity of the Gilbert channel.
|
1402.2735 | Optimal Parameter Identification for Discrete Mechanical Systems with
Application to Flexible Object Manipulation | cs.RO | We present a method for system identification of flexible objects by
measuring forces and displacement during interaction with a manipulating arm.
We model the object's structure and flexibility by a chain of rigid bodies
connected by torsional springs. Unlike previous work, the proposed optimal
control approach using variational integrators allows identification of closed
loops, which include the robot arm itself. This allows using the resulting
models for planning in the configuration space of the robot. In order to solve the
resulting problem efficiently, we develop a novel method for fast discrete-time
adjoint-based gradient calculation. The feasibility of the approach is
demonstrated using full physics simulation in trep and using data recorded from
a 7-DOF series elastic robot arm.
|
1402.2773 | Noisy Gradient Descent Bit-Flip Decoding for LDPC Codes | cs.IT math.IT | A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for
decoding Low Density Parity Check (LDPC) codes on the binary-input additive
white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF),
introduces a random perturbation into each symbol metric at each iteration. The
noise perturbation allows the algorithm to escape from undesirable local
maxima, resulting in improved performance. A combination of heuristic
improvements to the algorithm is proposed and evaluated. When the proposed
heuristics are applied, NGDBF performs better than any previously reported GDBF
variant, and comes within 0.5 dB of the belief propagation algorithm for
several tested codes. Unlike other previous GDBF algorithms that provide an
escape from local maxima, the proposed algorithm uses only local, fully
parallelizable operations and does not require computing a global objective
function or a sort over symbol metrics, making it highly efficient in
comparison. The proposed NGDBF algorithm requires channel state information
which must be obtained from a signal to noise ratio (SNR) estimator.
Architectural details are presented for implementing the NGDBF algorithm.
Complexity analysis and optimizations are also discussed.
|
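A toy sketch of the noisy bit-flip idea described above, on a small parity-check code rather than an LDPC code; this is not the authors' optimized NGDBF variant, and the code, threshold, and noise scale are illustrative assumptions.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, used as a stand-in for
# an LDPC code; NGDBF itself targets much longer, sparser codes.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def ngdbf_decode(y, theta=-0.5, sigma=0.1, iters=50, seed=0):
    """Noisy gradient-descent bit flipping over bipolar symbols (+1/-1)."""
    rng = np.random.default_rng(seed)
    x = np.sign(y)
    for _ in range(iters):
        checks = np.array([np.prod(x[row == 1]) for row in H])
        if np.all(checks == 1):                # all parity checks satisfied
            break
        # per-symbol metric: channel term plus adjacent check products
        e = x * y + H.T @ checks
        e += rng.normal(0.0, sigma, size=x.size)   # the noise perturbation
        x = np.where(e < theta, -x, x)         # flip symbols with low metric
    return x

# All-zeros codeword maps to all +1 bipolar symbols; bit 0 is corrupted.
y = np.array([-0.7, 1, 1, 1, 1, 1, 1.0])
decoded = ngdbf_decode(y)
```

The corrupted symbol gets a strongly negative metric (its channel term is weak and both of its checks fail), so the threshold rule flips it; the random perturbation is what lets the full algorithm escape trapping sets on real LDPC codes.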
1402.2793 | Computing Agents for Decision Support Systems | cs.MA | In decision support systems, it is essential to get a candidate solution
fast, even if it means resorting to an approximation. This constraint
introduces a scalability requirement with regard to the kind of heuristics
which can be used in such systems. As execution time is bounded, these
algorithms need to give better results and scale up with additional computing
resources instead of additional time. In this paper, we show how multi-agent
systems can fulfil these requirements. We recall as an example the concept of
Evolutionary Multi-Agent Systems, which combine evolutionary and agent
computing paradigms. We describe several possible implementations and present
experimental results demonstrating how additional resources improve the
efficiency of such systems.
|
1402.2796 | PR2: A Language Independent Unsupervised Tool for Personality
Recognition from Text | cs.CL | We present PR2, a personality recognition system available online, that
performs instance-based classification of Big5 personality types from
unstructured text, using language-independent features. It has been tested on
English and Italian, achieving performance of up to f=.68.
|
1402.2807 | Efficient Truss Maintenance in Evolving Networks | cs.DB cs.SI | The truss was proposed to study social network data represented by graphs. A
$k$-truss of a graph is a cohesive subgraph in which each edge is contained in
at least $k-2$ triangles within the subgraph. While the truss has been shown to
be superior for modeling close relationships in social networks, and efficient
algorithms for finding trusses have been extensively studied, very little
attention has been paid to truss maintenance. However, most social networks are
evolving networks. It may be infeasible to recompute trusses from scratch from
time to time in order to find the up-to-date $k$-trusses in the evolving
networks. In this paper, we discuss how to maintain trusses in a graph with
dynamic updates. We first discuss a set of properties on maintaining trusses,
then propose algorithms on maintaining trusses on edge deletions and
insertions, finally, we discuss truss index maintenance. We test the proposed
techniques on real datasets. The experiment results show the promise of our
work.
|
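The k-truss definition above translates directly into a from-scratch peeling computation; the maintenance algorithms the paper proposes exist precisely to avoid rerunning this baseline on every update. A minimal sketch on a toy graph:

```python
from itertools import combinations

def k_truss(edges, k):
    """Return the k-truss: repeatedly delete edges supported by fewer
    than k-2 triangles until every remaining edge has enough support."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            for v in list(adj[u]):
                # support of edge (u, v) = number of common neighbors
                if u < v and len(adj[u] & adj[v]) < k - 2:
                    adj[u].discard(v)          # support too low: peel the edge
                    adj[v].discard(u)
                    changed = True
    return {(u, v) for u in adj for v in adj[u] if u < v}

# A 4-clique plus a pendant edge: the pendant edge lies in no triangle,
# so the 4-truss is exactly the clique.
edges = list(combinations([0, 1, 2, 3], 2)) + [(3, 4)]
truss = k_truss(edges, k=4)
```

Peeling one edge can drop the support of neighboring edges below the threshold, which is why the loop must run to a fixpoint; the same cascade is what makes incremental maintenance under edge insertions and deletions nontrivial.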
1402.2826 | Realtime Multilevel Crowd Tracking using Reciprocal Velocity Obstacles | cs.CV | We present a novel, realtime algorithm to compute the trajectory of each
pedestrian in moderately dense crowd scenes. Our formulation is based on an
adaptive particle filtering scheme that uses a multi-agent motion model based
on velocity-obstacles, and takes into account local interactions as well as
physical and personal constraints of each pedestrian. Our method dynamically
changes the number of particles allocated to each pedestrian based on different
confidence metrics. Additionally, we use a new high-definition crowd video
dataset, which is used to evaluate the performance of different pedestrian
tracking algorithms. This dataset consists of videos of indoor and outdoor
scenes, recorded at different locations with 30-80 pedestrians. We highlight
the performance benefits of our algorithm over prior techniques using this
dataset. In practice, our algorithm can compute trajectories of tens of
pedestrians on a multi-core desktop CPU at interactive rates (27-30 frames per
second). To the best of our knowledge, our approach is 4-5 times faster than
prior methods, which provide similar accuracy.
|
1402.2845 | Efficient Localization of Discontinuities in Complex Computational
Simulations | cs.CE | Surrogate models for computational simulations are input-output
approximations that allow computationally intensive analyses, such as
uncertainty propagation and inference, to be performed efficiently. When a
simulation output does not depend smoothly on its inputs, the error and
convergence rate of many approximation methods deteriorate substantially. This
paper details a method for efficiently localizing discontinuities in the input
parameter domain, so that the model output can be approximated as a piecewise
smooth function. The approach comprises an initialization phase, which uses
polynomial annihilation to assign function values to different regions and thus
seed an automated labeling procedure, followed by a refinement phase that
adaptively updates a kernel support vector machine representation of the
separating surface via active learning. The overall approach avoids structured
grids and exploits any available simplicity in the geometry of the separating
surface, thus reducing the number of model evaluations required to localize the
discontinuity. The method is illustrated on examples of up to eleven
dimensions, including algebraic models and ODE/PDE systems, and demonstrates
improved scaling and efficiency over other discontinuity localization
approaches.
|
1402.2863 | On the Randomized Kaczmarz Algorithm | cs.SY math.OC | The Randomized Kaczmarz Algorithm is a randomized method which aims at
solving a consistent overdetermined system of linear equations. This note
discusses how to find an optimized randomization scheme for this algorithm,
which is related to the question raised by \cite{c2}. Illustrative experiments
are conducted to support the findings.
|
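For reference, a minimal randomized Kaczmarz iteration with rows sampled in proportion to their squared norms, one common randomization scheme; the optimized scheme the note derives may differ.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=500, seed=0):
    """Solve a consistent overdetermined system Ax = b by projecting the
    iterate onto one randomly chosen equation's hyperplane at a time."""
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()    # sample rows prop. to ||a_i||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(len(b), p=probs)
        # orthogonal projection of x onto {z : a_i . z = b_i}
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = A @ np.array([2.0, 3.0])               # consistent by construction
x = randomized_kaczmarz(A, b)
```

Each step touches a single row, so the per-iteration cost is independent of the number of equations; the question of which sampling distribution over rows gives the fastest expected convergence is exactly the kind of optimization the note discusses.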
1402.2864 | Sparse Estimation From Noisy Observations of an Overdetermined Linear
System | cs.SY stat.ML | This note studies a method for the efficient estimation of a finite number of
unknown parameters from linear equations, which are perturbed by Gaussian
noise.
In case the unknown parameters have only a few nonzero entries, the proposed
estimator performs more efficiently than a traditional approach.
The method consists of three steps:
(1) a classical Least Squares Estimate (LSE),
(2) the support is recovered through a Linear Programming (LP) optimization
problem which can be computed using a soft-thresholding step,
(3) a de-biasing step using a LSE on the estimated support set.
The main contribution of this note is a formal derivation of an associated
oracle property of the final estimate.
That is, when the number of samples is large enough, the estimate is shown to
equal the LSE based on the support of the {\em true} parameters.
|
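The three steps listed above can be sketched directly in NumPy on synthetic data. The noise scale and threshold level are illustrative assumptions, and the LP of step (2) is realized here through the soft-thresholding form the note mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
beta = np.zeros(p)
beta[2], beta[7] = 1.0, -1.5                       # sparse ground truth
A = rng.normal(size=(n, p))
y = A @ beta + 0.05 * rng.normal(size=n)           # Gaussian noise

# Step 1: classical least-squares estimate on the full model.
beta_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# Step 2: recover the support via soft-thresholding of the LSE.
lam = 0.2
beta_st = np.sign(beta_ls) * np.maximum(np.abs(beta_ls) - lam, 0.0)
support = np.flatnonzero(beta_st)

# Step 3: de-bias with a least-squares refit on the estimated support.
beta_hat = np.zeros(p)
beta_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
```

When the support is recovered exactly, step (3) coincides with the least-squares estimate restricted to the true nonzero coordinates, which is the oracle property the note formalizes.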
1402.2871 | Planning for Decentralized Control of Multiple Robots Under Uncertainty | cs.RO cs.AI cs.MA | We describe a probabilistic framework for synthesizing control policies for
general multi-robot systems, given environment and sensor models and a cost
function. Decentralized, partially observable Markov decision processes
(Dec-POMDPs) are a general model of decision processes where a team of agents
must cooperate to optimize some objective (specified by a shared reward or cost
function) in the presence of uncertainty, but where communication limitations
mean that the agents cannot share their state, so execution must proceed in a
decentralized fashion. While Dec-POMDPs are typically intractable to solve for
real-world problems, recent research on the use of macro-actions in Dec-POMDPs
has significantly increased the size of problem that can be practically solved
as a Dec-POMDP. We describe this general model, and show how, in contrast to
most existing methods that are specialized to a particular problem class, it
can synthesize control policies that use whatever opportunities for
coordination are present in the problem, while trading off uncertainty in
outcomes, sensor information, and information about other agents. We use three
variations on a warehouse task to show that a single planner of this type can
generate cooperative behavior using task allocation, direct communication, and
signaling, as appropriate.
|
1402.2892 | Efficient Analysis of Pattern and Association Rule Mining Approaches | cs.DB | The process of data mining produces various patterns from a given data
source. The most recognized data mining tasks are the process of discovering
frequent itemsets, frequent sequential patterns, frequent sequential rules and
frequent association rules. Numerous efficient algorithms have been proposed to
do the above processes. Frequent pattern mining has been a focus of data
mining research, with a large number of references in the literature, and for
that reason significant progress has been made, ranging from efficient
algorithms for frequent itemset mining in transaction databases to complex
algorithms, such as sequential pattern mining, structured pattern mining, and
correlation mining. Association rule mining (ARM) is one of the most widely
used data mining techniques, designed to group objects together from large
databases with the aim of extracting interesting correlations and relations
among huge amounts of data. In this article, we provide a brief review and
analysis of the current
status of frequent pattern mining and discuss some promising research
directions. Additionally, this paper includes a comparative study between the
performance of the described approaches.
|
1402.2925 | Modeling Switched Behavior with Hybrid Bond Graph: Application to a Tank
system | cs.SY | Different approaches have been used in the development of system models. In
addition, modeling and simulation approaches are essential for design,
analysis, control, and diagnosis of complex systems. This work presents a
Simulink model for systems with mixed continuous and discrete behaviors. The
simulated model was developed using the bond graph methodology, and we model
hybrid systems using hybrid bond graphs (HBGs), which incorporate local
switching functions that enable the reconfiguration of energy flow paths. This
approach has been implemented as a software tool called the MOdeling and
Transformation of HBGs for Simulation (MOTHS) tool suite, which incorporates a
model translator that creates Simulink models. A simulation model of a
three-tank system that includes a switching component was developed using the
bond graph methodology, and the MOTHS software was used to build a Simulink
model of the dynamic behavior.
|
1402.2936 | R-dimensional ESPRIT-type algorithms for strictly second-order
non-circular sources and their performance analysis | cs.IT math.IT | High-resolution parameter estimation algorithms designed to exploit the prior
knowledge about incident signals from strictly second-order (SO) non-circular
(NC) sources allow for a lower estimation error and can resolve twice as many
sources. In this paper, we derive the R-D NC Standard ESPRIT and the R-D NC
Unitary ESPRIT algorithms that provide a significantly better performance
compared to their original versions for arbitrary source signals. They are
applicable to shift-invariant R-D antenna arrays and do not require a
centrosymmetric array structure. Moreover, we present a first-order asymptotic
performance analysis of the proposed algorithms, which is based on the error in
the signal subspace estimate arising from the noise perturbation. The derived
expressions for the resulting parameter estimation error are explicit in the
noise realizations and asymptotic in the effective signal-to-noise ratio (SNR),
i.e., the results become exact for either high SNRs or a large sample size. We
also provide mean squared error (MSE) expressions, where only the assumptions
of a zero mean and finite SO moments of the noise are required, but no
assumptions about its statistics are necessary. As a main result, we
analytically prove that the asymptotic performance of both R-D NC ESPRIT-type
algorithms is identical in the high effective SNR regime. Finally, a case study
shows that no improvement from strictly non-circular sources can be achieved in
the special case of a single source.
|
1402.2941 | Multispectral Palmprint Encoding and Recognition | cs.CV | Palmprints are emerging as a new entity in multi-modal biometrics for human
identification and verification. Multispectral palmprint images captured in the
visible and infrared spectrum not only contain the wrinkles and ridge structure
of a palm, but also the underlying pattern of veins; making them a highly
discriminating biometric identifier. In this paper, we propose a feature
encoding scheme for robust and highly accurate representation and matching of
multispectral palmprints. To facilitate compact storage of the feature, we
design a binary hash table structure that allows for efficient matching in
large databases. Comprehensive experiments for both identification and
verification scenarios are performed on two public datasets -- one captured
with a contact-based sensor (PolyU dataset), and the other with a contact-free
sensor (CASIA dataset). Recognition results in various experimental setups show
that the proposed method consistently outperforms existing state-of-the-art
methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA)
are the lowest reported in the literature on both datasets and clearly indicate the
viability of palmprint as a reliable and promising biometric. All source codes
are publicly available.
|
1402.2959 | Local Optima Networks: A New Model of Combinatorial Fitness Landscapes | cs.NE cs.AI | This chapter overviews a recently introduced network-based model of
combinatorial landscapes: Local Optima Networks (LON). The model compresses the
information given by the whole search space into a smaller mathematical object
that is a graph having as vertices the local optima and as edges the possible
weighted transitions between them. Two definitions of edges have been proposed:
basin-transition and escape-edges, which capture relevant topological features
of the underlying search spaces. This network model brings a new set of metrics
to characterize the structure of combinatorial landscapes, those associated
with the science of complex networks. These metrics are described, and results
are presented of local optima network extraction and analysis for two selected
combinatorial landscapes: NK landscapes and the quadratic assignment problem.
Network features are found to correlate with and even predict the performance
of heuristic search algorithms operating on these problems.
|
1402.2996 | Robot Training Under Conditions of Incomplete Information | cs.RO | This paper describes the development of the author's work on adaptive
algorithms for teaching robotic systems with the help of an operator. The
operator is assumed to be an experienced decision-maker and a rational carrier
of a goal that the robotic system needs to achieve. A characteristic of this
work is that the behavior of the robotic system is not specified a priori (as a
standard) but is formed adaptively, based on information about the situation
and on decisions made by the decision-maker. In this scheme the robotic system
and the decision-maker can cooperate in the normal operation mode of the
robotic system or in a time-sharing mode, with the possibility of actively
planning experiments on the robotic system. If the adaptive scheme is chosen,
there are teaching stages and operating stages of the robotic system, during
which the decision-maker can act slowly, having the possibility to weigh the
decisions made. This approach allows the robotic system to react flexibly by
switching between preset models and to respond to environmental instability.
The integrity of the data about the environment and about the target
preferences of the operator plays a very important role in the robotic
system's work. The effective operation of the robotic system depends on
effective tuning of its preference model, based on the decisions of the
decision-maker, and on effective control. The influence of tuning and control
factors on the effectiveness index of the robotic system is the subject of
this work. The uncertainty may be caused by limitations on the data flow
received by the operator at the model-tuning stage.
|
1402.3010 | 1-D and 2-D Parallel Algorithms for All-Pairs Similarity Problem | cs.IR cs.DC | The all-pairs similarity problem asks for all pairs of vectors in a set whose
similarities surpass a given similarity threshold; it is a computational
kernel for several tasks in data mining and information retrieval. We
investigate the parallelization of a recent fast sequential
algorithm. We propose effective 1-D and 2-D data distribution strategies that
preserve the essential optimizations in the fast algorithm. 1-D parallel
algorithms distribute either dimensions or vectors, whereas the 2-D parallel
algorithm distributes data both ways. Additional contributions to the 1-D
vertical distribution include a local pruning strategy to reduce the number of
candidates, a recursive pruning algorithm, and block processing to reduce
imbalance. The parallel algorithms were implemented in OCaml, which affords
much convenience. Our experiments indicate that performance depends on the
dataset; therefore, a variety of parallelizations is useful.
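As an illustration of the underlying kernel, a naive baseline (not the fast sequential algorithm or its parallelizations discussed above) can be sketched in Python:

```python
import math

def all_pairs_above(vectors, threshold):
    """Return index pairs (i, j) whose cosine similarity meets the threshold.

    Naive O(n^2 * d) baseline; the fast algorithm prunes candidate pairs."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    n = len(vectors)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if cosine(vectors[i], vectors[j]) >= threshold]
```

The quadratic pair enumeration is exactly what the pruning optimizations in the fast algorithm avoid; the parallel strategies then distribute either the dimensions or the vectors of this computation.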
|
1402.3022 | To react or not to react? Intrinsic stochasticity of human control in
virtual stick balancing | physics.bio-ph cs.SY nlin.AO q-bio.NC | Understanding how humans control unstable systems is central to many research
problems, with applications ranging from quiet standing to aircraft landing.
Increasing evidence favors the event-driven control hypothesis:
human operators only start actively controlling the system when the discrepancy
between the current and desired system states becomes large enough. The
event-driven models based on the concept of threshold can explain many features
of the experimentally observed dynamics. However, much still remains unclear
about the dynamics of human-controlled systems, which likely indicates that
humans employ more intricate control mechanisms. The present paper argues that
control activation in humans may not be threshold-driven, but instead
intrinsically stochastic and noise-driven. Specifically, we suggest that control
activation stems from stochastic interplay between the operator's need to keep
the controlled system near the goal state on one hand and the tendency to
postpone interrupting the system dynamics on the other hand. We propose a model
capturing this interplay and show that it matches the experimental data on
human balancing of a virtual overdamped stick. Our results indicate that the
noise-driven activation mechanism plays a crucial role at least in the
considered task and, hypothetically, in a broad range of human-controlled
processes.
|
1402.3032 | Regularization for Multiple Kernel Learning via Sum-Product Networks | stat.ML cs.LG | In this paper, we are interested in constructing general graph-based
regularizers for multiple kernel learning (MKL) given a structure which is used
to describe the way of combining basis kernels. Such structures are represented
by sum-product networks (SPNs) in our method. Accordingly, we propose a new
convex regularization method for MKL based on a path-dependent kernel weighting
function which encodes the entire SPN structure. Under certain
conditions and from the view of probability, this function can be considered to
follow multinomial distributions over the weights associated with product nodes
in SPNs. We also analyze the convexity of our regularizer and the complexity of
our induced classifiers, and further propose an efficient wrapper algorithm to
optimize our formulation. In our experiments, we apply our method to ......
|
1402.3040 | Event Structure of Transitive Verb: A MARVS perspective | cs.CL | Module-Attribute Representation of Verbal Semantics (MARVS) is a theory of
the representation of verbal semantics that is based on Mandarin Chinese data
(Huang et al. 2000). In the MARVS theory, there are two different types of
modules: Event Structure Modules and Role Modules. There are also two sets of
attributes: Event-Internal Attributes and Role-Internal Attributes, which are
linked to the Event Structure Module and the Role Module, respectively. In this
study, we focus on four transitive verbs, namely chi1 (eat), wan2 (play),
huan4 (change), and shao1 (burn), and explore their event structures using the
MARVS theory.
|
1402.3044 | Finding a Collective Set of Items: From Proportional Multirepresentation
to Group Recommendation | cs.GT cs.AI cs.MA | We consider the following problem: There is a set of items (e.g., movies) and
a group of agents (e.g., passengers on a plane); each agent has some intrinsic
utility for each of the items. Our goal is to pick a set of $K$ items that
maximize the total derived utility of all the agents (i.e., in our example we
are to pick $K$ movies that we put on the plane's entertainment system).
However, the actual utility that an agent derives from a given item is only a
fraction of its intrinsic one, and this fraction depends on how the agent ranks
the item among the chosen, available ones. We give a formal specification of
the model and provide concrete examples and settings where it is applicable.
We show that the problem is hard in general, but we show a number of
tractability results for its natural special cases.
|
1402.3067 | A Bayesian Characterization of Relative Entropy | cs.IT math-ph math.IT math.MP math.PR quant-ph | We give a new characterization of relative entropy, also known as the
Kullback-Leibler divergence. We use a number of interesting categories related
to probability theory. In particular, we consider a category FinStat where an
object is a finite set equipped with a probability distribution, while a
morphism is a measure-preserving function $f: X \to Y$ together with a
stochastic right inverse $s: Y \to X$. The function $f$ can be thought of as a
measurement process, while $s$ provides a hypothesis about the state of the
measured system given the result of a measurement. Given this data we can
define the entropy of the probability distribution on $X$ relative to the
"prior" given by pushing the probability distribution on $Y$ forwards along
$s$. We say that $s$ is "optimal" if these distributions agree. We show that
any convex linear, lower semicontinuous functor from FinStat to the additive
monoid $[0,\infty]$ which vanishes when $s$ is optimal must be a scalar
multiple of this relative entropy. Our proof is independent of all earlier
characterizations, but inspired by the work of Petz.
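A small numerical instance of this construction (toy data chosen for illustration; the categorical setup itself follows the abstract) makes the optimality condition concrete:

```python
import math

def pushforward(p, f):
    """Push a distribution p on X forward along a function f: X -> Y."""
    q = {}
    for x, px in p.items():
        q[f[x]] = q.get(f[x], 0.0) + px
    return q

def relative_entropy(p, r):
    """KL divergence of p relative to r (assumes r > 0 wherever p > 0)."""
    return sum(px * math.log(px / r[x]) for x, px in p.items() if px > 0)

def finstat_entropy(p, f, s):
    """Entropy of p relative to the prior obtained by pushing q = f_*p
    forward along the stochastic right inverse s (dict y -> dict x -> prob)."""
    q = pushforward(p, f)
    prior = {x: sum(qy * s[y].get(x, 0.0) for y, qy in q.items()) for x in p}
    return relative_entropy(p, prior)
```

When $s$ is optimal the pushed-forward prior agrees with $p$ and the quantity vanishes; any other $s$ yields a strictly positive value.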
|
1402.3070 | Squeezing bottlenecks: exploring the limits of autoencoder semantic
representation capabilities | cs.IR cs.LG stat.ML | We present a comprehensive study on the use of autoencoders for modelling
text data, in which (differently from previous studies) we focus our attention
on the following issues: i) we explore the suitability of two different models,
bDA and rsDA, for constructing deep autoencoders for text data at the sentence
level; ii) we propose and evaluate two novel metrics for better assessing the
text-reconstruction capabilities of autoencoders; and iii) we propose an
automatic method to find the critical bottleneck dimensionality for text
language representations (below which structural information is lost).
|
1402.3072 | Community Detection via Random and Adaptive Sampling | cs.SI physics.soc-ph | In this paper, we consider networks consisting of a finite number of
non-overlapping communities. To extract these communities, the interaction
between pairs of nodes may be sampled from a large available data set, which
allows a given node pair to be sampled several times. When a node pair is
sampled, the observed outcome is a binary random variable, equal to 1 if nodes
interact and to 0 otherwise. The outcome is more likely to be positive if nodes
belong to the same communities. For a given budget of node pair samples or
observations, we wish to jointly design a sampling strategy (the sequence of
sampled node pairs) and a clustering algorithm that recover the hidden
communities with the highest possible accuracy. We consider both non-adaptive
and adaptive sampling strategies, and for both classes of strategies, we derive
fundamental performance limits satisfied by any sampling and clustering
algorithm. In particular, we provide necessary conditions for the existence of
algorithms recovering the communities accurately as the network size grows
large. We also devise simple algorithms that accurately reconstruct the
communities when this is at all possible, hence proving that the proposed
necessary conditions for accurate community detection are also sufficient. The
classical problem of community detection in the stochastic block model can be
seen as a particular instance of the problems considered here, but our framework
covers more general scenarios where the sequence of sampled node pairs can be
designed in an adaptive manner. The paper provides new results for the
stochastic block model, and extends the analysis to the case of adaptive
sampling.
|
1402.3074 | Scheduling Advantages of Network Coded Storage in Point-to-Multipoint
Networks | cs.IT cs.NI math.IT | We consider scheduling strategies for point-to-multipoint (PMP) storage area
networks (SANs) that use network coded storage (NCS). In particular, we present
a simple SAN system model, two server scheduling algorithms for PMP networks,
and analytical expressions for internal and external blocking probability. We
point to select scheduling advantages in NCS systems under normal operating
conditions, where content requests can be temporarily denied owing to finite
system capacity from drive I/O access or storage redundancy limitations. NCS
can lead to improvements in throughput and blocking probability due to
increased immediate scheduling options. These gains complement other
well-documented NCS advantages, such as regeneration, and can serve as a guide
for future storage system design.
|
1402.3080 | Software Requirement Specification Using Reverse Speech Technology | cs.CL cs.SD | Speech analysis has been taken to a new level with the discovery of Reverse
Speech (RS). RS is the discovery of hidden messages, referred to as reversals,
in normal speech. Work is in progress on exploiting the relevance of RS in
different real-world applications such as investigation and the medical field.
In this paper we present an innovative method for preparing a reliable Software
Requirement Specification (SRS) document with the help of reverse speech. As
the SRS acts as the backbone for the successful completion of any project, a
reliable method is needed to overcome inconsistencies. Using RS, such a
reliable method for SRS documentation was developed.
|
1402.3096 | Relations on FP-Soft Sets Applied to Decision Making Problems | math.LO cs.AI | In this work, we first define relations on the fuzzy parametrized soft sets
and study their properties. We also give a decision making method based on
these relations. In approximate reasoning, relations on fuzzy parametrized
soft sets have been shown to be of primordial importance. Finally, the method
is successfully applied to problems that contain uncertainties.
|
1402.3119 | Cellular Interference Alignment | cs.IT math.IT | Interference alignment promises that, in Gaussian interference channels, each
link can support half of a degree of freedom (DoF) per pair of transmit-receive
antennas. However, in general, this result requires precoding the data-bearing
signals over a signal space of asymptotically large diversity, e.g., over an
infinite number of dimensions for time-frequency varying fading channels, or
over an infinite number of rationally independent signal levels, in the case of
time-frequency invariant channels. In this work we consider a wireless cellular
system scenario where the promised optimal DoFs are achieved with linear
precoding in one-shot (i.e., over a single time-frequency slot). We focus on
the uplink of a symmetric cellular system, where each cell is split into three
sectors with orthogonal intra-sector multiple access. In our model,
interference is "local", i.e., it is due to transmitters in neighboring cells
only. We consider a message-passing backhaul network architecture, in which
nearby sectors can exchange already decoded messages and propose an alignment
solution that can achieve the optimal DoFs. To avoid signaling schemes relying
on the strength of interference, we further introduce the notion of
\emph{topologically robust} schemes, which are able to guarantee a minimum rate
(or DoFs) irrespective of the strength of the interfering links. Towards this
end, we design an alignment scheme which is topologically robust and still
achieves the same optimum DoFs.
|
1402.3125 | Information Theoretical Cryptogenography | cs.CR cs.IT math.IT | We consider problems where $n$ people are communicating and a random subset
of them is trying to leak information, without making it clear who is leaking
the information. We introduce a measure of suspicion, and show that the amount
of leaked information will always be bounded by the expected increase in
suspicion, and that this bound is tight. We ask the question: Suppose a large
number of people have some information they want to leak, but they want to
ensure that after the communication, an observer will assign probability at
most $c$ to the events that each of them is trying to leak the information. How
much information can they reliably leak, per person who is leaking? We show
that the answer is $- \frac{\log(1-c)}{c} -\log(e)$ bits.
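The stated bound is easy to evaluate numerically (a quick sketch; base-2 logarithms are assumed, consistent with the answer being in bits):

```python
import math

def leak_per_leaker_bits(c):
    """Maximum reliably leaked information per leaker, in bits, when an
    observer may assign probability at most c to each person leaking:
    -log2(1 - c)/c - log2(e)."""
    return -math.log2(1.0 - c) / c - math.log2(math.e)
```

As c approaches 0 the bound vanishes (near-perfect deniability permits almost no leakage), while it grows without limit as c approaches 1.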
|
1402.3138 | Social Networks and the Choices People Make | cs.SI | Social marketing is becoming increasingly important in contemporary business.
Central to social marketing is quantifying how consumers choose between
alternatives and how they influence each other. This work considers a new but
simple multinomial choice model for multiple agents connected in a
recommendation network based on the explicit modeling of choice adoption
behavior. Efficiently computable closed-form solutions, absent from analyses of
threshold/cascade models, are obtained together with insights on how the
network affects aggregate decision making. A stylized "brand ambassador"
selection problem is posed to model targeting in social marketing. Therein, it
is shown that a greedy selection strategy leads to solutions achieving at least
$1-1/e$ of the optimal value. In an extended example of imposing exogenous
controls, a pricing problem is considered wherein it is shown that the single
player profit optimization problem is concave, implying the existence of pure
strategy equilibria for the associated pricing game.
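The greedy ambassador-selection strategy admits a compact sketch (shown with a hypothetical coverage-style utility standing in for the paper's derived-utility objective; the $1-1/e$ guarantee holds for monotone submodular objectives):

```python
def greedy_select(candidates, k, value):
    """Pick k candidates by repeatedly adding the one with the largest
    marginal gain in value(chosen set)."""
    chosen = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: value(chosen | {c}) - value(chosen))
        chosen.add(best)
    return chosen
```

For monotone submodular `value`, this simple loop achieves at least $1-1/e$ of the optimum, which is the guarantee cited in the abstract.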
|
1402.3144 | A Robust Ensemble Approach to Learn From Positive and Unlabeled Data
Using SVM Base Models | stat.ML cs.LG | We present a novel approach to learn binary classifiers when only positive
and unlabeled instances are available (PU learning). This problem is routinely
cast as a supervised task with label noise in the negative set. We use an
ensemble of SVM models trained on bootstrap resamples of the training data for
increased robustness against label noise. The approach can be considered in a
bagging framework which provides an intuitive explanation for its mechanics in
a semi-supervised setting. We compared our method to state-of-the-art
approaches in simulations using multiple public benchmark data sets. The
included benchmark comprises three settings with increasing label noise: (i)
fully supervised, (ii) PU learning and (iii) PU learning with false positives.
Our approach shows a marginal improvement over existing methods in the second
setting and a significant improvement in the third.
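A minimal sketch of the bagging idea (with a toy nearest-centroid rule standing in for the SVM base models; each member treats a bootstrap resample of the unlabeled set as surrogate negatives):

```python
import random

def pu_bagging_score(pos, unl, n_models=25, rng=None):
    """Return a scoring function that averages the positive votes of an
    ensemble trained on bootstrap resamples of the unlabeled data."""
    rng = rng or random.Random(0)

    def centroid(points):
        dim = len(points[0])
        return [sum(p[i] for p in points) / len(points) for i in range(dim)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    c_pos = centroid(pos)
    # Each ensemble member sees a different bootstrap "negative" set.
    neg_centroids = [centroid([rng.choice(unl) for _ in unl])
                     for _ in range(n_models)]

    def score(x):
        votes = sum(dist2(x, c_pos) < dist2(x, c) for c in neg_centroids)
        return votes / n_models

    return score
```

Averaging over resamples is what provides robustness to the label noise introduced by treating unlabeled (and possibly positive) examples as negatives.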
|
1402.3173 | Homogenization of coupled heat and moisture transport in masonry
structures including interfaces | cs.CE | Homogenization of a simultaneous heat and moisture flow in a masonry wall is
presented in this paper. The principal objective is to examine the impact of the
assumed imperfect hydraulic contact on the resulting homogenized properties.
Such a contact is characterized by a certain mismatching resistance allowing us
to represent a discontinuous evolution of temperature and moisture fields
across the interface, which is in general attributed to discontinuous capillary
pressures caused by different pore size distributions of the adjacent porous
materials. To achieve this, two particular laboratory experiments were
performed to provide distributions of temperature and relative humidity in a
sample of the masonry wall, which in turn served to extract the corresponding
jumps and subsequently to obtain the required interface transition parameters
by matching numerical predictions and experimental results. The results suggest
a low importance of accounting for imperfect hydraulic contact for the
derivation of macroscopic homogenized properties. On the other hand, they
strongly support the need for a fully coupled multi-scale analysis due to
significant dependence of the homogenized properties on actual moisture
gradients and corresponding values of both macroscopic temperature and relative
humidity.
|
1402.3174 | Modeling of Degradation Processes in Historical Mortars | cs.CE | The aim of the presented paper is the modeling of degradation processes in
historical mortars exposed to moisture during freezing. Internal damage caused by
ice crystallization in pores is one of the most important factors limiting the
service life of historical structures. Coupling the transport processes with
the mechanical part will allow us to address the impact of moisture on the
durability, strength and stiffness of mortars. This should be accomplished with
the help of a complex thermo-hygro-mechanical model representing one of the
prime objectives of this work. The proposed formulation is based on the
extension of the classical poroelasticity models with the damage mechanics. An
example of two-dimensional moisture transport in the environment with
temperature below freezing point is presented to support the theoretical
derivations.
|
1402.3175 | Information-Geometric Equivalence of Transportation Polytopes | cs.IT math.CO math.IT | This paper deals with transportation polytopes in the probability simplex
(that is, sets of categorical bivariate probability distributions with
prescribed marginals). Information projections between such polytopes are
studied, and a sufficient condition is described under which these mappings are
homeomorphisms.
|
1402.3193 | Characterizations and Kullback-Leibler Divergence of Gompertz
Distributions | cs.IT math.IT | In this note, we characterize the Gompertz distribution in terms of extreme
value distributions and point out that it implicitly models the interplay of
two antagonistic growth processes. In addition, we derive a closed-form
expression for the Kullback-Leibler divergence between two Gompertz
Distributions. Although the latter is rather easy to obtain, it seems not to
have been widely reported before.
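The divergence is straightforward to check numerically (a sketch using one common Gompertz parametrization, $f(x)=b\eta e^{bx}\exp(-\eta(e^{bx}-1))$, and crude Riemann integration; the closed form from the note is not reproduced here):

```python
import math

def gompertz_pdf(x, b, eta):
    """Gompertz density f(x) = b*eta*exp(b*x) * exp(-eta*(exp(b*x) - 1))."""
    return b * eta * math.exp(b * x - eta * (math.exp(b * x) - 1.0))

def kl_gompertz_numeric(b1, eta1, b2, eta2, upper=20.0, n=20000):
    """Riemann-sum estimate of KL(p || q) for two Gompertz densities."""
    h = upper / n
    total = 0.0
    for i in range(1, n):
        x = i * h
        p = gompertz_pdf(x, b1, eta1)
        q = gompertz_pdf(x, b2, eta2)
        if p > 0.0 and q > 0.0:
            total += p * math.log(p / q) * h
    return total
```

Such a numerical estimate is a convenient sanity check against any closed-form expression.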
|
1402.3210 | On the Convergence of Approximate Message Passing with Arbitrary
Matrices | cs.IT math.IT | Approximate message passing (AMP) methods and their variants have attracted
considerable recent attention for the problem of estimating a random vector
$\mathbf{x}$ observed through a linear transform $\mathbf{A}$. In the case of
large i.i.d. zero-mean Gaussian $\mathbf{A}$, the methods exhibit fast
convergence with precise analytic characterizations on the algorithm behavior.
However, the convergence of AMP under general transforms $\mathbf{A}$ is not
fully understood. In this paper, we provide sufficient conditions for the
convergence of a damped version of the generalized AMP (GAMP) algorithm in the
case of quadratic cost functions (i.e., Gaussian likelihood and prior). It is
shown that, with sufficient damping, the algorithm is guaranteed to converge,
although the amount of damping grows with the peak-to-average ratio of the
squared singular values of the transform $\mathbf{A}$. This result explains the
good performance of AMP on i.i.d. Gaussian transforms $\mathbf{A}$, but also its
difficulties with ill-conditioned or non-zero-mean transforms $\mathbf{A}$. A
related sufficient condition is then derived for the local stability of the
damped GAMP method under general cost functions, assuming certain strict
convexity conditions.
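The quantity governing the required damping is simple to compute (a sketch; the matrices below are illustrative, not from the paper):

```python
import numpy as np

def peak_to_average_sq_sv(A):
    """Peak-to-average ratio of the squared singular values of A."""
    s2 = np.linalg.svd(A, compute_uv=False) ** 2
    return s2.max() / s2.mean()

rng = np.random.default_rng(0)
# Well-behaved case: i.i.d. zero-mean Gaussian measurement matrix.
A_iid = rng.standard_normal((100, 200)) / np.sqrt(200)
# Ill-conditioned case: rows rescaled over three orders of magnitude.
A_ill = A_iid * np.logspace(0.0, 3.0, 100)[:, None]
```

The i.i.d. Gaussian matrix yields a modest ratio, while the row-scaled matrix yields a much larger one, matching the observation that AMP needs heavier damping on ill-conditioned transforms.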
|
1402.3213 | Proceedings of the 1st Workshop on Robotics Challenges and Vision
(RCV2013) | cs.RO | Proceedings of the 1st Workshop on Robotics Challenges and Vision (RCV2013)
|
1402.3215 | Analysis of Compressed Sensing with Spatially-Coupled Orthogonal
Matrices | cs.IT math.IT | Recent development in compressed sensing (CS) has revealed that the use of a
special design of measurement matrix, namely the spatially-coupled matrix, can
achieve the information-theoretic limit of CS. In this paper, we consider the
measurement matrix which consists of the spatially-coupled \emph{orthogonal}
matrices. One example of such matrices is randomly selected discrete Fourier
transform (DFT) matrices. This choice enjoys lower memory complexity and a
faster multiplication procedure. Our contributions are the
replica calculations to find the mean-square-error (MSE) of the Bayes-optimal
reconstruction for such setup. We illustrate that the reconstruction thresholds
under the spatially-coupled orthogonal and Gaussian ensembles are quite
different, especially in the noisy cases. In particular, the spatially coupled
orthogonal matrices achieve a faster convergence rate, a lower measurement
rate, and a reduced MSE.
|
1402.3225 | Market-Based Power Allocation for a Differentially Priced FDMA System | cs.IT cs.GT cs.NI math.IT | In this paper, we study the problem of differential pricing and QoS
assignment by a broadband data provider. In our model, the broadband data
provider decides on the power allocated to an end-user not only based on
parameters of the transmission medium, but also based on the price the user is
willing to pay. In addition, end-users bid the price that they are willing to
pay to the BTS based on their channel condition, the throughput they require,
and their belief about other users' parameters. We will characterize the
optimum power allocation by the BTS which turns out to be a modification of the
solution to the well-known water-filling problem. We also characterize the
optimum bidding strategy of end-users using the belief of each user about the
cell condition.
|
1402.3247 | Learning-Based Optimization of Cache Content in a Small Cell Base
Station | cs.IT math.IT | Optimal cache content placement in a wireless small cell base station (sBS)
with limited backhaul capacity is studied. The sBS has a large cache memory and
provides content-level selective offloading by delivering high data rate
contents to users in its coverage area. The goal of the sBS content controller
(CC) is to store the most popular contents in the sBS cache memory such that
the maximum amount of data can be fetched directly from the sBS, not relying on
the limited backhaul resources during peak traffic periods. If the popularity
profile is known in advance, the problem reduces to a knapsack problem.
However, it is assumed in this work that the popularity profile of the files
is not known by the CC, and it can only observe the instantaneous demand for
the cached content. Hence, the cache content placement is optimised based on
the demand history. By refreshing the cache content at regular time intervals,
the CC tries to learn the popularity profile, while exploiting the limited
cache capacity in the best way possible. Three algorithms are studied for this
cache content placement problem, leading to different exploitation-exploration
trade-offs. We provide extensive numerical simulations in order to study the
time-evolution of these algorithms, and the impact of the system parameters,
such as the number of files, the number of users, the cache size, and the
skewness of the popularity profile, on the performance. It is shown that the
proposed algorithms quickly learn the popularity profile for a wide range of
system parameters.
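One simple exploration-exploitation scheme of this flavor (a hypothetical epsilon-greedy refresh rule, not necessarily one of the three algorithms studied in the paper) can be sketched as:

```python
import random

def refresh_cache(demand_counts, cache_size, explore_frac=0.1, rng=None):
    """Refresh the cache: keep the files with the highest observed demand,
    but reserve a fraction of slots for randomly explored files."""
    rng = rng or random.Random(0)
    n_explore = int(explore_frac * cache_size)
    ranked = sorted(demand_counts, key=demand_counts.get, reverse=True)
    cache = ranked[:cache_size - n_explore]
    rest = [f for f in demand_counts if f not in cache]
    cache += rng.sample(rest, min(n_explore, len(rest)))
    return set(cache)
```

The exploration slots let the controller keep observing demand for currently uncached files, which is what allows the popularity profile to be learned over successive refresh intervals.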
|
1402.3261 | Hand-Eye and Robot-World Calibration by Global Polynomial Optimization | cs.CV math.OC | The need to relate measurements made by a camera to a different known
coordinate system arises in many engineering applications. Historically, it
appeared for the first time in connection with cameras mounted on robotic
systems. This problem is commonly known as hand-eye calibration. In this paper,
we present several formulations of hand-eye calibration that lead to
multivariate polynomial optimization problems. We show that the method of
convex linear matrix inequality (LMI) relaxations can be used to effectively
solve these problems and to obtain globally optimal solutions. Further, we show
that the same approach can be used for the simultaneous hand-eye and
robot-world calibration. Finally, we validate the proposed solutions using both
synthetic and real datasets.
|
1402.3264 | Polynomial Time Attack on Wild McEliece Over Quadratic Extensions | cs.CR cs.IT math.IT math.NT | We present a polynomial time structural attack against the McEliece system
based on Wild Goppa codes from a quadratic finite field extension. This attack
uses the fact that such codes can be distinguished from random codes to compute
some filtration, that is to say a family of nested subcodes which will reveal
their secret algebraic description.
|
1402.3281 | Partitioning Complex Networks via Size-constrained Clustering | cs.DC cs.DS cs.SI | The most commonly used method to tackle the graph partitioning problem in
practice is the multilevel approach. During a coarsening phase, a multilevel
graph partitioning algorithm reduces the graph size by iteratively contracting
nodes and edges until the graph is small enough to be partitioned by some other
algorithm. A partition of the input graph is then constructed by successively
transferring the solution to the next finer graph and applying a local search
algorithm to improve the current solution.
In this paper, we describe a novel approach to partition graphs effectively
especially if the networks have a highly irregular structure. More precisely,
our algorithm provides graph coarsening by iteratively contracting
size-constrained clusterings that are computed using a label propagation
algorithm. The same algorithm that provides the size-constrained clusterings
can also be used during uncoarsening as a fast and simple local search
algorithm.
Depending on the algorithm's configuration, we are able to compute partitions
of very high quality outperforming all competitors, or partitions that are
comparable to the best competitor in terms of quality, hMetis, while being
nearly an order of magnitude faster on average. The fastest configuration
partitions the largest graph available to us with 3.3 billion edges using a
single machine in about ten minutes while cutting less than half as many edges
as the fastest competitor, kMetis.
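The size-constrained label propagation step at the heart of the coarsening can be sketched as follows (a simplified, unweighted version for illustration):

```python
from collections import Counter

def size_constrained_label_propagation(adj, max_size, rounds=10):
    """adj: dict mapping each node to a list of its neighbors.
    Each node repeatedly adopts the most frequent label among its
    neighbors, provided the target cluster still has room."""
    label = {v: v for v in adj}
    size = Counter(label.values())
    for _ in range(rounds):
        for v in adj:
            neighbor_labels = Counter(label[u] for u in adj[v])
            for lab, _ in neighbor_labels.most_common():
                if lab == label[v]:
                    break                    # already in the best cluster
                if size[lab] < max_size:     # respect the size constraint
                    size[label[v]] -= 1
                    size[lab] += 1
                    label[v] = lab
                    break
    return label
```

Contracting the resulting clusters shrinks the graph while the size constraint keeps the coarser levels balanced; the same routine can serve as the fast local search during uncoarsening.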
|
1402.3288 | Two Steps to Obfuscation | cs.SI physics.soc-ph | This note addresses the historical antecedents of the 1998 PageRank measure
of centrality. An identity relation links it to 1990-1991 models of Friedkin
and Johnsen.
|
1402.3301 | Privacy and National Security Issues in Social Networks: The Challenges | cs.SI cs.CY | Online social networks are becoming a major growth point of the internet: as
individuals, companies, and governments constantly desire to interact with one
another, the ability of the internet to deliver these networking capabilities
grows stronger. In this paper, we look at the structure and components of the
member profile and at the privacy challenges faced by individuals and
governments that participate in social networking. We also look at how social
networks can be used to undermine national security, how they have become the
new weapons of mass mobilization, and how they have become rallying forces for
revolutions and social justice.
|
1402.3314 | Distributed synthesis for acyclic architectures | cs.LO cs.SY | The distributed synthesis problem is about constructing correct distributed
systems, i.e., systems that satisfy a given specification. We consider a
slightly more general problem of distributed control, where the goal is to
restrict the behavior of a given distributed system in order to satisfy the
specification. Our systems are finite state machines that communicate via
rendez-vous (Zielonka automata). We show decidability of the synthesis problem
for all omega-regular local specifications, under the restriction that the
communication graph of the system is acyclic. This result extends a previous
decidability result for a restricted form of local reachability specifications.
|
1402.3317 | Multiple Window Moving Horizon Estimation | cs.SY | Long horizon lengths in Moving Horizon Estimation (MHE) are desirable to reach the
performance limits of the full information estimator. However, the conventional
MHE technique suffers from a number of deficiencies in this respect. First, the
problem complexity scales at least linearly with the horizon length selected,
which prevents the selection of long horizons when computational limitations are
present. Second, there is no monitoring of constraint activity/inactivity which
results in conducting redundant constrained minimizations even when no
constraints are active. In this study we develop a Multiple-Window Moving
Horizon Estimation strategy (MW-MHE) that exploits constraint inactivity to
reduce the problem size in long horizon estimation problems. The arrival cost
is approximated using the unconstrained full information estimator arrival cost
to guarantee stability of the technique. A new horizon length selection
criterion is developed based on the maximum sensitivity between remote states in
time. The development will be in terms of general causal descriptor systems,
which includes the standard state space representation as a special case. The
potential of the new estimation algorithm will be demonstrated with an example
showing a significant reduction in both computation time and numerical errors
compared to conventional MHE.
|
1402.3329 | Differential Privacy: An Economic Method for Choosing Epsilon | cs.DB | Differential privacy is becoming a gold standard for privacy research; it
offers a guaranteed bound on loss of privacy due to release of query results,
even under worst-case assumptions. The theory of differential privacy is an
active research area, and there are now differentially private algorithms for a
wide range of interesting problems.
However, the question of when differential privacy works in practice has
received relatively little attention. In particular, there is still no rigorous
method for choosing the key parameter $\epsilon$, which controls the crucial
tradeoff between the strength of the privacy guarantee and the accuracy of the
published results.
In this paper, we examine the role that these parameters play in concrete
applications, identifying the key questions that must be addressed when
choosing specific values. This choice requires balancing the interests of two
different parties: the data analyst and the prospective participant, who must
decide whether to allow their data to be included in the analysis. We propose a
simple model that expresses this balance as formulas over a handful of
parameters, and we use our model to choose $\epsilon$ on a series of simple
statistical studies. We also explore a surprising insight: in some
circumstances, a differentially private study can be more accurate than a
non-private study for the same cost, under our model. Finally, we discuss the
simplifying assumptions in our model and outline a research agenda for possible
refinements.
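The tradeoff that $\epsilon$ controls is visible even in the basic Laplace mechanism (a standard construction, shown here as a sketch; the noise scale is sensitivity/$\epsilon$, so a smaller $\epsilon$ means stronger privacy but noisier answers):

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value perturbed with Laplace(sensitivity/epsilon) noise.

    A Laplace variate is obtained as the difference of two independent
    Exp(1) variates, scaled by the noise parameter."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    return true_value + scale * (rng.expovariate(1.0) - rng.expovariate(1.0))
```

Choosing $\epsilon$ then amounts to deciding how much of this noise the analyst can tolerate against how much privacy loss the prospective participant will accept, which is exactly the balance the proposed model formalizes.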
|
1402.3331 | L-infinity Norm Design of Linear-phase Robust Broadband Beamformers
using Constrained Optimization | cs.SY cs.IT math.IT | A new method for the design of linear-phase robust far-field broadband
beamformers using constrained optimization is proposed. In the method, the
maximum passband ripple and minimum stopband attenuation are ensured to be
within prescribed levels, while at the same time maintaining a good
linear-phase characteristic at a prescribed group delay in the passband. Since
the beamformer is intended primarily for small-sized microphone arrays where
the microphone spacing is small relative to the wavelength at low frequencies,
the beamformer can become highly sensitive to spatial white noise and array
imperfections if a direct minimization of the error is performed. Therefore, to
limit the sensitivity of the beamformer the optimization is carried out by
constraining a sensitivity parameter, namely, the white noise gain (WNG) to be
above prescribed levels across the frequency band. Two novel design variants
have been developed. The first variant is formulated as a convex optimization
problem where the maximum error in the passband is minimized, while the second
variant is formulated as an iterative optimization problem and has the
advantage of significantly improving the linear-phase characteristics of the
beamformer under any prescribed group delay or linear-array configuration. In
the second variant, the passband group-delay deviation is minimized while
ensuring that the maximum passband ripple and stopband attenuation are within
prescribed levels. To reduce the computational effort in carrying out the
optimization, a nonuniform variable sampling approach over the frequency and
angular dimensions is used to compute the required parameters. Experimental
results show that beamformers designed using the proposed methods have much
smaller passband group-delay deviation for similar passband ripple and stopband
attenuation than a modified version of an existing method.
|
1402.3337 | Zero-bias autoencoders and the benefits of co-adapting features | stat.ML cs.CV cs.LG cs.NE | Regularized training of an autoencoder typically results in hidden unit
biases that take on large negative values. We show that negative biases are a
natural result of using a hidden layer whose responsibility is to both
represent the input data and act as a selection mechanism that ensures sparsity
of the representation. We then show that negative biases impede the learning of
data distributions whose intrinsic dimensionality is high. We also propose a
new activation function that decouples the two roles of the hidden layer and
that allows us to learn representations on data with very high intrinsic
dimensionality, where standard autoencoders typically fail. Since the decoupled
activation function acts like an implicit regularizer, the model can be trained
by minimizing the reconstruction error of training data, without requiring any
additional regularization.
|
1402.3344 | Intrinsically Motivated Learning of Visual Motion Perception and Smooth
Pursuit | cs.CV q-bio.NC | We extend the framework of efficient coding, which has been used to model the
development of sensory processing in isolation, to model the development of the
perception/action cycle. Our extension combines sparse coding and reinforcement
learning so that sensory processing and behavior co-develop to optimize a
shared intrinsic motivational signal: the fidelity of the neural encoding of
the sensory input under resource constraints. Applying this framework to a
model system consisting of an active eye behaving in a time varying
environment, we find that this generic principle leads to the simultaneous
development of both smooth pursuit behavior and model neurons whose properties
are similar to those of primary visual cortical neurons selective for different
directions of visual motion. We suggest that this general principle may form
the basis for a unified and integrated explanation of many perception/action
loops.
|
1402.3346 | Geometry and Expressive Power of Conditional Restricted Boltzmann
Machines | cs.NE cs.LG stat.ML | Conditional restricted Boltzmann machines are undirected stochastic neural
networks with a layer of input and output units connected bipartitely to a
layer of hidden units. These networks define models of conditional probability
distributions on the states of the output units given the states of the input
units, parametrized by interaction weights and biases. We address the
representational power of these models, proving results on their ability to
represent conditional Markov random fields and conditional distributions with
restricted supports, on the minimal size of universal approximators, on the
maximal model approximation errors, and on the dimension of the set of representable
conditional distributions. We contribute new tools for investigating
conditional probability models, which allow us to improve the results that can
be derived from existing work on restricted Boltzmann machine probability
models.
|
1402.3352 | Improved Design Method for Nearly Linear-Phase IIR Filters Using
Constrained Optimization | cs.SY | A new optimization method for the design of nearly linear-phase IIR digital
filters that satisfy prescribed specifications is proposed. The group-delay
deviation is minimized under the constraint that the passband ripple and
stopband attenuation are within the prescribed specifications and either a
prescribed or an optimized group delay can be achieved. By representing the
filter in terms of a cascade of second-order sections, a non-restrictive
stability constraint characterized by a set of linear inequality constraints
can be incorporated in the optimization algorithm. An additional feature of the
method, which is very useful in certain applications, is that it provides the
capability of constraining the maximum gain in transition bands to be below a
prescribed level. Experimental results show that filters designed using the
proposed method have much lower group-delay deviation for the same passband
ripple and stopband attenuation when compared with corresponding filters
designed with several state-of-the-art competing methods.
|
1402.3364 | Metric tree-like structures in real-life networks: an empirical study | cs.SI cs.DS | Based on solid theoretical foundations, we present strong evidence that a
number of real-life networks, taken from different domains like Internet
measurements, biological data, web graphs, social and collaboration networks,
exhibit tree-like structures from a metric point of view. We investigate a few
graph parameters, namely, the tree-distortion and the tree-stretch, the
tree-length and the tree-breadth, Gromov's hyperbolicity, the
cluster-diameter and the cluster-radius in a layering partition of a graph,
which capture and quantify this phenomenon of being metrically close to a tree.
By bringing all those parameters together, we not only provide efficient means
for detecting such metric tree-like structures in large-scale networks but also
show how such structures can be used, for example, to efficiently and compactly
encode approximate distance and almost shortest path information and to quickly
and accurately estimate diameters and radii of those networks. Estimating the
diameter and the radius of a graph or distances between its arbitrary vertices
are fundamental primitives in many data and graph mining algorithms.
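As an illustration of one such parameter, Gromov's hyperbolicity of a finite graph can be computed from the four-point condition; the brute-force sketch below is ours (not the paper's method, which uses layering partitions and other machinery to scale to large networks):

```python
from itertools import combinations

def bfs_distances(adj):
    """All-pairs shortest-path distances of an unweighted graph via BFS."""
    dist = {}
    for s in adj:
        d = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        nxt.append(v)
            frontier = nxt
        dist[s] = d
    return dist

def hyperbolicity(adj):
    """Gromov hyperbolicity via the four-point condition: for each
    quadruple, sort the three pairwise distance sums S1 >= S2 >= S3;
    the quadruple contributes (S1 - S2) / 2. Trees are 0-hyperbolic."""
    d = bfs_distances(adj)
    best = 0.0
    for w, x, y, z in combinations(adj, 4):
        sums = sorted([d[w][x] + d[y][z],
                       d[w][y] + d[x][z],
                       d[w][z] + d[x][y]], reverse=True)
        best = max(best, (sums[0] - sums[1]) / 2)
    return best

# A tree is metrically tree-like (delta = 0); a 4-cycle is not (delta = 1).
tree = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(hyperbolicity(tree), hyperbolicity(cycle4))  # → 0.0 1.0
```

The O(n^4) quadruple scan is only viable for small graphs; scalable estimation is exactly what parameters like the cluster-diameter in a layering partition buy.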
|
1402.3371 | An evaluative baseline for geo-semantic relatedness and similarity | cs.CL | In geographic information science and semantics, the computation of semantic
similarity is widely recognised as key to supporting a vast number of tasks in
information integration and retrieval. By contrast, the role of geo-semantic
relatedness has been largely ignored. In natural language processing, semantic
relatedness is often confused with the more specific semantic similarity. In
this article, we discuss a notion of geo-semantic relatedness based on Lehrer's
semantic fields, and we compare it with geo-semantic similarity. We then
describe and validate the Geo Relatedness and Similarity Dataset (GeReSiD), a
new open dataset designed to evaluate computational measures of geo-semantic
relatedness and similarity. This dataset is larger than existing datasets of
this kind, and includes 97 geographic terms combined into 50 term pairs rated
by 203 human subjects. GeReSiD is available online and can be used as an
evaluation baseline to determine empirically to what degree a given
computational model approximates geo-semantic relatedness and similarity.
|
1402.3382 | Machine Learning of Phonologically Conditioned Noun Declensions For
Tamil Morphological Generators | cs.CL | This paper presents machine learning solutions to a practical problem of
Natural Language Generation (NLG), particularly the word formation in
agglutinative languages like Tamil, in a supervised manner. The morphological
generator is an important component of Natural Language Processing in
Artificial Intelligence. It generates word forms given a root and affixes. The
morphophonemic changes like addition, deletion, and alternation occur when two
or more morphemes or words are joined together. Sandhi rules must be explicitly
specified in rule-based morphological analyzers and generators. In a machine
learning framework, these rules can instead be learned automatically by the
system from training samples and subsequently applied to new inputs. In this
paper we propose machine learning models which learn the morphophonemic rules
for noun declensions from the given training data. These models are trained to
learn sandhi rules using various learning algorithms, and the performance of
those algorithms is presented. From this we conclude that morphological
processing tasks such as word form generation can be successfully learned in a
supervised manner, without explicit description of rules. The performance of
decision tree and Bayesian machine learning algorithms on noun declensions is
discussed.
|
1402.3384 | A Minimax Distortion View of Differentially Private Query Release | cs.CR cs.DB cs.IT math.IT | We consider the problem of differentially private query release through a
synthetic database approach. Departing from the existing approaches that
require the query set to be specified in advance, we advocate to devise
query-set independent mechanisms, with an ambitious goal of providing accurate
answers, while meeting the privacy constraints, for all queries in a general
query class. Specifically, a differentially private mechanism is constructed to
"encode" rich stochastic structure into the synthetic database, and
"customized" companion estimators are then derived to provide accurate answers
by making use of all available information, including the mechanism (which is
public information) and the query functions. Accordingly, the distortion under
the best of this kind of mechanisms at the worst-case query in a general query
class, so called the minimax distortion, provides a fundamental
characterization of differentially private query release.
For the general class of statistical queries, we prove that with the
squared-error distortion measure, the minimax distortion is $O(1/n)$ by
deriving asymptotically tight upper and lower bounds in the regime that the
database size $n$ goes to infinity. The upper bound is achievable by a
mechanism $\mathcal{E}$ and its corresponding companion estimators, which
points directly to the feasibility of the proposed approach in large databases.
We further evaluate the mechanism $\mathcal{E}$ and the companion estimators
through experiments on real datasets from Netflix and Facebook. Experimental
results show improvement over the state-of-the-art MWEM algorithm and verify the
scaling behavior $O(1/n)$ of the minimax distortion.
|
1402.3392 | Interleaved entropy coders | cs.IT math.IT | The ANS family of arithmetic coders developed by Jarek Duda has the unique
property that encoder and decoder are completely symmetric in the sense that a
decoder reading bits will be in the exact same state that the encoder was in
when writing those bits---all "buffering" of information is explicitly part of
the coder state and identical between encoder and decoder. As a consequence,
the output from multiple ABS/ANS coders can be interleaved into the same
bitstream without any additional metadata. This allows for very efficient
encoding and decoding on CPUs supporting superscalar execution or SIMD
instructions, as well as GPU implementations. We also show how interleaving
without additional metadata can be implemented for any entropy coder, at some
increase in encoder complexity.
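The encoder/decoder state symmetry can be made concrete with a toy rANS coder; the sketch below is ours (not the paper's implementation), keeps an unbounded-precision state in a Python integer, and omits the renormalization a practical coder needs, so no actual bitstream or interleaving is produced, but the exact state mirroring is visible:

```python
# Toy rANS with unbounded-precision state: the decoder retraces the
# encoder's states exactly, in reverse order (rANS is LIFO).
FREQ = {"A": 2, "B": 1, "C": 1}   # toy symbol frequencies
CUM  = {"A": 0, "B": 2, "C": 3}   # cumulative frequencies
M = sum(FREQ.values())            # total = 4

def encode(symbols, x=1):
    states = [x]
    for s in symbols:
        f, c = FREQ[s], CUM[s]
        x = (x // f) * M + c + (x % f)   # rANS state update
        states.append(x)
    return x, states

def decode(x, n):
    symbols, states = [], [x]
    for _ in range(n):
        slot = x % M
        s = next(t for t in FREQ if CUM[t] <= slot < CUM[t] + FREQ[t])
        x = FREQ[s] * (x // M) + slot - CUM[s]   # exact inverse update
        symbols.append(s)
        states.append(x)
    return symbols[::-1], states   # reverse: decoder pops in LIFO order

msg = list("ABCAB")
x_final, enc_states = encode(msg)
decoded, dec_states = decode(x_final, len(msg))
assert decoded == msg
# The key symmetry: decoder states are the encoder states, reversed.
assert dec_states == enc_states[::-1]
```

Because "buffering" lives entirely in the state, two such coders could write into one stream alternately and be decoded without any extra metadata, which is the property the abstract exploits.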
|
1402.3405 | Authorship Analysis based on Data Compression | cs.CL cs.DL cs.IR stat.ML | This paper proposes to perform authorship analysis using the Fast Compression
Distance (FCD), a similarity measure based on compression with dictionaries
directly extracted from the written texts. The FCD computes a similarity
between two documents through an effective binary search on the intersection
set between the two related dictionaries. In the reported experiments the
proposed method is applied to documents which are heterogeneous in style,
written in five different languages and coming from different historical
periods. Results are comparable to the state of the art and outperform
traditional compression-based methods.
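The FCD itself relies on a binary search over dictionaries extracted from the texts and is not reproduced here; as a hedged stand-in, the classic Normalized Compression Distance illustrates the general compression-based similarity idea the paper builds on, using only zlib:

```python
import zlib

def C(data: bytes) -> int:
    """Approximate Kolmogorov complexity: compressed size in bytes."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: small when x and y share
    structure a compressor can exploit, near 1 when unrelated."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy cat " * 20
c = bytes(range(256)) * 4   # unrelated, incompressible-ish data
assert ncd(a, b) < ncd(a, c)   # similar texts score closer
```

Traditional compression-based authorship methods use exactly this kind of distance; the FCD replaces the full recompression of concatenations with a faster dictionary-intersection search.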
|
1402.3427 | Indian Buffet Process Deep Generative Models for Semi-Supervised
Classification | cs.LG | Deep generative models (DGMs) have brought about a major breakthrough, as
well as renewed interest, in generative latent variable models. However, DGMs
do not allow for performing data-driven inference of the number of latent
features needed to represent the observed data. Traditional linear formulations
address this issue by resorting to tools from the field of nonparametric
statistics. Indeed, linear latent variable models with an Indian Buffet
Process (IBP) prior imposed have been extensively studied by the machine learning
community; inference for such models can be performed either via exact
sampling or via approximate variational techniques. Based on this inspiration,
in this paper we examine whether similar ideas from the field of Bayesian
nonparametrics can be utilized in the context of modern DGMs in order to
address the latent variable dimensionality inference problem. To this end, we
propose a novel DGM formulation, based on the imposition of an IBP prior. We
devise an efficient Black-Box Variational inference algorithm for our model,
and exhibit its efficacy in a number of semi-supervised classification
experiments. In all cases, we use popular benchmark datasets, and compare to
state-of-the-art DGMs.
|
1402.3435 | Generalized Huffman Coding for Binary Trees with Choosable Edge Lengths | cs.IT cs.DS math.CO math.IT | In this paper we study binary trees with choosable edge lengths, in
particular rooted binary trees with the property that the two edges leading
from every non-leaf to its two children are assigned integral lengths $l_1$ and
$l_2$ with $l_1+l_2 =k$ for a constant $k\in\mathbb{N}$. The depth of a leaf is
the total length of the edges of the unique root-leaf-path.
We present a generalization of Huffman coding that can decide in
polynomial time if for given values $d_1,...,d_n\in\mathbb{N}_{\geq 0}$ there
exists a rooted binary tree with choosable edge lengths with $n$ leaves having
depths at most $d_1,..., d_n$.
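For the classic unit-length case ($k = 2$ with $l_1 = l_2 = 1$), the feasibility question reduces to Kraft's inequality; the sketch below covers only that special case (the paper's algorithm handles general $k$), using exact fractions to avoid floating-point error at large depths:

```python
from fractions import Fraction

def kraft_feasible(depths):
    """Unit-length special case: a rooted binary tree with n leaves of
    depths at most d_1, ..., d_n exists iff sum(2^{-d_i}) <= 1
    (Kraft's inequality); leaves can always be raised to make the
    tree full without exceeding the depth bounds."""
    return sum(Fraction(1, 2 ** d) for d in depths) <= 1

assert kraft_feasible([1, 2, 2])       # exactly fills a full binary tree
assert kraft_feasible([2, 2, 2])       # slack allowed: depths are upper bounds
assert not kraft_feasible([1, 1, 2])   # 1/2 + 1/2 + 1/4 > 1
```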
|
1402.3470 | Designing an Ontology for the Data Documentation Initiative | cs.IR cs.DL | An ontology of the DDI 3 data model will be designed by following an
ontology engineering methodology evolved from state-of-the-art methodologies.
DDI 3 data and metadata can then be represented in the standard web interchange
format RDF and processed by widely available RDF tools. As a consequence, the
DDI community gains the ability to publish and link LOD data sets and become
part of the LOD cloud.
|
1402.3483 | News Cohesiveness: an Indicator of Systemic Risk in Financial Markets | cs.SI physics.soc-ph q-fin.ST | Motivated by recent financial crises, significant research efforts have been
put into studying contagion effects and herding behaviour in financial markets.
Much less has been said about the influence of financial news on financial markets.
We propose a novel measure of collective behaviour in financial news on the
Web, News Cohesiveness Index (NCI), and show that it can be used as a systemic
risk indicator. We evaluate the NCI on financial documents from large Web news
sources on a daily basis from October 2011 to July 2013 and analyse the
interplay between financial markets and financially related news. We
hypothesized that strong cohesion in financial news reflects movements in the
financial markets. Cohesiveness is a more general and robust measure of
systemic risk expressed in news than measures based on simple occurrences of specific
terms. Our results indicate that cohesiveness in the financial news is highly
correlated with and driven by volatility on the financial markets.
|
1402.3484 | Simulation and Bisimulation over Multiple Time Scales in a Behavioral
Setting | cs.SY | This paper introduces a new behavioral system model with distinct external
and internal signals possibly evolving on different time scales. This makes it
possible to capture abstraction processes or signal aggregation in the context of control
and verification of large scale systems. For this new system model different
notions of simulation and bisimulation are derived, ensuring that they are,
respectively, preorders and equivalence relations for the system class under
consideration. These relations can capture a wide selection of similarity
notions available in the literature. This paper therefore provides a suitable
framework for their comparison.
|
1402.3488 | A Unifying Model for Representing Time-Varying Graphs | cs.DS cs.DM cs.SI | Graph-based models form a fundamental aspect of data representation in Data
Sciences and play a key role in modeling complex networked systems. In
particular, recently there is an ever-increasing interest in modeling dynamic
complex networks, i.e. networks in which the topological structure (nodes and
edges) may vary over time. In this context, we propose a novel model for
representing finite discrete Time-Varying Graphs (TVGs), which are typically
used to model dynamic complex networked systems. We analyze the data structures
built from our proposed model and demonstrate that, for most practical cases,
the asymptotic memory complexity of our model is in the order of the
cardinality of the set of edges. Further, we show that our proposal is a
unifying model that can represent several previous (classes of) models for
dynamic networks found in the recent literature, which in general are unable to
represent each other. In contrast to previous models, our proposal is also able
to intrinsically model cyclic (i.e. periodic) behavior in dynamic networks.
These representation capabilities attest the expressive power of our proposed
unifying model for TVGs. We thus believe our unifying model for TVGs is a step
forward in the theoretical foundations for data analysis of complex networked
systems.
|
1402.3490 | D numbers theory: a generalization of Dempster-Shafer theory | cs.AI | Dempster-Shafer theory is widely applied to uncertainty modelling and
knowledge reasoning due to its ability of expressing uncertain information.
However, some conditions, such as the exclusiveness hypothesis and the
completeness constraint, limit its development and application to a large
extent. To overcome these shortcomings of Dempster-Shafer theory and enhance
its capability of representing uncertain information, a novel theory called D
numbers theory is systematically proposed in this paper. Within the proposed
theory, uncertain information is expressed by D numbers, and reasoning and
synthesis of information are implemented by the D numbers combination rule.
The proposed D numbers theory is a generalization of Dempster-Shafer theory,
which inherits the advantages of Dempster-Shafer theory and strengthens its
capability of uncertainty modelling.
|
1402.3506 | Constructing (Bi)Similar Finite State Abstractions using Asynchronous
$l$-Complete Approximations | cs.SY | This paper constructs a finite state abstraction of a possibly
continuous-time and infinite state model in two steps. First, a finite external
signal space is added, generating a so called $\Phi$-dynamical system.
Secondly, the strongest asynchronous $l$-complete approximation of the external
dynamics is constructed. As our main results, we show that (i) the abstraction
simulates the original system, and (ii) bisimilarity between the original
system and its abstraction holds, if and only if the original system is
$l$-complete and its state space satisfies an additional property.
|
1402.3511 | A Clockwork RNN | cs.NE cs.LG | Sequence prediction and classification are ubiquitous and challenging
problems in machine learning that can require identifying complex dependencies
between temporally distant inputs. Recurrent Neural Networks (RNNs) have the
ability, in theory, to cope with these temporal dependencies by virtue of the
short-term memory implemented by their recurrent (feedback) connections.
However, in practice they are difficult to train successfully when long-term
memory is required. This paper introduces a simple, yet powerful
modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in
which the hidden layer is partitioned into separate modules, each processing
inputs at its own temporal granularity, making computations only at its
prescribed clock rate. Rather than making the standard RNN models more complex,
CW-RNN reduces the number of RNN parameters, improves the performance
significantly in the tasks tested, and speeds up the network evaluation. The
network is demonstrated in preliminary experiments involving two tasks: audio
signal generation and TIMIT spoken word classification, where it outperforms
both RNN and LSTM networks.
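The clock-gated update rule can be sketched in a few lines; the toy forward pass below is our illustration (untrained, with assumed module sizes and periods, not the paper's implementation), showing modules that skip updates off their clock ticks and faster modules reading from slower ones:

```python
import math, random

random.seed(0)

class ClockworkRNNSketch:
    """Toy Clockwork RNN forward pass: the hidden state is split into
    modules with increasing clock periods; at step t, module i updates
    only when t % period[i] == 0, and it reads from modules with equal
    or slower clocks. Illustrative sketch only."""
    def __init__(self, n_in, module_size, periods):
        self.periods = periods
        self.msize = module_size
        n_h = module_size * len(periods)
        self.W_in = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_h)]
        self.W_h  = [[random.gauss(0, 0.1) for _ in range(n_h)]  for _ in range(n_h)]
        self.h = [0.0] * n_h

    def step(self, x, t):
        new_h = list(self.h)
        for i, T in enumerate(self.periods):
            if t % T != 0:
                continue                      # off this module's clock: keep state
            lo, hi = i * self.msize, (i + 1) * self.msize
            src = range(lo, len(self.h))      # read self + slower modules only
            for u in range(lo, hi):
                z = sum(self.W_in[u][k] * x[k] for k in range(len(x)))
                z += sum(self.W_h[u][v] * self.h[v] for v in src)
                new_h[u] = math.tanh(z)
        self.h = new_h
        return self.h

rnn = ClockworkRNNSketch(n_in=2, module_size=3, periods=[1, 2, 4])
states = [list(rnn.step([1.0, 0.5], t)) for t in range(4)]
# The slowest module (period 4, units 6-8) updates only at t = 0:
assert states[0][6:] == states[1][6:] == states[2][6:] == states[3][6:]
```

The parameter saving in the abstract comes from exactly this `src` restriction: fast-to-slow recurrent weights are simply absent.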
|
1402.3520 | Spatially-Coupled LDPC Codes for Decode-and-Forward Relaying of Two
Correlated Sources over the BEC | cs.IT math.IT | We present a decode-and-forward transmission scheme based on
spatially-coupled low-density parity-check (SC-LDPC) codes for a network
consisting of two (possibly correlated) sources, one relay, and one
destination. The links between the nodes are modeled as binary erasure
channels. Joint source-channel coding with joint channel decoding is used to
exploit the correlation. The relay performs network coding. We derive
analytical bounds on the achievable rates for the binary erasure time-division
multiple-access relay channel with correlated sources. We then design bilayer
SC-LDPC codes and analyze their asymptotic performance for this scenario. We
prove analytically that the proposed coding scheme achieves the theoretical
limit for symmetric channel conditions and uncorrelated sources. Using density
evolution, we furthermore demonstrate that our scheme approaches the
theoretical limit also for non-symmetric channel conditions and when the
sources are correlated, and we observe the threshold saturation effect that is
typical for spatially-coupled systems. Finally, we give simulation results for
large block lengths, which validate the density evolution analysis.
|
1402.3557 | Improving Streaming Video Segmentation with Early and Mid-Level Visual
Processing | cs.CV | Despite recent advances in video segmentation, many opportunities remain to
improve it using a variety of low and mid-level visual cues. We propose
improvements to the leading streaming graph-based hierarchical video
segmentation (streamGBH) method based on early and mid level visual processing.
The extensive experimental analysis of our approach validates the improvement
of hierarchical supervoxel representation by incorporating motion and color
with effective filtering. We also pose and illuminate some open questions
towards intermediate level video analysis as further extension to streamGBH. We
exploit the supervoxels as an initialization towards estimation of dominant
affine motion regions, followed by merging of such motion regions in order to
hierarchically segment a video in a novel motion-segmentation framework which
aims at subsequent applications such as foreground recognition.
|
1402.3578 | Learning-assisted Theorem Proving with Millions of Lemmas | cs.AI cs.DL cs.LG cs.LO | Large formal mathematical libraries consist of millions of atomic inference
steps that give rise to a corresponding number of proved statements (lemmas).
Analogously to the informal mathematical practice, only a tiny fraction of such
statements is named and re-used in later proofs by formal mathematicians. In
this work, we suggest and implement criteria defining the estimated usefulness
of the HOL Light lemmas for proving further theorems. We use these criteria to
mine the large inference graph of the lemmas in the HOL Light and Flyspeck
libraries, adding up to millions of the best lemmas to the pool of statements
that can be re-used in later proofs. We show that in combination with
learning-based relevance filtering, such methods significantly strengthen
automated theorem proving of new conjectures over large formal mathematical
libraries such as Flyspeck.
|
1402.3588 | Outdoor flocking and formation flight with autonomous aerial robots | cs.RO cs.MA | We present the first decentralized multi-copter flock that performs stable
autonomous outdoor flight with up to 10 flying agents. By decentralized and
autonomous we mean that all members navigate themselves based on the dynamic
information received from other robots in the vicinity. We do not use central
data processing or control; instead, all the necessary computations are carried
out by miniature on-board computers. The only global information the system
exploits is from GPS receivers, while the units use wireless modules to share
this positional information with other flock members locally. Collective
behavior is based on a decentralized control framework with bio-inspiration
from statistical physical modelling of animal swarms. In addition, the model is
optimized for stable group flight even in a noisy, windy, delayed and
error-prone environment. Using this framework we successfully implemented
several fundamental collective flight tasks with up to 10 units: i) we achieved
self-propelled flocking in a bounded area with self-organized object avoidance
capabilities and ii) performed collective target tracking with stable formation
flights (grid, rotating ring, straight line). With realistic numerical
simulations we demonstrated that the local broadcast-type communication and the
decentralized autonomous control method allows for the scalability of the model
for much larger flocks.
|
1402.3606 | Routing and Staffing when Servers are Strategic | cs.GT cs.SY math.OC | Traditionally, research focusing on the design of routing and staffing
policies for service systems has modeled servers as having fixed (possibly
heterogeneous) service rates. However, service systems are generally staffed by
people. Furthermore, people respond to workload incentives; that is, how hard a
person works can depend both on how much work there is, and how the work is
divided between the people responsible for it. In a service system, the routing
and staffing policies control such workload incentives; and so the rate servers
work will be impacted by the system's routing and staffing policies. This
observation has consequences when modeling service system performance, and our
objective is to investigate those consequences.
We do this in the context of the M/M/N queue, which is the canonical model
for large service systems. First, we present a model for "strategic" servers
that choose their service rate in order to maximize a trade-off between an
"effort cost", which captures the idea that servers exert more effort when
working at a faster rate, and a "value of idleness", which assumes that servers
value having idle time. Next, we characterize the symmetric Nash equilibrium
service rate under any routing policy that routes based on the server idle
time. We find that the system must operate in a quality-driven regime, in which
servers have idle time, in order for an equilibrium to exist, which implies
that the staffing must have a first-order term that strictly exceeds that of
the common square-root staffing policy. Then, within the class of policies that
admit an equilibrium, we (asymptotically) solve the problem of minimizing the
total cost, when there are linear staffing costs and linear waiting costs.
Finally, we end by exploring the question of whether routing policies that are
based on the service rate, instead of the server idle time, can improve system
performance.
|
1402.3610 | Potential Games are Necessary to Ensure Pure Nash Equilibria in Cost
Sharing Games | cs.GT cs.MA cs.SY math.CO | We consider the problem of designing distribution rules to share "welfare"
(cost or revenue) among individually strategic agents. There are many known
distribution rules that guarantee the existence of a (pure) Nash equilibrium in
this setting, e.g., the Shapley value and its weighted variants; however, a
characterization of the space of distribution rules that guarantee the
existence of a Nash equilibrium is unknown. Our work provides an exact
characterization of this space for a specific class of scalable and separable
games, which includes a variety of applications such as facility location,
routing, network formation, and coverage games. Given arbitrary local welfare
functions W, we prove that a distribution rule guarantees equilibrium existence
for all games (i.e., all possible sets of resources, agent action sets, etc.)
if and only if it is equivalent to a generalized weighted Shapley value on some
"ground" welfare functions W', which can be distinct from W. However, if
budget-balance is required in addition to the existence of a Nash equilibrium,
then W' must be the same as W. We also provide an alternate characterization of
this space in terms of "generalized" marginal contributions, which is more
appealing from the point of view of computational tractability. A possibly
surprising consequence of our result is that, in order to guarantee equilibrium
existence in all games with any fixed local welfare functions, it is necessary
to work within the class of potential games.
|
1402.3613 | Finding Coordinated Paths for Multiple Holonomic Agents in 2-d Polygonal
Environment | cs.AI cs.RO | Avoiding collisions is one of the vital tasks for systems of autonomous
mobile agents. We focus on the problem of finding continuous coordinated paths
for multiple mobile disc agents in a 2-d environment with polygonal obstacles.
The problem is PSPACE-hard, with the state space growing exponentially in the
number of agents. Therefore, the state of the art methods include mainly
reactive techniques and sampling-based iterative algorithms.
We compare the performance of a widely-used reactive method ORCA with three
variants of a popular planning algorithm RRT* applied to multi-agent path
planning and find that an algorithm combining reactive collision avoidance and
RRT* planning, which we call ORCA-RRT*, can be used to solve instances that are
out of the reach of either of the techniques. We experimentally show that: 1)
the reactive part of the algorithm can efficiently solve many multi-agent path
finding problems involving a large number of agents, for which the RRT* algorithm is
often unable to find a solution in limited time and 2) the planning component
of the algorithm is able to solve many instances containing local minima, where
reactive techniques typically fail.
|
1402.3626 | Strong converse for the quantum capacity of the erasure channel for
almost all codes | quant-ph cs.IT math.IT | A strong converse theorem for channel capacity establishes that the error
probability in any communication scheme for a given channel necessarily tends
to one if the rate of communication exceeds the channel's capacity.
Establishing such a theorem for the quantum capacity of degradable channels has
been an elusive task, with the strongest progress so far being a so-called
"pretty strong converse". In this work, Morgan and Winter proved that the
quantum error of any quantum communication scheme for a given degradable
channel converges to a value larger than $1/\sqrt{2}$ in the limit of many
channel uses if the quantum rate of communication exceeds the channel's quantum
capacity. The present paper establishes a theorem that is a counterpart to this
"pretty strong converse". We prove that the large fraction of codes having a
rate exceeding the erasure channel's quantum capacity have a quantum error
tending to one in the limit of many channel uses. Thus, our work adds to the
body of evidence that a fully strong converse theorem should hold for the
quantum capacity of the erasure channel. As a side result, we prove that the
classical capacity of the quantum erasure channel obeys the strong converse
property.
|
1402.3631 | Privately Solving Linear Programs | cs.DS cs.CR cs.LG | In this paper, we initiate the systematic study of solving linear programs
under differential privacy. The first step is simply to define the problem: to
this end, we introduce several natural classes of private linear programs that
capture different ways sensitive data can be incorporated into a linear
program. For each class of linear programs we give an efficient, differentially
private solver based on the multiplicative weights framework, or we give an
impossibility result.
|
1402.3634 | Collective Decision-Making in Ideal Networks: The Speed-Accuracy
Tradeoff | math.OC cs.MA cs.SY | We study collective decision-making in a model of human groups, with network
interactions, performing two alternative choice tasks. We focus on the
speed-accuracy tradeoff, i.e., the tradeoff between a quick decision and a
reliable decision, for individuals in the network. We model the evidence
aggregation process across the network using a coupled drift diffusion model
(DDM) and consider the free response paradigm in which individuals take their
time to make the decision. We develop reduced DDMs as decoupled approximations
to the coupled DDM and characterize their efficiency. We determine high
probability bounds on the error rate and the expected decision time for the
reduced DDM. We show the effect of the decision-maker's location in the network
on their decision-making performance under several threshold selection
criteria. Finally, we extend the coupled DDM to the coupled Ornstein-Uhlenbeck
model for decision-making in two alternative choice tasks with recency effects,
and to the coupled race model for decision-making in multiple alternative
choice tasks.
|
1402.3648 | Auto Spell Suggestion for High Quality Speech Synthesis in Hindi | cs.CL cs.SD | The goal of Text-to-Speech (TTS) synthesis in a particular language is to
convert arbitrary input text to intelligible and natural sounding speech.
However, for a language like Hindi, in which many words have very close
spellings and are easily confused, it is not an easy task to identify errors in
the input text, and incorrect text degrades the quality of the output speech.
This paper therefore contributes to the development of high-quality speech
synthesis through a spellchecker that automatically generates spelling
suggestions for misspelled words. Involving a spellchecker increases the
efficiency of speech synthesis by providing spelling suggestions for incorrect
input text. Furthermore, we provide a comparative study evaluating the effect
on the phonetic text of adding the spellchecker to the input text.
|
1402.3653 | Crowdsourcing Swarm Manipulation Experiments: A Massive Online User
Study with Large Swarms of Simple Robots | cs.RO | Micro- and nanorobotics have the potential to revolutionize many applications
including targeted material delivery, assembly, and surgery. The same
properties that promise breakthrough solutions---small size and large
populations---present unique challenges to generating controlled motion. We
want to use large swarms of robots to perform manipulation tasks;
unfortunately, human-swarm interaction studies as conducted today are limited
in sample size, are difficult to reproduce, and are prone to hardware failures.
We present an alternative.
This paper examines the perils, pitfalls, and possibilities we discovered by
launching SwarmControl.net, an online game where players steer swarms of up to
500 robots to complete manipulation challenges. We record statistics from
thousands of players, and use the game to explore aspects of large-population
robot control. We present the game framework as a new, open-source tool for
large-scale user experiments. Our results have potential applications in human
control of micro- and nanorobots, supply insight for automatic controllers, and
provide a template for large online robotic research experiments.
|
1402.3654 | Temperature Control using Fuzzy Logic | cs.SY | The aim of temperature control is to heat a system up to a designated
temperature and afterwards hold it at that temperature in a reliable manner. A
Fuzzy Logic Controller (FLC) is an effective way in which this type of
precision control can be accomplished. During the past twenty years, a
significant amount of research using fuzzy logic has been done in the field of
control of non-linear dynamical systems. Here we have developed a temperature
control system using fuzzy logic. Conventional controllers are derived from
control theory techniques. The desired response of the output can be guaranteed
by the feedback controller.
|
1402.3656 | Analysis of Carrier Frequency Selective Offset Estimation - Using
Zero-IF and ZCZ in MC-DS-CDMA | cs.NI cs.IT math.IT | A new method for frequency synchronization based upon a Zero-Intermediate
Frequency (Zero-IF) receiver and the characteristics of the received signal's
power spectrum for the MC-DS-CDMA uplink system is proposed in this paper. In
addition, employing Zero Correlation Zone (ZCZ) sequences, designed
specifically for quasi-synchronous uplink transmissions, is proposed to exploit
frequency and temporal diversity in frequency-selective block fading channels.
The variance of Carrier Frequency Offset (CFO) estimators for the MC-DS-CDMA
uplink is compared with that of an OFDM system. Our study and results show that
the MC-DS-CDMA system outperforms the OFDM method.
|
1402.3657 | A Narrative Vehicle Protection Representation for Vehicle Speed
Regulator Under Driver Exhaustion -- A Study | cs.CV cs.HC | Driver fatigue is one of the important factors that cause traffic accidents,
and the ever-increasing number of accidents due to drivers' diminished
vigilance has become a problem of serious concern to society. Drivers with a
diminished vigilance level suffer from a marked decline in their abilities of
perception, recognition, and vehicle control, and therefore pose a serious
danger to their own lives and the lives of other people. Exhaustion resulting
from sleep deprivation or sleep disorders is an important factor in the
increasing number of accidents. In this work, we discuss various existing
methods and a proposed method based on a real-time online safety prototype
that controls the vehicle speed under driver fatigue. The purpose of such a
model is to advance a system that detects fatigue symptoms in drivers and
controls the speed of the vehicle to avoid accidents. The system was tested
with subjects across the different technologies of various researchers;
finally, the validity of the proposed model for a vehicle speed controller
based on driver fatigue detection is shown.
|
1402.3664 | Parameter estimation based on interval-valued belief structures | cs.AI | Parameter estimation based on uncertain data represented as belief structures
is one of the latest problems in the Dempster-Shafer theory. In this paper, a
novel method is proposed for the parameter estimation in the case where belief
structures are uncertain and represented as interval-valued belief structures.
Within our proposed method, the maximization of likelihood criterion and
minimization of estimated parameter's uncertainty are taken into consideration
simultaneously. As an illustration, the proposed method is employed to estimate
parameters for deterministic and uncertain belief structures, which
demonstrates its effectiveness and versatility.
|
1402.3689 | Sound Representation and Classification Benchmark for Domestic Robots | cs.SD cs.RO | We address the problem of sound representation and classification and present
results of a comparative study in the context of a domestic robotic scenario. A
dataset of sounds was recorded in realistic conditions (background noise,
presence of several sound sources, reverberations, etc.) using the humanoid
robot NAO. An extended benchmark is carried out to test a variety of
representations combined with several classifiers. We provide results obtained
with the annotated dataset and we assess the methods quantitatively on the
basis of their classification scores, computation times and memory
requirements. The annotated dataset is publicly available at
https://team.inria.fr/perception/nard/.
|
1402.3718 | Simulating urban expansion in the parcel level for all Chinese cities | cs.MA | Large-scale models are generally associated with big modelling units in
space, such as counties or super grids (several to dozens of km2). Few applied
urban models can pursue a large-scale extent with fine-level units
simultaneously due to data availability and computation load. The framework for
automatic identification and characterization of parcels developed by Long and
Liu (2013)
makes such an ideal model possible by establishing existing urban parcels using
road networks and points of interest for a super large area (like a country or
a continent). In this study, a mega-vector-parcels cellular automata model
(MVP-CA) is developed for simulating urban expansion at the parcel level for
all 654 Chinese cities. Existing urban parcels in 2012, for initiating MVP-CA,
are generated using multi-levelled road networks and ubiquitous points of
interest, followed by simulating parcel-based urban expansion of all cities
during 2012-2017. Reflecting national spatial development strategies discussed
extensively by academics and decision makers, the baseline scenario and two
other simulated urban expansion scenarios have been tested and compared
side by side. As the first fine-scale urban expansion model of national
scope, its academic contributions, practical applications, and potential biases
are discussed in this paper as well.
|