id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1305.1044 | Optimal Fully Electric Vehicle load balancing with an ADMM algorithm in
Smartgrids | cs.SY cs.DC | In this paper we present a system architecture and a suitable control
methodology for the load balancing of Fully Electric Vehicles (FEVs) at a Charging
Station (CS). Within the proposed architecture, control methodologies make it
possible to adapt Distributed Energy Resources (DER) generation profiles and active loads
to ensure economic benefits to each actor. The key aspect is the organization
in two levels of control: at the local level, a Load Area Controller (LAC) optimally
calculates the FEV charging sessions, while at a higher level, a Macro Load Area
Aggregator (MLAA) provides DERs with energy production profiles and LACs with
energy withdrawal profiles. The proposed control methodologies involve the solution
of a Walrasian market equilibrium and the design of a distributed algorithm.
|
1305.1052 | Hybridization of Otsu Method and Median Filter for Color Image
Segmentation | cs.CV | In this article a novel algorithm for color image segmentation has been
developed. The proposed algorithm is based on combining two existing methods in
a novel way to obtain a significant method for partitioning the color image
into significant regions. In the first phase, the traditional Otsu method for
gray-level image segmentation is applied to each of the R, G, and B
channels separately to determine the suitable automatic threshold for each
channel. After that, the modified channels are integrated again to
form a new color image. The resulting image suffers from some
distortion. To remove this distortion, a second phase is applied, in which a
median filter smooths the image and enlarges the segmented regions. The
improvement is clearly visible to the naked eye. Experimental results are
presented on a variety of test images to support the proposed algorithm.
|
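The two-phase pipeline this abstract describes can be sketched in a few lines. The following is an illustrative reimplementation (not the authors' code), using a plain NumPy Otsu threshold and SciPy's median filter:

```python
import numpy as np
from scipy.ndimage import median_filter

def otsu_threshold(channel):
    """Return the Otsu threshold of a uint8 channel (maximizes between-class variance)."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment(rgb_image, filter_size=3):
    """Phase 1: binarize each of R, G, B with its own Otsu threshold.
    Phase 2: median-filter the recombined image to smooth the distortion."""
    channels = []
    for c in range(3):
        ch = rgb_image[..., c]
        channels.append(np.where(ch >= otsu_threshold(ch), 255, 0).astype(np.uint8))
    recombined = np.stack(channels, axis=-1)
    return median_filter(recombined, size=(filter_size, filter_size, 1))
```

The filter size and the hard 0/255 binarization are assumptions; the paper does not fix these details in the abstract.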
1305.1060 | On Rational Closure in Description Logics of Typicality | cs.AI | We define the notion of rational closure in the context of Description Logics
extended with a typicality operator. We start from ALC+T, an extension of ALC
with a typicality operator T, which intuitively allows expressing concepts of the
form T(C), meant to select the "most normal" instances of a concept C. The
semantics we consider is based on rational models. We further restrict the
semantics to minimal models, that is to say, to models that minimise the rank
of domain elements. We show that this semantics captures exactly a notion of
rational closure which is a natural extension to Description Logics of Lehmann
and Magidor's original one. We also extend the notion of rational closure to
the ABox component. We provide an ExpTime algorithm for computing the rational
closure of an ABox, and we show that it is sound and complete with respect to
the minimal model semantics.
|
1305.1082 | Random Linear Network Codes for Secrecy over Wireless Broadcast Channels | cs.IT cs.CR math.IT | We consider a set of $n$ messages and a group of $k$ clients. Each client is
privileged for receiving an arbitrary subset of the messages over a broadcast
erasure channel, which generalizes the scenario of a previous work. We propose a
method for secretly delivering each message to its privileged recipients in a
way that each receiver can decode its own messages but not the others'. Our
method is based on combining the messages using linear network coding and
hiding the decoding coefficients from the unprivileged clients. We provide an
information theoretic proof for the secrecy of the proposed method. In
particular, we show that an unprivileged client cannot obtain any meaningful
information even if it holds the entire set of coded data packets transmitted
over the channel. Moreover, in our method, the decoding complexity is desirably
low at the receiver side.
|
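The core mechanism, combining messages by linear network coding so that only holders of the coefficient matrix can decode, can be illustrated with a toy GF(2) example. This is a simplification for intuition only; the paper works with random linear codes and an information-theoretic secrecy analysis:

```python
import numpy as np

def encode(messages, coeffs):
    """Coded packet i is the XOR of all messages j with coeffs[i][j] == 1."""
    packets = []
    for row in coeffs:
        pkt = np.zeros_like(messages[0])
        for j, bit in enumerate(row):
            if bit:
                pkt ^= messages[j]
        packets.append(pkt)
    return packets

def decode(packets, coeffs):
    """Recover the messages by Gauss-Jordan elimination over GF(2).
    Requires an invertible coefficient matrix (which privileged clients hold)."""
    a = np.array(coeffs, dtype=np.uint8) % 2
    pkts = [p.copy() for p in packets]
    n = len(pkts)
    for col in range(n):
        pivot = next(r for r in range(col, n) if a[r, col])
        a[[col, pivot]] = a[[pivot, col]]
        pkts[col], pkts[pivot] = pkts[pivot], pkts[col]
        for r in range(n):
            if r != col and a[r, col]:
                a[r] ^= a[col]
                pkts[r] ^= pkts[col]
    return pkts
```

Without `coeffs`, the packets are XOR mixtures that reveal no single message directly, which is the intuition behind hiding the decoding coefficients.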
1305.1091 | Further improvements on the Feng-Rao bound for dual codes | cs.IT math.AC math.AG math.IT | Salazar, Dunn and Graham in [Salazar et al., 2006] presented an improved
Feng-Rao bound for the minimum distance of dual codes. In this work we take the
improvement a step further. Both the original bound by Salazar et al., as well
as our improvement are lifted so that they deal with generalized Hamming
weights. We also demonstrate the advantage of working with one-way
well-behaving pairs rather than weakly well-behaving or well-behaving pairs.
|
1305.1102 | Incremental Sampling-based Algorithm for Minimum-violation Motion
Planning | cs.RO | This paper studies the problem of control strategy synthesis for dynamical
systems with differential constraints to fulfill a given reachability goal
while satisfying a set of safety rules. Particular attention is devoted to
goals that become feasible only if a subset of the safety rules is violated.
The proposed algorithm computes a control law that minimizes the level of
unsafety while the desired goal is guaranteed to be reached. This problem is
motivated by an autonomous car navigating an urban environment while following
rules of the road such as "always travel in the right lane" and "do not change
lanes frequently". Ideas behind sampling-based motion planning algorithms,
such as Probabilistic Road Maps (PRMs) and Rapidly-exploring Random Trees
(RRTs), are employed to incrementally construct a finite concretization of the
dynamics as a durational Kripke structure. In conjunction with this, a weighted
finite automaton that captures the safety rules is used in order to find an
optimal trajectory that minimizes the violation of safety rules. We prove that
the proposed algorithm guarantees asymptotic optimality, i.e., almost-sure
convergence to optimal solutions. We present results of simulation experiments
and an implementation on an autonomous urban mobility-on-demand system.
|
1305.1112 | json2run: a tool for experiment design & analysis | cs.CE | json2run is a tool to automate the running, storage and analysis of
experiments. The main advantage of json2run is that it allows describing a set
of experiments concisely as a JSON-formatted parameter tree. It also supports
parallel execution of experiments, automatic parameter tuning through the
F-Race framework and storage and analysis of experiments with MongoDB and R.
|
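A minimal sketch of the kind of parameter-tree expansion json2run performs. The JSON schema below ('and'/'or' inner nodes, 'name'/'value' leaves) is a hypothetical simplification, not json2run's actual format:

```python
import itertools
import json

def expand(node):
    """Expand a json2run-style parameter tree into flat parameter dicts.
    'and' children are combined (Cartesian product); 'or' children are alternatives."""
    if "value" in node:  # leaf: one named parameter with several candidate values
        return [{node["name"]: v} for v in node["value"]]
    if node["type"] == "and":
        combos = itertools.product(*(expand(c) for c in node["children"]))
        return [{k: v for d in combo for k, v in d.items()} for combo in combos]
    if node["type"] == "or":
        return [d for c in node["children"] for d in expand(c)]

tree = json.loads("""
{"type": "and", "children": [
    {"name": "solver", "value": ["tabu", "sa"]},
    {"name": "seed",   "value": [1, 2, 3]}
]}
""")
configs = expand(tree)  # 2 x 3 = 6 experiment configurations
```

Each resulting dict would correspond to one experiment run, ready to be stored or dispatched in parallel.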
1305.1114 | Towards User Profile Modelling in Recommender System | cs.IR | The notion of a profile appeared in the 1970s, mainly due to
the need to create custom applications that could be adapted to the user. In
this paper, we treat the different aspects of the user's profile: we define the
profile, its features and its indicators of interest, and then we describe the
different approaches to modelling and acquiring the user's interests.
|
1305.1120 | The predictability of consumer visitation patterns | physics.soc-ph cs.SI | We consider hundreds of thousands of individual economic transactions to ask:
how predictable are consumers in their merchant visitation patterns? Our
results suggest that, in the long-run, much of our seemingly elective activity
is actually highly predictable. Notwithstanding a wide range of individual
preferences, shoppers share regularities in how they visit merchant locations
over time. Yet while aggregate behavior is largely predictable, the
interleaving of shopping events introduces important stochastic elements at
short time scales. These short- and long-scale patterns suggest a theoretical
upper bound on predictability, and describe the accuracy of a Markov model in
predicting a person's next location. We incorporate population-level transition
probabilities in the predictive models, and find that in many cases these
improve accuracy. While our results point to the elusiveness of precise
predictions about where a person will go next, they suggest the existence, at
large time scales, of regularities across the population.
|
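A minimal sketch of the kind of first-order Markov predictor with a population-level fallback that the abstract describes (illustrative only; the paper's models and data are far richer):

```python
from collections import Counter, defaultdict

def fit_markov(visits):
    """First-order Markov model: count transitions between consecutive merchants."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(visits, visits[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current, population_counts=None):
    """Predict the most likely next merchant; fall back to population-level
    counts (echoing the paper's population-augmented variant) for unseen states."""
    if transitions[current]:
        return transitions[current].most_common(1)[0][0]
    if population_counts:
        return population_counts.most_common(1)[0][0]
    return None
```

The accuracy of exactly this kind of predictor is what the theoretical upper bound on predictability is compared against.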
1305.1145 | Techniques for Feature Extraction In Speech Recognition System : A
Comparative Study | cs.SD cs.CL | The time domain waveform of a speech signal carries all of the auditory
information. From the phonological point of view, however, little can be said on
the basis of the waveform itself. Past research in mathematics, acoustics,
and speech technology has provided many methods for converting data that can
be considered as information if interpreted correctly. In order to find some
statistically relevant information from incoming data, it is important to have
mechanisms for reducing the information of each segment in the audio signal
to a relatively small number of parameters, or features. These features
should describe each segment in such a characteristic way that other similar
segments can be grouped together by comparing their features. There are
numerous interesting and exceptional ways to describe the speech signal in
terms of parameters. Though they all have their strengths and weaknesses, we
present some of the most widely used methods along with their importance.
|
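The segment-to-features reduction described above can be illustrated with two classic frame-level features, log energy and zero-crossing rate. Real recognizers use richer features such as MFCCs, but the reduction principle is the same; frame and hop sizes below are arbitrary choices:

```python
import numpy as np

def frame_features(signal, frame_len=256, hop=128):
    """Reduce each short, overlapping segment of a waveform to two features:
    log energy and zero-crossing rate."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        log_energy = np.log(np.sum(frame ** 2) + 1e-12)
        # each sign change contributes |diff| = 2, so divide by 2 per sample
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
        feats.append((log_energy, zcr))
    return np.array(feats)
```

Similar segments then map to nearby points in this small feature space, which is what enables grouping by feature comparison.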
1305.1163 | A Computer Vision System for Attention Mapping in SLAM based 3D Models | cs.CV | The study of human factors in the frame of interaction studies has been
relevant for usability engineering and ergonomics for decades. Today, with the
advent of wearable eye-tracking and Google glasses, monitoring of human factors
will soon become ubiquitous. This work describes a computer vision system that
enables pervasive mapping and monitoring of human attention. The key
contribution is that our methodology enables full 3D recovery of the gaze
pointer, human view frustum and associated human centred measurements directly
into an automatically computed 3D model in real-time. We apply RGB-D SLAM and
descriptor matching methodologies for the 3D modelling, localization and fully
automated annotation of ROIs (regions of interest) within the acquired 3D
model. This innovative methodology will open new avenues for attention studies
in real world environments, bringing new potential into automated processing
for human factors technologies.
|
1305.1169 | Multi-Objective AI Planning: Comparing Aggregation and Pareto Approaches | cs.AI | Most real-world Planning problems are multi-objective, trying to minimize
both the makespan of the solution plan, and some cost of the actions involved
in the plan. But most, if not all, existing approaches are based on
single-objective planners, and use an aggregation of the objectives to remain
in the single-objective context. Divide and Evolve (DaE) is an evolutionary
planner that won the temporal deterministic satisficing track at the latest
International Planning Competition (IPC). Like all Evolutionary Algorithms
(EA), it can easily be turned into a Pareto-based Multi-Objective EA. It is
however important to validate the resulting algorithm by comparing it with the
aggregation approach: this is the goal of this paper. The comparative
experiments on a recently proposed benchmark set that are reported here
demonstrate the usefulness of going Pareto-based in AI Planning.
|
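The Pareto-based alternative to aggregation rests on non-dominance between candidate plans. A minimal filter over hypothetical (makespan, cost) pairs, both to be minimized:

```python
def pareto_front(plans):
    """Return the non-dominated (makespan, cost) pairs: a plan is dominated if
    another plan is no worse on both objectives and strictly better on one."""
    front = []
    for p in plans:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in plans
        )
        if not dominated:
            front.append(p)
    return front
```

An aggregation approach would instead collapse each pair to a single weighted score, which is exactly the design choice the paper compares against.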
1305.1172 | Gromov-Hausdorff Approximation of Metric Spaces with Linear Structure | cs.CG cs.LG math.MG | In many real-world applications data come as discrete metric spaces sampled
around 1-dimensional filamentary structures that can be seen as metric graphs.
In this paper we address the metric reconstruction problem of such filamentary
structures from data sampled around them. We prove that they can be
approximated, with respect to the Gromov-Hausdorff distance by well-chosen Reeb
graphs (and some of their variants) and we provide an efficient and easy to
implement algorithm to compute such approximations in almost linear time. We
illustrate the performances of our algorithm on a few synthetic and real data
sets.
|
1305.1175 | IMDB network revisited: unveiling fractal and modular properties from a
typical small-world network | physics.soc-ph cs.SI | We study a subset of the movie collaboration network, imdb.com, where only
adult movies are included. We show that there are many benefits in using such a
network, which can serve as a prototype for studying social interactions. We
find that the strength of links, i.e., how many times two actors have
collaborated with each other, is an important factor that can significantly
influence the network topology. We see that when we link all actors in the same
movie with each other, the network becomes small-world, lacking a proper
modular structure. On the other hand, by imposing a threshold on the minimum
number of links two actors should have to be in our studied subset, the network
topology becomes naturally fractal. This occurs due to a large number of
meaningless links, namely, links connecting actors that did not actually
interact. We focus our analysis on the fractal and modular properties of this
resulting network, and show that the renormalization group analysis can
characterize the self-similar structure of these networks.
|
1305.1187 | Calculation of the Performance of Communication Systems from Measured
Oscillator Phase Noise | cs.IT math.IT | Oscillator phase noise (PN) is one of the major problems that affect the
performance of communication systems. In this paper, a direct connection
between oscillator measurements, in terms of measured single-side band PN
spectrum, and the optimal communication system performance, in terms of the
resulting error vector magnitude (EVM) due to PN, is mathematically derived and
analyzed. First, a statistical model of the PN, considering the effect of white
and colored noise sources, is derived. Then, we utilize this model to derive
the modified Bayesian Cramer-Rao bound on PN estimation, and use it to find an
EVM bound for the system performance. Based on our analysis, it is found that
the influence from different noise regions strongly depends on the
communication bandwidth, i.e., the symbol rate. For high-symbol-rate
communication systems, cumulative PN that appears near the carrier is of relatively
low importance compared to the white PN far from the carrier. Our results also show
that 1/f^3 noise is more predictable compared to 1/f^2 noise, and in a fair
comparison it affects the performance less.
|
1305.1193 | Canonical Forms and Automorphisms in the Projective Space | cs.IT cs.DM math.CO math.IT | Let $\C$ be a sequence of multisets of subspaces of a vector space $\F_q^k$.
We describe a practical algorithm which computes a canonical form and the
stabilizer of $\C$ under the group action of the general semilinear group. It
allows us to solve canonical form problems in coding theory, i.e., we are able
to compute canonical forms of linear codes, $\F_{q}$-linear block codes over
the alphabet $\F_{q^s}$ and random network codes under their natural notion of
equivalence. The algorithm that we are going to develop is based on the
partition refinement method and generalizes a previous work by the author on
the computation of canonical forms of linear codes.
|
1305.1199 | How to find real-world applications for compressive sensing | cs.CV | The potential of compressive sensing (CS) has spurred great interest in the
research community and is a fast growing area of research. However, research
translating CS theory into practical hardware and demonstrating clear and
significant benefits with this hardware over current, conventional imaging
techniques has been limited. This article helps researchers to find those niche
applications where the CS approach provides substantial gain over conventional
approaches by articulating lessons learned in finding one such application:
sea-skimming missile detection. As a proof of concept, it is demonstrated that a
simplified CS missile detection architecture and algorithm provides results
comparable to those of the conventional imaging approach while using a smaller
focal plane array (FPA). The
primary message is that all of the excitement surrounding CS is necessary and
appropriate for encouraging our creativity, but we must all also take off our
"rose-colored glasses" and critically judge our ideas, methods and results
relative to conventional imaging approaches.
|
1305.1206 | A Contrario Selection of Optimal Partitions for Image Segmentation | cs.CV | We present a novel segmentation algorithm based on a hierarchical
representation of images. The main contribution of this work is to explore the
capabilities of the A Contrario reasoning when applied to the segmentation
problem, and to overcome the limitations of current algorithms within that
framework. This exploratory approach has three main goals.
Our first goal is to extend the search space of greedy merging algorithms to
the set of all partitions spanned by a certain hierarchy, and to cast the
segmentation as a selection problem within this space. In this way we increase
the number of tested partitions and thus we potentially improve the
segmentation results. In addition, this space is considerably smaller than the
space of all possible partitions, thus we still keep the complexity controlled.
Our second goal is to improve the locality of region merging algorithms,
which usually merge pairs of neighboring regions. In this work, we overcome
this limitation by introducing a validation procedure for complete partitions,
rather than for pairs of regions.
The third goal is to carry out an exhaustive experimental evaluation
in order to provide reproducible results.
Finally, we embed the selection process on a statistical A Contrario
framework which allows us to have only one free parameter related to the
desired scale.
|
1305.1221 | Construction of two SD Codes | cs.IT math.IT | SD codes are erasure codes that address the mixed failure mode of current
RAID systems. Rather than dedicate entire disks to erasure coding, as done in
RAID-5, RAID-6 and Reed-Solomon coding, an SD code dedicates entire disks, plus
individual sectors to erasure coding. The code then tolerates combinations of
disk and sector errors, rather than solely disk errors. It has been an open
problem to construct general codes that have the SD property, and previous work
has relied on Monte Carlo searches. In this paper, we present two general
constructions that address the cases with one disk and two sectors, and two
disks and two sectors. Additionally, we make an observation about shortening SD
codes that allows us to prune Monte Carlo searches.
|
1305.1230 | Rate Distortion Function for a Class of Relative Entropy Sources | cs.IT math.IT | This paper deals with rate distortion or source coding with fidelity
criterion, in measure spaces, for a class of source distributions. The class of
source distributions is described by a relative entropy constraint set between
the true and a nominal distribution. The rate distortion problem for the class
is thus formulated and solved using minimax strategies, which result in robust
source coding with fidelity criterion. It is shown that minimax and maxmin
strategies can be computed explicitly, and they are generalizations of the
classical solution. Finally, for discrete memoryless uncertain sources, the
rate distortion theorem is stated for the class, omitting the derivations, while
the converse is derived.
|
1305.1256 | A Convex Functional for Image Denoising based on Patches with
Constrained Overlaps and its vectorial application to Low Dose Differential
Phase Tomography | math.NA cs.CV | We solve the image denoising problem with a dictionary learning technique by
writing a convex functional of a new form. This functional contains, besides the
usual sparsity-inducing term and the fidelity term, a new term which induces
similarity between overlapping patches in the overlap regions. The functional
depends on two free regularization parameters: a coefficient multiplying the
sparsity-inducing $L_{1}$ norm of the patch basis functions coefficients, and a
coefficient multiplying the $L_{2}$ norm of the differences between patches in
the overlapping regions. The solution is found by applying the iterative
proximal gradient descent method with FISTA acceleration. In the case of
tomography reconstruction we calculate the gradient by applying projection of
the solution and its error backprojection at each iterative step. We study the
quality of the solution, as a function of the regularization parameters and
noise, on synthetic data for which the solution is known a priori. We apply
the method on experimental data in the case of Differential Phase Tomography.
For this case we use an original approach which consists in using vectorial
patches, each patch having two components: one per gradient component. The
resulting algorithm, implemented in the ESRF tomography reconstruction code
PyHST, proves to be robust and efficient, and is well adapted to strongly reducing the
required dose and the number of projections in medical tomography.
|
1305.1268 | A Contraction Analysis of the Convergence of Risk-Sensitive Filters | math.OC cs.SY | A contraction analysis of risk-sensitive Riccati equations is proposed. When
the state-space model is reachable and observable, a block-update
implementation of the risk-sensitive filter is used to show that the N-fold
composition of the Riccati map is strictly contractive with respect to the
Riemannian metric of positive definite matrices, when N is larger than the
number of states. The range of values of the risk-sensitivity parameter for
which the map remains contractive can be estimated a priori. It is also found
that a second condition must be imposed on the risk-sensitivity parameter and
on the initial error variance to ensure that the solution of the risk-sensitive
Riccati equation remains positive definite at all times. The two conditions
obtained can be viewed as extending to the multivariable case an earlier
analysis of Whittle for the scalar case.
|
1305.1288 | Decelerated invasion and waning moon patterns in public goods games with
delayed distribution | physics.soc-ph cond-mat.stat-mech cs.SI q-bio.PE | We study the evolution of cooperation in the spatial public goods game,
focusing on the effects that are brought about by the delayed distribution of
goods that accumulate in groups due to the continuous investments of
cooperators. We find that intermediate delays enhance network reciprocity
because of a decelerated invasion of defectors, who are unable to reap the same
high short-term benefits as they do in the absence of delayed distribution.
Long delays, however, introduce a risk because the large accumulated wealth
might fall into the wrong hands. Indeed, as soon as the curvature of a
cooperative cluster turns negative, the engulfed defectors can collect the
heritage of many generations of cooperators, and by doing so start a waning
moon pattern that nullifies the benefits of decelerated invasion. Accidental
meeting points of growing cooperative clusters may also act as triggers for the
waning moon effect, thus linking the success of cooperators with their
propensity to fail in a rather bizarre way. Our results highlight that
"investing into the future" is a good idea only if that future is sufficiently
near and not likely to be burdened by inflation.
|
1305.1295 | Tight Lower Bounds for Greedy Routing in Higher-Dimensional Small-World
Grids | cs.DS cs.CC cs.NI cs.SI | We consider Kleinberg's celebrated small world graph model (Kleinberg, 2000),
in which a D-dimensional grid {0,...,n-1}^D is augmented with a constant number
of additional unidirectional edges leaving each node. These long range edges
are determined at random according to a probability distribution (the
augmenting distribution), which is the same for each node. Kleinberg suggested
using the inverse D-th power distribution, in which node v is the long range
contact of node u with a probability proportional to ||u-v||^(-D). He showed
that such an augmenting distribution allows a message to be routed efficiently in
the resulting random graph: the greedy algorithm, where in each intermediate
node the message travels over a link that brings the message closest to the
target w.r.t. the Manhattan distance, finds a path of expected length O(log^2
n) between any two nodes. In this paper we prove that greedy routing does not
perform asymptotically better for any uniform and isotropic augmenting
distribution, i.e., the probability that node u has a particular long range
contact v is independent of the labels of u and v and only a function of
||u-v||.
In order to obtain the result, we introduce a novel proof technique: We
define a budget game, in which a token travels over a game board, while the
player manages a "probability budget". In each round, the player bets part of
her remaining probability budget on step sizes. A step size is chosen at random
according to the probability distribution given by the player's bet. The token then
makes progress as determined by the chosen step size, while some of the
player's bet is removed from her probability budget. We prove a tight lower
bound for such a budget game, and then obtain a lower bound for greedy routing
in the D-dimensional grid by a reduction.
|
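Kleinberg's greedy routing is easy to simulate in the D = 1 case. The sketch below samples one long-range contact per node with probability proportional to the inverse distance and routes greedily; it illustrates the model only, not the paper's budget-game lower-bound machinery:

```python
import random

def build_contacts(n, seed=0):
    """Each node u gets one long-range contact v with Pr ∝ |u - v|^(-1) (D = 1)."""
    rng = random.Random(seed)
    contacts = {}
    for u in range(n):
        others = [v for v in range(n) if v != u]
        weights = [1.0 / abs(u - v) for v in others]
        contacts[u] = rng.choices(others, weights=weights)[0]
    return contacts

def greedy_route(contacts, s, t, n):
    """From u, move to whichever of u-1, u+1, or the long-range contact is
    closest to t; the distance to t strictly decreases, so this terminates."""
    hops, u = 0, s
    while u != t:
        candidates = [v for v in (u - 1, u + 1, contacts[u]) if 0 <= v < n]
        u = min(candidates, key=lambda v: abs(v - t))
        hops += 1
    return hops
```

Averaging the hop count over many source/target pairs and several random graphs would reproduce the O(log^2 n) behavior empirically.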
1305.1319 | New Alignment Methods for Discriminative Book Summarization | cs.CL | We consider the unsupervised alignment of the full text of a book with a
human-written summary. This presents challenges not seen in other text
alignment problems, including a disparity in length and, consequent to this, a
violation of the expectation that individual words and phrases should align,
since large passages and chapters can be distilled into a single summary
phrase. We present two new methods, based on hidden Markov models, specifically
targeted to this problem, and demonstrate gains on an extractive book
summarization task. While there is still much room for improvement,
unsupervised alignment holds intrinsic value in offering insight into what
features of a book are deemed worthy of summarization.
|
1305.1343 | Towards an Author-Topic-Term-Model Visualization of 100 Years of German
Sociological Society Proceedings | cs.DL cs.CL cs.IR | Author co-citation studies employ factor analysis to reduce high-dimensional
co-citation matrices to low-dimensional and possibly interpretable factors, but
these studies do not use any information from the text bodies of publications.
We hypothesise that term frequencies may yield useful information for
scientometric analysis. In our work we ask if word features in combination with
Bayesian analysis allow well-founded science mapping studies. This work goes
back to the roots of Mosteller and Wallace's (1964) statistical text analysis
using word frequency features and a Bayesian inference approach, though with
different goals. To answer our research question we (i) introduce a new data
set on which the experiments are carried out, (ii) describe the Bayesian model
employed for inference and (iii) present first results of the analysis.
|
1305.1344 | Speckle Noise Reduction in Medical Ultrasound Images | cs.CV | Ultrasound imaging is an incontestably vital tool for diagnosis: it provides,
in a non-invasive manner, the internal structure of the body, making it possible
to detect diseases or abnormal tissues. Unfortunately, the presence of speckle
noise in these images affects edges and fine details, which limits the contrast
resolution and makes diagnosis more difficult. In this paper, we propose a
denoising approach which combines a logarithmic transformation and a nonlinear
diffusion tensor. Since speckle noise is a multiplicative, non-white process,
the logarithmic transformation is a reasonable choice for converting
signal-dependent or purely multiplicative noise into additive noise. The key idea
behind using a diffusion tensor is to adapt the diffusion flow to the local
orientation by applying anisotropic diffusion along the coherent structure
direction of interesting features in the image. To illustrate the effective
performance of our algorithm, we present experimental results on
synthetic and real echographic images.
|
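The log-transform argument can be checked numerically: under a multiplicative noise model, taking the logarithm turns the noise into an additive, signal-independent term. A synthetic 1-D illustration with a hypothetical gamma-distributed, mean-one speckle term:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(50.0, 200.0, size=10_000)                 # stand-in for true intensities
speckle = rng.gamma(shape=10.0, scale=0.1, size=clean.shape)  # mean-1 multiplicative noise
noisy = clean * speckle                                       # multiplicative (speckle-like) model

# After the log, the noise term separates additively:
log_noisy = np.log(noisy)             # = log(clean) + log(speckle)
residual = log_noisy - np.log(clean)  # the additive noise term, independent of the signal
```

Once the noise is additive, standard denoisers (here, the anisotropic diffusion tensor) apply in the log domain, and the result is exponentiated back.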
1305.1359 | A Differential Equations Approach to Optimizing Regret Trade-offs | cs.LG | We consider the classical question of predicting binary sequences and study
the {\em optimal} algorithms for obtaining the best possible regret and payoff
functions for this problem. The question turns out to be also equivalent to the
problem of optimal trade-offs between the regrets of two experts in an "experts
problem", studied before by \cite{kearns-regret}. While, say, a regret of
$\Theta(\sqrt{T})$ is known, we argue that it is important to ask what the
provably optimal algorithm for this problem is, both because it leads to
natural algorithms and because regret is in fact often comparable in
magnitude to the final payoffs and hence is a non-negligible term.
In the basic setting, the result essentially follows from a classical result
of Cover from 1965. Here instead, we focus on another standard setting, of
time-discounted payoffs, where the final "stopping time" is not specified. We
exhibit an explicit characterization of the optimal regret for this setting.
To obtain our main result, we show that the optimal payoff functions have to
satisfy the Hermite differential equation, and hence are given by the solutions
to this equation. It turns out that characterization of the payoff function is
qualitatively different from the classical (non-discounted) setting; namely,
there is essentially a unique optimal solution.
|
1305.1363 | One-Pass AUC Optimization | cs.LG | AUC is an important performance measure and many algorithms have been devoted
to AUC optimization, mostly by minimizing a surrogate convex loss on a training
data set. In this work, we focus on one-pass AUC optimization that requires
only going through the training data once without storing the entire training
dataset, where conventional online learning algorithms cannot be applied
directly because AUC is measured by a sum of losses defined over pairs of
instances from different classes. We develop a regression-based algorithm which
only needs to maintain the first and second order statistics of training data
in memory, resulting in a storage requirement independent of the size of the
training data. To efficiently handle high-dimensional data, we develop a
randomized algorithm that approximates the covariance matrices by low rank
matrices. We verify, both theoretically and empirically, the effectiveness of
the proposed algorithm.
|
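The storage argument can be sketched as follows. This toy keeps only per-class first-order statistics in a single pass, so memory is O(d) regardless of the number of examples; it is a deliberate simplification of the paper's algorithm, which also maintains second-order statistics and solves a regression problem:

```python
import numpy as np

class StreamingClassStats:
    """Maintain per-class running sums in one pass over the data."""
    def __init__(self, dim):
        self.sums = {+1: np.zeros(dim), -1: np.zeros(dim)}
        self.counts = {+1: 0, -1: 0}

    def update(self, x, y):
        self.sums[y] += x
        self.counts[y] += 1

    def scorer(self):
        """Score along the difference of class means (a crude linear scorer)."""
        w = self.sums[+1] / self.counts[+1] - self.sums[-1] / self.counts[-1]
        return lambda x: x @ w

def auc(scores_pos, scores_neg):
    """AUC = fraction of (positive, negative) pairs ranked correctly."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

The pairwise `auc` above also shows why naive online learning struggles: the loss couples examples from different classes, which is exactly what the streaming statistics sidestep.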
1305.1371 | Granular association rules for multi-valued data | cs.IR cs.DB | Granular association rules are a new approach to revealing patterns hidden in
many-to-many relationships of relational databases. Different types of data,
such as nominal, numeric and multi-valued ones, must be dealt with in the
process of rule mining. In this paper, we study multi-valued data and develop
techniques to filter out strong but uninteresting rules. An example of such a
rule might be "male students rate movies released in the 1990s that are NOT
thrillers." This kind of rule, called a negative granular association rule,
often overwhelms the more useful positive ones. To address this issue, we
filter out negative granules such as "NOT thriller" in the process of granule
generation. In this way, only positive granular association rules are generated
and strong ones are mined. Experimental results on the MovieLens data set
indicate that most rules are negative, and our technique is effective at filtering
them out.
|
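The negative-granule filtering step can be sketched directly. The granule names and the "NOT " prefix convention below are illustrative assumptions, not the paper's representation:

```python
from itertools import product

def positive_granules(granules):
    """Drop negative granules (e.g. 'NOT thriller') before rule generation,
    so that only positive granular association rules can be formed."""
    return [g for g in granules if not g.startswith("NOT ")]

def candidate_rules(user_granules, item_granules):
    """Candidate 'these users rate these items' rules from positive granules only."""
    return list(product(positive_granules(user_granules),
                        positive_granules(item_granules)))
```

Filtering at granule-generation time, rather than after mining, is the point: the dominant negative rules are never generated at all.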
1305.1372 | Cold-start recommendation through granular association rules | cs.IR | Recommender systems are popular in e-commerce as they suggest items of
interest to users. Researchers have addressed the cold-start problem where
either the user or the item is new. However, the situation with both new user
and new item has seldom been considered. In this paper, we propose a cold-start
recommendation approach to this situation based on granular association rules.
Specifically, we provide a means for describing users and items through
information granules, a means for generating association rules between users
and items, and a means for recommending items to users using these rules.
Experiments are undertaken on a publicly available dataset MovieLens. Results
indicate that rule sets perform similarly on the training and the testing sets,
and that an appropriate granule setting is essential to the application of
granular association rules.
|
1305.1375 | Unique Perfect Phylogeny Characterizations via Uniquely Representable
Chordal Graphs | cs.DM cs.CE math.CO q-bio.QM | The perfect phylogeny problem is a classic problem in computational biology,
where we seek an unrooted phylogeny that is compatible with a set of
qualitative characters. Such a tree exists precisely when an intersection graph
associated with the character set, called the partition intersection graph, can
be triangulated using a restricted set of fill edges. Semple and Steel used the
partition intersection graph to characterize when a character set has a unique
perfect phylogeny. Bordewich, Huber, and Semple showed how to use the partition
intersection graph to find a maximum compatible set of characters. In this
paper, we build on these results, characterizing when a unique perfect
phylogeny exists for a subset of partial characters. Our characterization is
stated in terms of minimal triangulations of the partition intersection graph
that are uniquely representable, also known as ur-chordal graphs. Our
characterization is motivated by the structure of ur-chordal graphs, and the
fact that the block structure of minimal triangulations is mirrored in the
graph that has been triangulated.
|
1305.1396 | A new framework for optimal classifier design | cs.CV cs.LG stat.ML | The use of alternative measures to evaluate classifier performance is gaining
attention, especially for imbalanced problems. However, the use of these
measures in the classifier design process remains an open problem. In this work we
propose a classifier designed specifically to optimize one of these alternative
measures, namely, the so-called F-measure. Nevertheless, the technique is
general, and it can be used to optimize other evaluation measures. An algorithm
to train the novel classifier is proposed, and the numerical scheme is tested
with several databases, showing the optimality and robustness of the presented
classifier.
|
1305.1397 | How Many Queries Will Resolve Common Randomness? | cs.IT cs.CR math.IT | A set of m terminals, observing correlated signals, communicate interactively
to generate common randomness for a given subset of them. Knowing only the
communication, how many direct queries of the value of the common randomness
will resolve it? A general upper bound, valid for arbitrary signal alphabets,
is developed for the number of such queries by using a query strategy that
applies to all common randomness and associated communication. When the
underlying signals are independent and identically distributed repetitions of m
correlated random variables, the number of queries can be exponential in signal
length. For this case, the mentioned upper bound is tight and leads to a
single-letter formula for the largest query exponent, which coincides with the
secret key capacity of a corresponding multiterminal source model. In fact, the
upper bound constitutes a strong converse for the optimum query exponent, and
implies also a new strong converse for secret key capacity. A key tool,
estimating the size of a large probability set in terms of Renyi entropy, is
interpreted separately, too, as a lossless block coding result for general
sources. As a particularization, it yields the classic result for a discrete
memoryless source.
|
1305.1415 | Centralized and Cooperative Transmission of Secure Multiple Unicasts
using Network Coding | cs.IT cs.CR math.IT | We introduce a method for securely delivering a set of messages to a group of
clients over a broadcast erasure channel where each client is interested in a
distinct message. Each client is able to obtain its own message but not the
others'. In the proposed method the messages are combined together using a
special variant of random linear network coding. Each client is provided with a
private set of decoding coefficients to decode its own message. Our method
provides security for the transmission sessions against computational
brute-force attacks, as well as weak security in the information-theoretic sense. As
the broadcast channel is assumed to be erroneous, the missing coded packets
should be recovered in some way. We consider two different scenarios. In the
first scenario the missing packets are retransmitted by the base station
(centralized). In the second scenario the clients cooperate with each other by
exchanging packets (decentralized). In both scenarios, network coding
techniques are exploited to increase the total throughput. For the case of
centralized retransmissions we provide an analytical approximation for the
throughput performance of instantly decodable network coded (IDNC)
retransmissions as well as numerical experiments. For the decentralized
scenario, we propose a new IDNC-based retransmission method whose performance
is evaluated via simulations and analytical approximation.
Application of this method is not limited to our special problem and can be
generalized to a new class of problems introduced in this paper as the
cooperative index coding problem.
|
1305.1422 | Somoclu: An Efficient Parallel Library for Self-Organizing Maps | cs.DC cs.MS cs.NE | Somoclu is a massively parallel tool for training self-organizing maps on
large data sets written in C++. It builds on OpenMP for multicore execution,
and on MPI for distributing the workload across the nodes in a cluster. It is
also able to boost training by using CUDA if graphics processing units are
available. A sparse kernel is included, which is useful for high-dimensional
but sparse data, such as the vector spaces common in text mining workflows.
Python, R and MATLAB interfaces facilitate interactive use. Apart from fast
execution, memory use is highly optimized, enabling training large emergent
maps even on a single computer.
|
1305.1426 | Speech Enhancement Modeling Towards Robust Speech Recognition System | cs.SD cs.CL | For about four decades human beings have been dreaming of an intelligent
machine which can master the natural speech. In its simplest form, this machine
should consist of two subsystems, namely automatic speech recognition (ASR) and
speech understanding (SU). The goal of ASR is to transcribe natural speech,
while that of SU is to understand the meaning of the transcription. Recognizing and
understanding a spoken sentence is obviously a knowledge-intensive process,
which must take into account all variable information about the speech
communication process, from acoustics to semantics and pragmatics. While
developing an Automatic Speech Recognition System, it is observed that some
adverse conditions degrade the performance of the Speech Recognition System. In
this contribution, a speech enhancement system is introduced for enhancing speech
signals corrupted by additive noise and improving the performance of Automatic
Speech Recognizers in noisy conditions. Automatic speech recognition
experiments show that replacing noisy speech signals by the corresponding
enhanced speech signals leads to an improvement in the recognition accuracies.
The amount of improvement varies with the type of the corrupting noise.
|
1305.1427 | Achievable Rate Derivations and Further Simulation Results for
"Physical-Layer Multicasting by Stochastic Transmit Beamforming and Alamouti
Space-Time Coding" | cs.IT math.IT | This is a companion technical report of the main manuscript "Physical-Layer
Multicasting by Stochastic Transmit Beamforming and Alamouti Space-Time
Coding". The report serves to give detailed derivations of the achievable rate
functions encountered in the main manuscript, which are too long to be included
in the latter. In addition, more simulation results are presented to verify the
viability of the multicast schemes developed in the main manuscript.
|
1305.1429 | Speech User Interface for Information Retrieval | cs.IR | Along with the rapid development of information technology, the amount of
information generated at any given time far exceeds humans' ability to
organize, search, and manipulate it without the help of automatic systems.
Nowadays many tools and techniques are available for the storage and retrieval
of information. Users interact with these techniques through an interface,
mostly a text user interface (TUI) or a graphical user interface (GUI). Here,
I introduce a new interface, namely speech, for information retrieval. The
goal of this project is to develop a speech interface that can search and read
the required information from the database effectively, efficiently and in a
user-friendly manner. This tool will be highly useful to blind people, who
will be able to request information from the computer by giving voice commands
(keywords) through a microphone and listen to the required information through
speakers or headphones.
|
1305.1434 | Gateway Switching in Q/V Band Satellite Feeder Links | cs.IT math.IT | A main challenge towards realizing the next generation Terabit/s broadband
satellite communications (SatCom) is the limited spectrum available in the Ka
band. An attractive solution is to move the feeder link to the higher Q/V band,
where more spectrum is available. When utilizing the Q/V band, due to heavy
rain attenuation, gateway diversity is considered a necessity to ensure the
required feeder link availability. Although receive site diversity has been
studied in the past for SatCom, there is much less maturity in terms of
transmit diversity techniques. In this paper, a modified switch and stay
combining scheme is proposed for a Q/V band feeder link, but its performance is
also evaluated over an end-to-end satellite link. The proposed scheme is
pragmatic and has close to optimal performance with notably lower complexity.
|
1305.1439 | Supervision Localization of Timed Discrete-Event Systems | cs.SY | We study supervisor localization for real-time discrete-event systems (DES)
in the Brandin-Wonham framework of timed supervisory control. We view a
real-time DES as comprised of asynchronous agents which are coupled through
imposed logical and temporal specifications; the essence of supervisor
localization is the decomposition of monolithic (global) control action into
local control strategies for these individual agents. This study extends our
previous work on supervisor localization for untimed DES, in that monolithic
timed control action typically includes not only disabling action as in the
untimed case, but also ``clock preempting'' action which enforces prescribed
temporal behavior. The latter action is executed by a class of special events,
called ``forcible'' events; accordingly, we localize monolithic preemptive
action with respect to these events. We demonstrate the new features of timed
supervisor localization with a manufacturing cell case study, and discuss a
distributed control implementation.
|
1305.1443 | Standard Fingerprint Databases: Manual Minutiae Labeling and Matcher
Performance Analyses | cs.CV | Fingerprint verification and identification algorithms based on minutiae
features are used in many biometric systems today (e.g., governmental e-ID
programs, border control, AFIS, personal authentication for portable devices).
Researchers in industry/academia are now able to utilize many publicly
available fingerprint databases (e.g., Fingerprint Verification Competition
(FVC) & NIST databases) to compare/evaluate their feature extraction and/or
matching algorithm performances against those of others. The results from these
evaluations are typically utilized by decision makers responsible for
implementing the cited biometric systems, in selecting/tuning specific sensors,
feature extractors and matchers. In this study, for a subset of the cited
public fingerprint databases, we report fingerprint minutiae matching results,
which are based on (i) minutiae extracted automatically from fingerprint
images, and (ii) minutiae extracted manually by human subjects. By doing so, we
are able to (i) quantitatively judge the performance differences between these
two cases, (ii) elaborate on performance upper bounds of minutiae matching,
utilizing what can be termed as "ground truth" minutiae features, (iii) analyze
minutiae matching performance, without coupling it with the minutiae extraction
performance beforehand. Further, as we will freely distribute the minutiae
templates, originating from this manual labeling study, in a standard minutiae
template exchange format (ISO 19794-2), we believe that other researchers in
the biometrics community will be able to utilize the associated results &
templates to create their own evaluations pertaining to their fingerprint
minutiae extractors/matchers.
|
1305.1454 | A constrained tropical optimization problem: complete solution and
application example | math.OC cs.SY | The paper focuses on a multidimensional optimization problem, which is
formulated in terms of tropical mathematics and consists in minimizing a
nonlinear objective function subject to linear inequality constraints. To solve
the problem, we follow an approach based on the introduction of an additional
unknown variable to reduce the problem to solving linear inequalities, where
the variable plays the role of a parameter. A necessary and sufficient
condition for the inequalities to hold is used to evaluate the parameter,
whereas the general solution of the inequalities is taken as a solution of the
original problem. Under fairly general assumptions, a complete direct solution
to the problem is obtained in a compact vector form. The result is applied to
solve a problem in project scheduling when an optimal schedule is given by
minimizing the flow time of activities in a project under various activity
precedence constraints. As an illustration, a numerical example of optimal
scheduling is also presented.
|
1305.1459 | EURETILE 2010-2012 summary: first three years of activity of the
European Reference Tiled Experiment | cs.DC cs.AR cs.NE cs.OS cs.PL | This is a summary of the first three years of activity of the EURETILE FP7
project 247846. EURETILE investigates and implements brain-inspired and
fault-tolerant foundational innovations to the system architecture of massively
parallel tiled computer architectures and the corresponding programming
paradigm. The execution targets are a many-tile HW platform, and a many-tile
simulator. A set of SW process - HW tile mapping candidates is generated by the
holistic SW tool-chain using a combination of analytic and bio-inspired
methods. The Hardware dependent Software is then generated, providing OS
services with maximum efficiency/minimal overhead. The many-tile simulator
collects profiling data, closing the loop of the SW tool chain. Fine-grain
parallelism inside processes is exploited by optimized intra-tile compilation
techniques, but the project focus is above the level of the elementary tile.
The elementary HW tile is a multi-processor, which includes a fault tolerant
Distributed Network Processor (for inter-tile communication) and ASIP
accelerators. Furthermore, EURETILE investigates and implements the innovations
for equipping the elementary HW tile with high-bandwidth, low-latency
brain-like inter-tile communication emulating 3 levels of connection hierarchy,
namely neural columns, cortical areas and cortex, and develops a dedicated
cortical simulation benchmark: DPSNN-STDP (Distributed Polychronous Spiking
Neural Net with synaptic Spiking Time Dependent Plasticity). EURETILE leverages
on the multi-tile HW paradigm and SW tool-chain developed by the FET-ACA SHAPES
Integrated Project (2006-2009).
|
1305.1477 | Sharp control time for viscoelastic bodies | cs.SY math.AP | It is now well understood that equations of viscoelasticity can be seen as
perturbations of wave-type equations. This observation can be exploited in
several different ways, and it turns out to be a useful tool when studying
controllability. Here we compare a viscoelastic system which fills a surface of
a solid region (the string case has already been studied) with its memoryless
counterpart (which is a generalized telegraph equation) in order to prove exact
controllability of the viscoelastic body at precisely the same times at which
the telegraph equation is controllable.
The comparison is done using a moment method approach to controllability and
we prove, using the perturbation theorems of Paley-Wiener and Bari, that a new
sequence derived from the viscoelastic system is a Riesz sequence, a fact that
implies controllability of the viscoelastic system.
The results so obtained generalize existing controllability results and
furthermore show that the ``sharp'' control times for the telegraph equation
and the viscoelastic system coincide.
|
1305.1478 | Generalised Sphere Decoding for Spatial Modulation | cs.IT math.IT | In this paper, Sphere Decoding (SD) algorithms for Spatial Modulation (SM)
are developed to reduce the computational complexity of Maximum-Likelihood (ML)
detectors. Two SDs specifically designed for SM are proposed and analysed in
terms of Bit Error Ratio (BER) and computational complexity.
Using Monte Carlo simulations and mathematical analysis, it is shown that by
carefully choosing the initial radius the proposed sphere decoder algorithms
offer the same BER as ML detection, with a significant reduction in the
computational complexity.
A tight closed form expression for the BER performance of SM-SD is derived in
the paper, along with an algorithm for choosing the initial radius which
provides near to optimum performance. Also, it is shown that none of the
proposed SDs are always superior to the others, but the best SD to use depends
on the target spectral efficiency. The computational complexity trade-off
offered by the proposed solutions is studied via analysis and simulation, and
is shown to validate our findings. Finally, the performance of SM-SDs is
compared to that of Spatial Multiplexing (SMX) with an ML decoder and with SD.
It is shown that for the same spectral efficiency, SM-SD offers up to 84%
reduction in complexity compared to SMX-SD, with up to 1 dB better BER
performance than the SMX-ML decoder.
|
1305.1490 | Degrees of Freedom of Certain Interference Alignment Schemes with
Distributed CSIT | cs.IT math.IT | In this work, we consider the use of interference alignment (IA) in a MIMO
interference channel (IC) under the assumption that each transmitter (TX) has
access to channel state information (CSI) that generally differs from that
available to other TXs. This setting is referred to as distributed CSIT. In a
setting where CSI accuracy is controlled by a set of power exponents, we show
that in the static 3-user MIMO square IC, the number of degrees-of-freedom
(DoF) that can be achieved with distributed CSIT is at least equal to the DoF
achieved with the worst accuracy taken across the TXs and across the
interfering links. We conjecture further that this represents exactly the DoF
achieved. This result is in strong contrast with the centralized CSIT
configuration usually studied (where all the TXs share the same, possibly
imperfect, channel estimate) for which it was shown that the DoF achieved at
receiver (RX) i is solely limited by the quality of its own feedback. This
shows the critical impact of CSI discrepancies between the TXs, and highlights
the price paid by distributed precoding.
|
1305.1495 | GReTA - a novel Global and Recursive Tracking Algorithm in three
dimensions | q-bio.QM cs.CV | Tracking multiple moving targets allows quantitative measure of the dynamic
behavior in systems as diverse as animal groups in biology, turbulence in fluid
dynamics and crowd and traffic control. In three dimensions, tracking several
targets becomes increasingly hard since optical occlusions are very likely,
i.e. two featureless targets frequently overlap for several frames. Occlusions
are particularly frequent in biological groups such as bird flocks, fish
schools, and insect swarms, a fact that has severely limited collective animal
behavior field studies in the past. This paper presents a 3D tracking method
that is robust in the case of severe occlusions. To ensure robustness, we adopt
a global optimization approach that works on all objects and frames at once. To
achieve practicality and scalability, we employ a divide and conquer
formulation, thanks to which the computational complexity of the problem is
reduced by orders of magnitude. We tested our algorithm with synthetic data,
with experimental data of bird flocks and insect swarms and with public
benchmark datasets, and show that our system yields high quality trajectories
for hundreds of moving targets with severe overlap. The results obtained on
very heterogeneous data show the potential applicability of our method to the
most diverse experimental situations.
|
1305.1502 | Willingness Optimization for Social Group Activity | cs.SI physics.soc-ph | Studies show that a person is willing to join a social group activity if the
activity is interesting, and if some close friends also join the activity as
companions. The literature has demonstrated that the interests of a person and
the social tightness among friends can be effectively derived and mined from
social networking websites. However, even with the above two kinds of
information widely available, social group activities still need to be
coordinated manually, and the process is tedious and time-consuming for users,
especially for a large social group activity, due to complications of social
connectivity and the diversity of possible interests among friends. To address
the above important need, this paper proposes to automatically select and
recommend potential attendees of a social group activity, which could be very
useful for social networking websites as a value-added service. We first
formulate a new problem, named Willingness mAximization for Social grOup
(WASO). This paper points out that the solution obtained by a greedy algorithm
is likely to be trapped in a local optimum. Thus, we design a new
randomized algorithm to effectively and efficiently solve the problem. Given
the available computational budgets, the proposed algorithm is able to
optimally allocate the resources and find a solution with an approximation
ratio. We implement the proposed algorithm in Facebook, and the user study
demonstrates that social groups obtained by the proposed algorithm
significantly outperform the solutions manually configured by users.
|
1305.1520 | A Method for Visuo-Spatial Classification of Freehand Shapes Freely
Sketched | cs.CV | We present the principle and the main steps of a new method for the
visuo-spatial analysis of geometrical sketches recorded online. Visuo-spatial
analysis is a necessary step for multi-level analysis. Multi-level analysis
simultaneously allows classification, comparison or clustering of the
constituent parts of a pattern according to their visuo-spatial properties,
their procedural strategies, their structural or temporal parameters, or any
combination of two or more of those parameters. The first results provided by
this method concern the comparison of sketches to some perfect patterns of
simple geometrical figures and the measure of dissimilarity between real
sketches. The mean correct-decision rates obtained, higher than 95%, are
promising in both cases.
|
1305.1525 | Constant-Envelope Multi-User Precoding for Frequency-Selective Massive
MIMO Systems | cs.IT math.IT | We consider downlink precoding in a frequency-selective multi-user Massive
MIMO system with highly efficient but non-linear power amplifiers at the base
station (BS). A low-complexity precoding algorithm is proposed, which generates
constant-envelope (CE) signals at each BS antenna. To achieve a desired
per-user information rate, the extra total transmit power required under the
per-antenna CE constraint, relative to the commonly used, less stringent
total average transmit power constraint, is small.
|
1305.1537 | Shannon capacity of nonlinear regenerative channels | cs.IT math.IT | We compute the Shannon capacity of nonlinear channels with regenerative
elements. Conditions are found under which the capacity of such nonlinear channels is higher
than the Shannon capacity of the classical linear additive white Gaussian noise
channel. We develop a general scheme for designing the proposed channels and
apply it to the particular nonlinear sine-mapping. The upper bound for
regeneration efficiency is found and the asymptotic behavior of the capacity in
the saturation regime is derived.
|
1305.1578 | Projective simulation for classical learning agents: a comprehensive
investigation | nlin.AO cs.AI | We study the model of projective simulation (PS), a novel approach to
artificial intelligence based on stochastic processing of episodic memory which
was recently introduced [H.J. Briegel and G. De las Cuevas. Sci. Rep. 2, 400,
(2012)]. Here we provide a detailed analysis of the model and examine its
performance, including its achievable efficiency, its learning times and the
way both properties scale with the problems' dimension. In addition, we situate
the PS agent in different learning scenarios, and study its learning abilities.
A variety of new scenarios are considered, thereby demonstrating the
model's flexibility. Furthermore, to put the PS scheme in context, we compare
its performance with those of Q-learning and learning classifier systems, two
popular models in the field of reinforcement learning. It is shown that PS is a
competitive artificial intelligence model of unique properties and strengths.
|
1305.1598 | Abelian Group Codes for Source Coding and Channel Coding | cs.IT math.IT | In this paper, we study the asymptotic performance of Abelian group codes for
the lossy source coding problem for arbitrary discrete (finite alphabet)
memoryless sources as well as the channel coding problem for arbitrary discrete
(finite alphabet) memoryless channels. For the source coding problem, we derive
an achievable rate-distortion function that is characterized in a single-letter
information-theoretic form using the ensemble of Abelian group codes. When the
underlying group is a field, it simplifies to the symmetric rate-distortion
function. Similarly, for the channel coding problem, we find an achievable rate
characterized in a single-letter information-theoretic form using group codes.
This simplifies to the symmetric capacity of the channel when the underlying
group is a field. We compute the rate-distortion function and the achievable
rate for several examples of sources and channels. Due to the non-symmetric
nature of the sources and channels considered, our analysis uses a synergy of
information theoretic and group-theoretic tools.
|
1305.1609 | Formal Representation of the SS-DB Benchmark and Experimental Evaluation
in EXTASCID | cs.DB | Evaluating the performance of scientific data processing systems is a
difficult task considering the plethora of application-specific solutions
available in this landscape and the lack of a generally-accepted benchmark. The
dual structure of scientific data coupled with the complex nature of processing
complicate the evaluation procedure further. SS-DB is the first attempt to
define a general benchmark for complex scientific processing over raw and
derived data. It has failed to draw sufficient attention, though, because of
its ambiguous plain-language specification and the extraordinary SciDB results. In
this paper, we remedy the shortcomings of the original SS-DB specification by
providing a formal representation in terms of ArrayQL algebra operators and
ArrayQL/SciQL constructs. These are the first formal representations of the
SS-DB benchmark. Starting from the formal representation, we give a reference
implementation and present benchmark results in EXTASCID, a novel system for
scientific data processing. EXTASCID is complete in providing native support
both for array and relational data and extensible in executing any user code
inside the system by means of a configurable metaoperator. These features
result in an order of magnitude improvement over SciDB at data loading,
extracting derived data, and operations over derived data.
|
1305.1655 | A short note on estimating intelligence from user profiles in the
context of universal psychometrics: prospects and caveats | cs.AI | There has been an increasing interest in inferring some personality traits
from users and players in social networks and games, respectively. This goes
beyond classical sentiment analysis, and also much further than customer
profiling. The purpose here is to have a characterisation of users in terms of
personality traits, such as openness, conscientiousness, extraversion,
agreeableness, and neuroticism. While this is an incipient area of research, we
ask the question of whether cognitive abilities, and intelligence in
particular, are also measurable from user profiles. However, we pose the
question as broadly as possible in terms of subjects, in the context of
universal psychometrics, including humans, machines and hybrids. Namely, in
this paper we analyse the following question: is it possible to measure the
intelligence of humans and (non-human) bots in a social network or a game just
from their user profiles, i.e., by observation, without the use of interactive
tests, such as IQ tests, the Turing test or other more principled machine
intelligence tests?
|
1305.1679 | High Level Pattern Classification via Tourist Walks in Networks | cs.AI cs.LG | Complex networks refer to large-scale graphs with nontrivial connection
patterns. The salient and interesting features that the study of complex
networks offers over graph theory are its emphasis on the dynamical
properties of networks and its ability to inherently uncover the pattern
formation of the vertices. In this paper, we present a hybrid data
classification technique combining a low level and a high level classifier. The
low level term can be equipped with any traditional classification technique,
which realizes the classification task considering only physical features (e.g.,
geometrical or statistical features) of the input data. On the other hand, the
high level term has the ability to detect data patterns with semantic
meaning. In this way, the classification is realized by means of the
extraction of the underlying network's features constructed from the input
data. As a result, the high level classification process measures the
compliance of the test instances with the pattern formation of the training
data. Out of various high level perspectives that can be utilized to capture
semantic meaning, we utilize the dynamical features that are generated from a
tourist walker in a networked environment. Specifically, a weighted combination
of transient and cycle lengths generated by the tourist walk is employed to
that end. Interestingly, our study shows that the proposed technique is able to
further improve the already optimized performance of traditional classification
techniques.
|
1305.1690 | Unsatisfiable Cores for Constraint Programming | cs.LO cs.AI | Constraint Programming (CP) solvers typically tackle optimization problems by
repeatedly finding solutions to a problem while placing tighter and tighter
bounds on the solution cost. This approach is somewhat naive, especially for
soft-constraint optimization problems in which the soft constraints are mostly
satisfied. Unsatisfiable-core approaches to solving soft constraint problems in
Boolean Satisfiability (e.g. MAXSAT) force all soft constraints to hold
initially. When solving fails they return an unsatisfiable core, as a set of
soft constraints that cannot hold simultaneously. Using this information the
problem is relaxed to allow certain soft constraint(s) to be violated and
solving continues. Since Lazy Clause Generation (LCG) solvers can also return
unsatisfiable cores we can adapt the MAXSAT unsatisfiable core approach to CP.
We implement the original MAXSAT unsatisfiable core solving algorithms WPM1,
MSU3 in a state-of-the-art LCG solver and show that there exist problems which
benefit from this hybrid approach.
|
1305.1704 | The Extended Parameter Filter | stat.ML cs.AI | The parameters of temporal models, such as dynamic Bayesian networks, may be
modelled in a Bayesian context as static or atemporal variables that influence
transition probabilities at every time step. Particle filters fail for models
that include such variables, while methods that use Gibbs sampling of parameter
variables may incur a per-sample cost that grows linearly with the length of
the observation sequence. Storvik devised a method for incremental computation
of exact sufficient statistics that, for some cases, reduces the per-sample
cost to a constant. In this paper, we demonstrate a connection between
Storvik's filter and a Kalman filter in parameter space and establish more
general conditions under which Storvik's filter works. Drawing on an analogy to
the extended Kalman filter, we develop and analyze, both theoretically and
experimentally, a Taylor approximation to the parameter posterior that allows
Storvik's method to be applied to a broader class of models. Our experiments on
both synthetic examples and real applications show improvement over existing
methods.
|
1305.1707 | Class Imbalance Problem in Data Mining Review | cs.LG | In recent years, major changes have taken place in the
classification of data. As the application areas of technology grow, the size
of data also increases, and classification becomes difficult because of the
unbounded size and imbalanced nature of the data. The class imbalance problem
has become a key issue in data mining. An imbalance problem occurs when one of
the classes has many more samples than the others. Most algorithms focus on
classifying the majority samples while ignoring or misclassifying the minority
samples, yet the minority samples are those that occur rarely but are very
important. The available methods for classifying imbalanced data sets fall into
three main categories: the algorithmic approach, the data-preprocessing
approach, and the feature-selection approach. Each of these techniques has its
own advantages and disadvantages. In this paper a systematic study of each
approach is given, which points out the right direction for research on the
class imbalance problem.
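As a toy illustration of the data-preprocessing category, a minimal
random-oversampling sketch (a generic balancing technique, not any specific
method surveyed in the paper) might look like the following:

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples until every class reaches the
    size of the largest class (random oversampling)."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        padded = list(group)
        while len(padded) < target:
            padded.append(rng.choice(group))  # re-draw minority samples
        out_samples.extend(padded)
        out_labels.extend([y] * target)
    return out_samples, out_labels
```

Oversampling preserves every minority sample at the cost of duplicates;
undersampling would instead discard majority samples.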
|
1305.1713 | Optimization of stochastic database cracking | cs.DB | Stochastic cracking is a significantly more resilient approach to
adaptive indexing. It has been shown [1] that stochastic cracking uses each
query as a hint on how to reorganize data, but not blindly so; it gains
resilience and avoids performance bottlenecks by deliberately applying certain
arbitrary choices in its decision making. It thereby brings adaptive indexing
forward to a mature formulation that confers the workload-robustness that
previous approaches lacked. Original cracking relies on the randomness of the
workloads to converge well [2][3]. However, where the workload is not random,
cracking needs to introduce randomness on its own. Stochastic cracking clearly
improves over original cracking by being robust to workload changes while
maintaining all original cracking features when it comes to adaptation.
Looking at both types of cracking, however, conveys an incomplete picture,
since at some point one must know whether the workload is random or
sequential. In this paper our focus is on the optimization of stochastic
cracking, which can be achieved in two ways: either by reducing the
initialization cost, to make stochastic cracking even more transparent to the
user, especially for queries that initiate a workload change and hence incur a
higher cost; or by combining the strengths of the various stochastic cracking
algorithms via a dynamic component that decides on the fly which algorithm to
use for a query. We have put effort into an algorithm that reduces the
initialization cost by using the main notions of both kinds of cracking, while
considering the requirements of adaptive indexing [2].
|
1305.1729 | A Simple Technique for the Converse of Finite Blocklength Multiple
Access Channels | cs.IT math.IT | A converse for the Discrete Memoryless Multiple Access Channel is given. The
result in [13] is refined, and the third order term is obtained. Moreover, our
proof is much simpler than that of [13]. With little modification, the region
can be
further improved.
|
1305.1730 | The Redundancy of Slepian-Wolf Coding Revisited | cs.IT math.IT | [Draft] In this paper, the redundancy of Slepian Wolf coding is revisited.
Applying the random binning and converse technique in \cite{yang}, the same
results in \cite{he} are obtained with much simpler proofs. Moreover, our
results reflect more details about the high-order terms of the coding rate. The
redundancy is investigated for both fixed-rate and variable-rate cases. The
normal approximation (or dispersion) can also be obtained with minor
modification.
|
1305.1734 | When Politicians Tweet: A Study on the Members of the German Federal
Diet | cs.SI physics.soc-ph | In this preliminary study we compare the characteristics of retweets and
replies on more than 350,000 messages collected by following members of the
German Federal Diet on Twitter. We find significant differences in the
characteristics pointing to distinct types of usages for retweets and replies.
Using time series and regression analysis we observe that the likelihood of a
politician using replies increases with typical leisure times while retweets
occur constant over time. Including formal references increases the probability
of a message being retweeted but drops its chance of being replied. This hints
to a more professional use for retweets while replies tend to have a personal
connotation.
|
1305.1745 | Mobile Recommender Systems Methods: An Overview | cs.IR | The information that mobile devices can access has become very broad, and the
user faces a dilemma: an almost unlimited pool of information is available to
him, but he is unable to find the exact information he is looking for. This is
why current research aims to design Recommender Systems (RS) able to
continually send information that matches the user's interests, in order to
reduce his navigation time. In this paper, we survey the different approaches
to recommendation.
|
1305.1746 | Structured $H_\infty$-Optimal Control for Nested Interconnections: A
State-Space Solution | math.OC cs.SY | When imposing general structural constraints on controllers, it is unknown how
to design $H_\infty$-controllers by convex optimization. Under a so-called
quadratic invariance structure of the generalized plant, the Youla
parametrization allows translating the structured synthesis problem into an
infinite-dimensional convex program. Nested interconnections that are
characterized by a standard plant with a block-triangular structure fall into
this class. Recently it has been shown how to design optimal $H_2$-controllers
for such nested structures in the state-space by solving algebraic Riccati
equations. In the present paper we provide a state-space solution of the
corresponding output-feedback $H_\infty$ synthesis problem without any
counterpart in the literature. We argue that a solution based on Riccati
equations is - even for state-feedback problems - not feasible and we
illustrate our results by means of a simple numerical example.
|
1305.1762 | New Bounds on the Capacity of Fiber-Optics Communications | physics.optics cs.IT math.IT | By taking advantage of the temporal correlations of the nonlinear phase noise
in WDM systems we show that the capacity of a nonlinear fiber link is notably
higher than what is currently assumed. This advantage is translated into the
doubling of the link distance for a fixed transmission rate.
|
1305.1783 | Improving Diffusion-Based Molecular Communication with Unanchored
Enzymes | cs.IT math.IT q-bio.QM | In this paper, we propose adding enzymes to the propagation environment of a
diffusive molecular communication system as a strategy for mitigating
intersymbol interference. The enzymes form reaction intermediates with
information molecules and then degrade them so that they have a smaller chance
of interfering with future transmissions. We present the reaction-diffusion
dynamics of this proposed system and derive a lower bound expression for the
expected number of molecules observed at the receiver. We justify a
particle-based simulation framework, and present simulation results that show
both the accuracy of our expression and the potential for enzymes to improve
communication performance.
|
1305.1786 | Quantized Iterative Hard Thresholding: Bridging 1-bit and
High-Resolution Quantized Compressed Sensing | cs.IT math.IT | In this work, we show that reconstructing a sparse signal from quantized
compressive measurement can be achieved in an unified formalism whatever the
(scalar) quantization resolution, i.e., from 1-bit to high resolution
assumption. This is achieved by generalizing the iterative hard thresholding
(IHT) algorithm and its binary variant (BIHT) introduced in previous works to
enforce the consistency of the reconstructed signal with respect to the
quantization model. The performance of this algorithm, simply called quantized
IHT (QIHT), is evaluated in comparison with other approaches (e.g., IHT, basis
pursuit denoise) for several quantization scenarios.
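For reference, the baseline IHT recursion that QIHT generalizes can be
sketched as follows (assuming NumPy, a measurement matrix A, and a known
sparsity level s; this is the textbook iteration, not the
quantization-consistent variant proposed here):

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def iht(y, A, s, iters=100, step=1.0):
    """Iterative hard thresholding: x <- H_s(x + step * A^T (y - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x
```

QIHT modifies this recursion so that the reconstruction is consistent with the
quantized measurements, whatever the quantizer resolution.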
|
1305.1787 | Evolution of the user's content: An Overview of the state of the art | cs.IR | The evolution of the user's content remains an obstacle to accurate
recommendation. This is why current research aims to design Recommender
Systems (RS) able to continually adapt the information they provide to match
the user's interests. This paper explains this problematic point by outlining
the proposals that have been made in the research literature, along with their
advantages and disadvantages.
|
1305.1796 | Using Dimensional Analysis to Assess Scalability and Accuracy in
Molecular Communication | cs.IT math.IT q-bio.QM | In this paper, we apply dimensional analysis to study a diffusive molecular
communication system that uses diffusing enzymes in the propagation environment
to mitigate intersymbol interference. The enzymes bind to information molecules
and then degrade them so that they cannot interfere with the detection of
future transmissions at the receiver. We determine when it is accurate to
assume that the concentration of information molecules throughout the receiver
is constant and equal to that expected at the center of the receiver. We show
that a lower bound on the expected number of molecules observed at the receiver
can be arbitrarily scaled over the environmental parameters, and generalize how
the accuracy of the lower bound is qualitatively impacted by those parameters.
|
1305.1809 | Cover Tree Bayesian Reinforcement Learning | stat.ML cs.LG | This paper proposes an online tree-based Bayesian approach for reinforcement
learning. For inference, we employ a generalised context tree model. This
defines a distribution on multivariate Gaussian piecewise-linear models, which
can be updated in closed form. The tree structure itself is constructed using
the cover tree method, which remains efficient in high dimensional spaces. We
combine the model with Thompson sampling and approximate dynamic programming to
obtain effective exploration policies in unknown environments. The flexibility
and computational simplicity of the model render it suitable for many
reinforcement learning problems in continuous state spaces. We demonstrate this
in an experimental comparison with least squares policy iteration.
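The Thompson sampling ingredient can be illustrated in isolation on a
Bernoulli bandit (a standard textbook sketch with Beta posteriors, not the
generalised context tree model used in the paper):

```python
import random

def thompson_step(successes, failures, rng):
    """Draw one sample from each arm's Beta posterior; pick the best arm."""
    draws = [rng.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

def run_bandit(probs, rounds=500, seed=0):
    """Play a Bernoulli bandit with Thompson sampling; return pick counts."""
    rng = random.Random(seed)
    succ = [0] * len(probs)
    fail = [0] * len(probs)
    picks = [0] * len(probs)
    for _ in range(rounds):
        arm = thompson_step(succ, fail, rng)
        picks[arm] += 1
        if rng.random() < probs[arm]:  # observe a Bernoulli reward
            succ[arm] += 1
        else:
            fail[arm] += 1
    return picks
```

Sampling from the posterior naturally balances exploration and exploitation:
uncertain arms occasionally win the draw and get tried again.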
|
1305.1852 | Graph Theoretic Analysis of Knowledge Networks | cs.SI physics.soc-ph | The purpose of our work is to obtain a basic understanding and comparison of the
performance and structure of real Knowledge Networks, to identify strengths and
weaknesses and to highlight guidelines for improvements. We selected 18
Knowledge Networks from the service sector and 12 networks from the production
sector and estimated their Performance and Structure in terms of 19 indices
from graph theory. Highlights from our work include: 1) As most networks are
unilaterally structured, the direction of knowledge transfer should be taken
into account as illustrated in the analysis of clubs and entropy, 2) The
stability of most Knowledge Networks is questionable, 3) Few networks are
effective in sharing information, while most Knowledge Networks cannot benefit
from the network effect, have rather limited capability for coordination,
information propagation and synchronization and are not able to integrate Tacit
knowledge, 4) Few networks have large cliques which have to be managed with
caution as their role may be highly constructive or destructive, 5) While
agents with rich connections form clubs, as in most social networks, the poor
club effect is not negligible when we take into account the link direction, 6)
The directed link analysis of entropy reveals the low
complexity-diversification of the Knowledge Networks. In fact, the only
high-entropy network found has been improved by Knowledge Management
Professionals.
As most Knowledge Networks underperform, there is plenty of room for further
customized analysis in order to improve communication efficiency, coordination,
Tacit knowledge dissemination and robustness. This is the first comparative
study of real Knowledge Networks in terms of graph theoretic methods.
|
1305.1861 | Turtle: Identifying frequent k-mers with cache-efficient algorithms | q-bio.GN cs.CE | Counting the frequencies of k-mers in read libraries is often a first step in
the analysis of high-throughput sequencing experiments. Infrequent k-mers are
assumed to be a result of sequencing errors. The frequent k-mers constitute a
reduced but error-free representation of the experiment, which can inform read
error correction or serve as the input to de novo assembly methods. Ideally,
the memory requirement for counting should be linear in the number of frequent
k-mers and not in the, typically much larger, total number of k-mers in the
read library.
We present a novel method that balances time, space and accuracy requirements
to efficiently extract frequent k-mers even for high coverage libraries and
large genomes such as human. Our method is designed to minimize cache misses
by using a Pattern-blocked Bloom filter to remove
infrequent k-mers from consideration in combination with a novel
sort-and-compact scheme, instead of a Hash, for the actual counting. While this
increases theoretical complexity, the savings in cache misses reduce the
empirical running times. A variant can resort to a counting Bloom filter for
even larger savings in memory at the expense of false negatives in addition to
the false positives common to all Bloom filter based approaches. A comparison
to the state-of-the-art shows reduced memory requirements and running times.
Note that we also provide the first competitive method to count k-mers up to
size 64.
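The prefiltering idea can be sketched with a plain Bloom filter and an exact
counter (a much simplified stand-in for the pattern-blocked filter and the
sort-and-compact scheme described above): a k-mer is only counted once the
filter reports it has been seen before, so most error-induced singletons never
enter the counter.

```python
import hashlib
from collections import Counter

class BloomFilter:
    """Plain (not pattern-blocked) Bloom filter over strings."""
    def __init__(self, bits=1 << 16, hashes=3):
        self.bits = bits
        self.hashes = hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.blake2b(item.encode(), digest_size=8,
                                     salt=bytes([i])).digest()
            yield int.from_bytes(digest, "big") % self.bits

    def add(self, item):
        """Insert item; return True if it was (probably) already present."""
        seen = True
        for p in self._positions(item):
            if not (self.array[p // 8] >> (p % 8)) & 1:
                seen = False
                self.array[p // 8] |= 1 << (p % 8)
        return seen

def count_frequent_kmers(reads, k):
    """Count k-mers seen at least twice; singletons never hit the counter."""
    bloom, counts = BloomFilter(), Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if bloom.add(kmer):  # second or later sighting
                counts[kmer] += 1
    return {km: c + 1 for km, c in counts.items()}  # add back first sighting
```

Bloom false positives can let an occasional singleton slip into the counter,
the trade-off common to all Bloom-filter-based approaches noted above.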
|
1305.1885 | Distributed Optimization With Local Domains: Applications in MPC and
Network Flows | math.OC cs.IT math.IT | In this paper we consider a network with $P$ nodes, where each node has
exclusive access to a local cost function. Our contribution is a
communication-efficient distributed algorithm that finds a vector $x^\star$
minimizing the sum of all the functions. We make the additional assumption that
the functions have intersecting local domains, i.e., each function depends only
on some components of the variable. Consequently, each node is interested in
knowing only some components of $x^\star$, not the entire vector. This allows
for improvement in communication-efficiency. We apply our algorithm to model
predictive control (MPC) and to network flow problems and show, through
experiments on large networks, that our proposed algorithm requires less
communications to converge than prior algorithms.
|
1305.1899 | Mathematical Modeling of Product Rating: Sufficiency, Misbehavior and
Aggregation Rules | cs.IR cs.SI | Many web services, like eBay, Tripadvisor, and Epinions, provide historical
product ratings so that users can evaluate the quality of products. Product
ratings are important since they affect how well a product will be adopted by
the market. The challenge is that we only have {\em "partial information"} on
these ratings: Each user provides ratings to only a "{\em small subset of
products}". Under this partial information setting, we explore a number of
fundamental questions: What is the "{\em minimum number of ratings}" a product
needs so one can make a reliable evaluation of its quality? How users' {\em
misbehavior} (such as {\em cheating}) in product rating may affect the
evaluation result? To answer these questions, we present a formal mathematical
model of product evaluation based on partial information. We derive theoretical
bounds on the minimum number of ratings needed to produce a reliable indicator
of a product's quality. We also extend our model to accommodate users'
misbehavior in product rating. We carry out experiments using both synthetic
and real-world data (from TripAdvisor, Amazon and eBay) to validate our model,
and also show that using the "majority rating rule" to aggregate product
ratings produces more reliable and robust product evaluation results than the
"average rating rule".
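The two aggregation rules compared above can be illustrated with hypothetical
ratings (a toy sketch, not the paper's formal model): a minority of extreme
ratings shifts the average but leaves the majority rating untouched.

```python
from collections import Counter

def average_rule(ratings):
    """Aggregate by arithmetic mean."""
    return sum(ratings) / len(ratings)

def majority_rule(ratings):
    """Aggregate by the most frequent rating (ties broken toward the lower)."""
    freq = Counter(ratings)
    return max(freq.items(), key=lambda kv: (kv[1], -kv[0]))[0]
```

With ratings [4, 4, 4, 4, 1, 1] (two hypothetical low-ball raters), the
average drops to 3.0 while the majority rating stays at 4.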
|
1305.1912 | Automated polyp detection in colon capsule endoscopy | cs.CV | Colorectal polyps are important precursors to colon cancer, a major health
problem. Colon capsule endoscopy (CCE) is a safe and minimally invasive
examination procedure, in which the images of the intestine are obtained via
digital cameras on board of a small capsule ingested by a patient. The video
sequence is then analyzed for the presence of polyps. We propose an algorithm
that relieves the labor of a human operator analyzing the frames in the video
sequence. The algorithm acts as a binary classifier, which labels the frame as
either containing polyps or not, based on the geometrical analysis and the
texture content of the frame. The geometrical analysis is based on a
segmentation of an image with the help of a mid-pass filter. The features
extracted by the segmentation procedure are classified according to an
assumption that the polyps are characterized as protrusions that are mostly
round in shape. Thus, we use a best fit ball radius as a decision parameter of
a binary classifier. We present a statistical study of the performance of our
approach on a data set containing over 18,900 frames from the endoscopic video
sequences of five adult patients. The algorithm demonstrates a solid
performance, achieving 47% sensitivity per frame and over 81% sensitivity per
polyp at a specificity level of 90%. On average, with a video sequence length
of 3747 frames, only 367 false positive frames need to be inspected by a human
operator.
|
1305.1925 | Speech: A Challenge to Digital Signal Processing Technology for
Human-to-Computer Interaction | cs.HC cs.CL | This software-project-based paper presents a vision of the near future in
which computer interaction is characterized by natural face-to-face
conversations with lifelike characters that speak, emote, and gesture. The
first step is speech. The dream of true virtual reality, a complete
human-computer interaction system, will not come true unless we give some
perception to the machine and make it perceive the outside world as humans
communicate with each other. This software project, currently under
development, aims at a machine (computer) that listens and replies through
speech. The speech interface converts speech input into a parametric form
(speech-to-text) for further processing, and passes the resulting text output
to speech synthesis (text-to-speech).
|
1305.1926 | Improving Receiver Performance of Diffusive Molecular Communication with
Enzymes | cs.IT cs.ET math.IT | This paper studies the mitigation of intersymbol interference in a diffusive
molecular communication system using enzymes that freely diffuse in the
propagation environment. The enzymes form reaction intermediates with
information molecules and then degrade them so that they cannot interfere with
future transmissions. A lower bound expression on the expected number of
molecules measured at the receiver is derived. A simple binary receiver
detection scheme is proposed where the number of observed molecules is sampled
at the time when the maximum number of molecules is expected. Insight is also
provided into the selection of an appropriate bit interval. The expected bit
error probability is derived as a function of the current and all previously
transmitted bits. Simulation results show the accuracy of the bit error
probability expression and the improvement in communication performance by
having active enzymes present.
|
1305.1946 | Semantic-based Anomalous Pattern Discovery in Moving Object Trajectories | cs.AI cs.IR | In this work, we investigate a novel semantic approach for pattern discovery
in trajectories that, relying on ontologies, enhances object movement
information with event semantics. The approach can be applied to the detection
of movement patterns and behaviors whenever the semantics of events occurring
along the trajectory is, explicitly or implicitly, available. In particular, we
tested it against an exacting case scenario in maritime surveillance, i.e., the
discovery of suspicious container transportations.
The methodology we have developed entails the formalization of the
application domain through a domain ontology, extending the Moving Object
Ontology (MOO) described in this paper. Afterwards, movement patterns have to
be formalized, either as Description Logic (DL) axioms or queries, enabling the
retrieval of the trajectories that follow the patterns.
In our experimental evaluation, we have considered a real-world dataset of 18
million container events describing the actions undertaken in a port to
accomplish shipping (e.g., loading on a vessel, export operation). Leveraging
these events, we have reconstructed almost 300 thousand container
trajectories, referring to 50 thousand containers travelling over three years.
We have formalized the anomalous itinerary patterns as DL axioms, testing
different ontology APIs and DL reasoners to retrieve the suspicious
transportations.
Our experiments demonstrate that the approach is feasible and efficient. In
particular, the joint use of Pellet and SPARQL-DL enables the detection of the
trajectories following a given pattern in reasonable time on large datasets.
|
1305.1956 | Joint Topic Modeling and Factor Analysis of Textual Information and
Graded Response Data | stat.ML cs.LG | Modern machine learning methods are critical to the development of
large-scale personalized learning systems that cater directly to the needs of
individual learners. The recently developed SPARse Factor Analysis (SPARFA)
framework provides a new statistical model and algorithms for machine
learning-based learning analytics, which estimate a learner's knowledge of the
latent concepts underlying a domain, and content analytics, which estimate the
relationships among a collection of questions and the latent concepts. SPARFA
estimates these quantities given only the binary-valued graded responses to a
collection of questions. In order to better interpret the estimated latent
concepts, SPARFA relies on a post-processing step that utilizes user-defined
tags (e.g., topics or keywords) available for each question. In this paper, we
relax the need for user-defined tags by extending SPARFA to jointly process
both graded learner responses and the text of each question and its associated
answer(s) or other feedback. Our purely data-driven approach (i) enhances the
interpretability of the estimated latent concepts without the need of
explicitly generating a set of tags or performing a post-processing step, (ii)
improves the prediction performance of SPARFA, and (iii) scales to large
test/assessments where human annotation would prove burdensome. We demonstrate
the efficacy of the proposed approach on two real educational datasets.
|
1305.1958 | The Dynamically Extended Mind -- A Minimal Modeling Case Study | cs.AI cs.NE nlin.CD | The extended mind hypothesis has stimulated much interest in cognitive
science. However, its core claim, i.e. that the process of cognition can extend
beyond the brain via the body and into the environment, has been heavily
criticized. A prominent critique of this claim holds that when some part of the
world is coupled to a cognitive system this does not necessarily entail that
the part is also constitutive of that cognitive system. This critique is known
as the "coupling-constitution fallacy". In this paper we respond to this
reductionist challenge by using an evolutionary robotics approach to create a
minimal model of two acoustically coupled agents. We demonstrate how the
interaction process as a whole has properties that cannot be reduced to the
contributions of the isolated agents. We also show that the neural dynamics of
the coupled agents has formal properties that are inherently impossible for
those neural networks in isolation. By keeping the complexity of the model to
an absolute minimum, we are able to illustrate how the coupling-constitution
fallacy is in fact based on an inadequate understanding of the constitutive
role of nonlinear interactions in dynamical systems theory.
|
1305.1961 | An Improved Three-Weight Message-Passing Algorithm | cs.AI cs.DS math.OC physics.comp-ph | We describe how the powerful "Divide and Concur" algorithm for constraint
satisfaction can be derived as a special case of a message-passing version of
the Alternating Direction Method of Multipliers (ADMM) algorithm for convex
optimization, and introduce an improved message-passing algorithm based on
ADMM/DC by introducing three distinct weights for messages, with "certain" and
"no opinion" weights, as well as the standard weight used in ADMM/DC. The
"certain" messages allow our improved algorithm to implement constraint
propagation as a special case, while the "no opinion" messages speed
convergence for some problems by making the algorithm focus only on active
constraints. We describe how our three-weight version of ADMM/DC can give
greatly improved performance for non-convex problems such as circle packing and
solving large Sudoku puzzles, while retaining the exact performance of ADMM for
convex problems. We also describe the advantages of our algorithm compared to
other message-passing algorithms based upon belief propagation.
|
1305.1980 | Modeling Temporal Activity Patterns in Dynamic Social Networks | physics.soc-ph cs.SI physics.data-an stat.AP | The focus of this work is on developing probabilistic models for user
activity in social networks by incorporating the social network influence as
perceived by the user. For this, we propose a coupled Hidden Markov Model,
where each user's activity evolves according to a Markov chain with a hidden
state that is influenced by the collective activity of the friends of the user.
We develop generalized Baum-Welch and Viterbi algorithms for model parameter
learning and state estimation for the proposed framework. We then validate the
proposed model using a significant corpus of user activity on Twitter. Our
numerical studies show that with sufficient observations to ensure accurate
model learning, the proposed framework explains the observed data better than
either a renewal process-based model or a conventional uncoupled Hidden Markov
Model. We also demonstrate the utility of the proposed approach in predicting
the time to the next tweet. Finally, clustering in the model parameter space is
shown to result in distinct natural clusters of users characterized by the
interaction dynamic between a user and his network.
|
1305.1986 | An Adaptive Statistical Non-uniform Quantizer for Detail Wavelet
Components in Lossy JPEG2000 Image Compression | cs.MM cs.CV | The paper presents a non-uniform quantization method for the Detail
components in the JPEG2000 standard. Incorporating the fact that the
coefficients lying towards the ends of the histogram plot of each Detail
component represent the structural information of an image, the quantization
step sizes become smaller as they approach the ends of the histogram plot. The
variable quantization step sizes are determined by the actual statistics of the
wavelet coefficients. Mean and standard deviation are the two statistical
parameters used iteratively to obtain the variable step sizes. Moreover, the
mean of the coefficients lying within the step size is chosen as the quantized
value, contrary to the deadzone uniform quantizer which selects the midpoint of
the quantization step size as the quantized value. The experimental results of
the deadzone uniform quantizer and the proposed non-uniform quantizer are
objectively compared by using Mean-Squared Error (MSE) and Mean Structural
Similarity Index Measure (MSSIM), to evaluate the quantization error and
reconstructed image quality, respectively. Subjective analysis of the
reconstructed images is also carried out. Through the objective and subjective
assessments, it is shown that the non-uniform quantizer performs better than
the deadzone uniform quantizer in the perceptual quality of the reconstructed
image, especially at low bitrates. More importantly, unlike the deadzone
uniform quantizer, the non-uniform quantizer accomplishes better visual quality
with a few quantized values.
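The representative-value choice discussed above can be sketched as follows
(fixed, hand-chosen bin edges for brevity; the paper derives the variable step
sizes iteratively from the mean and standard deviation of the coefficients):

```python
import numpy as np

def quantize(coeffs, edges, use_bin_mean=True):
    """Quantize coefficients against sorted bin edges.

    use_bin_mean=True -> the representative is the mean of the coefficients
    in the bin (as in the proposed quantizer); False -> the bin midpoint,
    as a deadzone-style uniform quantizer would use.
    """
    bins = np.digitize(coeffs, edges)
    out = np.empty_like(coeffs, dtype=float)
    for b in np.unique(bins):
        mask = bins == b
        if use_bin_mean:
            out[mask] = coeffs[mask].mean()
        else:
            lo = edges[b - 1] if b > 0 else coeffs[mask].min()
            hi = edges[b] if b < len(edges) else coeffs[mask].max()
            out[mask] = 0.5 * (lo + hi)
    return out
```

The bin mean minimizes the within-bin squared error, which is why this choice
lowers MSE relative to the midpoint rule.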
|
1305.1991 | On the universality of cognitive tests | cs.AI | The analysis of the adaptive behaviour of many different kinds of systems
such as humans, animals and machines, requires more general ways of assessing
their cognitive abilities. This need is strengthened by increasingly more tasks
being analysed for and completed by a wider diversity of systems, including
swarms and hybrids. The notion of universal test has recently emerged in the
context of machine intelligence evaluation as a way to define and use the same
cognitive test for a variety of systems, using some principled tasks and
adapting the interface to each particular subject. However, how far can
universal tests be taken? This paper analyses this question in terms of
subjects, environments, space-time resolution, rewards and interfaces. This
leads to a number of findings, insights and caveats, according to several
levels where universal tests may be progressively more difficult to conceive,
implement and administer. One of the most significant contributions is given by
the realisation that more universal tests are defined as maximisations of less
universal tests for a variety of configurations. This means that universal
tests must be necessarily adaptive.
|
1305.2006 | LabelRankT: Incremental Community Detection in Dynamic Networks via
Label Propagation | cs.SI physics.soc-ph | An increasingly important challenge in network analysis is efficient
detection and tracking of communities in dynamic networks for which changes
arrive as a stream. There is a need for algorithms that can incrementally
update and monitor communities whose evolution generates huge realtime data
streams, such as the Internet or on-line social networks. In this paper, we
propose LabelRankT, an online distributed algorithm for detection of
communities in large-scale dynamic networks through stabilized label
propagation. Results of tests on real-world networks demonstrate that
LabelRankT has much lower computational costs than other algorithms. It also
improves the quality of the detected communities compared to dynamic detection
methods and matches the quality achieved by static detection approaches. Unlike
most other algorithms, which apply only to binary networks, LabelRankT works
on weighted and directed networks, providing a flexible and promising
solution for real-world applications.
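The stabilized label propagation that LabelRankT builds on starts from the classic static scheme: every node repeatedly adopts the most common label among its neighbours. A minimal sketch of that baseline (not LabelRankT itself — the incremental updates, stabilisation operators, and edge weights are omitted, and the toy graph and tie-breaking rule are illustrative choices):

```python
from collections import Counter

def label_propagation(adj, max_iter=50):
    """Baseline label propagation: each node repeatedly adopts the most
    frequent label among its neighbours until no label changes."""
    labels = {v: v for v in adj}            # every node starts as its own community
    for _ in range(max_iter):
        changed = False
        for v in sorted(adj):               # fixed order keeps the sketch deterministic
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            tied = {l for l, c in counts.items() if c == best}
            if labels[v] in tied:
                continue                    # keep the current label on a tie
            labels[v] = max(tied)           # deterministic tie-break
            changed = True
        if not changed:
            break
    return labels

def clique(nodes):
    return {v: set(nodes) - {v} for v in nodes}

# two 4-cliques joined by a single bridge edge 3-4
adj = {**clique(range(4)), **clique(range(4, 8))}
adj[3].add(4)
adj[4].add(3)
labels = label_propagation(adj)             # nodes 0-3 and 4-7 end up in two communities
```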
|
1305.2038 | A Rank Minrelation - Majrelation Coefficient | stat.ML cs.AI | Improving the detection of relevant variables using a new bivariate measure
could substantially impact variable selection and large network inference
methods. In this paper, we propose a new statistical coefficient that we call
the rank minrelation coefficient. We define a minrelation of X to Y (or
equivalently a majrelation of Y to X) as a measure that estimates p(Y > X) when
X and Y are continuous random variables. The approach is similar to Lin's
concordance coefficient, which instead focuses on estimating p(X = Y). In other
words, if a variable X exhibits a minrelation to Y then, as X increases, Y is
likely to increase too. However, unlike concordance or
correlation, the minrelation is not symmetric. More explicitly, if X decreases,
little can be said about the values of Y (except that the uncertainty on Y actually
increases). In this paper, we formally define this new kind of bivariate
dependency and propose a new statistical coefficient to detect such
dependencies. We show through several key examples that this new coefficient
has many interesting properties for selecting relevant variables, in
particular when compared to correlation.
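The quantity at the heart of the coefficient, p(Y > X), has a direct empirical counterpart: the fraction of cross-pairs (x_i, y_j) with y_j > x_i, i.e. the normalised Mann-Whitney U statistic. A sketch under the assumption of independent samples (the paper's rank minrelation coefficient refines this; the Gaussian example data is illustrative):

```python
import numpy as np

def prob_y_exceeds_x(x, y):
    """Empirical p(Y > X): fraction of cross-pairs (x_i, y_j) with
    y_j > x_i (the Mann-Whitney U statistic divided by n*m)."""
    x = np.asarray(x)[:, None]
    y = np.asarray(y)[None, :]
    return float((y > x).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 2000)
y = rng.normal(1.0, 1.0, 2000)      # Y shifted up, so Y tends to exceed X
p = prob_y_exceeds_x(x, y)          # near Phi(1/sqrt(2)) ~ 0.76
q = prob_y_exceeds_x(y, x)          # the reverse direction is near 0.24: not symmetric
```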
|
1305.2042 | Balancing experiments on a torque-controlled humanoid with hierarchical
inverse dynamics | cs.RO | Recently several hierarchical inverse dynamics controllers based on cascades
of quadratic programs have been proposed for application on torque controlled
robots. They have important theoretical benefits but have never been
implemented on a torque controlled robot where model inaccuracies and real-time
computation requirements can be problematic. In this contribution we present an
experimental evaluation of these algorithms in the context of balance control
for a humanoid robot. The presented experiments demonstrate the applicability
of the approach under real robot conditions (i.e. model uncertainty, estimation
errors, etc). We propose a simplification of the optimization problem that
allows us to decrease computation time enough to implement it in a fast torque
control loop. We implement a momentum-based balance controller which shows
robust performance in the face of unknown disturbances, even when the robot is
standing on only one foot. In a second experiment, a tracking task is evaluated
to demonstrate the performance of the controller with more complicated
hierarchies. Our results show that hierarchical inverse dynamics controllers
can be used for feedback control of humanoid robots and that momentum-based
balance control can be efficiently implemented on a real robot.
|
1305.2091 | Characterizing User Behavior and Information Propagation on a Social
Multimedia Network | cs.SI physics.soc-ph | An increasing portion of modern socializing takes place via online social
networks. Members of these communities often play distinct roles that can be
deduced from observations of users' online activities. One such activity is the
sharing of multimedia, the popularity of which can vary dramatically. Here we
discuss our initial analysis of anonymized, scraped data from consenting
Facebook users, together with associated demographic and psychological
profiles. We present five clusters of users with common observed online
behaviors, where these users also show correlated profile characteristics.
Finally, we identify some common properties of the most popular multimedia
content.
|
1305.2103 | Translating Relational Queries into Spreadsheets | cs.DB | Spreadsheets are among the most commonly used applications for data
management and analysis. Perhaps they are even among the most widely used
computer applications of all kinds. They combine in a natural and intuitive way
data processing with very diverse supplementary features: statistical
functions, visualization tools, pivot tables, pivot charts, linear programming
solvers, Web queries periodically downloading data from external sources, etc.
However, the spreadsheet paradigm of computation still lacks sufficient
analysis.
In this article we demonstrate that a spreadsheet can implement all data
transformations definable in SQL, without any use of macros or built-in
programming languages, merely by utilizing spreadsheet formulas. We provide a
query compiler, which translates any given SQL query into a worksheet of the
same semantics, including NULL values.
Thereby database operations become available to the users who do not want to
migrate to a database. They can define their queries using a high-level
language and then get their execution plans in a plain vanilla spreadsheet. No
sophisticated database system, no spreadsheet plugins or macros are needed.
The functions available in spreadsheets impose severe limitations on the
algorithms one can implement. In this paper we present an $O(n\log^2 n)$ sorting
spreadsheet, albeit one using a non-constant number of rows, improving on the
previously known $O(n^2)$ constructions.
It is therefore surprising that a spreadsheet can implement, as we
demonstrate, Depth-First-Search and Breadth-First-Search on graphs, thereby
reaching beyond queries definable in SQL-92.
|
1305.2112 | Intercept Probability Analysis of Cooperative Wireless Networks with
Best Relay Selection in the Presence of Eavesdropping Attack | cs.IT math.IT | Due to the broadcast nature of wireless medium, wireless communication is
extremely vulnerable to eavesdropping attack. Physical-layer security is
emerging as a new paradigm to prevent the eavesdropper from interception by
exploiting the physical characteristics of wireless channels, which has
recently attracted a lot of research attention. In this paper, we consider
physical-layer security in cooperative wireless networks with multiple
decode-and-forward (DF) relays and investigate best relay selection in the
presence of an eavesdropping attack. For comparison, we also examine
the conventional direct transmission without relay and traditional max-min
relay selection. We derive closed-form intercept probability expressions of the
direct transmission, traditional max-min relay selection, and proposed best
relay selection schemes in Rayleigh fading channels. Numerical results show
that the proposed best relay selection scheme strictly outperforms the
traditional direct transmission and max-min relay selection schemes in terms of
intercept probability. In addition, as the number of relays increases, the
intercept probabilities of both traditional max-min relay selection and
proposed best relay selection schemes decrease significantly, showing the
advantage of exploiting multiple relays against eavesdropping attack.
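The qualitative gap between eavesdropper-agnostic and eavesdropper-aware selection can be reproduced with a toy Monte-Carlo experiment. This is a deliberately simplified model (Rayleigh fading as exponential channel gains; the source-relay hop and the decoding set are ignored, and the selection rules are illustrative stand-ins, not the paper's exact schemes):

```python
import numpy as np

def intercept_prob(selector, M=4, trials=20000, seed=1):
    """Monte-Carlo intercept probability: an intercept occurs when the
    selected relay's eavesdropper link beats its destination link."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        g_rd = rng.exponential(1.0, M)   # relay -> destination gains (Rayleigh)
        g_re = rng.exponential(1.0, M)   # relay -> eavesdropper gains
        k = selector(g_rd, g_re)
        hits += g_re[k] > g_rd[k]
    return hits / trials

# eavesdropper-agnostic: pick the strongest main link
p_conv = intercept_prob(lambda g_rd, g_re: int(np.argmax(g_rd)))
# eavesdropper-aware: maximise the destination-to-eavesdropper gain ratio
p_aware = intercept_prob(lambda g_rd, g_re: int(np.argmax(g_rd / g_re)))
# p_aware (analytically 2**-M = 0.0625 here) falls well below p_conv (0.2)
```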
|
1305.2123 | Physical-Layer Multicasting by Stochastic Transmit Beamforming and
Alamouti Space-Time Coding | cs.IT math.IT | Consider transceiver designs in a multiuser multi-input single-output (MISO)
downlink channel, where the users are to receive the same data stream
simultaneously. This problem, known as physical-layer multicasting, has drawn
much interest. Presently, a popularized approach is transmit beamforming, in
which the beamforming optimization is handled by a rank-one approximation
method called semidefinite relaxation (SDR). SDR-based beamforming has been
shown to be promising for a small or moderate number of users. This paper
describes two new transceiver strategies for physical-layer multicasting. The
first strategy, called stochastic beamforming (SBF), randomizes the beamformer
in a per-symbol time-varying manner, so that the rank-one approximation in SDR
can be bypassed. We propose several efficiently realizable SBF schemes, and
prove that their multicast achievable rate gaps with respect to the MISO
multicast capacity must be no worse than 0.8314 bits/s/Hz, irrespective of any
other factors such as the number of users. The use of channel coding and the
assumption of sufficiently long code lengths play a crucial role in achieving
the above result. The second strategy combines transmit beamforming and the
Alamouti space-time code. The result is a rank-two generalization of SDR-based
beamforming. We show by analysis that this SDR-based beamformed Alamouti scheme
has a better worst-case effective signal-to-noise ratio (SNR) scaling, and
hence a better multicast rate scaling, than SDR-based beamforming. We extend
this line of work by combining SBF with the beamformed Alamouti scheme, for which an
improved constant rate gap of 0.39 bits/s/Hz is proven. Simulation results show
that under a channel-coded, many-user setting, the proposed multicast
transceiver schemes yield significant SNR gains over SDR-based beamforming at
the same bit error rate level.
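The per-symbol randomisation at the core of SBF can be illustrated numerically: given a transmit covariance W (in the paper obtained from the SDR solution; here an isotropic placeholder), draw a fresh beamformer w_t ~ CN(0, W) for every symbol, so each user's time-averaged received power matches h_k^H W h_k without any rank-one approximation. A sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, T = 4, 3, 50000                      # antennas, users, symbols
# i.i.d. Rayleigh channels, one row per user
H = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)

W = np.eye(N) / N                          # placeholder trace-one transmit covariance

# per-symbol Gaussian randomisation: w_t ~ CN(0, W), so E[w w^H] = W
L = np.linalg.cholesky(W)
w = L @ ((rng.normal(size=(N, T)) + 1j * rng.normal(size=(N, T))) / np.sqrt(2))

inst_power = np.abs(H.conj() @ w) ** 2     # |h_k^H w_t|^2 per user and symbol
avg_power = inst_power.mean(axis=1)        # time-averaged power per user
target = np.real(np.einsum('kn,nm,km->k', H.conj(), W, H))   # h_k^H W h_k
```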
|
1305.2169 | Robust Hydraulic Fracture Monitoring (HFM) of Multiple Time Overlapping
Events Using a Generalized Discrete Radon Transform | physics.geo-ph cs.IT math.IT stat.AP | In this work we propose a novel algorithm for multiple-event localization for
Hydraulic Fracture Monitoring (HFM) through the exploitation of the sparsity of
the observed seismic signal when represented in a basis consisting of space
time propagators. We provide explicit construction of these propagators using a
forward model for wave propagation which depends non-linearly on the problem
parameters - the unknown source location and mechanism of fracture, time and
extent of event, and the locations of the receivers. Under fairly general
assumptions and an appropriate discretization of these parameters we first
build an over-complete dictionary of generalized Radon propagators and assume
that the data is well represented as a linear superposition of these
propagators. Exploiting this structure we propose sparsity penalized algorithms
and workflow for super-resolution extraction of time overlapping multiple
seismic events from single well data.
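Generic sparsity-penalised recovery over an overcomplete dictionary — the computational core such a workflow relies on — can be sketched with iterative soft-thresholding (ISTA). The random dictionary below stands in for the dictionary of generalized Radon propagators, and all sizes and parameters are illustrative:

```python
import numpy as np

def ista(A, b, lam, iters):
    """ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1: a gradient step on
    the smooth term followed by soft-thresholding (shrinkage)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz const of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - b))      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(80, 200)) / np.sqrt(80)    # overcomplete stand-in dictionary
x_true = np.zeros(200)
x_true[[10, 70, 150]] = [2.0, -1.5, 1.0]        # three overlapping "events"
b = A @ x_true                                  # noiseless observations
x_hat = ista(A, b, lam=0.01, iters=2000)        # recovers the sparse coefficients
```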
|
1305.2170 | Exploiting Structural Complexity for Robust and Rapid Hyperspectral
Imaging | physics.geo-ph cs.IT math.IT stat.AP | This paper presents several strategies for spectral de-noising of
hyperspectral images and hypercube reconstruction from a limited number of
tomographic measurements. In particular we show that the noise-free spectral
data, when stacked across the spectral dimension, exhibits low rank. On the
other hand, under the same representation, the spectral noise exhibits a banded
structure. Motivated by this, we show that the de-noised spectral data, the
unknown spectral noise, and the respective bands can be simultaneously estimated
through a combined low-rank and simultaneous sparse minimization,
without prior knowledge of the noisy bands. This result is novel for
hyperspectral imaging applications. In addition, we show that imaging with
Computed Tomography Imaging Systems (CTIS) can be improved under limited-angle
tomography by using low-rank penalization. For both of these cases we exploit
the recent results in the theory of low-rank matrix completion using nuclear
norm minimization.
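The nuclear-norm machinery referenced here reduces, in its simplest form, to soft-thresholding of singular values (the proximal operator of the nuclear norm). A sketch on synthetic low-rank data with a hand-picked threshold (the paper's joint low-rank plus banded-sparse formulation is more involved):

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: shrink every singular value by
    tau and drop the ones below it (prox operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(3)
clean = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))   # rank-2 "spectral stack"
noisy = clean + 0.05 * rng.normal(size=clean.shape)
denoised = svt(noisy, tau=1.0)   # tau above the noise spectral norm, below the signal

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)   # smaller than err_noisy
```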
|
1305.2173 | Optimality of Orthogonal Access for One-dimensional Convex Cellular
Networks | cs.IT math.IT | It is shown that a greedy orthogonal access scheme achieves the sum degrees
of freedom of all one-dimensional (all nodes placed along a straight line)
convex cellular networks (where cells are convex regions) when no channel
knowledge is available at the transmitters except the knowledge of the network
topology. In general, optimality of orthogonal access holds neither for
two-dimensional convex cellular networks nor for one-dimensional non-convex
cellular networks, thus revealing a fundamental limitation that exists only
when both one-dimensional and convex properties are simultaneously enforced, as
is common in canonical information theoretic models for studying cellular
networks. The result also establishes the capacity of the corresponding class
of index coding problems.
|
1305.2218 | Stochastic gradient descent algorithms for strongly convex functions at
O(1/T) convergence rates | cs.LG cs.AI | With a weighting scheme proportional to t, a traditional stochastic gradient
descent (SGD) algorithm achieves a high probability convergence rate of
O({\kappa}/T) for strongly convex functions, instead of O({\kappa} ln(T)/T). We
further prove that an accelerated SGD algorithm achieves a rate of
O({\kappa}/T).
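The weighting scheme can be sketched concretely: run SGD with step size 1/(mu*t) on a mu-strongly convex objective and average the iterates with weights proportional to t. A toy instance on a noisy quadratic (the step-size choice and noise model are illustrative assumptions, not the paper's exact setting):

```python
import numpy as np

def weighted_sgd(grad, x0, T, mu):
    """SGD with step 1/(mu*t), returning the average of the iterates
    weighted proportionally to t (the scheme behind the O(kappa/T) rate)."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    wsum = 0.0
    for t in range(1, T + 1):
        x = x - (1.0 / (mu * t)) * grad(x)
        avg += t * x                      # weight iterate t by t
        wsum += t
    return avg / wsum

rng = np.random.default_rng(5)
x_star = np.array([1.0, -2.0])
# noisy gradient of the 1-strongly-convex f(x) = 0.5*||x - x_star||^2
grad = lambda x: (x - x_star) + 0.1 * rng.normal(size=2)
x_hat = weighted_sgd(grad, np.zeros(2), T=20000, mu=1.0)   # lands close to x_star
```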
|
1305.2221 | Repairing and Inpainting Damaged Images using Diffusion Tensor | cs.CV | Removing or repairing the imperfections of digital images or videos is a
very active and attractive field of research known as image inpainting.
The latter has a wide range of applications, such as removing
scratches in old photographic images, removing text and logos, or creating
cartoon and artistic effects. In this paper, we propose an efficient method to
repair a damaged image based on a nonlinear diffusion tensor. The idea is to
track the local geometry of the damaged image precisely and allow diffusion
only along the direction of the isophote curves. To illustrate the
performance of our method, we present experimental results on test and real
photographic color images.
|
1305.2233 | Asymptotic Coverage Probability and Rate in Massive MIMO Networks | cs.IT math.IT | Massive multiple-input multiple-output (MIMO) is a transmission technique for
cellular systems that uses a large number of antennas to serve a comparatively small number of users. Thus
far, the performance of massive MIMO has only been examined in finite cellular
networks. In this letter, we analyze its performance in random cellular
networks with Poisson distributed base station locations. Specifically, we
provide analytical expressions for the asymptotic coverage probability and rate
in both downlink and uplink when each base station has a large number of
antennas. The results show that, though limited by pilot contamination, massive
MIMO can provide significantly higher asymptotic data rate per user than the
single-antenna network.
|
1305.2238 | Calibrated Multivariate Regression with Application to Neural Semantic
Basis Discovery | stat.ML cs.LG | We propose a calibrated multivariate regression method named CMR for fitting
high dimensional multivariate regression models. Compared with existing
methods, CMR calibrates regularization for each regression task with respect to
its noise level, so that it simultaneously attains improved finite-sample
performance and tuning insensitivity. Theoretically, we provide sufficient
conditions under which CMR achieves the optimal rate of convergence in
parameter estimation. Computationally, we propose an efficient smoothed
proximal gradient algorithm with a worst-case numerical rate of convergence
$O(1/\epsilon)$, where $\epsilon$ is a pre-specified accuracy of the
objective function value. We conduct thorough numerical simulations to
illustrate that CMR consistently outperforms other high dimensional
multivariate regression methods. We also apply CMR to solve a brain activity
prediction problem and find that it is as competitive as a handcrafted model
created by human experts. The R package \texttt{camel} implementing the
proposed method is available on the Comprehensive R Archive Network
\url{http://cran.r-project.org/web/packages/camel/}.
|