id | title | categories | abstract
|---|---|---|---|
1301.0701 | Similarity Assessment through blocking and affordance assignment in
Textual CBR | cs.IR cs.AI | It has been conceived that children learn new objects through their
affordances, that is, the actions that can be taken on them. We suggest that
web pages also have affordances, defined in terms of the users' information
needs that they meet. An assumption of the proposed approach is that different parts of a
text may not be equally important / relevant to a given query. Judgment on the
relevance of a web document requires, therefore, a thorough look into its
parts, rather than treating it as a monolithic content. We propose a method to
extract and assign affordances to texts and then use these affordances to
retrieve the corresponding web pages. The overall approach presented in the
paper relies on case-based representations that bridge the queries to the
affordances of web documents. We tested our method on the tourism domain and
the results are promising.
|
1301.0702 | Joint localization and clock synchronization for wireless sensor
networks | cs.IT math.IT | A fully-asynchronous network with one target sensor and a few anchors (nodes
with known locations) is considered. Localization and synchronization are
traditionally treated as two separate problems. In this paper, localization and
synchronization are studied under a unified framework. We present a new model in
which time-stamps obtained either via two-way communication between the nodes
or with a broadcast based protocol can be used in a simple estimator based on
least-squares (LS) to jointly estimate the position of the target node as well
as all the unknown clock-skews and clock-offsets. The Cram\'er-Rao lower bound
(CRLB) is derived for the considered problem and is used as a benchmark to
analyze the performance of the proposed estimator.
|
1301.0722 | Good parts first - a new algorithm for approximate search in lexica and
string databases | cs.CL cs.DS | We present a new efficient method for approximate search in electronic
lexica. Given an input string (the pattern) and a similarity threshold, the
algorithm retrieves all entries of the lexicon that are sufficiently similar to
the pattern. Search is organized in subsearches that always start with an exact
partial match where a substring of the input pattern is aligned with a
substring of a lexicon word. Afterwards this partial match is extended stepwise
to larger substrings. For aligning further parts of the pattern with
corresponding parts of lexicon entries, more errors are tolerated at each
subsequent step. For supporting this alignment order, which may start at any
part of the pattern, the lexicon is represented as a structure that enables
immediate access to any substring of a lexicon word and permits the extension
of such substrings in both directions. Experimental evaluations of the
approximate search procedure are given that show significant efficiency
improvements compared to existing techniques. Since the technique can be used
for large error bounds, it offers interesting possibilities for approximate
search in special collections of "long" strings, such as phrases, sentences, or
book titles.
|
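The abstract above describes a substring-alignment index for threshold-based approximate lexicon search. That index is not reproduced here; the sketch below only shows the baseline task it accelerates, i.e. returning every lexicon entry within an edit-distance threshold of the pattern (the function names and the toy lexicon are illustrative, not from the paper):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance (insert / delete / substitute)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def approx_search(pattern, lexicon, threshold):
    # brute-force baseline: return every entry within the similarity threshold
    return [w for w in lexicon if levenshtein(pattern, w) <= threshold]

print(approx_search("colour", ["color", "colour", "col", "colander"], 1))
# ['color', 'colour']
```

The paper's contribution is avoiding this full scan by starting from exact partial matches and extending them with growing error budgets.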
1301.0725 | The Sum-over-Forests density index: identifying dense regions in a graph | cs.LG stat.ML | This work introduces a novel nonparametric density index defined on graphs,
the Sum-over-Forests (SoF) density index. It is based on a clear and intuitive
idea: high-density regions in a graph are characterized by the fact that they
contain a large number of low-cost trees with high outdegrees, while low-density
regions contain few. Therefore, a Boltzmann probability distribution on
the countable set of forests in the graph is defined so that large (high-cost)
forests occur with a low probability while short (low-cost) forests occur with
a high probability. Then, the SoF density index of a node is defined as the
expected outdegree of this node in a non-trivial tree of the forest, thus
providing a measure of density around that node. Following the matrix-forest
theorem, and a statistical physics framework, it is shown that the SoF density
index can be easily computed in closed form through a simple matrix inversion.
Experiments on artificial and real data sets show that the proposed index
performs well on finding dense regions, for graphs of various origins.
|
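The closed-form computability mentioned above rests on the matrix-forest theorem: for a graph Laplacian L, the entries of Q = (I + L)^{-1} are ratios of counts of spanning rooted forests. The sketch below computes this plain forest matrix on a toy graph (it is not the paper's Boltzmann-weighted SoF index, and the graph is illustrative), showing the single matrix inversion involved:

```python
import numpy as np

# Toy undirected graph: a triangle (0, 1, 2) with a pendant node 3 attached to 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian

# Matrix-forest theorem: Q[i, j] is the fraction of spanning rooted forests
# in which nodes i and j belong to the same tree, rooted at j.
Q = np.linalg.inv(np.eye(4) + L)

# Since L has zero row sums, each row of Q sums to exactly 1, so the diagonal
# entries Q[i, i] behave like relative "forest accessibility" scores per node.
print(np.round(Q.sum(axis=1), 10))
```

The paper's SoF index additionally weights each forest by exp(-theta * cost) and extracts expected outdegrees, but the computational core is the same kind of inversion.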
1301.0730 | On the Low SNR Capacity of Maximum Ratio Combining over Rician Fading
Channels with Full Channel State Information | cs.IT math.IT | In this letter, we study the ergodic capacity of a maximum ratio combining
(MRC) Rician fading channel with full channel state information (CSI) at the
transmitter and at the receiver. We focus on the low Signal-to-Noise Ratio
(SNR) regime and we show that the capacity scales as (L Omega/(K+L)) SNR
log(1/SNR), where Omega is the expected channel gain per branch, K is the
Rician fading factor, and L is the number of diversity branches. We show that
one-bit CSI feedback at the transmitter is enough to achieve this capacity
using an on-off power control scheme. Our framework can be seen as a
generalization of recently established results regarding the fading-channels
capacity characterization in the low-SNR regime.
|
1301.0785 | Adaptive Intelligent Cooperative Spectrum Sensing In Cognitive Radio | cs.NE | Radio spectrum is a precious and scarce resource and must be utilized
efficiently and effectively. Cognitive radio is a promising solution for the
optimum utilization of this scarce natural resource. The spectrum owned by the
primary user should be shared with secondary users, but the primary user should
not be interfered with by the secondary users. In order to utilize the primary
user's spectrum, a secondary user must accurately detect the existence of the
primary user in the band of interest. In cooperative spectrum sensing, the
channel between the secondary users and the cognitive radio base station is
non-stationary and, due to multipath fading, causes errors both in the decisions
used for decision fusion and in the information used for information fusion. In
this paper, a neural-network-based cooperative spectrum sensing method is
proposed; its performance is evaluated, and it is observed that the neural
network based scheme improves significantly over the AND, OR, and majority rules.
|
1301.0802 | Borrowing strength in hierarchical Bayes: Posterior concentration of the
Dirichlet base measure | math.ST cs.LG math.PR stat.TH | This paper studies posterior concentration behavior of the base probability
measure of a Dirichlet measure, given observations associated with the sampled
Dirichlet processes, as the number of observations tends to infinity. The base
measure itself is endowed with another Dirichlet prior, a construction known as
the hierarchical Dirichlet processes (Teh et al. [J. Amer. Statist. Assoc. 101
(2006) 1566-1581]). Convergence rates are established in transportation
distances (i.e., Wasserstein metrics) under various conditions on the geometry
of the support of the true base measure. As a consequence of the theory, we
demonstrate the benefit of "borrowing strength" in the inference of multiple
groups of data - a powerful insight often invoked to motivate hierarchical
modeling. In certain settings, the gain in efficiency due to the latent
hierarchy can be dramatic, improving from a standard nonparametric rate to a
parametric rate of convergence. Tools developed include transportation
distances for nonparametric Bayesian hierarchies of random measures, the
existence of tests for Dirichlet measures, and geometric properties of the
support of Dirichlet measures.
|
1301.0803 | Cliques in complex networks reveal link formation and community
evolution | cs.SI physics.soc-ph | Missing link prediction in undirected and unweighted networks is an open and
challenging problem that has been studied intensively in recent years. In this
paper, we study the relationships between community structure and link
formation and propose a Fast Block probabilistic Model (FBM). In experiments on
four real-world networks, we obtain very good accuracy in missing link
prediction and a large improvement in computational efficiency compared to
conventional methods. By analyzing the mechanism of link formation, we also
discover that clique structure plays a significant role in helping us
understand how links grow in communities. We therefore summarize three
principles which are shown to explain well the mechanism of link formation and
network evolution from the theory of graph topology.
|
1301.0805 | Node-weighted interacting network measures improve the representation of
real-world complex systems | physics.soc-ph cs.SI physics.data-an | Network theory provides a rich toolbox consisting of methods, measures, and
models for studying the structure and dynamics of complex systems found in
nature, society, or technology. Recently, it has been pointed out that many
real-world complex systems are more adequately mapped by networks of
interacting or interdependent networks, e.g., a power grid showing
interdependency with a communication network. Additionally, in many real-world
situations it is reasonable to include node weights into complex network
statistics to reflect the varying size or importance of subsystems that are
represented by nodes in the network of interest. For example, nodes can represent
vastly different surface areas in climate networks, volumes in brain networks, or
economic capacities in trade networks. In this letter, combining both ideas, we
derive a novel class of statistical measures for analysing the structure of
networks of interacting networks with heterogeneous node weights. Using a
prototypical spatial network model, we show that the newly introduced
node-weighted interacting network measures indeed provide an improved
representation of the underlying system's properties as compared to their
unweighted analogues. We apply our method to study the complex network
structure of cross-boundary trade between European Union (EU) and non-EU
countries, finding that it provides important information on trade balance and
economic robustness.
|
1301.0859 | Power-Efficient System Design for Cellular-Based Machine-to-Machine
Communications | cs.IT cs.NI math.IT | The growing popularity of Machine-to-Machine (M2M) communications in cellular
networks is driving the need to optimize networks based on the characteristics
of M2M, which are significantly different from the requirements that current
networks are designed to meet. First, M2M requires a large number of short
sessions, as opposed to the small number of long-lived sessions required by
human-generated traffic. Second, M2M involves a number of battery-operated
devices that are static in locations such as basements and tunnels, and need to
transmit at elevated powers compared to traditional devices. Third,
replacing or recharging batteries of such devices may not be feasible. All
these differences highlight the importance of a systematic framework to study
the power and energy optimal system design in the regime of interest for M2M,
which is the main focus of this paper. For a variety of coordinated and
uncoordinated transmission strategies, we derive results for the optimal
transmit power, energy per bit, and the maximum load supported by the base
station, leading to the following design guidelines: (i) frequency division
multiple access (FDMA), including equal bandwidth allocation, is sum-power
optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA
is the best practical strategy overall, uncoordinated code division multiple
access (CDMA) is almost as good when the base station is lightly loaded, (iii)
the value of optimization within FDMA is in general not significant in the
regime of interest for M2M.
|
1301.0875 | On Event Triggered Tracking for Nonlinear Systems | cs.SY math.OC | In this paper we study an event based control algorithm for trajectory
tracking in nonlinear systems. The desired trajectory is modelled as the
solution of a reference system with an exogenous input and it is assumed that
the desired trajectory and the exogenous input to the reference system are
uniformly bounded. Given a continuous-time control law that guarantees global
uniform asymptotic tracking of the desired trajectory, our algorithm provides
an event based controller that not only guarantees uniform ultimate boundedness
of the tracking error, but also ensures non-accumulation of inter-execution
times. In the case that the derivative of the exogenous input to the reference
system is also uniformly bounded, an arbitrarily small ultimate bound can be
designed. If the exogenous input to the reference system is piecewise
continuous and not differentiable everywhere then the achievable ultimate bound
is constrained and the result is local, though with a known region of
attraction. The main ideas in the paper are illustrated through simulations of
trajectory tracking by a nonlinear system.
|
1301.0878 | Fast and RIP-optimal transforms | cs.NA cs.IT math.IT | We study constructions of $k \times n$ matrices $A$ that both (1) satisfy the
restricted isometry property (RIP) at sparsity $s$ with optimal parameters, and
(2) are efficient in the sense that only $O(n\log n)$ operations are required
to compute $Ax$ given a vector $x$. Our construction is based on repeated
application of independent transformations of the form $DH$, where $H$ is a
Hadamard or Fourier transform and $D$ is a diagonal matrix with random
$\{+1,-1\}$ elements on the diagonal, followed by any $k \times n$ matrix of
orthonormal rows (e.g.\ selection of $k$ coordinates). We provide guarantees
(1) and (2) for a larger regime of parameters for which such constructions were
previously unknown. Additionally, our construction does not suffer from the
extra poly-logarithmic factor multiplying the number of observations $k$ as a
function of the sparsity $s$, as present in the currently best known RIP
estimates for partial random Fourier matrices and other classes of structured
random matrices.
|
1301.0901 | Compressed Sensing under Matrix Uncertainty: Optimum Thresholds and
Robust Approximate Message Passing | cs.IT cond-mat.stat-mech math.IT math.ST stat.TH | In compressed sensing one measures sparse signals directly in a compressed
form via a linear transform and then reconstructs the original signal. However,
it is often the case that the linear transform itself is known only
approximately, a situation called matrix uncertainty, and that the measurement
process is noisy. Here we present two contributions to this problem: first, we
use the replica method to determine the mean-squared error of the Bayes-optimal
reconstruction of sparse signals under matrix uncertainty. Second, we consider
a robust variant of the approximate message passing algorithm and demonstrate
numerically that in the limit of large systems, this algorithm matches the
optimal performance in a large region of parameters.
|
1301.0926 | Source Coding with in-Block Memory and Causally Controllable Side
Information | cs.IT math.IT | The recently proposed set-up of source coding with a side information
"vending machine" allows the decoder to select actions in order to control the
quality of the side information. The actions can depend on the message received
from the encoder and on the previously measured samples of the side
information, and are cost constrained. Moreover, the final estimate of the
source by the decoder is a function of the encoder's message and depends
causally on the side information sequence. Previous work by Permuter and
Weissman has characterized the rate-distortion-cost function in the special
case in which the source and the "vending machine" are memoryless. In this
work, motivated by the related channel coding model introduced by Kramer, the
rate-distortion-cost function characterization is extended to a model with
in-block memory. Various special cases are studied including block-feedforward
and side information repeat request models.
|
1301.0929 | Hybridization of Evolutionary Algorithms | cs.NE | Evolutionary algorithms are good general problem solvers but suffer from a
lack of domain-specific knowledge. However, problem-specific knowledge can
be added to evolutionary algorithms by hybridizing. Interestingly, all the
elements of an evolutionary algorithm can be hybridized. In this chapter, the
hybridization of three elements of evolutionary algorithms is
discussed: the objective function, the survivor selection operator, and the
parameter settings. As the objective function, an existing heuristic function
that constructs a solution to the problem in the traditional way is used; this
function is embedded into the evolutionary algorithm, which serves as a
generator of new solutions. In addition, the objective function is improved by
local search heuristics. A new neutral selection operator has been developed
that is capable of dealing with neutral solutions, i.e., solutions that have
different representations but expose equal values of the objective function. The
aim of this operator is to direct the evolutionary search into new,
undiscovered regions of the search space. To avoid wrong settings of the
parameters that control the behavior of the evolutionary algorithm,
self-adaptation is used. Finally, this hybrid self-adaptive evolutionary
algorithm is applied to two real-world NP-hard problems: graph
3-coloring and the optimization of markers in the clothing industry. Extensive
experiments show that these hybridizations considerably improve the results of
the evolutionary algorithms. Furthermore, the impact of each particular
hybridization is analyzed in detail as well.
|
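As a concrete illustration of hybridizing the objective-function element with a local search heuristic, the minimal memetic sketch below applies greedy bit-flip improvement to each offspring on a toy onemax problem (all names, rates, and the problem itself are illustrative stand-ins, not the chapter's graph-coloring or marker-optimization setups):

```python
import random

def onemax(bits):
    # toy fitness: number of ones in the bit string
    return sum(bits)

def local_search(bits):
    # greedy bit-flip improvement: the "local search" half of the hybrid
    for i in range(len(bits)):
        flipped = bits[:i] + [1 - bits[i]] + bits[i + 1:]
        if onemax(flipped) > onemax(bits):
            bits = flipped
    return bits

def memetic(n=20, pop_size=10, gens=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        parent = max(rng.sample(pop, 3), key=onemax)                  # tournament selection
        child = [1 - b if rng.random() < 0.1 else b for b in parent]  # bit-flip mutation
        child = local_search(child)                                   # hybrid step
        pop.remove(min(pop, key=onemax))                              # steady-state replacement
        pop.append(child)
    return max(onemax(ind) for ind in pop)

print(memetic())  # local search drives every offspring to the optimum: prints 20
```

On onemax the local search alone reaches the optimum, which is exactly the point: the hybrid exploits cheap domain knowledge the plain evolutionary loop lacks.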
1301.0930 | Comparative Studies on Decentralized Multiloop PID Controller Design
Using Evolutionary Algorithms | cs.SY cs.NE | Decentralized PID controllers have been designed in this paper for
simultaneous tracking of individual process variables in multivariable systems
under step reference input. The controller design framework takes into account
the minimization of a weighted sum of Integral of Time multiplied Squared Error
(ITSE) and Integral of Squared Controller Output (ISCO) so as to balance the
overall tracking errors for the process variables and required variation in the
corresponding manipulated variables. Decentralized PID gains are tuned using
three popular Evolutionary Algorithms (EAs) viz. Genetic Algorithm (GA),
Evolutionary Strategy (ES) and Cultural Algorithm (CA). Credible simulation
comparisons have been reported for four benchmark 2x2 multivariable processes.
|
1301.0932 | Knowledge Sharing: A Model | cs.SI cs.AI | We know things because we learn about them, and there is always something we
share about them; nowadays, many media can represent how this happens, serving
as an infrastructure for knowledge sharing. This paper aims to introduce a model
for understanding a problem in knowledge sharing based on interaction.
|
1301.0935 | Multi-user lattice coding for the multiple-access relay channel | cs.IT math.IT | This paper considers the multi-antenna multiple access relay channel (MARC),
in which multiple users transmit messages to a common destination with the
assistance of a relay. In a variety of MARC settings, the dynamic decode and
forward (DDF) protocol is very useful due to its outstanding rate performance.
However, the lack of good structured codebooks so far hinders practical
applications of DDF for MARC. In this work, two classes of structured MARC
codes are proposed: 1) one-to-one relay-mapper aided multiuser lattice coding
(O-MLC), and 2) modulo-sum relay-mapper aided multiuser lattice coding
(MS-MLC). The former enjoys better rate performance, while the latter provides
more flexibility to tradeoff between the complexity of the relay mapper and the
rate performance. It is shown that, in order to approach the rate performance
achievable by an unstructured codebook with maximum-likelihood decoding, it is
crucial to use a new K-stage coset decoder for structured O-MLC, instead of the
one-stage decoder proposed in previous works. However, if O-MLC is decoded with
the one-stage decoder only, it can still achieve the optimal DDF
diversity-multiplexing gain tradeoff in the high signal-to-noise ratio regime.
As for MS-MLC, its rate performance can approach that of the O-MLC by
increasing the complexity of the modulo-sum relay-mapper. Finally, for
practical implementations of both O-MLC and MS-MLC, practical short length
lattice codes with linear mappers are designed, which facilitate efficient
lattice decoding. Simulation results show that the proposed coding schemes
outperform existing schemes in terms of outage probabilities in a variety of
channel settings.
|
1301.0939 | Graph 3-coloring with a hybrid self-adaptive evolutionary algorithm | cs.NE | This paper proposes a hybrid self-adaptive evolutionary algorithm for graph
coloring that is hybridized with the following novel elements: heuristic
genotype-phenotype mapping, a swap local search heuristic, and a neutral
survivor selection operator. This algorithm was compared with the evolutionary
algorithm with the SAW method of Eiben et al., the Tabucol algorithm of Hertz
and de Werra, and the hybrid evolutionary algorithm of Galinier and Hao. The
performance of these algorithms was tested on a test suite consisting of
randomly generated 3-colorable graphs of various structural features, such as
graph size, type, edge density, and variability in sizes of color classes.
Furthermore, the test graphs were generated including the phase transition
where the graphs are hard to color. The purpose of the extensive experimental
work was threefold: to investigate the behavior of the tested algorithms in the
phase transition, to identify what impact hybridization with the DSatur
traditional heuristic has on the evolutionary algorithm, and to show how graph
structural features influence the performance of the graph-coloring algorithms.
The results indicate that the performance of the hybrid self-adaptive
evolutionary algorithm is comparable with, or better than, the performance of
the hybrid evolutionary algorithm which is one of the best graph-coloring
algorithms today. Moreover, the fact that all the considered algorithms
performed poorly on flat graphs confirms that this type of graph is indeed the
hardest to color.
|
1301.0954 | Cellular Systems with Many Antennas: Large System Analysis under Pilot
Contamination | cs.IT math.IT | Base stations with a large number of transmit antennas have the potential to
serve a large number of users simultaneously at higher rates. They also promise
a lower power consumption due to coherent combining at the receiver. However,
the receiver processing in the uplink relies on the channel estimates which are
known to suffer from pilot interference. In this work, we perform an uplink
large-system analysis of a multi-cell multi-antenna system in which the receiver
employs matched filtering with a pilot-contaminated estimate. We find the
asymptotic Signal to Interference plus Noise Ratio (SINR) as the number of
antennas and number of users per base station grow large while maintaining a
fixed ratio. To do this, we make use of the similarity of the uplink received
signal in a multi-antenna system to the representation of the received signal
in CDMA systems. The asymptotic SINR expression explicitly captures the effect
of pilot contamination and that of interference averaging. This also explains
the SINR performance of receiver processing schemes in different regimes, such
as when the number of antennas is comparable to the number of users as
well as when the number of antennas greatly exceeds the number of users. Finally,
we also propose that the adaptive MMSE symbol detection scheme, which does not
require explicit channel knowledge, can be employed for cellular systems with a
large number of antennas.
|
1301.0955 | Fast Multi-Scale Community Detection based on Local Criteria within a
Multi-Threaded Algorithm | cs.DS cs.SI physics.soc-ph | Many systems can be described using graphs, or networks. Detecting
communities in these networks can provide information about the underlying
structure and functioning of the original systems. Yet this detection is a
complex task and a large amount of work was dedicated to it in the past decade.
One important feature is that communities can be found at several scales, or
levels of resolution, indicating several levels of organisations. Therefore
solutions to the community structure may not be unique. Also networks tend to
be large and hence require efficient processing. In this work, we present a new
algorithm for the fast detection of communities across scales using a local
criterion. We exploit the local aspect of the criterion to enable parallel
computation and improve the algorithm's efficiency further. The algorithm is
tested against large generated multi-scale networks and experiments demonstrate
its efficiency and accuracy.
|
1301.0957 | On Large Scale Distributed Compression and Dispersive Information
Routing for Networks | cs.IT math.IT | This paper considers the problem of distributed source coding for a large
network. A major obstacle that poses an existential threat to practical
deployment of conventional approaches to distributed coding is the exponential
growth of the decoder complexity with the number of sources and the encoding
rates. This growth in complexity renders many traditional approaches
impractical even for moderately sized networks. In this paper, we propose a new
decoding paradigm for large scale distributed compression wherein the decoder
complexity is explicitly controlled during the design. Central to our approach
is a module called the "bit-subset selector" whose role is to judiciously
extract an appropriate subset of the received bits for decoding per individual
source. We propose a practical design strategy, based on deterministic
annealing (DA) for the joint design of the system components, that enables
direct optimization of the decoder complexity-distortion trade-off, and thereby
the desired scalability. We also point out the direct connections between the
problem of large scale distributed compression and a related problem in sensor
networks, namely, dispersive information routing of correlated sources. This
allows us to extend the design principles proposed in the context of large
scale distributed compression to design efficient routers for minimum cost
communication of correlated sources across a network. Experiments on both real
and synthetic data-sets provide evidence for substantial gains over
conventional approaches.
|
1301.0958 | Probabilistic entailment in the setting of coherence: The role of quasi
conjunction and inclusion relation | math.PR cs.AI math.ST stat.TH | In this paper, by adopting a coherence-based probabilistic approach to
default reasoning, we focus the study on the logical operation of quasi
conjunction and the Goodman-Nguyen inclusion relation for conditional events.
We recall that quasi conjunction is a basic notion for defining consistency of
conditional knowledge bases. By deepening some results given in a previous
paper we show that, given any finite family of conditional events F and any
nonempty subset S of F, the family F p-entails the quasi conjunction C(S);
then, given any conditional event E|H, we analyze the equivalence between
p-entailment of E|H from F and p-entailment of E|H from C(S), where S is some
nonempty subset of F. We also illustrate some alternative theorems related to
p-consistency and p-entailment. Finally, we deepen the study of the connections
between the notions of p-entailment and inclusion relation by introducing for a
pair (F,E|H) the (possibly empty) class K of the subsets S of F such that C(S)
implies E|H. We show that the class K satisfies many properties; in particular
K is additive and has a greatest element which can be determined by applying a
suitable algorithm.
|
1301.0975 | Multiple layer Phase Shift Linear Space-time Block Code for High-speed
Visible Light Communications | cs.IT math.IT | In this letter, we consider intensity modulation/direct detection (IM/DD)
channel in the visible light communication (VLC) systems with multiple
transmitter phosphor-based white light-emitting diodes (LEDs) and a single
receiver avalanche photodiode (APD). We propose a Multiple Layer Phase Shift
Linear Space-time Block Code (MLPS-LSTBC). We show that our proposed code for
VLC has the following main features: (a) the symbol transmission rate is
$N/(N+M-1)$, where $N$ is the number of transmitter LEDs and $M$ denotes the
number of shift intervals contained by a single codeword per layer; (b)
zero-forcing receiver can transform the virtual MIMO matrix channel into
parallel sub-channels even without channel state information at the receiver
side (CSIR); (c) Our MLPS-LSTBC can asymptotically enhance the spectral
efficiency by $\min (M\text{,}N)$, which is attractive for LED-based VLC with
limited electrical modulation bandwidth. By simulations, we achieve the record
data rate of 1.5 Gb/s with the bit error rate performance below the FEC limit
of $2\times10^{-3}$ via multiple 100-MBaud transmission of OOK signal.
|
1301.0977 | DAGGER: A Scalable Index for Reachability Queries in Large Dynamic
Graphs | cs.DB cs.DS | With the ubiquity of large-scale graph data in a variety of application
domains, querying them effectively is a challenge. In particular, reachability
queries are becoming increasingly important, especially for containment,
subsumption, and connectivity checks. Whereas many methods have been proposed
for static graph reachability, many real-world graphs are constantly evolving,
which calls for dynamic indexing. In this paper, we present a fully dynamic
reachability index over dynamic graphs. Our method, called DAGGER, is a
light-weight index based on interval labeling, that scales to million node
graphs and beyond. Our extensive experimental evaluation on real-world and
synthetic graphs confirms its effectiveness over baseline methods.
|
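The interval-labeling idea underlying DAGGER can be illustrated on the static tree case it generalizes: a DFS assigns each node a [pre, post] interval, and reachability reduces to interval containment. The sketch below uses a hypothetical toy tree; DAGGER itself additionally handles non-tree edges and dynamic updates, which are not shown:

```python
def interval_labels(children, root):
    # DFS pre/post numbering: in a tree, u reaches v iff
    # pre[u] <= pre[v] and post[v] <= post[u].
    labels = {}
    counter = [0]
    def dfs(u):
        pre = counter[0]
        counter[0] += 1
        for c in children.get(u, []):
            dfs(c)
        labels[u] = (pre, counter[0])
        counter[0] += 1
    dfs(root)
    return labels

def reaches(labels, u, v):
    pre_u, post_u = labels[u]
    pre_v, post_v = labels[v]
    return pre_u <= pre_v and post_v <= post_u

tree = {"a": ["b", "c"], "b": ["d"]}  # a -> b -> d, a -> c
labels = interval_labels(tree, "a")
print(reaches(labels, "a", "d"), reaches(labels, "b", "c"))  # True False
```

Each reachability check is O(1) after the one-time labeling pass, which is what makes interval labeling attractive for million-node graphs.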
1301.0980 | Upper Bounds on Matching Families in $\mathbb{Z}_{pq}^n$ | cs.IT math.IT | \textit{Matching families} are one of the major ingredients in the
construction of {\em locally decodable codes} (LDCs) and the best known
constructions of LDCs with a constant number of queries are based on matching
families. The determination of the largest size of any matching family in
$\mathbb{Z}_m^n$, where $\mathbb{Z}_m$ is the ring of integers modulo $m$, is
an interesting problem. In this paper, we show an upper bound of
$O((pq)^{0.625n+0.125})$ for the size of any matching family in
$\mathbb{Z}_{pq}^n$, where $p$ and $q$ are two distinct primes. Our bound is
valid when $n$ is a constant, $p\rightarrow \infty$ and $p/q\rightarrow 1$. Our
result improves an upper bound of Dvir {\it et al.}
|
1301.0998 | Stratified SIFT Matching for Human Iris Recognition | cs.CV | This paper proposes an efficient three-fold stratified SIFT matching for iris
recognition. The objective is to filter wrongly paired conventional SIFT
matches. In Strata I, the keypoints from gallery and probe iris images are
paired using the traditional SIFT approach. Due to high image similarity at
different regions of the iris, there may be some impairments. These are detected and
filtered by finding gradient of paired keypoints in Strata II. Further, the
scaling factor of paired keypoints is used to remove impairments in Strata III.
The pairs retained after Strata III are likely to be potential matches for iris
recognition. The proposed system performs with an accuracy of 96.08% and 97.15%
on the publicly available CASIAV3 and BATH databases, respectively. This marks a
significant improvement in accuracy and FAR over existing SIFT matching for
iris recognition.
|
1301.1002 | Dynamic Network Control for Confidential Multi-hop Communications | cs.IT cs.SY math.IT | We consider the problem of resource allocation and control of multihop
networks in which multiple source-destination pairs communicate confidential
messages, to be kept confidential from the intermediate nodes. We pose the
problem as that of network utility maximization, into which confidentiality is
incorporated as an additional quality-of-service constraint. We develop a
simple yet provably optimal dynamic control algorithm that combines flow
control, routing and end-to-end secrecy-encoding. In order to achieve
confidentiality, our scheme exploits multipath diversity and temporal diversity
due to channel variability. Our end-to-end dynamic encoding scheme encodes
confidential messages across multiple packets, to be combined at the ultimate
destination for recovery. We first develop an optimal dynamic policy for the
case in which the number of blocks across which secrecy encoding is performed
is asymptotically large. Next, we consider encoding across a finite number of
packets, which eliminates the possibility of achieving perfect secrecy. For
this case, we develop a dynamic policy to choose the encoding rates for each
message, based on the instantaneous channel state information, queue states and
secrecy outage requirements. By numerical analysis, we observe that the
proposed scheme approaches the optimal rates asymptotically with increasing
block size. Finally, we address the consequences of practical implementation
issues such as infrequent queue updates and de-centralized scheduling. We
demonstrate the efficacy of our policies by numerical studies under various
network conditions.
|
1301.1003 | Charting the Tractability Frontier of Certain Conjunctive Query
Answering | cs.DB cs.LO | An uncertain database is defined as a relational database in which primary
keys need not be satisfied. A repair (or possible world) of such a database is
obtained by selecting a maximal number of tuples without ever selecting two
distinct tuples with the same primary key value. For a Boolean query q, the
decision problem CERTAINTY(q) takes as input an uncertain database db and asks
whether q is satisfied by every repair of db. Our main focus is on acyclic
Boolean conjunctive queries without self-join. Previous work has introduced the
notion of (directed) attack graph of such queries, and has proved that
CERTAINTY(q) is first-order expressible if and only if the attack graph of q is
acyclic. The current paper investigates the boundary between tractability and
intractability of CERTAINTY(q). We first classify cycles in attack graphs as
either weak or strong, and then prove, among other results, the following. If the attack
graph of a query q contains a strong cycle, then CERTAINTY(q) is coNP-complete.
If the attack graph of q contains no strong cycle and every weak cycle of it is
terminal (i.e., no edge leads from a vertex in the cycle to a vertex outside
the cycle), then CERTAINTY(q) is in P. We then partially address the only
remaining open case, i.e., when the attack graph contains some nonterminal
cycle and no strong cycle. Finally, we establish a relationship between the
complexities of CERTAINTY(q) and evaluating q on probabilistic databases.
|
1301.1018 | GESPAR: Efficient Phase Retrieval of Sparse Signals | cs.IT math.IT | We consider the problem of phase retrieval, namely, recovery of a signal from
the magnitude of its Fourier transform, or of any other linear transform. Due
to the loss of the Fourier phase information, this problem is ill-posed.
Therefore, prior information on the signal is needed in order to enable its
recovery. In this work we consider the case in which the signal is known to be
sparse, i.e., it consists of a small number of nonzero elements in an
appropriate basis. We propose a fast local search method for recovering a
sparse signal from measurements of its Fourier transform (or other linear
transform) magnitude which we refer to as GESPAR: GrEedy Sparse PhAse
Retrieval. Our algorithm does not require matrix lifting, unlike previous
approaches, and therefore is potentially suitable for large-scale problems such
as images. Simulation results indicate that GESPAR is fast and more accurate
than existing techniques in a variety of settings.
|
1301.1027 | On online energy harvesting in multiple access communication systems | cs.IT math.IT | We investigate performance limits of a multiple access communication system
with energy harvesting nodes where the utility function is taken to be the
long-term average sum-throughput. We assume a causal structure for energy
arrivals and study the problem in the continuous time regime. For this setting,
we first characterize a storage dam model that captures the dynamics of a
battery with energy harvesting and variable transmission power. Using this
model, we next establish an upper bound on the throughput problem as a function
of battery capacity. We also formulate a non-linear optimization problem to
determine optimal achievable power policies for transmitters. Applying a
calculus-of-variations technique, we then derive Euler-Lagrange equations as
necessary conditions for optimum power policies in terms of a system of coupled
partial integro-differential equations (PIDEs). Based on a Gauss-Seidel
algorithm, we devise an iterative algorithm to solve these equations. We also
propose a fixed-point algorithm for the symmetric multiple access setting in
which the statistical descriptions of the energy harvesters are identical. To
support the analysis and our iterative algorithms, comprehensive numerical
results are also provided.
|
1301.1061 | On the Minimum Energy of Sending Gaussian Multiterminal Sources over the
Gaussian MAC | cs.IT math.IT | In this work, we investigate the minimum energy of transmitting correlated
sources over the Gaussian multiple-access channel (MAC). Compared to other
works on joint source-channel coding, we consider the general scenario where
the source and channel bandwidths are not naturally matched. In particular, we
propose the use of hybrid digital-analog coding to improve transmission energy
efficiency. Different models of correlated sources are
studied. We first consider lossless transmission of binary sources over the
MAC. We then treat lossy transmission of Gaussian sources over the Gaussian
MAC, including CEO sources and multiterminal sources. In all cases, we show
that hybrid transmission achieves the best known energy efficiency.
|
1301.1064 | Automatic crosswind flight of tethered wings for airborne wind energy:
modeling, control design and experimental results | cs.SY math.OC | An approach to control tethered wings for airborne wind energy is proposed. A
fixed length of the lines is considered, and the aim of the control system is
to obtain figure-eight crosswind trajectories. The proposed technique is based
on the notion of the wing's "velocity angle" and, in contrast with most
existing approaches, it does not require a measurement of the wind speed or of
the effective wind at the wing's location. Moreover, the proposed approach
features few parameters, whose effects on the system's behavior are very
intuitive, hence simplifying tuning procedures. A simplified model of the
steering dynamics of the wing is derived from first-principle laws, compared
with experimental data and used for the control design. The control algorithm
is divided into a low-level loop for the velocity angle and a high-level
guidance strategy to achieve the desired flight patterns. The robustness of the
inner loop is verified analytically, and the overall control system is tested
experimentally on a small-scale prototype, with varying wind conditions and
using different wings.
|
1301.1065 | Effective number of samples and pseudo-random nonlinear distortions in
digital OFDM coded signal | physics.data-an cs.IT math.IT | This paper concerns theoretical modeling of the degradation of a signal with
OFDM coding caused by pseudo-random nonlinear distortions introduced by an
analog-to-digital or digital-to-analog converter. A new quantity, effective
number of samples, is defined and used for derivation of accurate expressions
for autocorrelation function and the total power of the distortions. The
derivation is based on a probabilistic model of the signal and its transition
probability. It is shown that for digital (discrete and quantized) signals the
effective number of samples replaces the total number of samples and is the
proper quantity defining their properties.
|
1301.1166 | Quantum channels from association schemes | quant-ph cs.IT math.IT | We propose in this note the study of quantum channels from association
schemes. This is done by interpreting the $(0,1)$-matrices of a scheme as the
Kraus operators of a channel. Working in the framework of one-shot zero-error
information theory, we give bounds and closed formulas for various independence
numbers of the relative non-commutative (confusability) graphs, or,
equivalently, graphical operator systems. We use pseudocyclic association
schemes as an example. In this case, we show that the unitary
entanglement-assisted independence number grows at least quadratically faster,
with respect to matrix size, than the independence number. The latter parameter
was introduced by Beigi and Shor as a generalization of the one-shot Shannon
capacity, in analogy with the corresponding graph-theoretic notion.
|
1301.1218 | Finding the True Frequent Itemsets | cs.LG cs.DB cs.DS stat.ML | Mining Frequent Itemsets (FIs) is a fundamental primitive in data mining. It
requires identifying all itemsets appearing in at least a fraction $\theta$ of
a transactional dataset $\mathcal{D}$. Often though, the ultimate goal of
mining $\mathcal{D}$ is not an analysis of the dataset \emph{per se}, but the
understanding of the underlying process that generated it. Specifically, in
many applications $\mathcal{D}$ is a collection of samples obtained from an
unknown probability distribution $\pi$ on transactions, and by extracting the
FIs in $\mathcal{D}$ one attempts to infer itemsets that are frequently (i.e.,
with probability at least $\theta$) generated by $\pi$, which we call the True
Frequent Itemsets (TFIs). Due to the inherently stochastic nature of the
generative process, the set of FIs is only a rough approximation of the set of
TFIs, as it often contains a huge number of \emph{false positives}, i.e.,
spurious itemsets that are not among the TFIs. In this work we design and
analyze an algorithm to identify a threshold $\hat{\theta}$ such that the
collection of itemsets with frequency at least $\hat{\theta}$ in $\mathcal{D}$
contains only TFIs with probability at least $1-\delta$, for some
user-specified $\delta$. Our method uses results from statistical learning
theory involving the (empirical) VC-dimension of the problem at hand. This
allows us to identify almost all the TFIs without including any false positive.
We also experimentally compare our method with the direct mining of
$\mathcal{D}$ at frequency $\theta$ and with techniques based on widely-used
standard bounds (i.e., the Chernoff bounds) of the binomial distribution, and
show that our algorithm outperforms these methods and achieves even better
results than what is guaranteed by the theoretical analysis.
|
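The abstract above contrasts the authors' VC-dimension-based threshold with thresholds derived from standard binomial tail bounds. As a rough illustration of the latter idea only (the function name and the single-itemset Hoeffding-style bound below are our own simplification, not the paper's method, which controls all itemsets simultaneously):

```python
import math

def raised_threshold(theta: float, delta: float, n: int) -> float:
    """Return a raised frequency threshold theta_hat > theta such that, by a
    Hoeffding-style tail bound, a single itemset with true probability below
    theta exceeds theta_hat in n random transactions with probability < delta.
    (Illustrative only; the paper's method uses the empirical VC-dimension to
    control false positives over all itemsets at once.)"""
    eps = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return theta + eps

# Demo: theta = 0.05, delta = 0.01, 100k transactions -> theta_hat ~ 0.0548.
theta_hat = raised_threshold(0.05, 0.01, 100_000)
```

More data tightens the margin, so the raised threshold approaches $\theta$ as $n$ grows.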
1301.1223 | Nearest Neighbor Decoding and Pilot-Aided Channel Estimation for Fading
Channels | cs.IT math.IT | We study the information rates of non-coherent, stationary, Gaussian,
multiple-input multiple-output (MIMO) flat-fading channels that are achievable
with nearest neighbor decoding and pilot-aided channel estimation. In
particular, we investigate the behavior of these achievable rates in the limit
as the signal-to-noise ratio (SNR) tends to infinity by analyzing the capacity
pre-log, which is defined as the limiting ratio of the capacity to the
logarithm of the SNR as the SNR tends to infinity. We demonstrate that a scheme
estimating the channel using pilot symbols and detecting the message using
nearest neighbor decoding (while assuming that the channel estimation is
perfect) essentially achieves the capacity pre-log of non-coherent
multiple-input single-output flat-fading channels, and it essentially achieves
the best lower bound known so far on the capacity pre-log of non-coherent MIMO
flat-fading channels. We then extend our analysis to the multiple-access
channel.
|
1301.1254 | Dynamical Models and Tracking Regret in Online Convex Programming | stat.ML cs.LG | This paper describes a new online convex optimization method which
incorporates a family of candidate dynamical models and establishes novel
tracking regret bounds that scale with the comparator's deviation from the best
dynamical model in this family. Previous online optimization methods are
designed to have a total accumulated loss comparable to that of the best
comparator sequence, and existing tracking or shifting regret bounds scale with
the overall variation of the comparator sequence. In many practical scenarios,
however, the environment is nonstationary and comparator sequences with small
variation are quite weak, resulting in large losses. The proposed Dynamic
Mirror Descent method, in contrast, can yield low regret relative to highly
variable comparator sequences by both tracking the best dynamical model and
forming predictions based on that model. This concept is demonstrated
empirically in the context of sequential compressive observations of a dynamic
scene and tracking a dynamic social network.
|
1301.1279 | Polynomial-complexity, Low-delay Scheduling for SCFDMA-based Wireless
Uplink Networks (Technical Report) | cs.NI cs.IT math.IT | Uplink scheduling/resource allocation under the single-carrier FDMA
constraint is investigated, taking into account the queuing dynamics at the
transmitters. Under the single-carrier constraint, the problem of MaxWeight
scheduling, as well as that of determining if a given number of packets can be
served from all the users, are shown to be NP-complete. Finally, a
matching-based scheduling algorithm is presented that requires only a
polynomial number of computations per timeslot, and in the case of a system
with large bandwidth and user population, provably provides a good delay
(small-queue) performance, even under the single-carrier constraint.
In summary, the results in the first part of the paper support the recent push to
remove SCFDMA from the Standards, whereas those in the second part present a
way of working around the single-carrier constraint if it remains in the
Standards.
|
1301.1295 | Time-Frequency Representation of Microseismic Signals using the
Synchrosqueezing Transform | physics.geo-ph cs.CE cs.CV | Resonance frequencies can provide useful information on the deformation
occurring during fracturing experiments or $CO_2$ management, complementary to
the microseismic event distribution. An accurate time-frequency representation
is of crucial importance prior to interpreting the cause of resonance
frequencies during microseismic experiments. The popular methods of Short-Time
Fourier Transform (STFT) and wavelet analysis have limitations in representing
close frequencies and in dealing with fast-varying instantaneous frequencies,
which are often the nature of microseismic signals. The synchrosqueezing
transform (SST) is a promising tool to track these resonant frequencies and
provide a detailed time-frequency representation. Here we apply the
synchrosqueezing transform to microseismic signals and also show its potential
to general seismic signal processing applications.
|
1301.1299 | Automated Variational Inference in Probabilistic Programming | stat.ML cs.AI cs.LG | We present a new algorithm for approximate inference in probabilistic
programs, based on a stochastic gradient for variational programs. This method
is efficient without restrictions on the probabilistic program; it is
particularly practical for distributions which are not analytically tractable,
including highly structured distributions that arise in probabilistic programs.
We show how to automatically derive mean-field probabilistic programs and
optimize them, and demonstrate that our perspective improves inference
efficiency over other algorithms.
|
1301.1327 | Weighted $\ell_1$-minimization for generalized non-uniform sparse model | cs.IT math.IT | Model-based compressed sensing refers to compressed sensing with extra
structure about the underlying sparse signal known a priori. Recent work has
demonstrated that both for deterministic and probabilistic models imposed on
the signal, this extra information can be successfully exploited to enhance
recovery performance. In particular, weighted $\ell_1$-minimization with
suitable choice of weights has been shown to improve performance in the so
called non-uniform sparse model of signals. In this paper, we consider a full
generalization of the non-uniform sparse model with very mild assumptions. We
prove that when the measurements are obtained using a matrix with i.i.d.
Gaussian entries, weighted $\ell_1$-minimization successfully recovers the
sparse signal from its measurements with overwhelming probability. We also
provide a method to choose these weights for any general signal model from the
non-uniform sparse class of signal models.
|
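The weighted $\ell_1$ objective in the abstract above can be illustrated with a small solver sketch. The following is not the paper's contribution (which is the recovery analysis and the weight-selection method); it is a generic iterative soft-thresholding (ISTA) loop for the weighted objective, with all names and parameter values of our own choosing:

```python
import numpy as np

def weighted_ista(A, y, w, lam=0.1, n_iter=1000):
    """Minimize 0.5*||A x - y||^2 + lam * sum_i w[i]*|x[i]| by iterative
    soft-thresholding with per-coordinate weights (a sketch; the paper
    analyzes the exact weighted l1 program, not this particular solver)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        z = x - g / L
        t = lam * w / L                    # per-coordinate thresholds
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    return x

# Demo: 3-sparse signal, 40 Gaussian measurements; smaller weights on the
# coordinates believed likely to be nonzero (the non-uniform prior).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[[3, 17, 60]] = [1.0, -2.0, 1.5]
y = A @ x0
w = np.ones(100)
w[[3, 17, 60]] = 0.1
x_hat = weighted_ista(A, y, w, lam=0.05)
```

Down-weighting the likely support lowers the shrinkage exactly where large entries are expected, which is the intuition behind the weighted formulation.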
1301.1332 | A Logic Programming Approach to Integration Network Inference | cs.DB cs.AI | The discovery, representation and reconstruction of (technical) integration
networks from Network Mining (NM) raw data is a difficult problem for
enterprises. This is due to large and complex IT landscapes within and across
enterprise boundaries, heterogeneous technology stacks, and fragmented data. To
remain competitive, visibility into the enterprise and partner IT networks on
different, interrelated abstraction levels is desirable.
We present an approach to represent and reconstruct the integration networks
from NM raw data using logic programming based on first-order logic. The raw
data, expressed as an integration network model, is represented as facts, on which
rules are applied to reconstruct the network. We have built a system that is
used to apply this approach to real-world enterprise landscapes and we report
on our experience with this system.
|
1301.1373 | Regularized Zero-Forcing Interference Alignment for the Two-Cell MIMO
Interfering Broadcast Channel | cs.IT math.IT | In this paper, we propose transceiver design strategies for the two-cell
multiple-input multiple-output (MIMO) interfering broadcast channel where
inter-cell interference (ICI) exists in addition to interuser interference
(IUI). We first formulate the generalized zero-forcing interference alignment
(ZF-IA) method based on the alignment of IUI and ICI in multi-dimensional
subspace. We then devise a minimum weighted-mean-square-error (WMSE) method
based on regularizing the precoders and decoders of the generalized ZF-IA
scheme. In contrast to the existing weighted-sum-rate-maximizing transceiver,
our method does not require an iterative calculation of the optimal weights.
Because of this, the proposed scheme, while not designed specifically to
maximize the sum rate, is computationally efficient and achieves a faster
convergence compared to the known weighted-sum-rate maximizing scheme. Through
analysis and simulation, we show the effectiveness of the proposed regularized
ZF-IA scheme.
|
1301.1374 | PaFiMoCS: Particle Filtered Modified-CS and Applications in Visual
Tracking across Illumination Change | cs.CV | We study the problem of tracking (causally estimating) a time sequence of
sparse spatial signals with changing sparsity patterns, as well as other
unknown states, from a sequence of nonlinear observations corrupted by
(possibly) non-Gaussian noise. In many applications, particularly those in
visual tracking, the unknown state can be split into a small dimensional part,
e.g. global motion, and a spatial signal, e.g. illumination or shape
deformation. The spatial signal is often well modeled as being sparse in some
domain. For a long sequence, its sparsity pattern can change over time,
although the changes are usually slow. To address the above problem, we propose
a novel solution approach called Particle Filtered Modified-CS (PaFiMoCS). The
key idea of PaFiMoCS is to importance sample for the small dimensional state
vector, while replacing importance sampling by slow sparsity pattern change
constrained posterior mode tracking for recovering the sparse spatial signal.
We show that the problem of tracking moving objects across spatially varying
illumination change is an example of the above problem and explain how to
design PaFiMoCS for it. Experiments on both simulated data as well as on real
videos with significant illumination changes demonstrate the superiority of the
proposed algorithm as compared with existing particle filter based tracking
algorithms.
|
1301.1385 | Translating NP-SPEC into ASP | cs.AI | NP-SPEC is a language for specifying problems in NP in a declarative way.
Despite the fact that the semantics of the language was given by referring to
Datalog with circumscription, which is very close to ASP, so far the only
existing implementations are by means of ECLiPSe Prolog and via Boolean
satisfiability solvers. In this paper, we present translations from NP-SPEC
into various forms of ASP and analyze them. We also argue that it might be
useful to incorporate certain language constructs of NP-SPEC into mainstream
ASP.
|
1301.1386 | SPARC - Sorted ASP with Consistency Restoring Rules | cs.PL cs.AI | This is a preliminary report on the work aimed at making CR-Prolog -- a
version of ASP with consistency restoring rules -- more suitable for use in
teaching and large applications. First we describe a sorted version of
CR-Prolog called SPARC. Second, we translate a basic version of the CR-Prolog
into the language of DLV and compare the performance with the state of the art
CR-Prolog solver. The results form the foundation for a future, more efficient
and user-friendly implementation of SPARC and shed some light on the relationship
between two useful knowledge representation constructs: consistency restoring
rules and weak constraints of DLV.
|
1301.1387 | Language ASP{f} with Arithmetic Expressions and Consistency-Restoring
Rules | cs.AI | In this paper we continue the work on our extension of Answer Set Programming
by non-Herbrand functions and add to the language support for arithmetic
expressions and various inequality relations over non-Herbrand functions, as
well as consistency-restoring rules from CR-Prolog. We demonstrate the use of
this latest version of the language in the representation of important kinds of
knowledge.
|
1301.1388 | Utilizing ASP for Generating and Visualizing Argumentation Frameworks | cs.AI | Within the area of computational models of argumentation, the
instantiation-based approach is gaining more and more attention, not least
because meaningful input for Dung's abstract frameworks is provided in that
way. In a nutshell, the aim of instantiation-based argumentation is to form,
from a given knowledge base, a set of arguments and to identify the conflicts
between them. The resulting network is then evaluated by means of
extension-based semantics on an abstract level, i.e. on the resulting graph.
While several systems are nowadays available for the latter step, the
automation of the instantiation process itself has received less attention. In
this work, we provide a novel approach to construct and visualize an
argumentation framework from a given knowledge base. The system we propose
relies on Answer-Set Programming and follows a two-step approach. A first
program yields the logic-based arguments as its answer-sets; a second program
is then used to specify the relations between arguments based on the
answer-sets of the first program. As it turns out, this approach not only
allows for a flexible and extensible tool for instantiation-based
argumentation, but also provides a new method for answer-set visualization in
general.
|
1301.1389 | Planning and Scheduling in Hybrid Domains Using Answer Set Programming | cs.AI | In this paper we present an Action Language-Answer Set Programming based
approach to solving planning and scheduling problems in hybrid domains -
domains that exhibit both discrete and continuous behavior. We use action
language H to represent the domain and then translate the resulting theory into
an A-Prolog program. In this way, we reduce the problem of finding solutions to
planning and scheduling problems to computing answer sets of A-Prolog programs.
We cite a planning and scheduling example from the literature and show how to
model it in H. We show how to translate the resulting H theory into an
equivalent A-Prolog program. We compute the answer sets of the resulting
program using a hybrid solver called EZCSP which loosely integrates a
constraint solver with an answer set solver. The solver allows us to reason about
constraints over reals and compute solutions to complex planning and scheduling
problems. Results have shown that our approach can be applied to any planning
and scheduling problem in hybrid domains.
|
1301.1390 | Eliminating Unfounded Set Checking for HEX-Programs | cs.LO cs.AI | HEX-programs are an extension of the Answer Set Programming (ASP) paradigm
incorporating external means of computation into the declarative programming
language through so-called external atoms. Their semantics is defined in terms
of minimal models of the Faber-Leone-Pfeifer (FLP) reduct. Developing native
solvers for HEX-programs based on an appropriate notion of unfounded sets has
been subject to recent research for reasons of efficiency. Although this has
led to an improvement over naive minimality checking using the FLP reduct,
testing for foundedness remains a computationally expensive task. In this work
we improve on HEX-program evaluation in this respect by identifying a syntactic
class of programs that can be efficiently recognized and allows the foundedness
check to be skipped entirely. Moreover, we develop criteria for decomposing a
program into components, such that the search for unfounded sets can be
restricted. Observing that our results apply to many HEX-program applications
provides analytic evidence for the significance and effectiveness of our
approach, which is complemented by a brief discussion of preliminary
experimental validation.
|
1301.1391 | Backdoors to Normality for Disjunctive Logic Programs | cs.LO cs.AI cs.CC | Over the last two decades, propositional satisfiability (SAT) has become one
of the most successful and widely applied techniques for the solution of
NP-complete problems. The aim of this paper is to investigate theoretically how
SAT can be utilized for the efficient solution of problems that are harder than
NP or co-NP. In particular, we consider the fundamental reasoning problems in
propositional disjunctive answer set programming (ASP), Brave Reasoning and
Skeptical Reasoning, which ask whether a given atom is contained in at least
one or in all answer sets, respectively. Both problems are located at the
second level of the Polynomial Hierarchy and thus assumed to be harder than NP
or co-NP. One cannot transform these two reasoning problems into SAT in
polynomial time, unless the Polynomial Hierarchy collapses. We show that
certain structural aspects of disjunctive logic programs can be utilized to
break through this complexity barrier, using new techniques from Parameterized
Complexity. In particular, we exhibit transformations from Brave and Skeptical
Reasoning to SAT that run in time O(2^k n^2) where k is a structural parameter
of the instance and n the input size. In other words, the reduction is
fixed-parameter tractable for parameter k. As the parameter k we take the size
of a smallest backdoor with respect to the class of normal (i.e.,
disjunction-free) programs. Such a backdoor is a set of atoms that when deleted
makes the program normal. In consequence, the combinatorial explosion, which is
expected when transforming a problem from the second level of the Polynomial
Hierarchy to the first level, can now be confined to the parameter k, while the
running time of the reduction is polynomial in the input size n, where the
order of the polynomial is independent of k.
|
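The backdoor idea in the abstract above can be sketched in a few lines: fixing each of the $2^k$ truth assignments to the backdoor atoms yields a normal (disjunction-free) program, and each branch is then handled by a polynomial-size reduction to SAT. The sketch below shows only the enumeration shell; `solve_normal` is a hypothetical callback standing in for the per-branch encoding, not an API from the paper:

```python
from itertools import product

def backdoor_enumerate(backdoor, solve_normal, program):
    """Sketch of the 2^k enumeration behind the fixed-parameter reduction:
    fixing a truth value for every atom in the backdoor set makes the
    program normal, after which each branch is polynomial-time reducible
    to SAT. `solve_normal` is a hypothetical per-branch callback."""
    results = []
    for bits in product((False, True), repeat=len(backdoor)):
        assignment = dict(zip(backdoor, bits))   # one of the 2^k branches
        results.append(solve_normal(program, assignment))
    return results

# Demo with a trivial callback that just echoes the branch assignment.
branches = backdoor_enumerate(['a', 'b'], lambda prog, asg: asg, None)
```

The combinatorial explosion is thus confined to the parameter $k$ (the backdoor size), while each branch costs only polynomial time in the input size.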
1301.1392 | Answer Set Programming for Stream Reasoning | cs.AI | The advance of Internet and Sensor technology has brought about new
challenges evoked by the emergence of continuous data streams. Beyond rapid
data processing, application areas like ambient assisted living, robotics, or
dynamic scheduling involve complex reasoning tasks. We address such scenarios
and elaborate upon approaches to knowledge-intense stream reasoning, based on
Answer Set Programming (ASP). While traditional ASP methods are devised for
singular problem solving, we develop new techniques to formulate and process
problems dealing with emerging as well as expiring data in a seamless way.
|
1301.1393 | Two New Definitions of Stable Models of Logic Programs with Generalized
Quantifiers | cs.LO cs.AI | We present alternative definitions of the first-order stable model semantics
and its extension to incorporate generalized quantifiers by referring to the
familiar notion of a reduct instead of referring to the SM operator in the
original definitions. Also, we extend the FLP stable model semantics to allow
generalized quantifiers by referring to an operator that is similar to the
SM operator. For a reasonable syntactic class of logic programs, we show
that the two stable model semantics of generalized quantifiers are
interchangeable.
|
1301.1394 | Lloyd-Topor Completion and General Stable Models | cs.LO cs.AI | We investigate the relationship between the generalization of program
completion defined in 1984 by Lloyd and Topor and the generalization of the
stable model semantics introduced recently by Ferraris et al. The main theorem
can be used to characterize, in some cases, the general stable models of a
logic program by a first-order formula. The proof uses Truszczynski's stable
model semantics of infinitary propositional formulas.
|
1301.1395 | Extending FO(ID) with Knowledge Producing Definitions: Preliminary
Results | cs.LO cs.AI | Previous research into the relation between ASP and classical logic has
identified at least two different ways in which the former extends the latter.
First, ASP programs typically contain sets of rules that can be naturally
interpreted as inductive definitions, and the language FO(ID) has shown that
such inductive definitions can elegantly be added to classical logic in a
modular way. Second, there is of course also the well-known epistemic component
of ASP, which was mainly emphasized in the early papers on stable model
semantics. To investigate whether this kind of knowledge can also, and in a
similarly modular way, be added to classical logic, the language of Ordered
Epistemic Logic was presented in recent work. However, this logic views the
epistemic component as entirely separate from the inductive definition
component, thus ignoring any possible interplay between the two. In this paper,
we present a language that extends the inductive definition construct found in
FO(ID) with an epistemic component, making such interplay possible. The
eventual goal of this work is to discover whether it is really appropriate to
view the epistemic component and the inductive definition component of ASP as
two separate extensions of classical logic, or whether there is also something
of importance in the combination of the two.
|
1301.1409 | A Dual Number Approach for Numerical Calculation of Velocity and
Acceleration in the Spherical 4R Mechanism | cs.CE | This paper proposes a methodology to calculate both the first and second
derivatives of a vector function of one variable in a single computation step.
The method is based on the nested application of the dual number approach for
first order derivatives.
It has been implemented as a Fortran module which contains the dual
versions of elementary functions, as well as more complex functions that are
common in the field of rotational kinematics. Since we have three quantities of
interest, namely the function itself and its first and second derivative, our
basic numerical entity has three elements. Then, for a given vector function
$f:\mathbb{R}\to \mathbb{R}^m$, its dual version will have the form
$\tilde{f}:\mathbb{R}^3\to \mathbb{R}^{3m}$.
As a study case, the proposed methodology is used to calculate the velocity
and acceleration of a point moving on the coupler-point curve generated by a
spherical four-bar mechanism.
|
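The paper's Fortran module is not reproduced here, but the nested dual-number idea it describes can be sketched in a few lines of Python. The `Dual` class, the seeding convention, and the helper `value_jet` below are illustrative assumptions, not the authors' implementation; only `+` and `*` are overloaded, which suffices for polynomial examples.

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0; components may themselves be Duals."""

    def __init__(self, re, du=0.0):
        self.re, self.du = re, du

    @staticmethod
    def _lift(x):
        return x if isinstance(x, Dual) else Dual(x)

    def __add__(self, other):
        other = Dual._lift(other)
        return Dual(self.re + other.re, self.du + other.du)
    __radd__ = __add__

    def __mul__(self, other):
        other = Dual._lift(other)
        # product rule: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.re * other.re, self.re * other.du + self.du * other.re)
    __rmul__ = __mul__


def value_jet(f, x):
    """Evaluate f, f' and f'' at x in one pass by nesting duals:
    the outer dual differentiates the inner dual with respect to x."""
    seed = Dual(Dual(x, 1.0), Dual(1.0, 0.0))
    y = f(seed)
    return y.re.re, y.re.du, y.du.du   # f(x), f'(x), f''(x)
```

For example, `value_jet(lambda t: t*t*t, 3.0)` yields `(27.0, 27.0, 18.0)`, matching $f(x)=x^3$, $f'(x)=3x^2$ and $f''(x)=6x$ at $x=3$.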
1301.1415 | On Complex LLL Algorithm for Integer Forcing Linear Receivers | cs.IT math.IT | The integer-forcing (IF) linear receiver has recently been introduced for
multiple-input multiple-output (MIMO) fading channels. The receiver has to
compute an integer linear combination of the symbols as a part of the decoding
process. In particular, the integer coefficients have to be chosen based on the
channel realizations, and the choice of such coefficients is known to determine
the receiver performance. The original known solution of finding these integers
was based on exhaustive search. A practical algorithm based on
Hermite-Korkine-Zolotareff (HKZ) and Minkowski lattice reduction algorithms was
also proposed recently. In this paper, we propose a low-complexity method based
on the complex LLL algorithm to obtain the integer coefficients for the IF
receiver. For the 2 x 2 MIMO channel, we study the effectiveness of the
proposed method in terms of the ergodic rate. We also compare the bit error
rate (BER) of our approach with that of other linear receivers, and show that
the suggested algorithm outperforms the minimum mean square error (MMSE)
and zero-forcing (ZF) linear receivers, but trades off error performance for
complexity in comparison with the IF receiver based on exhaustive search or on
HKZ and Minkowski lattice reduction algorithms.
|
1301.1423 | Statistical mechanics approach to 1-bit compressed sensing | cs.IT cond-mat.dis-nn math.IT | Compressed sensing is a technique for recovering a high-dimensional signal
from lower-dimensional data, whose components represent partial information
about the signal, utilizing prior knowledge on the sparsity of the signal. For
further reducing the data size of the compressed expression, a scheme to
recover the original signal utilizing only the sign of each entry of the
linearly transformed vector was recently proposed. This approach is often
termed 1-bit compressed sensing. Here we analyze the typical performance of
an L1-norm based signal recovery scheme for the 1-bit compressed sensing using
statistical mechanics methods. We show that the signal recovery performance
predicted by the replica method under the replica symmetric ansatz, which turns
out to be locally unstable for modes breaking the replica symmetry, is in
good agreement with the experimental results of an approximate recovery algorithm
developed earlier. This suggests that the L1-based recovery problem typically
has many local optima of a similar recovery accuracy, which can be achieved by
the approximate algorithm. We also develop another approximate recovery
algorithm inspired by the cavity method. Numerical experiments show that when
the density of nonzero entries in the original signal is relatively large the
new algorithm offers better performance than the abovementioned scheme and does
so with a lower computational cost.
|
1301.1429 | Adaptation of fictional and online conversations to communication media | physics.soc-ph cs.CL physics.data-an | Conversations allow the quick transfer of short bits of information and it is
reasonable to expect that changes in communication medium affect how we
converse. Using conversations in works of fiction and in an online social
networking platform, we show that the utterance length of conversations is
slowly shortening with time but adapts more strongly to the constraints of the
communication medium. This indicates that the introduction of any new medium of
communication can affect the way natural language evolves.
|
1301.1444 | Object-oriented Bayesian networks for a decision support system for
antitrust enforcement | cs.AI stat.AP | We study an economic decision problem where the actors are two firms and the
Antitrust Authority whose main task is to monitor and prevent firms' potential
anti-competitive behaviour and its effect on the market. The Antitrust
Authority's decision process is modelled using a Bayesian network where both
the relational structure and the parameters of the model are estimated from a
data set provided by the Authority itself. A number of economic variables that
influence this decision process are also included in the model. We analyse how
monitoring by the Antitrust Authority affects firms' strategies about
cooperation. Firms' strategies are modelled as a repeated prisoner's dilemma
using object-oriented Bayesian networks. We show how the integration of firms'
decision process and external market information can be modelled in this way.
Various decision scenarios and strategies are illustrated.
|
1301.1502 | Fuzzy Soft Set Based Classification for Gene Expression Data | cs.AI cs.CE | Classification is one of the major issues in data mining research.
Classification problems in the medical area often classify medical datasets
based on the results of medical diagnoses or descriptions of medical treatments
by medical practitioners. This research work discusses the classification
process of gene expression data for three different cancers, namely breast
cancer, lung cancer and leukemia, with two classes: cancerous and
non-cancerous. We apply a fuzzy soft set similarity based classifier to enhance
the accuracy of predicting the stages among cancer genes; the informative genes
are selected by using entropy filtering.
|
1301.1534 | Influence Of The User Importance Measure On The Group Evolution
Discovery | cs.SI physics.soc-ph | One of the most interesting topics in social network science is social
groups: their extraction, dynamics and evolution. One year ago the method for
group evolution discovery (GED) was introduced. During the extraction process,
the GED method takes into account both the quality and the quantity of group
members. The quality is reflected by a user importance measure. In this paper
the influence of different user importance measures on the results of the GED
method is examined and presented. The results indicate that using global
measures like social position (PageRank) allows achieving more precise
results than using local measures like degree centrality or no measure at all.
|
1301.1549 | A realistic distributed storage system that minimizes data storage and
repair bandwidth | cs.IT cs.DC math.IT | In a realistic distributed storage environment, storage nodes are usually
placed in racks, metallic supports designed to accommodate electronic
equipment. It is known that the communication (bandwidth) cost between nodes
within a rack is much lower than the communication (bandwidth) cost between
nodes in different racks.
In this paper, a new model, where the storage nodes are placed in two racks,
is proposed and analyzed. In this model, the storage nodes have different
repair costs to repair a node depending on the rack where they are placed. A
threshold function, which minimizes the amount of stored data per node and the
bandwidth needed to regenerate a failed node, is shown. This threshold function
generalizes the threshold function from previous distributed storage models.
The tradeoff curve obtained from this threshold function is compared with the
ones obtained from the previous models, and it is shown that this new model
outperforms the previous ones in terms of repair cost.
|
1301.1551 | A novel processing pipeline for optical multi-touch surfaces | cs.CV | In this thesis a new approach for touch detection on optical multi-touch
devices is proposed that exploits the fact that the camera images reveal not
only the actual touch points but also objects above the screen such as the hand
or arm of a user. The touch processing relies on the Maximally Stable Extremal
Regions algorithm for finding the users' fingertips in the camera image. The
hierarchical structure of the generated extremal regions serves as a starting
point for agglomerative clustering of the fingertips into hands. Furthermore, a
heuristic is suggested that supports the identification of individual fingers
as well as the distinction between left hands and right hands if all five
fingers of a hand are in contact with the touch surface.
The evaluation confirmed that the system is robust against detection errors
resulting from non-uniform illumination and reliably assigns touch points to
individual hands based on the implicitly tracked context information. The
efficient multi-threaded implementation handles two-handed input from multiple
users in real-time.
|
1301.1555 | Coupled Neural Associative Memories | cs.NE cs.IT cs.LG math.IT | We propose a novel architecture to design a neural associative memory that is
capable of learning a large number of patterns and recalling them later in the
presence of noise. It is based on dividing the neurons into local clusters and
parallel planes, very similar to the architecture of the visual cortex of the
macaque brain. The common features of our proposed architecture with those of
spatially-coupled codes enable us to show that the performance of such networks
in eliminating noise is drastically better than that of previous approaches, while
maintaining the ability to learn an exponentially large number of patterns.
Previous work failed either to provide good performance during the recall
phase or to offer large pattern retrieval (storage) capacities.
present computational experiments that lend additional support to the
theoretical analysis.
|
1301.1575 | BigDB: Automatic Machine Learning Optimizer | cs.DB | In this short vision paper, we introduce a machine learning optimizer for
data management and describe its architecture and main functionality.
|
1301.1576 | Optical Flow on Evolving Surfaces with an Application to the Analysis of
4D Microscopy Data | math.OC cs.CV | We extend the concept of optical flow to a dynamic non-Euclidean setting.
Optical flow is traditionally computed from a sequence of flat images. It is
the purpose of this paper to introduce variational motion estimation for images
that are defined on an evolving surface. Volumetric microscopy images depicting
a live zebrafish embryo serve as both biological motivation and test data.
|
1301.1590 | An Efficient Algorithm for Upper Bound on the Partition Function of
Nucleic Acids | q-bio.BM cs.LG | It has been shown that the minimum free energy structure for RNAs and RNA-RNA
interactions is often incorrect due to inaccuracies in the energy parameters and
inherent limitations of the energy model. In contrast, ensemble based
quantities such as melting temperature and equilibrium concentrations can be
more reliably predicted. Even structure prediction by sampling from the
ensemble and clustering those structures by Sfold [7] has proven to be more
reliable than minimum free energy structure prediction. The main obstacle for
ensemble based approaches is the computational complexity of the partition
function and base pairing probabilities. For instance, the space complexity of
the partition function for RNA-RNA interaction is $O(n^4)$ and the time
complexity is $O(n^6)$ which are prohibitively large [4,12]. Our goal in this
paper is to give a fast algorithm, based on sparse folding, to calculate an
upper bound on the partition function. Our work is based on the recent
algorithm of Hazan and Jaakkola [10]. The space complexity of our algorithm is
the same as that of sparse folding algorithms, and the time complexity of our
algorithm is $O(MFE(n)\ell)$ for single RNA and $O(MFE(m, n)\ell)$ for RNA-RNA
interaction in practice, in which $MFE$ is the running time of sparse folding
and $\ell \leq n$ ($\ell \leq n + m$) is a sequence dependent parameter.
|
1301.1594 | Identifying the Information Gain of a Quantum Measurement | quant-ph cs.IT math.IT | We show that quantum-to-classical channels, i.e., quantum measurements, can
be asymptotically simulated by an amount of classical communication equal to
the quantum mutual information of the measurement, if sufficient shared
randomness is available. This result generalizes Winter's measurement
compression theorem for fixed independent and identically distributed inputs
[Winter, CMP 244 (157), 2004] to arbitrary inputs, and more importantly, it
identifies the quantum mutual information of a measurement as the information
gained by performing it, independent of the input state on which it is
performed. Our result is a generalization of the classical reverse Shannon
theorem to quantum-to-classical channels. In this sense, it can be seen as a
quantum reverse Shannon theorem for quantum-to-classical channels, but with the
entanglement assistance and quantum communication replaced by shared randomness
and classical communication, respectively. The proof is based on a novel
one-shot state merging protocol for "classically coherent states" as well as
the post-selection technique for quantum channels, and it uses techniques
developed for the quantum reverse Shannon theorem [Berta et al., CMP 306 (579),
2011].
|
1301.1608 | The RNA Newton Polytope and Learnability of Energy Parameters | q-bio.BM cs.CE cs.LG | Despite nearly four decades of research on RNA secondary structure and RNA-RNA
interaction prediction, the accuracy of the state-of-the-art algorithms is
still far from satisfactory. Researchers have proposed increasingly complex
energy models and improved parameter estimation methods in anticipation of
endowing their methods with enough power to solve the problem. The outcome has
disappointingly been only modest improvements, not matching expectations.
Even recent massively featured machine learning approaches were not able to
break the barrier. In this paper, we introduce the notion of learnability of
the parameters of an energy model as a measure of its inherent capability. We
say that the parameters of an energy model are learnable iff there exists at
least one set of such parameters that renders every known RNA structure to date
the minimum free energy structure. We derive a necessary condition for the
learnability and give a dynamic programming algorithm to assess it. Our
algorithm computes the convex hull of the feature vectors of all feasible
structures in the ensemble of a given input sequence. Interestingly, that
convex hull coincides with the Newton polytope of the partition function as a
polynomial in energy parameters. We demonstrated the application of our theory
to a simple energy model consisting of a weighted count of A-U and C-G base
pairs. Our results show that this simple energy model satisfies the necessary
condition for less than one third of the input unpseudoknotted
sequence-structure pairs chosen from the RNA STRAND v2.0 database. For another
one third, the necessary condition is barely violated, which suggests that
augmenting this simple energy model with more features such as the Turner loops
may solve the problem. The necessary condition is severely violated for 8%,
which provides a small set of hard cases that require further investigation.
|
1301.1609 | Two Design Issues in Cognitive Sub-Small Cell for Sojourners | cs.IT math.IT | In this paper, we propose a solution named Cognitive Sub-Small Cell for
Sojourners (CSCS) for a broadly representative small cell scenario,
where users can be categorized into two groups: sojourners and inhabitants.
CSCS helps to save energy, increase the number of concurrently supportable
users and shield inhabitants. We consider two design issues in CSCS: i)
determining the number of transmit antennas on sub-small cell APs; ii)
controlling downlink inter-sub-small cell interference. For issue i), we
devise an algorithm based on the probability distribution of the number of
concurrent sojourners. For issue ii), we propose an interference control scheme
named BDBF: Block Diagonalization (BD) Precoding based on uncertain channel
state information in conjunction with an auxiliary optimal Beamformer (BF). In the
simulation, we investigate how various factors impact the number of
transmit antennas on sub-small cell APs. Moreover, we verify a significant
conclusion: using BDBF gains more capacity than using the optimal BF alone within a
reasonably large radius of the uncertainty region.
|
1301.1626 | Google matrix analysis of DNA sequences | q-bio.GN cs.IR physics.soc-ph | For DNA sequences of various species we construct the Google matrix G of
Markov transitions between nearby words composed of several letters. The
statistical distribution of the elements of this matrix is shown to be
described by a power law with an exponent close to that of outgoing
links in such scale-free networks as the World Wide Web (WWW). At the same time
the sum of ingoing matrix elements is characterized by an exponent
significantly larger than that typical of WWW networks. This results in a
slow algebraic decay of the PageRank probability determined by the distribution
of ingoing elements. The spectrum of G is characterized by a large gap leading
to a rapid relaxation process on the DNA sequence networks. We introduce the
PageRank proximity correlator between different species which determines their
statistical similarity from the view point of Markov chains. The properties of
other eigenstates of the Google matrix are also discussed. Our results
establish scale-free features of DNA sequence networks showing their
similarities and distinctions with the WWW and linguistic networks.
|
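As a hedged illustration of the construction described above (not the authors' code), a column-stochastic Google matrix of transitions between overlapping k-mer "words" and its PageRank vector can be computed as follows; the choices `k=2` and `alpha=0.85` are assumptions made only for this example.

```python
import numpy as np

def google_matrix(seq, k=2, alpha=0.85):
    """Column-stochastic Google matrix of transitions between successive k-mers."""
    words = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    states = sorted(set(words))
    idx = {w: i for i, w in enumerate(states)}
    n = len(states)
    counts = np.zeros((n, n))
    for a, b in zip(words, words[1:]):
        counts[idx[b], idx[a]] += 1.0          # transition a -> b stored in column a
    col = counts.sum(axis=0)
    # normalize each column; words with no outgoing transition get a uniform column
    S = np.where(col > 0, counts / np.where(col > 0, col, 1.0), 1.0 / n)
    G = alpha * S + (1.0 - alpha) / n          # damping, as for the WWW Google matrix
    return G, states

def pagerank(G, n_iter=200):
    """Power iteration: PageRank is the leading eigenvector of G."""
    p = np.full(G.shape[0], 1.0 / G.shape[0])
    for _ in range(n_iter):
        p = G @ p
    return p
```

Because `G` is column-stochastic, power iteration preserves the probability simplex and converges at a rate governed by the damping factor `alpha`.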
1301.1661 | Transmission Schemes for Gaussian Interference Channels with Transmitter
Processing Energy | cs.IT math.IT | This work considers communication over Gaussian interference channels with
processing energy cost, which explicitly takes into account the energy expended
for processing when transmitters are on. In the presence of processing energy
cost, transmitting all the time as in the conventional no-cost case is no
longer optimal. For a two-user Gaussian interference channel with processing
energy cost, assuming that the on-off states of transmitters are not utilized
for signaling, several transmission schemes with varying complexities are
proposed and their sum rates are compared with an interference-free upper
bound. Moreover, the very strong interference regime, under which interference
does not incur any rate penalty, is identified and shown to be larger than the
case of no processing energy cost for certain scenarios of interest. Also,
extensions to a three-user cascade Gaussian Z interference channel with
processing energy cost are provided, where scheduling of user transmissions
based on the channel set-up is investigated.
|
1301.1671 | Causal graph-based video segmentation | cs.CV | Numerous approaches in image processing and computer vision are making use of
super-pixels as a pre-processing step. Among the different methods producing
such over-segmentation of an image, the graph-based approach of Felzenszwalb
and Huttenlocher is broadly employed. One of its interesting properties is that
the regions are computed in a greedy manner in quasi-linear time. The algorithm
may be trivially extended to video segmentation by considering a video as a 3D
volume; however, this cannot be the case for causal segmentation, where
subsequent frames are unknown. We propose an efficient video segmentation
approach that computes temporally consistent pixels in a causal manner, filling
the need for causal and real time applications.
|
1301.1701 | Secrecy Capacity of Two-Hop Relay Assisted Wiretap Channels | cs.IT math.IT | Incorporating the physical layer characteristics to secure communications has
received considerable attention in recent years. Moreover, cooperation with
some nodes of the network can provide the benefits of multiple-antenna systems, increasing
the secrecy capacity of such channels. In this paper, we consider cooperative
wiretap channel with the help of an Amplify and Forward (AF) relay to transmit
confidential messages from source to legitimate receiver in the presence of an
eavesdropper. In this regard, the secrecy capacity of AF relaying is derived,
assuming the relay is subject to a peak power constraint. To this end, an
achievable secrecy rate for Gaussian input is evaluated through solving a
non-convex optimization problem. Then, it is proved that any rate greater than
this secrecy rate is not achievable. To do this, the capacity of a genie-aided
channel is derived as an upper bound for the secrecy capacity of the underlying
channel, showing that this upper bound is equal to the computed achievable secrecy
rate with Gaussian input. Accordingly, the corresponding secrecy capacity is
compared to the Decode and Forward (DF) strategy, which serves as the
benchmark in the current work.
|
1301.1712 | Blind Adaptive Constrained Constant-Modulus Reduced-Rank Interference
Suppression Algorithms Based on Interpolation, Switched Decimation and
Filtering | cs.IT math.IT | This work proposes a blind adaptive reduced-rank scheme and constrained
constant-modulus (CCM) adaptive algorithms for interference suppression in
wireless communications systems. The proposed scheme and algorithms are based
on a two-stage processing framework that consists of a transformation matrix
that performs dimensionality reduction followed by a reduced-rank estimator.
The complex structure of the transformation matrix of existing methods
motivates the development of a blind adaptive reduced-rank constrained (BARC)
scheme along with a low-complexity reduced-rank decomposition. The proposed
BARC scheme and a reduced-rank decomposition based on the concept of joint
interpolation, switched decimation and reduced-rank estimation subject to a set
of constraints are then detailed. The proposed set of constraints ensures that
the multi-path components of the channel are combined prior to dimensionality
reduction. In order to cost-effectively design the BARC scheme, we develop
low-complexity decimation techniques, stochastic gradient and recursive least
squares reduced-rank estimation algorithms. A model-order selection algorithm
for adjusting the length of the estimators is devised along with techniques for
determining the required number of switching branches to attain a predefined
performance. An analysis of the convergence properties and issues of the
proposed optimization and algorithms is carried out, and the key features of
the optimization problem are discussed. We consider the application of the
proposed algorithms to interference suppression in DS-CDMA systems. The results
show that the proposed algorithms outperform the best known reduced-rank
schemes, while requiring lower complexity.
|
1301.1714 | Parallel Computing of Discrete Element Method on GPU | cs.CE cs.DC | We investigate the applicability of GPUs to the discrete element method (DEM).
NVIDIA's code achieved superior performance to a CPU implementation in
computational time. However, the model of contact forces in NVIDIA's code is
too simple for practical use, so we replace it with a practical model. The
simulation shows that the practical model on the GPU runs 6 times faster than
the same model on the CPU, while 7 times slower than the simple model on the
GPU. The results are analyzed.
|
1301.1722 | Linear Bandits in High Dimension and Recommendation Systems | cs.LG stat.ML | A large number of online services provide automated recommendations to help
users to navigate through a large collection of items. New items (products,
videos, songs, advertisements) are suggested on the basis of the user's past
history and --when available-- her demographic profile. Recommendations have to
satisfy the dual goal of helping the user to explore the space of available
items, while allowing the system to probe the user's preferences.
We model this trade-off using linearly parametrized multi-armed bandits,
propose a policy and prove upper and lower bounds on the cumulative "reward"
that coincide up to constants in the data poor (high-dimensional) regime. Prior
work on linear bandits has focused on the data rich (low-dimensional) regime
and used cumulative "risk" as the figure of merit. For this data rich regime,
we provide a simple modification for our policy that achieves near-optimal risk
performance under more restrictive assumptions on the geometry of the problem.
We test (a variation of) the scheme used for establishing achievability on the
Netflix and MovieLens datasets and obtain good agreement with the qualitative
predictions of the theory we develop.
|
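The abstract above does not spell out its policy, so as a hedged sketch of the linearly parametrized bandit setting it studies, here is a standard LinUCB-style routine (a well-known stand-in, not the authors' algorithm). The arm features, `theta_star`, `alpha` and the noise level are all illustrative.

```python
import numpy as np

def linucb(arms, theta_star, T=2000, alpha=1.0, noise=0.1, seed=0):
    """LinUCB-style play: the reward of arm x is <theta_star, x> plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    d = arms.shape[1]
    A = np.eye(d)                              # ridge-regularized Gram matrix
    b = np.zeros(d)
    total_reward = 0.0
    for _ in range(T):
        theta_hat = np.linalg.solve(A, b)      # ridge estimate of theta_star
        A_inv = np.linalg.inv(A)
        # optimistic index: estimated reward plus an exploration bonus
        bonus = np.sqrt(np.sum((arms @ A_inv) * arms, axis=1))
        i = int(np.argmax(arms @ theta_hat + alpha * bonus))
        x = arms[i]
        r = float(x @ theta_star) + noise * rng.standard_normal()
        A += np.outer(x, x)
        b += r * x
        total_reward += r
    return np.linalg.solve(A, b), total_reward
```

The bonus term shrinks in directions that have been explored, so the policy probes the user's preferences early and then concentrates on the estimated best item, mirroring the explore/exploit trade-off described above.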
1301.1732 | Sum-Rate Maximization with Minimum Power Consumption for MIMO DF Two-Way
Relaying: Part I - Relay Optimization | cs.IT math.IT | The problem of power allocation is studied for a multiple-input
multiple-output (MIMO) decode-and-forward (DF) two-way relaying system
consisting of two source nodes and one relay. It is shown that achieving
maximum sum-rate in such a system does not necessarily demand the consumption
of all available power at the relay. Instead, the maximum sum-rate can be
achieved through efficient power allocation with minimum power consumption.
Deriving such power allocation, however, is nontrivial due to the fact that it
generally leads to a nonconvex problem. In Part I of this two-part paper, a
sum-rate maximizing power allocation with minimum power consumption is found
for MIMO DF two-way relaying, in which the relay optimizes its own power
allocation strategy given the power allocation strategies of the source nodes.
An algorithm is proposed for efficiently finding the optimal power allocation
of the relay based on the proposed idea of relative water-levels. The
considered scenario features low complexity due to the fact that the relay
optimizes its power allocation without coordinating with the source nodes. As a
trade-off for the low complexity, it is shown that power can be wasted
at the source nodes due to the lack of coordination between the relay and the source
nodes. Simulation results demonstrate the performance of the proposed algorithm
and the effect of asymmetry on the considered system.
|
1301.1740 | Biases in the Experimental Annotations of Protein Function and their
Effect on Our Understanding of Protein Function Space | q-bio.GN cs.DL cs.IT math.IT | The ongoing functional annotation of proteins relies upon the work of
curators to capture experimental findings from scientific literature and apply
them to protein sequence and structure data. However, with the increasing use
of high-throughput experimental assays, a small number of experimental studies
dominate the functional protein annotations collected in databases. Here we
investigate just how prevalent the "few articles -- many proteins"
phenomenon is. We examine the experimentally validated annotation of proteins
provided by several groups in the GO Consortium, and show that the distribution
of proteins per published study is exponential, with 0.14% of articles
providing the source of annotations for 25% of the proteins in the UniProt-GOA
compilation. Since each of the dominant articles describes the use of an assay
that can find only one function or a small group of functions, this leads to
substantial biases in what we know about the function of many proteins.
Mass-spectrometry, microscopy and RNAi experiments dominate high throughput
experiments. Consequently, the functional information derived from these
experiments is mostly of the subcellular location of proteins, and of the
participation of proteins in embryonic developmental pathways. For some
organisms, the information provided by different studies overlaps
substantially. We also show that the information provided by high throughput
experiments is less specific than that provided by low throughput experiments.
Given the experimental techniques available, certain biases in protein function
annotation due to high-throughput experiments are unavoidable. Knowing that
these biases exist and understanding their characteristics and extent is
important for database curators, developers of function annotation programs,
and anyone who uses protein function annotation data to plan experiments.
|
1301.1746 | Generalized Secure Transmission Protocol for Flexible Load-Balance
Control with Cooperative Relays in Two-Hop Wireless Networks | cs.CR cs.IT cs.NI math.IT | This work considers a secure transmission protocol for flexible load-balance
control in two-hop relay wireless networks without information about either the
eavesdropper channels or locations. The available secure transmission
protocols via relay cooperation in the physical layer secrecy framework cannot
provide flexible load-balance control, which may significantly limit their
application scope. This paper extends the conventional works and proposes a
general transmission protocol that considers load-balance control, in which
the relay is randomly selected from the first $k$ preferable assistant relays
located in the circular area with radius $r$ centered at the midpoint
between source and destination (2HR-($r,k$) for short). This protocol covers
the available works as special cases, such as those with optimal relay
selection ($r=\infty$, $k=1$) and with random relay selection ($r=\infty$,
$k = n$, i.e. the number of system nodes) in the case of equal path-loss, and those
with the relay selected from a relay selection region ($r \in (0, \infty), k = 1$) in
the case of distance-dependent path-loss. A theoretical analysis is further
provided to determine the maximum number of eavesdroppers the network can
tolerate to ensure a desired performance in terms of the secrecy outage
probability and transmission outage probability. The analysis results also show
that the proposed protocol can balance the load among the relays by a proper
setting of $r$ and $k$ under the premise of the specified security and reliability
requirements.
|
1301.1747 | On Max-SINR Receiver for HMT System over Doubly Dispersive Channel | cs.IT math.IT | In this paper, a novel receiver for Hexagonal Multicarrier Transmission (HMT)
system based on the maximizing Signal-to-Interference-plus-Noise Ratio
(Max-SINR) criterion is proposed. Theoretical analyses show that there is a
timing offset between the prototype pulses of the proposed Max-SINR receiver
and the traditional projection receiver. Meanwhile, the timing offset should be
matched to the channel scattering factor of the doubly dispersive (DD) channel.
The closed form timing offset expressions of the prototype pulse for Max-SINR
HMT receiver over DD channel with different channel scattering functions are
derived. Simulation results show that the proposed Max-SINR receiver
outperforms the traditional projection scheme and approaches the
theoretical upper bound on SINR performance. Consistent with the SINR performance
improvement, the bit error rate (BER) performance of HMT system has also been
further improved by using the proposed Max-SINR receiver. Meanwhile, the SINR
performance of the proposed Max-SINR receiver is robust to the channel delay
spread estimation errors.
|
1301.1748 | Quantum Robust Stability of a Small Josephson Junction in a Resonant
Cavity | quant-ph cs.SY math.OC | This paper applies recent results on the robust stability of nonlinear
quantum systems to the case of a Josephson junction in a resonant cavity. The
Josephson junction is characterized by a Hamiltonian operator which contains a
non-quadratic term involving a cosine function. This leads to a sector bounded
nonlinearity which enables the previously developed theory to be applied to
this system in order to analyze its stability.
|
1301.1751 | On the Complexity of $t$-Closeness Anonymization and Related Problems | cs.DS cs.DB | An important issue in releasing individual data is to protect the sensitive
information from being leaked and maliciously utilized. Famous privacy
preserving principles that aim to ensure both data privacy and data integrity,
such as $k$-anonymity and $l$-diversity, have been extensively studied both
theoretically and empirically. Nonetheless, these widely-adopted principles are
still insufficient to prevent attribute disclosure if the attacker has partial
knowledge about the overall sensitive data distribution. The $t$-closeness
principle has been proposed to fix this, which also has the benefit of
supporting numerical sensitive attributes. However, in contrast to
$k$-anonymity and $l$-diversity, the theoretical aspect of $t$-closeness has
not been well investigated.
We initiate the first systematic theoretical study on the $t$-closeness
principle under the commonly-used attribute suppression model. We prove that
for every constant $t$ such that $0\leq t<1$, it is NP-hard to find an optimal
$t$-closeness generalization of a given table. The proof consists of several
reductions each of which works for different values of $t$, which together
cover the full range. To complement this negative result, we also provide exact
and fixed-parameter algorithms. Finally, we answer some open questions
regarding the complexity of $k$-anonymity and $l$-diversity left in the
literature.
|
1301.1753 | FCA - An Approach On LEACH Protocol Of Wireless Sensor Networks Using
Fuzzy Logic | cs.NI cs.AI | In order to gather information more efficiently, wireless sensor networks are
partitioned into clusters. Most of the proposed clustering algorithms do not
consider the location of the base station, which causes the hot-spot problem in
multi-hop wireless sensor networks. In this paper, we propose a fuzzy
clustering algorithm (FCA) that aims to prolong the lifetime of wireless sensor
networks. FCA adjusts the cluster-head radius based on the residual energy and
base-station distance parameters of the sensor nodes. This helps decrease the
intra-cluster work of sensor nodes that are closer to the base station or have
a lower battery level. We utilize fuzzy logic to handle the uncertainty in
cluster-head radius estimation. We compare our algorithm with LEACH using the
first-node-dies, half-of-the-nodes-alive, and energy-efficiency metrics. Our
simulation results show that FCA performs better than other algorithms in most
cases. Therefore, our proposed algorithm is a stable and energy-efficient
clustering algorithm.
|
1301.1760 | Carrier phase and amplitude estimation for phase shift keying using
pilots and data | cs.IT math.IT stat.AP | We consider least squares estimators of carrier phase and amplitude from a
noisy communications signal that contains both pilot signals, known to the
receiver, and data signals, unknown to the receiver. We focus on signaling
constellations that have symbols evenly distributed on the complex unit circle,
i.e., M-ary phase shift keying. We show, under reasonably mild conditions on
the distribution of the noise, that the least squares estimator of carrier
phase is strongly consistent and asymptotically normally distributed. However,
the amplitude estimator is not consistent, but converges to a positive real
number that is a function of the true carrier amplitude, the noise distribution
and the size of the constellation. Our theoretical results can also be applied
to the case where no pilot symbols exist, i.e., noncoherent detection. The
results of Monte Carlo simulations are provided and these agree with the
theoretical results.
|
1301.1848 | The forest consensus theorem | cs.MA cs.DM cs.SY math.CO math.OC | We show that the limiting state vector in the differential model of consensus
seeking with an arbitrary communication digraph is obtained by multiplying the
eigenprojection of the Laplacian matrix of the model by the vector of initial
states. Furthermore, the eigenprojection coincides with the stochastic matrix
of maximum out-forests of the weighted communication digraph. These statements
constitute the forest consensus theorem. A similar result for DeGroot's
iterative pooling model requires the Cesàro (time-average) limit in the general
case. The forest consensus theorem is useful for the analysis of consensus
protocols.
|
1301.1887 | Crowd Avoidance and Diversity in Socio-Economic Systems and
Recommendation | physics.soc-ph cs.SI | Recommender systems recommend objects regardless of potential adverse effects
of their overcrowding. We address this shortcoming by introducing
crowd-avoiding recommendation where each object can be shared by only a limited
number of users or where object utility diminishes with the number of users
sharing it. We use real data to show that contrary to expectations, the
introduction of these constraints enhances recommendation accuracy and
diversity even in systems where overcrowding is not detrimental. The observed
accuracy improvements are explained in terms of removing potential bias of the
recommendation method. We finally propose a way to model artificial
socio-economic systems with crowd avoidance and obtain first analytical
results.
|
1301.1894 | An Extensive Analysis of Query by Singing/Humming System Through Query
Proportion | cs.MM cs.IR cs.SD | Query by Singing/Humming (QBSH) is a Music Information Retrieval (MIR) system
that uses a short audio excerpt as a query. The rising availability of digital
music calls for effective music retrieval methods. Further, MIR systems support
content-based searching for music and require no musical knowledge. Current
work on QBSH focuses mainly on melody features such as pitch, rhythm, and note,
as well as database size, response time, score matching, and search algorithms.
Even though a variety of QBSH techniques have been proposed, there is a dearth
of work analyzing QBSH through query excerpts. Here, we present such an
analysis of QBSH through query excerpts. To substantiate it, a series of
experiments is conducted using Mel-Frequency Cepstral Coefficients (MFCC),
Linear Predictive Coefficients (LPC), and Linear Predictive Cepstral
Coefficients (LPCC) to portray the robustness of the knowledge representation.
The experiments indicate that retrieval performance as well as precision
diminishes only gradually as the database size grows.
|
1301.1897 | Image Registration for Stability Testing of MEMS | cs.CV astro-ph.IM | Image registration, or alignment of two or more images covering the same
scenes or objects, is of great interest in many disciplines such as remote
sensing, medical imaging, astronomy, and computer vision. In this paper, we
introduce a new application of image registration algorithms. We demonstrate
how through a wavelet based image registration algorithm, engineers can
evaluate stability of Micro-Electro-Mechanical Systems (MEMS). In particular,
we applied image registration algorithms to assess alignment stability of the
MicroShutters Subsystem (MSS) of the Near Infrared Spectrograph (NIRSpec)
instrument of the James Webb Space Telescope (JWST). This work introduces a new
methodology for evaluating stability of MEMS devices to engineers as well as a
new application of image registration algorithms to computer scientists.
|
1301.1907 | Moon Search Algorithms for NASA's Dawn Mission to Asteroid Vesta | astro-ph.IM astro-ph.EP cs.CV | A moon or natural satellite is a celestial body that orbits a planetary body
such as a planet, dwarf planet, or asteroid. Scientists seek to understand the
origin and evolution of our solar system by studying the moons of these bodies.
Additionally, searches for satellites of planetary bodies can be important to
protect the safety of a spacecraft as it approaches or orbits a planetary body.
If a satellite of a celestial body is found, the mass of that body can also be
calculated once its orbit is determined. Ensuring the Dawn spacecraft's safety
on its mission to the asteroid (4) Vesta primarily motivated the work of Dawn's
Satellite Working Group (SWG) in the summer of 2011. Dawn mission scientists and
engineers utilized various computational tools and techniques for Vesta's
satellite search. The objectives of this paper are to 1) introduce the natural
satellite search problem, 2) present the computational challenges, approaches,
and tools used when addressing this problem, and 3) describe applications of
various image processing and computational algorithms for performing satellite
searches to the electronic imaging and computer science community. Furthermore,
we hope that this communication would enable Dawn mission scientists to improve
their satellite search algorithms and tools and be better prepared for
performing the same investigation in 2015, when the spacecraft is scheduled to
approach and orbit the dwarf planet (1) Ceres.
|
1301.1917 | Stability and Cost Optimization in Controlled Random Walks Using
Scheduling Fields | cs.SY cs.IT cs.NI math.IT | The control of large queueing networks is a notoriously difficult problem.
Recently, an interesting new policy design framework for the control problem
called h-MaxWeight has been proposed: h-MaxWeight is a natural generalization
of the well-known MaxWeight policy in which any surrogate value function can be
applied in place of the quadratic. Stability of the policy is then achieved
through a perturbation technique. However, stability crucially depends on a
parameter choice that has to be adapted in simulations. In this paper we use a
different technique in which the required perturbations can be implemented
directly in the weight domain, which we then call a scheduling field.
Specifically, we develop the theoretical machinery that guarantees universal
stability while still operating close to the underlying cost criterion.
Simulation examples suggest that the new approach to policy synthesis can even
provide significantly higher gains irrespective of any further assumptions on
the network model or parameter choice.
|
1301.1918 | A lower bound for constant dimension codes from multi-component lifted
MRD codes | cs.IT math.IT | In this work we investigate unions of lifted MRD codes of a fixed dimension
and minimum distance and derive an explicit formula for the cardinality of such
codes. This will then imply a lower bound on the cardinality of constant
dimension codes.
|
1301.1932 | An Approach for Classification of Dysfluent and Fluent Speech Using K-NN
And SVM | cs.SD cs.AI | This paper presents a new approach for classification of dysfluent and fluent
speech using Mel-Frequency Cepstral Coefficients (MFCC). Speech is fluent when
a person's speech flows easily and smoothly: sounds combine into syllables,
syllables mix together into words, and words link into sentences with little
effort. When someone's speech is dysfluent, it is irregular and does not flow
effortlessly; a dysfluency is therefore a break in the smooth, meaningful flow
of speech. Stuttering is one such disorder, in which the fluent flow of speech
is disrupted by occurrences of dysfluencies such as repetitions, prolongations,
and interjections. In this work we consider three types of dysfluency, namely
repetition, prolongation, and interjection, to characterize dysfluent speech.
After obtaining dysfluent and fluent speech, the speech signals are analyzed to
extract MFCC features. The k-Nearest Neighbor (k-NN) and Support Vector Machine
(SVM) classifiers are used to classify the speech as dysfluent or fluent. 80%
of the data is used for training and 20% for testing. Average accuracies of
86.67% and 93.34% are obtained for dysfluent and fluent speech, respectively.
|
1301.1936 | Risk-Aversion in Multi-armed Bandits | cs.LG | Stochastic multi-armed bandits solve the Exploration-Exploitation dilemma and
ultimately maximize the expected reward. Nonetheless, in many practical
problems, maximizing the expected reward is not the most desirable objective.
In this paper, we introduce a novel setting based on the principle of
risk-aversion where the objective is to compete against the arm with the best
risk-return trade-off. This setting proves to be intrinsically more difficult
than the standard multi-armed bandit setting, due in part to an exploration
risk that introduces a regret associated with the variability of an algorithm.
Using
variance as a measure of risk, we introduce two new algorithms, investigate
their theoretical guarantees, and report preliminary empirical results.
|
1301.1942 | Bayesian Optimization in a Billion Dimensions via Random Embeddings | stat.ML cs.LG | Bayesian optimization techniques have been successfully applied to robotics,
planning, sensor placement, recommendation, advertising, intelligent user
interfaces and automatic algorithm configuration. Despite these successes, the
approach is restricted to problems of moderate dimension, and several workshops
on Bayesian optimization have identified its scaling to high-dimensions as one
of the holy grails of the field. In this paper, we introduce a novel random
embedding idea to attack this problem. The resulting Random EMbedding Bayesian
Optimization (REMBO) algorithm is very simple, has important invariance
properties, and applies to domains with both categorical and continuous
variables. We present a thorough theoretical analysis of REMBO. Empirical
results confirm that REMBO can effectively solve problems with billions of
dimensions, provided the intrinsic dimensionality is low. They also show that
REMBO achieves state-of-the-art performance in optimizing the 47 discrete
parameters of a popular mixed integer linear programming solver.
|
1301.1950 | Syntactic Analysis Based on Morphological Characteristic Features of the
Romanian Language | cs.CL cs.AI | This paper refers to the syntactic analysis of phrases in Romanian, as an
important process in natural language processing. We suggest a real-time
solution based on the idea of using certain words or groups of words that
indicate the grammatical category, together with specific endings of some parts
of the sentence. Our idea rests on characteristics of the Romanian language,
where certain prepositions, adverbs, or specific endings can provide a lot of
information about the structure of a complex sentence. Such characteristics can
be found in other languages too, such as French. Using a special grammar, we
developed a system (DIASEXP) that can carry on a dialogue in natural language
with assertive and interrogative sentences about a "story" (a set of sentences
describing events from real life).
|
1301.1959 | The Effects of Powertrain Mechanical Response on the Dynamics and String
Stability of a Platoon of Adaptive Cruise Control Vehicles | nlin.AO cs.SY physics.soc-ph | The dynamics of a platoon of adaptive cruise control vehicles is analyzed for
a general mechanical response of the vehicle's power-train. Effects of
acceleration-feedback control that were not previously studied are found. For
small acceleration-feedback gain, which produces marginally string-stable
behavior, the reduction of a disturbance (with increasing car number n) is
found to be faster than for the maximum allowable gain. The asymptotic
magnitude of a disturbance is shown to fall off as $\mathrm{erf}(ct/\sqrt{n})$
as $n$ goes to infinity. For gain approaching the lower limit of stability,
oscillations in acceleration associated with a secondary maximum in the
transfer function (as a function of frequency) can occur. A frequency-dependent
gain that reduces the secondary maximum, but does not affect the transfer
function near zero frequency, is proposed. Performance is thereby improved by
elimination of the undesirable oscillations while the rapid disturbance
reduction is retained.
|
1301.2005 | A Distance-based Paraconsistent Semantics for DL-Lite | cs.AI | DL-Lite is an important family of description logics. Recently, there has been
increasing interest in handling inconsistency in DL-Lite, as the constraints
imposed by a TBox can easily be violated by assertions in the ABox. In
this paper, we present a distance-based paraconsistent semantics based on the
notion of feature in DL-Lite, which provides a novel way to rationally draw
meaningful conclusions even from an inconsistent knowledge base. Finally, we
investigate several important logical properties of this entailment relation
based on the new semantics and show its promising advantages in non-monotonic
reasoning for DL-Lite.
|