| id | title | categories | abstract |
|---|---|---|---|
1006.1407
|
Begin, After, and Later: a Maximal Decidable Interval Temporal Logic
|
cs.LO cs.AI
|
Interval temporal logics (ITLs) are logics for reasoning about temporal
statements expressed over intervals, i.e., periods of time. The most famous ITL
studied so far is Halpern and Shoham's HS, the logic of the thirteen Allen
interval relations. Unfortunately, HS and most of its fragments have an
undecidable satisfiability problem. This discouraged research in the area
until recently, when a number of non-trivial decidable ITLs were discovered.
This paper is a contribution towards the complete classification of all
fragments of HS. We consider different combinations of the interval
relations Begins, After, Later and their inverses Abar, Bbar, and Lbar. We know
from previous work that the combination ABBbarAbar is decidable only when
finite domains are considered (and undecidable otherwise), and that ABBbar is
decidable over the natural numbers. We extend these results by showing that
the decidability of ABBbar can be further extended to capture the language
ABBbarLbar, which lies between ABBbar and ABBbarAbar and turns out to
be maximal w.r.t. decidability over strongly discrete linear orders (e.g. finite
orders, the naturals, the integers). We also prove that the proposed decision
procedure is optimal with respect to its complexity class.
|
1006.1414
|
Model-Checking an Alternating-time Temporal Logic with Knowledge,
Imperfect Information, Perfect Recall and Communicating Coalitions
|
cs.LO cs.MA
|
We present a variant of ATL with distributed knowledge operators based on a
synchronous and perfect recall semantics. The coalition modalities in this
logic are based on partial observation of the full history, and incorporate a
form of cooperation between members of the coalition in which agents issue
their actions based on the distributed knowledge, for that coalition, of the
system history. We show that model-checking is decidable for this logic. The
technique utilizes two variants of games with imperfect information and
partially observable objectives, as well as a subset construction for
identifying states whose histories are indistinguishable to the considered
coalition.
|
1006.1420
|
Landauer's principle in the quantum domain
|
quant-ph cs.IT math.IT
|
Recent papers discussing thermodynamic processes in strongly coupled quantum
systems claim a violation of Landauer's principle and imply a violation of the
second law of thermodynamics. If true, this would have powerful consequences.
Perpetuum mobiles could be built as long as the operating temperature is
brought close to zero. It would also have serious consequences for thermodynamic
derivations of information theoretic results, such as the Holevo bound. Here we
argue why these claims are erroneous. Correlations occurring in the strongly
coupled, quantum domain require a rethink of how entropy, heat and work are
calculated. It is shown that a consistent treatment solves the paradox.
|
1006.1422
|
Engineering Long Range Distance Independent Entanglement through Kondo
Impurities in Spin Chains
|
quant-ph cs.IT math.IT
|
We investigate the entanglement properties of the Kondo spin chain when it is
prepared in its ground state as well as its dynamics following a single bond
quench. We show that a true measure of entanglement, such as negativity, enables
us to characterize the unique features of the gapless Kondo regime. We determine
the spatial extent of the Kondo screening cloud and propose an ansatz for the
ground state in the Kondo regime accessible to this spin chain; we also
demonstrate that the impurity spin is indeed maximally entangled with the Kondo
cloud. We exploit these features of the entanglement in the gapless Kondo
regime to show that a single local quench at one end of a Kondo spin chain may
always induce a fast and long lived oscillatory dynamics, which establishes a
high quality entanglement between the individual spins at the opposite ends of
the chain. This entanglement is a footprint of the presence of the Kondo cloud
and may be engineered so as to attain, even for very large chains, a constant
high value independent of the length; in addition, it is thermally robust. To
better evidence the remarkable peculiarities of the Kondo regime, we carry out a
parallel analysis of the entanglement properties of the Kondo spin chain model
in the gapped dimerised regime, where these remarkable features are absent.
|
1006.1426
|
Classification of delocalization power of global unitary operations in
terms of LOCC one-piece relocalization
|
quant-ph cs.IT math.IT
|
We study how two pieces of localized quantum information can be delocalized
across a composite Hilbert space when a global unitary operation is applied. We
classify the delocalization power of global unitary operations on quantum
information by investigating the possibility of relocalizing one piece of the
quantum information without using any global quantum resource. We show that
one-piece relocalization is possible if and only if the global unitary
operation is locally unitarily equivalent to a controlled-unitary operation. The
delocalization power turns out to reveal an aspect of the non-local
properties of global unitary operations different from that characterized by
their entangling power.
|
1006.1429
|
Causality and the Semantics of Provenance
|
cs.LO cs.DB
|
Provenance, or information about the sources, derivation, custody or history
of data, has been studied recently in a number of contexts, including
databases, scientific workflows and the Semantic Web. Many provenance
mechanisms have been developed, motivated by informal notions such as
influence, dependence, explanation and causality. However, there has been
little study of whether these mechanisms formally satisfy appropriate policies
or even how to formalize relevant motivating concepts such as causality. We
contend that mathematical models of these concepts are needed to justify and
compare provenance techniques. In this paper we review a theory of causality
based on structural models that has been developed in artificial intelligence,
and describe work in progress on using causality to give a semantics to
provenance graphs.
|
1006.1434
|
Computing by Means of Physics-Based Optical Neural Networks
|
cs.NE cs.AI
|
We report recent research on computing with biology-based neural network
models by means of physics-based opto-electronic hardware. New technology
provides opportunities for very-high-speed computation and uncovers problems
obstructing the widespread use of this new capability. The Computation
Modeling community may be able to offer solutions to these cross-boundary
research problems.
|
1006.1435
|
Distortion Outage Probability in MIMO Block-Fading Channels
|
cs.IT math.IT
|
We study analogue source transmission over MIMO block-fading channels with
receiver-only channel state information. Unlike previous work which considers
the end-to-end expected distortion as a figure of merit, we study the
distortion outage probability. We first consider the well-known
transmitter-informed bound, which yields a benchmark lower bound to the
distortion outage probability of any coding scheme. We next compare the results with
source-channel separation. The key difference from the expected distortion
approach is that if the channel code rate is chosen appropriately,
source-channel separation can not only achieve the same diversity exponent but
also the same distortion outage probability as the transmitter-informed lower
bound.
|
1006.1450
|
Separating Agent-Functioning and Inter-Agent Coordination by Activated
Modules: The DECOMAS Architecture
|
cs.MA
|
The embedding of self-organizing inter-agent processes in distributed
software applications enables the decentralized coordination of system elements,
based solely on concerted, localized interactions. Separating and
encapsulating the activities that are conceptually related to
coordination is a crucial concern for systematic development practices, in
order to prepare the reuse and systematic integration of coordination processes
in software systems. Here, we discuss a programming model based on the
externalization of process prescriptions and their embedding in Multi-Agent
Systems (MAS). One fundamental design concern for a corresponding execution
middleware is the minimally invasive augmentation of the activities that affect
coordination. This design challenge is approached by the activation of agent
modules. Modules are converted to software elements that reason about and
modify their host agent. We discuss and formalize this extension within the
context of a generic coordination architecture and exemplify the proposed
programming model with the decentralized management of (web) service
infrastructures.
|
1006.1512
|
The Deterministic Dendritic Cell Algorithm
|
cs.AI cs.NE
|
The Dendritic Cell Algorithm is an immune-inspired algorithm originally
based on the function of natural dendritic cells. The original instantiation of
the algorithm is highly stochastic. While the performance of the
algorithm is good when applied to large real-time datasets, it is difficult to
analyse due to the number of random-based elements. In this paper a
deterministic version of the algorithm is proposed, implemented and tested
using a port scan dataset to provide a controllable system. This version
has a controllable number of parameters, which are experimented with in
this paper. In addition, the effects of using time windows and of varying
the number of cells are examined, both of which are shown to influence the
algorithm. Finally, a novel metric for the assessment of the algorithm's output
is introduced and proves to be more sensitive than the metric used
with the original Dendritic Cell Algorithm.
|
1006.1518
|
The DCA: SOMe Comparison - A Comparative Study between Two
Biologically-Inspired Algorithms
|
cs.AI cs.CR cs.NE
|
The Dendritic Cell Algorithm (DCA) is an immune-inspired algorithm, developed
for the purpose of anomaly detection. The algorithm performs multi-sensor data
fusion and correlation which results in a 'context aware' detection system.
Previous applications of the DCA have included the detection of potentially
malicious port scanning activity, where it has produced high rates of true
positives and low rates of false positives. In this work we aim to compare the
performance of the DCA and of a Self-Organizing Map (SOM) when applied to the
detection of SYN port scans, through experimental analysis. A SOM is an ideal
candidate for comparison as it shares similarities with the DCA in terms of the
data fusion method employed. It is shown that the results of the two systems
are comparable, and both produce false positives for the same processes. This
shows that the DCA can produce anomaly detection results to the same standard
as an established technique.
|
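The SOM half of the comparison rests on a standard training loop: find the best-matching unit for each sample and pull its grid neighbours toward the data. As a hedged illustration (the grid size, decay schedules, and toy 2-D data are my own choices, not the paper's SYN-scan setup), a minimal SOM trainer might look like:

```python
import numpy as np

def train_som(data, grid_w=4, grid_h=4, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map trainer (illustrative sketch only).

    data: (n_samples, n_features) array. Returns the trained weight grid.
    """
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    # Grid coordinates used by the neighbourhood function.
    coords = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
    )
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)              # decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 0.1  # decaying neighbourhood
            # Best-matching unit: grid cell whose weight is closest to the sample.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU's neighbours toward the sample, weighted by grid distance.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights
```

The data-fusion analogy the abstract draws lies in this weighted combination of input dimensions when selecting and updating the best-matching unit.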
1006.1526
|
The Motif Tracking Algorithm
|
cs.AI cs.CE cs.NE
|
The search for patterns or motifs in data represents a problem area of key
interest to finance and economic researchers. In this paper we introduce the
Motif Tracking Algorithm, a novel immune inspired pattern identification tool
that is able to identify unknown motifs of unspecified length which repeat
within time series data. The power of the algorithm comes from the fact that it
uses a small number of parameters with minimal assumptions regarding the data
being examined or the underlying motifs. Our interest lies in applying the
algorithm to financial time series data to identify unknown patterns that
exist. The algorithm is tested using three separate data sets. Particular
suitability to financial data is shown by applying it to oil price data. In all
cases the algorithm identifies the presence of a motif population in a fast and
efficient manner due to the utilisation of an intuitive symbolic
representation. The resulting population of motifs is shown to have
considerable potential value for other applications such as forecasting and
algorithm seeding.
|
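The abstract credits an "intuitive symbolic representation" for the algorithm's efficiency. The paper's exact representation is not specified here; a common SAX-style discretization (my illustration, not necessarily the authors') turns a numeric series into a short symbol string that motif matching can operate on:

```python
import numpy as np

def symbolize(series, n_segments=8, alphabet="abcd"):
    """SAX-style discretization: z-normalize, piecewise-aggregate, then map
    segment means to symbols via equiprobable Gaussian breakpoints."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)              # z-normalize
    segments = np.array_split(x, n_segments)
    means = np.array([seg.mean() for seg in segments])  # piecewise aggregate
    # Breakpoints splitting a standard Gaussian into 4 equiprobable regions.
    breakpoints = np.array([-0.67, 0.0, 0.67])
    return "".join(alphabet[np.searchsorted(breakpoints, m)] for m in means)
```

A rising price series then maps to a string like `"abcd"`, and repeated motifs become repeated substrings, which are cheap to find.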
1006.1535
|
Tree-structure Expectation Propagation for Decoding LDPC codes over
Binary Erasure Channels
|
cs.IT math.IT
|
Expectation Propagation generalizes Belief Propagation (BP) in two
ways. First, it can be used with any exponential family distribution over the
cliques in the graph. Second, it can impose additional constraints on the
marginal distributions. We use this second property to impose pair-wise
marginal distribution constraints in some check nodes of the LDPC Tanner graph.
These additional constraints allow decoding the received codeword when the BP
decoder gets stuck. In this paper, we first present the new decoding algorithm,
whose complexity is identical to the BP decoder, and we then prove that it is
able to decode codewords with a larger fraction of erasures, as the block size
tends to infinity. The proposed algorithm can also be understood as a
simplification of the Maxwell decoder, but without its computational
complexity. We also illustrate that the new algorithm outperforms the BP
decoder for finite block-sizes.
|
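The point at which the BP decoder "gets stuck" is concrete on the erasure channel: BP reduces to a peeling process that repeatedly solves any check equation containing exactly one erased bit, and it stalls on a stopping set where every check sees two or more erasures. A minimal peeling decoder (my sketch; the paper's EP extension adds pairwise check-node constraints not shown here):

```python
import numpy as np

def bp_erasure_decode(H, received):
    """Peeling (BP) decoder for the binary erasure channel.

    H: parity-check matrix as a 0/1 numpy array.
    received: list of bits, with None marking erasures.
    Resolves as many erasures as peeling can, then returns the word.
    """
    word = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j in np.flatnonzero(row) if word[j] is None]
            if len(erased) == 1:  # exactly one unknown bit: parity determines it
                j = erased[0]
                known = [word[k] for k in np.flatnonzero(row) if k != j]
                word[j] = sum(known) % 2
                progress = True
    return word
```

On the (7,4) Hamming parity-check matrix, erasing bits {0, 2} is recoverable, while erasing {4, 5, 6} forms a stopping set on which the peeling decoder stalls, which is exactly the situation the extra EP constraints are meant to break.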
1006.1537
|
New worst upper bound for #SAT
|
cs.AI cs.CC
|
Rigorous theoretical analyses of algorithms for #SAT have been proposed
in the literature. Previous algorithms for solving #SAT have been
analyzed only with the number of variables as the parameter. However, the
time complexity for solving #SAT instances depends not only on the number of
variables but also on the number of clauses. It is therefore worthwhile to
study the time complexity from the other point of view, i.e. the number of
clauses. In this paper, we present algorithms for solving #2-SAT and #3-SAT
with rigorous complexity analyses using the number of clauses as the parameter.
By analyzing the algorithms, we obtain the new worst-case upper bounds
O(1.1892^m) for #2-SAT and O(1.4142^m) for #3-SAT, where m is the number of
clauses.
|
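For tiny instances, clause-parameterized bounds like these can be sanity-checked against exhaustive counting. A brute-force #SAT counter (exponential in the number of variables, unlike the clause-parameterized algorithms the abstract analyzes; purely my illustration):

```python
from itertools import product

def count_models(clauses, n_vars):
    """Brute-force #SAT: count truth assignments satisfying all clauses.

    clauses: list of clauses, each a list of non-zero ints in DIMACS style
    (literal v means variable v is true, -v means it is false).
    """
    count = 0
    for assignment in product([False, True], repeat=n_vars):
        satisfied = all(
            any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
            for clause in clauses
        )
        count += satisfied
    return count
```

For example, the #2-SAT instance (x1 OR x2) AND (NOT x1 OR x2) has exactly two models, both with x2 true.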
1006.1543
|
Efficient Discovery of Large Synchronous Events in Neural Spike Streams
|
cs.NE
|
We address the problem of finding patterns from multi-neuronal spike trains
that give us insights into the multi-neuronal codes used in the brain and help
us design better brain-computer interfaces. We focus on the synchronous firings
of groups of neurons as these have been shown to play a major role in coding
and communication. With large electrode arrays, it is now possible to
simultaneously record the spiking activity of hundreds of neurons over large
periods of time. Recently, techniques have been developed to efficiently count
the frequency of synchronous firing patterns. However, as the number of
neurons being observed grows, these techniques suffer from a combinatorial
explosion in the number of possible patterns and do not scale well. In this paper, we
present a temporal data mining scheme that overcomes many of these problems. It
generates a set of candidate patterns from frequent patterns of smaller size;
all possible patterns are not counted. Also we count only a certain well
defined subset of occurrences and this makes the process more efficient. We
highlight the computational advantage that this approach offers over the
existing methods through simulations.
We also propose methods for assessing the statistical significance of the
discovered patterns. We detect only those patterns that repeat often enough to
be significant, which allows the threshold for the data-mining application to
be fixed automatically. Finally, we discuss the usefulness of these methods for
brain-computer interfaces.
|
1006.1548
|
On Communication over Unknown Sparse Frequency-Selective Block-Fading
Channels
|
cs.IT math.IT
|
This paper considers the problem of reliable communication over discrete-time
channels whose impulse responses have length $L$ and exactly $S\leq L$ non-zero
coefficients, and whose support and coefficients remain fixed over blocks of
$N>L$ channel uses but change independently from block to block. Here, it is
assumed that the channel's support and coefficient realizations are both
unknown, although their statistics are known. Assuming Gaussian
non-zero-coefficients and noise, and focusing on the high-SNR regime, it is
first shown that the ergodic noncoherent channel capacity has pre-log factor
$1-\frac{S}{N}$ for any $L$. It is then shown that, to communicate with
arbitrarily small error probability at rates in accordance with the capacity
pre-log factor, it suffices to use pilot-aided orthogonal frequency-division
multiplexing (OFDM) with $S$ pilots per fading block, in conjunction with an
appropriate noncoherent decoder. Since the achievability result is proven using
a noncoherent decoder whose complexity grows exponentially in the number of
fading blocks $K$, a simpler decoder, based on $S+1$ pilots, is also proposed.
Its $\epsilon$-achievable rate is shown to have pre-log factor equal to
$1-\frac{S+1}{N}$ with the previously considered channel, while its achievable
rate is shown to have pre-log factor $1-\frac{S+1}{N}$ when the support of the
block-fading channel remains fixed over time.
|
1006.1563
|
ToLeRating UR-STD
|
cs.AI cs.CR cs.NE
|
A new emerging paradigm of Uncertain Risk of Suspicion, Threat and Danger,
observed across the field of information security, is described. Based on this
paradigm a novel approach to anomaly detection is presented. Our approach is
based on a simple yet powerful analogy from the innate part of the human immune
system, the Toll-Like Receptors. We argue that such receptors incorporated as
part of an anomaly detector enhance the detector's ability to distinguish
normal and anomalous behaviour. In addition we propose that Toll-Like Receptors
enable the classification of detected anomalies based on the types of attacks
that perpetrate the anomalous behaviour. Such a classification is either
missing in the existing literature or is not fit for the purpose of reducing the
burden of an administrator of an intrusion detection system. For our model to
work, we propose the creation of a taxonomy of the digital Acytota, based on
which our receptors are created.
|
1006.1565
|
Information Theory and Statistical Physics - Lecture Notes
|
cs.IT cond-mat.dis-nn cond-mat.stat-mech math.IT
|
This document consists of lecture notes for a graduate course, which focuses
on the relations between Information Theory and Statistical Physics. The course
is aimed at EE graduate students in the area of Communications and Information
Theory, as well as to graduate students in Physics who have basic background in
Information Theory. Strong emphasis is given to the analogy and parallelism
between Information Theory and Statistical Physics, as well as to the insights,
the analysis tools and techniques that can be borrowed from Statistical Physics
and `imported' to certain problem areas in Information Theory. This is a
research trend that has been very active in the last few decades, and the hope
is that by exposing the student to the meeting points between these two
disciplines, we will enhance his/her background and perspective to carry out
research in the field.
A short outline of the course is as follows: Introduction; Elementary
Statistical Physics and its Relation to Information Theory; Analysis Tools in
Statistical Physics; Systems of Interacting Particles and Phase Transitions;
The Random Energy Model (REM) and Random Channel Coding; Additional Topics
(optional).
|
1006.1568
|
Towards a Conceptual Framework for Innate Immunity
|
cs.AI cs.NE
|
Innate immunity now occupies a central role in immunology. However,
artificial immune system models have largely been inspired by adaptive not
innate immunity. This paper reviews the biological principles and properties of
innate immunity and, adopting a conceptual framework, asks how these can be
incorporated into artificial models. The aim is to outline a meta-framework for
models of innate immunity.
|
1006.1592
|
Information-theoretic Capacity of Clustered Random Networks
|
cs.IT math.IT
|
We analyze the capacity scaling laws of clustered ad hoc networks in which
nodes are distributed according to a doubly stochastic shot-noise Cox process.
We identify five different operational regimes, and for each regime we devise a
communication strategy that achieves a throughput to within a
poly-logarithmic factor (in the number of nodes) of the maximum theoretical
capacity.
|
1006.1658
|
A Link between Guruswami--Sudan's List Decoding and Decoding of
Interleaved Reed--Solomon Codes
|
cs.IT math.IT
|
The Welch--Berlekamp approach for Reed--Solomon (RS) codes forms a bridge
between classical syndrome-based decoding algorithms and interpolation-based
list-decoding procedures for list size l=1. It returns the univariate
error-locator polynomial and the evaluation polynomial of the RS code as a
y-root. In this paper, we show the connection between the Welch--Berlekamp
approach for a specific Interleaved Reed--Solomon code scheme and the
Guruswami--Sudan principle. It turns out that the decoding of Interleaved RS
codes can be formulated as a modified Guruswami--Sudan problem with a specific
multiplicity assignment. We show that our new approach results in the same
solution space as the Welch--Berlekamp scheme. Furthermore, we prove some
important properties.
|
1006.1661
|
Variants of the LLL Algorithm in Digital Communications: Complexity
Analysis and Fixed-Complexity Implementation
|
cs.IT math.IT
|
The Lenstra-Lenstra-Lov\'asz (LLL) algorithm is the most practical lattice
reduction algorithm in digital communications. In this paper, several variants
of the LLL algorithm with either lower theoretic complexity or fixed-complexity
implementation are proposed and/or analyzed. Firstly, the $O(n^4\log n)$
theoretic average complexity of the standard LLL algorithm under the model of
i.i.d. complex normal distribution is derived. Then, the use of effective LLL
reduction for lattice decoding is presented, where size reduction is only
performed for pairs of consecutive basis vectors. Its average complexity is
shown to be $O(n^3\log n)$, which is an order lower than previously thought. To
address the issue of variable complexity of standard LLL, two fixed-complexity
approximations of LLL are proposed. One is fixed-complexity effective LLL,
while the other is fixed-complexity LLL with deep insertion, which is closely
related to the well-known V-BLAST algorithm. Such fixed-complexity structures
are highly desirable in hardware implementation since they allow straightforward
constant-throughput implementation.
|
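All the variants in the abstract build on the standard LLL template: size-reduce each basis vector against its predecessors, test the Lovász condition, and swap on failure. A textbook delta = 3/4 sketch in exact rational arithmetic (my illustration of the baseline algorithm, not the effective or fixed-complexity variants proposed in the paper):

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL lattice reduction over integer row vectors."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        # Orthogonalized vectors bstar and projection coefficients mu.
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        bstar, mu = gram_schmidt()
        # Size reduction of b[k] against all preceding basis vectors.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        # Lovász condition: advance if satisfied, otherwise swap and back up.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(
            bstar[k - 1], bstar[k - 1]
        ):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

The "effective" variant the abstract describes restricts the size-reduction loop to the pair (k, k-1) only, which is where its complexity saving comes from.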
1006.1663
|
Optimal University Database Governance with a Data Warehouse
|
cs.DB
|
The emergence of new higher education institutions has created
competition in the higher education market, and a data warehouse can be used as an
effective technological tool for increasing competitiveness in this
market. A data warehouse produces reliable reports for the institution's
high-level management in a short time, enabling faster and better decision
making, not only on increasing the number of admitted students, but also on the
possibility of finding extraordinary, unconventional funds for the institution.
The efficiency comparison was based on the length and number of processed
records, total processed bytes, number of processed tables, query running time and
records produced on the OLTP database and the data warehouse. Efficiency
percentages were measured by the formula for percentage increase, and the average
efficiency percentage of 461,801.04% shows that using a data warehouse is more
powerful and efficient than using an OLTP database. The data warehouse was
modeled as a hypercube built from the limited set of high-demand reports usually
used by high-level management. Fields representing the constructive-merge
loading are inserted into every fact and dimension table, and the ETL
(Extraction, Transformation and Loading) process runs based on the old and
new files.
|
1006.1666
|
On the Proximity Factors of Lattice Reduction-Aided Decoding
|
cs.IT math.IT
|
Lattice reduction-aided decoding features reduced decoding complexity and
near-optimum performance in multi-input multi-output communications. In this
paper, a quantitative analysis of lattice reduction-aided decoding is
presented. To this end, the proximity factors are defined to measure the
worst-case losses in distances relative to closest point search (in an infinite
lattice). Upper bounds on the proximity factors are derived, which are
functions of the dimension $n$ of the lattice alone. The study is then extended
to dual-basis reduction. It is found that the bounds for dual-basis
reduction may be smaller. Reasonably good bounds are derived in many cases. The
constant bounds on proximity factors not only imply the same diversity order in
fading channels, but also relate the error probabilities of (infinite) lattice
decoding and lattice reduction-aided decoding.
|
1006.1667
|
Interference Channel with Generalized Feedback (a.k.a. with source
cooperation). Part I: Achievable Region
|
cs.IT math.IT
|
An Interference Channel with Generalized Feedback (IFC-GF) is a model for a
wireless network where several source-destination pairs compete for the same
channel resources, and where the sources have the ability to sense the current
channel activity. The signal overheard from the channel provides information
about the activity of the other users, and thus furnishes the basis for
cooperation. In this two-part paper we study achievable strategies and outer
bounds for a general IFC-GF with two source-destination pairs. We then evaluate
the proposed regions for the Gaussian channel. Part I: achievable region. We
propose that the generalized feedback is used to gain knowledge about the
message sent by the other user, which is then exploited in two ways: (a) to {\em
relay} the messages that can be decoded at both destinations--thus realizing the
beam-forming gains of a distributed multi-antenna system--and (b) to {\em hide}
the messages that cannot be decoded at the non-intended destination--thus
leveraging the interference "pre-cancellation" property of dirty-paper-type
coding. We show that our achievable region generalizes several known achievable
regions for IFC-GF and that it reduces to known achievable regions for some of
the channels subsumed by the IFC-GF model.
|
1006.1669
|
On the Universality of Sequential Slotted Amplify and Forward Strategy
in Cooperative Communications
|
cs.IT math.IT
|
While cooperative communication has many benefits and is expected to play an
important role in future wireless networks, many challenges are still unsolved.
Previous research has developed different relaying strategies for cooperative
multiple access channels (CMA), cooperative multiple relay channels (CMR) and
cooperative broadcast channels (CBC). However, a unifying strategy that is
universally optimal for these three classical channel models has been lacking.
Sequential slotted amplify and forward (SSAF) strategy was previously proposed
to achieve the optimal diversity and multiplexing tradeoff (DMT) for CMR. In
this paper, the use of SSAF strategy is extended to CBC and CMA, and its
optimality for both of them is shown. For CBC, a CBC-SSAF strategy is proposed
which can asymptotically achieve the DMT upper bound when the number of
cooperative users is large. For CMA, a CMA-SSAF strategy is proposed which can
exactly achieve the DMT upper bound with any number of cooperative users.
In this way, SSAF strategy is shown to be universally optimal for all these
three classical channel models and has great potential to provide universal
optimality for wireless cooperative networks.
|
1006.1678
|
The MUSIC Algorithm for Sparse Objects: A Compressed Sensing Analysis
|
cs.IT math.AP math.IT physics.data-an
|
The MUSIC algorithm, with its extension for imaging sparse {\em extended}
objects, is analyzed by compressed sensing (CS) techniques. The notion of
restricted isometry property (RIP) and an upper bound on the restricted
isometry constant (RIC) are employed to establish sufficient conditions for the
exact localization by MUSIC with or without the presence of noise. In the
noiseless case, the sufficient condition gives an upper bound on the numbers of
random sampling and incident directions necessary for exact localization. In
the noisy case, the sufficient condition assumes additionally an upper bound
for the noise-to-object ratio in terms of the RIC and the condition number of
objects. Rigorous comparison of performance between MUSIC and the CS
minimization principle, Lasso, is given. In general, the MUSIC algorithm
guarantees to recover, with high probability, $s$ scatterers with $n=\mathcal{O}(s^2)$
random sampling and incident directions and sufficiently high frequency. For
the favorable imaging geometry where the scatterers are distributed on a
transverse plane, MUSIC guarantees to recover, with high probability, $s$
scatterers with a median frequency and $n=\mathcal{O}(s)$ random sampling/incident
directions. Numerical results confirm that the Lasso outperforms MUSIC in the
well-resolved case while the opposite is true for the under-resolved case. The
latter effect indicates the superresolution capability of the MUSIC algorithm.
Another advantage of MUSIC over the Lasso as applied to imaging is the former's
flexibility with grid spacing and guarantee of {\em approximate} localization
of sufficiently separated objects in an arbitrarily fine grid. The error can be
bounded from above by $\mathcal{O}(\lambda s)$ for general configurations and
$\mathcal{O}(\lambda)$ for objects distributed in a transverse plane.
|
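The subspace mechanics behind MUSIC are easiest to see in the classical direction-of-arrival setting: localize where the steering vector is (nearly) orthogonal to the noise subspace of the sample covariance. The sketch below (a uniform linear array with half-wavelength spacing; my simplified illustration, not the extended-object imaging setting analyzed above) computes the pseudospectrum:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg):
    """1-D MUSIC pseudospectrum for a uniform linear array.

    X: snapshot matrix (n_sensors x n_snapshots); n_sources: assumed model order.
    Returns pseudospectrum values over the candidate angles (degrees).
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    En = eigvecs[:, : n_sensors - n_sources]  # noise-subspace eigenvectors
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        # Steering vector for half-wavelength element spacing.
        a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))
        # Peaks occur where a is orthogonal to the noise subspace.
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)
```

With a single source and high SNR, the pseudospectrum peaks sharply at the true angle; the compressed-sensing analysis in the abstract quantifies when this orthogonality test remains reliable under noise.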
1006.1681
|
Towards the Design of Heuristics by Means of Self-Assembly
|
cs.AI cs.NE
|
The current investigations on hyper-heuristics design have sprung up in two
different flavours: heuristics that choose heuristics and heuristics that
generate heuristics. In the latter, the goal is to develop a problem-domain
independent strategy to automatically generate a good performing heuristic for
the problem at hand. This can be done, for example, by automatically selecting
and combining different low-level heuristics into a problem specific and
effective strategy. Hyper-heuristics raise the level of generality on automated
problem solving by attempting to select and/or generate tailored heuristics for
the problem at hand. Some approaches like genetic programming have been
proposed for this. In this paper, we explore an elegant nature-inspired
alternative based on self-assembly construction processes, in which structures
emerge out of local interactions between autonomous components. This idea
arises from previous works in which computational models of self-assembly were
subject to evolutionary design in order to perform the automatic construction
of user-defined structures. The aim of this paper is thus to present a novel
methodology for the automated design of heuristics by means of self-assembly.
|
1006.1690
|
Full-Duplex Relay based on Zero-Forcing Beamforming
|
cs.IT math.IT
|
In this paper, we propose a full-duplex relay (FDR) based on a zero-forcing
beamforming (ZFBF) for a multiuser MIMO relay system. The ZFBF is employed at
the base station to suppress both the self-interference of the relay and the
multiuser interference at the same time. Numerical results show that the
proposed FDR can enhance the sum rate performance as compared to the
half-duplex relay (HDR), if sufficient isolation between the transmit and
receive antennas is ensured at the relay.
|
1006.1692
|
Measuring interesting rules in Characteristic rule
|
cs.DB cs.AI
|
In the sixth strategy step of attribute-oriented induction, which controls
the threshold on generalized relations, candidate attributes may be selected
for further generalization, and identical tuples are merged until the number of
tuples is no greater than the threshold value, as implemented in the basic
attribute-oriented induction algorithm. At this step, the number of tuples in
the final generalization result may still exceed the threshold value. To obtain
a final generalization result with only a small number of tuples that can
easily be transformed into a simple logical formula, the seventh strategy step,
rule transformation, simplifies the result by unioning or grouping identical
attributes. Our approach to measuring rule interestingness is opposite to the
heuristic measurement approach of Fudger and Hamilton, in which more complex
concept hierarchies are more likely to yield interesting results; in our
approach, simpler concept hierarchies are more likely to yield interesting
results, while more complex concept hierarchies make the generalization process
in the concept tree more complex. The decision on whether a rule is interesting
is influenced by the width and the depth (level) of the concept tree.
|
1006.1694
|
Pure Asymmetric Quantum MDS Codes from CSS Construction: A Complete
Characterization
|
cs.IT math.IT
|
Using the Calderbank-Shor-Steane (CSS) construction, pure $q$-ary asymmetric
quantum error-correcting codes attaining the quantum Singleton bound are
constructed. Such codes are called pure CSS asymmetric quantum maximum distance
separable (AQMDS) codes. Assuming the validity of the classical MDS Conjecture,
pure CSS AQMDS codes of all possible parameters are accounted for.
|
1006.1695
|
Attribute Oriented Induction with simple select SQL statement
|
cs.DB
|
Learning rules from a relational database for data mining purposes, whether
characteristic rules or classification/discriminant rules in the
attribute-oriented induction technique, can be made quicker, easier, and
simpler with a simple SQL statement. With just one simple SQL statement,
characteristic and classification rules can be created simultaneously.
Combining the SQL statement with other application software adds the ability
to compute the t-weight, which measures the typicality of each record in a
characteristic rule, and the d-weight, which measures the discriminating
behavior of the learned classification/discriminant rule, particularly for
further generalization of characteristic rules. Mapping the concept hierarchy
into tables based on its concept trees is decisive for the success of the
simple SQL statement: by knowing the right standard knowledge to transform
each concept tree of the hierarchy into one table, the simple SQL statement
can be run properly.
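As an illustration of this idea (the paper gives no schema, so the table
names, columns, and data below are hypothetical), a single SELECT that joins
the base relation with a concept-hierarchy table can generalize an attribute
and count the votes used for the t-weight in one pass:

```python
import sqlite3

# Hypothetical schema: a base relation plus one table per concept tree,
# mapping each low-level value to its higher-level concept.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE student(name TEXT, major TEXT, gpa REAL);
CREATE TABLE major_hierarchy(major TEXT, category TEXT);
INSERT INTO student VALUES ('A','physics',3.5),('B','math',3.9),('C','biology',3.2);
INSERT INTO major_hierarchy VALUES ('physics','science'),('math','science'),('biology','science');
""")

# One simple SELECT: generalize 'major' to its higher-level concept and
# count votes per generalized tuple (the count feeds the t-weight).
rows = con.execute("""
SELECT h.category, COUNT(*) AS votes
FROM student s JOIN major_hierarchy h ON s.major = h.major
GROUP BY h.category
""").fetchall()
print(rows)  # [('science', 3)]
```

A classification (d-weight) variant would add the class attribute to the
GROUP BY clause in the same statement.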
|
1006.1699
|
Multidimensional Datawarehouse with Combination Formula
|
cs.DB
|
Multidimensionality in a data warehouse is a necessity and is central to
information delivery; without it, a data warehouse is incomplete.
Multidimensionality gives the ability to analyze business measurements in many
different ways, and is synonymous with online analytical processing (OLAP).
Using data warehouse concepts such as slice-and-dice, drill-down, and roll-up
increases the power of a multidimensional data warehouse. The research
question discussed in this paper is how deep the multidimensional capability
of each fact table in a data warehouse can go. Using the statistical
combination formula, we explore the combinations that can be yielded from the
dimensions of a hypercube: the entire set of dimension combinations, the
minimum combination, and the maximum combination.
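A minimal sketch of the counting involved, assuming each group-by over a
non-empty subset of dimensions counts as one multidimensional view (the
function name and this exact convention are illustrative, not taken from the
paper): an n-dimensional fact table yields C(n, k) views at each level k, and
2^n - 1 views in total.

```python
from math import comb

def dimension_combinations(n_dims: int):
    """Count the aggregate views of an n-dimensional fact table:
    one group-by for every non-empty subset of its dimensions."""
    per_level = {k: comb(n_dims, k) for k in range(1, n_dims + 1)}
    total = sum(per_level.values())  # equals 2**n_dims - 1
    return per_level, total

per_level, total = dimension_combinations(3)
print(per_level)  # {1: 3, 2: 3, 3: 1}
print(total)      # 7
```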
|
1006.1701
|
Virtual information system on working area
|
cs.AI
|
To obtain a strategic position in business competition, an organization's
information system must be ahead in this information age, where information is
one of the weapons for winning the competition and, in the right hands,
becomes the right bullet. An information system supported by information
technology is not enough if it merely runs on the Internet or is implemented
with Internet technology. The growth of information technology as a tool that
helps people and makes their work easier should be accompanied by fun and
happiness when they interact with the technology itself. People fundamentally
like to play: humans have played since childhood, freely and happily, and as
they grow up they can no longer play as much as they did as children. We
should therefore develop information systems that do not merely present
information, but help people indulge their natural instinct for playing and
having fun while they interact with the system. A virtual information system
is a way to bring an atmosphere of play and fun into the working area.
|
1006.1703
|
Indonesian Earthquake Decision Support System
|
cs.AI
|
An earthquake decision support system (DSS) is an information technology
environment that a government can use to sharpen, speed up, and improve
earthquake mitigation decisions. An earthquake DSS can be delivered as
e-government, serving not only the government itself but also guaranteeing
every citizen's right to education, training, and information about
earthquakes and how to cope with them. By saving and maintaining all data and
information about earthquakes and earthquake mitigation in Indonesia,
knowledge can be managed for future use and become a resource for mining.
Using Web technology enhances global access and ease of use. A data warehouse,
as a denormalized database for multidimensional analysis, speeds up query
processing and increases the variety of reports. Linking with other disaster
DSSs into one national disaster DSS, and with other government and
international information systems, further enhances the knowledge and sharpens
the reports.
|
1006.1727
|
Distributed Consensus with Finite Message Passing
|
cs.IT math.IT
|
Inspired by distributed resource allocation problems in dynamic topology
networks, we initiate the study of distributed consensus with finite message
passing. We first find a sufficient condition on the network graph under which no
distributed protocol can guarantee a conflict-free allocation after $R$ rounds
of message passing. Secondly we fully characterize the conflict minimizing
zero-round protocol for path graphs, namely random allocation, which partitions
the graph into small conflict groups. Thirdly, we enumerate all one-round
protocols for path graphs and show that the best one further partitions each of
the smaller groups. Finally, we show that the number of conflicts decreases to
zero as the number of available resources increases.
|
1006.1735
|
Algebraic Attack on the Alternating Step(r,s)Generator
|
cs.CR cs.IT math.IT
|
The Alternating Step(r,s) Generator, ASG(r,s), is a clock-controlled sequence
generator recently proposed by A. Kanso. It consists of three
registers of length l, m and n bits. The first register controls the clocking
of the two others. The two other registers are clocked r times (or not clocked)
(resp. s times or not clocked) depending on the clock-control bit in the first
register. The special case r=s=1 is the original and well known Alternating
Step Generator. Kanso claims there is no efficient attack against the ASG(r,s)
since r and s are kept secret. In this paper, we present an Alternating Step
Generator (ASG) model of the ASG(r,s) and a new, efficient
algebraic attack on ASG(r,s) using 3(m+n) bits of the output sequence to find
the secret key with O((m^2+n^2)*2^{l+1}+ (2^{m-1})*m^3 + (2^{n-1})*n^3)
computational complexity. We show that this system is no more secure than the
original ASG, contrary to the claim of the ASG(r,s)'s designer.
|
1006.1743
|
A Basis for all Solutions of the Key Equation for Gabidulin Codes
|
cs.IT math.IT
|
We present and prove the correctness of an efficient algorithm that provides
a basis for all solutions of a key equation in order to decode Gabidulin (G-)
codes up to a given radius tau. This algorithm is based on a symbolic
equivalent of the Euclidean Algorithm (EA) and can be applied for decoding of
G-codes beyond half the minimum rank distance. If the key equation has a unique
solution, our algorithm reduces to Gabidulin's decoding algorithm up to half
the minimum distance. If the solution is not unique, we provide a basis for all
solutions of the key equation. Our algorithm has time complexity O(tau^2) and
is a generalization of the modified EA by Bossert and Bezzateev for
Reed-Solomon codes.
|
1006.1746
|
Calibration and Internal no-Regret with Partial Monitoring
|
cs.GT cs.LG stat.ML
|
Calibrated strategies can be obtained by performing strategies that have no
internal regret in some auxiliary game. Such strategies can be constructed
explicitly with the use of Blackwell's approachability theorem, in another
auxiliary game. We establish the converse: a strategy that approaches a convex
$B$-set can be derived from the construction of a calibrated strategy. We
develop these tools in the framework of a game with partial monitoring, where
players do not observe the actions of their opponents but receive random
signals, to define a notion of internal regret and construct strategies that
have no such regret.
|
1006.1749
|
Converse Lyapunov Theorems for Switched Systems in Banach and Hilbert
Spaces
|
math.OC cs.SY
|
We consider switched systems on Banach and Hilbert spaces governed by
strongly continuous one-parameter semigroups of linear evolution operators. We
provide necessary and sufficient conditions for their global exponential
stability, uniform with respect to the switching signal, in terms of the
existence of a Lyapunov function common to all modes.
|
1006.1772
|
Analysis of a Collaborative Filter Based on Popularity Amongst Neighbors
|
cs.IT math.IT
|
In this paper, we analyze a collaborative filter that answers the simple
question: What is popular amongst your friends? While this basic principle
seems to be prevalent in many practical implementations, there does not appear
to be much theoretical analysis of its performance. In this paper, we partly
fill this gap. While recent works on this topic, such as the low-rank matrix
completion literature, consider the probability of error in recovering the
entire rating matrix, we consider probability of an error in an individual
recommendation (bit error rate (BER)). For a mathematical model introduced in
[1],[2], we identify three regimes of operation for our algorithm (named
Popularity Amongst Friends (PAF)) in the limit as the matrix size grows to
infinity. In a regime characterized by a large number of samples and small
degrees of freedom (defined precisely for the model in the paper), the
asymptotic BER is zero; in a regime characterized by a large number of samples
and large degrees of freedom, the asymptotic BER is bounded away from 0 and 1/2
(and is identified exactly except for a special case); and in a regime
characterized by a small number of samples, the algorithm fails. We also
present numerical results for the MovieLens and Netflix datasets. We discuss
the empirical performance in light of our theoretical results and compare with
an approach based on low-rank matrix completion.
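A minimal sketch of the basic principle, not the paper's precise algorithm or
model (the function, names, and tie-breaking rule below are illustrative
assumptions): predict a user's binary rating of an item as the majority vote
among the user's neighbors who rated that item.

```python
def paf_predict(ratings, neighbors, user, item):
    """Popularity-amongst-friends sketch: majority vote of the user's
    neighbors on the item. `ratings` maps user -> {item: +1 or -1};
    a tie or no neighbor vote defaults to -1."""
    votes = [ratings[v][item]
             for v in neighbors[user]
             if item in ratings.get(v, {})]
    return 1 if sum(votes) > 0 else -1

ratings = {"u1": {"m": 1}, "u2": {"m": 1}, "u3": {"m": -1}}
neighbors = {"u0": ["u1", "u2", "u3"]}
print(paf_predict(ratings, neighbors, "u0", "m"))  # 1
```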
|
1006.1786
|
Measuring Meaning on the World-Wide Web
|
cs.AI cs.CL
|
We introduce the notion of the 'meaning bound' of a word with respect to
another word by making use of the World-Wide Web as a conceptual environment
for meaning. The meaning of a word with respect to another word is established
by multiplying the number of webpages containing both words by the total
number of webpages on the World-Wide Web, and dividing the result by the
product of the numbers of webpages for each of the single words. We
calculate the meaning bounds for several words and analyze different aspects of
these by looking at specific examples.
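The quantity described above can be sketched directly (the page counts in the
example are made up for illustration):

```python
def meaning_bound(n_ab: int, n_a: int, n_b: int, n_total: int) -> float:
    """Meaning bound of word A with respect to word B, as described in the
    abstract: (pages containing both words * total pages) /
    (pages with A * pages with B). Values above 1 indicate the words
    co-occur more often than independence would predict."""
    return (n_ab * n_total) / (n_a * n_b)

# Illustrative (made-up) page counts:
print(meaning_bound(n_ab=2_000, n_a=100_000, n_b=50_000, n_total=10_000_000))
# 4.0
```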
|
1006.1828
|
Landau Theory of Adaptive Integration in Computational Intelligence
|
stat.ML cs.AI nlin.AO q-bio.NC q-bio.PE
|
Computational Intelligence (CI) is a sub-branch of the Artificial Intelligence
paradigm, focusing on the study of adaptive mechanisms to enable or facilitate
intelligent behavior in complex and changing environments. There are several
paradigms of CI [like artificial neural networks, evolutionary computations,
swarm intelligence, artificial immune systems, fuzzy systems and many others],
each of which has its origins in biological systems [biological neural systems,
natural Darwinian evolution, social behavior, immune system, interactions of
organisms with their environment]. Most of those paradigms evolved into
separate machine learning (ML) techniques, where probabilistic methods are used
in a complementary way with CI techniques in order to effectively combine elements of
learning, adaptation, evolution and Fuzzy logic to create heuristic algorithms
that are, in some sense, intelligent. The current trend is to develop consensus
techniques, since no single machine learning algorithm is superior to the
others in all possible situations. To overcome this problem, several
meta-approaches were proposed in ML, focusing on the integration of results
from different methods into a single prediction. We discuss here the Landau theory for
the nonlinear equation that can describe the adaptive integration of
information acquired from an ensemble of independent learning agents. The
influence of each individual agent on other learners is described similarly to
the social impact theory. The final decision outcome for the consensus system
is calculated using majority rule in the stationary limit, yet the minority
solutions can survive inside the majority population as the complex
intermittent clusters of opposite opinion.
|
1006.1890
|
Optimal Power Allocation for GSVD-Based Beamforming in the MIMO Wiretap
Channel
|
cs.IT math.IT
|
This paper considers a multiple-input multiple-output (MIMO) Gaussian wiretap
channel model, where there exists a transmitter, a legitimate receiver and an
eavesdropper, each equipped with multiple antennas. Perfect secrecy is achieved
when the transmitter and the legitimate receiver can communicate at some
positive rate, while ensuring that the eavesdropper gets zero bits of
information. In this paper, the perfect secrecy capacity of the multiple
antenna MIMO wiretap channel is found for arbitrary numbers of antennas under
the assumption that the transmitter performs beamforming based on the
generalized singular value decomposition (GSVD). More precisely, the optimal
allocation of power for the GSVD-based precoder that achieves the secrecy
capacity is derived. This solution is shown to have several advantages over
prior work that considered secrecy capacity for the general MIMO Gaussian
wiretap channel under a high SNR assumption. Numerical results are presented to
illustrate the proposed theoretical findings.
|
1006.1916
|
Building Computer Network Attacks
|
cs.CR cs.AI
|
In this work we start walking the path to a new perspective for viewing
cyberwarfare scenarios, by introducing conceptual tools (a formal model) to
evaluate the costs of an attack, to describe the theater of operations,
targets, missions, actions, plans and assets involved in cyberwarfare attacks.
We also describe two applications of this model: autonomous planning leading to
automated penetration tests, and attack simulations, allowing a system
administrator to evaluate the vulnerabilities of his network.
|
1006.1918
|
Using Neural Networks to improve classical Operating System
Fingerprinting techniques
|
cs.CR cs.NE
|
We present remote Operating System detection as an inference problem: given a
set of observations (the target host responses to a set of tests), we want to
infer the OS type which most probably generated these observations. Classical
techniques used to perform this analysis present several limitations. To
improve the analysis, we have developed tools using neural networks and
statistical techniques. We present two working modules: one which uses DCE-RPC
endpoints to distinguish Windows versions, and another which uses Nmap
signatures to distinguish different versions of Windows, Linux, Solaris,
OpenBSD, FreeBSD and NetBSD systems. We explain the details of the topology and
inner workings of the neural networks used, and the fine tuning of their
parameters. Finally we show positive experimental results.
|
1006.1930
|
The Pet-Fish problem on the World-Wide Web
|
cs.AI cs.CL
|
We identify the presence of Pet-Fish problem situations and the corresponding
Guppy effect of concept theory on the World-Wide Web. For this purpose, we
introduce absolute weights for words expressing concepts and relative weights
between words expressing concepts, and the notion of 'meaning bound' between
two words expressing concepts, making explicit use of the conceptual structure
of the World-Wide Web. The Pet-Fish problem occurs whenever there are exemplars
- in the case of Pet and Fish these can be Guppy or Goldfish - for which the
meaning bound with respect to the conjunction is stronger than the meaning
bounds with respect to the individual concepts.
|
1006.1956
|
A Semi-distributed Reputation Based Intrusion Detection System for
Mobile Adhoc Networks
|
cs.NI cs.MA
|
A Mobile Adhoc Network (MANET) is a cooperative engagement of a collection of
mobile nodes without any centralized access point or infrastructure to
coordinate among the peers. The underlying concept of coordination among nodes
in a cooperative MANET has induced in them a vulnerability to attacks due to
issues like lack of fixed infrastructure, dynamically changing network
topology, cooperative algorithms, lack of centralized monitoring and management
point, and lack of a clear line of defense. We propose a semi-distributed
approach towards Reputation Based Intrusion Detection System (IDS) that
combines with the DSR routing protocol for strengthening the defense of a
MANET. Our system inherits the features of reputation from human behavior,
hence making the IDS socially inspired. It has a semi-distributed architecture
as the critical observation results of the system are neither spread globally
nor restricted locally. The system assigns maximum weight to self-observation
by nodes when updating any reputation values, thus avoiding the need for a
trust relationship between nodes. Our system is also unique in the sense
that it features the concepts of Redemption and Fading with a robust Path
Manager and Monitor system. Simulation studies show that DSR fortified with our
system outperforms normal DSR in terms of the packet delivery ratio and routing
overhead, even when up to half of the nodes in the network behave maliciously.
Various parameters introduced such as timing window size, reputation update
values, congestion parameter and other thresholds have been optimized over
several simulation test runs of the system. By combining the semi-distributed
architecture with other design essentials like the path manager, the monitor
module, and the redemption and fading concepts, our system proves robust
enough to counter the most common attacks in MANETs.
|
1006.2002
|
Colored-Gaussian Multiple Descriptions: Spectral and Time-Domain Forms
|
cs.IT math.IT
|
It is well known that Shannon's rate-distortion function (RDF) in the colored
quadratic Gaussian (QG) case can be parametrized via a single Lagrangian
variable (the "water level" in the reverse water filling solution). In this
work, we show that the symmetric colored QG multiple-description (MD) RDF in
the case of two descriptions can be parametrized in the spectral domain via two
Lagrangian variables, which control the trade-off between the side distortion,
the central distortion, and the coding rate. This spectral-domain analysis is
complemented by a time-domain scheme-design approach: we show that the
symmetric colored QG MD RDF can be achieved by combining ideas of delta-sigma
modulation and differential pulse-code modulation. Specifically, two source
prediction loops, one for each description, are embedded within a common noise
shaping loop, whose parameters are explicitly found from the spectral-domain
characterization.
|
1006.2004
|
Throughput, Bit-Cost, Network State Information: Tradeoffs in
Cooperative CSMA Protocols
|
cs.IT math.IT
|
In wireless local area networks, spatially varying channel conditions result
in a severe performance discrepancy between different nodes in the uplink,
depending on their position. Both throughput and energy expense are affected.
Cooperative protocols were proposed to mitigate these discrepancies. However,
additional network state information (NSI) from other nodes is needed to enable
cooperation. The aim of this work is to assess how NSI and the degree of
cooperation affect throughput and energy expenses. To this end, a CSMA protocol
called fairMAC is defined, which makes it possible to adjust the amount of NSI at the
nodes and the degree of cooperation among the nodes in a distributed manner. By
analyzing the data obtained by Monte Carlo simulations with varying protocol
parameters for fairMAC, two fundamental tradeoffs are identified: First, more
cooperation leads to higher throughput, but also increases energy expenses.
Second, using more than one helper increases throughput and decreases energy
expenses, however, more NSI has to be acquired by the nodes in the network. The
obtained insights are used to increase the lifetime of a network. While full
cooperation shortens the lifetime compared to no cooperation at all, lifetime
can be increased by over 25% with partial cooperation.
|
1006.2006
|
An entropy inequality for q-ary random variables and its application to
channel polarization
|
cs.IT math.IT
|
It is shown that given two copies of a q-ary input channel $W$, where q is
prime, it is possible to create two channels $W^-$ and $W^+$ whose symmetric
capacities satisfy $I(W^-)\le I(W)\le I(W^+)$, where the inequalities are
strict except in trivial cases. This leads to a simple proof of channel
polarization in the q-ary case.
|
1006.2022
|
Message and state cooperation in multiple access channels
|
cs.IT math.IT
|
We investigate the capacity of a multiple access channel with cooperating
encoders where partial state information is known to each encoder and full
state information is known to the decoder. The cooperation between the encoders
has a two-fold purpose: to generate empirical state coordination between the
encoders, and to share information about the private messages that each encoder
has. For two-way cooperation, this two-fold purpose is achieved by
double-binning, where the first layer of binning is used to generate the state
coordination similarly to the two-way source coding, and the second layer of
binning is used to transmit information about the private messages. The
complete result provides the framework and perspective for addressing a complex
level of cooperation that mixes states and messages in an optimal way.
|
1006.2055
|
Enhanced Compressive Wideband Frequency Spectrum Sensing for Dynamic
Spectrum Access
|
cs.IT math.IT
|
Wideband spectrum sensing detects the unused spectrum holes for dynamic
spectrum access (DSA). The prohibitively high sampling rate is the main
problem. Compressive sensing (CS) can reconstruct a sparse signal with far
fewer randomized samples than Nyquist sampling, with high probability. Since
surveys show that the monitored signal is sparse in the frequency domain, CS
can ease the sampling
burden. Random samples can be obtained by the analog-to-information converter.
Signal recovery can be formulated as an L0 norm minimization and a linear
measurement fitting constraint. In DSA, the static spectrum allocation of
primary radios means the bounds between different types of primary radios are
known in advance. To incorporate this a priori information, we divide the whole
spectrum into subsections according to the spectrum allocation policy. In the
new optimization model, the L2 norm of each subsection is minimized to
encourage a clustered distribution locally, while the L0 norm of the L2 norms
is minimized to give a sparse distribution globally. Because the L0/L2
optimization is not convex, an iteratively re-weighted L1/L2 optimization is
proposed to approximate it. Simulations demonstrate the proposed method
outperforms others in accuracy, denoising ability, etc.
|
1006.2077
|
Multidimensi Pada Data Warehouse Dengan Menggunakan Rumus Kombinasi
|
cs.DB
|
Multidimensionality in a data warehouse is a necessity and is central to
information delivery; without it, a data warehouse is incomplete.
Multidimensionality gives the ability to analyze business measurements in many
different ways, and is synonymous with online analytical processing (OLAP).
|
1006.2086
|
A Geometric Approach to Low-Rank Matrix Completion
|
cs.IT math.IT math.NA
|
The low-rank matrix completion problem can be succinctly stated as follows:
given a subset of the entries of a matrix, find a low-rank matrix consistent
with the observations. While several low-complexity algorithms for matrix
completion have been proposed so far, it remains an open problem to devise
search procedures with provable performance guarantees for a broad class of
matrix models. The standard approach to the problem, which involves the
minimization of an objective function defined using the Frobenius metric, has
inherent difficulties: the objective function is not continuous and the
solution set is not closed. To address this problem, we consider an
optimization procedure that searches for a column (or row) space that is
geometrically consistent with the partial observations. The geometric objective
function is continuous everywhere and the solution set is the closure of the
solution set of the Frobenius metric. We also preclude the existence of local
minimizers, and hence establish strong performance guarantees, for special
completion scenarios, which do not require matrix incoherence or large matrix
size.
|
1006.2088
|
Classification rule with simple select SQL statement
|
cs.DB
|
A simple SQL statement can be used to learn rules from a relational database
for data mining purposes, particularly classification rules. With just one
simple SQL statement, characteristic and classification rules can be created
simultaneously. Combining the SQL statement with other application software
adds the ability to compute the t-weight, which measures the typicality of
each record in a characteristic rule, and the d-weight, which measures the
discriminating behavior of the learned classification/discriminant rule,
specifically for further generalization of characteristic rules. Mapping the
concept hierarchy into tables based on its concept trees is decisive for the
success of the simple SQL statement: by knowing the right standard knowledge
to transform each concept tree of the hierarchy into one table, the simple SQL
statement can be run properly.
|
1006.2125
|
Small But Slow World: How Network Topology and Burstiness Slow Down
Spreading
|
physics.soc-ph cs.SI nlin.AO physics.bio-ph
|
Communication networks show the small-world property of short paths, but the
spreading dynamics in them turns out to be slow. We follow the time evolution of
information propagation through communication networks by using the SI model
with empirical data on contact sequences. We introduce null models where the
sequences are randomly shuffled in different ways, enabling us to distinguish
between the contributions of different impeding effects. The slowing down of
spreading is found to be caused mostly by weight-topology correlations and the
bursty activity patterns of individuals.
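A minimal deterministic variant of SI spreading over a time-ordered contact
sequence can be sketched as follows (the study uses stochastic transmission on
empirical data; here every contact with exactly one infected endpoint
transmits, and all names are illustrative assumptions):

```python
def si_spread(contacts, seed):
    """Deterministic SI model over a contact sequence: each contact
    (t, u, v), taken in time order, infects the susceptible endpoint
    if the other endpoint is already infected. Returns infection times."""
    infected = {seed: 0}
    for t, u, v in sorted(contacts):
        if u in infected and v not in infected:
            infected[v] = t
        elif v in infected and u not in infected:
            infected[u] = t
    return infected

contacts = [(1, "a", "b"), (2, "b", "c"), (3, "c", "a"), (4, "c", "d")]
print(si_spread(contacts, "a"))  # {'a': 0, 'b': 1, 'c': 2, 'd': 4}
```

The null models described above amount to reshuffling `contacts` in different
ways and rerunning the same dynamics.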
|
1006.2156
|
Dyadic Prediction Using a Latent Feature Log-Linear Model
|
cs.LG
|
In dyadic prediction, labels must be predicted for pairs (dyads) whose
members possess unique identifiers and, sometimes, additional features called
side-information. Special cases of this problem include collaborative filtering
and link prediction. We present the first model for dyadic prediction that
satisfies several important desiderata: (i) labels may be ordinal or nominal,
(ii) side-information can be easily exploited if present, (iii) with or without
side-information, latent features are inferred for dyad members, (iv) it is
resistant to sample-selection bias, (v) it can learn well-calibrated
probabilities, and (vi) it can scale to very large datasets. To our knowledge,
no existing method satisfies all the above criteria. In particular, many
methods assume that the labels are ordinal and ignore side-information when it
is present. Experimental results show that the new method is competitive with
state-of-the-art methods for the special cases of collaborative filtering and
link prediction, and that it makes accurate predictions on nominal data.
|
1006.2162
|
Multi-Cell MIMO Downlink with Cell Cooperation and Fair Scheduling: a
Large-System Limit Analysis
|
cs.IT math.IT
|
We consider the downlink of a cellular network with multiple cells and
multi-antenna base stations, including a realistic distance-dependent pathloss
model, clusters of cooperating cells, and general "fairness" requirements.
Beyond Monte Carlo simulation, no efficient computation method to evaluate the
ergodic throughput of such systems has been presented so far. We propose an
analytic solution based on the combination of large random matrix results and
convex optimization. The proposed method is computationally much more efficient
than Monte Carlo simulation and provides surprisingly accurate approximations
for the actual finite-dimensional systems, even for a small number of users and
base station antennas. Numerical examples include 2-cell linear and
three-sectored 7-cell planar layouts, with no inter-cell cooperation, sector
cooperation, or full inter-cell cooperation.
|
1006.2165
|
A Probabilistic Perspective on Gaussian Filtering and Smoothing
|
stat.ME cs.AI cs.RO cs.SY math.OC stat.ML
|
We present a general probabilistic perspective on Gaussian filtering and
smoothing. This allows us to show that common approaches to Gaussian
filtering/smoothing can be distinguished solely by their methods of
computing/approximating the means and covariances of joint probabilities. This
implies that novel filters and smoothers can be derived straightforwardly by
providing methods for computing these moments. Based on this insight, we derive
the cubature Kalman smoother and propose a novel robust filtering and smoothing
algorithm based on Gibbs sampling.
|
1006.2195
|
Subspace Evolution and Transfer (SET) for Low-Rank Matrix Completion
|
cs.IT math.IT
|
We describe a new algorithm, termed subspace evolution and transfer (SET),
for solving low-rank matrix completion problems. The algorithm takes as its
input a subset of entries of a low-rank matrix, and outputs one low-rank matrix
consistent with the given observations. The completion task is accomplished by
searching for a column space on the Grassmann manifold that matches the
incomplete observations. The SET algorithm consists of two parts -- subspace
evolution and subspace transfer. In the evolution part, we use a gradient
descent method on the Grassmann manifold to refine our estimate of the column
space. Since the gradient descent algorithm is not guaranteed to converge, due
to the existence of barriers along the search path, we design a new mechanism
for detecting barriers and transferring the estimated column space across the
barriers. This mechanism constitutes the core of the transfer step of the
algorithm. The SET algorithm exhibits excellent empirical performance for both
high and low sampling rate regimes.
|
1006.2204
|
MDPs with Unawareness
|
cs.AI
|
Markov decision processes (MDPs) are widely used for modeling decision-making
problems in robotics, automated control, and economics. Traditional MDPs assume
that the decision maker (DM) knows all states and actions. However, this may
not be true in many situations of interest. We define a new framework, MDPs
with unawareness (MDPUs), to deal with the possibility that a DM may not be
aware of all possible actions. We provide a complete characterization of when a
DM can learn to play near-optimally in an MDPU, and give an algorithm that
learns to play near-optimally when it is possible to do so, as efficiently as
possible. In particular, we characterize when a near-optimal solution can be
found in polynomial time.
|
1006.2221
|
Deterministic Sampling of Sparse Trigonometric Polynomials
|
math.NA cs.IT math.IT
|
One can recover sparse multivariate trigonometric polynomials from few
randomly taken samples with high probability (as shown by Kunis and Rauhut). We
give a deterministic sampling of multivariate trigonometric polynomials
inspired by Weil's exponential sum. Our sampling can produce a deterministic
matrix satisfying the statistical restricted isometry property, and also nearly
optimal Grassmannian frames. We show that one can exactly reconstruct every
$M$-sparse multivariate trigonometric polynomial with fixed degree and of
length $D$ from the deterministic sampling $X$ using orthogonal matching
pursuit, provided $\#X$ is a prime number greater than $(M\log D)^2$. This
result is almost optimal, within a $(\log D)^2$ factor. Simulations show that the
deterministic sampling can offer reconstruction performance similar to the
random sampling.
|
1006.2289
|
Unification in the Description Logic EL
|
cs.AI cs.LO
|
The Description Logic EL has recently drawn considerable attention since, on
the one hand, important inference problems such as the subsumption problem are
polynomial. On the other hand, EL is used to define large biomedical
ontologies. Unification in Description Logics has been proposed as a novel
inference service that can, for example, be used to detect redundancies in
ontologies. The main result of this paper is that unification in EL is
decidable. More precisely, EL-unification is NP-complete, and thus has the same
complexity as EL-matching. We also show that, w.r.t. the unification type, EL
is less well-behaved: it is of type zero, which in particular implies that
there are unification problems that have no finite complete set of unifiers.
|
1006.2322
|
Discovery of a missing disease spreader
|
cs.AI cs.SI physics.bio-ph physics.soc-ph q-bio.PE
|
This study presents a method to discover an outbreak of an infectious disease
in a region for which data are missing but which is nonetheless acting as a
disease spreader. Node discovery for the spread of an infectious disease is defined as
discriminating between the nodes which are neighboring to a missing disease
spreader node, and the rest, given a dataset on the number of cases. The spread
is described by stochastic differential equations. A perturbation theory
quantifies the impact of the missing spreader on the moments of the number of
cases. Statistical discriminators examine the mid-body or tail-ends of the
probability density function, and search for the disturbance from the missing
spreader. They are tested with computationally synthesized datasets, and
applied to the SARS outbreak and flu pandemic.
|
1006.2348
|
Space-time block codes from nonassociative division algebras
|
cs.IT math.IT
|
Associative division algebras are a rich source of fully diverse space-time
block codes (STBCs). In this paper the systematic construction of fully diverse
STBCs from nonassociative algebras is discussed. As examples, families of fully
diverse $2\times 2$, $2\times 4$ multiblock and $4\times 4$ STBCs are designed,
employing nonassociative quaternion division algebras.
|
1006.2368
|
L2-optimal image interpolation and its applications to medical imaging
|
cs.CV cs.GR
|
Digital medical images are always displayed scaled to fit a particular view.
Interpolation is responsible for this scaling and, if not done properly, can
significantly degrade diagnostic image quality. However, theoretically-optimal
interpolation algorithms may also be the most time-consuming and impractical.
We propose a new approach, adapted to the needs of digital medical imaging, to
combine high interpolation speed and superior L2-optimal image quality.
|
1006.2380
|
Opportunistic Interference Mitigation Achieves Optimal
Degrees-of-Freedom in Wireless Multi-cell Uplink Networks
|
cs.IT math.IT
|
We introduce an opportunistic interference mitigation (OIM) protocol, where a
user scheduling strategy is utilized in $K$-cell uplink networks with
time-invariant channel coefficients and base stations (BSs) having $M$
antennas. Each BS opportunistically selects a set of users who generate the
minimum interference to the other BSs. Two OIM protocols are presented,
depending on the number $S$ of simultaneously transmitting users per cell: opportunistic
interference nulling (OIN) and opportunistic interference alignment (OIA).
Then, their performance is analyzed in terms of degrees-of-freedom (DoFs). As
our main result, it is shown that $KM$ DoFs are achievable under the OIN
protocol with $M$ selected users per cell, if the total number $N$ of users in
a cell scales at least as $\text{SNR}^{(K-1)M}$. Similarly, it turns out that
the OIA scheme with $S$($<M$) selected users achieves $KS$ DoFs, if $N$ scales
faster than $\text{SNR}^{(K-1)S}$. These results indicate that there exists a
trade-off between the achievable DoFs and the minimum required $N$. By deriving
the corresponding upper bound on the DoFs, it is shown that the OIN scheme is
DoF optimal. Finally, a numerical evaluation, a two-step scheduling method, and
the extension to multi-carrier scenarios are presented.
|
1006.2403
|
On the Queueing Behavior of Random Codes over a Gilbert-Elliot Erasure
Channel
|
cs.IT math.IT
|
This paper considers the queueing performance of a system that transmits
coded data over a time-varying erasure channel. In our model, the queue length
and channel state together form a Markov chain that depends on the system
parameters. This gives a framework that allows a rigorous analysis of the queue
as a function of the code rate. Most prior work in this area either ignores
block-length (e.g., fluid models) or assumes error-free communication using
finite codes. This work enables one to determine when such assumptions provide
good, or bad, approximations of true behavior. Moreover, it offers a new
approach to optimize parameters and evaluate performance. This can be valuable
for delay-sensitive systems that employ short block lengths.
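A minimal simulation conveys the model's structure, with queue length and channel state evolving jointly as a Markov chain. This is an illustrative sketch, not the paper's model: the parameter names are ours, and block decoding is collapsed into a per-slot departure attempt that fails with the current erasure probability.

```python
import random

def avg_queue_length(T, p_gb, p_bg, eps_good, eps_bad, arrival_rate, seed=0):
    """Toy joint (queue, channel-state) Markov chain: Bernoulli arrivals,
    one departure attempt per slot over a two-state Gilbert-Elliot
    erasure channel.  Returns the time-averaged queue length."""
    rng = random.Random(seed)
    state_good, q, total = True, 0, 0
    for _ in range(T):
        # Gilbert-Elliot channel-state transition
        if state_good and rng.random() < p_gb:
            state_good = False
        elif not state_good and rng.random() < p_bg:
            state_good = True
        eps = eps_good if state_good else eps_bad
        if rng.random() < arrival_rate:    # Bernoulli arrival
            q += 1
        if q > 0 and rng.random() >= eps:  # departure unless erased
            q -= 1
        total += q
    return total / T
```

Sweeping the effective service parameters in such a simulation is one way to see how queue behavior depends on the code rate.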
|
1006.2422
|
Complexity of Multi-Value Byzantine Agreement
|
cs.DC cs.IT cs.NI math.IT
|
In this paper, we consider the problem of maximizing the throughput of
Byzantine agreement, given that the sum capacity of all links in between nodes
in the system is finite. We propose a highly efficient Byzantine agreement
algorithm for values of length l>1 bits. This algorithm uses error-detecting
network codes to ensure that fault-free nodes never disagree, and a routing
scheme that adapts to the results of error detection. Our
algorithm has a bit complexity of n(n-1)l/(n-t), which leads to a linear cost
(O(n)) per bit agreed upon, and overcomes the quadratic lower bound
(Omega(n^2)) in the literature. Such linear per bit complexity has only been
achieved in the literature by allowing a positive probability of error. Our
algorithm achieves the linear per bit complexity while guaranteeing agreement
is achieved correctly even in the worst case. We also conjecture that our
algorithm can be used to achieve agreement throughput arbitrarily close to the
agreement capacity of a network, when the sum capacity is given.
|
1006.2495
|
Mirrored Language Structure and Innate Logic of the Human Brain as a
Computable Model of the Oracle Turing Machine
|
cs.LO cs.AI
|
We wish to present a mirrored language structure (MLS) and four logic rules
determined by this structure for the model of a computable Oracle Turing
machine. MLS has novel features that are of considerable biological and
computational significance. It suggests an algorithm of relation learning and
recognition (RLR) that enables deterministic computers to simulate the
mechanism of the Oracle Turing machine, i.e., P = NP in mathematical terms.
|
1006.2498
|
On the Deterministic Code Capacity Region of an Arbitrarily Varying
Multiple-Access Channel Under List Decoding
|
cs.IT math.IT
|
We study the capacity region $C_L$ of an arbitrarily varying multiple-access
channel (AVMAC) for deterministic codes with decoding into a list of a fixed
size $L$ and for the average error probability criterion. Motivated by known
results in the study of fixed size list decoding for a point-to-point
arbitrarily varying channel, we define for every AVMAC whose capacity region
for random codes has a nonempty interior, a nonnegative integer $\Omega$ called
its symmetrizability. It is shown that for every $L \leq \Omega$, $C_L$ has an
empty interior, and for every $L \geq (\Omega+1)^2$, $C_L$ equals the
nondegenerate capacity region of the AVMAC for random codes with a known
single-letter characterization. For a binary AVMAC with a nondegenerate random
code capacity region, it is shown that the symmetrizability is always finite.
|
1006.2513
|
On the Achievability of Cram\'er-Rao Bound In Noisy Compressed Sensing
|
cs.IT cs.LG math.IT
|
Recently, it has been proved in Babadi et al. that in noisy compressed
sensing, a joint typical estimator can asymptotically achieve the Cramer-Rao
lower bound of the problem. To prove this result, that paper used a lemma,
provided in Akcakaya et al., that comprises the main building block of the
proof. This lemma is based on the assumption of Gaussianity of the measurement
matrix and its randomness in the domain of noise. In this correspondence, we
generalize the results obtained in Babadi et al. by dropping the Gaussianity
assumption on the measurement matrix. In fact, by considering the measurement
matrix as a deterministic matrix in our analysis, we find a theorem similar to
the main theorem of Babadi et al. for a family of randomly generated (but
deterministic in the noise domain) measurement matrices that satisfy a
generalized condition known as the concentration of measure inequality. We
finally show that, under our generalized assumptions, the Cramer-Rao bound of
the estimation is achievable using the typical estimator introduced in Babadi
et al.
|
1006.2523
|
Asymptotic Equipartition Properties for simple hierarchical and
networked structures
|
cs.IT math.IT math.PR
|
We prove asymptotic equipartition properties for simple hierarchical
structures (modelled as multitype Galton-Watson trees) and networked structures
(modelled as randomly coloured random graphs). For example, for large $n$, a
networked data structure consisting of $n$ units connected by an average number
of links of order $n/\log n$ can be coded with about $nH$ bits, where $H$ is an
explicitly defined entropy. The main tools in our proofs are large deviation
principles for suitably defined empirical measures.
|
1006.2565
|
State-Dependent Relay Channel with Private Messages with Partial Causal
and Non-Causal Channel State Information
|
cs.IT math.IT
|
In this paper, we introduce a discrete memoryless State-Dependent Relay
Channel with Private Messages (SD-RCPM) as a generalization of the
state-dependent relay channel. We investigate two main cases: SD-RCPM with
non-causal Channel State Information (CSI), and SD-RCPM with causal CSI. In
each case, it is assumed that partial CSI is available at the source and relay.
For the non-causal case, we establish an achievable rate region using a
Gel'fand-Pinsker type coding scheme at the nodes informed of the CSI and a
Compress-and-Forward (CF) scheme at the relay. Using Shannon's strategy and the
CF scheme, an achievable rate region for the causal case is obtained. As an example,
the Gaussian version of SD-RCPM is considered, and an achievable rate region
for Gaussian SD-RCPM with non-causal perfect CSI only at the source, is
derived. Providing numerical examples, we illustrate the comparison between
achievable rate regions derived using CF and Decode-and-Forward (DF) schemes.
|
1006.2588
|
Agnostic Active Learning Without Constraints
|
cs.LG
|
We present and analyze an agnostic active learning algorithm that works
without keeping a version space. This is unlike all previous approaches where a
restricted set of candidate hypotheses is maintained throughout learning, and
only hypotheses from this set are ever returned. By avoiding this version space
approach, our algorithm sheds the computational burden and brittleness
associated with maintaining version spaces, yet still allows for substantial
improvements over supervised learning for classification.
|
1006.2592
|
Outlier Detection Using Nonconvex Penalized Regression
|
stat.ME cs.LG stat.CO
|
This paper studies the outlier detection problem from the point of view of
penalized regressions. Our regression model adds one mean shift parameter for
each of the $n$ data points. We then apply a regularization favoring a sparse
vector of mean shift parameters. The usual $L_1$ penalty yields a convex
criterion, but we find that it fails to deliver a robust estimator. The $L_1$
penalty corresponds to soft thresholding. We introduce a thresholding (denoted
by $\Theta$) based iterative procedure for outlier detection ($\Theta$-IPOD). A
version based on hard thresholding correctly identifies outliers on some hard
test problems. We find that $\Theta$-IPOD is much faster than iteratively
reweighted least squares for large data because each iteration costs at most
$O(np)$ (and sometimes much less) avoiding an $O(np^2)$ least squares estimate.
We describe the connection between $\Theta$-IPOD and $M$-estimators. Our
proposed method has one tuning parameter with which to both identify outliers
and estimate regression coefficients. A data-dependent choice can be made based
on BIC. The tuned $\Theta$-IPOD shows outstanding performance in identifying
outliers in various situations in comparison to other existing approaches. This
methodology extends to high-dimensional modeling with $p\gg n$, if both the
coefficient vector and the outlier pattern are sparse.
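The alternating structure described above can be sketched as follows. This is an illustrative simplification, not the authors' code: Θ is taken to be plain hard or soft thresholding, and the least-squares step is solved with a precomputed pseudoinverse.

```python
import numpy as np

def ipod(X, y, lam, hard=True, n_iter=100):
    """Theta-IPOD sketch: alternate least squares for the coefficients
    with thresholding of the residuals as mean-shift outlier terms.
    hard=True uses hard thresholding; hard=False the soft (L1) rule."""
    n, p = X.shape
    gamma = np.zeros(n)            # one mean-shift parameter per data point
    pinv = np.linalg.pinv(X)       # factor once; each iteration stays cheap
    beta = np.zeros(p)
    for _ in range(n_iter):
        beta = pinv @ (y - gamma)  # regression on outlier-corrected response
        r = y - X @ beta
        if hard:
            gamma = np.where(np.abs(r) > lam, r, 0.0)
        else:
            gamma = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
    return beta, gamma             # nonzero gamma entries flag outliers
```

On a toy intercept-only example with one gross outlier, the hard-thresholding version drives the coefficient to the clean mean while γ absorbs the outlier, which is exactly the robustness the soft (L1) rule fails to deliver.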
|
1006.2610
|
Functions which are PN on infinitely many extensions of Fp, p odd
|
math.NT cs.IT math.IT
|
Let $p$ be an odd prime number. We prove that for $m\equiv1\mod p$, $x^m$ is
perfectly nonlinear over $\mathbb{F}_{p^n}$ for infinitely many $n$ if and only
if $m$ is of the form $p^l+1$, $l\in\mathbb{N}$. First, we study singularities
of $f(x,y)=\frac{(x+1)^m-x^m-(y+1)^m+y^m}{x-y}$ and use Bezout's theorem to
show that for $m\neq 1+p^l$, $f(x,y)$ has an absolutely irreducible factor.
Then, by Weil's theorem, $f(x,y)$ has rational points with $x\neq y$, which
means that $x^m$ is not PN.
|
1006.2660
|
Rate Compatible Protocol for Information Reconciliation: An application
to QKD
|
cs.IT math.IT
|
Information Reconciliation is a mechanism that allows one to weed out the
discrepancies between two correlated variables. It is an essential component in
every key agreement protocol where the key has to be transmitted through a
noisy channel. The typical case is in the satellite scenario described by
Maurer in the early 90's. Recently the need has arisen in relation with Quantum
Key Distribution (QKD) protocols, where it is very important not to reveal
unnecessary information in order to maximize the shared key length. In this
paper we present an information reconciliation protocol based on a rate
compatible construction of Low Density Parity Check codes. Our protocol
improves the efficiency of the reconciliation for the whole range of error
rates in the discrete-variable QKD context. Its adaptability, together with its
low interactivity, makes it especially well suited for QKD reconciliation.
|
1006.2700
|
Image Segmentation Using Weak Shape Priors
|
cs.CV
|
The problem of image segmentation is known to become particularly challenging
in the case of partial occlusion of the object(s) of interest, background
clutter, and the presence of strong noise. To overcome this problem, the
present paper introduces a novel approach to segmentation through the use of
"weak" shape priors. Specifically, in the proposed method, a segmenting active
contour is constrained to converge to a configuration at which the empirical
probability densities of its geometric parameters closely match the
corresponding model densities learned from training samples. It is
shown through numerical experiments that the proposed shape modeling can be
regarded as "weak" in the sense that it minimally influences the segmentation,
which is allowed to be dominated by data-related forces. On the other hand, the
priors provide sufficient constraints to regularize the convergence of
segmentation, while requiring substantially smaller training sets to yield less
biased results as compared to the case of PCA-based regularization methods. The
main advantages of the proposed technique over some existing alternatives are
demonstrated in a series of experiments.
|
1006.2718
|
From RESTful Services to RDF: Connecting the Web and the Semantic Web
|
cs.AI cs.DL
|
RESTful services on the Web expose information through retrievable,
self-describing resource representations, and through the way these resources
are interlinked via the hyperlinks found in those representations. This basic
design of RESTful services means that, to extract the most useful information
from a service, it is necessary to understand the service's representations:
both their semantics in terms of describing a resource, and their semantics in
terms of describing its linkage with other resources. Based on the Resource Linking
Language (ReLL), this paper describes a framework for how RESTful services can
be described, and how these descriptions can then be used to harvest
information from these services. Building on this framework, a layered model of
RESTful service semantics allows a service's information to be represented in
RDF/OWL. Because REST is based on the linkage between resources, the same model
can be used for aggregating and interlinking multiple services for extracting
RDF data from sets of RESTful services.
|
1006.2734
|
Penalized K-Nearest-Neighbor-Graph Based Metrics for Clustering
|
cs.CV
|
A difficult problem in clustering is how to handle data with a manifold
structure, i.e. data that is not shaped in the form of compact clouds of
points, forming arbitrary shapes or paths embedded in a high-dimensional space.
In this work we introduce the Penalized k-Nearest-Neighbor-Graph (PKNNG) based
metric, a new tool for evaluating distances in such cases. The new metric can
be used in combination with most clustering algorithms. The PKNNG metric is
based on a two-step procedure: first it constructs the k-Nearest-Neighbor-Graph
of the dataset of interest using a low k-value and then it adds edges with an
exponentially penalized weight for connecting the sub-graphs produced by the
first step. We discuss several possible schemes for connecting the different
sub-graphs. We use three artificial datasets in four different embedding
situations to evaluate the behavior of the new metric, including a comparison
among different clustering methods. We also evaluate the new metric in a real
world application, clustering the MNIST digits dataset. In all cases the PKNNG
metric shows promising clustering results.
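The two-step construction can be sketched as below. This is a simplification under our own assumptions: instead of the paper's specific sub-graph connection schemes, every pair not joined by a kNN edge receives an exponentially penalized bridge of weight w·exp(w/μ), with μ the mean kNN edge length, and shortest paths give the final metric.

```python
import numpy as np

def pknng_distances(X, k=3):
    """PKNNG-style metric sketch: symmetrized kNN graph, exponentially
    penalized edges for pairs not in the graph, then all-pairs shortest
    paths (Floyd-Warshall; fine for small n)."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    order = np.argsort(D, axis=1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    knn_edges = []
    for i in range(n):
        for j in order[i, 1:k + 1]:      # k nearest neighbours (skip self)
            G[i, j] = G[j, i] = D[i, j]
            knn_edges.append(D[i, j])
    mu = np.mean(knn_edges)              # typical within-cluster edge length
    G = np.where(np.isinf(G), D * np.exp(D / mu), G)   # penalized bridges
    for m in range(n):                   # Floyd-Warshall shortest paths
        G = np.minimum(G, G[:, [m]] + G[[m], :])
    return G
```

The resulting distance matrix can be fed to any clustering algorithm that accepts precomputed distances; within-manifold paths stay cheap while jumps between sub-graphs are exponentially expensive.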
|
1006.2743
|
Global Optimization for Value Function Approximation
|
cs.AI
|
Existing value function approximation methods have been successfully used in
many applications, but they often lack useful a priori error bounds. We propose
a new approximate bilinear programming formulation of value function
approximation, which employs global optimization. The formulation provides
strong a priori guarantees on both robust and expected policy loss by
minimizing specific norms of the Bellman residual. Solving a bilinear program
optimally is NP-hard, but this is unavoidable because the Bellman-residual
minimization itself is NP-hard. We describe and analyze both optimal and
approximate algorithms for solving bilinear programs. The analysis shows that
this algorithm offers a convergent generalization of approximate policy
iteration. We also briefly analyze the behavior of bilinear programming
algorithms under incomplete samples. Finally, we demonstrate that the proposed
approach can consistently minimize the Bellman residual on simple benchmark
problems.
|
1006.2758
|
Eigen-Based Transceivers for the MIMO Broadcast Channel with
Semi-Orthogonal User Selection
|
cs.IT math.IT
|
This paper studies the sum rate performance of two low complexity
eigenmode-based transmission techniques for the MIMO broadcast channel,
employing greedy semi-orthogonal user selection (SUS). The first approach,
termed ZFDPC-SUS, is based on zero-forcing dirty paper coding; the second
approach, termed ZFBF-SUS, is based on zero-forcing beamforming. We first
employ new analytical methods to prove that as the number of users K grows
large, the ZFDPC-SUS approach can achieve the optimal sum rate scaling of the
MIMO broadcast channel. We also prove that the average sum rates of both
techniques converge to the average sum capacity of the MIMO broadcast channel
for large K. In addition to the asymptotic analysis, we investigate the sum
rates achieved by ZFDPC-SUS and ZFBF-SUS for finite K, and show that ZFDPC-SUS
has significant performance advantages. Our results also provide key insights
into the benefit of multiple receive antennas, and the effect of the SUS
algorithm. In particular, we show that whilst multiple receive antennas
improve the asymptotic sum rate scaling only via the second-order behavior of
the multi-user diversity gain, for finite K the benefit can be very significant.
We also show the interesting result that the semi-orthogonality constraint
imposed by SUS, whilst facilitating a very low complexity user selection
procedure, asymptotically does not reduce the multi-user diversity gain in
either first (log K) or second-order (loglog K) terms.
|
1006.2769
|
Achievable Rate Regions for Discrete Memoryless Interference Channel
with State Information
|
cs.IT math.IT
|
In this paper, we study the state-dependent two-user interference channel,
where the state information is non-causally known at both transmitters but
unknown to either of the receivers. We propose two coding schemes for the
discrete memoryless case: simultaneous encoding for the sub-messages in the
first one and superposition encoding in the second one, both with rate
splitting and Gel'fand-Pinsker coding. The corresponding achievable rate
regions are established.
|
1006.2804
|
An Effective Fingerprint Verification Technique
|
cs.CV
|
This paper presents an effective method for fingerprint verification based on
a data mining technique called minutiae clustering and a graph-theoretic
approach that analyzes the process of fingerprint comparison, gives a
feature-space representation of minutiae, and produces a lower bound on the
number of detectably distinct fingerprints. The method also proves the
invariance of each individual fingerprint using both the topological behavior
of the minutiae graph and a distance measure called the Hausdorff distance.
The method provides a graph-based index generation mechanism for fingerprint
biometric data. A self-organizing map neural network is also used for
classifying the fingerprints.
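As an illustration of the distance measure mentioned above (coordinates only; a real matcher would also align the point sets and use minutiae orientation), the symmetric Hausdorff distance between two minutiae point sets is:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A and B: the
    largest distance from any point in one set to its nearest
    neighbour in the other set."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

Two fingerprints whose minutiae sets have a small Hausdorff distance (after alignment) are candidates for a match.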
|
1006.2805
|
Robust PI Control Design Using Particle Swarm Optimization
|
cs.CE
|
This paper presents a set of robust PI tuning formulae for a first order plus
dead time process using particle swarm optimization. Tuning formulae for an
integrating process with dead time, a special case of a first order plus dead
time process, are also given. The design problem considers three essential
requirements of control problems, namely load disturbance rejection, setpoint
regulation and robustness of closed-loop system against model uncertainties.
The primary design goal is to optimize load disturbance rejection. Robustness
is guaranteed by requiring that the maximum sensitivity is less than or equal
to a specified value. In the first step, PI controller parameters are
determined such that the IAE criterion to a load disturbance step is minimized
and the robustness constraint on maximum sensitivity is satisfied. Using a
structure with two degrees of freedom which introduces an extra parameter, the
setpoint weight, good setpoint regulation is achieved in the second step. The
main advantage of the proposed method is its simplicity. Once the equivalent
first order plus dead time model is determined, the PI parameters are
explicitly given by a set of tuning formulae. In order to show the performance
and effectiveness of the proposed tuning formulae, they are applied to three
simulation examples.
|
1006.2806
|
A Metaheuristic Approach for IT Projects Portfolio Optimization
|
cs.CE
|
Optimal selection of interdependent IT projects for implementation over
multiple periods has been challenging in the framework of real option
valuation. This paper presents a mathematical optimization model for a
multi-stage portfolio of IT projects. The model optimizes the value of the
portfolio within given budgetary and sequencing constraints for each period;
the sequencing constraints are due to time-wise interdependencies among
projects. A metaheuristic approach is well suited to this kind of problem, and
a genetic algorithm model is proposed for the solution. This optimization
model and solution approach can help IT managers make optimal funding
decisions for project prioritization over multiple sequential periods. The
model also gives managers the flexibility to generate alternative portfolios
by changing the maximum and minimum number of projects to be implemented in
each sequential period.
|
1006.2809
|
Offline Arabic Handwriting Recognition Using Artificial Neural Network
|
cs.CL
|
The ambition of a character recognition system is to transform a text
document typed on paper into a digital format that can be manipulated by word
processor software. Unlike other languages, Arabic has unique features that
other languages do not; seven or eight languages, such as Urdu and Persian,
use an Arabic-based script. Arabic has twenty-eight letters, each of which can
be linked in three different ways or separated, depending on the case. The
difficulty of Arabic handwriting recognition is that the accuracy of character
recognition affects the accuracy of word recognition; in addition, each
character has two or three forms. The suggested solution, using an artificial
neural network, can solve the problem and overcome the difficulty of Arabic
handwriting recognition.
|
1006.2813
|
Algorithm for Predicting Protein Secondary Structure
|
cs.CE q-bio.BM
|
Predicting protein structure from amino acid sequence is one of the most
important unsolved problems of molecular biology and biophysics. Not only
would a successful prediction algorithm be a tremendous advance in the
understanding of the biochemical mechanisms of proteins, but such an algorithm
could conceivably be used to design proteins to carry out specific functions.
Prediction of the secondary structure of a protein (alpha-helix, beta-sheet,
coil) is an important step towards elucidating its three-dimensional structure
as well as its function. In this paper, we propose an algorithm for protein
secondary structure prediction based on Hidden Markov Models with a sliding
window. Since the secondary structure has three regular forms, we use one
Hidden Markov Model for each secondary structural element.
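An HMM-based secondary-structure predictor ultimately relies on standard decoding machinery. Below is a minimal, generic Viterbi decoder (our sketch, not the paper's method; in this setting the hidden states would be helix/sheet/coil and the observations a window of residues):

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden-state path under an HMM, in the log domain.
    obs: observation indices; start: initial state probabilities;
    trans[i, j]: P(state j | state i); emit[s, o]: P(obs o | state s)."""
    T, S = len(obs), len(start)
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)   # S x S predecessor scores
        back[t] = scores.argmax(axis=0)          # best predecessor per state
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With a toy two-state sticky HMM the decoder recovers the obvious segmentation of the observation sequence.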
|
1006.2835
|
Fuzzy Modeling and Natural Language Processing for Panini's Sanskrit
Grammar
|
cs.CL
|
Indian languages have a long history among the world's natural languages.
Panini was the first to define a grammar for the Sanskrit language, with about
4000 rules, in the fifth century. These rules contain uncertain information,
and computer processing of the Sanskrit language is not possible with such
uncertainty. In this paper, fuzzy logic and fuzzy reasoning are proposed to
eliminate uncertain information when reasoning with Sanskrit grammar. Sanskrit
language processing is also discussed.
|
1006.2844
|
Outrepasser les limites des techniques classiques de Prise d'Empreintes
grace aux Reseaux de Neurones
|
cs.CR cs.AI cs.NE
|
We present an application of Artificial Intelligence techniques to the field
of Information Security. The problem of remote Operating System (OS) Detection,
also called OS Fingerprinting, is a crucial step of the penetration testing
process, since the attacker (hacker or security professional) needs to know the
OS of the target host in order to choose the exploits that he will use. OS
Detection is accomplished by passively sniffing network packets and actively
sending test packets to the target host, to study specific variations in the
host responses revealing information about its operating system.
The first fingerprinting implementations were based on the analysis of
differences between TCP/IP stack implementations. The next generation focused
the analysis on application layer data such as the DCE RPC endpoint
information. Even though more information was analyzed, some variation of the
"best fit" algorithm was still used to interpret this new information. Our new
approach involves an analysis of the composition of the information collected
during the OS identification process to identify key elements and their
relations. To implement this approach, we have developed tools using Neural
Networks and techniques from the field of Statistics. These tools have been
successfully integrated into a commercial software product (Core Impact).
|
1006.2860
|
The Euclidean Algorithm for Generalized Minimum Distance Decoding of
Reed-Solomon Codes
|
cs.IT math.IT
|
This paper presents a method to merge Generalized Minimum Distance decoding
of Reed-Solomon codes with the extended Euclidean algorithm. By merge, we mean
that the steps taken to perform the Generalized Minimum Distance decoding are
similar to those performed by the extended Euclidean algorithm. The resulting
algorithm has a complexity of O(n^2).
|
1006.2880
|
Fast Incremental and Personalized PageRank
|
cs.DS cs.DB cs.IR
|
In this paper, we analyze the efficiency of Monte Carlo methods for
incremental computation of PageRank, personalized PageRank, and similar random
walk based methods (with focus on SALSA), on large-scale dynamically evolving
social networks. We assume that the graph of friendships is stored in
distributed shared memory, as is the case for large social networks such as
Twitter.
For global PageRank, we assume that the social network has $n$ nodes, and $m$
adversarially chosen edges arrive in a random order. We show that with a reset
probability of $\epsilon$, the total work needed to maintain an accurate
estimate (using the Monte Carlo method) of the PageRank of every node at all
times is $O(\frac{n\ln m}{\epsilon^{2}})$. This is significantly better than
all known bounds for incremental PageRank. For instance, if we naively
recompute the PageRanks as each edge arrives, the simple power iteration method
needs $\Omega(\frac{m^2}{\ln(1/(1-\epsilon))})$ total time and the Monte Carlo
method needs $O(mn/\epsilon)$ total time; both are prohibitively expensive.
Furthermore, we also show that we can handle deletions equally efficiently.
We then study the computation of the top $k$ personalized PageRanks starting
from a seed node, assuming that personalized PageRanks follow a power-law with
exponent $\alpha < 1$. We show that if we store $R>q\ln n$ random walks
starting from every node for large enough constant $q$ (using the approach
outlined for global PageRank), then the expected number of calls made to the
distributed social network database is $O(k/(R^{(1-\alpha)/\alpha}))$.
We also present experimental results from the social networking site Twitter,
verifying our assumptions and analyses. The overall result is that
this algorithm is fast enough for real-time queries over a dynamic social
network.
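The Monte Carlo estimator analyzed above can be sketched as follows: store $R$ short random walks per node with reset probability $\epsilon$, and estimate the PageRank of a node as the fraction of all walk visits that land on it. The toy graph, the parameter values, and the seeded RNG are illustrative assumptions; the incremental step (re-routing only the walks passing through a new edge's endpoint) is omitted for brevity.

```python
# Visit-frequency Monte Carlo PageRank; a sketch under assumed parameters,
# not the paper's full incremental algorithm.
import random
from collections import Counter

def mc_pagerank(adj, eps=0.2, R=200, seed=0):
    rng = random.Random(seed)
    visits, total = Counter(), 0
    for start in adj:
        for _ in range(R):           # R independent walks from every node
            v = start
            while True:
                visits[v] += 1
                total += 1
                if rng.random() < eps or not adj[v]:
                    break            # walk resets (or dangles); start a new one
                v = rng.choice(adj[v])
    return {v: visits[v] / total for v in adj}

adj = {0: [1], 1: [2], 2: [0], 3: [0]}   # toy directed graph
pr = mc_pagerank(adj)
assert abs(sum(pr.values()) - 1.0) < 1e-9
assert pr[3] < pr[0]   # node 3 has no in-links, so it is visited least
```

On an edge arrival $(u, v)$, only the stored walks that pass through $u$ need to be re-sampled from $u$ onward, which is what keeps the total maintenance work near-linear in the abstract's analysis.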
|
1006.2883
|
The entropy per coordinate of a random vector is highly constrained
under convexity conditions
|
cs.IT math.FA math.IT math.PR
|
The entropy per coordinate in a log-concave random vector of any dimension
with given density at the mode is shown to have a range of just 1. Uniform
distributions on convex bodies are at the lower end of this range, the
distribution with i.i.d. exponentially distributed coordinates is at the upper
end, and the normal is exactly in the middle. Thus in terms of the amount of
randomness as measured by entropy per coordinate, any log-concave random vector
of any dimension contains randomness that differs from that in the normal
random variable with the same maximal density value by at most 1/2. As
applications, we obtain an information-theoretic formulation of the famous
hyperplane conjecture in convex geometry, entropy bounds for certain infinitely
divisible distributions, and quantitative estimates for the behavior of the
density at the mode on convolution. More generally, one may consider so-called
convex or hyperbolic probability measures on Euclidean spaces; we give new
constraints on entropy per coordinate for this class of measures, which
generalize our results under the log-concavity assumption, expose the extremal
role of multivariate Pareto-type distributions, and give some applications.
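The "range of just 1" statement can be paraphrased (in nats) as a two-sided bound on the entropy per coordinate; this restatement, for an $n$-dimensional log-concave $X$ with density $f$, maximum density $\|f\|_\infty$, and entropy $h(X)$, is my reading of the abstract, not a quotation:

```latex
% Paraphrase (in nats) of the stated range for log-concave X on R^n.
0 \;\le\; \frac{h(X)}{n} \;-\; \log\frac{1}{\|f\|_\infty^{1/n}} \;\le\; 1
```

Uniform distributions on convex bodies attain the left endpoint, i.i.d. exponentially distributed coordinates the right, and the Gaussian sits at exactly $1/2$ (its entropy per coordinate is $\frac{1}{2}\log(2\pi e\sigma^2)$ against a maximum-density term of $\frac{1}{2}\log(2\pi\sigma^2)$).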
|
1006.2884
|
Fractional generalizations of Young and Brunn-Minkowski inequalities
|
math.FA cs.IT math.IT math.PR
|
A generalization of Young's inequality for convolution with sharp constant is
conjectured for scenarios where more than two functions are being convolved,
and it is proven for certain parameter ranges. The conjecture would provide a
unified proof of recent entropy power inequalities of Barron and Madiman, as
well as of a (conjectured) generalization of the Brunn-Minkowski inequality. It
is shown that the generalized Brunn-Minkowski conjecture is true for convex
sets; an application of this to the law of large numbers for random sets is
described.
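For context, the classical Brunn-Minkowski inequality that the conjecture generalizes states, for nonempty compact $A, B \subset \mathbb{R}^n$ with Minkowski sum $A + B = \{a + b : a \in A,\, b \in B\}$ and $|\cdot|$ denoting volume:

```latex
% Classical Brunn-Minkowski inequality in R^n.
|A + B|^{1/n} \;\ge\; |A|^{1/n} + |B|^{1/n}
```

The fractional generalization referred to in the abstract replaces the two-set sum with sums over collections of sets weighted by a fractional partition.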
|
1006.2899
|
Approximated Structured Prediction for Learning Large Scale Graphical
Models
|
cs.LG cs.AI
|
This manuscript contains the proofs for "A Primal-Dual Message-Passing
Algorithm for Approximated Large Scale Structured Prediction".
|
1006.2945
|
Two-Timescale Learning Using Idiotypic Behaviour Mediation For A
Navigating Mobile Robot
|
cs.AI cs.NE cs.RO
|
A combined Short-Term Learning (STL) and Long-Term Learning (LTL) approach to
solving mobile-robot navigation problems is presented and tested in both the
real and virtual domains. The LTL phase consists of rapid simulations that use
a Genetic Algorithm to derive diverse sets of behaviours, encoded as variable
sets of attributes, and the STL phase is an idiotypic Artificial Immune System.
Results from the LTL phase show that sets of behaviours develop very rapidly,
and significantly greater diversity is obtained when multiple autonomous
populations are used, rather than a single one. The architecture is assessed
under various scenarios, including removal of the LTL phase and switching off
the idiotypic mechanism in the STL phase. The comparisons provide substantial
evidence that the best option is the inclusion of both the LTL phase and the
idiotypic system. In addition, this paper shows that structurally different
environments can be used for the two phases without compromising
transferability.
|
1006.2977
|
Algebraic Constructions of Graph-Based Nested Codes from Protographs
|
cs.IT math.IT
|
Nested codes have been employed in a large number of communication
applications as a specific case of superposition codes, for example to
implement binning schemes in the presence of noise, in joint network-channel
coding, or in physical-layer secrecy. Whereas nested lattice codes have been
proposed recently for continuous-input channels, in this paper we focus on the
construction of nested linear codes for joint channel-network coding problems
based on algebraic protograph LDPC codes. In particular, over the past few
years several constructions of codes have been proposed that are based on
random lifts of suitably chosen base graphs. More recently, an algebraic analog
of this approach was introduced using the theory of voltage graphs. In this
paper we illustrate how these methods can be used in the construction of nested
codes from algebraic lifts of graphs.
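The defining property of a nested linear code pair can be illustrated in miniature: a subcode $C_2 \subseteq C_1$ arises by appending rows to $C_1$'s parity-check matrix, so every codeword of $C_2$ automatically satisfies $C_1$'s checks. The binary matrices below are illustrative assumptions, unrelated to the protograph and voltage-graph constructions of the paper.

```python
# Tiny nested binary linear codes: C2 = ker H2 sits inside C1 = ker H1
# because H2 contains all of H1's rows. Matrices are assumed for illustration.
import itertools

H1 = [[1, 1, 1, 0],              # parity checks defining C1
      [0, 1, 1, 1]]
H2 = H1 + [[1, 0, 1, 1]]         # one extra check: C2 is nested in C1

def codewords(H, n=4):
    # brute-force kernel of H over GF(2) for a length-n code
    return {c for c in itertools.product((0, 1), repeat=n)
            if all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)}

C1, C2 = codewords(H1), codewords(H2)
assert C2 <= C1                  # the defining nesting property
assert len(C1) == 4 and len(C2) == 2
```

In binning applications, the cosets of $C_2$ inside $C_1$ serve as the bins; the graph-lift constructions in the paper produce such pairs with LDPC structure rather than by brute force.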
|
1006.2996
|
Bounding the Rate Region of Vector Gaussian Multiple Descriptions with
Individual and Central Receivers
|
cs.IT math.IT
|
In this work, the rate region of the vector Gaussian multiple description
problem with individual and central quadratic distortion constraints is
studied. In particular, an outer bound to the rate region of the L-description
problem is derived. The bound is obtained by lower bounding a weighted sum rate
for each supporting hyperplane of the rate region. The key idea is to introduce
at most L-1 auxiliary random variables and further impose upon the variables a
Markov structure according to the ordering of the description weights. This
makes it possible to greatly simplify the derivation of the outer bound. In the
scalar Gaussian case, the complete rate region is fully characterized by
showing that the outer bound is tight. In this case, the optimal weighted sum
rate for each supporting hyperplane is obtained by solving a single
maximization problem. This contrasts with existing results, which require
solving a min-max optimization problem.
|