| id | title | categories | abstract |
|---|---|---|---|
1005.0608
|
Informal Concepts in Machines
|
cs.AI
|
This paper constructively proves the existence of an effective procedure
generating a computable (total) function that is not contained in any given
effectively enumerable set of such functions. The proof implies the existence
of machines that process informal concepts such as computable (total) functions
beyond the limits of any given Turing machine or formal system, that is, these
machines can, in a certain sense, "compute" function values beyond these
limits. We call these machines creative. We argue that any "intelligent"
machine should be capable of processing informal concepts such as computable
(total) functions, that is, it should be creative. Finally, we introduce
hypotheses on creative machines which were developed on the basis of
theoretical investigations and experiments with computer programs. The
hypotheses say that machine intelligence is the execution of a self-developing
procedure starting from any universal programming language and any input.
|
1005.0616
|
Tracking a Random Walk First-Passage Time Through Noisy Observations
|
math.ST cs.IT math.IT stat.TH
|
Given a Gaussian random walk (or a Wiener process), possibly with drift,
observed through noise, we consider the problem of estimating its first-passage
time $\tau_\ell$ of a given level $\ell$ with a stopping time $\eta$ defined
over the noisy observation process.
The main results are upper and lower bounds on the minimum mean absolute
deviation $\inf_\eta \mathbb{E}|\eta-\tau_\ell|$, which become tight as
$\ell\to\infty$. Interestingly, in this regime the estimation error does not
get smaller if we allow $\eta$ to be an arbitrary function of the entire
observation process, not necessarily a stopping time.
In the particular case where there is no drift, we show that it is impossible
to track $\tau_\ell$: $\inf_\eta \mathbb{E}|\eta-\tau_\ell|^p=\infty$ for any $\ell>0$
and $p\geq1/2$.
|
1005.0624
|
The Gaussian Many-to-1 Interference Channel with Confidential Messages
|
cs.IT math.IT
|
The many-to-one interference channel has received interest by virtue of
embodying the essence of an interference network while being more tractable
than the general K-user interference channel. In this paper, we introduce
information theoretic secrecy to this model and consider the many-to-one
interference channel with confidential messages, in which the interfering
users' messages must be kept secret from each receiver, in particular from the
one subject to interference. We derive the
achievable secrecy sum rate for this channel using nested lattice codes, as
well as an upper bound on the secrecy sum rate for all possible channel gain
configurations. We identify several nontrivial cases where the gap between the
upper bound and the achieved secrecy sum rate is only a function of the number
of the users K, and is uniform over all possible channel gain configurations in
each case. In addition, we identify the secure degrees of freedom for this
channel and show that they equal its degrees of freedom, i.e., the
secrecy at high SNR comes for free.
|
1005.0662
|
The B-Skip-List: A Simpler Uniquely Represented Alternative to B-Trees
|
cs.DS cs.DB
|
In previous work, the author introduced the B-treap, a uniquely represented
B-tree analogue, and proved strong performance guarantees for it. However, the
B-treap maintains intricate invariants and is difficult to implement. In this
paper we introduce the B-skip-list, which has most of the guarantees of the
B-treap, but is vastly simpler and easier to implement. Like the B-treap, the
B-skip-list may be used to construct strongly history-independent index
structures and filesystems; such constructions reveal no information about the
historical sequence of operations that led to the current logical state. For
example, a uniquely represented filesystem would support the deletion of a file
in a way that, in a strong information-theoretic sense, provably removes all
evidence that the file ever existed. Like the B-tree, the B-skip-list has depth
O(log_B (n)) where B is the block transfer size of the external memory, uses
linear space with high probability, and supports efficient one-dimensional
range queries.
|
1005.0677
|
The Enigma of CDMA Revisited
|
cs.IT math.IT
|
In this paper, we explore the mystery of synchronous CDMA as applied to
wireless and optical communication systems under very general settings for the
user symbols and the signature matrix entries. The channel is modeled with
real/complex additive noise of arbitrary distribution. Two problems are
addressed. The first problem concerns whether overloaded error-free codes
exist in the absence of additive noise under these general settings, and if so
whether practical optimum decoding algorithms exist. The second one is
about the bounds for the sum channel capacity when user data and signature
codes employ any real or complex alphabets (finite or infinite). In response to
the first problem, we have developed practical Maximum Likelihood (ML) decoding
algorithms for overloaded CDMA systems for a large class of alphabets. In
response to the second problem, a general theorem has been developed in which
the sum capacity lower bounds with respect to the number of users and spreading
gain and Signal-to-Noise Ratio (SNR) can be derived as special cases for a
given CDMA system. To show the power and utility of the main theorem, a number
of sum capacity bounds for special cases are simulated. An important conclusion
of this paper is that the lower and upper bounds of the sum capacity for
small/medium size CDMA systems depend on both the input and the signature
symbols; this is contrary to the asymptotic results for large scale systems
reported in the literature (also confirmed in this paper) where the signature
symbols and statistics disappear in the asymptotic sum capacity. Moreover,
these questions are investigated for the case when not all users are active.
Furthermore, upper and asymptotic bounds are derived and numerically evaluated
and compared to other derivations.
|
1005.0707
|
The Production of Probabilistic Entropy in Structure/Action Contingency
Relations
|
cs.AI physics.soc-ph
|
Luhmann (1984) defined society as a communication system which is
structurally coupled to, but not an aggregate of, human action systems. The
communication system is then considered as self-organizing ("autopoietic"), as
are human actors. Communication systems can be studied by using Shannon's
(1948) mathematical theory of communication. The update of a network by action
at one of the local nodes is then a well-known problem in artificial
intelligence (Pearl 1988). By combining these various theories, a general
algorithm for probabilistic structure/action contingency can be derived. The
consequences of this contingency for each system, its consequences for their
further histories, and the stabilization on each side by counterbalancing
mechanisms are discussed, in both mathematical and theoretical terms. An
empirical example is elaborated.
|
1005.0732
|
Outage rates and outage durations of opportunistic relaying systems
|
cs.IT math.IT
|
Opportunistic relaying is a simple yet efficient cooperation scheme that
achieves full diversity and preserves the spectral efficiency among the
spatially distributed stations. However, the stations' mobility causes temporal
correlation of the system's capacity outage events, which gives rise to its
important second-order outage statistical parameters, such as the average
outage rate (AOR) and the average outage duration (AOD). This letter presents
exact analytical expressions for the AOR and the AOD of an opportunistic
relaying system, which employs a mobile source and a mobile destination
(without a direct path), and an arbitrary number of (fixed-gain
amplify-and-forward or decode-and-forward) mobile relays in Rayleigh fading
environment.
|
1005.0734
|
An efficient approximation to the correlated Nakagami-m sums and its
application in equal gain diversity receivers
|
cs.IT math.IT
|
There are several cases in wireless communications theory where the
statistics of the sum of independent or correlated Nakagami-m random variables
(RVs) must be known. However, a closed-form solution to the
distribution of this sum does not exist when the number of constituent RVs
exceeds two, even for the special case of Rayleigh fading. In this paper, we
present an efficient closed-form approximation for the distribution of the sum
of arbitrary correlated Nakagami-m envelopes with identical and integer fading
parameters. The distribution becomes exact for maximal correlation, while the
tightness of the proposed approximation is validated statistically by using the
Chi-square and the Kolmogorov-Smirnov goodness-of-fit tests. As an application,
the approximation is used to study the performance of equal-gain combining
(EGC) systems operating over arbitrary correlated Nakagami-m fading channels,
by utilizing the available analytical results for the error-rate performance of
an equivalent maximal-ratio combining (MRC) system.
|
1005.0749
|
Integrating multiple sources to answer questions in Algebraic Topology
|
cs.SC cs.AI cs.HC
|
We present in this paper an evolution of a tool from a user interface for a
concrete Computer Algebra system for Algebraic Topology (the Kenzo system), to
a front-end allowing the interoperability among different sources for
computation and deduction. The architecture allows the front-end not only to
interface with several systems, but also to make them cooperate in shared
calculations.
|
1005.0766
|
Learning High-Dimensional Markov Forest Distributions: Analysis of Error
Rates
|
cs.IT math.IT stat.ML
|
The problem of learning forest-structured discrete graphical models from
i.i.d. samples is considered. An algorithm based on pruning of the Chow-Liu
tree through adaptive thresholding is proposed. It is shown that this algorithm
is both structurally consistent and risk consistent and the error probability
of structure learning decays faster than any polynomial in the number of
samples under fixed model size. For the high-dimensional scenario where the
size of the model d and the number of edges k scale with the number of samples
n, sufficient conditions on (n,d,k) are given for the algorithm to satisfy
structural and risk consistencies. In addition, the extremal structures for
learning are identified; we prove that the independent (resp. tree) model is
the hardest (resp. easiest) to learn using the proposed algorithm in terms of
error rates for structure learning.
|
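The pruning step described in the abstract above can be sketched as a maximum-weight spanning forest over empirical pairwise mutual informations, where edges below a threshold are dropped. The fixed threshold and the plug-in MI estimator here are illustrative simplifications; the paper's algorithm uses an adaptive threshold.

```python
import math
from collections import Counter

def empirical_mi(xs, ys):
    """Empirical mutual information (in nats) between two discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (a, b), c in pxy.items():
        pab = c / n
        mi += pab * math.log(pab * n * n / (px[a] * py[b]))
    return mi

def forest_from_samples(samples, threshold):
    """Kruskal-style max-weight spanning forest over the d variables,
    keeping only edges whose empirical MI exceeds the threshold
    (a fixed stand-in for the paper's adaptive thresholding)."""
    d = len(samples[0])
    cols = [[row[i] for row in samples] for i in range(d)]
    edges = sorted(
        ((empirical_mi(cols[i], cols[j]), i, j)
         for i in range(d) for j in range(i + 1, d)),
        reverse=True)
    parent = list(range(d))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    chosen = []
    for w, i, j in edges:
        if w <= threshold:
            break  # remaining edges are weaker: pruning yields a forest
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            chosen.append((i, j))
    return chosen
```

With samples in which variables 0 and 1 are perfectly correlated and variable 2 is independent of both, only the edge (0, 1) survives the pruning.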
1005.0794
|
Active Learning for Hidden Attributes in Networks
|
stat.ML cond-mat.stat-mech cs.IT cs.LG math.IT physics.soc-ph
|
In many networks, vertices have hidden attributes, or types, that are
correlated with the network's topology. If the topology is known but these
attributes are not, and if learning the attributes is costly, we need a method
for choosing which vertex to query in order to learn as much as possible about
the attributes of the other vertices. We assume the network is generated by a
stochastic block model, but we make no assumptions about its assortativity or
disassortativity. We choose which vertex to query using two methods: 1)
maximizing the mutual information between its attributes and those of the
others (a well-known approach in active learning) and 2) maximizing the average
agreement between two independent samples of the conditional Gibbs
distribution. Experimental results show that both these methods do much better
than simple heuristics. They also consistently identify certain vertices as
important by querying them early on.
|
1005.0813
|
TSDS: high-performance merge, subset, and filter software for time
series-like data
|
cs.DB
|
Time Series Data Server (TSDS) is a software package for implementing a
server that provides fast super-setting, sub-setting, filtering, and uniform
gridding of time series-like data. TSDS was developed to respond quickly to
requests for long time spans of data. Data may be served from a fast database,
typically created by aggregating granules (e.g., data files) from a remote data
source and storing them in a local cache that is optimized for serving time
series. The system was designed specifically for time series data, and is
optimized for requests where the longest dimension of the requested data
structure is time. Scalar, vector, and spectrogram time series types are
supported. The user can interact with the server by requesting a time series, a
date range, and an optional filter to apply to the data. Available filters
include strides, block average/minimum/maximum, exclude, and inequality.
Constraint expressions are supported, which allow such operations as a request
for data from one time series when a different time series satisfies a
specified relationship. TSDS builds upon DAP (Data Access Protocol), NcML
(netCDF Mark-up language) and related software libraries. In this work, we
describe the current design of this server, as well as planned features and
potential implementation strategies.
|
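As a rough illustration of the filter semantics named in the abstract above (strides, block averages, and inequality filters), here is a minimal sketch. The exact TSDS filter definitions, such as how a trailing partial block is handled, are assumptions and not taken from the system's documentation.

```python
import operator

def stride(values, n):
    """Stride filter: keep every n-th sample."""
    return values[::n]

def block_average(values, n):
    """Block-average filter: mean of each consecutive block of n samples.
    A trailing partial block is averaged over its actual length (assumed)."""
    return [sum(values[i:i + n]) / len(values[i:i + n])
            for i in range(0, len(values), n)]

def inequality(times, values, op, threshold):
    """Inequality filter: keep (time, value) pairs whose value satisfies
    the comparison, e.g. op='>' with threshold=5."""
    cmp = {'>': operator.gt, '<': operator.lt,
           '>=': operator.ge, '<=': operator.le}[op]
    return [(t, v) for t, v in zip(times, values) if cmp(v, threshold)]
```

A block-minimum or block-maximum filter would follow the same pattern as `block_average` with `min` or `max` in place of the mean.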
1005.0826
|
Clustering processes
|
cs.LG cs.IT math.IT stat.ML
|
The problem of clustering is considered, for the case when each data point is
a sample generated by a stationary ergodic process. We propose a very natural
asymptotic notion of consistency, and show that simple consistent algorithms
exist, under most general non-parametric assumptions. The notion of consistency
is as follows: two samples should be put into the same cluster if and only if
they were generated by the same distribution. With this notion of consistency,
clustering generalizes such classical statistical problems as homogeneity
testing and process classification. We show that, for the case of a known
number of clusters, consistency can be achieved under the only assumption that
the joint distribution of the data is stationary ergodic (no parametric or
Markovian assumptions, no assumptions of independence, neither between nor
within the samples). If the number of clusters is unknown, consistency can be
achieved under appropriate assumptions on the mixing rates of the processes
(again, no parametric or independence assumptions). In both cases we give
examples of simple (at most quadratic in each argument) algorithms which are
consistent.
|
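A minimal sketch of the known-k case described above: pairwise distances between samples are estimated by comparing empirical block frequencies (a crude stand-in for the distributional distance), and clustering is done by greedy farthest-point selection of k centers. The block length and all names here are illustrative choices, not the paper's.

```python
from collections import Counter

def block_dist(x, y, max_len=2):
    """Distance between two sequences: total variation between their
    empirical distributions of blocks of length 1..max_len (a crude
    stand-in for the distributional distance between processes)."""
    d = 0.0
    for l in range(1, max_len + 1):
        fx = Counter(tuple(x[i:i + l]) for i in range(len(x) - l + 1))
        fy = Counter(tuple(y[i:i + l]) for i in range(len(y) - l + 1))
        nx, ny = sum(fx.values()), sum(fy.values())
        for b in set(fx) | set(fy):
            d += abs(fx[b] / nx - fy[b] / ny)
    return d

def cluster_known_k(samples, k):
    """Farthest-point clustering with k known: pick k centers greedily,
    then assign every sample to its nearest center."""
    centers = [0]
    while len(centers) < k:
        far = max(range(len(samples)),
                  key=lambda i: min(block_dist(samples[i], samples[c])
                                    for c in centers))
        centers.append(far)
    return [min(range(k),
                key=lambda j: block_dist(s, samples[centers[j]]))
            for s in samples]
```

On samples drawn from two easily distinguished processes (all-zero versus alternating sequences), the labels separate the two sources.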
1005.0855
|
On Capacity Scaling of Underwater Networks: An Information-Theoretic
Perspective
|
cs.IT math.IT
|
Capacity scaling laws are analyzed in an underwater acoustic network with $n$
regularly located nodes on a square. A narrow-band model is assumed where the
carrier frequency is allowed to scale as a function of $n$. In the network, we
characterize an attenuation parameter that depends on the frequency scaling as
well as the transmission distance. A cut-set upper bound on the throughput
scaling is then derived in extended networks. Our result indicates that the
upper bound is inversely proportional to the attenuation parameter, thus
resulting in a highly power-limited network. Interestingly, it is seen that
unlike the case of wireless radio networks, our upper bound is intrinsically
related to the attenuation parameter but not the spreading factor. Furthermore,
we describe an achievable scheme based on the simple nearest neighbor multi-hop
(MH) transmission. It is shown that, in extended networks, the MH scheme is
order-optimal as the attenuation parameter scales exponentially with $\sqrt{n}$
(or faster). Finally, these scaling results are extended to a random network
realization.
|
1005.0858
|
Randomized hybrid linear modeling by local best-fit flats
|
cs.CV
|
The hybrid linear modeling problem is to identify a set of d-dimensional
affine sets in a D-dimensional Euclidean space. It arises, for example, in
object tracking and structure from motion. The hybrid linear model can be
considered as the second simplest (behind linear) manifold model of data. In
this paper we present a very simple geometric method for hybrid linear
modeling based on selecting a set of local best-fit flats that minimize a
global l1 error measure. The size of the local neighborhoods is determined
automatically by Jones' l2 beta numbers; it is proven under certain
geometric conditions that good local neighborhoods exist and are found by our
method. We also demonstrate how to use this algorithm for fast determination of
the number of affine subspaces. We give extensive experimental evidence
demonstrating the state of the art accuracy and speed of the algorithm on
synthetic and real hybrid linear data.
|
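For intuition, the two ingredients named above, an l2-fitted local flat and a global l1 error, can be sketched in the simplest setting: lines in the plane. The full method works with d-flats in R^D via the SVD, and the neighborhood selection by Jones' beta numbers is omitted here; this closed-form principal-axis fit is a 2-D convenience.

```python
import math

def best_fit_line(points):
    """Best l2-fit line (a 1-flat) through a 2-D point cloud: the centroid
    plus the principal direction of the centered scatter matrix
    (closed form for the 2x2 case: tan(2*theta) = 2*Sxy / (Sxx - Syy))."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points)
    syy = sum((p[1] - cy) ** 2 for p in points)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), (math.cos(theta), math.sin(theta))

def dist_to_line(p, center, direction):
    """Euclidean distance from p to the line: the component of p - center
    orthogonal to the line's direction."""
    rx, ry = p[0] - center[0], p[1] - center[1]
    return abs(rx * direction[1] - ry * direction[0])

def global_l1_error(points, lines):
    """Assign each point to its nearest candidate line and sum the
    distances: the global l1 error the method minimizes."""
    return sum(min(dist_to_line(p, c, u) for c, u in lines) for p in points)
```

Fitting one line to each of two perpendicular point clouds gives a candidate set whose global l1 error on those points is zero.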
1005.0879
|
From Skew-Cyclic Codes to Asymmetric Quantum Codes
|
cs.IT math.IT
|
We introduce an additive but not $\mathbb{F}_4$-linear map $S$ from
$\mathbb{F}_4^{n}$ to $\mathbb{F}_4^{2n}$ and exhibit some of its interesting
structural properties. If $C$ is a linear $[n,k,d]_4$-code, then $S(C)$ is an
additive $(2n,2^{2k},2d)_4$-code. If $C$ is an additive cyclic code then $S(C)$
is an additive quasi-cyclic code of index $2$. Moreover, if $C$ is a module
$\theta$-cyclic code, a recently introduced type of code which will be
explained below, then $S(C)$ is equivalent to an additive cyclic code if $n$ is
odd and to an additive quasi-cyclic code of index $2$ if $n$ is even. Given any
$(n,M,d)_4$-code $C$, the code $S(C)$ is self-orthogonal under the trace
Hermitian inner product. Since the mapping $S$ preserves nestedness, it can be
used as a tool in constructing additive asymmetric quantum codes.
|
1005.0896
|
A two-step fusion process for multi-criteria decision applied to natural
hazards in mountains
|
cs.AI
|
Mountain river torrents and snow avalanches generate human and material
damages with dramatic consequences. Knowledge about natural phenomenona is
often lacking and expertise is required for decision and risk management
purposes using multi-disciplinary quantitative or qualitative approaches.
Expertise is considered as a decision process based on imperfect information
coming from more or less reliable and conflicting sources. A methodology mixing
the Analytic Hierarchy Process (AHP), a multi-criteria aid-decision method, and
information fusion using Belief Function Theory is described. Fuzzy Set and
Possibility theories allow quantitative and qualitative criteria to be
transformed into a common frame of discernment for decision-making in
Dempster-Shafer Theory (DST) and Dezert-Smarandache Theory (DSmT) contexts. The
main issues are the elicitation of basic belief assignments, conflict
identification and management, the choice of fusion rules, and the validation
of results, as well as the specific need to distinguish between importance,
reliability, and uncertainty in the fusion process.
|
1005.0897
|
The Complex Gaussian Kernel LMS algorithm
|
cs.LG
|
Although the real reproducing kernels are used in an increasing number of
machine learning problems, complex kernels have not, yet, been used, in spite
of their potential interest in applications such as communications. In this
work, we focus our attention on the complex Gaussian kernel and its possible
application in the complex Kernel LMS algorithm. In order to derive the
gradients needed to develop the complex kernel LMS (CKLMS), we employ the
powerful tool of Wirtinger's Calculus, which has recently attracted much
attention in the signal processing community. Wirtinger's calculus simplifies
computations and offers an elegant tool for treating complex signals. To this
end, the notion of Wirtinger's calculus is extended to include complex RKHSs.
Experiments verify that the CKLMS offers significant performance improvements
over the traditional complex LMS or Widely Linear complex LMS (WL-LMS)
algorithms, when dealing with nonlinearities.
|
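The kernel LMS recursion underlying the abstract above, sketched here for the real Gaussian kernel only. The paper's contribution, the complex-kernel version derived via Wirtinger's calculus, changes the kernel and the gradient computation but keeps the same growing-expansion structure. The step size and kernel width below are arbitrary illustrative choices.

```python
import math

def gaussian_kernel(x, y, sigma):
    """Real Gaussian kernel exp(-||x - y||^2 / sigma^2)."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / sigma ** 2)

class KLMS:
    """Kernel LMS: the filter is a growing kernel expansion, and each
    observed error adds one dictionary entry weighted by step * error.
    This real-kernel sketch only mirrors the recursion; CKLMS replaces
    the kernel with the complex Gaussian one."""
    def __init__(self, step=0.5, sigma=1.0):
        self.step, self.sigma = step, sigma
        self.centers, self.weights = [], []

    def predict(self, x):
        return sum(w * gaussian_kernel(x, c, self.sigma)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)          # a-priori error
        self.centers.append(x)
        self.weights.append(self.step * e)
        return e
```

Cycling a few times over samples of a nonlinear target such as d = x^2, which a linear LMS filter cannot fit, drives the a-priori error down.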
1005.0902
|
Extension of Wirtinger Calculus in RKH Spaces and the Complex Kernel LMS
|
cs.LG
|
Over the last decade, kernel methods for nonlinear processing have
successfully been used in the machine learning community. However, so far, the
emphasis has been on batch techniques. It is only recently, that online
adaptive techniques have been considered in the context of signal processing
tasks. To the best of our knowledge, no kernel-based strategy has been
developed, so far, that is able to deal with complex valued signals. In this
paper, we take advantage of a technique called complexification of real RKHSs
to attack this problem. In order to derive gradients and subgradients of
operators that need to be defined on the associated complex RKHSs, we employ
the powerful tool of Wirtinger's Calculus, which has recently attracted much
attention in the signal processing community. Wirtinger's calculus simplifies
computations and offers an elegant tool for treating complex signals. To this
end, in this paper, the notion of Wirtinger's calculus is extended, for the
first time, to include complex RKHSs and used to derive the Complex Kernel
Least-Mean-Square (CKLMS) algorithm. Experiments verify that the CKLMS can be
used to derive nonlinear stable algorithms, which offer significant performance
improvements over the traditional complex LMS or Widely Linear complex LMS
(WL-LMS) algorithms, when dealing with nonlinearities.
|
1005.0907
|
Multistage Hybrid Arabic/Indian Numeral OCR System
|
cs.CV
|
The use of OCR in postal services is not yet universal and there are still
many countries that process mail sorting manually. Automated Arabic/Indian
numeral Optical Character Recognition (OCR) systems for postal services are
being used in some countries, but errors still occur during the mail sorting
process, reducing efficiency. It is important to investigate fast and
efficient recognition algorithms/systems that correctly read the postal codes
from mail addresses and eliminate errors during the mail sorting stage. The
objective of this study is to
recognize printed numerical postal codes from mail addresses. The proposed
system is a multistage hybrid system which consists of three different feature
extraction methods, i.e., binary, zoning, and fuzzy features, and three
different classifiers, i.e., Hamming Nets, Euclidean Distance, and Fuzzy Neural
Network Classifiers. The proposed system systematically compares the
performance of each of these methods and ensures that the numerals are
recognized correctly. Comprehensive results show a very high recognition
rate, outperforming other methods reported in the literature.
|
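Of the three feature extractors named above, zoning is the easiest to illustrate; the sketch below pairs it with a Euclidean Distance classifier. The zone grid size and the template set are hypothetical, and the paper's binary and fuzzy features and its other classifiers are not reproduced.

```python
import math

def zoning_features(img, zh, zw):
    """Zoning features: split a binary image (list of 0/1 rows) into a
    zh-by-zw grid of zones and use each zone's on-pixel density."""
    h, w = len(img), len(img[0])
    rh, rw = h // zh, w // zw
    feats = []
    for zi in range(zh):
        for zj in range(zw):
            zone = [img[r][c]
                    for r in range(zi * rh, (zi + 1) * rh)
                    for c in range(zj * rw, (zj + 1) * rw)]
            feats.append(sum(zone) / len(zone))
    return feats

def classify(img, templates, zh=2, zw=2):
    """Euclidean-distance classifier: return the label of the nearest
    template feature vector."""
    f = zoning_features(img, zh, zw)
    return min(templates, key=lambda label: math.dist(f, templates[label]))
```

With one template per class, a query image with a flipped pixel still lands nearest its correct template.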
1005.0917
|
On Building a Knowledge Base for Stability Theory
|
cs.AI
|
A lot of mathematical knowledge has been formalized and stored in
repositories by now: different mathematical theorems and theories have been
taken into consideration and included in mathematical repositories.
Applications more distant from pure mathematics, however --- though based on
these theories --- often need more detailed knowledge about the underlying
theories. In this paper we present an example Mizar formalization from the area
of electrical engineering focusing on stability theory which is based on
complex analysis. We discuss what kind of special knowledge is necessary here
and which amount of this knowledge is included in existing repositories.
|
1005.0945
|
An Efficient Vein Pattern-based Recognition System
|
cs.CV
|
This paper presents an efficient human recognition system based on the vein
pattern of the palma dorsa. A new absorption-based technique is
proposed to collect good-quality images with the help of a low-cost camera and
light source. The system automatically detects the region of interest in the
image and performs the necessary preprocessing to extract features. A
Euclidean-distance-based matching technique is used for making the decision.
The system has been tested on a data set of 1750 image samples collected from
341 individuals. The accuracy of the verification system is found to be
99.26%, with a false rejection rate (FRR) of 0.03%.
|
1005.0957
|
ECG Feature Extraction Techniques - A Survey Approach
|
cs.NE cs.AI physics.med-ph
|
ECG Feature Extraction plays a significant role in diagnosing most of the
cardiac diseases. One cardiac cycle in an ECG signal consists of the P-QRS-T
waves. This feature extraction scheme determines the amplitudes and intervals
in the ECG signal for subsequent analysis. The amplitudes and intervals of the
P-QRS-T segment characterize the functioning of the heart.
Recently, numerous techniques have been developed for analyzing the ECG
signal, mostly based on Fuzzy Logic Methods, Artificial Neural Networks (ANN),
Genetic Algorithms (GA), Support Vector Machines (SVM), and other signal
analysis techniques. All these techniques and algorithms have their advantages
and limitations. This paper discusses various techniques and transformations
proposed in the literature for extracting features from an ECG signal, and
provides a comparative study of the various feature extraction methods.
|
1005.0961
|
Performance Oriented Query Processing In GEO Based Location Search
Engines
|
cs.IR
|
Geographic location search engines allow users to constrain and order search
results in an intuitive manner by focusing a query on a particular geographic
region. Geographic search technology, also called location search, has recently
received significant interest from major search engine companies. Academic
research in this area has focused primarily on techniques for extracting
geographic knowledge from the web. In this paper, we study the problem of
efficient query processing in scalable geographic search engines. Query
processing is a major bottleneck in standard web search engines, and the main
reason for the thousands of machines used by the major engines. Geographic
search engine query processing is different in that it requires a combination
of text and spatial data processing techniques. We propose several algorithms
for efficient query processing in geographic search engines, integrate them
into an existing web search query processor, and evaluate them on large sets of
real data and query traces.
|
1005.0965
|
Artificial Neural Network based Diagnostic Model For Causes of Success
and Failures
|
cs.NE
|
In this paper an attempt has been made to identify the most important human
resource factors and to propose a diagnostic model based on the
back-propagation and connectionist approaches of artificial neural networks
(ANN). The focus of the study is the mobile-communication industry of India.
The ANN-based approach is particularly important because conventional (e.g.,
algorithmic) approaches to problem solving have inherent disadvantages: the
algorithmic approach is well suited only to problems that are well understood
and have known solutions. ANNs, on the other hand, learn by example, have
processing capabilities similar to those of a human brain, and support
training and intuitive, human-like decision making. This ANN-based approach is
therefore likely to help researchers and organizations reach better solutions
to the problem of managing human resources. The study is particularly
important because many such studies have been carried out in developed
countries, while there is a shortage of them in developing nations like India.
Here, a model has been derived using the connectionist-ANN approach and
improved and verified via the back-propagation algorithm. The suggested model
can be used for testing the success and failure human factors in any
communication industry. Results have been obtained on the basis of the
connectionist model and further refined by a back-propagation neural network
(BPNN) to an accuracy of 99.99%. Any company can directly deploy this model to
predict failure due to HR factors.
|
1005.0972
|
Adaptive Tuning Algorithm for Performance tuning of Database Management
System
|
cs.DB
|
Performance tuning of Database Management Systems (DBMS) is both complex and
challenging, as it involves identifying and altering several key performance
tuning parameters. The quality of tuning and the extent of performance
enhancement achieved greatly depend on the skill and experience of the
Database Administrator (DBA). Neural networks' ability to adapt to dynamically
changing inputs, and to learn, makes them ideal candidates for the tuning
task. In this paper, a novel tuning algorithm based on neural-network-estimated
tuning parameters is presented. The key performance indicators are proactively
monitored and fed as input to the neural network, and the trained network
estimates suitable sizes for the buffer cache, shared pool, and redo log
buffer. The tuner alters these tuning parameters using the estimated values
via a rate-change computing algorithm. The preliminary results show that the
proposed method is effective in improving query response time for a variety
of workload types.
|
1005.1062
|
Asymptotically Regular LDPC Codes with Linear Distance Growth and
Thresholds Close to Capacity
|
cs.IT math.IT
|
Families of "asymptotically regular" LDPC block code ensembles can be formed
by terminating (J,K)-regular protograph-based LDPC convolutional codes. By
varying the termination length, we obtain a large selection of LDPC block code
ensembles with varying code rates and substantially better iterative decoding
thresholds than those of (J,K)-regular LDPC block code ensembles, despite the
fact that the terminated ensembles are almost regular. Also, by means of an
asymptotic weight enumerator analysis, we show that minimum distance grows
linearly with block length for all of the ensembles in these families, i.e.,
the ensembles are asymptotically good. We find that, as the termination length
increases, families of "asymptotically regular" codes with
capacity-approaching iterative decoding thresholds and declining minimum
distance growth rates are obtained, allowing a code designer to trade off
distance growth rate against threshold. Further, we show that the thresholds
and the distance growth
rates can be improved by carefully choosing the component protographs used in
the code construction.
|
1005.1065
|
New Families of LDPC Block Codes Formed by Terminating Irregular
Protograph-Based LDPC Convolutional Codes
|
cs.IT math.IT
|
In this paper, we present a method of constructing new families of LDPC block
code ensembles formed by terminating irregular protograph-based LDPC
convolutional codes. Using the accumulate-repeat-by-4-jagged-accumulate (AR4JA)
protograph as an example, a density evolution analysis for the binary erasure
channel shows that this flexible design technique gives rise to a large
selection of LDPC block code ensembles with varying code rates and thresholds
close to capacity. Further, by means of an asymptotic weight enumerator
analysis, we show that all the ensembles in this family also have minimum
distance that grows linearly with block length, i.e., they are asymptotically
good.
|
1005.1120
|
Estimating small moments of data stream in nearly optimal space-time
|
cs.DS cs.LG
|
For each $p \in (0,2]$, we present a randomized algorithm that returns an
$\epsilon$-approximation of the $p$th frequency moment of a data stream,
$F_p = \sum_{i=1}^n |f_i|^p$. The algorithm requires space $O(\epsilon^{-2}
\log (mM)(\log n))$ and processes each stream update in time $O((\log n)
(\log \epsilon^{-1}))$. It is nearly optimal in terms of space (the lower
bound is $\Omega(\epsilon^{-2} \log (mM))$) as well as time, and is the
first algorithm with these properties. The technique separates heavy hitters
from the remaining items in the stream using an appropriate threshold and
estimates the contributions of the heavy hitters and the light elements to
$F_p$ separately. A key component is the design of an unbiased estimator for
$|f_i|^p$ whose data structure has low update time and low variance.
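To make the heavy/light decomposition concrete, here is a minimal sketch (a
toy illustration, not the paper's small-space sketch data structure; the
threshold value is arbitrary) showing that splitting items at a threshold
partitions $F_p$ exactly into a heavy-hitter part and a light part:

```python
def f_p(freqs, p):
    """Exact p-th frequency moment: F_p = sum_i |f_i|^p."""
    return sum(abs(f) ** p for f in freqs)

def f_p_split(freqs, p, threshold):
    """Heavy/light decomposition used by the algorithm: items with
    |f_i| >= threshold are 'heavy hitters', the rest are 'light'.
    The two contributions are estimated separately and summed."""
    heavy = [f for f in freqs if abs(f) >= threshold]
    light = [f for f in freqs if abs(f) < threshold]
    return f_p(heavy, p), f_p(light, p)

freqs = [50, 3, 1, 40, 2, 2, 1]
heavy_part, light_part = f_p_split(freqs, p=0.5, threshold=10)
# the split is exact: the two parts always sum to F_p
assert abs(heavy_part + light_part - f_p(freqs, 0.5)) < 1e-9
```

The actual streaming algorithm estimates each part from a compact sketch
rather than from the exact frequency vector shown here.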
|
1005.1155
|
Decentralized Estimation over Orthogonal Multiple-access Fading Channels
in Wireless Sensor Networks - Optimal and Suboptimal Estimators
|
cs.IT math.IT stat.ME
|
Optimal and suboptimal decentralized estimators in wireless sensor networks
(WSNs) over orthogonal multiple-access fading channels are studied in this
paper. Considering multiple-bit quantization before digital transmission, we
develop maximum likelihood estimators (MLEs) with both known and unknown
channel state information (CSI). When training symbols are available, we derive
an MLE that is a special case of the MLE with unknown CSI. It implicitly uses
the training symbols to estimate the channel coefficients and exploits the
estimated CSI in an optimal way. To reduce the computational complexity, we
propose suboptimal estimators. These estimators exploit both signal and data
level redundant information to improve the estimation performance. The proposed
MLEs reduce to traditional fusion based or diversity based estimators when
communications or observations are perfect. By introducing a general message
function, the proposed estimators can be applied when various analog or digital
transmission schemes are used. The simulations show that the estimators using
digital communications with multiple-bit quantization outperform the estimator
using analog-and-forwarding transmission in fading channels. When considering
the total bandwidth and energy constraints, the MLE using multiple-bit
quantization is superior to that using binary quantization at medium and high
observation signal-to-noise ratio levels.
|
1005.1252
|
Universal algorithms, mathematics of semirings and parallel computations
|
math.NA cs.DS cs.MS cs.NE
|
This is a survey paper on applications of mathematics of semirings to
numerical analysis and computing. Concepts of universal algorithm and generic
program are discussed. Relations between these concepts and mathematics of
semirings are examined. A very brief introduction to mathematics of semirings
(including idempotent and tropical mathematics) is presented. Concrete
applications to optimization problems, idempotent linear algebra and interval
analysis are indicated. It is known that some nonlinear problems (and
especially optimization problems) become linear over appropriate semirings with
idempotent addition (the so-called idempotent superposition principle). This
linearity over semirings is convenient for parallel computations.
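The idempotent superposition principle can be illustrated with the (min, +)
semiring, over which the all-pairs shortest-path problem becomes matrix
"multiplication" (a standard textbook example, not code from the survey):

```python
INF = float("inf")

def minplus_matmul(A, B):
    """Matrix 'product' over the (min, +) semiring: addition is min,
    multiplication is +.  Over this idempotent semiring the all-pairs
    shortest-path problem is linear: repeated 'multiplication' of the
    edge-weight matrix converges to the distance matrix."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# edge-weight matrix of a small directed graph (INF = no edge)
W = [[0,   4, INF],
     [INF, 0,   1],
     [2, INF,   0]]

D = W
for _ in range(2):          # n - 1 = 2 'multiplications' suffice for n = 3
    D = minplus_matmul(D, W)

assert D[0][2] == 5         # 0 -> 1 -> 2 costs 4 + 1
assert D[2][1] == 6         # 2 -> 0 -> 1 costs 2 + 4
```

The inner loop is the same generic code as ordinary matrix multiplication;
only the semiring operations change, which is exactly what makes universal
algorithms convenient to parallelize.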
|
1005.1284
|
Approximately achieving Gaussian relay network capacity with lattice
codes
|
cs.IT math.IT
|
Recently, it has been shown that a quantize-map-and-forward scheme
approximately achieves (within a constant number of bits) the Gaussian relay
network capacity for arbitrary topologies. This was established using Gaussian
codebooks for transmission and random mappings at the relays. In this paper, we
show that the same approximation result can be established by using lattices
for transmission and quantization along with structured mappings at the relays.
|
1005.1292
|
Broadcast gossip averaging algorithms: interference and asymptotical
error in large networks
|
math.OC cs.SY
|
In this paper we study two related iterative randomized algorithms for
distributed computation of averages. The first one is the recently proposed
Broadcast Gossip Algorithm, in which at each iteration one randomly selected
node broadcasts its own state to its neighbors. The second algorithm is a novel
de-synchronized version of the previous one, in which at each iteration every
node is allowed to broadcast, with a given probability: hence this algorithm is
affected by interference among messages. Both algorithms are proved to
converge, and their performance is evaluated in terms of rate of convergence
and asymptotical error: focusing on the behavior for large networks, we
highlight the role of topology and design parameters on the performance.
Namely, we show that on fully-connected graphs the rate is bounded away from
one, whereas the asymptotical error is bounded away from zero. On the contrary,
on a wide class of locally-connected graphs, the rate goes to one and the
asymptotical error goes to zero, as the size of the network grows larger.
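A minimal simulation of the Broadcast Gossip Algorithm described above (the
mixing parameter gamma, iteration count, and seed are illustrative choices,
not values from the paper):

```python
import random

def broadcast_gossip(adj, x, gamma=0.5, iters=2000, seed=1):
    """One run of the Broadcast Gossip Algorithm: at each iteration a
    uniformly random node i broadcasts its state, and every neighbor j
    moves toward it: x_j <- (1 - gamma) * x_j + gamma * x_i."""
    rng = random.Random(seed)
    x = list(x)
    n = len(x)
    for _ in range(iters):
        i = rng.randrange(n)
        for j in adj[i]:
            x[j] = (1 - gamma) * x[j] + gamma * x[i]
    return x

# complete graph on 5 nodes
adj = {i: [j for j in range(5) if j != i] for i in range(5)}
x0 = [0.0, 1.0, 2.0, 3.0, 4.0]
x = broadcast_gossip(adj, x0)
spread = max(x) - min(x)
assert spread < 1e-6        # the states reach consensus
```

The states reach consensus, but the consensus value generally differs
slightly from the true initial average; this gap is the asymptotical error
analyzed in the paper.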
|
1005.1339
|
Coordination and Bargaining over the Gaussian Interference Channel
|
cs.IT math.IT
|
This work considers coordination and bargaining between two selfish users
over a Gaussian interference channel using game theory. The usual information
theoretic approach assumes full cooperation among users for codebook and rate
selection. In the scenario investigated here, each selfish user is willing to
coordinate its actions only when an incentive exists and benefits of
cooperation are fairly allocated. To improve communication rates, the two users
are allowed to negotiate for the use of a simple Han-Kobayashi type scheme with
fixed power split and conditions for which users have incentives to cooperate
are identified. The Nash bargaining solution (NBS) is used as a tool to get
fair information rates. The operating point is obtained as a result of an
optimization problem and compared with a TDM-based one in the literature.
|
1005.1340
|
Distribution of Cognitive Load in Web Search
|
cs.HC cs.IR
|
The search task and the system both affect the demand on cognitive resources
during information search. In some situations, the demands may become too high
for a person. This article has a three-fold goal. First, it presents and
critiques methods to measure cognitive load. Second, it explores the
distribution of load across search task stages. Finally, it seeks to improve
our understanding of factors affecting cognitive load levels in information
search. To this end, a controlled Web search experiment with forty-eight
participants was conducted. Interaction logs were used to segment search tasks
semi-automatically into task stages. Cognitive load was assessed using a new
variant of the dual-task method. Average cognitive load was found to vary by
search task stages. It was significantly higher during query formulation and
user description of a relevant document as compared to examining search results
and viewing individual documents. Semantic information shown next to the search
results lists in one of the studied interfaces was found to decrease mental
demands during query formulation and examination of the search results list.
These findings demonstrate that changes in dynamic cognitive load can be
detected within search tasks. Dynamic assessment of cognitive load is of core
interest to information science because it enriches our understanding of
cognitive demands imposed on people engaged in the search process by a task and
the interactive information retrieval system employed.
|
1005.1349
|
On Holant Theorem and Its Proof
|
cs.IT math.IT
|
Holographic algorithms are a recent breakthrough in computer science and have
found applications in information theory. This paper provides a proof of the
central component of holographic algorithms, namely, the Holant theorem.
Compared with previous works, the proof appears simpler and more direct. Along
the way, we also develop a mathematical tool, which we call the c-tensor. We
expect the notion of c-tensor to be applicable to a wide range of analyses.
|
1005.1364
|
Cognitive Radio Transmission under QoS Constraints and Interference
Limitations
|
cs.IT math.IT
|
In this paper, the performance of cognitive transmission under quality of
service (QoS)constraints and interference limitations is studied. Cognitive
secondary users are assumed to initially perform sensing over multiple
frequency bands (or equivalently channels) to detect the activities of primary
users. Subsequently, they perform transmission in a single channel at variable
power and rates depending on the channel sensing decisions and the fading
environment. A state transition model is constructed to model this cognitive
operation. Statistical limitations on the buffer lengths are imposed to take
into account the QoS constraints of the cognitive secondary users. Under such
QoS constraints and limitations on the interference caused to the primary
users, the maximum throughput is identified by finding the effective capacity
of the cognitive radio channel. Optimal power allocation strategies are
obtained and the optimal channel selection criterion is identified. The
intricate interplay between effective capacity, interference and QoS
constraints, channel sensing parameters and reliability, fading, and the number
of available frequency bands is investigated through numerical results.
|
1005.1365
|
Cooperative Sequential Spectrum Sensing Algorithms for OFDM
|
cs.IT math.IT
|
This paper considers the problem of spectrum sensing in cognitive radio
networks when the primary user employs Orthogonal Frequency Division
Multiplexing (OFDM). We develop cooperative sequential detection algorithms
based on energy detectors and the autocorrelation property of cyclic prefix
(CP) used in OFDM systems and compare their performances. We show that
sequential detection provides much better performance than the traditional
fixed sample size (snapshot) based detectors. We also study the effect of model
uncertainties such as timing and frequency offset, IQ-imbalance and uncertainty
in noise and transmit power on the performance of the detectors. We modify the
detectors to mitigate the effects of these impairments. The performance of the
proposed algorithms is studied via simulations. It is shown that the energy
detector performs significantly better than the CP-based detector, except in
the case of a snapshot detector with noise-power uncertainty. Also, unlike the
CP-based detector, most of the above-mentioned impairments have no effect on
the energy detector.
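To give a sense of why sequential detection outperforms fixed-sample-size
(snapshot) detection, here is a textbook Wald sequential probability ratio
test for a Gaussian mean shift, a generic stand-in for the paper's
cooperative detectors; all parameter values are illustrative:

```python
import math

def sprt_gaussian(samples, mu=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT for H1: mean = mu vs H0: mean = 0 with known sigma.
    alpha/beta are the target false-alarm and miss probabilities."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for one Gaussian sample
        llr += (mu / sigma ** 2) * (x - mu / 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

decision, n_used = sprt_gaussian([1.0] * 50)   # signal present
assert decision == "H1" and n_used == 10
decision, n_used = sprt_gaussian([0.0] * 50)   # noise only
assert decision == "H0" and n_used == 10
```

The test stops as soon as the accumulated evidence is conclusive, so on
average it needs far fewer samples than a snapshot detector designed for the
same error probabilities.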
|
1005.1369
|
Simultaneous communication in noisy channels
|
cs.IT math.IT
|
A sender wishes to broadcast a message of length $n$ over an alphabet to $r$
users, where each user $i$, $1 \leq i \leq r$ should be able to receive one of
$m_i$ possible messages. The broadcast channel has noise for each of the users
(possibly different noise for different users), who cannot distinguish between
some pairs of letters. The vector $(m_1, m_2, \ldots, m_r)_{(n)}$ is said to
be feasible if length-$n$ encoding and decoding schemes exist enabling every
user to decode his message. A rate vector $(R_1, R_2, \ldots, R_r)$ is
feasible if there exists a sequence of feasible vectors $(m_1, m_2, \ldots,
m_r)_{(n)}$ such that $R_i = \lim_{n \to \infty} \frac{\log_2 m_i}{n}$ for
all $i$. We
determine the feasible rate vectors for several different scenarios and
investigate some of their properties. An interesting case discussed is when one
user can only distinguish between all the letters in a subset of the alphabet.
Tight restrictions on the feasible rate vectors for some specific noise types
for the other users are provided. The simplest non-trivial cases of two users
and alphabet of size three are fully characterized. To this end a more general
previously known result, to which we sketch an alternative proof, is used. This
problem generalizes the study of the Shannon capacity of a graph, by
considering more than a single user.
|
1005.1391
|
Solution to the Counterfeit Coin Problem and its Generalization
|
cs.IT math.IT
|
This work deals with a classic problem: "Given a set of coins among which
there is a counterfeit coin of a different weight, find this counterfeit coin
using ordinary balance scales, with the minimum number of weighings possible,
and indicate whether it weighs less or more than the rest". The method proposed
here not only calculates the minimum number of weighings necessary, but also
indicates how to perform them; it is easily mechanizable and valid for any
number of coins. Instructions are also given on how to generalize the
procedure to cases where there is more than one counterfeit coin.
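The minimum number of weighings follows a classical counting argument: each
weighing has three outcomes, and the procedure must both locate the coin and
decide heavy versus light. A short sketch of the standard bound, the smallest
$w$ with $(3^w - 3)/2 \geq n$ (the paper's own derivation may differ):

```python
def min_weighings(n):
    """Classical minimum number of balance weighings to find one
    counterfeit among n coins AND tell whether it is heavy or light:
    the smallest w with (3**w - 3) / 2 >= n."""
    w = 1
    while (3 ** w - 3) // 2 < n:
        w += 1
    return w

assert min_weighings(12) == 3    # the classic 12-coin puzzle
assert min_weighings(13) == 4
assert min_weighings(120) == 5   # (3^5 - 3)/2 = 120
```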
|
1005.1395
|
Fractal Weyl law for Linux Kernel Architecture
|
cs.CE cond-mat.dis-nn nlin.CD physics.data-an
|
We study the properties of spectrum and eigenstates of the Google matrix of a
directed network formed by the procedure calls in the Linux Kernel. Our results
obtained for various versions of the Linux Kernel show that the spectrum is
characterized by the fractal Weyl law established recently for systems of
quantum chaotic scattering and the Perron-Frobenius operators of dynamical
maps. The fractal Weyl exponent is found to be $\nu \approx 0.63$ that
corresponds to the fractal dimension of the network $d \approx 1.2$. The
eigenmodes of the Google matrix of Linux Kernel are localized on certain
principal nodes. We argue that the fractal Weyl law should be generic for
directed networks with the fractal dimension $d<2$.
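For readers unfamiliar with the construction, here is a minimal pure-Python
sketch of the Google matrix of a directed graph and its leading eigenvector
via power iteration (the damping factor 0.85 and the toy call graph are
illustrative; computing the full spectrum and the fractal Weyl exponent is
beyond this sketch):

```python
def google_matrix(adj, n, alpha=0.85):
    """Column-stochastic Google matrix G = alpha*S + (1 - alpha)/n of a
    directed graph; dangling nodes get uniform columns."""
    G = [[(1 - alpha) / n] * n for _ in range(n)]
    for j in range(n):
        targets = adj.get(j, [])
        if targets:
            for i in targets:
                G[i][j] += alpha / len(targets)
        else:                       # dangling node: spread uniformly
            for i in range(n):
                G[i][j] += alpha / n
    return G

def pagerank(G, iters=200):
    """Leading eigenvector (eigenvalue 1) of G by power iteration."""
    n = len(G)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(G[i][j] * v[j] for j in range(n)) for i in range(n)]
    return v

# toy 'call graph': procedure 0 calls 1 and 2, both call back into 0
adj = {0: [1, 2], 1: [0], 2: [0]}
v = pagerank(google_matrix(adj, 3))
assert abs(sum(v) - 1.0) < 1e-9      # G is column-stochastic
assert v[0] > v[1] and v[0] > v[2]   # the hub node dominates
```

The localized weight on the hub node is a toy analogue of the principal-node
localization of the Kernel's eigenmodes mentioned above.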
|
1005.1471
|
Classification via Incoherent Subspaces
|
cs.CV
|
This article presents a new classification framework that can extract
individual features per class. The scheme is based on a model of incoherent
subspaces, each one associated to one class, and a model on how the elements in
a class are represented in this subspace. After the theoretical analysis, an
alternating projection algorithm to find such a collection is developed. The
classification performance and speed of the proposed method are tested on the
AR and YaleB databases and compared to that of Fisher's LDA and a recent
approach based on $\ell_1$ minimisation. Finally, connections of the presented
scheme to already existing work are discussed and possible extensions are
pointed out.
|
1005.1475
|
How to correctly prune tropical trees
|
cs.AI cs.DM cs.GT cs.SC
|
We present tropical games, a generalization of combinatorial min-max games
based on tropical algebras. Our model breaks the traditional symmetry of
rational zero-sum games where players have exactly opposed goals (min vs. max),
is more widely applicable than min-max and also supports a form of pruning,
despite it being less effective than alpha-beta. Actually, min-max games may be
seen as particular cases where both the game and its dual are tropical: when
the dual of a tropical game is also tropical, the power of alpha-beta is
completely recovered. We formally develop the model and prove that the tropical
pruning strategy is correct, then conclude by showing how the problem of
approximated parsing can be modeled as a tropical game, profiting from pruning.
|
1005.1516
|
An Agent-based Simulation of the Effectiveness of Creative Leadership
|
cs.MA cs.NE cs.SI
|
This paper investigates the effectiveness of creative versus uncreative
leadership using EVOC, an agent-based model of cultural evolution. Each
iteration, each agent in the artificial society invents a new action, or
imitates a neighbor's action. Only the leader's actions can be imitated by all
other agents, referred to as followers. Two measures of creativity were used:
(1) invention-to-imitation ratio, iLeader, which measures how often an agent
invents, and (2) rate of conceptual change, cLeader, which measures how
creative an invention is. High iLeader increased mean fitness of ideas, but
only when creativity of followers was low. High iLeader was associated with
greater diversity of ideas in the early stage of idea generation only. High
cLeader increased mean fitness of ideas in the early stage of idea generation;
in the later stage it decreased idea fitness. Reasons for these findings and
tentative implications for creative leadership in human society are discussed.
|
1005.1518
|
Recognizability of Individual Creative Style Within and Across Domains:
Preliminary Studies
|
cs.AI
|
It is hypothesized that creativity arises from the self-mending capacity of
an internal model of the world, or worldview. The uniquely honed worldview of a
creative individual results in a distinctive style that is recognizable within
and across domains. It is further hypothesized that creativity is
domain-general
in the sense that there exist multiple avenues by which the distinctiveness of
one's worldview can be expressed. These hypotheses were tested using art
students and creative writing students. Art students guessed significantly
above chance both which painting was done by which of five famous artists, and
which artwork was done by which of their peers. Similarly, creative writing
students guessed significantly above chance both which passage was written by
which of five famous writers, and which passage was written by which of their
peers. These findings support the hypothesis that creative style is
recognizable. Moreover, creative writing students guessed significantly above
chance which of their peers produced particular works of art, supporting the
hypothesis that creative style is recognizable not just within but across
domains.
|
1005.1524
|
Cumulative-Separable Codes
|
cs.IT math.IT
|
q-ary cumulative-separable $\Gamma(L,G^{(j)})$-codes, with $L=\{ \alpha \in
GF(q^{m}):G(\alpha )\neq 0 \}$ and $G^{(j)}(x)=G(x)^{j}$, $1 \leq j \leq q$,
are considered. The relation between different codes from this class is
demonstrated. Improved bounds on the minimum distance and dimension are
obtained.
|
1005.1545
|
Improving Semi-Supervised Support Vector Machines Through Unlabeled
Instances Selection
|
cs.LG
|
Semi-supervised support vector machines (S3VMs) are a popular class of
approaches that try to improve learning performance by exploiting unlabeled
data. Though S3VMs have been found helpful in many situations, they may
degrade performance, and the resultant generalization ability may be even
worse than using the labeled data only. In this paper, we try to reduce the
chance of performance degeneration of S3VMs. Our basic idea is that, rather
than exploiting all unlabeled data, the unlabeled instances should be selected
such that only the ones which are very likely to be helpful are exploited,
while some highly risky unlabeled instances are avoided. We propose the
S3VM-\emph{us} method by using hierarchical clustering to select the unlabeled
instances. Experiments on a broad range of data sets over eighty-eight
different settings show that the chance of performance degeneration of
S3VM-\emph{us} is much smaller than that of existing S3VMs.
|
1005.1560
|
Computation using Noise-based Logic: Efficient String Verification over
a Slow Communication Channel
|
cs.IT math.IT physics.gen-ph
|
Utilizing the hyperspace of noise-based logic, we show two string
verification methods with low communication complexity. One of them is based on
continuum noise-based logic. The other one utilizes noise-based logic with
random telegraph signals where a mathematical analysis of the error probability
is also given. The last operation can also be interpreted as computing
universal hash functions with noise-based logic and using them for string
comparison. To find out, with error probability 10^-25, that two strings of
arbitrary length are different (a value comparable to the error probability
of an idealized gate in today's computers), Alice and Bob need to compare only
83 bits of the noise-based hyperspace.
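The universal-hashing interpretation can be sketched with plain random-parity
hashing (not the noise-based-logic construction itself): if two strings
differ, each independent parity bit exposes the difference with probability
1/2, so k compared bits leave a false-"equal" probability of $2^{-k}$, and
$2^{-83} \approx 10^{-25}$:

```python
import random

def differ_probably(s1, s2, k=83, seed=7):
    """Randomized comparison of two equal-length bit strings: compare k
    independent random-parity hash bits.  If the strings differ, each
    bit detects the difference with probability 1/2, so the error
    probability is 2**-k.  (A plain random-hashing sketch of the idea,
    not the noise-based-logic scheme.)"""
    rng = random.Random(seed)
    n = max(len(s1), len(s2))
    for _ in range(k):
        mask = [rng.randrange(2) for _ in range(n)]
        p1 = sum(m * b for m, b in zip(mask, s1)) % 2
        p2 = sum(m * b for m, b in zip(mask, s2)) % 2
        if p1 != p2:
            return True          # certainly different
    return False                 # equal, except with probability 2**-k

assert 2.0 ** -83 < 1.1e-25      # the ~1e-25 regime quoted above
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 0]
assert differ_probably(a, b)
assert not differ_probably(a, a)
```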
|
1005.1567
|
On The Power of Tree Projections: Structural Tractability of Enumerating
CSP Solutions
|
cs.AI cs.DB
|
The problem of deciding whether CSP instances admit solutions has been deeply
studied in the literature, and several structural tractability results have
been derived so far. However, constraint satisfaction comes in practice as a
computation problem where the focus is either on finding one solution, or on
enumerating all solutions, possibly projected to some given set of output
variables. The paper investigates the structural tractability of the problem of
enumerating (possibly projected) solutions, where tractability means here
computable with polynomial delay (WPD), since in general exponentially many
solutions may be computed. A general framework based on the notion of tree
projection of hypergraphs is considered, which generalizes all known
decomposition methods. Tractability results have been obtained both for classes
of structures where output variables are part of their specification, and for
classes of structures where computability WPD must be ensured for any possible
set of output variables. These results are shown to be tight, by exhibiting
dichotomies for classes of structures having bounded arity and where the tree
decomposition method is considered.
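Enumeration with polynomial delay can be pictured with a generator-style
backtracking enumerator; on instances whose structure makes the search
backtrack-free, the work between consecutive solutions is polynomially
bounded (a generic sketch, not the tree-projection machinery of the paper):

```python
def enumerate_csp(domains, constraints):
    """Backtracking generator yielding CSP solutions one by one.
    Each constraint is a (scope, check) pair, where check receives the
    partial assignment dict once its scope is fully assigned."""
    n = len(domains)

    def consistent(assign):
        return all(check(assign) for scope, check in constraints
                   if all(v in assign for v in scope))

    def extend(assign, i):
        if i == n:
            yield dict(assign)
            return
        for val in domains[i]:
            assign[i] = val
            if consistent(assign):
                yield from extend(assign, i + 1)
            del assign[i]

    yield from extend({}, 0)

# x0 < x1 < x2 over domain {0, 1, 2}: exactly one solution
domains = [[0, 1, 2]] * 3
constraints = [((0, 1), lambda a: a[0] < a[1]),
               ((1, 2), lambda a: a[1] < a[2])]
sols = list(enumerate_csp(domains, constraints))
assert sols == [{0: 0, 1: 1, 2: 2}]
```

Because it is a generator, the enumerator emits each solution as soon as it
is found, which is the behavior the WPD notion formalizes.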
|
1005.1594
|
Channel State Feedback over the MIMO-MAC
|
cs.IT math.IT
|
We consider the problem of designing low latency and low complexity schemes
for channel state feedback over the MIMO-MAC (multiple-input multiple-output
multiple access channel). We develop a framework for analyzing this problem in
terms of minimizing the MSE distortion, and come up with separated
source-channel schemes and joint source-channel schemes that perform better
than analog feedback. We also develop a strikingly simple code design based on
scalar quantization and uncoded QAM modulation that achieves the theoretical
asymptotic performance limit of the separated approach with very low complexity
and latency, in the case of single-antenna users.
|
1005.1625
|
On Some Results Related to Napoleon's Configurations
|
math.MG cs.IT math.GM math.IT
|
The goal of this paper is to give a purely geometric proof of a theorem by
Branko Gr\"unbaum concerning configuration of triangles coming from the
classical Napoleon's theorem in planar Euclidean geometry.
|
1005.1634
|
Interference Alignment in Regenerating Codes for Distributed Storage:
Necessity and Code Constructions
|
cs.IT cs.DC cs.NI math.IT
|
Regenerating codes are a class of recently developed codes for distributed
storage that, like Reed-Solomon codes, permit data recovery from any arbitrary
k of n nodes. However regenerating codes possess in addition, the ability to
repair a failed node by connecting to any arbitrary d nodes and downloading an
amount of data that is typically far less than the size of the data file. This
amount of download is termed the repair bandwidth. Minimum storage regenerating
(MSR) codes are a subclass of regenerating codes that require the least amount
of network storage; every such code is a maximum distance separable (MDS) code.
Further, when a replacement node stores data identical to that in the failed
node, the repair is termed as exact.
The four principal results of the paper are (a) the explicit construction of
a class of MDS codes for d = n-1 >= 2k-1 termed the MISER code, that achieves
the cut-set bound on the repair bandwidth for the exact-repair of systematic
nodes, (b) proof of the necessity of interference alignment in exact-repair MSR
codes, (c) a proof showing the impossibility of constructing linear,
exact-repair MSR codes for d < 2k-3 in the absence of symbol extension, and (d)
the construction, also explicit, of MSR codes for d = k+1. Interference
alignment (IA) is a theme that runs throughout the paper: the MISER code is
built on the principles of IA and IA is also a crucial component to the
non-existence proof for d < 2k-3. To the best of our knowledge, the
constructions presented in this paper are the first, explicit constructions of
regenerating codes that achieve the cut-set bound.
|
1005.1635
|
The Approximate Capacity Region of the Gaussian Z-Interference Channel
with Conferencing Encoders
|
cs.IT math.IT
|
A two-user Gaussian Z-Interference Channel (GZIC) is considered, in which
encoders are connected through noiseless links with finite capacities. In this
setting, prior to each transmission block the encoders communicate with each
other over the cooperative links. The capacity region and the sum-capacity of
the channel are characterized within 1.71 bits per user and 2 bits in total,
respectively. It is also established that properly sharing the total limited
cooperation capacity between the cooperative links may enhance the achievable
region, even when compared to the case of unidirectional transmitter
cooperation with infinite cooperation capacity. To obtain the results,
genie-aided upper bounds on the sum-capacity and cut-set bounds on the
individual rates are compared with the achievable rate region. In the
interference-limited regime, the achievable scheme enjoys a simple type of
Han-Kobayashi signaling, together with the zero-forcing, and basic relaying
techniques. In the noise-limited regime, it is shown that treating interference
as noise achieves the capacity region up to a single bit per user.
|
1005.1684
|
On Macroscopic Complexity and Perceptual Coding
|
cs.IT cs.AI cs.MM cs.SD math.IT
|
The theoretical limits of 'lossy' data compression algorithms are considered.
The complexity of an object as seen by a macroscopic observer is the size of
the perceptual code which discards all information that can be lost without
altering the perception of the specified observer. The complexity of this
macroscopically observed state is the simplest description of any microstate
comprising that macrostate. Inference and pattern recognition based on
macrostate rather than microstate complexities will take advantage of the
complexity of the macroscopic observer to ignore irrelevant noise.
|
1005.1711
|
On Design of Distributed Beamforming for Two-Way Relay Networks
|
cs.IT math.IT
|
We consider a two-way relay network, where two source nodes, S1 and S2,
exchange information through a cluster of relay nodes. The relay nodes receive
the sum signal from S1 and S2 in the first time slot. In the second time slot,
each relay node multiplies its received signal by a complex coefficient and
retransmits the signal to the two source nodes, which leads to a distributed
two-way beamforming system. By applying the principle of analog network coding,
each receiver at S1 and S2 cancels the "self-interference" in the received
signal from the relay cluster and decodes the message. This paper studies the
2-dimensional achievable rate region for such a two-way relay network with
distributed beamforming. With different assumptions of channel reciprocity
between the source-relay and relay-source channels, the achievable rate region
is characterized under two setups. First, with reciprocal channels, we
investigate the achievable rate regions when the relay cluster is subject to a
sum-power constraint or individual-power constraints. We show that the optimal
beamforming vectors obtained from solving the weighted sum inverse-SNR
minimization (WSISMin) problems are sufficient to characterize the
corresponding achievable rate region. Furthermore, we derive the closed form
solutions for those optimal beamforming vectors and consequently propose the
partially distributed algorithms to implement the optimal beamforming, where
each relay node only needs the local channel information and one global
parameter. Second, with the non-reciprocal channels, the achievable rate
regions are also characterized for both the sum-power constraint case and the
individual-power constraint case. Although no closed-form solutions are
available under this setup, we present efficient algorithms to compute the
optimal beamforming vectors, which are attained by solving SDP problems after
semi-definite relaxation.
|
1005.1715
|
Degrees of Freedom Region of a Class of Multi-source Gaussian Relay
Networks
|
cs.IT math.IT
|
We study a layered $K$-user $M$-hop Gaussian relay network consisting of
$K_m$ nodes in the $m^{\operatorname{th}}$ layer, where $M\geq2$ and
$K=K_1=K_{M+1}$. We observe that the time-varying nature of wireless channels
or fading can be exploited to mitigate the inter-user interference. The
proposed amplify-and-forward relaying scheme exploits such channel variations
and works for a wide class of channel distributions including Rayleigh fading.
We show a general achievable degrees of freedom (DoF) region for this class of
Gaussian relay networks. Specifically, the set of all $(d_1,..., d_K)$ such
that $d_i\leq 1$ for all $i$ and $\sum_{i=1}^K d_i\leq K_{\Sigma}$ is
achievable, where $d_i$ is the DoF of the $i^{\operatorname{th}}$
source--destination pair and $K_{\Sigma}$ is the maximum integer such that
$K_{\Sigma}\leq \min_m\{K_m\}$ and $M/K_{\Sigma}$ is an integer. We show that
surprisingly the achievable DoF region coincides with the cut-set outer bound
if $M/\min_m\{K_m\}$ is an integer, thus interference-free communication is
possible in terms of DoF. We further characterize an achievable DoF region
assuming multi-antenna nodes and general message set, which again coincides
with the cut-set outer bound for a certain class of networks.
|
1005.1716
|
Heuristics in Conflict Resolution
|
cs.AI cs.LO
|
Modern solvers for Boolean Satisfiability (SAT) and Answer Set Programming
(ASP) are based on sophisticated Boolean constraint solving techniques. In both
areas, conflict-driven learning and related techniques constitute key features
whose application is enabled by conflict analysis. Although various conflict
analysis schemes have been proposed, implemented, and studied both
theoretically and practically in the SAT area, the heuristic aspects involved
in conflict analysis have not yet received much attention. Assuming a fixed
conflict analysis scheme, we address the open question of how to identify
"good" reasons for conflicts, and we investigate several heuristics for
conflict analysis in ASP solving. To our knowledge, a systematic study like
ours has not yet been performed in the SAT area, thus, it might be beneficial
for both the field of ASP as well as the one of SAT solving.
|
1005.1785
|
Sidelobe Suppression for Robust Beamformer via The Mixed Norm Constraint
|
cs.IT math.IT
|
Applying a sparse constraint on the beam pattern has recently been suggested
to suppress the sidelobes of the minimum variance distortionless response
(MVDR) beamformer. To further improve performance, we add a mixed-norm
constraint on the beam pattern. It matches the beam pattern better,
encouraging a dense distribution in the mainlobe and a sparse distribution in
the sidelobe. The obtained beamformer has a lower sidelobe level and deeper
nulls for interference avoidance than the standard sparse-constraint-based
beamformer. Simulations demonstrate a considerable SINR gain from the lower
sidelobe level and deeper interference nulls, while robustness against the
mismatch between the steering angle and the direction of arrival (DOA) of the
desired signal, caused by imperfect DOA estimation, is also maintained.
|
1005.1800
|
Power-Efficient Ultra-Wideband Waveform Design Considering Radio Channel
Effects
|
cs.IT math.IT
|
This paper presents a power-efficient mask-constrained ultra-wideband (UWB)
waveform design with radio channel effects taken into consideration. Based on a
finite impulse response (FIR) filter, we develop a convex optimization model
with respect to the autocorrelation of the filter coefficients to optimize the
transmitted signal power spectrum, subject to a regulatory emission mask. To
improve power efficiency, effects of transmitter radio frequency (RF)
components are included in the optimization of the transmitter-output waveform,
and radio propagation effects are considered for optimizing at the receiver.
Optimum coefficients of the FIR filter are obtained through spectral
factorization of their autocorrelations. Simulation results show that the
proposed method is able to maximize the transmitted UWB signal power under mask
constraints set by regulatory authorities, while mitigating the power loss
caused by channel attenuations.
|
1005.1801
|
Sparse Support Recovery with Phase-Only Measurements
|
cs.IT math.IT math.NA
|
Sparse support recovery (SSR) is an important part of compressive sensing
(CS). Most current SSR methods assume full-information measurements, but in
practice the amplitude part of the measurements may be severely corrupted.
The corrupted measurements mismatch current SSR algorithms, leading to
serious performance degradation. This paper considers the problem of SSR with
only phase information. In the proposed method, minimizing the $\ell_1$ norm
of the estimated sparse signal enforces sparsity, while a nonzero constraint
on the uncorrupted random measurements' amplitudes with respect to the
reconstructed sparse signal is introduced. Because the constraint requires
only the phase components of the measurements, it avoids the performance
deterioration caused by corrupted amplitude components. Simulations
demonstrate that the proposed phase-only SSR is superior in support
reconstruction accuracy when the amplitude components of the measurements are
contaminated.
|
1005.1803
|
Anti-Sampling-Distortion Compressive Wideband Spectrum Sensing for
Cognitive Radio
|
cs.IT math.IT
|
An excessively high sampling rate is the bottleneck of wideband spectrum
sensing for cognitive radio in mobile communication. Compressed sensing (CS)
is introduced to relieve the sampling burden. Standard sparse signal recovery
in CS does not account for the distortion introduced by the
analog-to-information converter (AIC). To mitigate the performance
degradation caused by the mismatch of the least-squares distortionless
constraint, which ignores the AIC distortion, we model the sampling
distortion of the sparse signal as bounded additive noise and deduce an
anti-sampling-distortion constraint (ASDC). We then combine the l1-norm
sparsity constraint with the ASDC to obtain a novel sparse signal recovery
operator that is robust to sampling distortion. Numerical simulations
demonstrate that the proposed method outperforms standard sparse wideband
spectrum sensing in accuracy, denoising ability, etc.
|
1005.1804
|
Compressive Wideband Spectrum Sensing for Fixed Frequency Spectrum
Allocation
|
cs.IT math.IT
|
An excessively high sampling rate is the bottleneck of wideband spectrum
sensing for cognitive radio (CR). Since surveys show that the sensed signal
is, on the whole, sparse in the frequency domain, compressed sensing (CS) can
be used to transfer the sampling burden to the digital signal processor. An
analog-to-information converter (AIC) can randomly sample the received signal
at a sub-Nyquist rate to obtain random measurements. Because the static
frequency spectrum allocation of primary radios means that the boundaries
between different primary radios are known in advance, we incorporate the
spectrum boundaries between different primary users as a priori information
and obtain a mixed l2/l1 norm denoising operator (MNDO). In the MNDO, the
estimated power spectral density (PSD) vector is divided into blocks whose
boundaries correspond to the different allocated primary radios. In contrast
to the standard l1-norm constraint on the whole PSD vector, the sum of the l2
norms of the sections of the PSD vector is minimized to encourage a locally
grouped yet globally sparse distribution, while a relaxed constraint is used
to improve the denoising performance. Simulations demonstrate that the
proposed method outperforms standard sparse spectrum estimation in accuracy,
denoising ability, etc.
|
1005.1853
|
Lattice model refinement of protein structures
|
cs.CE physics.comp-ph q-bio.QM
|
To find the best lattice model representation of a given full atom protein
structure is a hard computational problem. Several greedy methods have been
suggested where results are usually biased and leave room for improvement. In
this paper we formulate and implement a Constraint Programming method to refine
such lattice structure models. We show that the approach is able to provide
better quality solutions. The prototype is implemented in COLA and is based on
limited discrepancy search. Finally, some promising extensions based on local
search are discussed.
|
1005.1860
|
Feature Selection Using Regularization in Approximate Linear Programs
for Markov Decision Processes
|
cs.AI
|
Approximate dynamic programming has been used successfully in a large variety
of domains, but it relies on a small set of provided approximation features to
calculate solutions reliably. Large and rich sets of features can cause
existing algorithms to overfit because of a limited number of samples. We
address this shortcoming using $L_1$ regularization in approximate linear
programming. Because the proposed method can automatically select the
appropriate richness of features, its performance does not degrade with an
increasing number of features. These results rely on new and stronger sampling
bounds for regularized approximate linear programs. We also propose a
computationally efficient homotopy method. The empirical evaluation of the
approach shows that the proposed method performs well on simple MDPs and
standard benchmark problems.
|
1005.1871
|
Subfield-Subcodes of Generalized Toric codes
|
cs.IT math.IT
|
We study subfield-subcodes of Generalized Toric (GT) codes over
$\mathbb{F}_{p^s}$. These are the multidimensional analogues of BCH codes,
which may be seen as subfield-subcodes of generalized Reed-Solomon codes. We
identify polynomial generators for subfield-subcodes of GT codes, which allow
us to determine their dimensions and obtain bounds on the minimum distance. We
give several examples of binary and ternary subfield-subcodes of GT codes that
are the best known codes of a given dimension and length.
|
1005.1918
|
Prediction with Expert Advice under Discounted Loss
|
cs.LG
|
We study prediction with expert advice in the setting where the losses are
accumulated with some discounting---the impact of old losses may gradually
vanish. We generalize the Aggregating Algorithm and the Aggregating Algorithm
for Regression to this case, propose a suitable new variant of exponential
weights algorithm, and prove respective loss bounds.
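A toy sketch of the discounting idea (plain exponential weights with a fixed per-round discount factor, not the Aggregating Algorithm itself; `eta`, `alpha`, and the loss sequence are hypothetical):

```python
import math

def discounted_hedge(expert_losses, eta=1.0, alpha=0.9):
    """Exponential-weights forecaster where past losses are discounted
    by alpha each round, so old mistakes gradually stop mattering."""
    n = len(expert_losses[0])
    cum = [0.0] * n                       # discounted cumulative losses
    learner_loss = 0.0
    for round_losses in expert_losses:
        cum = [alpha * c for c in cum]    # old losses fade
        weights = [math.exp(-eta * c) for c in cum]
        z = sum(weights)
        probs = [w / z for w in weights]
        learner_loss += sum(p * l for p, l in zip(probs, round_losses))
        cum = [c + l for c, l in zip(cum, round_losses)]
    return learner_loss

# Expert 0 is bad early, then perfect; with discounting, the learner
# forgives its early losses and switches back to it.
losses = [[1.0, 0.0]] * 5 + [[0.0, 1.0]] * 20
learner_loss = discounted_hedge(losses)
```

With `alpha < 1` the learner recovers from following the initially better expert; with `alpha = 1` this reduces to the usual undiscounted exponential weights.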
|
1005.1934
|
Scalable Probabilistic Databases with Factor Graphs and MCMC
|
cs.DB cs.AI
|
Probabilistic databases play a crucial role in the management and
understanding of uncertain data. However, incorporating probabilities into the
semantics of incomplete databases has posed many challenges, forcing systems to
sacrifice modeling power, scalability, or restrict the class of relational
algebra formula under which they are closed. We propose an alternative approach
where the underlying relational database always represents a single world, and
an external factor graph encodes a distribution over possible worlds; Markov
chain Monte Carlo (MCMC) inference is then used to recover this uncertainty to
a desired level of fidelity. Our approach allows the efficient evaluation of
arbitrary queries over probabilistic databases with arbitrary dependencies
expressed by graphical models with structure that changes during inference.
MCMC sampling provides efficiency by hypothesizing {\em modifications} to
possible worlds rather than generating entire worlds from scratch. Queries are
then run over the portions of the world that change, avoiding the onerous cost
of running full queries over each sampled world. A significant innovation of
this work is the connection between MCMC sampling and materialized view
maintenance techniques: we find empirically that using view maintenance
techniques is several orders of magnitude faster than naively querying each
sampled world. We also demonstrate our system's ability to answer relational
queries with aggregation, and demonstrate additional scalability through the
use of parallelization.
|
1005.2012
|
Dual Averaging for Distributed Optimization: Convergence Analysis and
Network Scaling
|
math.OC cs.SY stat.ML
|
The goal of decentralized optimization over a network is to optimize a global
objective formed by a sum of local (possibly nonsmooth) convex functions using
only local computation and communication. It arises in various application
domains, including distributed tracking and localization, multi-agent
co-ordination, estimation in sensor networks, and large-scale optimization in
machine learning. We develop and analyze distributed algorithms based on dual
averaging of subgradients, and we provide sharp bounds on their convergence
rates as a function of the network size and topology. Our method of analysis
allows for a clear separation between the convergence of the optimization
algorithm itself and the effects of communication constraints arising from the
network structure. In particular, we show that the number of iterations
required by our algorithm scales inversely in the spectral gap of the network.
The sharpness of this prediction is confirmed both by theoretical lower bounds
and simulations for various networks. Our approach includes both the cases of
deterministic optimization and communication, as well as problems with
stochastic optimization and/or communication.
|
1005.2061
|
Cooperative Diversity with Mobile Nodes: Capacity Outage Rate and
Duration
|
cs.IT math.IT
|
The outage probability is an important performance measure for cooperative
diversity schemes. However, in mobile environments, the outage probability does
not completely describe the behavior of cooperative diversity schemes since the
mobility of the involved nodes introduces variations in the channel gains. As a
result, the capacity outage events are correlated in time and second-order
statistical parameters of the achievable information-theoretic capacity such as
the average capacity outage rate (AOR) and the average capacity outage duration
(AOD) are required to obtain a more complete description of the properties of
cooperative diversity protocols. In this paper, assuming slow Rayleigh fading,
we derive exact expressions for the AOR and the AOD of three well-known
cooperative diversity protocols: variable-gain amplify-and-forward,
decode-and-forward, and selection decode-and-forward relaying. Furthermore, we
develop asymptotically tight high signal-to-noise ratio (SNR) approximations,
which offer important insights into the influence of various system and channel
parameters on the AOR and AOD. In particular, we show that on a
double-logarithmic scale, similar to the outage probability, the AOR
asymptotically decays with the SNR with a slope that depends on the diversity
gain of the cooperative protocol, whereas the AOD asymptotically decays with a
slope of -1/2 independent of the diversity gain.
|
1005.2146
|
On the Finite Time Convergence of Cyclic Coordinate Descent Methods
|
cs.LG cs.NA
|
Cyclic coordinate descent is a classic optimization method that has witnessed
a resurgence of interest in machine learning. Reasons for this include its
simplicity, speed and stability, as well as its competitive performance on
$\ell_1$ regularized smooth optimization problems. Surprisingly, very little is
known about its finite time convergence behavior on these problems. Most
existing results either just prove convergence or provide asymptotic rates. We
fill this gap in the literature by proving $O(1/k)$ convergence rates (where
$k$ is the iteration counter) for two variants of cyclic coordinate descent
under an isotonicity assumption. Our analysis proceeds by comparing the
objective values attained by the two variants with each other, as well as with
the gradient descent algorithm. We show that the iterates generated by the
cyclic coordinate descent methods remain better than those of gradient descent
uniformly over time.
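A minimal sketch of one classic instance of the method analyzed here: cyclic coordinate descent for the l1-regularized least-squares (Lasso) problem with unit-norm columns, where each one-dimensional subproblem has a closed-form soft-threshold solution. The problem sizes and regularization weight are illustrative only:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cyclic_cd_lasso(A, y, lam, n_sweeps=100):
    """Cyclically minimize 0.5*||y - Ax||^2 + lam*||x||_1 one coordinate
    at a time, keeping the residual up to date."""
    n = A.shape[1]
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_sweeps):
        for j in range(n):
            # add coordinate j's contribution back, solve the 1-D problem
            rho = A[:, j] @ residual + x[j]
            x_new = soft_threshold(rho, lam)
            residual += A[:, j] * (x[j] - x_new)
            x[j] = x_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.0]
x_hat = cyclic_cd_lasso(A, A @ x_true, lam=0.05)
```

The recovered coefficients sit slightly below the true values (the usual l1 shrinkage bias), and all off-support coordinates are driven to exactly zero by the soft threshold.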
|
1005.2179
|
Detecting Blackholes and Volcanoes in Directed Networks
|
cs.LG
|
In this paper, we formulate a novel problem of finding blackhole and volcano
patterns in a large directed graph. Specifically, a blackhole pattern is a
group of nodes that has only inlinks from the rest of the nodes in the graph.
In contrast, a volcano pattern is a group that has only outlinks to the rest
of the nodes in the graph. Both patterns can be observed in the real world.
For instance, in a trading network, a blackhole pattern may represent a group
of traders who are manipulating the market. In the paper, we first prove that
the blackhole mining problem is the dual of the volcano mining problem;
therefore, we focus on finding blackhole patterns. Along this line, we design
two pruning schemes to guide the search. The first scheme strategically
prunes the search space using a set of pattern-size-independent pruning
rules, yielding the iBlackhole algorithm. The second scheme follows a
divide-and-conquer strategy to further exploit the results of the first
scheme: the target directed graph can be divided into several disconnected
subgraphs by the first pruning scheme, so that blackhole finding can be
conducted within each disconnected subgraph rather than in one large graph.
Based on these two pruning schemes, we develop the iBlackhole-DC algorithm.
Finally, experimental results on real-world data show that the iBlackhole-DC
algorithm can be several orders of magnitude faster than the iBlackhole
algorithm, which in turn has a huge computational advantage over a
brute-force method.
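To make the pattern definition concrete, here is a sketch of the brute-force baseline that the pruning algorithms above are compared against (the toy trading graph is hypothetical, not from the paper):

```python
from itertools import combinations

def is_blackhole(group, edges):
    """A blackhole: no edge leaves the group, i.e. it has only inlinks."""
    g = set(group)
    return not any(u in g and v not in g for u, v in edges)

def find_blackholes(nodes, edges, size):
    """Brute-force search: test every node subset of the given size."""
    return [set(c) for c in combinations(nodes, size)
            if is_blackhole(c, edges)]

# Hypothetical toy trading graph: every trader sends to 3 or 4, while
# 3 and 4 trade only with each other -- so {3, 4} is a blackhole.
nodes = [0, 1, 2, 3, 4]
edges = [(0, 3), (1, 3), (1, 4), (2, 4), (3, 4), (4, 3), (0, 1)]
holes = find_blackholes(nodes, edges, 2)
```

The duality with volcanoes is visible here: reversing every edge turns each blackhole of the original graph into a volcano and vice versa. The brute force enumerates all C(n, k) subsets, which is exactly the cost the iBlackhole pruning rules avoid.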
|
1005.2243
|
Robustness and Generalization
|
cs.LG
|
We derive generalization bounds for learning algorithms based on their
robustness: the property that if a testing sample is "similar" to a training
sample, then the testing error is close to the training error. This provides a
novel approach, different from the complexity or stability arguments, to study
generalization of learning algorithms. We further show that a weak notion of
robustness is both sufficient and necessary for generalizability, which implies
that robustness is a fundamental property for learning algorithms to work.
|
1005.2249
|
Sparse Recovery with Orthogonal Matching Pursuit under RIP
|
cs.IT math.IT
|
This paper presents a new analysis for the orthogonal matching pursuit (OMP)
algorithm. It is shown that if the restricted isometry property (RIP) is
satisfied at sparsity level $O(\bar{k})$, then OMP can recover a
$\bar{k}$-sparse signal in 2-norm. For compressed sensing applications, this
result implies that in order to uniformly recover a $\bar{k}$-sparse signal in
$\mathbb{R}^d$, only $O(\bar{k} \ln d)$ random projections are needed. This analysis
improves earlier results on OMP that depend on stronger conditions such as
mutual incoherence that can only be satisfied with $\Omega(\bar{k}^2 \ln d)$
random projections.
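A minimal numpy sketch of the OMP iteration analyzed here (the dimensions and the Gaussian sensing matrix are illustrative, not from the paper): greedily select the column most correlated with the residual, then re-fit by least squares on the selected support.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: k greedy column selections, each
    followed by a least-squares re-fit on the current support."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # column with the largest absolute correlation with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# demo: recover a 2-sparse signal from 40 random Gaussian projections
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(100)
x_true[[7, 42]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

Once the correct support is identified, the least-squares step makes the residual exactly zero in the noiseless case, so recovery is exact.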
|
1005.2251
|
Interference Channel with a Half-Duplex Out-of-Band Relay
|
cs.IT math.IT
|
A Gaussian interference channel (IC) aided by a half-duplex relay is
considered, in which the relay receives and transmits in an orthogonal band
with respect to the IC. The system thus consists of two parallel channels, the
IC and the channel over which the relay is active, which is referred to as
Out-of-Band Relay Channel (OBRC). The OBRC is operated by separating a multiple
access phase from the sources to the relay and a broadcast phase from the relay
to the destinations. Conditions under which the optimal operation, in terms of
the sum-capacity, entails signal relaying and/or interference forwarding
by the relay are identified. These conditions also assess the optimality of
either separable or non-separable transmission over the IC and OBRC.
Specifically, the optimality of signal relaying and separable coding is
established for scenarios where the relay-to-destination channels set the
performance bottleneck with respect to the source-to-relay channels on the
OBRC. Optimality of interference forwarding and non-separable operation is also
established in special cases.
|
1005.2254
|
On Universal Complexity Measures
|
cs.IT cs.CC math.IT
|
We relate the computational complexity of finite strings to universal
representations of their underlying symmetries. First, Boolean functions are
classified using the universal covering topologies of the circuits which
enumerate them. A binary string is classified as a fixed point of its
automorphism group; the irreducible representation of this group is the
string's universal covering group. Such a measure may be used to test the
quasi-randomness of binary sequences with regard to first-order set membership.
Next, strings over general alphabets are considered. The complexity of a
general string is given by a universal representation which recursively factors
the codeword number associated with a string. This is the complexity of the
representation that recursively decodes a Gödel number having the value of the
string; the result is a tree of prime numbers which forms a universal
representation of the string's group symmetries.
|
1005.2263
|
Context models on sequences of covers
|
stat.ML cs.LG
|
We present a class of models that, via a simple construction, enables exact,
incremental, non-parametric, polynomial-time, Bayesian inference of conditional
measures. The approach relies upon creating a sequence of covers on the
conditioning variable and maintaining a different model for each set within a
cover. Inference remains tractable by specifying the probabilistic model in
terms of a random walk within the sequence of covers. We demonstrate the
approach on problems of conditional density estimation, which, to our
knowledge, is the first closed-form, non-parametric Bayesian approach to this
problem.
|
1005.2267
|
A Fast Compressive Channel Estimation with Modified Smoothed L0
Algorithm
|
cs.IT math.IT
|
A broadband wireless channel is time dispersive and becomes strongly
frequency selective. In most cases, the channel is composed of a few dominant
coefficients while the remaining coefficients are approximately or exactly
zero. Various methods have been proposed to exploit the sparsity of the
multipath channel (MPC), namely greedy algorithms, iterative algorithms, and
convex programming. The former two are easy to implement but not stable; the
last is stable but difficult to apply to practical channel estimation
problems because of its computational complexity. In this paper, we propose a
novel channel estimation strategy using a modified smoothed l0 (MSL0)
algorithm, which combines stability with low complexity. Computer simulations
confirm the effectiveness of the introduced algorithm in comparison with
existing methods.
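For orientation, a sketch of the standard smoothed-l0 (SL0) iteration that the paper modifies (the modification itself is not reproduced here, and all parameter values are illustrative): approximate the l0 "norm" by a sum of Gaussians of width sigma, take gradient steps on the smooth surrogate, project back onto the measurement constraint, and anneal sigma downward.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.5, mu=2.0, inner=10):
    """Standard smoothed-l0 sketch: gradient steps on the Gaussian
    surrogate of the l0 norm, alternated with projection onto
    {x : Ax = y}, while sigma is annealed toward zero."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
            x = x - A_pinv @ (A @ x - y)  # stay on the constraint set
        sigma *= sigma_decay
    return x

# demo: a 2-tap sparse channel observed through random projections
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[7, 42]] = [1.0, -0.8]
x_hat = sl0(A, A @ x_true)
```

The projection step keeps every iterate consistent with the measurements, so the annealing only has to steer the solution toward the sparse point on the constraint set.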
|
1005.2269
|
Sparse Multipath Channel Estimation using DS Algorithm in Wideband
Communication Systems
|
cs.IT cs.NI math.IT
|
A wideband wireless channel is time dispersive and becomes strongly
frequency selective. In most cases, however, the channel is composed of a few
dominant taps while the remaining taps are approximately or exactly zero;
such channels are often called sparse multipath channels (MPC). Conventional
linear MPC estimation methods, such as least squares (LS), do not exploit the
sparsity of the MPC. In general, an accurate sparse MPC estimate can be
obtained by solving a LASSO problem even in the presence of noise. In this
paper, a novel CS-based sparse MPC estimation method using the Dantzig
selector (DS) [1] is introduced. This method exploits the channel's sparsity
to reduce the length of the training sequence and hence increase spectral
efficiency compared with existing methods, as demonstrated by computer
simulations.
|
1005.2270
|
Sparse Multipath Channel Estimation Using Compressive Sampling Matching
Pursuit Algorithm
|
cs.IT math.IT
|
A wideband wireless channel is time dispersive and becomes strongly
frequency selective. In most cases, however, the channel is composed of a few
dominant taps while the remaining taps are approximately or exactly zero. Two
classes of methods have been proposed to exploit the sparsity of the
multipath channel (MPC): greedy algorithms and convex programming. Greedy
algorithms are easy to implement but not stable; convex programming is stable
but difficult to apply to practical channel estimation problems. In this
paper, we introduce a novel channel estimation strategy using the compressive
sampling matching pursuit (CoSaMP) algorithm proposed in [1], which combines
the greedy approach with convex programming. The effectiveness of the
proposed algorithm is confirmed through comparisons with existing methods.
|
1005.2296
|
Online Learning of Noisy Data with Kernels
|
cs.LG
|
We study online learning when individual instances are corrupted by
adversarially chosen random noise. We assume the noise distribution is unknown,
and may change over time with no restriction other than having zero mean and
bounded variance. Our technique relies on a family of unbiased estimators for
non-linear functions, which may be of independent interest. We show that a
variant of online gradient descent can learn functions in any dot-product
(e.g., polynomial) or Gaussian kernel space with any analytic convex loss
function. Our variant uses randomized estimates that need to query a random
number of noisy copies of each instance, where with high probability this
number is upper bounded by a constant. Allowing such multiple queries cannot be
avoided: Indeed, we show that online learning is in general impossible when
only one noisy copy of each instance can be accessed.
|
1005.2303
|
Towards Physarum Binary Adders
|
nlin.PS cs.AI physics.bio-ph q-bio.CB
|
Plasmodium of \emph{Physarum polycephalum} is a single cell visible to the
unaided eye. The plasmodium's foraging behaviour can be interpreted in terms
of computation: the input data is a configuration of nutrients, and the
result of the computation is a network of the plasmodium's cytoplasmic tubes
spanning the sources of nutrients. Tsuda et al (2004) experimentally
demonstrated that basic logical gates can be implemented in the foraging
behaviour of the plasmodium. We simplify the original designs of the gates
and show --- in computer models --- that the plasmodium is capable of
computing the two-input two-output gate $<x, y> \to <xy, x+y>$ and the
three-input two-output gate $<x, y, z> \to < \bar{x}yz, x+y+z>$. We assemble
the gates into a binary one-bit adder and demonstrate the validity of the
design using computer simulation.
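Read with $x+y$ as Boolean OR, the two gates have simple truth tables, and one possible assembly into a half adder (a hypothetical wiring for illustration, not necessarily the paper's design) falls out: XOR is NOT(x AND y) AND (x OR y), i.e. the three-input gate fed by the outputs of the two-input gate.

```python
def gate2(x, y):
    """Two-input two-output gate <x, y> -> <x AND y, x OR y>."""
    return x & y, x | y

def gate3(x, y, z):
    """Three-input two-output gate <x, y, z> -> <NOT(x) AND y AND z, x OR y OR z>."""
    return (1 - x) & y & z, x | y | z

def half_adder(x, y):
    """Sum = XOR = NOT(carry) AND (x OR y); carry = x AND y."""
    carry, either = gate2(x, y)
    s, _ = gate3(carry, either, 1)
    return s, carry
```

Chaining two such half adders (plus an OR for the carries) would give the one-bit full adder the abstract describes.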
|
1005.2308
|
Finding Your Literature Match -- A Recommender System
|
cs.DL cs.IR
|
The universe of potentially interesting, searchable literature is expanding
continuously. Besides the normal expansion, there is an additional influx of
literature because of interdisciplinary boundaries becoming more and more
diffuse. Hence, the need for accurate, efficient and intelligent search tools
is bigger than ever. Even with a sophisticated search engine, looking for
information can still result in overwhelming results. An overload of
information has the intrinsic danger of scaring visitors away, and any
organization, for-profit or not-for-profit, in the business of providing
scholarly information wants to capture and keep the attention of its target
audience. Publishers and search engine engineers alike will benefit from a
service that is able to provide visitors with recommendations that closely meet
their interests. Providing visitors with special deals, new options and
highlights may be interesting to a certain degree, but what makes more sense
(especially from a commercial point of view) than to let visitors do most of
the work by the mere action of making choices? Hiring psychics is not an
option, so a technological solution is needed to recommend items that a visitor
is likely to be looking for. In this presentation we will introduce such a
solution and argue that it is practically feasible to incorporate this approach
into a useful addition to any information retrieval system with enough usage.
|
1005.2321
|
Typical Sequences for Polish Alphabets
|
cs.IT math.IT
|
The notion of typical sequences plays a key role in the theory of
information. Central to the idea of typicality is that a sequence $x_1, x_2,
..., x_n$ that is $P_X$-typical should, loosely speaking, have an empirical
distribution that is in some sense close to the distribution $P_X$. The two
most common notions of typicality are that of strong (letter) typicality and
weak (entropy) typicality. While weak typicality allows one to apply many
arguments that can be made with strongly typical arguments, some arguments for
strong typicality cannot be generalized to weak typicality. In this paper, we
consider an alternate definition of typicality, namely one based on the weak*
topology and that is applicable to Polish alphabets (which includes
$\reals^n$). This notion is a generalization of strong typicality in the sense
that it degenerates to strong typicality in the finite alphabet case, and can
also be applied to mixed and continuous distributions. Furthermore, it is
strong enough to prove a Markov lemma, and thus can be used to directly prove a
more general class of results than weak typicality. As an example of this
technique, we directly prove achievability for Gel'fand-Pinsker channels with
input constraints for a large class of alphabets and channels without first
proving a finite alphabet result and then resorting to delicate quantization
arguments. While this large class does not include Gaussian distributions with
power constraints, it is shown to be straightforward to recover this case by
considering a sequence of truncated Gaussian distributions.
|
1005.2364
|
A Short Introduction to Model Selection, Kolmogorov Complexity and
Minimum Description Length (MDL)
|
cs.LG cs.CC
|
The concept of overfitting in model selection is explained and demonstrated
with an example. After providing some background information on information
theory and Kolmogorov complexity, we provide a short explanation of Minimum
Description Length and error minimization. We conclude with a discussion of the
typical features of overfitting in model selection.
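The overfitting phenomenon the note opens with can be demonstrated in a few lines (a hypothetical setup: noisy samples of a line, fitted by polynomials of growing degree): training error can only decrease with model complexity, while test error eventually grows.

```python
import numpy as np

rng = np.random.default_rng(2)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + 0.3 * rng.standard_normal(10)  # noisy line
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test                                    # noiseless truth

def fit_mse(degree):
    """Least-squares polynomial fit; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

train1, test1 = fit_mse(1)   # matches the true model class
train9, test9 = fit_mse(9)   # interpolates the noise
```

The degree-9 fit drives training error to essentially zero by memorizing the noise, exactly the behaviour a complexity penalty such as MDL is designed to punish.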
|
1005.2443
|
Network Coded Transmission of Fountain Codes over Cooperative Relay
Networks
|
cs.IT math.IT
|
In this paper, a transmission strategy of fountain codes over cooperative
relay networks is proposed. When more than one relay nodes are available, we
apply network coding to fountain-coded packets. By doing this, partial
information is made available to the destination node about the upcoming
message block. This reduces the required number of transmissions over erasure
channels, hence increasing the effective throughput. The scheme's application
to wireless channels with Rayleigh fading and additive white Gaussian noise
is also analysed, whereby the role of analogue network coding and optimal
weight selection is demonstrated.
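The core packet-combining step, in its simplest digital form (a toy XOR of two fountain-coded packets; the analogue network coding studied in the paper is the wireless-domain counterpart of this idea, and the byte strings are arbitrary):

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two relays hold fountain-coded packets p1 and p2; network coding lets
# one transmission of p1 XOR p2 carry partial information about both.
p1, p2 = b"\x0f\x10", b"\xa5\x5a"
coded = xor_bytes(p1, p2)

# A destination that already received p1 directly from the source can
# recover p2 from the coded packet without a retransmission.
recovered = xor_bytes(coded, p1)
```

This is why the destination accumulates "partial information about the upcoming message block": each network-coded packet is useful as soon as one of its constituents is known.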
|
1005.2533
|
Dynamics underlying Box-office: Movie Competition on Recommender Systems
|
physics.soc-ph cs.IR
|
We introduce a simple model to study movie competition on recommender
systems. Movies of heterogeneous quality compete against each other through
viewers' reviews and generate interesting box-office dynamics. By assuming
mean-field interactions between the competing movies, we show that a run-away
effect of popularity spreading is triggered by surpassing the average review
score, leading to box-office hits. The average review score thus
characterizes the critical movie quality necessary for transition from
box-office bombs to blockbusters. The major factors affecting the critical
review score are examined. By iterating the mean-field dynamical equations, we
obtain qualitative agreements with simulations and real systems in the
dynamical forms of box-office, revealing the significant role of competition in
understanding box-office dynamics.
|
1005.2544
|
Channel Estimation for Opportunistic Spectrum Access: Uniform and Random
Sensing
|
cs.IT math.IT
|
The knowledge of channel statistics can be very helpful in making sound
opportunistic spectrum access decisions. It is therefore desirable to be able
to efficiently and accurately estimate channel statistics. In this paper we
study the problem of optimally placing sensing times over a time window so as
to get the best estimate on the parameters of an on-off renewal channel. We are
particularly interested in a sparse sensing regime with a small number of
samples relative to the time window size. Using Fisher information as a
measure, we analytically derive the best and worst sensing sequences under a
sparsity condition. We also present a way to derive the best/worst sequences
without this condition using a dynamic programming approach. In both cases the
worst turns out to be the uniform sensing sequence, where sensing times are
evenly spaced within the window. With these results we argue that without a
priori knowledge, a robust sensing strategy should be a randomized strategy. We
then compare different random schemes using a family of distributions generated
by the circular $\beta$ ensemble, and propose an adaptive sensing scheme to
effectively track time-varying channel parameters. We further discuss the
applicability of compressive sensing for this problem.
|
1005.2603
|
Eigenvectors for clustering: Unipartite, bipartite, and directed graph
cases
|
cs.LG math.SP
|
This paper presents a concise tutorial on spectral clustering for a broad
spectrum of graphs, including unipartite (undirected), bipartite, and
directed graphs. We show how to transform bipartite and directed graphs into
corresponding unipartite graphs, thereby allowing a unified treatment of all
cases. For bipartite graphs, we show that the relaxed solution to $K$-way
co-clustering can be found by computing the left and right eigenvectors of the
data matrix. This gives a theoretical basis for the $K$-way spectral
co-clustering algorithms proposed in the literature. We also show that solving
row and column co-clustering is equivalent to solving row and column clustering
separately, thus giving theoretical support for the claim: ``column
clustering implies row clustering and vice versa''. In the last part, we
generalize the Ky Fan theorem---the central theorem for explaining
spectral clustering---to rectangular complex matrices, motivated by the
results from bipartite graph analysis.
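The bipartite-case statement (row and column co-clusters read off the leading left and right singular vectors of the data matrix) can be checked on a toy block matrix; the example matrix and the 0.1 threshold are illustrative:

```python
import numpy as np

# Toy data matrix with two planted co-clusters: rows 0-2 with columns
# 0-2 (strong block) and rows 3-5 with columns 3-5 (weaker block).
M = np.zeros((6, 6))
M[:3, :3] = 2.0
M[3:, 3:] = 1.0

# The relaxed K-way co-clustering solution lives in the leading left and
# right singular vectors of M (eigenvectors of M M^T and M^T M).
U, s, Vt = np.linalg.svd(M)
row_groups = (np.abs(U[:, 1]) > 0.1).astype(int)  # 2nd left vector splits rows
col_groups = (np.abs(Vt[1]) > 0.1).astype(int)    # 2nd right vector splits cols
```

Because the blocks are decoupled, each singular vector is supported on exactly one block, so the same singular vector pair partitions rows and columns simultaneously, which is the "column clustering implies row clustering" claim in miniature.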
|
1005.2613
|
Compressed Sensing with Coherent and Redundant Dictionaries
|
math.NA cs.IT math.IT
|
This article presents novel results concerning the recovery of signals from
undersampled data in the common situation where such signals are not sparse in
an orthonormal basis or incoherent dictionary, but in a truly redundant
dictionary. This work thus bridges a gap in the literature and shows not only
that compressed sensing is viable in this context, but also that accurate
recovery is possible via an L1-analysis optimization problem. We introduce a
condition on the measurement/sensing matrix, which is a natural generalization
of the now well-known restricted isometry property, and which guarantees
accurate recovery of signals that are nearly sparse in (possibly) highly
overcomplete and coherent dictionaries. This condition imposes no incoherence
restriction on the dictionary and our results may be the first of this kind. We
discuss practical examples and the implications of our results on those
applications, and complement our study by demonstrating the potential of
L1-analysis for such problems.
|
1005.2633
|
A Distributed Newton Method for Network Utility Maximization
|
math.OC cs.SY
|
Most existing work uses dual decomposition and subgradient methods to solve
Network Utility Maximization (NUM) problems in a distributed manner, but
these suffer from slow convergence. This work develops an
alternative distributed Newton-type fast converging algorithm for solving
network utility maximization problems with self-concordant utility functions.
By using novel matrix splitting techniques, both primal and dual updates for
the Newton step can be computed using iterative schemes in a decentralized
manner with limited information exchange. Similarly, the stepsize can be
obtained via an iterative consensus-based averaging scheme. We show that even
when the Newton direction and the stepsize in our method are computed within
some error (due to finite truncation of the iterative schemes), the resulting
objective function value still converges superlinearly to an explicitly
characterized error neighborhood. Simulation results demonstrate significant
convergence rate improvement of our algorithm relative to the existing
subgradient methods based on dual decomposition.
|
1005.2638
|
Hierarchical Clustering for Finding Symmetries and Other Patterns in
Massive, High Dimensional Datasets
|
stat.ML cs.CV cs.LG
|
Data analysis and data mining are concerned with unsupervised pattern finding
and structure determination in data sets. "Structure" can be understood as
symmetry and a range of symmetries are expressed by hierarchy. Such symmetries
directly point to invariants that pinpoint intrinsic properties of the data
and of the background empirical domain of interest. We review many aspects of
hierarchy here, including ultrametric topology, generalized ultrametric,
linkages with lattices and other discrete algebraic structures and with p-adic
number representations. By focusing on symmetries in data we have a powerful
means of structuring and analyzing massive, high dimensional data stores. We
illustrate the power of hierarchical clustering through case studies in
chemistry and finance, and we provide pointers to other published case studies.
|
1005.2646
|
An Algebraic Approach to Physical-Layer Network Coding
|
cs.IT math.IT
|
The problem of designing new physical-layer network coding (PNC) schemes via
lattice partitions is considered. Building on recent work by Nazer and
Gastpar, who demonstrated its asymptotic gain using information-theoretic
tools, we take an algebraic approach to show its potential in non-asymptotic
settings. We first relate Nazer-Gastpar's approach to the fundamental theorem
of finitely generated modules over a principal ideal domain. Based on this
connection, we generalize their code construction and simplify their encoding
and decoding methods. This not only provides a transparent understanding of
their approach, but more importantly, it opens up the opportunity to design
efficient and practical PNC schemes. Finally, we apply our framework for PNC to
a Gaussian relay network and demonstrate its advantage over conventional PNC
schemes.
|
1005.2662
|
Fastest Distributed Consensus Averaging Problem on Perfect and Complete
n-ary Tree networks
|
cs.IT cs.DC cs.NI math.IT
|
Solving the fastest distributed consensus averaging problem (i.e., finding
weights on the edges that minimize the second-largest eigenvalue modulus of the
weight matrix) over networks with different topologies is one of the primary
research areas in the field of sensor networks, and the tree network is one of
the well-known topologies in this context. In this work we present an
analytical solution to the fastest distributed consensus averaging problem, by
means of stratification and semidefinite programming, for two particular types
of tree networks, namely perfect and complete n-ary tree networks. Our method
is based on the convexity of the fastest distributed consensus averaging
problem and on an inductive comparison of characteristic polynomials, initiated
by the slackness conditions, in order to find the optimal weights. In addition,
the optimal weights for the edges of certain types of branches, such as perfect
and complete n-ary tree branches, are determined independently of the rest of
the network.
|
1005.2704
|
Characterizing and modeling the dynamics of online popularity
|
physics.soc-ph cs.CY cs.SI
|
Online popularity has enormous impact on opinions, culture, policy, and
profits. We provide a quantitative, large scale, temporal analysis of the
dynamics of online content popularity in two massive model systems: Wikipedia
and an entire country's Web space. We find that the dynamics of
popularity are characterized by bursts, displaying characteristic features of
critical systems such as fat-tailed distributions of magnitude and inter-event
time. We propose a minimal model combining the classic preferential popularity
increase mechanism with the occurrence of random popularity shifts due to
exogenous factors. The model recovers the critical features observed in the
empirical analysis of the systems analyzed here, highlighting the key factors
needed in the description of popularity dynamics.
|
1005.2710
|
Capacity of a Class of Multicast Tree Networks
|
cs.IT math.IT
|
In this paper, we characterize the capacity of a new class of single-source
multicast discrete memoryless relay networks having a tree topology in which
the root node is the source and each parent node in the graph has at most one
noisy child node and any number of noiseless child nodes. This class of
multicast tree networks includes the class of diamond networks studied by Kang
and Ulukus as a special case, where they showed that the capacity can be
strictly lower than the cut-set bound. For achievability, a novel coding scheme
is constructed where each noisy relay employs a combination of
decode-and-forward (DF) and compress-and-forward (CF) and each noiseless relay
performs a random binning such that codebook constructions and relay operations
are independent for each node and do not depend on the network topology. For
the converse, a new technique of iteratively manipulating inequalities exploiting
the tree topology is used.
|
1005.2714
|
Structural Drift: The Population Dynamics of Sequential Learning
|
q-bio.PE cs.LG
|
We introduce a theory of sequential causal inference in which learners in a
chain estimate a structural model from their upstream teacher and then pass
samples from the model to their downstream student. It extends the population
dynamics of genetic drift, recasting Kimura's selectively neutral theory as a
special case of a generalized drift process using structured populations with
memory. We examine the diffusion and fixation properties of several drift
processes and propose applications to learning, inference, and evolution. We
also demonstrate how the organization of drift process space controls fidelity,
facilitates innovations, and leads to information loss in sequential learning
with and without memory.
|
1005.2715
|
On the Subspace of Image Gradient Orientations
|
cs.CV
|
We introduce the notion of Principal Component Analysis (PCA) of image
gradient orientations. Because image data are typically noisy, and this noise
is substantially non-Gaussian, traditional PCA of pixel intensities very often
fails to reliably estimate the low-dimensional subspace of a given data
population. We show that replacing intensities with gradient orientations and
the $\ell_2$ norm with a cosine-based distance measure offers, to some extent,
a remedy to this problem. Our scheme requires the eigen-decomposition
of a covariance matrix and is as computationally efficient as standard $\ell_2$
PCA. We demonstrate some of its favorable properties on robust subspace
estimation.
|
1005.2731
|
Cross-Band Interference Considered Harmful in OFDM Based Distributed
Spectrum Sharing
|
cs.IT math.IT
|
In the past few years we have witnessed the paradigm shift from static
spectrum allocation to dynamic spectrum access/sharing. Orthogonal
Frequency-Division Multiple Access (OFDMA) is a promising mechanism to
implement the agile spectrum access. However, in wireless distributed networks
where tight synchronization is infeasible, OFDMA faces the problem of
cross-band interference. Subcarriers used by different users are no longer
orthogonal, and transmissions operating on non-overlapping subcarriers can
interfere with each other. In this paper, we explore the cause of cross-band
interference and analytically quantify its strength and impact on packet
transmissions. Our analysis captures three key practical artifacts: inter-link
frequency offset, temporal sampling mismatch, and power heterogeneity. To the
best of our knowledge, this work is the first to systematically analyze the
cause and impact of cross-band interference. Using insights from our analysis,
we then build and compare three mitigation methods to combat cross-band
interference.
Analytical and simulation results show that placing a frequency guardband at link
boundaries is the most effective solution in distributed spectrum sharing,
while the other two frequency-domain methods are sensitive to either temporal
sampling mismatch or inter-link frequency offset. We find that the proper
guardband size depends heavily on power heterogeneity. Consequently, protocol
designs for dynamic spectrum access should carefully take into account the
cross-band interference when configuring spectrum usage.
|
1005.2759
|
Secrecy-Achieving Polar-Coding for Binary-Input Memoryless Symmetric
Wire-Tap Channels
|
cs.IT cs.CR math.IT
|
A polar coding scheme is introduced in this paper for the wire-tap channel.
It is shown that the provided scheme achieves the entire rate-equivocation
region for the case of a symmetric and degraded wire-tap channel, under the
weak notion of secrecy. For the particular case of the binary erasure wire-tap
channel, an alternative proof is given. The case of general
non-degraded wire-tap channels is also considered.
|
1005.2770
|
Capacity-Achieving Polar Codes for Arbitrarily-Permuted Parallel
Channels
|
cs.IT math.IT
|
Channel coding over arbitrarily-permuted parallel channels was first studied
by Willems et al. (2008). This paper introduces capacity-achieving polar coding
schemes for arbitrarily-permuted parallel channels where the component channels
are memoryless, binary-input and output-symmetric.
|