id | title | categories | abstract |
|---|---|---|---|
1211.4321 | Bayesian nonparametric models for ranked data | stat.ML cs.LG stat.ME | We develop a Bayesian nonparametric extension of the popular Plackett-Luce
choice model that can handle an infinite number of choice items. Our framework
is based on the theory of random atomic measures, with the prior specified by a
gamma process. We derive a posterior characterization and a simple and
effective Gibbs sampler for posterior simulation. We develop a time-varying
extension of our model, and apply it to the New York Times lists of weekly
bestselling books.
|
1211.4346 | Characterization and computation of infinite horizon specifications over
Markov processes | math.OC cs.LO cs.SY math.PR | This work is devoted to the formal verification of specifications over
general discrete-time Markov processes, with an emphasis on infinite-horizon
properties. These properties, formulated in a modal logic known as PCTL, can be
expressed through value functions defined over the state space of the process.
The main goal is to understand how structural features of the model (primarily
the presence of absorbing sets) influence the uniqueness of the solutions of
corresponding Bellman equations. Furthermore, this contribution shows that the
investigation of these structural features leads to new computational
techniques to calculate the specifications of interest: the emphasis is to
derive approximation techniques with associated explicit convergence rates and
formal error bounds.
|
1211.4370 | An Algorithm for Optimized Searching using NON-Overlapping Iterative
Neighbor intervals | cs.DS cs.IR | In this paper we attempt to reduce the number of checked conditions by
saving the frequency of tandem-replicated words and by applying
non-overlapping iterative neighbor intervals to the plane sweep algorithm. The
essential idea of non-overlapping iterative neighbor search in a document is
to focus the search not on the full space of solutions but on a smaller
subspace of non-overlapping intervals defined by the solutions. The subspace
is defined by the range near the specified minimum-frequency keyword. We
repeatedly pick a range and flip the unsatisfied keywords, so that the
relevant ranges are detected. The proposed method improves the plane sweep
algorithm by efficiently computing the minimal group of words and enumerating
the intervals in a document that contain the minimum-frequency keyword. It
decreases the number of comparisons and yields an optimized search,
especially over high volumes of data. Efficiency and reliability are also
increased compared with previous versions of this technique.
|
1211.4371 | Building a health care data warehouse for cancer diseases | cs.DB | This paper presents an architecture for a health care data warehouse specific
to cancer diseases, which could be used by executive managers, doctors,
physicians and other health professionals to support the health care process.
Because today's data exist in multiple sources with different formats,
techniques for data integration are necessary. Executive managers need access
to information so that decision makers can react in real time to changing
needs. Information is one of the most important factors in an organization's
success, and executive managers and physicians base their decisions on it
during decision making. A health care data warehouse is therefore necessary
to integrate the different data sources into a central data repository and to
analyze these data.
|
1211.4372 | A Framework for Uplink Intercell Interference Modeling with
Channel-Based Scheduling | math.ST cs.IT math.IT stat.TH | This paper presents a novel framework for modeling the uplink intercell
interference (ICI) in a multiuser cellular network. The proposed framework
assists in quantifying the impact of various fading channel models and
state-of-the-art scheduling schemes on the uplink ICI. Firstly, we derive a
semianalytical expression for the distribution of the location of the scheduled
user in a given cell considering a wide range of scheduling schemes. Based on
this, we derive the distribution and moment generating function (MGF) of the
uplink ICI considering a single interfering cell. Consequently, we determine
the MGF of the cumulative ICI observed from all interfering cells and derive
explicit MGF expressions for three typical fading models. Finally, we utilize
the obtained expressions to evaluate important network performance metrics such
as the outage probability, ergodic capacity, and average fairness numerically.
Monte-Carlo simulation results are provided to demonstrate the efficacy of the
derived analytical expressions.
|
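The step from the single-interfering-cell MGF to the cumulative ICI in abstract 1211.4372 above can be made explicit. Assuming the per-cell interference terms $I_k$ from $K$ interfering cells are mutually independent (the symbols $K$ and $I_k$ are our notation, not the paper's), the MGF of their sum factors into a product:

```latex
M_{I_{\mathrm{agg}}}(s)
  = \mathbb{E}\!\left[ e^{s \sum_{k=1}^{K} I_k} \right]
  = \prod_{k=1}^{K} M_{I_k}(s)
  \overset{\text{i.i.d.}}{=} \left[ M_{I_1}(s) \right]^{K},
```

where the last equality holds only in the symmetric case where all interfering cells are statistically identical.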
1211.4381 | Degrees-of-Freedom Region of Time Correlated MISO Broadcast Channel with
Perfect Delayed CSIT and Asymmetric Partial Current CSIT | cs.IT math.IT | The impact of imperfect CSIT on the degrees of freedom (DoF) of a time
correlated MISO Broadcast Channel has drawn a lot of attention recently.
Maddah-Ali and Tse have shown that even completely stale CSIT still benefits
the DoF. In very recent works, Yang et al. have extended these results by
integrating partial current CSIT for a two-user MISO broadcast channel.
However, this research has so far focused on the symmetric case. In this
contribution, we investigate a more general case in which the transmitter has
knowledge of the current CSI of both users with unequal qualities. The
essential ingredient of our work lies in the way the overheard interference
is multicast to boost the DoF. The optimal DoF region is proved simply, and
its achievability is shown using a novel transmission scheme assuming an
infinite number of channel uses.
|
1211.4384 | A Sensing Policy Based on Confidence Bounds and a Restless Multi-Armed
Bandit Model | cs.IT cs.LG math.IT | A sensing policy for the restless multi-armed bandit problem with stationary
but unknown reward distributions is proposed. The work is presented in the
context of cognitive radios in which the bandit problem arises when deciding
which parts of the spectrum to sense and exploit. It is shown that the proposed
policy attains asymptotically logarithmic weak regret rate when the rewards are
bounded independent and identically distributed or finite state Markovian.
Simulation results verifying uniformly logarithmic weak regret are also
presented. The proposed policy is a centrally coordinated index policy, in
which the index of a frequency band is comprised of a sample mean term and a
confidence term. The sample mean term promotes spectrum exploitation whereas
the confidence term encourages exploration. The confidence term is designed
such that the time interval between consecutive sensing instances of any
suboptimal band grows exponentially. This exponential growth between suboptimal
sensing time instances leads to logarithmically growing weak regret. Simulation
results demonstrate that the proposed policy performs better than other similar
methods in the literature.
|
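The index structure described in abstract 1211.4384 above (a sample-mean term plus a confidence term) can be sketched with a generic UCB-style index. Note that the paper's actual confidence term, designed so that gaps between sensings of a suboptimal band grow exponentially, differs from this textbook choice, and the band means and horizon below are invented for the example.

```python
import math
import random

def ucb_index(mean, n, t, c=2.0):
    # Sample-mean term promotes exploitation; the square-root
    # confidence term encourages exploration of rarely sensed bands.
    return mean + math.sqrt(c * math.log(t) / n)

def simulate(band_means, horizon, seed=0):
    """Sense the band with the largest index once per slot;
    return how often each band was sensed."""
    rng = random.Random(seed)
    k = len(band_means)
    counts, sums = [0] * k, [0.0] * k
    for i in range(k):                      # sense every band once first
        counts[i] = 1
        sums[i] = 1.0 if rng.random() < band_means[i] else 0.0
    for t in range(k + 1, horizon + 1):
        i = max(range(k),
                key=lambda j: ucb_index(sums[j] / counts[j], counts[j], t))
        sums[i] += 1.0 if rng.random() < band_means[i] else 0.0
        counts[i] += 1
    return counts

counts = simulate([0.9, 0.5, 0.2], horizon=2000)
```

With logarithmic regret, the best band accumulates the overwhelming majority of the sensing slots while each suboptimal band is sensed only O(log t) times.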
1211.4385 | Artificial Neural Network Based Optical Character Recognition | cs.CV cs.NE | Optical Character Recognition deals with the recognition and classification
of characters from an image. For the recognition to be accurate, certain
topological and geometrical properties are calculated, based on which a
character is classified and recognized. Human psychology, too, perceives
characters by their overall shape and by features such as strokes, curves,
protrusions, enclosures, etc. These properties, also called features, are
extracted from the image by means of spatial pixel-based calculations. A
collection of such features, called a feature vector, helps define a
character uniquely, by means of an Artificial Neural Network that uses these
feature vectors.
|
1211.4392 | Cost Efficient High Capacity Indoor Wireless Access: Denser Wi-Fi or
Coordinated Pico-cellular? | cs.IT cs.NI math.IT | Rapidly increasing traffic demand has forced indoor operators to deploy more
and more Wi-Fi access points (APs). As AP density increases, inter-AP
interference rises and may limit the capacity. Alternatively, cellular
technologies using centralized interference coordination can provide the same
capacity with fewer APs, at the price of higher equipment and installation
costs. It is still not obvious at what demand level more sophisticated
coordination pays off in terms of total system cost. To make this
comparison, we assess the required AP density of three candidate systems for a
given average demand: a Wi-Fi network, a conventional pico-cellular network
with frequency planning, and an advanced system employing multi-cell joint
processing. Numerical results show that dense Wi-Fi is the cheapest solution at
a relatively low demand level. However, the AP density grows quickly at a
critical demand level regardless of propagation conditions. Beyond this Wi-Fi
network limit, the conventional pico-cellular network works and is cheaper than
the joint processing in obstructed environments, e.g., furnished offices with
walls. In line-of-sight conditions, such as stadiums, the joint processing
becomes the most viable solution. The drawback is that extremely accurate
channel state information at transmitters is needed.
|
1211.4410 | Mixture Gaussian Process Conditional Heteroscedasticity | cs.LG stat.ML | Generalized autoregressive conditional heteroscedasticity (GARCH) models have
long been considered as one of the most successful families of approaches for
volatility modeling in financial return series. In this paper, we propose an
alternative approach based on methodologies widely used in the field of
statistical machine learning. Specifically, we propose a novel nonparametric
Bayesian mixture of Gaussian process regression models, each component of which
models the noise variance process that contaminates the observed data as a
separate latent Gaussian process driven by the observed data. This way, we
essentially obtain a mixture Gaussian process conditional heteroscedasticity
(MGPCH) model for volatility modeling in financial return series. We impose a
nonparametric prior with power-law nature over the distribution of the model
mixture components, namely the Pitman-Yor process prior, to allow for better
capturing modeled data distributions with heavy tails and skewness. Finally, we
provide a copula-based approach for obtaining a predictive posterior for the
covariances over the asset returns modeled by means of a postulated MGPCH
model. We evaluate the efficacy of our approach in a number of benchmark
scenarios, and compare its performance to state-of-the-art methodologies.
|
1211.4414 | Towards a Scalable Dynamic Spatial Database System | cs.DB cs.CG cs.DC | With the rise of GPS-enabled smartphones and other similar mobile devices,
massive amounts of location data are available. However, no scalable solutions
for soft real-time spatial queries on large sets of moving objects have yet
emerged. In this paper we explore and measure the limits of current
algorithms and implementations under different application scenarios.
Finally, we propose a novel distributed architecture to solve these
scalability issues.
|
1211.4415 | Discrete-Time Poles and Dynamics of Discontinuous Mode Boost and Buck
Converters Under Various Control Schemes | cs.SY math.DS nlin.CD | Nonlinear systems, such as switching DC-DC boost or buck converters, have
rich dynamics. A simple one-dimensional discrete-time model is used to analyze
the boost or buck converter in discontinuous conduction mode. Seven different
control schemes (open-loop power stage, voltage mode control, current mode
control, constant power load, constant current load, constant-on-time control,
and boundary conduction mode) are analyzed systematically. The linearized
dynamics is obtained simply by taking partial derivatives with respect to
dynamic variables. In the discrete-time model, there is only a single pole and
no zero. The single closed-loop pole is a linear combination of three terms:
the open-loop pole, a term due to the control scheme, and a term due to the
non-resistive load. Even with a single pole, the phase response of the
discrete-time model can go beyond -90 degrees as in the two-pole average
models. In the boost converter with a resistive load under current mode
control, adding the compensating ramp has no effect on the pole location.
Increasing the ramp slope decreases the DC gain of control-to-output transfer
function and increases the audio-susceptibility. Similar analysis is applied to
the buck converter with a non-resistive load or variable switching frequency.
The derived dynamics agrees closely with the exact switching model and the past
research results.
|
1211.4422 | Continuous Models of Epidemic Spreading in Heterogeneous Dynamically
Changing Random Networks | cs.SI physics.soc-ph | Modeling spreading processes in complex random networks plays an essential
role in understanding and prediction of many real phenomena like epidemics or
rumor spreading. The dynamics of such systems may be represented
algorithmically by Monte-Carlo simulations on graphs or by ordinary
differential equations (ODEs). Despite many results in the area of network
modeling, the selection of the best computational representation of the model
dynamics remains a challenge. While a closed-form description is often
straightforward to derive, it generally cannot be solved analytically; as a
consequence the network dynamics requires a numerical solution of the ODEs or a
direct Monte-Carlo simulation on the networks. Moreover, Monte-Carlo
simulations and ODE solutions are not equivalent since ODEs produce a
deterministic solution while Monte-Carlo simulations are stochastic by nature.
Despite some recent advances in Monte-Carlo simulations, particularly in the
flexibility of implementation, the computational cost of an ODE solution is
much lower and supports accurate and detailed output analysis such as
uncertainty or sensitivity analyses, parameter identification etc. In this
paper we propose a novel approach to model spreading processes in complex
random heterogeneous networks using systems of nonlinear ordinary differential
equations. We successfully apply this approach to predict the dynamics of
HIV-AIDS spreading in sexual networks, and compare it to historical data.
|
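The ODE representation argued for in abstract 1211.4422 above can be illustrated with the simplest compartmental example. The sketch below is a homogeneous SIR model integrated by forward Euler, not the heterogeneous-network system the paper develops; all parameter values are invented for the illustration.

```python
def sir_ode(beta, gamma, s0, i0, t_end, dt=0.01):
    """Forward-Euler integration of the classic SIR ODEs:
    ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i.
    s, i, r are population fractions, so s + i + r stays 1."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(t_end / dt)):
        ds = -beta * s * i               # susceptibles becoming infected
        di = beta * s * i - gamma * i    # new infections minus recoveries
        dr = gamma * i                   # recoveries
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

# Basic reproduction number R0 = beta/gamma = 4: a large outbreak.
s, i, r = sir_ode(beta=0.4, gamma=0.1, s0=0.99, i0=0.01, t_end=200.0)
```

The deterministic trajectory this produces is exactly the kind of output that, per the abstract, cannot be obtained from a single stochastic Monte-Carlo run.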
1211.4441 | On the Separability of Targets Using Binary Proximity Sensors | cs.IT math.IT | We consider the problem where a network of sensors has to detect the presence
of targets at any of $n$ possible locations in a finite region. Not all of
these locations need be occupied by a target. The data from the sensors are
fused to
determine the set of locations that have targets. We term this the separability
problem. In this paper, we address the separability of an asymptotically large
number of static target locations by using binary proximity sensors. Two models
for target locations are considered: (i) when target locations lie on a
uniformly spaced grid; and, (ii) when target locations are i.i.d. uniformly
distributed in the area. Sensor locations are i.i.d uniformly distributed in
the same finite region, independent of target locations. We derive conditions
on the sensing radius and the number of sensors required to achieve
separability. Order-optimal scaling laws, on the number of sensors as a
function of the number of target locations, for two types of separability
requirements are derived. The robustness and security aspects of the above
problem are also addressed. It is shown that in the presence of adversarial
sensors, which toggle their sensed reading and inject binary noise, the scaling
laws for separability remain unaffected.
|
1211.4445 | Efficient Spectrum Sharing in the Presence of Multiple Narrowband
Interference | cs.IT cs.NI math.IT | In this paper, we study the spectrum usage efficiency by applying wideband
methods and systems to the existing analog systems and applications. The
essential motivation of this work is to define the prospective coexistence
between analog FM and digital Spread Spectrum systems in an efficient way
sharing the same frequency band. The overlaid Spread Spectrum (SS) system can
spectrally coexist with the existing narrowband Frequency Modulated (FM)
broadcasting system subject to several limitations, providing a key
motivation for the use of the FM radio frequency band in many applications,
including wireless personal and sensor networks. The performance of the SS
system due to the overlaying analog FM system, consisting of multiple
narrowband FM stations, is investigated in order to derive the relevant bit
error probability and maximum achievable data rates. The SS system uses direct
sequence (DS) spreading, through maximal length pseudorandom sequences with
long spreading codes. The SS signal is evaluated through theoretical and
simulation-based performance analysis for various spreading scenarios,
carrier frequency offsets ({\Delta}f) and signal-to-interference ratios, in
order to derive valuable results for the future development and planning of
an overlay scenario.
|
1211.4464 | Free-surface flow simulations for discharge-based operation of hydraulic
structure gates | cs.CE physics.flu-dyn | We combine non-hydrostatic flow simulations of the free surface with a
discharge model based on elementary gate flow equations for decision support in
operation of hydraulic structure gates. A water level-based gate control used
in most of today's general practice does not take into account the fact that
gate operation scenarios producing similar total discharged volumes and similar
water levels may have different local flow characteristics. Accurate and timely
prediction of local flow conditions around hydraulic gates is important for
several aspects of structure management: ecology, scour, flow-induced gate
vibrations and waterway navigation. The modelling approach is described and
tested for a multi-gate sluice structure regulating discharge from a river to
the sea. The number of opened gates is varied and the discharge is stabilized
with automated control by varying gate openings. The free-surface model was
validated for discharge showing a correlation coefficient of 0.994 compared to
experimental data. Additionally, we show the analysis of CFD results for
evaluating bed stability and gate vibrations.
|
1211.4488 | A Rule-Based Approach For Aligning Japanese-Spanish Sentences From A
Comparable Corpora | cs.CL cs.AI | The performance of a Statistical Machine Translation (SMT) system is
directly proportional to the quality and size of the parallel corpus it uses.
However, for some language pairs there is a considerable lack of such
corpora. Our long-term goal is to construct a Japanese-Spanish parallel
corpus to be used for SMT, since useful Japanese-Spanish parallel corpora are
scarce. To address this problem, in this study we propose a method for
extracting Japanese-Spanish parallel sentences from Wikipedia using POS
tagging and a rule-based approach. The main focus of this approach is the
syntactic features of both languages. Human evaluation was performed over a
sample and shows promising results in comparison with the baseline.
|
1211.4499 | Rate-Distortion Analysis of Multiview Coding in a DIBR Framework | cs.CV | Depth image based rendering techniques for multiview applications have been
recently introduced for efficient view generation at arbitrary camera
positions. Encoding rate control has thus to consider both texture and depth
data. Due to different structures of depth and texture images and their
different roles on the rendered views, distributing the available bit budget
between them however requires a careful analysis. Information loss due to
texture coding affects the value of pixels in synthesized views while errors in
depth information lead to shift in objects or unexpected patterns at their
boundaries. In this paper, we address the problem of efficient bit allocation
between textures and depth data of multiview video sequences. We adopt a
rate-distortion framework based on a simplified model of depth and texture
images. Our model preserves the main features of depth and texture images.
Unlike most recent solutions, our method avoids rendering at encoding time
for distortion estimation, so that the encoding complexity is not increased.
In addition, our model is independent of the underlying inpainting method
used at the decoder. Experiments confirm our theoretical
results and the efficiency of our rate allocation strategy.
|
1211.4503 | An Effective Fingerprint Classification and Search Method | cs.CV cs.CR | This paper presents an effective fingerprint classification method based on
a hierarchical agglomerative clustering technique. The performance of the
technique was evaluated on several real-life datasets, and a significant
reduction of the misclassification error was observed. This paper also
presents a query-based, faster fingerprint search method over the clustered
fingerprint databases. The retrieval accuracy of the search method has been
found effective on several real-life databases.
|
1211.4518 | Hypothesis Testing in Feedforward Networks with Broadcast Failures | cs.IT cs.LG math.IT | Consider a countably infinite set of nodes, which sequentially make decisions
between two given hypotheses. Each node takes a measurement of the underlying
truth, observes the decisions from some immediate predecessors, and makes a
decision between the given hypotheses. We consider two classes of broadcast
failures: 1) each node broadcasts a decision to the other nodes, subject to
random erasure in the form of a binary erasure channel; 2) each node broadcasts
a randomly flipped decision to the other nodes in the form of a binary
symmetric channel. We are interested in whether there exists a decision
strategy consisting of a sequence of likelihood ratio tests such that the node
decisions converge in probability to the underlying truth. In both cases, we
show that if each node only learns from a bounded number of immediate
predecessors, then there does not exist a decision strategy such that the
decisions converge in probability to the underlying truth. However, in case 1,
we show that if each node learns from an unboundedly growing number of
predecessors, then the decisions converge in probability to the underlying
truth, even when the erasure probabilities converge to 1. We also derive the
convergence rate of the error probability. In case 2, we show that if each node
learns from all of its previous predecessors, then the decisions converge in
probability to the underlying truth when the flipping probabilities of the
binary symmetric channels are bounded away from 1/2. In the case where the
flipping probabilities converge to 1/2, we derive a necessary condition on the
convergence rate of the flipping probabilities such that the decisions still
converge to the underlying truth. We also explicitly characterize the
relationship between the convergence rate of the error probability and the
convergence rate of the flipping probabilities.
|
1211.4520 | Storing cycles in Hopfield-type networks with pseudoinverse learning
rule: admissibility and network topology | cs.NE | Cyclic patterns of neuronal activity are ubiquitous in animal nervous
systems, and partially responsible for generating and controlling rhythmic
movements such as locomotion, respiration, swallowing and so on. Clarifying the
role of the network connectivities for generating cyclic patterns is
fundamental for understanding the generation of rhythmic movements. In this
paper, the storage of binary cycles in neural networks is investigated. We call
a cycle $\Sigma$ admissible if a connectivity matrix satisfying the cycle's
transition conditions exists, and construct it using the pseudoinverse learning
rule. Our main focus is on the structural features of admissible cycles and
corresponding network topology. We show that $\Sigma$ is admissible if and only
if its discrete Fourier transform contains exactly $r=\operatorname{rank}(\Sigma)$ nonzero
columns. Based on the decomposition of the rows of $\Sigma$ into loops, where a
loop is the set of all cyclic permutations of a row, cycles are classified as
simple cycles, separable or inseparable composite cycles. Simple cycles contain
rows from one loop only, and the network topology is a feedforward chain with
feedback to one neuron if the loop-vectors in $\Sigma$ are cyclic permutations
of each other. Composite cycles contain rows from at least two disjoint loops,
and the neurons corresponding to the rows in $\Sigma$ from the same loop are
identified with a cluster. Networks constructed from separable composite cycles
decompose into completely isolated clusters. For inseparable composite cycles
at least two clusters are connected, and the cluster-connectivity is related to
the intersections of the spaces spanned by the loop-vectors of the clusters.
Simulations showing successfully retrieved cycles in continuous-time
Hopfield-type networks and in networks of spiking neurons are presented.
|
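The pseudoinverse learning rule mentioned in abstract 1211.4520 above can be sketched directly: collect the cycle states as columns of X, their successors as Y, and set W = Y X⁺. The 4-neuron, 3-state cycle below is an invented example with linearly independent states, so the transitions are reproduced exactly.

```python
import numpy as np

# An invented binary cycle xi1 -> xi2 -> xi3 -> xi1 on 4 neurons.
# Columns of X are the current states; columns of Y their successors.
X = np.array([[ 1, -1,  1],
              [ 1,  1, -1],
              [-1,  1,  1],
              [-1, -1, -1]], dtype=float)
Y = np.roll(X, -1, axis=1)   # shift columns left: successor of each state

# Pseudoinverse learning rule: W = Y X^+ .
W = Y @ np.linalg.pinv(X)

# With linearly independent states, X^+ X = I, so W xi_t = xi_{t+1}
# exactly and the network reproduces the stored cycle under the sign map.
cycle_stored = np.array_equal(np.sign(W @ X), Y)
```

When the states are linearly dependent, W X only projects onto the row space of X, which is where the admissibility condition on the discrete Fourier transform of the cycle comes into play.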
1211.4521 | Hash in a Flash: Hash Tables for Solid State Devices | cs.DB cs.DS cs.IR | In recent years, information retrieval algorithms have taken center stage for
extracting important data from ever larger datasets. Advances in hardware
technology have led to the increasingly widespread use of flash storage
devices. Such devices have clear benefits over traditional hard drives in terms
of latency of access, bandwidth and random access capabilities particularly
when reading data. There are however some interesting trade-offs to consider
when leveraging the advanced features of such devices. On a relative scale
writing to such devices can be expensive. This is because typical flash devices
(NAND technology) are updated in blocks. A minor update to a given block
requires the entire block to be erased, followed by a re-writing of the block.
On the other hand, sequential writes can be two orders of magnitude faster than
random writes. In addition, random writes are degrading to the life of the
flash drive, since each block can support only a limited number of erasures.
TF-IDF can be implemented using a counting hash table. In general, hash tables
are a particularly challenging case for the flash drive because this data
structure is inherently dependent upon the randomness of the hash function, as
opposed to the spatial locality of the data. This makes it difficult to avoid
the random writes incurred during the construction of the counting hash table
for TF-IDF. In this paper, we will study the design landscape for the
development of a hash table for flash storage devices. We demonstrate how to
effectively design a hash table with two related hash functions, one of which
exhibits a data placement property with respect to the other. Specifically, we
focus on three designs based on this general philosophy and evaluate the
trade-offs among them along the axes of query performance, insert and update
times and I/O time through an implementation of the TF-IDF algorithm.
|
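The TF-IDF computation that motivates the hash-table design in abstract 1211.4521 above can be sketched with ordinary in-memory counting hash tables (Python dicts). The flash-aware two-hash-function layout the paper develops is not reproduced here, and the toy documents are invented.

```python
import math
from collections import defaultdict

def tf_idf(docs):
    """TF-IDF via counting hash tables: one term->count table per
    document, plus a document-frequency table across documents."""
    n_docs = len(docs)
    df = defaultdict(int)            # term -> number of docs containing it
    term_counts = []
    for doc in docs:
        tf = defaultdict(int)        # counting hash table for this document
        for term in doc.lower().split():
            tf[term] += 1
        term_counts.append(tf)
        for term in tf:
            df[term] += 1
    # Weight: raw count times log inverse document frequency.
    return [{t: c * math.log(n_docs / df[t]) for t, c in tf.items()}
            for tf in term_counts]

weights = tf_idf(["flash storage devices",
                  "flash hash tables",
                  "hash tables for flash"])
```

Each `tf[term] += 1` lands at a hash-determined, effectively random location, which is precisely the access pattern that is cheap in RAM but expensive on NAND flash.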
1211.4524 | Applying Dynamic Model for Multiple Manoeuvring Target Tracking Using
Particle Filtering | cs.CV cs.AI | In this paper, we apply a dynamic model for manoeuvring targets within the
SIR particle filter algorithm to improve the tracking accuracy of multiple
manoeuvring targets. In our proposed approach, a colour distribution model is
used to detect changes in a target's model, and the approach controls the
deformation of the target's model. If the deformation of the target's model
is larger than a predetermined threshold, the model is updated. The Global
Nearest Neighbor (GNN) algorithm is used for data association. We name our
proposed method the Deformation Detection Particle Filter (DDPF). The DDPF
approach is compared with the basic SIR-PF algorithm on real airshow videos.
The comparison results show that the basic SIR-PF algorithm is not able to
track manoeuvring targets when rotation or scaling occurs in a target's
model, whereas the DDPF approach updates the target's model in these cases.
Thus, the proposed approach is able to track manoeuvring targets more
efficiently and accurately.
|
1211.4552 | A Dataset for StarCraft AI \& an Example of Armies Clustering | cs.AI | This paper advocates the exploration of the full state of recorded real-time
strategy (RTS) games, by human or robotic players, to discover how to reason
about tactics and strategy. We present a dataset of StarCraft games
encompassing most of the games' state (not only the players' orders). We
explain one possible usage of this dataset by clustering armies based on
their compositions. This reduction of army compositions to Gaussian mixtures
allows for strategic reasoning at the level of the components. We evaluated
this clustering method by predicting the outcomes of battles based on the
mixture components of the armies' compositions.
|
1211.4555 | Distributed Control of Generation in a Transmission Grid with a High
Penetration of Renewables | cs.SY math.OC | Deviations of grid frequency from the nominal frequency are an indicator of
the global imbalance between generation and load. Two types of control, a
distributed proportional control and a centralized integral control, are
currently used to keep frequency deviations small. Although generation-load
imbalance can be very localized, both controls primarily rely on frequency
deviation as their input. The time scales of control require the outputs of
the centralized integral control to be communicated to distant generators
every few seconds. We reconsider this control/communication architecture and
suggest a hybrid approach that utilizes parameterized feedback policies that
can be implemented in a fully distributed manner because the inputs to these
policies are local observables at each generator. Using an ensemble of
forecasts of load and time-intermittent generation representative of possible
future scenarios, we perform a centralized off-line stochastic optimization
to select the generator-specific feedback parameters. These parameters need
only be communicated to generators once per control period (60 minutes in our
simulations). We show that inclusion of local power flows as feedback inputs
is crucial and reduces frequency deviations by a factor of ten. We
demonstrate our control on a detailed transmission model of the Bonneville
Power Administration (BPA). Our findings suggest that a smart automatic and
distributed control, relying on advanced off-line and system-wide
computations communicated to controlled generators infrequently, may be a
viable control and communication architecture solution. This architecture is
suitable for a future situation when generation-load imbalances are expected
to grow because of increased penetration of time-intermittent generation.
|
1211.4591 | Five Modulus Method For Image Compression | cs.CV cs.MM | Data is compressed by reducing its redundancy, but this also makes the data
less reliable and more prone to errors. This paper presents a novel approach
to image compression based on a new method called the Five Modulus Method
(FMM). The method converts each pixel value in an 8-by-8 block into a
multiple of 5 for each of the R, G and B arrays. The new values can then be
divided by 5, giving 6-bit values for each pixel, which require less storage
space than the original 8-bit values. A new protocol for transmitting the new
values as a stream of bits is also presented, which makes it possible to
store and transfer the compressed image easily.
|
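A minimal sketch of the FMM mapping described in abstract 1211.4591 above, assuming "converting to a multiple of 5" means rounding to the nearest multiple (the abstract does not spell out the rounding); the bit-stream protocol is not reproduced.

```python
def fmm_compress(pixels):
    """Round each 8-bit value to the nearest multiple of 5 and divide
    by 5: the codes lie in 0..51 and therefore fit in 6 bits."""
    return [round(p / 5) for p in pixels]

def fmm_decompress(codes):
    """Multiply back by 5; lossy, with per-pixel error of at most 2."""
    return [c * 5 for c in codes]

codes = fmm_compress([0, 7, 128, 255])
restored = fmm_decompress(codes)
```

The error bound follows because the nearest multiple of 5 is at most 2 away from any integer pixel value.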
1211.4627 | Enabling Social Applications via Decentralized Social Data Management | cs.SI cs.CY cs.DC physics.soc-ph | An unprecedented information wealth produced by online social networks,
further augmented by location/collocation data, is currently fragmented across
different proprietary services. Combined, it can accurately represent the
social world and enable novel socially-aware applications. We present
Prometheus, a socially-aware peer-to-peer service that collects social
information from multiple sources into a multigraph managed in a decentralized
fashion on user-contributed nodes, and exposes it through an interface
implementing non-trivial social inferences while complying with user-defined
access policies. Simulations and experiments on PlanetLab with emulated
application workloads show the system exhibits good end-to-end response time,
low communication overhead and resilience to malicious attacks.
|
1211.4649 | Artificial-Noise Alignment for Secure Multicast using Multiple Antennas | cs.IT math.IT | We propose an artificial-noise alignment scheme for multicasting a
common-confidential message to a group of receivers. Our scheme transmits a
superposition of information and noise symbols. The noise symbols are aligned
at each legitimate receiver and hence the information symbols can be decoded.
In contrast, the noise symbols completely mask the information symbols at the
eavesdroppers. Our proposed scheme does not require the knowledge of the
eavesdropper's channel gains at the transmitter for alignment, yet it achieves
the best-known lower bound on the secure degrees of freedom. Our scheme is also
a natural generalization of the approach of transmitting artificial noise in
the null-space of the legitimate receiver's channel, previously proposed in the
literature.
|
1211.4654 | Application of Data mining in Protein sequence Classification | cs.CE | Protein sequence classification involves feature selection for accurate
classification. Popular protein sequence classification techniques involve
extraction of specific features from the sequences. Researchers apply some
well-known classification techniques like neural networks, Genetic algorithm,
Fuzzy ARTMAP, Rough Set Classifier etc. for accurate classification. This paper
presents a review of three different classification models: the neural
network model, the fuzzy ARTMAP model and the Rough set classifier model. This
is followed by a new technique for classifying protein sequences. The proposed
model is implemented with a purpose-built tool and tries to reduce the
computational overheads encountered by earlier approaches and increase the
accuracy of classification.
|
1211.4657 | Forest Sparsity for Multi-channel Compressive Sensing | cs.LG cs.CV cs.IT math.IT stat.ML | In this paper, we investigate a new compressive sensing model for
multi-channel sparse data where each channel can be represented as a
hierarchical tree and different channels are highly correlated. Therefore, the
full data could follow the forest structure, and we call this property
\emph{forest sparsity}. It exploits both intra- and inter-channel correlations
and enriches the family of existing model-based compressive sensing theories.
The proposed theory indicates that only $\mathcal{O}(Tk+\log(N/k))$
measurements are required for multi-channel data with forest sparsity, where
$T$ is the number of channels, $N$ and $k$ are the length and sparsity number
of each channel respectively. This result is much better than
$\mathcal{O}(Tk+T\log(N/k))$ of tree sparsity, $\mathcal{O}(Tk+k\log(N/k))$ of
joint sparsity, and far better than $\mathcal{O}(Tk+Tk\log(N/k))$ of standard
sparsity. In addition, we extend the forest sparsity theory to the multiple
measurement vectors problem, where the measurement matrix is a block-diagonal
matrix. The result shows that the required measurement bound can be the same as
that for dense random measurement matrix, when the data shares equal energy in
each channel. A new algorithm is developed and applied on four example
applications to validate the benefit of the proposed model. Extensive
experiments demonstrate the effectiveness and efficiency of the proposed theory
and algorithm.
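For intuition about the gap between the four bounds quoted above, one can tabulate them for a representative channel count, length and sparsity (constants and the base of the logarithm are ignored; the numbers are illustrative, not from the paper):

```python
from math import log

# Illustrative comparison of the asymptotic measurement bounds:
# T channels, each of length N with sparsity k.
T, N, k = 8, 4096, 64

forest   = T * k + log(N / k)          # O(Tk + log(N/k))   - forest sparsity
tree     = T * k + T * log(N / k)      # O(Tk + T log(N/k)) - tree sparsity
joint    = T * k + k * log(N / k)      # O(Tk + k log(N/k)) - joint sparsity
standard = T * k + T * k * log(N / k)  # O(Tk + Tk log(N/k))- standard sparsity

# With T < k, forest sparsity gives the smallest bound.
assert forest < tree < joint < standard
```
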
|
1211.4658 | An Effective Method for Fingerprint Classification | cs.CV cs.CR | This paper presents an effective method for fingerprint classification using
a data mining approach. Initially, it generates a numeric code sequence for each
fingerprint image based on the ridge flow patterns. Then for each class, a seed
is selected by using a frequent itemsets generation technique. These seeds are
subsequently used for clustering the fingerprint images. The proposed method
was tested and evaluated on several real-life datasets, and a significant
improvement in reducing misclassification errors was observed in comparison
to its counterparts.
|
1211.4665 | A Decentralized Method for Joint Admission Control and Beamforming in
Coordinated Multicell Downlink | cs.IT math.IT | In cellular networks, admission control and beamforming optimization are
intertwined problems. While beamforming optimization aims at satisfying users'
quality-of-service (QoS) requirements or improving the QoS levels, admission
control looks at how a subset of users should be selected so that the
beamforming optimization problem can yield a reasonable solution in terms of
the QoS levels provided. However, in order to simplify the design, the two
problems are usually seen as separate problems. This paper considers joint
admission control and beamforming (JACoB) under a coordinated multicell MISO
downlink scenario. We formulate JACoB as a user number maximization problem,
where selected users are guaranteed to receive the QoS levels they requested.
The formulated problem is combinatorial and hard, and we derive a convex
approximation to the problem. A merit of our convex approximation formulation
is that it can be easily decomposed for per-base-station decentralized
optimization, namely, via block coordinate descent. The efficacy of the proposed
decentralized method is demonstrated by simulation results.
|
1211.4674 | On Whitespace Identification Using Randomly Deployed Sensors | cs.IT math.IT | This work considers the identification of the available whitespace, i.e., the
regions that are not covered by any of the existing transmitters, within a
given geographical area. To this end, $n$ sensors are deployed at random
locations within the area. These sensors detect the presence of a
transmitter within their radio range $r_s$, and their individual decisions are
combined to estimate the available whitespace. The limiting behavior of the
recovered whitespace as a function of $n$ and $r_s$ is analyzed. It is shown
that both the fraction of the available whitespace that the nodes fail to
recover and their radio range optimally scale as $\log(n)/n$ as $n$
gets large. The analysis is extended to the case of unreliable sensors, and it
is shown that, surprisingly, the optimal scaling is still $\log(n)/n$ even in
this case. A related problem of estimating the number of transmitters and their
locations is also analyzed, with the sum absolute error in localization as
performance metric. The optimal scaling of the radio range and the necessary
minimum transmitter separation are determined, ensuring that the sum absolute
error in transmitter localization is minimized, with high probability, as $n$
gets large. Finally, the optimal distribution of sensor deployment is
determined, given the distribution of the transmitters, and the resulting
performance benefit is characterized.
|
1211.4683 | Content based video retrieval | cs.MM cs.CV | Content based video retrieval is an approach for facilitating the searching
and browsing of large video collections over the World Wide Web. In this approach,
video analysis is conducted on low level visual properties extracted from video
frame. We believed that in order to create an effective video retrieval system,
visual perception must be taken into account. We conjectured that a technique
which employs multiple features for indexing and retrieval would be more
effective in the discrimination and search tasks of videos. In order to
validate this claim, content based indexing and retrieval systems were
implemented using color histogram, various texture features and other
approaches. Videos were stored in Oracle 9i Database and a user study measured
correctness of response.
|
1211.4709 | A New Similarity Measure for Taxonomy Based on Edge Counting | cs.AI cs.IR | This paper introduces a new similarity measure based on edge counting in a
taxonomy like WordNet or an ontology. Measurement of similarity between text
segments or concepts is very useful for many applications like information
retrieval, ontology matching, text mining, and question answering and so on.
Several measures have been developed for measuring similarity between two
concepts: out of these we see that the measure given by Wu and Palmer [1] is
simple, and gives good performance. Our measure is based on their measure but
strengthens it. The Wu and Palmer [1] measure has the disadvantage that it does
not consider how semantically distant the concepts are. In our measure we
include the shortest path between the concepts and the depth of the whole
taxonomy together with the distances used in Wu and Palmer [1]. The measure
also has a further disadvantage: in some situations, the similarity of two
elements of an IS-A ontology contained in the neighborhood exceeds the
similarity value of two elements contained in the same hierarchy. Our measure
introduces a penalization factor for this case based upon the shortest path
between the concepts and the depth of the whole taxonomy.
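The classical Wu-Palmer score that this abstract builds on can be sketched as follows; the `penalized_sim` variant below is only a hypothetical illustration of damping the score by path length and taxonomy depth, not the paper's exact formula:

```python
# Toy IS-A taxonomy given as a child -> parent map.
parent = {"dog": "mammal", "cat": "mammal", "mammal": "animal", "fish": "animal"}

def ancestors(c):
    """Return [c, parent(c), ..., root]."""
    path = [c]
    while c in parent:
        c = parent[c]
        path.append(c)
    return path

def wu_palmer(c1, c2):
    a1, a2 = ancestors(c1), ancestors(c2)
    lcs = next(a for a in a1 if a in set(a2))   # lowest common subsumer
    depth = lambda c: len(ancestors(c))         # root has depth 1
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

def penalized_sim(c1, c2, taxonomy_depth=3):
    """Hypothetical variant: damp the score by the path length between concepts."""
    a1, a2 = ancestors(c1), ancestors(c2)
    lcs = next(a for a in a1 if a in set(a2))
    path_len = a1.index(lcs) + a2.index(lcs)    # edges via the LCS
    return wu_palmer(c1, c2) * (1 - path_len / (2 * taxonomy_depth))

assert abs(wu_palmer("dog", "cat") - 2/3) < 1e-9   # LCS "mammal", depths 3, 3
assert abs(wu_palmer("dog", "fish") - 0.4) < 1e-9  # LCS "animal"
```

Note how "dog"/"fish" score lower than "dog"/"cat" because their common subsumer sits higher in the taxonomy, which is exactly the behaviour Wu-Palmer captures.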
|
1211.4728 | Lemma for Linear Feedback Shift Registers and DFTs Applied to Affine
Variety Codes | cs.IT cs.DM math.AC math.CO math.IT | In this paper, we establish a lemma in algebraic coding theory that
frequently appears in the encoding and decoding of, e.g., Reed-Solomon codes,
algebraic geometry codes, and affine variety codes. Our lemma corresponds to
the non-systematic encoding of affine variety codes, and can be stated by
giving a canonical linear map as the composition of an extension through linear
feedback shift registers from a Gr\"obner basis and a generalized inverse
discrete Fourier transform. We clarify that our lemma yields the error-value
estimation in the fast erasure-and-error decoding of a class of dual affine
variety codes. Moreover, we show that systematic encoding corresponds to a
special case of erasure-only decoding. The lemma enables us to reduce the
computational complexity of error-evaluation from O(n^3) using Gaussian
elimination to O(qn^2) with some mild conditions on n and q, where n is the
code length and q is the finite-field size.
|
1211.4753 | A unifying representation for a class of dependent random measures | stat.ML cs.LG | We present a general construction for dependent random measures based on
thinning Poisson processes on an augmented space. The framework is not
restricted to dependent versions of a specific nonparametric model, but can be
applied to all models that can be represented using completely random measures.
Several existing dependent random measures can be seen as specific cases of
this framework. Interesting properties of the resulting measures are derived
and the efficacy of the framework is demonstrated by constructing a
covariate-dependent latent feature model and topic model that obtain superior
predictive performance.
|
1211.4755 | Interference in Poisson Networks with Isotropically Distributed Nodes | cs.IT math.IT | Practical wireless networks are finite, and hence non-stationary with nodes
typically non-homogeneously deployed over the area. This leads to a
location-dependent performance and to boundary effects which are both often
neglected in network modeling. In this work, interference in networks with
nodes distributed according to an isotropic but not necessarily stationary
Poisson point process (PPP) is studied. The resulting link performance is
precisely characterized as a function of (i) an arbitrary receiver location and
of (ii) an arbitrary isotropic shape of the spatial distribution. Closed-form
expressions for the first moment and the Laplace transform of the interference
are derived for the path loss exponents $\alpha=2$ and $\alpha=4$, and simple
bounds are derived for other cases. The developed model is applied to practical
problems in network analysis: for instance, the accuracy loss due to neglecting
border effects is shown to be undesirably high within transition regions of
certain deployment scenarios. Using a throughput metric not relying on the
stationarity of the spatial node distribution, the spatial throughput locally
around a given node is characterized.
|
1211.4771 | Matching Through Features and Features Through Matching | cs.CV | This paper addresses how to construct features for the problem of image
correspondence, in particular, the paper addresses how to construct features so
as to maintain the right level of invariance versus discriminability. We show
that without additional prior knowledge of the 3D scene, the right tradeoff
cannot be established in a pre-processing step of the images as is typically
done in most feature-based matching methods. However, given knowledge of the
second image to match, the tradeoff between invariance and discriminability of
features in the first image is less ambiguous. This suggests setting up the
problem of feature extraction and matching as a joint estimation problem. We
develop a possible mathematical framework, a possible computational algorithm,
and we give an example demonstration of finding correspondences between images related
by a scene that undergoes large 3D deformation of non-planar objects and camera
viewpoint change.
|
1211.4783 | Inference of the Russian drug community from one of the largest social
networks in the Russian Federation | cs.SI physics.soc-ph | The criminal nature of narcotics complicates the direct assessment of a drug
community, while having a good understanding of the type of people drawn or
currently using drugs is vital for finding effective intervening strategies.
Especially for the Russian Federation this is of immediate concern given the
dramatic increase it has seen in drug abuse since the fall of the Soviet Union
in the early nineties. Using unique data from the Russian social network
'LiveJournal' with over 39 million registered users worldwide, we were able for
the first time to identify the on-line drug community by context sensitive text
mining of the users' blogs using a dictionary of known drug-related official
and 'slang' terminology. By comparing the interests of the users that most
actively spread information on narcotics over the network with the interests of
the individuals outside the on-line drug community, we found that the 'average'
drug user in the Russian Federation is generally mostly interested in topics
such as Russian rock, non-traditional medicine, UFOs, Buddhism, yoga and the
occult. We identify three distinct scale-free sub-networks of users which can
be uniquely classified as being either 'infectious', 'susceptible' or 'immune'.
|
1211.4795 | A Unifying Variational Perspective on Some Fundamental Information
Theoretic Inequalities | cs.IT math.IT | This paper proposes a unifying variational approach for proving and extending
some fundamental information theoretic inequalities. Fundamental information
theory results such as maximization of differential entropy, minimization of
Fisher information (Cram\'er-Rao inequality), worst additive noise lemma,
entropy power inequality (EPI), and extremal entropy inequality (EEI) are
interpreted as functional problems and proved within the framework of calculus
of variations. Several applications and possible extensions of the proposed
results are briefly mentioned.
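For reference, two of the inequalities named above can be written out explicitly (standard statements, not results of this paper):

```latex
% Entropy power inequality (EPI): for independent X, Y in R^n with densities,
%   N(X + Y) >= N(X) + N(Y), where the entropy power is
N(X) = \frac{1}{2\pi e}\, e^{2h(X)/n},
% with h(.) the differential entropy; equality holds iff X and Y are
% Gaussian with proportional covariances.
% Cramer-Rao inequality: for an unbiased estimator \hat{\theta} of \theta,
\operatorname{Var}(\hat{\theta}) \ge \frac{1}{I(\theta)},
% where I(\theta) is the Fisher information.
```
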
|
1211.4798 | A survey of non-exchangeable priors for Bayesian nonparametric models | stat.ML cs.LG | Dependent nonparametric processes extend distributions over measures, such as
the Dirichlet process and the beta process, to give distributions over
collections of measures, typically indexed by values in some covariate space.
Such models are appropriate priors when exchangeability assumptions do not
hold, and instead we want our model to vary fluidly with some set of
covariates. Since the concept of dependent nonparametric processes was
formalized by MacEachern [1], there have been a number of models proposed and
used in the statistics and machine learning literatures. Many of these models
exhibit underlying similarities, an understanding of which, we hope, will help
in selecting an appropriate prior, developing new models, and leveraging
inference techniques.
|
1211.4839 | An Insight View of Kernel Visual Debugger in System Boot up | cs.OS cs.SY | For many years, developers could not figure out the mystery of OS kernels.
The main source of this mystery is the interaction between operating systems
and hardware during system boot-up and kernel initialization. In addition,
many operating system kernels differ in their behavior toward many situations.
For instance, kernels act differently in racing conditions, kernel
initialization and process scheduling. For such operations, kernel debuggers
were designed to help in tracing kernel behavior and solving many kernel bugs.
The importance of kernel debuggers is not limited to kernel code tracing;
they can also be used in verification and performance comparisons. However,
developers had to be aware of debugger commands thus introducing some
difficulties to non-expert programmers. Later, several visual kernel debuggers
were presented to make it easier for programmers to trace their kernel code and
analyze kernel behavior. Nowadays, several kernel debuggers exist for solving
this mystery but only very few support line-by-line debugging at run-time. In
this paper, a generic approach for operating system source code debugging in
graphical mode with line-by-line tracing support is proposed. In the context of
this approach, system boot-up and the evaluation of two operating system
schedulers from several points of view will be discussed.
|
1211.4852 | Gaussian Assumption: the Least Favorable but the Most Useful | cs.IT math.IT | This paper focuses on three contributions. First, a connection between the
result, proposed by Stoica and Babu, and the recent information theoretic
results, the worst additive noise lemma and the isoperimetric inequality for
entropies, is illustrated. Second, information theoretic and estimation
theoretic justifications for the fact that the Gaussian assumption leads to the
largest Cram\'{e}r-Rao lower bound (CRLB) are presented. Third, a slight
extension of this result to the more general framework of correlated
observations is shown.
|
1211.4860 | Domain Adaptations for Computer Vision Applications | cs.CV cs.LG stat.ML | A basic assumption of statistical learning theory is that train and test data
are drawn from the same underlying distribution. Unfortunately, this assumption
doesn't hold in many applications. Instead, ample labeled data might exist in a
particular `source' domain while inference is needed in another, `target'
domain. Domain adaptation methods leverage labeled data from both domains to
improve classification on unseen data in the target domain. In this work we
survey domain transfer learning methods for various application domains with
focus on recent work in Computer Vision.
|
1211.4866 | A Brief Review of Data Mining Application Involving Protein Sequence
Classification | cs.DB cs.NE | Data mining techniques have been used by researchers for analyzing protein
sequences. In protein analysis, especially in protein sequence classification,
selection of feature is most important. Popular protein sequence classification
techniques involve extraction of specific features from the sequences.
Researchers apply some well-known classification techniques like neural
networks, Genetic algorithm, Fuzzy ARTMAP, Rough Set Classifier etc. for
accurate classification. This paper presents a review of three different
classification models: the neural network model, the fuzzy ARTMAP model and
the Rough set classifier model. A new technique for classifying protein
sequences is then proposed. The proposed technique tries to reduce the
computational overheads encountered by earlier approaches and increase the
accuracy of classification.
|
1211.4888 | A Traveling Salesman Learns Bayesian Networks | cs.LG stat.ML | Structure learning of Bayesian networks is an important problem that arises
in numerous machine learning applications. In this work, we present a novel
approach for learning the structure of Bayesian networks using the solution of
an appropriately constructed traveling salesman problem. In our approach, one
computes an optimal ordering (partially ordered set) of random variables using
methods for the traveling salesman problem. This ordering significantly reduces
the search space for the subsequent greedy optimization that computes the final
structure of the Bayesian network. We demonstrate our approach of learning
Bayesian networks on real world census and weather datasets. In both cases, we
demonstrate that the approach very accurately captures dependencies between
random variables. We check the accuracy of the predictions based on independent
studies in both application domains.
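The ordering step can be illustrated with a simple nearest-neighbour TSP heuristic over a pairwise dissimilarity matrix; how the paper scores variable pairs and performs the subsequent greedy search is not reproduced here, and the matrix below is made up:

```python
# Toy sketch of the ordering step: treat variables as TSP "cities" and compute
# a nearest-neighbour tour. The resulting order would then restrict candidate
# parents in a greedy structure search (not shown).

def nearest_neighbour_order(dist, start=0):
    n = len(dist)
    order, visited = [start], {start}
    while len(order) < n:
        last = order[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[last][j])
        order.append(nxt)
        visited.add(nxt)
    return order

# Dissimilarity could be e.g. 1 - |correlation|; these values are illustrative.
D = [[0.0, 0.2, 0.9, 0.8],
     [0.2, 0.0, 0.3, 0.7],
     [0.9, 0.3, 0.0, 0.1],
     [0.8, 0.7, 0.1, 0.0]]

print(nearest_neighbour_order(D))  # -> [0, 1, 2, 3]
```

Restricting each variable's candidate parents to its predecessors in such an ordering is what shrinks the search space for the greedy optimization.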
|
1211.4889 | Statistical Tests for Contagion in Observational Social Network Studies | cs.SI physics.soc-ph stat.ME | Current tests for contagion in social network studies are vulnerable to the
confounding effects of latent homophily (i.e., ties form preferentially between
individuals with similar hidden traits). We demonstrate a general method to
lower bound the strength of causal effects in observational social network
studies, even in the presence of arbitrary, unobserved individual traits. Our
tests require no parametric assumptions and each test is associated with an
algebraic proof. We demonstrate the effectiveness of our approach by correctly
deducing the causal effects for examples previously shown to expose defects in
existing methodology. Finally, we discuss preliminary results on data taken
from the Framingham Heart Study.
|
1211.4891 | Correspondence and Independence of Numerical Evaluations of Algorithmic
Information Measures | cs.IT cs.CC cs.FL math.IT | We show that real-value approximations of Kolmogorov-Chaitin (K_m) using the
algorithmic Coding theorem as calculated from the output frequency of a large
set of small deterministic Turing machines with up to 5 states (and 2 symbols),
are in agreement with the number of instructions used by the Turing machines
producing s, which is consistent with strict integer-value program-size
complexity. Nevertheless, K_m proves to be a finer-grained measure and a
potential alternative approach to lossless compression algorithms for small
entities, where compression fails. We also show that neither K_m nor the number
of instructions used shows any correlation with Bennett's Logical Depth LD(s)
other than what is predicted by the theory. The agreement between theory and
numerical calculations shows that despite the undecidability of these
theoretical measures, approximations are stable and meaningful, even for small
programs and for short strings. We also announce a first Beta version of an
Online Algorithmic Complexity Calculator (OACC), based on a combination of
theoretical concepts, as a numerical implementation of the Coding Theorem
Method.
|
1211.4907 | Mahotas: Open source software for scriptable computer vision | cs.CV cs.SE | Mahotas is a computer vision library for Python. It contains traditional
image processing functionality such as filtering and morphological operations
as well as more modern computer vision functions for feature computation,
including interest point detection and local descriptors.
The interface is in Python, a dynamic programming language, which is very
appropriate for fast development, but the algorithms are implemented in C++ and
are tuned for speed. The library is designed to fit in with the scientific
software ecosystem in this language and can leverage the existing
infrastructure developed in that language.
Mahotas is released under a liberal open source license (MIT License) and is
available from (http://github.com/luispedro/mahotas) and from the Python
Package Index (http://pypi.python.org/pypi/mahotas).
|
1211.4909 | Fast Marginalized Block Sparse Bayesian Learning Algorithm | cs.IT cs.LG math.IT stat.ML | The performance of sparse signal recovery from noise corrupted,
underdetermined measurements can be improved if both sparsity and correlation
structure of signals are exploited. One typical correlation structure is the
intra-block correlation in block sparse signals. To exploit this structure, a
framework, called block sparse Bayesian learning (BSBL), has been proposed
recently. Algorithms derived from this framework showed superior performance
but they are not very fast, which limits their applications. This work derives
an efficient algorithm from this framework, using a marginalized likelihood
maximization method. Compared to existing BSBL algorithms, it has close
recovery performance but is much faster. Therefore, it is more suitable for
large scale datasets and applications requiring real-time implementation.
|
1211.4929 | Summarizing Reviews with Variable-length Syntactic Patterns and Topic
Models | cs.IR cs.CL | We present a novel summarization framework for reviews of products and
services by selecting informative and concise text segments from the reviews.
Our method consists of two major steps. First, we identify five frequently
occurring variable-length syntactic patterns and use them to extract candidate
segments. Then we use the output of a joint generative sentiment topic model to
filter out the non-informative segments. We verify the proposed method with
quantitative and qualitative experiments. In a quantitative study, our approach
outperforms previous methods in producing informative segments and summaries
that capture aspects of products and services as expressed in the
user-generated pros and cons lists. Our user study with ninety users resonates
with this result: individual segments extracted and filtered by our method are
rated as more useful by users than those from previous approaches.
|
1211.4940 | A Wireless Channel Sounding System for Rapid Propagation Measurements | cs.IT math.IT | Wireless systems are getting deployed in many new environments with different
antenna heights, frequency bands and multipath conditions. This has led to an
increasing demand for more channel measurements to understand wireless
propagation in specific environments and assist deployment engineering. We
design and implement a rapid wireless channel sounding system, using the
Universal Software Radio Peripheral (USRP) and GNU Radio software, to address
these demands. Our design measures channel propagation characteristics
simultaneously from multiple transmitter locations. The system consists of
multiple battery-powered transmitters and receivers. Therefore, we can set-up
the channel sounder rapidly at a field location and measure expeditiously by
analyzing different transmitters' signals during a single walk or drive through
the environment. Our design can be used for both indoor and outdoor channel
measurements in the frequency range of 1 MHz to 6 GHz. We expect that the
proposed approach, with a few further refinements, can turn propagation
measurement into a routine part of day-to-day wireless network
engineering.
|
1211.4957 | An Experiment on the Connection between the DLs' Family DL<ForAllPiZero>
and the Real World | cs.AI cs.LO | This paper describes the analysis of a selected testbed of Semantic Web
ontologies, by a SPARQL query, which determines those ontologies that can be
related to the description logic DL<ForAllPiZero>, introduced in [4] and
studied in [9]. We will see that a reasonable number of them are expressible
within such a computationally efficient language. We expect that, in a long-term
view, a temporalization of description logics, and consequently, of OWL(2), can
open new perspectives for the inclusion in this language of a greater number of
ontologies of the testbed and, hopefully, of the "real world".
|
1211.4971 | A Hybrid Bacterial Foraging Algorithm For Solving Job Shop Scheduling
Problems | cs.NE | Bio-Inspired computing is the subset of Nature-Inspired computing. Job Shop
Scheduling Problem is categorized under popular scheduling problems. In this
research work, Bacterial Foraging Optimization was hybridized with Ant Colony
Optimization and a new technique Hybrid Bacterial Foraging Optimization for
solving Job Shop Scheduling Problems was proposed. The optimal solutions
obtained by the proposed Hybrid Bacterial Foraging Optimization algorithm are
much better than the solutions obtained by the Bacterial Foraging Optimization
algorithm for well-known test problems of different sizes. From the
implementation of this research work, it could be observed that the proposed
Hybrid Bacterial Foraging Optimization was more effective than the Bacterial
Foraging Optimization algorithm in solving Job Shop Scheduling Problems. Hybrid
Bacterial Foraging Optimization is also applied to real-world Job Shop
Scheduling Problems.
|
1211.4976 | Channel Independent Cryptographic Key Distribution | cs.IT cs.CR math.IT | This paper presents a method of cryptographic key distribution using an
`artificially' noisy channel. This is an important development because, while
it is known that a noisy channel can be used to generate unconditional secrecy,
there are many circumstances in which it is not possible to have a noisy
information exchange, such as in error corrected communication stacks. It is
shown that two legitimate parties can simulate a noisy channel by adding local
noise onto the communication and that the simulated channel has a secrecy
capacity even if the underlying channel does not. A derivation of the secrecy
conditions is presented along with numerical simulations of the channel
function to show that key exchange is feasible.
|
1211.5009 | Temporal Provenance Model (TPM): Model and Query Language | cs.DB | Provenance refers to the documentation of an object's lifecycle. This
documentation (often represented as a graph) should include all the information
necessary to reproduce a certain piece of data or the process that led to it.
In a dynamic world, as data changes, it is important to be able to get a piece
of data as it was, and its provenance graph, at a certain point in time.
Supporting time-aware provenance querying is challenging and requires: (i)
explicitly representing the time information in the provenance graphs, and (ii)
providing abstractions and efficient mechanisms for time-aware querying of
provenance graphs over an ever growing volume of data. The existing provenance
models treat time as a second class citizen (i.e. as an optional annotation).
This makes time-aware querying of provenance data inefficient and sometimes
inaccessible. We introduce an extended provenance graph model to explicitly
represent time as an additional dimension of provenance data. We also provide a
query language, novel abstractions and efficient mechanisms to query and
analyze timed provenance graphs. The main contributions of the paper include:
(i) proposing a Temporal Provenance Model (TPM) as a timed provenance model;
and (ii) introducing two concepts: timed folders, as containers of a related
set of objects and their provenance relationships over time, and timed paths, to
represent the evolution of objects' tracing information over time, for analyzing
and querying TPM graphs. We have implemented the approach on top of FPSPARQL, a
query engine for large graphs, and have evaluated it for querying TPM models.
The evaluation shows the viability and efficiency of our approach.
|
1211.5027 | Enhanced Contention Resolution Aloha - ECRA | cs.IT cs.NI math.IT | Random Access (RA) Medium Access (MAC) protocols are simple and effective
when the nature of the traffic is unpredictable and random. In the following
paper, a novel RA protocol called Enhanced Contention Resolution ALOHA (ECRA)
is presented. This evolution, based on the previous Contention Resolution ALOHA
(CRA) protocol, exploits the nature of the interference in unslotted Aloha-like
channels to resolve most of the partial collisions that can occur there. In the
paper, the idea behind ECRA is presented together with numerical simulations
and a mathematical analysis of its performance gain. It is shown that
significant gains in both throughput and Packet Error Rate (PER) can be
achieved by ECRA with respect to CRA. A comparison with Contention
Resolution Diversity Slotted ALOHA (CRDSA) is also provided.
|
1211.5037 | Bayesian nonparametric Plackett-Luce models for the analysis of
preferences for college degree programmes | stat.ML cs.LG stat.ME | In this paper we propose a Bayesian nonparametric model for clustering
partial ranking data. We start by developing a Bayesian nonparametric extension
of the popular Plackett-Luce choice model that can handle an infinite number of
choice items. Our framework is based on the theory of random atomic measures,
with the prior specified by a completely random measure. We characterise the
posterior distribution given data, and derive a simple and effective Gibbs
sampler for posterior simulation. We then develop a Dirichlet process mixture
extension of our model and apply it to investigate the clustering of
preferences for college degree programmes amongst Irish secondary school
graduates. The existence of clusters of applicants who have similar preferences
for degree programmes is established and we determine that subject matter and
geographical location of the third level institution characterise these
clusters.
|
1211.5058 | Compressed Sensing of Simultaneous Low-Rank and Joint-Sparse Matrices | cs.IT math.IT | In this paper we consider the problem of recovering a high dimensional data
matrix from a set of incomplete and noisy linear measurements. We introduce a
new model that can efficiently restrict the degrees of freedom of the problem
and is generic enough to find many applications, for instance in
multichannel signal compressed sensing (e.g. sensor networks, hyperspectral
imaging) and compressive sparse principal component analysis (s-PCA). We assume
data matrices have a simultaneous low-rank and joint sparse structure, and we
propose a novel approach for efficient compressed sensing (CS) of such data.
Our CS recovery approach is based on a convex minimization problem that
incorporates this restrictive structure by jointly regularizing the solutions
with their nuclear (trace) norm and l2/l1 mixed norm. Our theoretical analysis
uses a new notion of restricted isometry property (RIP) and shows that, for
sampling schemes satisfying RIP, our approach can stably recover all low-rank
and joint-sparse matrices. For a certain class of random sampling schemes
satisfying a particular concentration bound (e.g. the subgaussian ensembles) we
derive a lower bound on the number of CS measurements indicating the
near-optimality of our recovery approach as well as a significant enhancement
compared to the state-of-the-art. We introduce an iterative algorithm based on
proximal calculus in order to solve the joint nuclear and l2/l1 norms
minimization problem and, finally, we illustrate the empirical recovery phase
transition of this approach by a series of numerical experiments.
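The two regularizers above have simple proximal maps, which is what makes a proximal-calculus solver practical. A minimal sketch of the two standard operators (the paper's exact splitting scheme is not reproduced here):

```python
import numpy as np

def prox_nuclear(X, tau):
    """Prox of tau*||X||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_l21(X, tau):
    """Prox of tau*||X||_{2,1}: shrink each row towards zero, zeroing
    rows whose l2 norm is below tau (this enforces joint sparsity)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
```

An iterative scheme would alternate these maps with a gradient step on the measurement data-fit term.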
|
1211.5060 | On sensor fusion for airborne wind energy systems | cs.SY math.OC | A study on filtering aspects of airborne wind energy generators is presented.
This class of renewable energy systems aims to convert the aerodynamic forces
generated by tethered wings, flying in closed paths transverse to the wind
flow, into electricity. The accurate reconstruction of the wing's position,
velocity and heading is of fundamental importance for the automatic control of
these kinds of systems. The difficulty of the estimation problem arises from
the nonlinear dynamics, wide speed range, large accelerations and fast changes
of direction that the wing experiences during operation. It is shown that the
overall nonlinear system has a specific structure allowing its partitioning
into sub-systems, hence leading to a series of simpler filtering problems.
Different sensor setups are then considered, and the related sensor fusion
algorithms are presented. The results of experimental tests carried out with a
small-scale prototype and wings of different sizes are discussed. The designed
filtering algorithms rely purely on kinematic laws, hence they are independent
from features like wing area, aerodynamic efficiency, mass, etc. Therefore, the
presented results are representative also of systems with larger size and
different wing design, different number of tethers and/or rigid wings.
|
1211.5063 | On the difficulty of training Recurrent Neural Networks | cs.LG | There are two widely known issues with properly training Recurrent Neural
Networks, the vanishing and the exploding gradient problems detailed in Bengio
et al. (1994). In this paper we attempt to improve the understanding of the
underlying issues by exploring these problems from an analytical, a geometric
and a dynamical systems perspective. Our analysis is used to justify a simple
yet effective solution. We propose a gradient norm clipping strategy to deal
with exploding gradients and a soft constraint for the vanishing gradients
problem. We empirically validate our hypothesis and proposed solutions in the
experimental section.
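The gradient norm clipping strategy is simple to state in code. A minimal sketch (the threshold is a hypothetical hyperparameter, not a value from the paper):

```python
import numpy as np

def clip_gradient_norm(grad, threshold):
    """If the gradient's L2 norm exceeds the threshold, rescale it so
    the norm equals the threshold; the direction is preserved, only the
    magnitude of an exploding gradient is capped."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad
```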
|
1211.5067 | Approaching the Capacity of Large-Scale MIMO Systems via Non-Binary LDPC
Codes | cs.IT math.IT | In this paper, the application of non-binary low-density parity-check
(NBLDPC) codes to MIMO systems which employ hundreds of antennas at both the
transmitter and the receiver is proposed. Together with the well-known
low-complexity MMSE detection, the moderate length NBLDPC codes can operate
closer to the MIMO capacity, e.g., capacity-gap about 3.5 dB (the best known
gap is more than 7 dB). To further reduce the complexity of MMSE detection, a
novel soft output detection that can provide an excellent coded performance in
low SNR region with 99% complexity reduction is also proposed. The asymptotic
performance is analysed using Monte Carlo density evolution. It is found that
the NBLDPC codes can operate within 1.6 dB of the MIMO capacity. Furthermore,
the merit of using NBLDPC codes in large MIMO systems in the presence of
imperfect channel estimation and spatial fading correlation, both realistic
scenarios for such systems, is also pointed out.
|
1211.5084 | On Top-$k$ Weighted SUM Aggregate Nearest and Farthest Neighbors in the
$L_1$ Plane | cs.CG cs.DB cs.DS | In this paper, we study top-$k$ aggregate (or group) nearest neighbor queries
using the weighted SUM operator under the $L_1$ metric in the plane. Given a
set $P$ of $n$ points, for any query consisting of a set $Q$ of $m$ weighted
points and an integer $k$, $ 1 \le k \le n$, the top-$k$ aggregate nearest
neighbor query asks for the $k$ points of $P$ whose aggregate distances to $Q$
are the smallest, where the aggregate distance of each point $p$ of $P$ to $Q$
is the sum of the weighted distances from $p$ to all points of $Q$. We build an
$O(n\log n\log\log n)$-size data structure in $O(n\log n \log\log n)$ time,
such that each top-$k$ query can be answered in $O(m\log m+(k+m)\log^2 n)$
time. We also obtain other results with trade-off between preprocessing and
query. Even for the special case where $k=1$, our results are better than the
previously best method (in PODS 2012), which requires $O(n\log^2 n)$
preprocessing time, $O(n\log^2 n)$ space, and $O(m^2\log^3 n)$ query time. In
addition, for the one-dimensional version of this problem, our approach can
build an $O(n)$-size data structure in $O(n\log n)$ time that can support
$O(\min\{k,\log m\}\cdot m+k+\log n)$ time queries. Further, we extend our
techniques to the top-$k$ aggregate farthest neighbor queries, with the same
bounds.
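For reference, the query itself is easy to state. A naive O(nm) per-query baseline (illustrative names only; the paper's data structures answer the same query in polylogarithmic time):

```python
import heapq

def topk_aggregate_nn(P, Q, k):
    """Return the k points of P with the smallest weighted-SUM aggregate
    L1 distance to the weighted query set Q = [(w, (x, y)), ...]."""
    def agg(p):
        return sum(w * (abs(p[0] - q[0]) + abs(p[1] - q[1])) for w, q in Q)
    return heapq.nsmallest(k, P, key=agg)
```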
|
1211.5086 | Optimal Sequence-Based Control and Estimation of Networked Linear
Systems | cs.SY | In this paper, a unified approach to sequence-based control and estimation of
linear networked systems with multiple sensors is proposed. Time delays and
data losses in the controller-actuator-channel are compensated by sending
sequences of control inputs. The sequence-based design paradigm is further
extended to the sensor-controller-channels without increasing the load of the
network. In this context, we present a recursive solution based on the
Hypothesizing Distributed Kalman Filter (HKF) that is included in the overall
sequence-based controller design.
|
1211.5098 | Scaling Genetic Programming for Source Code Modification | cs.NE cs.SE | In Search Based Software Engineering, Genetic Programming has been used for
bug fixing, performance improvement and parallelisation of programs through the
modification of source code. Where an evolutionary computation algorithm, such
as Genetic Programming, is to be applied to similar code manipulation tasks,
the complexity and size of source code for real-world software poses a
scalability problem. To address this, we intend to inspect how the Software
Engineering concepts of modularity, granularity and localisation of change can
be reformulated as additional mechanisms within a Genetic Programming
algorithm.
|
1211.5108 | The Rightmost Equal-Cost Position Problem | cs.DS cs.IT math.IT | LZ77-based compression schemes compress the input text by replacing factors
in the text with an encoded reference to a previous occurrence formed by the
couple (length, offset). For a given factor, the smaller the offset, the
smaller the resulting compression ratio. This is optimally achieved by using
the rightmost occurrence of a factor in the previous text. Given a cost
function, for instance the minimum number of bits used to represent an integer,
we define the Rightmost Equal-Cost Position (REP) problem as the problem of
finding one of the occurrences of a factor whose cost is equal to the cost of
the rightmost one. We present the Multi-Layer Suffix Tree data structure that,
for a text of length n, at any time i, provides REP(LPF) in constant time,
where LPF is the longest previous factor, i.e. the greedy phrase, a reference
to the list of REP({set of prefixes of LPF}) in constant time, and REP(p) in
time O(|p| log log n) for any given pattern p.
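For intuition, a REP query reduces to a cost comparison over the occurrence list. A naive linear-scan sketch (illustrative only, using the bit-length cost from the abstract; the Multi-Layer Suffix Tree replaces this scan with constant-time queries):

```python
def rep(i, occurrences, cost=lambda off: off.bit_length()):
    """Given the current position i and the occurrence positions of a
    factor (all < i), return an occurrence whose offset cost equals the
    cost of the rightmost occurrence."""
    target = cost(i - max(occurrences))
    for pos in occurrences:  # any equal-cost occurrence is acceptable
        if cost(i - pos) == target:
            return pos
```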
|
1211.5157 | To Relay or Not To Relay in Cognitive Radio Sensor Networks | cs.NI cs.IT math.IT math.OC | Recent works proposed the relaying at the MAC layer in cognitive radio
networks whereby the primary packets are forwarded by the secondary node
maintaining an extra queue devoted to the relaying function. However, relaying
of primary packets may introduce delays on the secondary packets (called
secondary delay) and require an additional power budget to forward the
primary packets, which is especially crucial when the network is deployed using
sensors with limited power resources. To this end, an admission control can be
employed in order to manage efficiently the relaying in cognitive radio sensor
networks. In this paper, we first analyse and formulate the secondary delay and
the required power budget of the secondary sensor node in relation with the
acceptance factor that indicates whether the primary packets are allowed to be
forwarded or not. Having defined the above, we present the tradeoff between the
secondary delay and the required power budget when the acceptance factor is
adapted. In the sequel, we formulate an optimization problem to minimize the
secondary delay over the admission control parameter subject to a limit on the
required power budget plus the constraints related to the stabilities of the
individual queues due to their interdependencies observed by the analysis. The
solution of this problem is provided using iterative decomposition methods,
i.e., dual and primal decompositions with Lagrange multipliers, which simplify
the original complicated problem, resulting in a final equivalent dual problem
that includes the initial Karush-Kuhn-Tucker conditions. Using the derived
equivalent dual problem, we obtain the optimal acceptance factor while in
addition we highlight the possibilities for extra delay minimization that is
provided by relaxing the initial constraints through changing the values of the
Lagrange multipliers.
|
1211.5164 | State Evolution for General Approximate Message Passing Algorithms, with
Applications to Spatial Coupling | math.PR cs.IT math.IT math.ST stat.TH | We consider a class of approximate message passing (AMP) algorithms and
characterize their high-dimensional behavior in terms of a suitable state
evolution recursion. Our proof applies to Gaussian matrices with independent
but not necessarily identically distributed entries. It covers --in
particular-- the analysis of generalized AMP, introduced by Rangan, and of AMP
reconstruction in compressed sensing with spatially coupled sensing matrices.
The proof technique builds on the one of [BM11], while simplifying and
generalizing several steps.
|
1211.5184 | Faster Random Walks By Rewiring Online Social Networks On-The-Fly | cs.SI cs.DS physics.soc-ph | Many online social networks feature restrictive web interfaces which only
allow the query of a user's local neighborhood through the interface. To enable
analytics over such an online social network through its restrictive web
interface, many recent efforts reuse the existing Markov Chain Monte Carlo
methods such as random walks to sample the social network and support analytics
based on the samples. The problem with such an approach, however, is the large
number of queries often required (i.e., a long "mixing time") for a random walk
to reach a desired (stationary) sampling distribution.
In this paper, we consider a novel problem of enabling a faster random walk
over online social networks by "rewiring" the social network on-the-fly.
Specifically, we develop Modified TOpology (MTO)-Sampler which, by using only
information exposed by the restrictive web interface, constructs a "virtual"
overlay topology of the social network while performing a random walk, and
ensures that the random walk follows the modified overlay topology rather than
the original one. We show that MTO-Sampler not only provably enhances the
efficiency of sampling, but also achieves significant savings on query cost
over real-world online social networks such as Google Plus, Epinion etc.
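The baseline being accelerated is an ordinary neighbor-query random walk. A minimal sketch (the adjacency-dict interface is illustrative; MTO-Sampler's contribution, rewiring the topology the walk follows, is not shown here):

```python
import random

def random_walk(graph, start, steps, rng=None):
    """Simple random walk: at each step, query the current node's
    neighbor list (the only access a restrictive web interface allows)
    and move to a uniformly random neighbor."""
    rng = rng or random.Random(0)
    node, visited = start, []
    for _ in range(steps):
        node = rng.choice(graph[node])
        visited.append(node)
    return visited
```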
|
1211.5189 | Optimally fuzzy temporal memory | cs.AI cs.LG | Any learner with the ability to predict the future of a structured
time-varying signal must maintain a memory of the recent past. If the signal
has a characteristic timescale relevant to future prediction, the memory can be
a simple shift register---a moving window extending into the past, requiring
storage resources that grow linearly with the timescale to be represented.
However, an independent general purpose learner cannot a priori know the
characteristic prediction-relevant timescale of the signal. Moreover, many
naturally occurring signals show scale-free long range correlations implying
that the natural prediction-relevant timescale is essentially unbounded. Hence
the learner should maintain information from the longest possible timescale
allowed by resource availability. Here we construct a fuzzy memory system that
optimally sacrifices the temporal accuracy of information in a scale-free
fashion in order to represent prediction-relevant information from
exponentially long timescales. Using several illustrative examples, we
demonstrate the advantage of the fuzzy memory system over a shift register in
time series forecasting of natural signals. When the available storage
resources are limited, we suggest that a general purpose learner would be
better off committing to such a fuzzy memory system.
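One simple stand-in for such a memory is a bank of exponential moving averages with geometrically spaced time constants: fast nodes track the recent past accurately while slow nodes retain coarse information from exponentially long timescales. This is only an illustration of the idea, not the paper's construction; all parameter names are ours:

```python
import numpy as np

def fuzzy_memory_update(state, x, taus):
    """One step of a bank of exponential moving averages, one per time
    constant in taus: node i moves a fraction 1/taus[i] towards x."""
    return state + (x - state) / taus

taus = 2.0 ** np.arange(6)     # timescales 1, 2, 4, ..., 32
state = np.zeros(6)
for x in [1.0] * 10:           # feed a constant signal
    state = fuzzy_memory_update(state, x, taus)
```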
|
1211.5207 | On the Compressed Measurements over Finite Fields: Sparse or Dense
Sampling | cs.IT math.IT | We consider compressed sampling over finite fields and investigate the number
of compressed measurements needed for successful L0 recovery. Our results are
obtained while the sparseness of the sensing matrices as well as the size of
the finite fields are varied. One interesting conclusion is that unless the
signal is "ultra" sparse, the sensing matrices do not have to be dense.
|
1211.5227 | Service Composition Design Pattern for Autonomic Computing Systems using
Association Rule based Learning and Service-Oriented Architecture | cs.SE cs.DC cs.LG | In this paper we present a Service Injection and composition Design Pattern
for Unstructured Peer-to-Peer networks, which is designed with Aspect-oriented
design patterns, and amalgamation of the Strategy, Worker Object, and
Check-List Design Patterns used to design self-adaptive systems. It applies
self-reconfiguration planes dynamically, without interruption or intervention
by the administrator, to handle service failures at the servers. When a client
requests a complex service, service composition is performed to fulfil the
request. If a service is not available in memory, it is injected as Aspectual
Feature Module code. We used Service-Oriented Architecture (SOA) with Web
Services in Java to implement the composite design pattern. As far as we know,
there are no studies on the composition of design patterns for the peer-to-peer
computing domain. The pattern is described using a Java-like notation for the
classes and interfaces. Simple UML class and sequence diagrams are depicted.
|
1211.5231 | Sparsity-Aware Learning and Compressed Sensing: An Overview | cs.IT math.IT | This paper is based on a chapter of a new book on Machine Learning, by the
first and third author, which is currently under preparation. We provide an
overview of the major theoretical advances as well as the main trends in
algorithmic developments in the area of sparsity-aware learning and compressed
sensing. Both batch processing and online processing techniques are considered.
A case study in the context of time-frequency analysis of signals is also
presented. Our intent is to update this review from time to time, since this is
a very hot research area whose momentum and speed are sometimes difficult to
follow.
|
1211.5251 | Families of Hadamard Z2Z4Q8-codes | cs.IT math.CO math.IT | A Z2Z4Q8-code is a non-empty subgroup of a direct product of copies of Z_2,
Z_4 and Q_8 (the binary field, the ring of integers modulo 4 and the quaternion
group on eight elements, respectively). Such Z2Z4Q8-codes are translation
invariant propelinear codes as the well known Z_4-linear or Z_2Z_4-linear
codes.
In the current paper, we show that there exist "pure" Z2Z4Q8-codes, that is,
codes that do not admit any abelian translation invariant propelinear
structure. We study the dimension of the kernel and rank of the Z2Z4Q8-codes,
and we give upper and lower bounds for these parameters. We give tools to
construct a new class of Hadamard codes formed by several families of
Z2Z4Q8-codes; we study and show the different shapes of such codes, and we
improve the upper and lower bounds for the rank and the dimension of the kernel
when the codes are Hadamard.
|
1211.5252 | Non-Asymptotic Analysis of Privacy Amplification via Renyi Entropy and
Inf-Spectral Entropy | cs.IT cs.CR math.IT | This paper investigates the privacy amplification problem, and compares the
two existing bounds: the exponential bound derived by one of the authors and
the min-entropy bound derived by Renner. It turns out that the exponential
bound is better than the min-entropy bound when a security parameter is rather
small for a block length, and that the min-entropy bound is better than the
exponential bound when a security parameter is rather large for a block length.
Furthermore, we present another bound that interpolates the exponential bound
and the min-entropy bound by a hybrid use of the Renyi entropy and the
inf-spectral entropy.
|
1211.5257 | On binary quadratic symmetric bent and almost bent functions | cs.IT math.IT | We give a new simple construction for known binary quadratic symmetric bent
and almost bent functions. In particular, for even number of variables, they
are self-dual and anti-self-dual quadratic bent functions, respectively, which
are not of the Maiorana-McFarland type, but affine equivalent to it.
|
1211.5264 | Source and Channel Polarization over Finite Fields and Reed-Solomon
Matrices | cs.IT math.IT | Polarization phenomenon over any finite field $\mathbb{F}_{q}$ with size $q$
being a power of a prime is considered. This problem is a generalization of the
original proposal of channel polarization by Arikan for the binary field, as
well as its extension to a prime field by Sasoglu, Telatar, and Arikan. In this
paper, a necessary and sufficient condition of a matrix over a finite field
$\mathbb{F}_q$ is shown under which any source and channel are polarized.
Furthermore, the result of the speed of polarization for the binary alphabet
obtained by Arikan and Telatar is generalized to arbitrary finite fields. It is
also shown that the asymptotic error probability of polar codes is improved by
using the Reed-Solomon matrix, which can be regarded as a natural
generalization of the $2\times 2$ binary matrix used in the original proposal
by Arikan.
|
1211.5283 | DNF-AF Selection Two-Way Relaying | cs.IT math.IT | Error propagation and noise propagation at the relay node would highly
degrade system performance in two-way relay networks. In this paper, we
introduce DNF-AF selection two-way relaying scheme which aims to avoid error
propagation and mitigate noise propagation. If the relay successfully decodes
the exclusive or (XOR) of the messages sent by the two transceivers, it applies
denoise-and-forward (DNF). Otherwise, amplify-and-forward (AF) strategy will be
utilized. In this way, decoding error propagation is avoided at the relay.
Meanwhile, since the relay attempts to decode the XOR of the two messages
instead of explicitly decoding each message, a larger usable range of XOR
network coding is obtained. As XOR network coding can avoid noise propagation,
DNF-AF mitigates noise propagation. In addition, bit error
rate (BER) performance of DNF-AF selection scheme with BPSK modulation is
theoretically analyzed in this paper. Numerical results verify that the
proposed scheme has better BER performance than existing ones.
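The denoise-and-forward leg relies on the self-inverse property of XOR: each transceiver recovers the other's message from the relayed XOR using its own copy. A minimal sketch (message contents are illustrative):

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

msg_a, msg_b = b"\x01\x02\x03", b"\x0f\x0f\x0f"
relayed = xor_bytes(msg_a, msg_b)   # what the relay broadcasts under DNF
# Each transceiver XORs the broadcast with its own message:
recovered_at_a = xor_bytes(relayed, msg_a)
recovered_at_b = xor_bytes(relayed, msg_b)
```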
|
1211.5292 | Impact of blood rheology on wall shear stress in a model of the middle
cerebral artery | cs.CE physics.flu-dyn physics.med-ph | Perturbations to the homeostatic distribution of mechanical forces exerted by
blood on the endothelial layer have been correlated with vascular pathologies
including intracranial aneurysms and atherosclerosis. Recent computational work
suggests that in order to correctly characterise such forces, the
shear-thinning properties of blood must be taken into account. To the best of
our knowledge, these findings have never been compared against experimentally
observed pathological thresholds. In the current work, we apply the three-band
diagram (TBD) analysis due to Gizzi et al. to assess the impact of the choice
of blood rheology model on a computational model of the right middle cerebral
artery. Our results show that, in the model under study, the differences
between the wall shear stress predicted by a Newtonian model and the well known
Carreau-Yasuda generalized Newtonian model are only significant if the vascular
pathology under study is associated with a pathological threshold in the range
0.94 Pa to 1.56 Pa, where the results of the TBD analysis of the rheology
models considered differs. Otherwise, we observe no significant differences.
|
1211.5353 | Faster Compact Top-k Document Retrieval | cs.DS cs.IR | An optimal index solving top-k document retrieval [Navarro and Nekrich,
SODA12] takes O(m + k) time for a pattern of length m, but its space is at
least 80n bytes for a collection of n symbols. We reduce it to 1.5n to 3n
bytes, with O(m+(k+log log n) log log n) time, on typical texts. The index is
up to 25 times faster than the best previous compressed solutions, and requires
at most 5% more space in practice (and in some cases as little as one half).
Apart from replacing classical by compressed data structures, our main idea is
to replace suffix tree sampling by frequency thresholding to achieve
compression.
|
1211.5355 | Cobb Angle Measurement of Scoliosis with Reduced Variability | cs.CV | Cobb angle, which is a measure of spinal curvature is the standard method for
quantifying the magnitude of Scoliosis related to spinal deformity in
orthopedics. Determining the Cobb angle through manual process is subject to
human errors. In this work, we propose a methodology to measure the magnitude
of Cobb angle, which appreciably reduces the variability related to its
measurement compared to the related works. The proposed methodology is
facilitated by a new improved version of Non-Local Means (NLM) for image
denoising and Otsu's automatic threshold selection for Canny edge detection.
We selected NLM for preprocessing of the image as it is one of the
state-of-the-art methods for image denoising and helps retain image quality.
The trimmed mean and the median are more robust to outliers than the mean, and
following this observation we found that NLM denoising quality can be enhanced
by replacing the mean with the Euclidean trimmed mean. To demonstrate the
better performance of the Non-Local Euclidean Trimmed-mean denoising filter,
we provide comparative results against traditional NLM and Non-Local Euclidean
Medians. The experimental results for Cobb angle measurement over
intra-observer and inter-observer data reveal the superiority of the proposed
approach compared to the related works. The MATLAB 2009b image processing
toolbox was used for simulation and verification of the proposed methodology.
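The robustness argument can be illustrated with a one-function Euclidean trimmed mean (a sketch only; the filter applies this inside the NLM weight aggregation, which is not reproduced here):

```python
import numpy as np

def trimmed_mean(values, trim_fraction=0.1):
    """Mean after discarding the lowest and highest trim_fraction of the
    sorted samples; unlike the plain mean, a single outlier cannot drag
    the estimate arbitrarily far."""
    v = np.sort(np.asarray(values, dtype=float))
    t = int(len(v) * trim_fraction)
    return float(v.mean()) if t == 0 else float(v[t:-t].mean())
```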
|
1211.5358 | Stable XOR-based Policies for the Broadcast Erasure Channel with
Feedback | cs.IT math.IT | In this paper we describe a network coding scheme for the Broadcast Erasure
Channel with multiple unicast stochastic flows, in the case of a single source
transmitting packets to $N$ users, where per-slot feedback is fed back to the
transmitter in the form of ACK/NACK messages. This scheme performs only binary
(XOR) operations and involves a network of queues, along with special rules for
coding and moving packets among the queues, that ensure instantaneous
decodability. The system under consideration belongs to a class of networks
whose stability properties have been analyzed in earlier work, which is used to
provide a stabilizing policy employing the currently proposed coding scheme.
Finally, we show the optimality of the proposed policy for $N=4$ and i.i.d.
erasure events, in the sense that the policy's stability region matches a
derived outer bound (which coincides with the system's information-theoretic
capacity region), even when a restricted set of coding rules is used.
|
1211.5371 | A hybrid cross entropy algorithm for solving dynamic transit network
design problem | cs.NI cs.AI | This paper proposes a hybrid multiagent learning algorithm for solving the
dynamic simulation-based bilevel network design problem. The objective is to
determine the optimal frequency of a multimodal transit network, which
minimizes total users' travel cost and operation cost of transit lines. The
problem is formulated as a bilevel programming problem with equilibrium
constraints describing non-cooperative Nash equilibrium in a dynamic
simulation-based transit assignment context. A hybrid algorithm combining the
cross entropy multiagent learning algorithm and the Hooke-Jeeves algorithm is
proposed. Computational results are provided on the Sioux Falls network to
illustrate the performance of the proposed algorithm.
|
1211.5380 | Interference Alignment with Incomplete CSIT Sharing | cs.IT math.IT | In this work, we study the impact of having only incomplete channel state
information at the transmitters (CSIT) over the feasibility of interference
alignment (IA) in a K-user MIMO interference channel (IC). Incompleteness of
CSIT refers to the perfect knowledge at each transmitter (TX) of only a
sub-matrix of the global channel matrix, where the sub-matrix is specific to
each TX. This paper investigates the notion of IA feasibility for CSIT
configurations being as incomplete as possible, as this leads to feedback
overhead reductions in practice. We distinguish between antenna configurations
where (i) removing a single antenna makes IA unfeasible, referred to as
tightly-feasible settings, and (ii) cases where extra antennas are available,
referred to as super-feasible settings. We show conditions for which IA is
feasible in strictly incomplete CSIT scenarios, even in tightly-feasible
settings. For such cases, we provide a CSIT allocation policy preserving IA
feasibility while reducing significantly the amount of CSIT required. For
super-feasible settings, we develop a heuristic CSIT allocation algorithm which
exploits the additional antennas to further reduce the size of the CSIT
allocation. As a byproduct of our approach, a simple and intuitive algorithm
for testing feasibility of single stream IA is provided.
|
1211.5400 | Ecosystem-Oriented Distributed Evolutionary Computing | cs.NE | We create a novel optimisation technique inspired by natural ecosystems,
where the optimisation works at two levels: a first optimisation, migration of
genes which are distributed in a peer-to-peer network, operating continuously
in time; this process feeds a second optimisation based on evolutionary
computing that operates locally on single peers and is aimed at finding
solutions to satisfy locally relevant constraints. We consider from the domain
of computer science distributed evolutionary computing, with the relevant
theory from the domain of theoretical biology, including the fields of
evolutionary and ecological theory, the topological structure of ecosystems,
and evolutionary processes within distributed environments. We then define
ecosystem-oriented distributed evolutionary computing, imbued with the
properties of self-organisation, scalability and sustainability from natural
ecosystems, including a novel form of distributed evolutionary computing.
Finally, we conclude with a discussion of the apparent compromises resulting
from the hybrid model created, such as the network topology.
|
1211.5405 | The MDS Queue: Analysing the Latency Performance of Erasure Codes | cs.IT cs.NI math.IT math.OC | In order to scale economically, data centers are increasingly evolving their
data storage methods from the use of simple data replication to the use of more
powerful erasure codes, which provide the same level of reliability as
replication but at a significantly lower storage cost. In particular, it is
well known that Maximum-Distance-Separable (MDS) codes, such as Reed-Solomon
codes, provide the maximum storage efficiency. While the use of codes for
providing improved reliability in archival storage systems, where the data is
less frequently accessed (or so-called "cold data"), is well understood, the
role of codes in the storage of more frequently accessed and active "hot data",
where latency is the key metric, is less clear.
In this paper, we study data storage systems based on MDS codes through the
lens of queueing theory, and term this the "MDS queue." We analytically
characterize the (average) latency performance of MDS queues, for which we
present insightful scheduling policies that form upper and lower bounds to
performance, and are observed to be quite tight. Extensive simulations are also
provided and used to validate our theoretical analysis. We also employ the
framework of the MDS queue to analyse different methods of performing so-called
degraded reads (reading of partial data) in distributed data storage.
|
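The latency advantage of coded reads is easy to see in simulation. Below is a minimal sketch, not the paper's MDS-queue model: it assumes i.i.d. exponential service times and simulates a single fork-join read that completes once any k of the n servers it was forked to have finished (the function name and parameters are ours):

```python
import random
import statistics

def mds_read_latency(n, k, rate=1.0, trials=20000, seed=0):
    """Mean latency of a fork-join coded read: the request is forked to
    n servers with i.i.d. exponential(rate) service times and completes
    as soon as the k fastest servers have finished."""
    rng = random.Random(seed)
    latencies = []
    for _ in range(trials):
        times = sorted(rng.expovariate(rate) for _ in range(n))
        latencies.append(times[k - 1])  # k-th order statistic
    return statistics.fmean(latencies)

# For exponential service, the k-th order statistic of n servers has
# mean H_n - H_{n-k} = sum_{i=n-k+1}^{n} 1/i; e.g. n=4, k=2 gives 1/4 + 1/3.
mean_latency = mds_read_latency(4, 2)
```

Reading any k of n coded chunks in this way is what makes MDS codes attractive for hot data: the slowest servers are never waited on.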
1211.5414 | Analysis of a randomized approximation scheme for matrix multiplication | cs.DS cs.LG cs.NA stat.ML | This note gives a simple analysis of a randomized approximation scheme for
matrix multiplication proposed by Sarlos (2006) based on a random rotation
followed by uniform column sampling. The result follows from a matrix version
of Bernstein's inequality and a tail inequality for quadratic forms in
subgaussian random vectors.
|
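The scheme admits a compact sketch. The following is a hedged illustration rather than the paper's exact construction (a dense Gaussian-QR rotation stands in for a fast structured rotation, and all names are ours): rotating the shared dimension by a random orthogonal matrix leaves the product unchanged but flattens the column norms, after which uniform column sampling gives an unbiased estimate.

```python
import numpy as np

def approx_matmul(A, B, s, seed=0):
    """Approximate A @ B: rotate the shared dimension by a random
    orthogonal matrix (which leaves the product unchanged), then sample
    s of the n rotated columns/rows uniformly with replacement,
    rescaling by n/s so the estimate is unbiased."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random rotation
    Ar, Br = A @ Q, Q.T @ B                           # Ar @ Br == A @ B
    idx = rng.choice(n, size=s, replace=True)
    return (n / s) * Ar[:, idx] @ Br[idx, :]

rng = np.random.default_rng(1)
A = 1.0 + 0.1 * rng.standard_normal((50, 200))
B = 1.0 + 0.1 * rng.standard_normal((200, 40))
est = approx_matmul(A, B, s=150)
rel_err = np.linalg.norm(A @ B - est) / np.linalg.norm(A @ B)
```

Without the rotation, uniform sampling can miss the few columns that dominate the product; the rotation equalizes their contributions, which is what makes the simple tail bounds of the note apply.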
1211.5418 | A survey on data and transaction management in mobile databases | cs.DB | The popularity of the Mobile Database is increasing day by day as people need
information even on the move in the fast changing world. This database
technology permits employees using mobile devices to connect to their corporate
networks, hoard the needed data, work in the disconnected mode and reconnect to
the network to synchronize with the corporate database. In this scenario, the
data is being moved closer to the applications in order to improve the
performance and autonomy. This leads to many interesting problems in mobile
database research, and mobile databases have become fertile ground for many
researchers. In this paper a survey is presented on data and transaction
management in mobile databases from the year 2000 onwards. The survey presents
a complete study of the various architectures used in mobile databases and of
mobile transaction models. It also addresses the data management issues, namely
replication and caching strategies, and the transaction management
functionalities such as concurrency control and commit protocols,
synchronization, query processing, recovery and security. Finally, it provides
research directions in mobile databases.
|
1211.5425 | A Cross-layer Perspective on Energy Harvesting Aided Green
Communications over Fading Channels | cs.IT math.IT | We consider the power allocation of the physical layer and the buffer delay
of the upper application layer in energy harvesting green networks. The total
power required for reliable transmission includes the transmission power and
the circuit power. The harvested power (which is stored in a battery) and the
grid power constitute the power resource. The uncertainty of data generated
from the upper layer, the intermittence of the harvested energy, and the
variation of the fading channel are taken into account and described as
independent Markov processes. In each transmission, the transmitter decides the
transmission rate as well as the allocated power from the battery, and the rest
of the required power will be supplied by the power grid. The objective is to
find an allocation sequence of transmission rate and battery power to minimize
the long-term average buffer delay under the average grid power constraint. A
stochastic optimization problem is formulated accordingly to find such a
transmission rate and battery power sequence. Furthermore, the optimization
problem is reformulated as a constrained MDP problem whose policy is a
two-dimensional vector with the transmission rate and the power allocation of
the battery as its elements. We prove that the optimal policy of the
constrained MDP can be obtained by solving the unconstrained MDP. Then we focus
on the analysis of the unconstrained average-cost MDP. The structural
properties of the average optimal policy are derived. Moreover, we discuss the
relations between elements of the two-dimensional policy. Next, based on the
theoretical analysis, the algorithm to find the constrained optimal policy is
presented for the finite state space scenario. In addition, low-complexity
heuristic policies are given for the general state space. Finally, simulations
are performed under these policies to demonstrate their effectiveness.
|
1211.5481 | Genetic Algorithm Modeling with GPU Parallel Computing Technology | astro-ph.IM cs.DC cs.NE | We present a multi-purpose genetic algorithm, designed and implemented with
GPGPU / CUDA parallel computing technology. The model was derived from a
multi-core CPU serial implementation, named GAME, already successfully tested
and validated on astrophysical massive data classification problems through a
web application resource (DAMEWARE) specialized in data mining based on
Machine Learning paradigms. Since genetic algorithms are inherently parallel,
the GPGPU computing paradigm makes it possible to exploit the internal training
features of the model, yielding strong gains in processing performance and
scalability.
|
1211.5484 | Ranking the Importance of Nodes of Complex Networks by the Equivalence
Classes Approach | cs.SI physics.soc-ph | Identifying the importance of nodes in complex networks is of interest to
research on social networks, biological networks, etc. Researchers have
proposed several measures and algorithms, such as betweenness, PageRank and
HITS, to identify node importance. However, these measures are based on
different properties of nodes and often conflict with one another. A
reasonable, fair standard is needed for evaluating and comparing these
algorithms. This paper develops a framework as the standard for ranking the
importance of nodes. Four intuitive rules are suggested to measure the node
importance, and the equivalence classes approach is employed to resolve the
conflicts and aggregate the results of the rules. To quantitatively compare the
algorithms, the performance indicators are also proposed based on a similarity
measure. Three widely used real-world networks are used as the test-beds. The
experimental results illustrate the feasibility of this framework and show that
both PageRank and HITS perform well, though with bias, on the tested
networks. Furthermore, this paper uses the proposed approach to
analyze the structure of the Internet, and draws out the kernel of the Internet
with dense links.
|
1211.5492 | Corpus Development for Affective Video Indexing | cs.MM cs.HC cs.IR | Affective video indexing is the area of research that develops techniques to
automatically generate descriptions of video content that encode the emotional
reactions which the video content evokes in viewers. This paper provides a set
of corpus development guidelines based on state-of-the-art practice intended to
support researchers in this field. Affective descriptions can be used for video
search and browsing systems offering users affective perspectives. The paper is
motivated by the observation that affective video indexing has yet to fully
profit from the standard corpora (data sets) that have benefited conventional
forms of video indexing. Affective video indexing faces unique challenges,
since viewer-reported affective reactions are difficult to assess. Moreover,
affect assessment efforts must be carefully designed in order to both cover the
types of affective responses that video content evokes in viewers and also
capture the stable and consistent aspects of these responses. We first present
background information on affect and multimedia and related work on affective
multimedia indexing, including existing corpora. Three dimensions emerge as
critical for affective video corpora, and form the basis for our proposed
guidelines: the context of viewer response, personal variation among viewers,
and the effectiveness and efficiency of corpus creation. Finally, we present
examples of three recent corpora and discuss how these corpora make progressive
steps towards fulfilling the guidelines.
|
1211.5494 | Optimal design of PID controllers using the QFT method | cs.SY math.OC | An optimisation algorithm is proposed for designing PID controllers, which
minimises the asymptotic open-loop gain of a system, subject to appropriate
robust-stability and performance QFT constraints. The algorithm is simple and
can be used to automate the loop-shaping step of the QFT design procedure. The
effectiveness of the method is illustrated with an example.
|
1211.5498 | Canonical fitness model for simple scale-free graphs | physics.soc-ph cs.SI | We consider a fitness model assumed to generate simple graphs with power-law
heavy-tailed degree sequence: P(k) \propto k^{-1-\alpha} with 0 < \alpha < 1,
in which the corresponding distributions do not possess a mean. We discuss the
situations in which the model is used to produce a multigraph and examine what
happens if the multiple edges are merged into a single one and thus a simple
graph is built. We give the relation between the (normalized) fitness parameter
r and the expected degree \nu of a node and show analytically that it possesses
non-trivial intermediate and final asymptotic behaviors. We show that the model
produces P(k) \propto k^{-2} for large values of k independent of \alpha. Our
analytical findings are confirmed by numerical simulations.
|
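The multigraph-to-simple-graph step can be illustrated with a small simulation. This is a hedged stand-in, not the paper's fitness construction: we pair stubs of heavy-tailed degrees (a configuration-model pairing) and then merge parallel edges, the operation whose effect on P(k) the abstract analyses.

```python
import numpy as np

rng = np.random.default_rng(2)

def heavy_tailed_degrees(n, alpha, cap):
    """Degrees with tail P(k) ~ k^{-1-alpha}; for 0 < alpha < 1 the
    underlying Pareto variable u**(-1/alpha) has no mean, so we cap at
    `cap` before casting to int."""
    raw = rng.random(n) ** (-1.0 / alpha)
    return np.clip(raw, 1, cap).astype(int)

n, alpha = 2000, 0.5
deg = heavy_tailed_degrees(n, alpha, cap=n - 1)
stubs = np.repeat(np.arange(n), deg)     # one stub per half-edge
if len(stubs) % 2:                       # pairing needs an even stub count
    stubs = stubs[:-1]
rng.shuffle(stubs)
pairs = stubs.reshape(-1, 2)             # multigraph: may repeat pairs
# Merging parallel edges (and dropping self-loops) yields a simple graph;
# hubs lose many of their edges in the process.
simple = {(min(a, b), max(a, b)) for a, b in pairs if a != b}
```

The gap between `len(pairs)` and `len(simple)` is carried almost entirely by the hubs, which is why merging reshapes the degree tail.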
1211.5520 | Accurate Demarcation of Protein Domain Linkers based on Structural
Analysis of Linker Probable Region | cs.CE q-bio.BM | In multi-domain proteins, the domains are connected by a flexible
unstructured region called a protein domain linker. The accurate demarcation
of these linkers holds the key to understanding their biochemical and
evolutionary attributes. This knowledge helps in designing a suitable linker
for engineering stable multi-domain chimeric proteins. Here we propose a novel
method for the demarcation of the linker based on a three-dimensional protein
structure and a domain definition. The proposed method is based on biological
knowledge about structural flexibility of the linkers. We performed structural
analysis on a linker probable region (LPR) around domain boundary points of
known SCOP domains. The LPR was described using a set of overlapping peptide
fragments of fixed size. Each peptide fragment was then described by geometric
invariants (GIs) and subjected to a clustering process where the fragments
corresponding to the actual linker emerge as outliers. We then discover the actual
linkers by finding the longest continuous stretch of outlier fragments from
LPRs. This method was evaluated on a benchmark dataset of 51 continuous
multi-domain proteins, where it achieves an F1 score of 0.745 (0.83 precision and
0.66 recall). When the method was applied on 725 continuous multi-domain
proteins, it was able to identify novel linkers that were not reported
previously. This method can be used in combination with supervised / sequence
based linker prediction methods for accurate linker demarcation.
|
1211.5556 | Improving Perceptual Color Difference using Basic Color Terms | cs.CV cs.GR | We suggest a new color distance based on two observations. First, perceptual
color differences were designed to be used to compare very similar colors. They
do not capture human perception for medium and large color differences well.
Thresholding was proposed to solve the problem for large color differences,
i.e. two totally different colors are always the same distance apart. We show
that thresholding alone cannot improve medium color differences. We suggest
alleviating this problem using basic color terms. Second, when a color distance
is used for edge detection, many small distances around the just noticeable
difference may account for false edges. We suggest reducing the effect of
such small distances.
|
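The thresholding the abstract critiques is simple to state. A minimal sketch (CIE76 distance in CIELAB, with an illustrative threshold value of our choosing): beyond the threshold, all color pairs are reported equally far apart, which handles large differences but, as the abstract argues, does nothing for medium ones.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    return math.dist(lab1, lab2)

def thresholded_distance(lab1, lab2, t=20.0):
    """Clip the difference at t: any two sufficiently different colours
    end up the same distance apart."""
    return min(delta_e76(lab1, lab2), t)

# A small and a large difference between (L*, a*, b*) triples:
near = thresholded_distance((50, 0, 0), (50, 3, 4))    # below t, unchanged
far = thresholded_distance((50, 0, 0), (50, 60, 80))   # clipped to t
```

In the clipped regime every pair collapses to the same value, so the medium range just below the threshold is where the proposed basic-color-term correction operates.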
1211.5562 | Spectrum Sensing using Distributed Sequential Detection via Noisy
Reporting MAC | cs.IT math.IT stat.AP | This paper considers cooperative spectrum sensing algorithms for Cognitive
Radios which focus on reducing the number of samples to make a reliable
detection. We develop an energy efficient detector with low detection delay
using decentralized sequential hypothesis testing. Our algorithm at the
Cognitive Radios employs an asynchronous transmission scheme which takes into
account the noise at the fusion center. We start with a distributed algorithm,
DualSPRT, in which Cognitive Radios sequentially collect the observations, make
local decisions using SPRT (Sequential Probability Ratio Test) and send them to
the fusion center. The fusion center sequentially processes these received
local decisions corrupted by noise, using an SPRT-like procedure to arrive at a
final decision. We theoretically analyse its probability of error and average
detection delay. We also study its asymptotic performance. Even though
DualSPRT performs asymptotically well, a modification at the fusion node
provides more control over the design of the algorithm parameters which then
performs better at the usual operating probabilities of error in Cognitive
Radio systems. We also analyse the modified algorithm theoretically. Later we
modify these algorithms to handle uncertainties in SNR and fading.
|
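The local test each Cognitive Radio runs is Wald's SPRT. Here is a minimal sketch under assumed Gaussian observations with hypothetical parameter names; the paper's fusion-center noise handling and asynchronous reporting are not modelled:

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2).
    Accumulates the log-likelihood ratio and stops at the first
    threshold crossing; returns (decision, samples_used), with
    decision None if the data run out before a decision."""
    upper = math.log((1 - beta) / alpha)   # crossing -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing -> accept H0
    llr, used = 0.0, 0
    for x in samples:
        used += 1
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        if llr >= upper:
            return 1, used
        if llr <= lower:
            return 0, used
    return None, used
```

In DualSPRT each radio runs such a test on its own observations and forwards its local decisions; the fusion centre then applies an SPRT-like procedure to the noisy received decisions to reach the final verdict.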
1211.5566 | On the Composition of Secret Sharing Schemes Related to Codes | cs.IT math.IT | In this paper we construct a subclass of the composite access structure
introduced by Mart\'inez et al. based on schemes realizing the structure given
by the set of codewords of minimal support of linear codes. This class enlarges
the iterated threshold class studied in the same paper. Furthermore, all the
schemes in this paper are ideal (in fact they admit a vector space
construction), and we give a partial answer to a conjecture stated in
that paper. Finally, as a corollary we prove that every monotone access
structure based on all the minimal supports of a code can be realized by a
vector space construction.
|
1211.5568 | Computing coset leaders and leader codewords of binary codes | cs.IT math.IT | In this paper we use the Gr\"obner representation of a binary linear code
$\mathcal C$ to give efficient algorithms for computing the whole set of coset
leaders, denoted by $\mathrm{CL}(\mathcal C)$, and the set of leader codewords,
denoted by $\mathrm L(\mathcal C)$. The first algorithm can be adapted to
provide not only the Newton radius and the covering radius of $\mathcal C$ but
also to determine the coset leader weight distribution. Moreover, the set of
leader codewords provides a test-set for decoding by a gradient-like decoding
algorithm. Another contribution of this article is the relation established
between zero neighbours and leader codewords.
|