| id | title | categories | abstract |
|---|---|---|---|
1209.3804
|
Compressive Link Acquisition in Multiuser Communications
|
cs.IT math.IT
|
An important receiver operation is to detect the presence of specific preamble
signals with unknown delays in the presence of scattering, Doppler effects and
carrier offsets. This task, referred to as "link acquisition", is typically a
sequential search over the transmitted signal space. Recently, many authors
have suggested applying sparse recovery algorithms in the context of similar
estimation or detection problems. These works typically focus on the benefits
of sparse recovery, but not generally on the cost brought by compressive
sensing. Thus, our goal is to examine the trade-off in complexity and
performance that is possible when using sparse recovery. To do so, we propose a
sequential sparsity-aware compressive sampling (C-SA) acquisition scheme, where
a compressive multi-channel sampling (CMS) front-end is followed by a sparsity
regularized likelihood ratio test (SR-LRT) module.
The proposed C-SA acquisition scheme borrows insights from the models studied
in the context of sub-Nyquist sampling, where a minimal amount of samples is
captured to reconstruct signals with Finite Rate of Innovation (FRI). In
particular, we propose an A/D conversion front-end that maximizes a well-known
probability divergence measure, the average Kullback-Leibler distance, of all
the hypotheses of the SR-LRT performed on the samples. We compare the proposed
acquisition scheme vis-à-vis conventional alternatives with relatively low
computational cost, such as the Matched Filter (MF), in terms of performance
and complexity.
|
1209.3808
|
Minimal realization of the dynamical structure function and its
application to network reconstruction
|
cs.SY q-bio.QM
|
Network reconstruction, i.e., obtaining network structure from data, is a
central theme in systems biology, economics and engineering. In some previous
work, we introduced dynamical structure functions as a tool for posing and
solving the problem of network reconstruction between measured states. While
recovering the network structure between hidden states is not possible since
they are not measured, in many situations it is important to estimate the
minimal number of hidden states in order to understand the complexity of the
network under investigation and help identify potential targets for
measurements. Estimating the minimal number of hidden states is also crucial to
obtain the simplest state-space model that captures the network structure and
is coherent with the measured data. This paper characterizes minimal order
state-space realizations that are consistent with a given dynamical structure
function by exploring properties of dynamical structure functions and
developing an algorithm to explicitly obtain such a minimal realization.
|
1209.3811
|
Textual Features for Programming by Example
|
cs.AI
|
In Programming by Example, a system attempts to infer a program from input
and output examples, generally by searching for a composition of certain base
functions. Performing a naive brute force search is infeasible for even mildly
involved tasks. We note that the examples themselves often present clues as to
which functions to compose, and how to rank the resulting programs. In text
processing, which is our domain of interest, clues arise from simple textual
features: for example, if parts of the input and output strings are
permutations of one another, this suggests that sorting may be useful. We
describe a system that learns the reliability of such clues, allowing for
faster search and a principled ranking over programs. Experiments on a
prototype of this system show that this learning scheme facilitates efficient
inference on a range of text processing tasks.
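The permutation clue mentioned above is easy to make concrete. Below is a minimal sketch, not the paper's actual feature set; the function name and the ranking remark are illustrative assumptions:

```python
from collections import Counter

def permutation_clue(inp: str, out: str) -> bool:
    """True if the output is a character permutation of the input,
    a textual feature hinting that a sorting function may be useful."""
    return Counter(inp) == Counter(out)

# A clue-driven search might then rank candidate programs that
# contain a sort function above other compositions.
print(permutation_clue("cba", "abc"))   # True: try sorting first
print(permutation_clue("abc", "abcd"))  # False: look for other clues
```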
|
1209.3818
|
Evolution and the structure of learning agents
|
cs.AI cs.LG
|
This paper presents the thesis that all learning agents of finite information
size are limited by their informational structure in what goals they can
efficiently learn to achieve in a complex environment. Evolutionary change is
critical for creating the required structure for all learning agents in any
complex environment. The thesis implies that there is no efficient universal
learning algorithm. An agent can go past the learning limits imposed by its
structure only through slow evolutionary change or blind search, which in a
very complex environment can yield only an inefficient universal learning
capability that works on evolutionary timescales or by improbable luck.
|
1209.3824
|
Interference Mitigation via Interference-Aware Successive Decoding
|
cs.IT math.IT
|
In modern wireless networks, interference is no longer negligible, since each
cell becomes smaller to support high throughput. The reduced cell size forces
operators to deploy many cells, which consequently increases inter-cell
interference in many cell-edge areas. This paper considers a practical way of
mitigating interference at the receiver equipped with multiple antennas in
interference channels. Recently, it has been shown that the capacity region of
interference channels over point-to-point codes can be established with a
combination of two schemes: treating interference as noise and jointly decoding
both desired and interference signals. In practice, the first scheme is
straightforwardly implementable, but the second scheme needs impractically huge
computational burden at the receiver. Within a practical range of complexity,
this paper proposes the interference-aware successive decoding (IASD) algorithm
which successively decodes desired and interference signals while updating a
priori information of both signals. When multiple decoders are allowed to be
used, the proposed IASD can be extended to interference-aware parallel decoding
(IAPD). The proposed algorithm is analyzed with extrinsic information transfer
(EXIT) chart so as to show that the interference decoding is advantageous to
improve the performance. Simulation results demonstrate that the proposed
algorithm significantly outperforms interference non-decoding algorithms.
|
1209.3869
|
Hybrid technique for effective knowledge representation & a comparative
study
|
cs.AI
|
Knowledge representation (KR) and an inference mechanism are the most
essential ingredients of an intelligent system. A system is considered
intelligent if its intelligence is equivalent to that of a human being in a
particular domain or in general. Because information is often incomplete,
ambiguous and uncertain, building an intelligent system is very difficult. The
objective of this paper is to present a hybrid KR technique for making a
system effective and optimistic. The requirement for "effective and
optimistic" arises because the system must be able to return an answer
together with a confidence factor. This paper also presents a comparison
between various hybrid KR techniques and the proposed one.
|
1209.3902
|
Markov Chain Aggregation for Simple Agent-Based Models on Symmetric
Networks: The Voter Model
|
physics.soc-ph cs.SI nlin.AO
|
For Agent Based Models, in particular the Voter Model (VM), a general
framework of aggregation is developed which exploits the symmetries of the
agent network $G$. Depending on the symmetry group $Aut_{\omega} (N)$ of the
weighted agent network, certain ensembles of agent configurations can be
interchanged without affecting the dynamical properties of the VM. These
configurations can be aggregated into the same macro state and the dynamical
process projected onto these states is, contrary to the general case, still a
Markov chain. The method facilitates the analysis of the relation between
microscopic processes and their aggregation to a macroscopic level of
description and informs about the complexity of a system introduced by
heterogeneous interaction relations. In some cases the macro chain is solvable.
|
1209.3909
|
Network Routing Optimization Using Swarm Intelligence
|
cs.NE cs.DM
|
The aim of this paper is to highlight and explore a traditional problem,
namely finding the minimum spanning tree and the shortest path in network
routing, using Swarm Intelligence. This work is an investigation that combines
operations research, discrete mathematics, and evolutionary computing to solve
one of the classical networking problems.
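The abstract does not name a specific swarm algorithm; ant colony optimization is a common choice for shortest-path problems, so here is a hedged toy sketch under that assumption. The graph, parameter values, and function name are illustrative:

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=50, n_iters=30,
                      evaporation=0.5, seed=0):
    """Toy ant colony optimization for a shortest path.

    graph: dict mapping node -> {neighbor: edge_cost}.
    Ants choose edges with probability ~ pheromone / cost; shorter
    completed tours deposit more pheromone, reinforcing good edges.
    """
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone levels
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            node, path, cost, seen = src, [src], 0.0, {src}
            while node != dst:
                choices = [v for v in graph[node] if v not in seen]
                if not choices:          # dead end: abandon this ant
                    path = None
                    break
                weights = [tau[(node, v)] / graph[node][v] for v in choices]
                v = rng.choices(choices, weights)[0]
                cost += graph[node][v]
                path.append(v); seen.add(v); node = v
            if path is not None:
                tours.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for e in tau:                    # evaporation
            tau[e] *= (1 - evaporation)
        for path, cost in tours:         # deposit ~ 1 / tour cost
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += 1.0 / cost
    return best_path, best_cost

g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(aco_shortest_path(g, "A", "D"))
```

On this toy graph the cheapest A-to-D route costs 3 via B and C, which the pheromone reinforcement quickly locks in.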
|
1209.3913
|
Keyspace: A Consistently Replicated, Highly-Available Key-Value Store
|
cs.DB cs.DC
|
This paper describes the design and architecture of Keyspace, a distributed
key-value store offering strong consistency, fault-tolerance and high
availability. The source code is available under the open-source AGPL license
for Linux, Windows and BSD-like platforms. As of 2012, Keyspace is no longer
undergoing active development.
|
1209.3914
|
Theorem Proving in Large Formal Mathematics as an Emerging AI Field
|
cs.AI cs.DL
|
In recent years, we have linked a large corpus of formal mathematics with
automated theorem proving (ATP) tools, and started to develop combined AI/ATP
systems working in this setting. In this paper we first relate this project to
the earlier large-scale automated developments done by Quaife with McCune's
Otter system, and to the discussions of the QED project about formalizing a
significant part of mathematics. Then we summarize our adventure so far, argue
that the QED dreams were right in anticipating the creation of a very
interesting semantic AI field, and discuss its further research directions.
|
1209.3916
|
Qualitative Modelling via Constraint Programming: Past, Present and
Future
|
cs.CE cs.AI math.DS q-bio.CB
|
Qualitative modelling is a technique integrating the fields of theoretical
computer science, artificial intelligence and the physical and biological
sciences. The aim is to be able to model the behaviour of systems without
estimating parameter values and fixing the exact quantitative dynamics.
Traditional applications are the study of the dynamics of physical and
biological systems at a higher level of abstraction than that obtained by
estimation of numerical parameter values for a fixed quantitative model.
Qualitative modelling has been studied and implemented to varying degrees of
sophistication in Petri nets, process calculi and constraint programming. In
this paper we reflect on the strengths and weaknesses of existing frameworks,
we demonstrate how recent advances in constraint programming can be leveraged
to produce high quality qualitative models, and we describe the advances in
theory and technology that would be needed to make constraint programming the
best option for scientific investigation in the broadest sense.
|
1209.3943
|
Formal Concept Analysis Based Association Rules Extraction
|
cs.DB
|
Generating a huge number of association rules reduces their utility in the
decision-making process carried out by domain experts. In this context, based
on the theory of Formal Concept Analysis, we propose to extend the notion of
formal concept through a generalization of the notion of itemset, in order to
consider the itemset as an intent, its support as the cardinality of the
extent, and its relevance as related to the confidence of the rule.
Accordingly, we propose a new approach to extract interesting itemsets through
concept coverage. This approach uses a new quality criterion for a rule, the
relevance, which brings a semantic added value to the formal concept analysis
approach to discovering association rules.
|
1209.3944
|
Cyclic Association Rules Mining under Constraints
|
cs.DB
|
Several researchers have explored the temporal aspect of association rules
mining. In this paper, we focus on the cyclic association rules, in order to
discover correlations among items characterized by regular cyclic variation
over time. An overview of the state of the art has revealed the drawbacks of
the algorithms proposed in the literature, namely the excessive number of
generated rules, which do not meet the expert's expectations. To overcome these
restrictions, we introduce an approach dedicated to generating cyclic
association rules under constraints, through a new method called
Constraint-Based Cyclic Association Rules (CBCAR). The carried-out experiments
underline the usefulness and the performance of our new approach.
|
1209.3977
|
Quasi-cyclic Flexible Regenerating Codes
|
cs.IT cs.DC math.IT
|
In a distributed storage environment, where the data is placed in nodes
connected through a network, it is likely that one of these nodes fails. It is
known that the use of erasure coding improves the fault tolerance and minimizes
the redundancy added in distributed storage environments. The use of
regenerating codes not only makes the most of the erasure coding improvements,
but also minimizes the amount of data needed to regenerate a failed node.
In this paper, a new family of regenerating codes based on quasi-cyclic codes
is presented. Quasi-cyclic flexible minimum storage regenerating (QCFMSR) codes
are constructed and their existence is proved. Quasi-cyclic flexible
regenerating codes with minimum bandwidth constructed from a base QCFMSR code
are also provided. These codes not only achieve optimal MBR parameters in terms
of stored data and repair bandwidth, but also, for a specific choice of the
parameters involved, they can be decreased under the optimal MBR point.
Quasi-cyclic flexible regenerating codes are very interesting because of
their simplicity and low complexity. They allow exact repair-by-transfer in the
minimum bandwidth case and an exact pseudo repair-by-transfer in the MSR case,
where operations are needed only when a new node enters the system
replacing a lost one.
|
1209.3982
|
Sparsifying Defaults: Optimal Bailout Policies for Financial Networks in
Distress
|
q-fin.CP cs.SI math.OC q-fin.RM
|
The events of the last few years revealed an acute need for tools to
systematically model and analyze large financial networks. Applications of
such tools include forecasting systemic failures and analyzing the probable
effects of economic policy decisions. We consider optimizing the amount and
structure of a bailout in a borrower-lender network: Given a fixed amount of
cash to be injected into the system, how should it be distributed among the
nodes in order to achieve the smallest overall amount of unpaid liabilities or
the smallest number of nodes in default? We develop an exact algorithm for the
problem of minimizing the amount of unpaid liabilities, by showing that it is
equivalent to a linear program. For the problem of minimizing the number of
defaults, we develop an approximate algorithm using a reweighted l1
minimization approach. We illustrate this algorithm using an example with
synthetic data for which the optimal solution can be calculated exactly, and
show through numerical simulation that the solutions calculated by our
algorithm are close to optimal.
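The "equivalent to a linear program" claim can be illustrated with a heavily hedged sketch of an Eisenberg-Noe style clearing LP. The formulation below (variable layout, constraint shape, the `optimal_bailout` name, and the toy two-node network) is my own assumed reconstruction, not the paper's exact model:

```python
import numpy as np
from scipy.optimize import linprog

def optimal_bailout(L, cash, budget):
    """Hypothetical LP sketch: choose cash injections x (sum <= budget)
    that maximize total clearing payments p, i.e. minimize unpaid
    liabilities, in a borrower-lender network.

    L[i, j] = amount node i owes node j; cash[i] = external assets of i.
    """
    L = np.asarray(L, dtype=float)
    n = L.shape[0]
    b = L.sum(axis=1)                       # total obligations of each node
    # Relative liabilities Pi[i, j] = L[i, j] / b[i] (0 if node i owes nothing)
    Pi = np.divide(L, b[:, None], out=np.zeros_like(L), where=b[:, None] > 0)
    # Decision vector z = [p_0..p_{n-1}, x_0..x_{n-1}]
    cobj = np.concatenate([-np.ones(n), np.zeros(n)])   # maximize sum(p)
    # Payment feasibility: p_i - sum_j Pi[j, i] p_j - x_i <= cash_i
    A1 = np.hstack([np.eye(n) - Pi.T, -np.eye(n)])
    # Budget: sum_i x_i <= budget
    A2 = np.concatenate([np.zeros(n), np.ones(n)])[None, :]
    res = linprog(cobj,
                  A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([np.asarray(cash, float), [budget]]),
                  bounds=[(0, bi) for bi in b] + [(0, None)] * n)
    p, x = res.x[:n], res.x[n:]
    return p, x, b.sum() - p.sum()          # payments, injections, unpaid

# Toy network: node 0 owes node 1 ten units but holds only 4 in cash.
p, x, unpaid = optimal_bailout([[0, 10], [0, 0]],
                               cash=[4.0, 0.0], budget=3.0)
print(unpaid)   # a bailout of 3 to node 0 reduces unpaid liabilities to 3
```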
|
1209.4022
|
Game Theoretic Formation of a Centrality Based Network
|
cs.GT cs.SI physics.soc-ph
|
We model the formation of networks as a game where players aspire to maximize
their own centrality by increasing the number of other players to which they
are path-wise connected, while simultaneously incurring a cost for each added
adjacent edge. We simulate the interactions between players using an algorithm
that factors in rational strategic behavior based on a common objective
function. The resulting networks exhibit pairwise stability, from which we
derive necessary stability conditions for specific graph topologies. We then
expand the model to simulate non-trivial games with large numbers of players.
We show that using conditions necessary for the stability of star topologies we
can induce the formation of hub players that positively impact the total
welfare of the network.
|
1209.4065
|
On the Performance of Transmit Antenna Selection Based on Shadowing Side
Information
|
cs.IT math.IT math.ST stat.OT stat.TH
|
In this paper, a transmit antenna selection scheme, which is based on
shadowing side information, is investigated. In this scheme, the selected
single transmit antenna provides the highest shadowing coefficient between
transmitter and receiver. With the proposed technique, both the usage frequency
of the feedback channel from the receiver to the transmitter and the channel
estimation complexity at the receiver can be reduced. We study the performance
of our proposed technique and in the analysis, we consider an independent but
not identically distributed Generalized-K composite fading model. More
specifically, exact and closed-form expressions for the outage probability, the
moment generating function, the moments of signal-to-noise ratio, and the
average symbol error probability are derived. In addition, asymptotic outage
probability and symbol error probability expressions are also presented in
order to investigate the diversity order and the array gain. Finally, our
theoretical performance results are validated by Monte Carlo simulations.
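The selection rule itself is simple to sketch. The snippet below is an illustration only; the shadowing values and the `select_antenna` helper are assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_antenna(shadowing_db):
    """Pick the transmit antenna with the largest shadowing coefficient.

    Because selection depends only on the slowly varying shadowing,
    feedback and channel estimation are needed far less often than
    with selection based on the fast-fading channel state."""
    return int(np.argmax(shadowing_db))

# Illustrative log-normal shadowing draws (in dB) for 4 candidate antennas.
shadow = rng.normal(0.0, 8.0, size=4)
print("selected antenna:", select_antenna(shadow))
```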
|
1209.4066
|
Low Complexity Differentiating Adaptive Erasure Codes for Multimedia
Wireless Broadcast
|
cs.IT math.IT
|
Based on the erasure channel FEC model as defined in multimedia wireless
broadcast standards, we illustrate how doping mechanisms included in the design
of erasure coding and decoding may improve the scalability of the packet
throughput, decrease overall latency and potentially differentiate among
classes of multimedia subscribers regardless of their signal quality. We
describe decoding mechanisms that allow for linear complexity and give
complexity bounds when feedback is available. We show that elaborate coding
schemes which include pre-coding stages are inferior to simple Ideal Soliton
based rateless codes, combined with the proposed two-phase decoder. The
simplicity of this scheme and the availability of tight bounds on latency given
pre-allocated radio resources make it a practical and efficient design
solution.
|
1209.4093
|
Capacity Limits and Multiplexing Gains of MIMO Channels with Transceiver
Impairments
|
cs.IT math.IT
|
The capacity of ideal MIMO channels has a high-SNR slope that equals the
minimum of the number of transmit and receive antennas. This letter analyzes if
this result holds when there are distortions from physical transceiver
impairments. We prove analytically that such physical MIMO channels have a
finite upper capacity limit, for any channel distribution and SNR. The high-SNR
slope thus collapses to zero. This appears discouraging, but we prove the
encouraging result that the relative capacity gain of employing MIMO is at
least as large as with ideal transceivers.
|
1209.4115
|
Transferring Subspaces Between Subjects in Brain-Computer Interfacing
|
stat.ML cs.HC cs.LG
|
Compensating for changes between a subject's training and testing sessions in
Brain Computer Interfacing (BCI) is challenging but of great importance for a
robust BCI operation. We show that such changes are very similar between
subjects, thus can be reliably estimated using data from other users and
utilized to construct an invariant feature space. This novel approach to
learning from other subjects aims to reduce the adverse effects of common
non-stationarities, but does not transfer discriminative information. This is
an important conceptual difference to standard multi-subject methods that e.g.
improve the covariance matrix estimation by shrinking it towards the average of
other users or construct a global feature space. These methods do not reduce
the shift between training and test data and may produce poor results when
subjects have very different signal characteristics. In this paper we compare
our approach to two state-of-the-art multi-subject methods on toy data and two
data sets of EEG recordings from subjects performing motor imagery. We show
that it can not only achieve a significant increase in performance, but also
that the extracted change patterns allow for a neurophysiologically meaningful
interpretation.
|
1209.4129
|
Communication-Efficient Algorithms for Statistical Optimization
|
stat.ML cs.LG stat.CO
|
We analyze two communication-efficient algorithms for distributed statistical
optimization on large-scale data sets. The first algorithm is a standard
averaging method that distributes the $N$ data samples evenly to $m$
machines, performs separate minimization on each subset, and then averages the
estimates. We provide a sharp analysis of this average mixture algorithm,
showing that under a reasonable set of conditions, the combined parameter
achieves mean-squared error that decays as $O(N^{-1}+(N/m)^{-2})$.
Whenever $m \le \sqrt{N}$, this guarantee matches the best possible rate
achievable by a centralized algorithm having access to all $N$
samples. The second algorithm is a novel method, based on an appropriate form
of bootstrap subsampling. Requiring only a single round of communication, it
has mean-squared error that decays as $O(N^{-1} + (N/m)^{-3})$, and so is
more robust to the amount of parallelization. In addition, we show that a
stochastic gradient-based method attains mean-squared error decaying as
$O(N^{-1} + (N/ m)^{-3/2})$, easing computation at the expense of penalties in
the rate of convergence. We also provide experimental evaluation of our
methods, investigating their performance both on simulated data and on a
large-scale regression problem from the internet search domain. In particular,
we show that our methods can be used to efficiently solve an advertisement
prediction problem from the Chinese SoSo Search Engine, which involves logistic
regression with $N \approx 2.4 \times 10^8$ samples and $d \approx 740,000$
covariates.
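The average mixture idea (split, solve locally, average, one round of communication) can be sketched on least squares. This is a toy stand-in for the paper's general M-estimation setting; the function name and data are illustrative:

```python
import numpy as np

def avgm_linear_regression(X, y, m, seed=0):
    """One-shot average mixture (AVGM) sketch for least squares:
    distribute the N samples evenly over m machines, minimize the
    local objective on each subset, then average the m estimates."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    local = []
    for part in np.array_split(idx, m):          # one subset per machine
        theta, *_ = np.linalg.lstsq(X[part], y[part], rcond=None)
        local.append(theta)
    return np.mean(local, axis=0)                # single communication round

# Toy data: y = X @ [2, -1] + noise, split over m = 8 "machines".
rng = np.random.default_rng(1)
X = rng.normal(size=(4000, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=4000)
print(avgm_linear_regression(X, y, m=8))
```

The averaged estimate lands close to the true coefficients [2, -1], matching the intuition that averaging cancels the independent per-machine errors.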
|
1209.4145
|
Network Massive MIMO for Cell-Boundary Users: From a Precoding
Normalization Perspective
|
cs.IT math.IT
|
In this paper, we propose network massive multiple-input multiple-output
(MIMO) systems, where three radio units (RUs) connected via one digital unit
(DU) support multiple user equipments (UEs) at a cell-boundary through the same
radio resource, i.e., the same frequency/time band. For precoding designs,
zero-forcing (ZF) and matched filter (MF) with vector or matrix normalization
are considered. We also derive the formulae of the lower and upper bounds of
the achievable sum rate for each precoding. Based on our analytical results, we
observe that vector normalization is better for ZF while matrix normalization
is better for MF. Given antenna configurations, we also derive the optimal
switching point as a function of the number of active users in a network.
Numerical simulations confirm our analytical results.
|
1209.4169
|
Hybrid Data Mining Technique for Knowledge Discovery from Engineering
Materials' Data sets
|
cs.DB
|
Studying materials informatics from a data mining perspective can be
beneficial for manufacturing and other industrial engineering applications.
A predictive data mining technique and a machine learning algorithm are
combined to design a knowledge discovery system for the selection of
engineering materials that meet the design specifications. A predictive method,
the Naive Bayesian classifier, and a machine learning algorithm, the Pearson
correlation coefficient method, were implemented for materials classification
and selection, respectively. The knowledge extracted from the engineering
materials data sets is
proposed for effective decision making in advanced engineering materials design
applications.
|
1209.4187
|
PaxosLease: Diskless Paxos for Leases
|
cs.DC cs.DB
|
This paper describes PaxosLease, a distributed algorithm for lease
negotiation. PaxosLease is based on Paxos, but does not require disk writes or
clock synchrony. PaxosLease is used for master lease negotiation in the
open-source Keyspace and ScalienDB replicated key-value stores.
|
1209.4199
|
Discrete State Transition Algorithm for Unconstrained Integer
Optimization Problems
|
math.OC cs.IT math.IT math.PR math.RT
|
A recently proposed intelligent optimization algorithm called the discrete
state transition algorithm is considered in this study for solving
unconstrained integer optimization problems. First, some key elements of the
discrete state transition algorithm are summarized to guide its further
development. Several intelligent operators are designed for local exploitation
and global exploration. Then, a dynamic adjustment strategy, "risk and
restoration in probability", is proposed to capture global solutions with high
probability. Finally, numerical experiments are carried out to compare the
performance of the proposed algorithm with other heuristics. They show that
similar intelligent operators can be applied to problems ranging from the
traveling salesman problem and Boolean integer programming to the discrete
value selection problem, which indicates the adaptability and flexibility of
the proposed intelligent elements.
|
1209.4207
|
A Cramer-Rao Bound for Semi-Blind Channel Estimation in Redundant Block
Transmission Systems
|
cs.IT math.IT
|
A Cramer-Rao bound (CRB) for semi-blind channel estimators in redundant block
transmission systems is derived. The derived CRB is valid for any system
adopting a full-rank linear redundant precoder, including the popular
cyclic-prefixed orthogonal frequency-division multiplexing system. Simple forms
of CRBs for multiple complex parameters, either unconstrained or constrained by
a holomorphic function, are also derived, which facilitate the CRB derivation
of the problem of interest. The derived CRB is a lower bound on the variance of
any unbiased semi-blind channel estimator, and can serve as a tractable
performance metric for system design.
|
1209.4209
|
Tight Sufficient Conditions on Exact Sparsity Pattern Recovery
|
cs.IT math.IT
|
A noisy underdetermined system of linear equations is considered in which a
sparse vector (a vector with a few nonzero elements) is subject to measurement.
The measurement matrix elements are drawn from a Gaussian distribution. We
study the information-theoretic constraints on exact support recovery of a
sparse vector from the measurement vector and matrix. We compute a tight,
sufficient condition that is applied to ergodic wide-sense stationary sparse
vectors. We compare our results with the existing bounds and recovery
conditions. Finally, we extend our results to approximately sparse signals.
|
1209.4233
|
Writing Reusable Digital Geometry Algorithms in a Generic Image
Processing Framework
|
cs.MS cs.CV
|
Digital Geometry software should reflect the generality of the underlying
mathematics: mapping the latter to the former requires genericity. By
designing generic solutions, one can effectively reuse digital geometry data
structures and algorithms. We propose an image processing framework focused on
the Generic Programming paradigm in which an algorithm on paper can be turned
into a single piece of code, written once and usable with various input types.
This approach enables users to design and implement new methods at a lower
cost, try cross-domain experiments and helps generalize results.
1209.4236
|
Estimation of Radio Interferometer Beam Shapes Using Riemannian
Optimization
|
astro-ph.IM cs.CE
|
The knowledge of receiver beam shapes is essential for accurate radio
interferometric imaging. Traditionally, this information is obtained by
holographic techniques or by numerical simulation. However, such methods are
not feasible for an observation with time-varying beams, such as the beams
produced by a phased array radio interferometer. We propose the use of the
observed data itself for the estimation of the beam shapes. We use the
directional gains obtained along multiple sources across the sky for the
construction of a time-varying beam model. The construction of this model is an
ill-posed nonlinear optimization problem. Therefore, we propose to use
Riemannian optimization, where we consider the constraints imposed as a
manifold. We compare the performance of the proposed approach with traditional
unconstrained optimization and give results to show the superiority of the
proposed approach.
|
1209.4238
|
The Capacity of the Gaussian Cooperative Two-user Multiple Access
Channel to within a Constant Gap
|
cs.IT math.IT
|
The capacity region of the cooperative two-user Multiple Access Channel (MAC)
in Gaussian noise is determined to within a constant gap for both the
Full-Duplex (FD) and Half-Duplex (HD) case. The main contributions are: (a) for
both FD and HD: unilateral cooperation suffices to achieve capacity to within a
constant gap where only the user with the strongest link to the destination
needs to engage in cooperation, (b) for both FD and HD: backward joint decoding
is not necessary to achieve capacity to within a constant gap, and (c) for HD:
time sharing between the case where the two users do not cooperate and the case
where the user with the strongest link to the destination acts as pure relay
for the other user suffices to achieve capacity to within a constant gap. These
findings show that simple achievable strategies are approximately optimal for
all channel parameters with interesting implications for practical cooperative
schemes.
|
1209.4240
|
Network Coordination and Synchronization in a Noisy Environment with
Time Delays
|
cond-mat.stat-mech cond-mat.dis-nn cs.MA nlin.CD
|
We study the effects of nonzero time delays in stochastic synchronization
problems with linear couplings in complex networks. We consider two types of
time delays: transmission delays between interacting nodes and local delays at
each node (due to processing, cognitive, or execution delays). By investigating
the underlying fluctuations for several delay schemes, we obtain the
synchronizability threshold (phase boundary) and the scaling behavior of the
width of the synchronization landscape, in some cases for arbitrary networks
and in others for specific weighted networks. Numerical computations allow the
behavior of these networks to be explored when direct analytical results are
not available. We comment on the implications of these findings for simple
locally or globally weighted network couplings and possible trade-offs present
in such systems.
|
1209.4246
|
Distributed Bayesian Detection Under Unknown Observation Statistics
|
cs.IT math.IT
|
In this paper, distributed Bayesian detection problems with unknown prior
probabilities of hypotheses are considered. The sensors obtain observations
which are conditionally dependent across sensors and their probability density
functions (pdf) are not exactly known. The observations are quantized and are
sent to the fusion center. The fusion center fuses the current quantized
observations and makes a final decision. It also designs (updated) quantizers
to be used at the sensors and the fusion rule based on all previous quantized
observations. Information regarding the updated quantizers is sent back to the
sensors for use at the next time step. In this paper, the conditional joint pdf
is
represented in a parametric form by using the copula framework. The unknown
parameters include dependence parameters and marginal parameters. Maximum
likelihood estimation (MLE) with feedback based on quantized data is proposed
to estimate the unknown parameters. These estimates are iteratively used to
refine the quantizers and the fusion rule to improve distributed detection
performance by using feedback. Numerical examples show that the new detection
method based on MLE with feedback is much better than the usual detection
method based on the assumption of conditionally independent observations.
|
1209.4257
|
Communication-Efficient and Exact Clustering Distributed Streaming Data
|
cs.DB cs.DC
|
A widely used approach to clustering a single data stream is the two-phased
approach in which the online phase creates and maintains micro-clusters while
the off-line phase generates the macro-clustering from the micro-clusters. We
use this approach to propose a distributed framework for clustering streaming
data. Our proposed framework consists of fundamental processes: one
coordinator-site process and many remote-site processes. Remote-site processes
can directly communicate with the coordinator process but cannot communicate
with the other remote-site processes. Every remote-site process generates and
maintains micro-clusters, which represent a cluster information summary of its
local data stream. Remote sites send the local micro-clusterings to the
coordinator by the serialization technique, or the coordinator invokes the
remote methods in order to get the local micro-clusterings from the remote
sites. After the coordinator receives all the local micro-clusterings from the
remote sites, it generates the global clustering by the macro-clustering
method. Our theoretical and empirical results show that, the global clustering
generated by our distributed framework is similar to the clustering generated
by the underlying centralized algorithm on the same data set. By using the
local micro-clustering approach, our framework achieves high scalability, and
communication-efficiency.
|
1209.4275
|
Decision-Theoretic Coordination and Control for Active Multi-Camera
Surveillance in Uncertain, Partially Observable Environments
|
cs.AI cs.MA cs.MM cs.RO
|
A central problem of surveillance is to monitor multiple targets moving in a
large-scale, obstacle-ridden environment with occlusions. This paper presents a
novel principled Partially Observable Markov Decision Process-based approach to
coordinating and controlling a network of active cameras for tracking and
observing multiple mobile targets at high resolution in such surveillance
environments. Our proposed approach is capable of (a) maintaining a belief over
the targets' states (i.e., locations, directions, and velocities) to track
them, even when they may not be observed directly by the cameras at all times,
(b) coordinating the cameras' actions to simultaneously improve the belief over
the targets' states and maximize the expected number of targets observed with a
guaranteed resolution, and (c) exploiting the inherent structure of our
surveillance problem to improve its scalability (i.e., linear time) in the
number of targets to be observed. Quantitative comparisons with
state-of-the-art multi-camera coordination and control techniques show that our
approach can achieve higher surveillance quality in real time. The practical
feasibility of our approach is also demonstrated using real AXIS 214 PTZ
cameras.
|
1209.4277
|
Multi-Level Modeling of Quotation Families Morphogenesis
|
cs.CY cs.CL cs.SI physics.soc-ph
|
This paper investigates cultural dynamics in social media by examining the
proliferation and diversification of clearly-cut pieces of content: quoted
texts. In line with the pioneering work of Leskovec et al. and Simmons et al.
on meme dynamics, we investigate in depth the transformations that quotations
published online undergo during their diffusion. We deliberately put aside the
structure of the social network as well as the dynamical patterns pertaining to
the diffusion process to focus on the way quotations are changed, how often
they are modified and how these changes shape more or less diverse families and
sub-families of quotations. Following a biological metaphor, we try to
understand in which way mutations can transform quotations at different scales
and how mutation rates depend on various properties of the quotations.
|
1209.4280
|
Alpha/Beta Divergences and Tweedie Models
|
stat.ML cs.IT math.IT math.ST stat.TH
|
We describe the underlying probabilistic interpretation of alpha and beta
divergences. We first show that beta divergences are inherently tied to Tweedie
distributions, a particular type of exponential family, known as exponential
dispersion models. Starting from the variance function of a Tweedie model, we
outline how to get alpha and beta divergences as special cases of Csisz\'ar's
$f$ and Bregman divergences. This result directly generalizes the well-known
relationship between the Gaussian distribution and least squares estimation to
Tweedie models and beta divergence minimization.
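For a concrete reference point, the beta divergence discussed here has a standard closed form. The helper below is our own sketch, not code from the paper; it also encodes the well-known limits: Itakura-Saito at beta = 0, generalized Kullback-Leibler at beta = 1, and half the squared error at beta = 2, which is the Gaussian/least-squares case the abstract generalizes.

```python
import math

def beta_divergence(x, y, beta):
    """Element-wise beta divergence d_beta(x | y) for scalars x, y > 0."""
    if beta == 0:  # Itakura-Saito divergence (limit as beta -> 0)
        return x / y - math.log(x / y) - 1
    if beta == 1:  # generalized Kullback-Leibler divergence (limit as beta -> 1)
        return x * math.log(x / y) - x + y
    # general case, beta not in {0, 1}
    return (x ** beta + (beta - 1) * y ** beta
            - beta * x * y ** (beta - 1)) / (beta * (beta - 1))
```

At beta = 2 the general branch reduces to (x - y)^2 / 2, recovering the least-squares connection to the Gaussian member of the Tweedie family stated above.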
|
1209.4290
|
Cognitive Bias for Universal Algorithmic Intelligence
|
cs.AI
|
Existing theoretical models of universal algorithmic intelligence are not
practically realizable. A more pragmatic approach to artificial general
intelligence is based on cognitive architectures, which are, however,
non-universal in the sense that they can construct and use models of the
environment only from Turing-incomplete model spaces. We believe that the path
to real AGI lies in bridging the gap between these two approaches. This is
possible if one considers cognitive functions as a "cognitive bias" (priors
and search heuristics) that should be incorporated into the models of universal
algorithmic intelligence without violating their universality. Earlier reported
results supporting this approach, and its overall feasibility, are discussed
using the examples of perception, planning, knowledge representation,
attention, theory of mind, language, and some others.
|
1209.4316
|
Critical Parameter Values and Reconstruction Properties of Discrete
Tomography: Application to Experimental Fluid Dynamics
|
math.NA cs.IT math.IT
|
We analyze representative ill-posed scenarios of tomographic PIV with a focus
on conditions for unique volume reconstruction. Based on sparse random seedings
of a region of interest with small particles, the corresponding systems of
linear projection equations are probabilistically analyzed in order to
determine (i) the feasibility of unique reconstruction in terms of the imaging
geometry and the critical sparsity parameter, and (ii) the sharpness of the
transition to non-unique reconstruction with ghost particles when the sparsity
parameter is chosen improperly. The sparsity parameter directly relates to the
seeding density used for PIV in experimental fluid dynamics, which to date has
been chosen empirically. Our results provide a basic mathematical characterization
of the PIV volume reconstruction problem that is an essential prerequisite for
any algorithm used to actually compute the reconstruction. Moreover, we connect
the sparse volume function reconstruction problem from few tomographic
projections to major developments in compressed sensing.
|
1209.4317
|
Image Super-Resolution via Sparse Bayesian Modeling of Natural Images
|
cs.CV
|
Image super-resolution (SR) is one of the long-standing and active topics in
the image processing community. A large body of work on image super-resolution
formulates the problem with Bayesian modeling techniques and then obtains its
Maximum-A-Posteriori (MAP) solution, which actually boils down to a regularized
regression task over a separable regularization term. Although straightforward,
this approach cannot exploit the full potential offered by probabilistic
modeling, as only the posterior mode is sought. Also, the separability of the
regularization term cannot capture any correlations between the sparse
coefficients, which sacrifices much of its modeling accuracy. We propose a
Bayesian image SR algorithm via sparse modeling of natural images. The sparsity
of the latent high-resolution image is exploited by introducing latent
variables into the high-order Markov Random Field (MRF), which capture the
content-adaptive variance by pixel-wise adaptation. The high-resolution image
is estimated via an empirical Bayesian estimation scheme, which is substantially
faster than our previous approach based on Markov Chain Monte Carlo sampling
[1]. It is shown that the cost function for the proposed approach in fact
incorporates a non-factorial regularization term over the sparse coefficients.
Experimental results indicate that the proposed method can generate competitive
or better results than \emph{state-of-the-art} SR algorithms.
|
1209.4330
|
Modeling and Verification of a Multi-Agent Argumentation System using
NuSMV
|
cs.AI cs.MA
|
Autonomous intelligent agent research is a domain situated at the forefront
of artificial intelligence. Interest-based negotiation (IBN) is a form of
negotiation in which agents exchange information about their underlying goals,
with a view to improving the likelihood and quality of an offer. In this paper we
model and verify a multi-agent argumentation scenario of resource sharing
mechanism to enable resource sharing in a distributed system. We use IBN in our
model wherein agents express their interests to the others in the society to
gain certain resources.
|
1209.4340
|
Moments and Absolute Moments of the Normal Distribution
|
math.ST cs.IT math.IT math.PR stat.OT stat.TH
|
We present formulas for the (raw and central) moments and absolute moments of
the normal distribution. We note that these results are not new, yet many
textbooks miss out on at least some of them. Hence, we believe that it is
worthwhile to collect these formulas and their derivations in these notes.
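The closed forms these notes collect are standard: for X ~ N(0, 1), the raw moments vanish for odd n and equal the double factorial (n - 1)!! for even n, while the absolute moments are E|X|^n = 2^(n/2) Gamma((n + 1)/2) / sqrt(pi). A quick numerical sketch (our own illustration, not code from the notes):

```python
import math

def raw_moment(n):
    """E[X^n] for X ~ N(0, 1): zero for odd n, (n - 1)!! for even n."""
    if n % 2 == 1:
        return 0.0
    result = 1.0
    for k in range(n - 1, 0, -2):  # double factorial (n - 1)!!
        result *= k
    return result

def abs_moment(n):
    """E[|X|^n] for X ~ N(0, 1): 2^(n/2) * Gamma((n + 1) / 2) / sqrt(pi)."""
    return 2 ** (n / 2) * math.gamma((n + 1) / 2) / math.sqrt(math.pi)
```

For even n the two formulas agree (e.g. both give 3 at n = 4), and at n = 1 the absolute moment reduces to the familiar sqrt(2 / pi).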
|
1209.4365
|
Stochastic Stabilization of Partially Observed and Multi-Sensor Systems
Driven by Gaussian Noise under Fixed-Rate Information Constraints
|
math.OC cs.IT math.IT
|
We investigate the stabilization of unstable multidimensional partially
observed single-sensor and multi-sensor linear systems driven by unbounded
noise and controlled over discrete noiseless channels under fixed-rate
information constraints. Stability is achieved under fixed-rate communication
requirements that are asymptotically tight in the limit of large sampling
periods. Through the use of similarity transforms, sampling and random-time
drift conditions we obtain a coding and control policy leading to the existence
of a unique invariant distribution and finite second moment for the sampled
state. We use a vector stabilization scheme in which all modes of the linear
system visit a compact set together infinitely often. We prove tight necessary
and sufficient conditions for the general multi-sensor case under an assumption
related to the Jordan form structure of such systems. In the absence of this
assumption, we give sufficient conditions for stabilization.
|
1209.4383
|
Minimum Communication Cost for Joint Distributed Source Coding and
Dispersive Information Routing
|
cs.IT math.IT
|
This paper considers the problem of minimum cost communication of correlated
sources over a network with multiple sinks, which consists of distributed
source coding followed by routing. We introduce a new routing paradigm called
dispersive information routing, wherein the intermediate nodes are allowed to
`split' a packet and forward subsets of the received bits on each of the
forward paths. This paradigm opens up a rich class of research problems which
focus on the interplay between encoding and routing in a network. Unlike
conventional routing methods such as in [1], dispersive information routing
ensures that each sink receives just the information needed to reconstruct the
sources it is required to reproduce. We demonstrate using simple examples that
our approach offers better asymptotic performance than conventional routing
techniques. This paradigm leads to a new information theoretic setup, which has
not been studied earlier. We propose a new coding scheme, using principles from
multiple descriptions encoding [2] and Han and Kobayashi decoding [3]. We show
that this coding scheme achieves the complete rate region for certain special
cases of the general setup and thereby achieves the minimum communication cost
under this routing paradigm.
|
1209.4405
|
Strongly Convex Programming for Principal Component Pursuit
|
cs.IT math.IT math.NA
|
In this paper, we address strongly convex programming for principal component
pursuit with reduced linear measurements, which decomposes a superposition of a
low-rank matrix and a sparse matrix from a small set of linear measurements. We
first provide sufficient conditions under which the strongly convex models lead
to exact low-rank and sparse matrix recovery; second, we give suggestions on
how to choose suitable parameters in practical algorithms.
|
1209.4414
|
On Cyclic DNA Codes
|
cs.IT math.IT q-bio.OT
|
This paper considers cyclic DNA codes of arbitrary length over the ring
$R=\F_2[u]/(u^4-1)$. A mapping is given between the elements of $R$ and the
alphabet $\{A,C,G,T\}$ which allows the additive stem distance to be extended
to this ring. Cyclic codes over $R$ are designed such that their images under
the mapping are also cyclic or quasi-cyclic of index 2. The additive distance
and hybridization energy are functions of the neighborhood energy.
|
1209.4419
|
Head Frontal-View Identification Using Extended LLE
|
cs.CV
|
Automatic head frontal-view identification is challenging due to appearance
variations caused by pose changes, especially without any training samples. In
this paper, we present an unsupervised algorithm for identifying frontal view
among multiple facial images under various yaw poses (derived from the same
person). Our approach is based on Locally Linear Embedding (LLE), with the
assumption that with yaw pose being the only variable, the facial images should
lie in a smooth and low dimensional manifold. We horizontally flip the facial
images and present two K-nearest neighbor protocols for the original images and
the flipped images, respectively. In the proposed extended LLE, for any facial
image (original or flipped one), we search (1) the Ko nearest neighbors among
the original facial images and (2) the Kf nearest neighbors among the flipped
facial images to construct the same neighborhood graph. The extended LLE
eliminates the differences (because of background, face position and scale in
the whole image and some asymmetry of left-right face) between the original
facial image and the flipped facial image at the same yaw pose so that the
flipped facial images can be used effectively. Our approach does not need any
training samples as prior information. The experimental results show that the
frontal view of the head can be identified reliably around the lowest point of
the pose manifold for multiple facial images, especially for cropped facial
images (little background and a centered face).
|
1209.4420
|
An Efficient Color Face Verification Based on 2-Directional
2-Dimensional Feature Extraction
|
cs.CV
|
A novel and uniform framework for face verification is presented in this
paper. First, a 2-directional 2-dimensional feature extraction method is
adopted to extract a client-specific template: a 2D discriminant projection
matrix. Then the face skin color information is utilized as an additional
feature to enhance the decision-making strategy, which makes use of not only
the 2D grey-level feature but also the 2D skin color feature. A fusion of the
two decisions is evaluated on the XM2VTS database according to the Lausanne
protocol. Experimental results show that the framework achieves high
verification accuracy and speed.
|
1209.4425
|
Distributed Estimation of a Parametric Field Using Sparse Noisy Data
|
cs.IT math.IT
|
The problem of distributed estimation of a parametric physical field is
stated as a maximum likelihood estimation problem. Sensor observations are
distorted by additive white Gaussian noise. Prior to data transmission, each
sensor quantizes its observation to $M$ levels. The quantized data are then
communicated over parallel additive white Gaussian channels to a fusion center
for a joint estimation. An iterative expectation-maximization (EM) algorithm to
estimate the unknown parameter is formulated, and its linearized version is
adopted for numerical analysis. The numerical examples are provided for the
case of the field modeled as a Gaussian bell. The dependence of the integrated
mean-square error on the number of quantization levels, the number of sensors
in the network and the SNR in observation and transmission channels is
analyzed.
|
1209.4433
|
Transverse Contraction Criteria for Existence, Stability, and Robustness
of a Limit Cycle
|
math.OC cs.RO cs.SY
|
This paper derives a differential contraction condition for the existence of
an orbitally-stable limit cycle in an autonomous system. This transverse
contraction condition can be represented as a pointwise linear matrix
inequality (LMI), thus allowing convex optimization tools such as
sum-of-squares programming to be used to search for certificates of the
existence of a stable limit cycle. Many desirable properties of contracting
dynamics are extended to this context, including preservation of contraction
under a broad class of interconnections. In addition, by introducing the
concepts of differential dissipativity and transverse differential
dissipativity, contraction and transverse contraction can be established for
large scale systems via LMI conditions on component subsystems.
|
1209.4444
|
On the Construction of Polar Codes
|
cs.IT math.IT
|
We consider the problem of efficiently constructing polar codes over binary
memoryless symmetric (BMS) channels. The complexity of designing polar codes
via an exact evaluation of the polarized channels to find which ones are "good"
appears to be exponential in the block length. In \cite{TV11}, Tal and Vardy
show that if the evaluation is instead performed approximately, the
construction has only linear complexity. In this paper, we follow this approach
and present a framework where the algorithms of \cite{TV11} and new related
algorithms can be analyzed for complexity and accuracy. We provide numerical
and analytical results on the efficiency of such algorithms; in particular, we
show that one can find all the "good" channels (except a vanishing fraction)
with almost linear complexity in the block length (up to a polylogarithmic
factor).
|
1209.4445
|
Speech Signal Filters based on Soft Computing Techniques: A Comparison
|
cs.AI
|
The paper presents a comparison of various soft computing techniques used for
filtering and enhancing speech signals. The three major techniques that fall
under soft computing are neural networks, fuzzy systems and genetic algorithms.
Other hybrid techniques such as neuro-fuzzy systems are also available. In
general, soft computing techniques have been experimentally observed to give
far superior performance as compared to non-soft computing techniques in terms
of robustness and accuracy.
|
1209.4463
|
Sparsification of Motion-Planning Roadmaps by Edge Contraction
|
cs.RO cs.DS
|
We present Roadmap Sparsification by Edge Contraction (RSEC), a simple and
effective algorithm for reducing the size of a motion-planning roadmap. The
algorithm exhibits minimal effect on the quality of paths that can be extracted
from the new roadmap. The primitive operation used by RSEC is edge contraction
- the contraction of a roadmap edge to a single vertex and the connection of
the new vertex to the neighboring vertices of the contracted edge. For certain
scenarios, we compress more than 98% of the edges and vertices at the cost of
degradation of average shortest path length by at most 2%.
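The edge-contraction primitive described above is standard graph surgery. A minimal illustration on an adjacency-set representation (our own sketch, not the RSEC implementation, and ignoring roadmap-specific concerns such as edge weights and path quality):

```python
def contract_edge(adj, u, v):
    """Contract edge (u, v): merge v into u and reconnect v's neighbors to u.

    adj maps each vertex to the set of its neighbors (undirected graph).
    """
    for w in adj[v]:
        if w != u:
            adj[u].add(w)       # u inherits v's neighbor w
            adj[w].discard(v)   # w no longer points at the removed vertex
            adj[w].add(u)
    adj[u].discard(v)           # drop the contracted edge itself
    del adj[v]                  # v is absorbed into u
    return adj
```

For example, contracting edge (2, 3) in the path 1-2-3 leaves the single edge 1-2.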
|
1209.4471
|
Stemmer for Serbian language
|
cs.CL cs.IR
|
In linguistic morphology and information retrieval, stemming is the process
of reducing inflected (or sometimes derived) words to their stem, base or root
form, generally a written word form. In this work, a suffix-stripping stemmer
is presented for the Serbian language, one of the highly inflectional languages.
|
1209.4479
|
Beyond Cumulated Gain and Average Precision: Including Willingness and
Expectation in the User Model
|
cs.IR
|
In this paper, we define a new metric family based on two concepts: The
definition of the stopping criterion and the notion of satisfaction, where the
former depends on the willingness and expectation of a user exploring search
results. Both concepts have been discussed so far in the IR literature, but we
argue in this paper that defining a proper single valued metric depends on
merging them into a single conceptual framework.
|
1209.4483
|
Compute-and-Forward on a Multiaccess Relay Channel: Coding and
Symmetric-Rate Optimization
|
cs.IT math.IT
|
We consider a system in which two users communicate with a destination with
the help of a half-duplex relay. Based on the compute-and-forward scheme, we
develop and evaluate the performance of coding strategies that are of network
coding spirit. In this framework, instead of decoding the users' information
messages, the destination decodes two integer-valued linear combinations that
relate the transmitted codewords. Two decoding schemes are considered. In the
first one, the relay computes one of the linear combinations and then forwards
it to the destination. The destination computes the other linear combination
based on the direct transmissions. In the second one, accounting for the side
information available at the destination through the direct links, the relay
compresses what it gets using Wyner-Ziv compression and conveys it to the
destination. The destination then computes the two linear combinations,
locally. For both coding schemes, we discuss the design criteria, and derive
the allowed symmetric-rate. Next, we address the power allocation and the
selection of the integer-valued coefficients to maximize the offered
symmetric-rate; an iterative coordinate descent method is proposed. The
analysis shows that the first scheme can outperform standard relaying
techniques in certain regimes, and the second scheme, while relying on feasible
structured lattice codes, can at best achieve the same performance as regular
compress-and-forward for the multiaccess relay network model that we study. The
results are illustrated through some numerical examples.
|
1209.4506
|
A three-dimensional domain decomposition method for large-scale DFT
electronic structure calculations
|
cond-mat.mtrl-sci cs.CE cs.DC physics.comp-ph
|
With tens of petaflops supercomputers already in operation and exaflops
machines expected to appear within the next 10 years, efficient parallel
computational methods are required to take advantage of such extreme-scale
machines. In this paper, we present a three-dimensional domain decomposition
scheme for enabling large-scale electronic calculations based on density
functional theory (DFT) on massively parallel computers. It is composed of two
methods: (i) atom decomposition method and (ii) grid decomposition method. In
the former, we develop a modified recursive bisection method based on inertia
tensor moment to reorder the atoms along a principal axis so that atoms that
are close in real space are also close on the axis to ensure data locality. The
atoms are then divided into sub-domains depending on their projections onto the
principal axis in a balanced way among the processes. In the latter, we define
four data structures for the partitioning of grids that are carefully
constructed to make data locality consistent with that of the clustered atoms
for minimizing data communications between the processes. We also propose a
decomposition method for solving the Poisson equation using three-dimensional
FFT in Hartree potential calculation, which is shown to be better than a
previously proposed parallelization method based on a two-dimensional
decomposition in terms of communication efficiency. For evaluation, we perform
benchmark calculations with our open-source DFT code, OpenMX, paying particular
attention to the O(N) Krylov subspace method. The results show that our scheme
exhibits good strong and weak scaling properties, with the parallel efficiency
at 131,072 cores being 67.7% compared to the baseline of 16,384 cores with
131,072 diamond atoms on the K computer.
|
1209.4523
|
Evolution of the Media Web
|
cs.IR cs.SI physics.soc-ph
|
We present a detailed study of the part of the Web related to media content,
i.e., the Media Web. Using publicly available data, we analyze the evolution of
incoming and outgoing links from and to media pages. Based on our observations,
we propose a new class of models for the appearance of new media content on the
Web where different \textit{attractiveness} functions of nodes are possible
including ones taken from well-known preferential attachment and fitness
models. We analyze these models theoretically and empirically and show which
ones realistically predict both the incoming degree distribution and the
so-called \textit{recency property} of the Media Web, something that existing
models did not do well. Finally, we compare these models by estimating the
likelihood of the real-world link graph from our data set under each model,
and find that the models we introduce are significantly more likely than
previously proposed ones. One of the most surprising results is that in the Media Web the
probability for a post to be cited is determined, most likely, by its quality
rather than by its current popularity.
|
1209.4532
|
Applicability of Crisp and Fuzzy Logic in Intelligent Response
Generation
|
cs.AI
|
This paper discusses the merits and demerits of crisp logic and fuzzy logic
with respect to their applicability in intelligent response generation by a
human being and by a robot. Intelligent systems must have the capability of
taking decisions that are wise and handle situations intelligently. A direct
relationship exists between the level of perfection in handling a situation and
the level of completeness of the available knowledge or information or data
required to handle the situation. The paper concludes that the use of crisp
logic with complete knowledge leads to perfection in handling situations,
whereas fuzzy logic can handle situations only imperfectly. However, when only
incomplete knowledge is available, fuzzy theory is more effective, though it
may still be at a disadvantage compared to crisp logic.
|
1209.4535
|
Application of Fuzzy Mathematics to Speech-to-Text Conversion by
Elimination of Paralinguistic Content
|
cs.AI
|
For the past few decades, man has been trying to create an intelligent
computer which can talk and respond like he can. The task of creating a system
that can talk like a human being is the primary objective of Automatic Speech
Recognition. Various Speech Recognition techniques have been developed in
theory and have been applied in practice. This paper discusses the problems
that have been encountered in developing Speech Recognition, the techniques
that have been applied to automate the task, and a representation of the core
problems of present day Speech Recognition by using Fuzzy Mathematics.
|
1209.4557
|
Strong Secrecy for Multiple Access Channels
|
cs.IT math.IT
|
We establish achievable rate regions under strong secrecy for two different
wiretap multiple-access channel coding problems. In the first problem, each encoder has
a private message and both together have a common message to transmit. The
encoders have entropy-limited access to common randomness. If no common
randomness is available, then the achievable region derived here does not allow
for the secret transmission of a common message. The second coding problem
assumes that the encoders have neither a common message nor access to common
randomness. However, they may have a conferencing link over which they may
iteratively exchange rate-limited information. This can be used to form a
common message and common randomness to reduce the second coding problem to the
first one. We give the example of a channel where the achievable region equals
zero without conferencing or common randomness and where conferencing
establishes the possibility of secret message transmission. Both coding
problems describe practically relevant networks which need to be secured
against eavesdropping attacks.
|
1209.4576
|
Low-Complexity Quantized Switching Controllers using Approximate
Bisimulation
|
cs.SY math.OC
|
In this paper, we consider the problem of synthesizing low-complexity
controllers for incrementally stable switched systems. For that purpose, we
establish a new approximation result for the computation of symbolic models
that are approximately bisimilar to a given switched system. The main advantage
over existing results is that it allows us to design naturally quantized
switching controllers for safety or reachability specifications; these can be
pre-computed offline and therefore the online execution time is reduced. Then,
we present a technique to reduce the memory needed to store the control law by
borrowing ideas from algebraic decision diagrams for compact function
representation and by exploiting the non-determinism of the synthesized
controllers. We show the merits of our approach by applying it to a simple
model of temperature regulation in a building.
|
1209.4608
|
Performance Analysis of Hybrid Forecasting Model In Stock Market
Forecasting
|
q-fin.ST cs.CE
|
This paper presents a performance analysis of a hybrid model, comprising
concordance measures and Genetic Programming (GP), for forecasting financial
markets, compared against some existing models. This scheme can be used for
in-depth analysis of the stock market. Different measures of concordance, such
as Kendall's Tau, Gini's Mean Difference, Spearman's Rho, and a weak
interpretation of concordance, are used to search for patterns in the past that
look similar to the present. Genetic Programming is then used to match the past
trend to the present trend as closely as possible. The genetic program then
estimates what will happen next based on what happened next in the past. The
concept is validated using financial time series data (S&P 500 and NASDAQ
indices) as sample data sets. The forecasted results are then compared with the
standard ARIMA model and other models to analyse performance.
|
1209.4612
|
Polar Codes: Robustness of the Successive Cancellation Decoder with
Respect to Quantization
|
cs.IT math.IT
|
Polar codes provably achieve the capacity of a wide array of channels under
successive decoding. This assumes infinite precision arithmetic. Given the
successive nature of the decoding algorithm, one might worry about the
sensitivity of the performance to the precision of the computation.
We show that even very coarsely quantized decoding algorithms lead to
excellent performance. More concretely, we show that under successive decoding
with an alphabet of cardinality only three, the decoder still has a threshold
and this threshold is a sizable fraction of capacity. More generally, we show
that if we are willing to transmit at a rate $\delta$ below capacity, then we
need only $c \log(1/\delta)$ bits of precision, where $c$ is a universal
constant.
|
1209.4616
|
Rethinking Centrality: The Role of Dynamical Processes in Social Network
Analysis
|
cs.SI physics.soc-ph
|
Many popular measures used in social network analysis, including centrality,
are based on the random walk. The random walk is a model of a stochastic
process where a node interacts with one other node at a time. However, the
random walk may not be appropriate for modeling social phenomena, including
epidemics and information diffusion, in which one node may interact with many
others at the same time, for example, by broadcasting the virus or information
to its neighbors. To produce meaningful results, social network analysis
algorithms have to take into account the nature of interactions between the
nodes. In this paper we classify dynamical processes as conservative and
non-conservative and relate them to well-known measures of centrality used in
network analysis: PageRank and Alpha-Centrality. We demonstrate, by ranking
users in online social networks used for broadcasting information, that
non-conservative Alpha-Centrality generally leads to a better agreement with an
empirical ranking scheme than the conservative PageRank.
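Both measures have standard definitions that make the conservative/non-conservative distinction concrete: PageRank redistributes a fixed amount of random-walk probability, while Alpha-Centrality iterates x = e + alpha * A^T x, letting influence accumulate along all paths. A minimal pure-Python sketch (our own illustration; function names and the damping/alpha defaults are our choices, not from the paper):

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power iteration for PageRank on adj: node -> list of out-neighbors."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            if adj[v]:
                share = damping * rank[v] / len(adj[v])  # mass is split, not copied
                for w in adj[v]:
                    nxt[w] += share
            else:  # dangling node: spread its mass uniformly
                for w in nodes:
                    nxt[w] += damping * rank[v] / n
        rank = nxt
    return rank

def alpha_centrality(adj, alpha=0.1, iters=100):
    """Iterate x = e + alpha * A^T x (non-conservative influence spread)."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        x = {v: 1.0 + alpha * sum(x[u] for u in adj if v in adj[u]) for v in adj}
    return x
```

Note that PageRank's total mass stays fixed at 1 (conservative), whereas under Alpha-Centrality a broadcasting node passes its full score to every neighbor, so total "influence" grows with the number of receivers; the iteration converges when alpha is below the reciprocal of the adjacency matrix's spectral radius.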
|
1209.4679
|
Coding and System Design for Quantize-Map-and-Forward Relaying
|
cs.IT math.IT
|
In this paper we develop a low-complexity coding scheme and system design
framework for the half-duplex relay channel based on the
Quantize-Map-and-Forward (QMF) relaying scheme. The proposed framework allows
linear-complexity operations at all network terminals. We propose the use of
binary LDPC codes for encoding at the source and LDGM codes for mapping at the
relay. We express joint decoding at the destination as a belief propagation
algorithm over a factor graph. This graph has the LDPC and LDGM codes as
subgraphs connected via probabilistic constraints that model the QMF relay
operations. We show that this coding framework extends naturally to the high
SNR regime using bit interleaved coded modulation (BICM). We develop density
evolution analysis tools for this factor graph and demonstrate the design of
practical codes for the half-duplex relay channel that perform within 1 dB of
the information-theoretic QMF threshold.
|
1209.4683
|
Joint User Grouping and Linear Virtual Beamforming: Complexity,
Algorithms and Approximation Bounds
|
cs.IT math.IT
|
In a wireless system with a large number of distributed nodes, the quality of
communication can be greatly improved by pooling the nodes to perform joint
transmission/reception. In this paper, we consider the problem of optimally
selecting a subset of nodes from a potentially large number of candidates to
form a virtual multi-antenna system, while at the same time designing their
joint linear transmission strategies. We focus on two specific application
scenarios: 1) multiple single antenna transmitters cooperatively transmit to a
receiver; 2) a single transmitter transmits to a receiver with the help of a
number of cooperative relays. We formulate the joint node selection and
beamforming problems as cardinality constrained optimization problems with both
discrete variables (used for selecting cooperative nodes) and continuous
variables (used for designing beamformers). For each application scenario, we
first characterize the computational complexity of the joint optimization
problem, and then propose novel semi-definite relaxation (SDR) techniques to
obtain approximate solutions. We show that the new SDR algorithms have a
guaranteed approximation performance in terms of the gap to global optimality,
regardless of channel realizations. The effectiveness of the proposed
algorithms is demonstrated via numerical experiments.
|
1209.4687
|
Capacity of Gaussian Channels with Duty Cycle and Power Constraints
|
cs.IT math.IT
|
In many wireless communication systems, radios are subject to a duty cycle
constraint, that is, a radio only actively transmits signals over a fraction of
the time. For example, it is desirable to have a small duty cycle in some low
power systems; a half-duplex radio cannot keep transmitting if it wishes to
receive useful signals; and a cognitive radio needs to listen and detect
primary users frequently. This work studies the capacity of scalar
discrete-time Gaussian channels subject to duty cycle constraint as well as
average transmit power constraint. An idealized duty cycle constraint is first
studied, which can be regarded as a requirement on the minimum fraction of
nontransmissions or zero symbols in each codeword. A unique discrete input
distribution is shown to achieve the channel capacity. In many situations,
numerically optimized on-off signaling can achieve a much higher rate than
Gaussian signaling over a deterministic transmission schedule. This is in part
because the positions of nontransmissions in a codeword can convey information.
Furthermore, a more realistic duty cycle constraint is studied, where the extra
cost of transitions between transmissions and nontransmissions due to pulse
shaping is accounted for. The capacity-achieving input is no longer independent
over time and is hard to compute. A lower bound of the achievable rate as a
function of the input distribution is shown to be maximized by a first-order
Markov input process, the distribution of which is also discrete and can be
computed efficiently. The results in this paper suggest that, under various
duty cycle constraints, departing from the usual paradigm of intermittent
packet transmissions may yield substantial gain.
|
1209.4700
|
Fast Computation of the Arnold Complexity of Length $2^{n}$ Binary Words
|
math.CO cs.IT math.IT
|
For fast computation of the Arnold complexity of length $2^{n}$ binary words,
we obtain an upper bound on the Shannon function $Sh(n)$.
|
1209.4760
|
Structure and stability of online chat networks built on
emotion-carrying links
|
physics.soc-ph cs.SI
|
High-resolution data of online chats are studied as a physical system in the
laboratory in order to quantify the collective behavior of users. Our analysis
reveals strong regularities characteristic to natural systems with additional
features. In particular, we find self-organized dynamics with long-range
correlations in user actions and persistent associations among users that have
the properties of a social network. Furthermore, the evolution of the graph and
its architecture with a specific k-core structure are shown to be related to
the type and the emotional arousal of exchanged messages. Partitioning of the
graph by deletion of the links which carry high arousal messages exhibits
critical fluctuations at the percolation threshold.
|
1209.4772
|
Statistical mechanical evaluation of spread spectrum watermarking model
with image restoration
|
cond-mat.stat-mech cs.IT math.IT
|
When the original image is not available at the decoder (the blind setting), a
decoding method in which both the image and the messages can be estimated
simultaneously is desirable. We
propose a spread spectrum watermarking model with image restoration based on
Bayes estimation. We therefore need to assume some prior probabilities. The
probability for estimating the messages is given by the uniform distribution,
and the ones for the image are given by the infinite range model and 2D Ising
model. Any attacks from unauthorized users can be represented by channel
models. We can obtain the estimated messages and image by maximizing the
posterior probability.
We analyzed the performance of the proposed method by the replica method in
the case of the infinite range model. We first calculated the theoretical
values of the bit error rate from obtained saddle point equations and then
verified them by computer simulations. For this purpose, we assumed that the
image is binary and is generated from a given prior probability. We also assume
that attacks can be represented by the Gaussian channel. The computer
simulation results agreed with the theoretical values.
In the case of prior probability given by the 2D Ising model, in which each
pixel is statically connected with four-neighbors, we evaluated the decoding
performance by computer simulations, since the replica theory could not be
applied. Results using the 2D Ising model showed that the proposed method with
image restoration is as effective as the infinite range model for decoding
messages.
We compared the performance in the blind case with that in the informed case.
The difference between these cases was small as long
as the embedding and attack rates were small. This demonstrates that the
proposed method with simultaneous estimation is effective as a watermarking
decoder.
|
1209.4785
|
Sparse Signal Recovery from Quadratic Measurements via Convex
Programming
|
cs.IT math.IT math.NA
|
In this paper we consider a system of quadratic equations |<z_j, x>|^2 = b_j,
j = 1, ..., m, where x in R^n is unknown while the normal random vectors z_j in R^n
and quadratic measurements b_j in R are known. The system is assumed to be
underdetermined, i.e., m < n. We prove that if there exists a sparse solution
x, i.e., at most k components of x are non-zero, then by solving a convex
optimization program, we can solve for x up to a multiplicative constant with
high probability, provided that k <= O((m/log n)^(1/2)). On the other hand, we
prove that k <= O(log n (m)^(1/2)) is necessary for a class of naive convex
relaxations to be exact.
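As a minimal numerical sketch of the measurement model only (the convex recovery program itself is not reproduced here; dimensions, sparsity level, and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 20, 10, 2              # underdetermined system: m < n

# k-sparse unknown vector x
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# normal sensing vectors z_j (rows of Z) and measurements b_j = |<z_j, x>|^2
Z = rng.standard_normal((m, n))
b = (Z @ x) ** 2                 # real case: |.|^2 reduces to squaring

# the global sign is unidentifiable: -x yields identical measurements,
# which is why recovery is only possible up to a multiplicative constant
assert np.allclose((Z @ (-x)) ** 2, b)
```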
|
1209.4811
|
Performance Analysis of Error Control Coding Techniques for
Peak-to-Average Power Ratio Reduction of Multicarrier Signals
|
cs.IT math.IT
|
Increasing demands on high data rate mobile communications services will
inevitably drive future broadband mobile communication systems toward achieving
data transmission rates in excess of 100 Mbps. One of the promising
technologies which can satisfy this demand on high data rate mobile
communications services is the Orthogonal Frequency Division Multiplexing
(OFDM) transmission technology which falls under the general category of
multicarrier modulation systems. OFDM is a spectrally efficient modulation
technique that can achieve high speed data transmission over multipath fading
channels without the need for powerful equalization techniques. However, the
price paid for this high spectral efficiency and less intensive equalization is
low power efficiency. OFDM signals are very sensitive to non-linear effects due
to the high peak-to-average power ratio (PAPR), which leads to the power
inefficiency in the RF section of the transmitter. This paper analyzes the
relation between aperiodic autocorrelation of OFDM symbols and PAPR. The paper
also gives a comparative study of PAPR reduction performance of various channel
coding techniques for the OFDM signals. For our study we have considered
Hamming codes, cyclic codes, convolutional codes, Golay codes and Reed-Muller
codes. The results show that each of the channel coding techniques has a
different PAPR reduction performance. The coding technique with the highest
PAPR reduction is identified, along with an illustration of the PAPR reduction
performance of each code.
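For concreteness, the PAPR of a single OFDM symbol can be computed directly from its IFFT samples; the sketch below uses QPSK subcarrier symbols and N = 64 subcarriers purely as illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64  # number of subcarriers (illustrative)

# random QPSK symbols, one per subcarrier (unit modulus)
X = np.exp(1j * (np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, N)))

# time-domain OFDM symbol via the IFFT
x = np.fft.ifft(X)

# peak-to-average power ratio, in dB
papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR = {papr_db:.2f} dB")
```

For constant-modulus constellations the PAPR is bounded above by $10\log_{10} N$ dB, which is the motivation for the coding-based reduction techniques compared in the paper.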
|
1209.4818
|
Recursive Descriptions of Polar Codes
|
cs.IT cs.AR math.IT
|
Polar codes are recursive general concatenated codes. This property motivates
a recursive formalization of the known decoding algorithms: Successive
Cancellation, Successive Cancellation with Lists and Belief Propagation. Using
such a description allows easy development of these algorithms for arbitrary
polarizing kernels. Hardware architectures for these decoding algorithms are
also described in a recursive way, both for Arikan's standard polar codes and
for arbitrary polarizing kernels.
|
1209.4825
|
Efficient Regularized Least-Squares Algorithms for Conditional Ranking
on Relational Data
|
cs.LG stat.ML
|
In domains like bioinformatics, information retrieval and social network
analysis, one can find learning tasks where the goal consists of inferring a
ranking of objects, conditioned on a particular target object. We present a
general kernel framework for learning conditional rankings from various types
of relational data, where rankings can be conditioned on unseen data objects.
We propose efficient algorithms for conditional ranking by optimizing squared
regression and ranking loss functions. We show theoretically that learning
with the ranking loss is likely to generalize better than with the regression
loss. Further, we prove that symmetry or reciprocity properties of relations
can be efficiently enforced in the learned models. Experiments on synthetic and
real-world data illustrate that the proposed methods deliver state-of-the-art
performance in terms of predictive power and computational efficiency.
Moreover, we also show empirically that incorporating symmetry or reciprocity
properties can improve the generalization performance.
|
1209.4831
|
Dynamics of link states in complex networks: The case of a majority rule
|
physics.soc-ph cs.SI
|
Motivated by the idea that some characteristics are specific to the relations
between individuals and not of the individuals themselves, we study a prototype
model for the dynamics of the states of the links in a fixed network of
interacting units. Each link in the network can be in one of two equivalent
states. A majority link-dynamics rule is implemented, so that in each dynamical
step the state of a randomly chosen link is updated to the state of the
majority of neighboring links. Nodes can be characterized by a link
heterogeneity index, giving a measure of the likelihood of a node to have a
link in one of the two states. We consider this link-dynamics model on fully
connected networks, square lattices and Erd\H{o}s-R\'enyi random networks. In each
case we find and characterize a number of nontrivial asymptotic configurations,
as well as some of the mechanisms leading to them and the time evolution of the
link heterogeneity index distribution. For a fully connected network and random
networks there is a broad distribution of possible asymptotic configurations.
Most asymptotic configurations that result from link-dynamics have no
counterpart under traditional node dynamics in the same topologies.
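A minimal simulation sketch of the link-majority update (system size, step count, and the binary $\pm 1$ encoding of the two link states are illustrative choices; "neighboring links" are taken to be links sharing a node with the chosen link):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # nodes in a fully connected network (illustrative)

# each link carries one of two equivalent states, encoded as +1 / -1
links = [(i, j) for i in range(n) for j in range(i + 1, n)]
state = {e: int(rng.choice([-1, 1])) for e in links}

def neighbors(e):
    """Links sharing a node with link e."""
    i, j = e
    return [f for f in links if f != e and (i in f or j in f)]

# majority link-dynamics rule: a randomly chosen link adopts the
# majority state of its neighboring links (ties leave it unchanged)
for _ in range(2000):
    e = links[rng.integers(len(links))]
    s = sum(state[f] for f in neighbors(e))
    if s != 0:
        state[e] = 1 if s > 0 else -1
```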
|
1209.4838
|
Formal Definition of AI
|
cs.AI
|
A definition of Artificial Intelligence was proposed in [1], but this
definition was not fully formal, if only because the word "Human" was used. In
this paper we formalize the definition from [1]. The biggest problem with that
definition was that the level of intelligence of AI was compared to the
intelligence of a human being. In order to change this, we introduce some
parameters on which AI will depend. One of these parameters is the level of
intelligence, and we define one AI for each level of intelligence. We assume
that for some level of intelligence the respective AI will be more intelligent
than a human being. Nevertheless, we cannot say which level this is, because we
cannot calculate its exact value.
|
1209.4850
|
The Pascal Triangle of a Discrete Image: Definition, Properties and
Application to Shape Analysis
|
math-ph cs.CV math.MP
|
We define the Pascal triangle of a discrete (gray scale) image as a pyramidal
arrangement of complex-valued moments and we explore its geometric
significance. In particular, we show that the entries of row k of this triangle
correspond to the Fourier series coefficients of the moment of order k of the
Radon transform of the image. Group actions on the plane can be naturally
prolonged onto the entries of the Pascal triangle. We study the prolongation of
some common group actions, such as rotations and reflections, and we propose
simple tests for detecting equivalences and self-equivalences under these group
actions. The motivating application of this work is the problem of
characterizing the geometry of objects on images, for example by detecting
approximate symmetries.
|
1209.4854
|
Geometric simulation of locally optimal tool paths in three-axis milling
|
cs.CG cs.CE math.NA
|
The most important aim in tool path generation methods is to increase the
machining efficiency by minimizing the total length of tool paths while the
error is kept under a prescribed tolerance. This can be achieved by determining
the moving direction of the cutting tool such that the machined stripe is the
widest. From a technical point of view it is recommended that the angle between
the tool axis and the surface normal does not change too much along the tool
path in order to ensure even abrasion of the tool. In this paper a mathematical
method for tool path generation in 3-axis milling is presented, which considers
these requirements by combining the features of isophotic curves and principal
curvatures. It calculates the proposed moving direction of the tool at each
point of the surface. The proposed direction depends on the dimensions of the
tool and on the curvature values of the surface. For triangulated surfaces, a
new local offset computation method is presented, which is also suitable for
detecting tool collisions with the target surface and self-intersections in the
offset mesh.
|
1209.4855
|
The Future of Neural Networks
|
cs.NE
|
The paper describes some recent developments in neural networks and discusses
the applicability of neural networks in the development of a machine that
mimics the human brain. The paper mentions a new architecture, the pulsed
neural network that is being considered as the next generation of neural
networks. The paper also explores the use of memristors in the development of a
brain-like computer called MoNETA. A new model, multi/infinite-dimensional
neural networks, is a recent development in the area of advanced neural
networks. The paper concludes that neural networks are essential to the
development of human-like technology and may prove indispensable to it.
|
1209.4887
|
A Note on the SPICE Method
|
stat.ML cs.SY
|
In this article, we analyze the SPICE method developed in [1], and establish
its connections with other standard sparse estimation methods such as the Lasso
and the LAD-Lasso. This result positions SPICE as a computationally efficient
technique for the calculation of Lasso-type estimators. Conversely, this
connection is very useful for establishing the asymptotic properties of SPICE
under several problem scenarios and for suggesting suitable modifications in
cases where the naive version of SPICE would not work.
|
1209.4889
|
A Unified Relay Framework with both D-F and C-F Relay Nodes
|
cs.IT math.IT
|
Decode-and-forward (D-F) and compress-and-forward (C-F) are two fundamentally
different relay strategies proposed by (Cover and El Gamal, 1979).
Individually, either of them has been successfully generalized to multi-relay
channels. In this paper, to allow each relay node the freedom of choosing
either of the two strategies, we propose a unified framework, where both the
D-F and C-F strategies can be employed simultaneously in the network. It turns
out that, to fully incorporate the advantages of both the best known D-F and
C-F strategies into a unified framework, the major challenge is as follows:
for the D-F relay nodes to fully utilize the help of the C-F relay nodes,
decoding at the D-F relay nodes should not be conducted until all the blocks
have been finished; however, in the multi-level D-F strategy, the upstream
nodes have to decode prior to the downstream nodes in order to help, which
makes simultaneous decoding at all the D-F relay nodes after all the blocks
have been finished inapplicable.
with backward decoding are used in our framework, so that the D-F relay nodes
at different levels can perform backward decoding at different frequencies. As
such, the upstream D-F relay nodes can decode before the downstream D-F relay
nodes, and the use of backward decoding at each D-F relay node ensures the full
exploitation of the help of both the other D-F relay nodes and the C-F relay
nodes. The achievable rates under our unified relay framework are found to
combine both the best known D-F and C-F achievable rates and include them as
special cases.
|
1209.4893
|
On the Sensitivity of Shape Fitting Problems
|
cs.CG cs.LG
|
In this article, we study shape fitting problems, $\epsilon$-coresets, and
total sensitivity. We focus on the $(j,k)$-projective clustering problems,
including $k$-median/$k$-means, $k$-line clustering, $j$-subspace
approximation, and the integer $(j,k)$-projective clustering problem. We derive
upper bounds of total sensitivities for these problems, and obtain
$\epsilon$-coresets using these upper bounds. Using a dimension-reduction type
argument, we are able to greatly simplify earlier results on total sensitivity
for the $k$-median/$k$-means clustering problems, and obtain
positively-weighted $\epsilon$-coresets for several variants of the
$(j,k)$-projective clustering problem. We also extend an earlier result on
$\epsilon$-coresets for the integer $(j,k)$-projective clustering problem in
fixed dimension to the case of high dimension.
|
1209.4895
|
A Neuro-Fuzzy Technique for Implementing the Half-Adder Circuit Using
the CANFIS Model
|
cs.NE
|
A Neural Network, in general, is not considered to be a good solver of
mathematical and binary arithmetic problems. However, networks have been
developed for such problems as the XOR circuit. This paper presents a technique
for the implementation of the Half-adder circuit using the CoActive Neuro-Fuzzy
Inference System (CANFIS) Model and attempts to solve the problem using the
NeuroSolutions 5 Simulator. The paper gives the experimental results along with
the interpretations and possible applications of the technique.
|
1209.4897
|
Structural robustness and transport efficiency of complex networks with
degree correlation
|
physics.soc-ph cs.SI
|
We examine two properties of complex networks, the robustness against
targeted node removal (attack) and the transport efficiency in terms of degree
correlation in node connection by numerical evaluation of exact analytic
expressions. We find that, while the assortative correlation enhances the
structural robustness against attack, the disassortative correlation
significantly improves the transport efficiency of the network under
consideration. This finding might shed light on the reason why some networks in
the real world prefer assortative correlation while others prefer the
disassortative one.
|
1209.4922
|
Monitoring Control Updating Period In Fast Gradient Based NMPC
|
cs.SY cs.SE
|
In this paper, a method is proposed for on-line monitoring of the control
updating period in fast-gradient-based Model Predictive Control (MPC) schemes.
Such schemes are currently under intense investigation as a way to accommodate
real-time requirements when dealing with systems showing fast dynamics. The
method needs cheap computations that use the algorithm on-line behavior in
order to recover the optimal updating period in terms of cost function
decrease. A simple example of a constrained triple integrator is used to
illustrate the proposed method and to assess its efficiency.
|
1209.4950
|
Social Dynamics of Science
|
physics.soc-ph cs.DL cs.SI
|
The birth and decline of disciplines are critical to science and society.
However, no quantitative model to date allows us to validate competing theories
of whether the emergence of scientific disciplines drives or follows the
formation of social communities of scholars. Here we propose an agent-based
model based on a \emph{social dynamics of science,} in which the evolution of
disciplines is guided mainly by the social interactions among scientists. We
find that such a social theory can account for a number of stylized facts about
the relationships between disciplines, authors, and publications. These results
provide strong quantitative support for the key role of social interactions in
shaping the dynamics of science. A "science of science" must gauge the role of
exogenous events, such as scientific discoveries and technological advances,
against this purely social baseline.
|
1209.4951
|
An efficient model-free estimation of multiclass conditional probability
|
stat.ML cs.LG stat.ME
|
Conventional multiclass conditional probability estimation methods, such as
Fisher's discriminant analysis and logistic regression, often require
restrictive distributional model assumptions. In this paper, a model-free
estimation method is proposed to estimate multiclass conditional probability
through a series of conditional quantile regression functions. Specifically,
the conditional class probability is formulated as the difference of corresponding
cumulative distribution functions, where the cumulative distribution functions
can be converted from the estimated conditional quantile regression functions.
The proposed estimation method is also efficient as its computation cost does
not increase exponentially with the number of classes. The theoretical and
numerical studies demonstrate that the proposed estimation method is highly
competitive against the existing competitors, especially when the number of
classes is relatively large.
|
1209.4965
|
Structure theorem of square complex orthogonal design
|
cs.IT math.IT
|
A square COD (complex orthogonal design) of size $[n, n, k]$ is an $n \times
n$ matrix $\mathcal{O}_z$, where each entry is a complex linear combination of
$z_i$ and their conjugations $z_i^*$, $i=1,\ldots, k$, such that
$\mathcal{O}_z^H \mathcal{O}_z = (|z_1|^2 + \ldots + |z_k|^2)I_n$. Closely
following the work of Hottinen and Tirkkonen, which proved an upper bound on
$k/n$ by making a crucial connection between square CODs and group
representations, we prove the structure theorem of square CODs.
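A standard example (not stated in the abstract) is the Alamouti code, a $[2, 2, 2]$ square COD attaining rate $k/n = 1$:

```latex
\mathcal{O}_z =
\begin{pmatrix}
  z_1   & z_2   \\
 -z_2^* & z_1^*
\end{pmatrix},
\qquad
\mathcal{O}_z^H \mathcal{O}_z = \left(|z_1|^2 + |z_2|^2\right) I_2 .
```

The orthogonality is easily checked: the diagonal entries of $\mathcal{O}_z^H \mathcal{O}_z$ are $|z_1|^2 + |z_2|^2$, and the off-diagonal entries cancel as $z_1^* z_2 - z_2 z_1^* = 0$.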
|
1209.4970
|
Kick synchronization versus diffusive synchronization
|
cs.SY math.DS nlin.AO
|
The paper provides an introductory discussion about two fundamental models of
oscillator synchronization: the (continuous-time) diffusive model, that
dominates the mathematical literature on synchronization, and the (hybrid) kick
model, that accounts for most popular examples of synchronization, but for
which only few theoretical results exist. The paper stresses fundamental
differences between the two models, such as the different contraction measures
underlying the analysis, as well as important analogies that can be drawn in
the limit of weak coupling.
|
1209.4975
|
Parametric matroid of rough set
|
cs.AI cs.DM
|
Rough set theory is mainly concerned with the approximations of objects
through an equivalence relation on a universe. Matroid theory is a combinatorial generalization
of linear independence in vector spaces. In this paper, we define a parametric
set family, with any subset of a universe as its parameter, to connect rough
sets and matroids. On the one hand, for a universe and an equivalence relation
on the universe, a parametric set family is defined through the lower
approximation operator. This parametric set family is proved to satisfy the
independent set axioms of matroids, and therefore generates a matroid, called
a parametric matroid of the rough set. Three equivalent representations of the
parametric set family are obtained. Moreover, the parametric matroid of the
rough set is proved to be the direct sum of a partition-circuit matroid and a
free matroid. On the other hand, since partition-circuit matroids were well
studied through the lower approximation number, we use this number to investigate the
parametric matroid of the rough set. Several characteristics of the parametric
matroid of the rough set, such as independent sets, bases, circuits, the rank
function and the closure operator, are expressed by the lower approximation
number.
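The lower approximation operator that drives the construction above can be sketched in a few lines; the universe partition and target set below are illustrative:

```python
# partition of the universe induced by an equivalence relation (illustrative)
blocks = [{1, 2}, {3}, {4, 5, 6}]
X = {1, 2, 3, 5}

# lower approximation: union of equivalence classes entirely contained in X
lower = set().union(*[b for b in blocks if b <= X])

# upper approximation: union of equivalence classes that intersect X
upper = set().union(*[b for b in blocks if b & X])

print(lower, upper)  # lower <= X <= upper always holds
```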
|
1209.4976
|
Matroidal structure of rough sets based on serial and transitive
relations
|
cs.AI
|
The theory of rough sets is concerned with the lower and upper approximations
of objects through a binary relation on a universe. It has been applied to
machine learning, knowledge discovery and data mining. The theory of matroids
is a generalization of linear independence in vector spaces. It has been used
in combinatorial optimization and algorithm design. In order to take advantage
of both rough sets and matroids, in this paper we propose a matroidal structure
of rough sets based on a serial and transitive relation on a universe. We
define the family of all minimal neighborhoods of a relation on a universe, and
prove that it satisfies the circuit axioms of matroids when the relation is serial and
transitive. In order to further study this matroidal structure, we investigate
the inverse of this construction: inducing a relation by a matroid. The
relationships between the upper approximation operators of rough sets based on
relations and the closure operators of matroids in the above two constructions
are studied. Moreover, we investigate the connections between the above two
constructions.
|
1209.4978
|
Covering matroid
|
cs.AI
|
In this paper, we propose a new type of matroids, namely covering matroids,
and investigate the connections with the second type of covering-based rough
sets and some existing special matroids. Firstly, as an extension of
partitions, coverings are more natural combinatorial objects and can sometimes
be more efficient for dealing with problems in the real world. By extending
partitions to coverings, we propose a new type of matroids called covering
matroids and prove them to be an extension of partition matroids. Secondly,
since some researchers have successfully applied partition matroids to
classical rough sets, we study the relationships between covering matroids and
covering-based rough sets which are an extension of classical rough sets.
Thirdly, in matroid theory, there are many special matroids, such as
transversal matroids, partition matroids, 2-circuit matroids and
partition-circuit matroids. The relationships among several special matroids
and covering matroids are studied.
|
1209.4992
|
Discontinuous Galerkin method for Navier-Stokes equations using kinetic
flux vector splitting
|
cs.NA cs.CE math.NA
|
Kinetic schemes for compressible flow of gases are constructed by exploiting
the connection between Boltzmann equation and the Navier-Stokes equations. This
connection allows us to construct a flux splitting for the Navier-Stokes
equations based on the direction of molecular motion from which a numerical
flux can be obtained. The naive use of such a numerical flux function in a
discontinuous Galerkin (DG) discretization leads to an unstable scheme in the
viscous-dominated case. Stable schemes are constructed by adding additional
terms either in a symmetric or non-symmetric manner which are motivated by the
DG schemes for elliptic equations. The novelty of the present scheme is the use
of kinetic fluxes to construct the stabilization terms. In the symmetric case,
interior penalty terms have to be added for stability and the resulting schemes
give optimal convergence rates in numerical experiments. The non-symmetric
schemes lead to a cell energy/entropy inequality but exhibit sub-optimal
convergence rates. These properties are studied by applying the schemes to a
scalar convection-diffusion equation and the 1-D compressible Navier-Stokes
equations. In the case of Navier-Stokes equations, entropy variables are used
to construct stable schemes.
|
1209.4994
|
Kinetic energy preserving and entropy stable finite volume schemes for
compressible Euler and Navier-Stokes equations
|
cs.NA cs.CE math.NA
|
Centered numerical fluxes can be constructed for compressible Euler equations
which preserve kinetic energy in the semi-discrete finite volume scheme. The
essential feature is that the momentum flux should be of the form
$f^m_{j+1/2} = \tilde{p}_{j+1/2} + \bar{u}_{j+1/2} f^\rho_{j+1/2}$, where
$\bar{u}_{j+1/2} = (u_j + u_{j+1})/2$ and $\tilde{p}_{j+1/2}, f^\rho_{j+1/2}$
are {\em any} consistent approximations to the
pressure and the mass flux. This scheme thus leaves most terms in the numerical
flux unspecified and various authors have used simple averaging. Here we
enforce approximate or exact entropy consistency which leads to a unique choice
of all the terms in the numerical fluxes. As a consequence, a novel entropy
conservative flux that also preserves kinetic energy for the semi-discrete
finite volume scheme is obtained. These fluxes are centered and some
dissipation has to be added if shocks are present or if the mesh is coarse. We
construct scalar artificial dissipation terms which are kinetic energy stable
and satisfy an approximate/exact entropy condition. Secondly, we use an
entropy-variable based matrix dissipation flux which leads to kinetic energy and
entropy stable schemes. These schemes are shown to be free of entropy violating
solutions unlike the original Roe scheme. For hypersonic flows a blended scheme
is proposed which gives carbuncle-free solutions for blunt-body flows.
Numerical results for Euler and Navier-Stokes equations are presented to
demonstrate the performance of the different schemes.
|
1209.5019
|
A Bayesian Nonparametric Approach to Image Super-resolution
|
cs.LG stat.ML
|
Super-resolution methods form high-resolution images from low-resolution
images. In this paper, we develop a new Bayesian nonparametric model for
super-resolution. Our method uses a beta-Bernoulli process to learn a set of
recurring visual patterns, called dictionary elements, from the data. Because
it is nonparametric, the number of elements found is also determined from the
data. We test the results on both benchmark and natural images, comparing with
several other models from the research literature. We perform large-scale human
evaluation experiments to assess the visual quality of the results. In a first
implementation, we use Gibbs sampling to approximate the posterior. However,
this algorithm is not feasible for large-scale data. To circumvent this, we
then develop an online variational Bayes (VB) algorithm. This algorithm finds
high quality dictionaries in a fraction of the time needed by the Gibbs
sampler.
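  The nonparametric behaviour comes from the beta-Bernoulli prior over which
dictionary elements each patch uses. A minimal sketch of the usual finite
(truncated) approximation to such a prior; the function name and the
truncation level K are illustrative, not taken from the paper:

```python
import random

def sample_beta_bernoulli(n_items, K=64, a=1.0, b=1.0, seed=None):
    """Draw dictionary-usage indicators from a truncated beta-Bernoulli prior.

    pi_k ~ Beta(a/K, b*(K-1)/K)  -- per-element usage probability
    z_nk ~ Bernoulli(pi_k)       -- does item n use dictionary element k?

    As K grows, only a data-determined number of columns of z stay
    active, which is how such models infer the dictionary size.
    """
    rng = random.Random(seed)
    pi = [rng.betavariate(a / K, b * (K - 1) / K) for _ in range(K)]
    z = [[1 if rng.random() < pi[k] else 0 for k in range(K)]
         for _ in range(n_items)]
    return pi, z
```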
|
1209.5037
|
Delay Analysis of Max-Weight Queue Algorithm for Time-varying Wireless
Adhoc Networks - Control Theoretical Approach
|
cs.SY cs.IT math.IT
|
The max-weight queue (MWQ) control policy is a widely used cross-layer control
policy that achieves queue stability and reasonable delay performance. In
most of the existing literature, it is assumed that optimal MWQ policy can be
obtained instantaneously at every time slot. However, this assumption may be
unrealistic in time varying wireless systems, especially when there is no
closed-form MWQ solution and iterative algorithms have to be applied to obtain
the optimal solution. This paper investigates the convergence behavior and the
queue delay performance of the conventional MWQ iterations in which the channel
state information (CSI) and queue state information (QSI) are changing in a
similar timescale as the algorithm iterations. Our results are established by
studying the stochastic stability of an equivalent virtual stochastic dynamic
system (VSDS), and an extended Foster-Lyapunov criterion is applied for the
stability analysis. We derive a closed-form delay bound of the wireless network
in terms of the CSI fading rate and the sensitivity of MWQ policy over CSI and
QSI. Based on the equivalent VSDS, we propose a novel MWQ iterative algorithm
with compensation to improve the tracking performance. We demonstrate that
under some mild conditions, the proposed modified MWQ algorithm converges to
the optimal MWQ control despite the time-varying CSI and QSI.
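  The idealized policy whose tracking behaviour the paper analyses is the
per-slot max-weight argmax; a minimal sketch of that decision rule (the
paper's point is precisely that this argmax may not be computable within one
slot under time-varying CSI/QSI):

```python
def max_weight_schedule(queues, rates):
    """Pick the link maximizing q_i * r_i (the max-weight rule).

    queues : current queue lengths (QSI)
    rates  : instantaneous service rates under the current channel (CSI)

    Returns the index of the scheduled link. This shows only the
    idealized one-shot policy, not the iterative tracking algorithm.
    """
    weights = [q * r for q, r in zip(queues, rates)]
    return max(range(len(weights)), key=weights.__getitem__)
```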
|
1209.5038
|
Fast Randomized Model Generation for Shapelet-Based Time Series
Classification
|
cs.LG
|
Time series classification is a field which has drawn much attention over the
past decade. A new approach for classification of time series uses
classification trees based on shapelets. A shapelet is a subsequence extracted
from one of the time series in the dataset. A disadvantage of this approach is
the time required for building the shapelet-based classification tree. The
search for the best shapelet requires examining all subsequences of all lengths
from all time series in the training set.
A key goal of this work was to find an evaluation order of the shapelets
space which enables fast convergence to an accurate model. The comparative
analysis we conducted clearly indicates that a random evaluation order yields
the best results. Our empirical analysis of the distribution of high-quality
shapelets within the shapelets space provides insights into why randomized
shapelet sampling is superior to alternative evaluation orders.
We present an algorithm for randomized model generation for shapelet-based
classification that converges extremely quickly to a model with surprisingly
high accuracy after evaluating only an exceedingly small fraction of the
shapelets space.
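  The core loop described above can be sketched as follows: draw random
subsequences as candidate shapelets, score each by the information gain of the
best distance-threshold split, and keep the best candidate seen. This is a
generic illustration of randomized shapelet sampling, not the paper's exact
algorithm:

```python
import math
import random

def subsequence_dist(series, shapelet):
    """Minimum Euclidean distance between a shapelet and any
    same-length window of the series."""
    L = len(shapelet)
    return min(
        math.sqrt(sum((series[i + j] - shapelet[j]) ** 2 for j in range(L)))
        for i in range(len(series) - L + 1)
    )

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(dists, labels, threshold):
    left = [y for d, y in zip(dists, labels) if d <= threshold]
    right = [y for d, y in zip(dists, labels) if d > threshold]
    if not left or not right:
        return 0.0
    n = len(labels)
    return entropy(labels) - (len(left) / n) * entropy(left) \
                           - (len(right) / n) * entropy(right)

def random_shapelet_search(dataset, labels, n_samples=100, seed=None):
    """Evaluate a random sample of candidate shapelets; return the best
    (shapelet, threshold, gain) found."""
    rng = random.Random(seed)
    best = (None, None, -1.0)
    for _ in range(n_samples):
        series = rng.choice(dataset)
        length = rng.randint(2, len(series))
        start = rng.randint(0, len(series) - length)
        cand = series[start:start + length]
        dists = [subsequence_dist(s, cand) for s in dataset]
        for t in sorted(set(dists)):
            g = info_gain(dists, labels, t)
            if g > best[2]:
                best = (cand, t, g)
    return best
```

Because each candidate is evaluated independently, the sampling budget
`n_samples` directly trades model-build time against accuracy, which is the
convergence behaviour the work studies.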
|
1209.5039
|
Creation of Digital Test Form for Prepress Department
|
cs.CV
|
The main problems in colour management in the prepress department are the lack
of available literature on colour management and the knowledge gap between the
prepress and press departments. A digital test form has therefore been created
in Adobe Photoshop to analyse the ICC profile and to create a new profile; the
analysed data are used to study the various grey scales of RGB and CMYK
images, which aids the conversion of images from RGB to CMYK in the prepress
department.
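  For reference, the RGB-to-CMYK conversion in its simplest device-independent
form looks like the sketch below; this naive formula ignores ICC profiles
entirely, which is exactly why profiled conversion via a test form is needed
in practice:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) conversion with maximum
    grey-component replacement. Real prepress workflows use ICC
    profiles instead of this formula."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r, g, b)
    if k == 1.0:  # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k
```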
|
1209.5040
|
Image Classification and Optimized Image Reproduction
|
cs.CV
|
By taking into account the properties and limitations of the human visual
system, images can be more efficiently compressed, colors more accurately
reproduced, and prints better rendered. To demonstrate these advantages, new
adapted color charts have been created in this paper based on technical and
visual image category analysis. A number of tests have been carried out using
extreme images whose key information lies strictly in dark and light areas. It
was shown that image categorization using the adapted color charts improves
the analysis of relevant image information with regard to both the image
gradation and the detail reproduction. The images with key information in
high-key areas were also test-printed using the adapted color charts.
|
1209.5041
|
An Implementation of Computer Graphics as Prepress Image Enhancement
Process
|
cs.CV
|
The production of a printed product involves three stages: prepress, the
printing process (press) itself, and finishing (post-press). Various types of
equipment (printers, scanners) and images of varying quality are present in
the market, and these give different color rendering during each reproduction.
A color key tool has therefore been developed with a Color Management Scheme
(CMS) in mind, so that no unwanted color shift occurs during reproduction
irrespective of the device used; the resolution level has also been improved.
|