| id (string, 9–16 chars) | title (string, 4–278 chars) | categories (string, 5–104 chars) | abstract (string, 6–4.09k chars) |
|---|---|---|---|
1203.1995
|
Classify Participants in Online Communities
|
cs.SI
|
As online communities become increasingly popular, researchers have tried to
examine participation activities in online communities as well as how to
sustain them. However, relatively few studies have tried to understand what
kinds of participants constitute online communities. In this study, we try to
contribute to online community research by developing a "common language" to
classify different participants in online communities. Specifically, we argue
that the previous way of classifying participants is neither sufficient nor
accurate, and we propose a continuum that classifies participants based on the
overall trend of their posting activities. To further online community
research, we also propose potential directions for future studies.
|
1203.2000
|
Overview of streaming-data algorithms
|
cs.DB cs.IR
|
Due to recent advances in data collection techniques, massive amounts of data
are being collected at an extremely fast pace. Also, these data are potentially
unbounded. Boundless streams of data collected from sensors, equipment, and
other data sources are referred to as data streams. Various data mining tasks
can be performed on data streams in search of interesting patterns. This paper
studies a particular data mining task, clustering, which can be used as the
first step in many knowledge discovery processes. By grouping data streams into
homogeneous clusters, data miners can learn about data characteristics which
can then be developed into classification models for new data or predictive
models for unknown events. Recent research addresses the problem of data-stream
mining to deal with applications that require processing huge amounts of data
such as sensor data analysis and financial applications. For such analysis,
single-pass algorithms that consume a small amount of memory are critical.
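The single-pass, small-memory requirement can be illustrated with a minimal sequential k-means-style sketch; this is an illustrative toy, not any specific published stream-clustering algorithm.

```python
# Single-pass clustering sketch in the spirit of streaming algorithms:
# each point is read once, assigned to the nearest centroid, and the
# centroid is updated incrementally. Memory is O(k), independent of the
# stream length. (Illustrative only; not a specific published algorithm.)

def stream_cluster(stream, centroids):
    counts = [0] * len(centroids)
    for x in stream:
        # Nearest centroid (1-D points keep the sketch short).
        j = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
        counts[j] += 1
        # Incremental mean update: c += (x - c) / n; no data point is stored.
        centroids[j] += (x - centroids[j]) / counts[j]
    return centroids, counts

centroids, counts = stream_cluster([1.0, 1.2, 0.8, 10.0, 9.8, 10.2], [0.0, 9.0])
```

Real stream-clustering algorithms add cluster merging, time decay, and outlier handling on top of this kind of incremental update.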
|
1203.2002
|
Graph partitioning advance clustering technique
|
cs.LG cs.DB
|
Clustering is a common technique for statistical data analysis. It is the
process of grouping data into classes or clusters so that objects within a
cluster have high similarity to one another but are very dissimilar to objects
in other clusters. Dissimilarities are assessed based on the attribute values
describing the objects; often, distance measures are used. Clustering is an
unsupervised learning technique in which interesting patterns and structures
can be found directly from very large data sets with little or no background
knowledge. This paper also considers the partitioning of m-dimensional lattice
graphs using Fiedler's approach, which requires determining the eigenvector
belonging to the second-smallest eigenvalue of the Laplacian, combined with
the k-means partitioning algorithm.
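Fiedler's approach can be sketched in a few lines of NumPy on a toy 4-node path graph standing in for a lattice; the bipartition comes from the sign pattern of the eigenvector belonging to the second-smallest Laplacian eigenvalue.

```python
import numpy as np

# Spectral bipartition via Fiedler's approach on a toy 4-node path graph
# 0-1-2-3: build the Laplacian, take the eigenvector belonging to the
# second-smallest eigenvalue, and split the nodes by its sign.

A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian L = D - A

vals, vecs = np.linalg.eigh(L)      # eigh returns eigenvalues in ascending order
fiedler = vecs[:, 1]                # Fiedler vector
part = fiedler >= 0                 # sign split: one side True, the other False
```

On this path graph the split separates {0, 1} from {2, 3}, the minimum cut; for k-way partitioning one runs k-means on several low eigenvectors instead of a sign split.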
|
1203.2021
|
A new supervised non-linear mapping
|
cs.IR
|
Supervised mapping methods project multi-dimensional labeled data onto a
2-dimensional space attempting to preserve both data similarities and topology
of classes. Supervised mappings are expected to help the user to understand the
underlying original class structure and to classify new data visually. Several
methods have been designed to achieve supervised mapping, but many of them
modify original distances prior to the mapping so that original data
similarities are corrupted and even overlapping classes tend to be separated
onto the map ignoring their original topology. We propose ClassiMap, an
alternative method for supervised mapping. Mappings come with distortions which
can be split between tears (close points mapped far apart) and false
neighborhoods (points far apart mapped as neighbors). Some mapping methods
favor the former while others favor the latter. ClassiMap switches between such
mapping methods so that tears tend to appear between classes and false
neighborhoods within classes, better preserving classes' topology. We also
propose two new objective criteria instead of the usual subjective visual
inspection to perform fair comparisons of supervised mapping methods. ClassiMap
appears to be the best supervised mapping method according to these criteria in
our experiments on synthetic and real datasets.
|
1203.2024
|
A Greedy Link Scheduler for Wireless Networks with Fading Channels
|
cs.NI cs.IT math.IT
|
We consider the problem of link scheduling for wireless networks with fading
channels, where the link rates are varying with time. Due to the high
computational complexity of the throughput-optimal scheduler, we provide a
low-complexity greedy link scheduler, GFS, with provable performance guarantees. We
show that the performance of our greedy scheduler can be analyzed using the
Local Pooling Factor (LPF) of a network graph, which has been previously used
to characterize the stability of the Greedy Maximal Scheduling (GMS) policy for
networks with static channels. We conjecture that the performance of GFS is a
lower bound on the performance of GMS for wireless networks with fading
channels.
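The greedy maximal-weight idea behind the GMS/GFS family can be sketched as follows; the link names, weights, and conflict sets are invented for illustration, and this is not the paper's GFS algorithm.

```python
# Greedy link-scheduling sketch: in each slot, repeatedly activate the link
# with the largest queue-times-rate weight and remove links that interfere
# with it, until no useful link remains. (Illustrative toy instance.)

def greedy_schedule(queues, rates, conflicts):
    """conflicts[l] = set of links that cannot be active together with l."""
    remaining = set(queues)
    schedule = []
    while remaining:
        best = max(remaining, key=lambda l: queues[l] * rates[l])
        if queues[best] * rates[best] == 0:
            break                   # nothing useful left to schedule
        schedule.append(best)
        remaining -= conflicts[best] | {best}
    return schedule

queues = {"a": 5, "b": 3, "c": 4}
rates = {"a": 1.0, "b": 1.0, "c": 1.0}
conflicts = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
sched = greedy_schedule(queues, rates, conflicts)
```

Here links a and c are scheduled together because only b conflicts with both of them.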
|
1203.2031
|
Design of modular wireless sensor
|
cs.SE cs.NI cs.SY math.OC
|
The paper addresses a combinatorial approach to the design of a modular
wireless sensor: composing the sensor element from its component alternatives
and aggregating the obtained solutions into a resultant aggregated solution. A
hierarchical model is used for the wireless sensor element. The solving process
consists of three stages: (i) multicriteria ranking of design alternatives for
system components/parts, (ii) composing the selected design alternatives into
composite solution(s) while taking into account ordinal quality of the design
alternatives above and their compatibility (this stage is based on Hierarchical
Morphological Multicriteria Design - HMMD), and (iii) aggregation of the
obtained composite solutions into a resultant aggregated solution(s). A
numerical example describes the problem structuring and solving processes for
a modular alarm wireless sensor element.
|
1203.2109
|
Network Cosmology
|
gr-qc cond-mat.dis-nn cs.NI cs.SI physics.soc-ph
|
Prediction and control of the dynamics of complex networks is a central
problem in network science. Structural and dynamical similarities of different
real networks suggest that some universal laws might accurately describe the
dynamics of these networks, albeit the nature and common origin of such laws
remain elusive. Here we show that the causal network representing the
large-scale structure of spacetime in our accelerating universe is a power-law
graph with strong clustering, similar to many complex networks such as the
Internet, social, or biological networks. We prove that this structural
similarity is a consequence of the asymptotic equivalence between the
large-scale growth dynamics of complex networks and causal networks. This
equivalence suggests that unexpectedly similar laws govern the dynamics of
complex networks and spacetime in the universe, with implications for network
science and cosmology.
|
1203.2147
|
A Hybrid Image Cryptosystem Based On OMFLIP Permutation Cipher
|
cs.MM cs.IT math.IT
|
The protection of confidential image data from unauthorized access is an
important area of research in network communication. This paper presents a
high-level security encryption scheme for grayscale images. The grayscale
image is first decomposed into binary images using bit-scale decomposition.
Each binary image is then compressed by selecting a good scanning path that
minimizes the total number of bits needed to encode the bit sequence along the
scanning path using two-dimensional run encoding. The compressed bit string is
then scrambled iteratively using a pseudo-random number generator and finally
encrypted using a bit level permutation OMFLIP. The performance is tested,
illustrated and discussed.
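The first two steps, bit-scale decomposition and run encoding along a scanning path, can be sketched as follows, assuming a simple row-major scan; the paper's scheme additionally searches for the scanning path that minimizes the encoded length.

```python
# Sketch of bit-scale decomposition into binary images followed by
# run encoding of one bit-plane along a row-major scanning path.
# (Illustrative; not the paper's exact scheme.)

def bit_planes(image, bits=8):
    """Split a 2-D list of pixel values into `bits` binary images."""
    return [[[(pix >> b) & 1 for pix in row] for row in image]
            for b in range(bits)]

def run_length_encode(plane):
    """Encode the flattened bit sequence as (bit, run-length) pairs."""
    flat = [b for row in plane for b in row]
    runs, prev, count = [], flat[0], 1
    for b in flat[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

img = [[0, 255], [255, 0]]
planes = bit_planes(img)
rle = run_length_encode(planes[7])  # most significant bit-plane
```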
|
1203.2169
|
Blind Carrier Phase Recovery for General 2{\pi}/M-rotationally Symmetric
Constellations
|
cs.IT math.IT
|
This paper introduces a novel blind carrier phase recovery estimator for
general 2{\pi}/M-rotationally symmetric constellations. The estimation method
is a generalization of the non-data-aided (NDA) nonlinear Phase Metric Method
(PMM) estimator already designed for general quadrature amplitude
constellations. This unbiased estimator is viewed here as a fourth-order PMM,
then generalized to Mth order (Mth PMM) in such a manner that it covers
general 2{\pi}/M-rotationally symmetric constellations such as PAM, QAM, and PSK.
Simulation results demonstrate the good performance of this Mth PMM estimation
algorithm against competitive blind phase estimators already published for
various modulation systems of practical interest.
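For intuition about how a 2π/M rotational symmetry is exploited, here is the classical Mth-power blind estimator, a simpler relative of the PMM family (not the Mth PMM itself): raising each received symbol to the Mth power strips M-PSK modulation and leaves M times the carrier phase offset.

```python
import cmath

# Classical M-th power blind phase estimator (illustrative relative of PMM,
# not the paper's method): for a constellation invariant under 2*pi/M
# rotations, r**M removes the data modulation, so the averaged M-th power
# carries M times the carrier phase offset.

def mth_power_phase(received, M):
    s = sum(r ** M for r in received)
    return cmath.phase(s) / M       # unambiguous only within 2*pi/M

# QPSK symbols (M = 4) at multiples of pi/2, rotated by a 0.1 rad offset.
offset = 0.1
qpsk = [cmath.exp(1j * (cmath.pi / 2 * k + offset)) for k in range(4)]
est = mth_power_phase(qpsk, 4)
```

In noise, the sum is taken over many received symbols, and the residual 2π/M ambiguity is resolved by differential encoding or pilot symbols.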
|
1203.2177
|
Regret Bounds for Deterministic Gaussian Process Bandits
|
cs.LG stat.ML
|
This paper analyses the problem of Gaussian process (GP) bandits with
deterministic observations. The analysis uses a branch and bound algorithm that
is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with
Gaussian observation noise, with variance strictly greater than zero, (Srinivas
et al., 2010) proved that the regret vanishes at the approximate rate of
$O(\frac{1}{\sqrt{t}})$, where t is the number of observations. To complement
their result, we attack the deterministic case and attain a much faster
exponential convergence rate. Under some regularity assumptions, we show that
the regret decreases asymptotically according to $O(e^{-\frac{\tau t}{(\ln
t)^{d/4}}})$ with high probability. Here, d is the dimension of the search
space and $\tau$ is a constant that depends on the behaviour of the objective
function near its global maximum.
|
1203.2200
|
Role-Dynamics: Fast Mining of Large Dynamic Networks
|
cs.SI cs.AI cs.LG stat.ML
|
To understand the structural dynamics of a large-scale social, biological or
technological network, it may be useful to discover behavioral roles
representing the main connectivity patterns present over time. In this paper,
we propose a scalable non-parametric approach to automatically learn the
structural dynamics of the network and individual nodes. Roles may represent
structural or behavioral patterns such as the center of a star, peripheral
nodes, or bridge nodes that connect different communities. Our novel approach
learns the appropriate structural role dynamics for any arbitrary network and
tracks the changes over time. In particular, we uncover the specific global
network dynamics and the local node dynamics of a technological, communication,
and social network. We identify interesting node and network patterns such as
stationary and non-stationary roles, spikes/steps in role-memberships (perhaps
indicating anomalies), increasing/decreasing role trends, among many others.
Our results indicate that the nodes in each of these networks have distinct
connectivity patterns that are non-stationary and evolve considerably over
time. Overall, the experiments demonstrate the effectiveness of our approach
for fast mining and tracking of the dynamics in large networks. Furthermore,
the dynamic structural representation provides a basis for building more
sophisticated models and tools that are fast for exploring large dynamic
networks.
|
1203.2202
|
Exact-MSR Codes for Distributed Storage with Low Repair Complexity
|
cs.IT math.IT
|
In this paper, we propose two new constructions of exact-repair minimum
storage regenerating (exact-MSR) codes. For both constructions, the encoded
symbols are obtained by treating the message vector over GF(q) as a linearized
polynomial and evaluating it over an extension field GF(q^m). For our exact-MSR
codes, data repair does not need matrix inversion, and can be implemented by
additions and multiplications over GF(q) as well as cyclic shifts when a
normal basis is used. The two constructions assume a base field of GF(q) (q>2)
and GF(2), respectively. In contrast to existing constructions of exact-MSR
codes, the former construction works for arbitrary code parameters, provided
that q is large enough. To the best of our knowledge, this is the first
construction of exact-MSR codes with arbitrary code parameters. In comparison
to existing exact-MSR codes, while data reconstruction of our exact-MSR codes has a
higher complexity, the complexity of data repair is lower. Thus, they are
attractive for applications that need a small number of data reconstructions
along with a large number of data repairs.
|
1203.2210
|
Fixed-Rank Representation for Unsupervised Visual Learning
|
cs.CV cs.NA
|
Subspace clustering and feature extraction are two of the most commonly used
unsupervised learning techniques in computer vision and pattern recognition.
State-of-the-art techniques for subspace clustering make use of recent advances
in sparsity and rank minimization. However, existing techniques are
computationally expensive and may result in degenerate solutions that degrade
clustering performance in the case of insufficient data sampling. To partially
solve these problems, and inspired by existing work on matrix factorization,
this paper proposes fixed-rank representation (FRR) as a unified framework for
unsupervised visual learning. FRR is able to reveal the structure of multiple
subspaces in closed-form when the data is noiseless. Furthermore, we prove that
under some suitable conditions, even with insufficient observations, FRR can
still reveal the true subspace memberships. To achieve robustness to outliers
and noise, a sparse regularizer is introduced into the FRR framework. Beyond
subspace clustering, FRR can be used for unsupervised feature extraction. As a
non-trivial byproduct, a fast numerical solver is developed for FRR.
Experimental results on both synthetic data and real applications validate our
theoretical analysis and demonstrate the benefits of FRR for unsupervised
visual learning.
|
1203.2213
|
On the Mixing Time of Markov Chain Monte Carlo for Integer Least-Square
Problems
|
cs.IT math.IT
|
In this paper, we study the mixing time of Markov Chain Monte Carlo (MCMC)
for integer least-square (LS) optimization problems. It is found that the
mixing time of MCMC for integer LS problems depends on the structure of the
underlying lattice. More specifically, the mixing time of MCMC is closely
related to whether there is a local minimum in the lattice structure. For some
lattices, the mixing time of the Markov chain is independent of the
signal-to-noise ratio (SNR) and grows polynomially in the problem dimension,
while for other lattices the mixing time grows unboundedly as the SNR grows. Both
theoretical and empirical results suggest that to ensure fast mixing, the
temperature for MCMC should often grow positively as the SNR increases. We
also derive the probability that there exist local minima in an integer
least-square problem, which can be as high as
$1/3-\frac{1}{\sqrt{5}}+\frac{2\arctan(\sqrt{5/3})}{\sqrt{5}\pi}$.
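A minimal Metropolis sampler for a one-dimensional integer least-squares problem illustrates the role of the temperature T; the problem instance is made up, and this is a sketch rather than any of the samplers analyzed in the paper.

```python
import math
import random

# Metropolis sketch of MCMC for a one-dimensional integer least-squares
# problem min_x (y - h*x)^2 over the integers, at temperature T. Proposals
# are +/-1 random-walk steps; downhill moves are always accepted, uphill
# moves with probability exp(-increase / T). (Toy instance only.)

def mcmc_integer_ls(y, h, T, steps, seed=0):
    rng = random.Random(seed)
    cost = lambda x: (y - h * x) ** 2
    x, best = 0, 0
    for _ in range(steps):
        prop = x + rng.choice((-1, 1))
        delta = cost(prop) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            x = prop
        if cost(x) < cost(best):
            best = x
    return best

best = mcmc_integer_ls(y=6.9, h=1.0, T=1.0, steps=500)
```

With y = 6.9 and h = 1, the integer minimizer is x = 7; a higher temperature makes it easier to escape local minima of rougher lattices at the cost of slower concentration.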
|
1203.2228
|
A network-based dynamical ranking system for competitive sports
|
physics.soc-ph cs.SI
|
From the viewpoint of networks, a ranking system for players or teams in
sports is equivalent to a centrality measure for sports networks, whereby a
directed link represents the result of a single game. Previously proposed
network-based ranking systems are derived from static networks, i.e.,
aggregation of the results of games over time. However, the score of a player
(or team) fluctuates over time. Defeating a renowned player at their peak
performance is intuitively more rewarding than defeating the same player in
other periods. To account for this factor, we propose a dynamic variant of such
a network-based ranking system and apply it to professional men's tennis data.
We derive a set of linear online update equations for the score of each player.
The proposed ranking system predicts the outcomes of future games with higher
accuracy than its static counterparts.
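For readers unfamiliar with online rating updates, here is the Elo update, a standard linear online score update shown only as a familiar reference point; the paper derives its own network-based update equations, which differ from this.

```python
# Elo-style online rating update (a familiar example of linear online score
# updates, NOT the paper's network-based system): after each game the winner
# gains, and the loser loses, an amount proportional to how surprising the
# result was given the current scores.

def update(scores, winner, loser, k=32.0):
    expected_win = 1.0 / (1.0 + 10 ** ((scores[loser] - scores[winner]) / 400))
    delta = k * (1.0 - expected_win)
    scores[winner] += delta
    scores[loser] -= delta

scores = {"p1": 1500.0, "p2": 1500.0}
update(scores, "p1", "p2")
```

An upset between equally rated players moves both scores by k/2; beating a much weaker player moves them hardly at all.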
|
1203.2245
|
Facticity as the amount of self-descriptive information in a data set
|
cs.IT math.IT
|
Using the theory of Kolmogorov complexity the notion of facticity {\phi}(x)
of a string is defined as the amount of self-descriptive information it
contains. It is proved that (under reasonable assumptions: the existence of an
empty machine and the availability of a faithful index) facticity is definite,
i.e. random strings have facticity 0 and for compressible strings 0 < {\phi}(x)
< 1/2 |x| + O(1). Consequently facticity measures the tension in a data set
between structural and ad-hoc information objectively. For binary strings there
is a so-called facticity threshold that is dependent on their entropy. Strings
with facticity above this threshold have no optimal stochastic model and are
essentially computational. The shape of the facticity-versus-entropy plot
coincides with the well-known sawtooth curves observed in complex systems. The
notion of factic processes is discussed. This approach overcomes problems with
earlier proposals to use two-part code to define the meaningfulness or
usefulness of a data set.
|
1203.2268
|
Friends FTW! Friendship, Collaboration and Competition in Halo: Reach
|
cs.SI cs.CY cs.HC physics.soc-ph
|
How important are friendships in determining success by individuals and teams
in complex collaborative environments? By combining a novel data set containing
the dynamics of millions of ad hoc teams from the popular multiplayer online
first person shooter Halo: Reach with survey data on player demographics, play
style, psychometrics and friendships derived from an anonymous online survey,
we investigate the impact of friendship on collaborative and competitive
performance. In addition to finding significant differences in player behavior
across these variables, we find that friendships exert a strong influence,
leading to both improved individual and team performance--even after
controlling for the overall expertise of the team--and increased pro-social
behaviors. Players also structure their in-game activities around social
opportunities, and as a result hidden friendship ties can be accurately
inferred directly from behavioral time series. Virtual environments that enable
such friendship effects will thus likely see improved collaboration and
competition.
|
1203.2293
|
Categories of Emotion names in Web retrieved texts
|
cs.CL cs.IR
|
The categorization of emotion names, i.e., the grouping of emotion words that
have similar emotional connotations together, is a key tool of Social
Psychology used to explore people's knowledge about emotions. Without
exception, the studies following that research line were based on the gauging
of the perceived similarity between emotion names by the participants of the
experiments. Here we propose and examine a new approach to study the categories
of emotion names - the similarities between target emotion names are obtained
by comparing the contexts in which they appear in texts retrieved from the
World Wide Web. This comparison does not account for any explicit semantic
information; it simply counts the number of common words or lexical items used
in the contexts. This procedure allows us to write the entries of the
similarity matrix as dot products in a linear vector space of contexts. The
properties of this matrix were then explored using Multidimensional Scaling
Analysis and Hierarchical Clustering. Our main findings, namely, the underlying
dimension of the emotion space and the categories of emotion names, were
consistent with those based on people's judgments of emotion names
similarities.
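The context-comparison step can be sketched as follows: each target word is represented by counts of the words co-occurring with it, and similarities are (normalized) dot products of these context vectors. The snippets below are invented for illustration.

```python
import math
from collections import Counter

# Context-vector similarity sketch: a target word's vector is the bag of
# words co-occurring with it in retrieved snippets; similarity entries are
# cosine-normalized dot products of these vectors. (Toy snippets.)

def context_vector(snippets, target):
    ctx = Counter()
    for s in snippets:
        words = s.lower().split()
        if target in words:
            ctx.update(w for w in words if w != target)
    return ctx

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = lambda c: math.sqrt(sum(n * n for n in c.values()))
    return dot / (norm(u) * norm(v))

snippets = [
    "she felt joy and happiness today",
    "joy and delight filled the room",
    "happiness and delight filled the room",
    "anger and rage boiled over",
]
sim_close = cosine(context_vector(snippets, "joy"),
                   context_vector(snippets, "happiness"))
sim_far = cosine(context_vector(snippets, "joy"),
                 context_vector(snippets, "anger"))
```

Words with similar emotional connotations share contexts and score high, giving the similarity matrix that Multidimensional Scaling and clustering then analyze.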
|
1203.2297
|
Analog network coding in general SNR regime: Performance of a greedy
scheme
|
cs.IT math.IT
|
The problem of maximum rate achievable with analog network coding for a
unicast communication over a layered relay network with directed links is
considered. A relay node performing analog network coding scales and forwards
the signals received at its input. Recently this problem has been considered
under certain assumptions on per node scaling factor and received SNR.
Previously, we established a result that allows us to characterize the optimal
performance of analog network coding in network scenarios beyond those that can
be analyzed using the approaches based on such assumptions.
The key contribution of this work is a scheme to greedily compute a lower
bound to the optimal rate achievable with analog network coding in the general
layered networks. This scheme allows for exact computation of the optimal
achievable rates in a wider class of layered networks than those that can be
addressed using existing approaches. For the specific case of Gaussian N-relay
diamond network, to the best of our knowledge, the proposed scheme provides the
first exact characterization of the optimal rate achievable with analog network
coding. Further, for general layered networks, our scheme allows us to compute
optimal rates within a constant gap from the cut-set upper bound asymptotically
in the source power.
|
1203.2298
|
Minimum Cost Multicast with Decentralized Sources
|
cs.IT math.IT
|
In this paper we study the multisource multicast problem where every sink in
a given directed acyclic graph is a client and is interested in a common file.
We consider the case where each node can have partial knowledge about the file
as a side information. Assuming that nodes can communicate over the capacity
constrained links of the graph, the goal is for each client to gain access to
the file, while minimizing some linear cost function of number of bits
transmitted in the network. We consider three types of side-information
settings: (ii) side information in the form of linearly correlated packets; and
(iii) the general setting where the side information at the nodes have an
arbitrary (i.i.d.) correlation structure. In this work we 1) provide a
polynomial-time feasibility test, i.e., whether or not all the clients can
recover the file, and 2) provide a polynomial-time algorithm that finds the
optimal rate allocation among the links of the graph, and then determines an
explicit transmission scheme for cases (i) and (ii).
|
1203.2299
|
A Cross-cultural Corpus of Annotated Verbal and Nonverbal Behaviors in
Receptionist Encounters
|
cs.CL cs.RO
|
We present the first annotated corpus of nonverbal behaviors in receptionist
interactions, and the first nonverbal corpus (excluding the original video and
audio data) of service encounters freely available online. Native speakers of
American English and Arabic participated in a naturalistic role play at
reception desks of university buildings in Doha, Qatar and Pittsburgh, USA.
Their manually annotated nonverbal behaviors include gaze direction, hand and
head gestures, torso positions, and facial expressions. We discuss possible
uses of the corpus and envision it to become a useful tool for the human-robot
interaction community.
|
1203.2315
|
Modeling multistage decision processes with Reflexive Game Theory
|
cs.MA cs.AI
|
This paper introduces the application of Reflexive Game Theory to multistage
decision-making processes. The idea is that each decision-making session has
certain parameters, such as when the session takes place, who the group
members making the decision are, and how the group members influence each
other. This study illustrates a consecutive, or sequential, decision-making
process consisting of two stages. During stage 1, decisions are made about the
parameters of the ultimate decision making; stage 2 is the implementation of
the ultimate decision making itself. Since stage 1 can comprise multiple
decision sessions, it can take more than two sessions to reach the ultimate
(final) decision. The overall process of ultimate decision making therefore
becomes a multistage decision-making process consisting of consecutive
decision-making sessions.
|
1203.2316
|
Near-optimal quantization and linear network coding for relay networks
|
cs.IT cs.NI math.IT
|
We introduce a discrete network corresponding to any Gaussian wireless
network that is obtained by simply quantizing the received signals and
restricting the transmitted signals to a finite precision. Since signals in the
discrete network are obtained from those of a Gaussian network, the Gaussian
network can be operated on the quantization-based digital interface defined by
the discrete network. We prove that this digital interface is near-optimal for
Gaussian relay networks and the capacities of the Gaussian and the discrete
networks are within a bounded gap of O(M^2) bits, where M is the number of
nodes.
We prove that any near-optimal coding strategy for the discrete network can
be naturally transformed into a near-optimal coding strategy for the Gaussian
network merely by quantization. We exploit this by designing a linear coding
strategy for the case of layered discrete relay networks. The linear coding
strategy is near-optimal for Gaussian and discrete networks and achieves rates
within O(M^2) bits of the capacity, independent of channel gains or SNR. The
linear code is robust and the relays need not know the channel gains. The
transmit and receive signals at all relays are simply quantized to binary
tuples of the same length $n$. The linear network code requires all the relay
nodes to collect the received binary tuples into a long binary vector and apply
a linear transformation on the long vector. The resulting binary vector is
split into smaller binary tuples for transmission by the relays. The
quantization requirements of the linear network code are completely defined by
the parameter $n$, which also determines the resolution of the
analog-to-digital and digital-to-analog converters for operating the network
within a bounded gap of the network's capacity. The linear network code
explicitly connects network coding for wireline networks with codes for
Gaussian networks.
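The relay operation described above, collecting received binary tuples into a long vector, applying a linear transformation over GF(2), and splitting the result back into n-bit tuples, can be sketched as follows; the matrix G is an arbitrary invertible example, not a code from the paper.

```python
# Sketch of the relay-side linear network coding step: concatenate received
# binary tuples, multiply by a 0/1 matrix over GF(2) (XOR arithmetic), and
# split the result back into n-bit tuples for transmission.
# (G is an arbitrary invertible example, not a code from the paper.)

def gf2_matvec(matrix, vec):
    """Multiply a 0/1 matrix by a 0/1 vector over GF(2)."""
    return [sum(m * v for m, v in zip(row, vec)) % 2 for row in matrix]

n = 2
received = [[1, 0], [0, 1]]                 # two quantized 2-bit tuples
long_vec = [b for tup in received for b in tup]
G = [[1, 0, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]
coded = gf2_matvec(G, long_vec)
out_tuples = [coded[i:i + n] for i in range(0, len(coded), n)]
```

Because G is triangular with a unit diagonal it is invertible over GF(2), so no information is lost by the relay's transformation.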
|
1203.2384
|
Elements of Cellular Blind Interference Alignment --- Aligned Frequency
Reuse, Wireless Index Coding and Interference Diversity
|
cs.IT math.IT
|
We explore degrees of freedom (DoF) characterizations of partially connected
wireless networks, especially cellular networks, with no channel state
information at the transmitters. Specifically, we introduce three fundamental
elements --- aligned frequency reuse, wireless index coding and interference
diversity --- through a series of examples, focusing first on infinite regular
arrays, then on finite clusters with arbitrary connectivity and message sets,
and finally on heterogeneous settings with asymmetric multiple antenna
configurations. Aligned frequency reuse refers to the optimality of orthogonal
resource allocations in many cases, but according to unconventional reuse
patterns that are guided by interference alignment principles. Wireless index
coding highlights both the intimate connection between the index coding problem
and cellular blind interference alignment, as well as the added complexity
inherent to wireless settings. Interference diversity refers to the observation
that in a wireless network each receiver experiences a different set of
interferers, and depending on the actions of its own set of interferers, the
interference-free signal space at each receiver fluctuates differently from
other receivers, creating opportunities for robust applications of blind
interference alignment principles.
|
1203.2386
|
On-Board Visual Tracking with Unmanned Aircraft System (UAS)
|
cs.CV cs.RO
|
This paper presents the development of a real time tracking algorithm that
runs on a 1.2 GHz PC/104 computer on-board a small UAV. The algorithm uses zero
mean normalized cross correlation to detect and locate an object in the image.
A Kalman filter is used to make the tracking algorithm computationally
efficient. Object position in an image frame is predicted using the motion
model and a search window, centered at the predicted position is generated.
Object position is updated with the measurement from object detection. The
detected position is sent to the motion controller to move the gimbal so that
the object stays at the center of the image frame. Detection and tracking is
autonomously carried out on the payload computer, and the system can operate
in two different modes. The first mode starts detection and tracking using a
stored image patch. The second mode allows the operator on the ground to
select the object of interest for the UAV to track. The system is capable of
re-detecting an object in the event of tracking failure. Performance of the
tracking system was verified both in the lab and in the field by mounting the
payload on a vehicle and simulating a flight. Tests show that the system can
detect and track a diverse set of objects in real time. Flight testing of the
system will be conducted at the next available opportunity.
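The matching score at the heart of such trackers, zero-mean normalized cross-correlation, can be written in a few lines; the 2x2 patches below are toy data.

```python
import math

# Minimal zero-mean normalized cross-correlation (ZNCC) between an image
# patch and a same-sized window: subtract each patch's mean, then compute
# the normalized dot product. Scores lie in [-1, 1]; 1 = perfect match,
# and uniform brightness shifts do not change the score.

def zncc(patch, window):
    a = [p for row in patch for p in row]
    b = [p for row in window for p in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

template = [[10, 20], [30, 40]]
same = zncc(template, [[110, 120], [130, 140]])   # brightness-shifted copy
diff = zncc(template, [[40, 30], [20, 10]])       # reversed patch
```

In the tracker, this score is evaluated at every position of the Kalman-predicted search window and the maximum gives the detected object location.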
|
1203.2394
|
Decentralized, Adaptive, Look-Ahead Particle Filtering
|
stat.ML cs.LG stat.CO
|
The decentralized particle filter (DPF) was proposed recently to increase the
level of parallelism of particle filtering. Given a decomposition of the state
space into two nested sets of variables, the DPF uses a particle filter to
sample the first set and then conditions on this sample to generate a set of
samples for the second set of variables. The DPF can be understood as a variant
of the popular Rao-Blackwellized particle filter (RBPF), where the second step
is carried out using Monte Carlo approximations instead of analytical
inference. As a result, the range of applications of the DPF is broader than
the one for the RBPF. In this paper, we improve the DPF in two ways. First, we
derive a Monte Carlo approximation of the optimal proposal distribution and,
consequently, design and implement a more efficient look-ahead DPF. Although
the decentralized filters were initially designed to capitalize on parallel
implementation, we show that the look-ahead DPF can outperform the standard
particle filter even on a single machine. Second, we propose the use of bandit
algorithms to automatically configure the state space decomposition of the DPF.
|
1203.2404
|
Video Object Tracking and Analysis for Computer Assisted Surgery
|
cs.CV
|
The pedicle screw insertion technique has revolutionized the surgical
treatment of spinal fractures and spinal disorders. Although X-ray fluoroscopy
based navigation is popular, there is a risk of prolonged exposure to X-ray
radiation. Systems that have lower radiation risk are generally quite
expensive. The position and orientation of the drill is clinically very
important in pedicle screw fixation. In this paper, the position and
orientation of the marker on the drill are determined using pattern-recognition
methods based on geometric features obtained from the input video sequence
taken from a CCD camera. A search is then performed on the video frames after
preprocessing, to obtain the exact position and orientation of the drill.
Animated graphics, showing the instantaneous position and orientation of the
drill, are then overlaid on the processed video for real-time drill control and
navigation.
|
1203.2456
|
On Secrecy above Secrecy Capacity
|
cs.IT math.IT
|
We consider secrecy obtained when one transmits on a Gaussian Wiretap channel
above the secrecy capacity. Instead of equivocation, we consider probability of
error as the criterion of secrecy. The usual channel codes are considered for
transmission. The rates obtained can reach the channel capacity. We show that
the "confusion" caused to Eve when the transmission rate is above the capacity
of Eve's channel is similar to the confusion caused by the wiretap channel
codes used below the secrecy capacity.
|
1203.2468
|
Diversity, Coding, and Multiplexing Trade-Off of Network-Coded
Cooperative Wireless Networks
|
cs.IT math.IT
|
In this paper, we study the performance of network-coded cooperative
diversity systems with practical communication constraints. More
specifically, we investigate the interplay between diversity, coding, and
multiplexing gain when the relay nodes do not act as dedicated repeaters that
only forward data packets transmitted by the sources, but instead pursue
their own interest by forwarding packets that contain a network-coded version
of the received data and their own. We provide a very accurate analysis of
the Average Bit Error Probability (ABEP) for two network topologies with
three and four nodes, when practical communication constraints, i.e.,
erroneous decoding at the relays and fading over all the wireless links, are
taken into account. Furthermore, diversity and coding gain are studied, and
the advantages and disadvantages of cooperation and binary Network Coding
(NC) are highlighted. Our results show that the throughput increase
introduced by NC is offset by a loss of diversity and coding gain. It is
shown that there is neither a coding nor a diversity gain for the source node
when the relays forward a network-coded version of the received data and
their own. Compared to other results available in the literature, the
conclusion is that binary NC seems to be more useful when the relay nodes act
only on behalf of the source nodes and do not mix their own packets with the
received ones. Analytical derivations and findings are substantiated through
extensive Monte Carlo simulations.
|
1203.2498
|
Fault detection system for Arabic language
|
cs.CL
|
The study of natural language, especially Arabic, and of mechanisms for
automatic processing is a fascinating field of study with various potential
applications. The importance of natural language processing tools is driven
by the need for applications that can effectively handle the vast mass of
information available nowadays in electronic form. Among these tools, our
interest is in writing checkers, motivated mainly by the need for fast
writing that keeps pace with daily life. The morphological and syntactic
properties of Arabic make it a difficult language to master and explain the
lack of processing tools for this language. Among these properties, we can
mention: the complex structure of the Arabic word, its agglutinative nature,
the lack of vocalization, the segmentation of the text, the linguistic
richness, etc.
|
1203.2499
|
A framework for integrated design of algorithmic architectural forms
|
cs.CE
|
This paper presents a methodology and software tools for the parametric
design of complex architectural objects, called digital or algorithmic forms.
In order to provide a flexible tool, the proposed design philosophy involves
two open source utilities, Donkey and MIDAS, written in the Grasshopper
algorithm editor and C++, respectively, that are to be linked with
scripting-based architectural modellers (Rhinoceros, IntelliCAD) and the open
source Finite Element solver OOFEM. The emphasis is put on the mechanical
response in order to provide architects with a consistent learning framework
and an insight into the structural behaviour of the designed objects. As
demonstrated on three case studies, the proposed modular solution is capable
of handling objects of considerable structural complexity, thereby
accelerating the process of finding procedural design parameters from the
order of weeks to days or hours.
|
1203.2506
|
Vibrating Cantilever Transducer Incorporated in Dual Diaphragms
Structure for Sensing Differential Pneumatic Pressure
|
cs.SY
|
Pneumatic pressure cells with a thin metallic spherical diaphragm of shallow
spherical shell configuration, linked with a vibrating wire pickup or a
vibrating cantilever pickup, were reported in the past. In order to enhance
the sensitivity of the pressure cell, this work considers a dual diaphragm
structure fitted with cantilever pickups. The design and development of the
pressure cell with this dual diaphragm structure are presented here. The
geometry is optimally designed to sense either a single pressure or a
differential pressure source. The cantilevers of the two diaphragms are
excited to produce vibrations, and the frequencies of vibration are
determined by picking up signals from orthogonally arranged opto-coupler
links. The computed frequency is then used with a lookup table to obtain the
pressure acting on the corresponding diaphragm. In external circuits, the
average pressure and the differential pressure acting on the two diaphragms
are computed. Furthermore, circuits transmitting the average and differential
pressures to a remote area in digital and analogue form are presented. A
performance analysis of the proposed mechatronic pressure cell is made, and
its improved performance over other pressure cells is presented.
|
1203.2507
|
Deviation optimal learning using greedy Q-aggregation
|
math.ST cs.LG stat.ML stat.TH
|
Given a finite family of functions, the goal of model selection aggregation
is to construct a procedure that mimics the function from this family that is
the closest to an unknown regression function. More precisely, we consider a
general regression model with fixed design and measure the distance between
functions by the mean squared error at the design points. While procedures
based on exponential weights are known to solve the problem of model selection
aggregation in expectation, they are, surprisingly, sub-optimal in deviation.
We propose a new formulation called Q-aggregation that addresses this
limitation; namely, its solution leads to sharp oracle inequalities that are
optimal in a minimax sense. Moreover, based on the new formulation, we design
greedy Q-aggregation procedures that produce sparse aggregation models
achieving the optimal rate. The convergence and performance of these greedy
procedures are illustrated and compared with other standard methods on
simulated examples.
|
1203.2508
|
Pneumatic Pressure Cell with Twin Diaphragms Embedding Spherical
Corrugations in a Dual Diaphragm Structure
|
cs.SY
|
Thin metallic shallow spherical diaphragms are used for measuring pneumatic
pressure in process industries. The drift of the vertex realized under
applied pressure is transformed into an electrical signal, which is
calibrated for pressure. We propose a modified structure for the pressure
cell with double-ended shallow spherical shells embedded with spherical
corrugations so as to enhance the sensitivity to a greater extent. A dual
such installation in the structure of the pressure cell yields a further
increase in sensitivity. The construction details of the diaphragm structure,
together with the theory and analysis used to assess its performance, are
presented.
|
1203.2509
|
Tripartite Bell inequality, random matrices and trilinear forms
|
math.OA cs.IT math-ph math.FA math.IT math.MP math.PR
|
In this seminar report, we present in detail the proof of a recent result due
to J. Bri\"et and T. Vidick, improving an estimate in a 2008 paper by D.
P\'erez-Garc\'{\i}a, M. Wolf, C. Palazuelos, I. Villanueva, and M. Junge,
estimating the growth of the deviation in the tripartite Bell inequality. The
proof requires a delicate estimate of the norms of certain trilinear (or
$d$-linear) forms on Hilbert space with coefficients in the second Gaussian
Wiener chaos. Let $E^n_{\vee}$ (resp. $E^n_{\min}$) denote $ \ell_1^n \otimes
\ell_1^n\otimes \ell_1^n$ equipped with the injective (resp. minimal) tensor
norm. Here $ \ell_1^n$ is equipped with its maximal operator space structure.
The Bri\"et-Vidick method yields that the identity map $I_n$ satisfies (for
some $c>0$) $\|I_n:\ E^n_{\vee}\to E^n_{\min}\|\ge c n^{1/4} (\log n)^{-3/2}.$
Let $S^n_2$ denote the (Hilbert) space of $n\times n$-matrices equipped with
the Hilbert-Schmidt norm. While a lower bound closer to $n^{1/2} $ is still
open, their method produces an interesting, asymptotically almost sharp,
related estimate for the map $J_n:\ S^n_2\stackrel{\vee}{\otimes}
S^n_2\stackrel{\vee}{\otimes}S^n_2 \to \ell_2^{n^3} \stackrel{\vee}{\otimes}
\ell_2^{n^3} $ taking $e_{i,j}\otimes e_{k,l}\otimes e_{m,n}$ to
$e_{[i,k,m],[j,l,n]}$.
|
1203.2511
|
A Simple Flood Forecasting Scheme Using Wireless Sensor Networks
|
cs.LG cs.CE cs.NI cs.SY stat.AP
|
This paper presents a forecasting model designed using WSNs (Wireless Sensor
Networks) to predict flood in rivers using simple and fast calculations to
provide real-time results and save the lives of people who may be affected by
the flood. Our prediction model uses multiple-variable robust linear
regression, which is easy to understand, simple and cost-effective to
implement, and speed efficient with low resource utilization, yet provides
real-time predictions with reliable accuracy, features which are desirable in
any real-world algorithm. Our prediction model is independent of the number of
parameters, i.e. any number of parameters may be added or removed based on the
on-site requirements. When the water level rises, we represent it using a
polynomial whose nature is used to determine if the water level may exceed the
flood line in the near future. We compare our work with a contemporary
algorithm to demonstrate our improvements over it. Then we present our
simulation results for the predicted water level compared to the actual water
level.
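As a toy illustration of the trend-extrapolation idea described above, the
following sketch fits a single-predictor least-squares line to recent water
levels and extrapolates it; the paper uses multi-variable robust regression
and a polynomial trend, so `fit_linear`, `will_exceed`, and their parameters
are illustrative simplifications, not the authors' algorithm:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with a single predictor."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def will_exceed(levels, flood_line, horizon):
    """Extrapolate the recent water-level trend `horizon` steps ahead and
    flag whether the flood line may be crossed."""
    t = list(range(len(levels)))
    a, b = fit_linear(t, levels)
    predicted = a * (len(levels) - 1 + horizon) + b
    return predicted >= flood_line, predicted
```

For a steadily rising level, the flag trips as soon as the extrapolated line
reaches the threshold within the chosen horizon.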
|
1203.2514
|
Enhancement of Images using Morphological Transformation
|
cs.CV
|
This paper deals with the enhancement of images with poor contrast and the
detection of their background. It proposes a framework to detect the
background in images characterized by poor contrast. Image enhancement is
carried out by two methods based on Weber's law. The first method employs
information from image background analysis by blocks, while the second uses
the morphological opening and closing operations, which are employed to
define multi-background grayscale images. The complete image processing is
done using a MATLAB simulation model. The paper covers morphological
transformations and Weber's law; approximation of the image background by
means of block analysis in conjunction with transformations that enhance
images with poor lighting; the multi-background notion introduced by means of
opening by reconstruction; and a comparison among several techniques to
improve contrast in images. Finally, conclusions are presented.
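As a minimal sketch of the morphological primitives involved (a flat 3x3
structuring element in pure Python; the paper's block analysis, Weber's-law
transforms, and opening by reconstruction are not reproduced), grayscale
opening, closing, and the top-hat residue used for background estimation
could look like:

```python
def _neighborhood(img, r, c):
    # 3x3 window around (r, c), clipped at the image borders
    h, w = len(img), len(img[0])
    return [img[i][j]
            for i in range(max(0, r - 1), min(h, r + 2))
            for j in range(max(0, c - 1), min(w, c + 2))]

def erode(img):
    return [[min(_neighborhood(img, r, c)) for c in range(len(img[0]))]
            for r in range(len(img))]

def dilate(img):
    return [[max(_neighborhood(img, r, c)) for c in range(len(img[0]))]
            for r in range(len(img))]

def opening(img):
    # removes bright details smaller than the structuring element
    return dilate(erode(img))

def closing(img):
    # fills dark details smaller than the structuring element
    return erode(dilate(img))

def top_hat(img):
    """White top-hat: image minus its opening, isolating small bright
    structures against the estimated background."""
    op = opening(img)
    return [[img[r][c] - op[r][c] for c in range(len(img[0]))]
            for r in range(len(img))]
```

Opening a flat region leaves it unchanged, while an isolated bright pixel is
removed by the opening and therefore survives in the top-hat residue.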
|
1203.2528
|
Knowledge-based antenna pattern extrapolation
|
cs.CE
|
We describe a theoretically-motivated algorithm for extrapolation of antenna
radiation patterns from a small number of measurements. This algorithm exploits
constraints on the antenna's underlying design to avoid ambiguities, but is
sufficiently general to address many different antenna types. A theoretical
basis for the robustness of this algorithm is developed, and its performance is
verified in simulation using a number of popular antenna designs.
|
1203.2550
|
Degrees of Freedom of Time Correlated MISO Broadcast Channel with
Delayed CSIT
|
cs.IT math.IT
|
We consider the time-correlated multiple-input single-output (MISO) broadcast
channel where the transmitter has imperfect knowledge of the current channel
state, in addition to delayed channel state information. By representing the
quality of the current channel state information as P^{-\alpha} for the
signal-to-noise ratio P and some constant \alpha \geq 0, we characterize the
optimal degrees of freedom region for this more general two-user
time-correlated MISO broadcast channel. The essential ingredients of the
proposed scheme lie in the quantization and multicasting of the overheard
interference while broadcasting new private messages. Our proposed scheme
smoothly bridges between the scheme recently proposed by Maddah-Ali and Tse
with no current state information and simple zero-forcing beamforming with
perfect current state information.
|
1203.2556
|
A Probabilistic Transmission Expansion Planning Methodology based on
Roulette Wheel Selection and Social Welfare
|
cs.AI cs.SY
|
A new probabilistic methodology for transmission expansion planning (TEP)
that does not require a priori specification of new/additional transmission
capacities and uses the concept of social welfare has been proposed. Two new
concepts have been introduced in this paper: (i) roulette wheel methodology has
been used to calculate the capacity of new transmission lines and (ii) load
flow analysis has been used to calculate expected demand not served (EDNS). The
overall methodology has been implemented on a modified IEEE 5-bus test system.
Simulations show an important result: addition of only new transmission lines
is not sufficient to minimize EDNS.
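The roulette-wheel step can be sketched as the classic fitness-proportionate
draw; how the selected slot maps to a candidate transmission-line capacity is
application-specific and not shown here:

```python
import random

def roulette_wheel_select(weights, rng=random.random):
    """Pick an index with probability proportional to its weight."""
    total = sum(weights)
    threshold = rng() * total
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if cumulative >= threshold:
            return i
    return len(weights) - 1  # guard against floating-point round-off
```

Heavier slots are drawn more often, which is how the methodology biases the
choice of new line capacities toward more promising candidates.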
|
1203.2557
|
On the Necessity of Irrelevant Variables
|
cs.LG
|
This work explores the effects of relevant and irrelevant boolean variables
on the accuracy of classifiers. The analysis uses the assumption that the
variables are conditionally independent given the class, and focuses on a
natural family of learning algorithms for such sources when the relevant
variables have a small advantage over random guessing. The main result is that
algorithms relying predominantly on irrelevant variables have error
probabilities that quickly go to 0 in situations where algorithms that limit
the use of irrelevant variables have errors bounded below by a positive
constant. We also show that accurate learning is possible even when there are
so few examples that one cannot determine with high confidence whether or not
any individual variable is relevant.
|
1203.2563
|
Average Consensus on General Strongly Connected Digraphs
|
cs.SY
|
We study the average consensus problem of multi-agent systems for general
network topologies with unidirectional information flow. We propose two
(linear) distributed algorithms, deterministic and gossip, respectively for the
cases where the inter-agent communication is synchronous and asynchronous. Our
contribution is that in both cases, the developed algorithms guarantee state
averaging on arbitrary strongly connected digraphs; in particular, this
graphical condition does not require that the network be balanced or symmetric,
thereby extending many previous results in the literature. The key novelty of
our approach is to augment an additional variable for each agent, called
"surplus", whose function is to locally record individual state updates. For
convergence analysis, we employ graph-theoretic and nonnegative matrix tools,
with the eigenvalue perturbation theory playing a crucial role.
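The paper's surplus-based algorithm is not reproduced here; as a related
sketch, the well-known ratio (push-sum) scheme also achieves state averaging
on arbitrary strongly connected digraphs without any balance or symmetry
assumption, by tracking a companion weight alongside each state:

```python
def push_sum(values, out_neighbors, iters=300):
    """Ratio (push-sum) consensus on a strongly connected digraph.

    out_neighbors[i] lists the nodes i sends to; a self-loop is added so
    each node retains part of its own mass. The ratio x_i/w_i converges to
    the average of the initial values."""
    n = len(values)
    x = list(values)
    w = [1.0] * n
    for _ in range(iters):
        nx = [0.0] * n
        nw = [0.0] * n
        for i in range(n):
            targets = out_neighbors[i] + [i]  # implicit self-loop
            share_x = x[i] / len(targets)
            share_w = w[i] / len(targets)
            for j in targets:
                nx[j] += share_x
                nw[j] += share_w
        x, w = nx, nw
    return [xi / wi for xi, wi in zip(x, w)]
```

Because the mixing matrix is column-stochastic, the totals of x and w are
conserved at every step, which is what makes the limiting ratio the exact
average even on unbalanced digraphs.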
|
1203.2569
|
When Index Term Probability Violates the Classical Probability Axioms,
Quantum Probability can be a Necessary Theory for Information Retrieval
|
cs.IR
|
Probabilistic models require the notion of an event space for defining a
probability measure. An event space has a probability measure which obeys the
Kolmogorov axioms. However, the probabilities observed from distinct sources,
such as the relevance of documents, may not admit a single event space, thus
causing some issues. In this article, some results are introduced for
checking whether the observed probabilities of relevance of documents admit a
single event space. Moreover, an alternative framework of probability is
introduced, thus challenging the use of classical probability for ranking
documents. Some reflections are made on the convenience of extending
classical probabilistic retrieval toward a more general framework which
encompasses these issues.
|
1203.2570
|
Differential Privacy for Functions and Functional Data
|
stat.ML cs.LG
|
Differential privacy is a framework for privately releasing summaries of a
database. Previous work has focused mainly on methods for which the output is a
finite dimensional vector, or an element of some discrete set. We develop
methods for releasing functions while preserving differential privacy.
Specifically, we show that adding an appropriate Gaussian process to the
function of interest yields differential privacy. When the functions lie in the
same RKHS as the Gaussian process, then the correct noise level is established
by measuring the "sensitivity" of the function in the RKHS norm. As examples we
consider kernel density estimation, kernel support vector machines, and
functions in reproducing kernel Hilbert spaces.
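As a toy sketch of the underlying Gaussian-mechanism idea on a finite
evaluation grid (iid noise rather than the correlated Gaussian process the
paper requires, with the RKHS sensitivity supplied by the caller; the
calibration sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon is the
standard Gaussian-mechanism bound, and the function names are illustrative):

```python
import math
import random

def private_function_release(f, grid, sensitivity, epsilon, delta, rng=None):
    """Release noisy evaluations of f on a grid of points.

    Noise scale follows the usual Gaussian-mechanism calibration; the paper
    instead adds a full Gaussian process matched to the RKHS of f, so this
    iid version is only a simplified illustration."""
    rng = rng or random.Random()
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [f(x) + rng.gauss(0.0, sigma) for x in grid]
```

With a very large privacy budget the noise vanishes and the released values
approach the true evaluations, as expected.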
|
1203.2574
|
Towards a Unified Architecture for in-RDBMS Analytics
|
cs.DB
|
The increasing use of statistical data analysis in enterprise applications
has created an arms race among database vendors to offer ever more
sophisticated in-database analytics. One challenge in this race is that each
new statistical technique must be implemented from scratch in the RDBMS, which
leads to a lengthy and complex development process. We argue that the root
cause for this overhead is the lack of a unified architecture for in-database
analytics. Our main contribution in this work is to take a step towards such a
unified architecture. A key benefit of our unified architecture is that
performance optimizations for analytics techniques can be studied generically
instead of in an ad hoc, per-technique fashion. In particular, our technical
contributions are theoretical and empirical studies of two key factors that we
found impact performance: the order data is stored, and parallelization of
computations on a single-node multicore RDBMS. We demonstrate the feasibility
of our architecture by integrating several popular analytics techniques into
two commercial and one open-source RDBMS. Our architecture requires changes to
only a few dozen lines of code to integrate a new statistical technique. We
then compare our approach with the native analytics tools offered by the
commercial RDBMSes on various analytics tasks, and validate that our approach
achieves competitive or higher performance, while still achieving the same
quality.
|
1203.2655
|
Control centrality and hierarchical structure in complex networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We introduce the concept of control centrality to quantify the ability of a
single node to control a directed weighted network. We calculate the
distribution of control centrality for several real networks and find that it
is mainly determined by the network's degree distribution. We rigorously prove
that in a directed network without loops the control centrality of a node is
uniquely determined by its layer index or topological position in the
underlying hierarchical structure of the network. Inspired by the deep relation
between control centrality and hierarchical structure in a general directed
network, we design an efficient attack strategy against the controllability of
malicious networks.
|
1203.2672
|
FDB: A Query Engine for Factorised Relational Databases
|
cs.DB cs.DS
|
Factorised databases are relational databases that use compact factorised
representations at the physical layer to reduce data redundancy and boost query
performance. This paper introduces FDB, an in-memory query engine for
select-project-join queries on factorised databases. Key components of FDB are
novel algorithms for query optimisation and evaluation that exploit the
succinctness brought by data factorisation. Experiments show that for data sets
with many-to-many relationships FDB can outperform relational engines by orders
of magnitude.
|
1203.2675
|
Quantum Simpsons Paradox and High Order Bell-Tsirelson Inequalities
|
quant-ph cs.IT math-ph math.IT math.MP math.ST stat.TH
|
The well-known Simpson's Paradox, or Yule-Simpson Effect, in statistics is
often illustrated by the following thought experiment: A drug may be found in a
trial to increase the survival rate for both men and women, but decrease the
rate for all the subjects as a whole. This paradoxical reversal effect has been
found in numerous datasets across many disciplines, and is now included in most
introductory statistics textbooks. In the language of the drug trial, the
effect is impossible, however, if both treatment groups' survival rates are
higher than both control groups'. Here we show that for quantum probabilities,
such a reversal remains possible. In particular, a "quantum drug", so to speak,
could be life-saving for both men and women yet deadly for the whole
population. We further identify a simple inequality on conditional
probabilities that must hold classically but is violated by our quantum
scenarios, and completely characterize the maximum quantum violation. As
polynomial inequalities on entries of the density operator, our inequalities
are of degree 6.
|
1203.2676
|
Robust Stability of Uncertain Quantum Systems
|
quant-ph cs.SY math.OC
|
This paper considers the problem of robust stability for a class of uncertain
quantum systems subject to unknown perturbations in the system Hamiltonian.
Some general stability results are given for different classes of perturbations
to the system Hamiltonian. Then, the special case of a nominal linear quantum
system is considered with either quadratic or non-quadratic perturbations to
the system Hamiltonian. In this case, robust stability conditions are given in
terms of strict bounded real conditions.
|
1203.2690
|
Analysis of Sparse MIMO Radar
|
cs.IT math.IT math.NA
|
We consider a multiple-input-multiple-output radar system and derive a
theoretical framework for the recoverability of targets in the azimuth-range
domain and the azimuth-range-Doppler domain via sparse approximation
algorithms. Using tools developed in the area of compressive sensing, we prove
bounds on the number of detectable targets and the achievable resolution in the
presence of additive noise. Our theoretical findings are validated by numerical
simulations.
|
1203.2721
|
Analysis of Finite Field Spreading for Multiple-Access Channel
|
cs.IT math.IT
|
A finite field spreading scheme is proposed for a synchronous multiple-access
channel with Gaussian noise and equal-power users. For each user, $s$
information bits are spread \emph{jointly} into a length-$sL$ vector by $L$
multiplications on GF($2^s$). Thus, each information bit is dispersed into $sL$
transmitted symbols, and the finite field despreading (FF-DES) of each bit can
take advantage of $sL$ independent receiving observations. To show the
performance gain of joint spreading quantitatively, an extrinsic information
transfer (EXIT) function analysis of the FF-DES is given. It shows that the
asymptotic slope of this EXIT function increases as $s$ increases and is in
fact the absolute slope of the bit error rate (BER) curve at the low BER
region. This means that by increasing the length $s$ of information bits for
joint spreading, a larger absolute slope of the BER curve is achieved. For $s,
L\geq 2$, the BER curve of the finite field spreading has a larger absolute
slope than that of the single-user transmission with BPSK modulation.
|
1203.2725
|
On the Complexity of the Minimum Latency Scheduling Problem on the
Euclidean Plane
|
cs.NI cs.SY
|
We show NP-hardness of the minimum latency scheduling (MLS) problem under the
physical model of wireless networking. In this model a transmission is received
successfully if the Signal to Interference-plus-Noise Ratio (SINR) is above a
given threshold. In the minimum latency scheduling problem, the goal is to
assign a time slot and power level to each transmission, so that all the
messages are received successfully, and the number of distinct times slots is
minimized.
Despite its seeming simplicity and several previous hardness results for
various settings of the minimum latency scheduling problem, it has remained an
open question whether or not the minimum latency scheduling problem is NP-hard,
when the nodes are placed in the Euclidean plane and arbitrary power levels can
be chosen for the transmissions. We resolve this open question for all path
loss exponent values $\alpha \geq 3$.
|
1203.2760
|
New approximations for DQPSK transmission bit error rate
|
cs.IT math.CA math.IT
|
In this correspondence our aim is to use some tight lower and upper bounds on
the differential quaternary phase shift keying transmission bit error rate in
order to deduce accurate approximations for the bit error rate, improving the
known results in the literature. The computation of our new approximate
expressions is significantly simpler than that of the exact expression.
|
1203.2768
|
On the Performance Limits of Pilot-Based Estimation of Bandlimited
Frequency-Selective Communication Channels
|
cs.IT cs.NI math.IT
|
In this paper the problem of assessing bounds on the accuracy of pilot-based
estimation of a bandlimited frequency selective communication channel is
tackled. Mean square error is taken as a figure of merit in channel estimation
and a tapped-delay line model is adopted to represent a continuous time channel
via a finite number of unknown parameters. This allows us to derive some
properties of optimal waveforms for channel sounding and closed form Cramer-Rao
bounds.
|
1203.2769
|
Performance Guarantees of the Thresholding Algorithm for the Co-Sparse
Analysis Model
|
cs.IT math.IT
|
The co-sparse analysis model for signals assumes that the signal of interest
can be multiplied by an analysis dictionary \Omega, leading to a sparse
outcome. This model stands as an interesting alternative to the more classical
synthesis based sparse representation model. In this work we propose a
theoretical study of the performance guarantee of the thresholding algorithm
for the pursuit problem in the presence of noise. Our analysis reveals two
significant properties of \Omega, which govern the pursuit performance: The
first is the degree of linear dependencies between sets of rows in \Omega,
depicted by the co-sparsity level. The second property, termed the Restricted
Orthogonal Projection Property (ROPP), is the level of independence between
such dependent sets and other rows in \Omega. We show how these dictionary
properties are meaningful and useful, both in the theoretical bounds derived,
and in a series of experiments that are shown to align well with the
theoretical prediction.
|
1203.2778
|
Seven Means, Generalized Triangular Discrimination, and Generating
Divergence Measures
|
cs.IT math.IT
|
From a geometrical point of view, Eve (2003) studied seven means: the
harmonic, geometric, arithmetic, Heronian, contra-harmonic, root-mean-square,
and centroidal means. We consider for the first time a new measure called the
generalized triangular discrimination. Inequalities among non-negative
differences arising from the seven means and particular cases of the
generalized triangular discrimination are considered. Some new generating
measures and their exponential representations are also presented.
|
1203.2816
|
Animal-Inspired Agile Flight Using Optical Flow Sensing
|
cs.SY
|
There is evidence that flying animals such as pigeons, goshawks, and bats use
optical flow sensing to enable high-speed flight through forest clutter. This
paper discusses the elements of a theory of controlled flight through obstacle
fields in which motion control laws are based on optical flow sensing.
Performance comparison is made with feedback laws that use distance and bearing
measurements, and practical challenges of implementation on an actual robotic
air vehicle are described. The related question of fundamental performance
limits due to clutter density is addressed.
|
1203.2821
|
Graphlet decomposition of a weighted network
|
stat.ME cs.LG cs.SI physics.soc-ph
|
We introduce the graphlet decomposition of a weighted network, which encodes
a notion of social information based on social structure. We develop a scalable
inference algorithm, which combines EM with Bron-Kerbosch in a novel fashion,
for estimating the parameters of the model underlying graphlets using one
network sample. We explore some theoretical properties of the graphlet
decomposition, including computational complexity, redundancy and expected
accuracy. We demonstrate graphlets on synthetic and real data. We analyze
messaging patterns on Facebook and criminal associations in the 19th century.
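The Bron-Kerbosch component can be sketched in its textbook (non-pivot) form;
how the enumerated maximal cliques feed the EM step of the graphlet model is
not shown:

```python
def bron_kerbosch(adj, R=None, P=None, X=None, cliques=None):
    """Enumerate all maximal cliques of an undirected graph.

    adj maps each node to the set of its neighbors. R is the growing
    clique, P the candidate extensions, X the already-explored nodes."""
    if cliques is None:
        cliques = []
        R, P, X = set(), set(adj), set()
    if not P and not X:
        cliques.append(set(R))  # R cannot be extended: it is maximal
    for v in list(P):
        bron_kerbosch(adj, R | {v}, P & adj[v], X & adj[v], cliques)
        P.remove(v)
        X.add(v)
    return cliques
```

On a triangle with a pendant edge, the procedure returns exactly the two
maximal cliques, the triangle and the pendant pair.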
|
1203.2835
|
Statistical Characterization and Mitigation of NLOS Errors in UWB
Localization Systems
|
cs.IT math.IT stat.AP
|
In this paper some new experimental results about the statistical
characterization of the non-line-of-sight (NLOS) bias affecting time-of-arrival
(TOA) estimation in ultrawideband (UWB) wireless localization systems are
illustrated. Then, these results are exploited to assess the performance of
various maximum-likelihood (ML) based algorithms for joint TOA localization and
NLOS bias mitigation. Our numerical results show that the accuracy of all
the considered algorithms is appreciably influenced by the LOS/NLOS conditions
of the propagation environment.
|
1203.2839
|
Square-Cut: A Segmentation Algorithm on the Basis of a Rectangle Shape
|
cs.CV
|
We present a rectangle-based segmentation algorithm that sets up a graph and
performs a graph cut to separate an object from the background. Standard
graph-based algorithms distribute the graph's nodes uniformly and
equidistantly on the image and add a smoothness term to force the cut to
prefer a particular shape. This strategy does not allow the cut to prefer a
certain structure, especially when areas of the object are indistinguishable
from the background. We solve this problem by referring to a rectangular
shape of the object when sampling the graph nodes, i.e., the nodes are
distributed non-uniformly and non-equidistantly on the image. This strategy
can be useful when areas of the object are indistinguishable from the
background. For evaluation, we focus on vertebrae images from Magnetic
Resonance Imaging (MRI) datasets to support the time-consuming manual
slice-by-slice segmentation performed by physicians. The ground truth of the
vertebrae boundaries was manually extracted by two clinical experts
(neurological surgeons) with several years of experience in spine surgery and
afterwards compared with the automatic segmentation results of the proposed
scheme, yielding an average Dice Similarity Coefficient (DSC) of
90.97\pm62.2%.
|
1203.2860
|
Receding Horizon Temporal Logic Control for Finite Deterministic Systems
|
math.OC cs.SY
|
This paper considers receding horizon control of finite deterministic
systems, which must satisfy a high level, rich specification expressed as a
linear temporal logic formula. Under the assumption that time-varying rewards
are associated with states of the system and they can be observed in real-time,
the control objective is to maximize the collected reward while satisfying the
high level task specification. In order to properly react to the changing
rewards, a controller synthesis framework inspired by model predictive control
is proposed, where the rewards are locally optimized at each time-step over a
finite horizon, and the immediate optimal control is applied. By enforcing
appropriate constraints, the infinite trajectory produced by the controller is
guaranteed to satisfy the desired temporal logic formula. Simulation results
demonstrate the effectiveness of the approach.
|
1203.2870
|
Streaming Transmitter over Block-Fading Channels with Delay Constraint
|
cs.IT math.IT
|
Data streaming transmission over a block fading channel is studied. It is
assumed that the transmitter receives a new message at each channel block at a
constant rate, which is fixed by an underlying application, and tries to
deliver the arriving messages by a common deadline. Various transmission
schemes are proposed and compared with an informed transmitter upper bound in
terms of the average decoded rate. It is shown that in the single receiver case
the adaptive joint encoding (aJE) scheme is asymptotically optimal, in that it
achieves the ergodic capacity as the transmission deadline goes to infinity;
and it closely follows the performance of the informed transmitter upper bound
in the case of finite transmission deadline. On the other hand, in the presence
of multiple receivers with different signal-to-noise ratios (SNR), memoryless
transmission (MT), time sharing (TS) and superposition transmission (ST)
schemes are shown to be more robust than the joint encoding (JE) scheme as they
have gradual performance loss with decreasing SNR.
|
1203.2886
|
BitPath -- Label Order Constrained Reachability Queries over Large
Graphs
|
cs.DB cs.DS
|
In this paper we focus on the following constrained reachability problem over
edge-labeled graphs like RDF -- "given source node x, destination node y, and a
sequence of edge labels (a, b, c, d), is there a path between the two nodes
such that the edge labels on the path satisfy a regular expression
"*a.*b.*c.*d.*". A "*" before "a" allows any other edge label to appear on the
path before edge "a". "a.*" forces at least one edge with label "a". ".*" after
"a" allows zero or more edge labels after "a" and before "b". Our query
processing algorithm uses simple divide-and-conquer and greedy pruning
procedures to limit the search space. However, our graph indexing technique --
based on "compressed bit-vectors" -- allows indexing large graphs which
otherwise would have been infeasible. We have evaluated our approach on graphs
with more than 22 million edges and 6 million nodes -- much larger compared to
the datasets used in the contemporary work on path queries.
|
1203.2890
|
Statistical Characterization and Mitigation of NLOS Bias in UWB
Localization Systems
|
cs.IT math.IT stat.AP
|
Propagation in non-line-of-sight (NLOS) conditions is one of the major
impairments in ultrawideband (UWB) wireless localization systems based on
time-of-arrival (TOA) measurements. In this paper the problem of the joint
statistical characterization of the NLOS bias and of the most representative
features of LOS/NLOS UWB waveforms is investigated. In addition, the
performance of various maximum-likelihood (ML) estimators for joint
localization and NLOS bias mitigation is assessed. Our numerical results
show that the accuracy of all the considered estimators is appreciably
influenced by the LOS/NLOS conditions of the propagation environment and that a
statistical knowledge of multiple signal features can be exploited to mitigate
the NLOS bias, reducing the overall localization error.
|
1203.2936
|
Combinatorial Selection and Least Absolute Shrinkage via the CLASH
Algorithm
|
cs.IT math.IT
|
The least absolute shrinkage and selection operator (LASSO) for linear
regression exploits the geometric interplay of the $\ell_2$-data error
objective and the $\ell_1$-norm constraint to arbitrarily select sparse models.
Guiding this uninformed selection process with sparsity models has been
precisely the center of attention over the last decade in order to improve
learning performance. To this end, we alter the selection process of LASSO to
explicitly leverage combinatorial sparsity models (CSMs) via the combinatorial
selection and least absolute shrinkage (CLASH) operator. We provide concrete
guidelines on how to leverage combinatorial constraints within CLASH, and
characterize CLASH's guarantees as a function of the set restricted isometry
constants of the sensing matrix. Finally, our experimental results show that
CLASH can outperform both LASSO and model-based compressive sensing in sparse
estimation.
|
1203.2982
|
Enhancing network robustness against malicious attacks
|
physics.soc-ph cs.SI physics.comp-ph
|
In a recent work [Proc. Natl. Acad. Sci. USA 108, 3838 (2011)], the authors
proposed a simple measure for network robustness under malicious attacks on
nodes. With a greedy algorithm, they found the optimal structure with respect
to this quantity is an onion structure in which high-degree nodes form a core
surrounded by rings of nodes with decreasing degree. However, in real networks
failures can also occur in links, such as dysfunctional power cables and
blocked airlines. Accordingly, complementary to the node-robustness measurement
($R_{n}$), we propose a link-robustness index ($R_{l}$). We show that solely
enhancing $R_{n}$ cannot guarantee the improvement of $R_{l}$. Moreover, the
structure of the $R_{l}$-optimized network is found to be entirely different from
that of the onion network. In order to design robust networks resistant to more
realistic attack conditions, we propose a hybrid greedy algorithm which takes
both the $R_{n}$ and $R_{l}$ into account. We validate the robustness of our
generated networks against malicious attacks mixed with both node and link
failures. Finally, some economical constraints on swapping links in real
networks are considered, and significant improvements in both aspects of
robustness are still achieved.
|
1203.2987
|
Mining Education Data to Predict Student's Retention: A comparative
Study
|
cs.LG cs.DB
|
The main objective of higher education is to provide quality education to
students. One way to achieve highest level of quality in higher education
system is by discovering knowledge for prediction regarding enrolment of
students in a course. This paper presents a data mining project to generate
predictive models for student retention management. Given new records of
incoming students, these predictive models can produce short, accurate
prediction lists identifying the students who most need support from the
student retention program. This paper examines the quality of the
predictive models generated by the machine learning algorithms. The results
show that some of the machine learning algorithms are able to establish
effective predictive models from the existing student retention data.
|
1203.2990
|
Evolving Culture vs Local Minima
|
cs.LG cs.AI
|
We propose a theory that relates difficulty of learning in deep architectures
to culture and language. It is articulated around the following hypotheses: (1)
learning in an individual human brain is hampered by the presence of effective
local minima; (2) this optimization difficulty is particularly important when
it comes to learning higher-level abstractions, i.e., concepts that cover a
vast and highly-nonlinear span of sensory configurations; (3) such high-level
abstractions are best represented in brains by the composition of many levels
of representation, i.e., by deep architectures; (4) a human brain can learn
such high-level abstractions if guided by the signals produced by other humans,
which act as hints or indirect supervision for these high-level abstractions;
and (5), language and the recombination and optimization of mental concepts
provide an efficient evolutionary recombination operator, and this gives rise
to rapid search in the space of communicable ideas that help humans build up
better high-level internal representations of their world. These hypotheses put
together imply that human culture and the evolution of ideas have been crucial
to counter an optimization difficulty: this optimization difficulty would
otherwise make it very difficult for human brains to capture high-level
knowledge of the world. The theory is grounded in experimental observations of
the difficulties of training deep artificial neural networks. Plausible
consequences of this theory for the efficiency of cultural evolution are
sketched.
|
1203.2992
|
Hybrid Poisson and multi-Bernoulli filters
|
cs.SY cs.CV
|
The probability hypothesis density (PHD) and multi-target multi-Bernoulli
(MeMBer) filters are two leading algorithms that have emerged from random
finite sets (RFS). In this paper we study a method which combines these two
approaches. Our work is motivated by a sister paper, which proves that the full
Bayes RFS filter naturally incorporates a Poisson component representing
targets that have never been detected, and a linear combination of
multi-Bernoulli components representing targets under track. Here we
demonstrate the benefit (in speed of track initiation) that maintenance of a
Poisson component of undetected targets provides. Subsequently, we propose a
method of recycling, which projects Bernoulli components with a low probability
of existence onto the Poisson component (as opposed to deleting them). We show
that this allows us to achieve similar tracking performance using a fraction of
the number of Bernoulli components (i.e., tracks).
|
1203.2995
|
Marginal multi-Bernoulli filters: RFS derivation of MHT, JIPDA and
association-based MeMBer
|
cs.SY cs.CV
|
Recent developments in random finite sets (RFSs) have yielded a variety of
tracking methods that avoid data association. This paper derives a form of the
full Bayes RFS filter and observes that data association is implicitly present,
in a data structure similar to MHT. Subsequently, algorithms are obtained by
approximating the distribution of associations. Two algorithms result: one
nearly identical to JIPDA, and another related to the MeMBer filter. Both
improve performance in challenging environments.
|
1203.3002
|
A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
|
math.OC cs.IT math.IT stat.ML
|
We consider solving the $\ell_1$-regularized least-squares ($\ell_1$-LS)
problem in the context of sparse recovery, for applications such as compressed
sensing. The standard proximal gradient method, also known as iterative
soft-thresholding when applied to this problem, has low computational cost per
iteration but a rather slow convergence rate. Nevertheless, when the solution
is sparse, it often exhibits fast linear convergence in the final stage. We
exploit the local linear convergence using a homotopy continuation strategy,
i.e., we solve the $\ell_1$-LS problem for a sequence of decreasing values of
the regularization parameter, and use an approximate solution at the end of
each stage to warm start the next stage. Although similar strategies have been
studied in the literature, there has been no theoretical analysis of their
global iteration complexity. This paper shows that under suitable assumptions
for sparse recovery, the proposed homotopy strategy ensures that all iterates
along the homotopy solution path are sparse. Therefore the objective function
is effectively strongly convex along the solution path, and geometric
convergence at each stage can be established. As a result, the overall
iteration complexity of our method is $O(\log(1/\epsilon))$ for finding an
$\epsilon$-optimal solution, which can be interpreted as global geometric rate
of convergence. We also present empirical results to support our theoretical
analysis.
|
1203.3023
|
Toward an example-based machine translation from written text to ASL
using virtual agent animation
|
cs.CL
|
Modern computational linguistic software cannot produce important aspects of
sign language translation. A review of existing work shows that the majority of
automatic sign language translation systems ignore many of these aspects when
they generate animation; as a result, the interpretation loses the true meaning
of the information. Our goals are: to translate written text from any language
to ASL animation; to model the maximum of raw information using machine
learning and computational techniques; and to produce more adapted and
expressive, natural-looking and understandable ASL animations. Our methods
include linguistic annotation of the initial text and semantic orientation to
generate facial expressions. We use genetic algorithms coupled with
learning/recognition systems to produce the most natural form. To detect
emotion, we rely on fuzzy logic to produce the degree of interpolation between
facial expressions. In short, we present a new expressive language, Text
Adapted Sign Modeling Language (TASML), that describes the maximal set of
aspects relevant to a natural sign language interpretation. This paper is
organized as follows: the next section presents the effect on comprehension of
using the Space/Time/SVO form in ASL animation, based on experimentation. In
section 3, we describe our technical considerations. We present the general
approach we adopted to develop our tool in section 4. Finally, we give some
perspectives and future work.
|
1203.3037
|
Expanding the Transfer Entropy to Identify Information Subgraphs in
Complex Systems
|
q-bio.QM cs.IT math.IT physics.data-an
|
We propose a formal expansion of the transfer entropy to highlight
irreducible sets of variables which provide information about the future state
of each assigned target. Multiplets characterized by a large contribution to
the expansion are associated with informational circuits present in the system,
with an informational character that can be associated with the sign of the
contribution. To limit computational complexity, we adopt the assumption
of Gaussianity and use the corresponding exact formula for the conditional
mutual information. We report the application of the proposed methodology on
two EEG data sets.
|
1203.3051
|
Combining Voting Rules Together
|
cs.AI
|
We propose a simple method for combining together voting rules that performs
a run-off between the different winners of each voting rule. We prove that this
combinator has several good properties. For instance, even if just one of the
base voting rules has a desirable property like Condorcet consistency, the
combination inherits this property. In addition, we prove that combining voting
rules together in this way can make finding a manipulation more computationally
difficult. Finally, we study the impact of this combinator on approximation
methods that find close to optimal manipulations.
|
1203.3055
|
Application of sensitivity analysis in building energy simulations:
combining first and second order elementary effects Methods
|
cs.CE stat.AP
|
Sensitivity analysis plays an important role in the understanding of complex
models. It helps to identify the influence of input parameters on the
outputs. It can also be a tool for understanding the behavior of the model and
can thus help in its development stage. This study aims to analyze and
illustrate the potential usefulness of combining first- and second-order
sensitivity analysis, applied to a building energy model (ESP-r). Through the
example of a collective building, a sensitivity analysis is performed using the
method of elementary effects (also known as the Morris method), including an
analysis of interactions between the input parameters (second-order analysis).
The importance of higher-order analysis in supporting the results of
first-order analysis is highlighted, especially for such a complex model.
Several aspects are tackled to implement the multi-order sensitivity analysis
efficiently: the interval size of the variables, the management of
non-linearity, and the usefulness of various outputs.
|
1203.3065
|
The Leviathan model: Absolute dominance, generalised distrust, small
worlds and other patterns emerging from combining vanity with opinion
propagation
|
physics.soc-ph cs.SI
|
We propose an opinion dynamics model that combines processes of vanity and
opinion propagation. The interactions take place between randomly chosen pairs.
During an interaction, the agents propagate their opinions about themselves and
about other people they know. Moreover, each individual is subject to vanity:
if her interlocutor seems to value her highly, then she increases her opinion
about this interlocutor. On the contrary she tends to decrease her opinion
about those who seem to undervalue her. The combination of these dynamics with
the hypothesis that the opinion propagation is more efficient when coming from
highly valued individuals, leads to different patterns when varying the
parameters. For instance, for some parameters the positive opinion links
between individuals generate a small world network. In one of the patterns,
absolute dominance of one agent alternates with a state of generalised
distrust, where all agents have a very low opinion of all the others (including
themselves). We provide some explanations of the mechanisms behind these
emergent behaviors and conclude with a discussion of their relevance.
|
1203.3092
|
gcodeml: A Grid-enabled Tool for Detecting Positive Selection in
Biological Evolution
|
cs.DC cs.CE q-bio.PE
|
One of the important questions in biological evolution is to know if certain
changes along protein coding genes have contributed to the adaptation of
species. This problem is known to be biologically complex and computationally
very expensive. It, therefore, requires efficient Grid or cluster solutions to
overcome the computational challenge. We have developed a Grid-enabled tool
(gcodeml) that relies on the PAML (codeml) package to help analyse large
phylogenetic datasets on both Grids and computational clusters. Although we
report on results for gcodeml, our approach is applicable and customisable to
related problems in biology or other scientific domains.
|
1203.3097
|
A Comparative Study of Adaptive Crossover Operators for Genetic
Algorithms to Resolve the Traveling Salesman Problem
|
cs.NE cs.CE
|
Genetic algorithms include some parameters that should be adjusted so that
the algorithm can provide positive results. Crossover operators play a very
important role in constructing competitive Genetic Algorithms (GAs). In this
paper, the basic conceptual features and specific characteristics of various
crossover operators in the context of the Traveling Salesman Problem (TSP) are
discussed. The results of an experimental comparison of more than six different
crossover operators for the TSP are presented. The experimental results show
that the OX operator achieves better solutions than the other operators tested.
|
1203.3099
|
Analyzing the Performance of Mutation Operators to Solve the Travelling
Salesman Problem
|
cs.NE cs.CE
|
The genetic algorithm includes some parameters that should be adjusted so as
to get reliable results. Choosing a representation of the problem addressed, an
initial population, a method of selection, a crossover operator, a mutation
operator, the probabilities of crossover and mutation, and the insertion method
creates a variant of genetic algorithms. Our work is part of the effort to
answer the question: what are the best parameters to select for a genetic
algorithm so that the resulting variant efficiently solves the Travelling
Salesman Problem (TSP)? In this paper, we present a comparative analysis of
different mutation operators, accompanied by an extended discussion justifying
the relevance of the genetic operators chosen for solving the TSP.
|
1203.3114
|
Integrated three-dimensional reconstruction using reflectance fields
|
cs.CV
|
A method to obtain three-dimensional data of real-world objects by
integrating their material properties is presented. The material properties are
defined by capturing the Reflectance Fields of the real-world objects. It is
shown that, unlike conventional reconstruction methods, the method is able to
use the reflectance information to recover surface depth for objects having a
non-Lambertian surface reflectance. It is capable of recovering 3D data of
objects exhibiting an anisotropic BRDF with an error of less than 0.3%.
|
1203.3115
|
Codes on Graphs: Observability, Controllability and Local Reducibility
|
cs.IT cs.SY math.IT
|
This paper investigates properties of realizations of linear or group codes
on general graphs that lead to local reducibility.
Trimness and properness are dual properties of constraint codes. A linear or
group realization with a constraint code that is not both trim and proper is
locally reducible. A linear or group realization on a finite cycle-free graph
is minimal if and only if every local constraint code is trim and proper.
A realization is called observable if there is a one-to-one correspondence
between codewords and configurations, and controllable if it has independent
constraints. A linear or group realization is observable if and only if its
dual is controllable. A simple counting test for controllability is given. An
unobservable or uncontrollable realization is locally reducible. Parity-check
realizations are controllable if and only if they have independent parity
checks. In an uncontrollable tail-biting trellis realization, the behavior
partitions into disconnected subbehaviors, but this property does not hold for
non-trellis realizations. On a general graph, the support of an unobservable
configuration is a generalized cycle.
|
1203.3128
|
Distributed Space Time Coding for Wireless Two-way Relaying
|
cs.IT math.IT
|
We consider the wireless two-way relay channel, in which two-way data
transfer takes place between the end nodes with the help of a relay. For the
Denoise-And-Forward (DNF) protocol, it was shown by Koike-Akino et al. that
adaptively changing the network coding map used at the relay greatly reduces
the impact of Multiple Access interference at the relay. The harmful effect of
the deep channel fade conditions can be effectively mitigated by proper choice
of these network coding maps at the relay. Alternatively, in this paper we
propose a Distributed Space Time Coding (DSTC) scheme, which effectively
removes most of the deep fade channel conditions at the transmitting nodes
themselves without any CSIT and without any need to adaptively change the network
coding map used at the relay. It is shown that the deep fades occur when the
channel fade coefficient vector falls in a finite number of vector subspaces of
$\mathbb{C}^2$, which are referred to as the singular fade subspaces. A DSTC
design criterion, referred to as the \textit{singularity minimization criterion},
under which the number of such vector subspaces is minimized, is obtained.
Also, a criterion to maximize the coding gain of the DSTC is obtained. Explicit
low decoding complexity DSTC designs which satisfy the singularity minimization
criterion and maximize the coding gain for QAM and PSK signal sets are
provided. Simulation results show that at high Signal to Noise Ratio, the DSTC
scheme provides large gains when compared to the conventional Exclusive OR
network code and performs slightly better than the adaptive network coding
scheme proposed by Koike-Akino et al.
|
1203.3136
|
A Receding Horizon Strategy for Systems with Interval-Wise Energy
Constraints
|
cs.SY
|
We propose a receding horizon control strategy that readily handles systems
that exhibit interval-wise total energy constraints on the input control
sequence. The approach is based on a variable optimization horizon length and
contractive final state constraint sets. The optimization horizon, which
recedes by N steps every N steps, is the key to accommodate the interval-wise
total energy constraints. The varying optimization horizon along with the
contractive constraints are used to achieve analytic asymptotic stability of
the system under the proposed scheme. The strategy is demonstrated by
simulation examples.
|
1203.3143
|
Dynamic Compression-Transmission for Energy-Harvesting Multihop Networks
with Correlated Sources
|
cs.IT cs.NI math.IT
|
Energy-harvesting wireless sensor networking is an emerging technology with
applications to various fields such as environmental and structural health
monitoring. A distinguishing feature of wireless sensors is the need to perform
both source coding tasks, such as measurement and compression, and transmission
tasks. It is known that the overall energy consumption for source coding is
generally comparable to that of transmission, and that a joint design of the
two classes of tasks can lead to relevant performance gains. Moreover, the
efficiency of source coding in a sensor network can be potentially improved via
distributed techniques by leveraging the fact that signals measured by
different nodes are correlated.
In this paper, a data gathering protocol for multihop wireless sensor
networks with energy harvesting capabilities is studied whereby the sources
measured by the sensors are correlated. Both the energy consumptions of source
coding and transmission are modeled, and distributed source coding is assumed.
The problem of dynamically and jointly optimizing the source coding and
transmission strategies is formulated for time-varying channels and sources.
The problem consists in the minimization of a cost function of the distortions
in the source reconstructions at the sink under queue stability constraints. By
adopting perturbation-based Lyapunov techniques, a close-to-optimal online
scheme is proposed that has an explicit and controllable trade-off between
optimality gap and queue sizes. The role of side information available at the
sink is also discussed under the assumption that acquiring the side information
entails an energy cost. It is shown that the presence of side information can
improve the network performance both in terms of overall network cost function
and queue sizes.
|
1203.3170
|
Single Reduct Generation Based on Relative Indiscernibility of Rough Set
Theory
|
cs.CV
|
In the real world, everything is an object representing a particular class.
Every object can be fully described by its attributes. Real-world datasets
contain large numbers of attributes and objects. Classifiers give poor
performance when these huge datasets are given to them as input for
classification. So, from these huge datasets, the most useful attributes, those
that contribute most to the decision, need to be extracted. In this paper, the
attribute set is reduced by generating reducts using the indiscernibility
relation of Rough Set Theory (RST). The method measures similarity among the
attributes using the relative indiscernibility relation and computes an
attribute similarity set. Then the set is minimized and an attribute similarity
table is constructed, from which the attribute similar to the maximum number of
attributes is selected, so that the resultant minimum set of selected
attributes (called a reduct) covers all attributes of the attribute similarity
table. The method has been applied to the glass dataset collected from the UCI
repository, and the classification accuracy is calculated by various
classifiers. The results show the efficiency of the proposed method.
|
1203.3178
|
A Fast fixed-point Quantum Search Algorithm by using Disentanglement and
Measurement
|
cs.IT math.IT quant-ph
|
Generic quantum search algorithm searches for target entity in an unsorted
database by repeatedly applying canonical Grover's quantum rotation transform
to reach near the vicinity of the target entity. Thus, upon measurement, there
is a high probability of finding the target entity. However, the number of
times quantum rotation transform is to be applied for reaching near the
vicinity of the target is a function of the number of target entities present
in an unsorted database, which is generally unknown. A wrong estimate of the
number of target entities can lead to overshooting or undershooting the
targets, thus reducing the success probability. Some proposals have been made
to overcome this limitation. These proposals either employ quantum counting to
estimate the number of solutions or fixed-point schemes. This paper proposes a
new scheme for stopping the application of quantum rotation transformation on
reaching near the targets by disentanglement, measurement and subsequent
processing to estimate the distance of the state vector from the target states.
It ensures a success probability, which is greater than half for all
practically significant ratios of the number of target entities to the total
number of entities in a database. The search problem is trivial for remaining
possible ratios. The proposed scheme is simpler than quantum counting and more
efficient than the known fixed-point schemes. It has the same order of
computational complexity as the canonical Grover's search algorithm but is
slower by a factor of two and requires two additional ancilla qubits.
|
1203.3210
|
A Game Theoretic Model for the Gaussian Broadcast Channel
|
cs.IT math.IT
|
The behavior of rational and selfish players (receivers) over a
multiple-input multiple-output Gaussian broadcast channel is investigated using
the framework of noncooperative game theory. In contrast to the game-theoretic
model of the Gaussian multiple access channel where the set of feasible actions
for each player is independent of other players' actions, the strategies of the
players in the broadcast channel are mutually coupled, usually by a sum power
or joint covariance constraint, and hence cannot be treated using traditional
Nash equilibrium solution concepts. To characterize the strategic behavior of
receivers connected to a single transmitter, this paper models the broadcast
channel as a generalized Nash equilibrium problem with coupled constraints. The
concept of normalized equilibrium (NoE) is used to characterize the equilibrium
points and the existence and uniqueness of the NoE are proven for key
scenarios.
|
1203.3217
|
Channel simulation via interactive communications
|
cs.IT math.IT
|
In this paper, we study the problem of channel simulation via interactive
communication, known as the coordination capacity, in a two-terminal network.
We assume that two terminals observe i.i.d.\ copies of two random variables and
would like to generate i.i.d.\ copies of two other random variables jointly
distributed with the observed random variables. The terminals are provided with
two-way communication links, and shared common randomness, all at limited
rates. Two special cases of this problem are the interactive function
computation studied by Ma and Ishwar, and the tradeoff curve between one-way
communication and shared randomness studied by Cuff. The latter work had
inspired Gohari and Anantharam to study the general problem of channel
simulation via interactive communication stated above. However only inner and
outer bounds for the special case of no shared randomness were obtained in
their work. In this paper we settle this problem by providing an exact
computable characterization of the multi-round problem. To show this we employ
the technique of "output statistics of random binning" that has been recently
developed by the authors.
|
1203.3227
|
Generalisation of language and knowledge models for corpus analysis
|
cs.AI cs.CL
|
This paper takes a new look at language and knowledge modelling for corpus
linguistics. Using ideas of Chaitin, a line of argument is made against
language/knowledge separation in Natural Language Processing. A simplistic
model that generalises approaches to language and knowledge is proposed. One
of the hypothetical consequences of this model is Strong AI.
|
1203.3230
|
Reconstruction error in a motion capture system
|
cs.CV
|
Marker-based motion capture (MoCap) systems can be composed of several dozen
cameras with the purpose of reconstructing the trajectories of hundreds of
targets. With a large number of cameras it becomes interesting to determine the
optimal reconstruction strategy. For this aim it is of fundamental importance
to understand the information provided by different camera measurements and how
it is combined, i.e. how the reconstruction error changes when considering
different cameras. In this work, first, an approximation of the reconstruction
error variance is derived. The results obtained in some simulations suggest
that the proposed strategy yields a good approximation of the real
error variance with a significant reduction of the computational time.
|
1203.3241
|
Dynamics of periodic node states on a model of static networks with
repeated-averaging rules
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We introduce a simple model of static networks, where nodes are located on a
ring structure, and two accompanying dynamic rules of repeated averaging on
periodic node states. We assume nodes can interact with neighbors, and will add
long-range links randomly. The number of long-range links, E, controls
structures of these networks, and we show that there exist many types of fixed
points, when E is varied. When E is low, fixed points are mostly diverse
states, in which node states are diversely populated; on the other hand, when E
is high, fixed points tend to be dominated by converged states, in which node
states converge to one value. Numerically, we observe properties of fixed
points for various E's, and also estimate points of the transition from diverse
states to converged states for four different cases. Such simple network
models will help us understand how the diversity we encounter in many systems
of complex networks is sustained, even when mechanisms of averaging are at
work, and when it breaks down if more long-range connections are added.
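The dynamics described above can be illustrated with a toy simulation: a ring with E random shortcuts, and a synchronous repeated-averaging update on periodic states. The update rule used here (a circular mean over neighbors) and all function names are assumptions for illustration, not the paper's exact rules.

```python
import math
import random

def build_ring_with_shortcuts(n, e, seed=0):
    """Ring of n nodes, each linked to its two ring neighbours,
    plus e random long-range links (a toy version of the model)."""
    rng = random.Random(seed)
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    added = 0
    while added < e:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and v not in adj[u]:
            adj[u].add(v)
            adj[v].add(u)
            added += 1
    return adj

def step(states, adj):
    """One synchronous averaging update on periodic node states
    (angles in [0, 2*pi)), using the circular mean of each node's
    neighbourhood including the node itself."""
    new = []
    for i in range(len(states)):
        nbrs = list(adj[i]) + [i]
        x = sum(math.cos(states[j]) for j in nbrs)
        y = sum(math.sin(states[j]) for j in nbrs)
        new.append(math.atan2(y, x) % (2 * math.pi))
    return new
```

Iterating `step` from a diverse initial condition and varying `e` gives a quick way to observe the diverse-to-converged trend the abstract describes.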
|
1203.3245
|
The Parameters For Powerline Channel Modeling
|
cs.IT math.IT
|
This is a support document which describes the properties of the cable and
parameters of the formulas for the statistical powerline channel modeling. The
cable parameters help the reader build a powerline channel model according to
transmission line theory. The document also presents the parameters which
describe the distribution of the number of paths, the path magnitudes, the
path intervals, and the cable-loss characteristics of the powerline channel.
By using the parameters in this document, readers can model the powerline
channel according to the proposed statistical methodology.
|
1203.3258
|
QoE-aware Media Streaming in Technology and Cost Heterogeneous Networks
|
cs.SY cs.MM cs.NI
|
We present a framework for studying the problem of media streaming in
technology and cost heterogeneous environments. We first address the problem of
efficient streaming in a technology-heterogeneous setting. We employ random
linear network coding to simplify the packet selection strategies and alleviate
issues such as duplicate packet reception. Then, we study the problem of media
streaming from multiple cost-heterogeneous access networks. Our objective is to
characterize analytically the trade-off between access cost and user
experience. We model the Quality of user Experience (QoE) as the probability of
interruption in playback as well as the initial waiting time. We design and
characterize various control policies, and formulate the optimal control
problem using a Markov Decision Process (MDP) with a probabilistic constraint.
We present a characterization of the optimal policy using the
Hamilton-Jacobi-Bellman (HJB) equation. For a fluid approximation model, we
provide an exact and explicit characterization of a threshold policy and prove
its optimality using the HJB equation.
Our simulation results show that under a properly designed control policy, the
existence of an alternative access technology as a complement to a primary
access network can significantly improve the user experience without any
bandwidth over-provisioning.
|
1203.3269
|
Physical Layer Network Coding for Two-Way Relaying with QAM and Latin
Squares
|
cs.IT math.IT
|
The design of modulation schemes for the physical layer network-coded two way
relaying scenario has been extensively studied recently with the protocol which
employs two phases: Multiple access (MA) Phase and Broadcast (BC) Phase. It was
observed by Koike-Akino et al. that adaptively changing the network coding map
used at the relay according to the channel conditions greatly reduces the
impact of multiple access interference which occurs at the relay during the MA
Phase and all these network coding maps should satisfy a requirement called the
exclusive law. Only the scenario in which the end nodes use M-PSK signal sets
is extensively studied in \cite{NVR} using Latin Squares. In this paper, we
address the case in which the end nodes use M-QAM signal sets (where M is of
the form $2^{2\lambda}$, $\lambda$ being any positive integer). In a fading
scenario, for certain channel conditions $\gamma e^{j \theta}$, termed singular
fade states, the MA phase performance is greatly reduced. We show that the
square QAM signal sets give fewer singular fade states than PSK signal sets.
Because of this, the complexity at the relay is enormously reduced. Moreover,
fewer overhead bits are required in the BC phase.
The fade state $\gamma e^{j \theta}=1$ is singular for all constellations of
arbitrary size including PSK and QAM. For arbitrary PSK constellation it is
well known that the Latin Square obtained by bit-wise XOR mapping removes this
singularity. We show that XOR mapping fails to remove this singularity for QAM
of size greater than 4 and show that a doubly block circulant Latin Square
removes this singularity. Simulation results are presented to show the
superiority of QAM over PSK.
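The exclusive law mentioned above is equivalent to the network coding map forming a Latin square: every row and every column must contain each symbol exactly once. A minimal sketch checking this property for the bit-wise XOR map (function names are illustrative, not from the paper):

```python
def xor_map(M):
    """Bit-wise XOR network coding map for M-ary labels,
    returned as an M x M table indexed by the two end nodes' symbols."""
    return [[a ^ b for b in range(M)] for a in range(M)]

def is_latin_square(table):
    """True iff every row and every column is a permutation of 0..M-1,
    which is equivalent to the map satisfying the exclusive law."""
    M = len(table)
    symbols = set(range(M))
    rows_ok = all(set(row) == symbols for row in table)
    cols_ok = all({table[r][c] for r in range(M)} == symbols
                  for c in range(M))
    return rows_ok and cols_ok
```

The same `is_latin_square` check applies to any candidate map, including the doubly block circulant construction the abstract proposes for QAM of size greater than 4.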
|
1203.3270
|
Extraction of Facial Feature Points Using Cumulative Histogram
|
cs.CV
|
This paper proposes a novel adaptive algorithm to automatically extract facial
feature points, such as eyebrow corners, eye corners, nostrils, nose tip, and
mouth corners, in frontal-view faces; it is based on a cumulative histogram
approach with varying threshold values. At first, the method
adopts the Viola-Jones face detector to detect the location of face and also
crops the face region in an image. From the concept of the human face
structure, the six relevant regions such as right eyebrow, left eyebrow, right
eye, left eye, nose, and mouth areas are cropped in a face image. Then the
histogram of each cropped relevant region is computed, and its cumulative
histogram value is employed with varying threshold values to create a new
filtered image in an adaptive way. The connected component of the area of
interest in each relevant filtered image indicates the respective feature
region. A simple linear search algorithm for the eyebrow, eye, and mouth
filtered images, and a contour algorithm for the nose filtered image, are
applied to extract
our desired corner points automatically. The method was tested on the large
BioID frontal-face database under different illuminations, expressions, and
lighting conditions, and the experimental results achieved an average success
rate of 95.27%.
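The cumulative-histogram thresholding idea can be sketched in a few lines. Here `binarize_by_cumulative_fraction` is a hypothetical stand-in for the paper's adaptive thresholding, keeping the darkest pixels up to a chosen cumulative mass; the function names and the fixed-fraction rule are assumptions for illustration.

```python
def cumulative_histogram(region, levels=256):
    """Histogram and cumulative histogram of a flat list of grey levels."""
    hist = [0] * levels
    for v in region:
        hist[v] += 1
    cum, total = [], 0
    for h in hist:
        total += h
        cum.append(total)
    return hist, cum

def binarize_by_cumulative_fraction(region, frac, levels=256):
    """Keep the darkest pixels whose cumulative mass reaches `frac`
    of the region (a toy version of the adaptive thresholding step)."""
    _, cum = cumulative_histogram(region, levels)
    n = len(region)
    thresh = next(t for t in range(levels) if cum[t] >= frac * n)
    return [1 if v <= thresh else 0 for v in region]
```

Sweeping `frac` over a range of values and inspecting the connected components of the resulting binary mask mirrors the varying-threshold filtering the abstract describes.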
|
1203.3271
|
The thermodynamics of prediction
|
cond-mat.stat-mech cs.IT math.IT q-bio.QM
|
A system responding to a stochastic driving signal can be interpreted as
computing, by means of its dynamics, an implicit model of the environmental
variables. The system's state retains information about past environmental
fluctuations, and a fraction of this information is predictive of future ones.
The remaining nonpredictive information reflects model complexity that does not
improve predictive power, and thus represents the ineffectiveness of the model.
We expose the fundamental equivalence between this model inefficiency and
thermodynamic inefficiency, measured by dissipation. Our results hold
arbitrarily far from thermodynamic equilibrium and are applicable to a wide
range of systems, including biomolecular machines. They highlight a profound
connection between the effective use of information and efficient thermodynamic
operation: any system constructed to keep memory about its environment and to
operate with maximal energetic efficiency has to be predictive.
|
1203.3274
|
Two kinds of Phase transitions in a Voting model
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
In this paper, we discuss a voting model with two candidates, C_0 and C_1. We
consider two types of voters--herders and independents. The voting of
independents is based on their fundamental values; on the other hand, the
voting of herders is based on the number of previous votes. We can identify two
kinds of phase transitions. One is an information cascade transition, similar
to a phase transition seen in the Ising model. The other is a transition
between super and normal diffusion. These phase transitions coexist. We
compare our results with the conclusions of experiments and identify the phase
transitions in the large-time limit of t by using analyses of human behavior
obtained from experiments.
|
1203.3282
|
Practical Encoders and Decoders for Euclidean Codes from Barnes-Wall
Lattices
|
cs.IT math.IT
|
In this paper, we address the design of high spectral-efficiency Barnes-Wall
(BW) lattice codes which are amenable to low-complexity decoding in additive
white Gaussian noise (AWGN) channels. We propose a new method of constructing
complex BW lattice codes from linear codes over polynomial rings, and show that
the proposed construction provides an explicit method of bit-labeling complex
BW lattice codes. To decode the code, we adapt the low-complexity sequential BW
lattice decoder (SBWD) recently proposed by Micciancio and Nicolosi. First, we
study the error performance of SBWD in decoding the infinite lattice, wherein
we analyze the noise statistics in the algorithm, and propose a new upper bound
on its error performance. We show that the SBWD is powerful in making correct
decisions well beyond the packing radius. Subsequently, we use the SBWD to
decode lattice codes through a novel noise-trimming technique. This is the
first work that showcases the error performance of SBWD in decoding BW lattice
codes of large block lengths.
|
1203.3287
|
Analysis of a Cooperative Strategy for a Large Decentralized Wireless
Network
|
cs.IT math.IT
|
This paper investigates the benefits of cooperation and proposes a relay
activation strategy for a large wireless network with multiple transmitters. In
this framework, some nodes cooperate with a nearby node that acts as a relay,
using the decode-and-forward protocol, and others use direct transmission. The
network is modeled as an independently marked Poisson point process and the
source nodes may choose their relays from the set of inactive nodes. Although
cooperation can potentially lead to significant improvements in the performance
of a communication pair, relaying causes additional interference in the
network, increasing the average noise that other nodes see. We investigate how
source nodes should balance cooperation vs. interference to obtain reliable
transmissions, and for this purpose we study and optimize a relay activation
strategy with respect to the outage probability. Surprisingly, in the
high-reliability regime, the optimized strategy consists of activating either
all the relays or none at all, depending on the network parameters. We provide a simple
closed-form expression that indicates when the relays should be active, and we
introduce closed-form expressions that quantify the performance gains of this
scheme with respect to a network that only uses direct transmission.
|
1203.3288
|
Approximation to Distribution of Product of Random Variables Using
Orthogonal Polynomials for Lognormal Density
|
cs.IT math.IT
|
We derive a closed-form expression for the orthogonal polynomials associated
with the general lognormal density. The result can be utilized to construct
easily computable approximations for probability density function of a product
of random variables, when the considered variates are either independent or
correlated. As an example, we have calculated the approximate distribution for
the product of Nakagami-m variables. Simulations indicate that the accuracy of
the proposed approximation is good for small cross-correlations under light
fading conditions.
|