id | title | categories | abstract |
|---|---|---|---|
1302.3892 | Identifying trends in word frequency dynamics | physics.soc-ph cond-mat.dis-nn cs.CL q-bio.PE | The word-stock of a language is a complex dynamical system in which words can
be created, evolve, and become extinct. Even more dynamic are the short-term
fluctuations in word usage by individuals in a population. Building on the
recent demonstration that word niche is a strong determinant of future rise or
fall in word frequency, here we introduce a model that allows us to distinguish
persistent from temporary increases in frequency. Our model is illustrated
using a 10^8-word database from an online discussion group and a 10^11-word
collection of digitized books. The model reveals a strong relation between
changes in word dissemination and changes in frequency. Aside from their
implications for short-term word frequency dynamics, these observations are
potentially important for language evolution as new words must survive in the
short term in order to survive in the long term.
|
1302.3900 | Robust Image Segmentation in Low Depth Of Field Images | cs.CV | In photography, low depth of field (DOF) is an important technique to
emphasize the object of interest (OOI) within an image. Thus, low DOF images
are widely used in the application area of macro, portrait or sports
photography. When viewing a low DOF image, the viewer implicitly concentrates
on the sharper regions of the image and thus segments the image into regions
of interest and regions of non-interest, which has a major impact on the
perception of the image. Thus, a robust algorithm for the fully
automatic detection of the OOI in low DOF images provides valuable information
for subsequent image processing and image retrieval. In this paper we propose a
robust and parameterless algorithm for the fully automatic segmentation of low
DOF images. We compare our method with three similar methods and show the
superior robustness even though our algorithm does not require any parameters
to be set by hand. The experiments are conducted on a real world data set with
high and low DOF images.
|
1302.3912 | An Online Environment for Democratic Deliberation: Motivations,
Principles, and Design | cs.HC cs.CY cs.SI | We have created a platform for online deliberation called Deme (which rhymes
with 'team'). Deme is designed to allow groups of people to engage in
collaborative drafting, focused discussion, and decision making using the
Internet. The Deme project has evolved greatly from its beginning in 2003. This
chapter outlines the thinking behind Deme's initial design: our motivations for
creating it, the principles that guided its construction, and its most
important design features. The version of Deme described here was written in
PHP and was deployed in 2004 and used by several groups (including organizers
of the 2005 Online Deliberation Conference). Other papers describe later
developments in the Deme project (see Davies et al. 2005, 2008; Davies and
Mintz 2009).
|
1302.3918 | Using Correlated Subset Structure for Compressive Sensing Recovery | cs.IT math.IT math.NA | Compressive sensing is a methodology for the reconstruction of sparse or
compressible signals using far fewer samples than required by the Nyquist
criterion. However, many of the results in compressive sensing concern random
sampling matrices such as Gaussian and Bernoulli matrices. In common physically
feasible signal acquisition and reconstruction scenarios such as
super-resolution of images, the sensing matrix has a non-random structure with
highly correlated columns. Here we present a compressive sensing recovery
algorithm that exploits this correlation structure. We provide algorithmic
justification as well as empirical comparisons.
|
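The abstract above concerns recovery with structured, correlated sensing matrices; as a point of reference, the classical baseline it contrasts with (random Gaussian sampling) can be sketched with generic orthogonal matching pursuit (OMP). This is an illustrative sketch, not the authors' algorithm; the dimensions and sparsity level are assumptions:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.astype(float)
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # random Gaussian sensing matrix
x = np.zeros(100)
x[[5, 37, 80]] = [1.0, -2.0, 0.5]                 # 3-sparse signal
y = A @ x
x_hat = omp(A, y, 3)
print(np.max(np.abs(x - x_hat)))                  # reconstruction error
```

With far more measurements than nonzeros and incoherent Gaussian columns, OMP recovers the support exactly; correlated columns (as in super-resolution) break this greedy selection, which is the regime the paper addresses.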
1302.3921 | Support detection in super-resolution | cs.IT math.IT math.NA math.OC | We study the problem of super-resolving a superposition of point sources from
noisy low-pass data with a cut-off frequency f. Solving a tractable convex
program is shown to locate the elements of the support with high precision as
long as they are separated by 2/f and the noise level is small with respect to
the amplitude of the signal.
|
1302.3931 | Understanding Boltzmann Machine and Deep Learning via A Confident
Information First Principle | cs.NE cs.LG stat.ML | Typical dimensionality reduction methods focus on directly reducing the
number of random variables while retaining maximal variations in the data. In
this paper, we consider the dimensionality reduction in parameter spaces of
binary multivariate distributions. We propose a general
Confident-Information-First (CIF) principle to maximally preserve parameters
with confident estimates and rule out unreliable or noisy parameters. Formally,
the confidence of a parameter can be assessed by its Fisher information, which
establishes a connection with the inverse variance of any unbiased estimate for
the parameter via the Cram\'{e}r-Rao bound. We then revisit Boltzmann machines
(BM) and theoretically show that both single-layer BM without hidden units
(SBM) and restricted BM (RBM) can be solidly derived using the CIF principle.
This can not only help us uncover and formalize the essential parts of the
target density that SBM and RBM capture, but also suggest that the deep neural
network consisting of several layers of RBM can be seen as the layer-wise
application of CIF. Guided by the theoretical analysis, we develop a
sample-specific CIF-based contrastive divergence (CD-CIF) algorithm for SBM and
a CIF-based iterative projection procedure (IP) for RBM. Both CD-CIF and IP are
studied in a series of density estimation experiments.
|
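The link invoked above between Fisher information and the inverse variance of an unbiased estimator (the Cramér-Rao bound) can be checked numerically. The following is a standard textbook illustration for a Bernoulli parameter, not the paper's CIF procedure:

```python
import numpy as np

# Fisher information of one Bernoulli(p) observation: I(p) = 1/(p(1-p)).
# The Cramer-Rao bound states that any unbiased estimator of p from n i.i.d.
# samples has variance >= 1/(n * I(p)); the sample mean (the MLE) attains it.
def fisher_bernoulli(p):
    return 1.0 / (p * (1.0 - p))

rng = np.random.default_rng(42)
p, n, trials = 0.3, 1000, 20000
estimates = rng.binomial(n, p, size=trials) / n   # MLE per simulated trial
empirical_var = estimates.var()
crlb = 1.0 / (n * fisher_bernoulli(p))
print(empirical_var, crlb)                        # the two nearly coincide
```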
1302.3932 | Real-Time Power Balancing via Decentralized Coordinated Home Energy
Scheduling | cs.SY cs.IT math.IT | It is anticipated that an uncoordinated operation of individual home energy
management (HEM) systems in a neighborhood would have a rebound effect on the
aggregate demand profile. To address this issue, this paper proposes a
coordinated home energy management (CoHEM) architecture in which distributed
HEM units collaborate with each other in order to keep the demand and supply
balanced in their neighborhood. Assuming the energy requests by customers are
random in time, we formulate the proposed CoHEM design as a multi-stage
stochastic optimization problem. We propose novel models to describe the
deferrable appliance load (e.g., Plug-in (Hybrid) Electric Vehicles (PHEV)),
and apply approximation and decomposition techniques to handle the considered
design problem in a decentralized fashion. The developed decentralized CoHEM
algorithm allows the customers to locally compute their scheduling solutions
using domestic user information and message exchanges with their neighbors
only. Extensive simulation results demonstrate that the proposed
CoHEM architecture can effectively improve real-time power balancing.
Extensions to joint power procurement and real-time CoHEM scheduling are also
presented.
|
1302.3949 | A collective opinion formation model under Bayesian updating and
confirmation bias | physics.soc-ph cs.SI | We propose a collective opinion formation model with a so-called confirmation
bias. The confirmation bias is a psychological effect with which, in the
context of opinion formation, an individual in favor of an opinion is prone to
misperceive new incoming information as supporting the current belief of the
individual. Our model modifies a Bayesian decision-making model for single
individuals [M. Rabin and J. L. Schrag, Q. J. Econ. 114, 37 (1999)] for the
case of a well-mixed population of interacting individuals in the absence of
external input. We numerically simulate the model to show that all the
agents eventually agree on one of the two opinions only when the confirmation
bias is weak. Otherwise, the stochastic population dynamics ends up creating a
disagreement configuration (also called polarization), particularly for large
system sizes. A strong confirmation bias allows various final disagreement
configurations with different fractions of the individuals in favor of the
opposite opinions.
|
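A toy simulation in the spirit of the model above can make the mechanism concrete. The update rule, parameters, and initialization here are illustrative assumptions, not the paper's exact Rabin-Schrag-based specification: a signal contradicting the receiver's current opinion is misperceived as supporting it with probability `bias`.

```python
import numpy as np

def simulate(n_agents=100, steps=30_000, bias=0.1, seed=1):
    """Toy well-mixed opinion dynamics with a confirmation bias.

    Each agent keeps counts of signals perceived to favor opinion A or B;
    its current opinion is the majority of its perceived signals.
    """
    rng = np.random.default_rng(seed)
    # Start from small random counts so initial opinions are mixed.
    counts = rng.integers(1, 4, size=(n_agents, 2)).astype(float)
    for _ in range(steps):
        receiver, sender = rng.choice(n_agents, size=2, replace=False)
        signal = int(counts[sender, 1] > counts[sender, 0])   # sender's opinion
        own = int(counts[receiver, 1] > counts[receiver, 0])
        if signal != own and rng.random() < bias:
            signal = own                                      # misperceived
        counts[receiver, signal] += 1
    opinions = counts[:, 1] > counts[:, 0]
    return opinions.mean()   # fraction of agents holding opinion B

frac_weak = simulate(bias=0.05)
frac_strong = simulate(bias=0.9)
print(frac_weak, frac_strong)
```

In line with the abstract, runs with weak bias tend toward consensus (fraction near 0 or 1), while strong bias can freeze a polarized configuration.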
1302.3956 | Clustering validity based on the most similarity | cs.LG stat.ML | One basic requirement of many studies is the necessity of classifying data.
Clustering is a proposed method for summarizing networks. Clustering methods
can be divided into two categories named model-based approaches and algorithmic
approaches. Since most clustering methods depend on their input parameters, it
is important to evaluate the result of a clustering algorithm under different
input parameters in order to choose the most appropriate one. There are
several clustering validity techniques, based on the inner and outer density
of clusters, that provide metrics for choosing the most appropriate clustering
independently of the input parameters. Because previous methods depend on the
input parameters, one challenge when facing large systems is that data arrive
incrementally, which affects the final choice of the most appropriate
clustering. Those methods take high density within a cluster and low density
among different clusters as the measure for choosing the optimal clustering.
This measure has a serious limitation: not all data are available at the first
stage. In this paper, we introduce an efficient measure based on the maximum
number of repetitions obtained across various initial values.
|
1302.3969 | Coordination Control of Heterogeneous Compounded-Order Multi-Agent
Systems with Communication Delays | cs.SY | Owing to the complexity of practical environments, many distributed networked
systems cannot be characterized by integer-order dynamics and can only be
described by fractional-order dynamics. Multi-agent systems may exhibit
individual diversity among their agents, so heterogeneous (integer-order and
fractional-order) dynamics are used to model the agents, composing
integer-fractional compounded-order systems. Applying the Laplace transform
and frequency-domain theory of the fractional-order operator, consensus of
delayed multi-agent systems with directed weighted topologies is studied.
Since the integer-order model is a special case of the fractional-order model,
the results in this paper can be extended to systems with integer-order
models. Finally, numerical examples are used to verify our results.
|
1302.3971 | Directed Information on Abstract Spaces: Properties and Variational
Equalities | cs.IT math.FA math.IT math.OC math.PR | Directed information or its variants are utilized extensively in the
characterization of the capacity of channels with memory and feedback,
nonanticipative lossy data compression, and their generalizations to networks.
In this paper, we derive several functional and topological properties of
directed information for general abstract alphabets (complete separable metric
spaces) using the topology of weak convergence of probability measures. These
include convexity of the set of consistent distributions, which uniquely define
causally conditioned distributions, convexity and concavity of directed
information with respect to the sets of consistent distributions, weak
compactness of these sets of distributions, their joint distributions and their
marginals. Furthermore, we show lower semicontinuity of directed information,
and under certain conditions we also establish continuity of directed
information. Finally, we derive variational equalities for directed
information, including sequential versions. These may be viewed as the analogue
of the variational equalities of mutual information (utilized in the
Blahut-Arimoto algorithm).
In summary, we extend the basic functional and topological properties of
mutual information to directed information. These properties are discussed in
the context of extremum problems of directed information.
|
1302.3988 | A solution concept for games with altruism and cooperation | cs.GT cs.AI | Over the years, numerous experiments have accumulated showing that
cooperation is not casual and depends on the payoffs of the game. These
findings suggest that humans have a natural inclination toward cooperation and
that the same person may act more or less cooperatively depending on the
particular payoffs. In other words, people do not act a priori as single
agents, but they
forecast how the game would be played if they formed coalitions and then they
play according to their best forecast. In this paper we formalize this idea and
we define a new solution concept for one-shot normal form games. We prove that
this \emph{cooperative equilibrium} exists for all finite games and it explains
a number of different experimental findings, such as (1) the rate of
cooperation in the Prisoner's dilemma depends on the cost-benefit ratio; (2)
the rate of cooperation in the Traveler's dilemma depends on the bonus/penalty;
(3) the rate of cooperation in the Public Goods game depends on the per-capita
marginal return and on the number of players; (4) the rate of cooperation in
the Bertrand competition depends on the number of players; (5) players tend to
be fair in the bargaining problem; (6) players tend to be fair in the Ultimatum
game; (7) players tend to be altruist in the Dictator game; (8) offers in the
Ultimatum game are larger than offers in the Dictator game.
|
1302.4000 | ClusCo: clustering and comparison of protein models | q-bio.BM cs.CE q-bio.QM | Background: The development, optimization and validation of protein modeling
methods require efficient tools for structural comparison. Frequently, a large
number of models need to be compared with the target native structure. The main
reason for the development of Clusco software was to create a high-throughput
tool for all-versus-all comparison, because calculating the similarity matrix
is one of the bottlenecks in the protein modeling pipeline. Results: Clusco is
fast and easy-to-use software for high-throughput comparison of protein models
with different similarity measures (cRMSD, dRMSD, GDT_TS, TM-Score, MaxSub,
Contact Map Overlap) and clustering of the comparison results with standard
methods: K-means Clustering or Hierarchical Agglomerative Clustering.
Conclusions: The application was highly optimized and written in C/C++,
including the code for parallel execution on CPU and GPU version of cRMSD,
which resulted in a significant speedup over similar clustering and scoring
computation programs.
|
1302.4019 | Decentralized Event-Triggering for Control of Nonlinear Systems | cs.SY math.OC | This paper considers nonlinear systems with full state feedback, a central
controller and distributed sensors not co-located with the central controller.
We present a methodology for designing decentralized asynchronous
event-triggers, which utilize only locally available information, for
determining the time instants of transmission from the sensors to the central
controller. The proposed design guarantees a positive lower bound for the
inter-transmission times of each sensor, while ensuring asymptotic stability of
the origin of the system with an arbitrary, but a priori fixed, compact region
of attraction. In the special case of Linear Time Invariant (LTI) systems,
global asymptotic stability is guaranteed and scale invariance of
inter-transmission times is preserved. A modified design method is also
proposed for nonlinear systems, with the addition of event-triggered
communication from the controller to the sensors, that promises to
significantly increase the average sensor inter-transmission times compared to
the case where the controller does not transmit data to the sensors. The
proposed designs are illustrated through simulations of a linear and a
nonlinear example.
|
1302.4020 | Topological Interference Management with Alternating Connectivity | cs.IT math.IT | The topological interference management problem refers to the study of the
capacity of partially connected linear (wired and wireless) communication
networks with no channel state information at the transmitters (no CSIT) beyond
the network topology, i.e., a knowledge of which channel coefficients are zero
(weaker than the noise floor in the wireless case). While the problem is
originally studied with fixed topology, in this work we explore the
implications of varying connectivity, through a series of simple and
conceptually representative examples. Specifically, we highlight the
synergistic benefits of coding across alternating topologies.
|
1302.4024 | Note on the Complex Networks and Epidemiology Part I: Complex Networks | physics.soc-ph cs.SI nlin.AO q-bio.MN q-bio.PE | Complex networks describe a wide range of systems in nature and society.
Frequently cited examples include the Internet, the WWW, networks of chemicals
linked by chemical reactions, social relationship networks, citation networks, etc.
Research on complex networks has attracted the attention of many scientists.
Physicists have shown that these networks exhibit some surprising
characteristics, such as a high clustering coefficient, a small diameter, and
the absence of percolation thresholds. Scientists in mathematical epidemiology
have discovered that the threshold of infectious disease disappears on contact
networks that follow a scale-free degree distribution. Researchers in
economics and public health have also found that imitation behavior can lead
to clustering of vaccination and non-vaccination. In this note, we review the
basic concepts of complex networks, basic epidemic models, and the joint
development of complex networks and epidemiology.
|
1302.4043 | A new scheme of signature extraction for iris authentication | cs.CV | Iris recognition, a relatively new biometric technology, has great
advantages, such as variability, stability and security, and is thus among the
most promising technologies for high-security environments. An iris
recognition scheme is proposed in this report. We describe several methods:
the first is based on the grey-level histogram to extract the pupil; the
second is based on elliptic and parabolic Hough transforms to determine the
edges of the iris and of the upper and lower eyelids; in the third we use 2D
Gabor wavelets to encode the iris; finally, we use the Hamming distance for
authentication.
|
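The final authentication step described above, the Hamming distance between binary iris codes, is simple to sketch. The code length, noise level, and optional occlusion masks below are illustrative assumptions, not details from the report:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Normalized Hamming distance between two binary iris codes.

    Bits flagged invalid by either mask (e.g. eyelid occlusion) are ignored.
    """
    code_a, code_b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    valid = np.ones_like(code_a)
    if mask_a is not None:
        valid &= np.asarray(mask_a, bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, bool)
    disagreements = np.logical_xor(code_a, code_b) & valid
    return disagreements.sum() / valid.sum()

rng = np.random.default_rng(7)
enrolled = rng.integers(0, 2, size=2048, dtype=np.int8)  # stored 2048-bit code
probe_same = enrolled.copy()
probe_same[:40] ^= 1                                     # ~2% acquisition noise
probe_other = rng.integers(0, 2, size=2048, dtype=np.int8)

d_same = hamming_distance(enrolled, probe_same)
d_other = hamming_distance(enrolled, probe_other)
print(d_same, d_other)
```

A genuine comparison yields a small distance, while two unrelated codes disagree on about half the bits; authentication amounts to thresholding this distance.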
1302.4092 | On time-varying collaboration networks | physics.soc-ph cs.SI physics.data-an | The patterns of scientific collaboration have been frequently investigated in
terms of complex networks without reference to time evolution. In the present
work, we derive collaborative networks (from the arXiv repository)
parameterized along time. By defining the concept of affine group, we identify
several interesting trends in scientific collaboration, including the fact that
the average size of the affine groups grows exponentially, while the number of
authors increases as a power law. We were therefore able to identify, through
extrapolation, the possible date when a single affine group is expected to
emerge. Characteristic collaboration patterns were identified for each
researcher, and their analysis revealed that larger affine groups tend to be
less stable.
|
1302.4095 | Three-feature model to reproduce the topology of citation networks and
the effects from authors' visibility on their h-index | physics.soc-ph cs.DL cs.SI physics.data-an | Various factors are believed to govern the selection of references in
citation networks, but a precise, quantitative determination of their
importance has remained elusive. In this paper, we show that three factors can
account for the referencing pattern of citation networks for two topics, namely
"graphenes" and "complex networks", thus allowing one to reproduce the
topological features of the networks built with papers being the nodes and the
edges established by citations. The most relevant factor was content
similarity, while the other two, in-degree (i.e., citation counts) and age of
publication, had varying importance depending on the topic studied. This
dependence indicates that additional factors could play a role. Indeed, by
intuition one should expect the reputation (or visibility) of authors and/or
institutions to affect the referencing pattern, and this is only indirectly
considered via the in-degree that should correlate with such reputation.
Because information on reputation is not readily available, we simulated its
effect on artificial citation networks considering two communities with
distinct fitness (visibility) parameters. One community was assumed to have
twice the fitness value of the other, which amounts to a double probability for
a paper being cited. While the h-index for authors in the community with larger
fitness evolved with time with slightly higher values than for the control
network (no fitness considered), a drastic effect was noted for the community
with smaller fitness.
|
1302.4099 | Identification of Literary Movements Using Complex Networks to Represent
Texts | physics.soc-ph cs.SI physics.data-an | The use of statistical methods to analyze large databases of text has been
useful to unveil patterns of human behavior and establish historical links
between cultures and languages. In this study, we identify literary movements
by treating books published from 1590 to 1922 as complex networks, whose
metrics were analyzed with multivariate techniques to generate six clusters of
books. The latter correspond to time periods coinciding with relevant literary
movements over the last 5 centuries. The most important factor contributing to
the distinction between different literary styles was the average shortest
path length (particularly, the asymmetry of its distribution). Furthermore,
over time there has been a trend toward larger average shortest path lengths,
which is correlated with increased syntactic complexity, and a more uniform use
of the words reflected in a smaller power-law coefficient for the distribution
of word frequency. Changes in literary style were also found to be driven by
opposition to earlier writing styles, as revealed by the analysis performed
with geometrical concepts. The approaches adopted here are generic and may be
extended to analyze a number of features of languages and cultures.
|
1302.4107 | Using Complex Networks to Quantify Consistency in the Use of Words | physics.soc-ph cs.SI physics.data-an | In this paper we quantify the consistency of word usage in written texts
represented by complex networks, where words were taken as nodes, by measuring
the degree of preservation of the node neighborhood. Words were considered
highly consistent if the authors used them with the same neighborhood. When
ranked according to the consistency of use, the words obeyed a log-normal
distribution, in contrast to the Zipf's law that applies to the frequency of
use. Consistency correlated positively with the familiarity and frequency of
use, and negatively with ambiguity and age of acquisition. An inspection of
some highly consistent words confirmed that they are used in very limited
semantic contexts. A comparison of consistency indices for 8 authors indicated
that these indices may be employed for author recognition. Indeed, as expected
authors of novels could be distinguished from those who wrote scientific texts.
Our analysis demonstrated the suitability of the consistency indices, which can
now be applied in other tasks, such as emotion recognition.
|
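One simple way to realize the neighborhood-preservation idea above is the Jaccard overlap of a word's co-occurrence neighborhoods in two texts. This is an illustrative proxy, not necessarily the consistency index used in the paper; the window size and sample sentences are assumptions:

```python
from collections import defaultdict

def cooccurrence_neighbors(tokens, window=2):
    """Map each word to the set of words co-occurring within `window` tokens."""
    neighbors = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                neighbors[w].add(tokens[j])
    return neighbors

def consistency(word, text_a, text_b, window=2):
    """Jaccard overlap of `word`'s neighborhoods in two texts.

    Returns 0 for disjoint neighborhoods and 1 for identical ones.
    """
    na = cooccurrence_neighbors(text_a.lower().split(), window)[word]
    nb = cooccurrence_neighbors(text_b.lower().split(), window)[word]
    if not na and not nb:
        return 0.0
    return len(na & nb) / len(na | nb)

a = "the data set contains noisy data points and clean data labels"
b = "the data set holds noisy data values and clean data labels"
print(consistency("data", a, b))  # → 0.6
```

A word used in the same limited semantic context by both authors scores near 1; ranking words by such scores is what yields the log-normal distribution reported above.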
1302.4118 | Target Estimation in Colocated MIMO Radar via Matrix Completion | cs.IT math.IT stat.AP | We consider a colocated MIMO radar scenario, in which the receive antennas
forward their measurements to a fusion center. Based on the received data, the
fusion center formulates a matrix which is then used for target parameter
estimation. When the receive antennas sample the target returns at Nyquist
rate, and assuming that there are more receive antennas than targets, the data
matrix at the fusion center is low-rank. When each receive antenna sends to the
fusion center only a small number of samples, along with the sample index, the
receive data matrix has missing elements, corresponding to the samples that
were not forwarded. Under certain conditions, matrix completion techniques can
be applied to recover the full receive data matrix, which can then be used in
conjunction with array processing techniques, e.g., MUSIC, to obtain target
information. Numerical results indicate that good target recovery can be
achieved with occupancy of the receive data matrix as low as 50%.
|
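The completion step above can be sketched with a generic alternating-projection recovery: project onto rank-r matrices via truncated SVD, then re-impose the observed entries. This is a simple stand-in, not the specific matrix completion algorithm used in the paper; the rank-2 matrix stands in for a few-target receive data matrix, and the ~50% occupancy mirrors the figure quoted above:

```python
import numpy as np

def complete_lowrank(M_obs, mask, rank, iters=1000):
    """Recover a low-rank matrix from partial entries by alternating
    projection: truncated-SVD rank projection + data-consistency step."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # project onto rank-r matrices
        X = np.where(mask, M_obs, X)               # re-impose observed entries
    return X

rng = np.random.default_rng(3)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))  # rank-2 matrix
mask = rng.random(M.shape) < 0.5                                 # ~50% observed
X = complete_lowrank(M, mask, rank=2)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(rel_err)
```

Once the full matrix is recovered, standard array processing (e.g. MUSIC) can be run on it as if all samples had been forwarded.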
1302.4127 | Adaptive Set-Membership Reduced-Rank Least Squares Beamforming
Algorithms | cs.IT math.IT | This paper presents a new adaptive algorithm for the linearly constrained
minimum variance (LCMV) beamformer design. We incorporate the set-membership
filtering (SMF) mechanism into the reduced-rank joint iterative optimization
(JIO) scheme to develop a constrained recursive least squares (RLS) based
algorithm called JIO-SM-RLS. The proposed algorithm inherits the positive
features of reduced-rank signal processing techniques to enhance the output
performance, and utilizes the data-selective updates (around 10-15%) of the SMF
methodology to save the computational cost significantly. An effective
time-varying bound is imposed on the array output as a constraint to circumvent
the risk of overbounding or underbounding, and to update the parameters for
beamforming. The updated parameters construct a set of solutions (a membership
set) that satisfy the constraints of the LCMV beamformer. Simulations are
performed to show the superior performance of the proposed algorithm in terms
of the convergence rate and the reduced computational complexity in comparison
with the existing methods.
|
1302.4129 | Repair-Optimal MDS Array Codes over GF(2) | cs.IT math.IT | Maximum-distance separable (MDS) array codes with high rate and an optimal
repair property were introduced recently. These codes could be applied in
distributed storage systems, where they minimize the communication and disk
access required for the recovery of failed nodes. However, the encoding and
decoding algorithms of the proposed codes use arithmetic over finite fields of
order greater than 2, which could result in a complex implementation.
In this work, we present a construction of 2-parity MDS array codes that
allow for optimal repair of a failed information node using XOR operations
only. The reduction of the field order is achieved by allowing more parity bits
to be updated when a single information bit is being changed by the user.
|
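The XOR-only repair property above can be illustrated with the simplest possible case, a single XOR parity node. The paper's 2-parity construction with optimal repair is more involved; this sketch only shows why GF(2) arithmetic suffices to rebuild one failed node:

```python
import numpy as np

def xor_parity(nodes):
    """Parity of a list of equal-length bit vectors: their XOR (sum over GF(2))."""
    parity = np.zeros_like(nodes[0])
    for node in nodes:
        parity ^= node
    return parity

def repair(surviving_nodes):
    """Any single erased node equals the XOR of all surviving nodes,
    because the XOR of all nodes (data + parity) is the zero vector."""
    return xor_parity(surviving_nodes)

rng = np.random.default_rng(11)
data = [rng.integers(0, 2, size=16, dtype=np.uint8) for _ in range(4)]
parity = xor_parity(data)

# Erase data node 2 and rebuild it from the other data nodes plus parity.
survivors = [data[0], data[1], data[3], parity]
rebuilt = repair(survivors)
print(np.array_equal(rebuilt, data[2]))  # → True
```

Note that this single-parity toy tolerates only one erasure; the MDS array codes in the paper add a second parity while keeping the repair XOR-only.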
1302.4130 | Adaptive Minimum BER Reduced-Rank Interference Suppression Algorithms
Based on Joint and Iterative Optimization of Parameters | cs.IT math.IT | In this letter, we propose a novel adaptive reduced-rank strategy based on
joint iterative optimization (JIO) of filters according to the minimization of
the bit error rate (BER) cost function. The proposed optimization technique
adjusts the weights of a subspace projection matrix and a reduced-rank filter
jointly. We develop stochastic gradient (SG) algorithms for their adaptive
implementation and introduce a novel automatic rank selection method based on
the BER criterion. Simulation results for direct-sequence
code-division-multiple-access (DS-CDMA) systems show that the proposed adaptive
algorithms significantly outperform the existing schemes.
|
1302.4136 | Post-buckling Solutions of Hyper-elastic Beam by Canonical Dual Finite
Element Method | cs.CE math.NA | The post-buckling problem of a largely deformed beam is analyzed using the
canonical dual finite element method (CD-FEM). The feature of this method is to choose
correctly the canonical dual stress so that the original non-convex potential
energy functional is reformulated in a mixed complementary energy form with
both displacement and stress fields, and a pure complementary energy is
explicitly formulated in finite dimensional space. Based on the canonical
duality theory and the associated triality theorem, a primal-dual algorithm is
proposed, which can be used to find all possible solutions of this nonconvex
post-buckling problem. Numerical results show that the global maximum of the
pure complementary energy leads to a stable buckled configuration of the beam,
while the local extrema of the pure complementary energy correspond to
unstable deformation states. We discovered that the unstable buckled state is
very sensitive to the total number of elements and to the external loads.
Theoretical results are verified through numerical examples and some
interesting phenomena in post-bifurcation of this large deformed beam are
observed.
|
1302.4141 | Canonical dual solutions to nonconvex radial basis neural network
optimization problem | cs.NE cs.LG stat.ML | Radial Basis Functions Neural Networks (RBFNNs) are tools widely used in
regression problems. One of their principal drawbacks is that the formulation
corresponding to the training with the supervision of both the centers and the
weights is a highly non-convex optimization problem, which leads to some
fundamental difficulties for traditional optimization theory and methods.
This paper presents a generalized canonical duality theory for solving this
challenging problem. We demonstrate that by sequential canonical dual
transformations, the nonconvex optimization problem of the RBFNN can be
reformulated as a canonical dual problem (without a duality gap). Both the
global optimal solution and the local extrema can be classified. Several
applications to one of the most widely used radial basis functions, the
Gaussian function, are illustrated. Our results show that even in the
one-dimensional case, the global
minimizer of the nonconvex problem may not be the best solution to the RBFNNs,
and the canonical dual theory is a promising tool for solving general neural
networks training problems.
|
1302.4146 | Linear Network Error Correction Multicast/Broadcast/Dispersion Codes | cs.IT math.IT | In this paper, for the purposes of information transmission and network error
correction simultaneously, three classes of important linear network codes in
network coding, linear multicast/broadcast/dispersion codes are generalized to
linear network error correction coding, i.e., linear network error correction
multicast/broadcast/dispersion codes. We further propose the (weakly, strongly)
extended Singleton bounds for these new classes of codes, and define the
optimal codes satisfying the corresponding Singleton bounds with equality,
which are called multicast/broadcast/dispersion MDS codes respectively. The
existence of such codes is proved by an algebraic method, and a constructive
algorithm is also proposed.
|
1302.4147 | The Failure Probability of Random Linear Network Coding for Networks | cs.IT math.IT | In practice, since many communication networks are huge in scale, or
complicated in structure, or even dynamic, predesigning linear network codes
based on the network topology is often infeasible even when the topological
structure is known. Therefore, random linear network coding has been proposed as an
acceptable coding technique for the case that the network topology cannot be
utilized completely. Motivated by the fact that different network topological
information can be obtained for different practical applications, we study the
performance analysis of random linear network coding by analyzing some failure
probabilities depending on these different topological information of networks.
We obtain some tight or asymptotically tight upper bounds on these failure
probabilities and indicate the worst cases for these bounds, i.e., the networks
meeting the upper bounds with equality. In addition, the more topological
information of the network is utilized, the better the upper bounds obtained.
On the other hand, we also discuss the lower bounds on the failure
probabilities.
|
1302.4150 | Duality in Entanglement-Assisted Quantum Error Correction | quant-ph cs.IT math.IT | The dual of an entanglement-assisted quantum error-correcting (EAQEC) code is
defined from the orthogonal group of a simplified stabilizer group. From the
Poisson summation formula, this duality leads to the MacWilliams identities and
linear programming bounds for EAQEC codes. We establish a table of upper and
lower bounds on the minimum distance of any maximal-entanglement EAQEC code
with length up to 15 channel qubits.
|
1302.4168 | Data Placement and Replica Selection for Improving Co-location in
Distributed Environments | cs.DB cs.DC | Increasing need for large-scale data analytics in a number of application
domains has led to a dramatic rise in the number of distributed data management
systems, both parallel relational databases, and systems that support
alternative frameworks like MapReduce. There is thus an increasing contention
on scarce data center resources like network bandwidth; further, the energy
requirements for powering the computing equipment are also growing
dramatically. As we show empirically, increasing the execution parallelism by
spreading data out across a large number of machines may achieve the intended
goal of decreasing query latencies, but in most cases it significantly
increases total resource and energy consumption. For many analytical workloads,
however, minimizing query latencies is often not critical; in such scenarios,
we argue that we should instead focus on minimizing the average query span,
i.e., the average number of machines that are involved in processing of a
query, through colocation of data items that are frequently accessed together.
In this work, we exploit the fact that most distributed environments need to
use replication for fault tolerance, and we devise workload-driven replica
selection and placement algorithms that attempt to minimize the average query
span. We model a historical query workload trace as a hypergraph over a set of
data items, and formulate and analyze the problem of replica placement by
drawing connections to several well-studied graph theoretic concepts. We
develop a series of algorithms to decide which data items to replicate, and
where to place the replicas. We show the effectiveness of our proposed approach
by presenting results on a collection of synthetic and real workloads. Our
experiments show that careful data placement and replication can dramatically
reduce average query spans, resulting in significant reductions in resource
consumption.
|
1302.4225 | Impact of Pointing Errors on the Performance of Mixed RF/FSO Dual-Hop
Transmission Systems | cs.IT cs.PF math.IT math.PR | In this work, the performance analysis of a dual-hop relay transmission
system composed of asymmetric radio-frequency (RF)/free-space optical (FSO)
links with pointing errors is presented. More specifically, we build on the
system model presented in [1] to derive new exact closed-form expressions for
the cumulative distribution function, probability density function, moment
generating function, and moments of the end-to-end signal-to-noise ratio in
terms of the Meijer's G function. We then capitalize on these results to offer
new exact closed-form expressions for the higher-order amount of fading,
average error rate for binary and M-ary modulation schemes, and the ergodic
capacity, all in terms of Meijer's G functions. Our new analytical results are
also verified via Monte-Carlo simulations.
|
1302.4242 | Metrics for Multivariate Dictionaries | cs.LG stat.ML | Overcomplete representations and dictionary learning algorithms have
attracted growing interest in the machine learning community. This paper
addresses the emerging problem of comparing multivariate overcomplete
representations. Despite a recurrent need to rely on a distance for learning or
assessing multivariate overcomplete representations, no metrics in their
underlying spaces have yet been proposed. Hence, we propose to study
overcomplete representations from the perspective of frame theory and matrix
manifolds. We consider distances between multivariate dictionaries as distances
between their spans, which turn out to be elements of a Grassmannian manifold.
We introduce Wasserstein-like set-metrics defined on Grassmannian spaces and
study their properties both theoretically and numerically. A thorough
experimental study based on tailored synthetic datasets and real EEG signals
for Brain-Computer Interfaces (BCI) has been conducted. In particular, the
introduced metrics have been embedded in a clustering algorithm and applied to
the BCI Competition IV-2a dataset for quality assessment. Besides, a principled
connection is made between three close but still disjoint research fields,
namely, Grassmannian packing, dictionary learning and compressed sensing.
|
1302.4245 | Gaussian Process Kernels for Pattern Discovery and Extrapolation | stat.ML cs.AI stat.ME | Gaussian processes are rich distributions over functions, which provide a
Bayesian nonparametric approach to smoothing and interpolation. We introduce
simple closed form kernels that can be used with Gaussian processes to discover
patterns and enable extrapolation. These kernels are derived by modelling a
spectral density -- the Fourier transform of a kernel -- with a Gaussian
mixture. The proposed kernels support a broad class of stationary covariances,
but Gaussian process inference remains simple and analytic. We demonstrate the
proposed kernels by discovering patterns and performing long range
extrapolation on synthetic examples, as well as atmospheric CO2 trends and
airline passenger data. We also show that we can reconstruct standard
covariances within our framework.
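As a rough illustration of the construction described in this abstract (an
independent sketch, not code from the paper): Fourier-inverting a Gaussian
mixture spectral density in one dimension yields a stationary kernel of the
form k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q). The
hyperparameter values below are made up for illustration.

```python
import math

def spectral_mixture_kernel(tau, weights, means, variances):
    """Stationary kernel k(tau) whose spectral density is a Gaussian mixture.

    Each component q contributes w_q * exp(-2*pi^2*tau^2*v_q) * cos(2*pi*tau*mu_q):
    the inverse Fourier transform of a (symmetrised) Gaussian centred at mu_q.
    """
    return sum(
        w * math.exp(-2.0 * math.pi ** 2 * tau ** 2 * v) * math.cos(2.0 * math.pi * tau * mu)
        for w, mu, v in zip(weights, means, variances)
    )

# Illustrative hyperparameters: one periodic component, one smooth component.
w, mu, v = [1.0, 0.5], [0.25, 0.0], [0.01, 0.04]
k0 = spectral_mixture_kernel(0.0, w, mu, v)  # k(0) = sum of the weights = 1.5
k2 = spectral_mixture_kernel(2.0, w, mu, v)  # oscillates and decays with |tau|
```

The cosine terms let the kernel express periodic structure while the Gaussian
envelopes control how quickly correlations decay, which is what enables the
long-range extrapolation the abstract describes.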
|
1302.4258 | Phase Retrieval via Structured Modulations in Paley-Wiener Spaces | cs.IT math.IT | This paper considers the recovery of a continuous-time signal from the
magnitudes of its samples. It uses a combination of structured modulation and
oversampling and provides sufficient conditions on the signal and the sampling
system such that signal recovery is possible. In particular, it is shown that
an average sampling rate of four times the Nyquist rate is sufficient to
reconstruct a signal from its magnitude measurements.
|
1302.4268 | Re-Encoding Techniques for Interpolation-Based Decoding of Reed-Solomon
Codes | cs.IT math.IT | We consider interpolation-based decoding of Reed-Solomon codes using the
Guruswami-Sudan algorithm (GSA) and investigate the effects of two modification
techniques for received vectors, i.e., the re-encoding map and the newly
introduced periodicity projection. After an analysis of the latter, we track
the benefits (that is, low Hamming weight and a regular structure) of modified
received vectors through the interpolation step of the GSA and show how the
involved homogeneous linear system of equations can be compressed. We show that
this compression as well as the recovery of the interpolated bivariate
polynomial is particularly simple when the periodicity projection was applied.
|
1302.4283 | On the Fly Self-Organized Base Station Placement | cs.NI cs.IT math.IT | In this paper, we address the deployment of base stations (BSs) in a
one-dimensional network in which the users are randomly distributed. In order
to take the users' distribution into account when optimally placing the BSs, we
optimize the uplink MMSE sum rate. Moreover, given a massive number of antennas
at the BSs we propose a novel random matrix theory-based technique so as to
obtain tight approximations for the MMSE sum rate in the uplink. We investigate
a cooperative (CP) scenario where the BSs jointly decode the messages and a
non-cooperative (NCP) scheme in which the BS can only decode its own users. Our
results show that the CP strategy considerably outperforms the NCP case.
Moreover, we show that there exists a trade-off in the BS deployment regarding
the position of each BS. Thus, through location games we can optimize the
position of each BS in order to maximize the system performance.
|
1302.4297 | Feature Multi-Selection among Subjective Features | cs.LG stat.ML | When dealing with subjective, noisy, or otherwise nebulous features, the
"wisdom of crowds" suggests that one may benefit from multiple judgments of the
same feature on the same object. We give theoretically-motivated `feature
multi-selection' algorithms that choose, among a large set of candidate
features, not only which features to judge but how many times to judge each
one. We demonstrate the effectiveness of this approach for linear regression on
a crowdsourced learning task of predicting people's height and weight from
photos, using features such as 'gender' and 'estimated weight' as well as
culturally fraught ones such as 'attractive'.
|
1302.4332 | Streaming Data from HDD to GPUs for Sustained Peak Performance | cs.DC cs.CE cs.MS q-bio.GN | In the context of the genome-wide association studies (GWAS), one has to
solve long sequences of generalized least-squares problems; such a task has two
limiting factors: execution time --often in the range of days or weeks-- and
data management --data sets in the order of Terabytes. We present an algorithm
that obviates both issues. By pipelining the computation, and thanks to a
sophisticated transfer strategy, we stream data from hard disk to main memory
to GPUs and achieve sustained peak performance; with respect to a
highly-optimized CPU implementation, our algorithm shows a speedup of 2.6x.
Moreover, the approach lends itself to multiple GPUs and attains almost perfect
scalability. When using 4 GPUs, we observe speedups of 9x over the
aforementioned implementation, and 488x over a widespread biology library.
|
1302.4343 | On Translation Invariant Kernels and Screw Functions | math.FA cs.LG stat.ML | We explore the connection between Hilbertian metrics and positive definite
kernels on the real line. In particular, we look at a well-known
characterization of translation invariant Hilbertian metrics on the real line
by von Neumann and Schoenberg (1941). Using this result we are able to give an
alternate proof of Bochner's theorem for translation invariant positive
definite kernels on the real line (Rudin, 1962).
|
1302.4381 | Reasoning about Independence in Probabilistic Models of Relational Data | cs.AI | We extend the theory of d-separation to cases in which data instances are not
independent and identically distributed. We show that applying the rules of
d-separation directly to the structure of probabilistic models of relational
data inaccurately infers conditional independence. We introduce relational
d-separation, a theory for deriving conditional independence facts from
relational models. We provide a new representation, the abstract ground graph,
that enables a sound, complete, and computationally efficient method for
answering d-separation queries about relational models, and we present
empirical results that demonstrate effectiveness.
|
1302.4383 | Explaining Zipf's Law via Mental Lexicon | physics.data-an cond-mat.stat-mech cs.CL | Zipf's law is the major regularity of statistical linguistics that served
as a prototype for rank-frequency relations and scaling laws in natural
sciences. Here we show that Zipf's law -- together with its applicability
for a single text and its generalizations to high and low frequencies including
hapax legomena -- can be derived from assuming that the words are drawn into
the text with random probabilities. Their a priori density relates, via
Bayesian statistics, to general features of the mental lexicon of the author
who produced the text.
|
1302.4387 | Online Learning with Switching Costs and Other Adaptive Adversaries | cs.LG stat.ML | We study the power of different types of adaptive (nonoblivious) adversaries
in the setting of prediction with expert advice, under both full-information
and bandit feedback. We measure the player's performance using a new notion of
regret, also known as policy regret, which better captures the adversary's
adaptiveness to the player's behavior. In a setting where losses are allowed to
drift, we characterize ---in a nearly complete manner--- the power of adaptive
adversaries with bounded memories and switching costs. In particular, we show
that with switching costs, the attainable rate with bandit feedback is
$\widetilde{\Theta}(T^{2/3})$. Interestingly, this rate is significantly worse
than the $\Theta(\sqrt{T})$ rate attainable with switching costs in the
full-information case. Via a novel reduction from experts to bandits, we also
show that a bounded memory adversary can force $\widetilde{\Theta}(T^{2/3})$
regret even in the full information case, proving that switching costs are
easier to control than bounded memory adversaries. Our lower bounds rely on a
new stochastic adversary strategy that generates loss processes with strong
dependencies.
|
1302.4389 | Maxout Networks | stat.ML cs.LG | We consider the problem of designing models to leverage a recently introduced
approximate model averaging technique called dropout. We define a simple new
model called maxout (so named because its output is the max of a set of inputs,
and because it is a natural companion to dropout) designed to both facilitate
optimization by dropout and improve the accuracy of dropout's fast approximate
model averaging technique. We empirically verify that the model successfully
accomplishes both of these tasks. We use maxout and dropout to demonstrate
state of the art classification performance on four benchmark datasets: MNIST,
CIFAR-10, CIFAR-100, and SVHN.
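The maxout unit itself is simple to state in code. Below is a minimal
pure-Python sketch (the shapes and toy weights are illustrative; real
implementations operate on batched GPU tensors):

```python
def maxout(x, W, b):
    """Maxout layer: h_i(x) = max_j (x . W[:, i, j] + b[i, j]).

    W has shape (d, m, k): d inputs, m output units, k linear pieces per unit;
    b has shape (m, k). Each output is the max over k affine functions of x,
    so every unit is a learned convex piecewise-linear activation.
    """
    d, m, k = len(W), len(W[0]), len(W[0][0])
    h = []
    for i in range(m):
        pieces = [
            b[i][j] + sum(x[a] * W[a][i][j] for a in range(d))
            for j in range(k)
        ]
        h.append(max(pieces))
    return h

# Toy example: 2 inputs, 1 unit, 2 pieces. With pieces +x0 and -x0, the unit
# computes |x0|, an activation a fixed nonlinearity would have to approximate.
W = [[[1.0, -1.0]], [[0.0, 0.0]]]  # shape (2, 1, 2)
b = [[0.0, 0.0]]                   # shape (1, 2)
print(maxout([3.0, 5.0], W, b))    # -> [3.0]
print(maxout([-2.0, 5.0], W, b))   # -> [2.0]
```

Because the max of affine functions has no saturating regions, gradients flow
through the single active piece, which is part of why maxout pairs well with
dropout's approximate model averaging.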
|
1302.4391 | Constructing a genome assembly that has the maximum likelihood | cs.CE cs.DS | We formulate the genome assembly problem as an optimization problem in which the
objective function is the likelihood of the assembly given the reads.
|
1302.4405 | Performance Regions in Compressed Sensing from Noisy Measurements | cs.IT math.IT | In this paper, compressed sensing with noisy measurements is addressed. The
theoretically optimal reconstruction error is studied by evaluating Tanaka's
equation. The main contribution is to show that in several regions, which have
different measurement rates and noise levels, the reconstruction error behaves
differently. This paper also evaluates the performance of the belief
propagation (BP) signal reconstruction method in the regions discovered. When
the measurement rate and the noise level lie in a certain region, BP is
suboptimal with respect to Tanaka's equation, and it may be possible to develop
reconstruction algorithms with lower error in that region.
|
1302.4406 | Optimal Scheduling for Linear-Rate Multi-Mode Systems | cs.FL cs.SY | Linear-Rate Multi-Mode Systems is a model that can be seen both as a subclass
of switched linear systems with imposed global safety constraints and as hybrid
automata with no guards on transitions. We study the existence and design of a
controller for this model that keeps the state of the system within a given
safe set at all times. A necessary and sufficient condition is given for such a
controller to exist, as well as an algorithm that finds one in polynomial time.
We further generalise the model by adding costs on modes and present an
algorithm that constructs a safe controller which minimises the peak cost, the
average-cost or any cost expressed as a weighted sum of these two. Finally, we
present numerical simulation results based on our implementation of these
algorithms.
|
1302.4412 | Recommending Given Names | cs.IR cs.SI physics.soc-ph | All over the world, future parents are facing the task of finding a suitable
given name for their child. This choice is influenced by different factors,
such as the social context, language, cultural background and especially
personal taste. Although this task is omnipresent, little research has been
conducted on the analysis and application of interrelations among given names
from a data mining perspective.
The present work tackles the problem of recommending given names, by firstly
mining for inter-name relatedness in data from the Social Web. Based on these
results, the name search engine "Nameling" was built, which attracted more than
35,000 users within less than six months, underpinning the relevance of the
underlying recommendation task. The accruing usage data is then used for
evaluating different state-of-the-art recommendation systems, as well as our
new NameRank algorithm, which we adapted from our previous work on folksonomies
and
which yields the best results, considering the trade-off between prediction
accuracy and runtime performance as well as its ability to generate
personalized recommendations. We also show how the gathered inter-name
relationships can be used for meaningful result diversification of
PageRank-based recommendation systems.
As all of the considered usage data is made publicly available, the present
work establishes baseline results, encouraging other researchers to implement
advanced recommendation systems for given names.
|
1302.4421 | Towards a theory of good SAT representations | cs.AI cs.LO | We aim at providing a foundation of a theory of "good" SAT representations F
of boolean functions f. We argue that the hierarchy UC_k of unit-refutation
complete clause-sets of level k, introduced by the authors, provides the most
basic target classes, that is, F in UC_k is to be achieved for k as small as
feasible. If F does not contain new variables, i.e., F is equivalent (as a CNF)
to f, then F in UC_1 is similar to "achieving (generalised) arc consistency"
known from the literature (it is somewhat weaker, but theoretically much nicer
to handle). We show that for polysize representations of boolean functions in
this sense, the hierarchy UC_k is strict. The boolean functions for these
separations are "doped" minimally unsatisfiable clause-sets of deficiency 1;
these functions have been introduced in [Sloan, Soerenyi, Turan, 2007], and we
generalise their construction and show a correspondence to a strengthened
notion of irredundant sub-clause-sets. Turning from lower bounds to upper
bounds, we believe that many common CNF representations fit into the UC_k
scheme, and we give some basic tools to construct representations in UC_1 with
new variables, based on the Tseitin translation. Note that regarding new
variables the UC_1-representations are stronger than mere "arc consistency",
since the new variables are not excluded from consideration.
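As background for the Tseitin translation mentioned in the abstract (our own
toy illustration, not the authors' construction): each gate introduces a new
variable constrained by a handful of clauses to equal the gate's output, and
unit propagation over these clauses gives the arc-consistency-style inference
the UC_1 discussion is concerned with. A sketch using DIMACS-style integer
literals:

```python
from itertools import product

def tseitin_and(out, a, b):
    """Clauses asserting out <-> (a AND b); literals are nonzero ints and
    negation is arithmetic negation (DIMACS convention)."""
    return [[-out, a], [-out, b], [-a, -b, out]]

def tseitin_or(out, a, b):
    """Clauses asserting out <-> (a OR b)."""
    return [[-a, out], [-b, out], [-out, a, b]]

# Encode f = (x1 AND x2) OR x3 with auxiliary variables 4 (= x1&x2) and 5 (= f),
# then assert f holds via the unit clause [5].
cnf = tseitin_and(4, 1, 2) + tseitin_or(5, 4, 3) + [[5]]

def satisfies(cnf, assign):
    """assign maps var -> bool; a clause holds if any of its literals is true."""
    return all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in cnf)

def models_f(x1, x2, x3):
    """True iff some value of the auxiliary variables 4, 5 satisfies the CNF,
    i.e. the new variables are existentially quantified away."""
    return any(
        satisfies(cnf, {1: x1, 2: x2, 3: x3, 4: v4, 5: v5})
        for v4, v5 in product([False, True], repeat=2)
    )
```

Exhaustively checking `models_f` against `(x1 and x2) or x3` confirms the
encoding is equivalent on the original variables, which is the sense in which a
representation with new variables still represents f.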
|
1302.4433 | Adaptive Minimum BER Reduced-Rank Linear Detection for Massive MIMO
Systems | cs.IT math.IT | In this paper, we propose a novel adaptive reduced-rank strategy for very
large multiuser multi-input multi-output (MIMO) systems. The proposed
reduced-rank scheme is based on the concept of joint iterative optimization
(JIO) of filters according to the minimization of the bit error rate (BER) cost
function. The proposed optimization technique adjusts the weights of a
projection matrix and a reduced-rank filter jointly. We develop stochastic
gradient (SG) algorithms for their adaptive implementation and introduce a
novel automatic rank selection method based on the BER criterion. Simulation
results for multiuser MIMO systems show that the proposed adaptive algorithms
significantly outperform existing schemes.
|
1302.4462 | LEDDB: LOFAR Epoch of Reionization Diagnostic Database | astro-ph.IM cs.DB | One of the key science projects of the Low-Frequency Array (LOFAR) is the
detection of the cosmological signal coming from the Epoch of Reionization
(EoR). Here we present the LOFAR EoR Diagnostic Database (LEDDB) that is used
in the storage, management, processing and analysis of the LOFAR EoR
observations. It stores referencing information of the observations and
diagnostic parameters extracted from their calibration. These stored data are
used to ease pipeline processing, monitor the performance of the telescope, and
visualize the diagnostic parameters, which facilitates the analysis of several
contamination effects on the signals. It is implemented with PostgreSQL
and accessed through the psycopg2 python module. We have developed a very
flexible query engine, which is used by a web user interface to access the
database, and a very extensive set of tools for the visualization of the
diagnostic parameters through all their multiple dimensions.
|
1302.4465 | Unveiling the relationship between complex networks metrics and word
senses | physics.soc-ph cs.CL cs.SI physics.data-an | The automatic disambiguation of word senses (i.e., the identification of
which of the meanings is used in a given context for a word that has multiple
meanings) is essential for such applications as machine translation and
information retrieval, and represents a key step for developing the so-called
Semantic Web. Humans disambiguate words in a straightforward fashion, but this
does not apply to computers. In this paper we address the problem of Word Sense
Disambiguation (WSD) by treating texts as complex networks, and show that word
senses can be distinguished upon characterizing the local structure around
ambiguous words. Our goal was not to obtain the best possible disambiguation
system, but we nevertheless found that in half of the cases our approach
outperforms traditional shallow methods. We show that the hierarchical
connectivity and clustering of words are usually the most relevant features for
WSD. The results reported here shine light on the relationship between semantic
and structural parameters of complex networks. They also indicate that when
combined with traditional techniques the complex network approach may be useful
to enhance the discrimination of senses in large texts.
|
1302.4471 | Word sense disambiguation via high order of learning in complex networks | physics.soc-ph cs.CL cs.SI physics.data-an | Complex networks have been employed to model many real systems and as a
modeling tool in a myriad of applications. In this paper, we apply the
framework of complex networks to the problem of supervised classification in
the word sense disambiguation task, which consists in deriving a function from
the supervised
(or labeled) training data of ambiguous words. Traditional supervised data
classification takes into account only topological or physical features of the
input data. On the other hand, the human (animal) brain performs both low- and
high-level orders of learning and readily identifies patterns according to the
semantic meaning of the input data. In this paper, we apply a
hybrid technique which encompasses both types of learning in the field of word
sense disambiguation and show that the high-level order of learning can really
improve the accuracy rate of the model. This evidence serves to demonstrate
that the internal structures formed by the words do present patterns that,
generally, cannot be correctly unveiled by only traditional techniques.
Finally, we exhibit the behavior of the model for different weights of the low-
and high-level classifiers by plotting decision boundaries. This study helps
one to better understand the effectiveness of the model.
|
1302.4474 | On the multiple unicast capacity of 3-source, 3-terminal directed
acyclic networks | cs.IT cs.NI math.IT | We consider the multiple unicast problem with three source-terminal pairs
over directed acyclic networks with unit-capacity edges. The three $s_i-t_i$
pairs wish to communicate at unit-rate via network coding. The connectivity
between the $s_i - t_i$ pairs is quantified by means of a connectivity level
vector, $[k_1 k_2 k_3]$ such that there exist $k_i$ edge-disjoint paths between
$s_i$ and $t_i$. In this work we attempt to classify networks based on the
connectivity level. It can be observed that unit-rate transmission can be
supported by routing if $k_i \geq 3$ for all $i = 1, \dots, 3$. In this work,
we consider connectivity level vectors such that $\min_{i = 1, \dots, 3} k_i <
3$. We present either a constructive linear network coding scheme or an
instance of a network that cannot support the desired unit-rate requirement,
for all such connectivity level vectors except the vector $[1~2~4]$ (and its
permutations). The benefits of our schemes extend to networks with higher and
potentially different edge capacities. Specifically, our experimental results
indicate that for networks where the different source-terminal paths have a
significant overlap, our constructive unit-rate schemes can be packed along
with routing to provide higher throughput as compared to a pure routing
approach.
|
1302.4475 | In Love With a Robot: the Dawn of Machine-To-Machine Marketing | cs.AI cs.CY | The article looks at mass market artificial intelligence tools in the context
of their ever-growing sophistication, availability and market penetration. The
subject is especially relevant today for exactly these reasons: if a few years
ago AI was the subject of high-tech research and science fiction novels, today
we increasingly rely on cloud robotics to cater to our daily needs - to trade
stock, predict the weather, manage diaries, find friends and buy presents online.
|
1302.4489 | Termhood-based Comparability Metrics of Comparable Corpus in Special
Domain | cs.CL | Cross-Language Information Retrieval (CLIR) and machine translation (MT)
resources, such as dictionaries and parallel corpora, are scarce and hard to
come by for special domains. Moreover, these resources are limited to a few
languages, such as English, French, and Spanish. Automatically obtaining
comparable corpora for such domains could thus be an effective answer to this
problem. Comparable corpora, in which the subcorpora are not translations of
each other, can be easily obtained from the web; building and using comparable
corpora is therefore often a more feasible option in multilingual information
processing. Comparability metrics are one of the key issues in building and
using comparable corpora, and there is currently no widely accepted definition
or metric of corpus comparability. In fact, different definitions or metrics of
comparability might be adopted to suit various natural language processing
tasks. A new comparability metric, namely a termhood-based metric oriented to
the task of bilingual terminology extraction, is proposed in this paper. In
this method, words are ranked by termhood rather than frequency, and the cosine
similarity, calculated from the termhood ranking lists, is used as the
comparability score. Experimental results show that the termhood-based metric
performs better than traditional frequency-based metrics.
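The comparability computation sketched in this abstract might look as follows.
This is a simplified variant that takes the cosine over raw termhood scores
rather than rank positions; how termhood itself is scored is left abstract
here, and the scores below are hypothetical.

```python
import math

def comparability(termhood_a, termhood_b):
    """Cosine similarity between two corpora's termhood vectors.

    termhood_a / termhood_b map a shared (e.g. translated) vocabulary to
    termhood scores; words absent from one corpus contribute 0. This follows
    the abstract's idea of weighting words by termhood rather than frequency.
    """
    vocab = sorted(set(termhood_a) | set(termhood_b))
    va = [termhood_a.get(w, 0.0) for w in vocab]
    vb = [termhood_b.get(w, 0.0) for w in vocab]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(y * y for y in vb))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical termhood scores for two comparable subcorpora.
a = {"ontology": 3.2, "alignment": 2.1, "corpus": 1.0}
b = {"ontology": 2.8, "alignment": 2.5, "retrieval": 0.9}
score = comparability(a, b)  # in [0, 1]; higher means more comparable
```

Weighting by termhood rather than frequency lets domain terms dominate the
similarity even when common function words dwarf them in raw counts.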
|
1302.4490 | Complex networks analysis of language complexity | physics.soc-ph cs.CL cs.SI physics.data-an | Methods from statistical physics, such as those involving complex networks,
have been increasingly used in quantitative analysis of linguistic phenomena.
In this paper, we represented pieces of text with different levels of
simplification in co-occurrence networks and found that topological regularity
correlated negatively with textual complexity. Furthermore, in less complex
texts the distance between concepts, represented as nodes, tended to decrease.
The complex networks metrics were treated with multivariate pattern recognition
techniques, which allowed us to distinguish between original texts and their
simplified versions. For each original text, two simplified versions were
generated manually with increasing number of simplification operations. As
expected, distinction was easier for the strongly simplified versions, where
the most relevant metrics were node strength, shortest paths and diversity.
Also, the discrimination of complex texts was improved with higher hierarchical
network metrics, thus pointing to the usefulness of considering wider contexts
around the concepts. Though the accuracy rate in the distinction was not as
high as in methods using deep linguistic knowledge, the complex network
approach is still useful for a rapid screening of texts whenever assessing
complexity is essential to guarantee accessibility to readers with limited
reading ability.
|
1302.4492 | Bilingual Terminology Extraction Using Multi-level Termhood | cs.CL | Purpose: Terminology is the set of technical words or expressions used in
specific contexts, which denotes the core concept in a formal discipline and is
usually applied in the fields of machine translation, information retrieval,
information extraction and text categorization, etc. Bilingual terminology
extraction plays an important role in the application of bilingual dictionary
compilation, bilingual Ontology construction, machine translation and
cross-language information retrieval etc. This paper addresses the issues of
monolingual terminology extraction and bilingual term alignment based on
multi-level termhood.
Design/methodology/approach: A method based on multi-level termhood is
proposed. The new method computes the termhood of the terminology candidate as
well as the sentence that includes the terminology by the comparison of the
corpus. Since terminologies and general words usually have different
distributions in the corpus, termhood can also be used to constrain and enhance
the performance of term alignment when aligning bilingual terms on a parallel
corpus. In this paper, bilingual term alignment based on termhood constraints
is presented.
Findings: Experimental results show that multi-level termhood achieves better
performance than existing methods for terminology extraction. If termhood is
used as a constraint factor, the performance of bilingual term alignment can be
improved.
|
1302.4504 | On the use of topological features and hierarchical characterization for
disambiguating names in collaborative networks | physics.soc-ph cs.DL cs.IR cs.SI | Many features of complex systems can now be unveiled by applying statistical
physics methods to treat them as social networks. The power of the analysis may
be limited, however, by the presence of ambiguity in names, e.g., caused by
homonymy in collaborative networks. In this paper we show that the ability to
distinguish between homonymous authors is enhanced when longer-distance
connections are considered, rather than looking at only the immediate neighbors
of a node in the collaborative network. Optimized results were obtained upon
using the 3rd hierarchy in connections. Furthermore, reasonable distinction
among authors could also be achieved upon using pattern recognition strategies
for the data generated from the topology of the collaborative network. These
results were obtained with a network from papers in the arXiv repository, into
which homonymy was deliberately introduced to test the methods with a
controlled, reliable dataset. In all cases, several methods of supervised and
unsupervised machine learning were used, leading to the same overall results.
The suitability of using deeper hierarchies and network topology was confirmed
with a real database of movie actors, with the additional finding that the
distinguishing ability can be further enhanced by combining topology features
and long-range connections in the collaborative network.
|
1302.4516 | Bilayer Protograph Codes for Half-Duplex Relay Channels | cs.IT math.IT | Despite encouraging advances in the design of relay codes, several important
challenges remain. Many of the existing LDPC relay codes are tightly optimized
for fixed channel conditions and not easily adapted without extensive
re-optimization of the code. Some have high encoding complexity and some need
long block lengths to approach capacity. This paper presents a high-performance
protograph-based LDPC coding scheme for the half-duplex relay channel that
addresses simultaneously several important issues: structured coding that
permits easy design, low encoding complexity, embedded structure for convenient
adaptation to various channel conditions, and performance close to capacity
with a reasonable block length. The application of the coding structure to
multi-relay networks is demonstrated. Finally, a simple new methodology for
evaluating the end-to-end error performance of relay coding systems is
developed and used to highlight the performance of the proposed codes.
|
1302.4519 | A Genetic Algorithm for Power-Aware Virtual Machine Allocation in
Private Cloud | cs.NE cs.DC | Energy efficiency has become an important measure of scheduling algorithms
for private clouds. The challenge is the trade-off between minimizing energy
consumption and satisfying Quality of Service (QoS) requirements (e.g.,
performance or on-time resource availability for reservation requests). We
consider resource needs in the context of a private cloud system that provides
resources for teaching and research applications, in which users request
computing resources for laboratory classes, specifying start times and
uninterrupted durations some hours in advance. Many previous works rely on
migration techniques that move online virtual machines (VMs) away from
low-utilization hosts and turn these hosts off to reduce energy consumption.
However, such VM migration techniques cannot be used in our case. In this
paper, a genetic algorithm for power-aware scheduling of resource allocation
(GAPA) is proposed to solve the static virtual machine allocation problem
(SVMAP). Due to limited resources (i.e., memory) for executing the simulation,
we created a workload containing a sample one-day timetable of lab hours at our
university. We evaluate GAPA against a baseline scheduling algorithm (BFD),
which sorts the list of virtual machines by start time (i.e., earliest start
time first) and uses a best-fit decreasing (i.e., least increased power
consumption) heuristic, on the same SVMAP. As a result, the GAPA algorithm
obtains lower total energy consumption than the baseline algorithm in simulated
experiments.
|
1302.4545 | Preference-Based Unawareness | cs.GT cs.AI cs.LO | Morris (1996, 1997) introduced preference-based definitions of knowledge and
belief in standard state-space structures. This paper extends this
preference-based approach to unawareness structures (Heifetz, Meier, and
Schipper, 2006, 2008). By defining unawareness and knowledge in terms of
preferences over acts in unawareness structures and showing their equivalence
to the epistemic notions of unawareness and knowledge, we try to build a bridge
between decision theory and epistemic logic. Unawareness of an event is
characterized behaviorally as the event being null and its negation being null.
|
1302.4546 | Random-walk domination in large graphs: problem definitions and fast
solutions | cs.SI cs.DS physics.soc-ph | We introduce and formulate two types of random-walk domination problems in
graphs, motivated by a number of practical applications (e.g., the
item-placement problem in online social networks, the ads-placement problem in
advertisement networks, and the resource-placement problem in P2P networks). Specifically, given
a graph $G$, the goal of the first type of random-walk domination problem is to
target $k$ nodes such that the total hitting time of an $L$-length random walk
starting from the remaining nodes to the targeted nodes is minimal. The second
type of random-walk domination problem is to find $k$ nodes to maximize the
expected number of nodes that hit any one targeted node through an $L$-length
random walk. We prove that these problems are two special instances of the
submodular set function maximization with cardinality constraint problem. To
solve them effectively, we propose a dynamic-programming (DP) based greedy
algorithm with a near-optimal performance guarantee. The DP-based greedy
algorithm, however, is not very efficient due to the expensive marginal-gain
evaluation. To further speed up the algorithm, we propose an approximate greedy
algorithm with linear time complexity w.r.t.\ the graph size and, again, a
near-optimal performance guarantee. The approximate greedy algorithm is based
on carefully designed random-walk sampling and sample-materialization
techniques. Extensive experiments demonstrate the effectiveness, efficiency and
scalability of the proposed algorithms.
|
1302.4549 | Breaking the Small Cluster Barrier of Graph Clustering | cs.LG stat.ML | This paper investigates graph clustering in the planted cluster model in the
presence of {\em small clusters}. Traditional results dictate that for an
algorithm to provably correctly recover the clusters, {\em all} clusters must
be sufficiently large (in particular, $\tilde{\Omega}(\sqrt{n})$ where $n$ is
the number of nodes of the graph). We show that this is not really a
restriction: by a more refined analysis of the trace-norm based recovery
approach proposed in Jalali et al. (2011) and Chen et al. (2012), we prove that
small clusters, under certain mild assumptions, do not hinder recovery of large
ones.
Based on this result, we further devise an iterative algorithm to recover
{\em almost all clusters} via a "peeling strategy", i.e., recover large
clusters first, leading to a reduced problem, and repeat this procedure. These
results are extended to the {\em partial observation} setting, in which only a
(chosen) part of the graph is observed. The peeling strategy gives rise to an
active learning algorithm, in which edges adjacent to smaller clusters are
queried more often as large clusters are learned (and removed).
From a high level, this paper sheds novel insights on high-dimensional
statistics and learning structured data, by presenting a structured matrix
learning problem for which a one-shot convex relaxation approach necessarily
fails, but a carefully constructed sequence of convex relaxations does the job.
|
1302.4557 | Extracting Three Dimensional Surface Model of Human Kidney from the
Visible Human Data Set using Free Software | physics.med-ph cs.CE | Three dimensional digital model of a representative human kidney is needed
for a surgical simulator that is capable of simulating a laparoscopic surgery
involving the kidney. Buying a three-dimensional computer model of a
representative human kidney, or reconstructing one from an image sequence using
commercial software, involves a (sometimes significant) amount of money. In
this paper, the author shows that one can obtain a three-dimensional surface
model of a human kidney by making use of images from the Visible Human Data Set
and a few free software packages (ImageJ, ITK-SNAP, and MeshLab in particular).
Neither the images from the Visible Human Data Set nor the software packages
used here cost anything. Hence, the practice of extracting the geometry of a
representative human kidney for free, as illustrated in the present work, could
be a free alternative to the use of expensive commercial software or to the
purchase of a digital model.
|
1302.4572 | Searchability of central nodes in networks | physics.soc-ph cs.SI | Social networks are discrete systems with a large amount of heterogeneity
among nodes (individuals). Measures of centrality aim at a quantification of
nodes' importance for structure and function. Here we ask to which extent the
most central nodes can be found by purely local search. We find that many
networks have close-to-optimal searchability under eigenvector centrality,
outperforming searches for degree and betweenness. Searchability of the
strongest spreaders in epidemic dynamics tends to be substantially larger for
supercritical than for subcritical spreading.
|
1302.4619 | Compactified Horizontal Visibility Graph for the Language Network | cs.CL cs.DS | A compactified horizontal visibility graph for the language network is
proposed. It was found that networks constructed in such a way are scale free
and have the property that, among the nodes with the largest degrees, there are
words that determine not only the communicative structure of a text, but also
its informational structure.
|
1302.4660 | Compressive Classification | cs.IT math.IT | This paper derives fundamental limits associated with compressive
classification of Gaussian mixture source models. In particular, we offer an
asymptotic characterization of the behavior of the (upper bound to the)
misclassification probability associated with the optimal Maximum-A-Posteriori
(MAP) classifier that depends on quantities that are dual to the concepts of
diversity gain and coding gain in multi-antenna communications. The diversity,
which is shown to determine the rate at which the probability of
misclassification decays in the low noise regime, is shown to depend on the
geometry of the source, the geometry of the measurement system and their
interplay. The measurement gain, which represents the counterpart of the coding
gain, is also shown to depend on geometrical quantities. It is argued that the
diversity order and the measurement gain also offer an optimization criterion
to perform dictionary learning for compressive classification applications.
|
1302.4670 | Exact-Repair Regenerating Codes Via Layered Erasure Correction and Block
Designs | cs.IT math.IT | A new class of exact-repair regenerating codes is constructed by combining
two layers of erasure correction codes together with combinatorial block
designs, e.g., Steiner systems, balanced incomplete block designs and
t-designs. The proposed codes have the "uncoded repair" property where the
nodes participating in the repair simply transfer part of the stored data
directly, without performing any computation. The layered erasure correction
structure makes the decoding process rather straightforward, and in general the
complexity is low. We show that this construction is able to achieve
performance better than time-sharing between the minimum storage regenerating
codes and the minimum repair-bandwidth regenerating codes.
|
1302.4673 | Good Recognition is Non-Metric | cs.CV | Recognition is the fundamental task of visual cognition, yet how to formalize
the general recognition problem for computer vision remains an open issue. The
problem is sometimes reduced to the simplest case of recognizing matching
pairs, often structured to allow for metric constraints. However, visual
recognition is broader than just pair matching -- especially when we consider
multi-class training data and large sets of features in a learning context.
What we learn and how we learn it has important implications for effective
algorithms. In this paper, we reconsider the assumption of recognition as a
pair matching test, and introduce a new formal definition that captures the
broader context of the problem. Through a meta-analysis and an experimental
assessment of the top algorithms on popular data sets, we gain a sense of how
often metric properties are violated by good recognition algorithms. By
studying these violations, useful insights come to light: we make the case that
locally metric algorithms should leverage outside information to solve the
general recognition problem.
|
1302.4680 | Moving target inference with hierarchical Bayesian models in synthetic
aperture radar imagery | cs.IT math.IT | In synthetic aperture radar (SAR), images are formed by focusing the response
of stationary objects to a single spatial location. Moving targets, on the
other hand, induce phase errors in the standard formation of SAR images,
causing displacement and defocusing effects. SAR imagery also contains
significant sources of non-stationary, spatially varying noise, including
antenna gain discrepancies, angular scintillation (glints) and complex speckle.
In order to
account for this intricate phenomenology, this work combines the knowledge of
the physical, kinematic, and statistical properties of SAR imaging into a
single unified Bayesian structure that simultaneously (a) estimates the
nuisance parameters such as clutter distributions and antenna miscalibrations
and (b) estimates the target signature required for detection/inference of the
target state. Moreover, we provide a Monte Carlo estimate of the posterior
distribution for the target state and nuisance parameters that infers the
parameters of the model directly from the data, largely eliminating tuning of
algorithm parameters. We demonstrate that our algorithm competes at least as
well on a synthetic dataset as state-of-the-art algorithms for estimating
sparse signals. Finally, performance analysis on a measured dataset
demonstrates that the proposed algorithm is robust at detecting/estimating
targets over a wide area and performs at least as well as popular algorithms
for SAR moving target detection.
|
1302.4701 | A Receiver-Centric OFCDM Approach with Subcarrier Grouping | cs.IT cs.SY math.IT | In this letter, following a cross-layer design concept, we propose a novel
subcarrier grouping technique for Orthogonal Frequency and Code Division
Multiplexing (OFCDM) multiuser systems. We adopt two-dimensional (2D)
spreading, so as to achieve both frequency- and time-domain channel gains.
Furthermore, we enable a receiver-centric approach, where the receiver rather
than a potential sender controls the admission decision of the communication
establishment. We study the robustness of the proposed scheme in terms of the
Bit-Error-Rate (BER) and the outage probability. The derived results indicate
that the proposed scheme outperforms the classical OFCDM approach.
|
1302.4705 | Performance Analysis of the Ordered V-BLAST Approach over Nakagami-m
Fading Channels | cs.IT cs.SY math.IT | The performance of the V-BLAST approach, which utilizes successive
interference cancellation (SIC) with optimal ordering, over independent
Nakagami-m fading channels is studied. Systems with two transmit and n receive
antennas are employed, while the potentially erroneous decisions of SIC are also
considered. In particular, tight closed-form bound expressions are derived in
terms of the average symbol error rate (ASER) and the outage probability, in
case of binary and rectangular M-ary constellation alphabets. The mathematical
analysis is accompanied with selected performance evaluation and numerical
results, which demonstrate the usefulness of the proposed approach.
|
1302.4706 | Curves on Flat Tori and Analog Source-Channel Codes | cs.IT math.IT | In this paper we consider the problem of transmitting a continuous alphabet
discrete-time source over an AWGN channel. We propose a constructive scheme
based on a set of curves on the surface of a N-dimensional sphere. Our approach
shows that the design of good codes for this communication problem is related
to geometrical properties of spherical codes and projections of N-dimensional
rectangular lattices. Theoretical comparisons with some previous works in terms
of the mean square error as a function of the channel SNR as well as
simulations are provided.
|
1302.4717 | Channel Sounding Waveforms Design for Asynchronous Multiuser MIMO
Systems | cs.IT math.IT | In this paper we provide three contributions to the field of channel sounding
waveform design in asynchronous Multi-user (MU) MIMO systems. The first
contribution is a derivation of the asynchronous MU-MIMO model and the
conditions that the sounding waveform must meet to independently resolve all of
the spatial channel responses. Next we propose a chirp waveform that meets the
constraints and we show that the MSE of our system meets the Cramer-Rao Bound
(CRB) when the time offset is an integer multiple of the sampling interval.
Finally we demonstrate that the channel capacity region of the asynchronous
system and synchronous system is equivalent under certain conditions.
Simulation results are provided to illustrate the findings.
|
1302.4721 | Energy-Efficient Resource Allocation in OFDMA Systems with Hybrid Energy
Harvesting Base Station | cs.IT math.IT | We study resource allocation algorithm design for energy-efficient
communication in an OFDMA downlink network with hybrid energy harvesting base
station. Specifically, an energy harvester and a constant energy source driven
by a non-renewable resource are used for supplying the energy required for
system operation. We first consider a deterministic offline system setting. In
particular, assuming availability of non-causal knowledge about energy arrivals
and channel gains, an offline resource allocation problem is formulated as a
non-convex optimization problem taking into account the circuit energy
consumption, a finite energy storage capacity, and a minimum required data
rate. We transform this non-convex optimization problem into a convex
optimization problem by applying time-sharing and fractional programming which
results in an efficient asymptotically optimal offline iterative resource
allocation algorithm. In each iteration, the transformed problem is solved by
using Lagrange dual decomposition. The obtained resource allocation policy
maximizes the weighted energy efficiency of data transmission. Subsequently, we
focus on online algorithm design. A stochastic dynamic programming approach is
employed to obtain the optimal online resource allocation algorithm which
requires a prohibitively high complexity. To strike a balance between system
performance and computational complexity, we propose a low complexity
suboptimal online iterative algorithm which is motivated by the offline
optimization.
|
1302.4726 | An Ontology for Modelling and Supporting the Process of Authoring
Technical Assessments | cs.IR cs.CL cs.DL | In this paper, we present a semantic web approach for modelling the process
of creating new technical and regulatory documents related to the Building
sector. This industry, among other industries, is currently experiencing a
phenomenal growth in its technical and regulatory texts. Therefore, it is
urgent and crucial to improve the process of creating regulations by automating
it as much as possible. We focus on the creation of particular technical
documents issued by the French Scientific and Technical Centre for Building
(CSTB), called Technical Assessments, and we propose services based on Semantic
Web models and techniques for modelling the process of their creation.
|
1302.4735 | Realignment in the NHL, MLB, the NFL, and the NBA | stat.AP cs.SI physics.soc-ph | Sports leagues consist of conferences subdivided into divisions. Teams play a
number of games within their divisions and fewer games against teams in
different divisions and conferences. Usually, a league structure remains stable
from one season to the next. However, structures change when growth or
contraction occurs, and realignment of the four major professional sports
leagues in North America has occurred more than twenty-five times since 1967.
In this paper, we describe a method for realigning sports leagues that is
flexible, adaptive, and that enables construction of schedules that minimize
travel while satisfying other criteria. We do not build schedules; we develop
league structures which support the subsequent construction of efficient
schedules. Our initial focus is the NHL, which has an urgent need for
realignment following the recent move of the Atlanta Thrashers to Winnipeg, but
our methods can be adapted to virtually any situation. We examine a variety of
scenarios for the NHL, and apply our methods to the NBA, MLB, and NFL. We find
the biggest improvements for MLB and the NFL, where adopting the best solutions
would reduce league travel by about 20%.
|
1302.4755 | Channel-Aware Random Access in the Presence of Channel Estimation Errors | cs.IT math.IT | In this work, we consider the random access of nodes adapting their
transmission probability based on the local channel state information (CSI) in
a decentralized manner, which is called CARA. The CSI is not directly available
to each node but estimated with some errors in our scenario. Thus, the impact
of imperfect CSI on the performance of CARA is our main concern. Specifically,
an exact stability analysis is carried out when a pair of bursty sources are
competing for a common receiver and, thereby, have interdependent services. The
analysis also takes into account the compound effects of the multipacket
reception (MPR) capability at the receiver. The contributions in this paper are
twofold: first, we obtain the exact stability region of CARA in the presence of
channel estimation errors; such an assessment is necessary as channel
estimation errors are inevitable in practice. Second, we
compare the performance of CARA to that achieved by the class of stationary
scheduling policies that make decisions in a centralized manner based on the
CSI feedback. It is shown that the stability region of CARA is not necessarily
a subset of that of centralized schedulers as the MPR capability improves.
|
1302.4761 | Finite-time Consensus for Multi-agent Networks with Unknown Inherent
Nonlinear Dynamics | math.OC cs.SY | This paper focuses on analyzing the finite-time convergence of a nonlinear
consensus algorithm for multi-agent networks with unknown inherent nonlinear
dynamics. Due to the existence of the unknown inherent nonlinear dynamics, the
stability analysis and the finite-time convergence analysis of the closed-loop
system under the proposed consensus algorithm are more challenging than those
under the well-studied consensus algorithms for known linear systems. For this
purpose, we propose a novel stability tool based on a generalized comparison
lemma. With the aid of the novel stability tool, it is shown that the proposed
nonlinear consensus algorithm can guarantee finite-time convergence if the
directed switching interaction graph has a directed spanning tree at each time
interval. Specifically, the finite-time convergence is shown by comparing the
closed-loop system under the proposed consensus algorithm with some
well-designed closed-loop system whose stability properties are easier to
obtain. Moreover, the stability and the finite-time convergence of the
closed-loop system using the proposed consensus algorithm under a (general)
directed switching interaction graph can even be guaranteed by the stability
and the finite-time convergence of some special well-designed nonlinear
closed-loop system under some special directed switching interaction graph,
where each agent has at most one neighbor whose state is either the maximum of
those states that are smaller than its own state or the minimum of those states
that are larger than its own state. This provides a stimulating example for the
potential applications of the proposed novel stability tool in the stability
analysis of linear/nonlinear closed-loop systems by making use of known results
in linear/nonlinear systems. For illustration of the theoretical result, we
provide a simulation example.
|
1302.4765 | Design Features for the Social Web: The Architecture of Deme | cs.SI cs.SE | We characterize the "social Web" and argue for several features that are
desirable for users of socially oriented web applications. We describe the
architecture of Deme, a web content management system (WCMS) and extensible
framework, and show how it implements these desired features. We then compare
Deme on our desiderata with other web technologies: traditional HTML, previous
open source WCMSs (illustrated by Drupal), commercial Web 2.0 applications, and
open-source, object-oriented web application frameworks. The analysis suggests
that a WCMS can be well suited to building social websites if it makes more of
the features of object-oriented programming, such as polymorphism and class
inheritance, available to non-programmers in an accessible vocabulary.
|
1302.4767 | Low-power Secret-key Agreement over OFDM | cs.IT cs.CR math.IT | Information-theoretic secret-key agreement is perhaps the most practically
feasible mechanism that provides unconditional security at the physical layer
to date. In this paper, we consider the problem of secret-key agreement by
sharing randomness at low power over an orthogonal frequency division
multiplexing (OFDM) link, in the presence of an eavesdropper. The low power
assumption greatly simplifies the design of the randomness sharing scheme, even
in a fading channel scenario. We assess the performance of the proposed system
in terms of secrecy key rate and show that a practical approach to key sharing
is obtained by using low-density parity check (LDPC) codes for information
reconciliation. Numerical results confirm the merits of the proposed approach
as a feasible and practical solution. Moreover, the outage formulation makes it
possible to implement secret-key agreement even when only statistical knowledge of the
eavesdropper channel is available.
|
1302.4773 | Optimal Discriminant Functions Based On Sampled Distribution Distance
for Modulation Classification | stat.ML cs.LG cs.PF | In this letter, we derive the optimal discriminant functions for modulation
classification based on the sampled distribution distance. The proposed method
classifies various candidate constellations using a low complexity approach
based on the distribution distance at specific testpoints along the cumulative
distribution function. This method, based on the Bayesian decision criteria,
asymptotically provides the minimum classification error possible given a set
of testpoints. Testpoint locations are also optimized to improve classification
performance. The method provides significant gains over existing approaches
that also use the distribution of the signal features.
|
1302.4774 | A theoretical framework for conducting multi-level studies of complex
social systems with agent-based models and empirical data | cs.MA cs.SI stat.AP | A formal but intuitive framework is introduced to bridge the gap between data
obtained from empirical studies and that generated by agent-based models. This
is based on three key tenets. Firstly, a simulation can be given multiple
formal descriptions corresponding to static and dynamic properties at different
levels of observation. These can be easily mapped to empirically observed
phenomena and data obtained from them. Secondly, an agent-based model generates
a set of closed systems, and computational simulation is the means by which we
sample from this set. Thirdly, properties at different levels and statistical
relationships between them can be used to classify simulations as those that
instantiate a more sophisticated set of constraints. These can be validated
with models obtained from statistical models of empirical data (for example,
structural equation or multi-level models) and hence provide more stringent
criteria for validating the agent-based model itself.
|
1302.4776 | Universal Outlier Hypothesis Testing | cs.IT math.IT math.ST stat.TH | Outlier hypothesis testing is studied in a universal setting. Multiple
sequences of observations are collected, a small subset of which are outliers.
A sequence is considered an outlier if the observations in that sequence are
distributed according to an ``outlier'' distribution, distinct from the
``typical'' distribution governing the observations in all the other sequences.
Nothing is known about the outlier and typical distributions except that they
are distinct and have full supports. The goal is to design a universal test to
best discern the outlier sequence(s). It is shown that the generalized
likelihood test is universally exponentially consistent under various settings.
The achievable error exponent is also characterized. In other settings, it is
shown that no universally exponentially consistent test can exist.
|
1302.4784 | An Optical Watermarking Solution for Color Personal Identification
Pictures | cs.MM cs.CV physics.optics | This paper presents a new approach for embedding authentication information
into images on printed materials based on an optical projection technique. Our
experimental setup consists of two parts: a common camera and an LCD projector,
which projects a pattern onto a person's body (especially the face). The
pattern, generated by a computer, acts as an illumination light source with a
sinusoidal distribution and is also the watermark signal. For a color image,
the watermark is embedded into the blue channel. While we take pictures
(256x256, 512x512, and 567x390 pixels, respectively), an invisible mark is
embedded directly into the magnitude coefficients of the Discrete Fourier
Transform (DFT) at the moment of exposure. Both optical and digital correlation
are suitable for detecting this type of watermark. The decoded watermark is a
set of concentric circles or sectors in the DFT domain (middle-frequency
region) that is robust to photographing, printing and scanning. Forgers may
modify or replace the original photograph to make fake passports (driver's
licenses, and so on). Experiments show that it is difficult to forge
certificates in which a watermark was embedded by our projector-camera
combination using the analogue watermarking method rather than the classical
digital method.
|
1302.4785 | A Distributed Approach to Interference Alignment in OFDM-based
Two-tiered Networks | cs.IT math.IT | In this contribution, we consider a two-tiered network and focus on the
coexistence between the two tiers at the physical layer. We target our efforts
on a Long Term Evolution-Advanced (LTE-A) orthogonal frequency division
multiple access (OFDMA) macro-cell sharing the spectrum with a randomly
deployed second tier of small-cells. In such networks, high levels of
co-channel interference between the macro and small base stations (MBS/SBS) may
largely limit the potential spectral efficiency gains provided by frequency reuse 1. To
address this issue, we propose a novel cognitive interference alignment based
scheme to protect the macro-cell from the cross-tier interference, while
mitigating the co-tier interference in the second tier. Remarkably, only local
channel state information (CSI) and autonomous operations are required in the
second tier, resulting in a completely self-organizing approach for the SBSs.
The optimal precoder that maximizes the spectral efficiency of the link between
each SBS and its served user equipment is found by means of a distributed
one-shot strategy. Numerical findings reveal non-negligible spectral efficiency
enhancements with respect to traditional time division multiple access
approaches at any signal to noise (SNR) regime. Additionally, the proposed
technique exhibits significant robustness to channel estimation errors,
achieving remarkable results for the imperfect CSI case and yielding consistent
performance enhancements to the network.
|
1302.4786 | Cognitive Orthogonal Precoder for Two-tiered Networks Deployment | cs.IT math.IT | In this work, the problem of cross-tier interference in a two-tiered
(macro-cell and cognitive small-cells) network, under the complete spectrum
sharing paradigm, is studied. A new orthogonal precoder transmit scheme for the
small base stations, called multi-user Vandermonde-subspace frequency division
multiplexing (MU-VFDM), is proposed. MU-VFDM allows several cognitive small
base stations to coexist with legacy macro-cell receivers, by nulling the
small- to macro-cell cross-tier interference, without any cooperation between
the two tiers. This cleverly designed cascaded precoder structure not only
cancels the cross-tier interference, but also avoids the co-tier interference
for the small-cell network. The achievable sum-rate of the small-cell network,
satisfying the interference cancelation requirements, is evaluated for perfect
and imperfect channel state information at the transmitter. Simulation results
for the cascaded MU-VFDM precoder show a comparable performance to that of
state-of-the-art dirty paper coding technique, for the case of a dense cellular
layout. Finally, a comparison between MU-VFDM and a standard complete spectrum
separation strategy is proposed. Promising gains in terms of achievable
sum-rate are shown for the two-tiered network w.r.t. the traditional bandwidth
management approach.
|
1302.4788 | Layered Interference Networks with Delayed CSI: DoF Scaling with
Distributed Transmitters | cs.IT math.IT | The layered interference network is investigated with delayed channel state
information (CSI) at all nodes. It is demonstrated how multi-hopping can be
utilized to increase the achievable degrees of freedom (DoF). In particular, a
multi-phase transmission scheme is proposed for the $K$-user $2K$-hop
interference network in order to systematically exploit the layered structure
of the network and delayed CSI to achieve DoF values that scale with $K$. This
result provides the first example of a network with distributed transmitters
and delayed CSI whose DoF scales with the number of users.
|
1302.4793 | Opportunistic Wireless Energy Harvesting in Cognitive Radio Networks | cs.NI cs.IT math.IT | Wireless networks can be self-sustaining by harvesting energy from ambient
radio-frequency (RF) signals. Recently, researchers have made progress on
designing efficient circuits and devices for RF energy harvesting suitable for
low-power wireless applications. Motivated by this and building upon the
classic cognitive radio (CR) network model, this paper proposes a novel method
for coexisting wireless networks, in which low-power mobiles in a secondary
network, called secondary transmitters (STs), harvest ambient RF energy from
transmissions by nearby active transmitters in a primary network, called
primary transmitters (PTs), while opportunistically accessing the spectrum
licensed to the primary network. We consider a stochastic-geometry model in
which PTs and STs are distributed as independent homogeneous Poisson point
processes (HPPPs) and communicate with their intended receivers at fixed
distances. Each PT is associated with a guard zone to protect its intended
receiver from ST's interference, and at the same time delivers RF energy to STs
located in its harvesting zone. Based on the proposed model, we analyze the
transmission probability of STs and the resulting spatial throughput of the
secondary network. The optimal transmission power and density of STs are
derived for maximizing the secondary network throughput under the given
outage-probability constraints in the two coexisting networks, which reveal key
insights to the optimal network design. Finally, we show that our analytical
result can be generally applied to a non-CR setup, where distributed wireless
power chargers are deployed to power coexisting wireless transmitters in a
sensor network.
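The stochastic-geometry setup can be sketched in a few lines: a typical ST may transmit only if no PT lies within the guard-zone radius, and for a homogeneous PPP that transmission probability has the closed form exp(-lambda*pi*r^2) (the void probability). The parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters: PT density, guard-zone radius, window size.
lam_pt = 0.001        # PTs per unit area
r_guard = 10.0        # guard-zone radius around each PT
side = 1000.0         # simulation window is a side x side square
trials = 2000

# Monte Carlo estimate of the probability that a typical ST (at the window
# centre) lies outside every PT guard zone and may therefore transmit.
hits = 0
for _ in range(trials):
    n_pt = rng.poisson(lam_pt * side * side)
    pts = rng.uniform(0.0, side, size=(n_pt, 2))
    d = np.hypot(pts[:, 0] - side / 2, pts[:, 1] - side / 2)
    if n_pt == 0 or d.min() > r_guard:
        hits += 1

p_sim = hits / trials
# HPPP void probability gives the closed form exp(-lam * pi * r^2).
p_theory = np.exp(-lam_pt * np.pi * r_guard**2)
print(round(p_sim, 3), round(p_theory, 3))
```

The simulated and analytical probabilities agree closely, which is the kind of building block the paper's throughput analysis stacks further constraints on.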
|
1302.4805 | Energy-Efficient Optimization for Physical Layer Security in
Multi-Antenna Downlink Networks with QoS Guarantee | cs.IT math.IT | In this letter, we consider a multi-antenna downlink network where a secure
user (SU) coexists with a passive eavesdropper. There are two design
requirements for such a network. First, the information should be transferred
in a secret and efficient manner. Second, the quality of service (QoS), i.e.
delay sensitivity, should be taken into consideration to satisfy the demands of
real-time wireless services. In order to fulfill the two requirements, we
combine the physical layer security technique based on switched beam
beamforming with an energy-efficient power allocation. The problem is
formulated as the maximization of the secrecy energy efficiency subject to
delay and power constraints. By solving the optimization problem, we derive an
energy-efficient power allocation scheme. Numerical results validate the
effectiveness of the proposed scheme.
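The trade-off being optimized can be visualized with a brute-force sketch: secrecy energy efficiency is the secrecy rate divided by total consumed power, maximized subject to a minimum-rate (delay) constraint. The paper derives an analytical allocation; the grid search and all numbers below (channel gains, circuit power, budget) are assumptions for illustration only.

```python
import numpy as np

# Illustrative secrecy energy-efficiency maximisation by grid search.
g_su, g_eve = 2.0, 0.3    # SU / eavesdropper channel gains (assumed, linear)
p_circuit = 0.5           # circuit power (W)
p_max = 5.0               # transmit power budget (W)
r_min = 0.5               # delay-driven minimum secrecy rate (bit/s/Hz)

p = np.linspace(1e-3, p_max, 5000)
secrecy_rate = np.log2(1 + g_su * p) - np.log2(1 + g_eve * p)
see = secrecy_rate / (p + p_circuit)   # secrecy energy efficiency

feasible = secrecy_rate >= r_min       # QoS (delay) constraint
best = np.argmax(np.where(feasible, see, -np.inf))
print(round(p[best], 3), round(see[best], 3))
```

The optimum lands well below the power budget: past a point, extra transmit power buys less secrecy rate than it costs in energy, which is the intuition behind energy-efficient rather than rate-maximal design.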
|
1302.4811 | Towards a Semantic-based Approach for Modeling Regulatory Documents in
Building Industry | cs.CL | Regulations in the Building Industry are becoming increasingly complex and
involve more than one technical area. They cover products, components and
project implementation. They also play an important role in ensuring the quality
of a building, and to minimize its environmental impact. In this paper, we are
particularly interested in the modeling of the regulatory constraints derived
from the Technical Guides issued by CSTB and used to validate Technical
Assessments. We first describe our approach for modeling regulatory constraints
in the SBVR language, and formalizing them in the SPARQL language. Second, we
describe how we model the processes of compliance checking described in the
CSTB Technical Guides. Third, we show how we implement these processes to
assist industry practitioners in drafting Technical Documents in order to obtain a
Technical Assessment; a compliance report is automatically generated to explain
the compliance or noncompliance of these Technical Documents.
|
1302.4813 | Probabilistic Frame Induction | cs.CL | In natural-language discourse, related events tend to appear near each other
to describe a larger scenario. Such structures can be formalized by the notion
of a frame (a.k.a. template), which comprises a set of related events and
prototypical participants and event transitions. Identifying frames is a
prerequisite for information extraction and natural language generation, and is
usually done manually. Methods for inducing frames have been proposed recently,
but they typically use ad hoc procedures and are difficult to diagnose or
extend. In this paper, we propose the first probabilistic approach to frame
induction, which incorporates frames, events, and participants as latent topics and
learns those frame and event transitions that best explain the text. The number
of frames is inferred by a novel application of a split-merge method from
syntactic parsing. In end-to-end evaluations from text to induced frames and
extracted facts, our method produced state-of-the-art results while
substantially reducing engineering effort.
|
1302.4814 | NLP and CALL: integration is working | cs.CL | In the first part of this article, we explore the background of
computer-assisted learning from its beginnings in the early nineteenth century
and the first teaching machines, founded on theories of learning, at the start
of the twentieth century. With the arrival of the computer, it became possible
to offer
language learners different types of language activities such as comprehension
tasks, simulations, etc. However, these have limits that cannot be overcome
without some contribution from the field of natural language processing (NLP).
In what follows, we examine the challenges faced and the issues raised by
integrating NLP into CALL. We hope to demonstrate that the key to success in
integrating NLP into CALL is to be found in multidisciplinary work between
computer experts, linguists, language teachers, didacticians and NLP
specialists.
|
1302.4840 | Joint Physical Network Coding and LDPC decoding for Two Way Wireless
Relaying | cs.IT math.IT | In this paper, we investigate the joint design of channel and network coding
in bi-directional relaying systems and propose a combined low complexity
physical network coding and LDPC decoding scheme. For the same LDPC codes
employed at both source nodes, we show that the relay can decode the
network-coded codewords from the superimposed signal received over the
BPSK-modulated multiple-access channel. Simulation results show that this novel
joint physical network coding and LDPC decoding method outperforms the existing
MMSE network coding and LDPC decoding method over AWGN and complex MAC
channels.
|
1302.4858 | Trajectory generation and display for free flight | math.OC cs.RO | In this study a new approach is proposed for the generation of aircraft
trajectories. The relative guidance of an aircraft aiming to join the track of
a leader aircraft in minimum time is particularly considered. First, a
minimum-time relative convergence problem is considered and
optimal trajectories are characterized. Then the synthesis of a neural
approximator for optimal trajectories is discussed. Trained neural networks are
used in an adaptive manner to generate intent trajectories during operation.
Finally, simulation results involving two wide-body aircraft are presented.
|
1302.4872 | Cohesion, consensus and extreme information in opinion dynamics | physics.soc-ph cs.SI nlin.AO | Opinion formation is an important element of social dynamics. It has been
widely studied in recent years with tools from physics, mathematics, and
computer science. Here, a continuous model of opinion dynamics for multiple
possible choices is analysed. Its main features are the inclusion of
disagreement and the possibility of modulating information, from one or
multiple sources. The interest is in identifying the effect of the initial
cohesion of the population, the interplay between cohesion and information
extremism, and the effect of using multiple sources of information that can
influence the system. Final consensus, especially with external information,
depends highly on these factors, as numerical simulations show. When no
information is present, consensus or segregation is determined by the initial
cohesion of the population. Interestingly, when only one source of information
is present, consensus can be obtained, in general, only when this is extremely
mild, i.e. there is not a single opinion strongly promoted, or in the special
case of a large initial cohesion and low information exposure. On the contrary,
when multiple information sources are allowed, consensus can emerge with an
information source even when this is not extremely mild, i.e. it carries a
strong message, for a large range of initial conditions.
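The reported regime of consensus under large initial cohesion and low information exposure can be reproduced qualitatively with a minimal bounded-confidence sketch (Deffuant-style, not the paper's exact model): agents compromise with sufficiently similar peers, while an external source only pulls agents whose opinion is already close to it. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal bounded-confidence opinion model with one external source.
def simulate(n=200, spread=0.15, x_info=0.9, exposure=0.1,
             tol=0.25, steps=20000):
    # High initial cohesion: opinions in a narrow band of width `spread`.
    x = 0.5 + spread * (rng.random(n) - 0.5)
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        if abs(x[i] - x[j]) < tol:          # compromise with a similar peer
            mid = 0.5 * (x[i] + x[j])
            x[i] = x[j] = mid
        if rng.random() < exposure and abs(x[i] - x_info) < tol:
            x[i] += 0.5 * (x_info - x[i])   # pull from the information source
    return x

x = simulate()
# With high cohesion and an extreme source outside everyone's tolerance,
# the population reaches consensus near its initial mean.
print(round(float(x.mean()), 3), round(float(x.std()), 4))
```

Widening the initial spread or placing the source within the agents' tolerance changes the outcome, mirroring the dependence on initial cohesion and information extremism described above.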
|
1302.4874 | A Labeled Graph Kernel for Relationship Extraction | cs.CL cs.LG | In this paper, we propose an approach for Relationship Extraction (RE) based
on labeled graph kernels. The kernel we propose is a particularization of a
random walk kernel that exploits two properties previously studied in the RE
literature: (i) the words between the candidate entities or connecting them in
a syntactic representation are particularly likely to carry information
regarding the relationship; and (ii) combining information from distinct
sources in a kernel may help the RE system make better decisions. We performed
experiments on a dataset of protein-protein interactions and the results show
that our approach obtains effectiveness values that are comparable with the
state-of-the-art kernel methods. Moreover, our approach is able to outperform
the state-of-the-art kernels when combined with other kernel methods.
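The random walk kernel family this builds on can be sketched directly: walks are counted on the direct product of two labeled graphs, keeping only node pairs whose labels agree, with a geometric down-weighting by walk length. The toy word graphs below are purely illustrative, not the paper's representation.

```python
import numpy as np

def product_adjacency(A1, labels1, A2, labels2):
    """Adjacency of the label-matched direct product graph."""
    W = np.kron(A1, A2)  # a step here is a simultaneous step in both graphs
    # Keep only node pairs whose labels agree (row order matches np.kron).
    match = np.array([l1 == l2 for l1 in labels1 for l2 in labels2], dtype=float)
    return W * np.outer(match, match)

def random_walk_kernel(A1, labels1, A2, labels2, lam=0.1):
    """Geometric random walk kernel: sum over common walks of all lengths,
    weighted by lam**length (converges when lam is small enough)."""
    Wx = product_adjacency(A1, labels1, A2, labels2)
    n = Wx.shape[0]
    # sum_k lam^k Wx^k = (I - lam Wx)^(-1); sum all entries.
    return float(np.linalg.inv(np.eye(n) - lam * Wx).sum())

# Toy path graphs over labeled tokens.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
k_self = random_walk_kernel(A, ["protein", "binds", "protein"],
                            A, ["protein", "binds", "protein"])
k_diff = random_walk_kernel(A, ["protein", "binds", "protein"],
                            A, ["gene", "binds", "protein"])
print(k_self > k_diff)  # identical label sequences share more walks
```

Particularizing which nodes and labels enter the product graph (e.g. words between candidate entities, syntactic paths) is where approaches like the one above inject RE-specific knowledge.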
|
1302.4886 | Fast methods for denoising matrix completion formulations, with
applications to robust seismic data interpolation | stat.ML cs.LG | Recent SVD-free matrix factorization formulations have enabled rank
minimization for systems with millions of rows and columns, paving the way for
matrix completion in extremely large-scale applications, such as seismic data
interpolation.
In this paper, we consider matrix completion formulations designed to hit a
target data-fitting error level provided by the user, and propose an algorithm
called LR-BPDN that is able to exploit factorized formulations to solve the
corresponding optimization problem. Since practitioners typically have strong
prior knowledge about the target error level, this innovation makes it easy to
apply the algorithm in practice, leaving only the factor rank to be determined.
Within the established framework, we propose two extensions that are highly
relevant to solving practical challenges of data interpolation. First, we
propose a weighted extension that allows known subspace information to improve
the results of matrix completion formulations. We show how this weighting can
be used in the context of frequency continuation, an essential aspect to
seismic data interpolation. Second, we propose matrix completion formulations
that are robust to large measurement errors in the available data.
We illustrate the advantages of LR-BPDN on the collaborative filtering
problem using the MovieLens 1M, 10M, and Netflix 100M datasets. Then, we use
the new method, along with its robust and subspace re-weighted extensions, to
obtain high-quality reconstructions for large scale seismic interpolation
problems with real data, even in the presence of data contamination.
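The factorized, target-misfit idea can be sketched with plain alternating least squares on the factors of X = L R^T, stopping once the fit on observed entries reaches a user-chosen error level sigma. This is only the "hit a target data-fitting error" concept in miniature; LR-BPDN's actual machinery (and its robust and weighted extensions) is more sophisticated, and the synthetic data below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic rank-3 matrix with roughly half the entries observed.
m, n, rank = 60, 40, 3
X_true = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
mask = rng.random((m, n)) < 0.5

def factorized_complete(B, mask, rank, sigma, iters=300):
    """Alternating least squares on the factors of X = L @ R.T, stopping
    once the misfit on observed entries reaches the target sigma."""
    m, n = B.shape
    L = rng.standard_normal((m, rank))
    R = rng.standard_normal((n, rank))
    for _ in range(iters):
        for i in range(m):   # refit each row factor on its observed entries
            cols = mask[i]
            L[i] = np.linalg.lstsq(R[cols], B[i, cols], rcond=None)[0]
        for j in range(n):   # refit each column factor likewise
            rows = mask[:, j]
            R[j] = np.linalg.lstsq(L[rows], B[rows, j], rcond=None)[0]
        misfit = np.linalg.norm((L @ R.T - B)[mask])
        if misfit <= sigma:
            break
    return L @ R.T, misfit

X_hat, misfit = factorized_complete(X_true * mask, mask, rank=3, sigma=1e-8)
rel_err = np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)
print(round(rel_err, 6))
```

Note that only the factor rank had to be chosen, echoing the point above that a known target error level leaves rank as the sole tuning parameter.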
|
1302.4888 | Exploiting Social Tags for Cross-Domain Collaborative Filtering | cs.IR cs.AI | One of the most challenging problems in recommender systems based on the
collaborative filtering (CF) concept is data sparseness, i.e., limited user
preference data is available for making recommendations. Cross-domain
collaborative filtering (CDCF) has been studied as an effective mechanism to
alleviate data sparseness of one domain using the knowledge about user
preferences from other domains. A key question to be answered in the context of
CDCF is what common characteristics can be deployed to link different domains
for effective knowledge transfer. In this paper, we assess the usefulness of
user-contributed (social) tags in this respect. We do so by means of the
Generalized Tag-induced Cross-domain Collaborative Filtering (GTagCDCF)
approach that we propose in this paper and that we developed based on the
general collective matrix factorization framework. Assessment is done by a
series of experiments, using publicly available CF datasets that represent
three cross-domain cases, i.e., two two-domain cases and one three-domain case.
A comparative analysis on two-domain cases involving GTagCDCF and several
state-of-the-art CDCF approaches indicates the increased benefit of using
social tags as representatives of explicit links between domains for CDCF as
compared to the implicit links deployed by the existing CDCF methods. In
addition, we show that users from different domains can already benefit from
GTagCDCF if they only share a few common tags. Finally, we use the three-domain
case to validate the robustness of GTagCDCF with respect to the scale of
datasets and the varying number of domains.
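The collective matrix factorization framework underlying such approaches can be sketched on synthetic data: two rating matrices that share the same users are factorized with a single shared user-factor matrix, so preference knowledge flows between domains. This is the general framework only, not the GTagCDCF model itself; all data and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two domains sharing the same users; synthetic rank-4 preference data.
n_users, n_a, n_b, k = 30, 20, 25, 4
U_true = rng.standard_normal((n_users, k))
Ra = U_true @ rng.standard_normal((k, n_a))   # domain A ratings
Rb = U_true @ rng.standard_normal((k, n_b))   # domain B ratings

U = rng.standard_normal((n_users, k))
lam = 1e-3
for _ in range(50):
    # Alternating ridge updates; item factors per domain, but the shared
    # user factors U are fit against BOTH domains jointly.
    Va = np.linalg.solve(U.T @ U + lam * np.eye(k), U.T @ Ra).T
    Vb = np.linalg.solve(U.T @ U + lam * np.eye(k), U.T @ Rb).T
    G = Va.T @ Va + Vb.T @ Vb + lam * np.eye(k)
    U = np.linalg.solve(G, Va.T @ Ra.T + Vb.T @ Rb.T).T

err_a = np.linalg.norm(U @ Va.T - Ra) / np.linalg.norm(Ra)
err_b = np.linalg.norm(U @ Vb.T - Rb) / np.linalg.norm(Rb)
print(round(err_a, 4), round(err_b, 4))
```

In tag-induced variants, additional matrices (e.g. user-tag or item-tag co-occurrences) would be factorized jointly in the same way, which is how shared tags act as explicit links between domains.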
|