| id | title | categories | abstract |
|---|---|---|---|
1204.0052
|
Unique Decoding of Plane AG Codes Revisited
|
cs.IT math.IT
|
We reformulate a recently introduced interpolation-based unique decoding
algorithm of algebraic geometry codes using the theory of Gr\"obner bases of
modules on the coordinate ring of the base curve. With the same decoding
performance, the new algorithm has a more conceptual description that lets us
better understand the majority voting procedure that is central to the
interpolation-based unique decoding.
|
1204.0065
|
MIMO Z Channel Interference Management
|
cs.IT math.IT
|
The MIMO Z channel is investigated in this paper. We focus on how to tackle
the interference that arises when different users send their codewords to their
corresponding receivers while only one user causes interference to the other.
We assume there are two transmitters and two receivers, each with two antennas.
We propose a low-complexity strategy that removes the interference while
allowing different users to transmit at the same time, with good performance.
Mathematical analysis is provided and simulations are given based on our
system.
|
1204.0067
|
Estimating Rigid Transformation Between Two Range Maps Using Expectation
Maximization Algorithm
|
cs.RO
|
We address the problem of estimating a rigid transformation between two point
sets, which is a key module in target tracking systems using Light Detection
And Ranging (LiDAR). A fast implementation of the Expectation-Maximization (EM)
algorithm is presented whose complexity is $O(N)$, with $N$ the number of scan
points.
|
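The rigid-fit subproblem at the core of such methods can be sketched with the classical Kabsch/Procrustes solution. This is a simplified sketch that assumes known one-to-one correspondences; the paper's EM algorithm additionally estimates soft correspondences, which is not reproduced here:

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate rotation R and translation t such that Q ~ P @ R.T + t,
    via the Kabsch method (known point correspondences assumed)."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [float(d)])
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

With correspondences fixed, this closed-form solution is linear in the number of points, which is consistent with an overall $O(N)$ cost per iteration.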
1204.0072
|
Generalized fuzzy rough sets based on fuzzy coverings
|
cs.IT math.IT
|
This paper further studies fuzzy rough sets based on fuzzy coverings. We
first present the notions of the lower and upper approximation operators based
on fuzzy coverings and derive their basic properties. To facilitate the
computation of fuzzy coverings for fuzzy covering rough sets, the concepts of
fuzzy subcoverings, reducible and intersectional elements, and union and
intersection operations are provided, and their properties are discussed in
detail. Afterwards, we introduce the concepts of consistent functions and fuzzy
covering mappings and provide a basic theoretical foundation for communication
between fuzzy covering information systems. In addition, the notion of
homomorphisms is proposed to reveal the relationship between fuzzy covering
information systems. We show how large-scale fuzzy covering information
systems and dynamic fuzzy covering information systems can be converted into
small-scale ones by means of homomorphisms. Finally, an illustrative example
shows that attribute reduction can be simplified significantly by our proposed
approach.
|
1204.0075
|
Weighted Approach to R\'enyi Entropy
|
cs.IT math.IT
|
R\'enyi entropy of order $\alpha$ is a general measure of entropy. In this
paper we derive estimates for the R\'enyi entropy of a mixture of sources in
terms of the entropies of the single sources. These relations allow us to
compute the R\'enyi entropy dimension of arbitrary order of a mixture of
measures. The key to obtaining these results is our new definition of the
weighted R\'enyi entropy. It is shown that the weighted entropy is equal to
the classical R\'enyi entropy.
|
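For reference, the classical (unweighted) R\'enyi entropy of order $\alpha \neq 1$ of a discrete distribution $p$ is $H_\alpha(p) = \frac{1}{1-\alpha}\log\sum_i p_i^\alpha$. A minimal sketch of that definition follows; the weighted variant introduced in the paper is not reproduced here:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(p) = log(sum_i p_i ** alpha) / (1 - alpha)
    for a discrete distribution p and order alpha != 1."""
    if abs(alpha - 1.0) < 1e-12:
        raise ValueError("use the Shannon entropy limit for alpha = 1")
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)
```

For the uniform distribution on $n$ points, $H_\alpha = \log n$ for every order $\alpha$, which is a quick sanity check on the definition.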
1204.0077
|
Asynchronous Games over Tree Architectures
|
cs.FL cs.SY
|
We consider the task of controlling in a distributed way a Zielonka
asynchronous automaton. Every process of a controller has access to its causal
past to determine the next set of actions it proposes to play. An action can be
played only if every process controlling this action proposes to play it. We
consider reachability objectives: every process should reach its set of final
states. We show that this control problem is decidable for tree architectures,
where every process can communicate with its parent, its children, and with the
environment. The complexity of our algorithm is l-fold exponential with l being
the height of the tree representing the architecture. We show that this is
unavoidable by showing that even for three processes the problem is
EXPTIME-complete, and that it is non-elementary in general.
|
1204.0078
|
Partition Reduction for Lossy Data Compression Problem
|
cs.IT math.IT
|
We consider the computational aspects of the lossy data compression problem,
where the compression error is determined by a cover of the data space. We
propose an algorithm which reduces the number of partitions needed to find the
entropy with respect to the compression error. In particular, we show that, in
the case of a finite cover, the entropy is attained on some partition. We give
an algorithmic construction of such a partition.
|
1204.0100
|
Roles of Ties in Spreading
|
physics.soc-ph cs.SI
|
Background: Controlling global epidemics in the real world and accelerating
information propagation in the artificial world are of great significance,
which have activated an upsurge in the studies on networked spreading dynamics.
Lots of efforts have been made to understand the impacts of macroscopic
statistics (e.g., degree distribution and average distance) and mesoscopic
structures (e.g., communities and rich clubs) on spreading processes while the
microscopic elements are less concerned. In particular, roles of ties are not
yet clear to the academic community.
Methodology/Principal Findings: Every edge is stamped with a strength that
is defined solely from the local topology. According to a weighted
susceptible-infected-susceptible model, the steady-state infected density and
the spreading speed are respectively optimized by adjusting the relationship
between an edge's strength and its spreading ability. Experiments on six real
networks show that the infected density is increased when strong ties are
favored in the spreading, while the speed is enhanced when weak ties are
favored. The significance of these findings is further demonstrated by
comparison with a null model.
Conclusions/Significance: Experimental results indicate that strong and weak
ties play distinguishable roles in spreading dynamics: the former enlarge the
infected density while the latter speed up the process. The proposed method
provides a quantitative way to reveal the qualitatively different roles of
ties, which could find applications in analyzing many networked dynamical
processes with multiple performance indices, such as synchronizability and
converging time in synchronization and throughput and delivering time in
transportation.
|
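One step of such a weighted SIS process can be sketched as below. The exponent beta coupling edge strength to spreading ability is the tuning knob the abstract describes; the normalization, the cap at probability one, and the parameter names are illustrative assumptions, not the paper's exact model:

```python
import random

def weighted_sis_step(adj, strength, infected, beta, mu, lam=1.0):
    """One synchronous step of a weighted SIS model.  An edge (i, j)
    with normalised strength w in (0, 1] transmits with probability
    min(1, lam * w ** beta): beta > 0 favours strong ties and beta < 0
    favours weak ties.  Infected nodes recover with probability mu."""
    new_infected = set()
    for i in infected:
        if random.random() >= mu:                  # i fails to recover
            new_infected.add(i)
        for j in adj[i]:                           # i tries to infect j
            w = strength[frozenset((i, j))]
            if random.random() < min(1.0, lam * w ** beta):
                new_infected.add(j)
    return new_infected
```

Sweeping beta and measuring the steady-state infected density versus the early-stage growth rate reproduces, in spirit, the strong-tie/weak-tie trade-off the abstract reports.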
1204.0128
|
From User Comments to On-line Conversations
|
cs.CY cs.SI physics.soc-ph
|
We present an analysis of user conversations in on-line social media and
their evolution over time. We propose a dynamic model that accurately predicts
the growth dynamics and structural properties of conversation threads. The
model successfully reconciles the differing observations that have been
reported in existing studies. By separating artificial factors from user
behaviors, we show that there are actually underlying rules in common for
on-line conversations in different social media websites. The results of our
model are supported by empirical measurements across a number of different
social media websites.
|
1204.0133
|
Progressive Gaussian Filtering
|
cs.SY cs.IT cs.RO math.IT
|
In this paper, we propose a progressive Bayesian procedure, where the
measurement information is continuously included into the given prior estimate
(although we perform observations at discrete time steps). The key idea is to
derive a system of first-order ordinary differential equations (ODEs) by
employing a new coupled density representation comprising a Gaussian density
and its Dirac Mixture approximation. The ODE system is used to continuously
track
the true non-Gaussian posterior by its best matching Gaussian approximation.
The performance of the new filter is evaluated in comparison with
state-of-the-art filters by means of a canonical benchmark example, the
discrete-time cubic sensor problem.
|
1204.0136
|
Near-Optimal Algorithms for Online Matrix Prediction
|
cs.LG cs.DS
|
In several online prediction problems of recent interest the comparison class
is composed of matrices with bounded entries. For example, in the online
max-cut problem, the comparison class is matrices which represent cuts of a
given graph and in online gambling the comparison class is matrices which
represent permutations over n teams. Another important example is online
collaborative filtering in which a widely used comparison class is the set of
matrices with a small trace norm. In this paper we isolate a property of
matrices, which we call (beta,tau)-decomposability, and derive an efficient
online learning algorithm that enjoys a regret bound of O*(sqrt(beta tau T))
for all problems in which the comparison class is composed of
(beta,tau)-decomposable matrices. By analyzing the decomposability of cut
matrices, triangular matrices, and low trace-norm matrices, we derive
near-optimal regret bounds for online max-cut, online gambling, and online
collaborative filtering. In particular, this resolves (in the affirmative) an
open problem posed by Abernethy (2010) and Kleinberg et al. (2010). Finally, we
derive lower bounds for the three problems and show that our upper bounds are
optimal up to logarithmic factors. In particular, our lower bound for the
online collaborative filtering problem resolves another open problem posed by
Shamir and Srebro (2011).
|
1204.0140
|
Roget's Thesaurus as a Lexical Resource for Natural Language Processing
|
cs.CL
|
WordNet proved that it is possible to construct a large-scale electronic
lexical database on the principles of lexical semantics. It has been accepted
and used extensively by computational linguists ever since it was released.
Inspired by WordNet's success, we propose as an alternative a similar resource,
based on the 1987 Penguin edition of Roget's Thesaurus of English Words and
Phrases.
Peter Mark Roget published his first Thesaurus over 150 years ago. Countless
writers, orators and students of the English language have used it.
Computational linguists have employed Roget's in Natural Language Processing
for almost 50 years; however, they hesitated to adopt Roget's Thesaurus
because a proper machine-tractable version was not available.
This dissertation presents an implementation of a machine-tractable version
of the 1987 Penguin edition of Roget's Thesaurus - the first implementation of
its kind to use an entire current edition. It explains the steps necessary for
taking a machine-readable file and transforming it into a tractable system.
This involves converting the lexical material into a format that can be more
easily exploited, identifying data structures and designing classes to
computerize the Thesaurus. Roget's organization is studied in detail and
contrasted with WordNet's.
We show two applications of the computerized Thesaurus: computing semantic
similarity between words and phrases, and building lexical chains in a text.
The experiments are performed using well-known benchmarks and the results are
compared to those of other systems that use Roget's, WordNet and statistical
techniques. Roget's has turned out to be an excellent resource for measuring
semantic similarity; lexical chains are easily built but more difficult to
evaluate. We also explain ways in which Roget's Thesaurus and WordNet can be
combined.
|
1204.0147
|
Covering Numbers for Convex Functions
|
cs.IT math.IT math.ST stat.ML stat.TH
|
In this paper we study the covering numbers of the space of convex and
uniformly bounded functions in multi-dimension. We find optimal upper and lower
bounds for the $\epsilon$-covering number of $\C([a, b]^d, B)$, in the
$L_p$-metric, $1 \le p < \infty$, in terms of the relevant constants, where $d
\geq 1$, $a < b \in \mathbb{R}$, $B>0$, and $\C([a,b]^d, B)$ denotes the set of
all convex functions on $[a, b]^d$ that are uniformly bounded by $B$. We
summarize previously known results on covering numbers for convex functions and
also provide alternate proofs of some known results. Our results have direct
implications in the study of rates of convergence of empirical minimization
procedures as well as optimal convergence rates in the numerous convexity
constrained function estimation problems.
|
1204.0156
|
Ranking Tweets Considering Trust and Relevance
|
cs.SI cs.IR
|
The increasing popularity of Twitter and other microblogs makes improved
trustworthiness and relevance assessment of microblogs ever more important. We
propose a method for ranking tweets that considers trustworthiness and
content-based popularity. The analysis of trustworthiness and popularity
exploits the implicit relationships between the tweets. We model the microblog
ecosystem as a three-layer graph consisting of: (i) users, (ii) tweets, and
(iii) web pages. We propose to derive trust and popularity scores for the
entities in these three layers and propagate the scores to tweets, considering
the inter-layer relations. Our preliminary evaluations show an improvement in
precision and trustworthiness over the baseline methods, with acceptable
computation times.
|
1204.0161
|
Rebels Lead to the Doctrine of the Mean: Opinion Dynamic in a
Heterogeneous DeGroot Model
|
cs.SI physics.soc-ph
|
We study an extension of the DeGroot model where part of the players may be
rebels. The updating rule for rebels is quite different from that of normal
players (who are referred to as conformists): at each step a rebel first
takes the opposite value of the weighted average of her neighbors' opinions,
i.e. 1 minus that average (the opinion space is assumed to be [0,1] as usual),
and then updates her opinion by taking another weighted average between that
value and her own opinion in the last round. We find that the effect of rebels
is rather significant: as long as there is at least one rebel in every closed
and strongly connected group, under very weak conditions, the opinion of each
player in the whole society will eventually tend to 0.5.
|
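The update rule described above can be sketched directly. This is a toy implementation under the assumptions that W is a row-stochastic weight matrix and that lam is a rebel's weight on the flipped average (the paper's exact weighting scheme may differ):

```python
def degroot_step(x, W, rebel, lam):
    """One synchronous update of the heterogeneous DeGroot model.
    A conformist moves to the weighted average of opinions; a rebel
    first flips that average to 1 - avg, then mixes it with her own
    previous opinion using weight lam."""
    n = len(x)
    out = []
    for i in range(n):
        avg = sum(W[i][j] * x[j] for j in range(n))
        if rebel[i]:
            out.append(lam * (1.0 - avg) + (1.0 - lam) * x[i])
        else:
            out.append(avg)
    return out
```

Iterating this on a small strongly connected example with one rebel drives every opinion to 0.5, illustrating the "doctrine of the mean" in the title.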
1204.0163
|
Fashion, Cooperation, and Social Interactions
|
cs.MA cs.SI physics.soc-ph
|
Fashion plays such a crucial role in the evolution of culture and society
that it is regarded as second nature to human beings. Also, its impact on the
economy is quite nontrivial. On what is fashionable, interestingly, there are
two viewpoints that are both extremely widespread but almost opposite:
conformists think that what is popular is fashionable, while rebels believe
that being different is the essence. Fashion color is fashionable in the first
sense, and Lady Gaga in the second. We investigate a model where the population
consists of the afore-mentioned two groups of people that are located on social
networks (a spatial cellular automata network and small-world networks). This
model captures two fundamental kinds of social interactions (coordination and
anti-coordination) simultaneously, and is also of independent interest to game
theory: it is a hybrid model of pure competition and pure cooperation. This is
true because when a conformist meets a rebel, they play the zero-sum matching
pennies game, which is pure competition. When two conformists (or two rebels)
meet, they play the (anti-)coordination game, which is pure cooperation.
Simulations show that simple social interactions greatly promote cooperation:
in most cases people reach an extraordinarily high level of cooperation
through a selfish, myopic, naive, and local interaction dynamic (the
best-response dynamic). We find that the degree of synchronization also plays
a critical role,
but mostly on the negative side. Four indices, namely cooperation degree,
average satisfaction degree, equilibrium ratio and complete ratio, are defined
and applied to measure people's cooperation levels from various angles. Phase
transition, as well as emergence of many interesting geographic patterns in the
cellular automata network, is also observed.
|
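The hybrid game structure is easy to pin down as code. The 0/1 payoff normalization below is illustrative; the paper's exact payoff matrices may differ:

```python
def payoff(my_type, my_action, other_action):
    """Stage-game payoff for one player.  A conformist earns 1 by
    matching the neighbour's action; a rebel earns 1 by differing.
    Conformist vs. conformist is thus a coordination game, rebel vs.
    rebel an anti-coordination game, and a mixed pair plays matching
    pennies (their payoffs always sum to a constant)."""
    if my_type == "conformist":
        return 1 if my_action == other_action else 0
    return 1 if my_action != other_action else 0
```

A best-response dynamic on a lattice or small-world network then only needs each player to pick the action maximizing the sum of these payoffs against her current neighbours.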
1204.0165
|
Analytical Models for Power Networks: The case of the Western US and
ERCOT grids
|
cs.SI physics.soc-ph stat.OT
|
The topological structure of the power grid plays a key role in the reliable
delivery of electricity and price settlement in the electricity market.
Incorporation of new energy sources and loads into the grid over time has led
to its structural and geographical expansion and can affect its stable
operation. This paper presents an intuitive analytical model for the temporal
evolution of large grids and uses it to understand common structural features
observed in grids across America. In particular, key graph parameters like
degree distribution, graph diameter, betweenness centralities, eigen-spread and
clustering coefficients, as well as graph processes like infection propagation
are used to quantify the model's benefits through comparison with the Western
US and ERCOT power grids. The most significant contribution of the developed
model is its analytical tractability, which provides a closed-form expression
for the nodal degree distribution observed in large grids. The discussed model
can be used to generate realistic test cases to analyze topological effects on
grid functioning and new grid technologies.
|
1204.0166
|
Worst-Case Robust Multiuser Transmit Beamforming Using Semidefinite
Relaxation: Duality and Implications
|
cs.IT math.IT
|
This paper studies a downlink multiuser transmit beamforming design under
spherical channel uncertainties, using a worst-case robust formulation. This
robust design problem is nonconvex. Recently, a convex approximation
formulation based on semidefinite relaxation (SDR) has been proposed to handle
the problem. Curiously, simulation results have consistently indicated that SDR
can attain the global optimum of the robust design problem. This paper intends
to provide some theoretical insights into this important empirical finding. Our
main result is a dual representation of the SDR formulation, which reveals an
interesting linkage to a different robust design problem, and the possibility
of SDR optimality.
|
1204.0168
|
Modeling Infection with Multi-agent Dynamics
|
stat.AP cs.MA cs.SI physics.soc-ph
|
Developing the ability to comprehensively study infections in small
populations enables us to improve epidemic models and better advise individuals
about potential risks to their health. We currently have a limited
understanding of how infections spread within a small population because it has
been difficult to closely track an infection within a complete community. The
paper presents data closely tracking the spread of an infection centered on a
student dormitory, collected by leveraging the residents' use of cellular
phones. The data are based on daily symptom surveys taken over a period of four
months and proximity tracking through cellular phones. We demonstrate that
using a Bayesian, discrete-time multi-agent model of infection to model
real-world symptom reports and proximity tracking records gives us important
insights about infections in small populations.
|
1204.0170
|
A New Approach to Speeding Up Topic Modeling
|
cs.LG cs.IR
|
Latent Dirichlet allocation (LDA) is a widely-used probabilistic topic
modeling paradigm that has recently found many applications in computer vision
and computational biology. In this paper, we propose a fast and accurate batch
algorithm, active belief propagation (ABP), for training LDA. Batch LDA
algorithms usually require repeated scanning of the entire corpus and
searching of the complete topic space. For massive corpora with a large number
of topics, the training iteration of batch LDA algorithms is often inefficient
and time-consuming. To accelerate training, ABP actively scans a subset of the
corpus and searches a subset of the topic space, thereby saving enormous
training time in each iteration. To ensure accuracy, ABP selects
only those documents and topics that contribute to the largest residuals within
the residual belief propagation (RBP) framework. On four real-world corpora,
ABP performs around $10$ to $100$ times faster than state-of-the-art batch LDA
algorithms with a comparable topic modeling accuracy.
|
1204.0171
|
A New Fuzzy Stacked Generalization Technique and Analysis of its
Performance
|
cs.LG cs.CV
|
In this study, a new Stacked Generalization technique called Fuzzy Stacked
Generalization (FSG) is proposed to minimize the difference between the
N-sample and large-sample classification error of the Nearest Neighbor
classifier. The proposed FSG employs a new hierarchical distance learning
strategy to minimize this error difference. For this purpose, we first
construct an ensemble of base-layer fuzzy k-Nearest Neighbor (k-NN)
classifiers, each of which receives a different feature set extracted from the
same sample set. The fuzzy membership values computed in the decision space of
each fuzzy k-NN classifier are concatenated to form the feature vectors of a
fusion space. Finally, these feature vectors are fed to a meta-layer
classifier to learn the degree of accuracy of the decisions of the base-layer
classifiers for meta-layer classification. Rather than the power of the
individual base-layer classifiers, the diversity and cooperation of the
classifiers become the important issues for improving the overall performance
of the proposed FSG. A weak base-layer classifier may boost the overall
performance more than a strong classifier if it is capable of recognizing, in
its own feature space, the samples that are not recognized by the rest of the
classifiers. The experiments explore the type of collaboration among the
individual classifiers required for an improved performance of the suggested
architecture. Experiments on multiple-feature real-world datasets show that
the proposed FSG performs better than state-of-the-art ensemble learning
algorithms such as AdaBoost, Random Subspace and Rotation Forest. On the other
hand, comparable performances are observed in the experiments on
single-feature multi-attribute datasets.
|
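The base-layer building block, fuzzy membership values from a k-NN classifier, can be sketched as follows. This uses a simplified inverse-distance weighting; the paper's exact fuzzy membership function and the meta-layer classifier are omitted. Memberships computed from each feature set would then be concatenated into the fusion-space feature vector:

```python
import numpy as np

def fuzzy_knn_memberships(X_train, y_train, X, k=3, n_classes=2):
    """Per-class membership values from a fuzzy k-NN: the k nearest
    training samples vote with inverse-distance weights, and the votes
    are normalised so each output row sums to one."""
    M = np.zeros((len(X), n_classes))
    for i, x in enumerate(X):
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + 1e-9)        # avoid division by zero
        for weight, idx in zip(w, nearest):
            M[i, y_train[idx]] += weight
        M[i] /= M[i].sum()
    return M
```

Because the rows sum to one, concatenating the membership vectors from several base classifiers yields a bounded, comparable fusion-space representation for the meta-layer classifier.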
1204.0173
|
On The Achievable Rate Region of a New Wiretap Channel With Side
Information
|
cs.IT math.IT
|
A new applicable wiretap channel with separated side information is
considered here, which consists of a sender, a legitimate receiver and a
wiretapper. In the considered scenario, the links from the transmitter to the
legitimate receiver and to the eavesdropper experience different conditions or
channel states. Thus, the legitimate receiver and the wiretapper listen to the
transmitted signal through channels with different channel states, which may
be correlated with each other. It is assumed that the transmitter knows the
state of the main channel non-causally and uses this knowledge to encode its
message. The state of the wiretap channel is not known anywhere. An
achievable equivocation rate region is derived for this model and is compared
to the existing works. In some special cases, the results are extended to the
Gaussian wiretap channel.
|
1204.0176
|
Using Fuzzy Logic to Evaluate Normalization Completeness for An Improved
Database Design
|
cs.DB
|
In this paper, a new approach to measuring normalization completeness for
conceptual models is introduced using quantitative fuzzy functionality. We
measure the normalization completeness of the conceptual model in two steps.
In the first step, different normalization techniques are analyzed up to
Boyce-Codd Normal Form (BCNF) to find the current normal form of the relation.
In the second step, fuzzy membership values are used to scale the normal form
between 0 and 1. Case studies are presented to explain the schema
transformation rules and measurements. Normalization completeness is measured
by considering the completeness attributes, the preventing attributes of the
functional dependencies, and the total number of attributes: if a functional
dependency is non-preventing, then its attributes are completeness attributes,
while the attributes of a functional dependency that prevent moving to the
next normal form are called preventing attributes.
|
1204.0179
|
Service-Oriented Architecture for Weaponry and Battle Command and
Control Systems in Warfighting
|
cs.RO
|
The military is one of many industries that are more computer-dependent than
ever before, from soldiers with computerized weapons and tactical wireless
devices,
to commanders with advanced battle management, command and control systems.
Fundamentally, command and control is the process of planning, monitoring, and
commanding military personnel, weaponry equipment, and combating vehicles to
execute military missions. In fact, command and control systems are being
revolutionized as warfighting changes into cyber, technology, information,
and unmanned warfare. As a result, a new design model that
supports scalability, reusability, maintainability, survivability, and
interoperability is needed to allow commanders, hundreds of miles away from the
battlefield, to plan, monitor, evaluate, and control the war events in a
dynamic, robust, agile, and reliable manner. This paper proposes a
service-oriented architecture for weaponry and battle command and control
systems, made out of loosely-coupled and distributed web services. The proposed
architecture consists of three elementary tiers: the client tier that
corresponds to any computing military equipment; the server tier that
corresponds to the web services that deliver the basic functionalities for the
client tier; and the middleware tier that corresponds to an enterprise service
bus that promotes interoperability between all the interconnected entities. A
command and control system was simulated and tested, and it successfully
exhibited the desired features of SOA. Future research can improve upon the
proposed architecture so that it supports encryption for securing the exchange
of data between the various communicating entities of the system.
|
1204.0181
|
Expert PC Troubleshooter With Fuzzy-Logic And Self-Learning Support
|
cs.AI
|
Expert systems use human knowledge often stored as rules within the computer
to solve problems that generally would entail human intelligence. Today, with
information systems turning out to be more pervasive and with the myriad
advances in information technologies, automating computer fault diagnosis is
becoming so fundamental that soon every enterprise will have to adopt it. This
paper proposes an expert system called Expert PC Troubleshooter for diagnosing
computer problems. The system is composed of a user interface, a rule-base, an
inference engine, and an expert interface. Additionally, the system features a
fuzzy-logic module to troubleshoot POST beep errors, and an intelligent agent
that assists in the knowledge acquisition process. The proposed system is
meant to automate the maintenance, repair, and operations (MRO) process, and
free up human technicians from manually performing routine, laborious, and
time-consuming maintenance tasks. As future work, the proposed system is to be
parallelized so as to boost its performance and speed up its various
operations.
|
1204.0182
|
Hybrid Information Retrieval Model For Web Images
|
cs.IR
|
The Big Bang of the Internet in the early 1990s dramatically increased the
number of images being distributed and shared over the web. As a result, image
information retrieval systems were developed to index and retrieve image files
spread over the Internet. Most of these systems are keyword-based and search
for images based on their textual metadata; thus, they are imprecise, as
describing an image in a human language is inherently vague. There also exist
content-based image retrieval systems, which search for images based on their
visual information. However, content-based systems are still immature and not
very effective, as they suffer from low retrieval recall/precision rates.
This paper proposes a new hybrid image information retrieval model for indexing
and retrieving web images published in HTML documents. The distinguishing mark
of the proposed model is that it is based on both graphical content and textual
metadata. The graphical content is denoted by color features and color
histogram of the image; while textual metadata are denoted by the terms that
surround the image in the HTML document, more particularly, the terms that
appear in the tags p, h1, and h2, in addition to the terms that appear in the
image's alt attribute, filename, and class-label. Moreover, this paper
presents a new term weighting scheme called VTF-IDF, short for Variable Term
Frequency-Inverse Document Frequency, which, unlike traditional schemes,
exploits the HTML tag structure and assigns an extra bonus weight to terms
that appear within certain particular HTML tags that are correlated with the
semantics of the image. Experiments conducted to evaluate the proposed IR
model
showed a high retrieval precision rate that outpaced other current models.
|
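The idea behind a tag-sensitive weighting scheme of this kind can be sketched as follows. The per-tag bonus weights below are illustrative placeholders, not the values used in the paper:

```python
import math

# Hypothetical per-tag bonus weights (illustrative only).
TAG_BONUS = {"alt": 3.0, "h1": 2.5, "h2": 2.0, "filename": 2.0, "p": 1.5}

def vtf_idf(occurrence_tags, doc_freq, n_docs):
    """Variable TF-IDF: each occurrence of a term contributes the
    bonus weight of the HTML tag it appears in (default 1.0) instead
    of a flat count of 1; the sum is then scaled by a standard IDF."""
    vtf = sum(TAG_BONUS.get(tag, 1.0) for tag in occurrence_tags)
    idf = math.log(n_docs / (1.0 + doc_freq))
    return vtf * idf
```

When every occurrence falls in an unlisted tag, the score degrades gracefully to an ordinary TF-IDF, so the scheme only re-ranks terms that benefit from semantically informative tags.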
1204.0183
|
Neural Network Model for Path-Planning of Robotic Rover Systems
|
cs.NE
|
Today, robotics is an auspicious and fast-growing branch of technology that
involves the manufacturing, design, and maintenance of robot machines that can
operate in an autonomous fashion and can be used in a wide variety of
applications including space exploration, weaponry, household, and
transportation. More particularly, in space applications, a common type of
robot has been in widespread use in recent years: the planetary rover, a
robot vehicle that moves across the surface of a planet and
conducts detailed geological studies pertaining to the properties of the
landing cosmic environment. However, rovers are always impeded by obstacles
along the traveling path which can destabilize the rover's body and prevent it
from reaching its goal destination. This paper proposes an ANN model that
allows rover systems to carry out autonomous path-planning to successfully
navigate through challenging planetary terrains and reach their goal locations
while avoiding dangerous obstacles. The proposed ANN is a multilayer network
made out of three layers: an input, a hidden, and an output layer. The network
is trained in offline mode using back-propagation supervised learning
algorithm. A software-simulated rover was tested, and the experiments revealed
that it was able to follow the safest trajectory despite existing obstacles.
As future work, the proposed ANN is to be parallelized so as to speed up the
execution time of the training process.
|
1204.0184
|
Parallel Spell-Checking Algorithm Based on Yahoo! N-Grams Dataset
|
cs.CL
|
Spell-checking is the process of detecting and sometimes providing
suggestions for incorrectly spelled words in a text. Basically, the larger the
dictionary of a spell-checker, the higher the error detection rate; otherwise,
misspellings would pass undetected. Unfortunately, traditional
dictionaries suffer from out-of-vocabulary and data sparseness problems as they
do not encompass large vocabulary of words indispensable to cover proper names,
domain-specific terms, technical jargons, special acronyms, and terminologies.
As a result, spell-checkers will incur low error detection and correction rate
and will fail to flag all errors in the text. This paper proposes a new
parallel shared-memory spell-checking algorithm that uses rich real-world word
statistics from Yahoo! N-Grams Dataset to correct non-word and real-word errors
in computer text. Essentially, the proposed algorithm can be divided into three
sub-algorithms that run in a parallel fashion: the error detection algorithm
that detects misspellings, the candidate generation algorithm that generates
correction suggestions, and the error correction algorithm that performs
contextual error correction. Experiments conducted on a set of text articles
containing misspellings showed a remarkable spelling error correction rate
that resulted in a radical reduction of both non-word and real-word errors in
electronic text. In a further study, the proposed algorithm is to be optimized
for message-passing systems so as to become more flexible and less costly to
scale over distributed machines.
|
1204.0185
|
Service-Oriented Architecture for Space Exploration Robotic Rover
Systems
|
cs.RO
|
Currently, industrial sectors are transforming their business processes into
e-services and component-based architectures to build flexible, robust, and
scalable systems, and reduce integration-related maintenance and development
costs. Robotics is yet another promising and fast-growing industry that deals
with the creation of machines that operate in an autonomous fashion and serve
for various applications including space exploration, weaponry, laboratory
research, and manufacturing. In space exploration, the most common type of
robot is the planetary rover, which moves across the surface of a planet and
conducts a thorough geological study of the celestial surface. This
type of rover system is still ad-hoc in that it incorporates its software into
its core hardware making the whole system cohesive, tightly-coupled, more
susceptible to shortcomings, less flexible, hard to be scaled and maintained,
and impossible to be adapted to other purposes. This paper proposes a
service-oriented architecture for space exploration robotic rover systems made
out of loosely-coupled and distributed web services. The proposed architecture
consists of three elementary tiers: the client tier that corresponds to the
actual rover; the server tier that corresponds to the web services; and the
middleware tier that corresponds to an Enterprise Service Bus which promotes
interoperability between the interconnected entities. The niche of this
architecture is that the rover's software components are decoupled and isolated
from the rover's body and possibly deployed at a distant location. A
service-oriented architecture promotes integrate-ability, scalability,
reusability, maintainability, and interoperability for client-to-server
communication.
|
1204.0186
|
Semantic-Sensitive Web Information Retrieval Model for HTML Documents
|
cs.IR
|
With the advent of the Internet, a new era of digital information exchange
has begun. Currently, the Internet encompasses more than five billion online
sites and this number is exponentially increasing every day. Fundamentally,
Information Retrieval (IR) is the science and practice of storing documents and
retrieving information from within these documents. Mathematically, IR systems
are at the core based on a feature vector model coupled with a term weighting
scheme that weights terms in a document according to their significance with
respect to the context in which they appear. Practically, Vector Space Model
(VSM), Term Frequency (TF), and Inverse Document Frequency (IDF) are among the
long-established techniques employed in mainstream IR systems. However, present
IR models only target generic-type text documents, in that, they do not
consider specific formats of files such as HTML web documents. This paper
proposes a new semantic-sensitive web information retrieval model for HTML
documents. It consists of a vector model called SWVM and a weighting scheme
called BTF-IDF, particularly designed to support the indexing and retrieval of
HTML web documents. The chief advantage of the proposed model is that it
assigns extra weights for terms that appear in certain pre-specified HTML tags
that are correlated to the semantics of the document. Additionally, the model
is semantic-sensitive as it generates synonyms for every term being indexed and
later weights them appropriately to increase the likelihood of retrieving
documents with similar context but different vocabulary terms. Experiments
conducted revealed a significant enhancement in the precision of web IR systems
and a radical increase in the number of relevant documents being retrieved. As
further research, the proposed model is to be upgraded so as to support the
indexing and retrieval of web images in multimedia-rich web documents.
|
1204.0188
|
OCR Context-Sensitive Error Correction Based on Google Web 1T 5-Gram
Data Set
|
cs.CL cs.IR
|
Since the dawn of the computing era, information has been represented
digitally so that it can be processed by electronic computers. Paper books and
documents were abundant and widely being published at that time; and hence,
there was a need to convert them into digital format. OCR, short for Optical
Character Recognition was conceived to translate paper-based books into digital
e-books. Regrettably, OCR systems are still erroneous and inaccurate as they
produce misspellings in the recognized text, especially when the source
document is of low printing quality. This paper proposes a post-processing OCR
context-sensitive error correction method for detecting and correcting non-word
and real-word OCR errors. The cornerstone of this proposed approach is the use
of Google Web 1T 5-gram data set as a dictionary of words to spell-check OCR
text. The Google data set incorporates a very large vocabulary and word
statistics entirely reaped from the Internet, making it a reliable source to
perform dictionary-based error correction. The core of the proposed solution is
a combination of three algorithms: The error detection, candidate spellings
generator, and error correction algorithms, which all exploit information
extracted from Google Web 1T 5-gram data set. Experiments conducted on scanned
images written in different languages showed a substantial improvement in the
OCR error correction rate. As future developments, the proposed algorithm is to
be parallelised so as to support parallel and distributed computing
architectures.
|
1204.0191
|
OCR Post-Processing Error Correction Algorithm using Google Online
Spelling Suggestion
|
cs.CL
|
With the advent of digital optical scanners, a lot of paper-based books,
textbooks, magazines, articles, and documents are being transformed into an
electronic version that can be manipulated by a computer. For this purpose,
OCR, short for Optical Character Recognition was developed to translate scanned
graphical text into editable computer text. Unfortunately, OCR is still
imperfect as it occasionally mis-recognizes letters and falsely identifies
scanned text, leading to misspellings and linguistics errors in the OCR output
text. This paper proposes a post-processing context-based error correction
algorithm for detecting and correcting OCR non-word and real-word errors. The
proposed algorithm is based on Google's online spelling suggestion which
harnesses an internal database containing a huge collection of terms and word
sequences gathered from all over the web, convenient to suggest possible
replacements for words that have been misspelled during the OCR process.
Experiments carried out revealed a significant improvement in OCR error
correction rate. Future research can improve upon the proposed algorithm so
much so that it can be parallelized and executed over multiprocessing
platforms.
|
1204.0198
|
Game arguments in computability theory and algorithmic information
theory
|
math.LO cs.GT cs.IT math.IT
|
We provide some examples showing how game-theoretic arguments can be used in
computability theory and algorithmic information theory: unique numbering
theorem (Friedberg), the gap between conditional complexity and total
conditional complexity, the Epstein--Levin theorem, and a (yet unpublished)
result of Muchnik and Vyugin.
|
1204.0199
|
Delay-aware BS Discontinuous Transmission Control and User Scheduling
for Energy Harvesting Downlink Coordinated MIMO Systems
|
cs.SY
|
In this paper, we propose a two-timescale delay-optimal base station
Discontinuous Transmission (BS-DTX) control and user scheduling for downlink
coordinated MIMO systems with energy harvesting capability. To reduce the
complexity and signaling overhead in practical systems, the BS-DTX control is
adaptive to both the energy state information (ESI) and the data queue state
information (QSI) over a longer timescale. The user scheduling is adaptive to
the ESI, the QSI and the channel state information (CSI) over a shorter
timescale. We show that the two-timescale delay-optimal control problem can be
modeled as an infinite horizon average cost Partially Observed Markov Decision
Problem (POMDP), which is well-known to be a difficult problem in general. By
using sample-path analysis and exploiting specific problem structure, we first
obtain some structural results on the optimal control policy and derive an
equivalent Bellman equation with reduced state space. To reduce the complexity
and facilitate distributed implementation, we obtain a delay-aware distributed
solution with the BS-DTX control at the BS controller (BSC) and the user
scheduling at each cluster manager (CM) using approximate dynamic programming
and distributed stochastic learning. We show that the proposed distributed
two-timescale algorithm converges almost surely. Furthermore, using queueing
theory, stochastic geometry and optimization techniques, we derive sufficient
conditions for the data queues to be stable in the coordinated MIMO network and
discuss various design insights.
|
1204.0201
|
Limit complexities revisited [once more]
|
math.LO cs.IT math.IT
|
The main goal of this article is to put some known results in a common
perspective and to simplify their proofs.
We start with a simple proof of a result of Vereshchagin saying that
$\limsup_n C(x|n)$ equals $C^{0'}(x)$. Then we use the same argument to prove
similar results for prefix complexity and a priori probability on a binary
tree, to prove Conidis' theorem about limits of effectively open sets, and also to
improve the results of Muchnik about limit frequencies. As a by-product, we get
a criterion of 2-randomness proved by Miller: a sequence $X$ is 2-random if and
only if there exists $c$ such that any prefix $x$ of $X$ is a prefix of some
string $y$ such that $C(y)\ge |y|-c$. (In the 1960s this property was
suggested by Kolmogorov as one of the possible randomness definitions.) We also get
another 2-randomness criterion by Miller and Nies: $X$ is 2-random if and only
if $C(x)\ge |x|-c$ for some $c$ and infinitely many prefixes $x$ of $X$.
This is a modified version of our old paper that contained a weaker (and
cumbersome) version of Conidis' result, and the proof used low basis theorem
(in quite a strange way). The full version was formulated there as a
conjecture. This conjecture was later proved by Conidis. Bruno Bauwens
(personal communication) noted that the proof can be obtained also by a simple
modification of our original argument, and we reproduce Bauwens' argument with
his permission.
|
1204.0245
|
Roget's Thesaurus and Semantic Similarity
|
cs.CL
|
We have implemented a system that measures semantic similarity using a
computerized 1987 Roget's Thesaurus, and evaluated it by performing a few
typical tests. We compare the results of these tests with those produced by
WordNet-based similarity measures. One of the benchmarks is Miller and Charles'
list of 30 noun pairs to which human judges had assigned similarity measures.
We correlate these measures with those computed by several NLP systems. The 30
pairs can be traced back to Rubenstein and Goodenough's 65 pairs, which we have
also studied. Our Roget's-based system gets correlations of .878 for the
smaller and .818 for the larger list of noun pairs; this is quite close to the
.885 that Resnik obtained when he employed humans to replicate the Miller and
Charles experiment. We further evaluate our measure by using Roget's and
WordNet to answer 80 TOEFL, 50 ESL and 300 Reader's Digest questions: the
correct synonym must be selected amongst a group of four words. Our system gets
78.75%, 82.00% and 74.33% of the questions respectively.
|
1204.0248
|
Small polygons and toric codes
|
math.CO cs.IT math.IT
|
We describe two different approaches to making systematic classifications of
plane lattice polygons, and recover the toric codes they generate, over small
fields, where these match or exceed the best known minimum distance. This
includes a [36,19,12]-code over F_7 whose minimum distance 12 exceeds that of
all previously known codes.
|
1204.0255
|
Keyphrase Extraction : Enhancing Lists
|
cs.CL cs.IR
|
This paper proposes some modest improvements to Extractor, a state-of-the-art
keyphrase extraction system, by using a terabyte-sized corpus to estimate the
informativeness and semantic similarity of keyphrases. We present two
techniques to improve the organization and remove outliers of lists of
keyphrases. The first is a simple ordering according to their occurrences in
the corpus; the second is clustering according to semantic similarity.
Evaluation issues are discussed. We present a novel technique of comparing
extracted keyphrases to a gold standard which relies on semantic similarity
rather than string matching or an evaluation involving human judges.
|
1204.0257
|
Not As Easy As It Seems: Automating the Construction of Lexical Chains
Using Roget's Thesaurus
|
cs.CL
|
Morris and Hirst present a method of linking significant words that are about
the same topic. The resulting lexical chains are a means of identifying
cohesive regions in a text, with applications in many natural language
processing tasks, including text summarization. The first lexical chains were
constructed manually using Roget's International Thesaurus. Morris and Hirst
wrote that automation would be straightforward given an electronic thesaurus.
All applications so far have used WordNet to produce lexical chains, perhaps
because adequate electronic versions of Roget's were not available until
recently. We discuss the building of lexical chains using an electronic version
of Roget's Thesaurus. We implement a variant of the original algorithm, and
explain the necessary design decisions. We include a comparison with other
implementations.
|
1204.0258
|
Roget's Thesaurus: a Lexical Resource to Treasure
|
cs.CL
|
This paper presents the steps involved in creating an electronic lexical
knowledge base from the 1987 Penguin edition of Roget's Thesaurus. Semantic
relations are labelled with the help of WordNet. The two resources are compared
in a qualitative and quantitative manner. Differences in the organization of
the lexical material are discussed, as well as the possibility of merging both
resources.
|
1204.0262
|
Managing contextual artificial neural networks with a service-based
mediator
|
cs.NE
|
Today, a wide variety of probabilistic and expert AI systems are used to
analyze real-world inputs such as unstructured text, sounds, images, and statistical
data. However, all these systems exist on different platforms, with different
implementations, and with very different, often very specific goals in mind.
This paper introduces a concept for a mediator framework for such systems and
seeks to show several architectures which would support it, potential benefits
in combining the signals of disparate networks for formalized, high level logic
and signal processing, and its possible academic and industrial uses.
|
1204.0266
|
Uncovering disassortativity in large scale-free networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Mixing patterns in large self-organizing networks, such as the Internet, the
World Wide Web, social and biological networks are often characterized by
degree-degree dependencies between neighbouring nodes. In this paper we propose
a new way of measuring degree-degree dependencies. One of the problems with the
commonly used assortativity coefficient is that in disassortative networks its
magnitude decreases with the network size. We mathematically explain this
phenomenon and validate the results on synthetic graphs and real-world network
data. As an alternative, we suggest using rank correlation measures such as
Spearman's rho. Our experiments convincingly show that Spearman's rho produces
consistent values in graphs of different sizes but similar structure, and it is
able to reveal strong (positive or negative) dependencies in large graphs. In
particular, we discover much stronger negative degree-degree dependencies in
Web graphs than was previously thought. Rank correlations allow us to compare
the assortativity of networks of different sizes, which is impossible with the
assortativity coefficient due to its genuine dependence on the network size. We
conclude that rank correlations provide a suitable and informative method for
uncovering network mixing patterns.
|
1204.0267
|
Computational science and re-discovery: open-source implementations of
ellipsoidal harmonics for problems in potential theory
|
cs.CE cs.MS physics.chem-ph physics.comp-ph
|
We present two open-source (BSD) implementations of ellipsoidal harmonic
expansions for solving problems of potential theory using separation of
variables. Ellipsoidal harmonics are used surprisingly infrequently,
considering their substantial value for problems ranging in scale from
molecules to the entire solar system. In this article, we suggest two possible
reasons for the paucity relative to spherical harmonics. The first is
essentially historical---ellipsoidal harmonics developed during the late 19th
century and early 20th, when it was found that only the lowest-order harmonics
are expressible in closed form. Each higher-order term requires the solution of
an eigenvalue problem, and tedious manual computation seems to have discouraged
applications and theoretical studies. The second explanation is practical: even
with modern computers and accurate eigenvalue algorithms, expansions in
ellipsoidal harmonics are significantly more challenging to compute than those
in Cartesian or spherical coordinates. The present implementations reduce the
"barrier to entry" by providing an easy and free way for the community to begin
using ellipsoidal harmonics in actual research. We demonstrate our
implementation using the specific and physiologically crucial problem of how
charged proteins interact with their environment, and ask: what other
analytical tools await re-discovery in an era of inexpensive computation?
|
1204.0274
|
Learning from Humans as an I-POMDP
|
cs.RO cs.AI
|
The interactive partially observable Markov decision process (I-POMDP) is a
recently developed framework which extends the POMDP to the multi-agent setting
by including agent models in the state space. This paper argues for formulating
the problem of an agent learning interactively from a human teacher as an
I-POMDP, where the agent \emph{programming} to be learned is captured by random
variables in the agent's state space, all \emph{signals} from the human teacher
are treated as observed random variables, and the human teacher, modeled as a
distinct agent, is explicitly represented in the agent's state space. The main
benefits of this approach are: i. a principled action selection mechanism, ii.
a principled belief update mechanism, iii. support for the most common teacher
\emph{signals}, and iv. the anticipated production of complex beneficial
interactions. The proposed formulation, its benefits, and several open
questions are presented.
|
1204.0280
|
Framing Human-Robot Task Communication as a POMDP
|
cs.RO
|
As general purpose robots become more capable, pre-programming of all tasks
at the factory will become less practical. We would like for non-technical
human owners to be able to communicate, through interaction with their robot,
the details of a new task; we call this interaction "task communication".
During task communication the robot must infer the details of the task from
unstructured human signals and it must choose actions that facilitate this
inference. In this paper we propose the use of a partially observable Markov
decision process (POMDP) for representing the task communication problem; with
the unobservable task details and unobservable intentions of the human teacher
captured in the state, with all signals from the human represented as
observations, and with the cost function chosen to penalize uncertainty. We
work through an example representation of task communication as a POMDP, and
present results from a user experiment on an interactive virtual robot,
compared with a human controlled virtual robot, for a task involving a single
object movement and binary approval input from the teacher. The results suggest
that the proposed POMDP representation produces robots that are robust to
teacher error, that can accurately infer task details, and that are perceived
to be intelligent.
|
1204.0281
|
The memory centre
|
cs.IT math.IT
|
Let $x \in \mathbb{R}$ be given. As we know, the amount of bits needed to
binary code $x$ with given accuracy $h \in \mathbb{R}$ is approximately
$m_{h}(x) \approx \log_{2}(\max\{1, |\frac{x}{h}|\})$. We consider the problem
of where we should translate the origin $a$ so that the mean amount of bits
needed to code a randomly chosen element from a realization of a random
variable $X$ is minimal. In other words, we want to find $a \in \mathbb{R}$
such that $$ \mathbb{R} \ni a \to \mathrm{E}(m_{h}(X-a)) $$ attains its minimum.
|
1204.0301
|
Tree Codes Improve Convergence Rate of Consensus Over Erasure Channels
|
math.OC cs.IT math.IT
|
We study the problem of achieving average consensus between a group of agents
over a network with erasure links. In the context of consensus problems, the
unreliability of communication links between nodes has been traditionally
modeled by allowing the underlying graph to vary with time. In other words,
depending on the realization of the link erasures, the underlying graph at each
time instant is assumed to be a subgraph of the original graph. Implicit in
this model is the assumption that the erasures are symmetric: if at time t the
packet from node i to node j is dropped, the same is true for the packet
transmitted from node j to node i. However, in practical wireless communication
systems this assumption is unreasonable and, due to the lack of symmetry,
standard averaging protocols cannot guarantee that the network will reach
consensus to the true average. In this paper we explore the use of channel
coding to improve the performance of consensus algorithms. For symmetric
erasures, we show that, for certain ranges of the system parameters, repetition
codes can speed up the convergence rate. For asymmetric erasures we show that
tree codes (which have recently been designed for erasure channels) can be used
to simulate the performance of the original "unerased" graph. Thus, unlike
conventional consensus methods, we can guarantee convergence to the average in
the asymmetric case. The price is a slowdown in the convergence rate, relative
to the unerased network, which is still often faster than the convergence rate
of conventional consensus algorithms over noisy links.
|
1204.0304
|
Distributed continuous-time convex optimization on weight-balanced
digraphs
|
math.OC cs.SY
|
This paper studies the continuous-time distributed optimization of a sum of
convex functions over directed graphs. Contrary to what is known in the
consensus literature, where the same dynamics works for both undirected and
directed scenarios, we show that the consensus-based dynamics that solves the
continuous-time distributed optimization problem for undirected graphs fails to
converge when transcribed to the directed setting. This study sets the basis
for the design of an alternative distributed dynamics which we show is
guaranteed to converge, on any strongly connected weight-balanced digraph, to
the set of minimizers of a sum of convex differentiable functions with globally
Lipschitz gradients. Our technical approach combines notions of invariance and
cocoercivity with the positive definiteness properties of graph matrices to
establish the results.
|
1204.0309
|
A Model for Personalized Keyword Extraction from Web Pages using
Segmentation
|
cs.IR
|
The World Wide Web caters to the needs of billions of users in heterogeneous
groups. Each user accessing the World Wide Web might have his / her own
specific interest and would expect the web to respond to the specific
requirements. The process of making the web to react in a customized manner is
achieved through personalization. This paper proposes a novel model for
extracting keywords from a web page with personalization being incorporated
into it. The keyword extraction problem is approached with the help of web page
segmentation which facilitates in making the problem simpler and solving it
effectively. The proposed model is implemented as a prototype and the
experiments conducted on it empirically validate the model's efficiency.
|
1204.0334
|
Implementation Of Decoders for LDPC Block Codes and LDPC Convolutional
Codes Based on GPUs
|
cs.IT cs.DC math.IT
|
With the use of belief propagation (BP) decoding algorithm, low-density
parity-check (LDPC) codes can achieve near-Shannon limit performance. In order
to evaluate the error performance of LDPC codes, simulators running on CPUs are
commonly used. However, the time taken to evaluate LDPC codes with very good
error performance is excessive. In this paper, efficient LDPC block-code
decoders/simulators which run on graphics processing units (GPUs) are proposed.
We also implement the decoder for the LDPC convolutional code (LDPCCC). The
LDPCCC is derived from a pre-designed quasi-cyclic LDPC block code with good
error performance. Compared to the decoder based on the randomly constructed
LDPCCC code, the complexity of the proposed LDPCCC decoder is reduced due to
the periodicity of the derived LDPCCC and the properties of the quasi-cyclic
structure. In our proposed decoder architecture, $\Gamma$ (a multiple of a
warp) codewords are decoded together and hence the messages of $\Gamma$
codewords are also processed together. Since all the $\Gamma$ codewords share
the same Tanner graph, messages of the $\Gamma$ distinct codewords
corresponding to the same edge can be grouped into one package and stored
linearly. By optimizing the data structures of the messages used in the
decoding process, both the read and write processes can be performed in a
highly parallel manner by the GPUs. In addition, a thread hierarchy minimizing
the divergence of the threads is deployed, and it can maximize the efficiency
of the parallel execution. With the use of a large number of cores in the GPU
to perform the simple computations simultaneously, our GPU-based LDPC decoder
can obtain hundreds of times speedup compared with a serial CPU-based simulator
and over 40 times speedup compared with an 8-thread CPU-based simulator.
|
1204.0343
|
Comments on "Prediction of Subharmonic Oscillation in Switching
Converters Under Different Control Strategies"
|
cs.SY math.DS nlin.CD
|
A recent paper [1] (El Aroudi, 2012) misapplied a critical condition (Fang
and Abed, 2001) to a well-known example. Even if the mistake is corrected, the
results in [1] are applicable only to buck converters and period-doubling
bifurcation. In fact, these results appeared in Fang's works a decade ago,
which give broader critical conditions applicable to other converters and
bifurcations. The flaws in [1] are identified.
|
1204.0354
|
Identifying Infection Sources and Regions in Large Networks
|
cs.DM cs.SI physics.soc-ph
|
Identifying the infection sources in a network, including the index cases
that introduce a contagious disease into a population network, the servers that
inject a computer virus into a computer network, or the individuals who started
a rumor in a social network, plays a critical role in limiting the damage
caused by the infection through timely quarantine of the sources. We consider
the problem of estimating the infection sources and the infection regions
(subsets of nodes infected by each source) in a network, based only on
knowledge of which nodes are infected and their connections, and when the
number of sources is unknown a priori. We derive estimators for the infection
sources and their infection regions based on approximations of the infection
sequences count. We prove that if there are at most two infection sources in a
geometric tree, our estimator identifies the true source or sources with
probability going to one as the number of infected nodes increases. When there
are more than two infection sources, and when the maximum possible number of
infection sources is known, we propose an algorithm with quadratic complexity
to estimate the actual number and identities of the infection sources.
Simulations on various kinds of networks, including tree networks, small-world
networks and real world power grid networks, and tests on two real data sets
are provided to verify the performance of our estimators.
|
1204.0357
|
Skull-stripping for Tumor-bearing Brain Images
|
cs.CV cs.CE
|
Skull-stripping separates the skull region of the head from the soft brain
tissues. In many cases of brain image analysis, this is an essential
preprocessing step in order to improve the final result. This is true for both
registration and segmentation tasks. In fact, skull-stripping of magnetic
resonance images (MRI) is a well-studied problem with numerous publications in
recent years. Many different algorithms have been proposed, a summary and
comparison of which can be found in [Fennema-Notestine, 2006]. Despite the
abundance of approaches, we discovered that the algorithms suggested so far
perform poorly when dealing with tumor-bearing brain images.
This is mostly due to additional difficulties in separating the brain from the
skull in this case, especially when the lesion is located very close to the
skull border. Additionally, images acquired according to standard clinical
protocols, often exhibit anisotropic resolution and only partial coverage,
which further complicates the task. Therefore, we developed a method which is
dedicated to skull-stripping for clinically acquired tumor-bearing brain
images.
|
1204.0386
|
Tax evasion dynamics and Zaklan model on Opinion-dependent Network
|
physics.soc-ph cs.SI
|
Within the context of agent-based Monte-Carlo simulations, we study the
well-known majority-vote model (MVM) with noise applied to tax evasion on
Stauffer-Hohnisch-Pittnauer (SHP) networks. To control the fluctuations for tax
evasion in the economics model proposed by Zaklan, MVM is applied in the
neighborhood of the critical noise $q_{c}$ to evolve the Zaklan model. The
Zaklan model had been studied recently using the equilibrium Ising model. Here
we show that the Zaklan model is robust: it can be studied not only with the
equilibrium dynamics of the Ising model but also through the nonequilibrium
MVM, and on various topologies, giving the same behavior regardless of the
dynamics or topology used here.
|
1204.0423
|
On voting intentions inference from Twitter content: a case study on UK
2010 General Election
|
cs.SI physics.soc-ph
|
This report presents preliminary work on the topic of voting intention
inference from Social Media, such as Twitter. Our case study is the UK 2010
General Election, and we focus on predicting the percentages of voting
intention polls (conducted by YouGov) for the three major political parties -
Conservatives, Labour and Liberal Democrats - during
a 5-month period before the election date (May 6, 2010). We form three
methodologies for extracting positive or negative sentiment from tweets, which
build on each other, and then propose two supervised models for turning
sentiment into voting intention percentages. Interestingly, when the content of
tweets is enriched by attaching synonymous words, a significant improvement on
inference performance is achieved reaching a mean absolute error of 4.34% +/-
2.13%; in that case, the predictions are also shown to be statistically
significant. The presented methods should be considered as work-in-progress;
limitations and suggestions for future work appear in the final section of this
script.
|
1204.0429
|
Relative Information Loss in the PCA
|
cs.IT math.IT
|
In this work we analyze principal component analysis (PCA) as a deterministic
input-output system. We show that the relative information loss induced by
reducing the dimensionality of the data after performing the PCA is the same as
in dimensionality reduction without PCA. Finally, we analyze the case where the
PCA uses the sample covariance matrix to compute the rotation. If the rotation
matrix is not available at the output, we show that an infinite amount of
information is lost. The relative information loss is shown to decrease with
increasing sample size.
|
1204.0431
|
On Dispersions of Discrete Memoryless Channels with Noncausal State
Information at the Encoder
|
cs.IT math.IT
|
In this paper, we study the finite blocklength limits of state-dependent
discrete memoryless channels where the discrete memoryless state is known
noncausally at the encoder. For the point-to-point case, this is known as the
Gel'fand-Pinsker channel model. We define the (n,\epsilon)-capacity of the
Gel'fand-Pinsker channel as the maximal rate of transmission of a message
subject to the condition that the length of the block-code is n and the average
error probability is no larger than \epsilon. This paper provides a lower bound
for the (n,\epsilon)-capacity of the Gel'fand-Pinsker channel model, and hence
an upper bound on the dispersion, a fundamental second-order quantity in the
study of the performance limits of discrete memoryless channels. In addition,
we extend the work of Y. Steinberg (2005), in which the (degraded) broadcast
channel extension of the Gel'fand-Pinsker model was studied. We provide an
inner bound to the (n,\epsilon)-capacity region for this broadcast channel
model using a combination of ideas of Gel'fand-Pinsker coding, superposition
coding and dispersion (finite blocklength) analysis.
|
1204.0479
|
A collaborative ant colony metaheuristic for distributed multi-level
lot-sizing
|
cs.AI cs.DC
|
The paper presents an ant colony optimization metaheuristic for collaborative
planning. Collaborative planning is used to coordinate individual plans of
self-interested decision makers with private information in order to increase
the overall benefit of the coalition. The method consists of a new search graph
based on encoded solutions. Distributed and private information is integrated
via voting mechanisms and via a simple but effective collaborative local search
procedure. The approach is applied to a distributed variant of the multi-level
lot-sizing problem and evaluated by means of 352 benchmark instances from the
literature. The proposed approach clearly outperforms existing approaches on
the sets of medium and large sized instances. While the best method in the
literature so far achieves an average deviation from the best known
non-distributed solutions of 46 percent for the set of the largest instances,
for example, the presented approach reduces the average deviation to only 5
percent.
|
1204.0491
|
Analysis of complex contagions in random multiplex networks
|
physics.soc-ph cs.SI
|
We study the diffusion of influence in random multiplex networks where links
can be of $r$ different types, and for a given content (e.g., rumor, product,
political view), each link type is associated with a content dependent
parameter $c_i$ in $[0,\infty]$ that measures the relative bias type-$i$ links
have in spreading this content. In this setting, we propose a linear threshold
model of contagion where nodes switch state if their "perceived" proportion of
active neighbors exceeds a threshold $\tau$. Namely, a node connected to $m_i$
active neighbors and $k_i-m_i$ inactive neighbors via type-$i$ links will turn
active if $\sum{c_i m_i}/\sum{c_i k_i}$ exceeds its threshold $\tau$. Under this
model, we obtain the condition, probability and expected size of global
spreading events. Our results extend the existing work on complex contagions in
several directions by (i) providing solutions for coupled random networks whose
vertices are neither identical nor disjoint, (ii) highlighting the effect of
content on the dynamics of complex contagions, and (iii) showing that
content-dependent propagation over a multiplex network leads to a subtle
relation between the giant vulnerable component of the graph and the global
cascade condition that is not seen in the existing models in the literature.
|
1204.0521
|
Explicit receivers for pure-interference bosonic multiple access
channels
|
quant-ph cs.IT math.IT
|
The pure-interference bosonic multiple access channel has two senders and one
receiver, such that the senders each communicate with multiple temporal modes
of a single spatial mode of light. The channel mixes the input modes from the
two users pairwise on a lossless beamsplitter, and the receiver has access to
one of the two output ports. In prior work, Yen and Shapiro found the capacity
region of this channel if encodings consist of coherent-state preparations.
Here, we demonstrate how to achieve the coherent-state Yen-Shapiro region (for
a range of parameters) using a sequential decoding strategy, and we show that
our strategy outperforms the rate regions achievable using conventional
receivers. Our receiver performs binary-outcome quantum measurements for every
codeword pair in the senders' codebooks. A crucial component of this scheme is
a non-destructive "vacuum-or-not" measurement that projects an n-symbol
modulated codeword onto the n-fold vacuum state or its orthogonal complement,
such that the post-measurement state is either the n-fold vacuum or has the
vacuum removed from the support of the n symbols' joint quantum state. This
receiver requires the additional ability to perform multimode optical
phase-space displacements which are realizable using a beamsplitter and a
laser.
|
1204.0556
|
Decomposition Methods for Large Scale LP Decoding
|
cs.IT math.IT math.OC
|
When binary linear error-correcting codes are used over symmetric channels, a
relaxed version of the maximum likelihood decoding problem can be stated as a
linear program (LP). This LP decoder can be used to decode error-correcting
codes at bit-error-rates comparable to state-of-the-art belief propagation (BP)
decoders, but with significantly stronger theoretical guarantees. However, LP
decoding when implemented with standard LP solvers does not easily scale to the
block lengths of modern error correcting codes. In this paper we draw on
decomposition methods from optimization theory, specifically the Alternating
Directions Method of Multipliers (ADMM), to develop efficient distributed
algorithms for LP decoding.
The key enabling technical result is a "two-slice" characterization of the
geometry of the parity polytope, which is the convex hull of all codewords of a
single parity check code. This new characterization simplifies the
representation of points in the polytope. Using this simplification, we develop
an efficient algorithm for Euclidean norm projection onto the parity polytope.
This projection is required by ADMM and allows us to use LP decoding, with all
its theoretical guarantees, to decode large-scale error correcting codes
efficiently.
We present numerical results for LDPC codes of lengths more than 1000. The
waterfall region of LP decoding is seen to initiate at a slightly higher
signal-to-noise ratio than for sum-product BP; however, no error floor is
observed for LP decoding, unlike BP. Our implementation of
LP decoding using ADMM executes as fast as our baseline sum-product BP decoder,
is fully parallelizable, and can be seen to implement a type of message-passing
with a particularly simple schedule.
|
1204.0562
|
Atomic norm denoising with applications to line spectral estimation
|
cs.IT math.IT
|
Motivated by recent work on atomic norms in inverse problems, we propose a
new approach to line spectral estimation that provides theoretical guarantees
for the mean-squared-error (MSE) performance in the presence of noise and
without knowledge of the model order. We propose an abstract theory of
denoising with atomic norms and specialize this theory to provide a convex
optimization problem for estimating the frequencies and phases of a mixture of
complex exponentials. We show that the associated convex optimization problem
can be solved in polynomial time via semidefinite programming (SDP). We also
show that the SDP can be approximated by an l1-regularized least-squares
problem that achieves nearly the same error rate as the SDP but can scale to
much larger problems. We compare both SDP and l1-based approaches with
classical line spectral analysis methods and demonstrate that the SDP
outperforms the l1 optimization which outperforms MUSIC, Cadzow's, and Matrix
Pencil approaches in terms of MSE over a wide range of signal-to-noise ratios.
|
1204.0566
|
The Kernelized Stochastic Batch Perceptron
|
cs.LG
|
We present a novel approach for training kernel Support Vector Machines,
establish learning runtime guarantees for our method that are better than those
of any other known kernelized SVM optimization approach, and show that our
method works well in practice compared to existing alternatives.
|
1204.0590
|
Linear System Identification via Atomic Norm Regularization
|
math.OC cs.IT math.IT
|
This paper proposes a new algorithm for linear system identification from
noisy measurements. The proposed algorithm balances a data fidelity term with a
norm induced by the set of single pole filters. We pose a convex optimization
problem that approximately solves the atomic norm minimization problem and
identifies the unknown system from noisy linear measurements. This problem can
be solved efficiently with standard, freely available software. We provide
rigorous statistical guarantees that explicitly bound the estimation error (in
the H_2-norm) in terms of the stability radius, the Hankel singular values of
the true system and the number of measurements. These results in turn yield
complexity bounds and asymptotic consistency. We provide numerical experiments
demonstrating the efficacy of our method for estimating linear systems from a
variety of linear measurements.
|
1204.0634
|
Multi-level agent-based modeling with the Influence Reaction principle
|
cs.MA
|
This paper deals with the specification and the implementation of multi-level
agent-based models, using a formal model, IRM4MLS (an Influence Reaction Model
for Multi-Level Simulation), based on the Influence Reaction principle.
Proposed examples illustrate forms of top-down control in (multi-level)
multi-agent based-simulations.
|
1204.0650
|
Variability of Contact Process in Complex Networks
|
physics.soc-ph cs.SI
|
We study numerically how the structures of distinct networks influence the
epidemic dynamics in the contact process. We first find that the difference in
variability between homogeneous and heterogeneous networks is very narrow,
although heterogeneous structures can induce a lighter prevalence. In contrast
to non-community networks, strong community structures can cause a secondary
outbreak of prevalence, and two peaks of variability appear. Especially in the
local community, the extraordinarily large variability in the early stage of
the outbreak makes the prediction of epidemic spreading hard. Importantly, the
bridgeness plays a significant role in predictability: the farther the initial
seed is from the bridgeness, the less accurate the prediction is. We also
investigate the effect of different
disease reaction mechanisms on variability, and find that the different
reaction mechanisms will result in the distinct variabilities at the end of
epidemic spreading.
|
1204.0684
|
Validation of nonlinear PCA
|
cs.LG cs.AI cs.CV stat.ML
|
Linear principal component analysis (PCA) can be extended to a nonlinear PCA
by using artificial neural networks. But the benefit of curved components
requires a careful control of the model complexity. Moreover, standard
techniques for model selection, including cross-validation and more generally
the use of an independent test set, fail when applied to nonlinear PCA because
of its inherent unsupervised characteristics. This paper presents a new
approach for validating the complexity of nonlinear PCA models by using the
error in missing data estimation as a criterion for model selection. It is
motivated by the idea that only the model of optimal complexity is able to
predict missing values with the highest accuracy. While standard test set
validation usually favours over-fitted nonlinear PCA models, the proposed model
validation approach correctly selects the optimal model complexity.
|
1204.0706
|
Epidemic Variability in Hierarchical Geographical Networks with Human
Activity Patterns
|
physics.soc-ph cs.SI
|
Recently, some studies have revealed that non-Poissonian statistics of human
behaviors stem from the hierarchical geographical network structure. On this
view, we focus on epidemic spreading in the hierarchical geographical networks,
and study how two distinct contact patterns (i.e., homogeneous time delay
(HOTD) and heterogeneous time delay (HETD) associated with geographical
distance) influence the spreading speed and the variability of outbreaks. We
find that, compared with HOTD and the null model, correlations between time
delay and network hierarchy in HETD remarkably slow down epidemic spreading
and result in an upward cascading multi-modal phenomenon. Correspondingly, the
variability of outbreaks in HETD has a lower value, but several comparable
peaks persist for a long time, which makes the long-term prediction of
epidemic spreading hard. When a seed (i.e., the initial infected node) comes
from the high layers of the network, epidemic spreading is remarkably
promoted. Interestingly, distinct trends in variability emerge for the two
contact patterns: high-layer seeds in HOTD result in lower variabilities,
while the opposite holds for HETD. More importantly, the variabilities of
high-layer seeds in HETD are much greater than those in HOTD, which implies
the unpredictability of epidemic spreading in hierarchical geographical
networks.
|
1204.0731
|
Unit contradiction versus unit propagation
|
cs.AI
|
Some aspects of the result of applying unit resolution on a CNF formula can
be formalized as functions with domain a set of partial truth assignments. We
are interested in two ways for computing such functions, depending on whether
the result is the production of the empty clause or the assignment of a
variable with a given truth value. We show that these two models can compute
the same functions with formulae of polynomially related sizes, and we explain
how this result is related to the CNF encoding of Boolean constraints.
|
1204.0746
|
Gradually Atom Pruning for Sparse Reconstruction and Extension to
Correlated Sparsity
|
cs.IT math.IT
|
We propose a new algorithm for recovery of sparse signals from their
compressively sensed samples. The proposed algorithm benefits from the strategy
of gradual movement to estimate the positions of non-zero samples of sparse
signal. We decompose each sample of signal into two variables, namely "value"
and "detector", by a weighted exponential function. We update these new
variables using gradient descent method. Like the traditional compressed
sensing algorithms, the first variable is used to solve the Least Absolute
Shrinkage and Selection Operator (Lasso) problem. As a new strategy, the second
variable participates in the regularization term of the Lasso (l1 norm) that
gradually detects the non-zero elements. The presence of the second variable
enables us to extend the corresponding vector of the first variable to matrix
form. This makes possible the use of the correlation matrix for a heuristic
search when there are correlations among the samples of the signal. We compare
the performance of the new algorithm with various algorithms for uncorrelated
and correlated sparsity. The results indicate the efficiency of the proposed
methods.
|
1204.0767
|
Efficient Fruit Defect Detection and Glare removal Algorithm by
anisotropic diffusion and 2D Gabor filter
|
cs.CV
|
This paper focuses on fruit defect detection and glare removal using
morphological operations. Glare removal can be considered an important
preprocessing step, as uneven lighting may introduce glare into images, which
hampers the results produced through segmentation by Gabor filters. The
problem of glare in images is sometimes very pronounced due to unusual
reflectance from the camera sensor or stray light entering; the proposed
method counteracts this problem and makes the defect detection much more
pronounced. Anisotropic diffusion is used for further smoothing of the images
and for removing the high-energy regions in an image, yielding better defect
detection and making the defects more retrievable. Our algorithm is robust and
scalable: the employability of a particular mask for glare removal has been
checked and proved useful for counteracting this problem; anisotropic
diffusion further enhances the defects; and an optimal Gabor filter at various
orientations is used for defect detection.
|
1204.0776
|
Exploiting Channel Correlation and PU Traffic Memory for Opportunistic
Spectrum Scheduling
|
cs.IT cs.SY math.IT
|
We consider a cognitive radio network with multiple primary users (PUs) and
one secondary user (SU), where a spectrum server is utilized for spectrum
sensing and scheduling the SU to transmit over one of the PU channels
opportunistically. One practical yet challenging scenario is when \textit{both}
the PU occupancy and the channel fading vary over time and exhibit temporal
correlations. Little work has been done for exploiting such temporal memory in
the channel fading and the PU occupancy simultaneously for opportunistic
spectrum scheduling. A main goal of this work is to understand the intricate
tradeoffs resulting from the interactions of the two sets of system states -
the channel fading and the PU occupancy, by casting the problem as a partially
observable Markov decision process. We first show that a simple greedy policy
is optimal in some special cases. To build a clear understanding of the
tradeoffs, we then introduce a full-observation genie-aided system, where the
spectrum server collects channel fading states from all PU channels. The
genie-aided system is used to decompose the tradeoffs in the original system
into multiple tiers, which are examined progressively. Numerical examples
indicate that the optimal scheduler in the original system, with observation on
the scheduled channel only, achieves a performance very close to the
genie-aided system. Further, as expected, the optimal policy in the original
system significantly outperforms randomized scheduling, pointing to the merit
of exploiting the temporal correlation structure in both channel fading and PU
occupancy.
|
1204.0803
|
Compressed Sensing for Denoising in Adaptive System Identification
|
cs.IT math.IT
|
We propose a new technique for adaptive identification of sparse systems
based on the compressed sensing (CS) theory. We manipulate the transmitted
pilot (input signal) and the received signal such that the weights of adaptive
filter approach the compressed version of the sparse system instead of the
original system. To this end, we use a random filter structure at the
transmitter to form the measurement matrix according to the CS framework. The
original sparse system can be reconstructed by conventional recovery
algorithms. As a result, the denoising property of CS can be deployed in the
proposed method at the recovery stage. The experiments indicate a significant
performance improvement of the proposed method compared to the conventional
LMS method which
directly identifies the sparse system. Furthermore, at low levels of sparsity,
our method outperforms a specialized identification algorithm that promotes
sparsity.
|
1204.0830
|
Information Transmission using the Nonlinear Fourier Transform, Part II:
Numerical Methods
|
cs.IT math.IT
|
In this paper, numerical methods are suggested to compute the discrete and
the continuous spectrum of a signal with respect to the Zakharov-Shabat system,
a Lax operator underlying numerous integrable communication channels including
the nonlinear Schr\"odinger channel, modeling pulse propagation in optical
fibers. These methods are subsequently tested, and their abilities to estimate
the spectrum are compared against each other. The methods are used to compute
the spectrum of various signals commonly used in optical fiber communications.
It is found that the layer-peeling and the spectral methods are suitable
schemes to estimate the nonlinear spectra with good accuracy. To illustrate the
structure of the spectrum, the locus of the eigenvalues is determined under
amplitude and phase modulation in a number of examples. It is observed that in
some cases, as signal parameters vary, eigenvalues collide and change their
course of motion. The real axis is typically the place from which new
eigenvalues originate or are absorbed into after traveling a trajectory in the
complex plane.
|
1204.0839
|
A Constrained Random Demodulator for Sub-Nyquist Sampling
|
cs.IT math.IT
|
This paper presents a significant modification to the Random Demodulator (RD)
of Tropp et al. for sub-Nyquist sampling of frequency-sparse signals. The
modification, termed constrained random demodulator, involves replacing the
random waveform, essential to the operation of the RD, with a constrained
random waveform that has limits on its switching rate because fast switching
waveforms may be hard to generate cleanly. The result is a relaxation on the
hardware requirements with a slight, but manageable, decrease in the recovery
guarantees. The paper also establishes the importance of properly choosing the
statistics of the constrained random waveform. If the power spectrum of the
random waveform matches the distribution on the tones of the input signal
(i.e., the distribution is proportional to the power spectrum), then recovery
of the input signal tones is improved. The theoretical guarantees provided in
the paper are validated through extensive numerical simulations and phase
transition plots.
|
1204.0844
|
Mitigating Timing Errors in Time-Interleaved ADCs: a signal conditioning
approach
|
cs.IT math.IT
|
Novel techniques based on signal-conditioning are presented to mitigate
timing errors in time-interleaved ADCs. A theoretical bound on the achievable
spurious signal content, on applying the techniques, is also derived.
Behavioral simulations corroborating the same are presented.
|
1204.0852
|
Distributed convergence to Nash equilibria in two-network zero-sum games
|
math.OC cs.SY
|
This paper considers a class of strategic scenarios in which two networks of
agents have opposing objectives with regard to the optimization of a common
objective function. In the resulting zero-sum game, individual agents
collaborate with neighbors in their respective network and have only partial
knowledge of the state of the agents in the other network. For the case when
the interaction topology of each network is undirected, we synthesize a
distributed saddle-point strategy and establish its convergence to the Nash
equilibrium for the class of strictly concave-convex and locally Lipschitz
objective functions. We also show that this dynamics does not converge in
general if the topologies are directed. This justifies the introduction, in the
directed case, of a generalization of this distributed dynamics which we show
converges to the Nash equilibrium for the class of strictly concave-convex
differentiable functions with locally Lipschitz gradients. The technical
approach combines tools from algebraic graph theory, nonsmooth analysis,
set-valued dynamical systems, and game theory.
|
1204.0864
|
GeT_Move: An Efficient and Unifying Spatio-Temporal Pattern Mining
Algorithm for Moving Objects
|
cs.DB
|
Recent improvements in positioning technology have led to a much wider
availability of massive moving object data. A crucial task is to find the
moving objects that travel together. Usually, these object sets are called
spatio-temporal patterns. Due to the emergence of many different kinds of
spatio-temporal patterns in recent years, different approaches have been
proposed to extract them. However, each approach only focuses on mining a
specific kind of pattern. Mining and managing patterns with such a large
number of separate algorithms is not only a painstaking task but also time
consuming. Moreover, we have to execute these algorithms again whenever new
data are added to the existing database. To address these issues, we first
redefine spatio-temporal patterns in the itemset context. Secondly, we propose
a unifying approach, named GeT_Move, which uses a frequent closed itemset-based
spatio-temporal pattern-mining algorithm to mine and manage different
spatio-temporal patterns. GeT_Move is implemented in two versions which are
GeT_Move and Incremental GeT_Move. To optimize the efficiency and to free the
parameters setting, we also propose a Parameter Free Incremental GeT_Move
algorithm. Comprehensive experiments are performed on real datasets as well as
large synthetic datasets to demonstrate the effectiveness and efficiency of our
approaches.
|
1204.0867
|
Optimal Index Codes for a Class of Multicast Networks with Receiver Side
Information
|
cs.IT math.IT
|
This paper studies a special class of multicast index coding problems where a
sender transmits messages to multiple receivers, each with some side
information. Here, each receiver knows a unique message a priori, and there is
no restriction on how many messages each receiver requests from the sender. For
this class of multicast index coding problems, we obtain the optimal index
code, which has the shortest codelength the sender needs to send in order for
all receivers to obtain their (respective) requested messages. This
is the first class of index coding problems where the optimal index codes are
found. In addition, linear index codes are shown to be optimal for this class
of index coding problems.
|
1204.0870
|
Relax and Localize: From Value to Algorithms
|
cs.LG cs.GT stat.ML
|
We show a principled way of deriving online learning algorithms from a
minimax analysis. Various upper bounds on the minimax value, previously thought
to be non-constructive, are shown to yield algorithms. This allows us to
seamlessly recover known methods and to derive new ones. Our framework also
captures such "unorthodox" methods as Follow the Perturbed Leader and the R^2
forecaster. We emphasize that understanding the inherent complexity of the
learning problem leads to the development of algorithms.
We define local sequential Rademacher complexities and associated algorithms
that allow us to obtain faster rates in online learning, similarly to
statistical learning theory. Based on these localized complexities we build a
general adaptive method that can take advantage of the suboptimality of the
observed sequence.
We present a number of new algorithms, including a family of randomized
methods that use the idea of a "random playout". Several new versions of the
Follow-the-Perturbed-Leader algorithms are presented, as well as methods based
on Littlestone's dimension, efficient methods for matrix completion with
trace norm, and algorithms for the problems of transductive learning and
prediction with static experts.
|
1204.0885
|
PID Parameters Optimization by Using Genetic Algorithm
|
cs.SY cs.LG cs.NE
|
Time delays are components that introduce a lag in a system's response. They arise
in physical, chemical, biological and economic systems, as well as in the
process of measurement and computation. In this work, we implement Genetic
Algorithm (GA) in determining PID controller parameters to compensate the delay
in First Order Lag plus Time Delay (FOLPD) and compare the results with
Iterative Method and Ziegler-Nichols rule results.
|
1204.0958
|
Robust methods for LTE and WiMAX dimensioning
|
cs.RO cs.NI cs.PF
|
This paper proposes an analytic model for dimensioning OFDMA-based networks
like WiMAX and LTE systems. In such a system, users require a number of
subchannels which depends on their SNR, hence on their position and the
shadowing they experience. The system is overloaded when the number of
required subchannels is greater than the number of available subchannels. We
give an exact though non-closed-form expression of the loss probability and
then give an algorithmic method to derive the number of subchannels which
guarantees a loss probability less than a given threshold. We show that
Gaussian approximations lead to optimistic values and are thus unusable. We
then introduce Edgeworth expansions with error bounds and show that, by
choosing the right order of the expansion, one can obtain an approximate
dimensioning value that is easy to compute but has guaranteed performance. As
the values obtained are highly dependent on the parameters of the system,
which turn out to be rather uncertain, we provide a procedure based on a
concentration inequality for Poisson functionals, which yields a conservative
dimensioning. This paper relies on recent results on concentration
inequalities and establishes new results on Edgeworth expansions.
|
1204.0982
|
Approximability of the Vertex Cover Problem in Power Law Graphs
|
cs.DS cs.SI
|
In this paper we construct an approximation algorithm for the Minimum Vertex
Cover Problem (Min-VC) with an expected approximation ratio of 2-f(beta) for
random Power Law Graphs (PLG) in the (alpha,beta)-model of Aiello et al.,
where f(beta) is a strictly positive function of the parameter beta. We obtain
this result by combining the Nemhauser and Trotter approach for Min-VC with a
new deterministic rounding procedure which achieves an approximation ratio of
3/2 on a subset of low degree vertices for which the expected contribution to
the cost of the associated linear program is sufficiently large.
|
1204.0992
|
Discrete Sampling and Interpolation: Universal Sampling Sets for
Discrete Bandlimited Spaces
|
cs.IT math.IT
|
We study the problem of interpolating all values of a discrete signal f of
length N when d<N values are known, especially in the case when the Fourier
transform of the signal is zero outside some prescribed index set J; these
comprise the (generalized) bandlimited spaces B^J. The sampling pattern for f
is specified by an index set I, and is said to be a universal sampling set if
samples in the locations I can be used to interpolate signals from B^J for any
J. When N is a prime power we give several characterizations of universal
sampling sets, some structure theorems for such sets, an algorithm for their
construction, and a formula that counts them. There are also natural
applications to additive uncertainty principles.
|
1204.1002
|
Fast Multi-Scale Detection of Relevant Communities
|
cs.DS cs.SI physics.soc-ph
|
Nowadays, networks are almost ubiquitous. In the past decade, community
detection received an increasing interest as a way to uncover the structure of
networks by grouping nodes into communities more densely connected internally
than externally. Yet most of the effective methods available do not consider
the potential levels of organisation, or scales, a network may encompass and
are therefore limited. In this paper we present a method compatible with global
and local criteria that enables fast multi-scale community detection. The
method is derived in two algorithms, one for each type of criterion, and
implemented with 6 known criteria. Uncovering communities at various scales is
a computationally expensive task. Therefore this work puts a strong emphasis on
the reduction of computational complexity. Some heuristics are introduced for
speed-up purposes. Experiments demonstrate the efficiency and accuracy of our
method with respect to each algorithm and criterion by testing them against
large generated multi-scale networks. This study also offers a comparison
between criteria and between the global and local approaches.
|
1204.1069
|
Convergence and Equivalence results for the Jensen's inequality -
Application to time-delay and sampled-data systems
|
cs.SY math.DS math.OC
|
Jensen's inequality plays a crucial role in the analysis of time-delay
and sampled-data systems. Its conservatism is studied through the use of the
Gr\"{u}ss Inequality. It has been reported in the literature that fragmentation
(or partitioning) schemes allow one to empirically improve the results. We prove
here that the Jensen gap can be made arbitrarily small provided that the
order of uniform fragmentation is chosen sufficiently large. Non-uniform
fragmentation schemes are also shown to speed up the convergence in certain
cases. Finally, a family of bounds is characterized and a comparison with other
bounds of the literature is provided. It is shown that the other bounds are
equivalent to Jensen's and that they exhibit interesting well-posedness and
linearity properties which can be exploited to obtain better numerical results.
|
1204.1080
|
Memory Resilient Gain-scheduled State-Feedback Control of Uncertain
LTI/LPV Systems with Time-Varying Delays
|
cs.SY math.CA math.DS math.OC
|
The stabilization of uncertain LTI/LPV time delay systems with time varying
delays by state-feedback controllers is addressed. In contrast to other
works in the literature, the proposed approach allows for the synthesis of
resilient controllers with respect to uncertainties on the implemented delay.
It is emphasized that such controllers unify memoryless and exact-memory
controllers usually considered in the literature. The solutions to the
stability and stabilization problems are expressed in terms of LMIs which allow one
to check the stability of the closed-loop system for a given bound on the
knowledge error and even optimize the uncertainty radius under some performance
constraints; in this paper, the $\mathcal{H}_\infty$ performance measure is
considered. The interest of the approach is finally illustrated through several
examples.
|
1204.1085
|
Post-Nonlinear Sparse Component Analysis Using Single-Source Zones and
Functional Data Clustering
|
cs.IT math.IT
|
In this paper, we introduce a general extension of linear sparse component
analysis (SCA) approaches to postnonlinear (PNL) mixtures. In particular, and
contrary to the state-of-art methods, our approaches use a weak sparsity source
assumption: we look for tiny temporal zones where only one source is active. We
investigate two nonlinear single-source confidence measures, using the mutual
information and a local linear tangent space approximation (LTSA). For this
latter measure, we derive two extensions of linear single-source measures,
respectively based on correlation (LTSA-correlation) and eigenvalues
(LTSA-PCA). A second novelty of our approach consists of applying functional
data clustering techniques to the scattered observations in the above
single-source zones, thus allowing us to accurately estimate them. We first
study a classical approach using a B-spline approximation, and then two
approaches which locally approximate the nonlinear functions as lines. Finally,
we extend our PNL methods to more general nonlinear mixtures. Combining
single-source zones and functional data clustering allows us to tackle speech
signals, which has never been performed by other PNL-SCA methods. We
investigate the performance of our approaches with simulated PNL mixtures of
real speech signals. Both the mutual information and the LTSA-correlation
measures are better-suited to detecting single-source zones than the LTSA-PCA
measure. We also find local-linear-approximation-based clustering approaches to
be more flexible and more accurate than the B-spline one.
|
1204.1091
|
Load-Aware Modeling and Analysis of Heterogeneous Cellular Networks
|
cs.IT math.IT
|
Random spatial models are attractive for modeling heterogeneous cellular
networks (HCNs) due to their realism, tractability, and scalability. A major
limitation of such models to date in the context of HCNs is the neglect of
network traffic and load: all base stations (BSs) have typically been assumed
to always be transmitting. Small cells in particular will have a lighter load
than macrocells, and so their contribution to the network interference may be
significantly overstated in a fully loaded model. This paper incorporates a
flexible notion of BS load by introducing a new idea of conditionally thinning
the interference field. For a K-tier HCN where BSs across tiers differ in terms
of transmit power, supported data rate, deployment density, and now load, we
derive the coverage probability for a typical mobile, which connects to the
strongest BS signal. Conditioned on this connection, the interfering BSs of the
$i^{th}$ tier are assumed to transmit independently with probability $p_i$,
which models the load. Assuming - reasonably - that smaller cells are more
lightly loaded than macrocells, the analysis shows that adding such access
points to the network always increases the coverage probability. We also
observe that fully loaded models are quite pessimistic in terms of coverage.
|
1204.1096
|
MIMO Precoding in Underlay Cognitive Radio Systems with Completely
Unknown Primary CSI
|
cs.IT math.IT
|
This paper studies a novel underlay MIMO cognitive radio (CR) system, where
the instantaneous or statistical channel state information (CSI) of the
interfering channels to the primary receivers (PRs) is completely unknown to
the CR. For the single underlay receiver scenario, we assume a minimum
information rate must be guaranteed on the CR main channel whose CSI is known
at the CR transmitter. We first show that low-rank CR interference is
preferable for improving the throughput of the PRs compared with spreading less
power over more transmit dimensions. Based on this observation, we then propose
a rank minimization CR transmission strategy assuming a minimum information
rate must be guaranteed on the CR main channel. We propose a simple solution
referred to as frugal waterfilling (FWF) that uses the least amount of power
required to achieve the rate constraint with a minimum-rank transmit covariance
matrix. We also present two heuristic approaches that have been used in prior
work to transform rank minimization problems into convex optimization problems.
The proposed schemes are then generalized to an underlay MIMO CR downlink
network with multiple receivers. Finally, a theoretical analysis of the
interference temperature and leakage rate outage probabilities at the PR is
presented for Rayleigh fading channels. We demonstrate that the direct FWF
solution leads to higher PR throughput even though it has higher interference
temperature (IT) compared with the heuristic methods and classic waterfilling,
which calls into question the use of IT as a metric for CR interference.
|
1204.1106
|
Message Passing for Dynamic Network Energy Management
|
math.OC cs.DC cs.SY
|
We consider a network of devices, such as generators, fixed loads, deferrable
loads, and storage devices, each with its own dynamic constraints and
objective, connected by lossy capacitated lines. The problem is to minimize the
total network objective subject to the device and line constraints, over a
given time horizon. This is a large optimization problem, with variables for
consumption or generation in each time period for each device. In this paper we
develop a decentralized method for solving this problem. The method is
iterative: At each step, each device exchanges simple messages with its
neighbors in the network and then solves its own optimization problem,
minimizing its own objective function, augmented by a term determined by the
messages it has received. We show that this message passing method converges to
a solution when the device objective and constraints are convex. The method is
completely decentralized, and needs no global coordination other than
synchronizing iterations; the problems to be solved by each device can
typically be solved extremely efficiently and in parallel. The method is fast
enough that even a serial implementation can solve substantial problems in
reasonable time frames. We report results for several numerical experiments,
demonstrating the method's speed and scaling, including the solution of a
problem instance with over 30 million variables in 52 minutes for a serial
implementation; with decentralized computing, the solve time would be less than
one second.
|
1204.1156
|
Web Services Supply Chains: A Literature Review
|
cs.SY
|
The aim of this review paper is to bring to light a potential research area,
namely web services supply chains, by analyzing the existing state of the art.
The review reveals that much less work has been done on web service supply
chains than on e-commerce and product-oriented service supply chains. The
service quality assurance models, end-to-end Quality of Service (QoS) models,
and existing treatments of QoS attributes are also found to reflect the
individual perspectives of the participating entities in a service process
rather than a collective perspective, and to consider individual QoS attributes
rather than multiple QoS attributes. In light of these gaps, we highlight the
differences between product-oriented and purely online/web service supply
chains, argue for the need for quality-driven optimization in web services
supply chains, discuss perceived complexities in the existing work, and propose
a conceptual model.
|
1204.1158
|
Dynamic Bayesian diffusion estimation
|
cs.IT math.IT
|
The rapidly increasing complexity of (mainly wireless) ad-hoc networks
stresses the need of reliable distributed estimation of several variables of
interest. The widely used centralized approach, in which the network nodes
communicate their data with a single specialized point, suffers from high
communication overheads and represents a potentially dangerous concept with a
single point of failure needing special treatment. This paper's aim is to
contribute to another quite recent method called diffusion estimation. By
decentralizing the operating environment, the network nodes communicate just
within a close neighbourhood. We adopt the Bayesian framework to modelling and
estimation, which, unlike the traditional approaches, abstracts from a
particular model case. This leads to a very scalable and universal method,
applicable to a wide class of different models. A particularly interesting case
- the Gaussian regressive model - is derived as an example.
|
1204.1160
|
Opinion formation in time-varying social networks: The case of the
naming game
|
physics.soc-ph cs.SI
|
We study the dynamics of the naming game as an opinion formation model on
time-varying social networks. This agent-based model captures the essential
features of the agreement dynamics by means of a memory-based negotiation
process. Our study focuses on the impact of time-varying properties of the
social network of the agents on the naming game dynamics. In particular, we
perform a computational exploration of this model using simulations on top of
real networks. We investigate the outcomes of the dynamics on two different
types of time-varying data - (i) the networks vary on a day-to-day basis and
(ii) the networks vary within very short intervals of time (20 seconds). In the
first case, we find that networks with strong community structure hinder the
system from reaching global agreement; the evolution of the naming game in
these networks maintains clusters of coexisting opinions indefinitely leading
to metastability. In the second case, we investigate the evolution of the
naming game in perfect synchronization with the time evolution of the
underlying social network shedding new light on the traditional emergent
properties of the game that differ largely from what has been reported in the
existing literature.
|
1204.1162
|
Performance of the Google Desktop, Arabic Google Desktop and Peer to
Peer Application in Arabic Language
|
cs.IR
|
The Arabic language is a complex language; it is different from Western
languages especially at the morphological and spelling variations. Indeed, the
performance of information retrieval systems in the Arabic language is still a
problem. For this reason, we are interested in studying the performance of the
most famous search engine, Google Desktop, when searching Arabic-language
documents. Then, we propose an update to Google Desktop that takes into
account, during search, the Arabic words that share the same root. After
that, we evaluate the performance of Google Desktop in this context. We are
also interested in evaluating the performance of a peer-to-peer application in
two ways. The first uses a simple indexation that indexes Arabic documents
without taking the roots of words into consideration. The second takes the
roots into consideration when indexing Arabic documents. This evaluation
is done by using a corpus of ten thousand documents and one hundred different
queries.
|
1204.1172
|
Timing acquisition and demodulation of an UWB system based on the
differential scheme
|
cs.IT math.IT
|
Blind synchronization constitutes a major challenge in realizing highly
efficient ultra wide band (UWB) systems because of the short pulse duration
which requires a fast synchronization algorithm to accommodate several
asynchronous users. In this paper, we present a new Code Block Synchronization
Algorithm (CBSA) based on a particular code design for a noncoherent
transmission. The synchronization algorithm is applied directly to the received
signal to estimate the timing offset, without needing any training sequence. Different
users can share the available bandwidth by means of different spreading codes
with different lengths. This allows the receiver to separate users, and to
recover the timing information of the transmitted symbols. Simulation results
and comparisons validate the promising performance of the proposed scheme even
in a multi user scenario. In fact, the proposed algorithm offers a gain of
about 3 dB in comparison with reference [5].
|
1204.1177
|
Principal Component Analysis-Linear Discriminant Analysis Feature
Extractor for Pattern Recognition
|
cs.CV
|
Robustness of embedded biometric systems is of prime importance with the
emergence of fourth-generation communication devices and advances in security
systems. This paper presents the realization of such technologies, which demand
reliable and error-free biometric identity verification systems. High
dimensional patterns are not permitted due to eigen-decomposition in high
dimensional image space and degeneration of scattering matrices with small
sample sizes. Generalization, dimensionality reduction, and maximization of the
margins are controlled by minimizing the weight vectors. Results show good
pattern recognition by the multimodal biometric system proposed in this paper.
This paper is aimed at investigating a biometric identity system using
Principal Component Analysis and Linear Discriminant Analysis with K-Nearest
Neighbor and implementing such a system in real time using SignalWAVE.
|
1204.1185
|
Query Language for Complex Similarity Queries
|
cs.DB cs.IR cs.MM
|
For complex data types such as multimedia, traditional data management
methods are not suitable. Instead of attribute matching approaches, access
methods based on object similarity are becoming popular. Recently, this
resulted in an intensive research of indexing and searching methods for the
similarity-based retrieval. Nowadays, many efficient methods are already
available, but using them to build an actual search system still requires
specialists that tune the methods and build the system manually. Several
attempts have already been made to provide a more convenient high-level
interface in a form of query languages for such systems, but these are limited
to support only basic similarity queries. In this paper, we propose a new
language that allows one to formulate content-based queries in a flexible way,
taking into account the functionality offered by a particular search engine in
use. To ensure this, the language is based on a general data model with an
abstract set of operations. Consequently, the language supports various
advanced query operations such as similarity joins, reverse nearest neighbor
queries, or distinct kNN queries, as well as multi-object and multi-modal
queries. The language is primarily designed to be used with the MESSIF
framework for content-based searching but can be employed by other retrieval
systems as well.
|
1204.1198
|
A Complete Workflow for Development of Bangla OCR
|
cs.CV
|
Developing a Bangla OCR requires a number of algorithms and methods. Many
efforts have gone into developing a Bangla OCR, but all of them have failed to
provide an error-free Bangla OCR; each of them has some shortcomings. We
discuss the problem scope of currently existing Bangla OCRs. In this paper, we
present the basic steps required for developing a Bangla OCR and a complete
workflow for its development, mentioning all the possible algorithms required.
|
1204.1231
|
How Many Vote Operations Are Needed to Manipulate A Voting System?
|
cs.AI cs.GT
|
In this paper, we propose a framework to study a general class of strategic
behavior in voting, which we call vote operations. We prove the following
theorem: if we fix the number of alternatives, generate $n$ votes i.i.d.
according to a distribution $\pi$, and let $n$ go to infinity, then for any
$\epsilon >0$, with probability at least $1-\epsilon$, the minimum number of
operations that are needed for the strategic individual to achieve her goal
falls into one of the following four categories: (1) 0, (2) $\Theta(\sqrt n)$,
(3) $\Theta(n)$, and (4) $\infty$. This theorem holds for any set of vote
operations, any individual vote distribution $\pi$, and any integer generalized
scoring rule, which includes (but is not limited to) almost all commonly
studied voting rules, e.g., approval voting, all positional scoring rules
(including Borda, plurality, and veto), plurality with runoff, Bucklin,
Copeland, maximin, STV, and ranked pairs.
We also show that many well-studied types of strategic behavior fall under
our framework, including (but not limited to) constructive/destructive
manipulation, bribery, and control by adding/deleting votes, margin of victory,
and minimum manipulation coalition size. Therefore, our main theorem naturally
applies to these problems.
|