| id | title | categories | abstract |
|---|---|---|---|
1204.3481
|
Crowdsourcing Collective Emotional Intelligence
|
cs.SI cs.HC
|
One of the hallmarks of emotional intelligence is the ability to regulate
emotions. Research suggests that cognitive reappraisal - a technique that
involves reinterpreting the meaning of a thought or situation - can
down-regulate negative emotions, without incurring significant psychological or
physiological costs. Habitual use of this strategy is also linked to many key
indices of physical and emotional health. Unfortunately, this technique is not
always easy to apply. Thinking flexibly about stressful thoughts and situations
requires creativity and poise, faculties that often elude us when we need them
the most. In this paper, we propose an assistive technology that coordinates
collective intelligence on demand, to help individuals reappraise stressful
thoughts and situations. In two experiments, we assess key features of our
design and we demonstrate the feasibility of crowdsourcing empathetic
reappraisals with on-demand workforces, such as Amazon's Mechanical Turk.
|
1204.3491
|
Rationale awareness for quality assurance in iterative human computation
processes
|
cs.HC cs.SI
|
Human computation refers to the outsourcing of computation tasks to human
workers. It offers a new direction for solving a variety of problems and calls
for innovative ways of managing human computation processes. The majority of
human computation tasks take a parallel approach, whereas the potential of an
iterative approach, i.e., having workers iteratively build on each other's
work, has not been sufficiently explored. This study investigates whether and
how human workers' awareness of previous workers' rationales affects the
performance of the iterative approach in a brainstorming task and a rating
task. Rather than viewing this work as a conclusive piece, the author believes
that this research endeavor is just the beginning of a new research focus that
examines and supports meta-cognitive processes in crowdsourcing activities.
|
1204.3495
|
No big deal: introducing roles to reduce the size of ATL models
|
cs.LO cs.MA
|
In this paper we present a new semantics for the well-known strategic logic
ATL. It is based on adding roles to concurrent game structures: at every state,
each agent belongs to exactly one role, and the role specifies which actions
are available to that agent at that state. We show the advantages of the new
semantics, analyze model-checking complexity, and prove equivalence between
the standard ATL semantics and our new approach.
|
1204.3498
|
A Computational Analysis of Collective Discourse
|
cs.SI cs.CL physics.soc-ph
|
This paper is focused on the computational analysis of collective discourse,
a collective behavior seen in non-expert content contributions in online social
media. We collect and analyze a wide range of real-world collective discourse
datasets from movie user reviews to microblogs and news headlines to scientific
citations. We show that all these datasets exhibit diversity of perspective, a
property seen in other collective systems and a criterion in wise crowds. Our
experiments also confirm that the network of different perspective
co-occurrences exhibits the small-world property with high clustering of
different perspectives. Finally, we show that non-expert contributions in
collective discourse can be used to answer simple questions that are otherwise
hard to answer.
|
1204.3511
|
Crowd & Prejudice: An Impossibility Theorem for Crowd Labelling without
a Gold Standard
|
cs.SI cs.GT
|
A common use of crowdsourcing is to obtain labels for a dataset. Several
algorithms have been proposed to identify uninformative members of the crowd so
that their labels can be disregarded and the cost of paying them avoided. One
common motivation of these algorithms is to do without any initial set of
trusted labeled data. We analyse this class of algorithms as mechanisms in a
game-theoretic setting to understand the incentives they create for workers. We
find an impossibility result: without any ground truth, and when workers have
access to commonly shared 'prejudices' upon which they agree but which are not
informative of the true labels, there are always equilibria in which all agents
report the prejudice. A small amount of gold standard data is found to be
sufficient to rule out these equilibria.
|
1204.3514
|
Distributed Learning, Communication Complexity and Privacy
|
cs.LG cs.DS
|
We consider the problem of PAC-learning from distributed data and analyze
fundamental communication complexity questions involved. We provide general
upper and lower bounds on the amount of communication needed to learn well,
showing that in addition to VC-dimension and covering number, quantities such
as the teaching-dimension and mistake-bound of a class play an important role.
We also present tight results for a number of common concept classes including
conjunctions, parity functions, and decision lists. For linear separators, we
show that for non-concentrated distributions, we can use a version of the
Perceptron algorithm to learn with much less communication than the number of
updates given by the usual margin bound. We also show how boosting can be
performed in a generic manner in the distributed setting to achieve
communication with only logarithmic dependence on 1/epsilon for any concept
class, and demonstrate how recent work on agnostic learning from
class-conditional queries can be used to achieve low communication in agnostic
settings as well. We additionally present an analysis of privacy, considering
both differential privacy and a notion of distributional privacy that is
especially appealing in this context.
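The Perceptron result above concerns communication rather than classification per se, but it rests on the classical mistake-driven behavior of the algorithm. As background only, here is a minimal sketch of the standard Perceptron update that such a protocol builds on; the toy data and epoch cap are illustrative assumptions, not taken from the paper:

```python
def perceptron(data, epochs=100):
    """Mistake-driven Perceptron: the weight vector is updated only
    when an example is misclassified, so a distributed variant can
    charge communication per mistake rather than per example."""
    d = len(data[0][0])
    w = [0.0] * d
    for _ in range(epochs):
        mistakes = 0
        for x, y in data:                          # labels y in {-1, +1}
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:                          # data separated: stop
            break
    return w

# toy linearly separable data (an assumption for this sketch)
data = [((1.0, 1.0), 1), ((2.0, 0.5), 1),
        ((-1.0, -1.0), -1), ((-0.5, -2.0), -1)]
w = perceptron(data)
```

Because only the (rare) updates would need to be exchanged between nodes, the number of mistakes, rather than the margin-bound update count, is the natural communication currency in this setting.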
|
1204.3516
|
When majority voting fails: Comparing quality assurance methods for
noisy human computation environment
|
cs.SI cs.AI
|
Quality assurance remains a key topic in human computation research. Prior
work indicates that majority voting is effective for low difficulty tasks, but
has limitations for harder tasks. This paper explores two methods of addressing
this problem: tournament selection and elimination selection, which exploit 2-,
3- and 4-way comparisons between different answers to human computation tasks.
Our experimental results and statistical analyses show that both methods
produce the correct answer in noisy human computation environments more often
than majority voting. Furthermore, we find that the use of 4-way comparisons
can significantly reduce the cost of quality assurance relative to the use of
2-way comparisons.
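As a schematic of the two aggregation strategies being contrasted, consider the sketch below. The comparison oracle, group size, and tie-breaking are invented for illustration; in the paper the k-way comparisons are themselves human computation tasks:

```python
from collections import Counter

def majority_vote(answers):
    # aggregate by the most frequent answer (ties broken arbitrarily)
    return Counter(answers).most_common(1)[0][0]

def tournament_select(answers, compare, k=4):
    # repeatedly present groups of up to k answers to a comparison
    # task and keep only each group's winner, until one remains
    pool = list(answers)
    while len(pool) > 1:
        group, pool = pool[:k], pool[k:]
        pool.append(compare(group))
    return pool[0]

# illustrative noisy answers to a task whose true answer is 42
answers = [41, 43, 42, 41, 43, 42, 40, 42, 44]
# idealized comparison oracle (a stand-in for a human k-way comparison)
best_of = lambda group: min(group, key=lambda a: abs(a - 42))
```

The tournament reduces a large, noisy answer pool through a logarithmic number of rounds of small comparisons, which is where the cost advantage of 4-way over 2-way comparisons comes from.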
|
1204.3523
|
Efficient Protocols for Distributed Classification and Optimization
|
cs.LG stat.ML
|
In distributed learning, the goal is to perform a learning task over data
distributed across multiple nodes with minimal (expensive) communication. Prior
work (Daume III et al., 2012) proposes a general model that bounds the
communication required for learning classifiers while allowing for $\eps$
training error on linearly separable data adversarially distributed across
nodes.
In this work, we develop key improvements and extensions to this basic model.
Our first result is a two-party multiplicative-weight-update based protocol
that uses $O(d^2 \log{1/\eps})$ words of communication to classify distributed
data in arbitrary dimension $d$, $\eps$-optimally. This readily extends to
classification over $k$ nodes with $O(kd^2 \log{1/\eps})$ words of
communication. Our proposed protocol is simple to implement and is considerably
more efficient than the baselines we compare against, as demonstrated by our
empirical results.
In addition, we illustrate general algorithm design paradigms for doing
efficient learning over distributed data. We show how to solve
fixed-dimensional and high dimensional linear programming efficiently in a
distributed setting where constraints may be distributed across nodes. Since
many learning problems can be viewed as convex optimization problems where
constraints are generated by individual points, this models many typical
distributed learning scenarios. Our techniques make use of a novel connection
from multipass streaming, as well as adapting the multiplicative-weight-update
framework more generally to a distributed setting. As a consequence, our
methods extend to the wide range of problems solvable using these techniques.
|
1204.3529
|
Hardness Results for Approximate Pure Horn CNF Formulae Minimization
|
cs.CC cs.AI
|
We study the hardness of approximation of clause minimum and literal minimum
representations of pure Horn functions in $n$ Boolean variables. We show that
unless P=NP, it is not possible to approximate in polynomial time the minimum
number of clauses and the minimum number of literals of pure Horn CNF
representations to within a factor of $2^{\log^{1-o(1)} n}$. This is the case
even when the inputs are restricted to pure Horn 3-CNFs with
$O(n^{1+\varepsilon})$ clauses, for some small positive constant $\varepsilon$.
Furthermore, we show that even allowing sub-exponential time computation, it is
still not possible to obtain constant factor approximations for such problems
unless the Exponential Time Hypothesis turns out to be false.
|
1204.3534
|
Toward a Comparative Cognitive History: Archimedes and D. H. J. Polymath
|
cs.SI math.HO physics.soc-ph
|
Is collective intelligence just individual intelligence writ large, or are
there fundamental differences? This position paper argues that a cognitive
history methodology can shed light on the nature of collective intelligence
and its differences from individual intelligence. To advance this proposed area
of research, a small case study on the structure of argument and proof is
presented. Quantitative metrics from network science are used to compare the
artifacts of deduction from two sources. The first is the work of Archimedes of
Syracuse, putatively an individual, and of other ancient Greek mathematicians.
The second is work of the Polymath Project, a massively collaborative
mathematics project that used blog posts and comments to prove new results in
combinatorics.
|
1204.3554
|
Robust stability and stabilization of uncertain linear positive systems
via Integral Linear Constraints: L1- and Linfinity-gains characterization
|
cs.SY math.CA math.DS math.OC
|
Copositive linear Lyapunov functions are used along with dissipativity theory
for stability analysis and control of uncertain linear positive systems. Unlike
usual results on linear systems, linear supply-rates are employed here for
robustness and performance analysis using L1- and Linfinity-gains. Robust
stability analysis is performed using Integral Linear Constraints (ILCs) for
which several classes of uncertainties are discussed. The approach is then
extended to robust stabilization and performance optimization. The obtained
results are expressed in terms of robust linear programming problems that are
equivalently turned into finite dimensional ones using Handelman's Theorem.
Several examples are provided for illustration.
|
1204.3581
|
The Wavelet Trie: Maintaining an Indexed Sequence of Strings in
Compressed Space
|
cs.DS cs.DB
|
An indexed sequence of strings is a data structure for storing a string
sequence that supports random access, searching, range counting and analytics
operations, both for exact matches and prefix search. String sequences lie at
the core of column-oriented databases, log processing, and other storage and
query tasks. In these applications each string can appear several times and the
order of the strings in the sequence is relevant. The prefix structure of the
strings is relevant as well: common prefixes are sought in strings to extract
interesting features from the sequence. Moreover, space-efficiency is highly
desirable as it translates directly into higher performance, since more data
can fit in fast memory.
We introduce and study the problem of compressed indexed sequence of strings,
representing indexed sequences of strings in nearly-optimal compressed space,
both in the static and dynamic settings, while preserving provably good
performance for the supported operations.
We present a new data structure for this problem, the Wavelet Trie, which
combines the classical Patricia Trie with the Wavelet Tree, a succinct data
structure for storing a compressed sequence. The resulting Wavelet Trie
smoothly adapts to a sequence of strings that changes over time. It improves on
the state-of-the-art compressed data structures by supporting a dynamic
alphabet (i.e. the set of distinct strings) and prefix queries, both crucial
requirements in the aforementioned applications, and on traditional indexes by
reducing space occupancy to close to the entropy of the sequence.
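The Wavelet Trie itself combines a Patricia trie with a wavelet tree; purely as a rough, non-succinct illustration of the wavelet-tree half of that combination, a pointer-based wavelet tree answering rank queries over a sequence can be sketched as follows (the real structure instead uses compressed bitvectors and supports a dynamic alphabet of strings):

```python
class WaveletTree:
    """Pointer-based wavelet tree over a sequence, supporting rank:
    the number of occurrences of a symbol in a prefix of the sequence."""

    def __init__(self, seq, alphabet=None):
        self.alphabet = sorted(set(seq)) if alphabet is None else alphabet
        if len(self.alphabet) <= 1:
            self.bits = None                # leaf: all symbols identical
            return
        mid = len(self.alphabet) // 2
        left_half = set(self.alphabet[:mid])
        # one bit per symbol: 0 = routed to left child, 1 = to right child
        self.bits = [0 if c in left_half else 1 for c in seq]
        self.left = WaveletTree([c for c in seq if c in left_half],
                                self.alphabet[:mid])
        self.right = WaveletTree([c for c in seq if c not in left_half],
                                 self.alphabet[mid:])

    def rank(self, c, i):
        """Occurrences of symbol c in seq[:i]."""
        if self.bits is None:
            return i if self.alphabet and c == self.alphabet[0] else 0
        mid = len(self.alphabet) // 2
        if c in self.alphabet[:mid]:
            return self.left.rank(c, self.bits[:i].count(0))
        return self.right.rank(c, self.bits[:i].count(1))

wt = WaveletTree(list("abracadabra"))
```

Each rank query descends one root-to-leaf path, remapping the prefix length through the node's bitvector; with succinct rank-capable bitvectors this costs O(log sigma) time in compressed space.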
|
1204.3596
|
Markerless Motion Capture in the Crowd
|
cs.SI cs.HC
|
This work uses crowdsourcing to obtain motion capture data from video
recordings. The data is obtained by information workers who click repeatedly to
indicate body configurations in the frames of a video, resulting in a model of
2D structure over time. We discuss techniques to optimize the tracking task and
strategies for maximizing accuracy and efficiency. We show visualizations of a
variety of motions captured with our pipeline, and then apply reconstruction
techniques to derive 3D structure.
|
1204.3598
|
Visualizing Collective Discursive User Interactions in Online Life
Science Communities
|
cs.SI
|
This paper highlights the rationale for the development of BioViz, a tool to
help visualize the existence of collective user interactions in online life
science communities. The first community studied has approximately 22,750
unique users and the second has 35,000. Making sense of the number of
interactions between actors in these networks in order to discern patterns of
collective organization and intelligent behavior is challenging. One of the
complications is that forums - our object of interest - can vary in their
purpose and remit (e.g., from the role of gender in the life sciences to forums
of praxis, such as one exploring cell line culturing), and this shapes the
structure of the forum organization itself. Our approach took a random sample
of 53 forums which were manually analyzed by our research team and interactions
between actors were recorded as arcs between nodes. The paper focuses on a
discussion of the utility of our approach, but presents some brief results to
highlight the forms of knowledge that can be gained in identifying collective
group formations. Specifically, we found that by using a matrix-based
visualization approach, we were able to see patterns of collective behavior
which we believe is valuable both to the study of collective intelligence and
the design of virtual organizations.
|
1204.3611
|
Learning to Predict the Wisdom of Crowds
|
cs.SI cs.LG
|
The problem of "approximating the crowd" is that of estimating the crowd's
majority opinion by querying only a subset of it. Algorithms that approximate
the crowd can intelligently stretch a limited budget for a crowdsourcing task.
We present an algorithm, "CrowdSense," that works in an online fashion to
dynamically sample subsets of labelers based on an exploration/exploitation
criterion. The algorithm produces a weighted combination of a subset of the
labelers' votes that approximates the crowd's opinion.
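The published CrowdSense algorithm has its own weight-update and confidence rules; the sketch below illustrates only the exploitation half of the idea, i.e., querying labelers in decreasing weight order and stopping once the weighted running vote is confident enough. The threshold `theta` and all names here are assumptions for illustration:

```python
def approximate_crowd(labelers, x, weights, theta=0.6):
    # Query labelers in decreasing weight order and stop early once
    # the weighted running vote is sufficiently one-sided.
    score, total = 0.0, 0.0
    order = sorted(range(len(labelers)), key=lambda i: -weights[i])
    for i in order:
        vote = labelers[i](x)              # each vote is in {-1, +1}
        score += weights[i] * vote
        total += weights[i]
        if total > 0 and abs(score) / total >= theta:
            break                          # confident enough: stop querying
    return 1 if score >= 0 else -1
```

An exploration step, occasionally querying a low-weight labeler and adjusting its weight against the final outcome, would sit on top of this loop to realize the exploration/exploitation criterion.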
|
1204.3616
|
Large-Scale Automatic Labeling of Video Events with Verbs Based on
Event-Participant Interaction
|
cs.CV cs.AI
|
We present an approach to labeling short video clips with English verbs as
event descriptions. A key distinguishing aspect of this work is that it labels
videos with verbs that describe the spatiotemporal interaction between event
participants, humans and objects interacting with each other, abstracting away
all object-class information and fine-grained image characteristics, and
relying solely on the coarse-grained motion of the event participants. We apply
our approach to a large set of 22 distinct verb classes and a corpus of 2,584
videos, yielding two surprising outcomes. First, a classification accuracy of
greater than 70% on a 1-out-of-22 labeling task and greater than 85% on a
variety of 1-out-of-10 subsets of this labeling task is independent of the
choice of which of two different time-series classifiers we employ. Second, we
achieve this level of accuracy using a highly impoverished intermediate
representation consisting solely of the bounding boxes of one or two event
participants as a function of time. This indicates that successful event
recognition depends more on the choice of appropriate features that
characterize the linguistic invariants of the event classes than on the
particular classifier algorithms.
|
1204.3618
|
Compensating Interpolation Distortion by Using New Optimized Modular
Method
|
cs.CV cs.MM
|
A modular method was previously suggested to recover a band-limited signal from
the sample-and-hold and linearly interpolated (or, in general, an
nth-order-hold) version of the regular samples. In this paper a novel approach
for compensating the distortion of any interpolation based on the modular
method is proposed. In this method the performance of the modular method is
optimized by adding only some simply calculated coefficients. This approach
yields a drastic improvement in signal-to-noise ratio with fewer modules
compared to the classical modular method. Simulation results clearly confirm
the improvement of the proposed method and also its superior robustness
against additive noise.
|
1204.3658
|
Jar Decoding: Non-Asymptotic Converse Coding Theorems, Taylor-Type
Expansion, and Optimality
|
cs.IT math.IT
|
Recently, a new decoding rule called jar decoding was proposed; under jar
decoding, a non-asymptotic achievable tradeoff between the coding rate and word
error probability was also established for any discrete input memoryless
channel with discrete or continuous output (DIMC). Along the path of
non-asymptotic analysis, in this paper, it is further shown that jar decoding
is actually optimal up to the second order coding performance by establishing
new non-asymptotic converse coding theorems, and determining the Taylor
expansion of the (best) coding rate $R_n (\epsilon)$ of finite block length for
any block length $n$ and word error probability $\epsilon$ up to the second
order. Finally, based on the Taylor-type expansion and the new converses, two
approximation formulas for $R_n (\epsilon)$ (dubbed "SO" and "NEP") are
provided; they are further evaluated and compared against some of the best
bounds known so far, as well as the normal approximation of $R_n (\epsilon)$
revisited recently in the literature. It turns out that while the normal
approximation is all over the map, i.e., sometimes below achievable bounds and
sometimes above converse bounds, the SO approximation is much more reliable as
it is always below converses; in the meantime, the NEP approximation is the
best among the three and always provides an accurate estimation for $R_n
(\epsilon)$. An important implication arising from the Taylor-type expansion of
$R_n (\epsilon)$ is that in the practical non-asymptotic regime, the optimal
marginal codeword symbol distribution is not necessarily a capacity achieving
distribution.
|
1204.3661
|
Non-asymptotic Equipartition Properties for Independent and Identically
Distributed Sources
|
cs.IT math.IT
|
Given an independent and identically distributed source $X = \{X_i
\}_{i=1}^{\infty}$ with finite Shannon entropy or differential entropy (as the
case may be) $H(X)$, the non-asymptotic equipartition property (NEP) with
respect to $H(X)$ is established, which characterizes, for any finite block
length $n$, how close $-{1\over n} \ln p(X_1 X_2...X_n)$ is to $H(X)$ by
determining the information spectrum of $X_1 X_2...X_n $, i.e., the
distribution of $-{1\over n} \ln p(X_1 X_2...X_n)$. Non-asymptotic
equipartition properties of a similar nature (with respect to conditional
entropy, mutual information, and relative entropy) are also established.
These non-asymptotic equipartition properties are instrumental to the
development of non-asymptotic coding (including both source and channel coding)
results in information theory in the same way as the asymptotic equipartition
property to all asymptotic coding theorems established so far in information
theory. As an example, the NEP with respect to $H(X)$ is used to establish a
non-asymptotic fixed rate source coding theorem, which reveals, for any finite
block length $n$, a complete picture about the tradeoff between the minimum
rate of fixed rate coding of $X_1...X_n$ and error probability when the error
probability is a constant, or goes to 0 with block length $n$ at a
sub-polynomial, polynomial or sub-exponential speed. With the help of the NEP
with respect to other information quantities, non-asymptotic channel coding
theorems of similar nature will be established in a separate paper.
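For intuition about the quantity the NEP controls, the normalized log-likelihood $-{1\over n} \ln p(X_1 X_2...X_n)$ of a simple Bernoulli source can be checked empirically against $H(X)$. The source parameter, block length, and block count below are arbitrary choices for this sketch:

```python
import math
import random

random.seed(0)
p = 0.3                                                  # Bernoulli(p) source
H = -(p * math.log(p) + (1 - p) * math.log(1 - p))       # entropy in nats

def normalized_log_likelihood(n):
    # -1/n ln p(X_1 ... X_n) for one i.i.d. block of length n
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    log_p = sum(math.log(p) if x == 1 else math.log(1 - p) for x in xs)
    return -log_p / n

rates = [normalized_log_likelihood(10000) for _ in range(50)]
avg = sum(rates) / len(rates)          # concentrates near H as n grows
```

The NEP quantifies exactly how tight this concentration is at each finite $n$, which is the information the purely asymptotic AEP cannot provide.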
|
1204.3663
|
Thermodynamic Principles in Social Collaborations
|
cs.SI physics.soc-ph
|
A thermodynamic framework is presented to characterize the evolution of
efficiency, order, and quality in social content production systems, and this
framework is applied to the analysis of Wikipedia. Contributing editors are
characterized by their (creative) energy levels in terms of number of edits. We
develop a definition of entropy that can be used to analyze the efficiency of
the system as a whole, and relate it to the evolution of power-law
distributions and a metric of quality. The concept is applied to the analysis
of eight years of Wikipedia editing data and results show that (1) Wikipedia
has become more efficient during its evolution and (2) the entropy-based
efficiency metric has high correlation with observed readership of Wikipedia
pages.
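The paper develops its own entropy definition for this setting; purely as a generic illustration of an entropy-based efficiency metric over editor activity (not the paper's actual definition), one could normalize the Shannon entropy of the edit distribution:

```python
import math

def activity_entropy(edit_counts):
    # Shannon entropy (bits) of the distribution of edits over editors
    total = sum(edit_counts)
    probs = [c / total for c in edit_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def efficiency(edit_counts):
    # normalized: 0 = all edits by one editor, 1 = perfectly even activity
    n = len(edit_counts)
    return activity_entropy(edit_counts) / math.log2(n) if n > 1 else 0.0
```

A metric of this shape makes "more efficient over time" a measurable claim: one can track the normalized entropy of yearly edit snapshots and correlate it with readership, as the paper does with its own definition.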
|
1204.3673
|
Group Foraging in Dynamic Environments
|
cs.SI physics.soc-ph q-bio.PE
|
Previous human foraging experiments have shown that human groups routinely
undermatch environmental resources much like other animal species. In this
experiment, we test whether humans also selectively rely on others as
information sources when the environmental state is uncertain, and we also test
whether overt signals of other foragers' success influence group matching
behavior and group adaptation to a changing environment. The results show
evidence of reliance on social information in specific conditions, but
participants were primarily influenced by their individual assessments of food
location rather than the success of other foragers.
|
1204.3677
|
Bayesian Data Cleaning for Web Data
|
cs.DB cs.IR
|
Data cleaning is a long-standing problem, which is growing in importance with
the mass of uncurated web data. State of the art approaches for handling
inconsistent data are systems that learn and use conditional functional
dependencies (CFDs) to rectify data. These methods learn data
patterns--CFDs--from a clean sample of the data and use them to rectify the
dirty/inconsistent data. While getting a clean training sample is feasible in
enterprise data scenarios, it is infeasible in web databases where there is no
separate curated data. CFD based methods are unfortunately particularly
sensitive to noise; we will empirically demonstrate that the number of CFDs
learned falls quite drastically with even a small amount of noise. In order to
overcome this limitation, we propose a fully probabilistic framework for
cleaning data. Our approach involves learning both the generative and error
(corruption) models of the data and using them to clean the data. For
generative models, we learn Bayes networks from the data. For error models, we
consider a maximum entropy framework for combining multiple error processes. The
generative and error models are learned directly from the noisy data. We
present the details of the framework and demonstrate its effectiveness in
rectifying web data.
|
1204.3678
|
Crowd Memory: Learning in the Collective
|
cs.SI cs.HC physics.soc-ph
|
Crowd algorithms often assume workers are inexperienced and thus fail to
adapt as workers in the crowd learn a task. These assumptions fundamentally
limit the types of tasks that systems based on such algorithms can handle. This
paper explores how the crowd learns and remembers over time in the context of
human computation, and how more realistic assumptions of worker experience may
be used when designing new systems. We first demonstrate that the crowd can
recall information over time and discuss possible implications of crowd memory
in the design of crowd algorithms. We then explore crowd learning during a
continuous control task. Recent systems are able to disguise dynamic groups of
workers as crowd agents to support continuous tasks, but have not yet
considered how such agents are able to learn over time. We show, using a
real-time gaming setting, that crowd agents can learn over time, and `remember'
by passing strategies from one generation of workers to the next, despite high
turnover rates in the workers comprising them. We conclude with a discussion of
future research directions for crowd memory and learning.
|
1204.3682
|
Leading the Collective: Social Capital and the Development of Leaders in
Core-Periphery Organizations
|
cs.SI physics.soc-ph
|
Wikipedia and open source software projects have been cited as canonical
examples of collectively intelligent organizations. Both organizations rely on
large crowds of contributors to create knowledge goods. The crowds that emerge
in both cases are not flat, but form a core-periphery network in which a few
leaders contribute a large portion of the production and coordination work.
This paper explores the social network processes by which leaders emerge from
crowd-based organizations.
|
1204.3698
|
Automatic Prediction Of Small Group Performance In Information Sharing
Tasks
|
cs.SI cs.HC physics.soc-ph
|
In this paper, we describe a novel approach, based on Markov jump processes,
to model small group conversational dynamics and to predict small group
performance. More precisely, we estimate conversational events such as turn
taking, backchannels, turn-transitions at the micro-level (1 minute windows)
and then we bridge the micro-level behavior and the macro-level performance. We
tested our approach with a cooperative task, the Information Sharing task, and
we verified the relevance of micro-level interaction dynamics in determining a
good group performance (e.g. higher speaking turns rate and more balanced
participation among group members).
|
1204.3700
|
Fast thresholding algorithms with feedbacks for sparse signal recovery
|
cs.IT math.IT
|
We provide another framework of iterative algorithms based on thresholding,
feedback and null space tuning for sparse signal recovery arising in sparse
representations and compressed sensing. Several thresholding algorithms with
various feedbacks are derived, which are seen as exceedingly effective and
fast. Convergence results are also provided. The core algorithm is shown to
converge in finitely many steps under a (preconditioned) restricted isometry
condition. The algorithms are seen as particularly effective for large scale
problems. Numerical studies about the effectiveness and the speed of the
algorithms are also presented.
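The paper's algorithms add feedback and null-space tuning on top of the thresholding step; as a baseline for the family they extend, plain iterative hard thresholding can be sketched as follows (the measurement matrix in the example, the step size, and the iteration cap are toy assumptions):

```python
def iht(A, y, s, iters=100, step=1.0):
    # Iterative hard thresholding: take a gradient step on ||y - Ax||^2,
    # then keep only the s largest-magnitude entries of x.
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] + step * g[j] for j in range(n)]
        keep = set(sorted(range(n), key=lambda j: -abs(x[j]))[:s])
        x = [x[j] if j in keep else 0.0 for j in range(n)]
    return x

# toy recovery problem: 2-sparse x* = [2, 0, 1, 0] observed through A
A = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
y = [2.0, 0.0, 1.0]
x = iht(A, y, s=2)
```

Each iteration is a gradient step on $\|y - Ax\|_2^2$ followed by projection onto the set of $s$-sparse vectors; the feedback and null-space-tuning variants in the paper refine this basic loop.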
|
1204.3711
|
Large-System Analysis of Joint User Selection and Vector Precoding for
Multiuser MIMO Downlink
|
cs.IT math.IT
|
Joint user selection (US) and vector precoding (US-VP) is proposed for
multiuser multiple-input multiple-output (MU-MIMO) downlink. The main
difference between joint US-VP and conventional US is that US depends on data
symbols for joint US-VP, whereas conventional US is independent of data
symbols. The replica method is used to analyze the performance of joint US-VP
in the large-system limit, where the numbers of transmit antennas, users, and
selected users tend to infinity while their ratios are kept constant. The
analysis under the assumptions of replica symmetry (RS) and 1-step replica
symmetry breaking (1RSB) implies that optimal data-independent US provides
nothing but the same performance as random US in the large-system limit,
whereas data-independent US is capacity-achieving as only the number of users
tends to infinity. It is shown that joint US-VP can provide a substantial
reduction of the energy penalty in the large-system limit. Consequently, joint
US-VP outperforms separate US-VP, which consists of a combination of vector
precoding (VP) and data-independent US, in terms of the achievable sum rate. In
particular, data-dependent US can be applied to general modulation, and
implemented with a greedy algorithm.
|
1204.3716
|
On the Blind Interference Alignment over Homogeneous Block Fading
Channels
|
cs.IT math.IT
|
A staggered fading pattern between different users is crucial to interference
alignment without CSIT, or so-called blind interference alignment (BIA). This
special fading structure naturally arises in the heterogeneous block fading
setting, in which different users experience independent block fading with
different coherence times. Jafar et al. prove that BIA can be applied in some
special heterogeneous block fading channels, which are formed naturally or
constructed artificially. In this paper, we show that in the context of a
2-user 2x1 broadcast (BC) channel, a staggered fading pattern can also be
found in the homogeneous block fading setting, in which both users experience
independent fading with the same coherence time; we propose a scheme to
achieve the optimal 4/3 DoF for this homogeneous setting using BIA. Applying
the proposed scheme, we further study a 2x1 BC network with K users undergoing
homogeneous block fading. When K>=4, we show it is almost guaranteed that the
transmitter can find two users among the K users to form a 2-user 2x1 BC
channel which can apply BIA.
|
1204.3719
|
On the Computation of the Higher Order Statistics of the Channel
Capacity over Generalized Fading Channels
|
cs.IT cs.PF math.IT math.PR math.ST stat.TH
|
The higher-order statistics (HOS) of the channel capacity
$\mu_n=\mathbb{E}[\log^n(1+\gamma_{end})]$, where $n\in\mathbb{N}$ denotes the
order of the statistics, has received relatively little attention in the
literature, due in part to the intractability of its analysis. In this letter,
we propose a novel and unified analysis, which is based on the moment
generating function (MGF) technique, to exactly compute the HOS of the channel
capacity. More precisely, our mathematical formalism can be readily applied to
maximal-ratio-combining (MRC) receivers operating in generalized fading
environments (i.e., the sum of the correlated noncentral chi-squared
distributions / the correlated generalized Rician distributions). The
mathematical formalism is illustrated by some numerical examples focusing on
the correlated generalized fading environments.
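The letter computes $\mu_n$ exactly via the MGF technique; as a brute-force sanity check, the same moments can be estimated by Monte Carlo for a single Rayleigh-fading branch, where the instantaneous SNR is exponentially distributed (the SNR normalization and sample count below are arbitrary choices for this sketch):

```python
import math
import random

random.seed(1)

def hos_capacity(n, snr=1.0, samples=100000):
    # Monte Carlo estimate of mu_n = E[log^n(1 + gamma)] for a single
    # Rayleigh-fading branch, i.e. gamma ~ Exponential(mean = snr)
    acc = 0.0
    for _ in range(samples):
        gamma = random.expovariate(1.0 / snr)
        acc += math.log(1.0 + gamma) ** n
    return acc / samples

mu1 = hos_capacity(1)   # ergodic capacity (nats), ~0.596 at snr = 1
mu2 = hos_capacity(2)   # second moment; capacity variance is mu2 - mu1**2
```

Higher-order moments such as $\mu_2$ capture the dispersion of the capacity around its ergodic mean, which is what makes the HOS useful beyond first-order analysis.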
|
1204.3724
|
Who is Authoritative? Understanding Reputation Mechanisms in Quora
|
cs.SI cs.HC
|
As social Q&A sites gain popularity, it is important to understand how users
judge the authoritativeness of users and content, build reputation, and
identify and promote high quality content. We conducted a study of emerging
social Q&A site Quora. First, we describe user activity on Quora by analyzing
data across 60 question topics and 3917 users. Then we provide a rich
understanding of issues of authority, reputation, and quality from in-depth
interviews with ten Quora users. Our results show that primary sources of
information on Quora are judged authoritative. Also, users judge the reputation
of other users based on their past contributions. Social voting helps users
identify and promote good content but is prone to preferential attachment.
Combining social voting with sophisticated algorithms for ranking content might
enable users to better judge others' reputation and promote high quality
content.
|
1204.3726
|
Proceedings of the first International Workshop On Open Data, WOD-2012
|
cs.DL cs.DB
|
WOD-2012 aims at facilitating new trends and ideas from a broad range of
topics concerned with the widespread Open Data movement, from the
viewpoint of computer science research.
While being most commonly known from the recent Linked Open Data movement,
the concept of publishing data explicitly as Open Data has meanwhile developed
many variants and facets that go beyond publishing large and highly structured
RDF/S repositories. Open Data comprises text and semi-structured data, but also
open multi-modal contents, including music, images, and videos. With the
increasing amount of data that is published by governments (see, e.g.,
data.gov, data.gov.uk or data.gouv.fr), by international organizations
(data.worldbank.org or data.undp.org) and by scientific communities (tdar.org,
cds.u-strasbg.fr, GenBank, IRIS or KNB) explicitly under an Open Data policy,
new challenges arise not only due to the scale at which this data becomes
available.
A number of community-based conferences accommodate tracks or workshops which
are dedicated to Open Data. However, WOD aims to be a premier venue to gather
researchers and practitioners who are contributing to and interested in the
emerging field of managing Open Data from a computer science perspective.
Hence, it is a unique opportunity to find in a single place up-to-date
scientific works on Web-scale Open Data issues that have so far only partially
been addressed by different research communities such as Databases, Data Mining
and Knowledge Management, Distributed Systems, Data Privacy, and Data
Visualization.
|
1204.3731
|
Towards Real-Time Summarization of Scheduled Events from Twitter Streams
|
cs.IR cs.CL cs.SI
|
This paper explores the real-time summarization of scheduled events such as
soccer games from torrential flows of Twitter streams. We propose and evaluate
an approach that substantially shrinks the stream of tweets in real-time, and
consists of two steps: (i) sub-event detection, which determines if something
new has occurred, and (ii) tweet selection, which picks a representative tweet
to describe each sub-event. We compare the summaries generated in three
languages for all the soccer games in "Copa America 2011" to reference live
reports offered by Yahoo! Sports journalists. We show that simple text analysis
methods which do not involve external knowledge lead to summaries that cover
84% of the sub-events on average, and 100% of key types of sub-events (such as
goals in soccer). Our approach should be straightforwardly applicable to other
kinds of scheduled events such as other sports, award ceremonies, keynote
talks, TV shows, etc.
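The two-step pipeline can be sketched in a few lines. The burst rule (a window whose tweet count jumps to a multiple of the previous window's count) and the word-overlap selector below are illustrative assumptions, not the paper's exact detectors:

```python
from collections import Counter

def detect_subevents(timestamps, window=60, factor=2.0):
    """Flag windows whose tweet count is at least `factor` times the
    previous window's count (a simple burst rule, assumed here)."""
    if not timestamps:
        return []
    start, end = min(timestamps), max(timestamps)
    counts = Counter(int((t - start) // window) for t in timestamps)
    bursts = []
    for w in range(1, int((end - start) // window) + 1):
        prev, cur = counts.get(w - 1, 0), counts.get(w, 0)
        if prev > 0 and cur >= factor * prev:
            bursts.append(w)
    return bursts

def pick_representative(tweets):
    """Choose the tweet whose words are most frequent across the window
    (an illustrative centrality score, not the paper's selector)."""
    vocab = Counter(w for t in tweets for w in set(t.lower().split()))
    return max(tweets, key=lambda t: sum(vocab[w] for w in set(t.lower().split())))
```

A sudden spike of tweets around a goal would trigger `detect_subevents`, and the most word-typical tweet of that window would be emitted as its one-line summary.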
|
1204.3740
|
Cyclic codes over some special rings
|
cs.IT math.IT math.RA
|
In this paper we will study cyclic codes over some special rings:
F_{q}[u]/(u^{i}), F_{q}[u_1,...,u_{i}]/(u_1^2,u_2^2,...,u_{i}^2, u_1 u_2 - u_2
u_1,...,u_{i}u_{j} - u_{j}u_{i},...), F_{q}[u,v]/(u^{i},v^{j},uv-vu), q=p^{r},
where p is a prime number, r\in N-{0} and F_{q} is a field with q elements.
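As a concrete illustration of the first family, here is a minimal sketch of arithmetic in F_p[u]/(u^2) (elements a + bu with u^2 = 0, coefficients mod p); the class name and interface are invented for illustration and are not tied to any construction in the paper:

```python
class FpU2:
    """Element a + b*u of F_p[u]/(u^2): u^2 = 0, coefficients mod p."""
    def __init__(self, a, b, p):
        self.a, self.b, self.p = a % p, b % p, p
    def __add__(self, o):
        return FpU2(self.a + o.a, self.b + o.b, self.p)
    def __mul__(self, o):
        # (a + bu)(c + du) = ac + (ad + bc)u, since the u^2 term vanishes
        return FpU2(self.a * o.a, self.a * o.b + self.b * o.a, self.p)
    def __eq__(self, o):
        return (self.a, self.b, self.p) == (o.a, o.b, o.p)
    def __repr__(self):
        return f"{self.a}+{self.b}u"
```

Codewords of a cyclic code over this ring are vectors of such elements closed under cyclic shifts, with componentwise arithmetic as above.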
|
1204.3742
|
Distributed Iterative Processing for Interference Channels with Receiver
Cooperation
|
cs.IT math.IT stat.ML
|
We propose a framework for the derivation and evaluation of distributed
iterative algorithms for receiver cooperation in interference-limited wireless
systems. Our approach views the processing within and collaboration between
receivers as the solution to an inference problem in the probabilistic model of
the whole system. The probabilistic model is formulated to explicitly
incorporate the receivers' ability to share information of a predefined type.
We employ a recently proposed unified message-passing tool to infer the
variables of interest in the factor graph representation of the probabilistic
model. The exchange of information between receivers arises in the form of
passing messages along some specific edges of the factor graph; the rate of
updating and passing these messages determines the communication overhead
associated with cooperation. Simulation results illustrate the high performance
of the proposed algorithm even with a low number of message exchanges between
receivers.
|
1204.3748
|
Statistical Multiresolution Estimation for Variational Imaging: With an
Application in Poisson-Biophotonics
|
stat.AP cs.CV
|
In this paper we present a spatially-adaptive method for image reconstruction
that is based on the concept of statistical multiresolution estimation as
introduced in [Frick K, Marnitz P, and Munk A. "Statistical multiresolution
Dantzig estimation in imaging: Fundamental concepts and algorithmic framework".
Electron. J. Stat., 6:231-268, 2012]. It constitutes a variational
regularization technique that uses a supremum-type distance measure as
data-fidelity combined with a convex cost functional. The resulting convex
optimization problem is approached by a combination of an inexact alternating
direction method of multipliers and Dykstra's projection algorithm. We describe
a novel method for balancing data-fit and regularity that is fully automatic
and allows for a sound statistical interpretation. The performance of our
estimation approach is studied for various problems in imaging. Among others,
this includes deconvolution problems that arise in Poisson nanoscale
fluorescence microscopy.
|
1204.3752
|
GPS Information and Rate Tolerance - Clarifying Relationship between
Rate Distortion and Complexity Distortion
|
cs.IT cs.CC math.IT
|
I proposed rate tolerance and discussed its relation to rate distortion in my
book "A Generalized Information Theory" published in 1993. Recently, I examined
the structure function and the complexity distortion based on Kolmogorov's
complexity theory. It is my understanding now that complexity-distortion is
only a special case of rate tolerance in which the constraint sets change from
fuzzy sets into crisp sets that look like balls with the same radius. It is not true
that the complexity distortion is generally equivalent to rate distortion as
claimed by the researchers of complexity theory. I conclude that a rate
distortion function can only be equivalent to a rate tolerance function and
both of them can be described by a generalized mutual information formula where
P(Y|X) is equal to P(Y|Tolerance). The paper uses GPS as an example to derive
generalized information formulae and proves the above conclusions using
mathematical analyses and a coding example. The similarity between the formula
for measuring GPS information and the formula for the rate distortion function
can deepen our understanding of the generalized information measure.
|
1204.3799
|
Biographical Social Networks on Wikipedia - A cross-cultural study of
links that made history
|
cs.SI cs.CY physics.soc-ph
|
It is arguable whether history is made by great men and women or vice versa,
but undoubtedly social connections shape history. Analysing Wikipedia, a global
collective memory place, we aim to understand how social links are recorded
across cultures. Starting with the set of biographies in the English Wikipedia
we focus on the networks of links between these biographical articles on the 15
largest language Wikipedias. We detect the most central characters in these
networks and point out culture-related peculiarities. Furthermore, we reveal
remarkable similarities between distinct groups of language Wikipedias and
highlight the shared knowledge about connections between persons across
cultures.
|
1204.3800
|
Indus script corpora, archaeo-metallurgy and Meluhha (Mleccha)
|
cs.CL
|
Jules Bloch's work on formation of the Marathi language has to be expanded
further to provide for a study of evolution and formation of Indian languages
in the Indian language union (sprachbund). The paper analyses the stages in the
evolution of early writing systems which began with the evolution of counting
in the ancient Near East. A stage anterior to the stage of syllabic
representation of sounds of a language, is identified. Unique geometric shapes
required for tokens to categorize objects became too large to handle to
abstract hundreds of categories of goods and metallurgical processes during the
production of bronze-age goods. About 3500 BCE, Indus script as a writing
system was developed to use hieroglyphs to represent the 'spoken words'
identifying each of the goods and processes. A rebus method of representing
similar sounding words of the lingua franca of the artisans was used in Indus
script. This method is recognized and consistently applied for the lingua
franca of the Indian sprachbund. That the ancient languages of India,
constituted a sprachbund (or language union) is now recognized by many
linguists. The sprachbund area is proximate to the area where most of the Indus
script inscriptions were discovered, as documented in the corpora. That
hundreds of Indian hieroglyphs continued to be used in metallurgy is evidenced
by their use on early punch-marked coins. This explains the combined use of
syllabic scripts such as Brahmi and Kharoshti together with the hieroglyphs on
Rampurva copper bolt, and Sohgaura copper plate from about 6th century
BCE. Indian hieroglyphs constitute a writing system for the Meluhha language and are
rebus representations of archaeo-metallurgy lexemes. The rebus principle was
employed by the early scripts and can legitimately be used to decipher the
Indus script, after secure pictorial identification.
|
1204.3806
|
PageRank model of opinion formation on social networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We propose the PageRank model of opinion formation and investigate its rich
properties on real directed networks of Universities of Cambridge and Oxford,
LiveJournal and Twitter. In this model the opinion formation of linked electors
is weighted with their PageRank probability. We find that the society elite,
corresponding to the top PageRank nodes, can impose its opinion to a
significant fraction of the society. However, for a homogeneous distribution of
two opinions there exists a bistability range of opinions which depends on a
conformist parameter characterizing the opinion formation. We find that
LiveJournal and Twitter networks have a stronger tendency to totalitarian
opinion formation. We also analyze the Sznajd model generalized for scale-free
networks with the weighted PageRank vote of electors.
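The model's ingredients can be sketched: standard PageRank via power iteration, plus one synchronous opinion update in which each elector adopts the opinion carrying the larger total PageRank among its in-neighbors. The update rule here is an illustrative reading of "weighted with their PageRank probability", not the paper's exact dynamics (which also involve a conformist parameter):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power iteration on a dense adjacency matrix (adj[i, j] = 1 for a
    link i -> j); dangling nodes spread their mass uniformly."""
    n = adj.shape[0]
    out = adj.sum(axis=1)
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        spread = np.where(out > 0, p / np.maximum(out, 1), 0.0)
        p = (damping * (adj.T @ spread)
             + damping * p[out == 0].sum() / n
             + (1 - damping) / n)
    return p / p.sum()

def opinion_step(adj, rank, opinion):
    """One synchronous update: each node adopts the opinion with the
    larger PageRank-weighted support among its in-neighbors (an
    illustrative rule, not the paper's exact one)."""
    new = opinion.copy()
    for j in range(adj.shape[0]):
        nbrs = np.nonzero(adj[:, j])[0]
        if nbrs.size:
            w_plus = rank[nbrs][opinion[nbrs] == 1].sum()
            w_minus = rank[nbrs][opinion[nbrs] == 0].sum()
            new[j] = 1 if w_plus > w_minus else 0 if w_minus > w_plus else opinion[j]
    return new
```

Iterating `opinion_step` from a mixed initial assignment shows how high-PageRank "elite" nodes can drag the rest of the network toward their opinion.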
|
1204.3812
|
Gaussian Approximation for the Wireless Multi-access Interference
Distribution and Its Applications
|
math.PR cs.IT math.IT
|
This paper investigates the problem of Gaussian approximation for the
wireless multi-access interference distribution in large spatial wireless
networks. First, a principled methodology is presented to establish rates of
convergence of the multi-access interference distribution to a Gaussian
distribution for general bounded and power-law decaying path-loss functions.
The model is general enough to also include various random wireless channel
dynamics such as fading and shadowing arising from multipath propagation and
obstacles existing in the communication environment. It is shown that the
wireless multi-access interference distribution converges to the Gaussian
distribution with the same mean and variance at a rate
$\frac{1}{\sqrt{\lambda}}$, where $\lambda>0$ is a parameter controlling the
intensity of the planar (possibly non-stationary) Poisson point process
generating node locations. An explicit expression for the scaling coefficient
is obtained as a function of fading statistics and the path-loss function.
Second, an extensive numerical and simulation study is performed to illustrate
the accuracy of the derived Gaussian approximation bounds. A good statistical
fit between the interference distribution and its Gaussian approximation is
observed for moderate to high values of $\lambda$. Finally, applications of
these approximation results to upper and lower bound the outage capacity and
ergodic sum capacity for spatial wireless networks are illustrated. The derived
performance bounds on these capacity metrics track the network performance
within one nats per second per hertz.
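A Monte Carlo sketch of the setting: a Poisson field of interferers in a disc, a bounded path-loss g(r) = 1/(1 + r^alpha), and exponential (Rayleigh-power) fading, with the empirical mean checked against Campbell's formula E[I] = lambda * E[h] * \int g(x) dx. All modelling choices here are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def interference_samples(lam, radius=10.0, alpha=4.0, n=2000):
    """Aggregate interference at the origin from a Poisson field of
    intensity `lam` in a disc, with bounded path loss 1/(1 + r^alpha)
    and unit-mean exponential fading power gains."""
    area = np.pi * radius ** 2
    out = np.empty(n)
    for i in range(n):
        k = rng.poisson(lam * area)
        r = radius * np.sqrt(rng.uniform(size=k))   # uniform in the disc
        h = rng.exponential(size=k)                 # fading power gains
        out[i] = np.sum(h / (1.0 + r ** alpha))
    return out

def campbell_mean(lam, radius=10.0, alpha=4.0, grid=100000):
    """Campbell's theorem mean, E[h] = 1: midpoint rule over the disc."""
    r = (np.arange(grid) + 0.5) * radius / grid
    return lam * np.sum(2 * np.pi * r / (1.0 + r ** alpha)) * (radius / grid)
```

For moderate `lam` the histogram of `interference_samples` is already close to a Gaussian with the Campbell mean, matching the convergence rate statement.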
|
1204.3818
|
Throughput Optimal Policies for Energy Harvesting Wireless Transmitters
with Non-Ideal Circuit Power
|
cs.IT math.IT
|
Characterizing the fundamental tradeoffs for maximizing energy efficiency
(EE) versus spectrum efficiency (SE) is a key problem in wireless
communication. In this paper, we address this problem for a point-to-point
additive white Gaussian noise (AWGN) channel with the transmitter powered
solely via energy harvesting from the environment. In addition, we assume a
practical on-off transmitter model with non-ideal circuit power, i.e., when the
transmitter is on, its consumed power is the sum of the transmit power and a
constant circuit power. Under this setup, we study the optimal transmit power
allocation to maximize the average throughput over a finite horizon, subject to
the time-varying energy constraint and the non-ideal circuit power consumption.
First, we consider the off-line optimization under the assumption that the
energy arrival time and amount are a priori known at the transmitter. Although
this problem is non-convex due to the non-ideal circuit power, we show an
efficient optimal solution that in general corresponds to a two-phase
transmission: the first phase with an EE-maximizing on-off power allocation,
and the second phase with a SE-maximizing power allocation that is
non-decreasing over time, thus revealing an interesting result that both the EE
and SE optimizations are unified in an energy harvesting communication system.
We then extend the optimal off-line algorithm to the case with multiple
parallel AWGN channels, based on the principle of nested optimization. Finally,
inspired by the off-line optimal solution, we propose a new online algorithm
under the practical setup with only the past and present energy state
information (ESI) known at the transmitter.
|
1204.3820
|
Distance Optimal Formation Control on Graphs with a Tight Convergence
Time Guarantee
|
cs.SY cs.AI cs.RO
|
For the task of moving a set of indistinguishable agents on a connected graph
with unit edge distance to an arbitrary set of goal vertices, free of
collisions, we propose a fast distance optimal control algorithm that guides
the agents into the desired formation. Moreover, we show that the algorithm
also provides a tight convergence time guarantee (time optimality and distance
optimality cannot be simultaneously satisfied). Our generic graph formulation
allows the algorithm to be applied to scenarios such as grids with holes
(modeling obstacles) in arbitrary dimensions. Simulations, available online,
confirm our theoretical developments.
|
1204.3830
|
Planning Optimal Paths for Multiple Robots on Graphs
|
cs.RO cs.AI cs.SY
|
In this paper, we study the problem of optimal multi-robot path planning
(MPP) on graphs. We propose two multiflow based integer linear programming
(ILP) models that compute minimum last arrival time and minimum total distance
solutions for our MPP formulation, respectively. The resulting algorithms from
these ILP models are complete and guaranteed to yield true optimal solutions.
In addition, our flexible framework can easily accommodate other variants of
the MPP problem. Focusing on the time optimal algorithm, we evaluate its
performance, both as a stand-alone algorithm and as a generic heuristic for
quickly solving large problem instances. Computational results confirm the
effectiveness of our method.
|
1204.3838
|
Energy cost reduction in the synchronization of a pair of nonidentical
coupled Hindmarsh-Rose neurons
|
cs.AI nlin.CD q-bio.NC
|
Many biological processes involve synchronization between nonequivalent
systems, i.e., systems whose differences are limited to a rather small
parameter mismatch. Maintaining the synchronized regime in these cases is
energetically costly \cite{1}. This work studies the energy implications of
synchronization phenomena in a pair of structurally flexible coupled neurons
that interact through electrical coupling. We show that the forced
synchronization between two nonidentical neurons creates appropriate conditions
for an efficient actuation of adaptive laws able to make the neurons
structurally approach their behaviours in order to decrease the flow of energy
required to maintain the synchronization regime.
|
1204.3844
|
On how percolation threshold affects PSO performance
|
cs.AI
|
Statistical evidence of the influence of neighborhood topology on the
performance of particle swarm optimization (PSO) algorithms has been shown in
many works. However, little has been done about the implications the
percolation threshold could have in determining the topology of this
neighborhood. This
work addresses this problem for individuals that, like robots, are able to
sense in a limited neighborhood around them. Based on the concept of
percolation threshold, and more precisely, the disk percolation model in 2D, we
show that better results are obtained for low values of radius, when
individuals occasionally ask others their best visited positions, with the
consequent decrease of computational complexity. On the other hand, since
percolation threshold is a universal measure, it could be of great interest for
comparing the performance of different hybrid PSO algorithms.
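A minimal PSO sketch in which each particle's social component uses only the personal bests of particles within a sensing radius, the disk neighborhood the abstract alludes to. The parameter values (w, c1, c2, radius) are conventional choices for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_sphere(n=20, dim=2, radius=0.5, steps=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize the sphere function; each particle only 'sees' the
    personal bests of particles within `radius` of it, mimicking
    sensing-limited individuals such as robots."""
    x = rng.uniform(-1, 1, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), (x ** 2).sum(axis=1)
    for _ in range(steps):
        d = np.linalg.norm(x[:, None] - x[None, :], axis=2)
        for i in range(n):
            nbrs = np.nonzero(d[i] <= radius)[0]          # includes i itself
            g = pbest[nbrs[np.argmin(pval[nbrs])]]        # best neighbor
            r1, r2 = rng.uniform(size=dim), rng.uniform(size=dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
            x[i] = x[i] + v[i]
        f = (x ** 2).sum(axis=1)
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
    return pval.min()
```

Whether `radius` is above or below the disk-percolation threshold determines whether the neighborhood graph forms a giant component, which is the quantity the study relates to performance.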
|
1204.3860
|
Macroscopes: models for collective decision making
|
cs.SI cs.CC
|
We introduce a new model of collective decision making, when a global
decision needs to be made but the parties only possess partial information, and
are unwilling (or unable) to first create a global composite of their local
views. Our macroscope model captures two key features of many real-world
problems: allotment structure (how access to local information is apportioned
between parties, including overlaps between the parties) and the possible
presence of meta-information (what each party knows about the allotment
structure of the overall problem). Using the framework of communication
complexity, we formalize the efficient solution of a macroscope. We present
general results about the macroscope model, and also results that abstract the
essential computational operations underpinning practical applications,
including in financial markets and decentralized sensor networks. We illustrate
the computational problem inherent in real-world collective decision making
processes using results for specific functions, involving detecting a change in
state (constant and step functions), and computing statistical properties (the
mean).
|
1204.3890
|
Collective Creativity: Where we are and where we might go
|
cs.SI cs.HC
|
Creativity is individual, and it is social. The social aspects of creativity
have become of increasing interest as systems have emerged that mobilize large
numbers of people to engage in creative tasks. We examine research related to
collective intelligence and differentiate work on collective creativity from
other collective activities by analyzing systems with respect to the tasks that
are performed and the outputs that result. Three types of systems are
discussed: games, contests and networks. We conclude by suggesting how systems
that generate collective creativity can be improved and how new systems might
be constructed.
|
1204.3891
|
Re-differentiation as collective intelligence: The Ktunaxa language
online community
|
cs.SI physics.soc-ph
|
This paper presents preliminary results of an investigation of collectively
intelligent behavior in a Native North American speech community. The research
reveals several independently initiated strategies organized around the
collective problem of language endangerment. Specifically, speakers are
engaging in self-organizing efforts to reverse historical language
simplification that resulted from cultural trauma. These acts of collective
intelligence serve to reduce entropy in speech community identity.
|
1204.3918
|
Eliminating the Weakest Link: Making Manipulation Intractable?
|
cs.AI cs.CC cs.GT
|
Successive elimination of candidates is often a route to making manipulation
intractable to compute. We prove that eliminating candidates does not
necessarily increase the computational complexity of manipulation. However, for
many voting rules used in practice, the computational complexity increases. For
example, it is already known that it is NP-hard to compute how a single voter
can manipulate the result of single transferable voting (the elimination
version of plurality voting). We show here that it is NP-hard to compute how a
single voter can manipulate the result of the elimination version of veto
voting, of the closely related Coombs' rule, and of the elimination versions of
a general class of scoring rules.
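Coombs' rule, one of the elimination rules mentioned above, is easy to state procedurally; this sketch assumes strict complete rankings over a common candidate set and breaks ties arbitrarily:

```python
from collections import Counter

def coombs(profile):
    """Coombs' rule: if no candidate holds a strict majority of first
    places, repeatedly eliminate the candidate with the most last
    places.  `profile` is a list of strict rankings (best first), all
    over the same candidates; ties are broken arbitrarily."""
    candidates = set(profile[0])
    rankings = [list(r) for r in profile]
    while True:
        firsts = Counter(r[0] for r in rankings)
        leader, votes = firsts.most_common(1)[0]
        if votes * 2 > len(rankings) or len(candidates) == 1:
            return leader
        out = Counter(r[-1] for r in rankings).most_common(1)[0][0]
        candidates.discard(out)
        rankings = [[c for c in r if c != out] for r in rankings]
```

Computing the winner is easy; the hardness result concerns computing how a single voter should *misreport* a ranking to change the winner.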
|
1204.3921
|
Analysis of Twitter Traffic based on Renewal Densities
|
cs.CY cs.SI
|
In this paper we propose a novel approach for Twitter traffic analysis based
on renewal theory. Even though Twitter datasets are of increasing interest to
researchers, extracting information from message timing remains somewhat
unexplored. Our approach, extending our prior work on anomaly detection, makes
it possible to characterize levels of correlation within a message stream, thus
assessing how much interaction there is between those posting messages.
Moreover, our method enables us to detect the presence of periodic traffic,
which is useful to determine whether there is spam in the message stream.
Because our proposed techniques only make use of timing information and are
amenable to downsampling, they can be used as low complexity tools for data
analysis.
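An elementary renewal-density estimate from raw timestamps, in the spirit of this approach: histogram all forward event-pair gaps and normalize per event, so that periodic traffic shows up as peaks at multiples of the period. The estimator below is a textbook construction, not the paper's method:

```python
import numpy as np

def renewal_density(times, bins=20, t_max=10.0):
    """Estimate the renewal density m(t): the expected rate of events a
    lag t after an arbitrary event, via a histogram of all forward
    pairwise gaps up to t_max."""
    times = np.sort(np.asarray(times, dtype=float))
    gaps = []
    for i, t in enumerate(times):
        later = times[i + 1:]
        gaps.extend(later[later - t <= t_max] - t)
    hist, edges = np.histogram(gaps, bins=bins, range=(0.0, t_max))
    width = edges[1] - edges[0]
    return hist / (len(times) * width), edges
```

For a Poisson stream the estimate is flat at the event rate; sharp peaks at regular lags flag the periodic traffic characteristic of spam bots.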
|
1204.3939
|
Tracking the 2011 Student-led Collective Movement in Chile through
Social Media Use
|
cs.SI physics.soc-ph
|
Using social media archives of the 2011 Chilean student unrest and dynamic
social network analysis, we study how leaders and participants use social media
such as Twitter, and the Web to self-organize and communicate with each other,
and thus generate one of the biggest "smart movements" in the history of Chile.
In this paper we i) describe the basic network topology of the 2011 student-led
social movement in Chile; ii) explore how the student leaders are connected to,
and how they are seen by (a) political leaders, and (b) University authorities;
iii) hypothesize about key success factors and risk variables for the Student
Network Movement's organization process and sustainability over time. We
contend that this social media enabled massive movement is yet another
manifestation of the network era, which leverages agents' socio-technical
networks, and thus accelerates how agents coordinate, mobilize resources and
enact collective intelligence.
|
1204.3946
|
The Dynamics of Influence Systems
|
nlin.AO cs.MA cs.SI math.DS
|
Influence systems form a large class of multiagent systems designed to model
how influence, broadly defined, spreads across a dynamic network. We build a
general analytical framework which we then use to prove that, while sometimes
chaotic, influence dynamics of the diffusive kind is almost always
asymptotically periodic. Besides resolving the dynamics of a popular family of
multiagent systems, the other contribution of this work is to introduce a new
type of renormalization-based bifurcation analysis for multiagent systems.
|
1204.3968
|
Convolutional Neural Networks Applied to House Numbers Digit
Classification
|
cs.CV cs.LG cs.NE
|
We classify digits of real-world house numbers using convolutional neural
networks (ConvNets). ConvNets are hierarchical feature learning neural networks
whose structure is biologically inspired. Unlike many popular vision approaches
that are hand-designed, ConvNets can automatically learn a unique set of
features optimized for a given task. We augmented the traditional ConvNet
architecture by learning multi-stage features and by using Lp pooling and
establish a new state-of-the-art of 94.85% accuracy on the SVHN dataset (45.2%
error improvement). Furthermore, we analyze the benefits of different pooling
methods and multi-stage features in ConvNets. The source code and a tutorial
are available at eblearn.sf.net.
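Lp pooling, credited above for part of the gain, computes y = (sum_i |x_i|^p)^(1/p) over each pooling window; p = 1 gives sum pooling and large p approaches max pooling. A minimal sketch for non-overlapping windows:

```python
import numpy as np

def lp_pool2d(x, k=2, p=2.0):
    """Non-overlapping Lp pooling over k x k windows:
    y = (sum |x|^p) ** (1/p).  p = 1 is sum pooling; as p grows the
    result approaches max pooling."""
    h, w = x.shape
    x = x[: h - h % k, : w - w % k]            # crop to a multiple of k
    blocks = x.reshape(h // k, k, w // k, k)
    return (np.abs(blocks) ** p).sum(axis=(1, 3)) ** (1.0 / p)
```

Intermediate values of p interpolate between the two classical pooling operators, which is the design space the paper's analysis explores.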
|
1204.3972
|
EigenGP: Sparse Gaussian process models with data-dependent
eigenfunctions
|
cs.LG stat.CO stat.ML
|
Gaussian processes (GPs) provide a nonparametric representation of functions.
However, classical GP inference suffers from high computational cost and it is
difficult to design nonstationary GP priors in practice. In this paper, we
propose a sparse Gaussian process model, EigenGP, based on the Karhunen-Loeve
(KL) expansion of a GP prior. We use the Nystrom approximation to obtain data
dependent eigenfunctions and select these eigenfunctions by evidence
maximization. This selection reduces the number of eigenfunctions in our model
and provides a nonstationary covariance function. To handle nonlinear
likelihoods, we develop an efficient expectation propagation (EP) inference
algorithm, and couple it with expectation maximization for eigenfunction
selection. Because the eigenfunctions of a Gaussian kernel are associated with
clusters of samples - including both the labeled and unlabeled - selecting
relevant eigenfunctions enables EigenGP to conduct semi-supervised learning.
Our experimental results demonstrate improved predictive performance of EigenGP
over alternative state-of-the-art sparse GP and semi-supervised learning
methods for regression, classification, and semi-supervised classification.
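The Nystrom step can be sketched: eigendecompose the kernel on a set of inducing points Z, then extend each eigenvector to arbitrary inputs via phi_j(x) = (sqrt(m)/lambda_j) k(x, Z) u_j. This is the standard Nystrom construction; EigenGP's evidence-based selection of eigenfunctions and its EP inference are not reproduced here:

```python
import numpy as np

def gaussian_kernel(a, b, ell=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def nystrom_eigenfunctions(X, Z, ell=1.0, top=3):
    """Data-dependent eigenfunctions via the Nystrom method:
    eigendecompose the kernel on inducing points Z, then extend the top
    eigenvectors to arbitrary inputs X.  Returns (phi, lam) with phi of
    shape (len(X), top)."""
    m = len(Z)
    Kmm = gaussian_kernel(Z, Z, ell)
    lam, U = np.linalg.eigh(Kmm)
    lam, U = lam[::-1][:top], U[:, ::-1][:, :top]   # largest first
    Kxm = gaussian_kernel(X, Z, ell)
    return np.sqrt(m) * Kxm @ U / lam, lam
```

The induced covariance phi diag(lam/m) phi^T approximates the original kernel, and truncating `top` is what makes the resulting GP both sparse and nonstationary.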
|
1204.3989
|
Closed-Form Critical Conditions of Saddle-Node Bifurcations for Buck
Converters
|
cs.SY math.DS nlin.CD
|
A general and exact critical condition of saddle-node bifurcation is derived
in closed form for the buck converter. The critical condition is helpful for
the converter designers to predict or prevent some jump instabilities or
coexistence of multiple solutions associated with the saddle-node bifurcation.
Some previously known critical conditions become special cases in this
generalized framework. Given an arbitrary control scheme, a systematic
procedure is proposed to derive the critical condition for that control scheme.
|
1204.3990
|
Comments on "Bifurcations in DC-DC Switching Converters: Review of
Methods and Applications"
|
cs.SY math.DS nlin.CD
|
In a review paper [1] (El Aroudi, et al., 2005), two stability conditions for
DC-DC converters are presented. However, these two conditions were published
years earlier at least in a journal paper [2] (Fang and Abed, 2001). In this
note, the similar texts of [1] and [2] are compared.
|
1204.3997
|
A New Low-Complexity Decodable Rate-5/4 STBC for Four Transmit Antennas
with Nonvanishing Determinants
|
cs.IT math.IT
|
The use of Space-Time Block Codes (STBCs) significantly increases the optimal
detection complexity at the receiver unless the low-complexity decodability
property is taken into consideration in the STBC design. In this paper we
propose a new low-complexity decodable rate-5/4 full-diversity 4 x 4 STBC. We
provide an analytical proof that the proposed code has the
Non-Vanishing-Determinant (NVD) property, a property that can be exploited
through the use of adaptive modulation which changes the transmission rate
according to the wireless channel quality. We compare the proposed code to the
best existing low-complexity decodable rate-5/4 full-diversity 4 x 4 STBC in
terms of performance over quasi-static Rayleigh fading channels, worst-case
complexity, average complexity, and Peak-to-Average Power Ratio (PAPR). Our
code is found to provide better performance, lower average decoding complexity,
and lower PAPR at the expense of a slight increase in worst-case decoding
complexity.
|
1204.4000
|
A New Family of Low-Complexity Decodable STBCs for Four Transmit
Antennas
|
cs.IT math.IT
|
In this paper we propose a new construction method for rate-1
Fast-Group-Decodable (FGD) Space-Time Block Codes (STBCs) for 2^a transmit
antennas. We focus on the case of a=2 and we show that the new FGD rate-1 code
has the lowest worst-case decoding complexity among existing comparable STBCs.
The coding gain of the new rate-1 code is then optimized through constellation
stretching and proved to be constant irrespective of the underlying QAM
constellation prior to normalization. In a second step, we propose a new rate-2
STBC that multiplexes two of our rate-1 codes by the means of a unitary matrix.
A compromise between rate and complexity is then obtained through puncturing
our rate-2 code giving rise to a new rate-3/2 code. The proposed codes are
compared to existing codes in the literature and simulation results show that
our rate-3/2 code has a lower average decoding complexity while our rate-2 code
maintains its lower average decoding complexity in the low SNR region at the
expense of a small performance loss.
|
1204.4015
|
Human Navigational Performance in a Complex Network with Progressive
Disruptions
|
physics.soc-ph cs.HC cs.SI
|
The current paper investigates the navigational performance of humans on a
network when "landmark" nodes are blocked. We observe that humans learn to
cope, despite the continued introduction of blockages in the network. The
proposed experiment involves navigating a word network based on a puzzle
called wordmorph. We introduce blockages in the network and report an
incremental improvement in performance over time. We explain this phenomenon
by analyzing how the participants' knowledge of the underlying network evolves
as more and more landmarks are removed. We hypothesize that humans learn only
the bare essentials needed to navigate until blockages in the network force
them to explore new ways of navigating. We draw a parallel to human problem
solving and postulate that obstacles are catalysts for humans to innovate
techniques for solving a restricted variant of a familiar problem.
|
1204.4051
|
Solution Representations and Local Search for the bi-objective Inventory
Routing Problem
|
cs.AI
|
The solution of the bi-objective IRP is rather challenging, even for
metaheuristics. We are still lacking a profound understanding of appropriate
solution representations and effective neighborhood structures. Clearly, both
the delivery volumes and the routing aspects of the alternatives need to be
reflected in an encoding, and must be modified when searching by means of local
search. Our work contributes to the better understanding of such solution
representations. On the basis of an experimental investigation, the advantages
and drawbacks of two encodings are studied and compared.
|
1204.4065
|
Analysis of Sparse Representations Using Bi-Orthogonal Dictionaries
|
cs.IT math.IT
|
The sparse representation problem of recovering an N dimensional sparse
vector x from M < N linear observations y = Dx given dictionary D is
considered. The standard approach is to let the elements of the dictionary be
independent and identically distributed (IID) zero-mean Gaussian and minimize
the l1-norm of x under the constraint y = Dx. In this paper, the performance of
l1-reconstruction is analyzed, when the dictionary is bi-orthogonal D = [O1
O2], where O1,O2 are independent and drawn uniformly according to the Haar
measure on the group of orthogonal M x M matrices. By an application of the
replica method, we obtain the critical conditions under which perfect
l1-recovery is possible with bi-orthogonal dictionaries.
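The dictionary ensemble is easy to sample: a Haar-distributed orthogonal matrix is the Q factor of a Gaussian matrix after fixing the signs of R's diagonal. The sketch below builds D = [O1 O2]; the l1 minimization itself would require an LP solver and is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_orthogonal(m):
    """Draw an m x m matrix from the Haar measure on the orthogonal
    group: QR of a Gaussian matrix, with the signs of R's diagonal
    absorbed into Q's columns."""
    q, r = np.linalg.qr(rng.normal(size=(m, m)))
    return q * np.sign(np.diag(r))

def biorthogonal_dictionary(m):
    """D = [O1 O2]: an m x 2m dictionary of two independent Haar
    orthogonal blocks, as in the analyzed ensemble."""
    return np.hstack([haar_orthogonal(m), haar_orthogonal(m)])
```

Each block is individually orthogonal, so every column has unit norm; the interesting behavior of l1-recovery comes from the interaction between the two blocks.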
|
1204.4071
|
Motivations for Participation in Socially Networked Collective
Intelligence Systems
|
cs.SI physics.soc-ph
|
One of the most significant challenges facing systems of collective
intelligence is how to encourage participation on the scale required to produce
high quality data. This paper details ongoing work with Phrase Detectives, an
online game-with-a-purpose deployed on Facebook, and investigates user
motivations for participation in social network gaming where the wisdom of
crowds produces useful data.
|
1204.4073
|
Modulation Diversity for Spatial Modulation Using Complex Interleaved
Orthogonal Design
|
cs.IT math.IT
|
In this paper, we propose modulation diversity techniques for Spatial
Modulation (SM) system using Complex Interleaved Orthogonal Design (CIOD) meant
for two transmit antennas. Specifically, we show that by using the CIOD for a
two-transmit-antenna system, the standard SM scheme, where only one transmit

antenna is activated in any symbol duration, can achieve a transmit diversity
order of two. We show with our simulation results that the proposed schemes
offer transmit diversity order of two, and hence, give a better Symbol Error
Rate performance than the SM scheme with transmit diversity order of one.
|
1204.4093
|
Utilizing RxNorm to Support Practical Computing Applications: Capturing
Medication History in Live Electronic Health Records
|
cs.DB cs.HC
|
RxNorm was utilized as the basis for direct-capture of medication history
data in a live EHR system deployed in a large, multi-state outpatient
behavioral healthcare provider in the United States serving over 75,000
distinct patients each year across 130 clinical locations. This tool
incorporated auto-complete search functionality for medications and proper
dosage identification assistance. The overarching goal was to understand if and
how standardized terminologies like RxNorm can be used to support practical
computing applications in live EHR systems. We describe the stages of
implementation, approaches used to adapt RxNorm's data structure for the
intended EHR application, and the challenges faced. We evaluate the
implementation using a four-factor framework addressing flexibility, speed,
data integrity, and medication coverage. RxNorm proved to be functional for the
intended application, given appropriate adaptations to address high-speed
input/output (I/O) requirements of a live EHR and the flexibility required for
data entry in multiple potential clinical scenarios. Future research around
search optimization for medication entry, user profiling, and linking RxNorm to
drug classification schemes holds great potential for improving the user
experience and utility of medication data in EHRs.
|
1204.4104
|
Normality and Finite-state Dimension of Liouville numbers
|
cs.IT math.IT
|
Liouville numbers were the first class of real numbers which were proven to
be transcendental. It is easy to construct non-normal Liouville numbers. Kano
and Bugeaud have proved, using analytic techniques, that there are normal
Liouville numbers. Here, for a given base k >= 2, we give two simple
constructions of a Liouville number which is normal to the base k.
The first construction is combinatorial, and is based on de Bruijn sequences.
A real number in the unit interval is normal if and only if its finite-state
dimension is 1. We generalize our construction to prove that for any rational r
in the closed unit interval, there is a Liouville number with finite-state
dimension r. This refines Staiger's result that the set of Liouville numbers
has constructive Hausdorff dimension zero, showing a new quantitative
classification of Liouville numbers can be attained using finite-state
dimension.
In the second number-theoretic construction, we use an arithmetic property of
numbers - the existence of primitive roots - to construct Liouville numbers
normal in finitely many bases, assuming a Generalized Artin's conjecture on
primitive roots.
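The combinatorial building block of the first construction can be sketched in Python; the function below is the standard FKM-style generator of a de Bruijn sequence B(k, n), in which every length-n word over a k-letter alphabet occurs exactly once as a cyclic substring (the paper's full construction, which assembles such sequences of growing order into a Liouville number, is not reproduced here):

```python
def de_bruijn(k, n):
    """FKM algorithm for a de Bruijn sequence B(k, n): a cyclic word of
    length k**n over {0, ..., k-1} in which every length-n word occurs
    exactly once as a (cyclic) substring."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])  # emit the Lyndon word
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

s = de_bruijn(2, 3)  # -> [0, 0, 0, 1, 0, 1, 1, 1]
# Defining property: all 2**3 binary windows appear cyclically.
cyclic = s + s[:2]
windows = {tuple(cyclic[i:i + 3]) for i in range(len(s))}
assert len(windows) == 2 ** 3
```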
|
1204.4107
|
Towards the Evolution of Vertical-Axis Wind Turbines using Supershapes
|
cs.NE cs.CG
|
We have recently presented an initial study of evolutionary algorithms used
to design vertical-axis wind turbines (VAWTs) wherein candidate prototypes are
evaluated under approximated wind tunnel conditions after being physically
instantiated by a 3D printer. That is, unlike other approaches such as
computational fluid dynamics simulations, no mathematical formulations are used
and no model assumptions are made. However, the representation used
significantly restricted the range of morphologies explored. In this paper, we
present initial explorations into the use of a simple generative encoding,
known as Gielis superformula, that produces a highly flexible 3D shape
representation to design VAWTs. First, the target-based evolution of 3D
artefacts is investigated and subsequently initial design experiments are
performed wherein each VAWT candidate is physically instantiated and evaluated
under approximated wind tunnel conditions. It is shown possible to produce very
closely matching designs of a number of 3D objects through the evolution of
supershapes produced by Gielis superformula. Moreover, it is shown possible to
use artificial physical evolution to identify novel and increasingly efficient
supershape VAWT designs.
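The 2D Gielis superformula at the core of the encoding can be sketched as follows; the paper evolves 3D supershapes, which are typically built by combining two such profiles, and the parameter values below are illustrative only:

```python
import math

def superformula(theta, m, n1, n2, n3, a=1.0, b=1.0):
    """Radius of the 2D Gielis supershape at polar angle theta."""
    t = m * theta / 4.0
    g = abs(math.cos(t) / a) ** n2 + abs(math.sin(t) / b) ** n3
    return g ** (-1.0 / n1)

# With m = 4 and n1 = n2 = n3 = 2 the formula collapses to the unit
# circle, since cos^2 + sin^2 = 1.
assert abs(superformula(0.3, m=4, n1=2, n2=2, n3=2) - 1.0) < 1e-12

# Sample a closed six-lobed profile (parameter values are illustrative).
profile = [(superformula(th, m=6, n1=1, n2=7, n3=8) * math.cos(th),
            superformula(th, m=6, n1=1, n2=7, n3=8) * math.sin(th))
           for th in (2 * math.pi * i / 200 for i in range(200))]
```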
|
1204.4116
|
An existing, ecologically-successful genus of collectively intelligent
artificial creatures
|
cs.SI cs.AI cs.MA physics.soc-ph
|
People sometimes worry about the Singularity [Vinge, 1993; Kurzweil, 2005],
or about the world being taken over by artificially intelligent robots. I
believe the risks of these are very small. However, few people recognize that
we already share our world with artificial creatures that participate as
intelligent agents in our society: corporations. Our planet is inhabited by two
distinct kinds of intelligent beings --- individual humans and corporate
entities --- whose natures and interests are intimately linked. To co-exist
well, we need to find ways to define the rights and responsibilities of both
individual humans and corporate entities, and to find ways to ensure that
corporate entities behave as responsible members of society.
|
1204.4122
|
Network structure of inter-industry flows
|
physics.soc-ph cs.SI q-fin.GN
|
We study the structure of inter-industry relationships using networks of
money flows between industries in 20 national economies. We find these networks
vary around a typical structure characterized by a Weibull link weight
distribution, exponential industry size distribution, and a common community
structure. The community structure is hierarchical, with the top level of the
hierarchy comprising five industry communities: food industries, chemical
industries, manufacturing industries, service industries, and extraction
industries.
|
1204.4140
|
Beyond Random Walk and Metropolis-Hastings Samplers: Why You Should Not
Backtrack for Unbiased Graph Sampling
|
stat.ME cs.DS cs.NI cs.SI physics.data-an physics.soc-ph
|
Graph sampling via crawling has been actively considered as a generic and
important tool for collecting uniform node samples so as to consistently
estimate and uncover various characteristics of complex networks. The so-called
simple random walk with re-weighting (SRW-rw) and Metropolis-Hastings (MH)
algorithm have been popular in the literature for such unbiased graph sampling.
However, an unavoidable downside of their core random walks -- slow diffusion
over the space -- can cause poor estimation accuracy. In this paper, we propose
non-backtracking random walk with re-weighting (NBRW-rw) and MH algorithm with
delayed acceptance (MHDA) which are theoretically guaranteed to achieve, at
almost no additional cost, not only unbiased graph sampling but also higher
efficiency (smaller asymptotic variance of the resulting unbiased estimators)
than the SRW-rw and the MH algorithm, respectively. In particular, a remarkable
feature of the MHDA is its applicability to any non-uniform node sampling, like
the MH algorithm, while ensuring better sampling efficiency than the MH
algorithm. We also provide simulation results to confirm our theoretical
findings.
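A minimal sketch of the non-backtracking walk itself (without the re-weighting step that removes the degree bias) might look like this in Python; the toy graph and seed are illustrative:

```python
import random

def non_backtracking_walk(adj, start, steps, rng=random):
    """Crawl a graph without immediately revisiting the node just left;
    backtracking is allowed only when forced at a degree-1 node."""
    path, prev, cur = [start], None, start
    for _ in range(steps):
        nbrs = adj[cur]
        choices = [v for v in nbrs if v != prev] or nbrs
        prev, cur = cur, rng.choice(choices)
        path.append(cur)
    return path

# Toy graph: a triangle (0, 1, 2) with a pendant node 3.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
walk = non_backtracking_walk(adj, 0, 1000, random.Random(1))

# No immediate backtracking, except when forced at the pendant node.
for a, b, c in zip(walk, walk[1:], walk[2:]):
    assert a != c or b == 3
```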
|
1204.4141
|
Analysis of a Natural Gradient Algorithm on Monotonic
Convex-Quadratic-Composite Functions
|
cs.AI math.OC
|
In this paper we investigate the convergence properties of a variant of the
Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Our study is based on
the recent theoretical foundation that the pure rank-mu update CMA-ES performs
the natural gradient descent on the parameter space of Gaussian distributions.
We derive a novel variant of the natural gradient method where the parameters
of the Gaussian distribution are updated along the natural gradient to improve
a newly defined function on the parameter space. We study this algorithm on
composites of a monotone function with a convex quadratic function. We prove
that our algorithm adapts the covariance matrix so that it becomes proportional
to the inverse of the Hessian of the original objective function. We also show
the speed of covariance matrix adaptation and the speed of convergence of the
parameters. We introduce a stochastic algorithm that approximates the natural
gradient with finite samples and present some simulated results to evaluate how
precisely the stochastic algorithm approximates the deterministic, ideal one
under finite samples and to see how similarly our algorithm and the CMA-ES
perform.
|
1204.4145
|
Learning From An Optimization Viewpoint
|
cs.LG cs.GT
|
In this dissertation we study statistical and online learning problems from
an optimization viewpoint. The dissertation is divided into two parts:
I. We first consider the question of learnability for statistical learning
problems in the general learning setting. The question of learnability is well
studied and fully characterized for binary classification and for real valued
supervised learning problems using the theory of uniform convergence. However
we show that for the general learning setting uniform convergence theory fails
to characterize learnability. To fill this void we use stability of learning
algorithms to fully characterize statistical learnability in the general
setting. Next we consider the problem of online learning. Unlike the
statistical learning framework there is a dearth of generic tools that can be
used to establish learnability and rates for online learning problems in
general. We provide online analogs to classical tools from statistical learning
theory like Rademacher complexity, covering numbers, etc. We further use these
tools to fully characterize learnability for online supervised learning
problems.
II. In the second part, for general classes of convex learning problems, we
provide appropriate mirror descent (MD) updates for online and statistical
learning of these problems. Further, we show that the MD is near optimal
for online convex learning and for most cases, is also near optimal for
statistical convex learning. We next consider the problem of convex
optimization and show that oracle complexity can be lower bounded by the
so-called fat-shattering dimension of the associated linear class. Thus we
establish a strong connection between offline convex optimization problems and
statistical learning problems. We also show that for a large class of high
dimensional optimization problems, MD is in fact near optimal even for convex
optimization.
|
1204.4151
|
Ultra Low Complexity Soft Output Detector for Non-Binary LDPC Coded
Large MIMO Systems
|
cs.IT math.IT
|
Capacity results for MIMO systems tell us that the more antennas are
employed, the higher the achievable transmission rate. This makes MIMO
systems with hundreds of antennas very attractive, but one of the major
problems that keeps such large-dimensional MIMO systems from practical
realization is the high complexity of the MIMO detector. In this paper we
present a new soft-output MIMO detector based on matched filtering that can be
applied to large MIMO systems coded by powerful non-binary LDPC codes. The
per-bit complexity of the proposed detector is just 0.28% of that of the
low-complexity soft-output MMSE detector and scales only linearly with the
number of antennas. Furthermore, the coded performance with a small
information length of 800 bits is within 4.2 dB of the associated MIMO
capacity.
|
1204.4166
|
Message passing with relaxed moment matching
|
cs.LG stat.CO stat.ML
|
Bayesian learning is often hampered by large computational expense. As a
powerful generalization of popular belief propagation, expectation propagation
(EP) efficiently approximates the exact Bayesian computation. Nevertheless, EP
can be sensitive to outliers and suffer from divergence for difficult cases. To
address this issue, we propose a new approximate inference approach, relaxed
expectation propagation (REP). It relaxes the moment matching requirement of
expectation propagation by adding a relaxation factor into the KL minimization.
We penalize this relaxation with a $l_1$ penalty. As a result, when two
distributions in the relaxed KL divergence are similar, the relaxation factor
will be penalized to zero and, therefore, we obtain the original moment
matching; in the presence of outliers, these two distributions are
significantly different and the relaxation factor will be used to reduce the
contribution of the outlier. Based on this penalized KL minimization, REP is
robust to outliers and can greatly improve the posterior approximation quality
over EP. To examine the effectiveness of REP, we apply it to Gaussian process
classification, a task known to be suitable to EP. Our classification results
on synthetic and UCI benchmark datasets demonstrate significant improvement of
REP over EP and Power EP--in terms of algorithmic stability, estimation
accuracy and predictive performance.
|
1204.4200
|
Discrete Dynamical Genetic Programming in XCS
|
cs.AI cs.LG cs.NE cs.SY
|
A number of representation schemes have been presented for use within
Learning Classifier Systems, ranging from binary encodings to neural networks.
This paper presents results from an investigation into using a discrete
dynamical system representation within the XCS Learning Classifier System. In
particular, asynchronous random Boolean networks are used to represent the
traditional condition-action production system rules. It is shown possible to
use self-adaptive, open-ended evolution to design an ensemble of such discrete
dynamical systems within XCS to solve a number of well-known test problems.
|
1204.4202
|
Fuzzy Dynamical Genetic Programming in XCSF
|
cs.AI cs.LG cs.NE cs.SY
|
A number of representation schemes have been presented for use within
Learning Classifier Systems, ranging from binary encodings to Neural Networks,
and more recently Dynamical Genetic Programming (DGP). This paper presents
results from an investigation into using a fuzzy DGP representation within the
XCSF Learning Classifier System. In particular, asynchronous Fuzzy Logic
Networks are used to represent the traditional condition-action production
system rules. It is shown possible to use self-adaptive, open-ended evolution
to design an ensemble of such fuzzy dynamical systems within XCSF to solve
several well-known continuous-valued test problems.
|
1204.4204
|
Tilings with $n$-Dimensional Chairs and their Applications to Asymmetric
Codes
|
cs.IT math.CO math.IT
|
An $n$-dimensional chair consists of an $n$-dimensional box from which a
smaller $n$-dimensional box is removed. A tiling of an $n$-dimensional chair
has two nice applications in coding for write-once memories. The first one is
in the design of codes which correct asymmetric errors with limited-magnitude.
The second one is in the design of $n$ cells $q$-ary write-once memory codes.
We show an equivalence between the design of a tiling with an integer lattice
and the design of a tiling from a generalization of splitting (or of Sidon
sequences). A tiling of an $n$-dimensional chair can define a perfect code for
correcting asymmetric errors with limited-magnitude. We present constructions
for such tilings and prove cases where perfect codes for these types of errors
do not exist.
|
1204.4209
|
Folded Codes from Function Field Towers and Improved Optimal Rate List
Decoding
|
cs.IT cs.DS math.AG math.IT math.NT
|
We give a new construction of algebraic codes which are efficiently list
decodable from a fraction $1-R-\eps$ of adversarial errors where $R$ is the
rate of the code, for any desired positive constant $\eps$. The worst-case list
size output by the algorithm is $O(1/\eps)$, matching the existential bound for
random codes up to constant factors. Further, the alphabet size of the codes is
a constant depending only on $\eps$ - it can be made
$\exp(\tilde{O}(1/\eps^2))$ which is not much worse than the lower bound of
$\exp(\Omega(1/\eps))$. The parameters we achieve are thus quite close to the
existential bounds in all three aspects - error-correction radius, alphabet
size, and list-size - simultaneously. Our code construction is Monte Carlo and
has the claimed list decoding property with high probability. Once the code is
(efficiently) sampled, the encoding/decoding algorithms are deterministic with
a running time $O_\eps(N^c)$ for an absolute constant $c$, where $N$ is the
code's block length.
Our construction is based on a linear-algebraic approach to list decoding
folded codes from towers of function fields, and combining it with a special
form of subspace-evasive sets. Instantiating this with the explicit
"asymptotically good" Garcia-Stichtenoth tower of function fields yields the
above parameters. To illustrate the method in a simpler setting, we also
present a construction based on Hermitian function fields, which offers similar
guarantees with a list and alphabet size polylogarithmic in the block length
$N$. Along the way, we shed light on how to use automorphisms of certain
function fields to enable list decoding of the folded version of the associated
algebraic-geometric codes.
|
1204.4223
|
Improved Quantum LDPC Decoding Strategies For The Misidentified Quantum
Depolarizing Channel
|
cs.IT math.IT quant-ph
|
Quantum cryptography via key distribution mechanisms that utilize quantum
entanglement between sender-receiver pairs will form the basis of future
large-scale quantum networks. A key engineering challenge in such networks will
be the ability to correct for decoherence effects in the distributed
entanglement resources. It is widely believed that sophisticated quantum error
correction codes, such as quantum low-density parity-check (LDPC) codes, will
be pivotal in such a role. However, recently the importance of the channel
mismatch effect in degrading the performance of deployed quantum LDPC codes has
been pointed out. In this work we help remedy this situation by proposing new
quantum LDPC decoding strategies that can significantly reduce performance
degradation by as much as $50\%$. Our new strategies for the quantum LDPC
decoder are based on previous insights from classical LDPC decoders in
mismatched channels, where an asymmetry in performance is known as a function
of the estimated channel noise. We show how similar asymmetries carry over to
the quantum depolarizing channel, and how an estimate of the depolarization
flip parameter weighted to larger values leads to significant performance
improvement.
|
1204.4227
|
Estimating Unknown Sparsity in Compressed Sensing
|
cs.IT math.IT math.ST stat.ME stat.ML stat.TH
|
In the theory of compressed sensing (CS), the sparsity ||x||_0 of the unknown
signal x\in\R^p is commonly assumed to be a known parameter. However, it is
typically unknown in practice. Because many aspects of CS depend on knowing
||x||_0, it is important to estimate this parameter in a data-driven
way. A second practical concern is that ||x||_0 is a highly unstable function
of x. In particular, for real signals with entries not exactly equal to 0, the
value ||x||_0=p is not a useful description of the effective number of
coordinates. In this paper, we propose to estimate a stable measure of sparsity
s(x):=||x||_1^2/||x||_2^2, which is a sharp lower bound on ||x||_0. Our
estimation procedure uses only a small number of linear measurements, does not
rely on any sparsity assumptions, and requires very little computation. A
confidence interval for s(x) is provided, and its width is shown to have no
dependence on the signal dimension p. Moreover, this result extends naturally
to the matrix recovery setting, where a soft version of matrix rank can be
estimated with analogous guarantees. Finally, we show that the use of
randomized measurements is essential to estimating s(x). This is accomplished
by proving that the minimax risk for estimating s(x) with deterministic
measurements is large when n<<p.
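The sparsity measure itself is a one-liner; the sketch below computes s(x) directly from a known signal, whereas the paper's contribution is estimating it from a small number of randomized linear measurements, which is not shown here:

```python
import numpy as np

def soft_sparsity(x):
    """Stable sparsity measure s(x) = ||x||_1^2 / ||x||_2^2; by
    Cauchy-Schwarz this is a sharp lower bound on ||x||_0."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x)) ** 2 / np.sum(x ** 2)

x = np.zeros(1000)
x[:10] = 1.0
assert np.isclose(soft_sparsity(x), 10.0)  # equals ||x||_0 on flat support

# Perturbing one coordinate barely moves s(x), while ||x||_0 jumps to 11.
x[10] = 1e-6
assert np.isclose(soft_sparsity(x), 10.0, atol=1e-3)
```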
|
1204.4249
|
Posterior Matching Scheme for Gaussian Multiple Access Channel with
Feedback
|
cs.IT math.IT
|
Posterior matching is a method proposed by Ofer Shayevitz and Meir Feder to
design capacity achieving coding schemes for general point-to-point memoryless
channels with feedback. In this paper, we present a way to extend posterior
matching based encoding and variable rate decoding ideas to the Gaussian MAC
with feedback, referred to as the time-varying posterior matching scheme, and
analyze the achievable rate region and error probabilities of the extended
encoding-decoding scheme. The time-varying posterior matching scheme is a
generalization of Shayevitz and Feder's posterior matching scheme when the
posterior distributions of the input messages given output are not fixed over
transmission time slots. It turns out that Ozarow's well-known encoding
scheme, which achieves the capacity of the two-user Gaussian channel, is a special
case of our extended posterior matching framework as the Schalkwijk-Kailath's
scheme is a special case of the point-to-point posterior matching mentioned
above. Furthermore, our designed posterior matching also obtains the
linear-feedback sum-capacity for the symmetric multiuser Gaussian MAC. Besides,
the encoding scheme in this paper is designed for the real Gaussian MAC to
obtain that performance, which is different from previous approaches where
encoding schemes are designed for the complex Gaussian MAC. More importantly,
this paper shows potential of posterior matching in designing optimal coding
schemes for multiuser channels with feedback.
|
1204.4253
|
Extended master equation models for molecular communication networks
|
cs.CE physics.bio-ph q-bio.QM
|
We consider molecular communication networks consisting of transmitters and
receivers distributed in a fluidic medium. In such networks, a transmitter
sends one or more signalling molecules, which are diffused over the medium, to
the receiver to realise the communication. In order to be able to engineer
synthetic molecular communication networks, mathematical models for these
networks are required. This paper proposes a new stochastic model for molecular
communication networks called reaction-diffusion master equation with exogenous
input (RDMEX). The key idea behind RDMEX is to model the transmitters as time
series of signalling molecule counts, while diffusion in the medium and
chemical reactions at the receivers are modelled as Markov processes using a
master equation. An advantage of RDMEX is that it can readily be used to model
molecular communication networks with multiple transmitters and receivers. For
the case where the reaction kinetics at the receivers is linear, we show how
RDMEX can be used to determine the mean and covariance of the receiver output
signals, and derive closed-form expressions for the mean receiver output signal
of the RDMEX model. These closed-form expressions reveal that the output signal
of a receiver can be affected by the presence of other receivers. Numerical
examples are provided to demonstrate the properties of the model.
|
1204.4257
|
Speech Recognition: Increasing Efficiency of Support Vector Machines
|
cs.CV
|
With the advancement of communication and security technologies, robustness
of embedded biometric systems has become crucial. This paper presents the
realization of such technologies, which demand reliable and error-free
biometric identity verification systems. High-dimensional patterns are not
permitted, due to eigen-decomposition in the high-dimensional feature space
and degeneration of the scattering matrices for small sample sizes.
Generalization, dimensionality reduction and margin maximization are
controlled by minimizing the weight vectors. Results show good performance by
the multimodal biometric system proposed in this paper. The paper investigates
a biometric identity system using Support Vector Machines (SVMs) and Linear
Discriminant Analysis (LDA) with MFCC features, and implements such a system
in real time using SignalWAVE.
|
1204.4294
|
Learning in Riemannian Orbifolds
|
cs.LG cs.AI cs.CV
|
Learning in Riemannian orbifolds is motivated by existing machine learning
algorithms that directly operate on finite combinatorial structures such as
point patterns, trees, and graphs. These methods, however, lack statistical
justification. This contribution derives consistency results for learning
problems in structured domains and thereby generalizes learning in vector
spaces and manifolds.
|
1204.4307
|
Avian Influenza (H5N1) Warning System using Dempster-Shafer Theory and
Web Mapping
|
cs.AI math.PR
|
Based on the cumulative number of confirmed human cases of Avian Influenza
(H5N1) reported to the World Health Organization (WHO) in 2011 from 15
countries, Indonesia has the largest number of deaths from Avian Influenza,
with 146 deaths. In this research, we built an early warning system for avian
influenza based on Web Mapping and Dempster-Shafer theory. Early warning is
the provision of timely and effective information, through identified
institutions, that allows individuals exposed to a hazard to take action to
avoid or reduce their risk and prepare for effective response. As an example,
we use five major symptoms: depression; bluish comb, wattle, and face region;
swollen face region; narrowness of the eyes; and balance disorders. The
research location is Lampung Province, South Sumatera, chosen because it has a
high poultry population. Geographically, Lampung Province lies between 103°40'
and 105°50' East longitude and between 6°45' and 3°45' South latitude, bounded
by South Sumatera and Bengkulu on the north, the Sunda Strait on the south,
the Java Sea on the east, and the Indian Ocean on the west. Our approach uses
Dempster-Shafer theory to combine beliefs in certain hypotheses under
conditions of uncertainty and ignorance, and allows quantitative measurement
of the belief and plausibility of our identification result. Web Mapping is
used to display maps on screen, visualizing the result of the identification
process. The results reveal that the avian influenza warning system
successfully identifies the existence of avian influenza and displays the
corresponding maps as a visualization.
|
1204.4311
|
Avian Influenza (H5N1) Expert System using Dempster-Shafer Theory
|
cs.AI math.PR
|
Based on the cumulative number of confirmed human cases of Avian Influenza
(H5N1) reported to the World Health Organization (WHO) in 2011 from 15
countries, Indonesia has the largest number of deaths from Avian Influenza,
with 146 deaths. In this research, we built an Avian Influenza (H5N1) Expert
System for identifying avian influenza disease and displaying the result of
the identification process. We describe five major symptoms: depression;
bluish comb, wattle, and face region; swollen face region; narrowness of the
eyes; and balance disorders. We use chickens as the research object. The
research location is Lampung Province, South Sumatera, chosen because it has a
high poultry population. Using Dempster-Shafer theory to quantify the degree
of belief as the inference engine of the expert system, our approach combines
beliefs under conditions of uncertainty and ignorance, and allows quantitative
measurement of the belief and plausibility of our identification result. The
results reveal that the Avian Influenza (H5N1) Expert System successfully
identifies the existence of avian influenza and displays the result of the
identification process.
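Dempster's rule of combination, the inference core of both H5N1 systems above, can be sketched in Python; the symptom names and mass values below are hypothetical illustrations, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets over a common frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical frame: AI = "has avian influenza", OK = "healthy".
AI, OK = frozenset({"AI"}), frozenset({"OK"})
theta = AI | OK  # ignorance: mass on the whole frame

m_comb = {AI: 0.6, theta: 0.4}  # evidence from a bluish comb (made up)
m_face = {AI: 0.7, theta: 0.3}  # evidence from a swollen face (made up)
m = dempster_combine(m_comb, m_face)
# Combined belief in AI: 0.42 + 0.18 + 0.28 = 0.88.
assert abs(m[AI] - 0.88) < 1e-12
```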
|
1204.4329
|
Supervised feature evaluation by consistency analysis: application to
measure sets used to characterise geographic objects
|
cs.LG
|
Nowadays, supervised learning is commonly used in many domains. Indeed, many
works propose to learn new knowledge from examples that translate the expected
behaviour of the considered system. A key issue of supervised learning concerns
the description language used to represent the examples. In this paper, we
propose a method to evaluate the feature set used to describe them. Our method
is based on the computation of the consistency of the example base. We carried
out a case study in the domain of geomatics in order to evaluate the sets of
measures used to characterise geographic objects. The case study shows that our
method gives relevant evaluations of measure sets.
|
1204.4332
|
Designing generalisation evaluation function through human-machine
dialogue
|
cs.HC cs.LG
|
Automated generalisation has known important improvements these last few
years. However, an issue that still deserves more study concerns the automatic
evaluation of generalised data. Indeed, many automated generalisation systems
require the utilisation of an evaluation function to automatically assess
generalisation outcomes. In this paper, we propose a new approach dedicated to
the design of such a function. This approach allows an imperfectly defined
evaluation function to be revised through a human-machine dialogue. The user
provides their preferences to the system by comparing generalisation outcomes.
Machine Learning techniques are then used to improve the evaluation function.
An experiment carried out on buildings shows that our approach significantly
improves generalisation evaluation functions defined by users.
|
1204.4346
|
Your Two Weeks of Fame and Your Grandmother's
|
cs.DL cs.CL cs.SI physics.soc-ph
|
Did celebrity last longer in 1929, 1992 or 2009? We investigate the
phenomenon of fame by mining a collection of news articles that spans the
twentieth century, and also perform a side study on a collection of blog posts
from the last 10 years. By analyzing mentions of personal names, we measure
each person's time in the spotlight, using two simple metrics that evaluate,
roughly, the duration of a single news story about a person, and the overall
duration of public interest in a person. We watched the distribution evolve
from 1895 to 2010, expecting to find significantly shortening fame durations,
per the popularly bemoaned shortening of society's attention spans and
quickening of media's news cycles. Instead, we conclusively demonstrate that,
through many decades of rapid technological and societal change, through the
appearance of Twitter, communication satellites, and the Internet, fame
durations did not decrease, neither for the typical case nor for the extremely
famous, with the last statistically significant fame duration decreases coming
in the early 20th century, perhaps from the spread of telegraphy and telephony.
Furthermore, while median fame durations stayed persistently constant, for the
most famous of the famous, as measured by either volume or duration of media
attention, fame durations have actually trended gently upward since the 1940s,
with statistically significant increases on 40-year timescales. Similar studies
have been done with much shorter timescales specifically in the context of
information spreading on Twitter and similar social networking sites. To the
best of our knowledge, this is the first massive scale study of this nature
that spans over a century of archived data, thereby allowing us to track
changes across decades.
|
1204.4347
|
Change-Of-Bases Abstractions for Non-Linear Systems
|
cs.SC cs.LO cs.SY
|
We present abstraction techniques that transform a given non-linear dynamical
system into a linear system or an algebraic system described by polynomials of
bounded degree, such that, invariant properties of the resulting abstraction
can be used to infer invariants for the original system. The abstraction
techniques rely on a change-of-basis transformation that associates each state
variable of the abstract system with a function involving the state variables
of the original system. We present conditions under which a given change of
basis transformation for a non-linear system can define an abstraction.
Furthermore, the techniques developed here apply to continuous systems defined
by Ordinary Differential Equations (ODEs), discrete systems defined by
transition systems and hybrid systems that combine continuous as well as
discrete subsystems. The techniques presented here allow us to discover,
given a non-linear system, whether a change of bases transformation involving
degree-bounded polynomials yields an algebraic abstraction. If so, our
technique yields the resulting abstract system, as well. This approach is
further extended to search for a change of bases transformation that abstracts
a given non-linear system into a system of linear differential inclusions. Our
techniques enable the use of analysis techniques for linear systems to infer
invariants for non-linear systems. We present preliminary evidence of the
practical feasibility of our ideas using a prototype implementation.
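As a minimal worked illustration of the change-of-basis idea (a toy case, not the paper's procedure): for the scalar nonlinear system dx/dt = -x^3, the transformation y = x^2 satisfies, by the chain rule, dy/dt = 2x(-x^3) = -2y^2, a degree-bounded polynomial in the abstract variable alone, i.e. an algebraic abstraction:

```python
# Illustration only: the paper's techniques are far more general than this toy.
# Original nonlinear system:   dx/dt = f(x) = -x**3
# Candidate change of basis:   y = alpha(x) = x**2
f = lambda x: -x**3
alpha = lambda x: x**2
dalpha = lambda x: 2 * x                 # alpha'(x)

# By the chain rule, dy/dt = alpha'(x) * f(x) = -2*x**4, which equals the
# degree-bounded polynomial -2*y**2 in the abstract variable y alone:
abstract = lambda y: -2 * y**2

# Certify the polynomial identity by evaluating at more points than its degree
for x in range(-5, 6):
    assert dalpha(x) * f(x) == abstract(alpha(x))
```

Checking the degree-4 identity at 11 integer points suffices, since a nonzero degree-4 polynomial has at most 4 roots.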
|
1204.4366
|
Multipath-dominant, pulsed doppler analysis of rotating blades
|
cs.CE
|
We present a novel angular fingerprinting algorithm for detecting changes in
the direction of rotation of a target with a monostatic, stationary sonar
platform. Unlike other approaches, we assume that the target's centroid is
stationary, and exploit doppler multipath signals to resolve the otherwise
unavoidable ambiguities that arise. Since the algorithm is based on an
underlying differential topological theory, it is highly robust to distortions
in the collected data. We demonstrate the performance of this algorithm
experimentally, by exhibiting a pulsed doppler sonar collection system that
runs on a smartphone. The performance of this system is sufficiently good to
both detect changes in target rotation direction using angular fingerprints,
and also to form high-resolution inverse synthetic aperture images of the
target.
|
1204.4419
|
Geometry of Power Flows and Optimization in Distribution Networks
|
math.OC cs.IT cs.SY math.IT
|
We investigate the geometry of injection regions and its relationship to
optimization of power flows in tree networks. The injection region is the set
of all vectors of bus power injections that satisfy the network and operation
constraints. The geometrical object of interest is the set of Pareto-optimal
points of the injection region. If the voltage magnitudes are fixed, the
injection region of a tree network can be written as a linear transformation of
the product of two-bus injection regions, one for each line in the network.
Using this decomposition, we show that under the practical condition that the
angle difference across each line is not too large, the set of Pareto-optimal
points of the injection region remains unchanged by taking the convex hull.
Moreover, the resulting convexified optimal power flow problem can be
efficiently solved via semi-definite programming or second order cone
relaxations. These results improve upon earlier works by removing the
assumptions on active power lower bounds. It is also shown that our practical
angle assumption guarantees two other properties: (i) the uniqueness of the
solution of the power flow problem, and (ii) the non-negativity of the
locational marginal prices. Partial results are presented for the case when the
voltage magnitudes are not fixed but can lie within certain bounds.
|
1204.4427
|
Coupling Clinical Decision Support System with Computerized Prescriber
Order Entry and their Dynamic Plugging in the Medical Workflow System
|
cs.MA
|
This work deals with coupling Clinical Decision Support System (CDSS) with
Computerized Prescriber Order Entry (CPOE) and their dynamic plugging in the
medical Workflow Management System (WfMS). First, in this paper we review
some existing CDSSs representative of the state of the art in order to
highlight their inability to support coupling with CPOE and medical WfMS. The
multi-agent technology is at the basis of our proposition since (i) it provides
natural abstractions to deal with distribution, heterogeneity and autonomy
which are inherent to the previous systems (CDSS, CPOE and medical WfMS), and
(ii) it introduces powerful concepts such as organizations, goals and roles
useful to describe in details the coordination of the different components
involved in these systems. In this paper, we also propose a Multi-Agent System
(MAS) to support the coupling CDSS with CPOE. Finally, we show how we integrate
the proposed MAS in the medical workflow management system which is also based
on collaborating agents.
|
1204.4476
|
Dynamic Template Tracking and Recognition
|
cs.CV cs.SY
|
In this paper we address the problem of tracking non-rigid objects whose
local appearance and motion changes as a function of time. This class of
objects includes dynamic textures such as steam, fire, smoke, water, etc., as
well as articulated objects such as humans performing various actions. We model
the temporal evolution of the object's appearance/motion using a Linear
Dynamical System (LDS). We learn such models from sample videos and use them as
dynamic templates for tracking objects in novel videos. We pose the problem of
tracking a dynamic non-rigid object in the current frame as a maximum
a-posteriori estimate of the location of the object and the latent state of the
dynamical system, given the current image features and the best estimate of the
state in the previous frame. The advantage of our approach is that we can
specify a-priori the type of texture to be tracked in the scene by using
previously trained models for the dynamics of these textures. Our framework
naturally generalizes common tracking methods such as SSD and kernel-based
tracking from static templates to dynamic templates. We test our algorithm on
synthetic as well as real examples of dynamic textures and show that our simple
dynamics-based trackers perform at par if not better than the state-of-the-art.
Since our approach is general and applicable to any image feature, we also
apply it to the problem of human action tracking and build action-specific
optical flow trackers that perform better than the state-of-the-art when
tracking a human performing a particular action. Finally, since our approach is
generative, we can use a-priori trained trackers for different texture or
action classes to simultaneously track and recognize the texture or action in
the video.
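As a hedged sketch of the dynamic-template idea (all numbers below are made up for illustration), a learned LDS generates the template's appearance through latent dynamics x_{t+1} = A x_t + w_t; choosing A stable keeps the generated appearance in a stationary regime rather than diverging:

```python
import random

# Hypothetical toy model: a 2-dimensional latent state evolving as
#   x_{t+1} = A x_t + w_t   (the "dynamic template" dynamics).
# This A has spectral radius sqrt(0.74) < 1, so trajectories contract and
# settle into a stationary regime driven only by the noise.
A = [[0.9, -0.2],
     [0.1,  0.8]]

def step(A, x, noise):
    # One step of the latent dynamics with additive noise
    return [A[0][0] * x[0] + A[0][1] * x[1] + noise(),
            A[1][0] * x[0] + A[1][1] * x[1] + noise()]

random.seed(0)
noise = lambda: 0.01 * random.gauss(0.0, 1.0)

x = [1.0, 1.0]
for _ in range(200):
    x = step(A, x, noise)

# After many steps the state has contracted toward the noise floor
assert abs(x[0]) < 0.5 and abs(x[1]) < 0.5
```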
|
1204.4491
|
On Budgeted Influence Maximization in Social Networks
|
cs.SI physics.soc-ph
|
Given a budget and arbitrary cost for selecting each node, the budgeted
influence maximization (BIM) problem concerns selecting a set of seed nodes to
disseminate some information that maximizes the total number of nodes
influenced (termed as influence spread) in social networks at a total cost no
more than the budget. Our proposed seed selection algorithm for the BIM problem
guarantees an approximation ratio of (1 - 1/sqrt(e)). The seed selection
algorithm needs to calculate the influence spread of candidate seed sets, which
is known to be #P-hard. Identifying the linkage between the computation of
marginal probabilities in Bayesian networks and the influence spread, we devise
efficient heuristic algorithms for the latter problem. Experiments using both
large-scale social networks and synthetically generated networks demonstrate
superior performance of the proposed algorithm with moderate computation costs.
Moreover, synthetic datasets allow us to vary the network parameters and gain
important insights on the impact of graph structures on the performance of
different algorithms.
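The paper's (1 - 1/sqrt(e))-approximation algorithm is not spelled out in the abstract; a common building block for budgeted submodular maximization, sketched here under the assumption of a monotone submodular spread oracle, is the cost-sensitive greedy rule that repeatedly adds the affordable node with the largest marginal gain per unit cost:

```python
def greedy_bim(nodes, cost, spread, budget):
    """Cost-sensitive greedy: repeatedly add the affordable node with the
    largest marginal influence gain per unit cost. `spread(S)` stands in
    for a (monotone, submodular) influence-spread oracle, which in practice
    must itself be estimated heuristically."""
    seeds, spent = set(), 0.0
    while True:
        base = spread(seeds)
        best, best_ratio = None, 0.0
        for v in nodes:
            if v in seeds or spent + cost[v] > budget:
                continue
            ratio = (spread(seeds | {v}) - base) / cost[v]
            if ratio > best_ratio:
                best, best_ratio = v, ratio
        if best is None:          # nothing affordable improves the spread
            return seeds
        seeds.add(best)
        spent += cost[best]

# Toy instance with a hypothetical additive spread function
cost = {'a': 1.0, 'b': 2.0, 'c': 1.0}
value = {'a': 3.0, 'b': 5.0, 'c': 1.0}
spread = lambda S: sum(value[v] for v in S)
seeds = greedy_bim(list(cost), cost, spread, budget=2.0)   # picks {'a', 'c'}
```

On this toy instance the rule skips the high-value but expensive node 'b' in favour of two cheap nodes that fit the budget.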
|
1204.4497
|
Ranking spreaders by decomposing complex networks
|
physics.soc-ph cs.SI physics.comp-ph
|
Ranking the nodes' ability for spreading in networks is a fundamental problem
which relates to many real applications such as information and disease
control. In the previous literature, a network decomposition procedure called
the k-shell method has been shown to effectively identify the most influential
spreaders. In this paper, we find that the k-shell method has some limitations
when it is used to rank all the nodes in the network. We also find that these
limitations are due to considering only the links between the remaining nodes
(residual degree) while entirely ignoring all the links connecting to the
removed nodes (exhausted degree) when decomposing the networks. Accordingly, we
propose a mixed degree decomposition (MDD) procedure in which both the residual
degree and the exhausted degree are considered. By simulating the epidemic
process on the real networks, we show that the MDD method can outperform the
k-shell and the degree methods in ranking spreaders. Finally, the influence of
the network structure on the performance of the MDD method is discussed.
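A hedged sketch of the MDD procedure (illustrative, not the authors' code), assuming the mixed degree takes the form k_m = k_residual + lam * k_exhausted with a tunable weight lam; setting lam = 0 recovers the classical k-shell peeling:

```python
def mixed_degree_decomposition(adj, lam=0.7):
    """Peel nodes in order of mixed degree
        k_m = k_residual + lam * k_exhausted,
    where the exhausted degree counts edges to already-removed neighbours.
    `adj` maps each node to the set of its neighbours; lam = 0 recovers
    the classical k-shell method."""
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    exhausted = dict.fromkeys(adj, 0)
    shell_of = {}
    while remaining:
        km = lambda v: len(remaining[v]) + lam * exhausted[v]
        threshold = min(km(v) for v in remaining)
        peel = [v for v in remaining if km(v) <= threshold]
        while peel:                       # keep peeling at this threshold
            for v in peel:
                shell_of[v] = threshold
                for u in remaining[v]:
                    remaining[u].discard(v)
                    exhausted[u] += 1
                del remaining[v]
            peel = [v for v in remaining if km(v) <= threshold]
    return shell_of

# Toy star graph: the four leaves peel first, the hub lands in a higher shell
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
shells = mixed_degree_decomposition(star)
assert shells[0] > shells[1] == 1
```

With lam = 0 the same star collapses into a single shell, as in the k-shell method; a positive lam lets the hub's exhausted links lift it above the leaves.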
|
1204.4498
|
Diversity Loss due to Interference Correlation
|
cs.IT cs.NI math.IT math.PR
|
Interference in wireless systems is both temporally and spatially correlated.
Yet very little research has analyzed the effect of such correlation. Here we
focus on its impact on the diversity in Poisson networks with multi-antenna
receivers. Most work on multi-antenna communication does not consider
interference, and if it is included, it is assumed independent across the
receive antennas. Here we show that interference correlation significantly
reduces the probability of successful reception over SIMO links. The diversity
loss is quantified via the diversity polynomial. For the two-antenna case, we
provide the complete joint SIR distribution.
|
1204.4518
|
A Study of Trade-off between Opportunistic Resource Allocation and
Interference Alignment in Femtocell Scenarios
|
cs.IT math.IT
|
One of the main problems in wireless heterogeneous networks is interference
between macro- and femto-cells. With Orthogonal Frequency-Division Multiple
Access (OFDMA) creating multiple frequency-orthogonal sub-channels, this
interference can be completely avoided if each sub-channel is exclusively used
by either a macro- or a femto-cell. However, such an orthogonal allocation may be
inefficient. We consider two alternative strategies for interference
management, opportunistic resource allocation (ORA) and interference alignment
(IA). Both of them utilize the fading fluctuations across frequency channels in
different ways. ORA allows the users to interfere, but selects the channels
where the interference is faded while the desired signal has a good channel.
IA uses precoding to create interference-free transmissions; however, such a
precoding changes the diversity picture of the communication resources. In this
letter we investigate the interactions and the trade-offs between these two
strategies.
|
1204.4521
|
A Privacy-Aware Bayesian Approach for Combining Classifier and Cluster
Ensembles
|
cs.LG cs.CV stat.ML
|
This paper introduces a privacy-aware Bayesian approach that combines
ensembles of classifiers and clusterers to perform semi-supervised and
transductive learning. We consider scenarios where instances and their
classification/clustering results are distributed across different data sites
and have sharing restrictions. As a special case, the privacy-aware
computation of the model when instances of the target data are distributed
across different data sites is also discussed. Experimental results show that the proposed
approach can provide good classification accuracies while adhering to the
data/model sharing constraints.
|
1204.4528
|
Learning Asynchronous-Time Information Diffusion Models and its
Application to Behavioral Data Analysis over Social Networks
|
cs.SI physics.soc-ph
|
One of the interesting and important problems of information diffusion over a
large social network is to identify an appropriate model from a limited amount
of diffusion information. There are two contrasting approaches to model
information diffusion: a push type model known as Independent Cascade (IC)
model and a pull type model known as Linear Threshold (LT) model. We extend
these two models (called AsIC and AsLT in this paper) to incorporate
asynchronous time delay and investigate 1) how they differ from or resemble
each other in terms of information diffusion, 2) whether the model itself is
learnable from the observed information diffusion data, and 3) which model is
more appropriate for explaining how a particular topic (information)
diffuses/propagates. We first show there can be variations with respect to how
the time delay is modeled, and derive the likelihood of the observed data being
generated for each model. Using one particular time delay model, we show the
model parameters are learnable from a limited amount of observation. We then
propose a method based on predictive accuracy by which to select a model which
better explains the observed data. Extensive evaluations were performed. We
first show using synthetic data with the network structures taken from real
networks that there are considerable behavioral differences between the AsIC
and the AsLT models, the proposed methods accurately and stably learn the model
parameters, and identify the correct diffusion model from a limited amount of
observation data. We next apply these methods to behavioral analysis of topic
propagation using the real blog propagation data, and show there is a clear
indication as to which topic better follows which model, although the results
are rather insensitive to the chosen model when discussing, from the learned
parameter values, how far and how fast each topic propagates.
|
1204.4539
|
Supervised Feature Selection in Graphs with Path Coding Penalties and
Network Flows
|
stat.ML cs.LG math.OC
|
We consider supervised learning problems where the features are embedded in a
graph, such as gene expressions in a gene network. In this context, it is of
much interest to automatically select a subgraph with few connected components;
by exploiting prior knowledge, one can indeed improve the prediction
performance or obtain results that are easier to interpret. Regularization or
penalty functions for selecting features in graphs have recently been proposed,
but they raise new algorithmic challenges. For example, they typically require
solving a combinatorially hard selection problem among all connected subgraphs.
In this paper, we propose computationally feasible strategies to select a
sparse and well-connected subset of features sitting on a directed acyclic
graph (DAG). We introduce structured sparsity penalties over paths on a DAG
called "path coding" penalties. Unlike existing regularization functions that
model long-range interactions between features in a graph, path coding
penalties are tractable. The penalties and their proximal operators involve
path selection problems, which we efficiently solve by leveraging network flow
optimization. We experimentally show on synthetic, image, and genomic data that
our approach is scalable and leads to more connected subgraphs than other
regularization functions for graphs.
|