id: string (length 9–16)
title: string (length 4–278)
categories: list (length 1–13)
abstract: string (length 3–4.08k)
filtered_category_membership: dict
1109.1168
An Extension of Semantic Proximity for Fuzzy Multivalued Dependencies in Fuzzy Relational Database
[ "cs.DB", "cs.IR" ]
Following the development of fuzzy logic theory by Lotfi Zadeh, its applications were investigated by researchers in many fields. Representing and working with uncertain data is a complex problem; to solve it, the structure of relations and the operators defined on them must be revised. Fuzzy databases carry integrity constraints, including data dependencies. In this paper, fuzzy multivalued dependency based on semantic proximity and its problems are first studied. To solve these problems, the semantic proximity formula is modified, and fuzzy multivalued dependency based on an extension of semantic proximity with \alpha degree is defined in a fuzzy relational database that includes crisp, NULL and fuzzy values. Inference rules for this dependency are also defined, and their completeness is proved. Finally, we show that fuzzy functional dependency based on this concept is a special case of fuzzy multivalued dependency in the fuzzy relational database.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1175
Estimating 3D Human Shapes from Measurements
[ "cs.CV", "cs.GR" ]
The recent advances in 3-D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1-D measurements encode 3-D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using nonlinear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1202
Data Mining Techniques: A Source for Consumer Behavior Analysis
[ "cs.DB" ]
Various studies of consumer purchasing behavior have been presented and applied to real problems. Data mining techniques are expected to be a more effective tool for analyzing consumer behavior; however, data mining methods have disadvantages as well as advantages, so it is important to select techniques appropriate to the database being mined. The objective of this paper is to understand consumer behavior and the consumer's psychological condition at the time of purchase, and to show how suitable data mining methods can improve on conventional methods. Moreover, in an experiment, association rules are employed to mine rules for trusted customers using sales data from the supermarket industry.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1211
An Efficient Preprocessing Methodology for Discovering Patterns and Clustering of Web Users using a Dynamic ART1 Neural Network
[ "cs.NE" ]
In this paper, a complete preprocessing methodology for discovering patterns in the Web usage mining process, which improves data quality by reducing the quantity of data, is proposed. A dynamic ART1 neural network clustering algorithm with a neat architecture, which groups users according to their Web access patterns, is also proposed. Several experiments are conducted, and the results show that the proposed methodology reduces the size of Web log files to 73-82% of their initial size, and that the proposed ART1 algorithm is dynamic and learns relatively stable, good-quality clusters.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1214
A distributed optimization-based approach for hierarchical model predictive control of large-scale systems with coupled dynamics and constraints
[ "math.OC", "cs.MA", "cs.SY" ]
We present a hierarchical model predictive control approach for large-scale systems based on dual decomposition. The proposed scheme allows coupling in both dynamics and constraints between the subsystems and generates a primal feasible solution within a finite number of iterations, using primal averaging and a constraint tightening approach. The primal update is performed in a distributed way and does not require exact solutions, while the dual problem uses an approximate subgradient method. Stability of the scheme is established using bounded suboptimality.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1109.1231
A Combinatorial Optimisation Approach to Designing Dual-Parented Long-Reach Passive Optical Networks
[ "cs.AI" ]
We present an application focused on the design of resilient long-reach passive optical networks. We specifically consider dual-parented networks whereby each customer must be connected to two metro sites via local exchange sites. An important property of such a placement is resilience to single metro node failure. The objective of the application is to determine the optimal position of a set of metro nodes such that the total optical fibre length is minimized. We prove that this problem is NP-Complete. We present two alternative combinatorial optimisation approaches to finding an optimal metro node placement using: a mixed integer linear programming (MIP) formulation of the problem; and, a hybrid approach that uses clustering as a preprocessing step. We consider a detailed case-study based on a network for Ireland. The hybrid approach scales well and finds solutions that are close to optimal, with a runtime that is two orders-of-magnitude better than the MIP model.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1247
Devnagari document segmentation using histogram approach
[ "cs.CV" ]
Document segmentation is one of the critical phases in machine recognition of any language. Correct segmentation of individual symbols determines the accuracy of the character recognition technique. It is used to decompose an image of a sequence of characters into sub-images of individual symbols by segmenting lines and words. Devnagari is the most popular script in India. It is used for writing the Hindi, Marathi, Sanskrit and Nepali languages. Moreover, Hindi is the third most popular language in the world. Devnagari documents consist of vowels, consonants and various modifiers. Hence proper segmentation of Devnagari words is challenging. A simple histogram-based approach to segment Devnagari documents is proposed in this paper. Various challenges in segmentation of the Devnagari script are also discussed.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1251
Synthesis of Distributed Control and Communication Schemes from Global LTL Specifications
[ "cs.RO", "cs.SY", "math.OC" ]
We introduce a technique for synthesis of control and communication strategies for a team of agents from a global task specification given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied by the agents. We consider a purely discrete scenario, in which the dynamics of each agent is modeled as a finite transition system. The proposed computational framework consists of two main steps. First, we extend results from concurrency theory to check whether the specification is distributable among the agents. Second, we generate individual control and communication strategies by using ideas from LTL model checking. We apply the method to automatically deploy a team of miniature cars in our Robotic Urban-Like Environment.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 1, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1109.1255
Interference Mitigation in Large Random Wireless Networks
[ "cs.IT", "math.IT" ]
A central problem in the operation of large wireless networks is how to deal with interference -- the unwanted signals being sent by transmitters that a receiver is not interested in. This thesis looks at ways of combating such interference. In Chapters 1 and 2, we outline the necessary information and communication theory background, including the concept of capacity. We also include an overview of a new set of schemes for dealing with interference known as interference alignment, paying special attention to a channel-state-based strategy called ergodic interference alignment. In Chapter 3, we consider the operation of large regular and random networks by treating interference as background noise. We consider the local performance of a single node, and the global performance of a very large network. In Chapter 4, we use ergodic interference alignment to derive the asymptotic sum-capacity of large random dense networks. These networks are derived from a physical model of node placement where signal strength decays over the distance between transmitters and receivers. (See also arXiv:1002.0235 and arXiv:0907.5165.) In Chapter 5, we look at methods of reducing the long time delays incurred by ergodic interference alignment. We analyse the tradeoff between reducing delay and lowering the communication rate. (See also arXiv:1004.0208.) In Chapter 6, we outline a problem that is equivalent to the problem of pooled group testing for defective items. We then present some new work that uses information theoretic techniques to attack group testing. We introduce for the first time the concept of the group testing channel, which allows for modelling of a wide range of statistical error models for testing. We derive new results on the number of tests required to accurately detect defective items, including when using sequential `adaptive' tests.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1265
FEBER: Feedback Based Erasure Recovery for Real-Time Multicast over 802.11 Networks
[ "cs.IT", "cs.MM", "cs.NI", "math.IT" ]
We consider the problem of broadcasting data streams over a wireless network to multiple receivers with reliability and timely delivery guarantees. In our framework, we consider packets that need to be delivered within a given time interval, after which the packet is no longer useful at the application layer. We define the notion of a critical packet and, based on periodic feedback from the receivers, we propose a retransmission scheme that guarantees timely delivery of such packets, as well as of packets that are innovative for other receivers. Our solution provides a trade-off between packet delivery ratio and bandwidth use, which contrasts with existing approaches such as FEC and ARQ, where the focus is on ensuring reliability first, offering no guarantees of timely delivery of data. We evaluate the performance of our proposal in an 802.11 wireless network testbed.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1275
A Formal Verification Approach to the Design of Synthetic Gene Networks
[ "cs.SY", "math.OC", "q-bio.MN" ]
The design of genetic networks with specific functions is one of the major goals of synthetic biology. However, constructing biological devices that work "as required" remains challenging, while the cost of uncovering flawed designs experimentally is large. To address this issue, we propose a fully automated framework that allows the correctness of synthetic gene networks to be formally verified in silico from rich, high level functional specifications. Given a device, we automatically construct a mathematical model from experimental data characterizing the parts it is composed of. The specific model structure guarantees that all experimental observations are captured and allows us to construct finite abstractions through polyhedral operations. The correctness of the model with respect to temporal logic specifications can then be verified automatically using methods inspired by model checking. Overall, our procedure is conservative but it can filter through a large number of potential device designs and select few that satisfy the specification to be implemented and tested further experimentally. Illustrative examples of the application of our methods to the design of simple synthetic gene networks are included.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1109.1276
Application of the Modified 2-opt and Jumping Gene Operators in Multi-Objective Genetic Algorithm to solve MOTSP
[ "cs.AI", "cs.NE" ]
Evolutionary multi-objective optimization is becoming a hot research area, and quite a few papers on these algorithms have been published; however, the role of local search techniques has not been explored adequately. This paper studies the role of a local search technique called 2-opt for the Multi-Objective Travelling Salesman Problem (MOTSP). A new mutation operator called Jumping Gene (JG) is also used. Since the 2-opt operator was intended for the single-objective TSP, its domain is expanded to the MOTSP in this paper. The new technique is applied to the KroAB100 list of cities.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1293
Source Coding When the Side Information May Be Delayed
[ "cs.IT", "math.IT" ]
For memoryless sources, delayed side information at the decoder does not improve the rate-distortion function. However, this is not the case for more general sources with memory, as demonstrated by a number of works focusing on the special case of (delayed) feedforward. In this paper, a setting is studied in which the encoder is potentially uncertain about the delay with which measurements of the side information are acquired at the decoder. Assuming a hidden Markov model for the sources, at first, a single-letter characterization is given for the set-up where the side information delay is arbitrary and known at the encoder, and the reconstruction at the destination is required to be (near) lossless. Then, with delay equal to zero or one source symbol, a single-letter characterization is given of the rate-distortion region for the case where side information may be delayed or not, unbeknownst to the encoder. The characterization is further extended to allow for additional information to be sent when the side information is not delayed. Finally, examples for binary and Gaussian sources are provided.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1302
Adding a new site in an existing Oracle Multimaster replication without quiescing the replication
[ "cs.DB" ]
This paper presents a new solution that adds a new database server to an existing Oracle Multimaster data replication system using the Online Instantiation method. During this process the system is down, because DML statements cannot be executed on replicated objects; only queries are allowed. The time needed to add the new database server depends on the number of objects in the replication group and on network conditions. We propose adding a new layer between the replicated objects and the database sessions that contain DML statements. Using the packages we have developed, this layer eliminates the system downtime. The packages are active only while the new site is being added, and they modify all DML statements and queries based on replicated objects.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1314
Measuring Intelligence through Games
[ "cs.AI" ]
Artificial general intelligence (AGI) refers to research aimed at tackling the full problem of artificial intelligence, that is, create truly intelligent agents. This sets it apart from most AI research which aims at solving relatively narrow domains, such as character recognition, motion planning, or increasing player satisfaction in games. But how do we know when an agent is truly intelligent? A common point of reference in the AGI community is Legg and Hutter's formal definition of universal intelligence, which has the appeal of simplicity and generality but is unfortunately incomputable. Games of various kinds are commonly used as benchmarks for "narrow" AI research, as they are considered to have many important properties. We argue that many of these properties carry over to the testing of general intelligence as well. We then sketch how such testing could practically be carried out. The central part of this sketch is an extension of universal intelligence to deal with finite time, and the use of sampling of the space of games expressed in a suitably biased game description language.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1317
Lifted Unit Propagation for Effective Grounding
[ "cs.LO", "cs.AI" ]
A grounding of a formula $\phi$ over a given finite domain is a ground formula which is equivalent to $\phi$ on that domain. Very effective propositional solvers have made grounding-based methods for problem solving increasingly important, however for realistic problem domains and instances, the size of groundings is often problematic. A key technique in ground (e.g., SAT) solvers is unit propagation, which often significantly reduces ground formula size even before search begins. We define a "lifted" version of unit propagation which may be carried out prior to grounding, and describe integration of the resulting technique into grounding algorithms. We describe an implementation of the method in a bottom-up grounder, and an experimental study of its performance.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1325
Get the Most out of Your Sample: Optimal Unbiased Estimators using Partial Information
[ "cs.DB", "cs.DS", "cs.NI", "math.ST", "stat.TH" ]
Random sampling is an essential tool in the processing and transmission of data. It is used to summarize data too large to store or manipulate and to meet resource constraints on bandwidth or battery power. Estimators that are applied to the sample facilitate fast approximate processing of queries posed over the original data, and the value of the sample hinges on the quality of these estimators. Our work targets data sets such as request and traffic logs and sensor measurements, where data is repeatedly collected over multiple {\em instances}: time periods, locations, or snapshots. We are interested in queries that span multiple instances, such as distinct counts and distance measures over selected records. These queries are used for applications ranging from planning to anomaly and change detection. Unbiased low-variance estimators are particularly effective, as the relative error decreases with the number of selected record keys. The Horvitz-Thompson estimator, known to minimize variance for sampling with "all or nothing" outcomes (which reveal either the exact value of, or no information on, the estimated quantity), is not optimal for multi-instance operations, for which an outcome may provide partial information. We present a general principled methodology for the derivation of (Pareto) optimal unbiased estimators over sampled instances and aim to understand its potential. We demonstrate significant improvement in estimate accuracy of fundamental queries for common sampling schemes.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1355
Localization on low-order eigenvectors of data matrices
[ "cs.DM", "cs.CE", "cs.LG" ]
Eigenvector localization refers to the situation when most of the components of an eigenvector are zero or near-zero. This phenomenon has been observed on eigenvectors associated with extremal eigenvalues, and in many of those cases it can be meaningfully interpreted in terms of "structural heterogeneities" in the data. For example, the largest eigenvectors of adjacency matrices of large complex networks often have most of their mass localized on high-degree nodes; and the smallest eigenvectors of the Laplacians of such networks are often localized on small but meaningful community-like sets of nodes. Here, we describe localization associated with low-order eigenvectors, i.e., eigenvectors corresponding to eigenvalues that are not extremal but that are "buried" further down in the spectrum. Although we have observed it in several unrelated applications, this phenomenon of low-order eigenvector localization defies common intuitions and simple explanations, and it creates serious difficulties for the applicability of popular eigenvector-based machine learning and data analysis tools. After describing two examples where low-order eigenvector localization arises, we present a very simple model that qualitatively reproduces several of the empirically-observed results. This model suggests certain coarse structural similarities among the seemingly-unrelated applications where we have observed low-order eigenvector localization, and it may be used as a diagnostic tool to help extract insight from data graphs when such low-order eigenvector localization is present.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1359
Representation for alphanumeric data type based on space and speed case study: Student ID of X university
[ "cs.DB" ]
"ID" is derived from the first two characters of the word "identity". An ID is used to distinguish one entity from another. A Student ID (SID) is the key that distinguishes one student from other students; in database terms, this key is unique. A SID can consist of numbers, letters, or a combination of both (alphanumeric). In everyday use it does not matter which data type a SID belongs to; in database design, however, determining the data type, including that of the SID, is important. Problems arise because there is a contradiction between the data type suggested by the data's characteristics and practical needs. Judged by basic concepts and its characteristics, the data type for a SID is a string: a SID consists of a set of digits to which arithmetic operations such as addition, subtraction, multiplication and division cannot be meaningfully applied. In terms of computer organization, however, the representation of a data type determines its space requirements, access speed, and operation speed. Considering the space and speed constraints in the experiments conducted, a SID is better represented as an integer than as a set of characters. KEYWORDS: alphanumeric, representation, string, integer, space, speed
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1363
Modelling Spatial Interactions in the Arbuscular Mycorrhizal Symbiosis using the Calculus of Wrapped Compartments
[ "cs.LO", "cs.CE" ]
Arbuscular mycorrhiza (AM) is the most widespread plant-fungus symbiosis on earth. Investigating this kind of symbiosis is considered one of the most promising ways to develop methods to nurture plants in more natural manners, avoiding the complex chemical production used nowadays to make artificial fertilizers. In previous work we used the Calculus of Wrapped Compartments (CWC) to investigate different phases of the AM symbiosis. In this paper, we continue this line of research by modelling the colonisation of the plant root cells by the fungal hyphae spreading in the soil. This study requires the description of some spatial interaction. Although CWC has no explicit feature modelling a spatial geometry, the compartment labelling feature can be effectively exploited to define a discrete surface topology outlining the relevant sectors which determine the spatial properties of the system under consideration. Different situations and interesting spatial properties can be modelled and analysed in such a lightweight framework (which does not have an explicit notion of geometry with coordinates and spatial metrics), thus exploiting the existing CWC simulation tool.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1364
Programmable models of growth and mutation of cancer-cell populations
[ "cs.CE", "q-bio.CB" ]
In this paper we propose a systematic approach to construct mathematical models describing populations of cancer-cells at different stages of disease development. The methodology we propose is based on stochastic Concurrent Constraint Programming, a flexible stochastic modelling language. The methodology is tested on (and partially motivated by) the study of prostate cancer. In particular, we prove how our method is suitable to systematically reconstruct different mathematical models of prostate cancer growth - together with interactions with different kinds of hormone therapy - at different levels of refinement.
{ "Other": 0, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1365
A semi-quantitative equivalence for abstracting from fast reactions
[ "cs.CE", "q-bio.QM" ]
Semantic equivalences are used in process algebra to capture the notion of similar behaviour, and this paper proposes a semi-quantitative equivalence for a stochastic process algebra developed for biological modelling. We consider abstracting away from fast reactions as suggested by the Quasi-Steady-State Assumption. We define a fast-slow bisimilarity based on this idea. We also show congruence under an appropriate condition for the cooperation operator of Bio-PEPA. The condition requires that there is no synchronisation over fast actions, and this distinguishes fast-slow bisimilarity from weak bisimilarity. We also show congruence for an operator which extends the reactions available for a species. We characterise models for which it is only necessary to consider the matching of slow transitions and we illustrate the equivalence on two models of competitive inhibition.
{ "Other": 0, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1366
A Minimal OO Calculus for Modelling Biological Systems
[ "cs.CE", "cs.LO" ]
In this paper we present a minimal object-oriented core calculus for modelling the biological notion of type that arises from biological ontologies in formalisms based on term rewriting. This calculus implements encapsulation, method invocation, subtyping and a simple form of overriding inheritance, and it is applicable to models designed in the most popular term-rewriting formalisms. The classes implemented in a formalism can be used in several models, much like programming libraries.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1367
A Study of the PDGF Signaling Pathway with PRISM
[ "cs.CE", "cs.LO", "q-bio.QM" ]
In this paper, we apply the probabilistic model checker PRISM to the analysis of a biological system -- the Platelet-Derived Growth Factor (PDGF) signaling pathway, demonstrating in detail how this pathway can be analyzed in PRISM. We show that quantitative verification can yield a better understanding of the PDGF signaling pathway.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1368
Multiple verification in computational modeling of bone pathologies
[ "cs.LO", "cs.CE", "cs.SY", "math.OC", "q-bio.TO" ]
We introduce a model checking approach to diagnosing emerging bone pathologies. The implementation of a new model of bone remodeling in PRISM has led to an interesting characterization of osteoporosis as a defective bone remodeling dynamics with respect to other bone pathologies. Our approach allows us to derive three types of model checking-based diagnostic estimators. The first diagnostic measure focuses on the level of bone mineral density, which is currently used in medical practice. In addition, we have introduced a novel diagnostic estimator which uses the full patient clinical record, here simulated using the modeling framework. This estimator detects rapid (months) negative changes in bone mineral density. Independently of the actual bone mineral density, when the decrease occurs rapidly it is important to alert the patient and monitor him/her more closely to detect the insurgence of other bone co-morbidities. A third estimator takes into account the variance of the bone density, which could support the investigation of metabolic syndromes, diabetes and cancer. Our implementation could make use of different logical combinations of these statistical estimators and could incorporate other biomarkers for other systemic co-morbidities (for example diabetes and thalassemia). The combination of stochastic modeling with formal methods motivates a new diagnostic framework for complex pathologies. In particular, our approach takes into consideration important properties of biosystems such as multiscale structure and self-adaptiveness. The multi-diagnosis could be further expanded, inching towards the complexity of human diseases. Finally, we briefly introduce self-adaptiveness in formal methods, which is a key property of the regulative mechanisms of biological systems and is well known in other mathematical and engineering areas.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1109.1396
Gossip Learning with Linear Models on Fully Distributed Data
[ "cs.LG", "cs.DC" ]
Machine learning over fully distributed data poses an important problem in peer-to-peer (P2P) applications. In this model we have one data record at each network node, but cannot move raw data due to privacy considerations. For example, user profiles, ratings, history, or sensor readings can represent this case. This problem is difficult because local models cannot be learned from a single record, the system model offers almost no guarantees for reliability, and yet the communication cost needs to be kept low. Here we propose gossip learning, a generic approach that is based on multiple models taking random walks over the network in parallel, while applying an online learning algorithm to improve themselves, and getting combined via ensemble learning methods. We present an instantiation of this approach for the case of classification with linear models. Our main contribution is an ensemble learning method which---through the continuous combination of the models in the network---implements a virtual weighted voting mechanism over an exponential number of models at practically no extra cost as compared to independent random walks. We prove the convergence of the method theoretically, and perform extensive experiments on benchmark datasets. Our experimental analysis demonstrates the performance and robustness of the proposed approach.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1409
A georeferenced Agent-Based Model to analyze the climate change impacts on the Andorra winter tourism
[ "cs.MA" ]
This study presents a georeferenced agent-based model to analyze the climate change impacts on the ski industry in Andorra and the effect of snowmaking as future adaptation strategy. The present study is the first attempt to analyze the ski industry in the Pyrenees region and will contribute to a better understanding of the vulnerability of Andorran ski resorts and the suitability of snowmaking as potential adaptation strategy to climate change. The resulting model can be used as a planning support tool to help local stakeholders understand the vulnerability and potential impacts of climate change. This model can be used in the decision-making process of designing and developing appropriate sustainable adaptation strategies to future climate variability.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1460
Finiteness of the playing time in 'Beggar-my-neighbour' card game
[ "math.PR", "cs.IT", "math.DS", "math.IT" ]
It is proved that in card games similar to 'Beggar-my-neighbour' the mathematical expectation of the playing time is finite, provided that the player who starts the round is determined randomly and the deck is shuffled when the trick is added. The result holds for the generic setting of the game.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1480
Curvature Prior for MRF-based Segmentation and Shape Inpainting
[ "cs.CV" ]
Most image labeling problems such as segmentation and image reconstruction are fundamentally ill-posed and suffer from ambiguities and noise. Higher order image priors encode high level structural dependencies between pixels and are key to overcoming these problems. However, these priors in general lead to computationally intractable models. This paper addresses the problem of discovering compact representations of higher order priors which allow efficient inference. We propose a framework for solving this problem which uses a recently proposed representation of higher order functions where they are encoded as lower envelopes of linear functions. Maximum a Posteriori inference on our learned models reduces to minimizing a pairwise function of discrete variables, which can be done approximately using standard methods. Although this is a primarily theoretical paper, we also demonstrate the practical effectiveness of our framework on the problem of learning a shape prior for image segmentation and reconstruction. We show that our framework can learn a compact representation that approximates a prior that encourages low curvature shapes. We evaluate the approximation accuracy, discuss properties of the trained model, and show various results for shape inpainting and image segmentation.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1488
Are Opinions Based on Science: Modelling Social Response to Scientific Facts
[ "physics.soc-ph", "cs.CY", "cs.SI", "nlin.AO" ]
As scientists we like to think that modern societies and their members base their views, opinions and behaviour on scientific facts. This is not necessarily the case, even though we are all (over-) exposed to information flow through various channels of media, i.e. newspapers, television, radio, internet, and web. It is thought that this is mainly due to the conflicting information on the mass media and to the individual attitude (formed by cultural, educational and environmental factors), that is, one external factor and one personal factor. In this paper we will investigate the dynamical development of opinion in a small population of agents by means of a computational model of opinion formation in a co-evolving network of socially linked agents. The personal and external factors are taken into account by assigning an individual attitude parameter to each agent, and by subjecting all to an external but homogeneous field to simulate the effect of the media. We then adjust the field strength in the model by using actual data on scientific perception surveys carried out in two different populations, which allow us to compare two different societies. We interpret the model findings with the aid of simple mean field calculations. Our results suggest that scientifically sound concepts are more difficult to acquire than concepts not validated by science, since opposing individuals organize themselves in close communities that prevent opinion consensus.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 1, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
1109.1498
Structured Knowledge Representation for Image Retrieval
[ "cs.AI" ]
We propose a structured approach to the problem of retrieval of images by content and present a description logic that has been devised for the semantic indexing and retrieval of images containing complex objects. As other approaches do, we start from low-level features extracted with image analysis to detect and characterize regions in an image. However, in contrast with feature-based approaches, we provide a syntax to describe segmented regions as basic objects and complex objects as compositions of basic ones. Then we introduce a companion extensional semantics for defining reasoning services, such as retrieval, classification, and subsumption. These services can be used for both exact and approximate matching, using similarity measures. Using our logical approach as a formal specification, we implemented a complete client-server image retrieval system, which allows a user to pose both queries by sketch and queries by example. A set of experiments has been carried out on a testbed of images to assess the retrieval capabilities of the system in comparison with expert users ranking. Results are presented adopting a well-established measure of quality borrowed from textual information retrieval.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1504
A New Method for Lower Bounds on the Running Time of Evolutionary Algorithms
[ "cs.NE" ]
We present a new method for proving lower bounds on the expected running time of evolutionary algorithms. It is based on fitness-level partitions and an additional condition on transition probabilities between fitness levels. The method is versatile, intuitive, elegant, and very powerful. It yields exact or near-exact lower bounds for LO, OneMax, long k-paths, and all functions with a unique optimum. Most lower bounds are very general: they hold for all evolutionary algorithms that only use bit-flip mutation as variation operator---i.e. for all selection operators and population models. The lower bounds are stated with their dependence on the mutation rate. These results have very strong implications. They allow us to determine the optimal mutation-based algorithm for LO and OneMax, i.e., which algorithm minimizes the expected number of fitness evaluations. This includes the choice of the optimal mutation rate.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1507
On the Symmetric Feedback Capacity of the K-user Cyclic Z-Interference Channel
[ "cs.IT", "math.IT" ]
The K-user cyclic Z-interference channel models a situation in which the kth transmitter causes interference only to the (k-1)th receiver in a cyclic manner, e.g., the first transmitter causes interference only to the Kth receiver. The impact of noiseless feedback on the capacity of this channel is studied by focusing on the Gaussian cyclic Z-interference channel. To this end, the symmetric feedback capacity of the linear shift deterministic cyclic Z-interference channel (LD-CZIC) is completely characterized for all interference regimes. Using insights from the linear deterministic channel model, the symmetric feedback capacity of the Gaussian cyclic Z-interference channel is characterized up to within a constant number of bits. As a byproduct of the constant gap result, the symmetric generalized degrees of freedom with feedback for the Gaussian cyclic Z-interference channel are also characterized. These results highlight that the symmetric feedback capacities for both linear and Gaussian channel models are in general functions of K, the number of users. Furthermore, the capacity gain obtained due to feedback decreases as K increases.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1525
Conceptual Knowledge Markup Language: The central core
[ "cs.DL", "cs.AI" ]
The conceptual knowledge framework OML/CKML needs several components for a successful design. One important, but previously overlooked, component is the central core of OML/CKML. The central core provides a theoretical link between the ontological specification in OML and the conceptual knowledge representation in CKML. This paper discusses the formal semantics and syntactic styles of the central core, and also the important role it plays in defining interoperability between OML/CKML, RDF/S and Ontolingua.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1528
Dynamics of Boltzmann Q-Learning in Two-Player Two-Action Games
[ "cs.GT", "cs.LG", "cs.MA", "nlin.AO", "q-bio.PE" ]
We consider the dynamics of Q-learning in two-player two-action games with a Boltzmann exploration mechanism. For any non-zero exploration rate the dynamics is dissipative, which guarantees that agent strategies converge to rest points that are generally different from the game's Nash Equilibria (NE). We provide a comprehensive characterization of the rest point structure for different games, and examine the sensitivity of this structure with respect to the noise due to exploration. Our results indicate that for a class of games with multiple NE the asymptotic behavior of learning dynamics can undergo drastic changes at critical exploration rates. Furthermore, we demonstrate that for certain games with a single NE, it is possible to have additional rest points (not corresponding to any NE) that persist for a finite range of the exploration rates and disappear when the exploration rates of both players tend to zero.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1530
Daily Deals: Prediction, Social Diffusion, and Reputational Ramifications
[ "cs.SI", "physics.soc-ph" ]
Daily deal sites have become the latest Internet sensation, providing discounted offers to customers for restaurants, ticketed events, services, and other items. We begin by undertaking a study of the economics of daily deals on the web, based on a dataset we compiled by monitoring Groupon and LivingSocial sales in 20 large cities over several months. We use this dataset to characterize deal purchases; glean insights about operational strategies of these firms; and evaluate customers' sensitivity to factors such as price, deal scheduling, and limited inventory. We then marry our daily deals dataset with additional datasets we compiled from Facebook and Yelp users to study the interplay between social networks and daily deal sites. First, by studying user activity on Facebook while a deal is running, we provide evidence that daily deal sites benefit from significant word-of-mouth effects during sales events, consistent with results predicted by cascade models. Second, we consider the effects of daily deals on the longer-term reputation of merchants, based on their Yelp reviews before and after they run a daily deal. Our analysis shows that while the number of reviews increases significantly due to daily deals, average rating scores from reviewers who mention daily deals are 10% lower than scores of their peers on average.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
1109.1533
The Non-Bayesian Restless Multi-Armed Bandit: A Case of Near-Logarithmic Strict Regret
[ "math.OC", "cs.LG", "cs.NI", "cs.SY", "math.PR" ]
In the classic Bayesian restless multi-armed bandit (RMAB) problem, there are $N$ arms, with rewards on all arms evolving at each time as Markov chains with known parameters. A player seeks to activate $K \geq 1$ arms at each time in order to maximize the expected total reward obtained over multiple plays. RMAB is a challenging problem that is known to be PSPACE-hard in general. We consider in this work the even harder non-Bayesian RMAB, in which the parameters of the Markov chain are assumed to be unknown \emph{a priori}. We develop an original approach to this problem that is applicable when the corresponding Bayesian problem has the structure that, depending on the known parameter values, the optimal solution is one of a prescribed finite set of policies. In such settings, we propose to learn the optimal policy for the non-Bayesian RMAB by employing a suitable meta-policy which treats each policy from this finite set as an arm in a different non-Bayesian multi-armed bandit problem for which a single-arm selection policy is optimal. We demonstrate this approach by developing a novel sensing policy for opportunistic spectrum access over unknown dynamic channels. We prove that our policy achieves near-logarithmic regret (the difference in expected reward compared to a model-aware genie), which leads to the same average reward that can be achieved by the optimal policy under a known model. This is the first such result in the literature for a non-Bayesian RMAB. For our proof, we also develop a novel generalization of the Chernoff-Hoeffding bound.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1109.1552
Efficient Online Learning for Opportunistic Spectrum Access
[ "cs.LG", "cs.NI", "cs.SY", "math.OC", "math.PR" ]
The problem of opportunistic spectrum access in cognitive radio networks has been recently formulated as a non-Bayesian restless multi-armed bandit problem. In this problem, there are N arms (corresponding to channels) and one player (corresponding to a secondary user). The state of each arm evolves as a finite-state Markov chain with unknown parameters. At each time slot, the player can select K < N arms to play and receives state-dependent rewards (corresponding to the throughput obtained given the activity of primary users). The objective is to maximize the expected total rewards (i.e., total throughput) obtained over multiple plays. The performance of an algorithm for such a multi-armed bandit problem is measured in terms of regret, defined as the difference in expected reward compared to a model-aware genie who always plays the best K arms. In this paper, we propose a new continuous exploration and exploitation (CEE) algorithm for this problem. When no information is available about the dynamics of the arms, CEE is the first algorithm to guarantee near-logarithmic regret uniformly over time. When some bounds corresponding to the stationary state distributions and the state-dependent rewards are known, we show that CEE can be easily modified to achieve logarithmic regret over time. In contrast, prior algorithms require additional information concerning bounds on the second eigenvalues of the transition matrices in order to guarantee logarithmic regret. Finally, we show through numerical simulations that CEE is more efficient than prior algorithms.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1109.1604
Degrees of Freedom (DoF) of Locally Connected Interference Channels with Coordinated Multi-Point (CoMP) Transmission
[ "cs.IT", "math.IT" ]
The degrees of freedom (DoF) available for communication provides an analytically tractable way to characterize the information-theoretic capacity of interference channels. In this paper, the DoF of a K-user interference channel is studied under the assumption that the transmitters can cooperate via coordinated multi-point (CoMP) transmission. In [1], the authors considered the linear asymmetric model of Wyner, where each transmitter is connected to its own receiver and its successor, and is aware of its own message as well as M-1 preceding messages. The per user DoF was shown to go to M/(M+1) as the number of users increases to infinity. In this work, the same model of channel connectivity is considered, with a relaxed cooperation constraint that bounds the maximum number of transmitters at which each message can be available, by a cooperation order M. We show that the relaxation of the cooperation constraint, while maintaining the same load imposed on a backhaul link needed to distribute the messages, results in a gain in the DoF. In particular, the asymptotic limit of the per user DoF under the cooperation order constraint is (2M)/(2M+1). Moreover, the optimal transmit set selection satisfies a local cooperation constraint; i.e., each message needs only to be available at neighboring transmitters. [1] A. Lapidoth, S. Shamai (Shitz) and M. A. Wigger, "A linear interference network with local Side-Information," in Proc. IEEE International Symposium on Information Theory (ISIT), Nice, Jun. 2007.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1605
On Clustering on Graphs with Multiple Edge Types
[ "cs.SI", "cs.LG", "physics.soc-ph" ]
We study clustering on graphs with multiple edge types. Our main motivation is that similarities between objects can be measured in many different metrics. For instance similarity between two papers can be based on common authors, where they are published, keyword similarity, citations, etc. As such, graphs with multiple edge types are a more accurate model to describe similarities between objects. Each edge/metric provides only partial information about the data; recovering full information requires aggregation of all the similarity metrics. Clustering becomes much more challenging in this context, since in addition to the difficulties of the traditional clustering problem, we have to deal with a space of clusterings. We generalize the concept of clustering in single-edge graphs to multi-edged graphs and investigate problems such as: Can we find a clustering that remains good, even if we change the relative weights of metrics? How can we describe the space of clusterings efficiently? Can we find unexpected clusterings (a good clustering that is distant from all given clusterings)? If given the ground-truth clustering, can we recover how the weights for edge types were aggregated? We also present two case studies: one based on papers on arXiv and one based on CIA World Factbook.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
1109.1606
Online Learning for Combinatorial Network Optimization with Restless Markovian Rewards
[ "cs.LG", "cs.NI", "math.OC", "math.PR" ]
Combinatorial network optimization algorithms that compute optimal structures taking into account edge weights form the foundation for many network protocols. Examples include shortest path routing, minimal spanning tree computation, maximum weighted matching on bipartite graphs, etc. We present CLRMR, the first online learning algorithm that efficiently solves the stochastic version of these problems where the underlying edge weights vary as independent Markov chains with unknown dynamics. The performance of an online learning algorithm is characterized in terms of regret, defined as the cumulative difference between the rewards obtained by a suitably-defined genie and those obtained by the given algorithm. We prove that, compared to a genie that knows the Markov transition matrices and uses the single-best structure at all times, CLRMR yields regret that is polynomial in the number of edges and nearly-logarithmic in time.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1618
An analysis of Twitter messages in the 2011 Tohoku Earthquake
[ "cs.SI", "cs.CL", "physics.soc-ph" ]
Social media such as Facebook and Twitter have proven to be a useful resource to understand public opinion towards real world events. In this paper, we investigate over 1.5 million Twitter messages (tweets) for the period 9th March 2011 to 31st May 2011 in order to track awareness and anxiety levels in the Tokyo metropolitan district in response to the 2011 Tohoku Earthquake and subsequent tsunami and nuclear emergencies. These three events were tracked using both English and Japanese tweets. Preliminary results indicated: 1) close correspondence between Twitter data and earthquake events, 2) strong correlation between English and Japanese tweets on the same events, 3) tweets in the native language play an important role in early warning, 4) tweets showed how quickly Japanese people's anxiety returned to normal levels after the earthquake event. Several distinctions between English and Japanese tweets on earthquake events are also discussed. The results suggest that Twitter data can be used as a useful resource for tracking the public mood of populations affected by natural disasters as well as an early warning system.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
1109.1643
An Efficient Hybrid Power Control Algorithm for Capacity Improvement of CDMA-based Fixed Wireless Applications
[ "cs.IT", "math.IT" ]
In Fixed Wireless Applications (FWA), Code Division Multiple Access (CDMA) is the most promising candidate for wideband data access. The reason is the soft limit on the number of active mobile devices. Many Fixed Wireless Applications impose an upper bound on the BER performance, which restricts the increase in the number of mobile users. The number of active mobile users, or Capacity, is further reduced in a Multipath Fading Environment (MFE). This paper presents an effective method of improving the capacity of CDMA based Fixed Wireless Networks by using a hybrid power control algorithm. The proposed scheme improves the capacity two times as compared to the conventional CDMA based networks. Simulation results have been presented to demonstrate the effectiveness of the proposed scheme.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1646
Exact Subspace Segmentation and Outlier Detection by Low-Rank Representation
[ "cs.IT", "cs.CV", "math.IT" ]
In this work, we address the following matrix recovery problem: suppose we are given a set of data points containing two parts, one part consists of samples drawn from a union of multiple subspaces and the other part consists of outliers. We do not know which data points are outliers, or how many outliers there are. The rank and number of the subspaces are also unknown. Can we detect the outliers and segment the samples into their right subspaces, efficiently and exactly? We utilize a so-called {\em Low-Rank Representation} (LRR) method to solve this problem, and prove that under mild technical conditions, any solution to LRR exactly recovers the row space of the samples and detects the outliers as well. Since the subspace membership is provably determined by the row space, this further implies that LRR can perform exact subspace segmentation and outlier detection, in an efficient way.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1649
Reachability in Biochemical Dynamical Systems by Quantitative Discrete Approximation (extended abstract)
[ "cs.SY", "cs.CE", "math.OC" ]
In this paper, a novel computational technique for finite discrete approximation of continuous dynamical systems, suitable for a significant class of biochemical dynamical systems, is introduced. The method is parameterized in order to adjust the imposed level of approximation, with the guarantee that, as the parameter value increases, the approximation converges to the original continuous system. By employing this approximation technique, we present algorithms solving the reachability problem for biochemical dynamical systems. The presented method and algorithms are evaluated on several exemplary biological models and on a real case study.
{ "Other": 0, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1109.1664
Recognition of Crowd Behavior from Mobile Sensors with Pattern Analysis and Graph Clustering Methods
[ "physics.soc-ph", "cs.SI" ]
Mobile on-body sensing has distinct advantages for the analysis and understanding of crowd dynamics: sensing is not geographically restricted to a specific instrumented area, mobile phones offer on-body sensing and they are already deployed on a large scale, and the rich set of sensors they contain allows one to characterize the behavior of users through pattern recognition techniques. In this paper we present a methodological framework for the machine recognition of crowd behavior from on-body sensors, such as those in mobile phones. The recognition of crowd behaviors opens the way to the acquisition of large-scale datasets for the analysis and understanding of crowd dynamics. It also has practical safety applications by providing improved crowd situational awareness in cases of emergency. The framework comprises: behavioral recognition with the user's mobile device, pairwise analyses of the activity relatedness of two users, and graph clustering in order to uncover globally which users participate in a given crowd behavior. We illustrate this framework for the identification of groups of persons walking, using empirically collected data. We discuss the challenges and research avenues for theoretical and applied mathematics arising from the mobile sensing of crowd behaviors.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
1109.1670
A New Rate Region for General Interference Channel (Improved HK Region)
[ "cs.IT", "math.IT" ]
In this paper, (a) after a detailed investigation of the previous equivalent rate regions for the general interference channel, i.e., the Han-Kobayashi (HK) and the Chong-Motani-Garg (CMG) regions, we define a modified CMG region, whose equivalence with the HK region is readily seen; (b) we make two novel changes in the HK coding. First, we allow the input auxiliary random variables to be correlated and, second, we exploit the powerful technique of random binning instead of the HK-CMG superposition coding, thereby establishing a new rate region for the general interference channel, as an improved version of the HK region; (c) we make a novel change in the CMG coding by allowing the message variables to be correlated and obtain an equivalent form for our new region in (b), as an improved version of the CMG region. Then, (d) in order to exactly demarcate the regions, by considering their different easily comparable versions, we compare our region to the HK and CMG regions. Specifically, using a simple dependency structure for the correlated auxiliary random variables, based on the Wyner and Gacs-Korner common information between dependent variables, we show that the HK and the CMG regions are special cases of our new region. Keywords: interference channel, correlated auxiliary random variables, common information, superposition coding, binning scheme.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1680
An extremal [72,36,16] binary code has no automorphism group containing Z2xZ4, Q_8, or Z_{10}
[ "cs.IT", "math.IT" ]
Let $C$ be an extremal self-dual binary code of length 72 and $g\in \Aut(C) $ be an automorphism of order 2. We show that $C$ is a free $\F_2<g>$ module and use this to exclude certain subgroups of order 8 of $\Aut (C)$. We also show that $\Aut(C)$ does not contain an element of order 10.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1681
On extremal self-dual ternary codes of length 48
[ "cs.IT", "math.IT" ]
All extremal ternary codes of length 48 that have some automorphism of prime order $p\geq 5$ are equivalent to one of the two known codes, the Pless code or the extended quadratic residue code.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1724
The Complexity of Approximating a Bethe Equilibrium
[ "cs.AI", "cs.CC" ]
This paper resolves a common complexity issue in the Bethe approximation of statistical physics and the Belief Propagation (BP) algorithm of artificial intelligence. The Bethe approximation and the BP algorithm are heuristic methods for estimating the partition function and marginal probabilities in graphical models, respectively. The computational complexity of the Bethe approximation is decided by the number of operations required to solve a set of non-linear equations, the so-called Bethe equation. Although the BP algorithm was inspired and developed independently, Yedidia, Freeman and Weiss (2004) showed that the BP algorithm solves the Bethe equation if it converges (however, it often does not). This naturally motivates the following question to understand limitations and empirical successes of the Bethe and BP methods: is the Bethe equation computationally easy to solve? We present a message-passing algorithm solving the Bethe equation in a polynomial number of operations for general binary graphical models of n variables where the maximum degree in the underlying graph is O(log n). Our algorithm can be used as an alternative to BP fixing its convergence issue and is the first fully polynomial-time approximation scheme for the BP fixed-point computation in such a large class of graphical models, while the approximate fixed-point computation is known to be (PPAD-)hard in general. We believe that our technique is of broader interest to understand the computational complexity of the cavity method in statistical physics.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1729
Anomaly Sequences Detection from Logs Based on Compression
[ "cs.LG", "cs.DS" ]
Mining information from logs is an old and still active research topic. In recent years, with the rapid emergence of cloud computing, log mining has become increasingly important to industry. This paper focuses on one major task of log mining: anomaly detection, and proposes a novel method for mining abnormal sequences from large logs. Unlike previous anomaly detection systems based on statistics, probabilities, and the Markov assumption, our approach measures the strangeness of a sequence using compression. It first trains a grammar of normal behaviors using grammar-based compression, then measures the information quantities and densities of questionable sequences according to the increase in grammar length. We have applied our approach to mining some real bugs from fine-grained execution logs. We have also tested its ability in intrusion detection using some publicly available system call traces. The experiments show that our method successfully selects the strange sequences related to bugs or attacks.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1746
Circadian patterns of Wikipedia editorial activity: A demographic analysis
[ "physics.soc-ph", "cs.SI", "stat.AP" ]
Wikipedia (WP) as a collaborative, dynamical system of humans is an appropriate subject of social studies. Each single action of the members of this society, i.e. editors, is well recorded and accessible. Using the cumulative data of 34 Wikipedias in different languages, we try to characterize and find the universalities and differences in temporal activity patterns of editors. Based on this data, we estimate the geographical distribution of editors for each WP in the globe. Furthermore we also clarify the differences among different groups of WPs, which originate in the variance of cultural and social features of the communities of editors.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
1109.1754
Solving Limited Memory Influence Diagrams
[ "cs.AI", "cs.CC", "stat.ML" ]
We present a new algorithm for exactly solving decision making problems represented as influence diagrams. We do not require the usual assumptions of no forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and $10^{64}$ solutions. We show that the problem is NP-hard even if the underlying graph structure of the problem has small treewidth and the variables take on a bounded number of states, but that a fully polynomial time approximation scheme exists for these cases. Moreover, we show that the bound on the number of states is a necessary condition for any efficient approximation scheme.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1766
Analysis of Speedups in Parallel Evolutionary Algorithms for Combinatorial Optimization
[ "cs.NE" ]
Evolutionary algorithms are popular heuristics for solving various combinatorial problems as they are easy to apply and often produce good results. Island models parallelize evolution by using different populations, called islands, which are connected by a graph structure as communication topology. Each island periodically communicates copies of good solutions to neighboring islands in a process called migration. We consider the speedup gained by island models in terms of the parallel running time for problems from combinatorial optimization: sorting (as maximization of sortedness), shortest paths, and Eulerian cycles. Different search operators are considered. The results show in which settings and up to what degree evolutionary algorithms can be parallelized efficiently. Along the way, we also investigate how island models deal with plateaus. In particular, we show that natural settings lead to exponential vs. logarithmic speedups, depending on the frequency of migration.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1774
Conjure Revisited: Towards Automated Constraint Modelling
[ "cs.AI", "cs.PL" ]
Automating the constraint modelling process is one of the key challenges facing the constraints field, and one of the principal obstacles preventing widespread adoption of constraint solving. This paper focuses on the refinement-based approach to automated modelling, where a user specifies a problem in an abstract constraint specification language and it is then automatically refined into a constraint model. In particular, we revisit the Conjure system that first appeared in prototype form in 2005 and present a new implementation with a much greater coverage of the specification language Essence.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1808
The use of microblogging for field-based scientific research
[ "cs.SI", "cs.DL", "physics.soc-ph" ]
Documenting the context in which data are collected is an integral part of the scientific research lifecycle. In field-based research, contextual information provides a detailed description of scientific practices and thus enables data interpretation and reuse. For field data, losing contextual information often means losing the data altogether. Yet, documenting the context of distributed, collaborative, field-based research can be a significant challenge due to the unpredictable nature of real-world settings and to the high degree of variability in data collection methods and scientific practices of different researchers. In this article, we propose the use of microblogging as a mechanism to support collection, ingestion, and publication of contextual information about the variegated digital artifacts that are produced in field research. We perform interviews with scholars involved in field-based environmental and urban sensing research, to determine the extent of adoption of Twitter and similar microblogging platforms and their potential use for field-specific research applications. Based on the results of these interviews as well as participant observation of field activities, we present the design, development, and pilot evaluation of a microblogging application integrated with an existing data collection platform on a handheld device. We investigate whether microblogging accommodates the variable and unpredictable nature of highly mobile research and whether it represents a suitable mechanism to document the context of field research data early in the scientific information lifecycle.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
1109.1841
Digital Libraries, Conceptual Knowledge Systems, and the Nebula Interface
[ "cs.DL", "cs.AI" ]
Concept Analysis provides a principled approach to effective management of wide area information systems, such as the Nebula File System and Interface. This not only offers evidence to support the assertion that a digital library is a bounded collection of incommensurate information sources in a logical space, but also sheds light on techniques for collaboration through coordinated access to the shared organization of knowledge.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1844
Weighted Clustering
[ "cs.LG" ]
One of the most prominent challenges in clustering is "the user's dilemma," which is the problem of selecting an appropriate clustering algorithm for a specific task. A formal approach for addressing this problem relies on the identification of succinct, user-friendly properties that formally capture when certain clustering methods are preferred over others. Until now these properties focused on advantages of classical Linkage-Based algorithms, failing to identify when other clustering paradigms, such as popular center-based methods, are preferable. We present surprisingly simple new properties that delineate the differences between common clustering paradigms, clearly and formally demonstrating advantages of center-based approaches for some applications. These properties address how sensitive algorithms are to changes in element frequencies, which we capture in a generalized setting where every element is associated with a real-valued weight.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1852
Long Trend Dynamics in Social Media
[ "physics.soc-ph", "cs.CY", "cs.SI" ]
A main characteristic of social media is that its diverse content, copiously generated by both standard outlets and general users, constantly competes for the scarce attention of large audiences. Out of this flood of information some topics manage to get enough attention to become the most popular ones and thus to be prominently displayed as trends. Equally important, some of these trends persist long enough so as to shape part of the social agenda. How this happens is the focus of this paper. By introducing a stochastic dynamical model that takes into account the user's repeated involvement with given topics, we can predict the distribution of trend durations as well as the thresholds in popularity that lead to their emergence within social media. Detailed measurements of datasets from Twitter confirm the validity of the model and its predictions.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 1, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
1109.1865
Progressive versus Random Projections for Compressive Capture of Images, Lightfields and Higher Dimensional Visual Signals
[ "cs.CV" ]
Computational photography involves sophisticated capture methods. A new trend is to capture projections of higher dimensional visual signals such as videos, multi-spectral data and lightfields on lower dimensional sensors. Carefully designed capture methods exploit the sparsity of the underlying signal in a transformed domain to reduce the number of measurements and use an appropriate reconstruction method. Traditional progressive methods may capture successively more detail using a sequence of simple projection bases, such as DCT or wavelets, and employ straightforward backprojection for reconstruction. Randomized projection methods do not use any specific sequence and use L0 minimization for reconstruction. In this paper, we analyze the statistical properties of natural images, videos, multi-spectral data and light-fields and compare the effectiveness of progressive and random projections. We define effectiveness by plotting reconstruction SNR against compression factor. The key idea is a procedure to measure best-case effectiveness that is fast, independent of specific hardware and independent of the reconstruction procedure. We believe this is the first empirical study to compare different lossy capture strategies without the complication of hardware or reconstruction ambiguity. The scope is limited to linear non-adaptive sensing. The results show that random projections produce significant advantages over other projections only for higher dimensional signals, and suggest more research into nascent adaptive and non-linear projection methods.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1874
A Capacity Improvement Method for CDMA based Mesh Networks in SUI Multipath Fading Channels
[ "cs.IT", "math.IT" ]
Code Division Multiple Access (CDMA) is the most promising candidate for wideband data access. This is due to the advantage of a soft limit on the number of active mobile devices. Many wireless mesh systems impose an upper bound on the BER performance, which restricts the increase in the number of mobile users. Capacity is further reduced in a Multipath Fading Environment (MFE). This paper presents an effective method of improving the capacity of a CDMA-based mesh network by managing the transmitted powers of the mobile devices and using MMSE-based Multiuser Detection (MUD). The proposed scheme doubles the capacity compared to the conventional CDMA-based mesh network. Simulation results are presented to demonstrate the effectiveness of the proposed scheme.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1879
A Real-time Localization System Using RFID for Visually Impaired
[ "cs.MA", "cs.NI" ]
Assistive gadgets for the disabled, especially the blind, who have the least access to information, typically use acoustic methods that can strain the ear and infringe on the user's privacy. Although some projects embed Radio Frequency Identification (RFID) tags into sidewalks to support free walking by the blind, the tag memory design is not specified for building and road conditions. This paper suggests an allocation scheme for RFID tags referring to EPCglobal SGLN, a tactile method for conveying information, and the use of a lithium battery as the power source with solar cells as an alternative. Results have shown independent mobility, accident prevention, stress relief and satisfactory factors in terms of cost and human usability.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1895
Support Recovery of Sparse Signals in the Presence of Multiple Measurement Vectors
[ "cs.IT", "math.IT" ]
This paper studies the problem of support recovery of sparse signals based on multiple measurement vectors (MMV). The MMV support recovery problem is connected to the problem of decoding messages in a Single-Input Multiple-Output (SIMO) multiple access channel (MAC), thereby enabling an information theoretic framework for analyzing performance limits in recovering the support of sparse signals. Sharp sufficient and necessary conditions for successful support recovery are derived in terms of the number of measurements per measurement vector, the number of nonzero rows, the measurement noise level, and especially the number of measurement vectors. Through the interpretations of the results, in particular the connection to the multiple output communication system, the benefit of having MMV for sparse signal recovery is illustrated providing a theoretical foundation to the performance improvement enabled by MMV as observed in many existing simulation results. In particular, it is shown that the structure (rank) of the matrix formed by the nonzero entries plays an important role on the performance limits of support recovery.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1900
Weakly-coupled systems in quantum control
[ "math.AP", "cs.SY", "math-ph", "math.MP", "math.OC" ]
This paper provides rigorous definitions and analysis of the dynamics of weakly-coupled systems and gives sufficient conditions for an infinite dimensional quantum control system to be weakly-coupled. As an illustration we provide examples chosen among common physical systems.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1109.1913
Tolerant identification with Euclidean balls
[ "cs.IT", "cs.DM", "math.CO", "math.IT" ]
The concept of identifying codes was introduced by Karpovsky, Chakrabarty and Levitin in 1998. The identifying codes can be applied, for example, to sensor networks. In this paper, we consider as sensors the set Z^2 where one sensor can check its neighbours within Euclidean distance r. We construct tolerant identifying codes in this network that are robust against some changes in the neighbourhood monitored by each sensor. We give bounds for the smallest density of a tolerant identifying code for general values of r and Delta. We also provide infinite families of values (r,Delta) with optimal such codes and study the case of small values of r.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1922
Predicting the Energy Output of Wind Farms Based on Weather Data: Important Variables and their Correlation
[ "cs.AI" ]
Wind energy plays an increasing role in the supply of energy world-wide. The energy output of a wind farm is highly dependent on the weather condition present at the wind farm. If the output can be predicted more accurately, energy suppliers can coordinate the collaborative production of different energy sources more efficiently to avoid costly overproductions. With this paper, we take a computer science perspective on energy prediction based on weather data and analyze the important parameters as well as their correlation on the energy output. To deal with the interaction of the different parameters we use symbolic regression based on the genetic programming tool DataModeler. Our studies are carried out on publicly available weather and energy data for a wind farm in Australia. We reveal the correlation of the different variables for the energy output. The model obtained for energy prediction gives a very reliable prediction of the energy output for newly given weather data.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1949
Alternative Awaiting and Broadcast for Two-Way Relay Fading Channels
[ "cs.IT", "math.IT" ]
We investigate a two-way relay (TWR) fading channel based on store-and-forward (SF), where two source nodes wish to exchange information with the help of a relay node. A new upper bound on the ergodic sum-capacity for the TWR fading system is derived when the delay tends to infinity. We further propose two alternative awaiting and broadcast (AAB) schemes: pure partial decoding (PPD) with SF-I and combinatorial decoding (CBD) with SF-II, which approach the new upper bound at high SNR with unbounded and bounded delay, respectively. Numerical results show that the proposed AAB schemes significantly outperform the traditional physical layer network coding (PLNC) methods without delay. Compared to the traditional TWR schemes without delay, the proposed CBD with SF-II method significantly improves the maximum sum-rate with an average delay of only a few dozen seconds in the relay buffer.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1966
The path inference filter: model-based low-latency map matching of probe vehicle data
[ "cs.AI" ]
We consider the problem of reconstructing vehicle trajectories from sparse sequences of GPS points, for which the sampling interval is between 10 seconds and 2 minutes. We introduce a new class of algorithms, collectively called the path inference filter (PIF), that maps GPS data in real time, for a variety of trade-offs and scenarios, and with a high throughput. Numerous prior approaches in map-matching can be shown to be special cases of the path inference filter presented in this article. We present an efficient procedure for automatically training the filter on new data, with or without ground truth observations. The framework is evaluated on a large San Francisco taxi dataset and is shown to improve upon the current state of the art. This filter also provides insights about the driving patterns of drivers. The path inference filter has been deployed at an industrial scale inside the Mobile Millennium traffic information system, and is used to map fleets of data in San Francisco, Sacramento, Stockholm and Porto.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1989
Efficient Personalized Web Mining: Utilizing The Most Utilized Data
[ "cs.IR" ]
Given the growth of information on the web, getting the exact information a user is looking for is a very tedious process. Many search engines generate user-profile-related data listings. This paper involves one such process, where a rating is given to each link that the user clicks on. Rather than discarding the uninteresting links, both the interesting and the uninteresting links are listed, but sorted according to the weighting given to each link by the number of visits made by the particular user and the amount of time spent on the particular link.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1990
Trace Lasso: a trace norm regularization for correlated designs
[ "cs.LG", "stat.ML" ]
Using the $\ell_1$-norm to regularize the estimation of the parameter vector of a linear model leads to an unstable estimator when covariates are highly correlated. In this paper, we introduce a new penalty function which takes into account the correlation of the design matrix to stabilize the estimation. This norm, called the trace Lasso, uses the trace norm, which is a convex surrogate of the rank, of the selected covariates as the criterion of model complexity. We analyze the properties of our norm, describe an optimization algorithm based on reweighted least-squares, and illustrate the behavior of this norm on synthetic data, showing that it is more adapted to strong correlations than competing methods such as the elastic net.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.1991
Effective Personalized Web Mining by Utilizing The Most Utilized Data
[ "cs.IR" ]
Given the growth of information on the web, getting the exact information a user is looking for is a very tedious process. Many search engines generate user-profile-related data listings. This paper involves one such process, where a rating is given to each link that the user clicks on. Rather than discarding the uninteresting links, both the interesting and the uninteresting links are listed, but sorted according to the weighting given to each link by the number of visits made by the particular user and the amount of time spent on the particular link.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2034
Learning Sequence Neighbourhood Metrics
[ "cs.NE", "cs.LG" ]
Recurrent neural networks (RNNs) in combination with a pooling operator and the neighbourhood components analysis (NCA) objective function are able to detect the characterizing dynamics of sequences and embed them into a fixed-length vector space of arbitrary dimensionality. Subsequently, the resulting features are meaningful and can be used for visualization or nearest neighbour classification in linear time. This kind of metric learning for sequential data enables the use of algorithms tailored towards fixed length vector spaces such as R^n.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2044
Information-sharing and aggregation models for interacting minds
[ "physics.soc-ph", "cs.SI", "stat.AP" ]
We study mathematical models of the collaborative solving of a two-choice discrimination task. We estimate the difference between the shared performance for a group of n observers over a single person performance. Our paper is a theoretical extension of the recent work of Bahrami et al. (2010) from a dyad (a pair) to a group of n interacting minds. We analyze several models of communication, decision-making and hierarchical information-aggregation. The maximal slope of psychometric function (closely related to the percentage of right answers vs. easiness of the task) is a convenient parameter characterizing performance. For every model we investigated, the group performance turns out to be a product of two numbers: a scaling factor depending of the group size and an average performance. The scaling factor is a power function of the group size (with the exponent ranging from 0 to 1), whereas the average is arithmetic mean, quadratic mean, or maximum of the individual slopes. Moreover, voting can be almost as efficient as more elaborate communication models, given the participants have similar individual performances.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
1109.2047
Learning From Labeled And Unlabeled Data: An Empirical Study Across Techniques And Domains
[ "cs.LG" ]
There has been increased interest in devising learning techniques that combine unlabeled data with labeled data, i.e., semi-supervised learning. However, to the best of our knowledge, no study has been performed across various techniques and different types and amounts of labeled and unlabeled data. Moreover, most of the published work on semi-supervised learning techniques assumes that the labeled and unlabeled data come from the same distribution. It is possible for the labeling process to be associated with a selection bias such that the distributions of data points in the labeled and unlabeled sets are different. Not correcting for such bias can result in biased function approximation with potentially poor performance. In this paper, we present an empirical study of various semi-supervised learning techniques on a variety of datasets. We attempt to answer various questions such as the effect of independence or relevance amongst features, the effect of the size of the labeled and unlabeled sets and the effect of noise. We also investigate the impact of sample-selection bias on the semi-supervised learning techniques under study and implement a bivariate probit technique particularly designed to correct for such bias.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2048
An Expressive Language and Efficient Execution System for Software Agents
[ "cs.AI" ]
Software agents can be used to automate many of the tedious, time-consuming information processing tasks that humans currently have to complete manually. However, to do so, agent plans must be capable of representing the myriad of actions and control flows required to perform those tasks. In addition, since these tasks can require integrating multiple sources of remote information (typically a slow, I/O-bound process), it is desirable to make execution as efficient as possible. To address both of these needs, we present a flexible software agent plan language and a highly parallel execution system that enable the efficient execution of expressive agent plans. The plan language allows complex tasks to be more easily expressed by providing a variety of operators for flexibly processing the data as well as supporting subplans (for modularity) and recursion (for indeterminate looping). The executor is based on a streaming dataflow model of execution to maximize the amount of operator and data parallelism possible at runtime. We have implemented both the language and executor in a system called THESEUS. Our results from testing THESEUS show that streaming dataflow execution can yield significant speedups over both traditional serial (von Neumann) as well as non-streaming dataflow-style execution that existing software and robot agent execution systems currently support. In addition, we show how plans written in the language we present can represent certain types of subtasks that cannot be accomplished using the languages supported by network query engines. Finally, we demonstrate that the increased expressivity of our plan language does not hamper performance; specifically, we show how data can be integrated from multiple remote sources just as efficiently using our architecture as is possible with a state-of-the-art streaming-dataflow network query engine.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2049
Structure-Based Local Search Heuristics for Circuit-Level Boolean Satisfiability
[ "cs.AI" ]
This work focuses on improving state-of-the-art in stochastic local search (SLS) for solving Boolean satisfiability (SAT) instances arising from real-world industrial SAT application domains. The recently introduced SLS method CRSat has been shown to noticeably improve on previously suggested SLS techniques in solving such real-world instances by combining justification-based local search with limited Boolean constraint propagation on the non-clausal formula representation form of Boolean circuits. In this work, we study possibilities of further improving the performance of CRSat by exploiting circuit-level structural knowledge for developing new search heuristics for CRSat. To this end, we introduce and experimentally evaluate a variety of search heuristics, many of which are motivated by circuit-level heuristics originally developed in completely different contexts, e.g., for electronic design automation applications. To the best of our knowledge, most of the heuristics are novel in the context of SLS for SAT and, more generally, SLS for constraint satisfaction problems.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2088
Online Learning Algorithms for Stochastic Water-Filling
[ "cs.LG", "cs.NI", "cs.SY", "math.OC", "math.PR" ]
Water-filling is the term for the classic solution to the problem of allocating constrained power to a set of parallel channels to maximize the total data-rate. It is used widely in practice, for example, for power allocation to sub-carriers in multi-user OFDM systems such as WiMax. The classic water-filling algorithm is deterministic and requires perfect knowledge of the channel gain to noise ratios. In this paper we consider how to do power allocation over stochastically time-varying (i.i.d.) channels with unknown gain to noise ratio distributions. We adopt an online learning framework based on stochastic multi-armed bandits. We consider two variations of the problem, one in which the goal is to find a power allocation to maximize $\sum\limits_i \mathbb{E}[\log(1 + SNR_i)]$, and another in which the goal is to find a power allocation to maximize $\sum\limits_i \log(1 + \mathbb{E}[SNR_i])$. For the first problem, we propose a \emph{cognitive water-filling} algorithm that we call CWF1. We show that CWF1 obtains a regret (defined as the cumulative gap over time between the sum-rate obtained by a distribution-aware genie and this policy) that grows polynomially in the number of channels and logarithmically in time, implying that it asymptotically achieves the optimal time-averaged rate that can be obtained when the gain distributions are known. For the second problem, we present an algorithm called CWF2, which is, to our knowledge, the first algorithm in the literature on stochastic multi-armed bandits to exploit non-linear dependencies between the arms. We prove that the number of times CWF2 picks the incorrect power allocation is bounded by a function that is polynomial in the number of channels and logarithmic in time, implying that its frequency of incorrect allocation tends to zero.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1109.2127
Integrating Learning from Examples into the Search for Diagnostic Policies
[ "cs.AI" ]
This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician (i.e., tests followed by a diagnostic decision) for all possible combinations of test results. An optimal diagnostic policy is one that minimizes the expected total cost, which is the sum of measurement costs and misdiagnosis costs. In most diagnostic settings, there is a tradeoff between these two kinds of costs. This paper formalizes diagnostic decision making as a Markov Decision Process (MDP). The paper introduces a new family of systematic search algorithms based on the AO* algorithm to solve this MDP. To make AO* efficient, the paper describes an admissible heuristic that enables AO* to prune large parts of the search space. The paper also introduces several greedy algorithms including some improvements over previously-published methods. The paper then addresses the question of learning diagnostic policies from examples. When the probabilities of diseases and test results are computed from training data, there is a great danger of overfitting. To reduce overfitting, regularizers are integrated into the search algorithms. Finally, the paper compares the proposed methods on five benchmark diagnostic data sets. The studies show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods. In addition, the studies show that for training sets of realistic size, the systematic search algorithms are practical on today's desktop computers.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2128
LexRank: Graph-based Lexical Centrality as Salience in Text Summarization
[ "cs.CL" ]
We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank, ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2129
Extremal Behaviour in Multiagent Contract Negotiation
[ "cs.MA" ]
We examine properties of a model of resource allocation in which several agents exchange resources in order to optimise their individual holdings. The schemes discussed relate to well-known negotiation protocols proposed in earlier work and we consider a number of alternative notions of rationality covering both quantitative measures, e.g. cooperative and individual rationality and more qualitative forms, e.g. Pigou-Dalton transfers. While it is known that imposing particular rationality and structural restrictions may result in some reallocations of the resource set becoming unrealisable, in this paper we address the issue of the number of restricted rational deals that may be required to implement a particular reallocation when it is possible to do so. We construct examples showing that this number may be exponential (in the number of resources m), even when all of the agent utility functions are monotonic. We further show that k agents may achieve in a single deal a reallocation requiring exponentially many rational deals if at most k-1 agents can participate, this same reallocation being unrealisable by any sequences of rational deals in which at most k-2 agents are involved.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2130
Combining Knowledge- and Corpus-based Word-Sense-Disambiguation Methods
[ "cs.CL" ]
In this paper we concentrate on the resolution of the lexical ambiguity that arises when a given word has several different meanings. This specific task is commonly referred to as word sense disambiguation (WSD). The task of WSD consists of assigning the correct sense to words using an electronic dictionary as the source of word definitions. We present two WSD methods based on two main methodological approaches in this research area: a knowledge-based method and a corpus-based method. Our hypothesis is that word-sense disambiguation requires several knowledge sources in order to solve the semantic ambiguity of the words. These sources can be of different kinds---for example, syntagmatic, paradigmatic or statistical information. Our approach combines various sources of knowledge, through combinations of the two WSD methods mentioned above. Mainly, the paper concentrates on how to combine these methods and sources of information in order to achieve good results in the disambiguation. Finally, this paper presents a comprehensive study and experimental work on evaluation of the methods and their combinations.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2131
On the Practical use of Variable Elimination in Constraint Optimization Problems: 'Still-life' as a Case Study
[ "cs.AI" ]
Variable elimination is a general technique for constraint processing. It is often discarded because of its high space complexity. However, it can be extremely useful when combined with other techniques. In this paper we study the applicability of variable elimination to the challenging problem of finding still-lifes. We illustrate several alternatives: variable elimination as a stand-alone algorithm, interleaved with search, and as a source of good quality lower bounds. We show that these techniques are the best known option both theoretically and empirically. In our experiments we have been able to solve the n=20 instance, which is far beyond reach with alternative approaches.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2132
Hybrid BDI-POMDP Framework for Multiagent Teaming
[ "cs.MA" ]
Many current large-scale multiagent team implementations can be characterized as following the belief-desire-intention (BDI) paradigm, with explicit representation of team plans. Despite their promise, current BDI team approaches lack tools for quantitative performance analysis under uncertainty. Distributed partially observable Markov decision problems (POMDPs) are well suited for such analysis, but the complexity of finding optimal policies in such models is highly intractable. The key contribution of this article is a hybrid BDI-POMDP approach, where BDI team plans are exploited to improve POMDP tractability and POMDP analysis improves BDI team plan performance. Concretely, we focus on role allocation, a fundamental problem in BDI teams: which agents to allocate to the different roles in the team. The article provides three key contributions. First, we describe a role allocation technique that takes into account future uncertainties in the domain; prior work in multiagent role allocation has failed to address such uncertainties. To that end, we introduce RMTDP (Role-based Markov Team Decision Problem), a new distributed POMDP model for analysis of role allocations. Our technique gains in tractability by significantly curtailing RMTDP policy search; in particular, BDI team plans provide incomplete RMTDP policies, and the RMTDP policy search fills the gaps in such incomplete policies by searching for the best role allocation. Our second key contribution is a novel decomposition technique to further improve RMTDP policy search efficiency. Even though limited to searching role allocations, there are still combinatorially many role allocations, and evaluating each in RMTDP to identify the best is extremely difficult. Our decomposition technique exploits the structure in the BDI team plans to significantly prune the search space of role allocations. Our third key contribution is a significantly faster policy evaluation algorithm suited for our BDI-POMDP hybrid approach. Finally, we also present experimental results from two domains: mission rehearsal simulation and RoboCupRescue disaster rescue simulation.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2134
Generalizing Boolean Satisfiability II: Theory
[ "cs.AI" ]
This is the second of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper presents the theoretical basis for the ideas underlying ZAP, arguing that existing ideas in this area exploit a single, recurring structure in that multiple database axioms can be obtained by operating on a single axiom using a subgroup of the group of permutations on the literals in the problem. We argue that the group structure precisely captures the general structure at which earlier approaches hinted, and give numerous examples of its use. We go on to extend the Davis-Putnam-Logemann-Loveland inference procedure to this broader setting, and show that earlier computational improvements are either subsumed or left intact by the new method. The third paper in this series discusses ZAP's implementation and presents experimental performance results.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2135
A Framework for Sequential Planning in Multi-Agent Settings
[ "cs.AI", "cs.MA" ]
This paper extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state space. Agents maintain beliefs over physical states of the environment and over models of other agents, and they use Bayesian updates to maintain their beliefs over time. The solutions map belief states to actions. Models of other agents may include their belief states and are related to agent types considered in games of incomplete information. We express the agents' autonomy by postulating that their models are not directly manipulable or observable by other agents. We show that important properties of POMDPs, such as convergence of value iteration, the rate of convergence, and piece-wise linearity and convexity of the value functions carry over to our framework. Our approach complements a more traditional approach to interactive settings which uses Nash equilibria as a solution paradigm. We seek to avoid some of the drawbacks of equilibria which may be non-unique and do not capture off-equilibrium behaviors. We do so at the cost of having to represent, process and continuously revise models of other agents. Since the agents' beliefs may be arbitrarily nested, the optimal solutions to decision making problems are only asymptotically computable. However, approximate belief updates and approximately optimal plans are computable. We illustrate our framework using a simple application domain, and we show examples of belief updates and value functions.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2136
Learning Content Selection Rules for Generating Object Descriptions in Dialogue
[ "cs.CL" ]
A fundamental requirement of any task-oriented dialogue system is the ability to generate object descriptions that refer to objects in the task domain. The subproblem of content selection for object descriptions in task-oriented dialogue has been the focus of much previous work and a large number of models have been proposed. In this paper, we use the annotated COCONUT corpus of task-oriented design dialogues to develop feature sets based on Dale and Reiter's (1995) incremental model, Brennan and Clark's (1996) conceptual pact model, and Jordan's (2000b) intentional influences model, and use these feature sets in a machine learning experiment to automatically learn a model of content selection for object descriptions. Since Dale and Reiter's model requires a representation of discourse structure, the corpus annotations are used to derive a representation based on Grosz and Sidner's (1986) theory of the intentional structure of discourse, as well as two very simple representations of discourse structure based purely on recency. We then apply the rule-induction program RIPPER to train and test the content selection component of an object description generator on a set of 393 object descriptions from the corpus. To our knowledge, this is the first reported experiment of a trainable content selection component for object description generation in dialogue. Three separate content selection models, based on the three theoretical models, all independently achieve accuracies significantly above the majority class baseline (17%) on unseen test data, with the intentional influences model (42.4%) performing significantly better than either the incremental model (30.4%) or the conceptual pact model (28.9%). But the best performing models combine all the feature sets, achieving accuracies near 60%. Surprisingly, a simple recency-based representation of discourse structure does as well as one based on intentional structure. To our knowledge, this is also the first empirical comparison of a representation of Grosz and Sidner's model of discourse structure with a simpler model for any generation task.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2137
Relational Dynamic Bayesian Networks
[ "cs.AI" ]
Stochastic processes that involve the creation of objects and relations over time are widespread, but relatively poorly studied. For example, accurate fault diagnosis in factory assembly processes requires inferring the probabilities of erroneous assembly operations, but doing this efficiently and accurately is difficult. Modeled as dynamic Bayesian networks, these processes have discrete variables with very large domains and extremely high dimensionality. In this paper, we introduce relational dynamic Bayesian networks (RDBNs), which are an extension of dynamic Bayesian networks (DBNs) to first-order logic. RDBNs are a generalization of dynamic probabilistic relational models (DPRMs), which we had proposed in our previous work to model dynamic uncertain domains. We first extend the Rao-Blackwellised particle filtering described in our earlier work to RDBNs. Next, we lift the assumptions associated with Rao-Blackwellization in RDBNs and propose two new forms of particle filtering. The first one uses abstraction hierarchies over the predicates to smooth the particle filter's estimates. The second employs kernel density estimation with a kernel function specifically designed for relational domains. Experiments show these two methods greatly outperform standard particle filtering on the task of assembly plan execution monitoring.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2138
Reasoning about Action: An Argumentation - Theoretic Approach
[ "cs.AI" ]
We present a uniform non-monotonic solution to the problems of reasoning about action on the basis of an argumentation-theoretic approach. Our theory is provably correct relative to a sensible minimisation policy introduced on top of a temporal propositional logic. Sophisticated problem domains can be formalised in our framework. Since much attention of researchers in the field has been paid to the traditional and basic problems in reasoning about actions, such as the frame, the qualification and the ramification problems, approaches to these problems within our formalisation lie at the heart of the expositions presented in this paper.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2139
Solving Set Constraint Satisfaction Problems using ROBDDs
[ "cs.AI" ]
In this paper we present a new approach to modeling finite set domain constraint problems using Reduced Ordered Binary Decision Diagrams (ROBDDs). We show that it is possible to construct an efficient set domain propagator which compactly represents many set domains and set constraints using ROBDDs. We demonstrate that the ROBDD-based approach provides unprecedented flexibility in modeling constraint satisfaction problems, leading to performance improvements. We also show that the ROBDD-based modeling approach can be extended to the modeling of integer and multiset constraint problems in a straightforward manner. Since domain propagation is not always practical, we also show how to incorporate less strict consistency notions into the ROBDD framework, such as set bounds, cardinality bounds and lexicographic bounds consistency. Finally, we present experimental results that demonstrate the ROBDD-based solver performs better than various more conventional constraint solvers on several standard set constraint problems.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2140
Learning Concept Hierarchies from Text Corpora using Formal Concept Analysis
[ "cs.AI" ]
We present a novel approach to the automatic acquisition of taxonomies or concept hierarchies from a text corpus. The approach is based on Formal Concept Analysis (FCA), a method mainly used for the analysis of data, i.e. for investigating and processing explicitly given information. We follow Harris' distributional hypothesis and model the context of a certain term as a vector representing syntactic dependencies which are automatically acquired from the text corpus with a linguistic parser. On the basis of this context information, FCA produces a lattice that we convert into a special kind of partial order constituting a concept hierarchy. The approach is evaluated by comparing the resulting concept hierarchies with hand-crafted taxonomies for two domains: tourism and finance. We also directly compare our approach with hierarchical agglomerative clustering as well as with Bi-Section-KMeans as an instance of a divisive clustering algorithm. Furthermore, we investigate the impact of using different measures weighting the contribution of each attribute as well as of applying a particular smoothing technique to cope with data sparseness.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2141
Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
[ "cs.LG" ]
The paper studies machine learning problems where each example is described using a set of Boolean features and where hypotheses are represented by linear threshold elements. One method of increasing the expressiveness of learned hypotheses in this context is to expand the feature set to include conjunctions of basic features. This can be done explicitly or where possible by using a kernel function. Focusing on the well known Perceptron and Winnow algorithms, the paper demonstrates a tradeoff between the computational efficiency with which the algorithm can be run over the expanded feature space and the generalization ability of the corresponding learning algorithm. We first describe several kernel functions which capture either limited forms of conjunctions or all conjunctions. We show that these kernels can be used to efficiently run the Perceptron algorithm over a feature space of exponentially many conjunctions; however we also show that using such kernels, the Perceptron algorithm can provably make an exponential number of mistakes even when learning simple functions. We then consider the question of whether kernel functions can analogously be used to run the multiplicative-update Winnow algorithm over an expanded feature space of exponentially many conjunctions. Known upper bounds imply that the Winnow algorithm can learn Disjunctive Normal Form (DNF) formulae with a polynomial mistake bound in this setting. However, we prove that it is computationally hard to simulate Winnow's behavior for learning DNF over such a feature set. This implies that the kernel functions which correspond to running Winnow for this problem are not efficiently computable, and that there is no general construction that can run Winnow with kernels.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2142
Generalizing Boolean Satisfiability III: Implementation
[ "cs.AI" ]
This is the third of three papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal has been to define a representation in which this structure is apparent and can be exploited to improve computational performance. The first paper surveyed existing work that (knowingly or not) exploited problem structure to improve the performance of satisfiability engines, and the second paper showed that this structure could be understood in terms of groups of permutations acting on individual clauses in any particular Boolean theory. We conclude the series by discussing the techniques needed to implement our ideas, and by reporting on their performance on a variety of problem instances.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2143
Ignorability in Statistical and Probabilistic Inference
[ "cs.AI" ]
When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed by maintaining this proper distinction are often prohibitive, one asks for conditions under which it can be safely ignored. Such conditions are given by the missing at random (mar) and coarsened at random (car) assumptions. In this paper we provide an in-depth analysis of several questions relating to mar/car assumptions. The main purpose of our study is to provide criteria by which one may evaluate whether a car assumption is reasonable for a particular data collecting or observational process. This question is complicated by the fact that several distinct versions of mar/car assumptions exist. We therefore first provide an overview over these different versions, in which we highlight the distinction between distributional and coarsening variable induced versions. We show that distributional versions are less restrictive and sufficient for most applications. We then address from two different perspectives the question of when the mar/car assumption is warranted. First we provide a static analysis that characterizes the admissibility of the car assumption in terms of the support structure of the joint probability distribution of complete data and incomplete observations. Here we obtain an equivalence characterization that improves and extends a recent result by Grunwald and Halpern. We then turn to a procedural analysis that characterizes the admissibility of the car assumption in terms of procedural models for the actual data (or observation) generating process. The main result of this analysis is that the stronger coarsened completely at random (ccar) condition is arguably the most reasonable assumption, as it alone corresponds to data coarsening procedures that satisfy a natural robustness property.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2145
Perseus: Randomized Point-based Value Iteration for POMDPs
[ "cs.AI" ]
Partially observable Markov decision processes (POMDPs) form an attractive and principled framework for agent planning under uncertainty. Point-based approximate techniques for POMDPs compute a policy based on a finite set of points collected in advance from the agent's belief space. We present a randomized point-based value iteration algorithm called Perseus. The algorithm performs approximate value backup stages, ensuring that in each backup stage the value of each point in the belief set is improved; the key observation is that a single backup may improve the value of many belief points. Contrary to other point-based methods, Perseus backs up only a (randomly selected) subset of points in the belief set, sufficient for improving the value of each belief point in the set. We show how the same idea can be extended to dealing with continuous action spaces. Experimental results show the potential of Perseus in large scale POMDP problems.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2146
CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features
[ "cs.NE" ]
In this paper we propose a crossover operator for real-coded evolutionary algorithms that is based on the statistical theory of population distributions. The operator is based on the theoretical distribution of the values of the genes of the best individuals in the population. The proposed operator takes into account the localization and dispersion features of the best individuals of the population with the objective that these features would be inherited by the offspring. Our aim is the optimization of the balance between exploration and exploitation in the search process. In order to test the efficiency and robustness of this crossover, we have used a set of functions to be optimized with regard to different criteria, such as multimodality, separability, regularity and epistasis. With this set of functions we can extract conclusions as a function of the problem at hand. We analyze the results using ANOVA and multiple comparison statistical tests. As an example of how our crossover can be used to solve artificial intelligence problems, we have applied the proposed model to the problem of obtaining the weight of each network in an ensemble of neural networks. The results obtained exceed the performance of standard methods.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2147
Risk-Sensitive Reinforcement Learning Applied to Control under Constraints
[ "cs.LG" ]
In this paper, we consider Markov Decision Processes (MDPs) with error states. Error states are states that are undesirable or dangerous to enter. We define the risk with respect to a policy as the probability of entering such a state when the policy is pursued. We consider the problem of finding good policies whose risk is smaller than some user-specified threshold, and formalize it as a constrained MDP with two criteria. The first criterion corresponds to the value function originally given. We will show that the risk can be formulated as a second criterion function based on a cumulative return, whose definition is independent of the original value function. We present a model-free, heuristic reinforcement learning algorithm that aims at finding good deterministic policies. It is based on weighting the original value function and the risk. The weight parameter is adapted in order to find a feasible solution for the constrained problem that has a good performance with respect to the value function. The algorithm was successfully applied to the control of a feed tank with stochastic inflows that lies upstream of a distillation column. This control task was originally formulated as an optimal control problem with chance constraints, and it was solved under certain assumptions on the model to obtain an optimal solution. The power of our learning algorithm is that it can be used even when some of these restrictive assumptions are relaxed.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2148
Logical Hidden Markov Models
[ "cs.AI" ]
Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2153
mGPT: A Probabilistic Planner Based on Heuristic Search
[ "cs.AI" ]
We describe the version of the GPT planner used in the probabilistic track of the 4th International Planning Competition (IPC-4). This version, called mGPT, solves Markov Decision Processes specified in the PPDDL language by extracting and using different classes of lower bounds along with various heuristic-search algorithms. The lower bounds are extracted from deterministic relaxations where the alternative probabilistic effects of an action are mapped into different, independent, deterministic actions. The heuristic-search algorithms use these lower bounds for focusing the updates and delivering a consistent value function over all states reachable from the initial state and the greedy policy.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2154
Macro-FF: Improving AI Planning with Automatically Learned Macro-Operators
[ "cs.AI" ]
Despite recent progress in AI planning, many benchmarks remain challenging for current planners. In many domains, the performance of a planner can be greatly improved by discovering and exploiting information about the domain structure that is not explicitly encoded in the initial PDDL formulation. In this paper we present and compare two automated methods that learn relevant information from previous experience in a domain and use it to solve new problem instances. Our methods share a common four-step strategy. First, a domain is analyzed and structural information is extracted. Second, macro-operators are generated based on the previously discovered structure. Third, a filtering and ranking procedure selects the most useful macro-operators. Finally, the selected macros are used to speed up future searches. We have successfully used such an approach in the fourth international planning competition IPC-4. Our system, Macro-FF, extends Hoffmann's state-of-the-art planner FF 2.3 with support for two kinds of macro-operators, and with engineering enhancements. We demonstrate the effectiveness of our ideas on benchmarks from international planning competitions. Our results indicate a large reduction in search effort in those complex domains where structural information can be inferred.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1109.2155
Optiplan: Unifying IP-based and Graph-based Planning
[ "cs.AI" ]
The Optiplan planning system is the first integer programming-based planner that successfully participated in the international planning competition. This engineering note describes the architecture of Optiplan and provides the integer programming formulation that enabled it to perform reasonably well in the competition. We also touch upon some recent developments that make integer programming encodings significantly more competitive.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }