Columns: id, title, categories, abstract
1401.4605
Consistency Techniques for Flow-Based Projection-Safe Global Cost Functions in Weighted Constraint Satisfaction
cs.AI cs.DS
Many combinatorial problems deal with preferences and violations, where the goal is to find solutions with minimum cost. Weighted constraint satisfaction is a framework for modeling such problems; it consists of a set of cost functions that measure the degree of violation or preference of different combinations of variable assignments. Typical solution methods for weighted constraint satisfaction problems (WCSPs) are based on branch-and-bound search, made practical through the use of powerful consistency techniques such as AC*, FDAC*, and EDAC* to deduce hidden cost information and perform value pruning during search. These techniques, however, are designed to be efficient only on binary and ternary cost functions represented in table form. Tackling many real-life problems requires high-arity (or global) cost functions. We investigate efficient representation schemes and algorithms to bring the benefits of these consistency techniques also to high-arity cost functions, which are often derived from hard global constraints in classical constraint satisfaction. The literature suggests that some global cost functions can be represented as flow networks, and the minimum cost flow algorithm can be used to compute the minimum costs of such networks in polynomial time. We show that naive adoption of this flow-based algorithmic method for global cost functions can result in a stronger form of null-inverse consistency. We further show how the method can be modified to handle cost projections and extensions in order to maintain generalized versions of AC* and FDAC* for cost functions with more than two variables. A similar generalization for the stronger EDAC* is less straightforward. We reveal an oscillation problem when enforcing EDAC* on cost functions sharing more than one variable. To avoid oscillation, we propose a weak version of EDAC* and generalize it to weak EDGAC* for non-binary cost functions. 
Using various benchmarks involving the soft variants of hard global constraints ALLDIFFERENT, GCC, SAME, and REGULAR, empirical results demonstrate that our proposal gives improvements of up to an order of magnitude when compared with the traditional constraint optimization approach, both in terms of time and pruning.
1401.4606
Drake: An Efficient Executive for Temporal Plans with Choice
cs.AI
This work presents Drake, a dynamic executive for temporal plans with choice. Dynamic plan execution strategies allow an autonomous agent to react quickly to unfolding events, improving the robustness of the agent. Prior work developed methods for dynamically dispatching Simple Temporal Networks, and further research enriched the expressiveness of the plans executives could handle, including discrete choices, which are the focus of this work. However, in some approaches to date, these additional choices induce significant storage or latency requirements to make flexible execution possible. Drake is designed to leverage the low latency made possible by a preprocessing step called compilation, while avoiding high memory costs through a compact representation. We leverage the concepts of labels and environments, taken from prior work in Assumption-based Truth Maintenance Systems (ATMS), to concisely record the implications of the discrete choices, exploiting the structure of the plan to avoid redundant reasoning or storage. Our labeling and maintenance scheme, called the Labeled Value Set Maintenance System, is distinguished by its focus on properties fundamental to temporal problems and, more generally, weighted graph algorithms. In particular, the maintenance system focuses on maintaining a minimal representation of non-dominated constraints. We benchmark Drake's performance on random structured problems, and find that Drake reduces the size of the compiled representation by a factor of over 500 for large problems, while incurring only a modest increase in run-time latency, compared to prior work in compiled executives for temporal plans with discrete choices.
1401.4607
Reformulating the Situation Calculus and the Event Calculus in the General Theory of Stable Models and in Answer Set Programming
cs.AI cs.LO
Circumscription and logic programs under the stable model semantics are two well-known nonmonotonic formalisms. The former has served as a basis of classical logic based action formalisms, such as the situation calculus, the event calculus and temporal action logics; the latter has served as a basis of a family of action languages, such as language A and several of its descendants. Based on the discovery that circumscription and the stable model semantics coincide on a class of canonical formulas, we reformulate the situation calculus and the event calculus in the general theory of stable models. We also present a translation that turns the reformulations further into answer set programs, so that efficient answer set solvers can be applied to compute the situation calculus and the event calculus.
1401.4609
Computing All-Pairs Shortest Paths by Leveraging Low Treewidth
cs.DS cs.AI
We present two new and efficient algorithms for computing all-pairs shortest paths. The algorithms operate on directed graphs with real (possibly negative) weights. They make use of directed path consistency along a vertex ordering d. Both algorithms run in O(n^2 w_d) time, where w_d is the graph width induced by this vertex ordering. For graphs of constant treewidth, this yields O(n^2) time, which is optimal. On chordal graphs, the algorithms run in O(nm) time. In addition, we present a variant that exploits graph separators to arrive at a run time of O(n w_d^2 + n^2 s_d) on general graphs, where s_d <= w_d is the size of the largest minimal separator induced by the vertex ordering d. We show empirically that on both constructed and realistic benchmarks, in many cases the algorithms outperform the Floyd-Warshall algorithm as well as Johnson's algorithm, which represent the current state of the art with run times of O(n^3) and O(nm + n^2 log n), respectively. Our algorithms can be used for spatial and temporal reasoning, such as for the Simple Temporal Problem, which underlines their relevance to the planning and scheduling community.
1401.4612
Modelling Observation Correlations for Active Exploration and Robust Object Detection
cs.RO cs.CV
Today, mobile robots are expected to carry out increasingly complex tasks in multifarious, real-world environments. Often, the tasks require a certain semantic understanding of the workspace. Consider, for example, spoken instructions from a human collaborator referring to objects of interest; the robot must be able to accurately detect these objects to correctly understand the instructions. However, existing object detection, while competent, is not perfect. In particular, the performance of detection algorithms is commonly sensitive to the position of the sensor relative to the objects in the scene. This paper presents an online planning algorithm which learns an explicit model of the spatial dependence of object detection and generates plans which maximize the expected performance of the detection, and by extension the overall plan performance. Crucially, the learned sensor model incorporates spatial correlations between measurements, capturing the fact that successive measurements taken at the same or nearby locations are not independent. We show how this sensor model can be incorporated into an efficient forward search algorithm in the information space of detected objects, allowing the robot to generate motion plans efficiently. We investigate the performance of our approach by addressing the tasks of door and text detection in indoor environments and demonstrate significant improvement in detection performance during task execution over alternative methods in simulated and real robot experiments.
1401.4613
Local Consistency and SAT-Solvers
cs.AI cs.LO
Local consistency techniques such as k-consistency are a key component of specialised solvers for constraint satisfaction problems. In this paper we show that the power of using k-consistency techniques on a constraint satisfaction problem is precisely captured by using a particular inference rule, which we call negative-hyper-resolution, on the standard direct encoding of the problem into Boolean clauses. We also show that current clause-learning SAT-solvers will discover in expected polynomial time any inconsistency that can be deduced from a given set of clauses using negative-hyper-resolvents of a fixed size. We combine these two results to show that, without being explicitly designed to do so, current clause-learning SAT-solvers efficiently simulate k-consistency techniques, for all fixed values of k. We then give some experimental results to show that this feature allows clause-learning SAT-solvers to efficiently solve certain families of constraint problems which are challenging for conventional constraint-programming solvers.
1401.4633
Efficient Codes for Adversarial Wiretap Channels
cs.IT math.IT
In [13] we proposed a ({\rho}_r , {\rho}_w )-adversarial wiretap channel model (AWTP) in which the adversary can adaptively choose to see a fraction {\rho}_r of the codeword sent over the channel, and modify a fraction {\rho}_w of the codeword by adding arbitrary noise values to it. In this paper we give the first efficient construction of a capacity-achieving code family that provides perfect secrecy for this channel.
1401.4634
The Capacity of String-Replication Systems
cs.IT cs.CL math.IT
It is known that the majority of the human genome consists of repeated sequences. Furthermore, it is believed that a significant part of the rest of the genome also originated from repeated sequences and has mutated to its current form. In this paper, we investigate the possibility of constructing an exponentially large number of sequences from a short initial sequence and simple replication rules, including those resembling genomic replication processes. In other words, our goal is to find out the capacity, or the expressive power, of these string-replication systems. Our results include exact capacities, and bounds on the capacities, of four fundamental string-replication systems.
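The capacity question can be made concrete with a toy sketch (an illustrative system of our own choosing, not necessarily one of the paper's four): count the distinct strings of each length reachable from a seed under tandem duplication of a single symbol, x a y -> x a a y; the capacity is then the growth rate of log2 of these counts divided by the length.

```python
def tandem_duplication_counts(seed, max_len):
    """Count distinct strings of each length reachable from `seed`
    by repeatedly duplicating one symbol in place (x a y -> x a a y).
    An illustrative string-replication system for exploring capacity."""
    current = {seed}
    counts = {len(seed): 1}
    for n in range(len(seed) + 1, max_len + 1):
        nxt = set()
        for s in current:
            for i, ch in enumerate(s):
                # duplicate the symbol at position i
                nxt.add(s[:i] + ch + s[i:])
        current = nxt
        counts[n] = len(current)
    return counts

# From the seed "ab" only strings of the form a^i b^j are reachable,
# so the count at length n is n - 1 and the capacity is zero; richer
# rules or alphabets are needed for exponential (positive-capacity) growth.
```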
1401.4642
On the Capacity of Memoryless Adversary
cs.IT math.IT
In this paper, we study a model of communication under adversarial noise. In this model, the adversary makes online decisions on whether to corrupt a transmitted bit based only on the value of that bit. Like the usual binary symmetric channel of information theory or the fully adversarial channel of combinatorial coding theory, the adversary can, with high probability, introduce at most a given fraction of errors. It is shown that the capacity (maximum rate of reliable information transfer) of such a memoryless adversary is strictly below that of the binary symmetric channel. We give a new upper bound on the capacity of such a channel -- the tightness of this upper bound remains an open question. The main component of our proof is a careful examination of the error-correcting properties of a code with a skewed distance distribution.
1401.4644
Time series modeling and large scale global solar radiation forecasting from geostationary satellites data
cs.CE physics.comp-ph stat.AP
When a territory is poorly instrumented, geostationary satellite data can be useful for predicting global solar radiation. In this paper, we use geostationary satellite data to generate 2-D time series of solar radiation for the next hour. The results presented in this paper relate to a particular territory, the island of Corsica, but as the data used are available for the entire surface of the globe, our method can easily be applied to any other location. The 2-D hourly time series are extracted from the HelioClim-3 surface solar irradiation database processed by the Heliosat-2 model. Each point of the map has been used as training data and input for artificial neural networks (ANN) and as input for two persistence models (scaled or not). Comparisons between these models and clear-sky estimations were carried out to evaluate their performance. We found a normalized root mean square error (nRMSE) close to 16.5% for the two best predictors (scaled persistence and ANN), equivalent to 35-45% relative to ground measurements. Finally, in order to validate our 2-D prediction maps, we introduce a new error metric called the gamma index, a criterion used in medical physics for comparing data from two matrices. As a first result, we found that in winter and spring scaled persistence gives the best results (gamma index test passing rates of 67.7% and 86%, respectively), in autumn simple persistence is the best predictor (95.3%), and ANN is the best in summer (99.8%).
1401.4648
Visual Tracking using Particle Swarm Optimization
cs.CV
The problem of robustly extracting visual odometry from a sequence of images obtained by an eye-in-hand camera configuration is addressed. A novel approach to planar template-based tracking is proposed which performs a non-linear image alignment for successful retrieval of camera transformations. In order to obtain the global optimum, a bio-inspired metaheuristic is used to optimize the similarity among the planar regions. The proposed method is validated on image sequences with real as well as synthetic transformations and found to be resilient to intensity variations. A comparative analysis of various similarity measures as well as various state-of-the-art methods reveals that the algorithm succeeds in tracking the planar regions robustly and has good potential for use in real applications.
1401.4650
A Gray Code for cross-bifix-free sets
cs.IT cs.DM math.CO math.IT
A cross-bifix-free set of words is a set in which no prefix of any length of any word is the suffix of any other word in the set. A construction of cross-bifix-free sets that is within a constant factor of optimality was recently proposed by Chee {\it et al.} in 2013. We propose a \emph{trace partitioned} Gray code for these cross-bifix-free sets and a CAT algorithm generating it.
1401.4657
Power Control Factor Selection in Uplink OFDMA Cellular Networks
cs.IT math.IT
Uplink power control plays a key role in the performance of uplink cellular networks. In this work, the power control factor ($\in[0,1]$) is evaluated with respect to three parameters, namely the average transmit power, the coverage probability, and the average rate. In other words, we choose the power control factor such that the average transmit power is low, the coverage probability of cell-edge users is high, and the average rate over all uplink users is high. We show through numerical studies that the power control factor should be close to $0.5$ in order to achieve an acceptable trade-off between these three parameters.
1401.4660
On the Resilience of an Ant-based System in Fuzzy Environments. An Empirical Study
cs.NE
The current work describes an empirical study conducted in order to investigate the behavior of an optimization method in a fuzzy environment. MAX-MIN Ant System, an efficient implementation of a heuristic method, is used for solving an optimization problem derived from the Traveling Salesman Problem (TSP). Several publicly available symmetric TSP instances and their fuzzy variants are tested in order to extract some general features. The input data were adapted by introducing a two-dimensional systematic degree of fuzziness, proportional to the number of nodes (the dimension of the instance) and to the distances between nodes (the scale of the instance). The results show that the proposed method can handle data uncertainty, showing good resilience and adaptability.
1401.4662
Optimal Thresholds for Coverage and Rate in FFR Schemes for Planned Cellular Networks
cs.IT math.IT
Fractional frequency reuse (FFR) is an inter-cell interference coordination scheme that is being actively researched for emerging wireless cellular networks. In this work, we consider hexagonal tessellation based planned FFR deployments, and derive expressions for the coverage probability and normalized average rate for the downlink. In particular, given reuse $\frac{1}{3}$ (FR$3$) and reuse $1$ (FR$1$) regions, and a Signal-to-Interference-plus-Noise Ratio (SINR) threshold $S_{th}$ which decides the user assignment to either the FR$1$ or FR$3$ regions, we theoretically show that: $(i)$ The optimal choice of $S_{th}$ which maximizes the coverage probability is $S_{th} = T$, where $T$ is the required target SINR (for ensuring coverage), and $(ii)$ The optimal choice of $S_{th}$ which maximizes the normalized average rate is given by the expression $S_{th}=\max(T, T')$, where $T'$ is a function of the path loss exponent and the fade parameters. For the optimal choice of $S_{th}$, we show that FFR gives a higher rate than FR$1$ and a better coverage probability than FR$3$. The impact of frequency correlation over the sub-bands allocated to the FR$1$ and FR$3$ regions is analysed, and it is shown that correlation decreases the average rate of the FFR network. Numerical results are provided, and these match the analytical results.
1401.4663
Impact of Correlation between Nakagami-m Interferers on Coverage Probability and Rate in Cellular Systems
cs.IT math.IT
Coverage probability and rate expressions are theoretically compared for the following cases: $(i)$ Both the user channel and the $N$ interferers are independent and non-identical Nakagami-m distributed random variables (RVs). $(ii)$ The $N$ interferers are correlated Nakagami-m RVs. It is analytically shown that the coverage probability in the presence of correlated interferers is greater than or equal to the coverage probability in the presence of non-identical independent interferers when the shape parameter of the channel between the user and its base station is not greater than one. It is further analytically shown that the average rate in the presence of correlated interferers is greater than or equal to the average rate in the presence of non-identical independent interferers. Simulation results are provided and these match the obtained theoretical results. The utility of our results is also discussed.
1401.4672
Parallel versus Sequential Update and the Evolution of Cooperation with the Assistance of Emotional Strategies
physics.soc-ph cs.SI
Our study contributes to the debate on the evolution of cooperation in the single-shot Prisoner's Dilemma (PD) played on networks. We construct a model in which individuals are connected with positive and negative ties. Some agents play sign-dependent strategies that use the sign of the relation as a shorthand for determining appropriate action toward the opponent. In the context of our model in which network topology, agent strategic types and relational signs coevolve, the presence of sign-dependent strategies catalyzes the evolution of cooperation. We highlight how the success of cooperation depends on a crucial aspect of implementation: whether we apply parallel or sequential strategy update. Parallel updating, with averaging of payoffs across interactions in the social neighborhood, supports cooperation in a much wider set of parameter values than sequential updating. Our results cast doubts about the realism and generalizability of models that claim to explain the evolution of cooperation but implicitly assume parallel updating.
1401.4674
Evolving Accuracy: A Genetic Algorithm to Improve Election Night Forecasts
cs.NE
In this paper, we apply genetic algorithms to the field of electoral studies. Forecasting election results is one of the most exciting and demanding tasks in the area of market research, especially due to the fact that decisions have to be made within seconds on live television. We show that the proposed method outperforms currently applied approaches and thereby provide an argument to further tighten the intersection between computer science and social science, especially political science. We scrutinize our algorithm's runtime behavior to evaluate its applicability in the field. Numerical results with real data from a 2010 local election in the Austrian province of Styria substantiate the applicability of the proposed approach.
1401.4676
Universal hierarchical behavior of citation networks
physics.soc-ph cs.DL cs.SI physics.data-an
Many of the essential features of the evolution of scientific research are imprinted in the structure of citation networks. Connections in these networks carry information about the transfer of knowledge among papers; in other words, edges describe the impact of papers on other publications. This inherent meaning of the edges suggests that citation networks can exhibit hierarchical features, as is typical of networks based on decision-making. In this paper, we investigate the hierarchical structure of citation networks consisting of papers in the same field. We find that the majority of the networks follow a universal trend towards a highly hierarchical state, and that the various fields differ only in i) their phase in life (distance from the "birth" of a field) or ii) the characteristic time according to which they approach the stationary state. We also show by a simple argument that the differences in behavior are related to, and can be understood through, the degree of specialization corresponding to the fields. Our results suggest that during the accumulation of knowledge in a given field, some papers gradually become relatively more influential than most of the other papers.
1401.4680
Multiple Hybrid Phase Transition: Bootstrap Percolation on Complex Networks with Communities
physics.soc-ph cs.SI
Bootstrap percolation is a well-known model for studying the spreading of rumors, new products, or innovations on social networks. Empirical studies show that community structure is ubiquitous in various social networks. Thus, studying bootstrap percolation on complex networks with communities can bring new and important insights into spreading dynamics on social networks, and it has attracted much attention from scientists recently. In this letter, we study bootstrap percolation on Erd\H{o}s-R\'{e}nyi networks with communities and observe second order, hybrid (both second and first order), and multiple hybrid phase transitions, which are rare in natural systems. Moreover, we solve this system analytically and obtain the phase diagram, which is well supported by the corresponding simulations.
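The basic dynamics can be sketched in a few lines. The following is a minimal single-block (plain Erdős–Rényi, no community structure) simulation under assumed conventions: a node activates once at least k of its neighbours are active; the paper's setting couples several such blocks into communities.

```python
import random

def bootstrap_percolation(n, p_edge, k, p_seed, seed=0):
    """Simulate k-threshold bootstrap percolation on an Erdos-Renyi
    graph G(n, p_edge): seed nodes start active, and an inactive node
    activates once at least k of its neighbours are active.
    Returns the final fraction of active nodes."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                adj[i].append(j)
                adj[j].append(i)
    active = [rng.random() < p_seed for _ in range(n)]
    frontier = [i for i in range(n) if active[i]]
    active_neighbours = [0] * n
    while frontier:
        new = []
        for u in frontier:
            for v in adj[u]:
                if not active[v]:
                    active_neighbours[v] += 1
                    if active_neighbours[v] >= k:
                        active[v] = True
                        new.append(v)
        frontier = new
    return sum(active) / n
```

The process is monotone in k: on the same graph and seed set, a higher threshold can only shrink the final active set, which is the lever behind the discontinuous (first order) transitions.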
1401.4691
An algorithm for calculating steady state probabilities of $M|E_r|c|K$ queueing systems
cs.SY cs.PF
This paper presents a method for calculating the steady state probabilities of $M|E_r|c|K$ queueing systems. The infinitesimal generator matrix is used to define all possible states in the system and their transition rates. While this matrix can be written down immediately for many other $M|PH|c|K$ queueing systems with phase-type service times (e.g. Coxian, hypoexponential, \ldots), it requires a more careful analysis for systems with Erlangian service times. The constructed matrix may then be used to calculate steady state probabilities using an iterative algorithm. The resulting steady state probabilities can be used to calculate various performance measures, e.g. the average queue length. Additionally, computational issues of the implementation are discussed, and an example involving call-center queue lengths in telecommunications is outlined to substantiate the applicability of these efforts. In the appendix, tables of the average queue length for a given number of service channels, traffic density, and system size are presented.
1401.4696
Evolutionary Optimization for Decision Making under Uncertainty
cs.NE
Optimizing decision problems under uncertainty can be done using a variety of solution methods. Soft computing and heuristic approaches tend to be powerful for solving such problems. In this overview article, we survey Evolutionary Optimization techniques to solve Stochastic Programming problems - both for the single-stage and multi-stage case.
1401.4709
Adaptive Power Allocation Strategies using DSTC in Cooperative MIMO Networks
cs.IT math.IT
Adaptive Power Allocation (PA) algorithms with different criteria for a cooperative Multiple-Input Multiple-Output (MIMO) network equipped with Distributed Space-Time Coding (DSTC) are proposed and evaluated. Joint constrained optimization algorithms to determine the power allocation parameters, the channel parameters and the receive filter are proposed for each transmitted stream in each link. Linear receive filter and maximum-likelihood (ML) detection are considered with Amplify-and-Forward (AF) and Decode-and-Forward (DF) cooperation strategies. In the proposed algorithms, the elements in the PA matrices are optimized at the destination node and then transmitted back to the relay nodes via a feedback channel. The effects of the feedback errors are considered. Linear MMSE expressions and the PA matrices depend on each other and are updated iteratively. Stochastic gradient (SG) algorithms are developed with reduced computational complexity. Simulation results show that the proposed algorithms obtain significant performance gains as compared to existing power allocation schemes.
1401.4714
Revolutionary Algorithms
cs.NE
The optimization of dynamic problems is both widespread and difficult. When conducting dynamic optimization, a balance between reinitialization and computational expense has to be found. There are multiple approaches to this. In parallel genetic algorithms, multiple sub-populations concurrently try to optimize a potentially dynamic problem. But as the number of sub-populations increases, their efficiency decreases. Cultural algorithms provide a framework that has the potential to make optimizations more efficient. But they adapt slowly to changing environments. We thus suggest a confluence of these approaches: revolutionary algorithms. These algorithms seek to extend the evolutionary and cultural aspects of the former to approaches with a notion of the political. By modeling how belief systems are changed by means of revolution, these algorithms provide a framework to model and optimize dynamic problems in an efficient fashion.
1401.4715
Construction of Partial MDS (PMDS) and Sector-Disk (SD) Codes with Two Global Parity Symbols
cs.IT math.IT
Partial MDS (PMDS) codes are erasure codes combining local (row) correction with global additional correction of entries, while Sector-Disk (SD) codes are erasure codes that address the mixed failure mode of current RAID systems. It has been an open problem to construct general codes that have the PMDS and the SD properties, and previous work has relied on Monte-Carlo searches. In this paper, we present a general construction that addresses the case of any number of failed disks and in addition, two erased sectors. The construction requires a modest field size. This result generalizes previous constructions extending RAID~5 and RAID~6.
1401.4725
Information profiles for DNA pattern discovery
q-bio.GN cs.IT math.IT
Finite-context modeling is a powerful tool for compressing and hence for representing DNA sequences. We describe an algorithm to detect genomic regularities within a blind discovery strategy. The algorithm uses information profiles built using suitable combinations of finite-context models. We used the genome of the fission yeast Schizosaccharomyces pombe strain 972 h- for illustration, unveiling locations of low information content, which are usually associated with DNA regions of potential biological interest.
1401.4734
Optimal Fractional Repetition Codes based on Graphs and Designs
cs.IT cs.DM math.IT
Fractional repetition (FR) codes are a family of codes for distributed storage systems (DSS) that allow for uncoded exact repairs with the minimum repair bandwidth. However, in contrast to minimum bandwidth regenerating (MBR) codes, where a random set of a certain size of available nodes is used for a node repair, the repairs with FR codes are table based. This usually allows more data to be stored compared to MBR codes. In this work, we consider bounds on the fractional repetition capacity, which is the maximum amount of data that can be stored using an FR code. Optimal FR codes which attain these bounds are presented. The constructions of these FR codes are based on combinatorial designs and on families of regular and biregular graphs. These constructions of FR codes for given parameters raise some interesting questions in graph theory. These questions and some of their solutions are discussed in this paper. In addition, based on a connection between FR codes and batch codes, we propose a new family of codes for DSS, namely fractional repetition batch codes, which have the properties of batch codes and FR codes simultaneously. These are the first codes for DSS which allow for uncoded efficient exact repairs and load balancing which can be performed by several users in parallel. Other concepts related to FR codes are also discussed.
1401.4740
Generalization of the PageRank Model
cs.SI cs.IR
This paper develops a generalization of the PageRank model of page centralities in the global webgraph of hyperlinks. The webgraph of adjacencies is generalized to a valued directed graph, and the scalar dampening coefficient for walks through the graph is relaxed to allow for heterogeneous values. A visitation count approach may be employed to apply the more general model, based on the number of visits to a page and the page's proportionate allocations of these visits to other nodes of the webgraph.
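A minimal sketch of the heterogeneous-damping idea (our illustration of the abstract's description, not the paper's own formulation): power iteration on a valued directed graph where each page i keeps its own damping value d_i, with the non-damped mass redistributed uniformly.

```python
import numpy as np

def generalized_pagerank(W, d, tol=1e-12, max_iter=1000):
    """PageRank-like centrality on a valued directed graph W
    (W[i, j] = weight of the link i -> j) with a per-node damping
    vector d in place of the usual scalar coefficient.
    Assumes every node has at least one outgoing link."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)  # row-stochastic transitions
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        # node i forwards a fraction d[i] of its score along its
        # out-links; the remainder is spread uniformly over all pages
        r_new = (d * r) @ P + np.sum((1.0 - d) * r) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r
```

With a constant vector d = 0.85 and a binary adjacency matrix W this reduces to classical scalar-damping PageRank; heterogeneous entries of d let some pages retain or pass on more of their visitation mass than others.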
1401.4750
How Much Frequency Can Be Reused in 5G Cellular Networks---A Matrix Graph Model
cs.IT math.IT
The 5th Generation cellular network may have the key features of smaller cell size and denser resource employment, resulting from diminishing resources and increasing communication demands. However, small cells may result in high interference between cells. Moreover, the random geographic patterns of small cell networks make them hard to analyze, at least outside the schemes of the well-accepted hexagonal grid model. In this paper, a new model, the matrix graph, is proposed, which takes advantage of the small cell size and high inter-cell interference to reduce computation complexity. This model can simulate real-world networks accurately and offers convenience in frequency allocation problems, which are usually NP-complete. An algorithm dealing with this model is also given, which asymptotically achieves the theoretical limit of frequency allocation and has a complexity which decreases with cell size and grows linearly with the network size. This new model is specifically proposed to characterize next-generation cellular networks.
1401.4753
Multi-Branch Tomlinson-Harashima Precoding for MU-MIMO Systems: Theory and Algorithms
cs.IT math.IT
Tomlinson-Harashima precoding (THP) is a nonlinear processing technique employed at the transmit side and is a dual to the successive interference cancelation (SIC) detection at the receive side. Like SIC detection, the performance of THP strongly depends on the ordering of the precoded symbols. The optimal ordering algorithm, however, is impractical for multiuser MIMO (MU-MIMO) systems with multiple receive antennas due to the fact that the users are geographically distributed. In this paper, we propose a multi-branch THP (MB-THP) scheme and algorithms that employ multiple transmit processing and ordering strategies along with a selection scheme to mitigate interference in MU-MIMO systems. Two types of multi-branch THP (MB-THP) structures are proposed. The first one employs a decentralized strategy with diagonal weighted filters at the receivers of the users and the second uses a diagonal weighted filter at the transmitter. The MB-MMSE-THP algorithms are also derived based on an extended system model with the aid of an LQ decomposition, which is much simpler compared to the conventional MMSE-THP algorithms. Simulation results show that a better bit error rate (BER) performance can be achieved by the proposed MB-MMSE-THP precoder with a small computational complexity increase.
1401.4786
Common Information based Markov Perfect Equilibria for Linear-Gaussian Games with Asymmetric Information
cs.SY cs.GT math.OC
We consider a class of two-player dynamic stochastic nonzero-sum games where the state transition and observation equations are linear, and the primitive random variables are Gaussian. Each controller acquires possibly different dynamic information about the state process and the other controller's past actions and observations. This leads to a dynamic game of asymmetric information among the controllers. Building on our earlier work on finite games with asymmetric information, we devise an algorithm to compute a Nash equilibrium by using the common information among the controllers. We call such equilibria common information based Markov perfect equilibria of the game, which can be viewed as a refinement of Nash equilibrium in games with asymmetric information. If the players' cost functions are quadratic, then we show that under certain conditions a unique common information based Markov perfect equilibrium exists. Furthermore, this equilibrium can be computed by solving a sequence of linear equations. We also show through an example that there could be other Nash equilibria in a game of asymmetric information, not corresponding to common information based Markov perfect equilibria.
1401.4788
Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means
cs.CV cs.IT math.IT
Bayesian classification labels observations based on given prior information, namely class a priori and class-conditional probabilities. Bayes' risk is the minimum expected classification cost, achieved by the Bayes' test, the optimal decision rule. When no cost is incurred for correct classification and a unit cost is charged for misclassification, the Bayes' test reduces to the maximum a posteriori decision rule, and the Bayes' risk simplifies to the Bayes' error, the probability of error. Since calculating this probability of error is often intractable, several techniques have been devised to bound it with closed-form formulas, thereby introducing measures of similarity and divergence between distributions such as the Bhattacharyya coefficient and its associated Bhattacharyya distance. The Bhattacharyya upper bound can be further tightened using the Chernoff information, which relies on the notion of the best error exponent. In this paper, we first express Bayes' risk using the total variation distance on scaled distributions. We then elucidate and extend the Bhattacharyya and Chernoff upper bound mechanisms using generalized weighted means. As a byproduct, we obtain novel notions of statistical divergences and affinity coefficients. We illustrate our technique by deriving new upper bounds for the univariate Cauchy and the multivariate $t$-distributions, and show experimentally that those bounds are not too distant from the computationally intractable Bayes' error.
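For univariate Gaussians the Bhattacharyya bound discussed above has a closed form; the sketch below implements the standard textbook formula (not the paper's generalized-mean extension):

```python
import math

def bhattacharyya_distance(mu1, s1, mu2, s2):
    """Closed-form Bhattacharyya distance between two univariate
    Gaussians N(mu1, s1^2) and N(mu2, s2^2)."""
    v1, v2 = s1 * s1, s2 * s2
    return (0.25 * (mu1 - mu2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * s1 * s2)))

def bayes_error_upper_bound(mu1, s1, mu2, s2, p1=0.5):
    """Bhattacharyya upper bound on the Bayes error:
    P_e <= sqrt(p1 * p2) * exp(-D_B)."""
    p2 = 1.0 - p1
    return math.sqrt(p1 * p2) * math.exp(
        -bhattacharyya_distance(mu1, s1, mu2, s2))
```

When the two class distributions coincide, the distance is zero and the equal-prior bound degenerates to its maximum value of 0.5, as expected.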
1401.4799
LDPC Codes for Partial-Erasure Channels in Multi-Level Memories
cs.IT math.IT
In this paper, we develop a new channel model, which we name the $q$-ary partial erasure channel (QPEC). The QPEC has a $q$-ary input, and its output is either one symbol or a set of $M$ possible values. This channel mimics situations where current/voltage levels in measurement channels are only partially known, due to high read rates or imperfect current/voltage sensing. Our investigation concentrates on the performance of low-density parity-check (LDPC) codes over this channel, owing to their low decoding complexity under iterative-decoding algorithms. We give the density evolution equations for this channel and develop its decoding-threshold analysis. Part of the analysis shows that finding the exact decoding threshold efficiently hinges on a solution to an open problem in additive combinatorics; for this part we give bounds and approximations.
1401.4831
Approximate Capacities of Two-Dimensional Codes by Spatial Mixing
cs.IT math.IT
We apply several state-of-the-art techniques from recent advances in counting algorithms and statistical physics to study the spatial mixing property of two-dimensional codes arising from local hard (independent set) constraints, including: hard-square, hard-hexagon, read/write isolated memory (RWIM), and non-attacking kings (NAK). For these constraints, strong spatial mixing would imply the existence of a polynomial-time approximation scheme (PTAS) for computing the capacity. The existence of strong spatial mixing and a PTAS was previously known for the hard-square constraint. We show the existence of strong spatial mixing for the hard-hexagon and RWIM constraints by establishing strong spatial mixing along self-avoiding walks, and consequently we give a PTAS for computing the capacities of these codes. We also show that for the NAK constraint, strong spatial mixing does not hold along self-avoiding walks.
1401.4834
On Low-Complexity Full-diversity Detection In Multi-User MIMO Multiple-Access Channels
cs.IT math.IT
Multiple-input multiple-output (MIMO) techniques are becoming commonplace in recent wireless communication standards. This added dimension (i.e., space) can be efficiently used to mitigate the interference in the multi-user MIMO context. In this paper, we focus on the uplink of a MIMO multiple access channel (MAC) where perfect channel state information (CSI) is only available at the destination. We provide a new set of sufficient conditions for a wide range of space-time block codes (STBC)s to achieve full-diversity under \emph{partial interference cancellation group decoding} (PICGD) with or without successive interference cancellation (SIC) for completely blind users. Explicit interference cancellation (IC) schemes for two and three users are then provided and shown to satisfy the derived full-diversity criteria. Besides the complexity reduction due to the fact that the proposed IC schemes enable separate decoding of distinct users without sacrificing the diversity gain, further reduction of the decoding complexity may be obtained. In fact, thanks to the structure of the proposed schemes, the real and imaginary parts of each user's symbols may be decoupled without any loss of performance. Finally, our theoretical claims are corroborated by simulation results and the new IC scheme for two-user MIMO MAC is shown to outperform the recently proposed two-user IC scheme especially for high spectral efficiency while requiring significantly less decoding complexity.
1401.4840
Termination of oblivious chase is undecidable
cs.DB
We show that all-instances termination of the chase is undecidable. More precisely, there is no algorithm deciding, for a given set $\cal T$ of Tuple Generating Dependencies (a.k.a. a Datalog$^\exists$ program), whether the $\cal T$-chase on $D$ will terminate for every finite database instance $D$. Our method applies to the Oblivious Chase and the Semi-Oblivious Chase and -- after a slight modification -- also to the Standard Chase. This means that we give a (negative) solution to the all-instances termination problem for all versions of the chase that are usually considered. The arity we need for our undecidability proof is three. We also show that the problem is EXPSPACE-hard for binary signatures, but decidability for this case is left open. Both proofs -- for ternary and binary signatures -- are easy, once you know them.
1401.4848
An Evolutionary Approach towards Clustering Airborne Laser Scanning Data
cs.NE
In land surveying, the generation of maps was greatly simplified with the introduction of orthophotos and, at a later stage, airborne LiDAR laser scanning systems. While the original purpose of LiDAR systems was to determine the altitude of ground elevations, newer full-wave systems provide additional information that can be used to classify the type of ground cover and generate maps. The resulting LiDAR point clouds are huge, multidimensional data sets that need to be grouped into classes of ground cover. We propose a genetic algorithm that aids in classifying these data sets and thus makes them usable for map generation. Key features are tailor-made genetic operators and fitness functions for this task. The algorithm is compared to traditional k-means clustering.
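As an illustration of the kind of evolutionary clustering described above -- using toy 1-D data, one-point crossover, and label-flip mutation rather than the paper's tailor-made operators -- a minimal genetic algorithm might look like:

```python
import random

def wcss(points, labels, k):
    """Within-cluster sum of squares, the fitness to minimise."""
    total = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if members:
            mu = sum(members) / len(members)
            total += sum((p - mu) ** 2 for p in members)
    return total

def ga_cluster(points, k, pop_size=30, generations=60, seed=0):
    """Evolve cluster-label chromosomes; elitist survival of the best
    half, one-point crossover, occasional single-label mutation."""
    rng = random.Random(seed)
    n = len(points)
    pop = [[rng.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda lab: wcss(points, lab, k))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:      # mutation: reassign one point
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda lab: wcss(points, lab, k))
```

On two well-separated 1-D clusters this reliably recovers the same partition that k-means would, which is the comparison the abstract describes.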
1401.4849
On the influence of the seed graph in the preferential attachment model
math.PR cs.DM cs.SI math.ST stat.TH
We study the influence of the seed graph in the preferential attachment model, focusing on the case of trees. We first show that the seed has no effect from a weak local limit point of view. On the other hand, we conjecture that different seeds lead to different distributions of limiting trees from a total variation point of view. We take a first step in proving this conjecture by showing that seeds with different degree profiles lead to different limiting distributions for the (appropriately normalized) maximum degree, implying that such seeds lead to different (in total variation) limiting trees.
1401.4857
A Genetic Algorithm to Optimize a Tweet for Retweetability
cs.NE cs.CY cs.SI physics.soc-ph
Twitter is a popular microblogging platform. When users send out messages, other users have the ability to forward these messages to their own subgraph. Most research focuses on increasing retweetability from a node's perspective. Here, we center on improving message style to increase the chance of a message being forwarded. To this end, we simulate an artificial Twitter-like network with nodes deciding deterministically on retweeting a message or not. A genetic algorithm is used to optimize message composition, so that the reach of a message is increased. When analyzing the algorithm's runtime behavior across a set of different node types, we find that the algorithm consistently succeeds in significantly improving the retweetability of a message.
1401.4869
Does Syntactic Knowledge help English-Hindi SMT?
cs.CL cs.AI
In this paper we explore various parameter settings of a state-of-the-art Statistical Machine Translation system to improve the quality of translation for a `distant' language pair like English-Hindi. We propose new techniques for efficient reordering. A slight improvement over the baseline is reported using these techniques. We also show that a simple pre-processing step can improve the quality of the translation significantly.
1401.4872
Classification of IDS Alerts with Data Mining Techniques
cs.CR cs.DB cs.LG
A data mining technique to reduce the amount of false alerts within an IDS system is proposed. The new technique achieves an accuracy of 99% compared to 97% by the current systems.
1401.4907
Impact of Transceiver Power Consumption on the Energy Efficiency of Zero-Forcing Detector in Massive MIMO Systems
cs.IT math.IT
We consider the impact of transceiver power consumption on the energy efficiency (EE) of the zero-forcing (ZF) detector in the uplink of massive MIMO systems, where a base station (BS) with $M$ antennas communicates coherently with $K$ single-antenna user terminals (UTs). We consider the problem of maximizing the EE with respect to $(M,K)$ for a fixed sum spectral efficiency. Through analysis we study the impact of system parameters on the optimal EE. The system parameters consist of the average channel gain to the users and the power consumption parameters (PCPs) (e.g., the power consumed by each RF antenna/receiver at the BS). When the average user channel gain is high or the BS/UT design is power inefficient, our analysis reveals that it is optimal to have a few BS antennas and a single user, i.e., the non-massive MIMO regime. Similarly, when the channel gain is small or the BS/UT design is power efficient, it is optimal to have a larger $(M,K)$, i.e., the massive MIMO regime. Tight analytical bounds on the optimal EE are proposed for both these regimes. The impact of the system parameters on the optimal EE is studied and several interesting insights are drawn.
1401.4912
An Importance Sampling Scheme on Dual Factor Graphs. I. Models in a Strong External Field
stat.CO cond-mat.stat-mech cs.IT math.IT
We propose an importance sampling scheme to estimate the partition function of the two-dimensional ferromagnetic Ising model and the two-dimensional ferromagnetic $q$-state Potts model, both in the presence of an external magnetic field. The proposed scheme operates in the dual Forney factor graph and is capable of efficiently computing an estimate of the partition function under a wide range of model parameters. In particular, we consider models that are in a strong external magnetic field.
1401.4935
Performance Analysis of a Network of Event-based Systems
cs.SY cs.NI
We consider a scenario where multiple event-based systems use a wireless network to communicate with their respective controllers. These systems use a contention resolution mechanism (CRM) to arbitrate access to the network. We present a Markov model for the network interactions between the event-based systems. Using this model, we obtain an analytical expression for the reliability, or the probability of successfully transmitting a packet, in this network. There are two important aspects to our model. Firstly, our model captures the joint interactions of the event-triggering policy and the CRM. This is required because event-triggering policies typically adapt to the CRM outcome. Secondly, the model is obtained by decoupling interactions between the different systems in the network, drawing inspiration from Bianchi's analysis of IEEE 802.11. This is required because the network interactions introduce a correlation between the system variables. We present Monte-Carlo simulations that validate our model under various network configurations, and verify our performance analysis as well.
1401.4936
Low-Complexity Robust Data-Adaptive Dimensionality Reduction Based on Joint Iterative Optimization of Parameters
cs.IT math.IT
This paper presents a low-complexity robust data-dependent dimensionality reduction based on a modified joint iterative optimization (MJIO) algorithm for reduced-rank beamforming and steering vector estimation. The proposed robust optimization procedure jointly adjusts the parameters of a rank-reduction matrix and an adaptive beamformer. The optimized rank-reduction matrix projects the received signal vector onto a subspace with lower dimension. The beamformer/steering vector optimization is then performed in a reduced-dimension subspace. We devise efficient stochastic gradient and recursive least-squares algorithms for implementing the proposed robust MJIO design. The proposed robust MJIO beamforming algorithms result in a faster convergence speed and an improved performance. Simulation results show that the proposed MJIO algorithms outperform some existing full-rank and reduced-rank algorithms with a comparable complexity.
1401.4942
Info-computational constructivism in modelling of life as cognition
cs.AI
This paper addresses the open question -- which levels of abstraction are appropriate in the synthetic modelling of life and cognition? -- within the framework of info-computational constructivism, treating natural phenomena as computational processes on informational structures. At present we lack a common understanding of the processes of life and cognition in living organisms, including the details of the co-construction of informational structures and computational processes in embodied, embedded cognizing agents, both living and artifactual. Starting with the definition of an agent as an entity capable of acting on its own behalf, as an actor in Hewitt's Actor model of computation, even systems as simple as molecules can be modelled as actors exchanging messages (information). We adopt Kauffman's view of a living agent as something that can reproduce and undergoes at least one thermodynamic work cycle. This definition of living agents leads to Maturana and Varela's identification of life with cognition. Within the info-computational constructive approach to living beings as cognizing agents, from the simplest to the most complex living systems, mechanisms of cognition can be studied in order to construct synthetic model classes of artifactual cognizing agents at different levels of organization.
1401.4944
Iterative pre-distortion of the non-linear satellite channel
cs.IT math.IT
Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) is the current European standard for satellite broadcast and broadband communications. It relies on high-order modulations up to 32-amplitude/phase-shift-keying (APSK) in order to increase the system spectral efficiency. Unfortunately, as the modulation order increases, the receiver becomes more sensitive to physical layer impairments, and notably to the distortions induced by the power amplifier and the channelizing filters aboard the satellite. Pre-distortion of the non-linear satellite channel has been studied for many years. However, the performance of existing pre-distortion algorithms generally becomes poor when high-order modulations are used on a non-linear channel with a long memory. In this paper, we investigate a new iterative method that pre-distorts blocks of transmitted symbols so as to minimize the Euclidean distance between the transmitted and received symbols. We also propose approximations to relax the pre-distorter complexity while keeping its performance acceptable.
1401.4952
Packing circles within circular containers: a new heuristic algorithm for the balance constraints case
cs.CG cs.CE
In this work we propose a heuristic algorithm for the layout optimization of disks installed in a rotating circular container. This is an unequal circle packing problem with additional balance constraints. It has been proved NP-hard, which justifies heuristic methods for its resolution on larger instances. The main feature of our heuristic is the selection of the next circle to be placed inside the container according to the position of the system's center of mass. Our approach has been tested on a series of instances with up to 55 circles and compared with the literature. Computational results show good performance in terms of solution quality and computational time for the proposed algorithm.
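The center-of-mass selection rule at the heart of the heuristic can be sketched as follows; the discrete candidate positions and the omission of overlap and container checks are illustrative simplifications, not part of the paper's actual algorithm:

```python
import math

def center_of_mass(placed):
    """placed: list of (x, y, r) disks; mass proportional to r**2
    for uniform-density disks."""
    m = sum(r * r for _, _, r in placed)
    if m == 0:
        return 0.0, 0.0
    return (sum(x * r * r for x, _, r in placed) / m,
            sum(y * r * r for _, y, r in placed) / m)

def place_next(placed, remaining, candidates):
    """Greedy balance rule: among the remaining radii and candidate
    (x, y) positions, pick the placement that brings the system's
    center of mass closest to the container centre (0, 0)."""
    best, best_norm = None, float("inf")
    for r in remaining:
        for (x, y) in candidates:
            cx, cy = center_of_mass(placed + [(x, y, r)])
            norm = math.hypot(cx, cy)
            if norm < best_norm:
                best_norm, best = norm, (x, y, r)
    return best
```

With one unit disk already sitting at (1, 0), the rule naturally places the next equal disk diametrically opposite, restoring balance.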
1401.4994
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
cs.RO cs.CL
In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both of recent and of future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.
1401.5004
Stability Analysis and Design of a Network of Event-based Systems
cs.SY
We consider a network of event-based systems that use a shared wireless medium to communicate with their respective controllers. These systems use a contention resolution mechanism to arbitrate access to the shared network. We identify sufficient conditions for Lyapunov mean square stability of each control system in the network, and design event-based policies that guarantee it. Our stability analysis is based on a Markov model that removes the network-induced correlation between the states of the control systems in the network. Analyzing the stability of this Markov model remains a challenge, as the event-triggering policy renders the estimation error non-Gaussian. Hence, we identify an auxiliary system that furnishes an upper bound for the variance of the system states. Using the stability analysis, we design policies, such as the constant-probability policy, for adapting the event-triggering thresholds to the delay in accessing the network. Realistic wireless networked control examples illustrate the applicability of the presented approach.
1401.5027
Tie Strength Distribution in Scientific Collaboration Networks
physics.soc-ph cs.SI
Science is increasingly dominated by teams. Understanding patterns of scientific collaboration and their impacts on the productivity and evolution of disciplines is crucial to understanding scientific processes. Electronic bibliographies offer a unique opportunity to map and investigate the nature of scientific collaboration. Recent work has demonstrated a counter-intuitive organizational pattern of scientific collaboration networks: densely interconnected local clusters consist of weak ties, whereas strong ties play the role of connecting different clusters. This pattern contrasts with many other types of networks, where strong ties form communities while weak ties connect different communities. Although there are many models of collaboration networks, none reproduces this pattern. In this paper, we present an evolution model of collaboration networks that reproduces many properties of real-world collaboration networks, including the organization of tie strengths, skewed degree and weight distributions, high clustering, and assortative mixing.
1401.5031
A Scalable Conditional Independence Test for Nonlinear, Non-Gaussian Data
cs.AI stat.ME
Many relations of scientific interest are nonlinear, and even in linear systems distributions are often non-Gaussian, for example in fMRI BOLD data. A class of search procedures for causal relations in high-dimensional data relies on sample-derived conditional independence decisions. The most common applications rely on Gaussian tests that can be systematically erroneous in nonlinear, non-Gaussian cases. Recent work (Gretton et al. (2009), Tillman et al. (2009), Zhang et al. (2011)) has proposed conditional independence tests using Reproducing Kernel Hilbert Spaces (RKHS). Among these, perhaps the most efficient has been KCI (Kernel Conditional Independence, Zhang et al. (2011)), with computational requirements that grow effectively at least as $O(N^3)$, placing it out of range of large-sample-size analysis and restricting its applicability to high-dimensional data sets. We propose a class of $O(N^2)$ tests using conditional correlation independence (CCI) that require a few seconds on a standard workstation for tests that take tens of minutes to hours with the KCI method, depending on the degree of parallelization, with similar accuracy. For accuracy on difficult nonlinear, non-Gaussian data sets, we also compare against a recent test due to Harris & Drton (2012), applicable to nonlinear, non-Gaussian distributions in the Gaussian copula, as well as against partial correlation, a linear Gaussian test.
1401.5037
Achieving SK Capacity in the Source Model: When Must All Terminals Talk?
cs.IT math.IT
In this paper, we address the problem of characterizing the instances of the multiterminal source model of Csisz\'ar and Narayan in which communication from all terminals is needed for establishing a secret key of maximum rate. We give an information-theoretic sufficient condition for identifying such instances. We believe that our sufficient condition is in fact an exact characterization, but we are only able to prove this in the case of the three-terminal source model. We also give a relatively simple criterion for determining whether or not our condition holds for a given multiterminal source model.
1401.5039
Experimental Design for Human-in-the-Loop Driving Simulations
cs.SY cs.HC
This report describes a new experimental setup for human-in-the-loop simulations. A force feedback simulator with four axis motion has been setup for real-time driving experiments. The simulator will move to simulate the forces a driver feels while driving, which allows for a realistic experience for the driver. This setup allows for flexibility and control for the researcher in a realistic simulation environment. Experiments concerning driver distraction can also be carried out safely in this test bed, in addition to multi-agent experiments. All necessary code to run the simulator, the additional sensors, and the basic processing is available for use.
1401.5051
WaterFowl, a Compact, Self-indexed RDF Store with Inference-enabled Dictionaries
cs.DB
In this paper, we present a novel approach -- called WaterFowl -- for the storage of RDF triples that addresses some key issues in the contexts of big data and the Semantic Web. The architecture of our prototype, largely based on the use of succinct data structures, enables the representation of triples in a self-indexed, compact manner without requiring decompression at query answering time. Moreover, it is adapted to efficiently support the RDF and RDFS entailment regimes thanks to an optimized encoding of ontology concepts and properties that does not require a complete inference materialization or extensive query rewriting algorithms. This approach requires distinguishing between the terminological and assertional components of the knowledge base early in the data preparation process, i.e., preprocessing the data before storing it in our structures. The paper describes the complete architecture of this system and presents some preliminary results obtained from evaluations conducted on our first prototype.
1401.5054
An\'alisis e implementaci\'on de algoritmos evolutivos para la optimizaci\'on de simulaciones en ingenier\'ia civil. (draft)
cs.NE cs.AI
This paper studies the applicability of evolutionary algorithms, in particular the evolution strategies family, to estimating a degradation parameter in the shear design of reinforced concrete members. This problem is computationally demanding and highly relevant in structural engineering, and is solved here for the first time using genetic algorithms. This is a draft; the authors appreciate corrections, comments and suggestions on this work.
1401.5092
Symmetric Two-User Gaussian Interference Channel with Common Messages
cs.IT math.IT
We consider symmetric two-user Gaussian interference channel with common messages. We derive an upper bound on the sum capacity, and show that the upper bound is tight in the low interference regime, where the optimal transmission scheme is to send no common messages and each receiver treats interference as noise. Our result shows that although the availability of common messages provides a cooperation opportunity for transmitters, in the low interference regime the presence of common messages does not help increase the sum capacity.
1401.5093
Localization and centrality in networks
cs.SI cond-mat.stat-mech physics.soc-ph
Eigenvector centrality is a common measure of the importance of nodes in a network. Here we show that under common conditions the eigenvector centrality displays a localization transition that causes most of the weight of the centrality to concentrate on a small number of nodes in the network. In this regime the measure is no longer useful for distinguishing among the remaining nodes and its efficacy as a network metric is impaired. As a remedy, we propose an alternative centrality measure based on the nonbacktracking matrix, which gives results closely similar to the standard eigenvector centrality in dense networks where the latter is well behaved, but avoids localization and gives useful results in regimes where the standard centrality fails.
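The standard eigenvector centrality that the abstract contrasts with the nonbacktracking variant can be computed by power iteration; a minimal pure-Python sketch (the unit shift $A + I$ is our addition -- it leaves the leading eigenvector unchanged while preventing oscillation on bipartite graphs):

```python
def eigenvector_centrality(adj, iters=200):
    """Power iteration for the leading eigenvector of a graph given
    as {node: set_of_neighbours}.  Iterates with A + I (same leading
    eigenvector as A, eigenvalues shifted by one) and normalises by
    the maximum entry each step."""
    nodes = sorted(adj)
    x = {v: 1.0 for v in nodes}
    for _ in range(iters):
        y = {v: x[v] + sum(x[u] for u in adj[v]) for v in nodes}
        norm = max(y.values())
        x = {v: y[v] / norm for v in nodes}
    return x
```

On a star graph the weight concentrates on the hub, a small-scale picture of the localization effect the abstract describes: the centre scores 1 while every leaf scores only $1/\sqrt{5}$.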
1401.5098
Study of Efficient Technique Based On 2D Tsallis Entropy For Image Thresholding
cs.CV
Thresholding is an important task in image processing. It is a main tool in pattern recognition, image segmentation, edge detection and scene analysis. In this paper, we present a new thresholding technique based on two-dimensional Tsallis entropy. The two-dimensional Tsallis entropy is obtained from the two-dimensional histogram, which is determined by the gray value of the pixels and the local average gray value of the pixels, applying a generalized entropy formalism that represents a recent development in statistical mechanics. The effectiveness of the proposed method is demonstrated on real-world and synthetic images. A performance evaluation of the proposed technique in terms of the quality of the thresholded images is presented. Experimental results demonstrate that the proposed method achieves better results than the Shannon-entropy-based method.
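For intuition, the one-dimensional ancestor of the method -- Tsallis-entropy thresholding on an ordinary gray-level histogram, combining the class entropies with the pseudo-additivity rule $S_{AB} = S_A + S_B + (1-q) S_A S_B$ -- can be sketched as follows; the paper's method extends this to the two-dimensional histogram:

```python
def tsallis_threshold(hist, q=0.8):
    """Pick the gray level t maximising the Tsallis pseudo-additive
    entropy of the background (levels < t) and foreground (>= t)
    class distributions."""
    total = float(sum(hist))
    probs = [h / total for h in hist]
    best_t, best_val = 0, float("-inf")
    for t in range(1, len(hist)):
        pa = sum(probs[:t])
        pb = 1.0 - pa
        if pa <= 0.0 or pb <= 0.0:
            continue
        sa = (1.0 - sum((p / pa) ** q for p in probs[:t] if p > 0)) / (q - 1.0)
        sb = (1.0 - sum((p / pb) ** q for p in probs[t:] if p > 0)) / (q - 1.0)
        val = sa + sb + (1.0 - q) * sa * sb
        if val > best_val:
            best_val, best_t = val, t
    return best_t
```

On a clearly bimodal histogram the maximiser falls in the empty gap between the two modes, which is exactly the behaviour a thresholding criterion should have.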
1401.5108
An Identification System Using Eye Detection Based On Wavelets And Neural Networks
cs.CV
The randomness and uniqueness of human eye patterns are a major breakthrough in the search for quicker, easier and highly reliable forms of automatic human identification. They are used extensively in security solutions, including access control to physical facilities, security systems and information databases, suspect tracking, surveillance and intrusion detection, and by various intelligence agencies throughout the world. We use the uniqueness of the human eye to identify people and establish its validity as a biometric. Eye detection involves first extracting the eye from a digital face image, and then encoding the unique patterns of the eye in such a way that they can be compared with pre-registered eye patterns. The eye detection system consists of an automatic segmentation system based on the wavelet transform; wavelet analysis is then used as a pre-processor for a back-propagation neural network with conjugate gradient learning. The inputs to the neural network are the wavelet maxima neighborhood coefficients of face images at a particular scale. The output of the neural network is the classification of the input into an eye or non-eye region. An accuracy of 90% is observed for identifying test images under the different conditions included in the training stage.
1401.5124
Channels with cost constraints: strong converse and dispersion
cs.IT math.IT
This paper shows the strong converse and the dispersion of memoryless channels with cost constraints and performs refined analysis of the third order term in the asymptotic expansion of the maximum achievable channel coding rate, showing that it is equal to $\frac 1 2 \frac {\log n}{n}$ in most cases of interest. The analysis is based on a non-asymptotic converse bound expressed in terms of the distribution of a random variable termed the $\mathsf b$-tilted information density, which plays a role similar to that of the $\mathsf d$-tilted information in lossy source coding. We also analyze the fundamental limits of lossy joint-source-channel coding over channels with cost constraints.
1401.5125
Nonasymptotic noisy lossy source coding
cs.IT math.IT
This paper shows new general nonasymptotic achievability and converse bounds and performs their dispersion analysis for the lossy compression problem in which the compressor observes the source through a noisy channel. While this problem is asymptotically equivalent to a noiseless lossy source coding problem with a modified distortion function, nonasymptotically there is a noticeable gap in how fast their minimum achievable coding rates approach the common rate-distortion function, as evidenced both by the refined asymptotic analysis (dispersion) and the numerical results. The size of the gap between the dispersions of the noisy problem and the asymptotically equivalent noiseless problem depends on the stochastic variability of the channel through which the compressor observes the source.
1401.5136
A Unifying Framework for Typical Multi-Task Multiple Kernel Learning Problems
cs.LG
Over the past few years, Multi-Kernel Learning (MKL) has received significant attention among data-driven feature selection techniques in the context of kernel-based learning. MKL formulations have been devised and solved for a broad spectrum of machine learning problems, including Multi-Task Learning (MTL). Solving different MKL formulations usually involves designing algorithms that are tailored to the problem at hand, which is, typically, a non-trivial accomplishment. In this paper we present a general Multi-Task Multi-Kernel Learning (Multi-Task MKL) framework that subsumes well-known Multi-Task MKL formulations, as well as several important MKL approaches on single-task problems. We then derive a simple algorithm that can solve the unifying framework. To demonstrate the flexibility of the proposed framework, we formulate a new learning problem, namely Partially-Shared Common Space (PSCS) Multi-Task MKL, and demonstrate its merits through experimentation.
1401.5151
Signal recovery using expectation consistent approximation for linear observations
cs.IT cond-mat.dis-nn math.IT
A signal recovery scheme is developed for linear observation systems based on expectation consistent (EC) mean field approximation. Approximate message passing (AMP) is known to be consistent with the results obtained using the replica theory, which is supposed to be exact in the large system limit, when each entry of the observation matrix is independently generated from an identical distribution. However, this is not necessarily the case for general matrices. We show that EC recovery exhibits consistency with the replica theory for a wider class of random observation matrices. This is numerically confirmed by experiments for the Bayesian optimal signal recovery of compressed sensing using random row-orthogonal matrices.
1401.5156
Harmony Search Algorithm for Curriculum-Based Course Timetabling Problem
cs.AI
In this paper, the harmony search algorithm is applied to curriculum-based course timetabling. The implementation, specifically the process of improvisation, consists of memory consideration, random consideration and pitch adjustment. In memory consideration, the value of the course number for a new solution is selected from all other course numbers located in the same column of the harmony memory. This research used the highest occurrence of the course number to be scheduled in a new harmony. The remaining courses that have not been scheduled by memory consideration go through random consideration, i.e., any feasible location available is selected for scheduling in the new harmony solution. Each course scheduled out of memory consideration is examined as to whether it should be pitch-adjusted, with a given probability, by one of eight procedures. However, the algorithm produced results that were not better than the previously known best solutions. With proper modification of the approach, the algorithm could perform better on curriculum-based course timetabling.
1401.5157
Skill Analysis with Time Series Image Data
cs.AI
We present a skill analysis with time series image data using data mining methods, focused on table tennis. We do not use a body model, but only high-speed movies, from which time series data are obtained and analyzed using data mining methods such as C4.5. We identify internal models for technical skills to evaluate skillfulness in the forehand stroke of table tennis, and discuss mono- and meta-functional skills for improving skills.
1401.5162
A Simple Software Application for Simulating Commercially Available Solar Panels
cs.CE
This article addresses the formulation and validation of a simple PC based software application developed for simulating commercially available solar panels. The important feature of this application is its capability to produce speedy results in the form of solar panel output characteristics at given environmental conditions by using minimal input data. Besides, it is able to deliver critical information about the maximum power point of the panel at a given environmental condition in quick succession. The application is based on a standard equation which governs solar panels and works by means of estimating unknown parameters in the equation to fit a given solar panel. The process of parameter estimation is described in detail with the aid of equations and data of a commercial solar panel. A validation of obtained results for commercial solar panels is also presented by comparing the panel manufacturers' results with the results generated by the application. In addition, implications of the obtained results are discussed along with possible improvements to the developed software application.
1401.5168
Distributed Storage Schemes over Unidirectional Ring Networks
cs.IT math.IT
In this paper, we study distributed storage problems over unidirectional ring networks. A lower bound on the reconstructing bandwidth required to recover the total original data for each user is proposed, and it is achievable for arbitrary parameters. If a distributed storage scheme achieves this lower bound with equality for each user, we call it an optimal reconstructing distributed storage scheme (ORDSS). Furthermore, the repair problem for a failed storage node in ORDSSes is considered, and a tight lower bound on the repair bandwidth for each storage node is obtained. In particular, we show that for any ORDSS, every storage node can be repaired with repair bandwidth achieving the lower bound with equality. In addition, we present an efficient approach to constructing ORDSSes for arbitrary parameters by using the concept of Euclidean division. Finally, we give an example to illustrate the approach.
1401.5175
Supporting MOOC Instruction with Social Network Analysis
cs.SI
With an expansive and ubiquitously available gold mine of educational data, Massive Open Online Courses (MOOCs) have become an important focus of learning analytics research. In this paper, we investigate potential reasons why these digitized learning repositories are plagued with huge attrition rates. We analyze an ongoing online course offered on Coursera from a social network perspective, with the objective of identifying students who are actively participating in course discussions and those who are potentially at risk of dropping off. We additionally perform extensive forum analysis to visualize students' posting patterns longitudinally. Our results provide insights that can assist educational designers in establishing a pedagogical basis for decision-making while designing MOOCs. We infer prominent characteristics of the participation patterns of distinct groups of students in the networked learning community, and effectively discover important discussion threads. These methods can, despite the otherwise prohibitive number of students involved, allow an instructor to leverage forum behavior to identify opportunities for support.
1401.5181
Automation of Prosthetic Upper Limbs for Transhumeral Amputees Using Switch-controlled Motors
cs.HC cs.RO
The issues requiring research in the field of biomedical engineering and externally-powered prostheses are attracting the attention of regulatory bodies and the common people in various parts of the globe. Today, 90 percent of prostheses used are conventional body-powered, cable-controlled ones, which are very uncomfortable for amputees, as fairly large forces and excursions have to be generated by the amputee. Additionally, their amount of rotation is limited. Alternatively, prosthetic limbs driven by electrical motors might deliver added functionality and improved control, accompanied by better cosmesis; however, they can be bulky and costly. Presently existing proposals usually require less bodily response and need more upkeep than the cable-operated prosthetic limbs. For these reasons, a proposal for the mechanization of body-powered prostheses, designed with ease of maintenance and cost in mind, is presented in this paper. The prosthetic upper limb being automated is for transhumeral amputees, i.e., those amputated above the elbow. The study consists of two main portions: the lifting mechanism of the limb and the gripping mechanism for the hand, both using switch controls, which is the most cost-effective and optimized solution, rather than using complex and expensive myoelectric control signals.
1401.5187
Inequalities for the Bayes Risk
cs.IT math.IT math.ST stat.TH
Several inequalities are presented which, in part, generalize inequalities by Weinstein and Weiss, giving rise to new lower bounds for the Bayes risk under squared error loss.
1401.5194
Fundamental Finite Key Limits for One-Way Information Reconciliation in Quantum Key Distribution
quant-ph cs.IT math.IT
The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that one-way information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often used approximation for the information leakage during information reconciliation is not generally valid. We propose an improved approximation that takes into account finite key effects, and numerically test it against codes for two probability distributions, which we call binary-binary and binary-Gaussian, that typically appear in quantum key distribution protocols.
1401.5197
A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW
cs.CE physics.med-ph
X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly and fast software package based on LabVIEW that makes it possible to perform all procedures after the acquisition of raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process to address misalignment problems among image series, due to mechanical manufacturing errors, thermal expansion and other external factors, has been considered together with a novel fast parallel-beam 3D reconstruction procedure, developed ad hoc to perform the tomographic reconstruction. Remarkably improved reconstruction results obtained at the Beijing Synchrotron Radiation Facility after the image calibration confirmed the fundamental role of this image alignment procedure, which minimizes unwanted blurs and additional streaking artifacts always present in reconstructed slices. Moreover, this nano-CT image alignment and its associated 3D reconstruction procedure, fully based on LabVIEW routines, significantly reduce the data post-processing cycle, thus making the users' activity during experimental runs faster and easier.
1401.5200
Conformance Testing as Falsification for Cyber-Physical Systems
cs.SY
In Model-Based Design of Cyber-Physical Systems (CPS), it is often desirable to develop several models of varying fidelity. Models of different fidelity levels can enable mathematical analysis of the model, control synthesis, faster simulation, etc. Furthermore, when (automatically or manually) transitioning from a model to its implementation on an actual computational platform, two different versions of the same system are again being developed. In all these cases, it is necessary to define a rigorous notion of conformance between different models and between models and their implementations. This paper argues that conformance should be a measure of distance between systems. Although a range of theoretical distance notions exists, no way to compute such distances for industrial-size systems and models has been proposed yet. This paper addresses exactly this problem. A universal notion of conformance as closeness between systems is rigorously defined, and evidence is presented that it implies a number of other application-dependent conformance notions. An algorithm for detecting that two systems are not conformant is then proposed, which uses existing proven tools. A method is also proposed to measure the degree of conformance between two systems. The results are demonstrated on a range of models.
1401.5216
Multi-GPU parallel memetic algorithm for capacitated vehicle routing problem
cs.DC cs.NE
The goal of this paper is to propose and test a new memetic algorithm for the capacitated vehicle routing problem in a parallel computing environment. In this paper we consider a simple variation of the vehicle routing problem in which the only parameter is the capacity of the vehicle and each client needs only one package. We present a simple reduction to prove the existence of a polynomial-time algorithm for capacity 2. We analyze the efficiency of the algorithm using the hierarchical Parallel Random Access Machine (PRAM) model and run experiments with code written in CUDA (for capacities larger than 2).
1401.5221
Optimal Intelligent Control for Wind Turbulence Rejection in WECS Using ANNs and Genetic Fuzzy Approach
cs.SY cs.NE
One of the disadvantages in connecting wind energy conversion systems (WECSs) to transmission networks is the plentiful turbulence of wind speed, whose effects must therefore be controlled. Nowadays, pitch-controlled WECSs are increasingly used for variable-speed, variable-pitch wind turbines. Megawatt-class wind turbines in wind farms generally turn at variable speed. Thus turbine operation must be controlled in order to maximize the conversion efficiency below rated power and reduce loading on the drive-train. Due to the random and non-linear nature of wind turbulence, and the ability of Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) Artificial Neural Networks (ANNs) in modeling and controlling this turbulence, in this study widespread changes of wind have been studied using MLP and RBF artificial NNs. In addition, a new genetic fuzzy system has been successfully applied to identify the wind disturbance at the turbine input. Thus the output power has been regulated within the optimal and nominal range by pitch angle regulation. Consequently, our proposed approaches regulate the output aerodynamic power and torque in the nominal range.
1401.5224
Least Entropy-Like Approach for Reconstructing L-Shaped Surfaces Using a Rotating Array of Ultrasonic Sensors
cs.RO
This paper introduces a new algorithm for accurately reconstructing two smooth orthogonal surfaces by processing ultrasonic data. The proposed technique is based on a preliminary analysis of a waveform energy indicator in order to classify the data as belonging to one of the two flat surfaces. The subsequent minimization of a nonlinear cost function, inspired by the mathematical definition of Gibbs entropy, makes it possible to estimate the plane parameters robustly with respect to the presence of outlying data. These outliers are mainly due to the effect of multiple reflections arising in the region where the surfaces intersect. The scanning system consists of four inexpensive ultrasonic sensors rotated by means of a precision servo digital motor in order to obtain distance measurements for each orientation. Experimental results are presented and compared with the classic Least Squares Method, demonstrating the potential of the proposed approach in terms of precision and reliability.
1401.5226
The Why and How of Nonnegative Matrix Factorization
stat.ML cs.IR cs.LG math.OC
Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. We first illustrate this property of NMF on three applications, in image processing, text mining and hyperspectral imaging --this is the why. Then we address the problem of solving NMF, which is NP-hard in general. We review some standard NMF algorithms, and also present a recent subclass of NMF problems, referred to as near-separable NMF, that can be solved efficiently (that is, in polynomial time), even in the presence of noise --this is the how. Finally, we briefly describe some problems in mathematics and computer science closely related to NMF via the nonnegative rank.
1401.5232
Bio-inspired friction switches: adaptive pulley systems
cs.RO
Frictional influences in tendon-driven robotic systems are generally unwanted, with efforts towards minimizing them where possible. In the human hand however, the tendon-pulley system is found to be frictional with a difference between high-loaded static post-eccentric and post-concentric force production of 9-12% of the total output force. This difference can be directly attributed to tendon-pulley friction. Exploiting this phenomenon for robotic and prosthetic applications we can achieve a reduction of actuator size, weight and consequently energy consumption. In this study, we present the design of a bio-inspired friction switch. The adaptive pulley is designed to minimize the influence of frictional forces under low and medium-loading conditions and maximize it under high-loading conditions. This is achieved with a dual-material system that consists of a high-friction silicone substrate and low-friction polished steel pins. The system, designed to switch its frictional properties between the low-loaded and high-loaded conditions, is described and its behavior experimentally validated with respect to the number and spacing of pins. The results validate its intended behavior, making it a viable choice for robotic tendon-driven systems.
1401.5234
On the third weight of generalized Reed-Muller codes
cs.IT math.IT math.NT
In this paper, we study the third weight of generalized Reed-Muller codes. We prove under some restrictive condition that the third weight of generalized Reed-Muller codes depends on the third weight of generalized Reed-Muller codes of small order with two variables. In some cases, we are able to determine the third weight and the third weight codewords of generalized Reed-Muller codes.
1401.5245
Edge detection of binary images using the method of masks
cs.CV
In this work the method of masks, i.e., creating and using inverted image masks together with binary operations on image data, is used for edge detection in binary (monochrome) images, yielding results about 300 times faster than ordinary methods. The method is divided into three stages: mask construction, fundamental edge detection, and edge construction. A comparison with an ordinary method and a fuzzy-based method is carried out.
1401.5246
Genetic Algorithms and its use with back-propagation network
cs.NE
Genetic algorithms are considered one of the most efficient search techniques. Although they do not guarantee an optimal solution, their ability to reach a suitable solution in considerably short time gives them a respectable role in many AI techniques. This work introduces genetic algorithms and describes their characteristics. Then a novel method using a genetic algorithm to generate and select the best training set for a back-propagation network is proposed. This work also offers a new extension to the original genetic algorithms.
1401.5247
Information content: assessing meso-scale structures in complex networks
physics.soc-ph cs.SI
We propose a novel measure to assess the presence of meso-scale structures in complex networks. This measure is based on the identification of regular patterns in the adjacency matrix of the network, and on the calculation of the quantity of information lost when pairs of nodes are iteratively merged. We show how this measure is able to quantify several meso-scale structures, like the presence of modularity, bipartite and core-periphery configurations, or motifs. Results corresponding to a large set of real networks are used to validate its ability to detect non-trivial topological patterns.
1401.5272
The Rate-Distortion Function and Excess-Distortion Exponent of Sparse Regression Codes with Optimal Encoding
cs.IT math.IT math.ST stat.TH
This paper studies the performance of sparse regression codes for lossy compression with the squared-error distortion criterion. In a sparse regression code, codewords are linear combinations of subsets of columns of a design matrix. It is shown that with minimum-distance encoding, sparse regression codes achieve the Shannon rate-distortion function for i.i.d. Gaussian sources $R^*(D)$ as well as the optimal excess-distortion exponent. This completes a previous result which showed that $R^*(D)$ and the optimal exponent were achievable for distortions below a certain threshold. The proof of the rate-distortion result is based on the second moment method, a popular technique to show that a non-negative random variable $X$ is strictly positive with high probability. In our context, $X$ is the number of codewords within target distortion $D$ of the source sequence. We first identify the reason behind the failure of the standard second moment method for certain distortions, and illustrate the different failure modes via a stylized example. We then use a refinement of the second moment method to show that $R^*(D)$ is achievable for all distortion values. Finally, the refinement technique is applied to Suen's correlation inequality to prove the achievability of the optimal Gaussian excess-distortion exponent.
1401.5297
Navigating MazeMap: indoor human mobility, spatio-logical ties and future potential
cs.SY cs.NI cs.SI
Global navigation systems and location-based services have found their way into our daily lives. Recently, indoor positioning techniques have also been proposed, and there are several live or trial systems already operating. In this paper, we present insights from MazeMap, the first live indoor/outdoor positioning and navigation system deployed at a large university campus in Norway. Our main contribution is a measurement case study; we show the spatial and temporal distribution of MazeMap geo-location and wayfinding requests, construct the aggregated human mobility map of the campus and find strong logical ties between different locations. On one hand, our findings are specific to the venue; on the other hand, the nature of available data and insights coupled with our discussion on potential usage scenarios for indoor positioning and location-based services predict a successful future for these systems and applications.
1401.5305
Bounds on the ML Decoding Error Probability of RS-Coded Modulation over AWGN Channels
cs.IT math.IT
This paper is concerned with bounds on the maximum-likelihood (ML) decoding error probability of Reed-Solomon (RS) codes over additive white Gaussian noise (AWGN) channels. To resolve the difficulty caused by the dependence of the Euclidean distance spectrum on the way of signal mapping, we propose to use random mapping, resulting in an ensemble of RS-coded modulation (RS-CM) systems. For this ensemble of RS-CM systems, analytic bounds are derived, which can be evaluated from the known (symbol-level) Hamming distance spectrum. Also presented in this paper are simulation-based bounds, which are applicable to any specific RS-CM system and can be evaluated with the aid of a list decoding (in the Euclidean space) algorithm. The simulation-based bounds do not need the distance spectrum and are numerically tight for short RS codes in the regime where the word error rate (WER) is not too low. Numerical comparison results are relevant in at least three aspects. First, in the short code length regime, RS-CM using BPSK modulation with random mapping has a better performance than binary random linear codes. Second, RS-CM with random (time-varying) mapping can have a better performance than with a specific mapping. Third, numerical results show that the recently proposed Chase-type decoding algorithm is essentially the ML decoding algorithm for short RS codes.
1401.5311
Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition
cs.CV
To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of the Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g. LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.
1401.5321
An Integer Programming Approach to UEP Coding for Multiuser Broadcast Channels
cs.IT math.IT
In this paper, an integer programming approach is introduced to construct Unequal Error Protection (UEP) codes for multiuser broadcast channels. We show that optimal codes satisfying the integer programming bound can be constructed. Based on the bound, we compute the asymptotic code rate and perform throughput analysis for the degraded broadcast channel.
1401.5327
Compositional Operators in Distributional Semantics
cs.CL cs.AI math.CT
This survey presents in some detail the main advances that have been recently taking place in Computational Linguistics towards the unification of the two prominent semantic paradigms: the compositional formal semantics view and the distributional models of meaning based on vector spaces. After an introduction to these two approaches, I review the most important models that aim to provide compositionality in distributional semantics. Then I proceed and present in more detail a particular framework by Coecke, Sadrzadeh and Clark (2010) based on the abstract mathematical setting of category theory, as a more complete example capable to demonstrate the diversity of techniques and scientific disciplines that this kind of research can draw from. This paper concludes with a discussion about important open issues that need to be addressed by the researchers in the future.
1401.5330
Study of Neural Network Algorithm for Straight-Line Drawings of Planar Graphs
cs.CG cs.NE
Graph drawing addresses the problem of finding a layout of a graph that satisfies given aesthetic and understandability objectives. The most important objective in graph drawing is minimization of the number of crossings in the drawing, as the aesthetics and readability of graph drawings depend on the number of edge crossings. VLSI layouts with fewer crossings are more easily realizable and consequently cheaper. A straight-line drawing of a planar graph G of n vertices is a drawing of G such that each edge is drawn as a straight-line segment without edge crossings. However, a problem with current graph layout methods capable of producing satisfactory results for a wide range of graphs is that they often put an extremely high demand on computational resources. This paper introduces a new layout method, which nicely draws internally convex planar graphs, consumes only little computational resources, and does not need any heavy-duty preprocessing. Here, we use two methods: the first is the self-organizing map (SOM), known from unsupervised neural networks; the second is the inverse self-organizing map (ISOM).
1401.5334
A Microkernel Architecture for Constraint Programming
cs.AI cs.PL
This paper presents a microkernel architecture for constraint programming organized around a small number of core functionalities and minimal interfaces. The architecture contrasts with the monolithic nature of many implementations. Experimental results indicate that the software engineering benefits are not incompatible with runtime efficiency.
1401.5339
Complex Objects in the Polytopes of the Linear State-Space Process
math.OC cs.MA
A simple object (one point in $m$-dimensional space) is the resultant of the evolving matrix polynomial of walks in the irreducible aperiodic network structure of the first order DeGroot (weighted averaging) state-space process. This paper draws on a second order generalization of the DeGroot model that allows complex object resultants, i.e., multiple points with distinct coordinates, in the convex hull of the initial state-space. It is shown that, holding network structure constant, a unique solution exists for the particular initial space that is a sufficient condition for the convergence of the process to a specified complex object. In addition, it is shown that, holding network structure constant, a solution exists for dampening values sufficient for the convergence of the process to a specified complex object. These dampening values, which modify the values of the walks in the network, control the system's outcomes, and any strongly connected topology is a sufficient condition for such control.
1401.5341
Domain Views for Constraint Programming
cs.AI cs.PL
Views are a standard abstraction in constraint programming: they make it possible to implement a single version of each constraint, while avoiding the creation of new variables and constraints that would slow down propagation. Traditional constraint-programming systems provide the concept of {\em variable views}, which implement a view of the type $y = f(x)$ by delegating all (domain and constraint) operations on variable $y$ to variable $x$. This paper proposes the alternative concept of {\em domain views}, which only delegate domain operations. Domain views preserve the benefits of variable views but simplify the implementation of value-based propagation. Domain views also support non-injective views compositionally, expanding the scope of views significantly. Experimental results demonstrate the practical benefits of domain views.
1401.5360
Positivity, Discontinuity, Finite Resources and Nonzero Error for Arbitrarily Varying Quantum Channels
quant-ph cs.IT math-ph math.IT math.MP
This work is motivated by a quite general question: Under which circumstances are the capacities of information transmission systems continuous? The research is explicitly carried out on arbitrarily varying quantum channels (AVQCs). We give an explicit example that answers the recent question whether the transmission of messages over AVQCs can benefit from distribution of randomness between the legitimate sender and receiver in the affirmative. The specific class of channels introduced in that example is then extended to show that the deterministic capacity does have discontinuity points, while that behaviour is, at the same time, not generic: We show that it is continuous around its positivity points. This is in stark contrast to the randomness-assisted capacity, which is always continuous in the channel. Our results imply that the deterministic message transmission capacity of an AVQC can be discontinuous only in points where it is zero, while the randomness assisted capacity is nonzero. Apart from the zero-error capacities, this is the first result that shows a discontinuity of a capacity for a large class of quantum channels. The continuity of the respective capacity for memoryless quantum channels had, among others, been listed as an open problem on the problem page of the ITP Hannover for about six years before it was proven to be continuous. We also quantify the interplay between the distribution of finite amounts of randomness between the legitimate sender and receiver, the (nonzero) decoding error with respect to the average error criterion that can be achieved over a finite number of channel uses and the number of messages that can be sent. This part of our results also applies to entanglement- and strong subspace transmission. In addition, we give a new sufficient criterion for the entanglement transmission capacity with randomness assistance to vanish.
1401.5364
HMACA: Towards Proposing a Cellular Automata Based Tool for Protein Coding, Promoter Region Identification and Protein Structure Prediction
cs.CE cs.LG
The human body consists of a large number of cells, and each cell contains DeoxyriboNucleic Acid (DNA). Identifying the genes from DNA sequences is a very difficult task, and identifying the coding regions is an even more complex one. Identifying the proteins, which occupy little space in genes, is a really challenging issue. For understanding genes, coding region analysis plays an important role. Proteins are molecules with macro structure that are responsible for a wide range of vital biochemical functions, including acting as oxygen carriers, cell signaling, antibody production, nutrient transport and building up muscle fibers. Promoter region identification and protein structure prediction have gained remarkable attention in recent years. Even though there are some identification techniques addressing this problem, the approximate accuracy in identifying the promoter region is roughly 68% to 72%. We have developed a cellular automata based tool, built with a hybrid multiple attractor cellular automata (HMACA) classifier, for protein coding region and promoter region identification and protein structure prediction, which identifies the protein coding and promoter regions with an accuracy of 76%. This tool also predicts the structure of proteins with an accuracy of 80%.
1401.5389
Which Clustering Do You Want? Inducing Your Ideal Clustering with Minimal Feedback
cs.IR cs.CL cs.LG
While traditional research on text clustering has largely focused on grouping documents by topic, it is conceivable that a user may want to cluster documents along other dimensions, such as the author's mood, gender, age, or sentiment. Without knowing the user's intention, a clustering algorithm will only group documents along the most prominent dimension, which may not be the one the user desires. To address the problem of clustering documents along the user-desired dimension, previous work has focused on learning a similarity metric from data manually annotated with the user's intention or having a human construct a feature space in an interactive manner during the clustering process. With the goal of reducing reliance on human knowledge for fine-tuning the similarity function or selecting the relevant features required by these approaches, we propose a novel active clustering algorithm, which allows a user to easily select the dimension along which she wants to cluster the documents by inspecting only a small number of words. We demonstrate the viability of our algorithm on a variety of commonly-used sentiment datasets.
1401.5390
Learning to Win by Reading Manuals in a Monte-Carlo Framework
cs.CL cs.AI cs.LG
Domain knowledge is crucial for effective performance in autonomous control systems. Typically, human effort is required to encode this knowledge into a control algorithm. In this paper, we present an approach to language grounding which automatically interprets text in the context of a complex control application, such as a game, and uses domain knowledge extracted from the text to improve control performance. Both text analysis and control strategies are learned jointly using only a feedback signal inherent to the application. To effectively leverage textual information, our method automatically extracts the text segment most relevant to the current game state, and labels it with a task-centric predicate structure. This labeled text is then used to bias an action selection policy for the game, guiding it towards promising regions of the action space. We encode our model for text analysis and game playing in a multi-layer neural network, representing linguistic decisions via latent variables in the hidden layers, and game action quality via the output layer. Operating within the Monte-Carlo Search framework, we estimate model parameters using feedback from simulated games. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 34% absolute improvement and winning over 65% of games when playing against the built-in AI of Civilization.
1401.5401
Linear MIMO Precoding in Jointly-Correlated Fading Multiple Access Channels with Finite Alphabet Signaling
cs.IT math.IT
In this paper, we investigate the design of linear precoders for multiple-input multiple-output (MIMO) multiple access channels (MAC). We assume that statistical channel state information (CSI) is available at the transmitters and consider the problem under the practical finite alphabet input assumption. First, we derive an asymptotic (in the large-system limit) weighted sum rate (WSR) expression for the MIMO MAC with finite alphabet inputs and general jointly-correlated fading. Subsequently, we obtain necessary conditions for linear precoders maximizing the asymptotic WSR and propose an iterative algorithm for determining the precoders of all users. In the proposed algorithm, the search space of each user for designing the precoding matrices is its own modulation set. This significantly reduces the dimension of the search space for finding the precoding matrices of all users compared to the conventional precoding design for the MIMO MAC with finite alphabet inputs, where the search space is the combination of the modulation sets of all users. As a result, the proposed algorithm decreases the computational complexity for MIMO MAC precoding design with finite alphabet inputs by several orders of magnitude. Simulation results for finite alphabet signalling indicate that the proposed iterative algorithm achieves significant performance gains over existing precoder designs, including the precoder design based on the Gaussian input assumption, in terms of both the sum rate and the coded bit error rate.
1401.5407
Patterns of Ship-borne Species Spread: A Clustering Approach for Risk Assessment and Management of Non-indigenous Species Spread
cs.SI
The spread of non-indigenous species (NIS) through the global shipping network (GSN) incurs enormous ecological and economic costs throughout the world. Previous attempts at quantifying NIS invasions have mostly taken "bottom-up" approaches that eventually require multiple simplifying assumptions due to the insufficiency and/or uncertainty of available data. By modeling implicit species exchanges via a graph abstraction that we refer to as the Species Flow Network (SFN), we present a different approach that exploits the power of network science methods in extracting knowledge from largely incomplete data. Here, coarse-grained species flow dynamics are studied via a graph clustering approach that decomposes the SFN into clusters of ports and inter-cluster connections. With this decomposition of ports in place, NIS flow among clusters can be very efficiently reduced by enforcing NIS management on a few chosen inter-cluster connections. Furthermore, efficient NIS management strategies for species exchanges within a cluster (often difficult due to a higher rate of travel and more pathways) are then derived in conjunction with the ecological and environmental aspects that govern species establishment. The benefits of the presented approach include robustness to data uncertainties, implicit incorporation of "stepping-stone" spread of invasive species, and decoupling of species spread and establishment risk estimation. Our analysis of a multi-year (1997--2006) GSN dataset using the presented approach shows the existence of a few large clusters of ports with higher intra-cluster species flow that are fairly stable over time. Furthermore, detailed investigations were carried out on vessel types, ports, and inter-cluster connections. Finally, our observations are discussed in the context of known NIS invasions, and future research directions are also presented.
1401.5424
Real Time Strategy Language
cs.AI
Real Time Strategy (RTS) games provide a complex domain for testing the latest artificial intelligence (AI) research. In much of the literature, AI systems have been limited to playing a single game. Although this specialization has resulted in stronger AI gaming systems, it does not address the key concerns of AI researchers, who seek the development of AI agents that can autonomously interpret, learn, and apply new knowledge. To achieve human-level performance, current AI systems rely on game-specific knowledge provided by an expert. This paper presents the full RTS language in the hope of shifting the current research focus toward the development of general RTS agents, i.e., AI gaming systems that can play any RTS game defined in the RTS language. This prevents game-specific knowledge from being hard-coded into the system, thereby facilitating research that addresses the fundamental concerns of artificial intelligence.