Columns: id | title | categories | abstract
1006.3021
A General Framework for Equivalences in Answer-Set Programming by Countermodels in the Logic of Here-and-There
cs.AI
Different notions of equivalence, such as the prominent notions of strong and uniform equivalence, have been studied in Answer-Set Programming, mainly for the purpose of identifying programs that can serve as substitutes without altering the semantics, for instance in program optimization. Such semantic comparisons are usually characterized by various selections of models in the logic of Here-and-There (HT). For uniform equivalence, however, correct characterizations in terms of HT-models can only be obtained for finite theories and programs. In this article, we show that a selection of countermodels in HT captures uniform equivalence also for infinite theories. This result is turned into coherent characterizations of the different notions of equivalence by countermodels, as well as by a mixture of HT-models and countermodels (so-called equivalence interpretations). Moreover, we generalize the notion of relativized hyperequivalence for programs to propositional theories, and apply the same methodology in order to obtain a semantic characterization which is amenable to infinite settings. This allows for a lifting of the results to first-order theories under a very general semantics given in terms of a quantified version of HT. We thus obtain a general framework for the study of various notions of equivalence for theories under answer-set semantics. Moreover, we prove an expedient property that allows for a simplified treatment of extended signatures, and provide further results for non-ground logic programs. In particular, uniform equivalence coincides under open and ordinary answer-set semantics; for finite non-ground programs under these semantics, the usual characterization of uniform equivalence in terms of maximal and total HT-models of the grounding remains correct even for infinite domains, when the corresponding ground programs are infinite.
1006.3033
Extension of Wirtinger's Calculus to Reproducing Kernel Hilbert Spaces and the Complex Kernel LMS
cs.LG
Over the last decade, kernel methods for nonlinear processing have successfully been used in the machine learning community. The primary mathematical tool employed in these methods is the notion of the Reproducing Kernel Hilbert Space. However, so far, the emphasis has been on batch techniques. Only recently have online techniques been considered in the context of adaptive signal processing tasks. Moreover, these efforts have been focused on real-valued data sequences. To the best of our knowledge, no adaptive kernel-based strategy has been developed, so far, for complex-valued signals. Furthermore, although real reproducing kernels are used in an increasing number of machine learning problems, complex kernels have not yet been used, in spite of their potential interest in applications that deal with complex signals, with Communications being a typical example. In this paper, we present a general framework to attack the problem of adaptive filtering of complex signals, using either real reproducing kernels, taking advantage of a technique called \textit{complexification} of real RKHSs, or complex reproducing kernels, highlighting the use of the complex Gaussian kernel. In order to derive gradients of operators that need to be defined on the associated complex RKHSs, we employ the powerful tool of Wirtinger's Calculus, which has recently attracted attention in the signal processing community. To this end, in this paper, the notion of Wirtinger's calculus is extended, for the first time, to include complex RKHSs, and we use it to derive several realizations of the Complex Kernel Least-Mean-Square (CKLMS) algorithm. Experiments verify that the CKLMS offers significant performance improvements over several linear and nonlinear algorithms when dealing with nonlinearities.
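The abstract leaves the algorithmic details to the paper; the sketch below shows only the complexification route in a minimal, hedged form: a real Gaussian kernel is applied to the stacked real and imaginary parts of the input, while the expansion coefficients stay complex. The step size and kernel width are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=4.0):
    """Real Gaussian kernel on the complexified (stacked Re/Im) inputs."""
    return np.exp(-np.sum((x - z) ** 2) / sigma ** 2)

def klms_complexified(X, d, eta=0.2, sigma=4.0):
    """Kernel LMS with complex desired signal d and complex inputs X.

    Each input is mapped to R^{2m} by stacking real and imaginary
    parts, so a real kernel can be used while the coefficients
    remain complex (the 'complexification' route; a sketch only).
    """
    centers, coeffs, errors = [], [], []
    for x, dn in zip(X, d):
        xr = np.concatenate([x.real, x.imag])      # complexify the input
        y = sum(a * gaussian_kernel(xr, c, sigma)  # current prediction
                for a, c in zip(coeffs, centers))
        e = dn - y                                  # complex error
        centers.append(xr)
        coeffs.append(eta * e)                      # LMS-style coefficient
        errors.append(abs(e))
    return centers, coeffs, errors

rng = np.random.default_rng(0)
X = [rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(200)]
d = [np.tanh(x.real.sum()) + 1j * np.tanh(x.imag.sum()) for x in X]
_, _, err = klms_complexified(X, d)
print(err[0], err[-1])   # later error magnitudes are typically smaller
```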
1006.3035
Products of Weighted Logic Programs
cs.AI cs.PL
Weighted logic programming, a generalization of bottom-up logic programming, is a well-suited framework for specifying dynamic programming algorithms. In this setting, proofs correspond to the algorithm's output space, such as a path through a graph or a grammatical derivation, and are given a real-valued score (often interpreted as a probability) that depends on the real weights of the base axioms used in the proof. The desired output is a function over all possible proofs, such as a sum of scores or an optimal score. We describe the PRODUCT transformation, which can merge two weighted logic programs into a new one. The resulting program optimizes a product of proof scores from the original programs, constituting a scoring function known in machine learning as a ``product of experts.'' Through the addition of intuitive constraining side conditions, we show that several important dynamic programming algorithms can be derived by applying PRODUCT to simpler weighted logic programs. In addition, we show how the computation of Kullback-Leibler divergence, an information-theoretic measure, can be interpreted using PRODUCT.
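The PRODUCT transformation itself operates on logic programs; as a down-to-earth stand-in, the sketch below combines two weighted path programs over the same graph, scoring each path (proof) by the product of the two experts' edge scores. The graph and scores are invented for the illustration.

```python
import heapq, math

def best_joint_path(edges, scores1, scores2, start, goal):
    """Highest-scoring path under a product of two experts.

    edges: dict node -> list of successor nodes.
    scores{1,2}: dict (u, v) -> score in (0, 1] given by each expert.
    A path's joint score is the product, over its edges, of
    scores1[e] * scores2[e]; maximizing it is a Dijkstra search
    on the negative log of the combined edge scores.
    """
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        c, u = heapq.heappop(pq)
        if u == goal:
            return math.exp(-c)          # recover the product score
        if c > dist.get(u, float("inf")):
            continue
        for v in edges.get(u, []):
            w = -math.log(scores1[(u, v)] * scores2[(u, v)])
            if c + w < dist.get(v, float("inf")):
                dist[v] = c + w
                heapq.heappush(pq, (c + w, v))
    return None

edges = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
s1 = {("s", "a"): 0.9, ("s", "b"): 0.5, ("a", "t"): 0.8, ("b", "t"): 0.9}
s2 = {("s", "a"): 0.6, ("s", "b"): 0.9, ("a", "t"): 0.9, ("b", "t"): 0.4}
print(best_joint_path(edges, s1, s2, "s", "t"))   # 0.3888 via s-a-t
```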
1006.3056
Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity
cs.CV
A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are comparable to, and often significantly better than, the best published ones, with at worst a very small margin of degradation, and at a lower computational cost.
1006.3128
The Sampling Rate-Distortion Tradeoff for Sparsity Pattern Recovery in Compressed Sensing
cs.IT math.IT
Recovery of the sparsity pattern (or support) of an unknown sparse vector from a limited number of noisy linear measurements is an important problem in compressed sensing. In the high-dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the measurement rate and the per-sample signal-to-noise ratio (SNR) are finite constants, independent of the vector length. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases computationally simple estimators are near-optimal. Bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for several different recovery algorithms. The tightness of the bounds, in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing information-theoretic necessary bounds. Near optimality is shown for a wide variety of practically motivated signal models.
1006.3151
Channel Tracking for Relay Networks via Adaptive Particle MCMC
cs.IT math.IT
This paper presents a new approach for channel tracking and parameter estimation in cooperative wireless relay networks. We consider a system with multiple relay nodes operating under an amplify-and-forward relay function. We develop a novel algorithm to efficiently solve the challenging problem of joint channel tracking and parameter estimation of the Jakes' system model within a mobile wireless relay network. This is based on the \textit{particle Markov chain Monte Carlo} (PMCMC) method. In particular, it first involves developing a Bayesian state space model, then estimating the associated high dimensional posterior using an adaptive Markov chain Monte Carlo (MCMC) sampler relying on a proposal built using a Rao-Blackwellised Sequential Monte Carlo (SMC) filter.
1006.3154
Spectrum Sensing in Cooperative Cognitive Radio Networks with Partial CSI
cs.IT math.IT
We develop an efficient algorithm for cooperative spectrum sensing in a relay based cognitive radio network. We consider a stochastic model where data is sent from the Base Station (BS) of the Primary User (PU). The data is relayed by the Secondary Users (SU) to the SU BS. The SU BS has only partial CSI knowledge of the wireless channels. In order to obtain the optimal decision rule based on the Likelihood Ratio Test (LRT), the marginal likelihood under each hypothesis needs to be evaluated pointwise. These likelihoods, however, cannot be obtained analytically due to the intractability of the integrals. Instead, we approximate these quantities by utilising the Laplace method. Performance is evaluated via numerical simulations and it is shown that the proposed spectrum sensing scheme can achieve superior results to the energy detection scheme.
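The Laplace method referred to here is the standard device of approximating an intractable integral by a Gaussian one around the integrand's mode. A generic sketch with a toy integrand (not the paper's likelihood model):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_marginal(log_f, theta0):
    """Laplace approximation of the integral of exp(log_f(theta)).

    Finds the mode of log_f, approximates log_f by its second-order
    Taylor expansion there, and integrates the resulting Gaussian:
    integral ~ f(mode) * sqrt((2*pi)^d / det(-H)), H the Hessian.
    """
    res = minimize(lambda t: -log_f(t), theta0)
    mode, d = res.x, len(res.x)
    eps = 1e-4
    H = np.zeros((d, d))               # numerical Hessian at the mode
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (log_f(mode + ei + ej) - log_f(mode + ei - ej)
                       - log_f(mode - ei + ej) + log_f(mode - ei - ej)) / (4 * eps ** 2)
    return np.exp(log_f(mode)) * np.sqrt((2 * np.pi) ** d / np.linalg.det(-H))

# Toy check: unnormalized Gaussian, exact value sqrt(2*pi) ~ 2.5066.
print(laplace_marginal(lambda t: -0.5 * t[0] ** 2, np.array([1.0])))
```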
1006.3155
Blind Spectrum Sensing in Cognitive Radio over Fading Channels and Frequency Offsets
cs.IT math.IT
This paper deals with the challenging problem of spectrum sensing in cognitive radio. We consider a stochastic system model where the Primary User (PU) transmits a periodic signal over fading channels. The effect of frequency offsets due to oscillator mismatch and Doppler offset is studied. We show that for this case the Likelihood Ratio Test (LRT) cannot be evaluated pointwise. We present a novel approach to approximate the marginalisation of the frequency offset using a single point estimate, obtained via a low complexity Constrained Adaptive Notch Filter (CANF). Performance is evaluated via numerical simulations and it is shown that the proposed spectrum sensing scheme can achieve the same performance as the near-optimal scheme, which is based on a bank of matched filters, using only a fraction of the complexity required.
1006.3156
Decoding of Convolutional Codes over the Erasure Channel
cs.IT math.IT
In this paper we study the decoding capabilities of convolutional codes over the erasure channel. Of special interest are maximum distance profile (MDP) convolutional codes. These are codes which have a maximum possible column distance increase. We show how this strong minimum distance condition of MDP convolutional codes helps us to solve error situations that maximum distance separable (MDS) block codes fail to solve. Towards this goal, we define two subclasses of MDP codes: reverse-MDP convolutional codes and complete-MDP convolutional codes. Reverse-MDP codes have the capability to recover a maximum number of erasures using an algorithm which runs backward in time. Complete-MDP convolutional codes are both MDP and reverse-MDP codes. They are capable of recovering the state of the decoder under the mildest condition. We show that complete-MDP convolutional codes perform in a certain sense better than MDS block codes of the same rate over the erasure channel.
1006.3180
H2O: An Autonomic, Resource-Aware Distributed Database System
cs.DB cs.DC
This paper presents the design of an autonomic, resource-aware distributed database which enables data to be backed up and shared without complex manual administration. The database, H2O, is designed to make use of unused resources on workstation machines. Creating and maintaining highly-available, replicated database systems can be difficult for untrained users, and costly for IT departments. H2O reduces the need for manual administration by autonomically replicating data and load-balancing across machines in an enterprise. Provisioning hardware to run a database system can be unnecessarily costly as most organizations already possess large quantities of idle resources in workstation machines. H2O is designed to utilize this unused capacity by using resource availability information to place data and plan queries over workstation machines that are already being used for other tasks. This paper discusses the requirements for such a system and presents the design and implementation of H2O.
1006.3215
Solving Functional Constraints by Variable Substitution
cs.AI cs.LO cs.PL
Functional constraints and bi-functional constraints are an important constraint class in Constraint Programming (CP) systems, in particular for Constraint Logic Programming (CLP) systems. CP systems with finite domain constraints usually employ CSP-based solvers which use local consistency, for example, arc consistency. We introduce a new approach which is based instead on variable substitution. We obtain efficient algorithms for reducing systems involving functional and bi-functional constraints together with other non-functional constraints. It also solves globally any CSP where there exists a variable such that any other variable is reachable from it through a sequence of functional constraints. Our experiments on random problems show that variable elimination can significantly improve the efficiency of solving problems with functional constraints.
1006.3222
MIMO Detection Algorithms for High Data Rate Wireless Transmission
cs.OH cs.IT math.IT
Motivated by the MIMO broadband fading channel model, this work presents a comparative study of various uncoded adaptive and non-adaptive MIMO detection algorithms with respect to BER/PER performance and hardware complexity. All the simulations are conducted within a MIMO-OFDM framework and with a packet structure similar to that of the IEEE 802.11a/g standard. As the comparison results show, the RLS algorithm appears to be an affordable solution for a wideband MIMO system targeting Gigabit wireless transmission. The huge processing power required for MIMO detection can thus be contained by jointly optimizing the channel coding and the MIMO detection.
1006.3271
The probabilistic analysis of language acquisition: Theoretical, computational, and experimental analysis
cs.CL physics.data-an q-bio.NC
There is much debate over the degree to which language learning is governed by innate language-specific biases, or acquired through cognition-general principles. Here we examine the probabilistic language acquisition hypothesis on three levels: We outline a novel theoretical result showing that it is possible to learn the exact generative model underlying a wide class of languages, purely from observing samples of the language. We then describe a recently proposed practical framework, which quantifies natural language learnability, allowing specific learnability predictions to be made for the first time. In previous work, this framework was used to make learnability predictions for a wide variety of linguistic constructions, for which learnability has been much debated. Here, we present a new experiment which tests these learnability predictions. We find that our experimental results support the possibility that these linguistic constructions are acquired probabilistically from cognition-general principles.
1006.3275
Normalized Information Distance is Not Semicomputable
cs.CC cs.CV physics.data-an
Normalized information distance (NID) uses the theoretical notion of Kolmogorov complexity, which for practical purposes is approximated by the length of the compressed version of the file involved, using a real-world compression program. This practical application is called 'normalized compression distance' and it is trivially computable. It is a parameter-free similarity measure based on compression, and is used in pattern recognition, data mining, phylogeny, clustering, and classification. The complexity properties of its theoretical precursor, the NID, have been open. We show that the NID is neither upper semicomputable nor lower semicomputable.
1006.3301
Codebook-Based SDMA for Coexistence with Fixed Wireless Service
cs.IT cs.NI math.IT
A portion of the frequency band for International Mobile Telecommunications (IMT)-Advanced is currently allocated to Fixed Wireless Service (FWS) such as Fixed Service (FS), Fixed Satellite Service (FSS), or Fixed Wireless Access (FWA), which requires frequency sharing between the two systems. SDMA, due to its high throughput nature, is a candidate for IMT-Advanced. This paper proposes a systematic design of a precoder codebook for SDMA sharing spectrum with existing FWS. Based on an estimated direction angle of a victim FWS system, an interfering transmitter adaptively constructs a codebook forming a transmit null in the direction angle while satisfying an orthogonal beamforming constraint. We derive not only asymptotic throughput scaling laws, but also an upper bound on throughput loss, to analyze the performance loss of the proposed SDMA relative to the popular SDMA scheme called per-user unitary rate control (PU2RC). Furthermore, we develop a method of evaluating the protection distance in order to analyze the spectrum sharing performance of the proposed approach. The simulation results of protection distance confirm that the proposed SDMA efficiently shares spectrum with FWS systems by reducing the protection distance by more than 66%. Although our proposed SDMA always has lower throughput compared to PU2RC in the non-coexistence scenario, it offers an intriguing opportunity to reuse spectrum already allocated to FWS.
1006.3360
Base station cooperation on the downlink: Large system analysis
cs.IT math.IT
This paper considers maximizing the network-wide minimum supported rate in the downlink of a two-cell system, where each base station (BS) is endowed with multiple antennas. This is done for different levels of cell cooperation. At one extreme, we consider single cell processing where the BS is oblivious to the interference it is creating at the other cell. At the other extreme, we consider full cooperative macroscopic beamforming. In between, we consider coordinated beamforming, which takes account of inter-cell interference, but does not require full cooperation between the BSs. We combine elements of Lagrangian duality and large system analysis to obtain limiting SINRs and bit-rates, allowing comparison between the considered schemes. The main contributions of the paper are theorems which provide concise formulas for optimal transmit power, beamforming vectors, and achieved signal to interference and noise ratio (SINR) for the considered schemes. The formulas obtained are valid for the limit in which the number of users per cell, K, and the number of antennas per base station, N, tend to infinity, with fixed ratio. These theorems also provide expressions for the effective bandwidths occupied by users, and the effective interference caused in the adjacent cell, which allow direct comparisons between the considered schemes.
1006.3385
A Fixed Precoding Approach to Achieve the Degrees of Freedom in X channel
cs.IT math.IT
This paper aims to provide a fixed precoding scheme to achieve the degrees of freedom (DoF) of the generalized ergodic X channel. This is achieved through the notion of the ergodic interference alignment technique. Accordingly, in the proposed method the transmitters do not need to know the full channel state information, whereas this assumption is an integral part of existing methods. Instead, a finite-rate feedback channel is adequate to achieve the DoF. In other words, it is demonstrated that quantized versions of the channel gains are adequate to achieve the DoF. To gain insight into the functionality of the proposed method, we first rely on finite-field channel models, and then extend the treatment to more realistic cases, including dispersive fading channels in the presence of a quantizer. Accordingly, in a Rayleigh fading environment, it is shown that a feedback rate of $2\log(p)+\Theta(\log(\log(p)))$ can provide the DoF, where $p$ is the total transmit power.
1006.3403
Image processing of a spectrogram produced by Spectrometer Airglow Temperature Imager
physics.comp-ph cs.CV physics.ins-det
The Spectral Airglow Temperature Imager is an instrument specially designed for investigation of the wave processes in the Mesosphere-Lower Thermosphere. In order to determine the kinematic parameters of a wave, the values of a physical quantity at different points in space, and their changes in time, should be known. An approach for image processing of registered spectrograms is proposed. A detailed description is given of the steps of this approach, related to recovering CCD pixel values influenced by cosmic particles, dark image correction, and filter parameter determination.
1006.3417
Fictitious Play with Time-Invariant Frequency Update for Network Security
cs.GT cs.CR cs.LG
We study two-player security games which can be viewed as sequences of nonzero-sum matrix games played by an Attacker and a Defender. The evolution of the game is based on a stochastic fictitious play process, where players do not have access to each other's payoff matrix. Each has to observe the other's actions up to the present and plays the action generated based on the best response to these observations. In a regular fictitious play process, each player makes a maximum likelihood estimate of her opponent's mixed strategy, which results in a time-varying update based on the previous estimate and current action. In this paper, we explore an alternative scheme for frequency update, whose mean dynamic is instead time-invariant. We examine convergence properties of the mean dynamic of the fictitious play process with such an update scheme, and establish local stability of the equilibrium point when both players are restricted to two actions. We also propose an adaptive algorithm based on this time-invariant frequency update.
1006.3424
Porting Decision Tree Algorithms to Multicore using FastFlow
cs.DC cs.DB
The whole computer hardware industry has embraced multicores. For these machines, the extreme optimisation of sequential algorithms is no longer sufficient to squeeze out the real machine power, which can only be exploited via thread-level parallelism. Decision tree algorithms exhibit natural concurrency that makes them suitable to be parallelised. This paper presents an approach for easy-yet-efficient porting of an implementation of the C4.5 algorithm to multicores. The parallel porting requires minimal changes to the original sequential code, and it is able to achieve up to a 7x speedup on an Intel dual-quad core machine.
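The abstract does not spell out the update equations, so the following is a hedged illustration: classical fictitious play estimates the opponent's mixed strategy with a time-varying 1/t gain, while a constant-gain variant yields a time-invariant mean dynamic. The payoff matrix and opponent model are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy Defender payoff matrix (hypothetical)

def best_response(payoff, q):
    """Pure-strategy best response to opponent frequency estimate q."""
    return int(np.argmax(payoff @ q))

def run(T=1000, gain="1/t", eta=0.05):
    q = np.array([0.5, 0.5])             # estimate of opponent's mixed strategy
    for t in range(1, T + 1):
        a_opp = rng.choice(2)            # stand-in for the opponent's action
        e = np.eye(2)[a_opp]
        if gain == "1/t":                # classical fictitious play: ML estimate
            q = q + (e - q) / (t + 1)    # time-varying gain 1/(t+1)
        else:                            # constant gain: time-invariant mean dynamic
            q = q + eta * (e - q)
        _ = best_response(A, q)          # Defender plays a best response
    return q

print(run(gain="1/t"), run(gain="const"))
```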
1006.3425
Power law in website ratings
cs.IR cs.IT math.IT physics.soc-ph
In the practical work of website popularization and the analysis of their efficiency and downloads, it is of key importance to take web-ratings data into account. The main indicators of website traffic include the number of unique hosts from which the analyzed website was accessed and the number of served web pages (hits) per unit time (for example, day, month or year). Of particular interest is the ratio between the number of hits (S) and hosts (H). In practice one even uses the concept of the "average number of viewed pages" (S/H), which by default supposes a linear dependence of S on H. What actually happens is that the linear dependence is observed only as a special case of a power dependence, and not always. Another new power law has thus been discovered on the Internet, in particular on the WWW.
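If the relation is a power law S ≈ c·H^γ rather than the linear S ∝ H implicitly assumed by the S/H indicator, the exponent can be estimated by least squares in log-log coordinates. A minimal sketch on synthetic data (the paper's measurements are not reproduced here):

```python
import numpy as np

def fit_power_law(hosts, hits):
    """Fit hits ~ c * hosts**gamma by least squares in log-log space."""
    gamma, logc = np.polyfit(np.log(hosts), np.log(hits), 1)  # slope = exponent
    return np.exp(logc), gamma

# Synthetic daily (hosts, hits) pairs following S = 2 * H^1.2 plus noise.
rng = np.random.default_rng(1)
H = rng.integers(100, 10_000, size=200).astype(float)
S = 2.0 * H ** 1.2 * np.exp(rng.normal(0, 0.1, size=200))
c, gamma = fit_power_law(H, S)
print(f"c = {c:.2f}, gamma = {gamma:.2f}")    # close to 2 and 1.2
```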
1006.3448
Orthogonal Persistence Revisited
cs.PL cs.DB
The social and economic importance of large bodies of programs and data that are potentially long-lived has attracted much attention in the commercial and research communities. Here we concentrate on a set of methodologies and technologies called persistent programming. In particular we review programming language support for the concept of orthogonal persistence, a technique for the uniform treatment of objects irrespective of their types or longevity. While research in persistent programming has become unfashionable, we show how the concept is beginning to appear as a major component of modern systems. We relate these attempts to the original principles of orthogonal persistence and give a few hints about how the concept may be utilised in the future.
1006.3455
An External Description for MIMO Systems Sampled in an Aperiodic Way
cs.DM cs.IT math.IT
An external description for aperiodically sampled MIMO linear systems has been developed. Emphasis is on the sampling period sequence, included among the variables to be handled. The computational procedure is simple and no use of polynomial matrix theory is required. This input/output description is believed to be a basic formulation for its later application to the problem of optimal control and/or identification of linear dynamical systems.
1006.3468
Algorithm for Sector Spectra Calculation from Images Registered by the Spectral Airglow Temperature Imager
physics.data-an cs.CV
The Spectral Airglow Temperature Imager is an instrument specially designed for investigation of the wave processes in the Mesosphere-Lower Thermosphere. In order to determine the kinematic parameters of a wave, the values of a physical quantity at different points in space, and their changes in time, should be known. Because of the SATI instrument's capability for space scanning, different parts of the images (sectors of the spectrograms) correspond to respective mesopause areas (where the radiation is generated). Algorithms for sector spectra calculation are proposed. In contrast to the original algorithms, which determined only twelve sectors with angles of 30 degrees, sectors with arbitrary orientation and angles are now calculated. An algorithm is presented for sector calculation based on dividing pixels into subpixels. Comparative results are shown.
1006.3498
Fast and accurate annotation of short texts with Wikipedia pages
cs.IR
We address the problem of cross-referencing text fragments with Wikipedia pages, in a way that synonymy and polysemy issues are resolved accurately and efficiently. We take inspiration from a recent flow of work [Cucerzan 2007, Mihalcea and Csomai 2007, Milne and Witten 2008, Chakrabarti et al 2009], and extend their scenario from the annotation of long documents to the annotation of short texts, such as snippets of search-engine results, tweets, news, blogs, etc. These short and poorly composed texts pose new challenges in terms of efficiency and effectiveness of the annotation process, which we address by designing and engineering TAGME, the first system that performs an accurate and on-the-fly annotation of these short textual fragments. A large set of experiments shows that TAGME outperforms state-of-the-art algorithms when they are adapted to work on short texts, and remains fast and competitive on long texts.
1006.3506
Action Recognition in Videos: from Motion Capture Labs to the Web
cs.CV
This paper presents a survey of human action recognition approaches based on visual data recorded from a single video camera. We propose an organizing framework which highlights the evolution of the area, with techniques moving from heavily constrained motion capture scenarios towards more challenging, realistic, "in the wild" videos. The proposed organization is based on the representation used as input for the recognition task, emphasizing the hypotheses assumed and, thus, the constraints imposed on the type of video that each technique is able to address. Making the hypotheses and constraints explicit renders the framework particularly useful for selecting a method, given an application. Another advantage of the proposed organization is that it allows categorizing the newest approaches seamlessly with traditional ones, while providing an insightful perspective of the evolution of the action recognition task up to now. That perspective is the basis for the discussion at the end of the paper, where we also present the main open issues in the area.
1006.3514
Similarity Search and Locality Sensitive Hashing using TCAMs
cs.DB cs.IR
Similarity search methods are widely used as kernels in various machine learning applications. Nearest neighbor search (NNS) algorithms are often used to retrieve similar entries, given a query. While there exist efficient techniques for exact query lookup using hashing, similarity search using exact nearest neighbors is known to be a hard problem, and in high dimensions the best known solutions offer little improvement over a linear scan. Fast solutions to the approximate NNS problem include Locality Sensitive Hashing (LSH) based techniques, which need storage polynomial in $n$ with exponent greater than $1$, and query time sublinear, but still polynomial in $n$, where $n$ is the size of the database. In this work we present a new technique of solving the approximate NNS problem in Euclidean space using a Ternary Content Addressable Memory (TCAM), which needs near linear space and has O(1) query time. In fact, this method also works around the best known lower bounds in the cell probe model for the query time using a data structure near linear in the size of the database. TCAMs are high-performance associative memories widely used in networking applications such as access control lists. A TCAM can query for a bit vector within a database of ternary vectors, where every bit position represents $0$, $1$ or $*$. The $*$ is a wild card representing either a $0$ or a $1$. We leverage TCAMs to design a variant of LSH, called Ternary Locality Sensitive Hashing (TLSH), wherein we hash database entries represented by vectors in the Euclidean space into $\{0,1,*\}$. By using the added functionality of a TLSH scheme with respect to the $*$ character, we solve an instance of the approximate nearest neighbor problem with 1 TCAM access and storage nearly linear in the size of the database. We believe that this work can open new avenues in very high speed data mining.
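The abstract specifies the hash range {0,1,*} but not the hash family; the sketch below is one plausible realization in which each random projection emits a sign bit, or '*' when the projection falls inside an uncertainty band, so the TCAM wildcard absorbs near-boundary points. The band width is a made-up tuning parameter, and the paper's exact construction may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def tlsh_sketch(x, planes, band=0.1):
    """Ternary LSH sketch of a vector in Euclidean space.

    Each random hyperplane contributes one symbol: '1' or '0'
    depending on the sign of the projection, or '*' when the
    projection falls inside an uncertainty band around zero
    (the wildcard matches both 0 and 1 inside the TCAM).
    """
    out = []
    for p in planes @ x:
        if abs(p) < band:
            out.append("*")
        else:
            out.append("1" if p > 0 else "0")
    return "".join(out)

planes = rng.standard_normal((16, 8))    # 16 ternary symbols for 8-dim data
x = rng.standard_normal(8)
print(tlsh_sketch(x, planes))
```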
1006.3520
Information Distance
cs.IT math.IT math.PR physics.data-an
While Kolmogorov complexity is the accepted absolute measure of information content in an individual finite object, a similarly absolute notion is needed for the information distance between two individual objects, for example, two pictures. We give several natural definitions of a universal information metric, based on length of shortest programs for either ordinary computations or reversible (dissipationless) computations. It turns out that these definitions are equivalent up to an additive logarithmic term. We show that the information distance is a universal cognitive similarity distance. We investigate the maximal correlation of the shortest programs involved, the maximal uncorrelation of programs (a generalization of the Slepian-Wolf theorem of classical information theory), and the density properties of the discrete metric spaces induced by the information distances. A related distance measures the amount of nonreversibility of a computation. Using the physical theory of reversible computation, we give an appropriate (universal, anti-symmetric, and transitive) measure of the thermodynamic work required to transform one object into another object by the most efficient process. Information distance between individual objects is needed in pattern recognition where one wants to express effective notions of "pattern similarity" or "cognitive similarity" between individual objects and in thermodynamics of computation where one wants to analyse the energy dissipation of a computation from a particular input to a particular output.
1006.3537
Fastest Distributed Consensus Averaging Problem on Chain of Rhombus Networks
cs.IT cs.DC cs.DM math.IT
Distributed consensus has appeared as one of the most important and primary problems in the context of distributed computation, and it has received renewed interest in the field of sensor networks (due to recent advances in wireless communications), where solving the fastest distributed consensus averaging problem over networks with different topologies is one of the primary challenges. In this work an analytical solution for the fastest distributed consensus averaging algorithm over a chain of rhombus networks is provided. The solution procedure consists of stratification of the associated connectivity graph of the network and semidefinite programming, in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. The characteristic polynomial, together with its roots corresponding to the eigenvalues of the weight matrix, including the SLEM of the network, is also determined inductively. Moreover, to illustrate the importance of rhombus graphs, it is shown that the convergence rate of a path network increases when a single node is replaced by a rhombus subgraph within the path network.
1006.3573
Nested Polar Codes for Wiretap and Relay Channels
cs.IT math.IT
We show that polar codes asymptotically achieve the whole capacity-equivocation region for the wiretap channel when the wiretapper's channel is degraded with respect to the main channel, and the weak secrecy notion is used. Our coding scheme also achieves the capacity of the physically degraded receiver-orthogonal relay channel. We show simulation results for moderate block length for the binary erasure wiretap channel, comparing polar codes and two edge type LDPC codes.
1006.3650
The Use of Probabilistic Systems to Mimic the Behaviour of Idiotypic AIS Robot Controllers
cs.AI cs.NE cs.RO
Previous work has shown that robot navigation systems that employ an architecture based upon the idiotypic network theory of the immune system have an advantage over control techniques that rely on reinforcement learning only. This is thought to be a result of intelligent behaviour selection on the part of the idiotypic robot. In this paper an attempt is made to imitate idiotypic dynamics by creating controllers that use reinforcement with a number of different probabilistic schemes to select robot behaviour. The aims are to show that the idiotypic system is not merely performing some kind of periodic random behaviour selection, and to try to gain further insight into the processes that govern the idiotypic mechanism. Trials are carried out using simulated Pioneer robots that undertake navigation exercises. Results show that a scheme that boosts the probability of selecting highly-ranked alternative behaviours to 50% during stall conditions comes closest to achieving the properties of the idiotypic system, but remains unable to match it in terms of all round performance.
1006.3652
Modelling Reactive and Proactive Behaviour in Simulation
cs.AI cs.CE cs.MA
This research investigated the behaviour of a traditional discrete event simulation model and a combined discrete event and agent-based simulation model when modelling human reactive and proactive behaviour in human-centric complex systems. A department store was chosen as a human-centric complex case study, in which the operation of a fitting room in the WomensWear department was investigated. We have looked at ways to determine the efficiency of new management policies for the fitting room operation by simulating the reactive and proactive behaviour of staff towards customers. Once the simulation models had been developed and verified, we carried out a validation experiment in the form of a sensitivity analysis. Subsequently, we executed a statistical analysis in which the mixed reactive and proactive behaviour experimental results were compared with reactive experimental results from previously published work. Overall, this case study found that simple proactive individual behaviour could be modelled in both simulation models. In addition, we found that the traditional discrete event model produced simulation output similar to that of the combined discrete event and agent-based simulation when modelling similar human behaviour.
1006.3654
Detecting Anomalous Process Behaviour using Second Generation Artificial Immune Systems
cs.AI cs.CR cs.NE
Artificial Immune Systems have been successfully applied to a number of problem domains including fault tolerance and data mining, but have been shown to scale poorly when applied to computer intrusion detection despite the fact that the biological immune system is a very effective anomaly detector. This may be because AIS algorithms have previously been based on the adaptive immune system and biologically-naive models. This paper focuses on describing and testing a more complex and biologically-authentic AIS model, inspired by the interactions between the innate and adaptive immune systems. Its performance on a realistic process anomaly detection problem is shown to be better than standard AIS methods (negative-selection), policy-based anomaly detection methods (systrace), and an alternative innate AIS approach (the DCA). In addition, it is shown that runtime information can be used in combination with system call information to enhance detection capability.
1006.3678
Functional Answer Set Programming
cs.LO cs.AI
In this paper we propose an extension of Answer Set Programming (ASP), and in particular of its most general logical counterpart, Quantified Equilibrium Logic (QEL), to deal with partial functions. Although the treatment of equality in QEL can be established in different ways, we first analyse the choice of decidable equality with complete functions and Herbrand models, recently proposed in the literature. We argue that this choice yields some counterintuitive effects from a logic programming and knowledge representation point of view. We then propose a variant called QELF where the set of functions is partitioned into partial functions and Herbrand functions (which we also call constructors). In the rest of the paper, we show a direct connection to Scott's Logic of Existence and present a practical application, proposing an extension of normal logic programs to deal with partial functions and equality, so that they can be translated into function-free normal programs, making it possible in this way to compute their answer sets with any standard ASP solver.
1006.3679
Segmentation of Natural Images by Texture and Boundary Compression
cs.CV cs.IT cs.LG math.IT
We present a novel algorithm for segmentation of natural images that harnesses the principle of minimum description length (MDL). Our method is based on observations that a homogeneously textured region of a natural image can be well modeled by a Gaussian distribution and the region boundary can be effectively coded by an adaptive chain code. The optimal segmentation of an image is the one that gives the shortest coding length for encoding all textures and boundaries in the image, and is obtained via an agglomerative clustering process applied to a hierarchy of decreasing window sizes as multi-scale texture features. The optimal segmentation also provides an accurate estimate of the overall coding length and hence the true entropy of the image. We test our algorithm on the publicly available Berkeley Segmentation Dataset. It achieves state-of-the-art segmentation results compared to other existing methods.
1006.3726
Diamond Dicing
cs.DB
In OLAP, analysts often select an interesting sample of the data. For example, an analyst might focus on products bringing revenues of at least 100 000 dollars, or on shops having sales greater than 400 000 dollars. However, current systems do not allow the application of both of these thresholds simultaneously, selecting products and shops satisfying both thresholds. For such purposes, we introduce the diamond cube operator, filling a gap among existing data warehouse operations. Because of the interaction between dimensions the computation of diamond cubes is challenging. We compare and test various algorithms on large data sets of more than 100 million facts. We find that while it is possible to implement diamonds in SQL, it is inefficient. Indeed, our custom implementation can be a hundred times faster than popular database engines (including a row-store and a column-store).
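Informally, a diamond keeps only the attribute values that still meet their thresholds after pruning along the other dimensions, which invites an iterative, k-core-like fixpoint computation. The sketch below illustrates that semantics in two dimensions; it is our illustration of the operator, not the paper's optimized algorithms, and the facts and thresholds are example values.

```python
from collections import defaultdict

def diamond(facts, min_product_rev, min_shop_rev):
    """Two-dimensional diamond dice over (product, shop, revenue) facts.

    Repeatedly discards products whose total revenue within the
    remaining facts falls below min_product_rev, and shops below
    min_shop_rev, until nothing changes: a fixpoint computation,
    much like finding a k-core of a graph.
    """
    facts = list(facts)
    while True:
        prod_rev, shop_rev = defaultdict(float), defaultdict(float)
        for p, s, r in facts:
            prod_rev[p] += r
            shop_rev[s] += r
        kept = [(p, s, r) for p, s, r in facts
                if prod_rev[p] >= min_product_rev and shop_rev[s] >= min_shop_rev]
        if len(kept) == len(facts):
            return kept
        facts = kept

facts = [("tv", "paris", 90_000), ("tv", "lyon", 30_000),
         ("radio", "paris", 350_000), ("radio", "lyon", 70_000)]
print(diamond(facts, min_product_rev=100_000, min_shop_rev=300_000))
# -> [('radio', 'paris', 350000)]
```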
1006.3780
Least Squares Superposition Codes of Moderate Dictionary Size, Reliable at Rates up to Capacity
cs.IT cs.LG math.IT math.ST stat.TH
For the additive white Gaussian noise channel with average codeword power constraint, new coding methods are devised in which the codewords are sparse superpositions, that is, linear combinations of subsets of vectors from a given design, with the possible messages indexed by the choice of subset. Decoding is by least squares, tailored to the assumed form of linear combination. Communication is shown to be reliable with error probability exponentially small for all rates up to the Shannon capacity.
1006.3782
Near-Optimal Deviation-Proof Medium Access Control Designs in Wireless Networks
cs.NI cs.IT math.IT
Distributed medium access control (MAC) protocols are essential for the proliferation of low cost, decentralized wireless local area networks (WLANs). Most MAC protocols are designed with the presumption that nodes comply with prescribed rules. However, selfish nodes have natural motives to manipulate protocols in order to improve their own performance. This often degrades the performance of other nodes as well as that of the overall system. In this work, we propose a class of protocols that limit the performance gain which nodes can obtain through selfish manipulation while incurring only a small efficiency loss. The proposed protocols are based on the idea of a review strategy, with which nodes collect signals about the actions of other nodes over a period of time, use a statistical test to infer whether or not other nodes are following the prescribed protocol, and trigger a punishment if a departure from the protocol is perceived. We consider the cases of private and public signals and provide analytical and numerical results to demonstrate the properties of the proposed protocols.
1006.3787
Complete Complementary Results Report of the MARF's NLP Approach to the DEFT 2010 Competition
cs.CL
This companion paper complements the main DEFT'10 article describing the MARF approach (arXiv:0905.1235) to the DEFT'10 NLP challenge (described at http://www.groupes.polymtl.ca/taln2010/deft.php in French). This paper aims to present the complete result sets of all the conducted experiments and their settings in the resulting tables, highlighting the approach and the best results, but also showing the worse and worst results and their subsequent analysis. This particular work focuses on the application of MARF's classical and NLP pipelines to identification tasks within various francophone corpora, to identify the decades when certain articles were published for the first track (Piste 1) and the place of origin of a publication (Piste 2), such as the journal and location (France vs. Quebec). This is the sixth iteration of the release of the results.
1006.3855
Impact of Channel Asymmetry on Performance of Channel Estimation and Precoding for Downlink Base Station Cooperative Transmission
cs.IT math.IT
Base station (BS) cooperative transmission can improve the spectrum efficiency of cellular systems, but with such cooperation the channels become asymmetric. In this paper, we study the impact of this asymmetry on the performance of channel estimation and precoding in downlink BS cooperative multiple-antenna multiple-carrier systems. We first present three linear estimators which jointly estimate the channel coefficients from users in different cells under minimum mean square error, robust design, and least square criteria, and then study the impact of uplink channel asymmetry on their performance. It is shown that when large-scale channel information is exploited for channel estimation, using non-orthogonal training sequences among users in different cells leads to minor performance loss. Next, we analyze the impact of downlink channel asymmetry on the performance of precoding with channel estimation errors. Our analysis shows that although the estimation errors of weak cross links are large, the resulting rate loss is minor because their contributions are weighted by the receive signal-to-noise ratio. The simulation results verify our analysis and show that the rate loss per user is almost constant no matter where the user is located, when the channel estimators exploit the large-scale fading gains.
1006.3870
Toward Fast Reliable Communication at Rates Near Capacity with Gaussian Noise
cs.IT cs.LG math.IT math.ST stat.TH
For the additive Gaussian noise channel with average codeword power constraint, sparse superposition codes with adaptive successive decoding are developed. Codewords are linear combinations of subsets of vectors, with the message indexed by the choice of subset. A feasible decoding algorithm is presented. Communication is reliable with error probability exponentially small for all rates below the Shannon capacity.
1006.3959
Molecular Communication Using Brownian Motion with Drift
physics.bio-ph cond-mat.mes-hall cond-mat.soft cs.IT math.IT
Inspired by biological communication systems, molecular communication has been proposed as a viable scheme to communicate between nano-sized devices separated by a very short distance. Here, molecules are released by the transmitter into the medium, which are then sensed by the receiver. This paper develops a preliminary version of such a communication system focusing on the release of either one or two molecules into a fluid medium with drift. We analyze the mutual information between transmitter and the receiver when information is encoded in the time of release of the molecule. Simplifying assumptions are required in order to calculate the mutual information, and theoretical results are provided to show that these calculations are upper bounds on the true mutual information. Furthermore, optimized degree distributions are provided, which suggest transmission strategies for a variety of drift velocities.
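For Brownian motion with positive drift, the first-passage time to a fixed distance is inverse-Gaussian distributed, so timing-encoded transmission sees additive inverse-Gaussian noise. A small simulation of that additive timing channel; the distance, drift, and diffusion values are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def first_passage_times(d=1.0, v=0.5, D=0.1, n=10_000):
    """Sample first-passage times of a molecule released at 0 and
    absorbed at distance d, under drift v and diffusion coefficient D.

    For Brownian motion with positive drift, the hitting time is
    inverse Gaussian with mean d/v and shape d**2 / (2*D).
    """
    mu, lam = d / v, d ** 2 / (2 * D)
    return rng.wald(mu, lam, size=n)     # numpy's inverse-Gaussian sampler

# Receiver observes y = release_time + first-passage "noise":
release = rng.choice([0.0, 1.0], size=10_000)       # one bit in the timing
y = release + first_passage_times()
print("mean timing noise:", first_passage_times().mean())  # ~ d/v = 2.0
```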
1006.4026
A proof of two conjectures on APN functions
math.NT cs.IT math.IT
Dobbertin, Mills, M\"uller, Pott and Willems conjecture that two families of power mappings are families of APN functions. Here we prove those two conjectures.
1006.4030
A Novel VLSI Architecture of Fixed-complexity Sphere Decoder
cs.IT math.IT
Fixed-complexity Sphere Decoder (FSD) is a recently proposed technique for Multiple-Input Multiple-Output (MIMO) detection. It has several outstanding features, such as constant throughput and large potential parallelism, which make it suitable for efficient VLSI implementation. However, to the best of our knowledge, no VLSI implementation of FSD has been reported in the literature, although some FPGA prototypes of FSD with pipeline architecture have been developed. These solutions achieve very high throughput but at a very high cost in hardware resources, making them impractical in real applications. In this paper, we present a novel four-nodes-per-cycle parallel architecture of FSD, with a breadth-first processing that allows for a short critical path. The implementation achieves a throughput of 213.3 Mbps at 400 MHz clock frequency, at a cost of 0.18 mm$^2$ silicon area on 0.13{\mu}m CMOS technology. The proposed solution is much more economical compared with the existing FPGA implementations, and very suitable for practical applications because of its balanced performance and hardware complexity; moreover, it has the flexibility to be expanded into an eight-nodes-per-cycle version in order to double the throughput.
1006.4035
Towards the Development of a Simulator for Investigating the Impact of People Management Practices on Retail Performance
cs.AI cs.CE cs.MA
Often models for understanding the impact of management practices on retail performance are developed under the assumption of stability, equilibrium and linearity, whereas retail operations are considered in reality to be dynamic, non-linear and complex. Alternatively, discrete event and agent-based modelling are approaches that allow the development of simulation models of heterogeneous non-equilibrium systems for testing out different scenarios. When developing simulation models one has to abstract and simplify from the real world, which means that one has to try and capture the 'essence' of the system required for developing a representation of the mechanisms that drive the progression in the real system. Simulation models can be developed at different levels of abstraction. To know the appropriate level of abstraction for a specific application is often more of an art than a science. We have developed a retail branch simulation model to investigate which level of model accuracy is required for such a model to obtain meaningful results for practitioners.
1006.4039
Distributed Autonomous Online Learning: Regrets and Intrinsic Privacy-Preserving Properties
cs.LG cs.AI
Online learning has become increasingly popular for handling massive data. The sequential nature of online learning, however, requires a centralized learner to store data and update parameters. In this paper, we consider online learning with {\em distributed} data sources. The autonomous learners update local parameters based on local data sources and periodically exchange information with a small subset of neighbors in a communication network. We derive the regret bound for strongly convex functions that generalizes the work by Ram et al. (2010) for convex functions. Most importantly, we show that our algorithm has \emph{intrinsic} privacy-preserving properties, and we prove the sufficient and necessary conditions for privacy preservation in the network. These conditions imply that for networks with greater-than-one connectivity, a malicious learner cannot reconstruct the subgradients (and sensitive raw data) of other learners, which makes our algorithm appealing in privacy sensitive applications.
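The update structure described (mix parameters with a few neighbors, then take a local subgradient step) can be sketched as below; the mixing matrix, losses, and step size are placeholders in the general style of the Ram et al. scheme the abstract cites, not the paper's exact algorithm.

```python
import numpy as np

def distributed_online_step(X, W, grads, eta):
    """One round of distributed autonomous online learning.

    X:      (m, d) array, row i = learner i's current parameters.
    W:      (m, m) doubly stochastic mixing matrix; W[i, j] > 0 only
            for neighbors j in the communication network.
    grads:  callable(i, x) -> subgradient of learner i's current loss.
    Each learner averages neighbors' parameters, then takes a local
    subgradient step on its own data source.
    """
    mixed = W @ X                                   # exchange with neighbors
    G = np.stack([grads(i, mixed[i]) for i in range(X.shape[0])])
    return mixed - eta * G                          # local gradient step

# Toy use: 3 learners, quadratic losses with private targets.
m, d = 3, 2
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
targets = np.arange(m * d, dtype=float).reshape(m, d)
X = np.zeros((m, d))
for t in range(200):
    X = distributed_online_step(X, W, lambda i, x: x - targets[i], 0.1)
print(X)   # rows cluster near the average of the private targets
           # (exact consensus would need a decaying step size)
```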
1006.4046
Online Identification and Tracking of Subspaces from Highly Incomplete Information
cs.IT cs.SY math.IT math.OC stat.ML
This work presents GROUSE (Grassmannian Rank-One Update Subspace Estimation), an efficient online algorithm for tracking subspaces from highly incomplete observations. GROUSE requires only basic linear algebraic manipulations at each iteration, and each subspace update can be performed in linear time in the dimension of the subspace. The algorithm is derived by analyzing incremental gradient descent on the Grassmannian manifold of subspaces. With a slight modification, GROUSE can also be used as an online incremental algorithm for the matrix completion problem of imputing missing entries of a low-rank matrix. GROUSE performs exceptionally well in practice both in tracking subspaces and as an online algorithm for matrix completion.
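A minimal sketch of the rank-one GROUSE update on a partially observed vector, following the published geodesic step; the constant step size is a simplification (published variants use adaptive schedules).

```python
import numpy as np

def grouse_step(U, idx, v_obs, eta=0.1):
    """One GROUSE update from a partially observed vector.

    U:      (n, d) orthonormal basis of the current subspace estimate.
    idx:    indices of observed coordinates; v_obs: observed values.
    Solves the least-squares weights on the observed rows, forms the
    residual, and moves U along a Grassmannian geodesic by a rank-one
    rotation.
    """
    w, *_ = np.linalg.lstsq(U[idx], v_obs, rcond=None)
    p = U @ w                          # predicted full vector
    r = np.zeros(U.shape[0])
    r[idx] = v_obs - p[idx]            # residual on observed entries
    rn, pn, wn = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U                       # nothing to correct
    theta = eta * rn * pn              # geodesic step length
    step = (np.cos(theta) - 1) * p / pn + np.sin(theta) * r / rn
    return U + np.outer(step, w / wn)  # rank-one rotation of the basis
```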
1006.4088
The Stability of Low-Rank Matrix Reconstruction: a Constrained Singular Value View
cs.IT math.IT
The stability of low-rank matrix reconstruction with respect to noise is investigated in this paper. The $\ell_*$-constrained minimal singular value ($\ell_*$-CMSV) of the measurement operator is shown to determine the recovery performance of nuclear norm minimization based algorithms. Compared with the stability results using the matrix restricted isometry constant, the performance bounds established using $\ell_*$-CMSV are more concise, and their derivations are less complex. Isotropic and subgaussian measurement operators are shown to have $\ell_*$-CMSVs bounded away from zero with high probability, as long as the number of measurements is relatively large. The $\ell_*$-CMSV for correlated Gaussian operators are also analyzed and used to illustrate the advantage of $\ell_*$-CMSV compared with the matrix restricted isometry constant. We also provide a fixed point characterization of $\ell_*$-CMSV that is potentially useful for its computation.
1006.4114
How to build a DNA search engine like Google?
q-bio.GN cs.ET cs.IR
This paper proposes a new method to build a large-scale DNA sequence search system based on web search engine technology. We first give a very brief introduction to the methods used in search engines. Then how to build a DNA search system like Google is illustrated in detail. Since there is no local alignment process, this system is able to provide millisecond-level search services for billions of DNA sequences on a typical server.
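The abstract does not detail the indexing scheme; a plausible minimal sketch, in the spirit of word-based web retrieval, treats fixed-length k-mers as words in an inverted index and ranks sequences by shared k-mers, with no alignment step. The k-mer length and vote threshold are hypothetical choices.

```python
from collections import defaultdict

K = 12  # k-mer length; a hypothetical choice, not from the paper

def kmers(seq, k=K):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def build_index(sequences):
    """Inverted index: k-mer -> set of sequence ids containing it."""
    index = defaultdict(set)
    for sid, seq in sequences.items():
        for km in kmers(seq):
            index[km].add(sid)
    return index

def search(index, query, min_shared=3):
    """Rank sequences by how many of the query's k-mers they share,
    mirroring word-based web retrieval (no alignment step)."""
    votes = defaultdict(int)
    for km in kmers(query):
        for sid in index.get(km, ()):
            votes[sid] += 1
    return sorted((s for s in votes if votes[s] >= min_shared),
                  key=votes.get, reverse=True)

idx = build_index({"seq1": "ACGTACGTACGTACGT", "seq2": "TTTTACGTACGTCCCC"})
print(search(idx, "ACGTACGTACGT", min_shared=1))
```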
1006.4173
Better size estimation for sparse matrix products
cs.DS cs.DB
We consider the problem of doing fast and reliable estimation of the number of non-zero entries in a sparse boolean matrix product. This problem has applications in databases and computer algebra. Let $n$ denote the total number of non-zero entries in the input matrices. We show how to compute a $1 \pm \epsilon$ approximation (with small probability of error) in expected time $O(n)$ for any $\epsilon > 4/\sqrt[4]{z}$. The previously best estimation algorithm, due to Cohen (JCSS 1997), uses time $O(n/\epsilon^2)$. We also present a variant using $O(\mathrm{sort}(n))$ I/Os in expectation in the cache-oblivious model. In contrast to these results, the currently best algorithms for computing a sparse boolean matrix product use time $\omega(n^{4/3})$ (resp. $\omega(n^{4/3}/B)$ I/Os), even if the result matrix has only $z = O(n)$ nonzero entries. Our algorithm combines the size estimation technique of Bar-Yossef et al. (RANDOM 2002) with a particular class of pairwise independent hash functions that allows the sketch of a set of the form $A \times C$ to be computed in expected time $O(|A|+|C|)$ and $O(\mathrm{sort}(|A|+|C|))$ I/Os. We then describe how sampling can be used to maintain (independent) sketches of matrices that allow estimation to be performed in time $o(n)$ if $z$ is sufficiently large. This gives a simpler alternative to the sketching technique of Ganguly et al. (PODS 2005), and matches a space lower bound shown in that paper. Finally, we present experiments on real-world data sets that show the accuracy of both our methods to be significantly better than the worst-case analysis predicts.
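The distinct-count building block from Bar-Yossef et al. can be sketched as a k-minimum-values estimator: hash items into (0, 1], keep the k smallest values, and read the estimate off the k-th smallest. Note that this sketch enumerates its input stream; the paper's contribution is precisely computing such sketches for sets of the form $A \times C$ without enumerating all pairs.

```python
import heapq, hashlib

def kmv_estimate(stream, k=256):
    """Estimate the number of distinct items in a stream with the
    k-minimum-values sketch: hash items into (0, 1], keep the k
    smallest hash values, return (k - 1) / (k-th smallest value)."""
    heap, seen = [], set()     # max-heap (negated) of k smallest hashes
    for item in stream:
        h = int.from_bytes(hashlib.blake2b(repr(item).encode(),
                                           digest_size=8).digest(), "big")
        u = (h + 1) / 2 ** 64                   # hash mapped into (0, 1]
        if u in seen:
            continue                            # already among the kept values
        if len(heap) < k:
            heapq.heappush(heap, -u); seen.add(u)
        elif u < -heap[0]:                      # replaces the current maximum
            seen.discard(-heapq.heappushpop(heap, -u)); seen.add(u)
    if len(heap) < k:
        return len(heap)                        # fewer distinct items than k
    return (k - 1) / (-heap[0])

# Size of a boolean matrix product: stream the output pairs (i, k) and
# estimate how many are distinct (illustration only; the paper avoids
# enumerating the pairs by using structured pairwise independent hashes).
pairs = ((i, j) for i in range(1000) for j in range(100))
print(kmv_estimate(pairs))   # close to 100000
```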
1006.4175
Optimization of Weighted Curvature for Image Segmentation
cs.CV
Minimization of boundary curvature is a classic regularization technique for image segmentation in the presence of noisy image data. Techniques for minimizing curvature have historically been derived from descent methods which could be trapped in a local minimum and therefore required a good initialization. Recently, combinatorial optimization techniques have been applied to the optimization of curvature which provide a solution that achieves nearly a global optimum. However, when applied to image segmentation these methods required a meaningful data term. Unfortunately, for many images, particularly medical images, it is difficult to find a meaningful data term. Therefore, we propose to remove the data term completely and instead weight the curvature locally, while still achieving a global optimum.
1006.4255
Polar codes for the two-user multiple-access channel
cs.IT math.IT
Arikan's polar coding method is extended to two-user multiple-access channels. It is shown that if the two users of the channel use the Arikan construction, the resulting channels will polarize to one of five possible extremals, on each of which uncoded transmission is optimal. The sum rate achieved by this coding technique is the one that corresponds to uniform input distributions. The encoding and decoding complexities and the error performance of these codes are as in the single-user case: $O(n\log n)$ for encoding and decoding, and $o(\exp(-n^{1/2-\epsilon}))$ for block error probability, where $n$ is the block length.
1006.4270
Two-dimensional ranking of Wikipedia articles
cs.IR physics.soc-ph
The Library of Babel, described by Jorge Luis Borges, stores an enormous amount of information. The Library exists {\it ab aeterno}. Wikipedia, a free online encyclopaedia, becomes a modern analogue of such a Library. Information retrieval and ranking of Wikipedia articles become the challenge of modern society. While PageRank highlights very well known nodes with many ingoing links, CheiRank highlights very communicative nodes with many outgoing links. In this way the ranking becomes two-dimensional. Using CheiRank and PageRank we analyze the properties of two-dimensional ranking of all Wikipedia English articles and show that it gives their reliable classification with rich and nontrivial features. Detailed studies are done for countries, universities, personalities, physicists, chess players, Dow-Jones companies and other categories.
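CheiRank is the PageRank of the network with every link inverted, so it rewards outgoing rather than ingoing links. A compact power-iteration sketch (the edge-list encoding is illustrative):

```python
import numpy as np

def pagerank(links, n, alpha=0.85, iters=100):
    """Power-iteration PageRank on a directed graph given as a list
    of (source, target) edges over nodes 0..n-1."""
    out_deg = np.zeros(n)
    for s, _ in links:
        out_deg[s] += 1
    p = np.full(n, 1 / n)
    for _ in range(iters):
        nxt = np.zeros(n)
        for s, t in links:
            nxt[t] += alpha * p[s] / out_deg[s]
        # dangling-node and teleport mass, spread uniformly
        nxt += (1 - alpha * ((out_deg > 0) @ p)) / n
        p = nxt
    return p

def cheirank(links, n, **kw):
    """CheiRank: PageRank of the graph with every link inverted, which
    highlights communicative nodes with many outgoing links."""
    return pagerank([(t, s) for s, t in links], n, **kw)

links = [(0, 1), (1, 2), (2, 0), (0, 2)]
print(pagerank(links, 3), cheirank(links, 3))
```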
1006.4330
Large gaps imputation in remote sensed imagery of the environment
stat.AP cs.CV
Imputation of missing data in large regions of satellite imagery is necessary when the acquired image has been damaged by shadows due to clouds, or by information gaps produced by sensor failure. The general approach for imputation of missing data that cannot be considered missing at random suggests the use of other available data. Previous work, like local linear histogram matching, takes advantage of a co-registered older image obtained by the same sensor, yielding good results in filling homogeneous regions, but poor results if the scenes being combined have radical differences in target radiance due, for example, to the presence of sun glint or snow. This study proposes three different alternatives for filling the data gaps. The first two involve merging radiometric information from a lower resolution image acquired at the same time, in the Fourier domain (Method A), and using linear regression (Method B). The third method considers segmentation as the main target of processing, and proposes a way to fill the gaps in the map of classes, avoiding direct imputation (Method C). All the methods were compared by means of a large simulation study, evaluating performance with a multivariate response vector with four measures: the Q, RMSE, Kappa and Overall Accuracy coefficients. Differences in performance were tested with a MANOVA mixed model design with two main effects, imputation method and type of lower resolution extra data, and a blocking third factor with a nested sub-factor, introduced by the real Landsat image and the sub-images that were used. Method B proved to be the best for all criteria.
1006.4358
Combining Channel Output Feedback and CSI Feedback for MIMO Wireless Systems
cs.IT math.IT
The use of channel output feedback to improve the reliability of fading channels has received scant attention in the literature. In most work on feedback for fading channels, only channel state information (CSI) feedback has been exploited for coding at the transmitter. In this work, the design of a coding scheme for multiple-input multiple-output (MIMO) fading systems with channel output and channel state feedback at the transmitter is considered. Under the assumption of additive white Gaussian noise and an independent and identically distributed fading process, a simple linear coding strategy that achieves any rate up to capacity is proposed. The framework assumes perfect CSI at the transmitter and receiver. This simple linear processing scheme can provide a doubly exponential probability of error decay with blocklength for all rates less than capacity. Remarkably, this encoding scheme actually consists of two separate encoding blocks: one that adapts to the current CSI and one that adapts to the previous channel output feedback. This scheme is extended to the case when the CSI is quantized at the receiver and conveyed to the transmitter over a limited rate feedback channel; for multiple-input single-output (MISO) fading systems it is shown that the doubly exponential probability of error decay is achieved as the blocklength increases.
1006.4386
Collaborative Relay Beamforming for Secrecy
cs.IT math.IT
In this paper, the collaborative use of relays to form a beamforming system and provide physical-layer security is investigated. In particular, decode-and-forward (DF) and amplify-and-forward (AF) relay beamforming designs under total and individual relay power constraints are studied with the goal of maximizing the secrecy rates when perfect channel state information (CSI) is available. In the DF scheme, the total power constraint leads to a closed-form solution, and in this case, the optimal beamforming structure is identified in the low and high signal-to-noise ratio (SNR) regimes. The beamforming design under individual relay power constraints is formulated as an optimization problem which is shown to be easily solved using two different approaches, namely semidefinite programming and second-order cone programming. A simplified and suboptimal technique which reduces the computational complexity under individual power constraints is also presented. In the AF scheme, since analytical solutions for the optimal beamforming design are not available under either total or individual power constraints, an iterative algorithm is proposed to numerically obtain the optimal beamforming structure and maximize the secrecy rates. Finally, robust beamforming designs in the presence of imperfect CSI are investigated for DF-based relay beamforming, and optimization frameworks are provided.
1006.4425
On-the-fly Uniformization of Time-Inhomogeneous Infinite Markov Population Models
math.PR cs.CE cs.NA
This paper presents an on-the-fly uniformization technique for the analysis of time-inhomogeneous Markov population models. This technique is applicable to models with infinite state spaces and unbounded rates, which are, for instance, encountered in the realm of biochemical reaction networks. To deal with the infinite state space, we dynamically maintain a finite subset of the states where most of the probability mass is located. This approach yields an underapproximation of the original, infinite system. We present experimental results to show the applicability of our technique.
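As a toy illustration of the dynamic truncation idea, i.e. maintaining only the finite set of states carrying most of the probability mass, the sketch below performs uniformized steps for a simple birth-death population model. It is our drastic simplification, not the authors' algorithm, and all names and parameters are illustrative.

    from collections import defaultdict

    def truncated_step(p, birth, death, dt, eps=1e-12):
        """One uniformized step of a birth-death chain on an unbounded
        state space, keeping only states with probability above eps.
        Dropping states yields an underapproximation, as in the abstract.
        Requires dt * (birth(x) + death(x)) <= 1 on the maintained set."""
        q = defaultdict(float)
        for x, px in p.items():
            b, d = birth(x), death(x)
            q[x + 1] += px * b * dt
            if x > 0:
                q[x - 1] += px * d * dt
            q[x] += px * (1.0 - (b + d) * dt)
        return {x: px for x, px in q.items() if px > eps}

    # Example: constant birth rate, per-individual death rate.
    p = {0: 1.0}
    for _ in range(1000):
        p = truncated_step(p, lambda x: 2.0, lambda x: 0.1 * x, dt=0.01)
    print(sum(p.values()))  # retained probability mass (<= 1)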
1006.4442
On the Implementation of the Probabilistic Logic Programming Language ProbLog
cs.PL cs.LG cs.LO
The past few years have seen a surge of interest in the field of probabilistic logic learning and statistical relational learning. In this endeavor, many probabilistic logics have been developed. ProbLog is a recent probabilistic extension of Prolog motivated by the mining of large biological networks. In ProbLog, facts can be labeled with probabilities. These facts are treated as mutually independent random variables that indicate whether these facts belong to a randomly sampled program. Different kinds of queries can be posed to ProbLog programs. We introduce algorithms that allow the efficient execution of these queries, discuss their implementation on top of the YAP-Prolog system, and evaluate their performance in the context of large networks of biological entities.
1006.4458
Few Algorithms for ascertaining merit of a document and their applications
cs.IR
Existing models for ranking documents (mostly on the World Wide Web) are prestige based. In this article, three algorithms to objectively judge the merit of a document are proposed: 1) citation graph maxflow, 2) Recursive Gloss Overlap based intrinsic merit scoring, and 3) an interview algorithm. A short discussion of generic judgement and its mathematical treatment is presented in the introduction to motivate these algorithms.
1006.4474
sTeX+ - a System for Flexible Formalization of Linked Data
cs.SE cs.AI
We present the sTeX+ system, a user-driven advancement of sTeX - a semantic extension of LaTeX that allows for producing high-quality PDF documents for (proof)reading and printing, as well as semantic XML/OMDoc documents for the Web or further processing. Originally sTeX had been created as an invasive, semantic frontend for authoring XML documents. Here, we used sTeX in a Software Engineering case study as a formalization tool. In order to deal with modular pre-semantic vocabularies and relations, we upgraded it to sTeX+ in a participatory design process. We present a tool chain that starts with an sTeX+ editor and ultimately serves the generated documents as XHTML+RDFa Linked Data via an OMDoc-enabled, versioned XML database. In the final output, all structural annotations are preserved in order to enable semantic information retrieval services.
1006.4484
Interactive Reconciliation with Low-Density Parity-Check Codes
cs.IT math.IT
Efficient information reconciliation is crucial in several scenarios, quantum key distribution being a remarkable example. However, efficiency is not the only requirement for determining the quality of the information reconciliation process. In some of these scenarios we find other relevant parameters, such as interactivity or adaptability to different channel statistics. We propose an interactive protocol for information reconciliation based on low-density parity-check codes. The coding rate is adapted in real time by simultaneously using puncturing and shortening strategies, allowing a predefined error rate range to be covered with just a single code. The efficiency of the information reconciliation process using the proposed protocol is considerably better than that of its non-interactive version.
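For orientation, the rate adaptation mentioned in the abstract can be summarized by a standard formula: shortening $s$ information bits and puncturing $p$ code bits of an $(n,k)$ mother code gives an effective rate $(k-s)/(n-p-s)$. The sketch below is ours, with illustrative numbers, and does not reproduce the paper's protocol.

    def adapted_rate(n, k, punctured, shortened):
        """Effective rate of an (n, k) mother LDPC code after puncturing
        `punctured` code bits and shortening `shortened` information bits."""
        return (k - shortened) / (n - punctured - shortened)

    # Example with a rate-1/2 mother code of length 2000:
    print(adapted_rate(2000, 1000, punctured=200, shortened=0))  # ~0.56, raises rate
    print(adapted_rate(2000, 1000, punctured=0, shortened=200))  # ~0.44, lowers rate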
1006.4509
Receive Diversity and Ergodic Performance of Interference Alignment on the MIMO Gaussian Interference Channel
cs.IT math.IT
We consider interference alignment (IA) over K-user Gaussian MIMO interference channel (MIMO-IC) when the SNR is not asymptotically high. We introduce a generalization of IA which enables receive diversity inside the interference-free subspace. We generalize the existence criterion of an IA solution proposed by Yetis et al. to this case, thereby establishing a multi-user diversity-multiplexing trade-off (DMT) for the interference channel. Furthermore, we derive a closed-form tight lower-bound for the ergodic mutual information achievable using IA over a Gaussian MIMO-IC with Gaussian i.i.d. channel coefficients at arbitrary SNR, when the transmitted signals are white inside the subspace defined by IA. Finally, as an application of the previous results, we compare the performance achievable by IA at various operating points allowed by the DMT, to a recently introduced distributed method based on game theory.
1006.4524
Fundamental Rate-Reliability-Complexity Limits in Outage Limited MIMO Communications
cs.IT cs.CC math.IT math.ST stat.TH
The work establishes fundamental limits with respect to rate, reliability and computational complexity, for a general setting of outage-limited MIMO communications. In the high-SNR regime, the limits are optimized over all encoders, all decoders, and all complexity-regulating policies. The work then proceeds to explicitly identify encoder-decoder designs and policies that meet this optimal tradeoff. In practice, the limits aim to meaningfully quantify different pertinent measures, such as the optimal rate-reliability capabilities per unit complexity and power, the optimal diversity gains per complexity costs, or the optimal number of numerical operations (i.e., flops) per bit. Finally, the tradeoff's simple nature renders it useful for insightful comparison of the rate-reliability-complexity capabilities of different encoders and decoders.
1006.4535
Studies on Relevance, Ranking and Results Display
cs.IR
This study considers the extent to which users with the same query agree as to what is relevant, and how what is considered relevant may translate into a retrieval algorithm and results display. To combine user perceptions of relevance with algorithm rank and to present results, we created a prototype digital library of scholarly literature. We confine studies to one population of scientists (paleontologists), one domain of scholarly scientific articles (paleo-related), and a prototype system (PaleoLit) that we built for the purpose. Based on the principle that users do not pre-suppose answers to a given query but that they will recognize what they want when they see it, our system uses a rules-based algorithm to cluster results into fuzzy categories with three relevance levels. Our system matches at least 1/3 of our participants' relevancy ratings 87% of the time. Our subsequent usability study found that participants trusted our uncertainty labels but did not value our color-coded horizontal results layout above a standard retrieval list. We posit that users make such judgments in limited time, and that time optimization per task might help explain some of our findings.
1006.4540
A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee Colony Optimization
cs.LG cs.AI cs.NE
Feature selection refers to the problem of selecting relevant features which produce the most predictive outcome. In particular, the feature selection task arises in datasets containing a huge number of features. Rough set theory has been one of the most successful methods used for feature selection. However, this method is still not able to find optimal subsets. This paper proposes a new feature selection method based on Rough set theory hybridized with Bee Colony Optimization (BCO) in an attempt to combat this. The proposed method is applied in the medical domain to find the minimal reducts, and is experimentally compared with Quick Reduct, Entropy Based Reduct, and other hybrid Rough Set methods based on Genetic Algorithms (GA), Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO).
1006.4544
Human Disease Diagnosis Using a Fuzzy Expert System
cs.AI
Human disease diagnosis is a complicated process and requires a high level of expertise. Any attempt at developing a web-based expert system dealing with human disease diagnosis has to overcome various difficulties. This paper describes a project aiming to develop a web-based fuzzy expert system for diagnosing human diseases. Nowadays fuzzy systems are being used successfully in an increasing number of application areas; they use linguistic rules to describe systems. This research project focuses on the research and development of a web-based clinical tool designed to improve the quality of the exchange of health information between health care professionals and patients. Practitioners can also use this web-based tool to corroborate diagnoses. The proposed system is evaluated on various scenarios in order to assess its performance. In all the cases, the proposed system exhibits satisfactory results.
1006.4551
Vagueness of Linguistic variable
cs.AI
Artificial intelligence is the area of computer science focusing on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and 50 years of research into various programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. The human ability to estimate information is shown most clearly in the use of natural languages. By using words of a natural language to evaluate qualitative attributes, a person embeds uncertainty, in the form of vagueness, into his or her estimations. Vague sets, vague judgments and vague conclusions arise where and when a reasoning subject exists and is interested in something. The theory of vague sets arose as an answer to the imprecision of the language a reasoning subject speaks. The language of a reasoning subject is generated by vague events, which are created by reason and operated by the mind. The theory of vague sets represents an attempt to find an approximation of vague groupings that is more convenient than the classical theory of sets in situations where natural language plays a significant role. Such a theory has been offered by the well-known American mathematicians Gau and Buehrer. In our paper we describe how the vagueness of linguistic variables can be handled by using vague set theory. This paper is mainly intended for one direction of eventology (the theory of random vague events), which has arisen within the limits of probability theory and which pursues the unique purpose of describing eventologically a movement of reason.
1006.4553
Evolution of Biped Walking Using Neural Oscillators Controller and Harmony Search Algorithm Optimizer
cs.RO
In this paper, a simple neural controller has been used to achieve stable walking in a NAO biped robot with 22 degrees of freedom, implemented in the virtual physics-based RoboCup soccer simulation environment. The algorithm uses a Matsuoka-based neural oscillator to generate the control signal for the biped robot. To find the best angular trajectory and optimize network parameters, a new population-based search algorithm, called the Harmony Search (HS) algorithm, has been used. The algorithm conceptualizes a group of musicians together trying to search for a better state of harmony. Simulation results demonstrate that the modification of the step period and the walking motion due to the sensory feedback signals improves the stability of the walking motion.
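For readers unfamiliar with Matsuoka oscillators, the sketch below Euler-integrates the standard two-neuron mutual-inhibition equations from which such controllers are built; the parameter values are common illustrative choices, not the ones optimized in the paper.

    import numpy as np

    def matsuoka_step(x, v, dt, tau=0.25, tau_p=0.5, beta=2.5, w=2.5, u=1.0):
        """One Euler step of a two-neuron Matsuoka oscillator.
        x, v: arrays of shape (2,) -- membrane and adaptation states.
        The rectified output y = max(x, 0) is the rhythmic signal."""
        y = np.maximum(x, 0.0)
        x_dot = (-x - beta * v - w * y[::-1] + u) / tau   # mutual inhibition
        v_dot = (-v + y) / tau_p                          # slow adaptation
        return x + dt * x_dot, v + dt * v_dot

    x, v = np.array([0.1, 0.0]), np.zeros(2)
    signal = []
    for _ in range(4000):
        x, v = matsuoka_step(x, v, dt=0.005)
        signal.append(max(x[0], 0.0) - max(x[1], 0.0))  # e.g. a joint command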
1006.4561
An Efficient Technique for Similarity Identification between Ontologies
cs.AI
Ontologies usually suffer from semantic heterogeneity when simultaneously used in information sharing, merging, integrating and querying processes. Therefore, the identification of similarity between the ontologies being used becomes a mandatory task for all these processes to handle the problem of semantic heterogeneity. In this paper, we propose an efficient technique for similarity measurement between two ontologies. The proposed technique identifies all candidate pairs of similar concepts without omitting any similar pair. It can be used in different types of operations on ontologies such as merging, mapping and aligning. Analysis of its results shows a reasonable improvement in terms of completeness, correctness and overall quality.
1006.4563
The State of the Art: Ontology Web-Based Languages: XML Based
cs.AI
Many formal languages have been proposed to express or represent Ontologies, including RDF, RDFS, DAML+OIL and OWL. Most of these languages are based on XML syntax, but with various terminologies and degrees of expressiveness. Therefore, choosing a language for building an Ontology is a key step. The choice of a language to represent an Ontology is based mainly on what the Ontology will represent or be used for. Such a language should offer a range of quality-support features such as ease of use, expressive power, compatibility, sharing and versioning, and internationalisation. This is because different kinds of knowledge-based applications need different language features. The main objective of these languages is to add semantics to the existing information on the web. The aim of this paper is to provide a good knowledge and understanding of the existing languages and of how they can be used.
1006.4567
Understanding Semantic Web and Ontologies: Theory and Applications
cs.AI
The Semantic Web is an extension of the current Web in that it represents information more meaningfully for humans and computers alike. It enables the description of contents and services in machine-readable form, and enables annotating, discovering, publishing, advertising and composing services to be automated. It was developed based on Ontology, which is considered the backbone of the Semantic Web. In other words, the current Web is transformed from being machine-readable to machine-understandable. In fact, Ontology is a key technique with which to annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. Moreover, Ontology can provide a common vocabulary and a grammar for publishing data, and can supply a semantic description of data which can be used to preserve the Ontologies and keep them ready for inference. This paper provides basic concepts of web services and the Semantic Web, defines the structure and the main applications of ontology, and explains many relevant terms in order to provide a basic understanding of ontologies.
1006.4568
Approaches, Challenges and Future Direction of Image Retrieval
cs.IR
This paper discusses the evolution of retrieval approaches, focusing on the development, challenges and future directions of image retrieval. It highlights both the already addressed and outstanding issues. The explosive growth of image data leads to the need for research and development in image retrieval. Image retrieval research is moving from keywords, to low-level features, and to semantic features. The drive towards semantic features is due to the problem that keywords can be very subjective and time consuming, while low-level features cannot always describe the high-level concepts in the users' minds. Hence, an interpretation inconsistency arises between image descriptors and high-level semantics, known as the semantic gap. This paper also discusses semantic gap issues and user query mechanisms, as well as common ways used to bridge the gap in image retrieval.
1006.4588
Efficient Region-Based Image Querying
cs.CV
Retrieving images from large and varied repositories using visual contents has been one of the major research topics, and a challenging task, in the image management community. In this paper we present an efficient approach for region-based image classification and retrieval using a fast multi-level neural network model. The advantages of this neural model in the image classification and retrieval domain are highlighted. The proposed approach accomplishes its goal in three main steps. First, with the help of a mean-shift based segmentation algorithm, significant regions of the image are isolated. Secondly, color and texture features of each region are extracted using color moments and a 2D wavelet decomposition technique. Thirdly, the multi-level neural classifier is trained to classify each region in a given image into one of five predefined categories, i.e., "Sky", "Building", "SandnRock", "Grass" and "Water". Simulation results show that the proposed method is promising in terms of classification and retrieval accuracy. These results compare favorably with the best published results obtained by other state-of-the-art image retrieval techniques.
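As an illustration of the second step, one common definition of color-moment features (mean, standard deviation and skewness per channel) is sketched below; the exact features used in the paper may differ, and the code is ours.

    import numpy as np

    def color_moments(region):
        """First three color moments of a segmented region.
        region: array of shape (num_pixels, 3) with the RGB values of the
        pixels in one region.  Returns a 9-dimensional feature vector."""
        mean = region.mean(axis=0)
        std = region.std(axis=0)
        third = ((region - mean) ** 3).mean(axis=0)
        skew = np.cbrt(third)  # cube root keeps the original units
        return np.concatenate([mean, std, skew])

    # Example with a synthetic 100-pixel region:
    rng = np.random.default_rng(0)
    print(color_moments(rng.random((100, 3))))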
1006.4645
SPOT: An R Package For Automatic and Interactive Tuning of Optimization Algorithms by Sequential Parameter Optimization
cs.NE cs.AI math.OC stat.AP
The sequential parameter optimization (SPOT) package for R is a toolbox for tuning and understanding simulation and optimization algorithms. Model-based investigations are common approaches in simulation and optimization. Sequential parameter optimization has been developed, because there is a strong need for sound statistical analysis of simulation and optimization algorithms. SPOT includes methods for tuning based on classical regression and analysis of variance techniques; tree-based models such as CART and random forest; Gaussian process models (Kriging), and combinations of different meta-modeling approaches. This article exemplifies how SPOT can be used for automatic and interactive tuning.
1006.4703
A construction of universal secure network coding
cs.IT cs.CR math.IT
We construct a universal secure network coding scheme. Our construction only modifies the transmission scheme at the source node and works with any linear coding at the intermediate nodes. We relax the security criterion such that the mutual information between the message and the eavesdropped signal is sufficiently small instead of strictly zero. Our construction allows the set of eavesdropped links to change at each time slot.
1006.4754
Active Sites model for the B-Matrix Approach
cs.NE
This paper continues the work on the B-Matrix approach to Hebbian learning proposed by Dr. Kak. It reports results on methods of improving the memory retrieval capacity of the Hebbian neural network which implements the B-Matrix approach. Previously, the approach to retrieving memories from the network was to clamp all the individual neurons separately and verify the integrity of these memories. Here we present a network with the capability to identify the "active sites" in the network during the training phase and use these "active sites" to generate the memories retrieved from these neurons. Three methods are proposed for obtaining the update order of the network from the proximity matrix when multiple neurons are to be clamped. We then present a comparison of the new methods with the classical case, and also among the methods themselves.
1006.4786
Compressive Direction Finding Based on Amplitude Comparison
cs.IT math.IT
This paper exploits recent developments in compressive sensing (CS) to efficiently perform direction finding via amplitude comparison. The new method is based on the unimodal characteristic of the antenna pattern and the sparse property of the received data. Unlike the conventional methods based on peak searching and symmetric constraints, the sparse reconstruction algorithm requires fewer pulses and takes advantage of CS. Simulation results validate that the performance of the proposed method is better than that of the conventional methods.
1006.4801
Noise Invalidation Denoising
stat.ME cs.CV math.ST stat.TH
A denoising technique based on noise invalidation is proposed. The adaptive approach derives a noise signature from the noise order statistics and utilizes the signature to denoise the data. The novelty of this approach is that it provides general-purpose denoising, in the sense that it does not need to employ any particular assumption on the structure of the noise-free signal, such as data smoothness or sparsity of the coefficients. An advantage of the method is in denoising the corrupted data in any complete basis transformation (orthogonal or non-orthogonal). Experimental results show that the proposed method, called Noise Invalidation Denoising (NIDe), outperforms existing denoising approaches in terms of Mean Square Error (MSE).
1006.4804
The General Solutions of Linear ODE and Riccati Equation
math.CA cs.SY math-ph math.AP math.MP math.OC nlin.SI
This paper presents the general solutions of variable-coefficient linear ODEs and the Riccati equation by way of the integral series E(X) and F(X). Such integral series are a generalized form of the exponential function, and retain the properties of convergence and reversibility.
1006.4818
Stability (over time) of Modified-CS and LS-CS for Recursive Causal Sparse Reconstruction
cs.IT math.IT stat.ME
In this work, we obtain sufficient conditions for the ``stability" of our recently proposed algorithms, modified-CS (for noisy measurements) and Least Squares CS-residual (LS-CS), designed for recursive reconstruction of sparse signal sequences from noisy measurements. By ``stability" we mean that the number of misses from the current support estimate and the number of extras in it remain bounded by a time-invariant value at all times. The concept is meaningful only if the bound is small compared to the current signal support size. A direct corollary is that the reconstruction errors are also bounded by a time-invariant and small value.
1006.4832
MINLIP for the Identification of Monotone Wiener Systems
cs.LG
This paper studies the MINLIP estimator for the identification of Wiener systems consisting of a sequence of a linear FIR dynamical model, and a monotonically increasing (or decreasing) static function. Given $T$ observations, this algorithm boils down to solving a convex quadratic program with $O(T)$ variables and inequality constraints, implementing an inference technique which is based entirely on model complexity control. The resulting estimates of the linear submodel are found to be almost consistent when no noise is present in the data, under a condition of smoothness of the true nonlinearity and local Persistency of Excitation (local PE) of the data. This result is novel as it does not rely on classical tools such as a 'linearization' using a Taylor decomposition, nor does it exploit stochastic properties of the data. It is indicated how to extend the method to cope with noisy data, and empirical evidence contrasts the performance of the estimator against other recently proposed techniques.
1006.4833
A Generic Storage API
cs.DB
We present a generic API suitable for provision of highly generic storage facilities that can be tailored to produce various individually customised storage infrastructures. The paper identifies a candidate set of minimal storage system building blocks, which are sufficiently simple to avoid encapsulating policy where it cannot be customised by applications, and composable to build highly flexible storage architectures. Four main generic components are defined: the store, the namer, the caster and the interpreter. It is hypothesised that these are sufficiently general that they could act as building blocks for any information storage and retrieval system. The essential characteristics of each are defined by an interface, which may be implemented by multiple implementing classes.
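A minimal sketch of the four components named in the abstract, expressed as Python abstract classes; the method signatures are our guesses at plausible interfaces, not the API defined in the paper.

    from abc import ABC, abstractmethod

    class Store(ABC):
        """Uninterpreted storage of byte sequences keyed by identifiers."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class Namer(ABC):
        """Maps application-level names to storage keys."""
        @abstractmethod
        def resolve(self, name: str) -> str: ...

    class Caster(ABC):
        """Converts between byte representations and typed values."""
        @abstractmethod
        def to_bytes(self, value: object) -> bytes: ...
        @abstractmethod
        def from_bytes(self, data: bytes) -> object: ...

    class Interpreter(ABC):
        """Applies application-level structure over stored values."""
        @abstractmethod
        def interpret(self, value: object) -> object: ...

Keeping each interface this narrow is what leaves policy out of the building blocks, as the abstract argues: any caching, naming or typing policy lives in the implementing classes that compose them.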
1006.4910
Kalman Filters and Homography: Utilizing the Matrix $A$
cs.CV
Many problems in Computer Vision can be reduced to either working with a known transform, or, given a model for the transform, solving the inverse problem of estimating the transform itself. We will look at two ways of working with the matrix $A$ and see how transforms are at the root of image processing and vision problems.
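For concreteness, the standard linear Kalman predict/update cycle, in which the state transition matrix $A$ plays the role discussed above, can be sketched as follows (our code, in generic textbook form):

    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        """One predict/update cycle of a linear Kalman filter.
        x, P: prior state estimate and covariance; z: new measurement;
        A: state transition matrix; H: measurement matrix;
        Q, R: process and measurement noise covariances."""
        x_pred = A @ x                        # predict through the transform A
        P_pred = A @ P @ A.T + Q
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new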
1006.4925
Simulating information creation in social Semantic Web applications
cs.CE
Appropriate ranking algorithms and incentive mechanisms are essential to the creation of high-quality information by users of a social network. However, evaluating such mechanisms in a quantifiable way is a difficult problem. Studies of live social networks are of limited utility, due to the subjective nature of ranking and the lack of experimental control. Simulation provides a valuable alternative: insofar as the simulation resembles the live social network, fielding a new algorithm within a simulated network can predict the effect it will have on the live network. In this paper, we propose a simulation model based on the actor-concept-instance model of semantic social networks, and then evaluate the model against a number of common ranking algorithms. We observe their effects on information creation in such a network, and we extend our results to the evaluation of generic ranking algorithms and incentive mechanisms.
1006.4948
Automatic Music Composition using Answer Set Programming
cs.LO cs.AI
Music composition used to be a pen and paper activity. These days music is often composed with the aid of computer software, even to the point where the computer composes parts of the score autonomously. The composition of most styles of music is governed by rules. We show that by approaching the automation, analysis and verification of composition as a knowledge representation task and formalising these rules in a suitable logical language, powerful and expressive intelligent composition tools can be easily built. This application paper describes the use of answer set programming to construct an automated system, named ANTON, that can compose melodic, harmonic and rhythmic music, diagnose errors in human compositions and serve as a computer-aided composition tool. The combination of harmonic, rhythmic and melodic composition in a single framework makes ANTON unique in the growing area of algorithmic composition. With near real-time composition, ANTON reaches the point where it can not only be used as a component in an interactive composition tool but also has the potential for live performances and concerts or automatically generated background music in a variety of applications. With the use of a fully declarative language and an "off-the-shelf" reasoning engine, ANTON provides the human composer with a tool which is significantly simpler, more compact and more versatile than other existing systems. This paper has been accepted for publication in Theory and Practice of Logic Programming (TPLP).
1006.4949
Artificial Immune Systems (2010)
cs.AI cs.MA cs.NE
The human immune system has numerous properties that make it ripe for exploitation in the computational domain, such as robustness and fault tolerance, and many different algorithms, collectively termed Artificial Immune Systems (AIS), have been inspired by it. Two generations of AIS are currently in use, with the first generation relying on simplified immune models and the second generation utilising interdisciplinary collaboration to develop a deeper understanding of the immune system and hence produce more complex models. Both generations of algorithms have been successfully applied to a variety of problems, including anomaly detection, pattern recognition, optimisation and robotics. In this chapter an overview of AIS is presented, its evolution is discussed, and it is shown that the diversification of the field is linked to the diversity of the immune system itself, leading to a number of algorithms as opposed to one archetypal system. Two case studies are also presented to help provide insight into the mechanisms of AIS; these are the idiotypic network approach and the Dendritic Cell Algorithm.
1006.4953
Large scale link based latent Dirichlet allocation for web document classification
cs.IR
In this paper we demonstrate the applicability of latent Dirichlet allocation (LDA) for classifying large Web document collections. One of our main results is a novel influence model that gives a fully generative model of the document content taking linkage into account. In our setup, topics propagate along links in such a way that linked documents directly influence the words in the linking document. As another main contribution we develop LDA specific boosting of Gibbs samplers resulting in a significant speedup in our experiments. The inferred LDA model can be applied for classification as dimensionality reduction similarly to latent semantic indexing. In addition, the model yields link weights that can be applied in algorithms to process the Web graph; as an example we deploy LDA link weights in stacked graphical learning. By using Weka's BayesNet classifier, in terms of the AUC of classification, we achieve 4% improvement over plain LDA with BayesNet and 18% over tf.idf with SVM. Our Gibbs sampling strategies yield about 5-10 times speedup with less than 1% decrease in accuracy in terms of likelihood and AUC of classification.
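For reference, a plain collapsed Gibbs sampler for standard LDA (without the link-based influence model or the sampler boosting strategies of the paper) can be sketched as follows; all names are ours.

    import numpy as np

    def lda_gibbs(docs, V, K, iters=200, alpha=0.1, beta=0.01, seed=0):
        """Collapsed Gibbs sampling for plain LDA.
        docs: list of lists of word ids in [0, V); K: number of topics.
        Returns document-topic and topic-word count matrices."""
        rng = np.random.default_rng(seed)
        ndk = np.zeros((len(docs), K))   # document-topic counts
        nkw = np.zeros((K, V))           # topic-word counts
        nk = np.zeros(K)                 # per-topic totals
        z = [[rng.integers(K) for _ in d] for d in docs]
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
        for _ in range(iters):
            for d, doc in enumerate(docs):
                for i, w in enumerate(doc):
                    k = z[d][i]   # remove the current assignment
                    ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                    p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                    k = rng.choice(K, p=p / p.sum())   # resample the topic
                    z[d][i] = k
                    ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
        return ndk, nkw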
1006.4959
Open-Ended Evolutionary Robotics: an Information Theoretic Approach
cs.RO
This paper is concerned with designing self-driven fitness functions for Embedded Evolutionary Robotics. The proposed approach considers the entropy of the sensori-motor stream generated by the robot controller. This entropy is computed using unsupervised learning; its maximization, achieved by an on-board evolutionary algorithm, implements a "curiosity instinct", favouring controllers visiting many diverse sensori-motor states (sms). Further, the set of sms discovered by an individual can be transmitted to its offspring, making a cultural evolution mode possible. Cumulative entropy (computed from ancestors and current individual visits to the sms) defines another self-driven fitness; its optimization implements a "discovery instinct", as it favours controllers visiting new or rare sensori-motor states. Empirical results on the benchmark problems proposed by Lehman and Stanley (2008) comparatively demonstrate the merits of the approach.
1006.4990
GraphLab: A New Framework for Parallel Machine Learning
cs.LG cs.DC
Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large scale real-world problems.
1006.5008
Detecting Danger: The Dendritic Cell Algorithm
cs.AI cs.CR cs.NE
The Dendritic Cell Algorithm (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, an abstract model of DC behaviour is developed and subsequently used to form an algorithm, the DCA. The abstraction process was facilitated through close collaboration with laboratory-based immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population-based algorithm, with each agent in the system represented as an 'artificial DC'. Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security, including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.
1006.5036
Performance evaluation for ML sequence detection in ISI channels with Gauss Markov Noise
cs.IT math.IT
Inter-symbol interference (ISI) channels with data-dependent Gauss-Markov noise have been used to model read channels in magnetic recording and other data storage systems. The Viterbi algorithm can be adapted for performing maximum likelihood sequence detection in such channels. However, the problem of finding an analytical upper bound on the bit error rate of the Viterbi detector in this case has not been fully investigated. Current techniques rely on an exhaustive enumeration of short error events and determine the BER using a union bound. In this work, we consider a subset of the class of ISI channels with data-dependent Gauss-Markov noise. We derive an upper bound on the pairwise error probability (PEP) between the transmitted bit sequence and the decoded bit sequence that can be expressed as a product of functions depending on current and previous states in the (incorrect) decoded sequence and the (correct) transmitted sequence. In general, the PEP is asymmetric. The average BER over all possible bit sequences is then determined using a pairwise state diagram. Simulation results, which corroborate the analysis of the upper bound, demonstrate that the analytic bound on the BER is tight in the high-SNR regime, where our proposed upper bound obviates the need for computationally expensive simulation.
1006.5040
The comparison of Wiktionary thesauri transformed into the machine-readable format
cs.IR
Wiktionary is a unique, peculiar, valuable and original resource for natural language processing (NLP). The paper describes an open-source Wiktionary parser: its architecture and requirements, followed by a description of Wiktionary features to be taken into account and some open problems of Wiktionary and the parser. The current implementation of the parser extracts the definitions, semantic relations, and translations from the English and Russian Wiktionaries. The paper's goal is to interest researchers (1) in using the constructed machine-readable dictionary for different NLP tasks, and (2) in extending the software to parse the 170 still unused Wiktionaries. The numbers and types of semantic relations, the numbers of definitions, and the numbers of translations in the English Wiktionary and the Russian Wiktionary have been compared. It was found that the number of semantic relations in the English Wiktionary is 1.57 times larger than in the Russian one (157 vs. 100 thousand). However, the Russian Wiktionary has more "rich" entries (entries with a large number of semantic relations); for example, the number of entries with three or more semantic relations is 1.63 times larger than in the English Wiktionary. The comparison also revealed methodological shortcomings of Wiktionary.
1006.5041
GroupLiNGAM: Linear non-Gaussian acyclic models for sets of variables
cs.AI
Finding the structure of a graphical model has received much attention in many fields. Recently, it has been reported that the non-Gaussianity of data enables us to identify the structure of a directed acyclic graph without any prior knowledge of the structure. In this paper, we propose a novel non-Gaussianity-based algorithm for a more general type of model: chain graphs. The algorithm finds an ordering of the disjoint subsets of variables by iteratively evaluating the independence between a variable subset and the residuals when the remaining variables are regressed on it. However, its computational cost grows exponentially with the number of variables. Therefore, we further discuss an efficient approximate approach for applying the algorithm to large graphs. We illustrate the algorithm with artificial and real-world datasets.
1006.5051
Fast ABC-Boost for Multi-Class Classification
cs.LG stat.ML
Abc-boost is a new line of boosting algorithms for multi-class classification that utilizes the commonly used sum-to-zero constraint. To implement abc-boost, a base class must be identified at each boosting step. Prior studies used a very expensive procedure based on exhaustive search for determining the base class at each boosting step. Good testing performances of abc-boost (implemented as abc-mart and abc-logitboost) on a variety of datasets were reported. For large datasets, however, the exhaustive search strategy adopted in prior abc-boost algorithms can be prohibitive. To overcome this serious limitation, this paper suggests a heuristic that introduces gaps when computing the base class during training. That is, we update the choice of the base class only once every $G$ boosting steps (prior studies correspond to G=1). We test this idea on large datasets (Covertype and Poker) as well as datasets of moderate sizes. Our preliminary results are very encouraging. On the large datasets, even with G=100 (or larger), there is essentially no loss of test accuracy. On the moderate datasets, no obvious loss of test accuracy is observed when G<= 20~50. Therefore, aided by this heuristic, abc-boost promises to be a practical tool for accurate multi-class classification.
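The gap heuristic itself is simple enough to state in a few lines; the skeleton below is our paraphrase of the description in the abstract, with hypothetical function names.

    def abc_boost_with_gaps(train_step, pick_base_class, n_steps, G):
        """Skeleton of the gap heuristic: re-run the expensive base-class
        search only once every G boosting steps (G = 1 recovers the prior,
        exhaustive scheme)."""
        base = None
        for step in range(n_steps):
            if step % G == 0:
                base = pick_base_class()   # expensive exhaustive search
            train_step(base)               # ordinary boosting update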
1006.5059
Capacity Planning for Vertical Search Engines
cs.IR
Vertical search engines focus on specific slices of content, such as the Web of a single country or the document collection of a large corporation. Despite this, like general open web search engines, they are expensive to maintain, expensive to operate, and hard to design. Because of this, predicting the response time of a vertical search engine is usually done empirically through experimentation, requiring a costly setup. An alternative is to develop a model of the search engine for predicting performance. However, this alternative is of interest only if its predictions are accurate. In this paper we propose a methodology for analyzing the performance of vertical search engines. Applying the proposed methodology, we present a capacity planning model based on a queueing network for search engines with a scale typically suitable for the needs of large corporations. The model is simple and yet reasonably accurate and, in contrast to previous work, considers the imbalance in query service times among homogeneous index servers. We discuss how we tune up the model and how we apply it to predict the impact on the query response time when parameters such as CPU and disk capacities are changed. This allows a manager of a vertical search engine to determine a priori whether a new configuration of the system might keep the query response under specified performance constraints.
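As a much simpler stand-in for the paper's queueing network, an M/M/1 approximation already conveys how response time can be predicted from capacity parameters; the model below and its numbers are purely illustrative and are not the authors' model.

    def mm1_response_time(arrival_rate, service_rate):
        """Mean response time of an M/M/1 queue (requires utilization < 1)."""
        assert arrival_rate < service_rate, "server saturated"
        return 1.0 / (service_rate - arrival_rate)

    # Nominally homogeneous index servers may sustain different effective
    # service rates; a broadcast query waits for the slowest one.
    service_rates = [110.0, 95.0, 80.0]   # queries/s per server (illustrative)
    arrivals = 70.0                       # queries/s sent to each server
    print(max(mm1_response_time(arrivals, mu) for mu in service_rates))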
1006.5060
Learning sparse gradients for variable selection and dimension reduction
stat.ML cs.LG stat.ME
Variable selection and dimension reduction are two commonly adopted approaches for high-dimensional data analysis, but have traditionally been treated separately. Here we propose an integrated approach, called sparse gradient learning (SGL), for variable selection and dimension reduction via learning the gradients of the prediction function directly from samples. By imposing a sparsity constraint on the gradients, variable selection is achieved by selecting variables corresponding to non-zero partial derivatives, and effective dimensions are extracted based on the eigenvectors of the derived sparse empirical gradient covariance matrix. An error analysis is given for the convergence of the estimated gradients to the true ones in both the Euclidean and the manifold setting. We also develop an efficient forward-backward splitting algorithm to solve the SGL problem, making the framework practically scalable for medium or large datasets. The utility of SGL for variable selection and feature extraction is explicitly given and illustrated on artificial data as well as real-world examples. The main advantages of our method include variable selection for both linear and nonlinear predictions, effective dimension reduction with sparse loadings, and an efficient algorithm for large p, small n problems.
1006.5061
Optimal Bandwidth and Power Allocation for Sum Ergodic Capacity under Fading Channels in Cognitive Radio Networks
cs.IT math.IT
This paper studies optimal bandwidth and power allocation in a cognitive radio network where multiple secondary users (SUs) share the licensed spectrum of a primary user (PU) under fading channels using the frequency division multiple access scheme. The sum ergodic capacity of all the SUs is taken as the performance metric of the network. Besides all combinations of the peak/average transmit power constraints at the SUs and the peak/average interference power constraint imposed by the PU, total bandwidth constraint of the licensed spectrum is also taken into account. Optimal bandwidth allocation is derived in closed-form for any given power allocation. The structures of optimal power allocations are also derived under all possible combinations of the aforementioned power constraints. These structures indicate the possible numbers of users that transmit at nonzero power but below their corresponding peak powers, and show that other users do not transmit or transmit at their corresponding peak power. Based on these structures, efficient algorithms are developed for finding the optimal power allocations.
1006.5066
Power Allocation Strategies across N Orthogonal Channels at Both Source and Relay
cs.IT math.IT
We consider a wireless relay network with one source, one relay and one destination, where communications between nodes are performed via N orthogonal channels. This, for example, is the case when orthogonal frequency division multiplexing is employed for data communications. Since the power available at the source and relay is limited, we study optimal power allocation strategies at the source and relay in order to maximize the overall source-destination capacity under individual power constraints at the source and/or the relay. Depending on the availability of channel state information at the source and relay, optimal power allocation strategies are performed at both the source and relay or only at the relay. Considering different setups for the problem, various optimization problems are formulated and solved. Some properties of the optimal solution are also proved.
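Although the paper's joint source-relay problems are more involved, the classic building block for allocating a power budget across N orthogonal channels is the water-filling solution, sketched below in our notation with illustrative numbers; it is not the paper's full optimization.

    import numpy as np

    def waterfilling(gains, P, tol=1e-9):
        """Water-filling: maximize sum(log(1 + p_i * g_i)) subject to
        sum(p_i) <= P, p_i >= 0.  gains: channel gain-to-noise ratios."""
        lo, hi = 0.0, P + 1.0 / min(gains)   # bracket the water level
        while hi - lo > tol:
            level = 0.5 * (lo + hi)
            if np.maximum(level - 1.0 / gains, 0.0).sum() > P:
                hi = level
            else:
                lo = level
        return np.maximum(lo - 1.0 / gains, 0.0)

    g = np.array([2.0, 1.0, 0.25, 0.1])
    p = waterfilling(g, P=2.0)
    print(p, p.sum())   # sufficiently weak channels receive zero power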
1006.5086
Split Bregman method for large scale fused Lasso
stat.CO cs.LG math.OC
Ordering of regression or classification coefficients occurs in many real-world applications. Fused Lasso exploits this ordering by explicitly regularizing the differences between neighboring coefficients through an $\ell_1$ norm regularizer. However, due to the nonseparability and nonsmoothness of the regularization term, solving the fused Lasso problem is computationally demanding. Existing solvers can only deal with problems of small or medium size, or with a special case of the fused Lasso problem in which the predictor matrix is the identity matrix. In this paper, we propose an iterative algorithm based on the split Bregman method to solve a class of large-scale fused Lasso problems, including a generalized fused Lasso and a fused Lasso support vector classifier. We derive our algorithm using the augmented Lagrangian method and prove its convergence properties. The performance of our method is tested on both artificial data and real-world applications, including proteomic data from mass spectrometry and genomic data from array CGH. We demonstrate that our method is many times faster than the existing solvers, and show that it is especially efficient for large p, small n problems.
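For reference, the fused Lasso regression objective targeted by such solvers can be written, in our notation, as

    \min_{\beta}\; \frac{1}{2}\|y - X\beta\|_2^2
      + \lambda_1 \sum_{j=1}^{p} |\beta_j|
      + \lambda_2 \sum_{j=2}^{p} |\beta_j - \beta_{j-1}|,

where the second penalty enforces the ordering structure between neighboring coefficients and the special case $X = I$ is the signal-approximation variant mentioned in the abstract.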
1006.5087
Gaussian Z-Interference Channel with a Relay Link: Achievability Region and Asymptotic Sum Capacity
cs.IT math.IT
This paper studies a Gaussian Z-interference channel with a rate-limited digital relay link from one receiver to another. Achievable rate regions are derived based on a combination of the Han-Kobayashi common-private information splitting technique and several different relay strategies, including compress-and-forward and a partial decode-and-forward strategy, in which the interference is partially decoded, then binned and forwarded through the digital link for subtraction at the other end. For the Gaussian Z-interference channel with a digital link from the interference-free receiver to the interfered receiver, the capacity region is established in the strong interference regime; an achievable rate region is established in the weak interference regime. In the weak interference regime, the partial decode-and-forward strategy is shown to be asymptotically sum-capacity achieving in the high signal-to-noise ratio and high interference-to-noise ratio limit. In this case, each relay bit asymptotically improves the sum capacity by one bit. For the Gaussian Z-interference channel with a digital link from the interfered receiver to the interference-free receiver, the capacity region is established in the strong interference regime; achievable rate regions are established in the moderately strong and weak interference regimes. In addition, the asymptotic sum capacity is established in the limit of large relay link rate. In this case, the sum capacity improvement due to the digital link is bounded by half a bit when the interference link is weaker than a certain threshold, but the sum capacity improvement becomes unbounded as the interference link becomes stronger.