Abstract: Boosting has attracted much research attention in the past decade. The success of boosting algorithms may be interpreted in terms of the margin theory. Recently it has been shown that bounds on the generalization error of classifiers can be obtained by explicitly taking the margin distribution of the training data into account. Most current boosting algorithms in practice optimize a convex loss function and do not make use of the margin distribution. In this work we design a new boosting algorithm, termed margin-distribution boosting (MDBoost), which directly maximizes the average margin and minimizes the margin variance simultaneously. In this way the margin distribution is optimized. A totally corrective optimization algorithm based on column generation is proposed to implement MDBoost. Experiments on UCI datasets show that MDBoost outperforms AdaBoost and LPBoost in most cases.
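One plausible way to render the margin-distribution objective described above (a sketch in our own notation, not necessarily the paper's exact formulation): with margins $\rho_i = y_i \sum_t w_t h_t(x_i)$ over base classifiers $h_t$, maximize the mean margin while penalizing its variance.

```latex
% Hedged sketch: mean margin minus a variance penalty (lambda is ours,
% not a symbol from the paper).
\max_{w \ge 0,\ \|w\|_1 = 1} \quad
  \frac{1}{n}\sum_{i=1}^{n} \rho_i
  \;-\; \frac{\lambda}{n}\sum_{i=1}^{n} \left(\rho_i - \bar\rho\right)^2,
\qquad \bar\rho = \frac{1}{n}\sum_{i=1}^{n} \rho_i .
```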
|
Title: Least Squares estimation of two ordered monotone regression curves
|
Abstract: In this paper, we consider the problem of finding the Least Squares estimators of two isotonic regression curves $g^\circ_1$ and $g^\circ_2$ under the additional constraint that they are ordered; e.g., $g^\circ_1 \le g^\circ_2$. Given two sets of $n$ data points $y_1, \ldots, y_n$ and $z_1, \ldots, z_n$ observed at (the same) design points, the estimates of the true curves are obtained by minimizing the weighted Least Squares criterion $L_2(a, b) = \sum_{j=1}^n (y_j - a_j)^2 w_{1,j} + \sum_{j=1}^n (z_j - b_j)^2 w_{2,j}$ over the class of pairs of vectors $(a, b) \in \mathbb{R}^n \times \mathbb{R}^n$ such that $a_1 \le a_2 \le \ldots \le a_n$, $b_1 \le b_2 \le \ldots \le b_n$, and $a_i \le b_i$, $i = 1, \ldots, n$. The characterization of the estimators is established. To compute these estimators, we use an iterative projected subgradient algorithm, where the projection is performed with a "generalized" pool-adjacent-violators algorithm (PAVA), a byproduct of this work. Then, we apply the estimation method to real data from mechanical engineering.
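The projection step above relies on the pool-adjacent-violators algorithm. A minimal weighted PAVA sketch in Python follows (our illustration of the classical algorithm; the paper's "generalized" PAVA, which also enforces the ordering between the two curves, is not reproduced here).

```python
def pava(y, w=None):
    """Weighted pool-adjacent-violators: isotonic fit minimizing
    sum_j w_j * (y_j - a_j)^2 subject to a_1 <= ... <= a_n."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    # Each block is [value, weight, count]; merge while order is violated.
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            wm = w1 + w2
            blocks.append([(w1 * v1 + w2 * v2) / wm, wm, c1 + c2])
    out = []
    for v, _, c in blocks:
        out.extend([v] * c)
    return out

print(pava([3.0, 1.0, 2.0, 5.0, 4.0]))  # -> [2.0, 2.0, 2.0, 4.5, 4.5]
```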
|
Title: A Distributed Software Architecture for Collaborative Teleoperation based on a VR Platform and Web Application Interoperability
|
Abstract: Augmented Reality and Virtual Reality can provide real help to a Human Operator (HO) completing complex tasks, such as robot teleoperation and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, safer and more easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, making it impossible to control a distant robot from both simultaneously. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different mobile platforms to control one or many robots.
|
Title: A vanilla Rao--Blackwellization of Metropolis--Hastings algorithms
|
Abstract: Casella and Robert [Biometrika 83 (1996) 81--94] presented a general Rao--Blackwellization principle for accept-reject and Metropolis--Hastings schemes that leads to significant decreases in the variance of the resulting estimators, but at a high cost in computation and storage. Adopting a completely different perspective, we introduce instead a universal scheme that guarantees variance reductions in all Metropolis--Hastings-based estimators while keeping the computation cost under control. We establish a central limit theorem for the improved estimators and illustrate their performance on toy examples and on a probit model estimation.
|
Title: Inferring Dynamic Bayesian Networks using Frequent Episode Mining
|
Abstract: Motivation: Several different threads of research have been proposed for modeling and mining temporal data. On the one hand, approaches such as dynamic Bayesian networks (DBNs) provide a formal probabilistic basis to model relationships between time-indexed random variables, but these models are intractable to learn in the general case. On the other, algorithms such as frequent episode mining are scalable to large datasets but do not exhibit the rigorous probabilistic interpretations that are the mainstay of the graphical models literature. Results: We present a unification of these two seemingly diverse threads of research, by demonstrating how dynamic (discrete) Bayesian networks can be inferred from the results of frequent episode mining. This helps bridge the modeling emphasis of the former with the counting emphasis of the latter. First, we show how, under reasonable assumptions on data characteristics and on influences of random variables, the optimal DBN structure can be computed using a greedy, local algorithm. Next, we connect the optimality of the DBN structure with the notion of fixed-delay episodes and their counts of distinct occurrences. Finally, to demonstrate the practical feasibility of our approach, we focus on a specific (but broadly applicable) class of networks, called excitatory networks, and show how the search for the optimal DBN structure can be conducted using just information from frequent episodes. Applications to datasets gathered from mathematical models of spiking neurons, as well as real neuroscience datasets, are presented. Availability: Algorithmic implementations, simulator codebases, and datasets are available from our website at http://neural-code.cs.vt.edu/dbn
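As a toy illustration of the counting side of this work, here is a hedged sketch of counting occurrences of a fixed-delay episode A -> B in an event stream (our simplification; the paper's precise notion of distinct occurrences and its efficient counting algorithms are not reproduced).

```python
def count_fixed_delay(events, a, b, delay, tol=0):
    """Count occurrences of episode a -> b where b occurs `delay` time
    units (within `tol`) after a, each b event used at most once.
    `events` is a list of (time, symbol) pairs, sorted by time."""
    times_b = [t for t, s in events if s == b]
    used = set()
    count = 0
    for t, s in events:
        if s != a:
            continue
        # Find an unused b occurrence at t + delay (within tolerance).
        for tb in times_b:
            if tb not in used and abs(tb - (t + delay)) <= tol:
                used.add(tb)
                count += 1
                break
    return count

spikes = [(1, "A"), (2, "B"), (3, "A"), (4, "B"), (5, "B")]
print(count_fixed_delay(spikes, "A", "B", delay=1))  # -> 2
```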
|
Title: Delayed rejection schemes for efficient Markov-Chain Monte-Carlo sampling of multimodal distributions
|
Abstract: A number of problems in a variety of fields are characterised by target distributions with a multimodal structure in which the presence of several isolated local maxima dramatically reduces the efficiency of Markov Chain Monte Carlo sampling algorithms. Several solutions, such as simulated tempering or the use of parallel chains, have been proposed to facilitate the exploration of the relevant parameter space. They provide effective strategies in the cases in which the dimension of the parameter space is small and/or the computational costs are not a limiting factor. These approaches fail however in the case of high-dimensional spaces where the multimodal structure is induced by degeneracies between regions of the parameter space. In this paper we present a fully Markovian way to efficiently sample this kind of distribution based on the general Delayed Rejection scheme with an arbitrary number of steps, and provide details for an efficient numerical implementation of the algorithm.
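A minimal sketch of a two-stage delayed-rejection Metropolis step with symmetric Gaussian proposals, assuming the standard Tierney-Mira acceptance rule (parameter names and the toy bimodal target are ours, not from the paper):

```python
import math, random

def dr_step(x, log_pi, s1=0.5, s2=2.0):
    """One two-stage delayed-rejection Metropolis step with symmetric
    Gaussian proposals (a minimal sketch of the general scheme)."""
    y1 = x + random.gauss(0.0, s1)
    a1 = min(1.0, math.exp(log_pi(y1) - log_pi(x)))
    if random.random() < a1:
        return y1
    # First proposal rejected: try a second, bolder move. With symmetric
    # proposals the stage-2 acceptance ratio (Tierney & Mira) reduces to
    #   pi(y2) * (1 - a1(y2, y1)) / (pi(x) * (1 - a1(x, y1))).
    y2 = x + random.gauss(0.0, s2)
    a1_rev = min(1.0, math.exp(log_pi(y1) - log_pi(y2)))
    num = math.exp(log_pi(y2)) * (1.0 - a1_rev)
    den = math.exp(log_pi(x)) * (1.0 - a1)
    if den > 0 and random.random() < min(1.0, num / den):
        return y2
    return x

# Toy bimodal target: mixture of N(-3, 1) and N(3, 1).
def log_pi(x):
    return math.log(0.5 * math.exp(-0.5 * (x + 3) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 3) ** 2))

x, chain = 0.0, []
for _ in range(10000):
    x = dr_step(x, log_pi)
    chain.append(x)
```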
|
Title: Uncovering shared common genetic risk factors for various aspects of complex disorders captured in multiple traits
|
Abstract: Identifying shared genetic risk factors for multiple measured traits has been of great interest in studying complex disorders. Marlow's (2003) method for detecting shared gene effects on complex traits has been highly influential in the literature on neurodevelopmental disorders as well as other disorders including obesity and asthma. Although the method has been widely applied and has been recommended as potentially powerful, its validity and power have not been examined either theoretically or by simulation. This paper establishes the validity of the method and quantifies and explains its power. We show the method has correct type I error rates regardless of the number of traits in the model, and confirm power increases compared to standard univariate methods across different genetic models. We discover that the main source of these power gains is correlation among traits induced by a common major gene effect component. We compare the use of the complete pleiotropy model, as assumed by Marlow, to the use of a more general model allowing additional correlation parameters, and find that even when the true model includes those parameters, the complete pleiotropy model is more powerful as long as traits are moderately correlated by a major gene component. We implement this method and a power calculator in software that can assist in designing studies by using pilot data to calculate required sample sizes and choose traits for further linkage studies. We apply the software to data on reading disability in the Russian language.
|
Title: Why Global Performance is a Poor Metric for Verifying Convergence of Multi-agent Learning
|
Abstract: Experimental verification has been the method of choice for verifying the stability of a multi-agent reinforcement learning (MARL) algorithm as the number of agents grows and theoretical analysis becomes prohibitively complex. For cooperative agents, where the ultimate goal is to optimize some global metric, the stability is usually verified by observing the evolution of the global performance metric over time. If the global metric improves and eventually stabilizes, it is considered a reasonable verification of the system's stability. The main contribution of this note is establishing the need for better experimental frameworks and measures to assess the stability of large-scale adaptive cooperative systems. We show an experimental case study where the stability of the global performance metric can be rather deceiving, hiding an underlying instability in the system that later leads to a significant drop in performance. We then propose an alternative metric that relies on agents' local policies and show, experimentally, that our proposed metric is more effective (than the traditional global performance metric) in exposing the instability of MARL algorithms.
|
Title: Computation of confidence intervals in regression utilizing uncertain prior information
|
Abstract: We consider a linear regression model with regression parameter $\beta = (\beta_1, \ldots, \beta_p)$ and independent and identically $N(0, \sigma^2)$ distributed errors. Suppose that the parameter of interest is $\theta = a^T \beta$ where $a$ is a specified vector. Define the parameter $\tau = c^T \beta - t$ where the vector $c$ and the number $t$ are specified and $a$ and $c$ are linearly independent. Also suppose that we have uncertain prior information that $\tau = 0$. Kabaila and Giri (2009c) present a new frequentist $1-\alpha$ confidence interval for $\theta$ that utilizes this prior information. This interval has expected length that (a) is relatively small when the prior information about $\tau$ is correct and (b) has a maximum value that is not too large. It coincides with the standard $1-\alpha$ confidence interval (obtained by fitting the full model to the data) when the data strongly contradict the prior information. At first sight, the computation of this new confidence interval seems to be infeasible. However, by the use of the various computational devices that are presented in detail in the present paper, this computation becomes feasible and practicable.
|
Title: A Methodology for Learning Players' Styles from Game Records
|
Abstract: We describe a preliminary investigation into learning a Chess player's style from game records. The method is based on attempting to learn features of a player's individual evaluation function using the method of temporal differences, with the aid of a conventional Chess engine architecture. Some encouraging results were obtained in learning the styles of two recent Chess world champions, and we report on our attempt to use the learnt styles to discriminate between the players from game records by trying to detect who was playing white and who was playing black. We also discuss some limitations of our approach and propose possible directions for future research. The method we have presented may also be applicable to other strategic games, and may even be generalisable to other domains where sequences of agents' actions are recorded.
|
Title: Exponential Family Graph Matching and Ranking
|
Abstract: We present a method for learning max-weight matching predictors in bipartite graphs. The method consists of performing maximum a posteriori estimation in exponential families with sufficient statistics that encode permutations and data features. Although inference is in general hard, we show that for one very relevant application - web page ranking - exact inference is efficient. For general model instances, an appropriate sampler is readily available. Contrary to existing max-margin matching models, our approach is statistically consistent and, in addition, experiments with increasing sample sizes indicate a growing advantage over such models. We apply the method to graph matching in computer vision as well as to a standard benchmark dataset for learning web page ranking, in which we obtain state-of-the-art results, in particular improving on max-margin variants. The drawback of this method with respect to max-margin alternatives is its runtime for large graphs, which is comparatively high.
|
Title: Principle of development
|
Abstract: Today, science has a powerful tool for the description of reality: numbers. However, the concept of number did not emerge immediately; let us try to trace its evolution. Numbers emerged from the need for accurate estimates of quantity in order to permit the comparison of objects. Yet if you observe how many times a day a person uses numbers and how many times a person compares, it becomes evident that comparison is used much more frequently. However, comparison is not possible without two opposite basic standards. Thus, to introduce the concept of comparison, one must have two opposing standards; in turn, the operation of comparison is necessary to introduce the concept of number. Arguably, the scientific description of reality is impossible without the concept of opposites. This paper analyzes the concept of opposites as the basis for the introduction of the principle of development.
|
Title: Sparse Bayesian Hierarchical Modeling of High-dimensional Clustering Problems
|
Abstract: Clustering is one of the most widely used procedures in the analysis of microarray data, for example with the goal of discovering cancer subtypes based on observed heterogeneity of genetic marks between different tissues. It is well known that in such high-dimensional settings, the existence of many noise variables can overwhelm the few signals embedded in the high-dimensional space. We propose a novel Bayesian approach based on the Dirichlet process with a sparsity prior that simultaneously performs variable selection and clustering, and also discovers variables that only distinguish a subset of the cluster components. Unlike previous Bayesian formulations, we use the Dirichlet process (DP) both for clustering of samples and for regularizing the high-dimensional mean/variance structure. To solve the computational challenge brought by this double usage of the DP, we propose to make use of a sequential sampling scheme embedded within Markov chain Monte Carlo (MCMC) updates to improve the naive implementation of existing algorithms for DP mixture models. Our method is demonstrated on a simulation study and illustrated with the leukemia gene expression dataset.
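As a hedged illustration of the Dirichlet-process clustering component only (scikit-learn's truncated variational DP mixture, which implements neither the paper's sparsity prior nor its sequential MCMC scheme):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Toy "expression" data: 2 real clusters in 5 informative dims + 45 noise dims.
informative = np.vstack([rng.normal(0, 1, (50, 5)),
                         rng.normal(3, 1, (50, 5))])
noise = rng.normal(0, 1, (100, 45))
X = np.hstack([informative, noise])

# Truncated Dirichlet-process mixture: superfluous components get weight ~ 0.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    random_state=0,
).fit(X)
print(dpgmm.predict(X)[:5], np.round(dpgmm.weights_, 2))
```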
|
Title: L1-Penalized Quantile Regression in High-Dimensional Sparse Models
|
Abstract: We consider median regression and, more generally, a possibly infinite collection of quantile regressions in high-dimensional sparse models. In these models the overall number of regressors $p$ is very large, possibly larger than the sample size $n$, but only $s$ of these regressors have non-zero impact on the conditional quantile of the response variable, where $s$ grows slower than $n$. We consider quantile regression penalized by the $\ell_1$-norm of coefficients ($\ell_1$-QR). First, we show that $\ell_1$-QR is consistent at the rate $\sqrt{(s/n)\log p}$. The overall number of regressors $p$ affects the rate only through the $\log p$ factor, thus allowing nearly exponential growth in the number of zero-impact regressors. The rate result holds under relatively weak conditions, requiring that $s/n$ converges to zero at a super-logarithmic speed and that the regularization parameter satisfies certain theoretical constraints. Second, we propose a pivotal, data-driven choice of the regularization parameter and show that it satisfies these theoretical constraints. Third, we show that $\ell_1$-QR correctly selects the true minimal model as a valid submodel, when the non-zero coefficients of the true model are well separated from zero. We also show that the number of non-zero coefficients in $\ell_1$-QR is of the same stochastic order as $s$. Fourth, we analyze the rate of convergence of a two-step estimator that applies ordinary quantile regression to the selected model. Fifth, we evaluate the performance of $\ell_1$-QR in a Monte-Carlo experiment, and illustrate its use on an international economic growth application.
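A hedged sketch of $\ell_1$-penalized median regression using scikit-learn's QuantileRegressor, which minimizes the pinball loss plus an $\ell_1$ penalty (this is not the authors' implementation, and the illustrative alpha below does not reproduce their pivotal, data-driven choice):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5          # p >> n, only s regressors matter
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + rng.standard_t(df=3, size=n)   # heavy-tailed noise

# Median regression with an l1 penalty (pinball loss + alpha * ||w||_1).
model = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs").fit(X, y)
selected = np.flatnonzero(model.coef_)
print("nonzero coefficients:", selected[:10])
```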
|
Title: Towards an Intelligent System for Risk Prevention and Management
|
Abstract: Making a decision in a changeable and dynamic environment is an arduous task owing to the lack of information, its uncertainty, and the unawareness of planners about the future evolution of incidents. The use of a decision support system is an efficient solution to this issue. Such a system can help emergency planners and responders to detect possible emergencies, as well as to suggest and evaluate possible courses of action to deal with the emergency. In our work we are interested in the modeling of a preventive monitoring and emergency management system, wherein we stress the generic aspect. In this paper we propose an agent-based architecture for this system and we describe the first step of our approach, which is the modeling of information and its representation using a multiagent system.
|
Title: Agent-Based Decision Support System to Prevent and Manage Risk Situations
|
Abstract: The topic of risk prevention and emergency response has become a key social and political concern. One approach to address this challenge is to develop Decision Support Systems (DSS) that can help emergency planners and responders to detect emergencies, as well as to suggest possible courses of action to deal with the emergency. Our research work falls within this framework and aims to develop a DSS that is as generic as possible and independent of the case study.
|
Title: Posterior Inference in Curved Exponential Families under Increasing Dimensions
|
Abstract: This work studies the large sample properties of the posterior-based inference in the curved exponential family under increasing dimension. The curved structure arises from the imposition of various restrictions on the model, such as moment restrictions, and plays a fundamental role in econometrics and other branches of data analysis. We establish conditions under which the posterior distribution is approximately normal, which in turn implies various good properties of estimation and inference procedures based on the posterior. In the process we also revisit and improve upon previous results for the exponential family under increasing dimension by making use of concentration of measure. We also discuss a variety of applications to high-dimensional versions of the classical econometric models including the multinomial model with moment restrictions, seemingly unrelated regression equations, and single structural equation models. In our analysis, both the parameter dimension and the number of moments are increasing with the sample size.
|
Title: Efficient Construction of Neighborhood Graphs by the Multiple Sorting Method
|
Abstract: Neighborhood graphs are gaining popularity as a concise data representation in machine learning. However, naive graph construction by pairwise distance calculation takes $O(n^2)$ runtime for $n$ data points and this is prohibitively slow for millions of data points. For strings of equal length, the multiple sorting method (Uno, 2008) can construct an $\epsilon$-neighbor graph in $O(n+m)$ time, where $m$ is the number of $\epsilon$-neighbor pairs in the data. To introduce this remarkably efficient algorithm to continuous domains such as images, signals and texts, we employ a random projection method to convert vectors to strings. Theoretical results are presented to elucidate the trade-off between approximation quality and computation time. Empirical results show the efficiency of our method in comparison to fast nearest neighbor alternatives.
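A hedged, simplified sketch of the candidate-generation idea: sign random projections convert vectors to bit strings, bucketing by blocks of bit positions stands in for the multiple sorting passes, and surviving pairs are verified against the true distance (this is not Uno's exact algorithm; function and parameter names are ours):

```python
import numpy as np

def neighbor_pairs(X, eps, n_bits=32, n_blocks=4, seed=0):
    """Approximate eps-neighbor pairs: project vectors to bit strings via
    random hyperplanes, bucket by blocks of bit positions (a crude stand-in
    for the multiple sorting method), then verify true distances."""
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], n_bits))
    bits = (X @ R > 0)                       # n x n_bits sign projections
    candidates = set()
    for block in np.array_split(np.arange(n_bits), n_blocks):
        buckets = {}
        for i, key in enumerate(map(bytes, bits[:, block].astype(np.uint8))):
            buckets.setdefault(key, []).append(i)
        for idx in buckets.values():         # same block value -> candidate
            for a in range(len(idx)):
                for b in range(a + 1, len(idx)):
                    candidates.add((idx[a], idx[b]))
    return [(i, j) for i, j in candidates
            if np.linalg.norm(X[i] - X[j]) <= eps]

X = np.random.default_rng(1).normal(size=(100, 16))
print(len(neighbor_pairs(X, eps=4.0)))
```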
|
Title: FastLMFI: An Efficient Approach for Local Maximal Patterns Propagation and Maximal Patterns Superset Checking
|
Abstract: Maximal frequent patterns superset checking plays an important role in the efficient mining of complete Maximal Frequent Itemsets (MFI) and maximal search space pruning. In this paper we present a new indexing approach, FastLMFI, for local maximal frequent patterns (itemset) propagation and maximal patterns superset checking. Experimental results on different sparse and dense datasets show that our work is better than the previous well-known progressive focusing technique. We have also integrated our superset checking approach with an existing state-of-the-art maximal itemsets algorithm, Mafia, and compare our results with the current best maximal itemsets algorithms afopt-max and FP (zhu)-max. Our results outperform afopt-max and FP (zhu)-max on dense (chess and mushroom) datasets on almost all support thresholds, which shows the effectiveness of our approach.
|
Title: HybridMiner: Mining Maximal Frequent Itemsets Using Hybrid Database Representation Approach
|
Abstract: In this paper we present a novel hybrid (array-based layout and vertical bitmap layout) database representation approach for mining complete Maximal Frequent Itemsets (MFI) on sparse and large datasets. Our work is novel in terms of scalability, item search order, and its two projection techniques (horizontal and vertical). We also present a maximal algorithm using this hybrid database representation approach. Different experimental results on real and sparse benchmark datasets show that our approach is better than previous state-of-the-art maximal algorithms.
|
Title: Ramp: Fast Frequent Itemset Mining with Efficient Bit-Vector Projection Technique
|
Abstract: Mining frequent itemsets using a bit-vector representation approach is very efficient for dense datasets, but highly inefficient for sparse datasets due to the lack of any efficient bit-vector projection technique. In this paper we present a novel, efficient bit-vector projection technique for sparse and dense datasets. To check the efficiency of our bit-vector projection technique, we present a new frequent itemset mining algorithm, Ramp (Real Algorithm for Mining Patterns), built upon our bit-vector projection technique. The performance of Ramp is compared with the current best (all, maximal and closed) frequent itemset mining algorithms on benchmark datasets. Different experimental results on sparse and dense datasets show that mining frequent itemsets using Ramp is faster than the current best algorithms, which shows the effectiveness of our bit-vector projection idea. We also present a new local maximal frequent itemsets propagation and maximal itemset superset checking approach, FastLMFI, built upon our PBR bit-vector projection technique. Our computational experiments suggest that itemset maximality checking using FastLMFI is faster and more efficient than the previously well-known progressive focusing approach.
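As a hedged illustration of why bit-vector representations make frequency counting cheap (this shows only the counting style, not the paper's PBR projection technique):

```python
import numpy as np

# Transactions as rows of a boolean matrix (columns = items). The support
# of an itemset is the popcount of the AND of its item columns -- the
# counting style that bit-vector miners rely on.
transactions = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
], dtype=bool)

def support(itemset):
    """Number of transactions containing every item in `itemset`."""
    mask = np.ones(transactions.shape[0], dtype=bool)
    for item in itemset:
        mask &= transactions[:, item]   # bitwise AND per item column
    return int(mask.sum())              # popcount

print(support([0, 1]))  # items 0 and 1 together -> 3
```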
|
Title: Fast Algorithms for Mining Interesting Frequent Itemsets without Minimum Support
|
Abstract: Real-world datasets are sparse, dirty and contain hundreds of items. In such situations, discovering interesting rules (results) using a traditional frequent itemset mining approach by specifying a user-defined input support threshold is not appropriate, since without any domain knowledge, setting the support threshold too small or too large can output nothing or a large number of redundant, uninteresting results. Recently a novel approach of mining only the N-most/Top-K interesting frequent itemsets has been proposed, which discovers the top N interesting results without specifying any user-defined support threshold. However, mining interesting frequent itemsets without a minimum support threshold is more costly in terms of itemset search space exploration and processing cost. Thereby, the efficiency of their mining highly depends upon three main factors: (1) the database representation approach used for itemset frequency counting, (2) the projection of relevant transactions to lower-level nodes of the search space, and (3) the algorithm implementation technique. Therefore, to improve the efficiency of the mining process, in this paper we present two novel algorithms, N-MostMiner and Top-K-Miner, using the bit-vector representation approach, which is very efficient in terms of itemset frequency counting and transaction projection. In addition, several efficient implementation techniques for N-MostMiner and Top-K-Miner are also presented, which we experienced in our implementation. Our experimental results on benchmark datasets suggest that N-MostMiner and Top-K-Miner are very efficient in terms of processing time as compared to the current best algorithms BOMO and TFP.
|
Title: Using Association Rules for Better Treatment of Missing Values
|
Abstract: The quality of training data for knowledge discovery in databases (KDD) and data mining depends upon many factors, but handling missing values is considered to be a crucial factor in overall data quality. Today, real-world datasets contain missing values due to human error, operational error, hardware malfunction and many other factors. The quality of knowledge extracted, and of learning and decision problems, depends directly upon the quality of training data. Considering the importance of handling missing values in KDD and data mining tasks, in this paper we propose a novel Hybrid Missing values Imputation Technique (HMiT) using association rule mining in hybrid combination with a k-nearest neighbor approach. To check the effectiveness of our HMiT missing values imputation technique, we also present detailed experimental results on real-world datasets. Our results suggest that the HMiT technique is not only better in terms of accuracy but also takes less processing time as compared to the current best missing values imputation technique based on the k-nearest neighbor approach, which shows the effectiveness of our missing values imputation technique.
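A hedged sketch of the hybrid idea, assuming a toy rule table and scikit-learn's KNNImputer as the fallback (HMiT's actual association-rule mining is richer than this illustration):

```python
import numpy as np
from sklearn.impute import KNNImputer

def hybrid_impute(X, rules):
    """Hedged sketch of rule-then-kNN imputation: `rules` maps
    (column, (other_col, other_val)) -> imputed value; anything the
    rules miss falls back to k-nearest-neighbor imputation."""
    X = X.astype(float).copy()
    for i, j in zip(*np.where(np.isnan(X))):
        for k in range(X.shape[1]):
            if k != j and not np.isnan(X[i, k]):
                v = rules.get((j, (k, X[i, k])))
                if v is not None:
                    X[i, j] = v
                    break
    return KNNImputer(n_neighbors=3).fit_transform(X)

X = np.array([[1, 10], [1, np.nan], [2, 20], [2, 22]])
# Illustrative rule "mined" from complete rows: col0 == 1 implies col1 == 10.
rules = {(1, (0, 1.0)): 10.0}
print(hybrid_impute(X, rules))
```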
|
Title: Introducing Partial Matching Approach in Association Rules for Better Treatment of Missing Values
|
Abstract: Handling missing values in training datasets for constructing learning models or extracting useful information is considered to be an important research task in data mining and knowledge discovery in databases. In recent years, many techniques have been proposed for imputing missing values by considering attribute relationships between missing value observations and other observations of the training dataset. The main deficiency of such techniques is that they depend upon a single approach and do not combine multiple approaches, which is why they are less accurate. To improve the accuracy of missing values imputation, in this paper we introduce a novel partial matching concept in association rule mining, which shows better results as compared to the full matching concept that we described in our previous work. Our imputation technique combines the partial matching concept in association rules with a k-nearest neighbor approach. Since this is a hybrid technique, its accuracy is much better than that of techniques which depend upon a single approach. To check the efficiency of our technique, we also provide detailed experimental results on a number of benchmark datasets, which show better results as compared to previous approaches.
|
Title: Optimistic Initialization and Greediness Lead to Polynomial Time Learning in Factored MDPs - Extended Version
|
Abstract: In this paper we propose an algorithm for polynomial-time reinforcement learning in factored Markov decision processes (FMDPs). The factored optimistic initial model (FOIM) algorithm maintains an empirical model of the FMDP in a conventional way, and always follows a greedy policy with respect to its model. The only trick of the algorithm is that the model is initialized optimistically. We prove that with suitable initialization (i) FOIM converges to the fixed point of approximate value iteration (AVI); (ii) the number of steps when the agent makes non-near-optimal decisions (with respect to the solution of AVI) is polynomial in all relevant quantities; (iii) the per-step costs of the algorithm are also polynomial. To the best of our knowledge, FOIM is the first algorithm with these properties. This extended version contains the rigorous proofs of the main theorem. A version of this paper appeared in ICML'09.
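A toy tabular analogue of the optimistic-initialization idea (our sketch: FOIM itself maintains a factored model, not a tabular Q-table; the environment and parameters below are invented for illustration):

```python
def optimistic_greedy_q(env_step, n_states, n_actions,
                        v_max=10.0, alpha=0.1, gamma=0.95, steps=10000):
    """Toy tabular analogue of the FOIM idea: initialize values
    optimistically (at an upper bound) and always act greedily;
    exploration is driven purely by the optimism wearing off."""
    Q = [[v_max / (1 - gamma)] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(steps):
        a = max(range(n_actions), key=lambda a: Q[s][a])   # pure greed
        s2, r = env_step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
    return Q

# Tiny 2-state chain: action 1 in state 1 pays off.
def env_step(s, a):
    if s == 0:
        return (1, 0.0) if a == 1 else (0, 0.1)
    return (0, 1.0) if a == 1 else (1, 0.0)

Q = optimistic_greedy_q(env_step, n_states=2, n_actions=2)
print([[round(q, 2) for q in row] for row in Q])
```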
|
Title: A method for Hedging in continuous time
|
Abstract: We present a method for hedging in continuous time.
|
Title: Toggling operators in computability logic
|
Abstract: Computability logic (CL) (see http://www.cis.upenn.edu/ giorgi/cl.html ) is a research program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which it has more traditionally been. Formulas in CL stand for interactive computational problems, seen as games between a machine and its environment; logical operators represent operations on such entities; and "truth" is understood as existence of an effective solution. The formalism of CL is open-ended, and may undergo a series of extensions as the studies of the subject advance. So far three -- parallel, sequential and choice -- sorts of conjunction and disjunction have been studied. The present paper adds one more natural kind to this collection, termed toggling. The toggling operations can be characterized as lenient versions of choice operations where choices are retractable, being allowed to be reconsidered any finite number of times. This way, they model trial-and-error style decision steps in interactive computation. The main technical result of this paper is constructing a sound and complete axiomatization for the propositional fragment of computability logic whose vocabulary, together with negation, includes all four -- parallel, toggling, sequential and choice -- kinds of conjunction and disjunction. Along with toggling conjunction and disjunction, the paper also introduces the toggling versions of quantifiers and recurrence operations.
|
Title: Structured Variable Selection with Sparsity-Inducing Norms
|
Abstract: We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual $\ell_1$-norm and the group $\ell_1$-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problems. We first explore the relationship between the groups defining the norm and the resulting nonzero patterns, providing both forward and backward algorithms to go back and forth from groups to patterns. This allows the design of norms adapted to specific prior knowledge expressed in terms of nonzero patterns. We also present an efficient active set algorithm, and analyze the consistency of variable selection for least-squares linear regression in low and high-dimensional settings.
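A minimal sketch of the structured norm itself: $\Omega(w) = \sum_{g} \|w_g\|_2$ over possibly overlapping groups (our illustration; the active-set optimization from the paper is not shown):

```python
import numpy as np

def structured_norm(w, groups):
    """Omega(w) = sum over groups g of the Euclidean norm of w restricted
    to g; groups may overlap. Singleton groups recover the l1 norm, and a
    partition recovers the (non-overlapping) group l1 norm."""
    return sum(np.linalg.norm(w[list(g)]) for g in groups)

w = np.array([1.0, -2.0, 0.0, 3.0])
print(structured_norm(w, [[0], [1], [2], [3]]))      # l1 norm -> 6.0
print(structured_norm(w, [[0, 1], [2, 3]]))          # group l1 norm
print(structured_norm(w, [[0, 1, 2], [1, 2, 3]]))    # overlapping groups
```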
|
Title: Variations of the Turing Test in the Age of Internet and Virtual Reality
|
Abstract: Inspired by Hofstadter's Coffee-House Conversation (1982) and by the science fiction short story SAM by Schattschneider (1988), we propose and discuss criteria for non-mechanical intelligence. Firstly, we emphasize the practical need for such tests in view of massively multiuser online role-playing games (MMORPGs) and virtual reality systems like Second Life. Secondly, we demonstrate Second Life as a useful framework for implementing (some iterations of) that test.
|
Title: Introduction to Machine Learning: Class Notes 67577
|
Abstract: Introduction to Machine learning covering Statistical Inference (Bayes, EM, ML/MaxEnt duality), algebraic and spectral methods (PCA, LDA, CCA, Clustering), and PAC learning (the Formal model, VC dimension, Double Sampling theorem).
|
Title: Considerations upon the Machine Learning Technologies
|
Abstract: Artificial intelligence offers superior techniques and methods by which problems from diverse domains may find an optimal solution. Machine Learning technologies refer to the domain of artificial intelligence aiming to develop techniques that allow computers to "learn". Some systems based on Machine Learning technologies tend to eliminate the necessity of human intelligence, while others adopt a man-machine collaborative approach.
|
Title: Semantic Social Network Analysis
|
Abstract: Social Network Analysis (SNA) tries to understand and exploit the key features of social networks in order to manage their life cycle and predict their evolution. Increasingly popular web 2.0 sites are forming huge social networks. Classical methods from social network analysis (SNA) have been applied to such online networks. In this paper, we propose leveraging semantic web technologies to merge and exploit the best features of each domain. We present how to facilitate and enhance the analysis of online social networks, exploiting the power of semantic social network analysis.
|
Title: Orbit-Product Representation and Correction of Gaussian Belief Propagation
|
Abstract: We present a new view of Gaussian belief propagation (GaBP) based on a representation of the determinant as a product over orbits of a graph. We show that the GaBP determinant estimate captures totally backtracking orbits of the graph and consider how to correct this estimate. We show that the missing orbits may be grouped into equivalence classes corresponding to backtrackless orbits and the contribution of each equivalence class is easily determined from the GaBP solution. Furthermore, we demonstrate that this multiplicative correction factor can be interpreted as the determinant of a backtrackless adjacency matrix of the graph with edge weights based on GaBP. Finally, an efficient method is proposed to compute a truncated correction factor including all backtrackless orbits up to a specified length.
|
Title: Automated Epilepsy Diagnosis Using Interictal Scalp EEG
|
Abstract: Over 50 million people worldwide suffer from epilepsy. Traditional diagnosis of epilepsy relies on tedious visual screening by highly trained clinicians of lengthy EEG recordings that contain seizure (ictal) activity. Nowadays, there are many automatic systems that can recognize seizure-related EEG signals to help the diagnosis. However, it is very costly and inconvenient to obtain long-term EEG data with seizure activity, especially in areas short of medical resources. We demonstrate in this paper that we can use interictal scalp EEG data, which are much easier to collect than ictal data, to automatically diagnose whether a person is epileptic. In our automated EEG recognition system, we extract three classes of features from the EEG data and build Probabilistic Neural Networks (PNNs) fed with these features. We optimize the feature extraction parameters and combine these PNNs through a voting mechanism. As a result, our system achieves an impressive 94.07% accuracy, which is very close to reported human recognition accuracy by experienced medical professionals.
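A minimal PNN (Gaussian Parzen-window classifier) sketch for context (our illustration; the paper's EEG feature extraction, parameter optimization and multi-network voting are not reproduced):

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network: one Gaussian Parzen-window
    density estimate per class; predict the class with the highest
    average kernel score."""
    def __init__(self, sigma=1.0):
        self.sigma = sigma
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.X_, self.y_ = np.asarray(X, float), np.asarray(y)
        return self
    def predict(self, X):
        X = np.asarray(X, float)
        scores = []
        for c in self.classes_:
            Xc = self.X_[self.y_ == c]
            d2 = ((X[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
            scores.append(np.exp(-d2 / (2 * self.sigma ** 2)).mean(1))
        return self.classes_[np.argmax(scores, axis=0)]

X = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
y = np.array([0, 0, 1, 1])
print(PNN(sigma=1.0).fit(X, y).predict([[0.2, 0.5], [5.5, 5.0]]))  # [0 1]
```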
|