Abstract: There are many on-line settings in which users publicly express opinions. A number of these offer mechanisms for other users to evaluate these opinions; a canonical example is Amazon.com, where reviews come with annotations like "26 of 32 people found the following review helpful." Opinion evaluation appears in many off-line settings as well, including market research and political campaigns. Reasoning about the evaluation of an opinion is fundamentally different from reasoning about the opinion itself: rather than asking, "What did Y think of X?", we are asking, "What did Z think of Y's opinion of X?" Here we develop a framework for analyzing and modeling opinion evaluation, using a large-scale collection of Amazon book reviews as a dataset. We find that the perceived helpfulness of a review depends not just on its content but also, in subtle ways, on how the expressed evaluation relates to other evaluations of the same product. As part of our approach, we develop novel methods that take advantage of the phenomenon of review "plagiarism" to control for the effects of text in opinion evaluation, and we provide a simple and natural mathematical model consistent with our findings. Our analysis also allows us to distinguish among the predictions of competing theories from sociology and social psychology, and to discover unexpected differences in the collective opinion-evaluation behavior of user populations from different countries.
Title: Automatic Defect Detection and Classification Technique from Image: A Special Case Using Ceramic Tiles
Abstract: Quality control is an important issue in the ceramic tile industry. Maintaining the rate of production over time is also a major issue in ceramic tile manufacturing, and the price of ceramic tiles depends on the purity of texture, accuracy of color, shape, etc. Considering these criteria, an automated defect detection and classification technique is proposed in this report that can ensure better quality of tiles in the manufacturing process as well as a better production rate. Our proposed method plays an important role in the ceramic tile industry by detecting defects and controlling the quality of ceramic tiles. This automated classification method helps us to acquire knowledge about the pattern of a defect within a very short period of time and to decide on the recovery process so that defective tiles are not mixed with fresh tiles.
Title: Hybrid Rules with Well-Founded Semantics
Abstract: A general framework is proposed for integration of rules and external first order theories. It is based on the well-founded semantics of normal logic programs and inspired by ideas of Constraint Logic Programming (CLP) and constructive negation for logic programs. Hybrid rules are normal clauses extended with constraints in the bodies; constraints are certain formulae in the language of the external theory. A hybrid program is a pair of a set of hybrid rules and an external theory. Instances of the framework are obtained by specifying the class of external theories, and the class of constraints. An example instance is integration of (non-disjunctive) Datalog with ontologies formalized as description logics. The paper defines a declarative semantics of hybrid programs and a goal-driven formal operational semantics. The latter can be seen as a generalization of SLS-resolution. It provides a basis for hybrid implementations combining Prolog with constraint solvers. Soundness of the operational semantics is proven. Sufficient conditions for decidability of the declarative semantics, and for completeness of the operational semantics are given.
Title: Squeezing the Arimoto-Blahut algorithm for faster convergence
Abstract: The Arimoto--Blahut algorithm for computing the capacity of a discrete memoryless channel is revisited. A so-called ``squeezing'' strategy is used to design algorithms that preserve its simplicity and monotonic convergence properties, but have provably better rates of convergence.
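A minimal numpy sketch of the plain Arimoto-Blahut iteration may help fix ideas; it omits the "squeezing" acceleration the abstract is about, and the tolerance, iteration cap, and binary symmetric channel test case are illustrative choices, not the paper's.

```python
import numpy as np

def arimoto_blahut(W, tol=1e-9, max_iter=10_000):
    """Capacity (in nats) of a DMC with transition matrix W[x, y] = P(y | x).

    Plain Arimoto-Blahut iteration, the baseline the abstract starts from."""
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)            # start from the uniform input law
    for _ in range(max_iter):
        q = p @ W                             # output distribution under p
        # D(W(.|x) || q) for every input symbol x (0 log 0 treated as 0)
        with np.errstate(divide="ignore", invalid="ignore"):
            logratio = np.where(W > 0, np.log(W / q), 0.0)
        d = np.exp((W * logratio).sum(axis=1))
        lower = np.log(p @ d)                 # lower bound on capacity
        upper = np.log(d.max())               # upper bound on capacity
        p = p * d / (p @ d)                   # multiplicative update
        if upper - lower < tol:
            break
    return lower, p

# Binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1) bits
W = np.array([[0.9, 0.1], [0.1, 0.9]])
C, p_opt = arimoto_blahut(W)
print(C / np.log(2))   # ~0.531 bits
```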
Title: Bayesian Forecasting of WWW Traffic on the Time Varying Poisson Model
Abstract: Forecasting traffic from past observed traffic data with low computational complexity is an important problem for the planning of servers and networks. Focusing on World Wide Web (WWW) traffic as a fundamental investigation, this paper deals with Bayesian forecasting of network traffic under the time-varying Poisson model from the viewpoint of statistical decision theory. Under this model, we show that the forecast value is obtained by a simple arithmetic calculation and describes real WWW traffic well from both theoretical and empirical points of view.
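As a hedged illustration only (this is not the paper's exact model), the sketch below shows how a discounted Gamma-Poisson filter for a slowly varying Poisson rate yields one-step-ahead forecasts by simple arithmetic on the past counts; the forgetting factor rho and the prior values a0, b0 are assumptions introduced for the example.

```python
def forecast_counts(counts, rho=0.95, a0=1.0, b0=1.0):
    """One-step-ahead predictive means for a sequence of observed counts,
    using a discounted Gamma-Poisson conjugate update (illustrative only)."""
    a, b = a0, b0                  # Gamma(a, b) belief about the current rate
    forecasts = []
    for y in counts:
        forecasts.append(a / b)    # predictive mean before seeing y
        a = rho * a + y            # discounted conjugate update
        b = rho * b + 1.0
    return forecasts

print(forecast_counts([12, 15, 9, 20, 18]))
```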
Title: Soft Constraints for Quality Aspects in Service Oriented Architectures
Abstract: We propose the use of Soft Constraints as a natural way to model Service Oriented Architectures. In the framework, constraints are used to model components and connectors, and constraint aggregation is used to represent their interactions. The "quality of a service" is measured and considered when performing queries to service providers. Examples include the levels of cost, performance, and availability required by clients. In our framework, the QoS scores are represented by the softness level of the constraint, and the measure of complex (web) services is computed by combining the levels of the components.
Title: Accurate Parametric Inference for Small Samples
Abstract: We outline how modern likelihood theory, which provides essentially exact inferences in a variety of parametric statistical problems, may routinely be applied in practice. Although the likelihood procedures are based on analytical asymptotic approximations, the focus of this paper is not on theory but on implementation and applications. Numerical illustrations are given for logistic regression, nonlinear models, and linear non-normal models, and we describe a sampling approach for the third of these classes. In the case of logistic regression, we argue that approximations are often more appropriate than `exact' procedures, even when these exist.
Title: Principal Fitted Components for Dimension Reduction in Regression
Abstract: We provide a remedy for two concerns that have dogged the use of principal components in regression: (i) principal components are computed from the predictors alone and do not make apparent use of the response, and (ii) principal components are not invariant or equivariant under full rank linear transformation of the predictors. The development begins with principal fitted components [Cook, R. D. (2007). Fisher lecture: Dimension reduction in regression (with discussion). Statist. Sci. 22 1--26] and uses normal models for the inverse regression of the predictors on the response to gain reductive information for the forward regression of interest. This approach includes methodology for testing hypotheses about the number of components and about conditional independencies among the predictors.
Title: The Golden Age of Statistical Graphics
Abstract: Statistical graphics and data visualization have long histories, but their modern forms began only in the early 1800s. Between roughly 1850 and 1900 ($\pm10$), an explosive growth occurred in both the general use of graphic methods and the range of topics to which they were applied. Innovations were prodigious and some of the most exquisite graphics ever produced appeared, resulting in what may be called the ``Golden Age of Statistical Graphics.'' In this article I trace the origins of this period in terms of the infrastructure required to produce this explosive growth: recognition of the importance of systematic data collection by the state; the rise of statistical theory and statistical thinking; enabling developments of technology; and inventions of novel methods to portray statistical data. To illustrate, I describe some specific contributions that give rise to the appellation ``Golden Age.''
Title: Remembering Wassily Hoeffding
Abstract: Wassily Hoeffding's terminal illness and untimely death in 1991 put an end to efforts that were made to interview him for Statistical Science. An account of his scientific work is given in Fisher and Sen [The Collected Works of Wassily Hoeffding (1994) Springer], but the present authors felt that the statistical community should also be told about the life of this remarkable man. He contributed much to statistical science, but will also live on in the memory of those who knew him as a kind and modest teacher and friend, whose courage and learning were matched by a wonderful sense of humor.
Title: Bayesian two-sample tests
Abstract: In this paper, we present two classes of Bayesian approaches to the two-sample problem. Our first class of methods extends the Bayesian t-test to include all parametric models in the exponential family and their conjugate priors. Our second class of methods uses Dirichlet process mixtures (DPM) of such conjugate-exponential distributions as flexible nonparametric priors over the unknown distributions.
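To make the first class of methods concrete, here is a toy conjugate Bayesian two-sample comparison for Gaussian data with known variance (a Bayes factor of "separate means" against "one shared mean"); the known-variance simplification and the prior variance tau2 are illustrative assumptions, not the paper's general exponential-family treatment.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_marginal(z, sigma2=1.0, tau2=10.0):
    """log p(z) for z ~ N(mu, sigma2 I) with mu ~ N(0, tau2), mu integrated out."""
    n = len(z)
    cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
    return multivariate_normal.logpdf(z, mean=np.zeros(n), cov=cov)

def log_bayes_factor(x, y):
    """log BF of 'two separate means' against 'one shared mean'."""
    h1 = log_marginal(x) + log_marginal(y)          # separate means
    h0 = log_marginal(np.concatenate([x, y]))       # shared mean
    return h1 - h0

rng = np.random.default_rng(0)
print(log_bayes_factor(rng.normal(0, 1, 40), rng.normal(1.5, 1, 40)))  # strongly favors separate means
```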
Title: Physical Modeling Techniques in Active Contours for Image Segmentation
Abstract: A physical modeling method, based on simulating and visualizing physical principles, is introduced for shape extraction with active contours. The objective of adopting this concept is to address several major difficulties in the application of Active Contours. First, a technique is developed to realize topological changes of Parametric Active Contours (Snakes). The key strategy is to imitate the process of a balloon expanding and filling a closed space containing several objects. After removing the touching balloon surfaces, the objects can be identified by the remaining balloon surfaces surrounding them. A burned region swept by the Snakes is used to trace the contour and to give a criterion for stopping the movement of the Snake curve. Once the Snakes have terminated their evolution, ignoring this criterion and evolving the Snakes again while continuing the region burning forms a connected area. The contours extracted from the boundaries of the burned area represent the child snakes of the individual objects. Second, a novel scheme is designed to solve the problems of leakage of the contour through large gaps and of segmentation error in Geometric Active Contours (GAC). It divides the segmentation procedure into two processing stages. By simulating a wave propagating in an isotropic substance in the final stage, it significantly enhances the effect of the image force in level-set-based GAC and gives satisfactory solutions to the two problems. Third, to support the physical models for active contours above, we introduce a general image force field created on a template plane over the image plane. This force is more adaptable to noisy images with complicated geometric shapes.
Title: Recommender Systems for the Conference Paper Assignment Problem
Abstract: Conference paper assignment, i.e., the task of assigning paper submissions to reviewers, presents multi-faceted issues for recommender systems research. Besides the traditional goal of predicting `who likes what?', a conference management system must take into account aspects such as: reviewer capacity constraints, adequate numbers of reviews for papers, expertise modeling, conflicts of interest, and an overall distribution of assignments that balances reviewer preferences with conference objectives. Among these, issues of modeling preferences and tastes in reviewing have traditionally been studied separately from the optimization of paper-reviewer assignment. In this paper, we present an integrated study of both these aspects. First, due to the paucity of data per reviewer or per paper (relative to other recommender systems applications) we show how we can integrate multiple sources of information to learn paper-reviewer preference models. Second, our models are evaluated not just in terms of prediction accuracy but in terms of the end-assignment quality. Using a linear programming-based assignment optimization formulation, we show how our approach better explores the space of unsupplied assignments to maximize the overall affinities of papers assigned to reviewers. We demonstrate our results on real reviewer preference data from the IEEE ICDM 2007 conference.
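A generic linear-programming relaxation of the assignment step might look as follows; the affinity matrix, per-paper review count, and reviewer capacity are placeholder inputs, and this is a standard formulation sketched under those assumptions rather than the authors' exact program.

```python
import numpy as np
from scipy.optimize import linprog

def assign_papers(affinity, reviews_per_paper=3, cap_per_reviewer=5):
    """LP relaxation of paper-reviewer assignment.

    affinity[i, j] = learned preference of reviewer i for paper j.
    For this transportation-like structure the relaxation typically
    returns an integral vertex solution."""
    n_rev, n_pap = affinity.shape
    c = -affinity.ravel()                      # maximize total affinity
    # each paper j must receive exactly `reviews_per_paper` reviews
    A_eq = np.zeros((n_pap, n_rev * n_pap))
    for j in range(n_pap):
        A_eq[j, j::n_pap] = 1.0
    b_eq = np.full(n_pap, reviews_per_paper)
    # each reviewer i handles at most `cap_per_reviewer` papers
    A_ub = np.zeros((n_rev, n_rev * n_pap))
    for i in range(n_rev):
        A_ub[i, i * n_pap:(i + 1) * n_pap] = 1.0
    b_ub = np.full(n_rev, cap_per_reviewer)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, 1), method="highs")
    return res.x.reshape(n_rev, n_pap)

rng = np.random.default_rng(0)
x = assign_papers(rng.random((10, 6)))
print(x.sum(axis=0))   # each paper receives 3 reviews
```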
Title: An Event Based Approach To Situational Representation
Abstract: Many application domains require representing interrelated real-world activities and/or evolving physical phenomena. In the crisis response domain, for instance, one may be interested in representing the state of the unfolding crisis (e.g., forest fire), the progress of the response activities such as evacuation and traffic control, and the state of the crisis site(s). Such a situation representation can then be used to support a multitude of applications including situation monitoring, analysis, and planning. In this paper, we make a case for an event based representation of situations where events are defined to be domain-specific significant occurrences in space and time. We argue that events offer a unifying and powerful abstraction to building situational awareness applications. We identify challenges in building an Event Management System (EMS) for which traditional data and knowledge management systems prove to be limited and suggest possible directions and technologies to address the challenges.
Title: Automatic Spatially-Adaptive Balancing of Energy Terms for Image Segmentation
Abstract: Image segmentation techniques are predominately based on parameter-laden optimization. The objective function typically involves weights for balancing competing image fidelity and segmentation regularization cost terms. Setting these weights suitably has been a painstaking, empirical process. Even if such ideal weights are found for a novel image, most current approaches fix the weight across the whole image domain, ignoring the spatially-varying properties of object shape and image appearance. We propose a novel technique that autonomously balances these terms in a spatially-adaptive manner through the incorporation of image reliability in a graph-based segmentation framework. We validate on synthetic data achieving a reduction in mean error of 47% (p-value << 0.05) when compared to the best fixed parameter segmentation. We also present results on medical images (including segmentations of the corpus callosum and brain tissue in MRI data) and on natural images.
Title: A Conversation with Pranab Kumar Sen
Abstract: Pranab Kumar Sen was born on November 7, 1937 in Calcutta, India. His father died when Pranab was 10 years old, so his mother raised the family of seven children. Given his superior performance on an exam, Pranab nearly went into medical school, but did not because he was underage. He received a B.Sc. degree in 1955 and an M.Sc. degree in 1957 in statistics from Calcutta University, topping the class both times. Dr. Sen's dissertation on order statistics and nonparametrics, under the direction of Professor Hari Kinkar Nandi, was completed in 1961. After teaching for three years at Calcutta University, 1961--1964, Professor Sen came to Berkeley as a Visiting Assistant Professor in 1964. In 1965, he joined the Departments of Statistics and Biostatistics at the University of North Carolina at Chapel Hill, where he has remained. Professor Sen's pioneering contributions have touched nearly every area of statistics. He is the first person who, in joint collaboration with Professor S. K. Chatterjee, developed multivariate rank tests as well as time-sequential nonparametric methods. He is also the first person who carried out in-depth research in sequential nonparametrics culminating in his now famous Wiley book Sequential Nonparametrics: Invariance Principles and Statistical Inference and SIAM monograph. Professor Sen has over 600 research publications. In addition, he has authored or co-authored 11 books and monographs, and has edited or co-edited 11 more volumes. He has supervised over 80 Ph.D. students, many of whom have achieved distinction both nationally and internationally. Professor Sen is the founding co-editor of two international journals: Sequential Analysis and Statistics and Decisions.
Title: Rough Set Model for Discovering Hybrid Association Rules
Abstract: In this paper, the mining of hybrid association rules with a rough set approach is investigated via the algorithm RSHAR. The RSHAR algorithm consists of two main steps. First, the participating tables are joined into a general table to generate rules expressing relationships between two or more domains that belong to several different tables in a database. A mapping code is then applied to the selected dimension and can be added directly into the information system as an attribute. To find the association rules, frequent itemsets are generated in the second step, where candidate itemsets are generated through equivalence classes and the mapping code is transformed back into real dimensions. The search method for candidate itemsets is similar to the Apriori algorithm. An analysis of the performance of the algorithm has been carried out.
Title: On Chase Termination Beyond Stratification
Abstract: We study the termination problem of the chase algorithm, a central tool in various database problems such as the constraint implication problem, Conjunctive Query optimization, rewriting queries using views, data exchange, and data integration. The basic idea of the chase is, given a database instance and a set of constraints as input, to fix constraint violations in the database instance. It is well-known that, for an arbitrary set of constraints, the chase does not necessarily terminate (in general, it is even undecidable if it does or not). Addressing this issue, we review the limitations of existing sufficient termination conditions for the chase and develop new techniques that allow us to establish weaker sufficient conditions. In particular, we introduce two novel termination conditions called safety and inductive restriction, and use them to define the so-called T-hierarchy of termination conditions. We then study the interrelations of our termination conditions with previous conditions and the complexity of checking our conditions. This analysis leads to an algorithm that checks membership in a level of the T-hierarchy and accounts for the complexity of termination conditions. As another contribution, we study the problem of data-dependent chase termination and present sufficient termination conditions w.r.t. fixed instances. They might guarantee termination although the chase does not terminate in the general case. As an application of our techniques beyond those already mentioned, we transfer our results into the field of query answering over knowledge bases where the chase on the underlying database may not terminate, making existing algorithms applicable to broader classes of constraints.
Title: The Feature Importance Ranking Measure
Abstract: Most accurate predictions are typically obtained by learning machines with complex feature spaces (as e.g. induced by kernels). Unfortunately, such decision rules are hardly accessible to humans and cannot easily be used to gain insights about the application domain. Therefore, one often resorts to linear models in combination with variable selection, thereby sacrificing some predictive power for presumptive interpretability. Here, we introduce the Feature Importance Ranking Measure (FIRM), which, by retrospective analysis of arbitrary learning machines, allows one to achieve both excellent predictive performance and superior interpretation. In contrast to standard raw feature weighting, FIRM takes the underlying correlation structure of the features into account. Thereby, it is able to discover the most relevant features, even if their appearance in the training data is entirely prevented by noise. The desirable properties of FIRM are investigated analytically and illustrated in simulations.
Title: Constructive Decision Theory
Abstract: In most contemporary approaches to decision making, a decision problem is described by a set of states and a set of outcomes, and a rich set of acts, which are functions from states to outcomes over which the decision maker (DM) has preferences. Most interesting decision problems, however, do not come with a state space and an outcome space. Indeed, in complex problems it is often far from clear what the state and outcome spaces would be. We present an alternative foundation for decision making, in which the primitive objects of choice are syntactic programs. A representation theorem is proved in the spirit of standard representation theorems, showing that if the DM's preference relation on objects of choice satisfies appropriate axioms, then there exist a set S of states, a set O of outcomes, a way of interpreting the objects of choice as functions from S to O, a probability on S, and a utility function on O, such that the DM prefers choice a to choice b if and only if the expected utility of a is higher than that of b. Thus, the state space and outcome space are subjective, just like the probability and utility; they are not part of the description of the problem. In principle, a modeler can test for SEU behavior without having access to states or outcomes. We illustrate the power of our approach by showing that it can capture decision makers who are subject to framing effects.
Title: Reasoning About Knowledge of Unawareness Revisited
Abstract: In earlier work, we proposed a logic that extends the Logic of General Awareness of Fagin and Halpern [1988] by allowing quantification over primitive propositions. This makes it possible to express the fact that an agent knows that there are some facts of which he is unaware. In that logic, it is not possible to model an agent who is uncertain about whether he is aware of all formulas. To overcome this problem, we keep the syntax of the earlier paper, but allow models where, with each world, a possibly different language is associated. We provide a sound and complete axiomatization for this logic and show that, under natural assumptions, the quantifier-free fragment of the logic is characterized by exactly the same axioms as the logic of Heifetz, Meier, and Schipper [2008].
Title: A Logical Characterization of Iterated Admissibility
Abstract: Brandenburger, Friedenberg, and Keisler provide an epistemic characterization of iterated admissibility (i.e., iterated deletion of weakly dominated strategies) where uncertainty is represented using LPSs (lexicographic probability sequences). Their characterization holds in a rich structure called a complete structure, where all types are possible. Here, a logical characterization of iterated admissibility is given that involves only standard probability and holds in all structures, not just complete structures. A stronger notion of strong admissibility is then defined. Roughly speaking, strong admissibility is meant to capture the intuition that "all the agent knows" is that the other agents satisfy the appropriate rationality assumptions. Strong admissibility makes it possible to relate admissibility, canonical structures (as typically considered in completeness proofs in modal logic), complete structures, and the notion of ``all I know''.
Title: A Bayes factor with reasonable model selection consistency for ANOVA model
Abstract: For the ANOVA model, we propose a new g-prior based Bayes factor without an integral representation, with reasonable model selection consistency in any asymptotic situation (the number of levels of the factor and/or the number of replications in each level going to infinity). An exact analytic calculation of the marginal density under a special choice of the priors enables such a Bayes factor.
Title: Updating Sets of Probabilities
Abstract: There are several well-known justifications for conditioning as the appropriate method for updating a single probability measure, given an observation. However, there is a significant body of work arguing for sets of probability measures, rather than single measures, as a more realistic model of uncertainty. Conditioning still makes sense in this context--we can simply condition each measure in the set individually, then combine the results--and, indeed, it seems to be the preferred updating procedure in the literature. But how justified is conditioning in this richer setting? Here we show, by considering an axiomatic account of conditioning given by van Fraassen, that the single-measure and sets-of-measures cases are very different. We show that van Fraassen's axiomatization for the former case is nowhere near sufficient for updating sets of measures. We give a considerably longer (and not as compelling) list of axioms that together force conditioning in this setting, and describe other update methods that are allowed once any of these axioms is dropped.
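The measure-by-measure conditioning rule discussed above can be sketched in a few lines; dropping measures that assign the evidence probability zero is one common convention and is an assumption of this sketch, not a claim about the paper's axiomatic treatment.

```python
def condition_set(measures, event):
    """Condition every measure in the set on the observed event.

    measures: list of dicts mapping world -> probability
    event: set of worlds consistent with the observation
    Measures assigning the event probability zero are dropped."""
    updated = []
    for p in measures:
        z = sum(prob for w, prob in p.items() if w in event)
        if z > 0:
            updated.append({w: prob / z for w, prob in p.items() if w in event})
    return updated

prior_set = [{"a": 0.5, "b": 0.3, "c": 0.2}, {"a": 0.1, "b": 0.1, "c": 0.8}]
print(condition_set(prior_set, {"a", "b"}))
```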
Title: KNIFE: Kernel Iterative Feature Extraction
Abstract: Selecting important features in non-linear or kernel spaces is a difficult challenge in both classification and regression problems. When many of the features are irrelevant, kernel methods such as the support vector machine and kernel ridge regression can sometimes perform poorly. We propose weighting the features within a kernel with a sparse set of weights that are estimated in conjunction with the original classification or regression problem. The iterative algorithm, KNIFE, alternates between finding the coefficients of the original problem and finding the feature weights through kernel linearization. In addition, a slight modification of KNIFE yields an efficient algorithm for finding feature regularization paths, or the paths of each feature's weight. Simulation results demonstrate the utility of KNIFE for both kernel regression and support vector machines with a variety of kernels. Feature path realizations also reveal important non-linear correlations among features that prove useful in determining a subset of significant variables. Results on vowel recognition data, Parkinson's disease data, and microarray data are also given.
Title: Learning with Spectral Kernels and Heavy-Tailed Data
Abstract: Two ubiquitous aspects of large-scale data analysis are that the data often have heavy-tailed properties and that diffusion-based or spectral-based methods are often used to identify and extract structure of interest. Perhaps surprisingly, popular distribution-independent methods such as those based on the VC dimension fail to provide nontrivial results for even simple learning problems such as binary classification in these two settings. In this paper, we develop distribution-dependent learning methods that can be used to provide dimension-independent sample complexity bounds for the binary classification problem in these two popular settings. In particular, we provide bounds on the sample complexity of maximum margin classifiers when the magnitude of the entries in the feature vector decays according to a power law and also when learning is performed with the so-called Diffusion Maps kernel. Both of these results rely on bounding the annealed entropy of gap-tolerant classifiers in a Hilbert space. We provide such a bound, and we demonstrate that our proof technique generalizes to the case when the margin is measured with respect to more general Banach space norms. The latter result is of potential interest in cases where modeling the relationship between data elements as a dot product in a Hilbert space is too restrictive.
Title: On landmark selection and sampling in high-dimensional data analysis
Abstract: In recent years, the spectral analysis of appropriately defined kernel matrices has emerged as a principled way to extract the low-dimensional structure often prevalent in high-dimensional data. Here we provide an introduction to spectral methods for linear and nonlinear dimension reduction, emphasizing ways to overcome the computational limitations currently faced by practitioners with massive datasets. In particular, a data subsampling or landmark selection process is often employed to construct a kernel based on partial information, followed by an approximate spectral analysis termed the Nystrom extension. We provide a quantitative framework to analyse this procedure, and use it to demonstrate algorithmic performance bounds on a range of practical approaches designed to optimize the landmark selection process. We compare the practical implications of these bounds by way of real-world examples drawn from the field of computer vision, whereby low-dimensional manifold structure is shown to emerge from high-dimensional video data streams.
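For concreteness, a small numpy sketch of the Nystrom extension with uniformly sampled landmarks is given below; the RBF kernel, the landmark count, and uniform sampling are illustrative choices, and smarter landmark-selection rules of the kind analysed above would replace the sampling step.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=1.0, seed=0):
    """Approximate top eigenpairs of the full RBF kernel on X from m
    uniformly sampled landmarks (the simplest landmark-selection rule)."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=m, replace=False)   # landmark indices
    C = rbf(X, X[idx], gamma)                    # n x m cross-kernel
    W = C[idx]                                   # m x m landmark kernel
    evals, U = np.linalg.eigh(W)                 # ascending order
    evals, U = evals[::-1], U[:, ::-1]
    keep = evals > 1e-10
    # Nystrom-extended eigenvectors and rescaled eigenvalues of the full kernel
    vecs = C @ U[:, keep] / evals[keep] * np.sqrt(m / n)
    vals = (n / m) * evals[keep]
    return vals, vecs

X = np.random.default_rng(1).normal(size=(500, 3))
vals, vecs = nystrom(X, m=50)
print(vals[:5])
```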
Title: Acquiring Knowledge for Evaluation of Teachers Performance in Higher Education using a Questionnaire
Abstract: In this paper, we present the step-by-step knowledge acquisition process, choosing a structured method and using a questionnaire as the knowledge acquisition tool. The problem domain is how to evaluate teachers' performance in higher education through the use of expert system technology. The problem is how to acquire the specific knowledge for a selected problem efficiently and effectively from human experts and encode it in a suitable computer format. Acquiring knowledge from human experts in the process of expert system development is one of the most commonly cited problems to date. The questionnaire was sent to 87 domain experts at public and private universities in Pakistan. Among them, 25 domain experts sent their valuable opinions. Most of the domain experts were highly qualified, well experienced and in highly responsible positions. The whole questionnaire was divided into 15 main groups of factors, which were further divided into 99 individual questions. These facts were analyzed further to give a final shape to the questionnaire. This knowledge acquisition technique may be used as a learning tool for further research work.
Title: Bayesian separation of spectral sources under non-negativity and full additivity constraints
Abstract: This paper addresses the problem of separating spectral sources that are linearly mixed with unknown proportions. The main difficulty of the problem is to ensure the full additivity (sum-to-one) of the mixing coefficients and the non-negativity of the sources and mixing coefficients. A Bayesian estimation approach based on Gamma priors was recently proposed to handle the non-negativity constraints in a linear mixture model. However, incorporating the full additivity constraint requires further developments. This paper studies a new hierarchical Bayesian model appropriate to the non-negativity and sum-to-one constraints associated with the regressors and regression coefficients of linear mixtures. The estimation of the unknown parameters of this model is performed using samples generated by an appropriate Gibbs sampler. The performance of the proposed algorithm is evaluated through simulations conducted on synthetic mixture models. The proposed approach is also applied to the processing of multicomponent chemical mixtures resulting from Raman spectroscopy.
Title: Minimum Probability Flow Learning
Abstract: Fitting probabilistic models to data is often difficult, due to the general intractability of the partition function and its derivatives. Here we propose a new parameter estimation technique that does not require computing an intractable normalization factor or sampling from the equilibrium distribution of the model. This is achieved by establishing dynamics that would transform the observed data distribution into the model distribution, and then setting as the objective the minimization of the KL divergence between the data distribution and the distribution produced by running the dynamics for an infinitesimal time. Score matching, minimum velocity learning, and certain forms of contrastive divergence are shown to be special cases of this learning technique. We demonstrate parameter estimation in Ising models, deep belief networks and an independent component analysis model of natural scenes. In the Ising model case, current state of the art techniques are outperformed by at least an order of magnitude in learning time, with lower error in recovered coupling parameters.
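A minimal sketch of the minimum-probability-flow objective for a fully visible Ising model with single-bit-flip connectivity (one of the cases mentioned above) follows; the toy correlated-pair data and the use of a generic L-BFGS optimizer are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def mpf_objective(j_flat, X):
    """Minimum-probability-flow objective for a fully visible Ising model
    with single-bit-flip connectivity.  X holds +/-1 spins, one sample per row."""
    n = X.shape[1]
    J = j_flat.reshape(n, n)
    J = 0.5 * (J + J.T)                # symmetrize the coupling matrix
    np.fill_diagonal(J, 0.0)
    # flipping spin i changes the energy by 2 * x_i * (J x)_i, so the flow
    # out of each data state is sum_i exp(-x_i * (J x)_i)
    field = X @ J
    return np.exp(-X * field).sum(axis=1).mean()

# toy data: a strongly correlated pair of spins
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(2000, 1))
X = np.hstack([X, np.where(rng.random((2000, 1)) < 0.9, X, -X)])

res = minimize(mpf_objective, np.zeros(4), args=(X,), method="L-BFGS-B")
print(res.x.reshape(2, 2))   # off-diagonal entries recover a positive coupling
```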
Title: Efficient IRIS Recognition through Improvement of Feature Extraction and subset Selection
Abstract: The selection of the optimal feature subset and the classification have become important issues in the field of iris recognition. In this paper we propose several methods for iris feature subset selection and vector creation. The deterministic feature sequence is extracted from the iris image using the contourlet transform technique. The contourlet transform captures the intrinsic geometrical structures of the iris image, decomposing it into a set of directional sub-bands with texture details captured in different orientations at various scales. To reduce the feature vector dimensions, we use a method that extracts only the significant bits and information from the normalized iris images, ignoring fragile bits. Finally, we use an SVM (Support Vector Machine) classifier to estimate the identification rate of our proposed system. Experimental results show that the proposed method reduces processing time and increases classification accuracy, and that the iris feature vector length is much smaller than in other methods.
Title: A Bounded Derivative Method for the Maximum Likelihood Estimation on Weibull Parameters
Abstract: For the basic maximum likelihood estimating function of the two-parameter Weibull distribution, a simple proof of its global monotonicity is given to ensure the existence and uniqueness of its solution. The boundary of the function's first-order derivative is derived based on its scale-free property. With a bounded derivative, the possible range of the root of this function can be determined. A novel root-finding algorithm employing these established results is proposed accordingly, and its convergence is proved analytically as well. Compared with other typical algorithms for this problem, the efficiency of the proposed algorithm is also demonstrated by numerical experiments.
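The estimating function in question is the profile-likelihood equation for the shape parameter, which is monotone in the shape; the sketch below solves it by plain bisection rather than the bounded-derivative scheme of the abstract, with the bracket endpoints chosen arbitrarily for illustration.

```python
import numpy as np

def weibull_mle(x, lo=1e-3, hi=100.0, tol=1e-10):
    """Shape/scale MLE for the two-parameter Weibull via the profile equation
    g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0,
    which is monotone in k; bisection is used here for simplicity."""
    x = np.asarray(x, dtype=float)
    lnx, mean_lnx = np.log(x), np.log(x).mean()

    def g(k):
        xk = x ** k
        return (xk * lnx).sum() / xk.sum() - 1.0 / k - mean_lnx

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid              # root lies above mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    scale = ((x ** k).mean()) ** (1.0 / k)
    return k, scale

rng = np.random.default_rng(0)
sample = rng.weibull(2.5, size=5000) * 3.0    # true shape 2.5, scale 3.0
print(weibull_mle(sample))                     # roughly (2.5, 3.0)
```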
Title: Vision Based Navigation for a Mobile Robot with Different Field of Views
Abstract: The basic idea behind evolutionary robotics is to evolve a set of neural controllers for a particular task at hand. It involves the use of various inputs such as infrared sensors, light sensors and vision-based methods. This paper aims to explore the evolution of vision-based navigation in a mobile robot. It discusses in detail the effect of different fields of view (FOV) for a mobile robot. The individuals have been evolved using different FOV values and the results have been recorded and analyzed. The optimum values for the FOV have been proposed after evaluating more than 100 different values. It has been observed that the optimum FOV value requires a smaller number of generations for evolution, and the mobile robot trained with that particular value is able to navigate well in the environment.
Title: Inference for graphs and networks: Extending classical tools to modern data
Abstract: Graphs and networks provide a canonical representation of relational data, with massive network data sets becoming increasingly prevalent across a variety of scientific fields. Although tools from mathematics and computer science have been eagerly adopted by practitioners in the service of network inference, they do not yet comprise a unified and coherent framework for the statistical analysis of large-scale network data. This paper serves as both an introduction to the topic and a first step toward formal inference procedures. We develop and illustrate our arguments using the example of hypothesis testing for network structure. We invoke a generalized likelihood ratio framework and use it to highlight the growing number of topics in this area that require strong contributions from statistical science. We frame our discussion in the context of previous work from across a variety of disciplines, and conclude by outlining fundamental statistical challenges whose solutions will in turn serve to advance the science of network inference.