Title: Development of Hybrid Intelligent Systems and their Applications from Engineering Systems to Complex Systems
Abstract: In this study, we introduce a general framework for Many Connected Intelligent Particles Systems (MACIPS). Connections and interconnections between particles give rise to complex behavior in an otherwise simple system (a system within a system). Contributions of natural computing, under information granulation theory, are the main topic of this broad framework. Building on this idea, we organize different algorithms involving several prominent intelligent computing and approximate reasoning methods, such as the self-organizing feature map (SOM) [9], the Neuro-Fuzzy Inference System [10], Rough Set Theory (RST) [11], collaborative clustering, genetic algorithms, and the Ant Colony System. We have applied our algorithms to several engineering systems, especially those arising in civil engineering and mineral processing. In a separate line of work, we investigated how our algorithms can model government-society interaction, where the government adopts various modes of behavior: rigid (absolute) or flexible. The transition of such a society from order to disorder, driven by changes in the connectivity parameters (noise), is then inferred. In addition, one may find an indirect mapping between financial systems and eventual market fluctuations within MACIPS. In the following sections we briefly outline the main topics of the proposal; details of the proposed algorithms can be found in the references.
Title: Developing Bayesian Information Entropy-based Techniques for Spatially Explicit Model Assessment
Abstract: The aim of this paper is to explore and develop advanced spatial Bayesian assessment methods and techniques for land use modeling. The paper provides a comprehensive guide for assessing the additional informational entropy value of model predictions at the spatially explicit domain of knowledge, and proposes a few alternative metrics and indicators for extracting higher-order information dynamics from simulation tournaments. A seven-county study area in South-Eastern Wisconsin (SEWI) has been used to simulate and assess the accuracy of historical land use changes (1963-1990) using artificial neural network simulations of the Land Transformation Model (LTM). The use of the analysis and the performance of the metrics helps: (a) understand and learn how well the model runs fit different combinations of presence and absence of transitions in a landscape, not simply how well the model fits our given data; (b) derive (estimate) the theoretical accuracy that we would expect a model to achieve under incomplete information and measurement; (c) understand the spatially explicit role and patterns of uncertainty in simulations and model estimations, by comparing results across simulation runs; (d) compare the significance or estimation contribution of transitional presence and absence (change versus no change) to model performance, and the contribution of the spatial drivers and variables to the explanatory value of our model; and (e) compare measurements of informational uncertainty at different scales of spatial resolution.
Title: A chain dictionary method for Word Sense Disambiguation and applications
Abstract: A large class of unsupervised algorithms for Word Sense Disambiguation (WSD) is that of dictionary-based methods. Various algorithms have as their root Lesk's algorithm, which directly exploits the sense definitions in the dictionary. Our approach uses the lexical database WordNet for a new algorithm originating in Lesk's, namely the "chain algorithm for disambiguation of all words" (CHAD). We show how translation from one language into another, as well as text entailment verification, can be accomplished with this disambiguation.
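The overlap idea at the root of such dictionary-based methods can be sketched as a simplified Lesk procedure. This is a minimal illustration only, using a hypothetical toy sense inventory rather than WordNet glosses, and it is not the CHAD algorithm itself:

```python
# Simplified Lesk sketch: pick the sense whose gloss shares the most
# words with the surrounding context. TOY_SENSES is hypothetical.

TOY_SENSES = {
    "bank": {
        "bank.n.01": "sloping land beside a body of water river",
        "bank.n.02": "financial institution that accepts deposits money",
    },
}

def simplified_lesk(word, context, senses=TOY_SENSES):
    """Return the sense whose gloss overlaps the context the most."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in senses[word].items():
        overlap = len(context_words & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(simplified_lesk("bank", "I deposited money at the bank"))  # -> bank.n.02
```

Chain-style variants extend this by carrying already-disambiguated words forward as extra context for the next ambiguous word.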
Title: Manifold Learning: The Price of Normalization
Abstract: We analyze the performance of a class of manifold-learning algorithms that find their output by minimizing a quadratic form under some normalization constraints. This class consists of Locally Linear Embedding (LLE), Laplacian Eigenmap, Local Tangent Space Alignment (LTSA), Hessian Eigenmaps (HLLE), and Diffusion maps. We present and prove conditions on the manifold that are necessary for the success of the algorithms. Both the finite sample case and the limit case are analyzed. We show that there are simple manifolds in which the necessary conditions are violated, and hence the algorithms cannot recover the underlying manifolds. Finally, we present numerical results that demonstrate our claims.
Title: Local Procrustes for Manifold Embedding: A Measure of Embedding Quality and Embedding Algorithms
Abstract: We present the Procrustes measure, a novel measure based on Procrustes rotation that enables quantitative comparison of the output of manifold-based embedding algorithms (such as LLE (Roweis and Saul, 2000) and Isomap (Tenenbaum et al, 2000)). The measure also serves as a natural tool when choosing dimension-reduction parameters. We also present two novel dimension-reduction techniques that attempt to minimize the suggested measure, and compare the results of these techniques to the results of existing algorithms. Finally, we suggest a simple iterative method that can be used to improve the output of existing algorithms.
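The core operation behind such a measure is the classical orthogonal Procrustes rotation: align an embedding to the original points by the best rotation and report the residual. The sketch below illustrates that building block only (it is not the paper's local, neighborhood-wise measure):

```python
import numpy as np

# Orthogonal Procrustes sketch: find the rotation R minimizing
# ||X - Y R||_F via SVD, then report the residual as a quality score.

def procrustes_distance(X, Y):
    X = X - X.mean(axis=0)                 # remove translation
    Y = Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Y.T @ X)      # optimal rotation from SVD
    R = U @ Vt
    return np.linalg.norm(X - Y @ R)       # alignment residual

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Y = X @ R_true                             # a pure rotation of X
print(procrustes_distance(X, Y))           # essentially zero
```

A perfect embedding differs from the original data only by a rigid motion, so its residual vanishes; larger residuals indicate distortion introduced by the embedding.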
Title: Supervised functional classification: A theoretical remark and some comparisons
Abstract: The problem of supervised classification (or discrimination) with functional data is considered, with a special interest on the popular k-nearest neighbors (k-NN) classifier. First, relying on a recent result by Cerou and Guyader (2006), we prove the consistency of the k-NN classifier for functional data whose distribution belongs to a broad family of Gaussian processes with triangular covariance functions. Second, on a more practical side, we check the behavior of the k-NN method when compared with a few other functional classifiers. This is carried out through a small simulation study and the analysis of several real functional data sets. While no global "uniform" winner emerges from such comparisons, the overall performance of the k-NN method, together with its sound intuitive motivation and relative simplicity, suggests that it could represent a reasonable benchmark for the classification problem with functional data.
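For intuition, the k-NN classifier for functional data can be sketched by discretizing each curve on a common grid and voting among the nearest curves under an approximate L2 distance. This is an illustrative toy setup, not the paper's simulation study:

```python
import numpy as np

# k-NN for functional data, sketched with curves sampled on a common grid
# and compared by discretized L2 distance.

def knn_classify(train_curves, train_labels, query, k=3):
    d = np.linalg.norm(train_curves - query, axis=1)  # approximate L2
    nearest = np.argsort(d)[:k]                       # k closest curves
    votes = np.bincount(np.asarray(train_labels)[nearest])
    return int(votes.argmax())                        # majority vote

t = np.linspace(0, 1, 100)
rng = np.random.default_rng(1)
class0 = [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
class1 = [np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
X = np.vstack(class0 + class1)
y = [0] * 20 + [1] * 20

query = np.sin(2 * np.pi * t)          # a noiseless sine curve
print(knn_classify(X, y, query))       # -> 0
```

The simplicity of this rule, needing only a distance between curves, is part of why k-NN serves well as a benchmark in the functional setting.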
Title: Decoding Beta-Decay Systematics: A Global Statistical Model for Beta^- Halflives
Abstract: Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the halflives of nuclear ground states that decay 100% by the beta^- mode. More specifically, fully-connected, multilayer feedforward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models as well as the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in the r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for beta-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.
Title: Learning Graph Matching
Abstract: As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the `labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
Title: Neural networks in 3D medical scan visualization
Abstract: For medical volume visualization, one of the most important tasks is to reveal clinically relevant details from the 3D scan (CT, MRI ...), e.g. the coronary arteries, without obscuring them with less significant parts. These volume datasets contain different materials which are difficult to extract and visualize with 1D transfer functions based solely on the attenuation coefficient. Multi-dimensional transfer functions allow a much more precise classification of data, which makes it easier to separate different surfaces from each other. Unfortunately, setting up multi-dimensional transfer functions can become a fairly complex task, generally accomplished by trial and error. This paper explains neural networks and then presents an efficient way to speed up the visualization process by semi-automatic transfer function generation. We describe how to use neural networks to detect distinctive features in the 2D histogram of the volume data and how to use this information for data classification.
Title: Maximum Likelihood Drift Estimation for Multiscale Diffusions
Abstract: We study the problem of parameter estimation using maximum likelihood for fast/slow systems of stochastic differential equations. Our aim is to shed light on the problem of model/data mismatch at small scales. We consider two classes of fast/slow problems for which a closed coarse-grained equation for the slow variables can be rigorously derived, which we refer to as averaging and homogenization problems. We ask whether, given data from the slow variable in the fast/slow system, we can correctly estimate parameters in the drift of the coarse-grained equation for the slow variable, using maximum likelihood. We show that, whereas the maximum likelihood estimator is asymptotically unbiased for the averaging problem, for the homogenization problem maximum likelihood fails unless we subsample the data at an appropriate rate. An explicit formula for the asymptotic error in the log likelihood function is presented. Our theory is applied to two simple examples from molecular dynamics.
Title: BART: Bayesian additive regression trees
Abstract: We develop a Bayesian "sum-of-trees" model where each tree is constrained by a regularization prior to be a weak learner, and fitting and inference are accomplished via an iterative Bayesian backfitting MCMC algorithm that generates samples from a posterior. Effectively, BART is a nonparametric Bayesian regression approach which uses dimensionally adaptive random basis elements. Motivated by ensemble methods in general, and boosting algorithms in particular, BART is defined by a statistical model: a prior and a likelihood. This approach enables full posterior inference including point and interval estimates of the unknown regression function as well as the marginal effects of potential predictors. By keeping track of predictor inclusion frequencies, BART can also be used for model-free variable selection. BART's many features are illustrated with a bake-off against competing methods on 42 different data sets, with a simulation experiment and on a drug discovery classification problem.
Title: Fast computation of the median by successive binning
Abstract: This paper describes a new median algorithm and a median approximation algorithm. The former has O(n) average running time and the latter has O(n) worst-case running time. These algorithms are highly competitive with the standard algorithm when computing the median of a single data set, but are significantly faster in updating the median when more data is added.
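The successive-binning idea can be sketched as follows: histogram the data into bins, locate the bin containing the median rank, and recurse on that bin. This is a hedged illustration of the principle only, not the paper's algorithm, which includes more careful bookkeeping for exactness and updates:

```python
import numpy as np

# Approximate median by successive binning: each round narrows the search
# window to the single histogram bin that contains the median rank.

def binmedian_approx(x, n_bins=1000, n_rounds=3):
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    below = 0                          # points strictly left of the window
    target = (x.size - 1) / 2          # 0-indexed rank of the (lower) median
    for _ in range(n_rounds):
        counts, edges = np.histogram(x[(x >= lo) & (x <= hi)],
                                     bins=n_bins, range=(lo, hi))
        cum = below + np.cumsum(counts)
        i = int(np.searchsorted(cum, target + 1))  # bin holding the median
        if i > 0:
            below = int(cum[i - 1])
        lo, hi = edges[i], edges[i + 1]            # recurse into that bin
    return 0.5 * (lo + hi)

data = np.random.default_rng(2).standard_normal(100_001)
print(binmedian_approx(data))
```

Each round costs a linear scan, and the window shrinks by a factor of `n_bins` per round, so a few rounds already pin the median down to high precision; incremental updates are cheap because new points only adjust bin counts.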
Title: Information field theory for cosmological perturbation reconstruction and non-linear signal analysis
Abstract: We develop information field theory (IFT) as a means of Bayesian inference on spatially distributed signals, the information fields. A didactical approach is attempted. Starting from general considerations on the nature of measurements, signals, noise, and their relation to a physical reality, we derive the information Hamiltonian, the source field, propagator, and interaction terms. Free IFT reproduces the well known Wiener-filter theory. Interacting IFT can be diagrammatically expanded, for which we provide the Feynman rules in position-, Fourier-, and spherical harmonics space, and the Boltzmann-Shannon information measure. The theory should be applicable in many fields. However, here, two cosmological signal recovery problems are discussed in their IFT-formulation. 1) Reconstruction of the cosmic large-scale structure matter distribution from discrete galaxy counts in incomplete galaxy surveys within a simple model of galaxy formation. We show that a Gaussian signal, which should resemble the initial density perturbations of the Universe, observed with a strongly non-linear, incomplete and Poissonian-noise affected response, as the processes of structure and galaxy formation and observations provide, can be reconstructed thanks to the virtue of a response-renormalization flow equation. 2) We design a filter to detect local non-linearities in the cosmic microwave background, which are predicted from some Early-Universe inflationary scenarios, and expected due to measurement imperfections. This filter is the optimal Bayes' estimator up to linear order in the non-linearity parameter and can be used even to construct sky maps of non-linearities in the data.
Title: Statistical Learning of Arbitrary Computable Classifiers
Abstract: Statistical learning theory chiefly studies restricted hypothesis classes, particularly those with finite Vapnik-Chervonenkis (VC) dimension. The fundamental quantity of interest is the sample complexity: the number of samples required to learn to a specified level of accuracy. Here we consider learning over the set of all computable labeling functions. Since the VC-dimension is infinite and a priori (uniform) bounds on the number of samples are impossible, we let the learning algorithm decide when it has seen sufficient samples to have learned. We first show that learning in this setting is indeed possible, and develop a learning algorithm. We then show, however, that bounding sample complexity independently of the distribution is impossible. Notably, this impossibility is entirely due to the requirement that the learning algorithm be computable, and not due to the statistical nature of the problem.
Title: How Is Meaning Grounded in Dictionary Definitions?
Abstract: Meaning cannot be based on dictionary definitions all the way down: at some point the circularity of definitions must be broken in some way, by grounding the meanings of certain words in sensorimotor categories learned from experience or shaped by evolution. This is the "symbol grounding problem." We introduce the concept of a reachable set -- a larger vocabulary whose meanings can be learned from a smaller vocabulary through definition alone, as long as the meanings of the smaller vocabulary are themselves already grounded. We provide simple algorithms to compute reachable sets for any given dictionary.
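One simple way to compute a reachable set is a fixed-point closure: starting from the grounded vocabulary, repeatedly add any word whose definition uses only words already available. The toy dictionary below is hypothetical and for illustration only:

```python
# Reachable-set sketch: grow the known vocabulary until no more words
# can be defined from it. TOY_DICT maps each word to the words used in
# its definition (a hypothetical miniature dictionary).

TOY_DICT = {
    "dam":   ["wall", "water"],
    "lake":  ["water", "land"],
    "canal": ["water", "dam"],
    "cloud": ["sky", "water"],   # unreachable: "sky" is never grounded
}

def reachable_set(grounded, dictionary):
    known = set(grounded)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for word, definition in dictionary.items():
            if word not in known and all(w in known for w in definition):
                known.add(word)
                changed = True
    return known

print(sorted(reachable_set({"wall", "water", "land"}, TOY_DICT)))
```

Note the cascading effect: "canal" only becomes reachable after "dam" has been learned, while "cloud" stays out of reach because "sky" is never grounded; this is the circularity the symbol grounding problem refers to.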
Title: Improved testing inference in mixed linear models
Abstract: Mixed linear models are commonly used in repeated measures studies. They account for the dependence amongst observations obtained from the same experimental unit. Oftentimes, the number of observations is small, and it is thus important to use inference strategies that incorporate small sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test and also to a test obtained from a modified profile likelihood function. Our results generalize those in Zucker et al. (Journal of the Royal Statistical Society B, 2000, 62, 827-838) by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for random effects nonlinear covariance matrix structure. We report numerical evidence which shows that the proposed tests display superior finite sample behavior relative to the standard likelihood ratio test. An application is also presented and discussed.
Title: Computational Approaches to Measuring the Similarity of Short Contexts : A Review of Applications and Methods
Abstract: Measuring the similarity of short written contexts is a fundamental problem in Natural Language Processing. This article provides a unifying framework by which short context problems can be categorized both by their intended application and proposed solution. The goal is to show that various problems and methodologies that appear quite different on the surface are in fact very closely related. The axes by which these categorizations are made include the format of the contexts (headed versus headless), the way in which the contexts are to be measured (first-order versus second-order similarity), and the information used to represent the features in the contexts (micro versus macro views). The unifying thread that binds together many short context applications and methods is the fact that similarity decisions must be made between contexts that share few (if any) words in common.
Title: Conceptualization of seeded region growing by pixels aggregation. Part 1: the framework
Abstract: Adams and Bischof proposed in 1994 a novel region growing algorithm called seeded region growing by pixels aggregation (SRGPA). This paper introduces a framework to implement algorithms using SRGPA. This framework is built around two concepts: localization and organization of applied actions. This conceptualization yields quick implementation of algorithms, a direct translation between the mathematical idea and the numerical implementation, and improved algorithm efficiency.
Title: Conceptualization of seeded region growing by pixels aggregation. Part 2: how to localize a final partition invariant about the seeded region initialisation order
Abstract: In the previous paper, we conceptualized the localization and the organization of seeded region growing by pixels aggregation (SRGPA), but we did not address what happens when two distinct regions collide during the growing process. In this paper, we propose two implementations managing two classical growing processes: one without a boundary region dividing the other regions, and one with such a boundary. Unfortunately, as noted by Mehnert and Jackway (1997), the resulting partition depends on the seeded region initialisation order (SRIO). We propose a growing process that is invariant to SRIO, in which the boundary region is the set of ambiguous pixels.
Title: Conceptualization of seeded region growing by pixels aggregation. Part 3: a wide range of algorithms
Abstract: In the two previous papers of this series, we created a library, called Population, dedicated to seeded region growing by pixels aggregation (SRGPA), and proposed different growing processes to obtain a partition with or without a boundary region dividing the other regions, as well as a partition invariant to the seeded region initialisation order. Building on this work, we implement several algorithms from the field of SRGPA using this library and these growing processes.
Title: Conceptualization of seeded region growing by pixels aggregation. Part 4: Simple, generic and robust extraction of grains in granular materials obtained by X-ray tomography
Abstract: This paper proposes a simple, generic and robust method to extract grains from experimental three-dimensional images of granular materials obtained by X-ray tomography. This extraction has two steps: segmentation and splitting. For the segmentation step, if there is sufficient contrast between the different components, a classical threshold procedure followed by a succession of morphological filters can be applied. If not, and if the boundary needs to be localized precisely, a watershed transformation controlled by labels is applied. The basis of this transformation is to localize one label inside the component and another label in its complement. A "soft" threshold followed by an opening is applied to the initial image to localize a label in a component. For any segmentation procedure, visualisation reveals a problem: some pairs of grains lying close to each other become connected. Hence, if a classical clustering procedure is applied to the segmented binary image, these numerically connected grains are treated as a single grain. To overcome this problem, we apply a procedure introduced by L. Vincent in 1993. This grain extraction is tested on various complex porous media and granular materials, predicting various properties (diffusion, electrical conductivity, deformation field) in good agreement with experimental data.
Title: Use of a Quantum Computer and the Quick Medical Reference To Give an Approximate Diagnosis
Abstract: The Quick Medical Reference (QMR) is a compendium of statistical knowledge connecting diseases to findings (symptoms). The information in QMR can be represented as a Bayesian network. The inference problem (or, in more medical language, giving a diagnosis) for the QMR is to, given some findings, find the probability of each disease. Rejection sampling and likelihood weighted sampling (a.k.a. likelihood weighting) are two simple algorithms for making approximate inferences from an arbitrary Bayesian net (and from the QMR Bayesian net in particular). Heretofore, the samples for these two algorithms have been obtained with a conventional "classical computer". In this paper, we will show that two analogous algorithms exist for the QMR Bayesian net, where the samples are obtained with a quantum computer. We expect that these two algorithms, implemented on a quantum computer, can also be used to make inferences (and predictions) with other Bayesian nets.
Title: Information In The Non-Stationary Case
Abstract: Information estimates such as the ``direct method'' of Strong et al. (1998) sidestep the difficult problem of estimating the joint distribution of response and stimulus by instead estimating the difference between the marginal and conditional entropies of the response. While this is an effective estimation strategy, it tempts the practitioner to ignore the role of the stimulus and the meaning of mutual information. We show here that, as the number of trials increases indefinitely, the direct (or ``plug-in'') estimate of marginal entropy converges (with probability 1) to the entropy of the time-averaged conditional distribution of the response, and the direct estimate of the conditional entropy converges to the time-averaged entropy of the conditional distribution of the response. Under joint stationarity and ergodicity of the response and stimulus, the difference of these quantities converges to the mutual information. When the stimulus is deterministic or non-stationary the direct estimate of information no longer estimates mutual information, which is no longer meaningful, but it remains a measure of variability of the response distribution across time.
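The plug-in estimate described here can be sketched on synthetic "response words": the information estimate is the entropy of the pooled (marginal) word distribution minus the average, over time bins, of the entropy of the conditional word distribution across trials. The data below are synthetic and purely illustrative:

```python
import numpy as np

# Direct-method ("plug-in") information sketch: H(marginal) minus the
# time-averaged conditional entropy of the response distribution.

def plug_in_entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(4)
n_trials, n_times, n_words = 500, 50, 8
# Time-varying response distributions (a deterministic "stimulus" effect).
probs = rng.dirichlet(np.ones(n_words), size=n_times)
words = np.array([[rng.choice(n_words, p=probs[t]) for t in range(n_times)]
                  for _ in range(n_trials)])

H_marginal = plug_in_entropy(np.bincount(words.ravel(), minlength=n_words))
H_cond = np.mean([plug_in_entropy(np.bincount(words[:, t], minlength=n_words))
                  for t in range(n_times)])
print(H_marginal - H_cond)   # direct-method information estimate, in bits
```

Because the pooled empirical distribution is the average of the per-time-bin empirical distributions and entropy is concave, this difference is always non-negative; whether it equals a mutual information depends on the stationarity conditions the abstract describes.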
Title: Design, Development and Testing of Underwater Vehicles: ITB Experience
Abstract: The last decade has witnessed increasing worldwide interest in research on underwater robotics, with particular focus on autonomous underwater vehicles (AUVs). Underwater robotics technology has enabled humans to access the depths of the ocean to conduct environmental surveys and resource mapping as well as scientific and military missions. This capability is especially valuable for countries with major water or oceanic resources. As an archipelagic nation with more than 13,000 islands, Indonesia has some of the most abundant living and non-living oceanic resources. The need for mapping, exploration, and environmental preservation of these vast marine resources is therefore imperative. The challenge of deep-water exploration lies in the complex issues associated with hazardous and unstructured undersea and sea-bed environments. This paper reports the design, development and testing efforts for underwater vehicles conducted at Institut Teknologi Bandung. Key technology areas have been identified, and step-by-step development is presented in conjunction with the need to meet the challenge of underwater vehicle operation. A number of future research directions are also highlighted.
Title: Linear Parameter Varying Model Identification for Control of Rotorcraft-based UAV
Abstract: A rotorcraft-based unmanned aerial vehicle exhibits more complex properties than its full-size counterparts due to its increased sensitivity to control inputs and disturbances and the higher bandwidth of its dynamics. As an aerial vehicle with vertical take-off and landing capability, the helicopter poses the specifically difficult problem of transition between forward flight and unstable hover and vice versa. The LPV control technique explicitly takes into account the change in performance due to real-time parameter variations, and therefore theoretically guarantees performance and robustness over the entire operating envelope. In this study, we investigate a new approach implementing model identification for use in the LPV control framework. The identification scheme employs a recursive least squares technique applied to the LPV system representing the dynamics of the helicopter during a transition. The airspeed, which serves as the scheduling parameter trajectory, is not assumed to vary slowly. Removing the slow-parameter-variation requirement allows the algorithm to support aggressive maneuvering without the need for expensive computation. The technique is tested numerically and will be validated in the autonomous flight of a small-scale helicopter.
Title: High-dimensional additive modeling
Abstract: We propose a new sparsity-smoothness penalty for high-dimensional generalized additive models. The combination of sparsity and smoothness is crucial for mathematical theory as well as performance for finite-sample data. We present a computationally efficient algorithm, with provable numerical convergence properties, for optimizing the penalized likelihood. Furthermore, we provide oracle results which yield asymptotic optimality of our estimator for high dimensional but sparse additive models. Finally, an adaptive version of our sparsity-smoothness penalized approach yields large additional performance gains.
Title: Agnostically Learning Juntas from Random Walks
Abstract: We prove that the class of functions g: {-1,+1}^n -> {-1,+1} that only depend on an unknown subset of k << n variables (so-called k-juntas) is agnostically learnable from a random walk in time polynomial in n, 2^{k^2}, epsilon^{-k}, and log(1/delta). In other words, there is an algorithm with the claimed running time that, given epsilon, delta > 0 and access to a random walk on {-1,+1}^n labeled by an arbitrary function f: {-1,+1}^n -> {-1,+1}, finds with probability at least 1-delta a k-junta that is (opt(f)+epsilon)-close to f, where opt(f) denotes the distance of a closest k-junta to f.
Title: On Sequences with Non-Learnable Subsequences
Abstract: The remarkable results of Foster and Vohra were a starting point for a series of papers showing that any sequence of outcomes can be learned (with no prior knowledge) using some universal randomized forecasting algorithm and forecast-dependent checking rules. We show that for the class of all computationally efficient outcome-forecast-based checking rules, this property is violated. Moreover, we present a probabilistic algorithm that, with probability close to one, generates a sequence containing a subsequence which simultaneously miscalibrates all partially weakly computable randomized forecasting algorithms. Following Dawid's prequential framework, we consider partial recursive randomized algorithms.
Title: Prediction with Expert Advice in Games with Unbounded One-Step Gains
Abstract: Games of prediction with expert advice are considered in this paper. We present a modification of the Kalai and Vempala algorithm of following the perturbed leader for the case of unrestrictedly large one-step gains. We show that in the general case the cumulative gain of any probabilistic prediction algorithm can be much worse than the gain of some expert in the pool. Nevertheless, we give a lower bound for this cumulative gain in the general case and construct a universal algorithm with optimal performance; we also prove that when the one-step gains of the experts in the pool have ``limited deviations'', the performance of our algorithm is close to that of the best expert.
Title: Computationally Efficient Estimators for Dimension Reductions Using Stable Random Projections
Abstract: The method of stable random projections is a tool for efficiently computing the $l_\alpha$ distances using low memory, where $0<\alpha \leq 2$ is a tuning parameter. The method boils down to a statistical estimation task, and various estimators have been proposed, based on the geometric mean, the harmonic mean, the fractional power, etc. This study proposes the optimal quantile estimator, whose main operation is selection, which is considerably less expensive than taking the fractional power, the main operation in previous estimators. Our experiments show that the optimal quantile estimator is nearly one order of magnitude more computationally efficient than previous estimators. For large-scale learning tasks in which storing and computing pairwise distances is a serious bottleneck, this estimator should be desirable. In addition to its computational advantages, the optimal quantile estimator exhibits nice theoretical properties. It is more accurate than previous estimators when $\alpha>1$. We derive its theoretical error bounds and establish the explicit (i.e., no hidden constants) sample complexity bound.
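The quantile idea can be sketched for alpha = 1: project with Cauchy (1-stable) entries, so each projected difference is Cauchy-distributed with scale equal to the l1 distance, and for a zero-centered Cauchy with scale d the median of the absolute value is exactly d. The sketch below shows that sample-median estimator only, not the paper's optimally tuned quantile:

```python
import numpy as np

# Stable random projections with a quantile-type estimator, alpha = 1:
# the sample median of |projected differences| estimates the l1 distance.

rng = np.random.default_rng(5)
d, k = 1000, 2000                   # data dimension, number of projections
x = rng.random(d)
y = rng.random(d)
l1 = np.abs(x - y).sum()            # ground-truth l1 distance

R = rng.standard_cauchy((d, k))     # 1-stable projection matrix
proj_diff = (x - y) @ R             # each entry ~ Cauchy with scale l1
estimate = np.median(np.abs(proj_diff))
print(estimate / l1)                # ratio close to 1
```

Selection (the median) replaces the fractional-power evaluations of earlier estimators, which is the source of the computational saving the abstract describes.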
Title: On Approximating the Lp Distances for p>2
Abstract: Applications in machine learning and data mining require computing pairwise Lp distances in a data matrix A. For massive high-dimensional data, computing all pairwise distances of A can be infeasible. In fact, even storing A or all pairwise distances of A in memory may also be infeasible. This paper proposes a simple method for p = 2, 4, 6, ... We first decompose the l_p (where p is even) distances into a sum of 2 marginal norms and p-1 ``inner products'' at different orders. Then we apply normal or sub-Gaussian random projections to approximate the resultant ``inner products,'' assuming that the marginal norms can be computed exactly by a linear scan. We propose two strategies for applying random projections. The basic projection strategy requires only one projection matrix but is more difficult to analyze, while the alternative projection strategy requires p-1 projection matrices but its theoretical analysis is much easier. In terms of accuracy, at least for p=4, the basic strategy is always more accurate than the alternative strategy if the data are non-negative, which is common in practice.
Title: On empirical meaning of randomness with respect to a real parameter
Abstract: We study the empirical meaning of randomness with respect to a family of probability distributions $P_\theta$, where $\theta$ is a real parameter, using algorithmic randomness theory. In the case when an effectively strongly consistent estimate exists for a computable probability distribution $P_\theta$, we show that Levin's a priori semicomputable semimeasure of the set of all $P_\theta$-random sequences is positive if and only if the parameter $\theta$ is a computable real number. Different methods for generating ``meaningful'' $P_\theta$-random sequences with noncomputable $\theta$ are discussed.
Title: The model of quantum evolution
Abstract: This paper has been withdrawn by the author due to extremely unscientific errors.
Title: Predicting Regional Classification of Levantine Ivory Sculptures: A Machine Learning Approach