Title: Dynamic Modeling and Statistical Analysis of Event Times
Abstract: This review article provides an overview of recent work in the modeling and analysis of recurrent events arising in engineering, reliability, public health, biomedicine and other areas. Recurrent event modeling possesses unique facets making it different and more difficult to handle than single event settings. For instance, the impact of an increasing number of event occurrences needs to be taken into account, the effects of covariates should be considered, potential association among the interevent times within a unit cannot be ignored, and the effects of performed interventions after each event occurrence need to be factored in. A recent general class of models for recurrent events which simultaneously accommodates these aspects is described. Statistical inference methods for this class of models are presented and illustrated through applications to real data sets. Some existing open research problems are described.
Title: Threshold Regression for Survival Analysis: Modeling Event Times by a Stochastic Process Reaching a Boundary
Abstract: Many researchers have investigated first hitting times as models for survival data. First hitting times arise naturally in many types of stochastic processes, ranging from Wiener processes to Markov chains. In a survival context, the state of the underlying process represents the strength of an item or the health of an individual. The item fails or the individual experiences a clinical endpoint when the process reaches an adverse threshold state for the first time. The time scale can be calendar time or some other operational measure of degradation or disease progression. In many applications, the process is latent (i.e., unobservable). Threshold regression refers to first-hitting-time models with regression structures that accommodate covariate data. The parameters of the process, threshold state and time scale may depend on the covariates. This paper reviews aspects of this topic and discusses fruitful avenues for future research.
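A concrete illustration of the first-hitting-time idea (a standard textbook result underlying many such models, not specific to this paper): if the latent health process is a Wiener process $X(t) = x_0 + \mu t + \sigma W(t)$ with initial level $x_0 > 0$ and drift $\mu < 0$, the first time it reaches the boundary $0$ follows an inverse Gaussian distribution,
$$ f(t \mid x_0, \mu, \sigma) = \frac{x_0}{\sigma\sqrt{2\pi t^3}}\,\exp\!\left(-\frac{(x_0 + \mu t)^2}{2\sigma^2 t}\right), \qquad t > 0, $$
and threshold regression typically links $x_0$ and $\mu$ to covariates, e.g. $\mu = z^\top\beta$.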
Title: Advances in Data Combination, Analysis and Collection for System Reliability Assessment
Abstract: The systems that statisticians are asked to assess, such as nuclear weapons, infrastructure networks, supercomputer codes and munitions, have become increasingly complex. It is often costly to conduct full system tests. As such, we present a review of methodology that has been proposed for addressing system reliability with limited full system testing. The first approaches presented in this paper are concerned with the combination of multiple sources of information to assess the reliability of a single component. The second general set of methodology addresses the combination of multiple levels of data to determine system reliability. We then present developments for complex systems beyond traditional series/parallel representations through the use of Bayesian networks and flowgraph models. We also include methodological contributions to resource allocation considerations for system reliability assessment. We illustrate each method with applications primarily encountered at Los Alamos National Laboratory.
Title: On the Statistical Modeling and Analysis of Repairable Systems
Abstract: We review basic modeling approaches for failure and maintenance data from repairable systems. In particular we consider imperfect repair models, defined in terms of virtual age processes, and the trend-renewal process which extends the nonhomogeneous Poisson process and the renewal process. In the case where several systems of the same kind are observed, we show how observed covariates and unobserved heterogeneity can be included in the models. We also consider various approaches to trend testing. Modern reliability data bases usually contain information on the type of failure, the type of maintenance and so forth in addition to the failure times themselves. Basing our work on recent literature we present a framework where the observed events are modeled as marked point processes, with marks labeling the types of events. Throughout the paper the emphasis is more on modeling than on statistical inference.
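Two of the building blocks mentioned above have compact standard forms (stated here for orientation, not as any single paper's model): the power-law intensity of a nonhomogeneous Poisson process and a Kijima type I virtual age process for imperfect repair,
$$ \lambda(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1}, \qquad V_n = V_{n-1} + q\,X_n, \quad 0 \le q \le 1, $$
where $X_n$ is the $n$th time between failures; $q = 1$ recovers minimal repair (the system after repair is only as good as old) and $q = 0$ recovers perfect repair (good as new).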
Title: A Review of Accelerated Test Models
Abstract: Engineers in the manufacturing industries have used accelerated test (AT) experiments for many decades. The purpose of AT experiments is to acquire reliability information quickly. Test units of a material, component, subsystem or entire systems are subjected to higher-than-usual levels of one or more accelerating variables such as temperature or stress. Then the AT results are used to predict life of the units at use conditions. The extrapolation is typically justified (correctly or incorrectly) on the basis of physically motivated models or a combination of empirical model fitting with a sufficient amount of previous experience in testing similar units. The need to extrapolate in both time and the accelerating variables generally necessitates the use of fully parametric models. Statisticians have made important contributions in the development of appropriate stochastic models for AT data [typically a distribution for the response and regression relationships between the parameters of this distribution and the accelerating variable(s)], statistical methods for AT planning (choice of accelerating variable levels and allocation of available test units to those levels) and methods of estimation of suitable reliability metrics. This paper provides a review of many of the AT models that have been used successfully in this area.
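For orientation, a widely used physically motivated AT model is the Arrhenius relationship for temperature acceleration (given here as a standard example, not as any particular paper's model): nominal life at absolute temperature $T$ and the acceleration factor between use and stress conditions are
$$ L(T) = A\exp\!\left(\frac{E_a}{k_B T}\right), \qquad AF = \frac{L(T_{\text{use}})}{L(T_{\text{stress}})} = \exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_{\text{use}}} - \frac{1}{T_{\text{stress}}}\right)\right], $$
where $E_a$ is an activation energy and $k_B$ is Boltzmann's constant; $AF$ is used to translate failure times observed at $T_{\text{stress}}$ back to use conditions.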
Title: A Conversation With Harry Martz
Abstract: Harry F. Martz was born June 16, 1942 and grew up in Cumberland, Maryland. He received a Bachelor of Science degree in mathematics (with a minor in physics) from Frostburg State University in 1964, and earned a Ph.D. in statistics at Virginia Polytechnic Institute and State University in 1968. He started his statistics career at Texas Tech University's Department of Industrial Engineering and Statistics right after graduation. In 1978, he joined the technical staff at Los Alamos National Laboratory (LANL) in Los Alamos, New Mexico after first working as Full Professor in the Department of Industrial Engineering at Utah State University in the fall of 1977. He has had a prolific 23-year career with the statistics group at LANL, publishing over 80 research papers in books and refereed journals and one book (with co-author Ray Waller), and holding four patents associated with his work at LANL. He is a fellow of the American Statistical Association and has received numerous awards, including the Technometrics Frank Wilcoxon Prize for Best Applications Paper (1996), Los Alamos National Laboratory Achievement Award (1998), R&D 100 Award by R&D Magazine (2003), Council for Chemical Research Collaboration Success Award (2004), and Los Alamos National Laboratory's Distinguished Licensing Award (2004). Since retiring as a Technical Staff member at LANL in 2001, he has worked as a LANL Laboratory Associate.
Title: Virtual Manufacturing: Tools for Improving Design and Production
Abstract: The research area "Virtual Manufacturing" can be defined as an integrated manufacturing environment which can enhance one or several levels of decision and control in the manufacturing process. Several domains can be addressed: product and process design, process and production planning, machine tools, robots and manufacturing systems. As automation technologies such as CAD/CAM have substantially shortened the time required to design products, Virtual Manufacturing will have a similar effect on the manufacturing phase thanks to the modelling, simulation and optimisation of the product and the processes involved in its fabrication.
Title: A preliminary analysis on metaheuristics methods applied to the Haplotype Inference Problem
Abstract: Haplotype Inference is a challenging problem in bioinformatics that consists in inferring the basic genetic constitution of diploid organisms from their genotype. This information allows researchers to perform association studies for the genetic variants involved in diseases and in individual responses to therapeutic agents. A notable approach is to encode the problem as a combinatorial problem (under certain hypotheses, such as the pure parsimony criterion) and to solve it using off-the-shelf combinatorial optimization techniques. The main methods applied to Haplotype Inference are either simple greedy heuristics or exact methods (Integer Linear Programming, Semidefinite Programming, SAT encoding) that, at present, are adequate only for moderate-size instances. We believe that metaheuristic and hybrid approaches could provide better scalability. Moreover, metaheuristics can easily be combined with problem-specific heuristics and can be integrated with tree-based search techniques, thus providing a promising framework for hybrid systems in which a good trade-off between effectiveness and efficiency can be reached. In this paper we illustrate a feasibility study of the approach and discuss some relevant design issues, such as the modeling and design of approximate solvers that combine constructive heuristics, local search-based improvement strategies and learning mechanisms. Besides the relevance of the Haplotype Inference problem itself, this preliminary analysis is also an interesting case study because the formulation of the problem poses challenges in modeling and hybrid metaheuristic solver design that can be generalized to other problems.
Title: Real-time control and monitoring system for LIPI's Public Cluster
Abstract: We have developed a monitoring and control system for LIPI's Public Cluster. The system consists of microcontrollers and fully web-based user interfaces for daily operation. It is argued that, due to its special nature, the cluster requires a fully dedicated and self-developed control and monitoring system. We discuss the implementation using the parallel port and a dedicated microcontroller for this purpose. We also show that integrating such systems enables autonomous control based on real-time monitoring, for instance autonomous power supply control based on the actual temperature.
Title: Structure or Noise?
Abstract: We show how rate-distortion theory provides a mechanism for automated theory building by naturally distinguishing between regularity and randomness. We start from the simple principle that model variables should, as much as possible, render the future and past conditionally independent. From this, we construct an objective function for model making whose extrema embody the trade-off between a model's structural complexity and its predictive power. The solutions correspond to a hierarchy of models that, at each level of complexity, achieve optimal predictive power at minimal cost. In the limit of maximal prediction the resulting optimal model identifies a process's intrinsic organization by extracting the underlying causal states. In this limit, the model's complexity is given by the statistical complexity, which is known to be minimal for achieving maximum prediction. Examples show how theory building can profit from analyzing a process's causal compressibility, which is reflected in the optimal models' rate-distortion curve--the process's characteristic for optimally balancing structure and noise at different levels of representation.
Title: A multivariate central limit theorem for randomized orthogonal array sampling designs in computer experiments
Abstract: Let $f:[0,1)^d \to \mathbb{R}$ be an integrable function. An objective of many computer experiments is to estimate $\int_{[0,1)^d} f(x)\,dx$ by evaluating $f$ at a finite number of points in $[0,1)^d$. There is a design issue in the choice of these points and a popular choice is via the use of randomized orthogonal arrays. This article proves a multivariate central limit theorem for a class of randomized orthogonal array sampling designs [Owen (1992a)] as well as for a class of OA-based Latin hypercubes [Tang (1993)].
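A minimal numerical sketch of the design issue (plain NumPy; the Latin hypercube construction below is the standard stratified-permutation recipe, not the randomized orthogonal arrays analyzed in the paper): both estimators average $f$ over $n$ design points, but the stratified design typically has smaller variance for near-additive integrands.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Standard Latin hypercube sample of n points in [0,1)^d:
    each dimension places one point per stratum [k/n, (k+1)/n)."""
    u = rng.random((n, d))                                  # jitter within strata
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + u) / n

def estimate_integral(f, points):
    """Monte Carlo estimate of the integral of f over [0,1)^d."""
    return f(points).mean()

rng = np.random.default_rng(0)
f = lambda x: np.exp(x.sum(axis=1))                         # toy integrand on [0,1)^2
n, d = 200, 2

iid = rng.random((n, d))
lhs = latin_hypercube(n, d, rng)
print("iid estimate:", estimate_integral(f, iid))
print("LHS estimate:", estimate_integral(f, lhs))
# exact value for comparison: (e - 1)^2 ≈ 2.9525
```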
Title: Reconstruction of Protein-Protein Interaction Pathways by Mining Subject-Verb-Objects Intermediates
Abstract: The exponential increase in the publication rate of new articles is limiting researchers' access to relevant literature. This has prompted the use of text mining tools to extract key biological information. Previous studies have reported extensive modification of existing generic text processors to process biological text. However, the necessity of such modification had not been examined. In this study, we constructed Muscorian, using MontyLingua, a generic text processor. It uses a previously proposed two-layered generalization-specialization paradigm, in which text is generically processed to a suitable intermediate format before domain-specific data extraction techniques are applied at the specialization layer. Evaluation using a corpus and experts indicated 86-90% precision and approximately 30% recall in extracting protein-protein interactions, which is comparable to previous studies using either specialized biological text processing tools or modified existing tools. Our study also demonstrated the flexibility of the two-layered generalization-specialization paradigm by using the same generalization layer for two specialized information extraction tasks.
Title: Convergence of adaptive mixtures of importance sampling schemes
Abstract: In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given simulation kernel can clarify a posteriori how adequate this kernel is for the problem at hand, a permanent on-line modification of kernels causes concerns about the validity of the resulting algorithm. While the issue is most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and show that Rao--Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions do not benefit from repeated updating.
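The flavor of the adaptive scheme can be conveyed with a small sketch (a generic population Monte Carlo step with Rao-Blackwellized mixture-weight updates, written under simplifying assumptions: a one-dimensional toy target, fixed Gaussian mixture components, only the mixture weights adapt; it is not the paper's general algorithm).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def target_pdf(x):
    # toy target: an equal mixture of N(-2, 1) and N(2, 1)
    return 0.5 * norm.pdf(x, -2, 1) + 0.5 * norm.pdf(x, 2, 1)

# fixed proposal components; only their weights alpha are adapted
means, sds = np.array([-4.0, 0.0, 4.0]), np.array([2.0, 2.0, 2.0])
alpha = np.full(3, 1 / 3)
n = 5000

for it in range(10):
    comp = rng.choice(3, size=n, p=alpha)                        # pick components
    x = rng.normal(means[comp], sds[comp])                       # sample from the mixture
    q = (alpha * norm.pdf(x[:, None], means, sds)).sum(axis=1)   # mixture proposal density
    w = target_pdf(x) / q                                        # importance weights
    w /= w.sum()
    # Rao-Blackwellized update: posterior probability of each component given x
    resp = alpha * norm.pdf(x[:, None], means, sds)
    resp /= resp.sum(axis=1, keepdims=True)
    alpha = w @ resp                                             # new mixture weights
    alpha /= alpha.sum()

print("adapted mixture weights:", np.round(alpha, 3))
print("weighted mean estimate:", float(w @ x))                   # target mean is 0
```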
Title: Modeling Visual Information Processing in Brain: A Computer Vision Point of View and Approach
Abstract: We live in the Information Age, and information has become a critically important component of our life. The success of the Internet has made huge amounts of it easily available and accessible to everyone. To keep the flow of this information manageable, means for its faultless circulation and effective handling have become urgently required. Considerable research efforts are dedicated today to addressing this necessity, but they are seriously hampered by the lack of a common agreement about "What is information?" In particular, what is "visual information" - humans' primary input from the surrounding world. The problem is further aggravated by a long-lasting stance borrowed from biological vision research that assumes human-like information processing to be an enigmatic mix of perceptual and cognitive vision faculties. I am trying to find a remedy for this bizarre situation. Relying on a new definition of "information", which can be derived from Kolmogorov's complexity theory and Chaitin's notion of algorithmic information, I propose a unifying framework for visual information processing, which explicitly accounts for the peculiarities of perceptual and cognitive image processing. I believe that this framework will be useful in overcoming the difficulties that are impeding our attempts to develop the right model of human-like intelligent image processing.
Title: A flexible Bayesian generalized linear model for dichotomous response data with an application to text categorization
Abstract: We present a class of sparse generalized linear models that include probit and logistic regression as special cases and offer some extra flexibility. We provide an EM algorithm for learning the parameters of these models from data. We apply our method to text classification and to simulated data and show that it outperforms the logistic and probit models, as well as the elastic net, in general by a substantial margin.
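To make the EM idea concrete, here is a minimal sketch for plain probit regression via the classical latent-variable EM (without the sparsity-inducing structure that gives the paper's models their extra flexibility; variable names are illustrative).

```python
import numpy as np
from scipy.stats import norm

def probit_em(X, y, n_iter=100):
    """EM for probit regression: y_i = 1{z_i > 0}, z_i ~ N(x_i' beta, 1)."""
    n, p = X.shape
    beta = np.zeros(p)
    XtX_inv_Xt = np.linalg.pinv(X)              # (X'X)^{-1} X' for full-rank X
    for _ in range(n_iter):
        mu = X @ beta
        # E-step: conditional mean of the truncated latent variable
        ez = np.where(
            y == 1,
            mu + norm.pdf(mu) / norm.cdf(mu),
            mu - norm.pdf(mu) / norm.cdf(-mu),
        )
        # M-step: least squares regression of E[z] on X
        beta = XtX_inv_Xt @ ez
    return beta

# tiny simulated check
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
beta_true = np.array([0.5, -1.0, 2.0])
y = (X @ beta_true + rng.normal(size=500) > 0).astype(int)
print(np.round(probit_em(X, y), 2), "vs true", beta_true)
```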
Title: Estimating the proportion of differentially expressed genes in comparative DNA microarray experiments
Abstract: DNA microarray experiments, based on a well-established experimental technique, aim at understanding the function of genes in various biological processes. One of the most common experiments in functional genomics research is to compare two groups of microarray data to determine which genes are differentially expressed. In this paper, we propose a methodology to estimate the proportion of differentially expressed genes in such experiments. We study the performance of our method in a simulation study, comparing it to other standard methods. Finally, we compare the methods on real data from two toxicology experiments with mice.
Title: Empirical Bayes methods for controlling the false discovery rate with dependent data
Abstract: False discovery rate (FDR) has been widely used as an error measure in large scale multiple testing problems, but most research in the area has been focused on procedures for controlling the FDR based on independent test statistics or the properties of such procedures for test statistics with certain types of stochastic dependence. Based on an approach proposed in Tang and Zhang (2005), we further develop in this paper empirical Bayes methods for controlling the FDR with dependent data. We implement our methodology in a time series model and report the results of a simulation study to demonstrate the advantages of the empirical Bayes approach.
Title: A smoothing model for sample disclosure risk estimation
Abstract: When a sample frequency table is published, disclosure risk arises when some individuals can be identified on the basis of their values in certain attributes of the table, called key variables; their values in other attributes may then be inferred and their privacy violated. On the basis of the sample to be released, and possibly some partial knowledge of the whole population, an agency considering releasing the sample has to estimate the disclosure risk. Risk arises from non-empty sample cells which represent small population cells, and from population uniques in particular. Therefore risk estimation requires assessing how many of the relevant population cells are likely to be small. Various methods have been proposed for this task, and we present a method in which estimation of a population cell frequency is based on smoothing using a local neighborhood of this cell, that is, cells having similar or close values in all attributes. We provide some preliminary results and experiments with this method. Comparisons are made to two other methods: 1. a log-linear models approach, in which inference on a given cell is based on a ``neighborhood'' of cells determined by the log-linear model; such neighborhoods share one or some attributes with the cell in question, but other attributes may differ significantly; 2. the Argus method, in which inference on a given cell is based only on the sample frequency in the specific cell, on the sample design and on some known marginal distributions of the population, without learning from any type of ``neighborhood'' of the given cell, nor from any model which uses the structure of the table.
Title: An Interval Analysis Based Study for the Design and the Comparison of 3-DOF Parallel Kinematic Machines
Abstract: This paper addresses an interval analysis based study applied to the design and comparison of 3-DOF parallel kinematic machines. Two design criteria are used: (i) a regular workspace shape and (ii) a kinetostatic performance index that should be as homogeneous as possible throughout the workspace. The interval analysis based method takes these two criteria into account: on the basis of prescribed kinetostatic performances, the workspace is analysed to find the largest regular dexterous workspace enclosed in the Cartesian workspace. An algorithm describing this method is introduced. Two 3-DOF translational parallel mechanisms designed for machining applications are compared using this method. The first machine features three fixed linear joints mounted orthogonally, and the second features three linear joints mounted in parallel. In both cases, the mobile platform moves in the Cartesian x-y-z space with fixed orientation.
Title: Deconvolution by simulation
Abstract: Given samples $(x_1,\ldots,x_m)$ and $(z_1,\ldots,z_n)$ which we believe are independent realizations of random variables $X$ and $Z$ respectively, where we further believe that $Z = X + Y$ with $Y$ independent of $X$, the problem is to estimate the distribution of $Y$. We present a new method for doing this, involving simulation. Experiments suggest that the method provides useful estimates.
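The abstract leaves the method itself unspecified; purely as a hedged illustration of the problem, here is a naive simulation-based approach under an extra parametric assumption ($Y$ assumed normal, with parameters chosen so that simulated $X + Y$ matches the observed $z$-sample in the Kolmogorov-Smirnov sense). This is not the paper's estimator.

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# synthetic data: X ~ Exp(1), Y ~ N(2, 0.5^2), Z = X + Y observed separately
x = rng.exponential(1.0, size=2000)
z = rng.exponential(1.0, size=2000) + rng.normal(2.0, 0.5, size=2000)

eps = rng.normal(size=x.size)   # common random numbers keep the objective deterministic

def mismatch(params):
    """KS distance between the observed z-sample and simulated X + Y."""
    mu, log_sigma = params
    return ks_2samp(z, x + mu + np.exp(log_sigma) * eps).statistic

res = minimize(mismatch, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated Y ~ N({mu_hat:.2f}, {sigma_hat:.2f}^2)  (true: N(2.00, 0.50^2))")
```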
Title: A comparison of the accuracy of saddlepoint conditional cumulative distribution function approximations
Abstract: Consider a model parameterized by a scalar parameter of interest and a nuisance parameter vector. Inference about the parameter of interest may be based on the signed root of the likelihood ratio statistic, R. The standard normal approximation to the conditional distribution of R typically has error of order $O(n^{-1/2})$, where n is the sample size. Several modifications of R reduce the order of error of the approximation. In this paper, we mainly investigate Barndorff-Nielsen's modified directed likelihood ratio statistic, Severini's empirical adjustment, and DiCiccio and Martin's two modifications, involving the Bayesian approach and the conditional likelihood ratio statistic. For each modification, two formats were employed to approximate the conditional cumulative distribution function: the Barndorff-Nielsen format and the Lugannani and Rice format. All approximations were applied to inference on the ratio of means for two independent exponential random variables. We constructed one- and two-sided hypothesis tests and used the actual sizes of the tests as measures of accuracy to compare the approximations.
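For reference, the quantities being compared have familiar forms (standard textbook definitions; here $u$ stands for whichever modification-specific adjusted statistic is used): the signed root is $R = \mathrm{sgn}(\hat\theta - \theta)\,\{2[\ell(\hat\theta) - \ell(\theta)]\}^{1/2}$, and the two formats for approximating the conditional distribution function are
$$ \Phi\!\left(R + \tfrac{1}{R}\log\tfrac{u}{R}\right) \ \ \text{(Barndorff-Nielsen format)}, \qquad \Phi(R) + \phi(R)\!\left(\tfrac{1}{R} - \tfrac{1}{u}\right) \ \ \text{(Lugannani and Rice format)}, $$
both of which improve on the $O(n^{-1/2})$ error of the plain normal approximation $\Phi(R)$.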
Title: Statistical inverse problems in active network tomography
Abstract: The analysis of computer and communication networks gives rise to some interesting inverse problems. This paper is concerned with active network tomography where the goal is to recover information about quality-of-service (QoS) parameters at the link level from aggregate data measured on end-to-end network paths. The estimation and monitoring of QoS parameters, such as loss rates and delays, are of considerable interest to network engineers and Internet service providers. The paper provides a review of the inverse problems and recent research on inference for loss rates and delay distributions. Some new results on parametric inference for delay distributions are also developed. In addition, a real application on Internet telephony is discussed.
Title: Using data network metrics, graphics, and topology to explore network characteristics
Abstract: Yehuda Vardi introduced the term network tomography and was the first to propose and study how statistical inverse methods could be adapted to attack important network problems (Vardi, 1996). More recently, in one of his final papers, Vardi proposed notions of metrics on networks to define and measure distances between a network's links, its paths, and also between different networks (Vardi, 2004). In this paper, we apply Vardi's general approach for network metrics to a real data network by using data obtained from special data network tools and testing procedures presented here. We illustrate how the metrics help explicate interesting features of the traffic characteristics on the network. We also adapt the metrics in order to condition on traffic passing through a portion of the network, such as a router or pair of routers, and show further how this approach helps to discover and explain interesting network characteristics.
Title: Functional analysis via extensions of the band depth
Abstract: The notion of data depth has long been in use to obtain robust location and scale estimates in a multivariate setting. The depth of an observation is a measure of its centrality, with respect to a data set or a distribution. The data depths of a set of multivariate observations translate to a center-outward ordering of the data. Thus, data depth provides a generalization of the median to a multivariate setting (the deepest observation), and can also be used to screen for extreme observations or outliers (the observations with low data depth). Data depth has been used in the development of a wide range of robust and non-parametric methods for multivariate data, such as non-parametric tests of location and scale [Li and Liu (2004)], multivariate rank tests [Liu and Singh (1993)], non-parametric classification and clustering [Jornsten (2004)], and robust regression [Rousseeuw and Hubert (1999)]. Many different notions of data depth have been developed for multivariate data. In contrast, data depth measures for functional data have only recently been proposed [Fraiman and Muniz (1999), L\'opez-Pintado and Romo (2006a)]. While the definitions of both of these data depth measures are motivated by the functional aspect of the data, the measures themselves are in fact invariant with respect to permutations of the domain (i.e., the compact interval on which the functions are defined). Thus, these measures are equally applicable to multivariate data where there is no explicit ordering of the data dimensions. In this paper we explore some extensions of functional data depth that take the ordering of the data dimensions into account.
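A compact sketch of the band depth of López-Pintado and Romo for curves observed on a common grid (the $j=2$ version: the proportion of pairs of sample curves whose pointwise envelope contains the candidate curve; illustrative code only, not the extensions proposed in the paper).

```python
import numpy as np
from itertools import combinations

def band_depth(curves):
    """Band depth BD_2 for an (n_curves, n_points) array of functional data."""
    n = curves.shape[0]
    depth = np.zeros(n)
    pairs = list(combinations(range(n), 2))
    for i, j in pairs:
        lower = np.minimum(curves[i], curves[j])
        upper = np.maximum(curves[i], curves[j])
        # a curve contributes if it lies inside the band at every grid point
        inside = np.all((curves >= lower) & (curves <= upper), axis=1)
        depth += inside
    return depth / len(pairs)

# toy example: smooth curves plus one shifted outlier
t = np.linspace(0, 1, 50)
rng = np.random.default_rng(4)
curves = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(10, t.size))
curves[0] += 2.0                        # make the first curve an outlier
print(np.round(band_depth(curves), 2))  # the outlier should get the lowest depth
```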
Title: A Practical Ontology for the Large-Scale Modeling of Scholarly Artifacts and their Usage
Abstract: The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which makes the scholarly process amenable to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Title: Cost-minimising strategies for data labelling : optimal stopping and active learning
Abstract: Supervised learning deals with the inference of a distribution over an output or label space $\mathcal{Y}$ conditioned on points in an observation space $\mathcal{X}$, given a training dataset $D$ of pairs in $\mathcal{X} \times \mathcal{Y}$. However, in many applications of interest, acquisition of large amounts of observations is easy, while the process of generating labels is time-consuming or costly. One way to deal with this problem is active learning, where points to be labelled are selected with the aim of creating a model with better performance than that of a model trained on an equal number of randomly sampled points. In this paper, we instead propose to deal with the labelling cost directly: the learning goal is defined as the minimisation of a cost which is a function of the expected model performance and the total cost of the labels used. This allows the development of general strategies and specific algorithms for (a) optimal stopping, where the expected cost dictates whether label acquisition should continue, and (b) empirical evaluation, where the cost is used as a performance metric for a given combination of inference, stopping and sampling methods. Though the main focus of the paper is optimal stopping, we also aim to provide the background for further developments and discussion in the related field of active learning.
Title: Pathwise coordinate optimization
Abstract: We consider ``one-at-a-time'' coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the $L_1$-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garrote and elastic net. It turns out that coordinate-wise descent does not work in the ``fused lasso,'' however, so we derive a generalized algorithm that yields the solution in much less time than a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems.
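A minimal sketch of the core update (one-at-a-time soft-thresholding for the plain lasso; illustrative code only, not the authors' implementation and without the fused-lasso generalization).

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                          # current residual
    col_ss = (X ** 2).sum(axis=0) / n      # per-coordinate curvature
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]            # remove coordinate j from the fit
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_ss[j]
            r -= X[:, j] * b[j]            # put it back with the new value
    return b

# toy check against a sparse truth
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 10))
beta = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta + rng.normal(size=200)
print(np.round(lasso_cd(X, y, lam=0.1), 2))
```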
Title: Defensive forecasting for optimal prediction with expert advice
Abstract: The method of defensive forecasting is applied to the problem of prediction with expert advice for binary outcomes. It turns out that defensive forecasting is not only competitive with the Aggregating Algorithm but also handles the case of "second-guessing" experts, whose advice depends on the learner's prediction; this paper assumes that the dependence on the learner's prediction is continuous.
Title: A Data-Parallel Version of Aleph
Abstract: This paper presents work on modifying the Aleph ILP system so that it evaluates the hypothesised clauses in parallel by distributing the dataset among the nodes of a parallel or distributed machine. The paper briefly discusses MPI, the interface used to access message-passing libraries for parallel computers and clusters. It then proceeds to describe an extension of YAP Prolog with an MPI interface and an implementation of data-parallel clause evaluation for Aleph through this interface. The paper concludes by testing the data-parallel Aleph on artificially constructed datasets.
Title: Learning Phonotactics Using ILP
Abstract: This paper describes experiments on learning Dutch phonotactic rules using Inductive Logic Programming, a machine learning discipline based on inductive logical operators. Two different ways of approaching the problem are experimented with, and compared against each other as well as with related work on the task. The results show a direct correspondence between the quality and informedness of the background knowledge and the constructed theory, demonstrating the ability of ILP to take good advantage of the prior domain knowledge available. Further research is outlined.
Title: Markov Chain Modelling for Reliability Estimation of Engineering Systems at Different Scales - Some Considerations
Abstract: The concepts of probability, statistics and stochastic theory are being successfully used in structural engineering. Markov Chain modelling is a simple stochastic process model that has found application both in describing the stochastic evolution of systems and in system reliability estimation. Recent developments in Markov Chain Monte Carlo and the possible integration of Bayesian theory within Markov Chain theory have enhanced its application possibilities. However, its applicability can be extended to a wider range of scales (perhaps from nano to macro) by considering developments in physics (in particular quantum physics). This paper tries to present results from quantum physics that would help in the interpretation of the transition probability matrix. Care has to be taken, however, in the choice of densities used in computing the transition probability matrix. The paper is based on available literature, and the aim is only to show how Markov Chains can be used to model systems at various scales.
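A minimal sketch of the basic computations the abstract refers to (multi-step state probabilities and time to absorption from a transition probability matrix; a generic illustration with made-up numbers, not tied to any particular physical scale).

```python
import numpy as np

# states: 0 = intact, 1 = degraded, 2 = failed (absorbing)
P = np.array([
    [0.95, 0.04, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
])

# state distribution after n steps, starting from 'intact'
pi0 = np.array([1.0, 0.0, 0.0])
for n in (10, 50, 100):
    pin = pi0 @ np.linalg.matrix_power(P, n)
    print(f"P(failed by step {n:3d}) = {pin[2]:.3f}")

# expected steps to failure from each transient state:
# t = (I - Q)^{-1} 1, where Q is the transient-to-transient block
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print("expected steps to failure from intact, degraded:", np.round(t, 1))
```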
Title: Optimal Causal Inference: Estimating Stored Information and Approximating Causal Architecture
Abstract: We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate distortion theory to use causal shielding---a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that, in the limit in which a model complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of underlying causal states can be found by optimal causal estimation. A previously derived model complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid over-fitting.
Title: Updating Probabilities with Data and Moments
Abstract: We use the method of Maximum (relative) Entropy to process information in the form of observed data and moment constraints. The generic "canonical" form of the posterior distribution for the problem of simultaneous updating with data and moments is obtained. We discuss the general problem of non-commuting constraints, when they should be processed sequentially and when simultaneously. As an illustration, the multinomial example of die tosses is solved in detail for two superficially similar but actually very different problems.