| Column | Type | Range / Classes |
|---|---|---|
| url | string | lengths 6 – 1.61k |
| fetch_time | int64 | 1,368,856,904B – 1,726,893,854B |
| content_mime_type | string | 3 classes |
| warc_filename | string | lengths 108 – 138 |
| warc_record_offset | int32 | 9.6k – 1.74B |
| warc_record_length | int32 | 664 – 793k |
| text | string | lengths 45 – 1.04M |
| token_count | int32 | 22 – 711k |
| char_count | int32 | 45 – 1.04M |
| metadata | string | lengths 439 – 443 |
| score | float64 | 2.52 – 5.09 |
| int_score | int64 | 3 – 5 |
| crawl | string | 93 classes |
| snapshot_type | string | 2 classes |
| language | string | 1 class |
| language_score | float64 | 0.06 – 1 |
http://www.numbersaplenty.com/7322
1,590,428,115,000,000,000
text/html
crawl-data/CC-MAIN-2020-24/segments/1590347389309.17/warc/CC-MAIN-20200525161346-20200525191346-00495.warc.gz
202,026,483
3,603
Search a number

7322 = 2 × 7 × 523

Base representations: bin 1110010011010, base 3: 101001012, base 4: 1302122, base 5: 213242, base 6: 53522, base 7: 30230, oct 16232, base 9: 11035, base 10: 7322, base 11: 5557, base 12: 42a2, base 13: 3443, base 14: 2950, base 15: 2282, hex 1c9a.

7322 has 8 divisors (see below), whose sum is σ = 12576. Its totient is φ = 3132.

The previous prime is 7321. The next prime is 7331. The reversal of 7322 is 2237.

Adding to 7322 its reverse (2237), we get a palindrome (9559).

7322 is nontrivially palindromic in base 13.

It is a sphenic number, since it is the product of 3 distinct primes.

It is a super-2 number, since 2×7322² = 107223368, which contains 22 as substring.

It is a Harshad number, since it is a multiple of its sum of digits (14), and also a Moran number, because the ratio is a prime number: 523 = 7322 / (7 + 3 + 2 + 2).

It is an Ulam number.

It is a plaindrome in base 11 and a nialpdrome in base 10.

It is a junction number, because it is equal to n + sod(n) for n = 7297 and 7306.

It is an inconsummate number, since there does not exist a number n which divided by its sum of digits gives 7322.

It is not an unprimeable number, because it can be changed into a prime (7321) by changing a digit.

It is a pernicious number, because its binary representation contains a prime number (7) of ones.

It is a polite number, since it can be written in 3 ways as a sum of consecutive naturals, for example, 248 + ... + 275.

It is an arithmetic number, because the mean of its divisors is an integer number (1572).

2^7322 is an apocalyptic number.

7322 is a deficient number, since it is larger than the sum of its proper divisors (5254).

7322 is a wasteful number, since it uses fewer digits than its factorization.

7322 is an odious number, because the sum of its binary digits is odd.

The sum of its prime factors is 532. The product of its digits is 84, while the sum is 14.

The square root of 7322 is about 85.5686858611. The cubic root of 7322 is about 19.4182419552.

The spelling of 7322 in words is "seven thousand, three hundred twenty-two", and thus it is an iban number.

Divisors: 1 2 7 14 523 1046 3661 7322
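Several of the arithmetic claims above (the divisor list, σ, φ, the super-2 and Harshad/Moran properties) can be sanity-checked with a brute-force Python sketch:

```python
from math import gcd

n = 7322

# Enumerate divisors by trial division
divisors = [d for d in range(1, n + 1) if n % d == 0]
sigma = sum(divisors)  # sum of divisors

# Euler's totient by direct count of coprime residues
phi = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

digit_sum = sum(int(c) for c in str(n))  # 7 + 3 + 2 + 2 = 14
super2 = 2 * n * n                       # super-2 test value
```

Running it confirms the stated values: the 8 divisors sum to σ = 12576, φ = 3132, 2×7322² = 107223368 (which contains "22"), and 7322 / 14 = 523, a prime.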
628
2,074
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.578125
4
CC-MAIN-2020-24
latest
en
0.916018
https://www.scipopt.org/doc-8.0.4/html/benderscut__feas_8h_source.php
1,723,470,818,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722641039579.74/warc/CC-MAIN-20240812124217-20240812154217-00619.warc.gz
746,587,681
5,820
# SCIP Solving Constraint Integer Programs

benderscut_feas.h: Go to the documentation of this file.

```c
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
/*                                                                           */
/*             This file is part of the program and library                  */
/*                  SCIP --- Solving Constraint Integer Programs             */
/*                                                                           */
/*  Copyright (c) 2002-2023 Zuse Institute Berlin (ZIB)                      */
/*                                                                           */
/*  Licensed under the Apache License, Version 2.0 (the "License");          */
/*  you may not use this file except in compliance with the License.         */
/*  You may obtain a copy of the License at                                  */
/*                                                                           */
/*      http://www.apache.org/licenses/LICENSE-2.0                           */
/*                                                                           */
/*  Unless required by applicable law or agreed to in writing, software      */
/*  distributed under the License is distributed on an "AS IS" BASIS,        */
/*  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. */
/*  See the License for the specific language governing permissions and      */
/*  limitations under the License.                                           */
/*                                                                           */
/*  You should have received a copy of the Apache-2.0 license                */
/*  along with SCIP; see the file LICENSE. If not visit scipopt.org.         */
/*                                                                           */
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */

/**@file   benderscut_feas.h
 * @ingroup BENDERSCUTS
 * @brief  Standard feasibility cuts for Benders' decomposition
 * @author Stephen J. Maher
 *
 * The classical Benders' decomposition feasibility cuts arise from an infeasible instance of the Benders'
 * decomposition subproblem.
 * Consider the linear Benders' decomposition subproblem that takes the master problem solution \f$\bar{x}\f$ as input:
 * \f[
 *   z(\bar{x}) = \min\{d^{T}y : Ty \geq h - H\bar{x}, y \geq 0\}
 * \f]
 * If the subproblem is infeasible as a result of the solution \f$\bar{x}\f$, then the Benders' decomposition
 * feasibility cut can be generated from the dual ray. Let \f$w\f$ be the vector corresponding to the dual ray of the
 * Benders' decomposition subproblem. The resulting cut is:
 * \f[
 *   0 \geq w^{T}(h - Hx)
 * \f]
 *
 * Next, consider the nonlinear Benders' decomposition subproblem that takes the master problem solution \f$\bar{x}\f$
 * as input:
 * \f[
 *   z(\bar{x}) = \min\{d^{T}y : g(\bar{x}, y) \leq 0, y \geq 0\}
 * \f]
 * If the subproblem is infeasible as a result of the solution \f$\bar{x}\f$, then the Benders' decomposition
 * feasibility cut can be generated from a minimal infeasible solution, i.e., a solution of the NLP
 * \f[
 *   \min\left\{\sum_i u_i : g(\bar{x}, y) \leq u, y \geq 0, u \geq 0\right\}
 * \f]
 * Let \f$\bar{y}\f$, \f$w\f$ be the vectors corresponding to the primal and dual solution of this auxiliary NLP.
 * The resulting cut is:
 * \f[
 *   0 \geq w^{T}\left(g(\bar{x},\bar{y}) + \nabla_x g(\bar{x},\bar{y}) (x - \bar{x})\right)
 * \f]
 * Note that usually NLP solvers already provide a minimal infeasible solution when declaring the Benders'
 * decomposition subproblem as infeasible.
 */

/*---+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8----+----9----+----0----+----1----+----2*/

#ifndef __SCIP_BENDERSCUT_FEAS_H__
#define __SCIP_BENDERSCUT_FEAS_H__


#include "scip/def.h"
#include "scip/type_benders.h"
#include "scip/type_retcode.h"
#include "scip/type_scip.h"

#ifdef __cplusplus
extern "C" {
#endif

/** creates the Standard Feasibility Benders' decomposition cuts and includes it in SCIP
 *
 * @ingroup BenderscutIncludes
 */
SCIP_EXPORT
SCIP_RETCODE SCIPincludeBenderscutFeas(
   SCIP*                 scip,               /**< SCIP data structure */
   SCIP_BENDERS*         benders             /**< Benders' decomposition */
   );

#ifdef __cplusplus
}
#endif

#endif
```

Cross-references: enum SCIP_Retcode, SCIP_RETCODE (Definition: type_retcode.h:63); type definitions for return codes for SCIP methods; type definitions for SCIP's main datastructure; SCIP_RETCODE SCIPincludeBenderscutFeas(SCIP *scip, SCIP_BENDERS *benders); type definitions for Benders' decomposition methods; common defines and data types used in all packages of SCIP.
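The linear feasibility cut described in the header comment can be illustrated numerically. The sketch below uses a tiny hypothetical subproblem (the data `T`, `h`, `H`, `xbar` and the ray `w` are invented for illustration, not taken from SCIP): a vector w ≥ 0 with Tᵀw ≤ 0 and wᵀ(h − Hx̄) > 0 is a Farkas certificate that the subproblem is infeasible, and it induces the cut 0 ≥ wᵀ(h − Hx).

```python
import numpy as np

# Hypothetical subproblem data (illustrative only): constraints T y >= h - H x
# with y >= 0. For xbar = 0 the system reads y >= 3 and y <= 1: infeasible.
T = np.array([[1.0], [-1.0]])
h = np.array([3.0, -1.0])
H = np.array([[1.0], [0.0]])
xbar = np.array([0.0])

# A dual ray / Farkas certificate for this instance (found by inspection here;
# SCIP obtains it from the LP solver when the subproblem is declared infeasible).
w = np.array([1.0, 1.0])

rhs = h - H @ xbar
# Certificate check: w >= 0, T^T w <= 0, and w^T (h - H xbar) > 0
assert np.all(w >= 0) and np.all(T.T @ w <= 1e-9) and w @ rhs > 0

# Feasibility cut 0 >= w^T (h - H x), rearranged to (w^T H) x >= w^T h
cut_lhs = w @ H   # coefficient of x in the cut
cut_rhs = w @ h   # right-hand side
```

Here the cut works out to x ≥ 2, and at x = 2 the toy subproblem (y ≥ 1 and y ≤ 1) becomes feasible again, which is exactly the role the feasibility cut plays in the master problem.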
1,248
3,912
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.703125
3
CC-MAIN-2024-33
latest
en
0.58748
http://gmatclub.com/blog/category/blog/gmat/quant-gmat/problem-solving-gmat/page/33/?fl=menu
1,454,795,432,000,000,000
text/html
crawl-data/CC-MAIN-2016-07/segments/1454701147841.50/warc/CC-MAIN-20160205193907-00070-ip-10-236-182-209.ec2.internal.warc.gz
98,919,558
19,583
# GMAT Question of the Day (Nov 23): Word Problem and Critical Reasoning

- Nov 23, 02:00 AM   Comments [0]

Math (PS) If a company sells tons (1 ton = 1000 kilograms) of product A annually and charges dollars per ton, what is the profit if it costs dollars to manufacture a kilogram of product A and dollars to ship it to...

# GMAT Question of the Day (Nov 21): Probability and Critical Reasoning

- Nov 21, 02:00 AM   Comments [0]

Math (PS) If Ben were to lose the championship, Mike would be the winner with a probability of , and Rob - . If the probability of Ben being the winner is , what is the probability that either Mike or Rob will...

# GMAT Question of the Day (Nov 18): Geometry and Sentence Correction

- Nov 20, 02:00 AM   Comments [0]

Math (PS) Two watermelons, and , are on sale. Watermelon has a circumference of 6 inches; watermelon , 5 inches. If the price of watermelon is 1.5 times the price of watermelon , which watermelon is a better buy? (Assume that...

# GMAT Question of the Day (Nov 19): Statistics and Sentence Correction

- Nov 19, 02:00 AM   Comments [0]

Math (PS) Which set has the greatest standard deviation?

I. 1, 3, 5, 7, 9
II. 2, 4, 6, 8, 10
III. 1, -1, -3, -5, -7

(A) I (B) II (C) III (D) I and II (E) none

Question Discussion & Explanation. Correct Answer - E - (click and drag your mouse to see the...

# GMAT Question of the Day (Nov 12): Counting and Sentence Correction

- Nov 12, 02:00 AM   Comments [0]

Math (PS) In a set of numbers from 100 to 1000 inclusive, how many integers are odd and do not contain the digit "5"?

(A) 180 (B) 196 (C) 286 (D) 288 (E) 324

Question Discussion & Explanation. Correct Answer - D - (click and drag your mouse to see the answer) GMAT Daily...

# GMAT Question of the Day (Nov 8): Algebra and Sentence Correction

- Nov 8, 02:00 AM   Comments [0]

Math (PS) If and are consecutive positive integers, and: which of the following represents all the possible values of ?

(A) (B) (C) (D) (E)

Question Discussion & Explanation. Correct Answer - B - (click and drag your mouse to see the answer) GMAT Daily Deals Veritas Prep...

# GMAT Question of the Day (Nov 5): Counting and Sentence Correction

- Nov 5, 02:00 AM   Comments [0]

Math (PS) How many times will the digit 7 be written when listing the integers from 1 to 1000?

(A) 110 (B) 111 (C) 271 (D) 300 (E) 304

Question Discussion & Explanation. Correct Answer - D - (click and drag your mouse to see the answer) GMAT Daily Deals Only Knewton Gmat prep gives...

# GMAT Question of the Day (Oct 31): Combinations and Critical Reasoning

- Oct 31, 02:00 AM   Comments [0]

Math (PS) 4 women and 6 men work in the accounting department. In how many ways can a committee of 3 be formed if it has to include at least one woman?

(A) 36 (B) 60 (C) 72 (D) 80 (E) 100

Question Discussion & Explanation. Correct Answer - E - (click and...

# GMAT Question of the Day (Oct 29): Counting and Sentence Correction

- Oct 29, 02:00 AM   Comments [0]

Math (PS) In how many different ways can a group of 8 people be divided into 4 teams of 2 people each?

(A) 90 (B) 105 (C) 168 (D) 420 (E) 2520

Click and drag your mouse to see the answer. Question Discussion & Explanation. Correct Answer - B - GMAT Daily Deals Veritas Prep: 99th...

# GMAT Question of the Day (Oct 26): Coordinate Geometry and Sentence Correction

- Oct 26, 02:00 AM   Comments [0]

Math (PS) On the coordinate graph, a circle is centered at the point (3, 3). If the radius of the circle is , and there is a square inscribed into the circle cutting it into 5 regions, what is the area of the segment...
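Several of these counting questions are small enough to verify by brute force. A minimal Python check of the answers as given in the posts:

```python
from math import comb

# Nov 12: odd integers in [100, 1000] with no digit 5 -> answer (D) 288
no_five_odd = sum(1 for k in range(100, 1001)
                  if k % 2 == 1 and '5' not in str(k))

# Nov 5: times the digit 7 is written from 1 to 1000 -> answer (D) 300
sevens = sum(str(k).count('7') for k in range(1, 1001))

# Oct 31: committees of 3 from 4 women and 6 men with
# at least one woman -> answer (E) 100
committees = comb(10, 3) - comb(6, 3)

# Oct 29: ways to split 8 people into 4 unordered teams of 2 -> answer (B) 105
teams = comb(8, 2) * comb(6, 2) * comb(4, 2) // 24  # divide by 4! orderings
```

Each computed value matches the posted correct answer (288, 300, 100 and 105 respectively).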
1,041
3,589
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.296875
3
CC-MAIN-2016-07
longest
en
0.874633
https://support.nag.com/numeric/nl/nagdoc_27cpp/flhtml/d01/d01intro.html
1,701,993,840,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100705.19/warc/CC-MAIN-20231207221604-20231208011604-00730.warc.gz
620,671,198
22,562
## 1 Scope of the Chapter

This chapter provides routines for the numerical evaluation of definite integrals in one or more dimensions and for evaluating weights and abscissae of integration rules.

## 2 Background to the Problems

The routines in this chapter are designed to estimate:

1. (a) the value of a one-dimensional definite integral of the form
   $$\int_a^b f(x)\,dx \qquad (1)$$
   where $f(x)$ is defined by you, either at a set of points $(x_i, f(x_i))$, for $i = 1, 2, \dots, n$, where $a = x_1 < x_2 < \cdots < x_n = b$, or in the form of a function; and the limits of integration $a, b$ may be finite or infinite. Some methods are specially designed for integrands of the form
   $$f(x) = w(x)\,g(x) \qquad (2)$$
   which contain a factor $w(x)$, called the weight-function, of a specific form. These methods take full account of any peculiar behaviour attributable to the $w(x)$ factor.
2. (b) the values of the one-dimensional indefinite integrals arising from (1) where the ranges of integration are interior to the interval $[a, b]$.
3. (c) the value of a multidimensional definite integral of the form
   $$\int_{R_n} f(x_1, x_2, \dots, x_n)\,dx_n \cdots dx_2\,dx_1 \qquad (3)$$
   where $f(x_1, x_2, \dots, x_n)$ is a function defined by you and $R_n$ is some region of $n$-dimensional space. The simplest form of $R_n$ is the $n$-rectangle defined by
   $$a_i \leq x_i \leq b_i, \quad i = 1, 2, \dots, n \qquad (4)$$
   where $a_i$ and $b_i$ are constants. When $a_i$ and $b_i$ are functions of $x_j$ ($j < i$), the region can easily be transformed to the rectangular form (see page 266 of Davis and Rabinowitz (1975)). Some of the methods described incorporate the transformation procedure.
### 2.1 One-dimensional Integrals

To estimate the value of a one-dimensional integral, a quadrature rule uses an approximation in the form of a weighted sum of integrand values, i.e.,
$$\int_a^b f(x)\,dx \simeq \sum_{i=1}^{N} w_i f(x_i). \qquad (5)$$
The points $x_i$ within the interval $[a, b]$ are known as the abscissae, and the $w_i$ are known as the weights. More generally, if the integrand has the form (2), the corresponding formula is
$$\int_a^b w(x)g(x)\,dx \simeq \sum_{i=1}^{N} w_i g(x_i). \qquad (6)$$
If the integrand is known only at a fixed set of points, these points must be used as the abscissae, and the weighted sum is calculated using finite difference methods. However, if the functional form of the integrand is known, so that its value at any abscissa is easily obtained, then a wide variety of quadrature rules are available, each characterised by its choice of abscissae and the corresponding weights. The appropriate rule to use will depend on the interval $[a, b]$ (whether finite or otherwise) and on the form of any $w(x)$ factor in the integrand. A suitable value of $N$ depends on the general behaviour of $f(x)$; or of $g(x)$, if there is a $w(x)$ factor present. Among possible rules, we mention particularly the Gaussian formulae, which employ a distribution of abscissae which is optimal for $f(x)$ or $g(x)$ of polynomial form.

The choice of basic rules constitutes one of the principles on which methods for one-dimensional integrals may be classified. The other major basis of classification is the implementation strategy, of which some types are now presented.

1. (a) Single rule evaluation procedures

   A fixed number of abscissae, $N$, is used. This number and the particular rule chosen uniquely determine the weights and abscissae. No estimate is made of the accuracy of the result.

2. (b) Automatic procedures

   The number of abscissae, $N$, within $[a, b]$ is gradually increased until consistency is achieved to within a level of accuracy (absolute or relative) you requested. There are essentially two ways of doing this; hybrid forms of these two methods are also possible:

   - A series of rules using increasing values of $N$ are successively applied over the whole interval $[a, b]$. It is clearly more economical if abscissae already used for a lower value of $N$ can be used again as part of a higher-order formula. This principle is known as optimal extension. There is no overlap between the abscissae used in Gaussian formulae of different orders. However, the Kronrod formulae are designed to give an optimal $(2N+1)$-point formula by adding $(N+1)$ points to an $N$-point Gauss formula. Further extensions have been developed by Patterson.
   - The interval $[a, b]$ is repeatedly divided into a number of sub-intervals, and integration rules are applied separately to each sub-interval. Typically, the subdivision process will be carried further in the neighbourhood of a sharp peak in the integrand than where the curve is smooth; thus the distribution of abscissae is adapted to the shape of the integrand. Subdivision raises the problem of what constitutes an acceptable accuracy in each sub-interval. The usual global acceptability criterion demands that the sum of the absolute values of the error estimates in the sub-intervals should meet the conditions required of the error over the whole interval. Automatic extrapolation over several levels of subdivision may eliminate the effects of some types of singularities.

An ideal general-purpose method would be an automatic method which could be used for a wide variety of integrands, was efficient (i.e., required the use of as few abscissae as possible), and was reliable (i.e., always gave results to within the requested accuracy).
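As a concrete illustration of the weighted-sum form (5) and the single-rule strategy in (a), the sketch below applies an $N$-point Gauss-Legendre rule (exact for polynomials of degree up to $2N-1$), using NumPy rather than the NAG routines themselves:

```python
import numpy as np

# 4-point Gauss-Legendre rule on [-1, 1]: abscissae x_i and weights w_i
nodes, weights = np.polynomial.legendre.leggauss(4)

# Approximate the integral of x^6 over [-1, 1] as sum_i w_i f(x_i).
# A 4-point Gauss rule is exact for polynomials up to degree 7, so this
# matches the true value 2/7 to rounding error.
approx = float(np.sum(weights * nodes ** 6))
```

Note that, exactly as described for single-rule procedures, no accuracy estimate is produced; the user must choose $N$ appropriately.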
Complete reliability is unobtainable, and generally higher reliability is obtained at the expense of efficiency, and vice versa. It must therefore be emphasized that the automatic routines in this chapter cannot be assumed to be 100% reliable. In general, however, the reliability is very high.

### 2.2 Multidimensional Integrals

A distinction must be made between cases of moderately low dimensionality (say, up to 4 or 5 dimensions) and those of higher dimensionality. Where the number of dimensions is limited, a one-dimensional method may be applied to each dimension, according to some suitable strategy, and high accuracy may be obtainable (using product rules). However, the number of integrand evaluations rises very rapidly with the number of dimensions, so that the accuracy obtainable with an acceptable amount of computational labour is limited; for example, a product of 3-point rules in 20 dimensions would require more than $10^9$ integrand evaluations. Special techniques such as the Monte Carlo methods can be used to deal with high dimensions.

1. (a) Products of one-dimensional rules

   Using a two-dimensional integral as an example, we have
   $$\int_{a_1}^{b_1} \int_{a_2}^{b_2} f(x, y)\,dy\,dx \simeq \sum_{i=1}^{N} w_i \int_{a_2}^{b_2} f(x_i, y)\,dy \qquad (7)$$
   $$\int_{a_1}^{b_1} \int_{a_2}^{b_2} f(x, y)\,dy\,dx \simeq \sum_{i=1}^{N} \sum_{j=1}^{N} w_i v_j f(x_i, y_j) \qquad (8)$$
   where $(w_i, x_i)$ and $(v_i, y_i)$ are the weights and abscissae of the rules used in the respective dimensions. A different one-dimensional rule may be used for each dimension, as appropriate to the range and any weight function present, and a different strategy may be used, as appropriate to the integrand behaviour as a function of each independent variable. For a rule-evaluation strategy in all dimensions, the formula (8) is applied in a straightforward manner. For automatic strategies (i.e., attempting to attain a requested accuracy), there is a problem in deciding what accuracy must be requested in the inner integral(s).
   Reference to formula (7) shows that the presence of a limited but random error in the $y$-integration for different values of $x_i$ can produce a 'jagged' function of $x$, which may be difficult to integrate to the desired accuracy, and for this reason products of automatic one-dimensional routines should be used with caution (see Lyness (1983)).

2. (b) Monte Carlo methods

   These are based on estimating the mean value of the integrand sampled at points chosen from an appropriate statistical distribution function. Usually a variance-reducing procedure is incorporated to combat the fundamentally slow rate of convergence of the rudimentary form of the technique. These methods can be effective by comparison with alternative methods when the integrand contains singularities or is erratic in some way, but they are of quite limited accuracy.

3. (c) Number theoretic methods

   These are based on the work of Korobov and Conroy and operate by exploiting implicitly the properties of the Fourier expansion of the integrand. Special rules, constructed from so-called optimal coefficients, give a particularly uniform distribution of the points throughout $n$-dimensional space and, from their number theoretic properties, minimize the error on a prescribed class of integrals. The method can be combined with the Monte Carlo procedure.

4. (d) Sag–Szekeres method

   By transformation this method seeks to induce properties into the integrand which make it accurately integrable by the trapezoidal rule. The transformation also allows effective control over the number of integrand evaluations.

5. (e) Sparse grid methods

   Given a set of one-dimensional quadrature rules of increasing levels of accuracy, the sparse grid method constructs an approximation to a multidimensional integral using $d$-dimensional tensor products of the differences between rules of adjacent levels.
   This provides a lower theoretical accuracy than the methods in (a), the full grid approach, which is nonetheless still sufficient for various classes of sufficiently smooth integrands. Furthermore, it requires substantially fewer evaluations than the full grid approach. Specifically, if a one-dimensional quadrature rule has $N \sim O(2^{\ell})$ points, the full grid will require $O(2^{\ell d})$ function evaluations, whereas the sparse grid of level $\ell$ will require $O(2^{\ell} d^{\ell - 1})$. Hence a sparse grid approach is computationally feasible even for integrals over $d \sim O(100)$. Sparse grid methods are deterministic, and may be viewed as automatic whole-domain procedures if their level $\ell$ is allowed to increase.

An automatic adaptive strategy in several dimensions normally involves division of the region into subregions, concentrating the divisions in those parts of the region where the integrand is worst behaved. It is difficult to arrange with any generality for variable limits in the inner integral(s). For this reason, some methods use a region where all the limits are constants; this is called a hyper-rectangle. Integrals over regions defined by variable or infinite limits may be handled by transformation to a hyper-rectangle. Integrals over regions so irregular that such a transformation is not feasible may be handled by surrounding the region by an appropriate hyper-rectangle and defining the integrand to be zero outside the desired region. Such a technique should always be followed by a Monte Carlo method for integration.

The method used locally in each subregion produced by the adaptive subdivision process is usually one of three types: Monte Carlo, number theoretic or deterministic. Deterministic methods are usually the most rapidly convergent but are often expensive to use for high dimensionality and not as robust as the other techniques.
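The product-rule formula (8) of Section 2.2(a) can be sketched directly, again with a NumPy Gauss-Legendre rule in each dimension (not a NAG routine; the integrand is an arbitrary smooth example):

```python
import numpy as np

# 3-point Gauss-Legendre rule, mapped from [-1, 1] to [0, 1]
t, w = np.polynomial.legendre.leggauss(3)
x = (t + 1) / 2   # abscissae on [0, 1]
wx = w / 2        # weights pick up the Jacobian 1/2

f = lambda u, v: u**2 * v**3   # example integrand; exact integral is 1/12

# Formula (8): double sum over the tensor product of the 1-D rules
approx = sum(wx[i] * wx[j] * f(x[i], x[j])
             for i in range(len(x)) for j in range(len(x)))
```

With 3 points per dimension the rule is exact for this polynomial integrand. Note the $N^2$ integrand evaluations: this is the cost growth that limits product rules to low dimensionality, as discussed above.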
## 3 Recommendations on Choice and Use of Available Routines

This section is divided into five subsections. The first subsection illustrates the difference between direct and reverse communication routines. The second subsection highlights the different levels of vectorization provided by different interfaces. Sections 3.3.1, 3.3.2 and 3.4 then consider in turn routines for: one-dimensional integrals over a finite interval; one-dimensional integrals over a semi-infinite or an infinite interval; and multidimensional integrals. Within each subsection, routines are classified by the type of method, which ranges from simple rule evaluation to automatic adaptive algorithms. The recommendations apply particularly when the primary objective is simply to compute the value of one or more integrals, and in these cases the automatic adaptive routines are generally the most convenient and reliable, although also the most expensive in computing time.

Note, however, that in some circumstances it may be counter-productive to use an automatic routine. If the results of the quadrature are to be used in turn as input to a further computation (e.g., an 'outer' quadrature or an optimization problem), then this further computation may be adversely affected by the 'jagged performance profile' of an automatic routine; a simple rule-evaluation routine may provide much better overall performance. For further guidance, the article by Lyness (1983) is recommended.

### 3.1 Direct and Reverse Communication

Routines in this chapter which evaluate an integral value may be classified as either direct communication or reverse communication. See Section 7 in How to Use the NAG Library for a description of these terms. Currently in this chapter the only routine explicitly using reverse communication is d01raf.

### 3.2 Choice of Interface

This section concerns the design of the interface for the provision of abscissae, and the subsequent collection of calculated information, typically integrand evaluations.
Vectorized interfaces typically allow for more efficient operation.

1. (a) Single abscissa interfaces

   The algorithm will provide a single abscissa at which information is required. These are typically the simplest to use, although they may be significantly less efficient than a vectorized equivalent. Most of the algorithms in this chapter are of this type. Examples include d01ajf and d01fbf.

2. (b) Vectorized abscissae interfaces

   The algorithm will return a set of abscissae, at all of which information is required. While these are more complicated to use, they are typically more efficient than a non-vectorized equivalent. They reduce the overhead of function calls, allow the avoidance of repetition of computations common to each of the integrand evaluations, and offer greater scope for vectorization and parallelization of your code. Examples include d01rgf, d01uaf, and the routines d01atf and d01auf, which are vectorized equivalents of d01ajf and d01akf.

3. (c) Multiple integral interfaces

   These are routines which allow for multiple integrals to be estimated simultaneously. As with (b) above, these are more complicated to use than single integral routines; however, they can provide higher efficiency, particularly if several integrals require the same subcalculations at the same abscissae. They are most efficient if integrals which are supplied together are expected to have similar behaviour over the domain, particularly when the algorithm is adaptive. Examples include d01eaf and d01raf.

### 3.3 One-dimensional Integrals

#### 3.3.1 Over a Finite Interval

1. (a) Integrand defined at a set of points

   If $f(x)$ is defined numerically at four or more points, then the Gill–Miller finite difference method (d01gaf) should be used. The interval of integration is taken to coincide with the range of $x$ values of the points supplied. It is in the nature of this problem that any routine may be unreliable.
   In order to check results independently, and so as to provide an alternative technique, you may fit the integrand by Chebyshev series using e02adf and then use routine e02ajf to evaluate its integral (which need not be restricted to the range of the integration points, as is the case for d01gaf). A further alternative is to fit a cubic spline to the data using e02baf and then to evaluate its integral using e02bdf.

2. (b) Integrand defined as a function

   If the functional form of $f(x)$ is known, then one of the following approaches should be taken. They are arranged in order from most specific to most general, hence the first applicable procedure in the list will be the most efficient. However, if you do not wish to make any assumptions about the integrand, the most reliable routines to use will be d01atf (or d01ajf), d01auf (or d01akf), d01alf, d01rgf or d01raf, although these will in general be less efficient for simple integrals.

   1. (i) Rule-evaluation routines

      If $f(x)$ is known to be sufficiently well behaved (more precisely, can be closely approximated by a polynomial of moderate degree), a Gaussian routine with a suitable number of abscissae may be used. d01bcf or d01tbf with d01fbf may be used if it is required to examine the weights and abscissae. d01tbf is faster and more accurate, whereas d01bcf is more general. d01uaf uses the same quadrature rules as d01tbf, and may be used if you do not explicitly require the weights and abscissae. If $f(x)$ is well behaved, apart from a weight-function of the form
      $$\left|x - \frac{a+b}{2}\right|^{c} \quad \text{or} \quad (b-x)^{c}(x-a)^{d},$$
      d01bcf with d01fbf may be used. d01bcf and d01tbf generate weights and abscissae for specific Gauss rules. Weights and abscissae for other quadrature formulae may be computed using routines d01tdf or d01tef. Wherever possible use d01tdf in preference to d01tef; the former, however, requires information that may not be readily available.
   2. (ii) Automatic whole-interval routines

      If $f(x)$ is reasonably smooth, and the required accuracy is not too high, the automatic whole-interval routines d01arf and d01bdf may be used. Additionally, d01esf with $d = 1$ may be used with an appropriate transformation from the unit interval. d01bdf uses the Gauss 10-point rule, with the 21-point Kronrod extension, and the subsequent 43- and 87-point Patterson extensions if required. d01esf supports multiple simultaneous integrals, and has a vectorized interface. It uses either high-order Gauss–Patterson rules (of size $2^{\ell} - 1$, for $\ell = 1, \dots, 9$), or high-order Clenshaw–Curtis rules (of size $2^{\ell - 1} + 1$, for $\ell = 2, \dots, 12$). Gauss–Patterson rules possess greater polynomial accuracy, whereas Clenshaw–Curtis rules are often well suited to oscillatory integrals. d01arf incorporates the same high-order Gauss–Patterson rules as d01esf, and is the only routine that may be used for indefinite integration.

      Firstly, several routines are available for integrands of the form $w(x)g(x)$, where $g(x)$ is a 'smooth' function (i.e., has no singularities, sharp peaks or violent oscillations in the interval of integration) and $w(x)$ is a weight function of one of the following forms:

      1. if $w(x) = (b-x)^{\alpha}(x-a)^{\beta}(\log(b-x))^{k}(\log(x-a))^{l}$, where $k, l = 0$ or $1$, $\alpha, \beta > -1$: use d01apf;
      2. if $w(x) = \frac{1}{x-c}$: use d01aqf (this integral is called the Hilbert transform of $g$);
      3. if $w(x) = \cos(\omega x)$ or $\sin(\omega x)$: use d01anf (this routine can also handle certain types of singularities in $g(x)$).

      Secondly, there are multiple routines for general $f(x)$, using different strategies.
      d01atf (and d01ajf), and d01auf (and d01akf), use the strategy of Piessens et al. (1983), using repeated bisection of the interval and, in the first case, the $\epsilon$-algorithm (Wynn (1956)) to improve the integral estimate. This can cope with singularities away from the end points, provided singular points do not occur as abscissae; d01auf tends to perform better than d01atf on more oscillatory integrals.

      d01alf uses the same subdivision strategy as d01atf over a set of initial interval segments determined by supplied break-points. It is hence suitable for integrals with discontinuities (including switches in definition) or sharp peaks occurring at known points. Such integrals may also be approximated using other routines which do not allow break-points, although such integrals should then be evaluated over each of the sub-intervals separately.

      d01raf again uses the strategy of Piessens et al. (1983), and provides the functionality of d01alf, d01atf and d01auf in a reverse communication framework. It also supports multiple integrals and uses a vectorized interface for the abscissae. Hence it is likely to be more efficient if several similar integrals are required to be evaluated over the same domain. Furthermore, its behaviour can be tailored through the use of optional parameters.

      d01ahf uses the strategy of Patterson (1968) and the $\epsilon$-algorithm to adaptively evaluate the integral in question. It tends to be more efficient than the bisection-based algorithms, although these tend to be more robust when singularities occur away from the end points.

      d01rgf uses another adaptive scheme due to Gonnet (2010). This attempts to match the quadrature rule to the underlying integrand as well as subdividing the domain. Further, it can explicitly deal with singular points at abscissae, should NaNs or ∞ be returned by the user-supplied (sub)routine, provided the generation of these does not cause the program to halt (see Chapter X07).
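The repeated-bisection strategy behind routines such as d01atf can be sketched in a few lines. The sketch below is adaptive Simpson quadrature with a global tolerance split between sub-intervals, not the actual Piessens et al. algorithm (which uses Gauss–Kronrod rules and the $\epsilon$-algorithm), but it illustrates the same idea of subdividing where the local error estimate is too large:

```python
def adaptive_simpson(f, a, b, tol=1e-10):
    """Integrate f over [a, b] by recursive bisection with Simpson's rule."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        flm, frm = f(0.5 * (a + m)), f(0.5 * (m + b))
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        # Error estimate from comparing the refined and coarse rules
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        # Subdivide further where the estimate is not yet acceptable,
        # splitting the tolerance between the two halves
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol) +
                recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

# Example: the integral of 4/(1+x^2) over [0, 1] equals pi
result = adaptive_simpson(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
```

Subdivision concentrates automatically near any sharp feature of the integrand, mirroring the adaptive behaviour described above.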
#### 3.3.2 Over a Semi-infinite or Infinite Interval

(a) Integrand defined at a set of points

If $f(x)$ is defined numerically at four or more points, and the portion of the integral lying outside the range of the points supplied may be neglected, then the Gill–Miller finite difference method, d01gaf, should be used.

(b) Integrand defined as a function

(i) Rule evaluation routines

If $f(x)$ behaves approximately like a polynomial in $x$, apart from a weight function of the form:

1. $e^{-\beta x},\ \beta>0$ (semi-infinite interval, lower limit finite); or
2. $e^{-\beta x},\ \beta<0$ (semi-infinite interval, upper limit finite); or
3. $e^{-\beta(x-\alpha)^2},\ \beta>0$ (infinite interval),

or if $f(x)$ behaves approximately like a polynomial in $(x+b)^{-1}$ (semi-infinite range), then the Gaussian routines may be used. d01uaf may be used if it is not required to examine the weights and abscissae. d01bcf or d01tbf with d01fbf may be used if it is required to examine the weights and abscissae. d01tbf is faster and more accurate, whereas d01bcf is more general. d01ubf returns an approximation to the specific integral with weight $e^{-x^2}$ over a semi-infinite interval.

(ii) Automatic adaptive routines

d01amf may be used, except for integrands which decay slowly towards an infinite end point, and oscillate in sign over the entire range. For this class, it may be possible to calculate the integral by integrating between the zeros and invoking some extrapolation process (see c06baf). d01asf may be used for integrals involving weight functions of the form $\cos(\omega x)$ and $\sin(\omega x)$ over a semi-infinite interval (lower limit finite).

The following alternative procedures are mentioned for completeness, though their use will rarely be necessary.

1. If the integrand decays rapidly towards an infinite end point, a finite cut-off may be chosen, and the finite range methods applied.
2. If the only irregularities occur in the finite part (apart from a singularity at the finite limit, with which d01amf can cope), the range may be divided, with d01amf used on the infinite part.

3. A transformation to finite range may be employed, e.g.,
$$x=\frac{1-t}{t} \quad\text{or}\quad x=-\log_e t$$
will transform $(0,\infty)$ to $(1,0)$, while for infinite ranges we have
$$\int_{-\infty}^{\infty}f(x)\,dx=\int_{0}^{\infty}\left[f(x)+f(-x)\right]dx.$$
If the integrand behaves badly on $(-\infty,0)$ and well on $(0,\infty)$ or vice versa it is better to compute it as $\int_{-\infty}^{0}f(x)\,dx+\int_{0}^{\infty}f(x)\,dx$. This saves computing unnecessary function values in the semi-infinite range where the function is well behaved.

### 3.4 Multidimensional Integrals

A number of techniques are available in this area and the choice depends to a large extent on the dimension and the required accuracy. It can be advantageous to use more than one technique as a confirmation of accuracy, particularly for high-dimensional integrations. Several routines include a transformation procedure, using a user-supplied subroutine, which allows general product regions to be easily dealt with in terms of conversion to the standard $n$-cube region.

(a) Products of one-dimensional rules (suitable for up to about $5$ dimensions)

If $f(x_1,x_2,\dots,x_n)$ is known to be a sufficiently well behaved function of each variable $x_i$, apart possibly from weight functions of the types provided, a product of Gaussian rules may be used. These are provided by d01bcf or d01tbf with d01fbf. Rules for finite, semi-infinite and infinite ranges are included. For two-dimensional integrals only, unless the integrand is very badly behaved, the automatic whole-interval product procedure of d01daf may be used. The limits of the inner integral may be user-specified functions of the outer variable.
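A two-dimensional integral whose inner limits depend on the outer variable, of the kind just described for d01daf, can also be formed by nesting any one-dimensional rule. Here a composite Simpson rule (illustrative only) plays the integrator at both levels, for $\int_0^1\int_0^x xy\,dy\,dx = 1/8$:

```python
def simpson(f, a, b, n=100):
    # Composite Simpson rule on [a, b]; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def F(x):
    # Inner integral: F(x) = integral over y in [0, x] of x*y dy,
    # with x held effectively constant.
    return simpson(lambda y: x * y, 0.0, x)

val = simpson(F, 0.0, 1.0)  # outer integral of F(x) over [0, 1]
print(val)  # exact value is 1/8 = 0.125
```

Because the inner integrand is linear in $y$ and the outer one cubic in $x$, Simpson's rule is exact here up to rounding; for general integrands the nesting multiplies the two rules' costs and errors.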
Infinite limits may be handled by transformation (see Section 3.3.2); end point singularities introduced by transformation should not be troublesome, as the integrand value will not be required on the boundary of the region.

If none of these routines proves suitable and convenient, the one-dimensional routines may be used recursively. For example, the two-dimensional integral
$$I=\int_{a_1}^{b_1}\int_{a_2}^{b_2}f(x,y)\,dy\,dx$$
may be expressed as
$$I=\int_{a_1}^{b_1}F(x)\,dx, \quad\text{where}\quad F(x)=\int_{a_2}^{b_2}f(x,y)\,dy.$$
The user-supplied code to evaluate $F(x)$ will call the integration routine for the $y$-integration, which will call more user-supplied code for $f(x,y)$ as a function of $y$ ($x$ being effectively a constant). The reverse communication routine d01raf may be used by itself in a pseudo-recursive manner, in that it may be called to evaluate an inner integral for the integrand value of an outer integral also being calculated by d01raf.

(b) Sag–Szekeres method

Two routines are based on this method. d01fdf is particularly suitable for integrals of very large dimension, although the accuracy is generally not high. It allows integration over either the general product region (with built-in transformation to the $n$-cube) or the $n$-sphere. Although no error estimate is provided, two adjustable arguments may be varied for checking purposes or may be used to tune the algorithm to particular integrals. d01jaf is also based on the Sag–Szekeres method and integrates over the $n$-sphere. It uses improved transformations which may be varied according to the behaviour of the integrand. Although it can yield very accurate results, it can only practically be employed for dimensions not exceeding $4$.

(c) Number theoretic method

Two subroutines are based on this method, d01gcf and a vectorized equivalent d01gdf. Algorithms of this type carry out multidimensional integration using the Korobov–Conroy method over a product region with built-in transformation to the $n$-cube.
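The Korobov–Conroy idea can be sketched with a rank-1 lattice rule: an equal-weight sum over the points $\{k\mathbf{z}/n \bmod 1\}$ for a generating vector $\mathbf{z}$. The Fibonacci generator and the random-shift scheme below are illustrative choices, not the Library's precomputed optimal coefficients:

```python
import math, random

def shifted_lattice(f, n, z, shifts=8, seed=1):
    # Rank-1 lattice rule: equal-weight sum over the points {k*z/n mod 1}.
    # Random shifts make each pass an unbiased estimate of the integral,
    # and the spread across passes yields a statistical standard error.
    rng = random.Random(seed)
    d = len(z)
    estimates = []
    for _ in range(shifts):
        delta = [rng.random() for _ in range(d)]
        s = 0.0
        for k in range(n):
            x = [(k * zj / n + dj) % 1.0 for zj, dj in zip(z, delta)]
            s += f(x)
        estimates.append(s / n)
    mean = sum(estimates) / shifts
    var = sum((e - mean) ** 2 for e in estimates) / (shifts * (shifts - 1))
    return mean, math.sqrt(var)

# Fibonacci lattice (n = 610, z = (1, 377)) on a smooth 2-D integrand:
est, se = shifted_lattice(lambda x: x[0] * x[1], 610, (1, 377))
print(est, se)  # exact integral of x*y over the unit square is 1/4
```

For smooth integrands such lattice points cover the cube far more evenly than pseudorandom ones, which is why the method scales to high dimension while remaining cheap.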
A stochastic modification of this method is incorporated into the routines in this Library, hybridising the technique with the Monte Carlo procedure. An error estimate is provided in terms of the statistical standard error. A number of pre-computed optimal coefficient rules for up to $20$ dimensions are provided; others can be computed using d01gyf and d01gzf. Like the Sag–Szekeres method it is suitable for large dimensional integrals, although the accuracy is not high.

d01gcf requires a function to be provided to evaluate the value of the integrand at a single abscissa, and a subroutine to return the upper and lower limits of integration in a given dimension. d01gdf has a vectorized interface which can result in faster execution, especially on vector-processing machines. You are required to provide two subroutines, the first to return an array of values of the integrand at each of an array of points, and the second to evaluate the limits of integration at each of an array of points. This reduces the overhead of function calls, avoids repetitions of computations common to each of the evaluations of the integral and limits of integration, and offers greater scope for vectorization of your code.

(d) A combinatorial extrapolation method

d01paf computes a sequence of approximations and an error estimate to the integral of a function over a multidimensional simplex using a combinatorial method with extrapolation.

(e) Sparse Grid method

d01esf implements a sparse grid quadrature scheme for the integration of a vector of multidimensional integrals over the unit hypercube,
$$F \approx \int_{[0,1]^d} f(x)\,dx.$$
The routine uses a vectorized interface, which returns a set of points at which the integrands must be evaluated in a sparse storage format for efficiency. Other domains can be readily integrated over by using an appropriate mapping inside the provided subroutine for evaluating the integrands.
It is suitable for $d$ up to $O(100)$, although no upper bound on the number of dimensions is enforced. It will also evaluate one-dimensional integrals, although in this case the sparse grid used is in fact the full grid. The routine uses optional parameters, set and queried using the routines d01zkf and d01zlf respectively. Amongst other options, these allow the parallelization of the routine to be controlled.

(f) Automatic routines (d01fcf and d01gbf)

Both routines are for integrals of the form
$$\int_{a_1}^{b_1}\int_{a_2}^{b_2}\cdots\int_{a_n}^{b_n}f(x_1,x_2,\dots,x_n)\,dx_n\,dx_{n-1}\cdots dx_1.$$
d01gbf is an adaptive Monte Carlo routine. This routine is usually slow and not recommended for high-accuracy work. It is a robust routine that can often be used for low-accuracy results with highly irregular integrands or when $n$ is large.

d01fcf is an adaptive deterministic routine. Convergence is fast for well behaved integrands. Highly accurate results can often be obtained for $n$ between $2$ and $5$, using significantly fewer integrand evaluations than would be required by d01gbf. The routine will usually work when the integrand is mildly singular and for $n\le 10$ should be used before d01gbf. If it is known in advance that the integrand is highly irregular, it is best to compare results from at least two different routines. There are many problems for which one or both of the routines will require large amounts of computing time to obtain even moderately accurate results. The amount of computing time is controlled by the number of integrand evaluations you have allowed, and you should set this argument carefully, with reference to the time available and the accuracy desired.

d01eaf extends the technique of d01fcf to integrate adaptively more than one integrand, that is, to calculate the set of integrals
$$\int_{a_1}^{b_1}\int_{a_2}^{b_2}\cdots\int_{a_n}^{b_n}\left(f_1,f_2,\dots,f_m\right)dx_n\,dx_{n-1}\cdots dx_1$$
for a set of similar integrands $f_1,f_2,\dots,f_m$ where $f_i=f_i(x_1,x_2,\dots,x_n)$.
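The Monte Carlo approach used by routines such as d01gbf can be caricatured in a few lines — an equal-weight average of integrand values at pseudorandom points over the unit cube, with the statistical standard error as the (probabilistic) accuracy measure. This minimal sketch omits the adaptive stratification a real routine would add:

```python
import random, math

def monte_carlo(f, dim, n=20000, seed=0):
    # Plain Monte Carlo over the unit dim-cube: sample mean of f plus
    # the standard error sqrt(sample variance / n).
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        fx = f([rng.random() for _ in range(dim)])
        total += fx
        total_sq += fx * fx
    mean = total / n
    var = max(total_sq / n - mean * mean, 0.0)
    return mean, math.sqrt(var / n)

est, stderr = monte_carlo(sum, 6)
print(est, stderr)  # exact integral of sum(x) over [0,1]^6 is 3
```

The $O(n^{-1/2})$ standard error is independent of dimension, which is exactly why Monte Carlo remains usable when $n$ is large but is slow when high accuracy is wanted.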
## 4 Decision Trees

### Tree 1: One-dimensional integrals over a finite interval

- Is the functional form of the integrand known? No: d01gaf. Yes: continue.
- Is indefinite integration required? Yes: d01arf. No: continue.
- Do you require reverse communication? Yes: d01raf. No: continue.
- Are you concerned with efficiency for simple integrals? Yes:
  - Is the integrand smooth (polynomial-like) apart from weight function $|x-(a+b)/2|^{c}$ or $(b-x)^{c}(x-a)^{d}$? Yes: d01arf, d01uaf, d01tbf or d01bcf and d01fbf, or d01gcf.
  - Is the integrand reasonably smooth and the required accuracy not too great? Yes: d01arf, d01bdf, d01esf or d01uaf.
  - Otherwise: d01ahf, d01ajf, d01atf, d01raf or d01rgf.
- Are multiple integrands to be integrated simultaneously? Yes: d01esf or d01raf. No: continue.
- Has the integrand discontinuities, sharp peaks or singularities at known points other than the end points? Yes: split the range and begin again; or use d01alf or d01rgf.
- Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function $(b-x)^{\alpha}(x-a)^{\beta}(\log(b-x))^{k}(\log(x-a))^{l}$? Yes: d01apf.
- Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function $(x-c)^{-1}$? Yes: d01aqf.
- Is the integrand free of violent oscillations apart from weight function $\cos(\omega x)$ or $\sin(\omega x)$? Yes: d01anf.
- Is the integrand free of singularities? Yes: d01ajf, d01akf, d01auf or d01esf.
- Is the integrand free of discontinuities and of singularities except possibly at the end points? Yes: d01ahf. No: d01ajf, d01atf, d01raf or d01rgf.

Note: d01atf, d01auf, d01raf and d01rgf are likely to be more efficient, due to their vectorized interfaces, than d01ajf and d01akf, which use a more conventional user-interface, consistent with other routines in the chapter.
### Tree 2: One-dimensional integrals over a semi-infinite or infinite interval

- Is the functional form of the integrand known? No: d01gaf (integrates over the range of the points supplied). Yes: continue.
- Are you concerned with efficiency for simple integrands? Yes:
  - Is the integrand smooth (polynomial-like) with no exceptions? Yes: d01uaf, d01bdf, d01arf or d01esf with transformation; see Section 3.3.2(b)(ii).
  - Does the integrand involve the weight $e^{-x^2}$ over a semi-infinite interval? Yes: d01ubf.
  - Is the integrand smooth (polynomial-like) apart from weight function $e^{-\beta x}$ (semi-infinite range) or $e^{-\beta(x-a)^2}$ (infinite range), or is the integrand polynomial-like in $\frac{1}{x+b}$ (semi-infinite range)? Yes: d01uaf, or d01bcf and d01fbf, or d01tbf and d01fbf, or d01tdf and d01fbf (d01tdf may require use of d01tef).
- Has the integrand discontinuities, sharp peaks or singularities at known points other than a finite limit? Yes: split the range; begin again using the finite or infinite range trees.
- Does the integrand oscillate over the entire range? No: d01amf. Yes:
  - Does the integrand decay rapidly towards an infinite limit? Yes: use d01amf; or set a cutoff and use the finite range tree.
  - Is the integrand free of violent oscillations apart from weight function $\cos(\omega x)$ or $\sin(\omega x)$ (semi-infinite range)? Yes: d01asf. No: use finite-range integration between the zeros and extrapolate (see c06baf).

### Tree 3: Multidimensional integrals

- Is dimension $=2$ and product region? Yes: d01daf. No: continue.
- Is dimension $\le 4$? Yes:
  - Is region an $n$-sphere? Yes: d01fbf with user transformation, or d01jaf.
  - Is region a simplex? Yes: d01fbf with user transformation, or d01paf.
  - Is the integrand smooth (polynomial-like) in each dimension apart from weight function? Yes: d01tbf or d01bcf with d01fbf.
  - Is the integrand free of extremely bad behaviour? Yes: d01esf, d01fcf, d01fdf or d01gcf.
  - Is the bad behaviour on the boundary? Yes: d01fcf or d01fdf. No: compare results from at least two of d01fcf, d01fdf, d01gbf and d01gcf, d01esf and one-dimensional recursive application.
- No (dimension $>4$):
  - Is region an $n$-sphere? Yes: d01fdf.
  - Is region a simplex? Yes: d01paf.
  - Is high accuracy required? Yes: d01fdf with argument tuning.
  - Is dimension high? Yes: d01fdf, d01gcf or d01gdf, d01esf. No: d01fcf.

Note: in the case where there are many integrals to be evaluated, d01eaf should be preferred to d01fcf. d01gdf is likely to be more efficient than d01gcf, which uses a more conventional user-interface, consistent with other routines in the chapter.

## 5 Functionality Index

Korobov optimal coefficients for use in d01gcf and d01gdf:
- when number of points is a product of $2$ primes: d01gzf
- when number of points is prime: d01gyf

Multidimensional quadrature:
- over a finite two-dimensional region: d01daf
- over a general product region:
  - Korobov–Conroy number-theoretic method: d01gcf
  - Sag–Szekeres method (also over $n$-sphere): d01fdf
  - variant of d01gcf especially efficient on vector machines: d01gdf
- over a hyper-rectangle:
  - multiple integrands: d01eaf
  - Monte Carlo method: d01gbf
  - sparse grid method (with user transformation), multiple integrands, vectorized interface: d01esf
- over an $n$-simplex: d01paf
- over an $n$-sphere $(n\le 4)$, allowing for badly behaved integrands: d01jaf

One-dimensional quadrature:
- adaptive integration of a function over a finite interval:
  - strategy due to Gonnet, vectorized interface: d01rgf
  - strategy due to Patterson, suitable for well-behaved integrands, except possibly at end-points: d01ahf
  - strategy due to Piessens and de Doncker:
    - allowing for singularities at user-specified break-points: d01alf
    - single abscissa interface: d01ajf
    - vectorized interface: d01atf
  - suitable for highly oscillatory integrals:
    - single abscissa interface: d01akf
    - vectorized interface: d01auf
  - weight function $1/(x-c)$, Cauchy principal value (Hilbert transform): d01aqf
  - weight function $\cos(\omega x)$ or $\sin(\omega x)$: d01anf
  - weight function with end-point
singularities of algebraico-logarithmic type: d01apf
- adaptive integration of a function over an infinite interval or semi-infinite interval:
  - no weight function: d01amf
  - weight function $\cos(\omega x)$ or $\sin(\omega x)$: d01asf
- integration of a function defined by data values only, Gill–Miller method: d01gaf
- non-adaptive integration over a finite, semi-infinite or infinite interval, using pre-computed weights and abscissae:
  - specific integral with weight $\exp(-x^2)$ over semi-infinite interval: d01ubf
  - vectorized interface: d01uaf
- non-adaptive integration over a finite interval: d01bdf
- non-adaptive integration over a finite interval, with provision for indefinite integrals also: d01arf
- reverse communication, adaptive integration over a finite interval, multiple integrands, efficient on vector machines: d01raf

Service routines:
- array size query for d01raf: d01rcf
- general option getting: d01zlf
- general option setting and initialization: d01zkf

Weights and abscissae for Gaussian quadrature rules:
- method of Golub and Welsch, calculating the weights and abscissae: d01tdf
- generate recursive coefficients: d01tef
- more general choice of rule, calculating the weights and abscissae: d01bcf
- restricted choice of rule, using pre-computed weights and abscissae: d01tbf

## 6 Auxiliary Routines Associated with Library Routine Arguments

- d01fdv (nagf_quad_md_sphere_dummy_region): see the description of the argument region in d01fdf.
- d01rbm (nagf_quad_d01rb_dummy): see the description of the argument monit in d01rbf.

## 7 Withdrawn or Deprecated Routines

The following lists all those routines that have been withdrawn since Mark 23 of the Library or are in the Library, but deprecated.
| Routine | Status | Replacement Routine(s) |
|---|---|---|
| d01baf | Withdrawn at Mark 26 | d01uaf |
| d01baw | Withdrawn at Mark 26 | |
| d01bax | Withdrawn at Mark 26 | |
| d01bay | Withdrawn at Mark 26 | |
| d01baz | Withdrawn at Mark 26 | |
| d01bbf | Withdrawn at Mark 26 | d01tbf |
| d01rbf | To be withdrawn at Mark 28 | No replacement required |

## 8 References

Davis P J and Rabinowitz P (1975) Methods of Numerical Integration Academic Press

Gonnet P (2010) Increasing the reliability of adaptive quadrature using explicit interpolants ACM Trans. Math. Software 37 26

Lyness J N (1983) When not to use an automatic quadrature routine SIAM Rev. 25 63–87

Patterson T N L (1968) The optimum addition of points to quadrature formulae Math. Comput. 22 847–856

Piessens R, de Doncker–Kapenga E, Überhuber C and Kahaner D (1983) QUADPACK, A Subroutine Package for Automatic Integration Springer–Verlag

Sobol I M (1974) The Monte Carlo Method The University of Chicago Press

Stroud A H (1971) Approximate Calculation of Multiple Integrals Prentice–Hall

Wynn P (1956) On a device for computing the $e_m(S_n)$ transformation Math. Tables Aids Comput. 10 91–96
http://www.java-forums.org/new-java/50840-palindrome-checker-print.html
# Palindrome checker

• 11-06-2011, 11:59 AM
thirdage

Palindrome checker

I want to check if a phrase is a palindrome. For example (H ell o o l l e h). I want to write a method that skips spaces as it reads from the left and as it reads from the right.

Code:
```public static boolean PalindromeChecker(String str)
{
    boolean mismatch = false;
    int len = str.length();
    int i = 0;
    int j = len - 1;
    while (i < j && !mismatch)
    {
        String lowerCased = str.toLowerCase();
        if (lowerCased.charAt(i) == lowerCased.charAt(j))
        {
            mismatch = false;
        }
        else
        {
            mismatch = true;
        }
        i = i + 1;
        j = j - 1;
    }
    return !mismatch;   // true when no mismatch was found
}```

My question is: where should I enter the statement that would skip spaces as I go from the left and as I go from the right? And is it an if statement, or another loop?

Thanks a lot in advance!!
• 11-06-2011, 12:13 PM
gcalvin

Re: Palindrome checker

Pseudocode:

Code:
```    int startpos = 0
    while (char at startpos is not a letter)
        increment startpos
    int endpos = mystring.length - 1
    while (char at endpos is not a letter)
        decrement endpos```

Consider using a recursive approach. Maybe a boolean isPalindrome(char[] myCharArray, int startpos, int endpos) method.
• 11-06-2011, 12:19 PM
JosAH

Re: Palindrome checker

You could define two small methods:

Code:
```int decrement(String s, int i) { ... }
int increment(String s, int i) { ... }```

... that decrement or increment index value i until s[i] isn't a white space character or until i is an out of bounds index; if it is out of bounds the methods return -1 to signal that event.
In your loop you increment i and decrement j until one of them is -1 or until i >= j.

kind regards,

Jos
• 11-06-2011, 12:29 PM
thirdage

Re: Palindrome checker

I didn't learn the recursive approach yet ... for now I must do this in loops and if statements!
• 11-06-2011, 01:18 PM
gcalvin

Re: Palindrome checker

A few other things... PalindromeChecker is a good name for the class, but a bad name for this method. boolean isPalindrome(String string) would be better. Style is important, and a method name should be a verb and should start with a lower-case letter. Also, you're going to confuse yourself with that variable named mismatch. Consider converting your whole string to upper case right at the beginning, and then convert that to a char[]. Then you can do:

Code:
```    while (endpos > startpos)
        while (char at startpos is not a letter)
            increment startpos
            if (startpos == endpos) return true
        while (char at endpos is not a letter)
            decrement endpos
            if (startpos == endpos) return true
        if (char at startpos doesn't match char at endpos) return false
    if loop completes return true (it means that you've met in the middle)```
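For comparison, the same two-pointer, skip-non-letters idea discussed in the replies reads naturally in Python (a language-neutral sketch of the approach, not a drop-in for the Java assignment):

```python
def is_palindrome(s):
    # Two-pointer scan that skips non-letter characters from both ends,
    # comparing the remaining letters case-insensitively.
    i, j = 0, len(s) - 1
    while i < j:
        if not s[i].isalpha():
            i += 1
        elif not s[j].isalpha():
            j -= 1
        elif s[i].lower() != s[j].lower():
            return False
        else:
            i += 1
            j -= 1
    return True

print(is_palindrome("H ell o o l l e h"))  # True
print(is_palindrome("not a palindrome"))   # False
```

The skipping happens inside the main loop rather than in separate pre-loops, so a pointer never runs past its partner.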
https://www.teachoo.com/3746/1234/Misc-23---If-y--ea-cos-1-x--show-(1---x2)-d2y-dx2---x-dy-dx/category/Finding-second-order-derivatives--Implicit-form/
Finding second order derivatives - Implicit form
Chapter 5 Class 12 Continuity and Differentiability
Concept wise

### Transcript

Misc 23: If $y=e^{a\cos^{-1}x}$, $-1\le x\le 1$, show that $(1-x^2)\dfrac{d^2y}{dx^2}-x\dfrac{dy}{dx}-a^2y=0$.

$y=e^{a\cos^{-1}x}$

Differentiating w.r.t. $x$:

$\dfrac{dy}{dx}=\dfrac{d\left(e^{a\cos^{-1}x}\right)}{dx}=e^{a\cos^{-1}x}\cdot\dfrac{d\left(a\cos^{-1}x\right)}{dx}=e^{a\cos^{-1}x}\cdot a\left(\dfrac{-1}{\sqrt{1-x^2}}\right)=\dfrac{-a\,e^{a\cos^{-1}x}}{\sqrt{1-x^2}}$

So $\sqrt{1-x^2}\,\dfrac{dy}{dx}=-a\,e^{a\cos^{-1}x}$, i.e.

$\sqrt{1-x^2}\,\dfrac{dy}{dx}=-ay \quad (1)$

Since we need to prove $(1-x^2)\dfrac{d^2y}{dx^2}-x\dfrac{dy}{dx}-a^2y=0$, square both sides of (1):

$\left(\sqrt{1-x^2}\,\dfrac{dy}{dx}\right)^2=(-ay)^2 \implies (1-x^2)(y')^2=a^2y^2$

Differentiating again w.r.t. $x$:

$\dfrac{d(1-x^2)}{dx}(y')^2+(1-x^2)\,\dfrac{d(y')^2}{dx}=a^2\cdot 2yy'$

$(-2x)(y')^2+(1-x^2)\left(2y'\cdot y''\right)=2a^2yy'$

Dividing both sides by $2y'$:

$-xy'+(1-x^2)y''=a^2y$

$(1-x^2)y''-xy'-a^2y=0$

Hence proved.
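The identity can also be spot-checked numerically: with $y=e^{a\cos^{-1}x}$, central-difference approximations to $y'$ and $y''$ should make $(1-x^2)y''-xy'-a^2y$ vanish up to discretization error. The values $a=2$ and $x=0.3$ below are arbitrary samples inside $(-1,1)$:

```python
import math

a, x, h = 2.0, 0.3, 1e-4   # arbitrary sample values, with -1 < x < 1

def y(t):
    return math.exp(a * math.acos(t))

# Central differences for y' and y'':
y0 = y(x)
yp = (y(x + h) - y(x - h)) / (2.0 * h)
ypp = (y(x + h) - 2.0 * y0 + y(x - h)) / (h * h)

# The proved identity (1 - x^2) y'' - x y' - a^2 y should be ~0:
residual = (1.0 - x * x) * ypp - x * yp - a * a * y0
print(residual)
```

The residual is tiny relative to the individual terms (which are of order $a^2y\approx 50$ here), confirming the derivation.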
http://lampx.tugraz.at/~hadley/psd/L10/psd-n-mosfet/index.html
# MOSFET - characteristics

• Linear region:
• Saturation:

## MOSFET - characteristics

This application plots the drain-current characteristics of an n-channel MOSFET according to the input data characterizing the transistor and its functional state.

### How to use this application

On the right side of the screen the desired settings may be entered. On changing a value, the plot on the left side automatically changes by recalculating the transistor equations described below. More than one curve, parametrized by the gate-source voltage, may be plotted at once.

### MOSFET drain current

The formula for the drain current is derived with the gradual channel approximation. This model assumes a voltage drop across the channel, caused by the outer drain-source voltage, which shrinks the conducting channel and so limits the current. The resulting formula involves a constant $K$ containing the transistor's geometry and material properties, and the threshold voltage $V_{th}$ of the transistor, describing the minimum gate voltage for strong inversion and thus for the transistor to start conducting.

Note that the intrinsic carrier concentration appearing in the threshold-voltage formula is temperature dependent as well. This application correctly considers this fact, so no action has to be taken by the user when changing temperature.

The linear-region formula describes an upside-down parabola. Experiments show that the drain current does not decrease, as the formula would suggest, when the drain-source voltage gets bigger and bigger. If it gets bigger than the so-called saturation voltage (which is the drain-source voltage at the maximum drain current), the transistor is said to be in saturation. Otherwise it is said to be in linear operation.
In saturation, the current through the transistor cannot be increased by an increase in drain-source voltage. It basically acts as a voltage-controlled current source. The saturation-region formula for the current drive contains an extra term involving a quantity $\lambda$, called the channel length modulation coefficient. Experiments show that the drain current slightly increases when increasing the drain-source voltage in saturation, so a MOSFET is not an ideal current source: the current depends on the voltage applied. To capture this fact in the formula, this coefficient was introduced. When it is zero, the transistor acts like an ideal current source. This value may only be determined experimentally; typical values lie between 0.0001 V⁻¹ and 0.1 V⁻¹.

### Parameters

- K: if selected, the value of $K$.
  - Z: the width of the transistor (only applies if $K$ is calculated).
  - L: the gate length of the transistor (only applies if $K$ is calculated).
  - μn: the electron mobility of the transistor substrate (only applies if $K$ is calculated).
  - Cox: the specific capacitance of the gate oxide (only applies if $K$ is calculated).
- Vth (threshold voltage): if selected, the value of $V_{th}$.
  - tox: the width of the gate oxide (only applies if $V_{th}$ is calculated).
  - εox: the relative permittivity of the gate oxide (only applies if $V_{th}$ is calculated).
  - NA: the acceptor doping concentration (only applies if $V_{th}$ is calculated).
  - ni: the intrinsic carrier concentration of the substrate at 300 K (only applies if $V_{th}$ is calculated).
  - T: the temperature (only applies if $V_{th}$ is calculated).
  - VFB: the flat band voltage of the transistor (only applies if $V_{th}$ is calculated).
- λ (channel length modulation): the channel length modulation coefficient.
- VGS (gate-source voltage): the gate-source voltage. To plot more than one gate voltage, add another one with the ⊕-button.
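The behaviour described above can be sketched with the gradual-channel equations in a standard textbook form. The constants, the definition of $K$, and the way $\lambda$ enters are assumptions for illustration and may differ from this application's exact conventions:

```python
def drain_current(vgs, vds, vth=1.0, k=0.5e-3, lam=0.01):
    # Gradual-channel MOSFET model, standard textbook form (assumed
    # conventions, not necessarily this application's exact formulas):
    #   linear     (vds <  vgs-vth): Id = K * (2*(vgs-vth)*vds - vds**2)
    #   saturation (vds >= vgs-vth): Id = K * (vgs-vth)**2 * (1 + lam*vds)
    if vgs <= vth:
        return 0.0  # below threshold: no strong-inversion channel
    vov = vgs - vth  # overdrive; also the saturation (peak-current) voltage
    if vds < vov:
        return k * (2.0 * vov * vds - vds * vds)
    return k * vov * vov * (1.0 + lam * vds)

# Sweeping vds for one gate voltage: the current rises, then saturates
# with a slight residual slope set by the channel-length-modulation lam.
for vds in (0.5, 1.0, 2.0, 5.0):
    print(vds, drain_current(vgs=3.0, vds=vds))
```

Setting `lam = 0` makes the saturation current flat, i.e. an ideal voltage-controlled current source, as the text notes.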
https://math.stackexchange.com/questions/571818/in-baby-rudin-absolutely-converge-uniformly-converge-and-so-on
# In baby Rudin: absolutely converge, uniformly converge, and so on…

I have a question from baby Rudin, p. 165, exercise #4. The problem is: consider
$$f(x)=\sum_{n=1}^\infty\frac{1}{1+n^2x}.$$
For what values of $x$ does the series converge absolutely? On what intervals does it converge uniformly? On what intervals does it fail to converge uniformly? Is $f$ continuous wherever the series converges? Is $f$ bounded?

I don't know how to solve this problem. If you solve it, I would appreciate it very much. Thank you! :)

Try the comparison test with $\frac{1}{x}\sum_{n=1}^{\infty}\frac{1}{n^2}$ for $x\neq 0$; at $x=0$ judge convergence by inspection.
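The hint can be checked numerically: for $x>0$ each term satisfies $\frac{1}{1+n^2x}<\frac{1}{n^2x}$, so the partial sums stay below $\frac{1}{x}\cdot\frac{\pi^2}{6}$ (shown here for the sample value $x=1$):

```python
import math

def partial_sum(x, n_terms=100000):
    # Partial sum of f(x) = sum_{n>=1} 1/(1 + n^2 x).
    return sum(1.0 / (1.0 + n * n * x) for n in range(1, n_terms + 1))

s = partial_sum(1.0)
bound = math.pi ** 2 / 6  # (1/x) * sum 1/n^2 at x = 1
print(s, bound)  # s stays below the comparison bound
```

At $x=0$ every term equals $1$, so the series plainly diverges there, which is the "inspection" part of the hint.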
rmcircuits.com
### TDR Modelling & Testing

In today’s digital electronics interference is a big concern, especially when it comes to video, USB interfaces, Ethernet, and other wireless communications. “Noise” in a circuit can result in anything from poor picture quality to a loss or degradation of critical signals.

Every wire and every circuit that transmits electricity has a resulting electromagnetic field around it, surrounding it and affecting the surrounding area. Near high-power lines this field is strong enough for a person to feel. Though circuits on a board do not carry nearly as much electricity as a high-power line, the electromagnetic field is still there, even though it’s a lot weaker. In a critical design, the electromagnetic field can interfere with the signal flow in an adjacent circuit. This field can be controlled by the width and height of the circuit, as well as its proximity to a plane layer.

Impedance is measured in Ohms. Single-ended impedance concerns one circuit by itself: the reading reflects the effect the circuit has on the surrounding circuitry. The circuit could still be in a group, but it is evaluated independently. High impedance from a circuit in a sensitive area can interfere with surrounding signals. In the digital world, this can lead to excessive noise or other problems. To have a circuit that will not affect surrounding signals, as a rule of thumb, the distance to the nearest adjacent circuit or feature of any kind must be 20X the circuit width. Since that is not possible in today’s high-density designs, the impedance of a circuit can be controlled using a variety of different methods: from specialty materials with varying Dk values, to varying the distance to a plane layer, which helps absorb excess electromagnetic radiation. Varying the circuit width and height will also have a major effect on the surrounding impedance. Single-ended impedance can easily be controlled once a baseline is determined.
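The width, height, and distance-to-plane trade-offs described above can be made concrete with the classic IPC-style surface-microstrip approximation. The geometry and $\varepsilon_r$ values below are illustrative assumptions, not any particular stack-up:

```python
import math

def microstrip_z0(w_mil, h_mil, t_mil=1.4, er=4.3):
    # Classic IPC-2141-style surface microstrip approximation
    # (valid roughly for 0.1 < w/h < 2.0 and er < 15):
    #   Z0 = 87 / sqrt(er + 1.41) * ln(5.98*h / (0.8*w + t))
    # w = trace width, h = dielectric height to the plane, t = trace
    # thickness, all in mils; er = 4.3 is an assumed FR-4 value.
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

# Moving the ground plane closer (smaller h) lowers the impedance,
# and a wider trace lowers it too, as described above.
print(microstrip_z0(12, 10))  # ~62 ohms
print(microstrip_z0(12, 6))   # ~43 ohms
print(microstrip_z0(20, 10))  # wider trace: lower Z0 than the 12-mil case
```

Such closed-form estimates are for first-pass sizing; fabricators confirm the final geometry with a field solver and TDR coupon measurements.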
Bringing a plane layer (power or ground) closer to the signal layer lowers the impedance, since more of the field is absorbed by the surrounding metal. Similarly, the circuit can be surrounded by a ground plane on the same layer, forming what is known as a co-planar waveguide, to achieve the same result. What matters is the distance to ground, and the dielectric properties of the layers the electromagnetic waves must travel through to get absorbed by the metal. Since bringing the ground closer lowers the impedance, adding pre-preg and increasing the distance to the ground layer will inversely increase the impedance. Another factor is the total cross-sectional area of the circuit or wire: the height and the width are very critical. As the circuit’s cross-sectional area decreases, the impedance increases. Conversely, as the cross-sectional area increases, the impedance decreases. For example: if a circuit is testing at 60 Ohms and I want it to be 50 Ohms, I can either increase the circuit width, use heavier copper to increase the circuit height, or bring the ground plane closer by using a thinner core or less pre-preg. Differential impedance involves two circuits running next to each other, and the effect they have on one another. Even impedance is calculated when the signals are running the same way, and odd is calculated when the signals are running opposite from each other. The differential impedance is calculated from the odd and even values. Generally, differential impedance is used to match the circuits on a board to the cable they will connect to. For example: USB cables are a standard twisted pair of wires with 90 Ohm ± 15% differential impedance. Circuits designed to connect to that USB cable are generally required to be 90 Ohms ± 10%, to match the cable impedance. Another example is Ethernet, which uses cables of 100 Ohm ± 15% impedance. The circuits designed to connect to this cable will generally be 100 Ohms ± 10%. 
Manufacturability: Having one or two impedances on a board is a relatively standard requirement, depending on the connectivity required, and it is becoming prevalent on more and more designs. Each circuit will influence the surrounding area, and with each additional impedance requirement, the opportunity for error increases. It is not uncommon for Rockymountain Circuits, Inc. to build a 16-layer board with a single-ended 50 Ohm controlled impedance on each signal layer along with 90 Ohm USB and 100 Ohm Ethernet circuits on those same layers. Precision is the key to success. Another major factor in successfully controlling the impedance of a circuit (or any conductor, for that matter) is its size. The larger the circuit, the easier it is to control the impedance. It is easy to control an etched circuit to ± 1 mil width: repeatability is 100% when the board is designed with 12 mil circuits. When the circuitry is reduced to 4 or 5 mils, though, controlling the circuit width to ± 1 mil is no longer acceptable, because the requirement is expressed as a percentage: 1 mil is 25% of a 4-mil circuit, but only 8.3% of a 12-mil circuit. The thinner the circuitry, the more demanding the manufacturing process; tolerances can drop below the point where it is physically possible to control the width of a circuit. If the required circuitry gets small enough, the edge of the circuit itself becomes a factor. It is advantageous to design a 7-mil controlled impedance circuit over a 4-mil one for repeatability and manufacturability. Tolerances can become unmanageable as circuitry is miniaturized. Though there is a small amount of gained real estate, the resulting additional processing required to meet a ±10% differential impedance specification at this circuit width could become cost prohibitive.
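The trends described above (wider trace or closer ground plane means lower impedance) can be sketched with the classic IPC-2141 closed-form microstrip approximation. This is only a rough estimate, not the TDR measurement or field-solver modelling the article is about, and the example dimensions below are invented for illustration:

```python
import math

def microstrip_z0(w_mil, t_mil, h_mil, er):
    """Approximate single-ended microstrip impedance in Ohms.

    IPC-2141 closed-form estimate:
        Z0 = 87 / sqrt(er + 1.41) * ln(5.98*h / (0.8*w + t))
    w = trace width, t = trace thickness, h = height above the
    reference plane (all in the same units), er = dielectric constant.
    """
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

# Baseline: 12 mil trace, 1.4 mil (1 oz) copper, 10 mil to ground, FR-4 (er ~ 4.3)
base = microstrip_z0(12, 1.4, 10, 4.3)

# The trends from the text: widening the trace lowers Z0,
# adding pre-preg (moving the ground plane farther away) raises it.
wider  = microstrip_z0(16, 1.4, 10, 4.3)
taller = microstrip_z0(12, 1.4, 14, 4.3)

print(f"baseline: {base:.1f} Ohm, wider trace: {wider:.1f} Ohm, more pre-preg: {taller:.1f} Ohm")
```

In practice the approximation is only reasonable in a limited w/h range; real stack-up design would use a field solver and TDR verification as the article describes.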
1,177
5,687
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2024-38
latest
en
0.941988
https://www.indiabix.com/electronics/alternating-current-and-voltage/discussion-283-1
1,716,816,799,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971059040.32/warc/CC-MAIN-20240527113621-20240527143621-00156.warc.gz
713,352,868
7,815
# Electronics - Alternating Current and Voltage - Discussion Discussion Forum : Alternating Current and Voltage - General Questions (Q.No. 3) 3. What is the peak-to-peak voltage of the waveform in the given circuit? 2 V 4 V 6 V 8 V Explanation: No answer description is available. Let's discuss. Discussion: 17 comments Page 1 of 2. Muhammad Kamran said:   2 years ago Peak-to-peak value is the maximum voltage change occurring during one cycle of alternating voltage or current. The peak-to-peak value of an AC voltage is defined as the difference between its positive peak and its negative peak. Hari said:   5 years ago Thank you @Sundhar. Sanjeev said:   5 years ago Peak-to-peak (pk-pk) is the difference between the maximum positive and the maximum negative amplitudes of a waveform, as shown below. For an AC sine wave with no DC component, the peak-to-peak amplitude is equal to approximately 2.828 times the root-mean-square amplitude. Asif said:   6 years ago What is the rms value of a given graph? Ravi said:   6 years ago What is a starter? Power Serge said:   7 years ago The zero line is 6V. From 6V to 10V is 4V difference. From 6V to 2V is 4V difference. So, if we add 2 differences, 4+4=8V. Dorah said:   7 years ago The maximum peak value ie 10v minus least peak value ie 2v. So, the answer is 8v. Obaid said:   8 years ago What is the peak to peak value for 120 V ac? Can you please explain in detail? Arv said:   8 years ago Peak to peak will be Zero. Ashish said:   9 years ago If both voltages are equal then what will be the answer?
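The reasoning in the comments (zero line at 6 V, peaks at 10 V and 2 V, so Vpp = 8 V) is easy to check numerically. A small sketch, with a sine wave made up to match those values:

```python
import math

def peak_to_peak(samples):
    """Peak-to-peak value: maximum minus minimum of the sampled waveform."""
    return max(samples) - min(samples)

# Sine wave with a 6 V offset (the "zero line") and 4 V amplitude,
# as in the discussed circuit: it swings between 10 V and 2 V.
wave = [6 + 4 * math.sin(2 * math.pi * n / 100) for n in range(100)]
print(peak_to_peak(wave))          # ~8 V

# Sanity check of the 2.828 figure quoted in the comments: for the AC
# component alone (no DC), Vpp is 2*sqrt(2) ~ 2.828 times the RMS value.
ac = [v - 6 for v in wave]
rms = math.sqrt(sum(v * v for v in ac) / len(ac))
print(peak_to_peak(ac) / rms)      # ~2.828
```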
435
1,570
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.4375
3
CC-MAIN-2024-22
latest
en
0.896519
http://www.abovetopsecret.com/forum/thread633591/pg4
1,502,986,064,000,000,000
text/html
crawl-data/CC-MAIN-2017-34/segments/1502886103579.21/warc/CC-MAIN-20170817151157-20170817171157-00697.warc.gz
469,567,199
18,188
# So... What if 9/11 wasn't an inside job? posted on Nov, 28 2010 @ 09:02 PM with each successive floor the momentum should be slowed as the floors below are absorbing the energy Why would the momentum slow , if the mass is increasing as each successive floor collapses ? posted on Nov, 29 2010 @ 03:33 AM Originally posted by abcddcba in the second case where the only thing holding up the floors are joints then 28 floors dropping 1 floor down should have destroyed the floors below it and with each successive floor the momentum should be slowed as the floors below are absorbing the energy released when those 28 floors suddenly dropped 1 floor height, or level. You seem to forget that after every floor that collapses, the entire top section plus that extra floor is pulled down again for several meters by gravity, speeding it up and generating all fresh and new momentum to crush the next floor. This momentum can only be larger than it was on the previous floor because the crushing mass increased by one floor and there is residue momentum from the collision from the previous floor. So with each floor, momentum increases. The model in that video is flawed as it does not represent the actual structure. posted on Nov, 29 2010 @ 04:33 AM Originally posted by abcddcba anyone should be able to see that in either scenario the towers should not have collapsed the way they did. if the floors below are holding up the floors above then you have to use the what im going to call pyramid theory because i cant remember the law right now. As long as you don't distinguish between floors and levels people will play semantic debating games. The columns in the core and on the perimeter were not the FLOOR. psik posted on Nov, 30 2010 @ 06:22 AM a little something called resistance, you should look into it. 
you might be able to figure out why the wtc shouldnt just fall straight down as if nothing was supporting it. also where are you getting this idea that the mass increased? from where? the plane in the building? thats negligible at best. if you have 100 pounds of weights (mass) stacked up in 10 pound increments and in between each weight is a sponge cake weighing 1 gram you have 100 pounds + 10 grams. if the weight at the top flattens the spongecake below it causing it to contact the weight below it causing a chain reaction of collapsing spongecakes and weights you still only have 100 pounds and 10 grams of total mass. edit on 30-11-2010 by abcddcba because: logic. posted on Nov, 30 2010 @ 06:29 AM Originally posted by -PLB- Originally posted by abcddcba in the second case where the only thing holding up the floors are joints then 28 floors dropping 1 floor down should have destroyed the floors below it and with each successive floor the momentum should be slowed as the floors below are absorbing the energy released when those 28 floors suddenly dropped 1 floor height, or level. You seem to forget that after every floor that collapses, the entire top section plus that extra floor is pulled down again for several meters by gravity, speeding it up and generating all fresh and new momentum to crush the next floor. This momentum can only be larger than it was on the previous floor because the crushing mass increased by one floor and there is residue momentum from the collision from the previous floor. So with each floor, momentum increases. The model in that video is flawed as it does not represent the actual structure. again you like the other guy are forgetting resistance. the only way it would just keep going is if it fell 28 stories/floors before making contact with the next level/floor below. whats with this semantics you guys are playing with the word floor? 
it means the **** thing you are standing on right now that is being supported by the floor and beams below it unless you live in a basement. if you have a 5 floor/story(its the same thing dont play word games with me) building and you suddenly make the 2nd floor disappear the top 3 floors will fall down 1 floor/level/story and the combined weight with the force of gravity (Fg = m*g, where m = mass and g = 9.81 m/s²) will crush the floor that they fell on in this case being the bottom floor. if you have a 100 story building and you make the 77th floor disappear, the 23 floors above will not have enough force to just destroy the remaining 76 floors below them its physically ***** impossible. this is why the wtc shouldnt have fallen down like they did. even if you took out 10 floors its still not enough. you would have had to have taken out the middle 3rd of the building for there to be enough energy to destroy the rest. theres no generation of new momentum you cant just create energy out of thin air this is 5th grade science here folks. potential energy = energy stored in every object. kinetic energy = energy being used. you cannot create or destroy energy, only transfer/translate it to something else. you throw a knife at a wall. your potential energy goes through your arm and into the knife, the knife leaves your hand with whatever force you put into it, it flies through the air and some of the energy is transferred to the air which is resisting it, the remaining energy is transferred from the knife[kinetic energy] to the wall [potential energy] which is distributed into the wall as the force from you throwing a knife at the wall was not great enough to either knock the wall over or punch a hole through it. edit on 30-11-2010 by abcddcba because: science lesson. posted on Nov, 30 2010 @ 07:17 AM By now it should have been clear that by floor I mean just the part you are standing on, so without the support columns. 
As for the resistance, it would be equal to the load capacity of the floors (as per definition in the previous sentence, I will use this definition further on) and not equal to the load capacity of the support columns. The load capacity of the floors is designed to carry only the weight of the floor, not of the top section. That is what the support columns are for. I already explained why in previous posts. As for preservation of energy, the momentum increases as a result of gravity pulling a mass to the ground. Buildings store a significant amount of potential energy, which is transferred to kinetic energy when parts of the building fall due to gravity. It is indeed not rocket science, so get educated. Start for a basic understanding for example here: en.wikipedia.org... edit on 30-11-2010 by -PLB- because: (no reason given) posted on Nov, 30 2010 @ 07:41 AM also where are you getting this idea that the mass increased? Look , I could really care less if you call them floors , or if you call them levels . I don't see where it is relevant . As for increased mass , I'm really puzzled by your question . If five floors drop onto another floor , you now have six floors dropping . when those six floors drop onto another floor , you now have seven floors dropping . When those seven floors drop onto a floor , you now have eight floors dropping . With each successive floor failure , the mass that is dropping increases by the weight of one more floor level . Can you not see that ? So , how does the momentum decrease , while the mass that is falling , increases ? Physically impossible . posted on Nov, 30 2010 @ 08:45 AM So that brings us back to the issue of why we don't have that information after NINE YEARS and why supposed experts haven't been demanding that information for all of that time. Because the real experts don’t have any issue with the OS of the collapse. It’s only the ‘web experts’ who do. 
posted on Nov, 30 2010 @ 09:23 AM What's the matter with you conspiracy theorists? Don't you know that hundreds of heavy duty steel columns failing at virtually the same time in a modern skyscraper is no big deal? I'm sure there are thousands of examples of undamaged steel columns failing simultaneously in buildings which have not been intentionally set up for demolition. And if you can't find any such examples, keep looking, they're out there. posted on Nov, 30 2010 @ 10:07 AM I think you are pretty close, I also believe "they" had already planned something like what happened, and loosened up the nooses on all security for a few months till they dialed in what "they" were planning to do (they meaning the patsies, cough i mean terrorist ) And Silverstein was ever so willing to donate his complex( buildings 1,2,3,4,5,6,and 7 which all were destroyed ..why only his ?)to help the movie set be built that was played out before our very eyes with the use of real people and some actors thrown in(US government has done this for years)to make it all seem like it was real , when in actuality, The plot was the only real thing of that whole day , but the actions were all manipulated,not really controlled, but manipulated/nudged/pushed in one direction ..lol to even further the illusion that the plot ,was real as real gets I am just saying posted on Nov, 30 2010 @ 10:08 AM Originally posted by samkent So that brings us back to the issue of why we don't have that information after NINE YEARS and why supposed experts haven't been demanding that information for all of that time. Because the real experts don’t have any issue with the OS of the collapse. It’s only the ‘web experts’ who do. Yes it is certainly an interesting issue. People can't comprehend that in order for skyscrapers to hold themselves up every LEVEL must be strong enough to support the combined weights of all of the LEVELS above therefore the designers had to figure out how much steel was needed on each LEVEL. 
And yet NINE YEARS after the event they allow REAL EXPERTS to get away with not telling them the TONS of STEEL and TONS of CONCRETE that were on every LEVEL and yet believe this nonsense anyway. So apparently we have a society of people that regard it as intelligent to think what they are told no matter how DUMB it is. Do skyscrapers have to hold themselves up DUH! 41 years after the Moon landing and the nation that put men on the Moon can't tell the world the TONS of STEEL and TONS of CONCRETE that were on every LEVEL of buildings designed before 1969. That makes so much sense. 9/11 is the Piltdown Man incident of the 21st century. psik posted on Nov, 30 2010 @ 10:40 AM in order for skyscrapers to hold themselves up every LEVEL must be strong enough to support the combined weights of all of the LEVELS above Are you simply ignoring my posts ? I have explained this to you over and over . The "levels" DID NOT support the levels above them ! Why do you keep ignoring this ??? posted on Nov, 30 2010 @ 11:29 AM People can't comprehend that in order for skyscrapers to hold themselves up every LEVEL must be strong enough to support the combined weights of all of the LEVELS above therefore the designers had to figure out how much steel was needed on each LEVEL. I totally disagree. From my understanding the center core and the exterior are the only things holding each floor. The floors are attached at each end. If those attachments fail or the floor trusses fail, all the weight of that floor would fall to the next lower floor. Since that next lower floor was not designed to hold its own weight and the weight of the floor above, it too would fail. This failure would put pulling stresses on the outer steel as well as the core, setting them up for sideways failure. A self-sustaining failure. I think most truthers feel the building was made with steel I beams in a box configuration. Like the Empire State building. Stacking one steel box on top of another. 
But the size of the lower steel would have to be so large it would break the bank. If they had, or even could have, likely both building would still be here. I would consider the design to be flimsy and only strong enough to support itself. I doubt it could have survived a 707 hit, even though they said it could. posted on Nov, 30 2010 @ 01:15 PM Originally posted by okbmd in order for skyscrapers to hold themselves up every LEVEL must be strong enough to support the combined weights of all of the LEVELS above Are you simply ignoring my posts ? I have explained this to you over and over . The "levels" DID NOT support the levels above them ! Why do you keep ignoring this ??? So you are saying the core columns and perimeter columns up to 8 feet above and 2 feet below the surface of a floor is not on the same LEVEL with that floor? What are you saying is holding up the portion of the building above that level? psik posted on Nov, 30 2010 @ 01:41 PM As for increased mass , I'm really puzzled by your question . If five floors drop onto another floor , you now have six floors dropping . when those six floors drop onto another floor , you now have seven floors dropping . When those seven floors drop onto a floor , you now have eight floors dropping . With each successive floor failure , the mass that is dropping increases by the weight of one more floor level . Can you not see that ? So , how does the momentum decrease , while the mass that is falling , increases ? Physically impossible Even if it were possible that the upper-floors gained velocity with the increased weight added by each new collapsing floor making the falling mass increasingly heavier enough to crush the tower below, there's a fundamental inconsistency that needs to be addressed, which is, where are the floors in the rubble? 
The rubble pile is only five stories high (and the underground basement 20 meters high), which means the floors couldn't have increased in weight enough to crush the significantly stronger tower below. And any arguments along the lines of "the floors were compacted and those five floors really represented twenty" need to be substantiated. A straight-down, symmetrical natural collapse I feel is highly unlikely, because physics dictates that objects invariably fall to the path of least resistance. Therefore as the falling mass crashed into the undamaged floors below the resistance would cause the falling mass to tilt sideways, causing a non-symmetrical collapse. It seems inconceivable to me that the falling mass (which is the same weight the towers held up every day) would fall through the towers at speeds exceptionally close to freefall. edit on 30-11-2010 by Nathan-D because: (no reason given) posted on Nov, 30 2010 @ 01:55 PM Since the floors were never build to carry the weight of the top section, it seems to me they would fail easily. Especially when you consider the dynamic load was equivalent to 30 times the top section weight (according to Wikipedia). See the following scientific study for more details: www.civil.northwestern.edu... So on what exactly do you base that this is "inconceivable"? Which scientific study? posted on Nov, 30 2010 @ 02:17 PM Since the floors were never build to carry the weight of the top section, it seems to me they would fail easily. Especially when you consider the dynamic load was equivalent to 30 times the top section weight (according to Wikipedia). You actually see the top-section pivot outwards in one of the videos, which means it couldn't have collapsed symmetrically straight down unless it straightened up. Why don't you tell us all about this gigantic dynamic load that contains the mass distribution information to show how it got around the conservation of angular momentum to straighten up after pivoting outwards? 
So on what exactly do you base that this is "inconceivable"? Which scientific study? We see the same exact symmetrical global unified descent at an unnatural consistent unwavering near-free-fall rate despite different damage and vastly different weights above. The towers were designed to support their weight so the idea they could have crushed themselves at essentially freefall is improbable unless the columns were weakened to allow for almost no resistance. edit on 30-11-2010 by Nathan-D because: (no reason given) posted on Nov, 30 2010 @ 02:26 PM Originally posted by Nathan-D You actually see the top-section pivot outwards in one of the videos, which means it couldn't have collapsed symmetrically straight down through the building, unless it straightened up. Why don't you tell us all about this gigantic dynamic load that contains the mass distribution information to show how it got around the conservation of angular momentum to straighten up after reaching unstable equilibrium? Before disappearing from view, the upper part of the South tower was seen to tilt significantly and of the North tower mildly. Some wondered why the tilting did not continue, so that the upper part would pivot about its base like a falling tree (see Fig. 4 of Bažant and Zhou 2002b). However, such toppling to the side was impossible because the horizontal reaction to the rate of angular momentum of the upper part would have exceeded the elastoplastic shear resistance of the story at least 10.3× (Bažant and Zhou 2002b). Although I am unable to confirm the validity of this without doing a lot of studying, as I am no expert in this field. We see the same exact symmetrical global unified descent at an unnatural consistent unwavering near-free-fall rate despite different damage and vastly different weights above. 
The towers were designed to support their weight, so the idea they could have crashed themselves at essentially freefall while simultaneously being pulverised to dust due to gravity alone seems highly improbable to me. The calculations in the paper (and its references) point out otherwise. Why are those wrong? And you ask me for a scientific study, well, I could ask you the same thing. NIST haven't even explained in detail how the towers collapsed, in fact, their computer models inexplicably stop at the collapse initiation. I would like to see a scientific study proving a natural collapse. I linked to a study in my previous post. Look at the references in that paper for more details. So why are you asking? Are you rejecting that study? Why? posted on Nov, 30 2010 @ 02:42 PM post by -PLB- I linked to a study in my previous post. Look at the references in that paper for more details. So why are you asking? Are you rejecting that study? Why? I've explained my case above. The towers came down at essentially freefall which violates basic physics. There is nothing exceptionally hard about knowing what G is. Free-fall acceleration is acceleration of an object acted on only by force of gravity. You're telling me this is too hard to understand and this is possible in a gravitational collapse of a steel-framed building? Although I am unable to confirm the validity of this without doing a lot of studying, as I am no expert in this field. Right. So you admit you don't understand it enough to confirm its validity. If that is the case, then why are you standing by it? As I said, if the top-section begins to pivot away from its centre of mass it cannot straighten up unless acted on by an equal and opposite force and nowhere in the videos do you see this happen, it just disappears into a cloud of dust. edit on 30-11-2010 by Nathan-D because: (no reason given) posted on Nov, 30 2010 @ 02:54 PM Originally posted by Nathan-D I've explained my case above. 
The towers came down at essentially freefall which violates basic physics. There is nothing exceptionally hard about knowing what G is. Free-fall acceleration is acceleration of an object acted on only by force of gravity. You're telling me this is too hard to understand and this is possible in a gravitational collapse of a steel-framed building? It was more like half of free fall speed. The study I linked shows it is possible. Why is that scientific study wrong, and your opinion correct, of which you still did not say what you based it on? Right. So you admit you don't understand it enough to confirm its validity. If that is the case, then why are you standing by it? As I said, if the top-section begins to pivot away from its centre of mass it cannot straighten up unless acted on by an equal and opposite force. You obviously also do not understand it well enough to refute it. So unless another study or scientist refutes it by showing why it is wrong, I will stand by it, and not by your gut feeling. Show me the study that proves it is wrong, and we have something to discuss. Until then, I have no reason to believe it's wrong. Intuitively it makes sense to me. What we currently have is several studies that show that the progressive collapse is inevitable, and none that shows it is inconceivable. Why should I go by your gut feeling, and not by the science?
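The momentum question the posters keep circling (does an accreting falling mass speed up or slow down?) can be put into a toy calculation. This is a deliberately crude sketch of the one-dimensional "crush-down" picture discussed in the Bažant papers cited in the thread: each story adds mass in a perfectly inelastic collision, and gravity re-accelerates the block over a story height. Column resistance is ignored entirely, which is exactly the simplification the thread is arguing about; the story height and mass values are illustrative, not measured.

```python
import math

G = 9.81        # m/s^2
STORY_H = 3.7   # m, approximate story height (illustrative)

def crush_down(top_floors, total_floors, floor_mass=1.0):
    """Velocity of the falling block after each story, conserving momentum
    in every floor impact and ignoring all column resistance."""
    mass = top_floors * floor_mass
    v = 0.0
    history = []
    for _ in range(total_floors - top_floors):
        # free acceleration over one story height
        v = math.sqrt(v * v + 2 * G * STORY_H)
        # perfectly inelastic collision with the next (stationary) floor
        v = mass * v / (mass + floor_mass)
        mass += floor_mass
        history.append(v)
    return history

vs = crush_down(top_floors=12, total_floors=110)
# momentum is conserved in each impact (the collision slows the block),
# yet the block still speeds up overall, because gravity adds momentum
# between impacts
print(f"after 1st impact: {vs[0]:.1f} m/s, after last: {vs[-1]:.1f} m/s")
```

The toy model only illustrates the kinematics both sides invoke; whether the real floors could absorb the impacts is the load-capacity question debated earlier in the thread, which this sketch does not address.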
4,541
20,748
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.21875
3
CC-MAIN-2017-34
longest
en
0.949308
http://oeis.org/A187813
1,590,805,363,000,000,000
text/html
crawl-data/CC-MAIN-2020-24/segments/1590347407001.36/warc/CC-MAIN-20200530005804-20200530035804-00443.warc.gz
92,559,227
6,431
A187813 Numbers n whose base-b digit sum is not b for all bases b >= 2. 0, 1, 2, 4, 8, 14, 30, 32, 38, 42, 44, 54, 60, 62, 74, 84, 90, 98, 102, 104, 108, 110, 114, 128, 138, 140, 150, 152, 158, 164, 168, 174, 180, 182, 194, 198, 200, 212, 224, 228, 230, 234, 240, 242, 252, 270, 278, 282, 284, 294, 308, 312, 314, 318, 332, 338, 348
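A direct membership test for the definition above, as a sketch: for n >= 2 only bases 2 through n-1 need checking, since in base n the digit sum of n is 1, and for b > n the number is a single digit whose sum is n, which is less than b.

```python
def digit_sum(n, b):
    """Sum of the digits of n written in base b."""
    s = 0
    while n:
        s += n % b
        n //= b
    return s

def is_a187813(n):
    # Only bases 2..n-1 can possibly have digit sum equal to the base.
    return all(digit_sum(n, b) != b for b in range(2, n))

terms = [n for n in range(33) if is_a187813(n)]
print(terms)  # [0, 1, 2, 4, 8, 14, 30, 32]
```

For example, 3 is excluded because 3 = 11 in base 2 (digit sum 2 = b), matching the sequence's jump from 2 to 4.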
249
581
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.71875
3
CC-MAIN-2020-24
latest
en
0.669838
https://www.answers.com/Q/How_many_inches_in_0.79_feet
1,585,659,576,000,000,000
text/html
crawl-data/CC-MAIN-2020-16/segments/1585370500482.27/warc/CC-MAIN-20200331115844-20200331145844-00181.warc.gz
786,622,403
22,430
# How many inches in 0.79 feet? ###### December 16, 2009 5:51PM Multiply feet by 12 to get inches: 0.79 x 12 = 9.48 inches
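As a sketch, the same conversion as a one-liner (there are exactly 12 inches in a foot by definition):

```python
def feet_to_inches(feet):
    """Convert a length in feet to inches (1 ft = 12 in)."""
    return feet * 12

print(round(feet_to_inches(0.79), 2))  # 9.48
```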
56
175
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.96875
3
CC-MAIN-2020-16
latest
en
0.856558
http://www.hardballtimes.com/main/article/pitcher-similarity-scores-part-ii/
1,386,567,232,000,000,000
text/html
crawl-data/CC-MAIN-2013-48/segments/1386163906438/warc/CC-MAIN-20131204133146-00042-ip-10-33-133-15.ec2.internal.warc.gz
369,705,249
9,014
# Pitcher similarity scores (Part 2) by Josh Kalk February 19, 2008 ###### Introduction In the first part of this article we looked at comparing pitchers using a new similarity score that was intended to compare pitchers' stuff. Here, stuff means what types of pitches a pitcher throws, how often he throws them, how fast the ball travels and how much movement it has compared to a pitch thrown without spin. In this article, we will dig deeper into these similarity scores and some results from them. But first, I made a change to the similarity scores themselves. The idea is still the same but the math is slightly different. If you want to see the math, read on. If not, skip on to the results in the next section. ###### Similarity scores take two After conversing with readers who offered help with the similarity scores over at BallHype, I settled on a new equation. Again, the beauty of this equation is that all scores will be between zero and 100. 
I toyed with removing the pitch frequency but found that too many pitchers who just dabbled with a pitch would end up with similarity scores dominated by how they threw that pitch. So pitch frequency is back, but now it is on the same footing as the other pitch attributes with a weight factor out front. ###### Results for the new similarity scores There were 514 pitchers who threw at least 100 pitches tracked by PITCHf/x (after removing the two knuckleballers), so if you compare every pitcher to every other pitcher and plot all the similarity scores, you end up with a histogram like this. You can break this histogram into three parts. The first part is the spike at 0. This occurs when two pitchers don't throw any pitches in common (e.g., one throws a sinker and a slider and the other throws a fastball and a curve). The next part is the bump near 30. This is from pitchers who throw just one pitch in common (e.g., one throws a sinker and a slider and the other throws a fastball and a slider). The reason for the large spread in this bump is that the percentage of the time the two pitchers throw their common pitch varies. As that percentage grows, so does the similarity score. Lastly, the curve rises until near 100, where it drops to zero. These represent pitchers who throw virtually identical pitches. It drops to zero at 100 because that is the maximum similarity. I like this curve because you can think of these scores as grades (60 and below F, 60-70 D, 70-80 C, 80-90 B, 90-100 A). Sometimes these similarity scores result in some revealing comparisons. For instance, if you look at the top five most similar pitchers to Roy Oswalt you come up with: Matt Cain, Manny Parra, Matt Garza, Phil Hughes and Scott Proctor. The top four pitchers are all young, rising stars. All throw hard and are four-pitch pitchers (fastball, change-up, curve and slider). The fifth is a journeyman reliever. This goes to show that throwing the same types of pitches isn't by itself a recipe for results. 
Any one of the four youngsters on the list could harness their talent and have a career like Oswalt. Or they could slide back and become relatively anonymous. I also want to point out that while the pitcher's arm angle isn't included in the similarity scores, the arm angle greatly affects the spin axis on the ball (and therefore the movement of the pitch), so the similarity score can pick up on it. For example, submariner Cla Meredith's top comps include fellow submariners Ehren Wassermann and Byung-Hyun Kim, first and third on his list. Who is second? Brandon Webb. Webb's sinker has so much sink to it that it is right in line with many of the side-armers. So while arm angle does play a role, pitchers like Webb can mess up some of the comparisons. ###### Uniqueness Okay, so now that we have a better definition for these similarity scores, what can we do with them? The easiest thing to do is to calculate a uniqueness rating for each pitcher. To do this I am going to use the top 20 most similar pitchers based on the similarity scores. Basically, I just add up the top 20 scores, divide by 20, subtract from 200 and then multiply by 3/2 so the scale goes roughly from zero (very common) to 100 (incredibly unique). Unlike the similarity scores, though, most pitchers end up very low on the uniqueness scale. Here are all of the 514 pitchers' uniqueness scores. As you can see, most pitchers are below 20 and anything above 30 is quite rare. Here are the top five most unique pitchers.

```
Pitcher              Uniqueness
Kevin Cameron           103
Mariano Rivera          100
Justin Duchscherer       80
Jose Valverde            67
Paul Shuey               56
```

The top three pitchers on the list all feature the same pitch, the cut fastball. Because few pitchers throw a cutter a lot, the pitchers who do end up scoring quite high on the uniqueness scale. In fact, Cameron and Rivera are each other's top comps. 
You probably didn't need this uniqueness scale to know that Rivera is a pretty rare bird, but the fact that he ends up second on this list is a good sign for this metric. Valverde is on the list because of his splitter. Last year he limited himself to only throwing his fastball and his splitter, and his tracked pitches bear that out. His splitter, though, is an incredibly unique pitch, which comes from how he holds the ball. His grip is regular, but instead of holding the ball at the seams he places one seam in between his fingers. I've never heard of any other pitcher doing this. If you have heard of another pitcher doing this please let me know in the comment area below. The result of this unique grip is a very large sink for a splitter and much less horizontal movement, to the point that it is very similar to his fastball. This can really confuse the hitters, and they see fastball until the bottom drops out. Shuey was a pretty big surprise to me. I have seen him pitch many times and I didn't find him remarkably different from other pitchers. In fact, none of Shuey's three pitches is very unique by itself. His fastball is almost identical to a league average fastball, his sinker has a bit more bite to it but nothing extraordinary, and his curve is thrown almost straight over the top, producing close to a 12 to 6 curve. What is unique about Shuey is how he puts these pitches together. First, he uses his fastball and his sinker very frequently but mixes them up quite a bit, throwing nearly as many sinkers as regular fastballs. This is very unusual, as most pitchers favor either the fastball or the sinker and rarely throw the other. Second, most pitchers who throw a sinker also throw a slider (how many times have you heard a pitcher described as a sinker/slider guy?). Shuey doesn't throw a slider at all, and it is very rare to find a pitcher with a solid sinker throwing a curve, much less a 12 to 6 curve like Shuey's. 
###### The future of similarity scores I hope that as more data roll in we can start putting these similarity scores to good use. For instance, how does a pitcher compare with himself as the years go by? If he loses a tick on his fastball or starts throwing a new pitch, the similarity score will notice. The same is true for pitchers coming back from injury. You can imagine looking at pitchers who have come back from Tommy John surgery and checking to what level they come back using these similarity scores. Also, is it true that their second year after their surgeries is much better than their first? What about the effects of a pitching coach on a pitcher? Might a pitcher who no longer works with a Zen master like Leo Mazzone have some mechanical breakdowns that lead to his pitches being altered? So I hope this is just the tip of the iceberg. Lastly, I've added similarity scores and uniqueness to each pitcher on my player cards page. If you would like to check out your favorite pitcher and see how he compares, that is the place to do it. References and Resources Thanks to all who commented about the similarity scores after the last article. Without your help, I never would have gotten to this current setup. I'd especially like to thank reader Ike, who suggested taking the square root of the sum of the squares, which really is the key to this whole thing. I'd also like to thank Daron Sutton, who did a great job of explaining Valverde's grip on his splitter during one of his broadcasts. Josh Kalk is a physics and math geek who can also be found blogging at http://www.baseball.bornbybits.com/blog/blog.html. He enjoys good conversations about baseball and can be reached by email.
2,090
9,482
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.953125
3
CC-MAIN-2013-48
latest
en
0.960417
http://mathhelpforum.com/business-math/27065-present-value-question-print.html
1,527,116,334,000,000,000
text/html
crawl-data/CC-MAIN-2018-22/segments/1526794865830.35/warc/CC-MAIN-20180523215608-20180523235608-00627.warc.gz
187,767,207
3,138
# Present value question! • Jan 29th 2008, 01:14 PM DooBeeDoo Present value question! You have just won £60 million in the lottery. The lottery promises to pay you £20 million at the end of each year for the next three years. If the market rate of interest is 7 percent, how much money would you accept today in exchange for the three £20 million payments? any ideas?? I'm not too good at this stuff • Jan 29th 2008, 04:15 PM colby2152 Quote: Originally Posted by DooBeeDoo You have just won £60 million in the lottery. The lottery promises to pay you £20 million at the end of each year for the next three years. If the market rate of interest is 7 percent, how much money would you accept today in exchange for the three £20 million payments? any ideas?? I'm not too good at this stuff $\displaystyle A \Rightarrow 20Ma_{\bar{3}|} = \sum_{n=1}^3 20M(1.07)^{-n}$ • Jan 30th 2008, 05:10 AM TKHunny Quote: Originally Posted by colby2152 $\displaystyle A \Rightarrow 20Ma_{\bar{3}|} = \sum_{n=1}^3 20M(1.07)^{-n}$ "how much money would you accept today" Colby's answer is good but I'm not certain it answers the question. You wouldn't be studying a chapter in Utility, would you? If the value of money changes by more than a prevailing interest rate, then there is another value that is a better answer. For example, if you have no debts right now, but will owe \$20MM in one year, that first delayed payment may be worth far more to you then than its discounted value today. "I'm not too good at this stuff" No need to talk like that. Practice. GET good at it. • Jan 30th 2008, 06:11 AM DooBeeDoo Quote: Originally Posted by TKHunny "how much money would you accept today" Colby's answer is good but I'm not certain it answers the question. You wouldn't be studying a chapter in Utility, would you? If the value of money changes by more than a prevailing interest rate, then there is another value that is a better answer. 
For example, if you have no debts right now, but will owe \$20MM in one year, that first delayed payment may be worth far more to you then than its discounted value today. "I'm not too good at this stuff" No need to talk like that. Practice. GET good at it. It's a risk management course, not utility. • Jan 30th 2008, 06:24 AM colby2152 Quote: Originally Posted by DooBeeDoo It's a risk management course, not utility. What I showed you was a simple annuity with payments at the end of the year. Its value is the present value of those three 20M payments.
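Colby's annuity expression can be evaluated directly; a quick sketch (amounts in millions of pounds):

```python
# Present value of three year-end payments of 20 (million pounds) at a 7% market rate:
#   PV = sum over n = 1..3 of 20 * 1.07^(-n)
payment = 20.0   # millions of pounds, paid at the end of each year
rate = 0.07
pv = sum(payment * (1 + rate) ** -n for n in range(1, 4))
print(round(pv, 2))  # about 52.49 (million pounds) -- the lump sum equivalent today
```

So the three promised payments are worth roughly £52.49 million today, noticeably less than the headline £60 million, which is the whole point of discounting.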
673
2,477
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.484375
3
CC-MAIN-2018-22
latest
en
0.961119
https://www.saving.org/area-calculator/m/330/h
1,660,227,676,000,000,000
text/html
crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00290.warc.gz
848,623,130
2,726
#### 330 square miles to hectares How many hectares in 330 sq mi? #### Calculate Area What is 330 square miles in hectares? 330 sq mi in hectares? How many hectares are there in 330 sq mi? Calculate between square miles and hectares.
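The conversion rests on two exact definitions: 1 statute mile = 1609.344 m and 1 hectare = 10,000 m². A minimal sketch:

```python
# 1 statute mile = 1609.344 m exactly, and 1 hectare = 10,000 m^2.
SQ_MILE_M2 = 1609.344 ** 2      # 2,589,988.110336 m^2 per square mile
HECTARE_M2 = 10_000.0

def sq_miles_to_hectares(sq_mi):
    return sq_mi * SQ_MILE_M2 / HECTARE_M2

print(round(sq_miles_to_hectares(330), 1))  # 85469.6 hectares
```

So 330 square miles is about 85,469.6 hectares (one square mile is roughly 259 hectares).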
83
350
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2022-33
latest
en
0.902249
https://www.reservacultural.com.br/stemming-synonym-rcetcx/viewtopic.php?384c7b=system-of-linear-equations-project-cell-phone-plan
1,656,420,108,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103516990.28/warc/CC-MAIN-20220628111602-20220628141602-00460.warc.gz
1,034,600,821
17,467
Prepare a written plan for the doctors suggesting nutrition requirements that should be included in the diets for patients with a specific illness. The second plan has a \$30 sign-up fee and costs \$25 per month. Two cars comparing … Solve linear … When is Company T a better value? Rationale for choosing cars b. If your usage exceeds 300 minutes, you pay 50 cents for each minute. \begin{align} 0.1t + 29.95 &= .05t + 49.95 \\ .05t &= 20 \\ t &= 400 \quad \text{Text Messages} \end{align}, \begin{align} 0.05t + 49.95 &= 90.20 \\ 0.05t &= 40.25 \\ t &= 805 \quad \text{Text Messages} \end{align}, Plan A costs a basic fee of \$29.95 per month and 10 cents per text message, Plan B costs a basic fee of \$90.20 per month and has unlimited text messages, Plan C costs a basic fee of \$49.95 per month and 5 cents per text message. They will write and solve systems graphically and algebraically. Step 2. The project idea is that the students are helping the PTA be educated on how to select the best cell phone plan. The two situations are: 1. Systems of Linear Equations - Cell Phone Plans (no rating) 0 customer reviews. Your job is to prepare a summary that compares two of your company’s calling plans to help an FSI staff member (Mr. Byan) decide which plan is best for him. Answer. This complete unit is ready to copy! to find the solution to the written system. to solve equations and inequalities. You are a representative for a cell phone company and it is your job to promote different cell phone plans. A system of linear equations is a set of two or more linear equations with the same variables. Engage your students with effective distance learning resources. We can write the total cost per month as $$y = 29.95 + 0.10t$$ Two cars, comparing the base price (the cost of the car) and the cost of driving the car. Linear equations, coordinate planes, and systems of equations are covered in this extremely well-organized instructional activity. 
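The break-even algebra for the three plans quoted above can be checked with a short script (a sketch; the function names are just for illustration):

```python
# Monthly cost (dollars) for t text messages, using the fees quoted above.
def plan_a(t): return 29.95 + 0.10 * t   # $29.95 base + 10 cents per text
def plan_b(t): return 90.20              # flat fee, unlimited texts
def plan_c(t): return 49.95 + 0.05 * t   # $49.95 base + 5 cents per text

# Break-even points come from setting two cost expressions equal:
t_ac = (49.95 - 29.95) / (0.10 - 0.05)   # A vs C: about 400 texts
t_cb = (90.20 - 49.95) / 0.05            # C vs B: about 805 texts

# Cheapest plan at a few usage levels:
for t in (200, 600, 1000):
    costs = {"A": plan_a(t), "B": plan_b(t), "C": plan_c(t)}
    print(t, min(costs, key=costs.get))  # 200 -> A, 600 -> C, 1000 -> B
```

The two break-even values match the algebra shown above: Plan A wins below 400 texts, Plan C between 400 and 805, and Plan B beyond 805.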
This task presents a real-world problem requiring the students to write linear equations to model different cell phone plans. Document all work done. To determine the range of “small”, “medium” and “large” numbers of text messages, we need to find the $t$-coordinate of the intersection points of the graphs. In each case the basic fee is the vertical intercept, since it indicates the cost of a plan even if no text messages are being sent. 6 - Solving Systems of Equations Interactive Notes Activity - This set of notes is ready to go in an interactive notebook. Licensed by Illustrative Mathematics under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The same type of analysis can be done for cable services, bundled or unbundled, streaming services, group dinners or rental costs. And he has to stick to a strict budget, planning to spend no more than \$40. Project Mission: Building Systems of Linear Models. We can write the total cost per month as $$y = 29.95 + 0.10t$$, Plan B has a basic fee of \$90.20 even if no text messages are sent.
If you would like the student to do independent research on the different types of cell phone plans, the following are websites for Verizon, ATT, Sprint and TMobile to begin research. At an intersection point of two lines, the two plans charge the same amount for the same number of text messages. The students are required to find the solution algebraically to complete the task. Data and source of data c. System of linear equations with explanation of the y-intercept and slope d. Solution to the written systems (all work shown). Apply: Students will apply their knowledge of the cell phone plans and systems of equations in a Google Doc. To solve the system of equations, you need to find the exact values of x and y that will solve both equations. I feel like it is really important for students to really understand what they are doing when they solve a system of equations. We can write the total cost per month as $$y = 49.95 + 0.05t$$. Therefore, we can find a linear equation for each plan relating $y$, the total monthly cost in dollars, to $t$, the number of text messages sent. _____ Graphing calculators will be used both as a primary tool in solving problems and to verify algebraic solutions. In addition, each text message costs 5 cents or \$0.05. To find the exact coordinates of each intersection point, we need to solve the corresponding system of equations. By determining the intersection point of two plans, students can make informed decisions. Plan A has a lower basic fee (\$29.95) than Plan C (\$49.95); therefore it starts lower on the vertical axis. (The lines are parallel.) Students analyze a cell phone bill to create a linear equation of how to calculate the bill. Finally, each text message with Plan A costs more than with Plan C; therefore, the slope of the line for Plan A is larger than the slope of the line for Plan C.
In mathematics, a system of linear equations (or linear system) is a collection of one or more linear equations involving the same set of variables. In this case the total cost per month, $y$, does not change for different values of $t$, so we have $$y = 90.20$$, Plan C has a basic fee of \$49.95 even if no text messages are sent. Determine which plan has the lowest cost given the number of text messages a customer is likely to send. In two variables ( x and y ) , the graph of a system of two equations is a pair of lines in the plane. In this project your group will be choosing between two real life situations and then using systems of linear equations to decide what to buy. This project asks students to choose two different cell phone companies to compare. In this project your group will be choosing between two real life situations and then using systems of linear equations to decide what to buy. SWBAT graph lines that represent 2 cell phone plans and solve the system of equations to determine the best plan. f) solving real-world problems involving equations and systems of equations. Tools are available and useful to students during their analysis, and provide opportunities to make sense of the different rate plans. Cell phone plans comparing monthly fee and price per text message. Systems of Equations and Inequalities You are a team of nutrition counselors working for a major hospital. This presentation provides students with opportunities to engage, explore, apply, and connect the algebraic concept of systems of linear equations by using cell phone plans. We can estimate that$t= 400$is the cutoff point to go from Plan C to Plan A, and$t=800$is the cutoff point to go from Plan A to Plan B. Use your knowledge of solutions of systems of linear equations to solve a real world problem you might have already been faced with: Choosing the best cell phone plan. MDUSD, linear functions, systems of equations Since Mr. 
Byan is tech savvy and a The project is so simple - students plant seeds, grow grass, measure, plot growth, find lines of fit - but the learning opportunities stretch the project so much farther. All three plans start with a basic monthly fee; in addition, the costs for Plans A and C increase at a steady rate based on the number of text messages sent per month. Students compare cell phone plans by analyzing tables, graphs, and equations in this sample lesson. ... A cell phone plan offers 300 free minutes for a flat fee of 20 dollars. Then write a system of linear equations for the two plans and create a graph. Creative Commons Step 1. In addition, each text message costs 10 cent or \$0.10. Looking at the graphs of the lines in the context of the cell phone plans allows the students to connect the meaning of the intersection points of two lines with the simultaneous solution of two linear equations. 20 minutes. For a small number of text messages, Plan A is the cheapest, for a medium number of text messages, Plan C is the cheapest and for a large number of text messages, Plan B is the cheapest. System of linear equations System of linear equations can arise naturally from many real life examples. Remember, when solving a system of linear equations, we are looking for points the two lines have in common. Cell Phone Plans Situation: You have graduated from high school In this project, you will be choosing between two real life situations and then using systems of linear equations to decide what to buy. Systems of Linear Equations- Cell Phone Plans lesson plan template and teaching resources. Author: Created by elcarbo. 2. V. OBJECTIVES: • Students will use their knowledge of linear systems to determine the most cost efficient scooter rental plan for their families. Created: Jul 28, 2015. Info. The two situations are: 1. Systems of linear equations project (III): Cell Phone Service As an FSI scholar, you got a summer internship with a major cell phone service company. 
His parents has decided him to bought a new phone ( Iphone 5s Gold ). Plan A has a basic fee of \$29.95 even if no text messages are sent. This video explains how to solve an application problem using a system of equations. Systems of Linear Equations Project Algebra 1 Advanced Mod 10-11 The best way to understand the value of learning about Systems of Linear Equations is to see how you can use them in your life. Cell phone plans comparing monthly fee and price per text message. Big Idea The purpose of this lesson is for students to understand how to analyze a system of equations to determine when a plan is cheaper, more expensive, … To visually compare the three plans, we graph the three linear equations. Typically, there are three types of answers possible, as shown in Figure $$\PageIndex{6}$$. They will create a short Infographic (via Google Drawings or Canva) or Google Slide to display their information. They identify the necessary information, represent problems mathematically, making correct use of symbols, words, diagrams, tables and graphs. Cell phone plans, comparing monthly fee and price per text message. Cell Phone Plan Background Background James have graduated from high school and moved away to college. 5 - Linear Systems Interactive Notebook Unit - If you want an entire interactive notebook unit for systems of equations, look no further. Your boss asks you to visually display three plans and compare them so you can point out the advantages of each plan to your customers. My students would run into the room and right over to the windowsill, excited to see their grass and about taking the day's data. Therefore, we can find a linear equation for each plan relating$y$, the total monthly cost in dollars, to$t$, the number of text messages sent. 
Watertown, MA 02472, FAQAboutContact Perkins eLearningVisit Perkins.org, Sign up for email updates Subscribe Follow Us, https://www.sprint.com/en/shop/plans/unlimited-cell-phone-plan.html?INTNAV=TopNav:Shop:UnlimitedPlans, https://www.t-mobile.com/cell-phone-plans, Solve simple algebraic equations with one variable using addition and subtraction, Four Quadrant Graph Paper - Bold lined or raised, Markers, dots, tape to connect dots, straight edge, Comparing Cell Phone Plans - Instruction sheet, Sprint Wireless Website with Plan Details, TMobile Wireless Website with Plan Details, Review Vocabulary: Linear equation, variable (some number). Graph the results of the monthly costs with the number of text messages on the x axis and monthly costs on the y axis. Creative Commons The coordinates of these points correspond to the exact number of text messages for which two plans charge the same amount. The graph for the Plan B equation is a constant line at$y=90.20$. a. Attribution-NonCommercial-ShareAlike 4.0 International License. Review with student the question: A cell phone plan costs$45.00 per month with the cost for texting an added $0.25 per text. The two situations are: 1. About this resource. In order to complete this project, start by selecting one of the situations below: Cell Phone Plan: Your parents have decided that you should pay When the student is confident in the ability to write the linear equation have the student calculate the monthly cost if 100, 200 and 300 text messages are sent. 2. They will use their data to create linear equations and graph these equations using Desmos. Attribution-NonCommercial-ShareAlike 4.0 International License. Loading... Save for later. After Log On We conclude that Plan A is the cheapest for customers sending 0 to 400 text messages per month, Plan C is cheapest for customers sending between 400 and 805 text messages per month and plan B is cheapest for customers sending more than 805 text messages per month. 
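The review exercise above (a \$45.00 monthly fee plus \$0.25 per text, i.e. y = 45 + 0.25t) can be tabulated for the suggested 100, 200 and 300 text messages:

```python
# Monthly cost for the reviewed plan: $45.00 base plus $0.25 per text (y = 45 + 0.25t).
def monthly_cost(t):
    return 45.00 + 0.25 * t

for t in (100, 200, 300):
    print(t, monthly_cost(t))  # 100 -> 70.0, 200 -> 95.0, 300 -> 120.0
```

Having the student predict these three values by hand first, then check them against the table, reinforces the slope-intercept reading of the equation.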
The students will choose two companies, choose two similar plans, choose variables (this may vary, so Together write a linear equation, using the students media of choice, which represents the monthly cost if the user sents t messages. A customer wants to know how to decide which plan will save her the most money. Students find the best cell phone plan given different customers by exploring several cell phone companies and their options. Real-world situations including two or more linear functions may be modeled with a system of linear equations. (y=45+0.25t) Students write and graph systems of linear equations modeling their data and present their findings via graphing and a written statement explaining to their customer which plan they should choose and why. Together write a linear equation, using the students media of choice, which represents the monthly cost if the user sents. So check out these 15 systems of equations activities that will help students understand and practice finding the solution to two linear equations. For example, the sets in the image below are systems of linear equations. Typeset May 4, 2016 at 18:58:52. Preview. Once the student is confident, have him/her complete the task using the students media of choice. From the graphical representation we see that the “best” plan will vary based on the number of text messages a person will send. I plan this Practice warm-up as a follow up from the previous day's lesson to have students successfully use the Substitution Method to solve a system of equations during this lesson. Plan A has a basic fee of \$29.95 even if no text messages are sent. This is a project that can be used in Algebra 1 or Algebra 2 courses for the unit covering Systems of Equations. Generally speaking, those problems come up when there are two unknowns or variables to solve. 5) Compile all documentation for book of the project. 2. This linear equations project was one of my favorite things about teaching Algebra. 
For example, $$3x + 2y - z = 1, \quad 2x - 2y + 4z = -2, \quad -x + \tfrac{1}{2}y - z = 0$$ is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. The Cell Phone Plan Comparison project has students comparing four (4) cell phone plans and determining which one is best for their needs in terms of talking and texting. Because we are looking for the number of text messages, $t$, that result in the same cost for two different plans, we can set the expression that represents the cost of one plan equal to the other and solve for $t$. Solving Systems of Linear Equations A system of linear equations is just a set of two or more linear equations. Use the methods we have been studying to determine which plan is better based on the number of nights you decide to stay if you had \$1500 to spend for this vacation. They will analyze all three plans through a series of graphs and questions. Cell Phone Plans System of Equations Project. At how many minutes do both companies charge the same amount? Two cars comparing the base price (the cost of the car) and the cost of driving the car. SOLUTION: Jasmine is deciding between two cell-phone plans. The first plan has a \$50 sign-up fee and costs \$20 per month. In addition, each text message costs 10 cents or \$0.10. Students then move to analyzing different cell phone plans by creating a table, equation and graph of the plan.
3,726
17,392
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0}
4.25
4
CC-MAIN-2022-27
latest
en
0.901266
http://mathoverflow.net/revisions/103493/list
1,369,534,929,000,000,000
text/html
crawl-data/CC-MAIN-2013-20/segments/1368706484194/warc/CC-MAIN-20130516121444-00075-ip-10-60-113-184.ec2.internal.warc.gz
170,312,732
5,996
Just ran across this question, and am surprised that the first example that came to mind was not mentioned: Fermat's "Last Theorem" is heuristically true for $n > 3$, but heuristically false for $n=3$, which is one of the easier cases to prove. The heuristic: if $0 < x \leq y < z \in (M/2,M]$ then $|x^n + y^n - z^n| < M^n$. There are about $cM^3$ candidates $(x,y,z)$ in this range for some $c>0$ (as it happens $c=7/48$), producing values of $\Delta := x^n+y^n-z^n$ spread out on the interval $(-M^n,M^n)$ according to some fixed distribution $w_n(r) dr$ on $(-1,1)$ scaled by a factor $M^n$ (i.e., for any $r_1,r_2$ with $-1 \leq r_1 \leq r_2 \leq 1$ the fraction of $\Delta$ values in $(r_1 M^n, r_2 M^n)$ approaches $\int_{r_1}^{r_2} w_n(r) dr$ as $M \rightarrow \infty$). This suggests that any given value of $\Delta$, such as $0$, will arise about $c w_n(0) M^{3-n}$ times. Taking $M=2^k=2,4,8,16,\ldots$ and summing over positive integers $k$ yields a rapidly divergent sum for $n<3$, a barely divergent one for $n=3$, and a rapidly convergent sum for $n>3$. Specifically, we expect the number of solutions of $x^n+y^n=z^n$ with $z \leq M$ to grow as $M^{3-n}$ for $n<3$ (which is true and easy), to grow as $\log M$ for $n=3$ (which is false), and to be finite for $n>3$ (which is true for relatively prime $x,y,z$ and very hard to prove [Faltings]). More generally, this kind of analysis suggests that for $m \geq 3$ the equation $x_1^n + x_2^n + \cdots + x_{m-1}^n = x_m^n$ should have lots of solutions for $n<m$, infinitely but only logarithmically many for $n=m$, and finitely many for $n>m$. In particular, Euler's conjecture that there are no solutions for $m=n$ is heuristically false for all $m$. So far it is known to be false only for $m=4$ and $m=5$. Generalization in a different direction suggests that any cubic plane curve $C: P(x,y,z)=0$ should have infinitely many rational points. 
This is known to be true for some $C$ and false for others; and when true the number of points of height up to $M$ grows as $\log^{r/2} M$ for some integer $r>0$ (the rank of the elliptic curve), which may equal $2$ as the heuristic predicts but doesn't have to. The rank is predicted by the celebrated conjecture of Birch and Swinnerton-Dyer, which in effect refines the heuristic by accounting for the distribution of values of $P(x,y,z)$ not just "at the archimedean place" (how big is it?) but also "at finite places" (is $P$ a multiple of $p^e$?). The same refinement is available for equations in more variables, such as Euler's generalization of the Fermat equation; but this does not change the conclusion (except for equations such as $x_1^4 + 3 x_2^4 + 9 x_3^4 = 27 x_4^4$, which have no solutions at all for congruence reasons), though in the borderline case $m=n$ the expected power of $\log M$ might rise. Warning: there are subtler obstructions that may prevent a surface from having rational points even when the heuristic leads us to expect plentiful solutions and there are no congruence conditions that contradict this guess. An example is the Cassels-Guy cubic $5x^3 + 9y^3 + 10z^3 + 12w^3 = 0$, with no nonzero rational solutions $(x,y,z,w)$: Cassels, J.W.S., and Guy, M.J.T.: On the Hasse principle for cubic surfaces, Mathematika 13 (1966), 111--120.
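The divergence/convergence heuristic above is easy to probe numerically. The sketch below (my own illustration, not from the answer) brute-forces the count of solutions of $x^n + y^n = z^n$ with $x \leq y < z \leq M$; for $n=2$ the count grows roughly linearly in $M$, as the $M^{3-n}$ prediction says, while for $n \geq 3$ nothing turns up:

```python
# Brute-force count of solutions of x^n + y^n = z^n with x <= y < z <= M,
# to illustrate the heuristic growth rate M^(3-n) discussed above.
def count_fermat_solutions(n, M):
    nth_powers = {z ** n: z for z in range(1, M + 1)}  # lookup table z^n -> z
    count = 0
    for x in range(1, M + 1):
        for y in range(x, M + 1):  # enforce x <= y; y < z is then automatic
            if x ** n + y ** n in nth_powers:
                count += 1
    return count

for M in (50, 100, 200):
    print(M, count_fermat_solutions(2, M), count_fermat_solutions(3, M))
```

For $n=2$ the count roughly doubles each time $M$ doubles (Pythagorean triples, linear growth); for $n=3$ the count stays at zero, in line with the failure of the logarithmic prediction.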
1,008
3,300
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.65625
4
CC-MAIN-2013-20
latest
en
0.887909
https://www.doorsteptutor.com/Exams/CBSE/Class-6/Science/Questions/Part-31.html
1,503,260,603,000,000,000
text/html
crawl-data/CC-MAIN-2017-34/segments/1502886106984.52/warc/CC-MAIN-20170820185216-20170820205216-00184.warc.gz
874,084,139
12,186
# CBSE Class-6 Science: Questions 217 - 224 of 318 ## Question number: 217 » Motion and Measurement of Distances » Measurements » Procedure of Measurement MCQ▾ ### Question The process of comparing an object with a standard unit of measurement is called as: ### Choices Choice (4) Response a. Distance b. Length c. Measurement d. Standard Unit ## Question number: 218 » The Living Organisms and Their Surroundings » Organisms and the Surroundings » Objects Found in Different Surroundings MCQ▾ ### Question Palm trees are found in: ### Choices Choice (4) Response a. Himalayas b. Desert Areas c. Coastal Areas d. All of the above ## Question number: 219 » Motion and Measurement of Distances » Moving Things Around Us » State of Motion MCQ▾ ### Question The movement of an object is called as: ### Choices Choice (4) Response a. Rest b. Motion c. Stationary d. None of the above ## Question number: 220 » Motion and Measurement of Distances » Types of Motion » Rectilinear Motion MCQ▾ ### Question Motion in a straight line is called ### Choices Choice (4) Response a. Rectilinear Motion b. Circular Motion c. Periodic Motion d. None of the above ## Question number: 221 » Body Movements » Joints of Human Body » Hinge Joints MCQ▾ ### Question In our body hinge joints are present in: ### Choices Choice (4) Response a. Finger joints b. Jaw c. Elbow, knee d. All a. , b. and c. are correct ## Question number: 222 » Light, Shadows and Reflections » Type of Objects » Opaque Objects MCQ▾ ### Question A wooden door does not allow any light to pass through it, because it is a: ### Choices Choice (4) Response a. Opaque object b. Transparent object c. Translucent object d. 
All of the above ## Question number: 223 » Light, Shadows and Reflections » Mirrors and Reflections » Formation of Reflection MCQ▾ ### Question Characteristics of image formed by a plane mirror are: ### Choices Choice (4) Response a. Image formed is virtual AND Image of the plane mirror is of the same size as object b. Image in plane mirror is laterally inverted c. Image in plane mirror is erect d. All a. , b. and c. are correct ## Question number: 224 » Electricity and Circuits » Electric Conductors and Insulators » Electric Conductors MCQ▾ ### Question If a person operates an electric switch with wet hands, why can he get an electric shock? ### Choices Choice (4) Response a. Human body is conductor of electricity b. Water is conductor of electricity c. Both a. and b. are correct d. None of the above
733
2,862
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2017-34
longest
en
0.649206
www.productreviewlad.com
1,695,518,161,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233506539.13/warc/CC-MAIN-20230923231031-20230924021031-00746.warc.gz
1,060,854,626
20,386
# How Long is a Block? Around the world, as cities popped up and buildings were constructed, it became necessary to plan out urban expansion. Part of this was done by creating a city block, also known as an urban block, residential block, or most simply, a block. Some parts of the world, such as Asia, Europe, and the Middle East, tend to have less uniform blocks, while those cities relying on grid patterns have more equal blocks. But regardless of where you go, you might find yourself wondering, “How many blocks have I walked? How long is a block, anyway?” Well, here’s the answer. ## Blocks Around The World In the United States, the standard distance of a single city block is around 311 feet on each side. That works out to about 0.06 miles, or 103.7 yards. The area covered by a block is approximately 100,000 square feet. But, the distance depends greatly on the country and city. For instance, in Manhattan, NY, the standard block is usually 264 ft x 900 ft (80 m x 274 m). However, in Chicago, IL, a block is 330 ft x 660 ft (100 m x 200 m). As per the popular Chicago model, many US cities have 8 north-south blocks or 16 east-west blocks that amount to 1 mile. Interestingly, Melbourne, Australia also uses blocks that are 100 m x 200 m. Some places in the US do not follow the same rectangular block pattern as Chicago and Manhattan. Some have square blocks, like Portland, OR, where a block is 260 ft by 260 ft (79 m x 79 m), or Sacramento, CA, which has blocks that run 410 ft by 410 ft (120 m x 120 m). In Europe, city blocks were not common unless the city had once been part of the Roman Empire or was laid out as a military settlement. An example of this is Turin, Italy, where most of the roads are laid out in a grid-like pattern. ## Variations in City Blocks There are also various ways to define a block. More pedestrian-centric societies, such as Japan or Russia, will have far more superblocks than the US or Europe. 
A notable exception was Barcelona, with its “superilles” (singular: superilla), each consisting of 9 standard city blocks. The superilles were constructed to reduce traffic along some streets. More modern Japanese cities utilized Barcelona’s superblock structure but had the opposite motive. The superblocks were created as roads widened and more traffic needed to flow through the residential areas. Through 1990–2014 expansion projects around Tokyo, the standard size of a city block was 2.5 hectares (0.010 square miles, or 6.17 acres). A block was only 1.6 hectares before 1990. ## Wrapping Up – How long is a block? In the English language, a block is an informal unit of measure that generally means around 311 feet (about 0.06 mi). That said, the perception of how long a block is will change depending on where you are from. Some places have shorter city blocks than others. So, the next time you tell someone that “it’s two blocks from here,” remember that they may not be prepared for it.
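The figures quoted above are easy to cross-check with a few lines of arithmetic. This Python sketch (the helper name is mine) verifies the Chicago blocks-per-mile claim and converts the standard 311 ft block to miles:

```python
# Cross-check the block arithmetic quoted above.
FEET_PER_MILE = 5280

def blocks_per_mile(block_side_ft):
    """How many blocks of the given side length span one mile."""
    return FEET_PER_MILE / block_side_ft

print(blocks_per_mile(660))           # 8.0  -- Chicago's 660 ft sides
print(blocks_per_mile(330))           # 16.0 -- Chicago's 330 ft sides
print(round(311 / FEET_PER_MILE, 3))  # 0.059 -- the "standard" US block, in miles
```

Both Chicago claims check out exactly (8 × 660 ft = 16 × 330 ft = 5,280 ft = 1 mile), and 311 ft comes to just under 0.06 miles.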
680
2,923
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.96875
3
CC-MAIN-2023-40
longest
en
0.967783
http://creditrepairtricks.org/free-credit-repair-companies-credit-repair-jobs.html
1,539,759,452,000,000,000
text/html
crawl-data/CC-MAIN-2018-43/segments/1539583511063.10/warc/CC-MAIN-20181017065354-20181017090854-00042.warc.gz
78,689,316
5,102
Although you may understand the concept of credit limits, few people take the time to examine their credit utilization—or the amount of debt owed vs. the total credit limit. An ideal credit score boasts a utilization ratio of 25 percent or less. If you have a \$10,000 credit limit, you should never charge more than \$2,500 at a time. The same goes for individual cards. For example, Margot has three credit cards with the following limits: Here is a simple test. (This is not 100% accurate mathematically, but it is an easy test). Divide your credit card interest rate by 12. (Imagine a credit card with a 12% interest rate. 12%/12 = 1%). In this example, you are paying about 1% interest per month. If the fee on your balance transfer is 3%, you will break even in month 3, and will be saving money thereafter. You can use that simplified math to get a good guide on whether or not you will be saving money. The Bank of America® Travel Rewards Credit Card for Students allows you to earn unlimited 1.5 points for every \$1 you spend on all purchases everywhere, every time and no expiration on points. This is a simple flat-rate card that doesn’t require activation or paying on time to earn the full amount of points per dollar, like the other two cards mentioned above. If you plan to do a semester abroad or often travel outside the U.S., this card is a good choice since there is no foreign transaction fee. Students with a Bank of America® checking or savings account can experience the most benefits with this card since you receive a 10% customer points bonus when points are redeemed into a Bank of America® checking or savings account. And, Preferred Rewards clients can increase that bonus 25%-75%.Read our roundup of the best student credit cards. Obviously, the higher the utilization percentage, the worse you look. Experts have long said that using 30% of your available credit is a good way to keep your credit score high. 
More recently, that recommendation has been reduced to 20%. In the \$5,000 limit MasterCard example above, 30% utilization would represent a \$1,500 balance. Boosting your credit limit from \$5,000 to \$10,000 would allow for a \$3,000 balance and still maintain 30% utilization. (This, of course, is just an example. It’s not likely you would get a 100% increase in your credit line. But any amount will help increase the spread and lower the utilization ratio). Step 2: Tell the creditor or other information provider, in writing, that you dispute an item. Include copies (NOT originals) of documents that support your position. Many providers specify an address for disputes. If the provider reports the item to a consumer reporting company, it must include a notice of your dispute. And if the information is found to be inaccurate, the provider may not report it again. One of the other ways people seem to be able to fix their credit fast is by enrolling in a creditsweep program. The creditsweep program can work, sometimes, maybe, under the right circumstances for a few select people if done 100% correctly and if you are willing to break a few laws and pay for the service upfront in cash or bitcoin. See!, nothing to it. Harzog was a successful freelance journalist for over 20 years, writing for major national magazines and custom publications. She became so entrenched in the credit industry, that in 2008, she was approached by CardRatings.com to be a credit card spokesperson for their site. In 2010, Harzog then went on to become the credit card expert for Credit.com. Shortly before graduate school started, I visited friends in Iowa. When we were about to split the bill after dinner at a Japanese restaurant, I noticed that all my friends had a Discover card with a shimmering pink or blue cover. The Discover it® Student Cash Back was known for its high approval rate for student applicants, and had been popular among international students. 
The payment amount and duration are not based on what it would take to pay off the full amount of the debt, but are instead based on calculations determined by the income of the filer, their discretionary income, their assets and their debt. Instead of forcing the debtor to tackle the full amount of their current debt at its current interest rates, Chapter 13 gives a debtor the opportunity to pay off a percentage of the debt based on what they can afford to pay over a three- to five-year period. Otherwise, the advice you have given is great and works well for a quick boost but having the ability to remove lines of information from your credit history is even better because once it is gone, it can no longer affect your score. BTW - don't take my word or anyone else's for that matter, educate yourself! You can find either of the sources I mentioned just by Googling either of them if you want and I promise you, the more information you have, the better! Mathematically, the best balance transfer credit cards are no fee, 0% intro APR offers. You literally pay nothing to transfer your balance and can save hundreds of dollars in interest had you left your balance on a high APR card. Check out our list of the best no-fee balance transfer cards here. However, those cards tend to have shorter intro periods of 15 months or less, so you may need more time to pay off your balance. I decided to work on my credit report because my goal is to buy a house. I was on YouTube and saw a video of Brandon Weaver discussing how to remove the negative reports from my credit report. He sounded so convincing I decided to place an order. I received samples of the letters within 10 mins from purchasing it. I had 7 negative items on my report but when I sent out those letters the credit bureaus deleted 4. I'm currently working on getting the other 3 removed with letter #2. This section 609 really works. 
Can't wait til the other 3 are removed so I can work on finally buying my house and refinancing my car. And take those dream vacations like Brandon. Thank you!! Common ways to consolidate credit card debt include moving all your credit card debt onto one card, or taking out a loan to pay off the balances. In addition to reducing stress, when you consolidate, you may be able to score a lower interest rate. That can make it easier to pay off the debt faster, which is one important factor that can help improve your credit scores. Getting a bump in credit limit on one of your existing cards has a similar effect as getting a new credit card on your credit utilization but is even quicker and easier. Another plus: While you may not get as much of a credit limit increase as with a new card, your credit score will also not suffer the new credit card ding and will benefit from the age of the existing account. If you have one of those letters we mentioned earlier that details your credit problems, you have some idea of what’s holding you back. Even though it may seem complex, as we mentioned, your credit score is based on five core factors: payment history, credit utilization, the age of credit accounts, mix of credit accounts and history of applying for credit. They’re not equally weighted, and this information will most likely vary between credit bureaus.
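The two calculations the article leans on — the utilization ratio and the simplified balance-transfer break-even test — fit in a few lines. This is a sketch of the article's own simplification (function names are mine), not an exact amortization:

```python
# Credit utilization: balance owed divided by total credit limit.
def utilization(balance, limit):
    return balance / limit

# Simplified break-even test from the article: monthly interest ~ APR / 12,
# so the transfer fee is recouped after (fee % / monthly rate %) months.
def breakeven_months(apr_percent, fee_percent):
    monthly_rate_percent = apr_percent / 12
    return fee_percent / monthly_rate_percent

print(utilization(1500, 5000))  # 0.3 -- the 30% guideline
print(breakeven_months(12, 3))  # 3.0 -- the article's month-3 example
```

As the article warns, the break-even figure is only approximate; it ignores compounding and the shrinking balance, but it is a serviceable quick check.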
1,534
7,207
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2018-43
latest
en
0.944402
www.amerimarble.com
1,579,599,817,000,000,000
text/html
crawl-data/CC-MAIN-2020-05/segments/1579250601628.36/warc/CC-MAIN-20200121074002-20200121103002-00305.warc.gz
190,121,483
5,366
# Ac circuits and phasors Th. Reactive and the circuit phasor diagram containing phasors will use a study phasors solve for a series parallel ac circuits is balanced phase which arise are introduced and rms voltage phasor analysis. , download intro to ac formulation ofclassic network equations. Ac generator the moment let us to dowload music mp3. Vectors also use the notation. Phasor diagram. , introduced and phasor no registration needed. Ac source are introduced, an ac circuit to ac circuits, phasors in ac circuits,. Simple keywords: reviews of the phasors will take a steady state response of ac circuits using phasors a portmanteau of a. Ac circuit below: intro to find the phasors and in a phasor quantities vary sinusoidally varying quantities; maxwell's equations, transformers while represented by themselves, drawing a μf capacitor lags the example phasors will use a constant since the material developed in the current and phasors ohm's law and admittance; phase shifts;. We will use phasor diagrams: virginia henderson nursing theory ac sweeps. Transformers. Song duration ac circuits with the phasor diagram b what is explained using phasors | doc physics. Circuit to ac circuits, nollywood. Calculate reactance. : phasors and phasor current ac circuits. And oct, circuit analysis involves solution method is more about impedance and currents, frequency response is more difficult convert between the sinusoidal ac circuits operating in case of nonlinear phasor: phasors, and phasors:. The problem using phasors. Alternating current has both a sine wave module 3b: ac voltage source into the maximum value, min uploaded by points or voltage and in ac circuit to learn more interesting and parallel circuits containing a. Ac for the course now apply the subject of audio and inductive reactive ac circuits. The instantaneous and current circuits. With the dynamic phasors provide a ω resistor connected to phasor sum of ac circuit with the phasors. This video i. 
Transform the current amplitudes: transformers. And three methods are called ac circuits: ir. No wait. Available only meaningful if the circuit, to ac circuit consists of solving ac circuit problems are fixed length tags: a study of audio mp3. ## Ac circuits and phasors NY Analysis of ac. , circuit calculations;. Phasors as its projection of analyzing these course description, colour coded in series rl, capacitors c. Magnitude. Ac circuits. , and rms voltage and feb, free download: ac circuit by doc physics rms values ec. Phasors are introduced, analysis using phasors is said to ac vectors also be able to ac voltage and i r l c r, every component is lags voltage source is a sinusoidal waveforms; node an ac calculation. Netflixcasa. Counter clock wise direction. State ac generator the series and alternating current | doc physics. ## Ac circuits and phasors Iowa Ac for ωt. , lecture ac circuits o we will learn the techniques of analyzing a magnitude. This way to our simplified procedure using phasors forms the following topics are an ac circuits phasor diagram containing for the fundamentals of phasors, the circuit analysis. Replaced by examining the instantaneous. Ac circuits. Using phasors and phase angle between circuits; resonance frequency domain; lr and inductive networks, after to analyze single phase angle notation for example, on the angle notation, phasors explained using the circuit output is shown in chapter relates the course teaches the is may wonder, 3gp, etc. V1. Current ir. To dc eeng: intro to determine the most common generators produce sinusoidal function an ac circuit. ## Ac circuits and phasors Denver Capacitive reactance sec. Reactance and power in the sine wave module 3a we have the quantities vary phasors from ac circuits phasors preparatory segment. 
Sum of e sep, single elements has been doing problems in ac circuits, xc, magnitude and voltage sources and rms phasor as in complex numbers, current | download capacitors in sequel xm cos ωt θ. Your reference. Analysis of phasors,. Capital letters e. , mp4, the phasor analysis addition or in ac circuit elements and applied to visualize the current including series, the phasor circuits used in direction. Download best book, and parallel a few simple phasors and currents ac circuits, etc. Sinusoids, now consider an a r as the series connected dc eeng: min uploaded by lakornhit. Basic ac circuits using phasors with a graphical constructor called phasors,. Inductive networks, the figure. Circuits, transformers, rlc circuits, free. Nodal analysis. Command to learn: 16m 58s. Ac circuits using phasors are used print ac circuits with phasors for phasors, steinmetz was circuits responses of dc circuit diagram, d frequency ac steady state circuit feb, java applet: v t in ac circuits with a convenient method covered by one waveform can also be phase vector used in this animation a phasor addition of simple ac circuits, phasors in an ac circuit elements in ac circuits excited by 90o. Ece handout mainly three methods are. In more interesting and transformers, review of ordinary differential equations. Ac circuits, and applied to ac circuits, or voltage over a diagram. All of circuits | doc schuster solve the real. Physics. Of nonlinear phasor current in the oscillation of the impedance; phasor diagram. Phasor analysis. Circuits which rests on a phasor analysis of current | doc physics. With the input frequency w when analyzing a vector, nts press. 
Θ can be generalized to illustrate the ni elvis ii complex numbers and powers, phasor is assigned for a phasor transform the perspective of v phase of an alternating current analysis using phasors and reactance and current sources and cannot be replaced by a series circuit analysis can construct the impedance and the phasor diagram representing sine wave. , alternating current circuits: we will go on ac circuits and reactance and alternating current | doc physics. , phasor diagram let us to ac resistor note: kbps file size:. Free. Cool method of the ratio of a current and can therefore be generalized to the maximum value, capacitors in polar form describes a method of one in an ac circuits using phasors, peak value b c. Current everywhere in the ac circuits with a study of rotating vector representing the use a portmanteau of voltages and currents ac circuits, phasors explained using the phasors and phase vector, we proceed with the rms voltage and reactance; solve for solving complex impedances and engineering, peliculas y components. Contain passive circuit. Way to the driving voltage of a the circuit diagram for the. Basic properties of: instantaneous and inductive networks, including series a capacitor using phasors;. Of alternating current |.
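The page above is too garbled to follow, but the technique it keeps circling — representing sinusoidal voltages and currents as complex phasors so an AC circuit reduces to complex-number algebra (Steinmetz's method) — is standard. A minimal sketch with illustrative component values (not taken from the page), a series R-L circuit driven at 60 Hz:

```python
import cmath
import math

# Illustrative values: series R-L circuit on a 60 Hz source.
R = 100.0  # resistance, ohms
L = 0.2    # inductance, henries
f = 60.0   # source frequency, hertz
V = 10.0   # rms source voltage, used as the 0-degree reference phasor

omega = 2 * math.pi * f
Z = R + 1j * omega * L  # complex impedance of the series R-L branch
I = V / Z               # current phasor via Ohm's law -- no differential equation

print(abs(I))                        # rms current magnitude, amperes
print(math.degrees(cmath.phase(I)))  # negative angle: current lags the voltage
```

A capacitor would enter the same algebra as an extra impedance term 1 / (1j * omega * C), which is all that "solving complex impedances" amounts to.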
1,502
6,918
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.0625
3
CC-MAIN-2020-05
latest
en
0.863077
https://maker.pro/forums/threads/boiler-thermocouple-voltage-monitor.290637/
1,680,051,982,000,000,000
text/html
crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00410.warc.gz
417,992,884
63,147
# Boiler thermocouple voltage monitor #### Frankchie Nov 14, 2017 149 I was planning on using the ADC input of an esp8266 microcontroller to monitor the voltage of my gas boiler's thermocouple. My main question is safety. How can I connect to the thermocouple without a safety issue? For example, if some failure happened that applies voltage to that thermocouple it could allow the gas valve to open even if the pilot light was off. Obviously that would be pretty bad. I could monitor the voltage through a high value resistor so if voltage was inadvertently applied to the gas valve the current would be limited. But I'm not sure how much current the gas valve needs to open or how much current it needs to hold open. The typical thermocouple voltage is about 20mv and best I can measure the input resistance of the gas valve is less than 0.1 ohm. The largest resistor that the ADC can tolerate is about 50k, but that would drop the measurement to about 16mv when it should read 20mv. I think I can live with that. For added safety I would operate the microcontroller from a 4.8 volt battery rather than a 5v output wall wart. That would limit the maximum current to about 10ua upon worst-case failure. What do you think? Is the above too high risk? Is there a better way to safely monitor this thermocouple voltage? Thanks, Frank #### Alec_t Jul 7, 2015 3,331 20mV/0.1Ω = 200mA. I doubt the valve will hold open with less than, say, 50mA. So if your ADC runs on 5V any resistor > 5V/50mA = 100Ω should be ok. A 1k resistor would limit any fault current to 5mA. But how will you get access to both conductors of the thermocouple? If yours is anything like the one on my boiler it has a co-axial structure, with the inner conductor shrouded and hence inaccessible. #### Frankchie Nov 14, 2017 149 Thanks, Alec_t, I just did some tests on a spare gas valve that I have and it drops out at 3mv. 
My proposed circuit suggests a worst-case fault only provides about 10uv across the gas valve (4.8v into a 0.1/50,000 ohm voltage divider). So just about any way you look at it, it should be safe. But in this case is "should" good enough given the potential consequences? I guess a dual failure like the resistor shorting and the ADC pin supplying 4.8 V could be a problem, but I think the chances of a resistor shorting when operating at such a small load are vanishingly small. The ADC fault is much more likely since a program malfunction could conceivably make that ADC pin an output pin, but it still should be unlikely. My guess is that the probability of the above dual fault is lower than the existing probability of the gas valve simply sticking open (it actually has two internal valves that would have to stick open). So maybe I can live with that risk. Of course there are lots of factors that I have not considered. I don't know what I don't know. (water leaks? somehow excessive heat? earthquakes?). So my inner self is still saying find another way. I'm hoping that somebody here knows a better way. BTW, I have a little screw-in adapter inserted between the thermocouple and the gas valve that exposes the inner conductor. Thanks, again, Frank #### Frankchie Nov 14, 2017 149 A related question: Are there DC current sensors that don't require insertion into the circuit? Hall effect sensors? If so, that sounds pretty safe for this application, although they would have to operate at pretty low current levels. Thanks, Frank #### Alec_t Jul 7, 2015 3,331 3mV/0.1Ω = 30mA. I'm surprised it can go as low as that before the solenoid drops out. We live and learn. Although 200mA in a single conductor should provide a strong enough magnetic field to be detected easily by a Hall effect sensor, the inner and outer conductors of the thermocouple (assumed co-axial) would be carrying equal current in opposite directions so their fields would cancel. 
You might strike lucky, however, and be able to detect the field produced by the solenoid of the valve. Worth a try. #### Frankchie Nov 14, 2017 149 Alec_t, I'm going to buy a Hall effect sensor and give it a try. My adapter gives me access to the inner conductor of the thermocouple. Thanks, again, Frank
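The arithmetic traded in the thread is easy to double-check. A small Python sketch using the numbers from the posts (the helper name is mine):

```python
# Double-check the fault-current arithmetic from the posts above.
def divider_voltage(v_source, r_series, r_load):
    """Voltage appearing across r_load in a series divider driven by v_source."""
    return v_source * r_load / (r_series + r_load)

# Thermocouple (20 mV) into the ~0.1 ohm valve coil: ~0.2 A, Alec_t's figure.
print(20e-3 / 0.1)

# Worst-case fault: 4.8 V from the ADC pin through the proposed 50 k resistor
# leaves only ~9.6 microvolts across the 0.1 ohm coil, matching Frank's ~10 uv.
print(divider_voltage(4.8, 50_000, 0.1))
```

With a 3 mV drop-out threshold on the valve, the ~9.6 µV worst-case figure is about three hundred times below the margin, which is the safety argument the thread converges on.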
1,060
4,279
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2023-14
longest
en
0.944312
https://buygourmetmarshmallows.com/elfros/one-point-perspective-drawing-tutorial.php
1,632,135,557,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780057036.89/warc/CC-MAIN-20210920101029-20210920131029-00125.warc.gz
212,006,037
6,636
Draw Basics PART 1 – One Point Perspective

drawing one point perspective - step by step directions that are very easy to follow We have tutorials on drawing in one point and two point perspective,

Use this color key to guide you through both step by step perspective drawing tutorials below. at the “grid” example in my one-point perspective tutorial:

To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. If the line is closer to the vanishing point the

2-point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line

One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point

https://en.wikipedia.org/wiki/Perspective
at the “grid” example in my one-point perspective tutorial: ## Draw Basics PART 1 – One Point Perspective Draw Basics PART 1 – One Point Perspective. One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point, 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line. ### Draw Basics PART 1 – One Point Perspective Draw Basics PART 1 – One Point Perspective. Use this color key to guide you through both step by step perspective drawing tutorials below. at the “grid” example in my one-point perspective tutorial:, To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. If the line is closer to the vanishing point the. One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point drawing one point perspective - step by step directions that are very easy to follow We have tutorials on drawing in one point and two point perspective, One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point drawing one point perspective - step by step directions that are very easy to follow We have tutorials on drawing in one point and two point perspective, One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. 
If the line is closer to the vanishing point the 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line drawing one point perspective - step by step directions that are very easy to follow We have tutorials on drawing in one point and two point perspective, Use this color key to guide you through both step by step perspective drawing tutorials below. at the “grid” example in my one-point perspective tutorial: 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line Use this color key to guide you through both step by step perspective drawing tutorials below. at the “grid” example in my one-point perspective tutorial: 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. If the line is closer to the vanishing point the drawing one point perspective - step by step directions that are very easy to follow We have tutorials on drawing in one point and two point perspective, 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. If the line is closer to the vanishing point the One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point ### Draw Basics PART 1 – One Point Perspective Draw Basics PART 1 – One Point Perspective. Use this color key to guide you through both step by step perspective drawing tutorials below. 
at the “grid” example in my one-point perspective tutorial:, drawing one point perspective - step by step directions that are very easy to follow We have tutorials on drawing in one point and two point perspective,. ### Draw Basics PART 1 – One Point Perspective Draw Basics PART 1 – One Point Perspective. 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line https://en.wikipedia.org/wiki/Vanishing_point Use this color key to guide you through both step by step perspective drawing tutorials below. at the “grid” example in my one-point perspective tutorial:. • Draw Basics PART 1 – One Point Perspective • Draw Basics PART 1 – One Point Perspective • Draw Basics PART 1 – One Point Perspective • One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. If the line is closer to the vanishing point the drawing one point perspective - step by step directions that are very easy to follow We have tutorials on drawing in one point and two point perspective, To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. 
If the line is closer to the vanishing point the 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line drawing one point perspective - step by step directions that are very easy to follow We have tutorials on drawing in one point and two point perspective, To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. If the line is closer to the vanishing point the One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point Use this color key to guide you through both step by step perspective drawing tutorials below. at the “grid” example in my one-point perspective tutorial: 2В­point perspective tutorial.notebook 1 November 15, 2013 Draw your horizon line One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point One-Point Perspective Pictures With a pencil and an eraser draw the Horizon Line, Vanishing Point(s), Orthogonal Lines and Vertical Lines in these one-point To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. If the line is closer to the vanishing point the To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. If the line is closer to the vanishing point the drawing one point perspective - step by step directions that are very easy to follow We have tutorials on drawing in one point and two point perspective, Use this color key to guide you through both step by step perspective drawing tutorials below. at the “grid” example in my one-point perspective tutorial: Use this color key to guide you through both step by step perspective drawing tutorials below. 
at the “grid” example in my one-point perspective tutorial: To draw a room start with a vertical line to show where the back wall begins and the right side wall ends. If the line is closer to the vanishing point the Use this color key to guide you through both step by step perspective drawing tutorials below. at the “grid” example in my one-point perspective tutorial: Use this color key to guide you through both step by step perspective drawing tutorials below. at the “grid” example in my one-point perspective tutorial:
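The room construction described above can also be stated numerically: in one-point perspective, a point at depth z is pulled toward the vanishing point by the factor d/(d + z), where d is the viewing distance. A minimal sketch; the coordinates and viewing distance below are made-up illustrations, not values from the tutorial:

```python
# One-point perspective: project 3D points onto a picture plane at distance d
# from the eye. Points deeper in the scene (larger z) shrink toward the
# vanishing point at (0, 0).

def project(x, y, z, d=1.0):
    """Perspective-project (x, y, z) with the eye at the origin,
    picture plane at depth d, looking down the +z axis."""
    s = d / (d + z)      # scale factor: 1 on the picture plane, -> 0 as z grows
    return (x * s, y * s)

# A front-wall corner sits on the picture plane and is unchanged; the matching
# back-wall corner, one unit deeper, lands halfway toward the vanishing point.
front = project(1.0, 1.0, 0.0)   # (1.0, 1.0)
back = project(1.0, 1.0, 1.0)    # (0.5, 0.5)
print(front, back)
```

Connecting `front` to `back` gives exactly the receding "orthogonal line" the tutorial has you draw toward the vanishing point.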
3,265
15,105
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.328125
3
CC-MAIN-2021-39
latest
en
0.872337
https://www.edplace.com/worksheet_info/maths/keystage3/year9/topic/932/1209/multiplying-and-dividing-by-tenths
1,713,741,913,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296818067.32/warc/CC-MAIN-20240421225303-20240422015303-00621.warc.gz
658,185,033
12,234
# Practise Multiplying and Dividing by Tenths In this worksheet, students will practise multiplying and dividing by tenths. They should first attempt their answers mentally. Key stage:  KS 3 Curriculum topic:   Number Curriculum subtopic:   Use Four Operations for All Numbers Difficulty level: #### Worksheet Overview This activity is about multiplication and division by tenths. Work out the following problems mentally. Example 1 0.9 × 0.9 9 × 9 = 81 As we've multiplied both 0.9's by 10 we've multiplied by a total of 100. This means we will need to divide our answer by 100. So, 0.9 × 0.9 = 0.81 Example 2 0.9 × 9,000 9 × 9,000 = 81,000 As we've multiplied 0.9 by 10 to get 9 we will need to divide our answer by 10. So, 0.9 × 9,000 = 8,100 One of the easiest ways to multiply using decimals is to remove the decimal point, do the multiplication and then put the decimal point back, making sure that you have the same number of decimal places in the answer as in the original calculation. Example 3 8.1 ÷ 9 81 ÷ 9 = 9 As we've multiplied 8.1 by 10 to get 81 we will need to divide our answer by 10. So, 8.1 ÷ 9 = 0.9 Example 4 8.1 ÷ 0.9 81 ÷ 9 = 9 As we've changed the two values using the same proportion, we don't need to do anything to the answer we get. Let's have a go at some questions now. ### What is EdPlace? We're your National Curriculum aligned online education content provider helping each child succeed in English, maths and science from year 1 to GCSE. With an EdPlace account you’ll be able to track and measure progress, helping each child achieve their best. We build confidence and attainment by personalising each child’s learning at a level that suits them. Get started
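The scale-and-adjust method in the worked examples above (remove the decimal point, multiply whole numbers, then divide by the total scale factor) can be checked mechanically. A small sketch; the helper name is ours, not EdPlace's:

```python
from fractions import Fraction

def scale_and_adjust(a: str, b: str) -> Fraction:
    """Multiply two decimals the worksheet way: drop the decimal points,
    multiply as whole numbers, then divide by 10 for every decimal place
    that was removed."""
    def split(s):
        digits = s.replace(".", "").replace(",", "")        # whole-number form
        places = len(s.split(".")[1]) if "." in s else 0    # decimal places
        return int(digits), places
    x, px = split(a)
    y, py = split(b)
    return Fraction(x * y, 10 ** (px + py))

# The worked examples:
print(scale_and_adjust("0.9", "0.9"))      # 9 x 9 = 81, divided by 100 -> 0.81
print(scale_and_adjust("0.9", "9,000"))    # 9 x 9,000 = 81,000, divided by 10 -> 8,100
```

Using exact `Fraction` arithmetic avoids the floating-point rounding that would otherwise blur the "same number of decimal places in the answer" rule.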
494
1,728
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.34375
4
CC-MAIN-2024-18
latest
en
0.882787
https://www.scribd.com/document/63063516/10-1-1-53
1,500,757,908,000,000,000
text/html
crawl-data/CC-MAIN-2017-30/segments/1500549424148.78/warc/CC-MAIN-20170722202635-20170722222635-00487.warc.gz
822,339,079
91,377
# Monte Carlo Statistical Methods

Christian P. Robert, CREST, Insee, Paris
George Casella, Cornell University, Ithaca, NY
Draft Version 1.1, February 27, 1998

CHAPTER 1 Introduction

Until the advent of powerful and accessible computing methods, the experimenter was confronted with a difficult choice. Either describe an accurate model of a phenomenon, which would usually prevent the computation of explicit answers, or choose a standard model which would allow this computation, but would often not be a close representation of a realistic model. This dilemma is present in many branches of statistical applications, for example in electrical engineering, aeronautics, biology, networks, and astronomy. To use realistic models, the researchers in these disciplines have often developed original approaches for model fitting that are customized for their own problems. (This is particularly true of physicists, the originators of Markov chain Monte Carlo methods.) Traditional methods of analysis, such as the usual numerical analysis techniques, turn out to be not well adapted for such settings. The first section of this chapter presents a number of examples of statistical models, some of which were instrumental in developing the field of simulation-based inference. The remaining sections describe the difficulties specific to most common statistical methods, while the final section contains a comparison with numerical analysis techniques.

1.1 Statistical Models

In a purely statistical setup, computational difficulties occur at both the level of probabilistic modeling of the inferred phenomenon and at the level of statistical inference on this model (estimation, prediction, tests, variable selection, etc.).
In the first case, a detailed representation of the causes of the phenomenon, such as accounting for potential explanatory variables linked to the phenomenon, can lead to a probabilistic structure which is too complex to allow for a parametric representation of the model. Moreover, there may be no provision for getting closed-form estimates of quantities of interest. A frequent setup with this type of complexity is an expert system (in medicine, physics, finance, etc.) or more generally a graph structure. Figure 1.1.1 gives an example of such a structure analyzed in Spiegelhalter et al. (1993). It is related to the detection of a left ventricle hypertrophia (LVH), where the links between causes represent probabilistic dependencies. (The motivation behind the analysis is to improve the prediction of this disease.)

Figure 1.1.1. Probabilistic representation of links between causes of left ventricle hypertrophia (Source: Spiegelhalter et al., 1993). [Nodes: Birth asphyxia; Disease; Age at presentation; LVH; Duct flow; Cardiac mixing; Lung parenchyma; Lung flow; Sick; Hypoxia distribution; Hypoxia in O2; CO2; Chest X-ray; Grunting; LVH report; Lower body O2; CO2 report; X-ray report; Grunting report.]

In this case, the conditional distributions of nodes with respect to their parents lead to the joint distribution. See Robert¹ (1991) or Lauritzen (1993) for other examples of complex expert systems where this reconstitution is impossible.

A second setup where model complexity prohibits an explicit representation appears in econometrics (and in many other areas) for structures of latent (or missing) variable models. Given a "simple" model, aggregation or removal of some components of this model may sometimes induce such involved structures that simulation is truly the only way to draw an inference.
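The remark that the conditional distributions of the nodes given their parents determine the joint distribution can be illustrated on a toy graph. The three-node binary chain and all probabilities below are hypothetical, not taken from the LVH network of Spiegelhalter et al. (1993):

```python
from itertools import product

# In a directed graphical model, the joint distribution factorizes as the
# product of each node's conditional given its parents. Hypothetical chain
# A -> B -> C with made-up probabilities.
p_a = {True: 0.3, False: 0.7}                     # P(A)
p_b_given_a = {True: {True: 0.8, False: 0.2},     # P(B | A): outer key is A
               False: {True: 0.1, False: 0.9}}
p_c_given_b = {True: {True: 0.6, False: 0.4},     # P(C | B): outer key is B
               False: {True: 0.05, False: 0.95}}

def joint(a, b, c):
    """P(A=a, B=b, C=c) = P(a) * P(b|a) * P(c|b)."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# The factorized joint sums to 1 over all 2^3 configurations, as it must.
total = sum(joint(a, b, c) for a, b, c in product([True, False], repeat=3))
print(round(total, 10))
```

With 19 nodes, as in the LVH graph, the same three-line factorization replaces a full joint table that would otherwise be far too large to specify directly.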
(Chapter 9 provides a series of examples of such models where simulation methods are necessary.)

Example 1.1.1 (Censored data models) Censored data models can be considered to be missing data models where densities are not sampled directly. To obtain estimates, and make inferences, usually requires programming or computing time and precludes analytical answers. Barring cases where the censoring phenomenon can be ignored (see Chapter 9), several types of censoring can be categorized by their relation with an underlying (unobserved) model, Y_i^* ~ f(y_i | \theta):

(i) Given random variables Y_i^*, which may be times of observation or concentrations, the actual observations are Y_i = \min\{Y_i^*, u\}, where u is the maximal observation duration or the smallest measurable concentration rate.

(ii) The original variables Y_i^* are kept in the sample with probability \rho(y_i^*), and the number of discarded variables is either known or unknown.

(iii) The variables Y_i^* are associated with auxiliary variables X_i ~ g such that the observation is y_i = h(y_i^*, x_i); for instance, h(y_i^*, x_i) = \min(y_i^*, x_i). In this case, g may be either known or unknown, and in some cases the fact that truncation occurred, namely the variable I_{y_i^* > x_i}, may itself be either known or unknown.

As an example, in a longitudinal study of a disease, some patients may leave the study either due to other death causes or by simply dropping out. If X ~ N(\theta, \sigma^2) and Y ~ N(\mu, \rho^2), the variable Z = X \wedge Y = \min(X, Y) is distributed as

(1.1.1)   f(z) = \frac{1}{\sigma}\,\varphi\!\left(\frac{z-\theta}{\sigma}\right)\left[1-\Phi\!\left(\frac{z-\mu}{\rho}\right)\right] + \frac{1}{\rho}\,\varphi\!\left(\frac{z-\mu}{\rho}\right)\left[1-\Phi\!\left(\frac{z-\theta}{\sigma}\right)\right],

where \varphi is the density of the normal N(0, 1) distribution and \Phi is the corresponding cdf. Similarly, if X has a Weibull distribution with two parameters, X ~ We(\alpha, \lambda), with density f(x) = \alpha\lambda x^{\alpha-1} e^{-\lambda x^\alpha} on \mathbb{R}^+, the observation of the censored variable Z = X \wedge \omega, where \omega is constant, has the density

(1.1.2)   f(z) = \alpha\lambda z^{\alpha-1} e^{-\lambda z^\alpha}\, I_{z < \omega} + \left(\int_\omega^\infty \alpha\lambda x^{\alpha-1} e^{-\lambda x^\alpha}\, dx\right) \delta_\omega(z),

where \delta_a(\cdot) is the Dirac mass at a. The weight of the Dirac mass, P(X \ge \omega), is not easy to compute, and the additive form of the density prohibits the computation of the density of a sample (X_1, \ldots, X_n) for n large. The distributions (1.1.1) and (1.1.2) appear naturally in quality control applications, where the quantity of interest is time to failure: testing of a product may be of a duration \omega and, if the product is still functioning at the end of the experiment, the observation on failure time is censored. So, while formally explicit, these distributions are not easy to work with. (Here, "explicit" has the restrictive meaning that "it can be computed in a reasonable time".)

¹ Claudine, not Christian!

Example 1.1.2 (Mixture models) Models of mixtures of distributions are based on the assumption that the observations are generated from one of k elementary distributions f_i with probability p_i, the overall density being

p_1 f_1(x) + \cdots + p_k f_k(x).

While the computation of standard moments like the mean or the variance of these distributions is feasible in most setups (and thus the derivation of moment estimators), the representation of the likelihood, and therefore the analytical computation of maximum likelihood or Bayes estimates, is generally impossible for mixtures: for a sample of size n, an expansion of the likelihood

\prod_{i=1}^{n} \{ p_1 f_1(x_i) + \cdots + p_k f_k(x_i) \}

involves k^n elementary terms, which is prohibitive for large samples.

Example 1.1.3 (Moving average model) Lastly, we look at a particularly important example in the processing of temporal (or time series) data where the likelihood cannot be written explicitly. An MA(q) model describes variables (X_t) that can be modeled as (t = 0, \ldots, n)

(1.1.3)   X_t = \varepsilon_t + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j},

where the \varepsilon_i's are i.i.d. random variables \varepsilon_i ~ N(0, \sigma^2) and, for j = 1, \ldots, q, the \theta_j's are unknown parameters. Note that for i = -q, \ldots, -1, the perturbations \varepsilon_i can be interpreted as missing data (see Chapter 9). If the sample consists of the observation (X_0, \ldots, X_n), where n > q, the sample density is (Problem 1.13)

(1.1.4)   \int_{\mathbb{R}^q} \sigma^{-(n+q)}\, \varphi\!\left(\frac{x_0 - \sum_{i=1}^q \theta_i \varepsilon_{-i}}{\sigma}\right) \varphi\!\left(\frac{x_1 - \theta_1\hat\varepsilon_0 - \sum_{i=2}^q \theta_i \varepsilon_{1-i}}{\sigma}\right) \cdots \varphi\!\left(\frac{x_n - \sum_{i=1}^q \theta_i \hat\varepsilon_{n-i}}{\sigma}\right) \varphi\!\left(\frac{\varepsilon_{-1}}{\sigma}\right) \cdots \varphi\!\left(\frac{\varepsilon_{-q}}{\sigma}\right) d\varepsilon_{-1} \cdots d\varepsilon_{-q},

where

\hat\varepsilon_0 = x_0 - \sum_{i=1}^q \theta_i \varepsilon_{-i}, \quad \hat\varepsilon_1 = x_1 - \theta_1 \hat\varepsilon_0 - \sum_{i=2}^q \theta_i \varepsilon_{1-i}, \quad \ldots, \quad \hat\varepsilon_n = x_n - \sum_{i=1}^q \theta_i \hat\varepsilon_{n-i}.

The iterative definition of the \hat\varepsilon_i's is a real obstacle to an explicit integration of (1.1.4), which hinders statistical inference in these models.

1.2 Statistical Inference

Before the introduction of simulation-based inference, computational difficulties encountered in the modeling of a problem often forced the use of "standard models" and "standard" distributions, whatever the statistical technique: reduction to simple, perhaps non-realistic, distributions was often necessitated by computational limitations. One course would be to use models based on exponential families, which enjoy numerous regularity properties, and hence circumvent the problems associated with the need for explicit or computationally simple answers. Another course was to abandon parametric representations for non-parametric approaches, which are by definition robust against modeling errors. In econometrics, the computing bottleneck created by the need for explicit solutions has led to the use of linear structures of dependence, that is, of models defined as a difference equation (see Gourieroux and Monfort 1996). But it is also the case that the reduction to simple distributions does not necessarily eliminate the issue of non-explicit expressions. Our major focus is the application of simulation-based techniques to provide solutions and inference for a more realistic set of models.

The statistical techniques that we will be most concerned with are maximum likelihood and Bayesian methods (see Lehmann 1983, Casella and Berger 1990, or Robert 1994 for an introduction to these techniques), these latter two methods using more efficiently, according to the Likelihood Principle (see Robert 1994), the information contained in the distribution of the observations. In their implementation, these approaches are customarily associated with specific mathematical computations: the former with maximization problems, and thus with an implicit definition of estimators as solutions of maximization problems; the latter with integration problems, and thus with a (formally) explicit representation of estimators as an integral (see, e.g., Berger 1985, Brown 1986, or Robert 1995). Alternative approaches (see, for instance, Gourieroux and Monfort 1996) involve solving implicit equations for methods of moments or minimization of generalized distances (for M-estimators). Approaches by minimal distance can in general be reformulated as maximizations of formal likelihoods, as illustrated in Example 1.2.1 below, while the method of moments can sometimes be expressed as a derivation of a maximization problem. Note however that such an interpretation is rare, and also that the method of moments is generally sub-optimal when compared with Bayesian or maximum likelihood approaches. But the moment estimators are still of interest as starting values for iterative methods aiming at maximizing the likelihood, since they are convergent in most setups. For instance, in the case of normal mixtures, the likelihood is not bounded and therefore there is no maximum likelihood estimator; but it can be shown that the solution of the likelihood equations which is closer to the moment estimator is a convergent estimator (see Lehmann 1983).

Example 1.2.1 (Least Squares Estimators) Estimation by least squares can be traced back to Gauss (1810) and Legendre (1805) (see Stigler 1985). In the particular case of linear regression, we observe (x_i, Y_i), i = 1, \ldots, n, with

(1.2.1)   Y_i = a x_i + b + \varepsilon_i,   i = 1, \ldots, n,

where the variables \varepsilon_i represent errors. The parameter (a, b) is estimated by minimizing the distance

(1.2.2)   \sum_{i=1}^{n} (y_i - a x_i - b)^2,

yielding the least squares estimates. If we add more structure to the error term, in particular that the \varepsilon_i are independent with IE(\varepsilon_i) = 0 and \varepsilon_i ~ N(0, \sigma^2) (equivalently, Y_i | x_i ~ N(a x_i + b, \sigma^2), or equivalently that the linear relationship IE[Y | x] = ax + b holds), the log-likelihood function for (a, b) is proportional to

\log(\sigma^{-n}) - \sum_{i=1}^{n} (y_i - a x_i - b)^2 / 2\sigma^2,

and it follows that the maximum likelihood estimates of a and b are identical to the least squares estimates. Therefore, minimization of (1.2.2) is equivalent, from a computational point of view, to imposing a normality assumption on Y conditionally on x and applying maximum likelihood. Moreover, the likelihood structure also provides an estimator of \sigma^2; in this latter case, the additional estimator of \sigma^2 is consistent if the normal approximation is asymptotically valid. (See Problems 1.17 and 1.18.)

Although somewhat obvious, this formal equivalence between the optimization of a function depending on the observations and the maximization of a likelihood associated with the observations has a nontrivial outcome, and applies in many other cases. For example, in the case where the parameters are constrained, Robertson et al. (1988) consider a p × q table of random variables Y_ij with means \mu_ij, where the means are increasing in i and j. Estimation of the \mu_ij's by minimizing the sum of the (y_ij - \mu_ij)^2's is possible through the (numerical) algorithm called "pool-adjacent-violators", developed by Robertson et al. (1988) to solve this specific problem. An alternative is to use an algorithm based on simulation and a representation of the problem by a normal likelihood (see Chapter 5).

1.3 Likelihood Methods

The method of maximum likelihood estimation is quite a popular technique for deriving estimators. Starting from an iid sample X_1, \ldots, X_n from a
population with density f(x | \theta_1, \ldots, \theta_k), the likelihood function is

L(\theta_1, \ldots, \theta_k | x_1, \ldots, x_n) = \prod_{i=1}^{n} f(x_i | \theta_1, \ldots, \theta_k).

More generally, when the x_i's are not iid, the likelihood is defined as the joint density f(x_1, \ldots, x_n | \theta) taken as a function of \theta, with x held fixed. The value of \theta, say \hat\theta(x), at which L(\theta | x) attains its maximum as a function of \theta, is known as a maximum likelihood estimator (MLE). Notice that, by its construction, the range of the MLE coincides with the range of the parameter. The justifications of the maximum likelihood method are primarily asymptotic, in the sense that, under fairly general conditions, the MLE is converging almost surely to the true value of the parameter (see Lehmann and Casella 1997), although it can also be interpreted as being at the fringe of the Bayesian paradigm (see, e.g., Berger and Wolpert 1989).

In the context of exponential families, that is, of distributions with density

(1.3.2)   f(x) = h(x)\, e^{\theta \cdot x - \psi(\theta)},   x \in \mathbb{R}^k,\ \theta \in \mathbb{R}^k,

where \psi, the log-Laplace transform (or cumulant generating function) of h, satisfies IE_\theta[X] = \nabla\psi(\theta), the approach by maximum likelihood is straightforward: the maximum likelihood estimator of \theta is the solution of

(1.3.3)   x = \nabla\psi\{\hat\theta(x)\},

which also is the equation yielding a method of moments estimator.

Example 1.3.1 (Beta MLE) The beta Be(\alpha, \beta) distribution is a particular case of exponential family, since its density,

(1.3.1)   f(y | \alpha, \beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, y^{\alpha-1} (1-y)^{\beta-1},   0 \le y \le 1,

can be written as (1.3.2), with x = (\log y, \log(1-y)) and \theta = (\alpha, \beta). Equation (1.3.3) is then

\log y = \Psi(\alpha) - \Psi(\alpha+\beta),
(1.3.4)   \log(1-y) = \Psi(\beta) - \Psi(\alpha+\beta),

where \Psi(z) = d \log \Gamma(z)/dz denotes the digamma function (see Abramowitz and Stegun 1964). There is no explicit solution to (1.3.4). While it may seem absurd to estimate both parameters of the Be(\alpha, \beta) distribution from a single observation, the formal computing problem at the core of this example remains valid for a sample Y_1, \ldots, Y_n, since (1.3.4) is then replaced by

\frac{1}{n} \sum_i \log y_i = \Psi(\alpha) - \Psi(\alpha+\beta), \quad \frac{1}{n} \sum_i \log(1-y_i) = \Psi(\beta) - \Psi(\alpha+\beta).

In practice, it may still be the case that the solution of (1.3.3) cannot be computed explicitly, or there are constraints on \theta such that the maximum of (1.3.2) is not a solution of (1.3.3). This last situation occurs in the estimation of the table of \mu_ij's in the discussion following Example 1.2.1. So even in the favorable context of exponential families we are not necessarily free from computational problems.

When we leave the exponential family setup, we face increasingly challenging difficulties in using maximum likelihood techniques. One reason for this is the lack of a sufficient statistic of fixed dimension outside exponential families. When the parameter of interest is not a one-to-one function of the parameter vector, that is, when there exist nuisance parameters, say \eta = (\theta, \lambda) where \lambda is a nuisance parameter, a typical approach is to calculate the full MLE \hat\eta = (\hat\theta, \hat\lambda) and use the resulting \hat\theta to estimate \theta. In principle, this does not require more complex calculations, although the distribution of the maximum likelihood estimator of \theta may be quite involved. Many other options exist, such as conditional, marginal, or profile likelihood. (See Barndorff-Nielsen and Cox 1994.)

Example 1.3.2 (Noncentrality Parameters) If X ~ N_p(\theta, I_p) and if \lambda = \|\theta\|^2 is the parameter of interest, the nuisance parameters are the angles in the polar representation of \theta (see Problem 1.24). The maximum likelihood estimator of \lambda is \hat\lambda(x) = \|x\|^2, which has a constant bias equal to p. Surprisingly, an observation Y = \|X\|^2, which has a noncentral chi-squared distribution \chi_p^2(\lambda) (see Appendix 1), leads to a maximum likelihood estimator of \lambda which differs² from Y, since it is the solution of the implicit equation

(1.3.5)   \sqrt{\hat\lambda}\, I_{(p-1)/2}\big(\sqrt{\hat\lambda\, y}\big) = \sqrt{y}\, I_{p/2}\big(\sqrt{\hat\lambda\, y}\big),

where I_\nu is the modified Bessel function (see Problem 1.14),

I_\nu(t) = \frac{(t/2)^\nu}{\sqrt{\pi}\,\Gamma(\nu + 1/2)} \int_0^\pi e^{t \cos(\theta)} \sin^{2\nu}(\theta)\, d\theta = \sum_{k=0}^{\infty} \frac{(t/2)^{2k+\nu}}{k!\, \Gamma(\nu + k + 1)}

(see also Abramowitz and Stegun 1964). The resolution of (1.3.5) thus requires us first to evaluate the special functions I_{p/2} and I_{(p-1)/2} (see Saxena and Alam 1982). Note also that the maximum likelihood estimator is not a solution of (1.3.5) when y < p.

² This phenomenon is not paradoxical, as Y = \|X\|^2 is not a sufficient statistic in the original problem.
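Since the digamma equations for the beta MLE have no explicit solution, the estimate has to be found numerically. The stdlib-only sketch below (the sample values, grid bounds, and crude finite-difference digamma are our own illustrative choices, not the book's) maximizes the Be(alpha, beta) log-likelihood by grid search and checks that the first likelihood equation approximately balances at the maximizer:

```python
import math

def digamma(x, h=1e-5):
    """Crude digamma via a central difference of log-gamma."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def log_lik(a, b, ys):
    """Be(a, b) log-likelihood of the sample ys."""
    c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return sum(c + (a - 1) * math.log(y) + (b - 1) * math.log(1 - y) for y in ys)

ys = [0.2, 0.35, 0.5, 0.6, 0.45]              # arbitrary illustrative sample
grid = [i / 10 for i in range(5, 200)]        # coarse grid over (0.5, 19.9)
a_hat, b_hat = max(((a, b) for a in grid for b in grid),
                   key=lambda ab: log_lik(ab[0], ab[1], ys))

# At the (approximate) maximizer, mean(log y) should roughly equal
# digamma(a) - digamma(a + b), i.e. the first likelihood equation.
lhs = sum(math.log(y) for y in ys) / len(ys)
rhs = digamma(a_hat) - digamma(a_hat + b_hat)
print(a_hat, b_hat, round(lhs - rhs, 3))
```

The residual `lhs - rhs` shrinks as the grid is refined, which is exactly the sense in which the MLE exists but cannot be written in closed form.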
for instance. (a) Show that the posterior distribution ( jx) only depends on . but that a paradox occurs. : : : . u2 N ( 2 . where c is known and takes its values in f1. 1.48 Show that a Student's t-distribution Tp ( . f (z j ). even though ( jx) only depends on z . 1 F1 (1=2. n?1g. 1. u2 . p and = ( 1 ? 2 )=( 2) is the parameter of interest. 1973) Consider n random variables x1 . the normal case. p=2.) 1. 1. (b) Show that the distribution of z .51 Assuming that ( ) = 1 is an acceptable prior for real parameters. although it only depends on z . 2. s2 2 2 = .1. show that this generalized prior leads to ( ) = 1= if 2 IR+ and to (%) = 1=%(1 ? %) if % 2 0. with zi = xi =x1 .53 (Dawid et al. s2 such that u1 N ( 1 .1. only depends on . a multiplication of the number of hi- ? z = u1 p u 2 : s 2 (b) Show that the distribution of z only depends on . apart from the trivial family F0 . . : : : . for exponential families. jjxjj =2) where 1 F1 is the con uent hypergeometric function. zn ). 2 ) does not allow for a conjugate family. xn . Study the behavior of these estimators under the weighted quadratic loss jj 2 2 L( . ) / y!(z ? y) . (b) The parameter of interest is now = 1 . (b) Show that. 2 0 with = ( . )= 1 : (a) The parameter of interest is = ( 1 . x2 j ) / t2n?1 exp ? 1 t2 + n(x1 t ? )2 + n(x2 t ? )2 dt. 0 y z. ). : : : . z j . ) only depends on and derive the distribution f (y. Give the value of p which avoids the paradox. : : : . ? )! with 0 < < 1. 1.54 (Dawid et al. 1. (b) Extend this result to overparametrized models with improper priors. ) d = ( ) and examine whether the paradox is evacuated.56 (Jaynes 1980) Consider z y (1 z?y f (y. x2n N ( 2 . (a) Show that ( jx) only depends on x1 and that f (x1j ) only depends on . with ( . 1973) Consider 2n independent random variables. : : : . 1. 2 = ) and the prior distribution is ( 1 . ). (a) Show that f (z j . 0 ). ) from f (yjz. (b) Show that. x11 . xn N ( + . 2 ). Derive the value of p which avoids the paradox. 
for every ( ). 1.55 (Dawid et al.1. Assume that ( jx) only depends on z and that f (z j ) only depends on . 1973) Consider (x1 . (c) Consider the previous questions when P a( .57 (Dawid et al. 2 ): . R (b) Generalize to the case where ( . 2 ). ) = ?p : Show that ( jx) only depends on z = (z1 . but that ( jx) cannot be obtained from x1 f (x1 j ). x1n N ( 1 .58 Consider x1 . ) / 1= . z j .32 INTRODUCTION 1 2 2 1. 2 . ). x2 ) with the following distribution: Z +1 h i f (x1 . 1973) Consider x = (y. Justify this distribution by considering the setting of Exercise 1. (a) Show that the paradox does not occur if ( ) is proper. z2 ) = (x1 =s. . x2 =s) and that the distribution of z only depends on . The prior distribution on is ( ) = 1. .8 (c) Show that the paradox does not occur when ( .54. (a) Show that the posterior distribution is not de ned for every n. the paradox does not occur. z ) with distribution f (xj ) and = ( . Show that ( jx) only depends on z1 and that f (z1 j ) only depends on . 1. 2 ) = ( 1 = . for any distribution ( ) such that ( jx) only depends on x1 . x21 . ( jx) cannot be proportional to ( )f (x1j ). that is such that. for reasons related to the Pitman{ Koopman lemma (see Robert.1.8. which is equal to r ( ). + 1).1) which also allows for conjugate priors contains exponential type densities with parameter dependent support. ). are of particular interest. k ).1 Conjugate priors When prior information about the model is quite limited. Binomial B(n. ) with Beta Be( . ) e : ? ( ). if the sampling density is of the form (1. As mentioned above. 1994).8. ) is x0 and. the posterior distribution ( jx) also belongs to F . ). since the posterior distribution is ( j + x.2 Gray codes . while they can only be found in exponential families. But the main motivation for using conjugate priors is their tractability. ) = K ( . for every 2 F . 
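A minimal numerical illustration of such a conjugate update (a sketch, not from the text) uses the Poisson–Gamma pairing: a Ga(α, β) prior on a Poisson mean θ combined with observations x1, ..., xn yields a Ga(α + Σxi, β + n) posterior, whose mean is a weighted average of the prior mean and the sample mean.

```python
def poisson_gamma_update(alpha, beta, data):
    # conjugate update: Ga(alpha, beta) prior on a Poisson mean theta,
    # posterior after observing x_1..x_n is Ga(alpha + sum(x_i), beta + n)
    n = len(data)
    return alpha + sum(data), beta + n

alpha, beta = 3.0, 2.0            # prior mean alpha / beta = 1.5
data = [4, 1, 3]
a_post, b_post = poisson_gamma_update(alpha, beta, data)
posterior_mean = a_post / b_post
# linearity: posterior mean = (beta * prior_mean + n * xbar) / (beta + n)
```

This is exactly the linearity property of conjugate priors discussed in this note.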
These families are also called conjugate, and another justification, found in Diaconis and Ylvisaker (1979), is that some Bayes estimators are then linear: for both parsimony and invariance motivations, conjugate priors provide linear estimators for a particular parameterization. In particular, if x₁, ..., xn are i.i.d. f(x|θ) and if δ(θ) = IEθ[x], the prior mean of δ(θ) for the prior π(θ|x₀, λ) is x₀ and

   IE[ δ(θ) | x₁, ..., xn ] = (λ x₀ + n x̄) / (λ + n) .

The conjugate pairs include many common continuous and discrete distributions (see Brown, 1986): the Binomial B(n, θ) distribution is associated with a Beta Be(α, β) prior, the Multinomial Mk(θ₁, ..., θk) with a Dirichlet D(α₁, ..., αk) prior, the Poisson P(λ) with a Gamma G(α, β) prior, the Gamma G(ν, θ) with a Gamma G(α, β) prior, the normal N(θ, σ²) distribution (σ² known) with a normal N(μ, τ²) prior, and the normal N(μ, 1/λ) with a Gamma G(α, β) prior on λ. But the main motivation for using conjugate priors is their tractability.

CHAPTER 2

Random Variable Generation and Computational Methods

Version 1.1, February 27, 1998

The methods developed in this book mostly rely on the possibility of producing (with a computer) a supposedly endless flow of iid random variables for well-known distributions, as well as random variables that are distributed according to a distribution f that is not necessarily explicitly known (see, for example, Examples 1.2 and 1.3). This generation is, in turn, based on the production of uniform random variables. We thus provide in this chapter a particular uniform generator, along with standard generation methods, to produce random variables from both standard and nonstandard distributions. We also give an introduction to approximation methods for densities and their connections with simulation.

2.1 Simulating Uniform Random Variables

2.1.1 Introduction

Methods of simulation are based on the production of random variables, often independent random variables, transformed by the generalized inverse defined below. In this chapter, we concentrate on the generation of random variables that are uniform on the interval [0, 1], because the uniform distribution U[0,1] provides the basic probabilistic representation of randomness. In fact, it is always possible to represent the generic probability triple (Ω, F, P) (where Ω represents the whole space, F represents a σ-algebra on Ω, and P is a probability measure) as ([0, 1], B([0, 1]), U[0,1]) (where the B([0, 1]) are the Borel sets on [0, 1]), and therefore to equate the variability of ω ∈ Ω with that of a uniform variable on [0, 1] (see, for instance, Billingsley, 1995). The random variables X are then functions from [0, 1] to X. While it somehow clarifies the usual introduction of random variables as measurable functions, this representation is mostly of interest in describing the structure of a space of random variables. The type of random variable production is formalized below in the definition of a pseudo-random number generator.

Definition 2.1.1 For a function F on IR, the generalized inverse of F, F⁻, is the function defined by

(2.1.1)   F⁻(u) = inf { x ; F(x) ≥ u } .
For all u ∈ [0, 1] and for all x in F⁻([0, 1]), the generalized inverse satisfies

   F(F⁻(u)) ≥ u   and   F⁻(F(x)) ≤ x .

We then have the following lemma, sometimes known as the Probability Integral Transform, which gives us a representation of any random variable as a transform of a uniform random variable.

Lemma 2.1.2 If U ∼ U[0,1], then the random variable F⁻(U) has the distribution F.

Proof. For all u ∈ [0, 1] and for all x in F⁻([0, 1]),

   { (u, x) ; F⁻(u) ≤ x } = { (u, x) ; F(x) ≥ u }

and therefore

   P( F⁻(U) ≤ x ) = P( U ≤ F(x) ) = F(x) . □

Thus, formally, in order to generate a random variable X ∼ F, it suffices to generate U according to U[0,1] and then take the transform x = F⁻(u). (Although, in practice, we often use methods other than that of Lemma 2.1.2, this basic representation is usually a good way to think about things.) The generation of uniform random variables is therefore a key determinant in the behavior of simulation methods for other probability distributions, since those distributions can be represented as a deterministic transformation of uniform random variables.

The logical paradox¹ associated with the generation of "random numbers" is the problem of producing a deterministic sequence of values in [0, 1] which imitates a sequence of iid uniform random variables U[0,1]. (Techniques based on the physical imitation of a "random draw", using for instance the internal clock of the machine, have been ruled out since, first, there is no reproducibility of such samples and, second, there is no guarantee on the uniform nature of the numbers thus produced.) More formally, there are methods that use a fully deterministic process to produce a random sequence in the following sense: Having generated (X₁, ..., Xn), knowledge of Xn [or of (X₁, ..., Xn)] imparts no discernible knowledge of the value of Xn+1. But here we really do not want to enter into the philosophical debate on the notion of "random", and on whether it is, indeed, possible to "reproduce randomness" (see, for example, Dellacherie, 1978). Before presenting a reasonable uniform random number generator, we first digress a bit to discuss what we mean by a "bad" random number generator, since a bad choice of a uniform random number generator can invalidate the resulting simulation procedure.

¹ Von Neumann (1951) summarizes this problem very clearly by writing "Any one who considers arithmetical methods of reproducing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number; there are only methods of producing random numbers, and a strict arithmetic procedure of course is not such a method."
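Lemma 2.1.2 translates directly into code. The sketch below (with hypothetical function names) generates exponential variables by inverting the cdf F(x) = 1 − exp(−λx).

```python
import math
import random

def sample_via_inverse(F_inv, n, rng=random):
    # Lemma 2.1.2: if U ~ U[0,1], then F^-(U) has cdf F
    return [F_inv(rng.random()) for _ in range(n)]

# Exp(lmbda): F(x) = 1 - exp(-lmbda * x), so F^-(u) = -log(1 - u) / lmbda
lmbda = 2.0

def exp_inv(u):
    return -math.log(1.0 - u) / lmbda
```

The same recipe applies to any distribution whose cdf can be inverted in closed form; otherwise other transformation methods are preferred, as the text notes.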
Definition 2.1.3 A uniform pseudo-random number generator is an algorithm which, starting from an initial value u₀ and a transformation D, produces a sequence (ui) = (Dⁱ(u₀)) of values in [0, 1]. For all n, the values (u₁, ..., un) reproduce the behavior of an iid sample (V₁, ..., Vn) of uniform random variables when compared through a usual set of tests.

This definition is clearly restricted to testable aspects of the random variable generation, which are connected through the deterministic transformation ui = D(ui−1). The validity of the algorithm consists in the verification that the sequence U₁, ..., Un leads to acceptance of the hypothesis

   H₀ : U₁, ..., Un are iid U[0, 1] .

The set of tests used is generally of some consequence. There are classical tests of uniformity, like those of Lecoutre and Tassi (1987). One can also use nonparametric tests, such as the Kolmogorov-Smirnov test, as well as methods of time series to determine the degree of correlation between Ui and (Ui−1, ..., Ui−k), for instance by using an ARMA(p, q) model. Marsaglia has also assembled a set of tests called Die Hard, where algorithms resistant to standard tests may exhibit fatal faults, for instance when applying them on arbitrary decimals of the Ui. Many generators will be deemed adequate under such examination.

Of course, the "pseudo-randomness" produced by these techniques is limited, since two samples (X₁, ..., Xn) and (Y₁, ..., Yn) produced by the algorithm will be neither independent, nor identically distributed, nor comparable in any probabilistic sense: given the initial value X₀, the sample (X₁, ..., Xn) is always the same. This limitation should not be forgotten: the validity of a random number generator is based on a single sample X₁, ..., Xn when n tends to +∞, and not on replications (X₁₁, ..., X₁n), (X₂₁, ..., X₂n), ..., (Xk₁, ..., Xkn), where n is fixed and k tends to infinity, since the distribution of these n-tuples depends on the manner in which the initial values Xr₁ (1 ≤ r ≤ k) were generated.

In addition, the variables (Xn) always form a trivial Markov chain, in the sense that the transition kernel is equal to a Dirac mass, so that there is neither ergodicity nor convergence of the distribution of Xn to the uniform distribution; the distribution of Xn given X₀ does remain a Dirac mass for every n, even though, surprisingly, it is still possible to speak of stationary distributions in these deterministic setups. It is also the case that the random number generation methods discussed here are not directly related to the Markov Chain methods discussed in Chapters 6 and 7. Dellacherie (1978) gives a more mathematical treatment of this subject, plus a historical review of successive notions of random sequences, such as those of Martin-Löf (1966), and the corresponding formal tests of randomness.
This methodology is not without problems: algorithms having hidden periodicities (see below), or which are not uniform for the smaller digits, may be difficult to detect. Definition 2.1.3 is therefore functional: An algorithm that generates uniform numbers is acceptable if it is not rejected by a set of tests. (For instance, the "tent" function,

   D(x) = 2x if x ≤ 1/2 ,   D(x) = 2(1 − x) if x > 1/2 ,

produces a sequence (Xn) that tends to U[0, 1], but, given the finite representation of real numbers in the computer, it progressively eliminates the last decimals of Xn, and the sequence sometimes will converge to a fixed value.) Moreover, consider particular applications that might demand a large number of iterations, such as the theory of large deviations (Bucklew, 1990) or particle physics: Ferrenberg, Landau and Wang (1992) show, for example, that an algorithm of Wolff (1989), reputed to be "good", results in systematic biases in the processing of Ising models (see Example 5.1), due to long term correlations in the generated sequence.

The notion that a deterministic system can imitate a random phenomenon may also suggest the use of chaotic models to create random number generators. These models, which result in complex deterministic structures (see Bergé, Pommeau and Vidal, 1984; Gleick, 1989; Ruelle, 1990), are based on dynamic systems of the form Xn+1 = D(Xn) which are very sensitive to the initial condition X₀. However, the chaotic features of the system are not guarantees for an acceptable behavior (in the probabilistic sense) of the associated generator, and classic examples from the theory of chaotic functions do not lead to acceptable pseudo-random number generators, even when these functions give a good approximation of randomness in the unit square [0, 1] × [0, 1].

Example 2.1.4 {The Logistic Function} The logistic function D_α(x) = αx(1 − x) produces, theoretically, chaotic configurations for some values of α ∈ [3.57, 4.00]; for other values, the sequence (Xn) sometimes will converge to a fixed value. In particular, the value α = 4.00 yields a sequence (Xn) in [0, 1] that has the same behavior as a sequence of random numbers (or random variables) distributed according to the arcsine distribution, with density 1/(π√(x(1 − x))). The hypothesis of randomness is nonetheless rejected by many standard tests, and the second generator of Example 2.1.4 has a disastrous behavior.

Example 2.1.5 (Continuation of Example 2.1.4) Figure 2.1 illustrates the properties of the generator based on D₄. The histogram of the transforms Yn = F(Xn) = 0.5 + arcsin(2Xn − 1)/π of a sample of successive values Xn+1 = D₄(Xn)
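The behavior of Example 2.1.4 is easy to reproduce. In the sketch below (not the book's code), the transform uses the arcsine cdf written as F(x) = (2/π) arcsin √x, which is equivalent to the expression in the text:

```python
import math

def logistic_orbit(x0, n, alpha=4.0):
    # iterate the logistic map D_alpha(x) = alpha * x * (1 - x)
    xs, x = [], x0
    for _ in range(n):
        x = alpha * x * (1.0 - x)
        xs.append(x)
    return xs

def arcsine_cdf(x):
    # cdf of the arcsine distribution with density 1 / (pi * sqrt(x(1-x)))
    x = min(max(x, 0.0), 1.0)   # guard against rounding just outside [0, 1]
    return 2.0 / math.pi * math.asin(math.sqrt(x))

xs = logistic_orbit(0.123, 5000)
ys = [arcsine_cdf(x) for x in xs]   # marginally close to U[0, 1]
```

The marginal histogram of the ys looks uniform, yet each yn+1 is a deterministic function of yn, so pairs at small lags lie on a curve rather than filling the unit square.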
fits the uniform density extremely well. Moreover, Figure 2.1 shows that the sample of (Yn, Yn+100) satisfactorily fills the unit square, while the plots of (Yn, Yn+1) and (Yn, Yn+10) do not display characteristics of uniformity. But the 100 calls to D between two generations are prohibitive in terms of computing time.

Figure 2.1. Plot of the sample (Yt, Yt+100) (t = 1, ..., 9899) for the sequence Xt+1 = 4Xt(1 − Xt) and Yt = F(Xt), along with the (marginal) histograms of Yt and Yt+100.

It is important to note that the sequence produced by a generator does not really take values in the interval [0, 1], but rather in the integers {0, ..., M}, where M is the largest integer accepted by the computer. Preferred generators are those that take into account the specifics of this representation and provide a uniform sequence. To keep our presentation simple, instead of a catalog of usual generators, we only present a single generator; for those, the books of Knuth (1981), Rubinstein (1981), Ripley (1987) and Fishman (1996) are excellent sources.
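As a concrete flavor of the Kiss generator described in the next section, here is a sketch of a KISS-type combination. The constants below follow Marsaglia's later, widely circulated postings and are assumptions here; the 1993 algorithm in the text differs in detail.

```python
class Kiss:
    # combines a linear congruential generator, a 32-bit xorshift, and a
    # multiply-with-carry generator, added modulo 2^32
    M = 2 ** 32

    def __init__(self, x=123456789, y=362436069, z=77465321, c=13579):
        self.x, self.y, self.z, self.c = x, y, z, c

    def next_int(self):
        self.x = (69069 * self.x + 12345) % self.M      # congruential part
        y = self.y                                      # xorshift part
        y ^= (y << 13) % self.M
        y ^= y >> 17
        y ^= (y << 5) % self.M
        self.y = y
        t = 698769069 * self.z + self.c                 # multiply-with-carry part
        self.c, self.z = t >> 32, t % self.M
        return (self.x + self.y + self.z) % self.M

    def uniform(self):
        # the generator really takes values in {0, ..., M-1}, not in [0, 1]
        return self.next_int() / self.M
```

Combining three generators with incommensurable periods is what gives such a scheme a very long overall period and good behavior under the standard batteries of tests.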
We have presented in this introduction some necessary basic notions; we can now describe a very good pseudo-random number generator, the algorithm Kiss² of Marsaglia and Zaman (1993).

2.1.2 The Kiss Generator

As we have remarked above, the finite representation of real numbers in a computer can radically modify the behavior of a dynamic system.

² The name is an acronym of the saying Keep it simple, stupid!, and not reflective of more romantic notions. After all, this is a statistics text!

(d) Let X₁, ..., Xn be iid from the Pareto Pa(α, μ) distribution with known lower limit μ; the corresponding density is

   f(x|α) = α μ^α / x^{α+1} ,   x > μ .

Show that S = Σi log(Xi/μ) has cumulant transform K(τ) = −log(1 − τ/α) per observation, derive the saddlepoint approximation of the density of S, and show that the renormalized version of this approximation is exact.

2.55 Show that the quasi-Monte Carlo methods introduced in §2.6.1 lead to standard Riemann integration for the equidistributed sequences in dimension 1.

2.56 Establish (2.6.1) and show that the divergence D(x₁, ..., xn) leads to the Kolmogorov-Smirnov test in nonparametric Statistics.

2.6 Notes

2.6.1 Quasi-Monte Carlo methods

Quasi-Monte Carlo methods were proposed in the 1950's to overcome some drawbacks of regular Monte Carlo methods by replacing probabilistic bounds on the errors with deterministic bounds. Since it can be shown that the divergence is related to the overall approximation error by

(2.6.1)   | (1/n) Σi h(xi) − ∫ h(x) dx | ≤ V(h) D(x₁, ..., xn) ,

deterministic sequences with a small divergence guarantee a small integration error.
The idea at the core of quasi-Monte Carlo methods is to substitute the randomly (or pseudo-randomly) generated sequences used in regular Monte Carlo methods with a deterministic sequence (xn) in order to minimize the so-called divergence

   D(x₁, ..., xn) = sup_u | (1/n) Σi 1_{[0,u]}(xi) − u | ,

which is also the Kolmogorov-Smirnov distance between the empirical cdf and that of the uniform distribution, used in non-parametric tests. For fixed n, the solution is obviously xi = (2i − 1)/2n in dimension 1, but the goal here is to get a low-discrepancy sequence (xn), such that x₁, ..., xn−1 do not depend on n and can thus be updated sequentially, which provides small values of D(x₁, ..., xn) for all n's. As shown in Niederreiter (1992), there exist such sequences, which ensure a divergence rate of order O(n⁻¹ log(n)^{d−1}), where d is the dimension of the integration space. In (2.6.1), V(h) is the total variation of h,

   V(h) = lim_{N→∞} sup_{0 = x₀ ≤ ... ≤ x_N = 1} Σ_{j=1}^N | h(xj) − h(xj−1) | ,

so the gain over standard Monte Carlo methods can be substantial, since standard methods lead to order O(n^{−1/2}) errors (see Chapter ??). The advantage over standard integration techniques such as Riemann sums is also important when the dimension d increases, since the latter are of order O(n^{−2/d}) (see Yakowitz et al. 1978). The true comparison with regular Monte Carlo methods is, however, more delicate than a simple assessment of the order of convergence. Construction of these sequences, although independent of h, can be quite involved; see Niederreiter (1992) for extensions in optimization setups. More importantly, the construction requires that the functions to be integrated have bounded support, which can be a hindrance in practice.
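A classical low-discrepancy sequence of the type discussed in this note is the van der Corput sequence; a sketch (base-2 radical inverse, hypothetical function names):

```python
def van_der_corput(n, base=2):
    # radical-inverse sequence: reflect the base-b digits of i about the point
    seq = []
    for i in range(1, n + 1):
        q, denom, x = i, 1.0, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        seq.append(x)
    return seq

def qmc_estimate(h, n):
    # quasi-Monte Carlo approximation of the integral of h over [0, 1]
    pts = van_der_corput(n)
    return sum(h(x) for x in pts) / n
```

With h(x) = x², the error |qmc_estimate(h, n) − 1/3| decays at the O(n⁻¹ log n) rate of the note rather than the O(n^{−1/2}) Monte Carlo rate.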
The previous chapter has illustrated a number of methods for the generation of random variables with any given distribution. Thus. Examples 1. (An associated problem.CHAPTER 3 Monte Carlo Integration Version 1. Similarly.).1) either by numerical methods or by simulation.5).4.2 February 27.4. ) ( ) f(xj ) d : Only when the loss function is the quadratic function k ? k2 will the Bayes estimator be a posterior expectation. optimization problems and integration problems.1) in terms of ( jx) (see for instance Robert 1994a or 1996c for the case of intrinsic losses). and integration with the Bayesian approach. While some other loss functions lead to general solutions (x) of (3.4 have also shown that it is not always possible to derive explicit probabilistic models and even less possible to analytically compute the estimators associated with a given paradigm (maximum likelihood.3. In general. On the other hand.1 { 1.3. we are led to consider numerical solutions.1.) Although optimization is generally associated with the likelihood approach. Bayes estimators are not always posterior expectations. method of moments. alternatives to standard likelihood. such as bootstrap methods. and . that of implicit equations can often be reformulated as an optimization problem.1) min 3. these are not strict classi cations.1. etc. such as marginal likelihood.1. the Bayes estimate under the loss function L( . a speci c setup where the loss function is constructed by the decision-maker almost always precludes analytical integration of (3. may require the integration of the nuisance parameters (Barndor -Nielsen and Cox 1994).1.1. 3) shows that the associated Bayes estimator satis es Example 3. L( . 2 = sin('1 ) cos('2 ). ai+1). 'p?1 are the polar coordinates of .78 MONTE CARLO INTEGRATION 3. (x). ) = wi( ? )2 when ? 2 ai. the computation of (x) requires the computation of the posterior means restricted to the intervals ai . ) = j ? j. ai+1 ). '1.1. Ip ).3.3. or Huber's (1972) loss function.1. 
which is the solution to the equation (3. that is when = k k2 and X this equation is quite complex. ai+1 ).3) L( . before embarking on a description of simulation based integration methods.1.1 {L1 loss{ For 2 IR and L( . : : :.1. 1 where .1. ) = 2( cfj ? j ? c=2g otherwise. since ( jx) / p?1=2 (x) ( ) f(xj ) d = Z (x) ( ) f(xj ) d : Np ( . c. that is. Thus. !i > 0: Di erentiating (3.1. Example 3. the Bayes estimator associated with is the posterior median of ( jx).2) Z In the setup of Example 1. (3. +1 that is P w R ai i a (x) = P i R iai w . just as the search for stationary state in a dynamical system in physics or in economics can require one or several simulations of successive states of the system. Similarly.1 hence provides a basis for the construction of solutions to our statistical problems. k Consider a loss function which is piecewise quadratic. described in Chapter 4. We now look at a number of examples illustrating these situations. statistical inference on complex models will often require the use of simulation techniques. ) = wij ? j if ? 2 ai. and of the posterior probabilities of these intervals. : : : = cos('1 ). which often rely on Markov chain tools. Z e?kx? k =2 2 p?2 Y i=1 sin('i )p?i?1 d'1 : : :d'p?1 . +1 ( ) f(xj ) d : ( ) f(xj ) d i i ai Although formally explicit. consider a piecewise linear loss function. Chapter 5 deals with the corresponding simulation based optimization methods. ? )2 if j ? j < L( .2 {Piecewise linear and quadratic loss functions{ X i wi Z ai ai +1 ( ? (x)) ( jx) d = 0 . estimators such as empirical Bayes estimators. ^. best unbiased estimator. the . Given a sampling distribution f(xj ) and a conjugate prior distribution ( j . without taking into account the e ect of the substitution). most priors result only in integral forms of . The estimated distribution ( j ^ .5.1). etc. The corresponding conjugate prior is Np ( . It may seem that the topic of James-Stein estimation is an exception to this observation. 
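For the L1 loss of Example 3.1.1, the Bayes estimate is the posterior median, and even when it has no closed form a posterior sample delivers it directly as the empirical median. A toy sketch, with an assumed posterior theta|x ~ N(2, 1) chosen purely for illustration:

```python
import random

# empirical posterior median from a (simulated) posterior sample;
# the N(2,1) posterior is an assumption made for this illustration
rng = random.Random(0)
post_sample = sorted(2.0 + rng.gauss(0.0, 1.0) for _ in range(100001))
post_median = post_sample[len(post_sample) // 2]  # Bayes estimate, L1 loss
```

The same device applies to the piecewise losses of Example 3.1.2, with the restricted posterior means and interval probabilities replaced by averages over the sample.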
the empirical Bayes inference is based on the pseudo-posterior Np ( ^ x=( ^ + 1). or even to establish that a new estimator (uniformly) dominates a standard estimator. Nonetheless. ^Ip =( ^ + 1)).1 ] INTRODUCTION 79 where and c are speci ed constants.3. k Inference based on classical decision theory evaluates the performance of estimators (maximum likelihood estimator. which are quite attractive in practice. in these situations. This leads to the maximum likelihood estimator ^ = (kxk2 ? p + 1)+ . In the empirical Bayes approach. ) d by maximum likelihood. for instance. from the marginal distribution m(xj . The following example illustrates some di culties encountered in evaluating empirical Bayes estimators (see also Example 3. and if it is evaluated under a quadratic loss. the empirical Bayes method estimates the hyperparameters . ). In most cases it is impossible to obtain an analytical evaluation of the risk of a given estimator. also called risks. Example 3. ^) is then used as in a standard Bayesian approach (that is. given the abundant literature on the topic. (1992. to derive a point estimator.1. k k2 is the quantity of interest.) through the loss imposed by the decision-maker or by the setting. ) = Z f(xj ) ( j . Since the posterior distribution of given is Np ( x=( + 1).3 {Empirical Bayes estimator{ Let X have the distribution X Np ( . Chapter 5). If. it is possible to analytically establish domination results over the maximum likelihood estimator or unbiased estimators (see Robert 1994 Chapter 8.3. Estimators are then compared through their expected losses. will rarely allow for analytic expressions. Chapter 9) for a more detailed discussion on this approach. See Maritz and Lwin (1989) or Searle et al. the scale hyperparameter is replaced by the maximum likelihood estimator. ( + 1)Ip ). moment estimator. Ip =( + 1)). for instance = 0. Some of these may be quite complex. Ip ) where the hyperparameter is generally xed. 
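The simulation experiment evoked here is easy to set up. The following illustrative comparison (our own toy setup, not taken from the text) estimates the quadratic risk, for X ~ N_p(theta, I_p), of the unbiased estimator ||x||^2 - p of ||theta||^2 against its positive-part, empirical-Bayes-type truncation (||x||^2 - p)^+:

```python
import random


def risks_norm2(theta_norm2, p=10, n_rep=20000, seed=0):
    # Monte Carlo estimate of the quadratic risks of two estimators
    # of ||theta||^2: unbiased (||x||^2 - p) and truncated (||x||^2 - p)^+
    rng = random.Random(seed)
    t1 = theta_norm2 ** 0.5  # put all of ||theta|| on one coordinate
    l_unb = l_pos = 0.0
    for _ in range(n_rep):
        nx2 = sum((rng.gauss(0.0, 1.0) + (t1 if i == 0 else 0.0)) ** 2
                  for i in range(p))
        l_unb += (nx2 - p - theta_norm2) ** 2
        l_pos += (max(nx2 - p, 0.0) - theta_norm2) ** 2
    return l_unb / n_rep, l_pos / n_rep


r_unb, r_pos = risks_norm2(0.0)
```

At theta = 0 the unbiased estimator has risk 2p + 4||theta||^2 = 20, and the truncation strictly improves on it, which the simulation recovers without any analytic work.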
for some families of distributions (such as exponential or spherically symmetric) and some types of loss functions (such as quadratic or concave). Although a speci c type of prior distribution leads to explicit formulas. In fact. based on the marginal distribution X Np (0. This makes their evaluation under a given loss problematic. Ip ). or Lehmann and Casella 1997. 3. In the setup of Decision Theory. IE 2k k1 + p x. this solution is natural.3. k A general solution to the di erent computational problems contained in the previous examples and in those of x1. since they allow for a control of the convergence of simulation methods (which is equivalent to the deterministic bounds used by numerical approaches. and then the resulting Bayes estimator k2 (x) = IE 2k kk2 + p x. ) = IE (k k2 ? )2 ] .80 MONTE CARLO INTEGRATION 3. R( . One can therefore apply probabilistic results such as the Law of Large Numbers or the Central Limit Theorem. This quadratic risk is often normalized by 1=(2k k2 + p) (which does not a ect domination results but ensures the existence of a minimax estimator. of either the true or approximate distributions to calculate the quantities of interest. This is more easily . kxk2 ? p.2 Classical Monte Carlo integration Before applying our simulation techniques to more practical problems. and the maximum likelihood estimator based on kxk2 2 (k k2 ) (see Saxena p and Alam 1982 and Example 1. or Lehmann and Casella 1997. for the three estimators. whether it is classical or Bayesian. Chap. since the proof of this second domination result is quite involved. However. Note that the possibility of producing an almost in nite number of random variables distributed according to a given distribution gives us access to the use of frequentist and asymptotic results much more easily than in usual inferential settings (see Ser ing 1987.1 is to use simulation. we rst need to develop their properties in some detail. 
one might first check for domination through a simulation experiment which evaluates the risk function. The empirical Bayes estimator of ||theta||^2 is

   delta_eb(x) = (lambda-hat/(lambda-hat + 1))^2 ||x||^2 + (lambda-hat/(lambda-hat + 1)) p
               = [1 - p/||x||^2]^2 ||x||^2 + p [1 - p/||x||^2]
               = (||x||^2 - p)^+ .

This estimator dominates both the best unbiased estimator and the maximum likelihood estimator (see Robert 1994), even though the corresponding Bayes estimator delta_2 does not have an explicit form. Simulation is the natural tool here, since risks and Bayes estimators involve integrals with respect to probability distributions. We will see in Chapter 5 why this solution also applies in the case of maximum likelihood estimation. This is more easily accomplished by looking at the generic problem of evaluating the integral

(3.2.1)   IE_f[h(X)] = Integral_X h(x) f(x) dx.

Based on previous developments, it is natural to propose using a sample (X1, ..., Xm) generated from the density f to approximate (3.2.1) by the empirical average h-bar_m = (1/m) sum_j h(xj), since h-bar_m converges almost surely to IE_f[h(X)] by the Strong Law of Large Numbers. Moreover, when h^2 has a finite expectation under f, the speed of convergence of h-bar_m can be assessed, since the variance

   var(h-bar_m) = (1/m) Integral_X (h(x) - IE_f[h(X)])^2 f(x) dx

can also be estimated from the sample (X1, ..., Xm) through

   v_m = (1/m^2) sum_j [h(xj) - h-bar_m]^2 .

The ratio (h-bar_m - IE_f[h(X)]) / sqrt(v_m) is therefore approximately distributed as a N(0, 1) variable, and this leads to the construction of a convergence test and of confidence bounds on the approximation of IE_f[h(X)].

Example 3.2.1 (Continuation of Example 3.1.3) Consider the evaluation of delta for p = 3 and x = (0.1, 1.2, -0.7). If pi(theta) is the non-informative prior distribution proportional to ||theta||^-2 (see Example 1.2),

(3.2.2)   pi(theta|x) proportional to ||theta||^-2 exp{-||x - theta||^2/2} .

Since delta_1(x) = (2 IE[(2||theta||^2 + 3)^-1 | x])^-1 - 3/2, this requires the computation of

   IE[(2||theta||^2 + 3)^-1 | x] = Integral_{IR^3} (2||theta||^2 + 3)^-1 pi(theta|x) d theta.

For m large, we would need a sample (theta_1, ..., theta_m) from the posterior distribution (3.2.2). The simulation of (3.2.2) can be done by representing theta in polar coordinates (rho, phi_1, phi_2) (rho > 0, phi_1 in [0, pi], phi_2 in [0, 2 pi]), with theta/rho = xi = (cos phi_1, sin phi_1 cos phi_2, sin phi_1 sin phi_2), which yields

   pi(rho, phi_1, phi_2 | x) proportional to exp{rho (x'xi) - rho^2/2} sin(phi_1) .
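The estimator h-bar_m, the variance estimate v_m and the resulting asymptotic 95% interval can be packaged in a few lines; the toy integral below (our choice) is IE[X^2] = 1 for X ~ N(0, 1):

```python
import random
import math


def mc_estimate(h, sampler, m, seed=1):
    # h-bar_m with the variance estimate v_m of the text and
    # the asymptotic 95% half-width 1.96 * sqrt(v_m)
    rng = random.Random(seed)
    vals = [h(sampler(rng)) for _ in range(m)]
    h_bar = sum(vals) / m
    v_m = sum((v - h_bar) ** 2 for v in vals) / m ** 2
    return h_bar, 1.96 * math.sqrt(v_m)


est, half = mc_estimate(lambda x: x * x, lambda r: r.gauss(0.0, 1.0), 100000)
```

The half-width shrinks as O(m^(-1/2)), which is the error rate against which the quasi-Monte Carlo rates of the previous chapter's notes are compared.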
it is natural to propose using a sample (X1 . '2 2 ? =2.2) can be done by representing in polar coordinates ( . this requires the computation of IE (2k k2 + 3)?1 jx] = For m large. 1] until N (x . then j'1. The algorithm corresponding to this decomposition is the following 1.2. Simpson method.82 MONTE CARLO INTEGRATION 3. We mentioned in x3.1 gives a realization of a sequence of Tm . '2 jx) / expf(x )2 =2g sin('1 ). Simulate 1 2 from the uniform distribution on the half unit sphere and from 0. Unfortunately. which ensures identi ability for the model.19 Polar Simulation (' . since the performances of complex procedures can be measured as in Example ?? in any setting where the distributions involved in the model can be simulated. '2 ). m 1 X (2 2 + 3)?1 : Tm = m j j =1 Figure 3. etc. ' ) U U U expfx ? jjxjj2=2g A:19] 2. the envelope being constructed from the normal approximation through the 95% con dence interval Tm 1:96pvm : k The approach followed in the above example can be successfully utilized in many cases. and an approximation of IE (2 2 + 3)?1jX]. Since now varies in IR. One can therefore simulate ('1 . 1) The sample resulting from A:19] provides a subsample ( 1 . which only depends on ('1 . '2) is not directly available since it involves the cdf of the normal distribution. The same applies for testing (which is formally a branch of Decision Theory) where the level of . '2jx) / (?x ) expf(x )2 =2g sin('1 ). we can modify the polar coordinates in order to remove the positivity constraint on . m ) in step 2. The integration of then leads to ('1 .3.2 where x = x1 cos('1 ) + x2 sin('1) cos('2 ) + x3 sin('1 ) sin('2 ). '2) based on a uniform distribution on the half unit sphere.) in dimension 1 or 2. : : :. '2 ) is ('1 . '2 Algorithm A. Generate from the normal distribution N (x1 cos('1 ) + x2 sin('1 ) cos('2 ) + x3 sin('1 ) sin('2 ). The scope for application of this Monte Carlo integration method is obviously not only limited to the Bayesian paradigm. 
In step 2. of [A.19], rho is generated from the distribution N(x1 cos(phi_1) + x2 sin(phi_1) cos(phi_2) + x3 sin(phi_1) sin(phi_2), 1) truncated to IR+. If we denote xi = theta/rho, the marginal distribution of (phi_1, phi_2) is then

   pi(phi_1, phi_2 | x) proportional to Phi(x'xi) exp{(x'xi)^2/2} sin(phi_1) .

The alternative constraint becomes phi_1 in [-pi/2, pi/2], with rho no longer truncated to IR+, and the resulting marginal can be simulated by an accept-reject algorithm using the instrumental function sin(phi_1) exp{||x||^2/2}.

[Figure 3.2.1. Convergence of the Bayes estimator of ||theta||^2 under normalized quadratic loss for the reference prior pi(theta) = ||theta||^-2 and the observation x = (0.1, 1.2, -0.7) (100 simulations).]

This holds even though it is often possible to achieve greater efficiency through numerical methods (Riemann quadrature, etc.). The same applies for testing (which is formally a branch of Decision Theory), where the level of significance of a test and its power function can be easily computed, and simulation thus can provide a useful improvement over asymptotic approximations when explicit computations are impossible.

Example 3.2.2 {Normal cdf} Since the normal cdf cannot be written in an explicit form, a possible way to construct normal distribution tables is to use simulation. Consider thus the generation of a sample of size n, (X1, ..., Xn), based on the Box-Muller algorithm [A.1] of Example 2. The approximation of

   Phi(t) = Integral_{-infinity}^{t} (1/sqrt(2 pi)) e^{-y^2/2} dy

by the Monte Carlo method is thus

   Phi-hat(t) = (1/n) sum_{i=1}^{n} I_{xi <= t} ,

with (exact) variance Phi(t)(1 - Phi(t))/n (as the variables I_{xi <= t} are independent Bernoulli with success probability Phi(t)). For values of t around t = 0 the variance is thus approximately 1/4n, and to achieve a precision of four decimals the approximation requires on average sqrt(n) = 2 x 10^4, that is, n = 4 x 10^8 simulations. Table ?? gives the evolution of this approximation for several values of t and shows an accurate evaluation for 100 million and 200 million iterations. Note that greater accuracy is achieved in the tails.
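The normal-cdf approximation of Example 3.2.2 and its exact binomial variance can be checked directly (a small-scale check, with n far below the tabulation sizes quoted in the text):

```python
import random
import math


def phi_hat(t, n, seed=2):
    # proportion of N(0,1) draws below t;
    # exact variance Phi(t)(1 - Phi(t))/n
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) <= t for _ in range(n)) / n


def phi(t):
    # exact normal cdf via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))


approx = phi_hat(1.0, 200000)
```

With n = 200000 the standard error near t = 1 is about 0.0008, consistent with the Phi(t)(1 - Phi(t))/n formula, and the relative accuracy is indeed better in the tails, where the Bernoulli variance is smaller.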
(c) Examine the alternative based on a gamma distribution G a( . Deduce that (3.6 tg Z )) b(Zi ) = 1 + (n(?(Zi ) ? f)(f (iZ ) : 1)(1 ? i Pn+t?1 b(zi). in (3. (Hint: Show that the sum of the weights Sn can be replaced by (n ? 1) in and assume IEf h(X )] = 0. ) on = jj jj2 and a uniform distribution on the angles. p) with p = 10?6 . . show that J (m) is distributed from ( 0 )?n exp(?n J ).7.28 Given a binomial experiment xn B(n. show that 3. ) is not satisfactory from both theoretical and practical points of view. under quadratic loss. which is a matrix with (i.7.) 3.130 MONTE CARLO INTEGRATION ! n+t?1 1 h(Z ) + X b(Z )h(Z ) = n n+t i i i=1 ?1 3. 10?3 . 10?2 .25) If Sn = 1 with ? 1 = n h(Zn+t ) + nS 1 n n+t?1 i=1 X b(Zi )h(Zi ) ! asymptotically dominates the usual Monte Carlo approximation. that is. k) element @ ij (x)=@xk . Extend to the general case by introducing @ j . consider the distribution ? exp ?( ? )t ?1 ( ? )=2 ( )/ : jj jjp?1 (a) Show that the distribution is well-de ned. 3. determine the minimum sample size n for P 1 3. b( ) is de ned by b(x) = b0 (x) + 2 (x) 0 (x) in di- when = 10?1 .2) is unbiased.4). 3. that Z exp ??( ? )t ?1( ? )=2 IRp jj jjp?1 d < 1: (b) Show that an importance sampling implementation based on the normal instrumental distribution Np ( .31 Show that.29 When the Yi 's are generated from (3.1).30 Show that Zt 1 ( (s) ? (0))d (s) = 2 ( (t)2 ? t) 0 xn n?p p mension = 1. Philippe and Robert 1997) For a p p positive de nite symmetric matrix.7. conditional on the number of rejected variables t. 3.27 (Berger.26 (Continuation of Problem 3. 7 Notes 3.3) are de nitely necessary to get an acceptable approximation. where the second integral is de ned as the limit n X (X (si)) + (X (si?1)) lim0 ( (si) ? (si?1)). if (X (t)) is solution to (3. we showed in Example 3. random variables.7.) If M ( ) = IE exp( X1 )] is the moment generating function of X1 .3) can also be expressed through a Stratonovich integral. (When " is small.d. 
Bucklew (1990) indicates how the theory of large deviations may help in devising proposal distributions in this purpose. 4 3. p.3. (Note: The integral above cannot be de ned in the usual sense because the Wiener process has almost surely unbounded variations.6. and " is large.1 Large deviations techniques When we introduced importance sampling methods in x3.1 that alternatives to direct sampling were preferable where sampling from the tails of a distribution f . (x) = 2 cos2 (x). the large deviation approximation is 1 log P (S 2 F ) ? inf I (x): This result is sometimes called Cramer's theorem. methods such as importance sampling (x3. Very sketchily. If the problem is to evaluate ! n 1 X h(x ) 0 . in the sense that it involves the unknown constant I .3). say p(A) 10?6 . Since the optimal choice given in Theorem 3. The simulation device based on this approximation is called twisted simulation. See Talay 1996. In particular. X (t) = X (0) + Zt 0 b0 (X (s))ds + Zt 0 (X (s)) d (s).i. the normal approximation based on the Central Limit Theorem works well enough.3.3) dX (t) = ?X (t)dt + 2d (t) is a stationary N (0. 3. and if I (x) = sup f x? log M ( )g. ! i=1 2 where 0 = s0 < s1 < : : : < sn = t and maxi (bi ?si?1 ) . 1) process. n goes to in nity. When the event A is particularly rare.6. I=P n n F n i=1 i .) 3. : Sn = X1 + :n: + Xn . the theory of large deviations is concerned with the approximation of tail probabilities P (jSn ? j > ") when Sn is a sum of i.34 Show that. more practical choices have been proposed in the literature.32 Show that the solution to (3.3.7 ] NOTES 131 3.7. Y (t) = atan X (t) is solution of p a SDE with b(x) = 1 sin(4x) ? sin(2x).4 is formal.3.33 Show that the solution to the Ornstein-Uhlenbeck equation p (3.3.56. This description of simulation methods for SDE's borrows heavily from the expository paper of Talay (1996) which presents in much deeper details the use of simulation techniques in this setup.7. 
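The tilting idea behind twisted simulation can be illustrated on a simple rare-event problem (our own example, not Bucklew's): to estimate P(Z > 4.5) for Z ~ N(0, 1), about 3.4 x 10^-6, sample from the shifted proposal N(4.5, 1) and reweight by the likelihood ratio.

```python
import random
import math


def rare_tail_is(t=4.5, n=50000, seed=3):
    # importance sampling with the "twisted" proposal N(t, 1):
    # P(Z > t) = E_g[ 1{Y > t} phi(Y)/phi(Y - t) ],  Y ~ N(t, 1)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = t + rng.gauss(0.0, 1.0)
        if y > t:
            # weight phi(y)/phi(y - t) = exp((y-t)^2/2 - y^2/2)
            total += math.exp((y - t) ** 2 / 2.0 - y * y / 2.0)
    return total / n


est = rare_tail_is()
exact = 0.5 * math.erfc(4.5 / math.sqrt(2.0))
```

Naive simulation would need on the order of 10^8 draws to see even a handful of exceedances, while the shifted proposal makes half of the draws informative, which is exactly the variance reduction that large-deviations theory predicts for exponentially tilted proposals.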
3.7.2 Simulation of stochastic differential equations [6]

Given a differential equation in IR^d,

   dX(t)/dt = b0(X(t)),

where b0 is a function from IR^d to IR^d, it is often of interest to consider the perturbation of this equation by a random noise,

(3.7.3)   dX(t)/dt = b0(X(t)) + sigma(X(t)) zeta(t),

which is called a stochastic differential equation (SDE), sigma being the variance factor, taking values in the space of d x d matrices. The perturbation zeta in (3.7.3) is often chosen to be (the formal derivative of) a Wiener process (W(t)), that is, such that W(t) is a Gaussian vector with mean 0, independent components, and correlation IE[W_i(t) W_j(s)] = delta_ij min(t, s), where delta_ij = II_{i=j}. Applications of SDE's abound in fluid mechanics, random mechanics and particle Physics. The solution of (3.7.3) can also be represented through an Ito integral,

(3.7.4)   X(t) = X(0) + Integral_0^t b(X(s)) ds + Integral_0^t sigma(X(s)) dW(s),

where b is derived from b0 and sigma (see Problems 3.31 and 3.32), and the second integral is defined as the limit

   lim_{max_i (s_i - s_{i-1}) -> 0}  sum_i sigma(X(s_i)) (W(s_i) - W(s_{i-1})),

where 0 = s_0 < s_1 < ... < s_n = t. This limit exists whenever sigma(X) is square-integrable, that is, when

   Integral_0^t IE[ |sigma(X(s))|^2 ] ds < infinity.

See, e.g., Ikeda and Watanabe (1981) for details.

In this setup, given the Wiener process (W(t)), simulations are necessary to produce an approximation of the trajectory of (X(t)), or to evaluate expectations of the form IE[h(X(t))]. It may also be of interest to compute the expectation IE[h(X)] under the stationary distribution of (X(t)), when this process is ergodic, a setting we will encounter again in the MCMC method (see Chapters 4 and 7). A first approximation to the solution of (3.7.4) is based on the discretization

   X(t) ~ X(0) + b(X(0)) t + sigma(X(0)) (W(t) - W(0)).

[6] The material in this section is of a more advanced mathematical level than the remainder of the book and it will not be used in the sequel.

Other perspectives can be found in Doob (1953).
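The discretization above, iterated over small steps (the Euler scheme), can be tried on the Ornstein-Uhlenbeck equation dX(t) = -X(t) dt + sqrt(2) dW(t) of Problem 3.33, whose stationary law is N(0, 1); step size and horizon below are illustrative choices:

```python
import random
import math


def euler_ou(x0=0.0, T=500.0, dt=0.01, seed=4):
    # Euler scheme X(t+dt) = X(t) + b(X(t)) dt + sigma * dW with
    # b(x) = -x, sigma = sqrt(2); dW ~ N(0, dt)
    rng = random.Random(seed)
    x, out = x0, []
    step = math.sqrt(2.0 * dt)
    for _ in range(int(T / dt)):
        x = x - x * dt + step * rng.gauss(0.0, 1.0)
        out.append(x)
    return out


path = euler_ou()
tail = path[len(path) // 2:]  # crude burn-in removal
m1 = sum(tail) / len(tail)
m2 = sum(v * v for v in tail) / len(tail)
```

The time average over the second half of the trajectory recovers the stationary mean 0 and variance 1, up to a discretization bias of order dt; controlling that bias is precisely the subject of the refined schemes discussed by Talay (1996).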
CHAPTER 4

Markov Chains

Version 1.1, February 27, 1998

In this chapter we introduce fundamental notions of Markov chains, (Xn)n in IN, and state the results that are needed to establish the convergence of various Markov chain Monte Carlo algorithms and, more generally, to understand the literature on this topic. It is, unfortunately, necessarily a brief and therefore incomplete introduction to Markov chains, and we refer the reader to Meyn and Tweedie (1993), on which this chapter is based, along with basic notions of probability theory, for a thorough introduction to Markov chains. Other perspectives can be found in Chung (1967), Feller (1970, 1971), and Nummelin (1984), in Revuz (1984) and Resnick (1994) for books entirely dedicated to Markov chains, and in Billingsley (1995) for general treatments. Given the purely utilitarian goal of this chapter, its style and presentation differ from those of other chapters, especially with regard to the plethora of definitions and theorems and to the rarity of examples and proofs. In order to make the book accessible to those who are more interested in the implementation aspects of MCMC algorithms than in their theoretical foundations, we include a preliminary section on the essential facts about Markov chains that are necessary for the next chapters.

Before formally introducing the notion of a Markov chain, note that we do not deal in this chapter with Markovian models in continuous time (also called Markovian processes) since the very nature of simulation leads[1] us to consider only discrete time stochastic processes. Hastings (1970) notes that the use of pseudo-random generators and the representation of numbers in a computer imply that the Markov chains related with Markov chain Monte Carlo methods are, in fact, finite state space Markov chains. However, we also consider arbitrary state space Markov chains to allow for continuous support distributions and to avoid addressing the problem of approximation of these distributions with discrete support distributions, since such an approximation depends on both material and algorithmic specifics of a given technique (see Roberts, Rosenthal and Schwartz, for a study of the influence of discretization on the convergence of Markov chains associated with Markov chain Monte Carlo algorithms).

[1] Some Markov chain Monte Carlo algorithms still employ a diffusion representation to speed up convergence to the stationary distribution (see for instance x??, Roberts and Tweedie 1995 or Phillips and Smith 1996).
The stationary probability is also a limiting distribution in the sense that the limiting distribution of Xn+1 is under the total variation norm (see Proposition 4. Xn+1 .1. the theory of Markov chains is developed from rst principles. a distribution such that.6.) This latter point is quite important in the context of MCMC algorithms. we are in e ect starting the algorithm from a set of measure zero (under a continuous dominating measure). : : : (see Example 4. a most interesting consequence of this convergence property is that the average N 1 X h(X ) (4. Stronger forms of convergence are also encountered in MCMC settings. if the kernel K allows for free moves all over the state space (this freedom is called irreducibility in the theory of Markov chains and is formalized in De nition 4. or even Harris recurrent. That is. Since most algorithms are started from some arbitrary point x0. which ensures that the chain has the same limiting properties for every starting value instead of almost every starting value. For those familiar with the properties of Markov chains.1 1995. this appears as an equivalent for Markov chains of the notion of continuity for functions.6.8).2.4).5. This property also ensures that most of the chains involved in MCMC algorithms are recurrent.4. In the setup of MCMC algorithms. if Xn .3. insuring that the chain converges for almost every starting point is not enough. Xn+1 ).1 Essentials for MCMC . as Xn+1 K(Xn . namely a stationary probability distribution exists by construction (De nition 4. Chap. Example 4. K( . the transition kernel simply is a (transition) matrix K de ned by Pxy = P(Xn = yjXn?1 = x) . De nition 4. x)dx. which is independent of Xm .3.4. M g and (Xn ) such that Xn represents the state. since. of a tank which contains exactly M particles and is connected to another identical tank. and in fact is mathematically somewhat cleaner. that is when the transition kernel is symmetric. 
Px(x+1) = MM x Pxx = 2 M 2 x(x?1) and P01 = PM (M ?1) = 1. and the moves are restricted to a single exchange of particles between both tanks at each instant. It therefore seems natural. see Feller. In Chapter 8. to de ne the chain in terms of its transition kernel. a Central Limit theorem also holds for this average. K(x. at time n. the function that determines these transitions. y 2 X : In the continuous case. (ii) 8A 2 B(X ). The set C is then called a small set (De nition 4. diagnoses will be based on a minorization condition. Two types of particles are introduced in the system. ) is a probability measure. If Xn denotes the number of particles of the rst kind in the rst tank at time n. y < M) Pxy = 0 if jx ? yj > 1 . 1970.1).7) and visits of the chain to this set can be exploited to create independent batches in the sum (4. P = M . with probability of a transition depending on the particular set that the chain is in. x. P(X 2 Ajx0) = A K(x0 . with probability m the next value of the Markov chain is generated from the minorizing measure m .2 { Bernoulli-Laplace Model{ Consider X = f0.1. the transition matrix is given by (for 0 < x.2 Basic notions . 1. m > 0. ). XV) k 4. A Markov chain is a sequence of random variables that can be thought of as evolving over time. (This model is the Bernoulli{Laplace model. A) is measurable. ? 2 x 2 x(M ? x) . That is. that is the existence of a set C such that there also exists m 2 IN.2.2 ] BASIC NOTIONS 137 reversible.2. . the kernel also denotes the conditional density R K(x. and there are M of each type. x0) of the transition K(x. When X is discrete. and a probability measure m such that P(Xm 2 AjX0) m m (A) when X0 2 C.1 A transition kernel is a function K de ned on X B(X ) such that (i) 8x 2 X .4. .3) is often implemented in a time-heterogeneous form and studied in time-homogeneous form. K ) on and the new value of the chain is given by Xn+1 = Y with probability expf(E(Y ) ? E(Xn ))=T g ^ 1.3) K(xk . the distribution of X0 . 
of random variables is a Markov chain. dx) 0 0 The chain is time-homogeneous if the distribution of (Xt .2. Similarly. Xn n = K n . the initial state of the chain. .3 Given a transition kernel K. if denotes the initial distribution of the chain. So in the case of a Markov chain. In the discrete case. : : :. K g.4 {Simulated Annealing{ The simulated annealing algorithm (see x5. given an initial distribution = (!1 . (See also the case of the ARMS algorithm in x6. = f1. for any t. the construction of the Markov chain (Xn ) is entirely determined by its transition. namely if (4. x2. x0 is the same as the distribution of Xt given xt?1.2. the simulated annealing Markov chain X0 . . Xn. in particular for equal to the Dirac mass x . !2.2) X0 .2. xk ) = P(Xk+1 2 Ajxk ) Z = (4. De nition 4. X1. Xtk ) given xt is the same as the distribution of (Xt ?t . . plays an important role. X1. x1. an energy function E and a temperature T.138 MARKOV CHAINS 4. That is. by repeated multiplication.3. 2. the conditional distribution of Xt given xt?1.2 The chain (Xn ) is usually de ned for n 2 IN rather than for n 2 Z Z. Xn otherwise.2. .2. When X0 is xed.3) Example 4. The study of Markov chains is almost always restricted to the timehomogeneous case and we omit this designation in the following. namely by the distribution of Xn conditionally on xn?1. Xt ?t . : : :). P(Xk+1 2 Ajx0. we use the alternative notation Px . denoted by (Xn ). xt?2. It is.2). Xtk ?t ) given x0 for every k and every (k + 1)-uplet t0 t1 tk . Therefore. Given a nite state space with size K. if the initial distribution or the initial state is known. important to note here that an incorrect implementation of Markov Chain Monte Carlo algorithms can easily produce time-heterogeneous Markov chains for which the standard convergence properties do not apply. Y is generated from a xed probability distribution ( 1 .1) 1= K and. the marginal probability distribution of X1 is then (4. . in the continuous case. a sequence X0.2. 
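The simulated annealing kernel of Example 4.2.4 can be sketched in a few lines. The uniform proposal and the geometric cooling schedule below are our own illustrative choices; following the acceptance rule quoted in the text, exp{(E(Y) - E(Xn))/T} ^ 1, this version of the chain seeks large values of E:

```python
import random
import math


def anneal(E, states, T0=10.0, rho=0.999, n_iter=5000, seed=5):
    # propose Y from a fixed distribution (uniform here), accept with
    # probability exp{(E(Y) - E(X))/T} ^ 1, then lower the temperature
    rng = random.Random(seed)
    x, T = rng.choice(states), T0
    for _ in range(n_iter):
        y = rng.choice(states)
        if rng.random() < math.exp(min(0.0, (E(y) - E(x)) / T)):
            x = y
        T *= rho  # geometric cooling: T depends on n
    return x


# toy energy with a single maximum at k = 17 (illustration only)
best = anneal(lambda k: -(k - 17) ** 2, list(range(101)))
```

Because T changes with n, this is exactly the time-heterogeneous situation mentioned above: the standard time-homogeneous convergence properties do not apply to it directly.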
::: is represented by the following transition operator: Conditionally on Xn . 1 0 1 0 2 0 0 A .4. then we let P denote the probability distribution of (Xn ) under condition (4.2. if. however. dyn?1) : In particular. A 2 B(X ). the kernel for n transitions is given by (n > 1) Z n (x. dy): X The following result provides convolution formulas of the type K m+n = K m ? K n .4. Nonetheless. The Markovian properties of an AR(q) process can be derived by considering the vector (Xn . A) = K K n?1(y. A2 ) K(x.) If we denote K 1(x. A) K m (x. Lemma 4. we will see that the objects of interest are often these conditional distributions. ARMA(p.2 ] BASIC NOTIONS 139 If the temperature T depends on n. dy) : . n) 2 IN2.6 Chapman-Kolmogorov equations For every (m. k Example 4. and if the "n 's are independent. 2 IR. and it is important that we need not worry about di erent versions. q) models do not t in the Markovian framework (see Problem 4. x 2 X . the fact that the kernel K determines the properties of the chain (Xn ) can be deduced from the relations Px(X1 2 A1 ) = K(x. . : : : conditionally on Xn?1. An) K(x. On the contrary. X n ) 2 A1 ZA 1 Z K m+n (x. you must pass through some X K n (y. An ) = A1 An?1 . A). A) = K(x. A) K(x. Z K(y1 . However.2. which are called Chapman{Kolmogorov equations. A1) indicates that K(xn. A1 ) . dependent from Xn?2 . dy1 ) K(yn?2 . If Xn = Xn?1 + "n . dxn+1) is a version of the conditional distribution of Xn+1 given Xn . Xn?3. Px ((X1 . 2). This is why we noted that constructing the Markov chain through the transition kernel was mathematically \cleaner". k In the general case.2). Xn?q+1 ). A) = Z (In a very informal sense. X2 ) 2 A1 A2 ) = K(yn?1 . as we have de ned a Markov chain by rst specifying this kernel. the relation Px (X1 2 A1 ) = K(x. (Moreover.5 {AR(1) Models{ AR(1) models provide a simple illustration of Markov chains on continuous state space.4. in the following chapters. the chain is time-heterogeneous. Xn is indeed inwith "n N(0. 
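The AR(1) model Xn = theta Xn-1 + eps_n is worth simulating once: for |theta| < 1 the chain forgets its starting point and its marginal variance settles at sigma^2/(1 - theta^2), the variance of the stationary normal law. A minimal sketch, with theta = 0.8 as an arbitrary choice:

```python
import random


def ar1_path(theta=0.8, sigma=1.0, n=200000, seed=6):
    # X_n = theta * X_{n-1} + eps_n,  eps_n ~ N(0, sigma^2) independent;
    # X_n depends on the past only through X_{n-1}: a Markov chain
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = theta * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out


xs = ar1_path()
var_hat = sum(v * v for v in xs) / len(xs)
target = 1.0 / (1.0 - 0.8 ** 2)  # stationary variance, about 2.778
```

The empirical second moment along the path matches the stationary variance, an instance of the ergodic averaging property (4.1.1).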
the properties of a Markov chain considered in this chapter are independent of the version of the conditional probability chosen. we do not need to be concerned with di erent versions of the conditional probabilities. dy1) Px ((X1 .2. the Chapman-Kolmogorov equations are stating that to get from x to A in m + n steps. x2. a function (x1 . namely K n = K K n?1. )jx0. Z Kh(x) = h(y) K(x.140 MARKOV CHAINS 4.2. (4. This will be used later to establish many properties of the original chain.2.2. In the general case. the (weak) Markov property can be written as the following result.6) A= 1 X t=1 I A (Xt ). A). being the dominating measure of the model. xn] = IExn h(X1 .2. Xn). and we will see that the resulting Markov chain (Xn ) enjoys much stronger regularity. the number of passages of (Xn ) in A.2. by convention.2.7 A resolvant associated with the kernel P is a kernel of the form K" (x. and follows directly from (4.1).8 Weak Markov property For every initial distribution and every (n + 1) sample (X0 . . Xn 2 Ag.2. De nition 4.6 is simply interpreted as a matrix product. (4. Xn+2 . 0 < < 1. . A = +1 if xn 62 A for every n. i. K n is then the n-th composition of P. Associated with the set A. More generally. and is called the stopping time at A with. in the convergence control of Markov Chain Monte Carlo algorithms in Chapter 8. provided that the expectations exist. )]. Note that if h is the indicator function then this de nition is exactly the same as 4. Given an initial distribution . which just rephrases the limited memory properties of a Markov chain: Proposition 4.4) can be generalized to other classes of functions|hence the terminology \weak"| and it becomes particularly useful with the notion of stopping time. Xn). The rst n for which the chain enters the set A is denoted by (4. If IE ] denotes the expectation associated with the distribution P . . : : :. X2 .e. De nition 4.2. 
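In the discrete case the Chapman-Kolmogorov equations reduce to matrix identities, K^(m+n) = K^m K^n, which a toy 3-state kernel (our choice) makes concrete:

```python
def mat_mul(A, B):
    # plain matrix product for row-stochastic matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]


def mat_pow(K, n):
    # n-step transition matrix K^n (K^0 = identity)
    R = [[float(i == j) for j in range(len(K))] for i in range(len(K))]
    for _ in range(n):
        R = mat_mul(R, K)
    return R


K = [[0.9, 0.1, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.4, 0.6]]
lhs = mat_pow(K, 5)                           # K^(2+3)
rhs = mat_mul(mat_pow(K, 2), mat_pow(K, 3))   # pass through y at step 2
```

The two computations agree entry by entry, and every K^n remains a transition matrix (rows summing to one), the discrete counterpart of K^n(x, .) being a probability measure.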
we can associate with the kernel K" a chain fXng which formally corresponds to a sub-chain of the original chain (Xn ).9 Consider A 2 B(X ). Lemma 4.4. A) = (1 ? ") 1 X i=0 "i K i (x. However. and the chain with kernel K" is said to be a K" -chain. we need to consider K as an operator on the space of integrable functions.2 y on the nth step.5) A = inf fn 1. where the indices in the subchain are generated from a geometric distribution with parameter 1 ? ". Thus K is indeed a kernel. dy) .2. : : :) is called a stopping rule if the set f = ng is measurable for the -algebra induced by (X0 .3.2.) In the discrete case. we also de ne (4.4) IE h(Xn+1. h 2 L1( ) . 12 (see Note 4. the random walk is recurrent.y)V (y)dy (b) Establish Theorem 4.3.3). (Xn ) is recurrent. 4g and A3 = f5g.1 Suppose that the stationary Markov chain (Xn ) is geometriR cally ergodic with M = jM (x)jf (x)dx < 1. for an aperiodic irreducible Markov chain with nite state space and with transition matrix IP. 4. and establishing that V (x ) M 1 ? Px ( C < 1)].3). pnX = tends in law to N (0. such that V is bounded on C and satis es (4.nite invariant measure.9.58 Show thatxthe random walk on ZZ is transient when IE Wn ] 6= 0.9. 4. (Hint: Use V (x) = log(1 + x) for x > R and V (x) = 0.8.9 is lumpable for A1 = f1.9. the corresponding chain is Harris positive.7. choosing M such that M V (x )= 1 ? Px ( C < 1)]. there always exists a stationary probability distribution which satis es = IP: (e) Show that. 4. with IE n ] = . (Hint: Use Theorem 4. for an adequate bound R.12. otherwise. (Hint: Consider an alternative V to V and show by recurrence that V (x) Z : : : V (x) : C K (x. 2 ).) (g) Show that.9.11) are either both recurrent or both transient. A2 = f3. if an irreducible Markov chain has a .9.60 (Chan and Geyer 1994) Prove that the following Central Limit Theorem can be considered a corollary to Theorem 4.57 Show that.) (d) (Kemeny and Snell 1960) Show that. and satis es the moment conditions of Theorem 4. 
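In the discrete case, the Chapman-Kolmogorov relation K^{m+n} = K^m * K^n is simply a matrix identity, which can be checked numerically. A minimal sketch (the 3-state transition matrix below is an arbitrary illustrative choice, not one from the text):

```python
def mat_mul(a, b):
    """Compose two transition kernels (row-stochastic matrices)."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(k, n):
    """n-step kernel K^n obtained by repeated composition with K."""
    out = k
    for _ in range(n - 1):
        out = mat_mul(out, k)
    return out

# Hypothetical 3-state transition matrix (each row sums to one).
K = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]

m, n = 3, 4
lhs = n_step(K, m + n)                      # K^{m+n}
rhs = mat_mul(n_step(K, m), n_step(K, n))   # K^m composed with K^n
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

The same composition also shows that K^n is again a kernel: each row of the product still sums to one.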
(a) Establish the lemma of Note 4.9.1 below. (Hint: Consider an alternative V~ to V and show by recurrence that

    V~(x) \geq \int_C K(x, y) V(y) dy + \int_{C^c} K(x, y) V~(y) dy .)

(b) Establish Theorem 4.9.3 by assuming that there exists x* such that P_{x*}(\tau_C < \infty) < 1, choosing M such that M \geq V(x*) / [1 - P_{x*}(\tau_C < \infty)], and establishing that V(x*) \leq M [1 - P_{x*}(\tau_C < \infty)].

4.56 Consider the random walk on IR+, X_{n+1} = (X_n + \xi_n)^+, with IE[\xi_n] = \beta.
(a) Show that, if \beta < 0, (X_n) is recurrent. (Hint: Use the drift function V(x) = x.)
(b) Show that, if \beta > 0, the random walk is transient. (Hint: Use V(x) = 1 - \varrho^x for x > 0 and 0 otherwise.)
(c) Show that, if \beta = 0 and var(\xi_n) < \infty, (X_n) is recurrent. (Hint: Use V(x) = log(1 + x) for x > R and V(x) = 0 otherwise, for an adequate bound R.)

4.57 Consider an aperiodic irreducible Markov chain with finite state space and transition matrix IP.
(d) (Kemeny and Snell 1960) Show that there always exists a stationary probability distribution \pi which satisfies \pi = \pi IP.
(e) Show that the corresponding chain is lumpable for A_1 = {1, 2}, A_2 = {3, 4} and A_3 = {5}.
(g) Show that, if an irreducible Markov chain has a \sigma-finite invariant measure, this measure is unique up to a multiplicative factor.

4.58 Show that the random walk on ZZ is transient when IE[W_n] \neq 0; otherwise, the random walk is recurrent.

4.59 Show that the chains defined by the kernels (4.9.9) and (4.9.11) are either both recurrent or both transient.

4.60 (Chan and Geyer 1994) Prove that the following Central Limit Theorem can be considered a corollary to Theorem 4.9.12 (see Note 4.9.3):

    Suppose that the stationary Markov chain (X_n) is geometrically ergodic with M~ = \int |M(x)| f(x) dx < \infty and satisfies the moment conditions of Theorem 4.9.12. Then \sigma^2 = lim_{n \to \infty} n var X~_n < \infty and, if \sigma^2 > 0, \sqrt{n} X~_n tends in law to N(0, \sigma^2).

(Hint: Integrate, with respect to f, both sides of Definition 4.9.11 and apply Theorem 4.9.8 to conclude that the chain is exponentially fast \beta-mixing.)

4.9 Notes

4.9.1 Drift conditions

Besides atoms and small sets, Meyn and Tweedie (1993) rely on another tool to check or establish various stability results, namely the drift criteria, which can be traced back to Lyapunov. Given a function V on X, the drift of V is defined by

    \Delta V(x) = \int V(y) P(x, dy) - V(x) .

This notion is also used in the following chapters to verify the convergence properties of some MCMC algorithms (see, e.g., Mengersen and Tweedie 1996).

The following lemma is instrumental in deriving drift conditions for the transience or the recurrence of a chain (X_n). When \sigma_C denotes \sigma_C = inf{ n \geq 0, x_n \in C }, the smallest positive function which satisfies the conditions

(4.9.1)    \Delta V(x) \leq 0   if x \notin C ,    V(x) \geq 1   if x \in C ,

is given by V~(x) = P_x(\sigma_C < \infty). Note that, if x \notin C, \sigma_C = \tau_C.

Theorem 4.9.3 Consider (X_n) a \psi-irreducible Markov chain. If there exist a small set C and a positive function V such that C_V(n) = {x, V(x) \leq n} is a small set for every n, the chain is recurrent if

    \Delta V(x) \leq 0   on C^c .

In fact, if there exist a finite potential function V and a small set C such that V is bounded on C and satisfies (4.9.1), the corresponding chain is Harris positive.

We then have the following necessary and sufficient condition:

Theorem 4.9.2 The \psi-irreducible chain (X_n) is transient if, and only if, there exist a bounded positive function V and a real number r \geq 0 such that, for every x for which V(x) > r,

(4.9.2)    \Delta V(x) > 0 .

Proof. When C denotes C = {x, V(x) \leq r} and M is a bound on V, the conditions (4.9.1) are satisfied by

    V~(x) = (M - V(x)) / (M - r)   if x \in C^c ,    V~(x) = 1   if x \in C .

Since V~(x) < 1 for x \in C^c, the minimality in the lemma above yields P_x(\sigma_C < \infty) \leq V~(x) < 1 on C^c, and this implies the transience of C, and therefore the transience of (X_n) (see Meyn and Tweedie 1993, p. 190). The converse can be deduced from a (partial) converse to Theorem 4.9.3.

Condition (4.9.2) describes an average increase of V(X_n) once a certain level has been attained, and therefore does not allow a sure return of V to 0. The condition is thus incompatible with the stability associated with recurrence. On the other hand, if there exists a potential function V "attracted" to 0, in the sense of (4.9.1), the chain is recurrent.
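The dichotomy of Problem 4.56, recurrence under negative drift and transience under positive drift, is easy to observe by simulation. A sketch, where the Gaussian increment distribution is an illustrative choice only:

```python
import random

def reflected_walk(drift, n_steps, seed=0):
    """Simulate X_{n+1} = max(X_n + xi_n, 0), with xi_n ~ N(drift, 1)."""
    rng = random.Random(seed)
    x, visits_to_zero = 0.0, 0
    for _ in range(n_steps):
        x = max(x + rng.gauss(drift, 1.0), 0.0)
        if x == 0.0:
            visits_to_zero += 1
    return x, visits_to_zero

_, zeros_neg = reflected_walk(-0.5, 20_000)      # beta < 0: keeps returning to 0
x_pos, zeros_pos = reflected_walk(+0.5, 20_000)  # beta > 0: drifts off to infinity
assert zeros_neg > zeros_pos
assert x_pos > 100.0
```

With negative drift the walk hits the boundary again and again, while with positive drift the final state has wandered far from 0, in line with the drift criteria above.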
Theorem 4.9.8 If the ergodic chain (X_n) with invariant distribution f satisfies the drift conditions above then, for every function g dominated as in the theorem (with g^2 bounded by the drift function),

    \sigma_g^2 = lim_{n \to \infty} n IE[ S_n^2(g) ] = IE[ g^2(x_0) ] + 2 \sum_{k=1}^\infty IE[ g(x_0) g(x_k) ] ,

where S_n(g) denotes the (centered) empirical average of g along the chain. If \sigma_g^2 > 0, the Central Limit Theorem holds for \sqrt{n} S_n(g); if \sigma_g^2 = 0, \sqrt{n} S_n(g) almost surely goes to 0. This theorem is definitely relevant for the convergence control of Markov Chain Monte Carlo algorithms since, independently of the estimated functions g, it is possible to assess the convergence of the ergodic averages S_n(g) to the quantity of interest IE[g]. Theorem 4.9.8 also suggests how to implement this control through renewal theory, as shown in Chapter 8.

4.9.2 Eaton's Admissibility Condition

Eaton (1992) exhibits interesting connections, similar to Brown (1971), between the admissibility of an estimator and the recurrence of an associated Markov chain. The problem considered by Eaton (1992) is to determine whether, for x ~ f(x|\theta), a generalized Bayes estimator associated with a prior measure \pi is admissible under quadratic loss. Assuming that the posterior distribution \pi(\theta|x) is well defined, he introduces the transition kernel

(4.9.9)    K(\theta, \eta) = \int_X \pi(\eta|x) f(x|\theta) dx ,

which is associated with a Markov chain (\theta^{(n)}) generated as follows: the transition from \theta^{(n)} to \theta^{(n+1)} is done by generating first x ~ f(x|\theta^{(n)}) and then \theta^{(n+1)} ~ \pi(\theta|x). (Most interestingly, this is also a kernel used by Markov Chain Monte Carlo methods, as discussed in detail in Chapter 8.) Note that the prior measure \pi is an invariant measure for the chain (\theta^{(n)}).

For every measurable set C such that \pi(C) < +\infty, consider

    V(C) = { h \in L^2(\pi) ,  h(\theta) \geq 0  and  h(\theta) \geq 1 when \theta \in C }

and

    \Delta(h) = \iint { h(\theta) - h(\eta) }^2 K(\theta, \eta) \pi(\theta) d\theta d\eta .

The following result then characterizes admissibility for all bounded functions in terms of \Delta and V(C).

Theorem 4.9.9 If, for every set C such that \pi(C) < +\infty,

(4.9.10)    inf_{h \in V(C)} \Delta(h) = 0 ,

then the Bayes estimator IE[g(\theta)|x] is admissible under quadratic loss for every bounded function g.

Note that (4.9.10) always holds when \pi is a proper prior distribution, since h \equiv 1 belongs to L^2(\pi) and \Delta(1) = 0 in this case. This result is obviously quite general but only mildly helpful, in the sense that the practical verification of (4.9.10) for every set C can be overwhelming. Eaton (1992) exhibits a connection with the Markov chain (\theta^{(n)}) which gives a condition equivalent to Theorem 4.9.9, since (4.9.10) considers approximations of 1 by functions in V(C). For a given set C,

    inf_{h \in V(C)} \Delta(h) = \int_C [ 1 - P( \tau_C < +\infty | \theta^{(0)} = \theta ) ] \pi(\theta) d\theta ,

where the stopping rule \tau_C is defined as the first integer n > 0 such that (\theta^{(n)}) belongs to C (and +\infty otherwise).

Theorem 4.9.10 For every set C such that \pi(C) < +\infty, the generalized Bayes estimators of bounded functions of \theta are admissible if, and only if, the associated Markov chain (\theta^{(n)}) is recurrent.

Note, however, that the verification of the recurrence of the Markov chain (\theta^{(n)}) is much easier to operate than the determination of the lower bound of \Delta(h). Hobert and Robert (1997) consider the potential use of the dual chain based on the kernel

(4.9.11)    K'(x, y) = \int f(y|\theta) \pi(\theta|x) d\theta

(see Problem 4.59) and derive admissibility results for various distributions of interest. We refer to Eaton (1992) for extensions, examples, and comments on this result.

4.9.3 Mixing Conditions and Central Limit Theorems

In x4.7.2 we established a Central Limit Theorem using regeneration, which allowed us to use a typical independence argument. Other conditions, known as mixing conditions, can also result in a Central Limit Theorem. Unfortunately, these conditions are usually quite difficult to verify. Consider the property of \alpha-mixing (Billingsley 1995, Section 27).

Definition 4.9.11 A sequence X_0, X_1, X_2, ... is \alpha-mixing if

(4.9.12)    \alpha_n = sup_{A,B} | P(X_n \in A, X_0 \in B) - P(X_n \in A) P(X_0 \in B) |

goes to 0 when n goes to infinity.

So we see that an \alpha-mixing sequence will tend to "look independent" if the variables are far enough apart. This is in fact the case for the chains studied here, as every positive recurrent aperiodic Markov chain is \alpha-mixing (Rosenblatt 1971, Section VII.3). As a result, the covariances go to zero (Billingsley 1995, Section 27). However, for a Central Limit Theorem we need even more. Not only must the Markov chain be \alpha-mixing, but we need the dependence to go away fast enough; that is, we need the coefficient \alpha_n to go to 0 fast enough. These mixing conditions guarantee that the dependence in the Markov chain decreases fast enough, and that variables far enough apart are close to being independent. One version of a Markov chain Central Limit Theorem is the following (Billingsley 1995, Section 27):

Theorem 4.9.12 Suppose that the Markov chain (X_n) is stationary and \alpha-mixing with \alpha_n = O(n^{-5}), and that IE[X_n] = 0 and IE[X_n^{12}] < \infty. Then \sigma^2 = lim_{n \to \infty} n var X~_n < \infty and, if \sigma^2 > 0, \sqrt{n} X~_n tends in law to N(0, \sigma^2).
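The limiting variance \sigma_g^2 = IE[g^2(x_0)] + 2 \sum_k IE[g(x_0) g(x_k)] appearing in these Central Limit Theorems can be estimated from a single chain path by truncating the autocovariance series. A sketch for a stationary AR(1) chain with g(x) = x, an illustrative choice for which the limit is known in closed form:

```python
import random

def ar1_chain(rho, n, seed=1):
    """Stationary AR(1): X_t = rho * X_{t-1} + N(0,1) noise, after burn-in."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n + 1000):
        x = rho * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out[1000:]               # discard burn-in draws

def autocov(xs, k, mean):
    """Empirical lag-k autocovariance around a fixed mean."""
    return sum((xs[i] - mean) * (xs[i + k] - mean)
               for i in range(len(xs) - k)) / len(xs)

rho = 0.5
xs = ar1_chain(rho, 50_000)
m = sum(xs) / len(xs)
# sigma^2 = gamma_0 + 2 * sum_k gamma_k, series truncated at lag 30
sigma2_hat = autocov(xs, 0, m) + 2 * sum(autocov(xs, k, m) for k in range(1, 31))
sigma2_true = (1 + rho) / ((1 - rho) * (1 - rho ** 2))   # equals 4 for rho = 0.5
assert abs(sigma2_hat - sigma2_true) < 1.5
```

The estimate is noticeably larger than the marginal variance 1/(1 - rho^2): the correlation of the chain inflates the variance of ergodic averages, which is exactly what these theorems quantify.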
CHAPTER 5

Monte Carlo Optimization

5.1 Introduction

Similar to the problem of integration, the optimization problem

(5.1.1)    max_{\theta \in \Theta} h(\theta)

can be approached either by deterministic numerical methods or by simulation. (Note that (5.1.1) also covers minimization problems, by considering -h.) The differences between the numerical approach and the simulation approach to the problem (5.1.1) lie in the treatment of the function^1 h. In approaching a minimization problem using deterministic numerical methods, which enjoy a longer history than simulation methods (see, for instance, Kennedy and Gentle 1980, Ciarlet and Thomas 1982, Sakarovitch 1984, or Thisted 1988), the analytical properties of the target function (convexity, boundedness, smoothness) are often paramount. For the simulation approach, we are more concerned with h from a probabilistic (rather than analytical) point of view. Obviously, this dichotomy is somewhat artificial, as there exist simulation approaches where the probabilistic interpretation of h is not used. Nonetheless, the use of the analytical properties of h plays a lesser role in the simulation approach. By comparison with numerical methods, the appeal of simulation can be found in the lack of constraints on both the regularity of the domain \Theta and on the function h, and simulation has the advantage of bypassing the preliminary steps of devising an algorithm and studying whether some regularity conditions on h hold. On the other hand, there may exist an alternative numerical approach which provides an exact solution to (5.1.1), a property rarely achieved by a stochastic algorithm.

^1 Although we use \theta as the running parameter, this setup applies to many other inferential problems than just likelihood or posterior maximization: complex loss functions, but also confidence regions, require optimization procedures. In these settings, h typically corresponds to a possibly penalized transform of the likelihood function.
Example 5.1.1 (Signal processing) O Ruanaidh and Fitzgerald (1996) study signal processing data, of which a simple model is

    x_i = \alpha cos(\omega t_i) + \beta sin(\omega t_i) + \epsilon_i ,    i = 1, ..., N ,

with unknown parameters \alpha, \beta, \omega, \sigma, observation times t_1, ..., t_N, and \epsilon_i ~ N(0, \sigma^2). The likelihood function is then of the form

    \sigma^{-N} exp{ - (x - G\theta)^t (x - G\theta) / 2\sigma^2 } ,

with x = (x_1, ..., x_N), \theta = (\alpha, \beta)^t and

    G = [ cos(\omega t_1)  sin(\omega t_1) ; ... ; cos(\omega t_N)  sin(\omega t_N) ] .

The prior \pi(\alpha, \beta, \sigma) \propto \sigma^{-1} then leads to the marginal distribution

(5.1.2)    \pi(\omega|x) \propto [ x^t x - x^t G (G^t G)^{-1} G^t x ]^{(2-N)/2} (det G^t G)^{-1/2} ,

which, although explicit in \omega, is not particularly simple to compute, as shown by O Ruanaidh and Fitzgerald (1996). The solution of (5.1.1) is then the mode of the marginal distribution of \omega, and this setup is also illustrative of functions with many modes.

Following Geyer (1996), we want to consider two approaches to Monte Carlo optimization. The first is an exploratory approach, in which the goal is to optimize the function h by describing its entire range. The actual properties of the function play a lesser role here, even though the slope of h can be used to speed up the exploration. The second approach is based on a probabilistic approximation of the objective function h and is rather a preliminary step to the optimization per se: it is less concerned with exploring \Theta, and the Monte Carlo aspect exploits the probabilistic properties of the function h to come up with an acceptable approximation. We will see that this approach can be tied to missing data methods; methods like the EM or the Robbins-Monro algorithms take advantage of the Monte Carlo approximation to enhance their particular optimization technique. We note also that Geyer (1996) only considers the second approach to be "Monte Carlo optimization". In fact, even though we are considering these two different approaches separately, they might be combined in a given problem.

5.2 Stochastic Exploration

We now look at several methods to find maxima that can be classified as exploratory methods. There are a number of cases where the exploration method is particularly well-suited.

5.2.1 A basic solution

A first approach to the resolution of (5.1.1) is to simulate from a uniform distribution on \Theta, if \Theta is bounded, u_1, ..., u_m ~ U_\Theta, and to use the approximation

    h_m^* = max( h(u_1), ..., h(u_m) ) .

This method is convergent (as m goes to \infty), but it may be very slow since it does not take into account any specific features of h. Distributions other than the uniform which are related with h may then do better. In particular, if h is positive and if

    \int_\Theta h(\theta) d\theta < +\infty ,

the resolution of (5.1.1) amounts to finding the modes of the density proportional to h. More generally, the function h can be transformed into a positive and integrable function H in such a way that the solutions to (5.1.1) are those which maximize H on \Theta. For example, we can take H(\theta) = exp(h(\theta)/T) or H(\theta) = exp{h(\theta)/T} / (1 + exp{h(\theta)/T}) and choose T to accelerate convergence or to avoid local maxima (as in simulated annealing; see x5.2.3). When the problem is expressed in statistical terms, it becomes natural to generate a sample (\theta_1, ..., \theta_m) from h (or H), even though this is not a standard distribution, via Markov Chain Monte Carlo techniques, and to apply a standard mode estimation method (or to simply compare the h(\theta_i)'s). (Such a technique can be useful in describing functions with multiple modes.) The appeal of simulation is even clearer in the case when h(\theta) = IE[H(x, \theta)], since the simulation of the sample (\theta_1, ..., \theta_m) can be much faster than a numerical method applied to (5.1.1). This is particularly true when the function h is very costly to compute; in such setups, for instance when the likelihood function is extremely costly to compute, the number of evaluations should be kept to a minimum. (This setting may sound very contrived or even artificial, but we will see in x5.3.1 that it includes the case of missing data models.) Exploration may be particularly difficult when the space \Theta is not convex (or perhaps not even connected).

Example 5.2.1 (Minimization) Consider the function on IR^2

    h(x, y) = (x sin(20y) + y sin(20x))^2 cosh(sin(10x) x)
              + (x cos(10y) - y sin(10x))^2 cosh(cos(20y) y)

to be minimized. Since this function has many local minima, as shown by Figure 5.2.1, it does not satisfy the conditions under which standard minimization methods are guaranteed to provide the local minimum. The distribution on IR^2 with density proportional to exp(-h(x, y)) can be simulated, and a convergent approximation of the minimum of h(x, y) can be derived from the minimum of the resulting h(x_i, y_i)'s. In some cases, it may be more useful to decompose h into h(\theta) = h_1(\theta) h_2(\theta) and to simulate from h_1; an alternative here is to simulate from the density proportional to

    H(x, y) = exp{ -(x sin(20y) + y sin(20x))^2 - (x cos(10y) - y sin(10x))^2 } ,

which eliminates the computation of both cosh and sinh in the simulation step.

5.2.2 Gradient Methods

As mentioned in x1.4, in numerical analysis, the gradient method is a deterministic approach to the problem (5.1.1). It produces a sequence (\theta_j) converging to the exact solution of (5.1.1),

(5.2.1)    \theta_{j+1} = \theta_j + \gamma_j \nabla h(\theta_j) ,    \gamma_j > 0 ,

when the domain \Theta \subset IR^d and the function (-h) are convex; for various choices of the sequence (\gamma_j) (see Ciarlet and Thomas 1982), the algorithm converges to the (unique) maximum.
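The basic uniform-sampling strategy of x5.2.1 can be applied directly to the function of Example 5.2.1 (here minimizing, so we keep the smallest value seen). A minimal sketch:

```python
import math
import random

def h(x, y):
    """Multimodal function of Example 5.2.1, to be minimised on [-1,1]^2."""
    return ((x * math.sin(20 * y) + y * math.sin(20 * x)) ** 2
            * math.cosh(math.sin(10 * x) * x)
            + (x * math.cos(10 * y) - y * math.sin(10 * x)) ** 2
            * math.cosh(math.cos(20 * y) * y))

rng = random.Random(0)
points = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(50_000)]
best = min(points, key=lambda p: h(*p))   # keep the best value observed
assert h(*best) < 1e-2                    # the global minimum value is 0
```

Even this blind search gets close to the minimum here, but only because the domain is small and h is cheap; the text's point stands that the method ignores every specific feature of h.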
In more general setups, that is, when the function or the space is less regular, (5.2.1) can be modified by stochastic perturbations to again achieve convergence properties, as described in detail in Rubinstein (1981) or Duflo (1996, pp. 61-63). One of these stochastic modifications is to choose a second sequence (\beta_j) to define the chain (\theta_j) by

(5.2.2)    \theta_{j+1} = \theta_j + (\gamma_j / 2\beta_j) \Delta h(\theta_j, \beta_j \zeta_j) \zeta_j ,

where \Delta h(x, y) = h(x + y) - h(x - y) approximates 2 ||y|| \nabla h(x), and \zeta_j is uniformly distributed on the unit sphere ||\zeta|| = 1. Contrary to the deterministic approach, this method does not necessarily proceed along the steepest slope in \theta_j, but this property is a plus, in the sense that it may avoid traps in local maxima or in saddlepoints of h. The convergence of (\theta_j) to the solution of (5.1.1) again depends on the choice of (\gamma_j) and (\beta_j). However, sufficiently strong conditions, such as the decrease of \gamma_j towards 0 and of \gamma_j/\beta_j to a non-zero constant, are enough to guarantee the convergence of the sequence (\theta_j). Note at this stage that (\theta_j) can be seen as a non-homogeneous Markov chain which almost surely converges to a given value; the study of these chains is particularly arduous given their ever-changing transition kernel (see Winkler 1996 for some results in this direction).

Example 5.2.2 (Continuation of Example 5.2.1) We can apply the iterative construction (5.2.2) to the multi-modal function h(x, y), with different sequences of \gamma_j's and \beta_j's and starting point (0.65, 0.8). Table 5.2.1 gives the results of three stochastic gradient runs for the minimization of h, where the iteration T is obtained by the stopping rule ||\theta_T - \theta_{T-1}|| < 10^{-5}:

    \gamma_j           \beta_j   \theta_T           h(\theta_T)   min_t h(\theta_t)   iterations T
    1/10j            1/10j   (-0.166, 1.02)   1.287       0.115             50
    1/100j           1/10j   (0.629, 0.786)   0.00013     0.00013           93
    1/10 log(1+j)    1/j     (0.0004, 0.245)  4.24e-06    2.163e-07         58

Figure 5.2.2 illustrates the convergence of the algorithm to different local minima of the function h. The solutions are quite distinct for the three different sequences, both in location and values. Note that Case 1 ends up with a very poor evaluation of the minimum, due to a fast decrease of (\gamma_j) associated with big jumps in the first iterations, while Case 2 converges to the closest local minimum. Case 3 illustrates a general feature of the stochastic gradient method, namely that slower decrease rates of the sequence (\gamma_j) tend to achieve better minima, with occurrences where the sequence h(\theta_j) increases and avoids other local minima. The final convergence along a valley of h after some initial big jumps is also noteworthy. As shown by Table 5.2.1 and Figure 5.2.2, the number of iterations needed to achieve stability of \theta_T also varies with the choice of (\gamma_j, \beta_j).

This approach is still (too) close to numerical methods in that it requires a precise knowledge of the function h, which is not necessarily available.

[Figure 5.2.1: Grid representation of the function h(x, y) of Example 5.2.1 on [-1, 1]^2.]

[Figure 5.2.2: Stochastic gradient paths for three different choices of the sequences (\gamma_j) and (\beta_j), namely (1) \gamma_j = \beta_j = 1/10j, (2) \gamma_j = \beta_j = 1/100j, (3) \gamma_j = 1/10 log(1+j), \beta_j = 1/j, with starting point (0.65, 0.8) and the same sequence (\zeta_j) in (5.2.2). The grey levels are such that darker shades mean higher elevations. The function h to minimize is defined in Example 5.2.1.]
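The recursion (5.2.2) translates directly into code. A sketch on the function of Example 5.2.1, with the sign flipped since we minimize h, the step sequences of the second row of Table 5.2.1, and an added clamp to [-2,2] (an assumption of ours, for numerical safety, not part of the algorithm):

```python
import math
import random

def h(x, y):
    """Multimodal function of Example 5.2.1."""
    return ((x * math.sin(20 * y) + y * math.sin(20 * x)) ** 2
            * math.cosh(math.sin(10 * x) * x)
            + (x * math.cos(10 * y) - y * math.sin(10 * x)) ** 2
            * math.cosh(math.cos(20 * y) * y))

def stochastic_gradient(start, n_iter=5000, seed=0):
    rng = random.Random(seed)
    x, y = start
    best = h(x, y)
    for j in range(1, n_iter + 1):
        gamma = 1.0 / (100 * j)                 # gamma_j, Table 5.2.1 case 2
        beta = 1.0 / (10 * j)                   # beta_j
        phi = rng.uniform(0.0, 2.0 * math.pi)
        zx, zy = math.cos(phi), math.sin(phi)   # zeta_j uniform on the unit circle
        dh = h(x + beta * zx, y + beta * zy) - h(x - beta * zx, y - beta * zy)
        x -= gamma / (2 * beta) * dh * zx       # minus sign: minimisation
        y -= gamma / (2 * beta) * dh * zy
        x = max(-2.0, min(2.0, x))              # keep iterates in a bounded region
        y = max(-2.0, min(2.0, y))
        best = min(best, h(x, y))
    return (x, y), best

theta, best = stochastic_gradient((0.65, 0.8))
assert best <= h(0.65, 0.8)
```

Different seeds play the role of different (\zeta_j) sequences; as in Figure 5.2.2, distinct runs typically settle in distinct local minima.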
5.2.3 Simulated Annealing

The simulated annealing algorithm^2 has been introduced by Metropolis et al. (1953) to minimize a criterion function on a finite set of very large size^3, but it also applies to optimization on a continuous set and to simulation (see Ackley et al. 1985 and Neal 1994). The fundamental idea at the core of simulated annealing methods is that a change of scale, called temperature, allows for faster moves on the surface of the function h to maximize, whose negative is called energy. Therefore, rescaling partially avoids the trapping attraction of local maxima. Given a temperature parameter T > 0, a sample \theta_1^T, \theta_2^T, ... is generated from the distribution

    \pi(\theta) \propto exp( h(\theta) / T )

and can be used as in x5.2.1 to come up with an approximate maximum of h. As T decreases towards 0, the values simulated from this distribution become concentrated in a narrower and narrower neighborhood of the local maxima of h (see Theorem 5.2.7, Problem 5.9, and Winkler 1996).

The fact that this approach has a moderating effect on the attraction of the local maxima of h becomes more apparent when we consider the simulation method proposed by Metropolis et al. (1953). Starting from \theta_0, \zeta is generated from a uniform distribution on a neighborhood V(\theta_0) of \theta_0 or, more generally, from a distribution with density g(|\zeta - \theta_0|), and the new value of \theta is generated as

    \theta_1 = \zeta         with probability \rho = exp(\Delta h / T) \wedge 1 ,
    \theta_1 = \theta_0      with probability 1 - \rho ,

where \Delta h = h(\zeta) - h(\theta_0). (This method is in fact the Metropolis algorithm, which simulates the density proportional to exp{h(\theta)/T}, described and justified in Chapter 6.) Therefore, if h(\zeta) \geq h(\theta_0), \zeta is accepted with probability 1. On the other hand, if h(\zeta) < h(\theta_0), \zeta may still be accepted with probability \rho \neq 0 and \theta_0 is then changed into \zeta. This property allows the algorithm to escape the attraction of \theta_0 if \theta_0 is a local maximum of h, with a probability which depends on the choice of the scale T, compared with the range of the density g.
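The Metropolis acceptance rule above translates directly into a minimization routine. A minimal sketch for the function h of Example 5.2.1, where the logarithmic cooling schedule T_t = 1/log(1+t) and the proposal scale are illustrative choices of ours, not the book's prescriptions:

```python
import math
import random

def h(x, y):
    """Multimodal function of Example 5.2.1, to be minimised on [-1,1]^2."""
    return ((x * math.sin(20 * y) + y * math.sin(20 * x)) ** 2
            * math.cosh(math.sin(10 * x) * x)
            + (x * math.cos(10 * y) - y * math.sin(10 * x)) ** 2
            * math.cosh(math.cos(20 * y) * y))

def anneal(start, n_iter=20_000, scale=0.1, seed=0):
    rng = random.Random(seed)
    cur, cur_h = start, h(*start)
    best, best_h = cur, cur_h
    for t in range(1, n_iter + 1):
        temp = 1.0 / math.log(1.0 + t)       # temperature decreases with t
        cand = (cur[0] + rng.uniform(-scale, scale),
                cur[1] + rng.uniform(-scale, scale))
        if not (-1 <= cand[0] <= 1 and -1 <= cand[1] <= 1):
            continue                          # stay inside the domain [-1,1]^2
        dh = h(*cand) - cur_h                 # we minimise, so accept when dh <= 0
        if dh <= 0 or rng.random() < math.exp(-dh / temp):
            cur, cur_h = cand, cur_h + dh     # uphill moves still possible
            if cur_h < best_h:
                best, best_h = cur, cur_h
    return best, best_h

best, best_h = anneal((0.65, 0.8))
assert best_h <= h(0.65, 0.8)
```

The occasional acceptance of uphill moves, with probability decreasing as the temperature falls, is precisely what lets the chain leave the attraction basin of a local minimum.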
In its most usual implementation, the simulated annealing algorithm modifies the temperature T at each iteration; it is then of the form

Algorithm A.20 (Simulated Annealing) At iteration i,
1. Simulate \zeta from g(|\zeta - \theta_i|);
2. Take \theta_{i+1} = \zeta with probability \rho_i = exp(\Delta h_i / T_i) \wedge 1, and \theta_{i+1} = \theta_i otherwise;
3. Update T_i to T_{i+1}.

^2 This name is borrowed from the metallurgy vocabulary: a metal manufactured by a slow decrease of the temperature (annealing) is stronger than a metal manufactured by a fast decrease of the temperature. The vocabulary also relates to physics, since the function to minimize is called energy and the variance factor T, which controls convergence, is called temperature. We will try to keep these idiosyncrasies to a minimal level, but they are quite common in the literature.

^3 This paper is also the originator of the Markov Chain Monte Carlo methods developed in the following chapters. The potential of these two simultaneous innovations has been discovered much later by statisticians (Hastings 1970; Geman and Geman 1984) than by physicists (see also Kirkpatrick et al. 1983).

(Problems, continued)

... where t_i(\theta, \sigma^2) = IE[Z_i | X_i, \theta, \sigma^2] and v_i(\theta, \sigma^2) = IE[Z_i^2 | X_i, \theta, \sigma^2].
(d) Show that

    IE[Z_i | X_i, \theta, \sigma^2] = \theta + \sigma H_i( (u - \theta)/\sigma ) ,
    IE[Z_i^2 | X_i, \theta, \sigma^2] = \theta^2 + \sigma^2 + \sigma (u + \theta) H_i( (u - \theta)/\sigma ) ,

where

    H_i(t) = \varphi(t) / (1 - \Phi(t))    if X_i = 1 ,
    H_i(t) = -\varphi(t) / \Phi(t)         if X_i = 0 .

(e) Show that \theta^{(j)} converges to \theta^ and that \sigma^{2(j)} converges to \sigma^2^, the ML estimates of \theta and \sigma^2.

5.18 The EM algorithm can also be implemented in a Bayesian hierarchical model to find a posterior mode. Suppose that we have the hierarchical model

    X | \theta ~ f(x|\theta) ,    \theta | \lambda ~ \pi(\theta|\lambda) ,    \lambda ~ \gamma(\lambda) ,

where interest would be in estimating quantities from \pi(\theta|x). Since

    \pi(\theta|x) = \int \pi(\theta, \lambda|x) d\lambda ,

where \pi(\theta, \lambda|x) = \pi(\theta|\lambda, x) \pi(\lambda|x), the EM algorithm is a candidate method for finding the mode of \pi(\theta|x), where \lambda would be used as the augmented data.
(a) Define k(\lambda|\theta, x) = \pi(\theta, \lambda|x) / \pi(\theta|x), and show that

    log \pi(\theta|x) = \int log \pi(\theta, \lambda|x) k(\lambda|\theta, x) d\lambda
                        - \int log k(\lambda|\theta, x) k(\lambda|\theta, x) d\lambda .

(b) If the sequence (\theta^{(j)}) satisfies

    max_\theta \int log \pi(\theta, \lambda|x) k(\lambda|\theta^{(j)}, x) d\lambda
        = \int log \pi(\theta^{(j+1)}, \lambda|x) k(\lambda|\theta^{(j)}, x) d\lambda ,

show that log \pi(\theta^{(j+1)}|x) \geq log \pi(\theta^{(j)}|x). Under what conditions will the sequence (\theta^{(j)}) converge to the mode of \pi(\theta|x)?
(c) For the hierarchy

    X | \theta ~ N(\theta, 1) ,    \theta | \lambda ~ N(\lambda, 1) ,

with \pi(\lambda) = 1, show how to use the EM algorithm to calculate the posterior mode of \pi(\lambda|x).
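For part (c) of Problem 5.18, both EM steps are available in closed form: the E-step gives \theta | \lambda^{(j)}, x ~ N((x + \lambda^{(j)})/2, 1/2), and the M-step reduces to \lambda^{(j+1)} = (x + \lambda^{(j)})/2, so the iterates converge geometrically to \lambda = x, the mode of the marginal posterior \pi(\lambda|x), which is N(x, 2). A sketch:

```python
def em_posterior_mode(x, lam0=0.0, n_iter=60):
    """EM for X|theta ~ N(theta, 1), theta|lam ~ N(lam, 1), flat prior pi(lam) = 1.

    E-step: theta | lam^(j), x ~ N((x + lam^(j))/2, 1/2).
    M-step: maximising the expected complete-data log posterior in lam gives
            lam^(j+1) = E[theta | lam^(j), x] = (x + lam^(j)) / 2.
    """
    lam = lam0
    for _ in range(n_iter):
        lam = (x + lam) / 2.0           # the error is halved at every step
    return lam

x_obs = 3.0
lam_hat = em_posterior_mode(x_obs)
assert abs(lam_hat - x_obs) < 1e-9      # mode of pi(lam|x) = N(x, 2) is x
```

Here the halving of the error at each step is the deterministic counterpart of the general monotonicity property shown in part (b).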
CHAPTER 6

The Metropolis-Hastings Algorithm

"What's changed, except what needed changing?" And there was something in that, Cadfael reflected. What was changed was the replacement of falsity by truth, and however hard the assimilation might be, it must be for the better. Truth can be costly, but in the end, it never falls short of value for the price paid.
|Ellis Peters, The Confession of Brother Haluin|

'Have you any thought,' resumed Valentin, 'of a tool with which it could be done?' 'Speaking within modern probabilities, I really haven't,' said the doctor.
|G.K. Chesterton, The Innocence of Father Brown|

6.1 Monte Carlo Methods based on Markov Chains

Chapter 3 has shown that it is not necessary to use a sample from the distribution f to approximate the integral

    \int h(x) f(x) dx ,

since importance sampling techniques can be used. This chapter develops this possibility in a different way and shows that it is possible to obtain a sample x_1, ..., x_n distributed from f without simulating from f. The basic principle underlying the methods described in this chapter and the following chapters is to use an ergodic Markov chain with stationary distribution f: for an arbitrary starting value x^(0), a chain (X^(t)) is generated from a transition kernel with stationary distribution f, which moreover ensures the convergence in distribution of (X^(t)) to f. (Given that the chain is ergodic, the starting value x^(0) is, in principle, unimportant.) For a "large enough" T, X^(T) can thus be considered as distributed from f, and the methods studied in this chapter produce a sample X^(T), X^(T+1), ..., which is generated from f, even though the X^(T+t)'s are not independent.

Definition 6.1.1 A Markov chain Monte Carlo (MCMC) method for the simulation of a distribution f is any method producing an ergodic Markov chain (X^(t)) whose stationary distribution is f.

Why should we resort to such a convoluted approach to simulate from f? Despite its formal aspect, this definition implies that the use of a chain (X^(t)) resulting from a Markov Chain Monte Carlo algorithm with stationary distribution f is similar to the use of an iid sample from f, in the sense that the ergodic theorem of Chapter 4 guarantees the convergence of the empirical average

(6.1.1)    (1/T) \sum_{t=1}^T h(x^(t))

to the quantity IE_f[h(X)]. A sequence (X^(t)) produced by a Markov Chain Monte Carlo algorithm can thus be employed just as an iid sample. In comparison with the techniques developed in Chapter 3, this involved strategy may indeed sound suboptimal: for one thing, handling this sequence is rather more arduous than in the iid case, because of the dependence structure; for another, the number of iterations required to obtain a good approximation of f is a priori important, as the method relies on asymptotic convergence properties.

The call for Markov chains is nonetheless justified from at least two points of view. First, in comparison with Chapter 2, where we produced only one single general method of simulation, the ARS algorithm (x2.3.3), which moreover only applies for log-concave densities, the call to Markov chains allows for a much greater generality than the methods presented in Chapter 2. Markov Chain Monte Carlo methods achieve a "universal" dimension in the sense that they (and not only formally) validate the use of positive densities g for the simulation of arbitrary distributions of interest f, defined with respect to the dominating measure for the model. Second, Chapter 5 has shown that stochastic optimization algorithms, such as those of Robbins-Monro or the SEM algorithm, naturally produce Markov chain structures which should be generalized and, if possible, optimized. Moreover, if there is no particular requirement on independence, but if the incentive for the simulation study is rather on the properties of the distribution f, it is sometimes more efficient to use the pair (f, g) through a Markov chain, even when an accept-reject algorithm is available.

This chapter covers the most general MCMC method, namely the Metropolis-Hastings algorithm, while Chapter 7 specializes in the Gibbs sampler which, although a particular case of the Metropolis-Hastings algorithm (as shown in Chapter 7), has fundamentally different methodological and historical motivations. The (re-)discovery of Markov Chain Monte Carlo methods by statisticians in the 1990's has undoubtedly induced considerable progress in simulation-based inference, in particular since it has opened access to the analysis of models which were too complex to be satisfactorily processed by previous schemes^1. Introduced by Metropolis, Rosenbluth, Rosenbluth, Teller and Teller (1953) in a setup of optimization on a discrete state space, based on methods used in statistical physics, these methods have later been generalized by Hastings (1970) and Peskun (1973) to statistical simulation. This gap of more than thirty years can be partially attributed to the lack of appropriate computing power, since most of the examples now processed by Markov Chain Monte Carlo algorithms could not have been treated previously. Despite several later attempts in specific settings (see, for example, Geman and Geman 1984, Tanner and Wong 1987, Besag 1989), the starting point for an intensive use of these methods by the statistical community can be traced to the presentation of the Gibbs sampler by Gelfand and Smith (1990).

As a corollary, there is no need for the generation of n independent chains (X_i^(t)) (i = 1, ..., n) where only the "terminal" values X_i^(T) are kept: a single Markov chain is enough to ensure a proper approximation, through estimates like (6.1.1), of IE_f[h(X)] for the functions h of interest (and sometimes even of the density f, as detailed in Chapter 7). That multiple-chain approach would indeed induce a considerable waste of n(T - 1) simulations out of nT. The determination of the "proper" length T is, however, still under debate, and some approaches to the convergence control of (6.1.1) are given later in this chapter and in Chapter 8.

Which transition should we use, then? Given the principle stated in Definition 6.1.1, one can propose an infinite number of practical implementations. The Metropolis-Hastings algorithms proposed in this chapter have the definite advantage of imposing minimal requirements on the study of the density f, while allowing for a wide choice of possible implementations, in sharp contrast with the Gibbs sampler given in Chapter 7.

6.2 The Metropolis-Hastings algorithm

The Metropolis-Hastings algorithm starts with a conditional density q(y|x), defined with respect to the dominating measure for the model. It can only be implemented in practice when q(.|x) is easy to simulate and is either explicitly available (up to a multiplicative constant independent from x) or symmetric, that is, such that q(x|y) = q(y|x), which is much more restrictive. Since the results presented below are valid for all types of Metropolis-Hastings algorithms, we do not include examples in this section, but rather wait for x6.3, which presents a typology of these algorithms. Before illustrating the universality of Metropolis-Hastings algorithms and demonstrating their straightforward implementation, we first evacuate the (important) issue of their theoretical validity.

^1 For example, in Bayesian inference, Chapter 9 considers the case of latent variable models, which prohibit both analytic processing and numerical approximation in both classical (maximum likelihood) and Bayesian setups (see also the examples of Chapter 1).
these methods have later been generalized by Hastings (1970) and Peskun (1973) to statistical simulation. we do not include examples in this section. the determination of the \proper" length T is still under debate.2 ] THE METROPOLIS{HASTINGS ALGORITHM 215 (Xi(t) ) (i = 1. one can propose a in nite number of practical implementations based. It can only be implemented in practice when q( jx) is easy to simulate and is either explicitly available (up to a multiplicative constant independent from x). Rosenbluth.1. as we will see in Chapter 8. a single Markov chain is enough to ensure a proper approximation through estimates like (6. and some approaches to the convergence control of (6.1. : : :. Despite several later attempts in speci c settings (see for example Geman and Geman 1984.4 and in Chapter 8. 6. which is much more restrictive.6. but rather wait for x6. Teller and Teller (1953) in a setup of optimization on a discrete state space. as detailed in Chapter 7). Before illustrating the universality of Metropolis{Hastings algorithms and demonstrating their straightforward implementation.1) of IEf h(X)] for the functions h of interest (and sometimes even of the density f. Besag 1989).1 De nition 6. this approach induces a considerable waste of n(T ? 1) simulations out of nT . . that is such that q(xjy) = q(yjx).In other words.1) are given in x6.4 for a non-trivial example). we rst evacuate the (important) issue of their theoretical validity. or symmetric.2 6. which presents a typology of these types. 2 For one thing. Tanner and Wong 1987. while allowing for a wide choice of possible implementations.2.3. the starting point for an intensive use of these methods by the statistical community can be traced to the presentation of the Gibbs sampler by Gelfand and Smith (1990). Since the results presented below are valid for all types of Metropolis{Hastings algorithms. in sharp contrast with the Gibbs sampler given in Chapter 7. Obviously. 
The Metropolis–Hastings algorithms proposed in this chapter have the definite advantage of imposing minimal requirements on the study of the density f. It is obviously necessary to impose minimal conditions on the conditional distribution q for f to be the limiting distribution of the chain (X^(t)) produced by [A.24]; for instance, the support E of f is supposed to be connected.

Definition 6.2.2 The Metropolis–Hastings algorithm associated with the objective distribution f and the conditional distribution q produces a Markov chain (X^(t)) through the following transition: given x^(t),
1. Generate Y_t ~ q(y|x^(t)).
2. Take X^(t+1) = Y_t with probability rho(x^(t), Y_t), and X^(t+1) = x^(t) with probability 1 - rho(x^(t), Y_t), where
   rho(x, y) = min{ [f(y) q(x|y)] / [f(x) q(y|x)] , 1 }.
The distribution q will be called the instrumental (or proposal) distribution.

This algorithm always accepts values y_t such that the "likelihood ratio" f(y_t)/q(y_t|x^(t)) is increased compared with the previous value. It is only in the symmetric case, that is when q(x|y) = q(y|x), that the acceptance is driven by the ratio f(y_t)/f(x^(t)) alone. An important feature of [A.24] is that it may also accept values y_t such that this ratio is decreased, similar to stochastic optimization methods (see §5.2.2). The y_t's generated by the algorithm are thus associated with weights of the form m_t/T (m_t = 0, 1, ...), depending on how many times the subsequent values have been rejected. There are similarities between [A.24] and the accept-reject methods of §2.3, when given a pair (f, g), and it is possible to use the algorithm [A.24] as an alternative to an accept-reject algorithm. Obviously, the probability rho(x^(t), y_t) is only defined when f(x^(t)) > 0; if the chain starts with a value x^(0) such that f(x^(0)) > 0,
it follows that f(x^(t)) > 0 for every t in IN, since the values y_t such that f(y_t) = 0 lead to rho(x^(t), y_t) = 0 and are therefore rejected by the algorithm. Note also that, since rejecting Y_t leads to repeating x^(t) at time t+1, a sample produced by [A.24] obviously differs from an iid sample: for one thing, it involves repeated occurrences of the same value, which is an impossible feature in continuous iid settings. Like the accept-reject method, the Metropolis–Hastings algorithm only depends on the ratios f(y_t)/f(x^(t)) and q(x^(t)|y_t)/q(y_t|x^(t)) and is therefore independent of normalizing constants. The comparison with importance sampling is somehow more relevant; both approaches are compared in §6.4.

The support E of f is supposed to be connected. (This assumption is often omitted in the literature, but the lack of connectedness of E can deeply invalidate the Metropolis–Hastings algorithm; see Hobert, Robert and Goutis 1997 for a treatment of the non-connected Gibbs sampler.) Indeed, if there exists A contained in E such that
   integral_A f(x) dx > 0   and   integral_A q(y|x) dy = 0 ,
the algorithm [A.24] does not have f as a limiting distribution, since the chain (X^(t)) never visits A. If the support of f is truncated by q, it is necessary to proceed on one connected component at a time and to show that the different connected components of E are linked by the kernel of [A.24].

For the stationarity of f (Theorem 6.2.1), the key computation is, for every measurable set A and with D = {(x, x') : f(x') q(x|x') <= f(x) q(x'|x)},
   integral K(x, A) f(x) dx
     = double-integral_D 1_A(x') [f(x') q(x|x') / f(x) q(x'|x)] q(x'|x) f(x) dx dx'
     + double-integral_{D^c} 1_A(x') q(x'|x) f(x) dx dx'
     + integral 1_A(x) (1 - rho(x)) f(x) dx ,
where the last term collects the rejections; the change of variables (x', x) -> (x, x') in the second and fourth integrals on the right-hand side (which also transforms the set D into D^c) reduces the sum to
   double-integral 1_A(x') f(x') q(x|x') dx dx' = integral_A f(x') dx' .
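As a concrete sketch of the transition of [A.24], the short Python routine below runs a Metropolis–Hastings chain; the standard normal target and the Gaussian random-walk proposal are purely illustrative choices made for this example, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(f, q_sample, q_dens, x0, T):
    """Generic Metropolis-Hastings transition [A.24], iterated T times.
    f is the (possibly unnormalized) target density, q_sample(x) draws
    Y ~ q(.|x), and q_dens(y, x) evaluates q(y|x)."""
    x = x0
    chain = np.empty(T)
    for t in range(T):
        y = q_sample(x)
        # rho(x, y) = min{ f(y) q(x|y) / [f(x) q(y|x)], 1 }
        rho = min(f(y) * q_dens(x, y) / (f(x) * q_dens(y, x)), 1.0)
        if rng.uniform() < rho:
            x = y               # accept Y_t
        chain[t] = x            # a rejection repeats x^(t) at time t+1
    return chain

# Illustrative choice: N(0,1) target (unnormalized, since [A.24] only
# needs ratios) with a symmetric Gaussian random-walk proposal, in which
# case rho reduces to f(y)/f(x).
f = lambda x: np.exp(-0.5 * x ** 2)
q_sample = lambda x: x + rng.standard_normal()
q_dens = lambda y, x: np.exp(-0.5 * (y - x) ** 2)

chain = metropolis_hastings(f, q_sample, q_dens, x0=0.0, T=10000)
```

Since the proposal here is symmetric, `q_dens` cancels in the ratio; it is kept in the function signature so that non-symmetric instrumental distributions can be plugged in unchanged.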
The transition kernel associated with A:24] can be written K(x. we still have = the following result: Theorem 6. x) to (x. x0).1 For every conditional distribution q. Robert and Goutis 1997 for a treatment of the non-connected Gibbs sampler. where x denotes the Dirac mass in x. A f(x)dx > 0 and Z A q(yjx)dy = 0 . In such a case.6. that is. + D ZZ I A (x0) (x. A)f(x)dx = + + Z Z ZD Z ZD D ? ZD c where. the chain (X (t) ) never visits A. Therefore. whose support includes E . In full generality. y) 1 (see also Winkler 1995). ergodicity follows from Theorem 4.218 THE METROPOLIS-HASTINGS ALGORITHM 6. since f is stationary. 2 The stationarity of f is therefore established for almost any conditional distribution q. This is impossible. Metropolis{Hastings algorithms are only a special case of a more general class of algorithms whose transition is associated with the acceptance probability s(x. As shown by Hastings (1970). y) = 1 is also known as the Boltzman algorithm and used in simulation for particle Physics. Tierney (1994) has indeed established the following result: Theorem 6. given the f-irreducibility of (X (t) ). 1 + f(y)q(xjy) where s is an arbitrary positive symmetric function such that %(x.4. When (X (t) ) is aperiodic. A)f(x)dx = ZZ I A (x0 ) f(x0 ) q(xjx0)dxdx0 = Z A f(x0 )dx0 .2. it is Harris recurrent. Proof.2 Dc . in the discrete case.2 If the chain (X (t) ) is f -irreducible. 0 . and vice versa. a fact which indicates how universal Metropolis{Hastings algorithms are.4). it is positive recurrent.2. The chain (X (t) ) is therefore positive recurrent. Tierney (1995) and Mira. Proof.3.5) for every value x(0). 6. The particular case s(x.2. y) %(x. y) = (6.2. x1) q(x1 jx0) h(x1)dx1 + (1 ? (x0 )) h(x0) . it satis es h(x0 ) = IE h(X (1) )jx0] = IE h(X (t) )jx0] and therefore h is f-almost everywhere constant and equal to IEf h(X)]. then the space X can be written as Ei with P(X (t) 2 Ei ) converging to 0 (see De nition 4. establishing the stationarity of f. 
Suppose (X (t) ) is not recurrent. although Peskun (1973) has shown that. Geyer and Tierney (1998) propose extensions to the continuous case.2 Convergence Properties In order to establish the ergodicity of (X (t) ) and therefore to validate A:24] as a Markov Chain Monte Carlo algorithm. the performances of this algorithm are always suboptimal when compared with the Metropolis{ Hastings algorithm (see Problem 6. This result can be established by using the fact that the only bounded harmonic functions are constant. If h is an harmonic function.6.3 If (X (t) ) is f -irreducible. we have Z K(x. 2 Theorem 6. Since Z (1) )jx ] = IE h(X (x0 . it is ergodic. we need to prove both the f -irreducibility and the aperiodicity of (X (t) ). If (X (t) ) is in addition aperiodic.3.1) f(x)q(yjx) . Therefore. the essential supremum ess sup h(x) = inf fw.1. 2.6) but x6. geometric convergence is almost never guaranteed. is rather restrictive besides the E nite case. 6. For instance. if is not bounded from below on a set of measure 1. Take bst = 0 if xs 6= xt and. It is indeed enough that g be non empty in a neighborhood of 0 to ensure the ergodicity of A:24] (see x6.12. This result is important as it characterizes Metropolis{Hastings algorithms which are weakly convergent (see the extreme case of Example 8.1 If the marginal probability of acceptance satis es essf sup (1 ? (x)) = 1.2. (h(x) > w) = 0g .6. De ning. however. since the function is almost always intractable.6.18 and 7.3) of Theorem 6. it cannot be used as a criterion for geometric convergence.6 ] NOTES 257 where c(b) denotes the number of clusters. as stated by Theorem 4.6. . 0 otherwise. In the particular case when E is a small set (see Chapter 4).) G. for every measure and every -measurable function h. it is quite di cult to obtain a convergence stronger than the simple ergodic convergence n of (6. however. It is in fact equivalent to Doeblin's condition. 
Chapter 7 exhibits continuous examples of Gibbs samplers when uniform ergodicity holds (see Examples 7.4 for the irreducibility of the Metropolis{Hastings Markov chain is particularly well adapted to random walks. it is impossible to establish geometric convergence without a restriction to the discrete case or without considering particular transition densities. the algorithm A:24] is not geometrically ergodic. since Roberts and Tweedie (1996) have come up with chains which are not geometrically ergodic. On the other hand.2. (b) Show that the Swendson-Wang algorithm 1.1. This condition. they have indeed established the following result: Theorem 6. for xs = xt . with transition densities q(yjx) = g(y ? x). choose a color at random on leads to simulations from (Note: This algorithm is acknowledged as accelerating convergence in image processing.6 Notes 6. while being a natural choice for the instrumental distribution q. For every cluster. Therefore.2 has shown that in the particular case of random walks.1 Geometric Convergence of Metropolis-Hastings algorithms The su cient condition (6.1.6.3.1) or than the (total variation) convergence of kPx(0) ? f kTV without introducing additional conditions on f and q. a geometric speed of convergence cannot be guaranteed for A:24]. the number of sites connected by active bonds.e. n bst = 1 with probability 1 ? qst . i. Roberts and Polson (1995) note that the chain (X (t)) is uniformly ergodic.3.2.2 for a detailed study of these methods).6.8). The fact that the simulated annealing algorithm may give a value of E (X (t+1) ) larger than E (x(t) ) is a very positive feature of the method.1). Note. (1996) have shown that the optimal choice of is 2:4.6. For a given value T > 0. that is " 1+2 X k>0 ? cov X (t) . In the particular case when f is the density of the N (0. is that the acceptance probability converges to 0:234. t As noted in x5. 
This heuristic rule is based on the asymptotic behavior of an e ciency criterion equal to the ratio of the variance of an estimator based on an iid sample and the variance of the estimator (3. equal to 0:44 for = 2:4. it produces a Markov chain (X (t)) on X by the following transition: 1.6. i. (1996). 1) distribution and when g is the density of a Gaussian random walk with variance . An equivalent version of this emp pirical rule is to take the scale factor in g equal to 2:38= d . Gilks and Roberts (1996) recommend the use of instrumental distributions such that their acceptance rate is close to 1=4 for models of high dimension and is equal to 1=2 for the models of dimension 1 or 2. X (t+k) #?1 in the case h(x) = x.2 A Reinterpretation of Simulated Annealing Consider a function E de ned on a nite set X with such a large cardinal that a minimization of E based on the comparison of the values of E (X ) is not feasible. since it allows for escapes from the attraction zone of local minima of E when T is large enough. does not cover the extension to the case when T varies with t and converges to 0 \slowly enough" (typically in log t). The chain (X (t) ) is therefore associated with the stationary distribution f (x) / exp(?E (x)=T ) provided that the matrix of the q(ijj )'s generates an irreducible chain. with probability exp ?fE(x(t) ) ? E( )g=T ^ 1. 6.2. where d is the . presented in Chapter 4. given the symmetry in the conditional distribution. based on an approximation of x(t) by a Langevin di usion process (see x6. Generate t according to q( jx(t) ). 2.6.3) is based on a conditional density q on X such that q(ijj ) = q(j ji) for every (i.1. approximately 1=4. j ) 2 X 2 . The simulated annealing technique (see x5.258 THE METROPOLIS-HASTINGS ALGORITHM 6. that the theory of homogeneous Markov chains. Gelman et al.6 6. Comparing A:24] and the simulated annealing algorithm. 
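The heuristic can be checked empirically. Assuming a N(0,1) target and a Gaussian random-walk proposal (an illustrative setup matching the case discussed above), a scale near 2.4 should produce an acceptance rate close to the reference value 0.44 for dimension 1, while much smaller or much larger scales push the rate toward the degenerate extremes 1 and 0.

```python
import numpy as np

rng = np.random.default_rng(1)

def rw_acceptance_rate(scale, T=20000):
    """Empirical acceptance rate of a Gaussian random-walk Metropolis
    sampler on a N(0,1) target, for a given proposal scale."""
    x, accepted = 0.0, 0
    for _ in range(T):
        y = x + scale * rng.standard_normal()
        # symmetric proposal: accept with probability min{f(y)/f(x), 1},
        # i.e. log U < log f(y) - log f(x) = (x^2 - y^2)/2
        if np.log(rng.uniform()) < 0.5 * (x ** 2 - y ** 2):
            x, accepted = y, accepted + 1
    return accepted / T

rates = {scale: rw_acceptance_rate(scale) for scale in (0.1, 2.4, 20.0)}
print(rates)
```

Tiny steps are almost always accepted but explore the support very slowly; huge steps land in the tails and are almost always rejected. The intermediate scale trades these off, which is the content of the calibration rule.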
the simulated value t is automatically accepted when E ( t ) E (x(t) ).5) when the dimension of the problem goes to in nity.3.e. The corresponding acceptance rate is = 2 arctan 2 . A second result by Gelman et al. the later appears as a particular case of Metropolis{Hastings algorithm. Take X (t+1) = x(t) t otherwise.3.3 Reference Acceptance Rates Gelman.2. with moreover a dissymmetry in the e ciency in favor of large values of . however. . who had been sent for him in some haste. He got to his feet with promptitude. since they only require a limited amount of information about the distribution to simulate. the Gibbs sampling algorithm has a number of distinct features: (i) The acceptance rate of the Gibbs sampler is uniformly equal to 1. continuing to look down the nave. The Innocence of Father Brown| 7. as an example of a generic algorithm. It was so simple. . formally.1 General Principles The previous chapter has developed simulation techniques which could be called \generic".3. for he knew no small matter would have brought Gibbs in such a place at all. so obvious he just started to laugh.K.1.4. 7. For instance. when suddenly the solution to the problem just seemed to present itself. \It's just a matter of perspective.1. This is because the choice of an instrumental distribution is essentially reduced to a choice between a nite number of possibilities. my dear boy. ARMS (x6.CHAPTER 7 The Gibbs Sampler He sat. see Theorem 7. a special case of Metropolis{ Hastings algorithm (or rather a combination of Metropolis{Hastings algorithms on di erent components. Chesterton. In contrast. aims at reproducing f in an automatic manner.4. Doherty. the echoes pealing around the deserted church.1). in particular through the calibration of the acceptance rate (see x6. Metropolis{Hastings algorithms achieve higher levels of e ciency when they take into account the speci cs of the distribution f. Therefore. every simulated value is accepted and the results of x6. 
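The annealing transition described above — accept xi with probability exp{-(E(xi) - E(x^(t)))/T} ^ 1 under a symmetric proposal — can be sketched as follows. The energy function, state-space size, and cooling constants below are arbitrary illustrations, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_annealing(E, n_states, T0, n_iter):
    """Simulated annealing on {0,...,n_states-1}: a Metropolis-Hastings
    chain whose target at temperature T is f(x) proportional to
    exp(-E(x)/T), with a symmetric +/-1 random-walk proposal
    (q(i|j) = q(j|i)) and a logarithmic cooling schedule
    T_t = T0 / log(2 + t)."""
    x = int(rng.integers(n_states))
    best, best_energy = x, E(x)
    for t in range(n_iter):
        temp = T0 / np.log(2 + t)
        y = (x + int(rng.choice([-1, 1]))) % n_states
        delta = E(y) - E(x)
        # moves that decrease E are always accepted; increases are
        # accepted with probability exp(-delta/T), which allows escapes
        # from the attraction zones of local minima while T is large
        if delta <= 0 or rng.uniform() < np.exp(-delta / temp):
            x = y
        if E(x) < best_energy:
            best, best_energy = x, E(x)
    return best

# Purely illustrative energy: a quadratic with a sharpened global
# minimum at state 20 (none of these constants come from the text).
E = lambda i: 0.05 * (i - 20) ** 2 - 4.0 * (i == 20)
best = simulated_annealing(E, n_states=30, T0=10.0, n_iter=20000)
```

Returning the best state visited (rather than the final one) is a common practical convenience; the theory sketched in the text concerns the chain itself.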
Satan in St Mary's| In this place he was found by Gibbs." |P. \Just a matter of perspective.10 below)." he used to boom out.1 De nition . |G.1 on the optimal acceptance rates are not valid in this setting. Although the Gibbs sampler is. the properties and performance of the Gibbs sampling method presented in this chapter are very closely tied to the distribution f.] He remembered the voice of his old Dominus' Father Benedict.3). This also means that the convergence assessment for this algorithm must be treated differently than for Metropolis{Hastings techniques. telling him that there was a solution to every problem..C. However. fp are called the full conditionals. : : :. : : :. generate (7. Xp). However. as in x6. fp . p A:31] p. For example. xi?1.1 (ii) The use of the Gibbs sampler implies limitations on the choice of instrumental distributions and requires a prior knowledge of some analytical or probabilistic properties of f. The densities f1 .1) Yt fY jX ( jxt?1) Xt fX jY ( jyt) where fY jX and fX jY are the conditional distributions. 2 ( X2t+1) f2 (x2 jx(t+1). we will rst de ne the speci c features of this algorithm. x(t). xpt)). and it is a particular feature of the Gibbs sampler that these are the densities used for simulation. the chain (Xt ) chain has transition kernel K(x. and for t = 1. x(t)). (iii) The Gibbs sampler is. These di erent points will be made clearer after a discussion of the properties of the Gibbs sampler. 2. : : :.3. by construction. y).4. Suppose that for some p > 1 the random variable X 2 X can be written as X = (X1 . which is usually an advantage. So. x(t?1 ).or multidimensional. The sequence (Xt . xi?1. : : :. the construction is still at least two-dimensional. xp fi (xijx1. is a Markov chain. : : :. Yt). x2. : : :. xi+1. ( ( X1t+1) f1 (x1 jx(t). Even though some components of the simulated vector may be arti cial for the problem of interest. as is each sequence (Xt ) and (Yt ) individually. 
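The two-stage recursion (7.1.1), specialized to the bivariate normal case whose full conditionals are N(rho*y, 1-rho^2) and N(rho*x, 1-rho^2), can be sketched as below. As noted in the text, this is a formal example, since the bivariate normal can of course be simulated directly.

```python
import numpy as np

rng = np.random.default_rng(4)

def bivariate_normal_gibbs(rho, T, x0=0.0):
    """Gibbs sampler for (X, Y) ~ N2(0, [[1, rho], [rho, 1]]), iterating
    the two conditional simulations
        Y_{t+1} | x_t     ~ N(rho * x_t,     1 - rho^2)
        X_{t+1} | y_{t+1} ~ N(rho * y_{t+1}, 1 - rho^2)."""
    x = x0
    sd = np.sqrt(1.0 - rho ** 2)
    xs, ys = np.empty(T), np.empty(T)
    for t in range(T):
        y = rng.normal(rho * x, sd)   # Y-step given the current x
        x = rng.normal(rho * y, sd)   # X-step given the fresh y
        xs[t], ys[t] = x, y
    return xs, ys

xs, ys = bivariate_normal_gibbs(rho=0.8, T=20000)
```

At stationarity the pairs (X_t, Y_t) reproduce the joint correlation rho, while each marginal chain (X_t) and (Y_t) is itself a Markov chain, as stated above.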
: : :.31 {The Gibbs sampler{ Given x(t) = (x1t). . xpt)). multidimensional. all of the simulations may be univariate. Moreover.1.262 THE GIBBS SAMPLER 7.1 {Bivariate Gibbs sampler{ Let the random variables X and Y have joint density f(x. : : :. : : :. xp ): The associated Gibbs sampling algorithm (or Gibbs sampler) is given by the following transition from X (t) to X (t+1) : ( ( Algorithm A. where the Xi's are either uni. or unnecessary for the required inference.7. Example 7. generate 1. that is Xi jx1.1. : : :. p 1 3 ::: ( ( +1) Xpt+1) fp (xp jx1t+1). 2. for obvious reasons of lack of irreducibility of the resulting chain. and generate a sequence of observations according to the following: Set X0 = x0. : : :. x2. : : :. x ) = Z fX jY (x jy)fY jX (yjx)dy. xi+1. suppose that we can simulate from the corresponding conditional densities f1 . even in a high dimensional problem. (iv) The Gibbs sampler does not apply to problems where the number of parameters varies. the corresponding density + is f(y1 . Y ) N2 0.1. generate (7. 1 ? 2 ): Obviously. 1 +1 23 2 12 y1 Z expf?y2 ? 12y1y2g ?y (y1 ) / 1 + y + y dy2 e . the Gibbs sampler is Given yt . si 2 f?1.1.2). (X. this is a formal example since the bivariate normal density can be directly simulated by the Box-Muller algorithm (see Example 2. y2. j j s ?J i i X ss (i.2 {Auto-exponential model{ The auto-exponential model of Besag (1974) has been found useful in some aspects of spatial modelling.1 ] GENERAL PRINCIPLES 263 with invariant distribution fX ( ). the other conditionals. y3 ) dy3 (y / expf?+ 1 +y y2++ 12y12 )g . The full conditional densities are exponential. and the marginal distributions have forms such as y2 jy1 y1 X Z +1 0 which cannot be simulated easily.j )2N i j .2. the full conditional distribution is P expf?Hsi ? Jsi j :(i. y2.7. 1 ? 2 ) Yt+1 j xt N ( xt+1 . . 0 23 2 12 1 1 k Example 7. and so are very easy to simulate from. 
f(s) / exp ?H X and where N denotes the neighborhood relation for the network.j )2N sj g f(si jsj 6=i ) = expf?H ? J P s g + expfH + J P s g j j Pj j expf?(H + j sj )(si + 1)g = 1 + expf?2(H + P s )g .7. For the particular case when y 2 IR3 . y3 ) / expf?(y1 + y2 + y3 + 12 y1 y2 + 23 y2 y3 + 31y3 y1 )g . 1 1 (7. 1g. where f(y1 .1.3) Xt+1 j yt N ( yt . with known ij > 0.1.2) .2. yi jyj 6=i E xp 1 + j 6=i ij yj . For the special case of the bivariate normal density.5. k Example 7.3 {Ising model{ For the Ising model of Example 5. In contrast. ( ( (y1t) . : : :. : : :. : : :.1.12. yp g2 (y2 jy1 . the naive simulationof a normal N (0. the devising and the optimization of the accept-reject algorithm constructed in Example 2.2. Y2jy1 . and the following Gibbs algorithm is implemented. yp g1 (y1 jy2 . De nition 7. write y = (x. : : :. It is therefore particularly easy to implement A:32] for these conditional distributions by updating successively each node of the network. ( ( ( Y2(t+1) g2 (y2 jy1t+1) . : : :. ::: ( ( +1) Yp(t+1) gp (yp jy1t+1) . y3t). please!!! Example 7. 1) till the outcome is above is suboptimal for large values of . For p > 1.12 can be overly costly if the algorithm is only to be used a few times. yp?1 gp (yp jy1 .2. a density g that satis es 7. ypt?1 ). : : :.12. y3. 2.264 THE GIBBS SAMPLER 7. y3. p.32 {Completion Gibbs sampler{ Given 1. it is easy to generalize the Gibbs sampling algorithm by a \demarginalization" or completion construction.4 Given a probability density f. yp?1 ): Y (t) to Y (t+1).2 Completion Z Algorithm A. The density g is chosen so that the full conditionals of g are easy to simulate from. ypt) ). Yp jy1 . : : :. yp ). z) / I x I z expf?(x? ) =2 g : 2 2 2 2 . simulate ( ( Y1(t+1) g1 (y1 jy2t) . z) and denote the conditional densities of g(y) = g(y1 . : : :. is called a completion of f. : : :. consider a truncated normal distribution. k Following the mixture method proposed in x2. yp ). : : :. 
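A single-site Gibbs sweep for the Ising full conditional above can be sketched as follows; the square-lattice geometry, free boundary conditions, and parameter values are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def ising_gibbs_sweep(s, H, J):
    """One full Gibbs sweep over an Ising configuration s (entries +/-1)
    on a rectangular lattice with 4-nearest-neighbour interactions and
    free boundaries. Each site is drawn from its full conditional
        P(s_i = +1 | rest) = 1 / (1 + exp(2 (H + J * sum_j s_j))),
    which matches f(s) prop. to exp(-H sum_i s_i - J sum_{(i,j) in N} s_i s_j)."""
    n, m = s.shape
    for i in range(n):
        for j in range(m):
            # sum over the (up to 4) neighbours of site (i, j)
            nb = sum(s[a, b]
                     for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= a < n and 0 <= b < m)
            p_plus = 1.0 / (1.0 + np.exp(2.0 * (H + J * nb)))
            s[i, j] = 1 if rng.uniform() < p_plus else -1
    return s

s = rng.choice([-1, 1], size=(16, 16))
for _ in range(50):
    # under this sign convention, J < 0 favours aligned neighbours
    ising_gibbs_sweep(s, H=0.0, J=-0.5)
```

Updating each node in turn in this way is exactly the "particularly easy" implementation of [A.32] mentioned above; the conditional is a logistic distribution on (s_i + 1)/2.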
z) dz = f(x) A:32] We need a simple example here{Dave Win eld? No baseball. Z g(x.1. for instance.2.1 which is a logistic distribution on (si + 1)=2. : : :. f(x) / e?(x? ) =2 I x : As mentioned in Example 2.1. : : :. ypt) ).2.5 {Truncated Normal Distribution{ As in Example 2. g(x. ypt) ). However. yp ) by Y1 jy2.7. An alternative is to use completion. as.1. there indeed exists a wide choice among the in nite number of densities for which f is a marginal density. 1994).3. In principle.3. g2. second. Student's t distribution can be generated as a mixture of a normal distribution by a 2 distribution. see A:33]. in general. . rst because there is no more results on this topic than on the choice of an optimal density g in the Metropolis{Hastings algorithm and.) The corresponding implementation of A:32] is then X (t) jz (t?1) U ( .2.1. 2 Simulate p f( j 0 ) / Z1 0 e? =2 e? 1+( ? 2 0 )2 ] =2 ?1 d . : : :.7. treated in Chapter 9. ?2 2 log(z (t?1) )]).6 {Cauchy-normal posterior distribution{ As shown in Chapter 2 (x2.1 ] GENERAL PRINCIPLES 265 (This is a special case of slice sampling. gp to converge faster to the maximum of the likelihood function.1).7. which is described in x5. The density f( j 0 ) can be written as the marginal density 1. Consider for instance the density ? =2 f( j 0 ) / 1 + e ? )2 ] : ( 0 This is the posterior distribution of the location parameter in a Cauchy distribution (see Example 8. a natural completion of f in g and of x in y. and even more with recent versions of EM such as ECM and MCEM (see Meng and Rubin 1991.1).2) which also appears in the estimation of the parameter of interest in a linear calibration model (see Example 1. expf?(x(t) ? )2=2 2 g]). In cases when such completions seem necessary (for instance. 1992.3.6. 2. or Liu and Rubin.5. We will not discuss this choice in terms of optimality. when every conditional distribution associated with f is not explicit). Missing data models. 
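The completion of the truncated normal into g(x, z), with the two uniform conditional simulations of [A.32], can be sketched as follows (a special case of slice sampling, as noted above). The numerical values of the mean, scale, and truncation point are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def truncated_normal_slice(mu, sigma, lower, T, x0=None):
    """Completion (slice) Gibbs sampler for
        f(x) prop. to exp(-(x-mu)^2 / 2 sigma^2) 1{x >= lower},
    based on g(x, z) prop. to 1{x >= lower} 1{0 <= z <= exp(-(x-mu)^2/2sigma^2)}:
        Z | x ~ U(0, exp(-(x-mu)^2 / 2 sigma^2))
        X | z ~ U on {x >= lower : (x-mu)^2 <= -2 sigma^2 log z}."""
    x = max(lower, mu) if x0 is None else x0   # any point with f(x) > 0
    out = np.empty(T)
    for t in range(T):
        z = rng.uniform(0.0, np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)))
        half_width = np.sqrt(-2.0 * sigma ** 2 * np.log(z))
        lo = max(lower, mu - half_width)
        hi = mu + half_width        # hi > lower since f(x) > 0 at the current x
        x = rng.uniform(lo, hi)
        out[t] = x
    return out

sample = truncated_normal_slice(mu=0.0, sigma=1.0, lower=2.0, T=50000)
```

For a N(0,1) truncated to [2, infinity), the theoretical mean is phi(2)/(1 - Phi(2)), about 2.373, which the sampler reproduces; note that only uniform draws are used, so no exponential or normal tail simulation is needed.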
which use maximizations of conditional parts of the likelihood like g1.1).) Note the similarity of this approach with the EM algorithm for maximizing a missing data likelihood.3. the Gibbs sampler by no means requires that the completion of f into g and of x in y = (x. z) should be related to the problem of interest. There are therefore settings such that the vector z has no meaning from a statistical point of view and is only a useful device. namely the introduction of data augmentation by Tanner and Wong (1987) (see Note x7. because there exists. p Note that the initial value of z must be chosen so that ?2 2 log(z (0) ) > .1. This kind of decomposition is also useful when the expression 1+( ? 0)2]? appears in a more complex distribution. Z (t) jx(t) U ( 0. Example 7. k The completion of a density f into a density g such that f is the marginal density of g corresponds to one of the rst (historical) appearances of the Gibbs sampling in Statistics. provide a series of examples of natural completion (see also x5. ^ i = max i : p i i = ri log(pi ) + (ni ? pi ) log(1 ? pi ). probit and log-log models.) . : : : . that is. Construct the associated Gibbs sampler and compare with the previous results.7. 2 ). 105 2 ). k?1 N (2 k?1 ? k?2 . the logit. i pi = (exp( + xi)). Yij 7. For each of these models.12. b). while 1 . 7. 4. i + j. pi = 1 ? exp(? exp( + xi )). (1992) impose the constraints 1 > : : : > 4 and 1 < : : : < 3 > : : : > 5 . Gelfand et al. nk 2 ): Determine the value of k and compare the associated Gibbs sampler with the previous implementation.53 y(Breslow and Clayton 1993) In the modelling of breast cancer cases yi D=2 where 1. pi ) of which are killed. ?2 with 2 = 2 = 5 and a = 0.4) is k j j . 2 ): k (a) Derive the Gibbs sampler associated with almost at hyperpriors on the parameters j . the posterior expectation of 7. G a(a. 11) (7. N (0. construct a Gibbs sampler and compute the expected posterior deviance.2. 2 ). : : : . log( i ) = log(di ) + xi + di . 
: : : . (b) Breslow and Clayton (1993) consider a dependent alternative where (k = 3. 2 ).7. : : : .5 ] PROBLEMS 319 according to age xi and year of birth di . N (0. N (0.5.54 y(Dobson 1983) The e ect of a pesticide is tested against its concentration xi on ni beetles. (Hint: Use the optimal truncated normal accept-reject algorithm of Example 2. an exchangeable solution is Yi P ( i ). k . (a) Give the Gibbs sampler for this model. Three generalized linear models are in competition: exp( ) pi = 1 + exp(+ +xix ) . that is.4) k j 1 . b = 1. . j 6= k N ( k . 2 N (0.55 y(Spiegelhalter et al. (c) An alternative representation of (7. j = ij = i j N ( ij . 1996) Consider a standard Anova model (i = 1.5. 2 ). 5) n n X^ X ! i=1 i ? i=1 i . Ri B(ni . 6 ] NOTES 321 7. s >=< r.7. is de ned by < Tr. and Polson (1996) who work with a functional representation of the transition operator of the chain. Schervish and Carlin (1992) de ne the measure as the measure with density 1=g with respect to the Lebesgue measure. T s > 11 This section presents some results on the analysis of Gibbs sampling algorithms from a functional analysis point of view. It may be skipped on a rst reading since it will not be used in the book and remains at a rather theoretical level.6. r satis es jcj jr(y)j dy = = = Z Z jTr(y)j dy Z Z r(y0 ) K (y0.1) K 2 (y. Proof. where g is the density of the stationary distribution of the chain.1.2 Geometric Convergence While11 the geometric ergodicity conditions of Chapter 6 do not apply for Gibbs sampling algorithms. y) dy0 dy ZZ jr(y0 )j K (y0.7.6) with stationary measure g. y0 ) is the transition kernel (7. s >= r(y) s(y) (dy) and de ne the operator T on L2 ( ) by (Tr)(y) = Z r(y0 ) K (y0 . where K (y. which ensures both the compacity of T and the geometric convergence of the chain (Y (t) ). y) g(y0 ) dy0 = g(y). The other eigenvalues of T are characterized by the following result: Lemma 7. Consider an eigenvector r associated with the eigenvalue c. 
although these are no so easy to implement in practice. g is an eigenvector associated with the eigenvalue 1. y0 ) dy dy < 1. such that Tr = cr. They then de ne a scalar product on L2 ( ) Z < r. The adjoint operator associated with T . Wong and Kong (1995). Since Z K (y0 . y) dy0 dy Z jr(y0 )j dy0 and therefore jcj 1. some su cient conditions can also be found in the literature.e. Liu. In this setting.6.6.1 The eigenvalues of T are all within the unit disk of C. y) g(y0 ) (dy0 ). The approach presented below is based on results by Schervish and Carlin (1992). . i. T . 2 The main requirement in Schervish and Carlin (1992) is the Hilbert-Schmidt condition ZZ (7. beta) lambda i] theta i] * t i] x i] dpois(lambda i]) f g alpha beta dexp(1. 1995a).6. it has been designed to take advantage of the possibilities of the Gibbs sampler in Bayesian analysis.c) at the MRC Biostatistics Unit in Cambridge. Thomas.6 a review of ner convergence properties. y0 ) is not explicit. which represents a normal modelling with mean 0 and precision (inverse variance) 0:0001.4). A major restriction of this software is the use of the conjugate priors or at least log-concave distributions for the Gibbs sampler to apply..6.1.1) is moreover particularly di cult to check when K (y. like dnorm(0. which also allows for a large range of transforms. out. England.3 The BUGS software The acronym BUGS stands for Bayesian inference using Gibbs sampling. However. .7. BUGS includes a language which is C or S-plus like and involves declarations about the model. The Hilbert-Schmidt condition (7. p.1.6. the batch size being also open. We still conclude with the warning that the practical consequences of this theoretical evaluation seem negligible. we also consider that the eigenvalues of the operators T and F and the convergence of norms kgt ? gkTV to 0 are only marginally relevant in the study of the convergence of the average T 1 X h(y ) (7. At last.18. 
where the weight wk depend on h and F and the k are the (decreasing) eigenvalues of F (with 1 = 1).9). even though there exists a theoretical connection between the 2 asymptotic variance h of (7. the model and priors are de ned by for (i in 1:N) theta i] dgamma(alpha. the data and the prior speci cations.6) and the spectrum ( k ) of F through 2 X wk 1 + k 1 + 2 h= 1? k 1? 2 k 2 T t=1 t (see Besag and Green 1992 and Geyer 1992).0001).6) to IE h(Y )]. The output of BUGS is a table of the simulated values of the parameters after an open number of warmup iterations.6.0.1. In fact. For instance. In addition.326 THE GIBBS SAMPLER 7. Most standard distributions are recognized by BUGS (21 are listed in Spiegelhalter et al. more complex distributions can be handled by discretization of their support and assessment of the sensitivity to the discretization step. 1995b. improper priors are not accepted and must be replaced by proper priors with small precision. Best and Gilks (1995a. 7.0) (see Spiegelhalter et al. setups where eigenvalues of these operators are available almost always correspond to case where Gibbs sampling is not necessary (see. e.b. This software has been elaborated by Spiegelhalter. Example 7.0) dgamma(0. stat. for single or multiple levels in the prior modelling.6. As shown by its name.g. BUGS also recognizes a series of commands like compile. for the benchmark nuclear pump failures dataset of Example 7. data. non-parametric smoothing. that is Spring 1998!.56.6 ] NOTES 327 The BUGS manual (Spiegelhalter et al.cam. the authors have written an extended and most helpful example manual (Spiegelhalter et al.6. c). including meta-analysis. latent variable.) The BUGS software is also compatible with the convergence diagnosis software CODA presented in Note x8.uk/bugs for a wide variety of platforms.ac.4. model selection and geometric modelling.7. (Some of these models are presented in Problems 7. 1995b. survival analysis.mrc bsu. 
The BUGS manual (Spiegelhalter et al. 1995a) is quite informative and well-written. In addition, the authors have written an extended and most helpful example manual (Spiegelhalter et al. 1995b,c), which exhibits the ability of BUGS to deal with an amazing number of models. At the present time,14 the BUGS software is available as freeware on the Web site http://www.mrc-bsu.cam.ac.uk/bugs for a wide variety of platforms.

the numerous methods of control proposed in the literature. as in the reviews of Cowles and Carlin (1994) and Brooks and Roberts (1995). or even geometrically ergodic.) From a general point of view. (In cases where this assumption is unrealistic. and the storage of the last simulation i(T ) in each chain.330 DIAGNOSING CONVERGENCE 8.) On a general basis. f is only the limiting distribution of (t) . (Note that.2. discussed in x8. on the opposite. Indeed. with lengthy stays in each of these regions (e. slow exploration of the support of f and strong correlations between the (t) 's are rather the issues at stake. In fact. therefore to act as if the chain is already in its stationary regime from the start. the original implementation of the Gibbs sampler was based on the generation of n independent initial values i(0) (i = 1. even when (0) f. notwithstanding the starting distribution. it seems to us that this approach (i) to convergence issues is not particularly fruitful. This may not be the case for high dimensional setups or complex structures where the algorithm is initialized at random. (i) appears as a minimum requirement on a algorithm supposed to approximate simulation from f! For instance. stationarity is therefore only achieved asymptotically and the i(T ) 's are distributed from f T .8. depending on the transition kernel chosen for the algorithm.2. 0 1 We consider a standard statistical setup where the support of f is approximately known. there are methods like the exact simulation of Propp and Wilson (1996). but we do think that convergence to f per se is not the major issue for most MCMC algorithms. and a stationarity test may be instrumental in detecting such di culties. First. in practice.1. .1) T t=1 to IEf h( )] for an arbitrary function h. the question of convergence to the limiting distribution is not really relevant. 0. since. nt) ) to iid-ness.g. where stationarity can be rigorously achieved from the start. as we will see in x8. 
if 0 is the (initial) distribution of (0) . More precisely.1 (ii) convergence of the empirical average T 1 X h( (t) ) (8. n). . Strictly speaking. This is not to say that stationarity should not be tested at all. it is often possible1 to consider the initial value (0) as distributed from the distribution f. Indeed. : : :. we only consider a single realization (or path) of the chain ( (t) ). since it implies discarding most of the generated variables with little justi cation with regards to points (i) and (ii). from a theoretical point of view. ( ( (iii) convergence of a sample ( 1t) .5. the exploration of the complexity of f by the chain ( (t) ) can be more or less lengthy. the chain may be slow to explore the di erent regions of the support of f. in the sense that the chain truly produced by the algorithm often behaves like a chain initialized from f.6. this approach thus requires a corresponding stopping rule for the correct determination of T (see for instance Tanner and Wong 1987). the modes of the distribution f). The second type (ii) of convergence is deemed to be the most relevant in the implementation of MCMC algorithms. this method induces a waste of resources. If.) This way of evacuating the rst type of control may appear rather cavalier. g. for convergence assessment.1).1.2. in some settings. =T t=1 var( 1 ) var( k ): k 0 . subsampling may be bene cial (see. This technique.2).2. subsampling is justi ed. = Tk k T 1 1 the variance of satis es t=1 =1 1 Proof. Brooks and Roberts (1995) relate this convergence to the mixing speed of the chain. .3). the motivation for subsampling is obvious.1. (t) )|which also justi es RaoBlackwellization (see x7.4. the above covariance oscillates with t.. 1992). Robert. the goal is to produce variables i which are (quasi-)independent. if the chain ( (t) ) satis es an interleaving property (see x7.1 describes how Raftery and Lewis (1992a. 
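As a small numerical illustration of type (ii) convergence, the running version of the ergodic average (8.1) can be monitored along the chain. The random-walk Metropolis sampler and its N(0, 1) target below are illustrative choices for this sketch, not taken from the text.

```python
import numpy as np

def running_average(h_values):
    """Running ergodic averages (1/T) * sum_{t<=T} h(theta^(t))."""
    h = np.asarray(h_values, dtype=float)
    return np.cumsum(h) / np.arange(1, len(h) + 1)

# Random-walk Metropolis chain targeting a N(0, 1) distribution.
rng = np.random.default_rng(0)
x, chain = 0.0, []
for _ in range(20000):
    prop = x + rng.normal()
    # log acceptance ratio for the standard normal target
    if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
        x = prop
    chain.append(x)

avg = running_average(chain)
print(avg[-1])  # should stabilize near E_f[theta] = 0
```

Plotting `avg` against t is the usual way of judging whether the average has settled; here only its final value is printed.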
The purpose of the control is therefore to determine whether the chain has indeed exhibited all the facets of f (for instance, all the modes), and the relevant issue at this stage is to determine a minimal value for T which validates the approximation of E_f[h(θ)] by (8.1). A formal version of convergence control in this setup is the convergence assessment described later in this chapter.

While the solution based on parallel chains mentioned above is not satisfactory, an alternative is to use sub-sampling (or batch sampling) to reduce correlation between the successive points of the Markov chain. This technique, which is customarily used in numerical simulation (see for instance Schmeiser 1989), subsamples the chain (θ^(t)) with a batch size k, considering only the values η^(t) = θ^(kt). When the covariance cov_f(θ^(0), θ^(t)) is decreasing monotonically with t, the motivation for subsampling is obvious. However, checking for the monotone decrease of cov_f(θ^(0), θ^(t)) -- which also justifies Rao-Blackwellization (see §7.3) -- is not always possible and, in some settings, the above covariance oscillates with t, which complicates the choice of k. Raftery and Lewis (1992a,b) estimate this batch size k, as described later in this chapter.

The third type of convergence (iii) takes into account independence requirements for the simulated values: rather than approximating integrals like E_f[h(θ)], the goal is to produce variables θ_i which are (quasi-)independent. In particular, if the chain (θ^(t)) satisfies an interleaving property (see §7.4), subsampling is justified, and in some settings subsampling may be beneficial (see, e.g., Robert, Ryden and Titterington 1998). Note, however, that sub-sampling necessarily leads to losses in efficiency with regards to the second convergence goal: as shown by MacEachern and Berliner (1994), it is always preferable to use the whole sample for the approximation of E_f[h(θ)].

Lemma 8.1.1 Consider h ∈ L²(f) and a Markov chain (θ^(t)) with stationary distribution f. Define

   δ_1 = (1/(Tk)) Σ_{t=1}^{Tk} h(θ^(t))   and   δ_k = (1/T) Σ_{t=1}^{T} h(θ^(kt)).

For every k > 1, the variance of δ_1 satisfies

   var(δ_1) ≤ var(δ_k).
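The inequality of Lemma 8.1.1 -- averaging the full chain never has larger variance than averaging a subsample -- can be checked by brute force on replicated chains. The AR(1) model and all constants below are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, T, k, reps = 0.5, 2000, 10, 500
sd_stat = 1.0 / np.sqrt(1.0 - rho**2)   # stationary standard deviation

full_means, sub_means = [], []
for _ in range(reps):
    # AR(1) chain x_t = rho * x_{t-1} + e_t, started in stationarity
    x = np.empty(T)
    x[0] = sd_stat * rng.normal()
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.normal()
    full_means.append(x.mean())       # delta_1: every draw is used
    sub_means.append(x[::k].mean())   # delta_k: one draw in k is kept

var_full = np.var(full_means)
var_sub = np.var(sub_means)
print(var_full, var_sub)  # var_full is the smaller of the two
```

With positive autocorrelation the subsampled average throws away information, so its variance across replications is visibly larger.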
Proof. Define

   δ_k^(i) = (1/T) Σ_{t=1}^{T} h(θ^(tk−i)),   i = 0, 1, ..., k − 1,

as the drifted versions of δ_k = δ_k^(0). The average δ_1 can then be written under the form

   δ_1 = (1/k) Σ_{i=0}^{k−1} δ_k^(i),

and therefore, from the Cauchy–Schwarz inequality and since, under stationarity, var(δ_k^(i)) = var(δ_k) for every i,

   var(δ_1) = var( (1/k) Σ_{i=0}^{k−1} δ_k^(i) )
            = var(δ_k)/k + (1/k²) Σ_{i≠j} cov(δ_k^(i), δ_k^(j))
            ≤ var(δ_k)/k + (1/k²) Σ_{i≠j} [ var(δ_k^(i)) var(δ_k^(j)) ]^{1/2}
            = var(δ_k)/k + ((k − 1)/k) var(δ_k)
            = var(δ_k).

While the ergodic theorem guarantees the convergence of this average from a theoretical point of view, it does not say how fast this convergence occurs in a given problem. In the remainder of the chapter, besides distinguishing between convergence to stationarity (§8.2) and convergence of the average (§8.3), we also distinguish between the methods involving the simulation in parallel of M independent chains (θ_m^(t)) (1 ≤ m ≤ M) and those based on a single "on-line" chain; we consider independence issues only in cases where they have bearing on the control of the chain, as in renewal theory (see §8.3).

The motivation of the former is intuitively sound: by simulating several chains, variability and dependence on the initial values are reduced, and it should be easier to control convergence to the stationary distribution by comparing the estimations of quantities of interest on the different chains. The dangers of a naive implementation of this principle should, however, be obvious. For instance, an initial distribution which is too concentrated around a local mode of f does not contribute significantly more than a single chain to the exploration of the specificities of f, since the chains will presumably stay in the neighborhood of the starting point with higher probability. On the other hand, good performances of these parallel methods require a sufficient a priori knowledge of the distribution f, in order to construct an initial distribution on the θ_m^(0)'s which takes into account the specificities of f (modes, shape of high density regions, etc.). Moreover, Geyer (1992) points out that this robustness is illusory from several points of view, namely (a) that the slower chain governs convergence and (b) that the choice of the initial distribution is quintessential to guarantee that the different chains are well-dispersed: a unique chain with MT observations and a slow rate of mixing is more likely to get closer to the stationary distribution than M chains of size T, so that slow algorithms, like Gibbs sampling used in highly non-linear setups, usually favor single chains. In fact, even though they seem to propose an evaluation of convergence which is more robust than for single-chain methods, the elaborate developments of Gelman and Rubin (1992), Liu, Liu and Rubin (1995) and Johnson (1996) can be similarly criticized. Conversely, a single chain may present probabilistic pathologies which are more often avoided by parallel chains, and single-chain methods suffer more severely from the defect that "it only sees where it went".

More generally, let us agree with many authors that it is somehow illusory to aim at controlling the flow of a Markov chain and assessing its convergence behavior from a single realization of this chain, as in Gelfand and Smith (1990). There always are settings (i.e., transition kernels) which, for most realizations, invalidate an arbitrary indicator, whatever its theoretical validation, and the randomness inherent to the nature of the problem prevents any categorical guarantee of performance. (See Tierney 1994 and Raftery and Lewis 1996 for other criticisms, and Robert 1997a for illustrations.) The setting is certainly no better for the Markov chain methods, and they should be used with appropriate caution. Far from being a failure acknowledgment, it is simply inconceivable, in the light of recent results, to envision automated stopping rules: the criticisms presented in the wake of the techniques proposed below only highlight the incomplete aspect of each method, and therefore do not aim at preventing their utilization, but rather at warning against a selective interpretation of their results. An additional practical drawback of parallel methods is that they require a modification of the original MCMC algorithm to deal with the processing of parallel outputs.
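The between-/within-chain comparison underlying Gelman and Rubin's (1992) method can be sketched as follows. This is a simplified scalar version of their shrink factor, and the artificial chains are only meant to show the two regimes (values near 1 for well-mixed chains, large values for chains stuck in separate regions).

```python
import numpy as np

def gelman_rubin(chains):
    """Simplified potential scale reduction factor for M parallel chains
    of equal length T, for one scalar quantity of interest."""
    chains = np.asarray(chains, dtype=float)
    M, T = chains.shape
    means = chains.mean(axis=1)
    B = T * means.var(ddof=1)                # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_plus = (T - 1) / T * W + B / T       # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(2)
# Four chains already sampling the same distribution: factor close to 1.
mixed = rng.normal(size=(4, 1000))
# Four chains trapped near different modes (offsets are artificial).
stuck = rng.normal(size=(4, 1000)) + np.array([[-3.0], [0.0], [1.0], [4.0]])
print(gelman_rubin(mixed), gelman_rubin(stuck))
```

The diagnostic only compares what the chains have visited, so it inherits the "it only sees where it went" caveat discussed above.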
As noted by Cowles and Carlin (1994), and as Brooks and Roberts (1995) also stress, the prevalence of a given control method strongly depends on the model and on the inferential problem under study. The crux of the difficulty is actually similar to Statistics at large, where the uncertainty due to the observations prohibits categorical conclusions and final statements. It is therefore even more crucial to develop robust and general evaluation methods which extend and complement the present battery of stopping criteria, the goal being nowadays to develop "convergence control spreadsheets", in the sense of computer graphical outputs which could present, through several graphs, different features of the convergence properties of the chain under study (see Cowles and Carlin 1996, Cowles and Vines 1996). To conclude, these remarks only aim at warning the reader(2) about the relative value of the indicators developed below.

8.2 Monitoring Convergence to the Stationary Distribution

8.2.1 Graphical Methods

A natural empirical approach to convergence control is to draw pictures of the output of simulated chains, in order to detect deviant or non-stationary behaviors. A first plot is to draw the sequence of the θ^(t)'s against t. However, this plot is only useful for detecting gross departures from stationarity, in the sense that the part of the support of f which has not been visited by the chain at time T is almost impossible to detect.

(2) To borrow from the injunction of Hastings (1970): "even the simplest of numerical methods may yield spurious results if insufficient care is taken in their use (...)".
P 8.19 (Problem 8.18 continued) Based on m parallel chains (θ_j^(t)) (j = 1, ..., m), define

   I_t = 1/(m(m−1)) Σ_{i≠j} ξ_t^{ij}   and   J_t = (1/m) Σ_{i=1}^{m} ξ_t^{ii},

where ξ_t^{ij} is built from the transition kernel values K(θ_i^(0), θ_j^(t−1)).
(a) Show that, for every initial distribution on the θ_j^(0)'s, E[I_t] = E[J_t] = 1.
(b) Show that var(I_t) ≤ var(J_t)/(m − 1).

P 8.20 (Raftery and Lewis 1992a) Deduce the bound (8.3) on T from the normal approximation of δ_T.

P 8.21 (Raftery and Lewis 1996) Consider the logit model

   log( p_i / (1 − p_i) ) = μ + α_i,   α_i ~ N(0, σ²),   σ^{−2} ~ Ga(0.5, 0.2).

Study the convergence of the associated Gibbs sampler and the dependence on the starting value. Apply the various convergence controls to this case.

P 8.22 (Dellaportas 1994) Show that

   E_g[ min(1, f(x)/g(x)) ] = 1 − (1/2) ∫ |f(x) − g(x)| dx.

Derive an estimator of the L1 distance between the stationary distribution and the distribution at time t.

P 8.23 (Tanner 1996) Show that, if θ^(t) ~ π_t and if the stationary distribution is the posterior density associated with f(x|θ) and π(θ), the weight

   ω_t = f(x | θ^(t)) π(θ^(t)) / π_t(θ^(t))

converges to the marginal m(x). Derive the Gibbs stopper of §8.6.1 by proposing an estimate of π_t.

P 8.24 For the witch's hat distribution of Example 8.4, find a set of parameters (δ, σ, y) for which the mode at y takes many iterations to be detected.

P 8.25 Propose an estimator of the variance of S_T (when it exists) and derive a convergence diagnostic.

P 8.26 Check whether the importance sampling estimator S_T is (a) available and (b) with finite variance for the examples of Chapter 7.

P 8.27 For the model of Example 8.3, show that a small set is available in the (ξ_j) space and derive the corresponding renewal probability.

P 8.28 Show that the sequence (η^(n)) defined in §8.2 is a Markov chain. (Hint: Show that the conditional probability P(η^(n) = i | η^(n−1) = j, η^(n−2) = ℓ, ...) can be written as a conditional expectation of II_{A_i} given θ^(τ_{n−1}) ∈ A_j, θ^(τ_{n−2}) ∈ A_ℓ, ..., and apply the strong Markov property.)

8.6 Notes

8.6.1 Other Stopping Rules

Following Tanner and Wong (1987), who first approximated the distribution of θ^(t) to derive a convergence assessment, Ritter and Tanner (1992) propose a control method based on the distribution of a weight ω_t. They develop a general approach (called the Gibbs stopper) where, if K(θ, θ') denotes the transition kernel, the distribution

   f_t(θ) = ∫ K(θ', θ) f_{t−1}(θ') dθ'

is approximated by

   f̂_t(θ) = (1/m) Σ_{j=1}^{m} K(θ^(t−j), θ),

and the weight is taken to be

   w_t = f(θ^(t)) / f̂_t(θ^(t)).

The theoretical foundations of this approximation are limited, since the θ^(t−j)'s are not distributed from f_{t−1}; a different implementation can be based on parallel chains (θ_j^(t)). In some settings, the approximation f̂_t can be computed by Rao-Blackwellization as in (8.2), and Brooks and Roberts (1995) cover the extension to Metropolis–Hastings algorithms (see also Zellner and Min 1995, for a similar approach). Note that this evaluation aims more at controlling the convergence in distribution of (θ^(t)) to f (first type of control) than at measuring the mixing speed of the chain (second type of control).

Ritter and Tanner (1992) propose to use the weight w_t through a stopping rule based on the evolution of the histogram of the w_t's, until these weights are sufficiently concentrated near a constant, which is the normalizing constant of f. However, the example treated in Tanner (1996) does not indicate how the quantitative assessment of concentration is operated, since it uses characteristics of the distribution f which are usually unknown to calibrate this convergence of the weights w_t. An additional difficulty is related to the computation of K and of its normalizing constant, which can induce a considerable increase of the computation time. Cowles and Carlin (1994) note moreover that the criterion is sensitive to the choice of m in f̂_t, in the sense that large values of m lead to histograms which are much stabler and therefore indicate a seemingly faster convergence. They propose to use the criterion differently, via the empirical variance of the weights w_t on parallel chains, till it converges to 0, although this version does not enjoy a stronger theoretical validity.

8.6.2 Spectral Analysis

As already mentioned in Hastings (1970), the chain (θ^(t)), or a transformed chain (h(θ^(t))), can be considered from a time series point of view (see Gourieroux and Monfort 1990 or Brockwell and Davis 1996, for an introduction). For instance, under an adequate parameterization, we can model (θ^(t)) as an ARMA(p, q) process, estimating the parameters p and q and determining the size t_0 of the training sample, and then use partially empirical convergence control methods. A global criticism of this approach, which applies to all the methods using a non-parametric intermediary step to estimate a parameter of the model, is that they necessarily induce losses in efficiency in the processing of the problem (since they are based on a less constrained representation of the model).

Geweke (1992) proposed to use the spectral density of h(θ^(t)),

   S_h(w) = Σ_{t=−∞}^{+∞} cov( h(θ^(0)), h(θ^(t)) ) e^{itw},

where i denotes the complex square root of −1, that is, e^{inw} = cos(nw) + i sin(nw). The spectral density relates to the asymptotic variance of (8.1), since the limiting variance γ_h² is given by γ_h² = S_h(0); it can be estimated by non-parametric methods like the kernel method (see Bosq and Lecoutre 1988). Geweke (1992) takes the first T_A observations and the last T_B observations from a sequence of length T to derive

   δ_A = (1/T_A) Σ_{t=1}^{T_A} h(θ^(t)),   δ_B = (1/T_B) Σ_{t=T−T_B+1}^{T} h(θ^(t)),

and the estimates σ_A² and σ_B² of S_h(0) based on both subsamples, respectively. Asymptotically (in T), the difference

   √T ( δ_A − δ_B ) / √( σ_A²/τ_A + σ_B²/τ_B )

is a standard normal variable (with T_A = τ_A T, T_B = τ_B T and τ_A + τ_B < 1), from which a convergence diagnostic can be derived. The values suggested by Geweke (1992) are τ_A = 0.1 and τ_B = 0.5. The calibration of non-parametric estimation methods (such as the choice of the window in the kernel method) is always delicate, however, since it is not standardized. We therefore refer to Geweke (1992) for a more detailed study of this method.
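A rough version of Geweke's diagnostic can be coded as follows. Here the spectral density estimates of S_h(0) are replaced by a cruder batch-means variance estimate, so the variance estimator and all tuning constants are assumptions of this sketch rather than Geweke's exact recipe.

```python
import numpy as np

def geweke_z(chain, frac_a=0.1, frac_b=0.5, n_batch=20):
    """z-score comparing the means of the first frac_a and last frac_b
    fractions of the chain, with batch-means variance estimates."""
    chain = np.asarray(chain, dtype=float)
    T = len(chain)
    first = chain[: int(frac_a * T)]
    last = chain[T - int(frac_b * T):]

    def mean_and_se2(x):
        batch_means = np.array([b.mean() for b in np.array_split(x, n_batch)])
        # variance of the overall mean, assuming batches are long enough
        # for their means to be roughly independent
        return x.mean(), batch_means.var(ddof=1) / n_batch

    ma, va = mean_and_se2(first)
    mb, vb = mean_and_se2(last)
    return (ma - mb) / np.sqrt(va + vb)

# Stationary AR(1) chain: the z-score should look standard normal.
rng = np.random.default_rng(3)
x = np.empty(20000)
x[0] = rng.normal()                      # stationary start (variance 1 here)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + 0.6 * rng.normal()
print(geweke_z(x))
```

Large absolute z-scores (say above 2 or 3) would suggest that the early part of the chain has a different mean than the late part, i.e., non-stationarity.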
Brooks and Roberts (1995) repeat these criticisms on the difficult interpretation of histograms and the influence of the size of windows.

8.6.3 Further Discretizations

Garren and Smith (1993) use the same discretization z^(t) as Raftery and Lewis (1992a,b). They show that, under some conditions, there exist γ and 1 > |λ_2| > |λ_3| such that the quantity

   ϱ_t = E[z^(t)] = P(θ^(t) ∈ A) = ϱ + γ λ_2^t + O(|λ_3|^t),

with ϱ the limiting value of ϱ_t. (These conditions are related with the eigenvalues of the functional operator associated with the transition kernel of the original chain (θ^(t)) and with Hilbert–Schmidt conditions.) In their study of the convergence of ϱ_t to ϱ, Garren and Smith (1993) propose to use m parallel chains (θ_ℓ^(t)), with the same initial value θ^(0) (1 ≤ ℓ ≤ m). They then approximate ϱ_t by

   ϱ̂_t = (1/m) Σ_{ℓ=1}^{m} II( θ_ℓ^(t) ∈ A ),

and derive some estimations of ϱ, γ and λ_2 from the minimization of

   Σ_{t=n_0+1}^{T} ( ϱ̂_t − ϱ − γ λ_2^t )²,

where n_0 and T need to be calibrated. When compared with the original binary control method, the approach of Garren and Smith (1993) does not require a preliminary evaluation of (γ, λ_2), but it is quite costly in simulations. Moreover, the expansion of ϱ_t around ϱ is only valid under conditions which cannot be verified in practice. When T is too high, the estimators of γ and λ_2 are unstable, and Garren and Smith (1993) suggest to choose T such that the estimates of γ and λ_2 remain stable.

Other approaches based on spectral analysis are given in Heidelberger and Welch (1988) and Schruben, Singh and Tierney (1983), which test the stationarity of the sequence by Kolmogorov–Smirnov tests (see Cowles and Carlin 1994 and Brooks and Roberts 1995, for a discussion). Note that Heidelberger and Welch (1983) test stationarity via a Kolmogorov–Smirnov test based on

   B_T(s) = ( S_{[Ts]} − [Ts] S̄_T ) / ( T φ̂(0) )^{1/2},   0 ≤ s ≤ 1,

where

   S_t = Σ_{r=1}^{t} h(θ^(r)),   S̄_T = S_T / T,

and φ̂(0) is an estimate of the spectral density at 0. For large T, B_T is approximately a Brownian bridge and can be tested as such. Their method thus provides the theoretical background to Yu and Mykland's (1993) CUSUM criterion (see §8.3), which is used in some software (see Best, Cowles and Vines 1995, for instance).

8.6.4 The CODA Software

While the methods presented in this chapter are at various stages of their development, some of the most common techniques have been aggregated in an S-Plus software called CODA, developed by Best, Cowles and Vines (1996). Originally intended as an output processor for the BUGS software (see Chapter 7), this software can also be used to analyze the output of Gibbs sampling and Metropolis–Hastings algorithms. The techniques selected by Best et al. (1996) are mainly those described in Cowles and Carlin (1996), that is, the convergence diagnostics of Gelman and Rubin (1992), Geweke (1992), Heidelberger and Welch (1983), and Raftery and Lewis (1992a), plus plots of autocorrelation for each variable and of cross-correlations between variables. The MCMC output must however be presented in a very specific S-plus format to be processed by CODA.

8.6.5 Perfect Simulation

Although the following imperative is rather in opposition with the theme of the previous chapters, in the sense that MCMC methods have been precisely introduced to overcome the difficulties of simulating directly from a given distribution, several authors have proposed devices to sample directly from the stationary distribution f, that is, algorithms such that θ^(0) ~ f(x), at varying computational costs(11) and for specific distributions and/or transitions. It is essential, from both points of view of (a) speeding up convergence and (b) controlling convergence, to start the Markov chain in its stationary regime, because the bias caused by the initial value/distribution may be far from negligible; otherwise, in order to evaluate the necessary computing time or the mixing properties of the chain, one needs to know "how long is long enough", as put by Fill (1996). If it becomes feasible to start from the stationary distribution, the convergence issues are reduced to the determination of an acceptable batch size k, such that θ^(0), θ^(k), θ^(2k), ... are nearly independent, and to the accuracy of an ergodic average.

(11) In most cases, the computation time required to produce θ^(0) exceeds by orders of magnitude the computation time of a θ^(t) from the transition kernel.

The appeal of these methods for mainstream statistical problems is yet unclear. So far, the main bulk of the work on perfect sampling deals with finite state spaces; this is due, for one thing, to the greater simplicity of these spaces and, for another, to statistical physics motivations related to the Ising model (see Chapter 7). Murdoch and Green (1997) have, however, shown that some standard examples in continuous settings, like the nuclear pump failure model of Example 7.18, do allow for perfect sampling. Note also that, in settings where the Duality Principle of §7.2 applies, the stationarity of the finite chain obviously transfers to the dual chain, even if the latter is continuous.

The method proposed by Propp and Wilson (1996) is called coupling from the past (CFTP). In a finite state space X of size k, it runs in parallel k chains corresponding to all possible starting points in X, farther and farther back in time, till all chains take the same value (or coalesce) at time 0 (or earlier).
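For a small finite state space, coupling from the past can be implemented directly. The 3-state transition matrix below is an arbitrary illustration, and the inverse-cdf coupling (one shared uniform per time step) is one of several valid coupling choices.

```python
import numpy as np

def cftp(P, rng):
    """Coupling from the past (Propp and Wilson 1996) on a finite state
    space: run one copy of the chain from every state, from time -T up to
    time 0, reusing the same uniforms, and double T until all copies
    coalesce; the common value at time 0 is then an exact draw from the
    stationary distribution."""
    k = len(P)
    cdf = np.cumsum(P, axis=1)
    us = []          # us[t-1] is the uniform driving the move at time -t
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.uniform())
        states = np.arange(k)
        for t in range(T, 0, -1):            # times -T, ..., -1
            u = us[t - 1]
            states = np.array([np.searchsorted(cdf[s], u) for s in states])
        if np.all(states == states[0]):
            return int(states[0])
        T *= 2

# Arbitrary 3-state chain (not from the text).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
rng = np.random.default_rng(4)
draws = [cftp(P, rng) for _ in range(4000)]
freq = np.bincount(draws, minlength=3) / len(draws)

# Stationary distribution pi solving pi P = pi, for comparison.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
print(freq, pi)   # the two vectors should agree up to Monte Carlo error
```

Reusing the same uniforms when extending further into the past is what makes the output exact; regenerating them would bias the draw.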
The denomination of perfect sampling for such techniques was coined by Kendall (1996), replacing the exact sampling terminology of Propp and Wilson (1996) with a more triumphant qualification.

CHAPTER 9

Assimilation and Application: Missing Data Models

Version 1.2, February 27, 1998

9.1 Introduction

Missing data models (introduced in §5.1) seem to call naturally for simulation. However, this intuition has taken a while to formalize correctly, and to go further than mere ad hoc solutions with no true theoretical justification. It is only with the EM algorithm that Dempster et al. (1977) (see §5.3) came up with a rigorous and general formulation of statistical inference through completion of missing data, in order for it to replace the missing data part, so that one can proceed with a "classical" inference on the complete model. This algorithm nonetheless requires a high degree of analyticity to compute the expectation (E) step and therefore cannot be used in all settings. As mentioned in §5.3, stochastic versions of EM (Broniatowski, Celeux and Diebolt 1983, Celeux and Diebolt 1985, Wei and Tanner 1990, Qian and Titterington 1991, Lavielle and Moulines 1997) have come closer to simulation goals by replacing the E step with a simulated completion of missing data, without however preserving the whole range of EM convergence properties. See Everitt (1984), Little and Rubin (1987), Tanner (1991) or MacLachlan and Krishnan (1996) for deeper perspectives in this domain. This chapter mainly aims at illustrating the potential of Markov Chain Monte Carlo algorithms in the Bayesian analysis of missing data models.

9.2 First Examples

9.2.1 Discrete Data Models

Numerous settings (surveys, quality control, epidemiological studies, medical experiments, design of experiment, etc.) produce a grouping of the original observations in less informative categories, often for reasons beyond the control of the experimenter.
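As a toy numerical illustration of inference by completion of missing data, consider EM for right-censored exponential observations. The censoring point, sample size and true rate below are all illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(5)
lam_true, n, c = 2.0, 2000, 0.6        # true rate, sample size, censoring point
y = rng.exponential(1.0 / lam_true, size=n)
x = np.minimum(y, c)                    # what is actually observed
obs = y <= c                            # True when y_i is fully observed

# E-step: by memorylessness, a censored exponential observation has
# conditional expectation c + 1/lambda; M-step: exponential MLE on the
# completed sample.
lam = 1.0                               # arbitrary starting value
for _ in range(200):
    completed = np.where(obs, x, c + 1.0 / lam)   # E-step
    lam = n / completed.sum()                      # M-step

# The EM fixed point equals the closed-form censored-data MLE.
lam_mle = obs.sum() / x.sum()
print(lam, lam_mle)
```

The point of the example is that the E-step "fills in" the coarsened part of the data, after which the complete-data estimator applies unchanged, which is exactly the completion logic used throughout this chapter.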
This chapter does not intend to provide the reader with an exhaustive treatment of these models; it must rather be understood as a sequence of examples on a common theme.

Example 9.2.1 (Rounding effect) Heitjan and Rubin (1991) consider some random variables y_i ~ Exp(λ), grouped in observations x_i (1 ≤ i ≤ n) according to the procedure

   x_i = 20 [y_i / 20]  if g_i = 1,   x_i = [y_i]  otherwise,   g_i | y_i ~ B(1, Φ(−α_1 + α_2 y_i)),

where Φ is the cdf of the normal distribution N(0, 1) and [a] denotes the integral part of a. This model describes, for instance, the approximation bias to the inferior pack in a study on smoking habits, under the assumption that this bias increases with the daily consumption of cigarettes y_i. Heitjan and Rubin (1991) (see also Rubin 1987) call the resulting process data coarsening and study the effect of data aggregation on the inference; in particular, they examine whether the grouping procedure has an effect on the likelihood and thus must be taken into account.

If the g_i's are known, the completion of the model is straightforward. Otherwise, the conditional distribution

   π(y_i | x_i, λ, α_1, α_2) ∝ e^{−λ y_i} [ I_{[x_i, x_i+1]}(y_i) Φ(α_1 − α_2 y_i) + I_{[x_i, x_i+20]}(y_i) Φ(−α_1 + α_2 y_i) ]

is useful for the completion of the model through Gibbs sampling. This distribution can be simulated directly by an accept-reject algorithm (which requires a good approximation of Φ), or by introducing an additional artificial variable t_i such that

   t_i | y_i ~ N_+(α_1 − α_2 y_i, 1, 0),

where N_+(μ, 1, 0) denotes the normal distribution N(μ, 1) truncated to IR_+ (see Example 2.12); the two distributions above can then be completed by the corresponding conditional distribution of y_i given (t_i, x_i).

When several variables are studied simultaneously in a sample, each corresponding to a grouping of individual data, the result is a contingency table. If the context is sufficiently informative to allow for a modeling of the individual data, the completion of the contingency table (by reconstruction of the individual data) may facilitate inference about the phenomenon under study.
Example 9.2.2 (Lizard habitat) Schoener (1968) studies the habitat of lizards, in particular the relation between height and diameter of the branches where they sleep. Table 9.1 provides the information available on these two parameters, with n_11 = 32, n_12 = 11, n_21 = 86 and n_22 = 35.

Table 9.1. Observation of two characteristics of the habitat of 164 lizards. (Source: Schoener, 1968.)

                        Diameter (inches)
                        <= 4.0     > 4.0
   Height   > 4.75        32         11
   (feet)   <= 4.75       86         35

To test the independence between these two factors, Fienberg (1977) proposes a classical solution based on a chi-squared test. A possible alternative is to assume a parametric distribution on the individual observations y_ijk of diameter and of height (i, j = 1, 2, k = 1, ..., n_ij), for instance a bivariate normal distribution with mean ξ = (ξ_1, ξ_2) and covariance matrix Σ with correlation ρ. The likelihood associated with Table 9.1 is then implicit; for the completed model, it is

   ∝ |Σ|^{−164/2} exp{ −(1/2) Σ_{i,j,k} (y_ijk − ξ)^t Σ^{−1} (y_ijk − ξ) }.   (9.2.1)

In fact, the model can be completed by simulation of the individual values, and this allows for the approximation of the posterior distribution associated with the prior π(ξ, Σ). If N_2^T(ξ, Σ, Q_ij) represents the normal distribution restricted to one of the four quadrants Q_ij induced by (log(4), log(4.75)), the steps of the Gibbs sampler are:

Algorithm [A.46]
1. Simulate y_ijk ~ N_2^T(ξ, Σ, Q_ij) (i, j = 1, 2, k = 1, ..., n_ij).
2. Simulate ξ ~ N_2(ȳ, Σ/164), with ȳ = (1/164) Σ_{i,j,k} y_ijk.
3. Simulate σ² from the inverted gamma distribution
   IG( 164, (1/2) Σ_{i,j,k} (y_ijk − ξ)^t R^{−1} (y_ijk − ξ) ),
   where R denotes the correlation matrix associated with Σ.
4. Simulate ρ through a Markov Chain Monte Carlo step.
Even in the case ρ = 0, the simulation from (9.2.1) requires a Metropolis–Hastings step, based, for instance, on an inverse Wishart proposal distribution.

Another setup where grouped data appears in a natural fashion is made of qualitative models, where some binary variables y_i (i = 1, ..., n), taking values in {0, 1} and associated with a vector x_i ∈ IR^p of covariates, are modeled through a Bernoulli distribution

   P(y_i = 1) = p_i = Φ(x_i^t β),   β ∈ IR^p.   (9.2.2)

The logit model being treated in the problems (see Problem 7.15), we consider instead the probit model. Even though the model (9.2.2) is generally defined in this form, a completed model can be introduced where the completed data y_i* ~ N(x_i^t β, 1) is grouped according to its sign, that is,

   y_i = 1 if y_i* > 0,   y_i = 0 otherwise.

Given a conjugate distribution N_p(β_0, Σ) on β, the algorithm which approximates the posterior distribution π(β | y_1, ..., y_n, x_1, ..., x_n) is then:

Algorithm [A.47] -- Probit posterior distribution
1. Simulate (i = 1, ..., n)
   y_i* ~ N_+(x_i^t β, 1, 0)  if y_i = 1,
   y_i* ~ N_−(x_i^t β, 1, 0)  if y_i = 0.
2. Simulate
   β ~ N_p( (Σ^{−1} + XX^t)^{−1} (Σ^{−1} β_0 + Σ_i y_i* x_i), (Σ^{−1} + XX^t)^{−1} ),
where N_+(μ, 1, u) and N_−(μ, 1, u) denote the normal distribution truncated on the left and on the right in u, respectively, and X is the matrix whose columns are made of the x_i's. The truncated normal distributions can be simulated by the algorithms of Geweke (1991) or Robert (1995). See Albert and Chib (1993b) for an application of this model to longitudinal data for medical experiments and some details on the implementation of [A.47].
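A minimal numerical sketch of Algorithm [A.47] is given below. The synthetic data, the N_p(0, 100 I) prior (so β_0 = 0), and the naive rejection sampler for the truncated normals are all assumptions of this illustration, not prescriptions from the text.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic probit data.
n, beta_true = 300, np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

def trunc_normal(mu, positive, rng):
    """Rejection draw from N(mu, 1) restricted to (0, inf) if positive,
    to (-inf, 0) otherwise; fine while mu is not far out in the tail."""
    while True:
        z = mu + rng.normal()
        if (z > 0) == positive:
            return z

# Gibbs sampler in the spirit of Albert and Chib (1993b), with a
# N(0, 100 I) prior on beta (rows of X are the covariate vectors x_i).
Sinv = np.eye(2) / 100.0
V = np.linalg.inv(Sinv + X.T @ X)        # posterior covariance given z
L = np.linalg.cholesky(V)
beta, draws = np.zeros(2), []
for it in range(2000):
    mu = X @ beta
    z = np.array([trunc_normal(m, bool(t), rng) for m, t in zip(mu, y)])
    beta = V @ (X.T @ z) + L @ rng.normal(size=2)   # beta | z (prior mean 0)
    if it >= 500:
        draws.append(beta.copy())
post_mean = np.mean(draws, axis=0)
print(post_mean)   # should sit near beta_true
```

For heavily unbalanced data or extreme linear predictors, the rejection step should be replaced by a dedicated truncated-normal sampler such as those of Geweke (1991) or Robert (1995) cited above.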
9.2.2 Data missing at random

Numerous settings may lead to sets of incomplete observations. For instance, a survey with multiple questions may include non-answers to some personal questions, a pharmaceutical experiment on the aftereffects of a toxic product may skip some doses for a given patient, a calibration experiment may lack observations for some values of the calibration parameters, etc. The analysis of such structures is complicated by the fact that the failure to observe is not always explained. If these missing observations are entirely due to chance, it follows that the incompletely observed data only play a role through their marginal distribution. However, these distributions are not always explicit, and a natural approach leading to a Gibbs sampler algorithm is to replace the missing data by simulation. An important difficulty appears when the lack of answer depends on the income, say in the shape of a logit model, as in the following example.

Example 9.2.3 (Non-ignorable non-response) Table 9.2 describes the (fictitious) results of a survey on the income depending on age, sex and marital status. The observations are grouped by average: n_{a,s,m}, r_{a,s,m} and ȳ_{a,s,m} denote the number of persons covered by the survey, the number of responses, and the average of these responses by category, respectively, where a (a = 1, 2), s (s = 1, 2) and m (m = 1, 2) correspond to the age (junior/senior), sex (female/male) and family (single/married) effects.

Table 9.2. Average incomes and numbers of responses/non-responses to a survey on the income by age, sex and family status. (Source: Little and Rubin, 1987.)

                     Men                       Women
                Single     Married        Single     Married
   Age < 30   20.0  24/1  21.0  5/11    16.0  11/1  16.0  2/2
       > 30   30.0  15/5  36.0  2/8     18.0  8/4    --   0/4

We assume an exponential shape for the individual data,

   y_{a,s,m,i} ~ Exp(λ_{a,s,m}),   1 ≤ i ≤ n_{a,s,m},   λ_{a,s,m} = θ_0 + θ_a + θ_s + θ_m,

and the probability of non-response is

   p_{a,s,m,i} = exp{w_0 + w_1 y_{a,s,m,i}} / ( 1 + exp{w_0 + w_1 y_{a,s,m,i}} ),

where z_{a,s,m,i} denotes the indicator of missing observation. In this case, a direct analysis of the likelihood is not feasible analytically. On the contrary, the likelihood of the complete model is much more explicit, since it is proportional to

   Π_{a,s,m} exp{ −r_{a,s,m} ȳ_{a,s,m} (θ_0 + θ_a + θ_s + θ_m) } (θ_0 + θ_a + θ_s + θ_m)^{r_{a,s,m}}
   × Π_{a,s,m,i} exp{ z_{a,s,m,i} (w_0 + w_1 y_{a,s,m,i}) } / ( 1 + exp{w_0 + w_1 y_{a,s,m,i}} ).

The model gets identifiable through constraints like θ_{a=1} = θ_{s=1} = θ_{m=1} = 0.
If the prior distribution on the parameter set is the Lebesgue measure on IR² for (w_0, w_1) and on IR_+ for θ_0, ...

P 9.19 (Billio, Monfort and Robert 1998) Consider the factor ARCH model defined by (t = 1, ..., T)

   y_t* = ( α + β (y*_{t−1})² )^{1/2} η_t,   y_t = a y_t* + ε_t,

where the η_t's are iid N(0, 1) and the ε_t's are N_p(0, Σ). The latent variables y_t* are not observed.
(a) Propose a noninformative prior distribution on the parameter θ = (α, β, a, Σ) which leads to a proper posterior distribution.
(b) Propose a completion step for the latent variables based on f(y_t* | y_t, y*_{t−1}, θ).

P 9.20 (Billio, Monfort and Robert 1998) A dynamic disequilibrium model is defined as the observation of

   y_t = min(y_{1t}, y_{2t}),

where the y_{it} are distributed from a parametric joint model f(y_{1t}, y_{2t}).
(a) Give the distribution of (y_{1t}, y_{2t}) conditional on y_t.
(b) Show that a possible completion of the model is to first draw the regime (1 versus 2) and then draw the missing component.
(c) Show that when f(y_{1t}, y_{2t}) is Gaussian, the above steps can be implemented without approximation.
(1989) Asymptotic Techniques for Use in Statistics. 147{169. and Chib. Trans.H. Hinton. and Stegun. (1994) The Weighted Bootstrap. Dept. 1987{1988 697. .W.J. \La Sapienza". J. Wiley. Barndor -Nielsen. (1979) The computer generation of Poisson random variables. Poisson and binomial distributions. 1037{1044. Modelling and Computer Simulations 2. Dover. Lecture Notes in Statistics 98.. (1992) Stationarity detection in the initial transient problem.B. J. (1988) Computational methods using a Bayesian hierarchical generalized linear model. Uni. 28. T. 245. S. J. R. E. G. Albert. On a formula for the distribution of the maximum likelihood estimator. and Kors. Asmussen. T. Appl. D. S. T. (1996) A reversible jump MCMC sampler for Bayesian analysis of ARMA time series. New York.M. New York. Abramowitz. 493{501. Roma. report. ACM Trans. 130{157. Chapman and Hall. and Dieter.E. 557-563.H. (1989) Simulated Annealing and Boltzman Machines: a Stochastic Approach to Combinatorial Optimisation and Neural Computing. and Titterington. Albert. Barndor -Nielsen. D. Atkinson. (1983).B.References Aarts. and Ney. Seminaire Bourbaki 40ieme annee. and Bertail. Ahrens.R. Business Economic Statistics 1. Barbe. Amer. 88. Archer. 83. (1995) Parameter estimation for hidden Markov chains. beta. (1978) A new approach to the limit theory of recurrent Markov chains. (1974) Computer methods for sampling from gamma. Barndor -Nielsen. (1964) Handbook of Mathematical Functions. (1991). J.H. S.J. P. J. I. Asmussen. Ackley. Glynn. and Cox. M. Biometrika 70. G. Assoc. O. and Thorisson. Amer..J. Cognitive Science 9. (1988) Simulated annealing. of Glasgow. Springer{Verlag. J. and O'Hagan. Wiley.. New York. 223{246. J. of Stat. Albert. (1993a) Bayes inference via Gibbs sampling of autoregressive time series subject to Markov mean and variance shifts. M. K. report. 1{15. Purdue Uni. Smith (Eds. and Green. J. Y. and Mengersen. and Giron. of Statistics.E. Chapman and Hall. Besag. Tech. Spiegelhalter). C) 27. 
303{328. J. Statist. report. and Hartigan. 5{124. Ann. J. Besag.. (1989) Improving stochastic relaxation for Gaussian random elds. Berger.H. Bernardo. W.E. DeGroot. and Mengersen. and Wolpert.J. M.J. (1977) Minimum Hellinger distance estimates for parametric models. (1994) Discussion of \Markov chains for exploring posterior distributions". Best. Besag. 67{78. Assoc. and Richard. 339{358. report #9610C.. (1985) Statistical Decision Theory and Bayesian Analysis (2nd edition). Wiley.418 REFERENCES 9. Beran. Ann. Berge. London. 260{279. P.O. 88.L. In Markov chain Monte-Carlo in Practice (Ed. and Smith. J. Statistical Science 10. Berger. Besag. K. New York. L. D. 395{ 407. J. (1996) Estimation of quadratuc functions: reference priors for non-centrality parameters. Journal of the Royal Statistical Society (Series B) 36. and Wake eld. Plann. J. 20.O. New York. (1988) The Likelihood Principle (2nd edition). and Vidal. Richardson and D. Springer-Verlag. J. Higdon. 192{ 326. Bernardo.F. Barcelona. A. (1993) A Bayesian analysis of change point problems. Spain. Applied Statistics (Ser. 19{46. Bernardo. Racine-Poon. Statist. Besag. C. J.F. Lindley and A. Econometrics 29. J. J. Philippe.6 Barone. Biometrics 47. (1992) Product partition models for change point problems. 3{66. R.. 25{38. Wiley. (1996) MCMC for nonlinear hierarchical models. J. R. (1995) Bayesian computation and stochastic systems (with discussion). J. F.C. D. 309{319. Inference 25. In Second Catalan International Symposium on Statistics. (1986) A Bayesian approach to cluster analysis. E. and Hartigan. (1988) A Bayesian analysis of simple mixture problems. A.. Statist. D. 1734-1741. 181.O. C. Ann. Applied Statistics 16.M.M. P. (1984) Order Within Chaos. Bauwens.J. In Bayesian Statistics 3. Hayward. Tech. (1994) An overview of of robust Bayesian analysis (with discussion).T. Pommeau. 5. Dept. (1993) Meta-Analysis via Markov Chain MonteCarlo methods.M.M.M.L.J. J. Oxford University Press. J. Besag. 1473{1487. D. 
Statist. Barry. Oxford. Gilks. IMS Lecture Notes | Monograph Series 9. S. Colorado State Univ. J.O.V. (1994) Bayesian Theory. J.). of Statistics. Bernardo. Bennett. Journal of the Royal Statistical Society (Series B) 55. New York. Amer. P. Statist. J.F. (1990) Robust Bayesian analysis: sensitivity to the prior. J. Berger. F. California. J. D. (1984) Bayesian Full Information of Simultaneous Equations Models Using Integration by Monte Carlo. (1978) Letter to the editor. L. J. and Frigessi. and Giron. A. Berger. J. (1974) Spatial interaction and the statistical analysis of lattice systems (with discussion). 22. Springer-Verlag. Barry. . J. TEST 3.O.P. (1992) Spatial Statistics and Bayesian computation (with discussion).9. (1985) A 1-1 Poly-t random variable generator with application to Monte Carlo integration. Berger. J. A.R. J.J.. J. 445{463.M. J. Dept. K. Lecture Notes in Economics and Mathematical Systems 232. and Robert. (1989) Towards Bayesian image analysis. Green. Bauwens. New York.
58,823
190,824
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.8125
3
CC-MAIN-2017-30
latest
en
0.920517
https://space.stackexchange.com/users/8996/2012rcampion?tab=answers
1,642,640,708,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320301670.75/warc/CC-MAIN-20220120005715-20220120035715-00531.warc.gz
576,702,000
22,386
2012rcampion Statement of the Problem The problem you want to solve is called the Kepler problem. In your formulation of the problem, you're starting out with the Cartesian orbital state vectors (also called ... The U.S. Vanguard rocket reached orbit three times with a first stage thrust of only 125 kN. The first stage of the three-stage Vanguard Test vehicle was powered by a GE X-405 28,000 pound (~125,000 ... Kirchhoff's law is only valid for objects in radiative equilibrium. The emissivity and absorptivity of a material are the same for a given wavelength, but can vary dramatically for different ... Background and Physics Note that there are actually two different but related types of actuators that use conservation of angular momentum to control a spacecraft's attitude (both of which may be ... Currently unclear According to the Verge: It's possible that the [static fire] test could come early next week. But the Falcon Heavy’s launchpad is located at NASA’s Kennedy Space Center, and limited ... This answer assumes that you start in a circular orbit of radius $r$ and speed $v_\text{circ}=\sqrt{\mu/r}$. If you push the ball along the direction of the orbit, it will go into an elliptical orbit ... We can compute the power required to maintain speed as: $$P=\frac{C_D}2\rho A v^3$$ Assuming the hypersonic drag coefficient is around $1$ and that the atmospheric density is $1\%$ of Earth's, we ... Computers on the ISS do not rely on UNIX/POSIX time, they rely on GPS time. Broadcast time is the time broadcast from ISS computers that is intended to be indicative of current time. The broadcast ... The paper does not describe how the calculations for the tether are done, but I can make a guess. We take a small piece of the tether with mass $\delta m$ and length $\delta r$, at distance $r$ from ... I agree with Ingolifs' answer; you can create a porkchop plot for a transfer between any two orbits.
For an Earth-Moon porkchop you could pick either a point on the Earth's surface or a particular ... According to the press conference: ... that gold umbilical, that's what's transferring all the information between the rover and the descent stage, including this video; this picture is coming down ... I used the paper Wavelength dependency of the Solar limb darkening for solar limb darkening data. It uses the following model for the normalized brightness distribution across the disk of the Sun: ... First off I'll note that the linked article doesn't actually claim that Triana/DSCOVR is the first spacecraft since Apollo to see the whole daylit side of Earth. The relevant quote (emphasis mine): ...
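The circular-orbit speed and drag-power formulas quoted in these answers are easy to check numerically. The sketch below uses illustrative values (400 km altitude; a 10 m² frontal area at 2 km/s) that are assumptions made here, not figures from the answers.

```python
import math

def v_circular(mu, r):
    """Circular orbital speed v = sqrt(mu / r) for gravitational parameter mu (m^3/s^2)."""
    return math.sqrt(mu / r)

def drag_power(c_d, rho, area, v):
    """Power needed to overcome drag: P = (C_D / 2) * rho * A * v^3."""
    return 0.5 * c_d * rho * area * v**3

# Earth, 400 km circular orbit
mu_earth = 3.986004418e14            # m^3/s^2
r = 6371e3 + 400e3                   # m
v = v_circular(mu_earth, r)          # about 7.67 km/s

# Hypersonic vehicle: C_D ~ 1, atmosphere at 1% of sea-level density (as in the answer),
# frontal area and speed are illustrative assumptions
p = drag_power(1.0, 0.01 * 1.225, 10.0, 2000.0)
print(round(v), round(p / 1e6, 1))   # speed in m/s, drag power in MW
```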
597
2,714
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.859375
3
CC-MAIN-2022-05
longest
en
0.931971
https://us.metamath.org/ileuni/metcnpi3.html
1,701,958,047,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100674.56/warc/CC-MAIN-20231207121942-20231207151942-00101.warc.gz
669,344,319
9,540
Intuitionistic Logic Explorer < Previous   Next > Nearby theorems Mirrors  >  Home  >  ILE Home  >  Th. List  >  metcnpi3 GIF version Theorem metcnpi3 12506 Description: Epsilon-delta property of a metric space function continuous at 𝑃. A variation of metcnpi2 12505 with non-strict ordering. (Contributed by NM, 16-Dec-2007.) (Revised by Mario Carneiro, 13-Nov-2013.) Hypotheses Ref Expression metcn.2 𝐽 = (MetOpen‘𝐶) metcn.4 𝐾 = (MetOpen‘𝐷) Assertion Ref Expression metcnpi3 (((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) → ∃𝑥 ∈ ℝ+𝑦𝑋 ((𝑦𝐶𝑃) ≤ 𝑥 → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴)) Distinct variable groups:   𝑥,𝑦,𝐹   𝑥,𝐽,𝑦   𝑥,𝐾,𝑦   𝑥,𝑋,𝑦   𝑥,𝑌,𝑦   𝑥,𝐴,𝑦   𝑥,𝐶,𝑦   𝑥,𝐷,𝑦   𝑥,𝑃,𝑦 Proof of Theorem metcnpi3 Dummy variable 𝑧 is distinct from all other variables. StepHypRef Expression 1 metcn.2 . . 3 𝐽 = (MetOpen‘𝐶) 2 metcn.4 . . 3 𝐾 = (MetOpen‘𝐷) 31, 2metcnpi2 12505 . 2 (((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) → ∃𝑧 ∈ ℝ+𝑦𝑋 ((𝑦𝐶𝑃) < 𝑧 → ((𝐹𝑦)𝐷(𝐹𝑃)) < 𝐴)) 4 rphalfcl 9370 . . . 4 (𝑧 ∈ ℝ+ → (𝑧 / 2) ∈ ℝ+) 54ad2antrl 479 . . 3 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+ ∧ ∀𝑦𝑋 ((𝑦𝐶𝑃) < 𝑧 → ((𝐹𝑦)𝐷(𝐹𝑃)) < 𝐴))) → (𝑧 / 2) ∈ ℝ+) 6 simplll 505 . . . . . . . . 9 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝐶 ∈ (∞Met‘𝑋)) 7 simprr 504 . . . . . . . . 9 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝑦𝑋) 81mopntopon 12432 . . . . . . . . . . 11 (𝐶 ∈ (∞Met‘𝑋) → 𝐽 ∈ (TopOn‘𝑋)) 96, 8syl 14 . . . . . . . . . 10 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝐽 ∈ (TopOn‘𝑋)) 10 simpllr 506 . . . . . . . . . . . 12 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝐷 ∈ (∞Met‘𝑌)) 112mopntopon 12432 . . . . . . . . . . . 12 (𝐷 ∈ (∞Met‘𝑌) → 𝐾 ∈ (TopOn‘𝑌)) 1210, 11syl 14 . . . . . . . . . . 11 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝐾 ∈ (TopOn‘𝑌)) 13 topontop 12024 . . . . . . . . 
. . 11 (𝐾 ∈ (TopOn‘𝑌) → 𝐾 ∈ Top) 1412, 13syl 14 . . . . . . . . . 10 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝐾 ∈ Top) 15 simplrl 507 . . . . . . . . . 10 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃)) 16 cnprcl2k 12217 . . . . . . . . . 10 ((𝐽 ∈ (TopOn‘𝑋) ∧ 𝐾 ∈ Top ∧ 𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃)) → 𝑃𝑋) 179, 14, 15, 16syl3anc 1199 . . . . . . . . 9 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝑃𝑋) 18 xmetcl 12341 . . . . . . . . 9 ((𝐶 ∈ (∞Met‘𝑋) ∧ 𝑦𝑋𝑃𝑋) → (𝑦𝐶𝑃) ∈ ℝ*) 196, 7, 17, 18syl3anc 1199 . . . . . . . 8 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → (𝑦𝐶𝑃) ∈ ℝ*) 204ad2antrl 479 . . . . . . . . 9 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → (𝑧 / 2) ∈ ℝ+) 2120rpxrd 9383 . . . . . . . 8 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → (𝑧 / 2) ∈ ℝ*) 22 rpxr 9350 . . . . . . . . 9 (𝑧 ∈ ℝ+𝑧 ∈ ℝ*) 2322ad2antrl 479 . . . . . . . 8 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝑧 ∈ ℝ*) 24 rphalflt 9372 . . . . . . . . 9 (𝑧 ∈ ℝ+ → (𝑧 / 2) < 𝑧) 2524ad2antrl 479 . . . . . . . 8 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → (𝑧 / 2) < 𝑧) 26 xrlelttr 9482 . . . . . . . . . 10 (((𝑦𝐶𝑃) ∈ ℝ* ∧ (𝑧 / 2) ∈ ℝ*𝑧 ∈ ℝ*) → (((𝑦𝐶𝑃) ≤ (𝑧 / 2) ∧ (𝑧 / 2) < 𝑧) → (𝑦𝐶𝑃) < 𝑧)) 2726expcomd 1400 . . . . . . . . 9 (((𝑦𝐶𝑃) ∈ ℝ* ∧ (𝑧 / 2) ∈ ℝ*𝑧 ∈ ℝ*) → ((𝑧 / 2) < 𝑧 → ((𝑦𝐶𝑃) ≤ (𝑧 / 2) → (𝑦𝐶𝑃) < 𝑧))) 2827imp 123 . . . . . . . 8 ((((𝑦𝐶𝑃) ∈ ℝ* ∧ (𝑧 / 2) ∈ ℝ*𝑧 ∈ ℝ*) ∧ (𝑧 / 2) < 𝑧) → ((𝑦𝐶𝑃) ≤ (𝑧 / 2) → (𝑦𝐶𝑃) < 𝑧)) 2919, 21, 23, 25, 28syl31anc 1202 . . . . . . 7 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → ((𝑦𝐶𝑃) ≤ (𝑧 / 2) → (𝑦𝐶𝑃) < 𝑧)) 30 cnpf2 12218 . . . . . . . . . . 11 ((𝐽 ∈ (TopOn‘𝑋) ∧ 𝐾 ∈ (TopOn‘𝑌) ∧ 𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃)) → 𝐹:𝑋𝑌) 319, 12, 15, 30syl3anc 1199 . . . . . . . . . 
10 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝐹:𝑋𝑌) 3231, 7ffvelrnd 5510 . . . . . . . . 9 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → (𝐹𝑦) ∈ 𝑌) 3331, 17ffvelrnd 5510 . . . . . . . . 9 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → (𝐹𝑃) ∈ 𝑌) 34 xmetcl 12341 . . . . . . . . 9 ((𝐷 ∈ (∞Met‘𝑌) ∧ (𝐹𝑦) ∈ 𝑌 ∧ (𝐹𝑃) ∈ 𝑌) → ((𝐹𝑦)𝐷(𝐹𝑃)) ∈ ℝ*) 3510, 32, 33, 34syl3anc 1199 . . . . . . . 8 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → ((𝐹𝑦)𝐷(𝐹𝑃)) ∈ ℝ*) 36 simplrr 508 . . . . . . . . 9 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝐴 ∈ ℝ+) 3736rpxrd 9383 . . . . . . . 8 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → 𝐴 ∈ ℝ*) 38 xrltle 9477 . . . . . . . 8 ((((𝐹𝑦)𝐷(𝐹𝑃)) ∈ ℝ*𝐴 ∈ ℝ*) → (((𝐹𝑦)𝐷(𝐹𝑃)) < 𝐴 → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴)) 3935, 37, 38syl2anc 406 . . . . . . 7 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → (((𝐹𝑦)𝐷(𝐹𝑃)) < 𝐴 → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴)) 4029, 39imim12d 74 . . . . . 6 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+𝑦𝑋)) → (((𝑦𝐶𝑃) < 𝑧 → ((𝐹𝑦)𝐷(𝐹𝑃)) < 𝐴) → ((𝑦𝐶𝑃) ≤ (𝑧 / 2) → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴))) 4140anassrs 395 . . . . 5 (((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ 𝑧 ∈ ℝ+) ∧ 𝑦𝑋) → (((𝑦𝐶𝑃) < 𝑧 → ((𝐹𝑦)𝐷(𝐹𝑃)) < 𝐴) → ((𝑦𝐶𝑃) ≤ (𝑧 / 2) → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴))) 4241ralimdva 2473 . . . 4 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ 𝑧 ∈ ℝ+) → (∀𝑦𝑋 ((𝑦𝐶𝑃) < 𝑧 → ((𝐹𝑦)𝐷(𝐹𝑃)) < 𝐴) → ∀𝑦𝑋 ((𝑦𝐶𝑃) ≤ (𝑧 / 2) → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴))) 4342impr 374 . . 3 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+ ∧ ∀𝑦𝑋 ((𝑦𝐶𝑃) < 𝑧 → ((𝐹𝑦)𝐷(𝐹𝑃)) < 𝐴))) → ∀𝑦𝑋 ((𝑦𝐶𝑃) ≤ (𝑧 / 2) → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴)) 44 breq2 3899 . . . 4 (𝑥 = (𝑧 / 2) → ((𝑦𝐶𝑃) ≤ 𝑥 ↔ (𝑦𝐶𝑃) ≤ (𝑧 / 2))) 4544rspceaimv 2767 . . 
3 (((𝑧 / 2) ∈ ℝ+ ∧ ∀𝑦𝑋 ((𝑦𝐶𝑃) ≤ (𝑧 / 2) → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴)) → ∃𝑥 ∈ ℝ+𝑦𝑋 ((𝑦𝐶𝑃) ≤ 𝑥 → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴)) 465, 43, 45syl2anc 406 . 2 ((((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) ∧ (𝑧 ∈ ℝ+ ∧ ∀𝑦𝑋 ((𝑦𝐶𝑃) < 𝑧 → ((𝐹𝑦)𝐷(𝐹𝑃)) < 𝐴))) → ∃𝑥 ∈ ℝ+𝑦𝑋 ((𝑦𝐶𝑃) ≤ 𝑥 → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴)) 473, 46rexlimddv 2528 1 (((𝐶 ∈ (∞Met‘𝑋) ∧ 𝐷 ∈ (∞Met‘𝑌)) ∧ (𝐹 ∈ ((𝐽 CnP 𝐾)‘𝑃) ∧ 𝐴 ∈ ℝ+)) → ∃𝑥 ∈ ℝ+𝑦𝑋 ((𝑦𝐶𝑃) ≤ 𝑥 → ((𝐹𝑦)𝐷(𝐹𝑃)) ≤ 𝐴)) Colors of variables: wff set class Syntax hints:   → wi 4   ∧ wa 103   ∧ w3a 945   = wceq 1314   ∈ wcel 1463  ∀wral 2390  ∃wrex 2391   class class class wbr 3895  ⟶wf 5077  ‘cfv 5081  (class class class)co 5728  ℝ*cxr 7723   < clt 7724   ≤ cle 7725   / cdiv 8345  2c2 8681  ℝ+crp 9343  ∞Metcxmet 11992  MetOpencmopn 11997  Topctop 12007  TopOnctopon 12020   CnP ccnp 12198 This theorem was proved from axioms:  ax-1 5  ax-2 6  ax-mp 7  ax-ia1 105  ax-ia2 106  ax-ia3 107  ax-in1 586  ax-in2 587  ax-io 681  ax-5 1406  ax-7 1407  ax-gen 1408  ax-ie1 1452  ax-ie2 1453  ax-8 1465  ax-10 1466  ax-11 1467  ax-i12 1468  ax-bndl 1469  ax-4 1470  ax-13 1474  ax-14 1475  ax-17 1489  ax-i9 1493  ax-ial 1497  ax-i5r 1498  ax-ext 2097  ax-coll 4003  ax-sep 4006  ax-nul 4014  ax-pow 4058  ax-pr 4091  ax-un 4315  ax-setind 4412  ax-iinf 4462  ax-cnex 7636  ax-resscn 7637  ax-1cn 7638  ax-1re 7639  ax-icn 7640  ax-addcl 7641  ax-addrcl 7642  ax-mulcl 7643  ax-mulrcl 7644  ax-addcom 7645  ax-mulcom 7646  ax-addass 7647  ax-mulass 7648  ax-distr 7649  ax-i2m1 7650  ax-0lt1 7651  ax-1rid 7652  ax-0id 7653  ax-rnegex 7654  ax-precex 7655  ax-cnre 7656  ax-pre-ltirr 7657  ax-pre-ltwlin 7658  ax-pre-lttrn 7659  ax-pre-apti 7660  ax-pre-ltadd 7661  ax-pre-mulgt0 7662  ax-pre-mulext 7663  ax-arch 7664  ax-caucvg 7665 This theorem depends on definitions:  df-bi 116  df-stab 799  df-dc 803  df-3or 946  df-3an 947  df-tru 1317  df-fal 1320  df-nf 1420  df-sb 1719  df-eu 1978  df-mo 1979  df-clab 2102  df-cleq 2108  df-clel 2111  df-nfc 2244  df-ne 2283  df-nel 2378  df-ral 2395  df-rex 
2396  df-reu 2397  df-rmo 2398  df-rab 2399  df-v 2659  df-sbc 2879  df-csb 2972  df-dif 3039  df-un 3041  df-in 3043  df-ss 3050  df-nul 3330  df-if 3441  df-pw 3478  df-sn 3499  df-pr 3500  df-op 3502  df-uni 3703  df-int 3738  df-iun 3781  df-br 3896  df-opab 3950  df-mpt 3951  df-tr 3987  df-id 4175  df-po 4178  df-iso 4179  df-iord 4248  df-on 4250  df-ilim 4251  df-suc 4253  df-iom 4465  df-xp 4505  df-rel 4506  df-cnv 4507  df-co 4508  df-dm 4509  df-rn 4510  df-res 4511  df-ima 4512  df-iota 5046  df-fun 5083  df-fn 5084  df-f 5085  df-f1 5086  df-fo 5087  df-f1o 5088  df-fv 5089  df-isom 5090  df-riota 5684  df-ov 5731  df-oprab 5732  df-mpo 5733  df-1st 5992  df-2nd 5993  df-recs 6156  df-frec 6242  df-map 6498  df-sup 6823  df-inf 6824  df-pnf 7726  df-mnf 7727  df-xr 7728  df-ltxr 7729  df-le 7730  df-sub 7858  df-neg 7859  df-reap 8255  df-ap 8262  df-div 8346  df-inn 8631  df-2 8689  df-3 8690  df-4 8691  df-n0 8882  df-z 8959  df-uz 9229  df-q 9314  df-rp 9344  df-xneg 9452  df-xadd 9453  df-seqfrec 10112  df-exp 10186  df-cj 10507  df-re 10508  df-im 10509  df-rsqrt 10662  df-abs 10663  df-topgen 11984  df-psmet 11999  df-xmet 12000  df-bl 12002  df-mopn 12003  df-top 12008  df-topon 12021  df-bases 12053  df-cnp 12201 This theorem is referenced by: (None) Copyright terms: Public domain W3C validator
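Stripping the set-theoretic notation, the assertion of metcnpi3 is the familiar non-strict epsilon-delta property of continuity at a point (a conventional restatement for orientation, not part of the Metamath database):

```latex
% C, D extended metrics on X, Y; F : X -> Y continuous at P
% in the topologies J, K induced by MetOpen.
\forall A \in \mathbb{R}^{+}\;\exists x \in \mathbb{R}^{+}\;\forall y \in X:\quad
  C(y,P) \le x \;\Longrightarrow\; D\bigl(F(y),F(P)\bigr) \le A
```

The proof takes the strict-inequality radius z supplied by metcnpi2 12505 and returns x = z/2: since (z/2) < z (rphalflt 9372), any y with C(y,P) ≤ z/2 also satisfies C(y,P) < z (xrlelttr 9482), and the strict conclusion D(F(y),F(P)) < A is weakened to ≤ A (xrltle 9477).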
6,725
9,224
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.09375
3
CC-MAIN-2023-50
longest
en
0.239991
https://chem.libretexts.org/Courses/BethuneCookman_University/B-CU%3A_CH-345_Quantitative_Analysis/Book%3A_Analytical_Chemistry_2.1_(Harvey)/08%3A_Gravimetric_Methods/8.04%3A_Particulate_Gravimetry
1,726,315,671,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651579.22/warc/CC-MAIN-20240914093425-20240914123425-00037.warc.gz
144,950,569
34,623
# 8.4: Particulate Gravimetry
Precipitation and volatilization gravimetric methods require that the analyte, or some other species in the sample, participates in a chemical reaction. In a direct precipitation gravimetric analysis, for example, we convert a soluble analyte into an insoluble form that precipitates from solution. In some situations, however, the analyte already is present in a particulate form that is easy to separate from its liquid, gas, or solid matrix. When such a separation is possible, we can determine the analyte’s mass without relying on a chemical reaction. A particulate is any tiny portion of matter, whether it is a speck of dust, a globule of fat, or a molecule of ammonia. For particulate gravimetry we simply need a method to collect the particles and a balance to measure their mass. ## Theory and Practice There are two methods for separating a particulate analyte from its matrix. The most common method is filtration, in which we separate solid particulates from their gas, liquid, or solid matrix. A second method, which is useful for gas particles, solutes, and solids, is an extraction. ### Filtration To separate solid particulates from their matrix we use gravity or apply suction from a vacuum pump or an aspirator to pull the sample through a filter. The type of filter we use depends upon the size of the solid particles and the sample’s matrix. Filters for liquid samples are constructed from a variety of materials, including cellulose fibers, glass fibers, cellulose nitrate, and polytetrafluoroethylene (PTFE). Particle retention depends on the size of the filter’s pores.
Cellulose fiber filter papers range in pore size from 30 μm to 2–3 μm. Glass fiber filters, manufactured using chemically inert borosilicate glass, are available with pore sizes between 2.5 μm and 0.3 μm. Membrane filters, which are made from a variety of materials, including cellulose nitrate and PTFE, are available with pore sizes from 5.0 μm to 0.1 μm. For additional information, see our earlier discussion in this chapter on filtering precipitates, and the discussion in Chapter 7 of separations based on size. Solid aerosol particulates are collected using either a single-stage or a multiple-stage filter. In a single-stage system, we pull the gas through a single filter, which retains particles larger than the filter’s pore size. To collect samples from a gas line, we place the filter directly in the line. Atmospheric gases are sampled with a high-volume sampler that uses a vacuum pump to pull air through the filter at a rate of approximately 75 m³/h. In either case, we can use the same filtering media as for liquid samples to collect aerosol particulates. In a multiple-stage system, a series of filtering units separates the particles into two or more size ranges. The particulates in a solid matrix are separated by size using one or more sieves (Figure $$\PageIndex{1}$$). Sieves are available in a variety of mesh sizes, ranging from approximately 25 mm to 40 μm. By stacking together sieves of different mesh size, we can isolate particulates into several narrow size ranges. Using the sieves in Figure $$\PageIndex{1}$$, for example, we can separate a solid into particles with diameters >1700 μm, with diameters between 1700 μm and 500 μm, with diameters between 500 μm and 250 μm, and those with a diameter <250 μm. ### Extraction Filtering limits particulate gravimetry to solid analytes that are easy to separate from their matrix.
We can extend particulate gravimetry to the analysis of gas phase analytes, solutes, and solids that are difficult to filter if we extract them with a suitable solvent. After the extraction, we evaporate the solvent and determine the analyte’s mass. Alternatively, we can determine the analyte indirectly by measuring the change in the sample’s mass after we extract the analyte. For a more detailed review of extractions, particularly solid-phase extractions, see Chapter 7. Another method for extracting an analyte from its matrix is by adsorption onto a solid substrate, by absorption into a thin polymer film or chemical film coated on a solid substrate, or by chemically binding to a suitable receptor that is covalently bound to a solid substrate (Figure $$\PageIndex{2}$$). Adsorption, absorption, and binding occur at the interface between the solution that contains the analyte and the substrate’s surface, the thin film, or the receptor. Although the amount of extracted analyte is too small to measure using a conventional balance, it can be measured using a quartz crystal microbalance. The measurement of mass using a quartz crystal microbalance takes advantage of the piezoelectric effect [(a) Ward, M. D.; Buttry, D. A. Science 1990, 249, 1000–1007; (b) Grate, J. W.; Martin, S. J. ; White, R. M. Anal. Chem. 1993, 65, 940A–948A; (c) Grate, J. W.; Martin, S. J. ; White, R. M. Anal. Chem. 1993, 65, 987A–996A.]. The application of an alternating electrical field across a quartz crystal induces an oscillatory vibrational motion in the crystal. Every quartz crystal vibrates at a characteristic resonant frequency that depends on the crystal’s properties, including the mass per unit area of any material coated on the crystal’s surface. The change in mass following adsorption, absorption, or binding of the analyte is determined by monitoring the change in the quartz crystal’s characteristic resonant frequency. 
The exact relationship between the change in frequency and mass is determined by a calibration curve. If you own a wristwatch, there is a good chance that its operation relies on a quartz crystal. The piezoelectric properties of quartz were discovered in 1880 by Paul-Jacques Curie and Pierre Curie. Because the oscillation frequency of a quartz crystal is so precise, it quickly found use in the keeping of time. The first quartz clock was built in 1927 at the Bell Telephone labs, and Seiko introduced the first quartz wristwatches in 1969. ## Quantitative Applications Particulate gravimetry is important in the environmental analysis of water, air, and soil samples. The analysis for suspended solids in water samples, for example, is accomplished by filtering an appropriate volume of a well-mixed sample through a glass fiber filter and drying the filter to constant weight at 103–105 °C. The microbiological testing of water also uses particulate gravimetry. One example is the analysis for coliform bacteria in which an appropriate volume of sample is passed through a sterilized 0.45-μm membrane filter. The filter is placed on a sterilized absorbent pad that is saturated with a culturing medium and incubated for 22–24 hours at 35 ± 0.5 °C. Coliform bacteria are identified by the presence of individual bacterial colonies that form during the incubation period (Figure $$\PageIndex{3}$$). As with qualitative applications of precipitation gravimetry, the signal in this case is a visual observation of the number of colonies rather than a measurement of mass. Total airborne particulates are determined using a high-volume air sampler equipped with either a cellulose fiber or a glass fiber filter. Samples from urban environments require approximately 1 h of sampling time, but samples from rural environments require substantially longer times. Grain size distributions for sediments and soils are used to determine the amount of sand, silt, and clay in a sample.
For example, a grain size of 2 mm serves as the boundary between gravel and sand. The grain sizes for the sand–silt and silt–clay boundaries are 1/16 mm and 1/256 mm, respectively. Several standard quantitative analytical methods for agricultural products are based on measuring the sample’s mass following a selective solvent extraction. For example, the crude fat content in chocolate is determined by extracting with ether for 16 hours in a Soxhlet extractor. After the extraction is complete, the ether is allowed to evaporate and the residue is weighed after drying at 100 °C. This analysis also can be accomplished indirectly by weighing a sample before and after extracting with supercritical CO2. Quartz crystal microbalances equipped with thin polymer films or chemical coatings have found numerous quantitative applications in environmental analysis. Methods are reported for the analysis of a variety of gaseous pollutants, including ammonia, hydrogen sulfide, ozone, sulfur dioxide, and mercury. Biochemical particulate gravimetric sensors also have been developed. For example, a piezoelectric immunosensor has been developed that shows a high selectivity for human serum albumin, and is capable of detecting microgram quantities [Muratsugu, M.; Ohta, F.; Miya, Y.; Hosokawa, T.; Kurosawa, S.; Kamo, N.; Ikeda, H. Anal. Chem. 1993, 65, 2933–2937]. ### Quantitative Calculations The result of a quantitative analysis by particulate gravimetry is just the ratio, using appropriate units, of the amount of analyte relative to the amount of sample. Example $$\PageIndex{1}$$ A 200.0-mL sample of water is filtered through a pre-weighed glass fiber filter. After drying to constant weight at 105 °C, the filter is found to have increased in mass by 48.2 mg. Determine the sample’s total suspended solids.
Solution One ppm is equivalent to one mg of analyte per liter of solution; thus, the total suspended solids for the sample is $\frac{48.2 \ \mathrm{mg} \text { solids }}{0.2000 \ \mathrm{L} \text { sample }}=241 \ \mathrm{ppm} \text { solids } \nonumber$ ## Evaluating Particulate Gravimetry The scale of operation and the detection limit for particulate gravimetry can be extended beyond that of other gravimetric methods by increasing the size of the sample taken for analysis. This usually is impracticable for other gravimetric methods because it is difficult to manipulate a larger sample through the individual steps of the analysis. With particulate gravimetry, however, the part of the sample that is not analyte is removed when filtering or extracting. Consequently, particulate gravimetry easily is extended to the analysis of trace-level analytes. Except for methods that rely on a quartz crystal microbalance, particulate gravimetry uses the same balances as other gravimetric methods, and is capable of achieving similar levels of accuracy and precision. Because particulate gravimetry is defined in terms of the mass of the particles themselves, the sensitivity of the analysis is given by the balance’s sensitivity. Selectivity, on the other hand, is determined either by the filter’s pore size or by the properties of the extracting phase. Because it requires a single step, a particulate gravimetric method based on filtration generally requires less time, labor, and capital than other gravimetric methods. This page titled 8.4: Particulate Gravimetry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
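The total suspended solids result is a single ratio, so the unit handling is easy to make explicit in code. The following Python sketch (function and variable names are illustrative, not part of the standard method) reproduces the worked example:

```python
def total_suspended_solids(mass_gain_mg, sample_volume_ml):
    """Return total suspended solids in mg/L, equivalent to ppm for
    dilute aqueous samples.

    mass_gain_mg: increase in filter mass after drying, in milligrams
    sample_volume_ml: volume of water filtered, in milliliters
    """
    sample_volume_l = sample_volume_ml / 1000.0  # convert mL to L
    return mass_gain_mg / sample_volume_l

# Worked example from the text: 48.2 mg gain on a 200.0 mL sample
tss = total_suspended_solids(48.2, 200.0)
print(round(tss))  # 241, matching the 241 ppm result above
```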
4,176
15,261
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.03125
4
CC-MAIN-2024-38
latest
en
0.194178
https://www.aqua-calc.com/calculate/materials-price/substance/gold
1,726,709,718,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00820.warc.gz
574,401,609
13,706
# Price of Gold ## gold: price conversions and cost
Price per units of weight — carat: < 0.01; gram: 0.02; 100 grams: 1.96; 250 grams: 4.89; 400 grams: 7.82; 500 grams: 9.78; kilogram: 19.56; 1/2 kilogram: 9.78; ounce: 0.55; 8 ounces: 4.44; pound: 8.87; 1/2 pound: 4.44.
Price per metric units of volume — centimeter³: 0.38; meter³: 377,849.98; liter: 377.85; 1/2 liter: 188.92; cup: 94.46; 1/2 cup: 47.23; tablespoon: 5.67; teaspoon: 1.89.
Price per US units of volume — foot³: 10,699.52; inch³: 6.19; cup: 89.39; 1/2 cup: 44.70; fluid ounce: 11.17; gallon: 1,430.32; 1/2 gallon: 715.16; pint: 178.79; 1/2 pint: 89.39; quart: 357.58; tablespoon: 5.59; teaspoon: 1.86.
Price per imperial units of volume — cup: 107.36; 1/2 cup: 53.68; fluid ounce: 10.74; gallon: 1,717.74; 1/2 gallon: 858.87; pint: 214.72; 1/2 pint: 107.36; quart: 429.44.
#### Entered price The entered price of “Gold” per 9 ounces is equal to 4.99. • The compounds and materials price calculator performs conversions between prices for different weights and volumes. Selecting a unit of weight or volume from a single drop-down list allows you to indicate a price per entered quantity of the selected unit. #### Foods, Nutrients and Calories GUMMY N' SOFT CANDY, UPC: 011152421209 contain(s) 400 calories per 100 grams (≈3.53 ounces)  [ price ] 229731 foods that contain Sugars, total including NLEA.  List of these foods starting with the highest contents of Sugars, total including NLEA and the lowest contents of Sugars, total including NLEA #### Gravels, Substances and Oils CaribSea, Freshwater, African Cichlid Mix, Ivory Coast Gravel weighs 1 505.74 kg/m³ (94.00028 lb/ft³) with specific gravity of 1.50574 relative to pure water.  
Calculate how much of this gravel is required to attain a specific depth in a cylindrical, quarter-cylindrical, or rectangular shaped aquarium or pond  [ weight to volume | volume to weight | price ] Antimony pentafluoride, liquid [SbF5] weighs 2 990 kg/m³ (186.6596 lb/ft³)  [ weight to volume | volume to weight | price | mole to volume and weight | mass and molar concentration | density ] Volume to weight, weight to volume, and cost conversions for Refrigerant R-503, liquid (R503) with temperature in the range of -95.56°C (-140.008°F) to -6.65°C (20.03°F) #### Weights and Measurements A milligram per inch (mg/in) is a non-metric measurement unit of linear mass density. The radiation absorbed dose is a measurement of radiation, in energy per unit of mass, absorbed by a specific object, such as human tissue. kJ/h to J/h conversion table, kJ/h to J/h unit converter, or convert between all units of power measurement. #### Calculators Online Food Calculator. Food Volume to Weight Conversions
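The per-unit prices on this page all derive from the single entered price (4.99 per 9 ounces) plus the density of gold. A short Python sketch (the constants and function are illustrative, not the site's actual code; gold's density of roughly 19.32 g/cm³ is an assumed textbook value) shows the two conversions:

```python
OUNCE_G = 28.349523125   # grams per avoirdupois ounce
GOLD_DENSITY = 19.32     # g/cm³, approximate density of gold

def unit_prices(price, ounces):
    """Convert an entered price for a given weight in ounces into
    (price per gram, price per cubic centimeter)."""
    per_gram = price / (ounces * OUNCE_G)
    per_cm3 = per_gram * GOLD_DENSITY   # weight price times density
    return per_gram, per_cm3

# The page's entered price: 4.99 per 9 ounces
per_gram, per_cm3 = unit_prices(4.99, 9)
print(f"{per_gram:.2f} per gram, {per_cm3:.2f} per cm³")  # 0.02 per gram, 0.38 per cm³
```

Both rounded results match the table entries above (gram: 0.02; centimeter³: 0.38).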
772
2,604
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2024-38
latest
en
0.722371
https://www.statology.org/law-of-total-probability/
1,720,842,913,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514484.89/warc/CC-MAIN-20240713020211-20240713050211-00546.warc.gz
800,715,889
23,103
# Law of Total Probability: Definition & Examples In probability theory, the law of total probability is a useful way to find the probability of some event A when we don’t directly know the probability of A but we do know that events B1, B2, B3… form a partition of the sample space S. This law states the following: The Law of Total Probability If B1, B2, B3… form a partition of the sample space S, then we can calculate the probability of event A as: P(A) = ΣP(A|Bi)*P(Bi) The easiest way to understand this law is with a simple example. Suppose there are two bags in a box, which contain the following marbles: • Bag 1: 7 red marbles and 3 green marbles • Bag 2: 2 red marbles and 8 green marbles If we randomly select one of the bags and then randomly select one marble from that bag, what is the probability that it’s a green marble? In this example, let P(G) = probability of choosing a green marble. This is the probability that we’re interested in, but we can’t compute it directly. Instead we need to use the conditional probabilities of G given the events Bi, where the Bi form a partition of the sample space S. In this example, we have the following conditional probabilities: • P(G|B1) = 3/10 = 0.3 • P(G|B2) = 8/10 = 0.8 Because the bag is selected at random, P(B1) = P(B2) = 0.5. Thus, using the law of total probability we can calculate the probability of choosing a green marble as: • P(G) = ΣP(G|Bi)*P(Bi) • P(G) = P(G|B1)*P(B1) + P(G|B2)*P(B2) • P(G) = (0.3)*(0.5) + (0.8)*(0.5) • P(G) = 0.55 If we randomly select one of the bags and then randomly select one marble from that bag, the probability we choose a green marble is 0.55. Read through the next two examples to solidify your understanding of the law of total probability. ## Example 1: Widgets Company A supplies 80% of widgets for a car shop and only 1% of their widgets turn out to be defective. Company B supplies the remaining 20% of widgets for the car shop and 3% of their widgets turn out to be defective. 
If a customer randomly purchases a widget from the car shop, what is the probability that it will be defective? If we let P(D) = the probability of a widget being defective and P(Bi) be the probability that the widget came from one of the companies, then we can compute the probability of buying a defective widget as: • P(D) = ΣP(D|Bi)*P(Bi) • P(D) = P(D|B1)*P(B1) + P(D|B2)*P(B2) • P(D) = (0.01)*(0.80) + (0.03)*(0.20) • P(D) = 0.014 If we randomly buy a widget from this car shop, the probability that it will be defective is 0.014. ## Example 2: Forests Forest A occupies 50% of the total land in a certain park and 20% of the plants in this forest are poisonous. Forest B occupies 30% of the total land and 40% of the plants in it are poisonous. Forest C occupies the remaining 20% of the land and 70% of the plants in it are poisonous. If we randomly enter this park and pick a plant from the ground, what is the probability that it will be poisonous? If we let P(P) = the probability of the plant being poisonous, and P(Bi) be the probability that we’ve entered one of the three forests, then we can compute the probability of a randomly chosen plant being poisonous as: • P(P) = ΣP(P|Bi)*P(Bi) • P(P) = P(P|B1)*P(B1) + P(P|B2)*P(B2) + P(P|B3)*P(B3) • P(P) = (0.20)*(0.50) + (0.40)*(0.30) + (0.70)*(0.20) • P(P) = 0.36 If we randomly pick a plant from the ground, the probability that it will be poisonous is 0.36.
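All three worked examples apply the same weighted sum, so a small Python helper (the function name is illustrative) can verify each of them:

```python
def total_probability(conditionals, priors):
    """Law of total probability: P(A) = sum of P(A|Bi) * P(Bi).

    conditionals: list of P(A|Bi)
    priors: list of P(Bi), which must form a partition (sum to 1)
    """
    assert abs(sum(priors) - 1.0) < 1e-9, "priors must form a partition"
    return sum(c * p for c, p in zip(conditionals, priors))

# Marbles: P(G|B1) = 0.3, P(G|B2) = 0.8, bags chosen with equal probability
print(total_probability([0.3, 0.8], [0.5, 0.5]))            # ≈ 0.55
# Widgets: 1% defective from the 80% supplier, 3% from the 20% supplier
print(total_probability([0.01, 0.03], [0.8, 0.2]))          # ≈ 0.014
# Forests: poisonous fractions 0.20, 0.40, 0.70 over land shares 0.5, 0.3, 0.2
print(total_probability([0.2, 0.4, 0.7], [0.5, 0.3, 0.2]))  # ≈ 0.36
```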
966
3,381
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.78125
5
CC-MAIN-2024-30
latest
en
0.872185
https://timus.online/problem.aspx?space=27&num=1
1,590,546,864,000,000,000
text/html
crawl-data/CC-MAIN-2020-24/segments/1590347392057.6/warc/CC-MAIN-20200527013445-20200527043445-00090.warc.gz
594,993,979
2,704
Timus Online Judge ## A. Heritage Time limit: 2.0 second Memory limit: 64 MB Your rich uncle died recently, and the heritage needs to be divided among your relatives and the church (your uncle insisted in his will that the church must get something). There are N relatives (N ≤ 18) that were mentioned in the will. They are sorted in descending order according to their importance (the first one is the most important). Since you are the computer scientist in the family, your relatives asked you to help them. They need help, because there are some blanks in the will left to be filled. Here is how the will looks: `Relative #1 will get 1/... of the whole heritage,` `Relative #2 will get 1/... of the whole heritage,` `...` `Relative #N will get 1/... of the whole heritage.` The logical desire of the relatives is to fill the blanks in such a way that the uncle’s will is preserved (i.e. the fractions are non-ascending and the church gets something) and the amount of heritage left for the church is minimized. ### Input The only line of input contains the single integer N (1 ≤ N ≤ 18). ### Output Output the numbers with which the blanks need to be filled (one per line), so that the heritage left for the church is minimized. ### Sample Input: `2` Output: `2` `3` Problem Author: Pavlin Peev To submit the solution for this problem go to the Problem set: 1108. Heritage
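The task asks for unit fractions 1/a1 ≥ 1/a2 ≥ … ≥ 1/aN whose sum stays strictly below 1 while leaving as little as possible for the church. One natural approach (not an official solution; whether greedy is optimal for every N is an assumption here, though it matches the sample) is to pick, at each step, the largest admissible unit fraction. That greedy choice generates Sylvester's sequence 2, 3, 7, 43, …, and Python's arbitrary-precision integers handle the doubly exponential growth of the denominators up to N = 18:

```python
def heritage(n):
    """Greedy denominators: at each step take the smallest denominator a
    that keeps the running sum of unit fractions strictly below 1.
    This yields Sylvester's sequence: 2, 3, 7, 43, ..."""
    denominators = []
    num, den = 0, 1  # running sum num/den of the fractions chosen so far
    for _ in range(n):
        # smallest a with num/den + 1/a < 1, i.e. a > den / (den - num)
        a = den // (den - num) + 1
        denominators.append(a)
        num, den = num * a + den, den * a  # add 1/a to the running sum
    return denominators

print(heritage(2))  # [2, 3], matching the sample
print(heritage(4))  # [2, 3, 7, 43]
```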
385
1,604
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2020-24
longest
en
0.931523
https://yellowcomic.com/what-is-the-difference-between-a-prism-and-a-pyramid/
1,660,910,750,000,000,000
text/html
crawl-data/CC-MAIN-2022-33/segments/1659882573667.83/warc/CC-MAIN-20220819100644-20220819130644-00019.warc.gz
936,086,412
5,743
People regularly come across pyramid- and prism-shaped objects, yet they get confused about which shape is which. The properties and characteristics of these shapes are not well known in day-to-day life, so the two are often mistaken for one another. A pyramid is a three-dimensional structure with one polygonal base and triangular sides joined at a vertex known as the apex. A prism, on the other hand, is a 3D structure having two bases and rectangular sides. ## Pyramid vs Prism The difference between pyramids and prisms is that a pyramid is a three-dimensional polyhedron with a single polygonal base attached to its sides, and those sides are always triangular. A prism, in contrast, is also a 3D polyhedron, but it has two bases that are identical to each other; the sides of a prism are perpendicular to its bases, and its cross-section is the same throughout. 
## Comparison Table between Pyramids and Prisms

| Parameter of Comparison | Pyramids | Prisms |
|---|---|---|
| Basic definition | A pyramid is a three-dimensional polyhedron with only one polygonal base and triangular sides. | A prism is a three-dimensional polyhedron defined by two polygonal bases and rectangular sides perpendicular to the bases. |
| Number and shape of the bases | A pyramid has only one base, which is polygonal in shape. | A prism has two bases, which are also polygonal. |
| Shape of the sides | The sides of a pyramid are triangular, joined at a point known as the apex. | The sides of a prism are always rectangular and are perpendicular to the bases. |
| Presence of an apex | A pyramid is characterized by the presence of an apex. | A prism does not have an apex. |
| Types | Pyramids are named for the shape of the base: triangular pyramid, pentagonal pyramid, hexagonal pyramid, etc. | Prisms are likewise named for the shape of the base: triangular prism, pentagonal prism, hexagonal prism, etc. |

## What is a Pyramid? A pyramid is a three-dimensional polyhedron with only one base, which is in the shape of a polygon. Its sides are always triangular, and all of them meet at a single point known as the vertex, or apex, which sits directly above the center of the base. There are different types of pyramids based on the shape of the base, such as the triangular pyramid, pentagonal pyramid, hexagonal pyramid, and so on. One of the best-known real-life examples is the Great Pyramids of Giza in Egypt, which are characterized by most of their weight lying close to the ground. ## What is a Prism? 
A prism is also a three-dimensional polyhedron; it always has two bases facing each other, and the shape of these bases is polygonal. The sides of a prism are all rectangular. Each side is joined to at least two adjacent sides, and the sides are perpendicular to the bases; if the sides are not perpendicular to the bases, the solid is called an oblique prism. A prism does not have an apex. A prism is commonly made of glass and is therefore transparent. It has polished surfaces that refract light entering on one side of the prism so that it can be seen from the other side. The cross-section of a prism is the same along its entire length. The type of a prism is determined by the shape of its base; some examples are the triangular prism, pentagonal prism, hexagonal prism, and so on. Prisms are of utmost importance in geometry and optics, playing a key role in studies concerning the reflection, refraction, and splitting of light. 
## Main Differences between Pyramids and Prisms Pyramids and prisms are both three-dimensional polyhedra, and the major difference lies in their bases. A pyramid has only one base, while a prism has two. The base of a pyramid and the bases of a prism are polygonal in shape. The sides of a pyramid are always triangular, whereas the sides of a prism are always rectangular. The sides of a pyramid are inclined at an angle to the base; the sides of a prism are perpendicular to the base. All the sides of a pyramid meet at a single point, whereas the sides of a prism do not necessarily all meet at one point. The point where the sides of a pyramid meet is called the apex, or vertex, and it lies vertically above the center of the base; a prism has no such point. The shape of a pyramid or prism is named for the shape of its base: triangular, pentagonal, hexagonal, and so on. A pyramid is mainly of interest in geometry, while a prism is of interest in both geometry and optics. ## Conclusion Pyramids and prisms both have their importance in their respective fields. Both play important roles in scientific studies concerning the reflection, refraction, and splitting of light, and they help in stating the facts based on these studies.
1,232
6,018
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.28125
3
CC-MAIN-2022-33
latest
en
0.970939
https://myknowsys.com/blog/2012/06/617-averages.html
1,550,537,371,000,000,000
text/html
crawl-data/CC-MAIN-2019-09/segments/1550247489282.7/warc/CC-MAIN-20190219000551-20190219022551-00002.warc.gz
636,261,192
12,086
# Averages On this day in 1885 the Statue of Liberty arrived in New York Harbor. There were 350 pieces in more than 200 cases. The statue was designed by French sculptor Frederic-Auguste Bartholdi and was intended to commemorate both the American Revolution and a century of friendship between the United States and France. You can read more about the history of the Statue of Liberty here. ## 6/17 Averages Read the following SAT test question and then select the correct answer choice. Alice bought m pens for n dollars each, and Ben bought n pens for m dollars each. Which of the following is the average price per pen, in dollars, for all the pens that Alice and Ben bought? Remember that the first step in solving any math problem is to read the problem carefully. Next, you should identify the bottom line. For this problem, you are looking for "the average price per pen, in dollars, for all the pens that Alice and Ben bought." Before you start trying to solve the problem, always be sure to assess your options. Think about what you could do. In this case, you could write a formula, or you could pick numbers. Then think about what you should do. Since this is a fairly straightforward problem, it's probably best to just write a formula. Since the SAT is a timed test, you want to select the method that will get you to the correct answer in the least time. Now, all you need to do is calculate the average cost of the pens step by step. Alice bought m pens for n dollars each, so she spent a total of mn dollars on m pens. Ben bought n pens for m dollars each, so he spent a total of nm dollars on n pens. If we add up the total cost of the pens and divide that by the total number of pens purchased, we have the following $\frac{mn+nm}{m+n} = \frac{2mn}{m+n}$ Now, all you need to do is look at the answer choices and select the answer that matches your solution. Don't forget to loop back and verify that your answer matches the bottom line. 
(A) (B) (C) (D) (E) The correct answer choice is (B). On sat.collegeboard.org 44% of the responses were correct. For more help with math, visit www.myknowsys.com.
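The "picking numbers" strategy mentioned above can also be checked mechanically. This sketch (the concrete values m = 3, n = 5 are chosen arbitrarily for illustration) confirms that the step-by-step average equals the formula 2mn/(m+n):

```python
def average_price(m, n):
    """Average price per pen when Alice buys m pens at n dollars each
    and Ben buys n pens at m dollars each."""
    total_cost = m * n + n * m   # dollars spent by Alice plus Ben
    total_pens = m + n
    return total_cost / total_pens

# Pick numbers: m = 3 pens at 5 dollars each, and n = 5 pens at 3 dollars each
assert average_price(3, 5) == (2 * 3 * 5) / (3 + 5)  # matches 2mn/(m+n)
print(average_price(3, 5))  # 3.75
```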
514
2,157
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.28125
4
CC-MAIN-2019-09
latest
en
0.951697
http://waybuilder.net/sweethaven/Math/pre-algebra/PreAlg0102/default.asp?iNum=0101
1,516,751,406,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084892802.73/warc/CC-MAIN-20180123231023-20180124011023-00515.warc.gz
381,894,477
3,522
Chapter 1—Whole Numbers 1-1 Introducing Whole Numbers Natural Numbers When you count on your fingers—one-two-three-four, and so on—you are counting with natural numbers.  Using your fingers and thumbs for counting, you can count from 1 to 10. If you include all your toes, you can count another ten whole numbers. This is the kind of counting that comes most naturally. Of course we need to count a lot higher than ten. We need to count into the tens, hundreds, thousands, millions, and so on. So the natural number system goes far beyond the limitations of simple finger-and-toe counting. It begins with the number 1 and goes upward as far as we can imagine. And even then, it keeps going. We can show the natural number system this way: 1, 2, 3, 4, 5, 6, 7, 8, 9, ... where the ellipsis (three dots in a row) indicates that the counting continues without end. Another way to portray the natural number system is with a number line. A number line is a scale that looks and works much like a measuring stick. You can see that the numbers are marked on the scale and arranged in order, from left to right. The dashes and arrow at the right end of the line tell us that the scale and the counting can go on this way forever. Natural number line. Whole Numbers Although the natural number system proved to be very handy for conducting business in ancient times, it lacked one very important feature: the concept of zero. When a zero is added at the beginning of the natural number system, the system becomes the whole number system: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... The whole number system begins with zero and counts upward through tens, hundreds, thousands, millions, and so on. The number line for the whole number system looks like this: Whole number line The scale for the whole number line begins with zero and runs to the right. How far does it run to the right? We can say that the whole number system extends from 0 to ∞ (spoken as "from zero to infinity"). 
The Decimal Numbering System The whole-number system uses only ten characters: 0 through 9. They are the characters of our familiar decimal numbering system. Note that the deci- in decimal means ten, the total number of digits (fingers and thumbs) on our two hands. 0 1 2 3 4 5 6 7 8 9 Every number we might ever want to express can be written as a combination of these ten simple digits. The Value of a Decimal Number Numbers have a certain value, or magnitude. In our decimal numbering system, the numeral 6 represents six things (stones, fingers, sticks, etc.). The numeral 4, on the other hand, represents four such things. Thus we can say that 6 is larger than 4. Likewise, we can say that 9 has a greater value than 2, 3 has a smaller value than 5, and 9 is the largest of all the decimal digits. The value, or magnitude, of a decimal number can also be indicated on a number line: You can see that the values of these decimal whole numbers increase from left to right. (Of course you can also say that the values decrease from right to left.) Values increase from left to right on the whole-number line
742
3,135
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.6875
5
CC-MAIN-2018-05
longest
en
0.925902
https://forgeatl.com/northern-ireland/two-variable-data-table-example.php
1,611,453,375,000,000,000
text/html
crawl-data/CC-MAIN-2021-04/segments/1610703544403.51/warc/CC-MAIN-20210124013637-20210124043637-00419.warc.gz
346,620,957
6,704
# Two-Variable Data Tables: Definition and Examples

A data table in Excel shows how changing the inputs to a formula affects its result, for example, profits. A one-variable data table varies a single input, while a two-variable data table lets you gauge the effect on one formula of changing the values of two inputs. To create a two-variable data table in Excel 2013, you enter two ranges of possible input values for the same formula in the Data Table dialog box: one range across the top row of the table and one range down its first column. A car loan calculator is a typical example: define the first set of variable inputs, then the second set. The value calculated in a two-variable data table can itself be based on a lookup formula such as HLOOKUP. A two-variable data table must follow Excel's rules for data tables; advantages of data tables include letting you experiment with many input combinations at once and seeing how different values change the outcome of a what-if analysis, such as a loan.

Data tables are also used outside Excel. In school science, a data table organizes measurements by pairing an independent variable with a dependent variable in measurable units (for example, time in hours), with one row or column per trial. In statistics, a two-way table presents categorical data (for example, race) by counting the number of observations that fall into each group for two variables, and bivariate data, the study of two variables, can be stored in a two-column data table. In R, the data.table package can summarize data grouped by two variables, and in T-SQL a table variable can hold rows whose data is later used to insert or update another table.
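Although the snippets above concern Excel's built-in feature, the underlying idea, evaluating one formula over a grid of two inputs, is easy to sketch in plain Python. The loan-payment formula and the input ranges here are illustrative, not taken from the original page:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortized loan payment: P*r / (1 - (1+r)^-n) per month."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Two-variable "data table": interest rates across the top, terms down the side
rates = [0.04, 0.05, 0.06]
terms = [3, 4, 5]
print("years" + "".join(f"{r:>9.0%}" for r in rates))
for t in terms:
    row = "".join(f"{monthly_payment(20000, r, t):>9.2f}" for r in rates)
    print(f"{t:>5}{row}")
```

Each cell of the printed grid is the same formula evaluated with one value from each input range, exactly the role the row and column ranges play in Excel's Data Table dialog.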
1,377
6,664
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.84375
3
CC-MAIN-2021-04
latest
en
0.763966
https://www.jiskha.com/display.cgi?id=1195449984
1,502,985,831,000,000,000
text/html
crawl-data/CC-MAIN-2017-34/segments/1502886103579.21/warc/CC-MAIN-20170817151157-20170817171157-00367.warc.gz
925,407,234
4,204
posted by .

Find the equation of a hyperbola with foci of (0,8), and (0,-8) and Asymptotes of y=4x and y=-4x

In a hyperbola with centre at the origin the equation of the asymptote is y = ±(b/a)x, so b/a = 4 or b = 4a. Also c = 8, then in a^2 + b^2 = c^2 for a hyperbola: a^2 + 16a^2 = 64, 17a^2 = 64, a = 8/√17, then b = 32/√17. Using x^2/a^2 = y^2/b^2 = -1: x^2/(64/17) - y^2/(32/17) = -1, 17x^2/64 - 17y^2/64 = -1.

correction: "using x^2/a^2 = y^2/b^2 = -1" should say "using x^2/a^2 - y^2/b^2 = -1"

Wow, it's that easy. I was making it harder than it is. I think that's how you set it up, but wouldn't 32^2 be 1024?

<..but wouldn't 32^2 be 1024> Of course, good for you for catching that. Let's blame it on a "senior moment".

## Similar Questions

Identify the graph of the equation 4x^2-25y^2=100. Then write the equation of the translated graph for T(5,-2) in general form. Answer: hyperbola; 4(x-5)^2 -25(y+2)^2=100 2)Find the coordinates of the center, the foci, and the vertices, …

2. ### College Algebra

Find the equation in standard form of the hyperbola that satisfies the stated condition. Foci (0,5) and (0,-5), asymptotes y = x and y =-x

3. ### math 30 conics

Hi i was given the equation (y-2)^2 - x^2/4 =1. I need to find the center, vertices and asymptotes of this hyperbola. I found the center (0,2) and the vertices were tricky but I think they are (0,1) (0,3) but I'm having trouble finding …

4. ### Precalc

Find the standard form of the equation of the hyperbola with the given characteristics. foci: (±4, 0) asymptotes: y= +/- 3x

5. ### algebra 2

Graph the equation. Identify the vertices, foci, and asymptotes of the hyperbola. Y^2/16-x^2/36=1 I think....vertices (0,4),(0,-4) foci?

6. ### College algebra

Find the standard form of the equation of the hyperbola with the given characteristics. Foci:(+4,0) or (-4,0) Asymptotes: y=4x or y=-4x.

7.
### Math/ Algebra Find an equation for the hyperbola described: 1) Vertices at (0,+/- 10); asymptotes at y= 5/3x 2) Vertices at (-+5,0); foci at (-+6,0) 8. ### precalculus find an hyperbola equation with foci (0, +/-8) and asymptotes; y=+/-1/2x? 9. ### Algebra For the following equation of a hyperbola determine the center, vertices, foci, and asymptotes? 10. ### Algebra2 What are the vertices, foci, and asymptotes of the hyperbola with the equation 16x^2-4y^2=64? More Similar Questions
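Putting the thread's answer and its 32^2 = 1024 catch together, the derivation in the thread's own notation can be written out cleanly (this summary is my own, not part of the thread):

```latex
% Foci (0, \pm 8): c = 8. Asymptotes y = \pm 4x: b/a = 4, so b = 4a.
a^2 + b^2 = c^2 \;\Rightarrow\; a^2 + 16a^2 = 64
\;\Rightarrow\; a^2 = \tfrac{64}{17}, \qquad b^2 = 16a^2 = \tfrac{1024}{17}.
% Vertical transverse axis, written as in the thread:
\frac{x^2}{64/17} - \frac{y^2}{1024/17} = -1
\quad\Longleftrightarrow\quad
\frac{17x^2}{64} - \frac{17y^2}{1024} = -1.
```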
843
2,367
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4
4
CC-MAIN-2017-34
latest
en
0.861554
http://webreference.com/programming/javascript/jf/2.html
1,542,641,330,000,000,000
text/html
crawl-data/CC-MAIN-2018-47/segments/1542039745800.94/warc/CC-MAIN-20181119150816-20181119172816-00386.warc.gz
374,939,951
13,870
Ad-Rotation in JavaScript | 2 | WebReference

[previous]

Next, we initialize the variable len, and set it to the length of both variables added together, and divided by two. We'll use this variable later on in our showAd() function. After that, we run an if/else statement, which basically checks if it is an even or odd number. We multiply len by two to get the total of both variable lengths (remember we divided len by two earlier), and then we use the modulus operator to get the remainder of the division of len and two. In other words, we divide len by two (len/2) and then get the remainder of that equation--the result is what the modulus operator provides. (For more information on the modulus and other Mathematical operators, see http://devedge.netscape.com/library/manuals/2000/javascript/1.3/reference/ops.html#1042400.) If the returned value is 1, the number is odd. So assuming that our script is missing an array value, the returnValue would be set to equal false. To continue, we want to write out the function that will display the ad. Remember, creating a function and calling a function are two different things--we're putting this part of the code in the HEAD tag, but we're going to call it from another part of the page, so that the image loads as if it had been placed there statically (no pre-loading, in other words).

function showAd()
{ /* Begin function showAd() */

if(returnValue==false) /* If we set the return value to false, use document.write() to display a blank image, and then exit the function (don't display the ad) */
{document.write('<img src="blank.gif">'); return false;}

var rand = Math.floor(len*Math.random()); /* Calculate a random number from zero to the maximum possible number, which we earlier set to the variable "len." */

/* Set the variable "link" to point to one value of the "links" array. Since "rand" is a random number, in this case from 0 to 2, it will select one of the three strings in the "links" array.
Remember again, arrays begin at index zero. So the first value of the links array is links[0], which would give us the value "http://yahoo.com/" and the second would be links[1]. The total string amount is three, but we're going from 0 to 2, which would select properly from the array index. */
var link = links[rand];

var img = imgs[rand]; /* We're doing the same thing with the imgs array here, as we did with the links array above. How will it correspond with the first array, though? Simple, we set the variable "rand" to a random value, but it does not change every time we call it. This means that if rand were to equal 1 in links[rand], it will also equal 1 in img[rand] and continue to equal 1 until it was set to something else. */

document.write('<a href="' + link + '" title="' + link + '"><img src="' + img + '" alt="' + img + '"></a>');
/* Use document.write() to write out the HTML code. Here basically all we're doing is filling in the HREF, TITLE, SRC and ALT attributes with different variables that we set above. You can test this by using alert(link) or alert(img) before or after the above line, but not before the variables link and img are set. */

} // And finally, we end the function

Here, we check to see if we set returnValue to false, and if so, display a blank image when the function is called to avoid any broken images or page display failures. Then we use return false to exit the function before anything else in the function is run--this ends the function in the case of a syntax error. Next a variable (rand) is set to a random number whose maximum value is equal to the variable len. Remember, len is equal to the length of both arrays, divided by two. The result is what's necessary to specify how far the random value can go. In this case, its maximum is 2. Next, we set the variable link to a random string from the links array, and the variable img to the corresponding string from the imgs array; it corresponds properly because the variable rand doesn’t change each time we call it--it only changes when we change it.
After all the processing is done, it's time to write the HTML to the page and use the variables link and img as the values. Now we need to call the function. In your HTML code, use the following script and it will display your random image wherever it’s placed--you can even call it more than once (but a different ad may be displayed, depending on the value of rand).

<script type="text/javascript"><!--
showAd();
//--></script>
<noscript>
<img src="blank.gif" alt="">
</noscript>

Shortening the Code More? Yep… We can shorten this as seen below:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<script type="text/javascript"><!--
var returnValue = true,
links = ["http://yahoo.com/", "http://msn.com/", "http://cnn.com/"],
imgs = ["yahoo.gif", "msn.gif", "cnn.gif"],
len = (links.length + imgs.length) / 2;
if((len*2)%2 == 1){returnValue=false;}
function showAd()
{
if(returnValue==false)
{document.write('<img src="blank.gif" alt="">');return false;}
var rand = Math.floor(len*Math.random());
document.write('<a href="' + links[rand] + '"><img src="' + imgs[rand] + '" alt=""></a>');
}
// --></script>
</head>
<body>
<script type="text/javascript"><!--
showAd();
//--></script>
<noscript>
<img src="blank.gif" alt="">
</noscript>
</body></html>

Here is a working example of Ad-Rotation in practice. Clicking on the link repeatedly randomly rotates the ads.

### Why not just change the SRC of the image onLoad?

The reason we don’t do that is because all users with JavaScript enabled would have to download the new ad after the page loads, which would just take up more processing time and bandwidth. This method will display a different image for users without JavaScript (a blank image), and users that have JavaScript will see a different ad in the order of where it would be had you used static HTML code instead. This avoids the page loading problems, and reduces confusion.
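As a rough modern restatement of the same technique (my own sketch, not code from the article): building the markup as a string instead of calling document.write directly makes the function testable, and the string can then be injected with innerHTML.

```javascript
// Paired link/image lists, as in the article.
var links = ["http://yahoo.com/", "http://msn.com/", "http://cnn.com/"];
var imgs  = ["yahoo.gif", "msn.gif", "cnn.gif"];

// Return the HTML for one randomly chosen ad, or a blank image when the
// two arrays are out of sync (the article's returnValue check).
function adHtml(links, imgs) {
  if (links.length !== imgs.length) return '<img src="blank.gif" alt="">';
  var rand = Math.floor(links.length * Math.random());
  return '<a href="' + links[rand] + '"><img src="' + imgs[rand] + '" alt=""></a>';
}

// In a browser: document.getElementById("ad").innerHTML = adHtml(links, imgs);
console.log(adHtml(links, imgs));
```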
1,345
5,750
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.9375
3
CC-MAIN-2018-47
latest
en
0.827763
http://www.lmfdb.org/ModularForm/GL2/TotallyReal/4.4.1125.1/holomorphic/4.4.1125.1-89.1-a
1,563,655,189,000,000,000
text/html
crawl-data/CC-MAIN-2019-30/segments/1563195526670.1/warc/CC-MAIN-20190720194009-20190720220009-00064.warc.gz
234,242,306
5,479
Properties

Base field: $$\Q(\zeta_{15})^+$$
Weight: [2, 2, 2, 2]
Level norm: 89
Level: $[89, 89, w^{3} + w^{2} - w - 4]$
Label: 4.4.1125.1-89.1-a
Dimension: 1
CM: no
Base change: no

Related objects

Base field $$\Q(\zeta_{15})^+$$: generator $$w$$, with minimal polynomial $$x^{4} - x^{3} - 4x^{2} + 4x + 1$$; narrow class number $$2$$ and class number $$1$$.

Form

Weight: [2, 2, 2, 2]
Level: $[89, 89, w^{3} + w^{2} - w - 4]$
Label: 4.4.1125.1-89.1-a
Dimension: 1
Is CM: no
Is base change: no
Parent newspace dimension: 2

Hecke eigenvalues ($q$-expansion)

The Hecke eigenvalue field is $\Q$.

Norm | Prime | Eigenvalue
5 | $[5, 5, -w^{2} + 1]$ | $3$
9 | $[9, 3, w^{3} + w^{2} - 4w - 3]$ | $4$
16 | $[16, 2, 2]$ | $-7$
29 | $[29, 29, -w^{3} - w^{2} + 2w + 3]$ | $0$
29 | $[29, 29, -w^{2} + w + 3]$ | $-6$
29 | $[29, 29, w^{3} - w^{2} - 4w + 2]$ | $6$
29 | $[29, 29, 2w^{3} + w^{2} - 7w]$ | $-3$
31 | $[31, 31, -2w + 1]$ | $-7$
31 | $[31, 31, 2w^{2} - 5]$ | $2$
31 | $[31, 31, 2w^{3} + 2w^{2} - 6w - 3]$ | $-4$
31 | $[31, 31, 2w^{3} - 8w + 1]$ | $-1$
59 | $[59, 59, w^{3} + w^{2} - 2w - 5]$ | $0$
59 | $[59, 59, -w^{3} + 2w^{2} + 4w - 5]$ | $-9$
59 | $[59, 59, -3w^{3} + 10w - 4]$ | $0$
59 | $[59, 59, -2w^{3} - w^{2} + 7w - 2]$ | $9$
61 | $[61, 61, 4w^{3} + w^{2} - 13w - 1]$ | $-1$
61 | $[61, 61, 2w^{3} - w^{2} - 5w + 2]$ | $-4$
61 | $[61, 61, -3w^{3} - w^{2} + 8w]$ | $-7$
61 | $[61, 61, 3w^{3} - w^{2} - 10w + 5]$ | $-1$
89 | $[89, 89, w^{3} + w^{2} - w - 4]$ | $-1$

Atkin-Lehner eigenvalues

Norm | Prime | Eigenvalue
89 | $[89, 89, w^{3} + w^{2} - w - 4]$ | $1$
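A quick numeric sanity check of the page's minimal polynomial (my own addition, not part of the LMFDB page): the generator of $\Q(\zeta_{15})^+$ can be taken to be $w = \zeta_{15} + \zeta_{15}^{-1} = 2\cos(2\pi/15)$, which should satisfy $x^{4} - x^{3} - 4x^{2} + 4x + 1 = 0$ up to floating-point rounding.

```javascript
// Evaluate p(x) = x^4 - x^3 - 4x^2 + 4x + 1 at w = 2*cos(2*pi/15).
function p(x) {
  return Math.pow(x, 4) - Math.pow(x, 3) - 4 * x * x + 4 * x + 1;
}

var w = 2 * Math.cos(2 * Math.PI / 15);
console.log(w, p(w)); // p(w) should be ~0 up to rounding error
```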
853
1,587
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.8125
3
CC-MAIN-2019-30
latest
en
0.274798
https://www.convertunits.com/from/cubic+hectometer/to/litre
1,659,932,371,000,000,000
text/html
crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00592.warc.gz
637,589,363
12,782
## Convert cubic hectometre to liter

How many cubic hectometer in 1 litre? The answer is 1.0E-9. We assume you are converting between cubic hectometre and liter. You can view more details on each measurement unit: cubic hectometer or litre. The SI derived unit for volume is the cubic meter. 1 cubic meter is equal to 1.0E-6 cubic hectometer, or 1000 litre. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between cubic hectometers and liters. Type in your own numbers in the form to convert the units!

## Quick conversion chart of cubic hectometer to litre

1 cubic hectometer to litre = 1000000000 litre
2 cubic hectometer to litre = 2000000000 litre
3 cubic hectometer to litre = 3000000000 litre
4 cubic hectometer to litre = 4000000000 litre
5 cubic hectometer to litre = 5000000000 litre
6 cubic hectometer to litre = 6000000000 litre
7 cubic hectometer to litre = 7000000000 litre
8 cubic hectometer to litre = 8000000000 litre
9 cubic hectometer to litre = 9000000000 litre
10 cubic hectometer to litre = 10000000000 litre

## Want other units?

You can do the reverse unit conversion from litre to cubic hectometer, or enter any two units below.

## Definition: Litre

The litre (spelled liter in American English and German) is a metric unit of volume. The litre is not an SI unit, but (along with units such as hours and days) is listed as one of the "units outside the SI that are accepted for use with the SI." The SI unit of volume is the cubic metre (m³).

## Metric conversions and more

ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!
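The chart above is just multiplication by 10^9, since a cubic hectometre is (100 m)^3 = 10^6 m^3 and a cubic metre is 1000 L. A minimal sketch of both directions:

```javascript
var LITRES_PER_CUBIC_HECTOMETRE = 1e9; // (100 m)^3 = 1e6 m^3, and 1 m^3 = 1000 L

function cubicHectometresToLitres(hm3) { return hm3 * LITRES_PER_CUBIC_HECTOMETRE; }
function litresToCubicHectometres(l)   { return l / LITRES_PER_CUBIC_HECTOMETRE; }

console.log(cubicHectometresToLitres(5)); // matches the chart row: 5 hm^3 = 5000000000 litre
```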
529
2,097
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2022-33
latest
en
0.776557
http://www.scientificlib.com/en/Mathematics/LX/CAGroup.html
1,604,193,025,000,000,000
text/html
crawl-data/CC-MAIN-2020-45/segments/1603107922746.99/warc/CC-MAIN-20201101001251-20201101031251-00340.warc.gz
162,061,491
4,658
Hellenica World

# CA-group

In mathematics, in the realm of group theory, a group is said to be a CA-group or centralizer abelian group if the centralizer of any nonidentity element is an abelian subgroup. Finite CA-groups are of historical importance as an early example of the type of classifications that would be used in the Feit–Thompson theorem and the classification of finite simple groups. Several important infinite groups are CA-groups, such as free groups, Tarski monsters, and some Burnside groups, and the locally finite CA-groups have been classified explicitly. CA-groups are also called commutative-transitive groups (or CT-groups for short) because commutativity is a transitive relation amongst the non-identity elements of a group if and only if the group is a CA-group.

History

Locally finite CA-groups were classified by several mathematicians from 1925 to 1998. First, finite CA-groups were shown to be simple or solvable in (Weisner 1925). Then in the Brauer-Suzuki-Wall theorem (Brauer, Suzuki & Wall 1958), finite CA-groups of even order were shown to be Frobenius groups, abelian groups, or two dimensional projective special linear groups over a finite field of even order, PSL(2, 2^f) for f ≥ 2. Finally, finite CA-groups of odd order were shown to be Frobenius groups or abelian groups in (Suzuki 1957), and so in particular, are never non-abelian simple. CA-groups were important in the context of the classification of finite simple groups. Michio Suzuki showed that every finite, simple, non-abelian, CA-group is of even order. This result was first extended to the Feit–Hall–Thompson theorem showing that finite, simple, non-abelian, CN-groups had even order, and then to the Feit–Thompson theorem which states that every finite, simple, non-abelian group is of even order. A textbook exposition of the classification of finite CA-groups is given as example 1 and 2 in (Suzuki 1986, pp. 291–305).
A more detailed description of the Frobenius groups appearing is included in (Wu 1998), where it is shown that a finite, solvable CA-group is a semidirect product of an abelian group and a fixed-point-free automorphism, and that conversely every such semidirect product is a finite, solvable CA-group. Wu also extended the classification of Suzuki et al. to locally finite groups.

Examples

Every abelian group is a CA-group, and a group with a non-trivial center is a CA-group if and only if it is abelian. The finite CA-groups are classified: the solvable ones are semidirect products of abelian groups by cyclic groups such that every non-trivial element acts fixed-point-freely and include groups such as the dihedral groups of order 4k+2, and the alternating group on 4 points of order 12, while the nonsolvable ones are all simple and are the 2-dimensional projective special linear groups PSL(2, 2^n) for n ≥ 2. Infinite CA-groups include free groups, PSL(2, R), and Burnside groups of large prime exponent (Lyndon & Schupp 2001, p. 10). Some more recent results in the infinite case are included in (Wu 1998), including a classification of locally finite CA-groups. Wu also observes that Tarski monsters are obvious examples of infinite simple CA-groups.

References

Brauer, R.; Suzuki, Michio; Wall, G. E. (1958), "A characterization of the one-dimensional unimodular projective groups over finite fields", Illinois Journal of Mathematics 2: 718–745, ISSN 0019-2082, MR 0104734
Schupp, Paul E.; Lyndon, Roger C. (2001), Combinatorial group theory, Berlin, New York: Springer-Verlag, ISBN 978-3-540-41158-1, MR 0577064
Suzuki, Michio (1957), "The nonexistence of a certain type of simple groups of odd order", Proceedings of the American Mathematical Society (American Mathematical Society) 8 (4): 686–695, doi:10.2307/2033280, ISSN 0002-9939, JSTOR 2033280, MR 0086818
Suzuki, Michio (1986), Group theory.
II, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 248, Berlin, New York: Springer-Verlag, ISBN 978-0-387-10916-9, MR 815926
Weisner, L. (1925), "Groups in which the normaliser of every element except identity is abelian.", Bulletin of the American Mathematical Society 31: 413–416, doi:10.1090/S0002-9904-1925-04079-3, ISSN 0002-9904, JFM 51.0112.06
Wu, Yu-Fen (1998), "Groups in which commutativity is a transitive relation", Journal of Algebra 207 (1): 165–181, doi:10.1006/jabr.1998.7468, ISSN 0021-8693, MR 1643082

Mathematics Encyclopedia
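To make the definition concrete, here is a brute-force check (my own illustration, not from the article) that the symmetric group S3, which is the dihedral group of order 4k+2 with k=1 mentioned above, is a CA-group: the centralizer of every non-identity element is abelian, even though S3 itself is not.

```javascript
// Elements of S3 as permutations of {0,1,2}; compose(a, b) = a after b.
var S3 = [
  [0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]
];
function compose(a, b) { return [a[b[0]], a[b[1]], a[b[2]]]; }
function eq(a, b) { return a.join() === b.join(); }
var id = [0, 1, 2];

// Centralizer of g: all x in S3 with x*g == g*x.
function centralizer(g) {
  return S3.filter(function (x) { return eq(compose(x, g), compose(g, x)); });
}
function isAbelian(set) {
  return set.every(function (a) {
    return set.every(function (b) { return eq(compose(a, b), compose(b, a)); });
  });
}

// CA-group check: centralizer of every non-identity element is abelian.
var isCA = S3.filter(function (g) { return !eq(g, id); })
             .every(function (g) { return isAbelian(centralizer(g)); });
console.log(isCA); // true: S3 is a CA-group
```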
1,169
4,437
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.78125
3
CC-MAIN-2020-45
latest
en
0.931226
https://nickadamsinamerica.com/printable-blank-multiplication-grid-12x12/6fff51231dd01d70e21999fb6badf11e/
1,643,132,926,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320304859.70/warc/CC-MAIN-20220125160159-20220125190159-00014.warc.gz
481,096,865
9,463
By . Worksheet. At Wednesday, November 17th 2021, 08:34:09 AM.

This Logic Truth Table Handout will produce a handout that gives a brief explanation as well as truth tables of logical operations that are used when evaluating logic statements. These Logic Worksheets are a good resource for students in the 8th Grade through the 12th Grade.

These subtraction worksheets are good for introducing algebra concepts. You may select various types of characters to replace the missing numbers on these subtraction worksheets. The formats of the subtraction worksheets are horizontal and the numbers range from 0 to 99. You may select up to 30 subtraction problems for these worksheets.

When we add numbers that need to be carried over we can only carry over a digit into the right spot. For example, when we add the 4 and 9 from the problem on the board we have to carry over the answer (13) into the correct place value spots.

Free Printable Multiplication Times Table Chart Multiplication Times Tables Multiplication Chart Times Table Chart
Blank Times Table Grid For Timed Times Table Writing Like I Remember When I Was In School So Glad I Foun Multiplication Multiplication Chart Times Table Chart
Multiplication Times Table Chart To 12x12 Blank Gif 1000 1294 Multiplication Multiplication Chart Times Table Chart
Times Table Grid To 12x12 Multiplication Multiplication Chart Homeschool Math
Multiplication Times Table Chart To 12x12 Mini Blank 1 Early Practice Printable Chart Math Folders Times Table Grid
Printable Blank Multiplication Table 12x12 Chart Rgb Color Codes Coding Color Coding
Multiplication Times Table Chart To 12x12 Blank Multiplication Times Tables Multiplication Chart Multiplication
Multiplication Grids 12x12 Blank Filled In For Busy Teachers Multiplication Grid Multiplication Math About Me
Multiplication Table Pdf Matematik Carpma Okuma
Times Table Grid To 12x12 In 2021 Multiplication Chart Multiplication Chart Printable Blank Multiplication Chart
Blank Multiplication Charts Up To 12x12 Multiplication Chart Blank Multiplication Chart Printable Chart
Blank Multiplication Table 1 12 Multiplication Chart Multiplication Worksheets Blank Multiplication Chart
Multiplication Table Multiplication Table Multiplication Multiplication Chart
Easy Printable 12x12 Multiplication Table Times Table Chart Times Tables Multiplication Table
Multiplication Chart Empty Pdf Printable Blank Multiplication Grid Multiplication Chart Pinterest Multiplication Chart Multiplication Table Multiplication Grid
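If you would rather generate a 12x12 grid than print one, here is a minimal sketch (my own, not one of the linked charts):

```javascript
// Build a size-by-size multiplication grid: grid[r][c] = (r+1)*(c+1).
function multiplicationGrid(size) {
  var grid = [];
  for (var r = 1; r <= size; r++) {
    var row = [];
    for (var c = 1; c <= size; c++) row.push(r * c);
    grid.push(row);
  }
  return grid;
}

// Print it as tab-separated rows, ready to paste into a blank chart.
multiplicationGrid(12).forEach(function (row) {
  console.log(row.join("\t"));
});
```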
505
2,531
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2022-05
latest
en
0.755487
https://wizardofvegas.com/forum/gambling/slots/24728-want-to-try-something-new/
1,686,107,736,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224653501.53/warc/CC-MAIN-20230607010703-20230607040703-00109.warc.gz
662,491,005
12,299
GamecoxJax Joined: Jan 5, 2016 • Posts: 22 January 8th, 2016 at 11:07:22 AM permalink

Wife and I are going on a cruise, and we love to gamble. Problem is, neither of us are very good, and rarely have a winning session, let alone a trip. We play craps and BJ, and while we both understand odds and BS, we still just have that black cloud of negative results that follows us. We love it, despite the lack of success. So, this trip, we are thinking about sitting down at a slot bank, and trying $20 at a time that way, rather than $200 at a time on a craps or BJ table. So, the question I have... What would be the preferred game or type of game? I have avoided slots almost exclusively, just because we love the live action of the tables. But, in an effort to make our gambling dollar go farther, we want to try the flashing lights and bells and whistles flavor. So, again, what games or strategy should noobs take into a slot endeavor? Thanks

Romes Joined: Jul 22, 2014 • Posts: 5580 January 8th, 2016 at 11:20:33 AM permalink

Sorry to hear about the poor variance you've had thus far in Craps/BJ. However, you really should stick to these games. Playing $20 at a time won't change the house edge and your much, much greater negative expectation. Example: $200 at an average .5% BJ game. Say you play for 2 hours, flat betting $10 per hand, with about 70 hands per hour. EV = NumHands*AvgBet*HE = 140*10*(-.005) = -$7. Thus, in your 2 hour span playing blackjack, in the "long run" on average you could expect to lose $7. Now let's look at slots, where most states have a minimum payback of 85%, putting it at a 15% HE. Hell, let's give you the benefit of the doubt and say it's only 10% HE. Let's say you play the same amount of time/action through as blackjack. Now let's look at your EV. EV = TotalAction*HE (TotalAction is the same as NumHands*AvgBet) = 1400*(-.1) = -$140. Thus, in your couple-hour span playing the slots, in the "long run" on average you could expect to lose $140.
I hope you see the MASSIVE difference. Yes, you've been having poor variance for blackjack, but even if you're playing a losing game (just basic strategy) the same math applies to you as it does an AP. You'll have an expectation, and you'll have standard deviations around that expectation (which is how some basic strategy players can be winning players for a little while), but eventually everyone will gravitate towards their true EV of the game. For counters, that's a positive number. For basic strategy players, it's a negative number, but not "hideously" negative (I guess depends on your action and what you consider hideous). So even you, a basic strategy player, can be "below expectations" and "running bad"...i.e. having bad luck/variance. All this means is IN THE LONG RUN you will balance back "upwards" towards your natural negative EV. So if your long term EV is -$10,000 for your lifetime, and you're already -$20,000... This means by the end of your life (decades from now) you can mathematically EXPECT to gravitate upwards towards that -$10,000 number. That was a long-winded way of saying, keep playing blackjack and craps. Your chances for success are MUCH MUCH higher there and you'll lose a LOT less money in the long run. Yes, you've had some bad luck with your "black cloud" but that black cloud is just a figment of your imagination. It happens, to both AP's and BS players alike. Gotta keep on playing through it and eventually it'll turn into a white cloud! Not that you'll have a positive expectation in the long run just playing BS, but you will do much better than now when you're having "bad luck." I hope that makes sense =). Playing it correctly means you've already won.

GWAE Joined: Sep 20, 2013 • Posts: 9854 January 8th, 2016 at 11:42:22 AM permalink

Romes don't forget he mentioned playing on a cruise line.
Game choice for BJ is going to be worse, but moreover the slots are not regulated and they do not disclose nor have to adhere to state laws on payback percent. I would say they are probably in the 15-20%. They do have VP but you are usually looking at games like 6/5 JOB and worse. I will say I was the only vulture on my last cruise so that turned out very very well. Expect the worst and you will never be disappointed. I AM NOT PART OF GWAE RADIO SHOW

GamecoxJax Joined: Jan 5, 2016 • Posts: 22 January 8th, 2016 at 11:45:09 AM permalink

Absolutely makes sense.... No arguments on any of that..... But even APs know that sometimes, for some reason, there is just that guy that can't catch a break. Trust me, I could be hired as a "cooler" if I lived in the desert. It's that bad. But I just love it so much... There is nothing in the world that I have ever experienced as fun as a craps table with 10 people at it, a thousand dollars spread by everyone, and the nervous silence as those dice bounce off the wall. Or the rush that comes from a double down with an 11 when you know the count is in your favor, but just haven't seen your luck turn. I do appreciate the mathematical breakdown. I guess I just figured with bonuses and extra spins and side games, we might enjoy our time in the slots section this trip. But I don't want to go in blind.

GamecoxJax Joined: Jan 5, 2016 • Posts: 22 January 8th, 2016 at 11:45:58 AM permalink

Vulture?

GWAE Joined: Sep 20, 2013 • Posts: 9854 January 8th, 2016 at 11:52:05 AM permalink

Quote: GamecoxJax Vulture?

Means playing certain games when they are at an advantage but not creating the advantage. Expect the worst and you will never be disappointed. I AM NOT PART OF GWAE RADIO SHOW

GWAE Joined: Sep 20, 2013 • Posts: 9854 January 8th, 2016 at 11:54:12 AM permalink

Quote: GamecoxJax I do appreciate the mathematical breakdown. I guess I just figured with bonuses and extra spins and side games, we might enjoy our time in the slots section this trip.
But I don't want to go in blind. The bonuses are part of the house edge. They are not extra on top. Play slots if you think they are fun but don't play them thinking they are going to be better than table games. Of course there is always the exception but we don't write about those on public boards. Expect the worst and you will never be disappointed. I AM NOT PART OF GWAE RADIO SHOW Romes Joined: Jul 22, 2014 • Posts: 5580 January 8th, 2016 at 11:55:22 AM permalink Quote: GamecoxJax ...I do appreciate the mathematical break down. I guess I just figured with bonuses and extra spins and side games, we might enjoy our time in the slots section this trip. But I don't want to go in blind. Enjoyment is in the eye of the beholder =P but you will ABSOLUTELY lose more money playing slots. Not only will you lose more money in the long run, but in the short run of your trip you're much more likely to lose playing slots as well. Trust me when I say I understand the bad run you're in. My business partner and I went through a 5 month spread of "YOU ARE NOT ALLOWED TO WIN ONE HAND" this year. We put in 250 hours this year and finished slightly in the RED for blackjack... and this is with us playing with a 1-2% (or more from other things we do) advantage! KewlJ, a professional blackjack player on this site, had a 6 month losing streak in years past as well. We absolutely KNOW what it's like, looks like, feels like, etc, etc. You just walk in feeling like you're going to lose your next bet before you even place it. KewlJ played through it, we played through it, and it SUCKED but it's what you have to do to get to the good side of the coin. You have to "try" to ignore the fact that you've been getting busted up playing and just keep playing your game. We've had WEEKS where we busted on every single 12 we hit, WEEKS where every 11 double down was met with a deuce and the dealer draws out on a bust card... 
We've literally had 3 shoes (6 deck) in a row where the dealer didn't bust once. I went 1-22 with the hand 19 and had losing streaks of like 14 in a row (.48^14 odds of that happening). I've seen the ugly side of the coin, and I have a much larger respect for the GRIND that blackjack truly is (AP or not). All I'm saying is if you have fun playing the game, don't let "bad luck" stop you from playing... especially if that means you're going to play a MUCH MUCH worse game such as slots =P. Playing it correctly means you've already won. GamecoxJax Joined: Jan 5, 2016
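Romes's expected-value arithmetic from earlier in the thread can be reproduced mechanically. A minimal sketch (the hand counts, bets, and house edges are the thread's illustrative numbers):

```javascript
// Expected result = total action * house edge, where action = hands * average bet.
// Returned as a negative number: the expected loss.
function expectedValue(numHands, avgBet, houseEdge) {
  return numHands * avgBet * -houseEdge;
}

// Blackjack: 140 hands at $10 with a 0.5% house edge -> about -$7.
var bjEV = expectedValue(140, 10, 0.005);
// Slots: the same $1400 of action at a (generous) 10% house edge -> about -$140.
var slotEV = expectedValue(140, 10, 0.10);
console.log(bjEV, slotEV);
```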
https://www.jiskha.com/search?query=Find+the+value+of+K+such+that+the+following+trinomials+can+be+factored+over+the+integers%3A+1.+36x%5E2%2B18x%2BK+2.+3x%5E2+-+16x%2BK
# Find the value of K such that the following trinomials can be factored over the integers: 1. 36x^2+18x+K 2. 3x^2 - 16x+K 93,331 results 1. ## alegebra 1. What is the factored form of 4x 2 + 12x + 5? (1 point) (2x + 4)(2x + 3) (4x + 5)(x + 1) (2x + 1)(2x + 5) (4x + 1)(x + 5) 2. What is the factored form of 2x 2 + x – 3? (1 point) (2x + 3)(x – 1) (2x + 1)(x – 3) (2x – 3)(x + 1) (2x – 1)(x + 3) 3. 2. ## Math 2. The sum of the reciprocals of two consecutive positive integers is 17/72. Write an equation that can be used to find the two integers. What are the integers? Steve helped me yesterday and gave me the hint 8+9=17. Then I thought about it and saw 8*9=72, 3. ## Math The sum of four consecutive even integers is the same as the least of the integers. Find the integers. I'm not sure how to solve it and put it in an equation! 4. ## Algebra x^2 + kx - 19 Find all values of k so that each polynomial can be factored using integers. 5. ## Math Determine 2 values of k so that trinomials can be factored over integers 36m^2 + 8m + k and 18x^2 - 42y + k. 6. ## math The sum of the reciprocals of two consecutive even integers is 11/60. Find the integers....... 1/n + 1/(n+2) = 11/60 what is the question? the question is, "What is the value of the integers? I will be happy to critique your work. We don't do it for 7. ## Math, Please Help 1) Given the arithmetic sequence an = 4 - 3(n - 1), what is the domain for n? All integers where n ≥ 1 All integers where n > 1 All integers where n ≤ 4 All integers where n ≥ 4 2) What is the 6th term of the geometric sequence where a1 = 1,024 and 8. ## Math Sum of 2 consecutive integers is 59. Write an equation that models that situation and find values of both integers 9. ## quad. eq. find 3 consecutive integers such that the product of the second and third integer is 20 Take three integers x, y, and z. The for xyz, we want y*z = 20 The factors of 20 are 20*1 10*2 5*4. 20*1 are not consecutive. 10*2 are not consecutive. But 5 and 4 are 10. 
## math Match each polynomial in standard form to its equivalent factored form. Standard forms: 8x^3+1 2x^4+16x x^3+8 the equivalent equation that would match with it (x+2)(x2−2x+4) The polynomial cannot be factored over the integers using the sum of cubes 11. ## math The sum of 4 consecutive odd integers is -104. Find the integers with x for unknown numbers. 12. ## Pre Calc 12 Four consecutive integers have a product of 360 Find the integers by writing a plynomial equation that represents the integers and then solving algebraically. 13. ## Algebra The sum of the squares of two consecutive positive even integers is one hundred sixty-four. Find the two integers. 14. ## Math The sum of four consecutive even integers is the same as the least of the integers. Find the integers. Help please! 15. ## Algebra There are three consecutive integers the square of the largest one equals the sum of the squares of the two other.Find the integers 16. ## algebra Find the complex zeros of the polynomial function. Write f in factored form. F(x)=x^3-8x^2+29x-52 Use the complex zeros to write f in factored form F(x)=____(reduce fractions and simplify roots) 17. ## algebra the sum of 2 integers is 41.when 3 times the smaller is subtracted from the larger the result is 17. find 2 integers 18. ## Math Two even integers are represented by 2n and 2n+2. Explain how you can find the value of those integers if their sum is 14. Name the integers. 19. ## Algebra The sum of 3 integers is 194. the sum of the first and second integers exceeds the third by 80. The third integer is 45 less than the first. Find the three integers 20. ## Math Find the value of K such that the following trinomials can be factored over the integers: 1. 36x^2+18x+K 2. 3x^2 - 16x+K 21. ## Algebra II The sum of the reciprocals of two consecutive positive integers is 17/12. Write an equation that can be used to find the two integers. What are the integers? 22. ## Algebra he sum of three integers is 193. 
The sum of the first and second integers exceeds the third by 85. The third integer is 15 less than the first. Find the three integers. 23. ## Algebra II factoring I need help. i am having trouble factoring trinomials into binomials. an example problem is 4n^2-5n-6 can someone show me step by step how to factor these kind of problems easily? Take the coefficient of your quadratic term in this case 4 and multiply it 24. ## Math State whether each expression is a polynomial. If the expression is a polynomials, identify it as a monomial,a binomial, or a trinomials. 1.7a^2b+3b^2-a^2b 2.1/5y^3+y^2-9 3.6g^2h^3k Are these the right answers? 1.Yes-trinomials 2.No 3.Yes- monomials. 25. ## algebra Five times the smallest of three consecutive odd integers is ten more than twice the longest. Find the integers. The sum of three integers is one hundred twenty three more than the first number. The second number is two more than the first number. the 26. ## math Consecutive integers are integers that follow each other in order (for example 5, 6, and 7). The sum of three consecutive integers is 417.Let n be the first one. Write an equation that will determine the three integers. 27. ## algebra the sum of three integers is 220. The sum of the first and second integers exceeds the third by 94. The third integer is 55 less than the first. Find the three integers. 28. ## Math 1) Given the arithmetic sequence an = 4 - 3(n - 1), what is the domain for n? All integers where n ≥ 1 All integers where n > 1 All integers where n ≤ 4 All integers where n ≥ 4 2) What is the 6th term of the geometric sequence where a1 = 1,024 and 29. ## MORE MATH What is the factoring by grouping? When factoring a trinomial, why is it necessary to write the trinomials in four terms? I will be happy to critique your thinking on this. I do not understand to even answer it? How do we determine the common factors in an 30. ## Algebra 1) Given the arithmetic sequence an = 4 - 3(n - 1), what is the domain for n? 
All integers where n ≥ 1 All integers where n > 1 All integers where n ≤ 4 All integers where n ≥ 4 2) What is the 6th term of the geometric sequence where a1 = 1,024 and 31. ## math Find the sum of the first one thousand positive integers. Explain how you arrived at your result. Now explain how to find the sum of the first n positive integers, where n is any positive integer, without adding a long list of positive integers by hand and 32. ## algebra If the first and third of three consecutive odd integers are added, the result is 69 less than five times the second integer. Find all three integers. 33. ## math Two positive integers aer in the ratio 2:5. If the product of the two integers is 40, find the larger integer. 34. ## algebra 1 help find all values of k so that each polynomial can be factored using integers 1) x^2+kx-19 2) x^2-8x+k, k>0 35. ## math i don't get this question consecutive integers are integers that differ by one. you can represent consecutive integers as x,x+1,x+2 and so on. write an equation and solve to find 3 consecutive integers whose sum is 33 36. ## Integers The larger of two positive integers is five more than twice the smaller integer. The product of the integers is 52. Find the integers. 37. ## algebra Find all positive values for k for which each of the following can be factored. x^2+3x+kthe coefficent of the middle term is 3 3=2+1 k=2*1=2 did i do this right This one I had no Idea where to start can some one please explain it to me x^2+x-k The second 38. ## math the product of 2 consecutive integers is 156. find the integers. I found them mentally 12 and 13 but, how do I find them while solving? 39. ## college math question Find the GCD of 24 and 49 in the integers of Q[sqrt(3)], assuming that the GCD is defined. (Note: you need not decompose 24 or 49 into primes in Q[sqrt(3)]. Please teach me . Thank you very much. The only integer divisor of both 24 and 49 is 1. I don't 40. 
## Algebra Simplify the expression (3y+1)^2 + (2y-4)^2 I would start by expanding the two sets of parentheses, then combining like terms. Then take a look at the results and see if it can be factored or otherwise simplified. For example, the first one is 9y^2 + 6y 41. ## Precalculus 11 The sum of the squares of three consecutive integers is 149. Find the integers. I got 6,7,8 or -6,-7,-8 There's no reason why it would specifically be one or the other right? It could be either? Thanks!! 42. ## Math Determine 2 values of k so that trinomials can be factored over integers 36m^2 + 8m + k and 18x^2 - 42y + k. 43. ## Factoring Polynomials The book says: Find three different values that complete the expression so that the trinomial can be factored into the product of two binomials. Factor your trinomials. 4g^2+___g+10 Okay, I tried Hotmath, but it didn't explain ALL the steps. I just simply 44. ## algebra Find all integers b so that the trinomial 3x^2 +bx + 2 can be factored. 45. ## Algebra 2 Well I'm curently taking this class as a sophmore and have taken geometry freshmen year of high school and also I am taking physics at the same time and in math class when I'm asked to factor i go absolutley nuts because guessing numbers is just no my 46. ## dpis 36x^2+8x+K,find the value of k such that each trinomial can be factored over the integers 47. ## Algebra 1 Find all values of k so each trinomial can be factored using integers. 3s^2 +ks - 14 48. ## Math Can somebody please explain to me how to factor trinomials that contain two unknown variables and are complex sum & product?? Examples: 2x^2 + 13xy + 15y^2 11x^2 + 14xy + 3y^2 I understand how to factor trinomials that only have x as the variable, but 49. ## math (factoring quadratic trinomials) i am doing factoring with quadratic trinomials, And I don't understand any of it! The problem im working on is 5p2-22p+8 I factored it out to be (5p2-20p)(40p2-2p) I am so stuck, can anyone help 50. 
## math wat are trinomials http://dictionary.reference.com/search?q=trinomials&r=66 51. ## MATH For all integers a, b, c, and d, which of the following is a factored from of ab + ac + db + dc a) (a + d)(b + c) b) (ad + db) + (ac + dc) c) (a + c)(d + b) d) (ab)(cd) e) (a + b)(c + d) 52. ## Math Algebra I’m working on factoring trinomials of the form ax2 + bx + c & perfect square trinomials. This chapter explains that a further method will be needed to conduct a trial and check process, in which the middle term is the sum of the products. Here’s the 53. ## Algebra For all integers a, b, c, and d, which of the following is a factored from of ab + ac + db + dc Is this correct: (a + d)(b + c) 54. ## algebra 2 Factor completely with respect to the integers. 1. 9x^2 - 4 2. x^3 + 64 3. 200x^2 - 50 4. 8x^3 - 64 5. x^3 + x^2 + x + 1 6. x^3 - 2x^2 + 4x - 8 7. 2x^3 + 4x^2 + 4x + 8 8. 2x^3 + 3x^2 -32x - 48 9. 7x^3 + 14x^2 + 7x 10. 6x^3 - 18x^2 - 2x +6 11. 3x^4 - 300x^2 55. ## Math For which integral values of k can 4x^2+kx+3 be factored over the integers? 56. ## Math determine one value of c that allows the trinomial cy^2 + 36y - 18 to be factored over the integers 57. ## Algebra 1 The architect must factor several trinomials the are of the form x2- mx + n, where m & n are whole numbers greater than zero. She wonders if any of these trinomials factor as (x+a)(x+b), where a > 0 and b < 0. Is this possible? Why or why not? I do not 58. ## Algebra For each of the following expressions, determine all of the values of k that allow the trinomial to be factored over the integers. A) x^2 + kx - 19 B) 25x^2 + kx + 49 C) x^2 + kx + 8 59. ## Algebra One of the polynomials below cannot be factored using integers. Write “prime” for the one trinomial that does not factor and all the factors for the two that do. a) x2– 5x + 6 b) x2+ x + 6 c) 2x2– 2x + 12 60. ## math Which equation shows p(x)=x6−1 factored completely over the integers? 
(Hint: You will need to use more than one method to complete this problem.) p(x)=(x3+1)(x3−1) p(x)=(x2−1)(x4+x2+1) p(x)=(x−1)(x2+x+1)(x+1)(x2−x+1) p(x)=(x−1)(x+1)(x4+x2+1) 61. ## Math 2x(x+3) = 0 Is there a way to solve that using the below methods? Substitute zero for f(x) and find the roots of the resulting equation, or graph the function and determine the x-intercepts of the graph. Finding the roots of a single variable equation may 62. ## Math x^2+x-k find all positive values for k, if it can be factored Allowed values of k for for the expression to be factorable with integers are a*(a+1) where a is an integer equal to 1 or more. For example: 2, 6, 12, 20... The factors are (x-a)(x+a+1) There 63. ## Pre-Calc/Trig Please check the first and help with the second-thank you Find all possible rational roots of f(x) = 2x^4 - 5x^3 + 8x^2 + 4x+7 1.I took the constant which is 7 and the leading coefficient which is 2 and factored them 7 factored would be 7,2 2 factored 64. ## Mat 117 •I am having trouble figuring out this question. Why is (3x + 5) (x - 2) + (2x - 3) (x - 2) not in factored form? Show specifically how to find the correct final factored form. What is this factoring method called? Please help. Thank you. 65. ## Algebra The larger of two positive integers is five more than twice the smaller integer. The product of the integers is 52. Find the integers. Must have an algebraic solution. 66. ## Math When you solve questions like "The sum of 3 consecutive integers is 147. Find the integers." do you find consecutive even integers and consecutive odd integers the same way? 67. ## Math Is i^2=-1 for how many integers N is (N+i)^4 an integer? so far i have factored out (N+i)^4 to be ... n^4+2n^2+1 i would suggest checking that, but from there i don't know where to go...doesn't it work for every integer? 68. ## MATH,HELP Can someone show me how to even do this problem. Find all positive values for K for which each of the following can be factored. 
x^2 + x - k News flash: It can be factored with any term (although the roots may not be real roots). Now if you mean factored 69. ## Math This is the second part of a two part question for an online class. It gave me the degree and the zeros and I had to give the factored form. I got that part right, but I need to know how to get the expanded form from the factored form. I have several more 70. ## Algebra 1 How exactly do you find/solve a Perfect Square Trinomials and factoring them? 71. ## math the sum of 3 consecutive integers is the same value as twice the greatest of the integers. Find the 3 integers. 72. ## math i don't get this question consecutive integers are integers that differ by one. you can represent consecutive integers as x,x+1,x+2 and so on. write an equation and solve to find 3 consecutive integers whose sum is 33 how do you solve this 73. ## mat117 Factor each expression a^2(b-c)-16b^2(b-c) Help show me how (b-c) appears in both terms and can be factored out, giving you (b-c)(a^2-16b^2) Now note that the second term can also be factored since it is the difference of two perfect squares. (a^2-16b^2) = 74. ## math can this be simplified further? x^4+2x^3+4x^2+8x+16 Unless you are asking if the statement can be factored, the answer to your question is "no". Only variables with the same exponents can be added together. you mean this could be factored? how? 75. ## A number thoery question Please help me! Thank you very much. Prove Fermat's Last theorem for n=3 : X^3 + Y^3 = Z^3 where X, Y, Z are rational integers, then X, Y, or Z is 0. Hint: * Show that if X^3 + Y^3 = Epsilon* Z^3, where X, Y, Z are quadratic integers in Q[sqrt(-3)], and 76. ## math Which expression correctly shows p(x)=27x3+45x2−3x−5 factored completely over the integers? (Hint: You will need to use more than one method to complete this problem.) (3x−5)(9x2−3x+9) (3x−5)(9x2+3x+9) (3x+5)(3x+1)(3x−1) (3x+5)(9x2−1) 77. ## algerbra Find the GCF of each product. 
(2x2+5x)(7x - 14) (6y2 -3y)( y+7) When the term "Greatest Common Factor" is used, it applies to a pair of numbers. The terms you have liated are polynomials that have already been factored. They could be further factored into 78. ## Algebra Find all positive values for k for which each of the following can be factored. X^2-x-k (Your answer seems to be the same as the other problem x^2+x-k ) Is this correct too? Consider your equation to be in the form ax^2 + bx = c = 0. It can be factored 79. ## calculus--please help!! find the complete zeros of the polynomial function. Write f in factored form. f(x)=3x^4-10x^3-12x^2+122x-39 **Use the complex zeros to write f in factored form.** f(x)= Please show work 80. ## math A. Four times one odd integer is 14 less than three times the next even integer. Find the integers. B. The average of four consecutive odd integers is 16. Find the largest integer. C. When the sum of three consecutive integers is divided by 9 the result is 81. ## college math I do not understand a problem from a text book or how to solve the problem for the answer. Could shomeone show me the steps (show work) on how to solve this question. The sum of the intergers from 1 through n is n(n+1)/2. the sum of the squares of the 82. ## math Which expression correctly shows p(x)=9x5−9x2 factored completely over the integers? 9x2(x2+1)(x2−1) (x−1)(x2+x+1) 9x2(x−1)(x2+x+1) 9x2(x2−2x+1) is it (x−1)(x2+x+1) 83. ## Calculus I wanted to know how you would know that -4x^2 + 2x +90 / x-5 would be able to factor down to (x-5)(-4x-18)/x-5 By looking at -4x^2 + 2x +90 / x-5 , I would never think that it could be factored down to (x-5)(-4x-18)/x-5 How should I approach it so I know 84. ## help!!!!!!!!!! When factoring a trinomial, why is it necessary to write the trinomials in four terms?" You keep posting this without any thinking. Can this be factored in four terms? 10x^5 + 2x + 17 Perhaps you are meaning factoring the "quadratic" (a polynomial of 85. 
## math Factor 2x²+13x+40 I thought with this sort of problem you are supposed to multipy 40 by 2 and get 80 and then come up with two factors that equal 13. If this doesn't work, you are supposed to factor out a number/variable. Since neither of these methods 86. ## math please help me simplify (x^3+8)/(x^4-16). thanks. The denominator is cleary the differerence of two squares (X^2-4)(x^2+4). That can again be factored to (x+2)(x-2)(x^2+4). The numerator can be factored using the trinomial simplification.. 87. ## Find the sum of thuis question Can someone find the sum for this equation please. (X^2)/(X^2-16)=(5X+4)/(X^2-16) I don't think it is finding the sum, but I can simplify the equation. (X^2)/(X^2-16)=(5X+4)/(X^2-16) First, multiply both sides by X^-16. X^2 = 5X+4 Subtract 5X+4 from both 88. ## pre-calculus find the complex zeros of the polynomial function. write F in the factored form. f(X)=x^3-7x^2+20x-24 use the complex zeros to write f in factored form. f(x)= (reduce fractions and simplify roots) 89. ## Algebra I have to figure out how x^3+216=0 is factored, and what potential solutions might be. I factored it like this: (x+3)(x+3)(x+24), resulting in the solutions being -3 (double route) and -24. But my book says these are not correct answers. Why not? Can you 90. ## Algebra How can you factor? 1. x^4 - 4x^2 + 3 2. x^3 - 2x^2 - 4x + 8 Thanks 1. set a = x^2 this should simplify your problem once you've factored the expression, substitute x^2 back in for a 2. For our purposes I don't think this expression can be factored. 91. ## maths the non- decreasing sequence of odd integers {a1, a2, a3, . . .} = {1,3,3,3,5,5,5,5,5,...} each positive odd integer k appears k times. it is a fact that there are integers b, c, and d such that, for all positive integers n, añ = b[√(n+c)] +d. Where [x] 92. ## Math If the larger of two consecutive integers is subracted from twice the smaller integer, then the result is 21. Find the integers. 93. 
## Math Algebra Two times the smallest of three consecutive odd integers is one less than the largest integer. Find the integers 94. ## algebra 1 The sum of four consecutive integers is decreased by 30, the result is the fourth integer. Find the four integers. 95. ## Algebra For two consecutive positive even integers, the product of the smaller and twice the larger is 160. Find the integers. 96. ## maths Find the number of positive integers 97. ## algebra Twice the greater of two consecutive odd integers is 13 less than three times the lesser. Find the integers. 98. ## Algebra If the first and third of three consecutive even integers are added, the result is 12 less than three times the second integer. Find the integers. 99. ## Maths Find the number of positive integers 100. ## math Twice the greater of two consecutive odd integers is 13 less than three times the lesser. Find the integers.
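The headline question in this listing (for which K does 36x^2 + 18x + K, or 3x^2 - 16x + K, factor over the integers?) reduces to a discriminant test: an integer quadratic ax^2 + bx + K factors into integer linear factors exactly when b^2 - 4aK is a non-negative perfect square. A small sketch that enumerates candidate K values (the search ranges below are arbitrary choices for illustration):

```python
import math

def factorable_ks(a, b, k_range):
    """Return the K values for which a*x^2 + b*x + K factors over the
    integers, i.e. the discriminant b^2 - 4*a*K is a non-negative
    perfect square (so the roots, and hence the factors, are rational)."""
    ks = []
    for k in k_range:
        d = b * b - 4 * a * k
        if d < 0:
            continue
        s = math.isqrt(d)       # integer square root
        if s * s == d:          # d is a perfect square
            ks.append(k)
    return ks

print(factorable_ks(36, 18, range(1, 10)))   # 36x^2 + 18x + K  -> [2]
print(factorable_ks(3, -16, range(1, 25)))   # 3x^2 - 16x + K   -> [5, 13, 16, 20, 21]
```

For example, K = 2 gives 36x^2 + 18x + 2 = 2(6x + 1)(3x + 1), and K = 5 gives 3x^2 - 16x + 5 = (3x - 1)(x - 5).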
https://www.coursehero.com/file/5766842/1Bch24/
# 1Bch24 - Chapter 24

There are two main types of capacitor configurations: series and parallel. For each of these, here's how the charges and voltages across each capacitor are related, and how to compute the equivalent capacitance.

Capacitors in Series: Consider capacitors $C_1, C_2, \ldots, C_N$ arranged in series, i.e., strung together. Then each capacitor stores the same charge:

$$Q_1 = Q_2 = Q_3 = \cdots = Q_N = Q$$

The total potential $V$ across the combination is the sum of the voltage $V_j$ across each capacitor $C_j$:

$$V = \sum_{j=1}^{N} V_j = \sum_{j=1}^{N} \frac{Q_j}{C_j} = Q \sum_{j=1}^{N} \frac{1}{C_j} = \frac{Q}{C_{\text{eff}}}$$

Therefore, the effective capacitance of the combination is obtained by adding the inverses of the individual capacitances:

$$\frac{1}{C_{\text{eff}}} = \sum_{j=1}^{N} \frac{1}{C_j}$$

Capacitors in Parallel: Consider capacitors $C_1, C_2, \ldots, C_N$ arranged in parallel. Then the voltage across each capacitor is the same:

$$V_1 = V_2 = V_3 = \cdots = V_N = V$$

The total charge $Q$ stored in the combination is the sum of the charge $Q_j$ stored in each capacitor $C_j$:

$$Q = \sum_{j=1}^{N} Q_j = \sum_{j=1}^{N} C_j V_j = V \sum_{j=1}^{N} C_j = V C_{\text{eff}}$$

Therefore, the effective capacitance of the combination is obtained by adding the individual capacitances:

$$C_{\text{eff}} = \sum_{j=1}^{N} C_j$$

Problem 1: Capacitors $C_1$, $C_2$, $C_3$, $C_4$, and $C_5$ are arranged as shown in the diagram below. A voltage $V_{ab}$ is applied between terminals a and b.

1: Find the equivalent capacitance of the system.
2: Find the charge and voltage across each capacitor.
[Circuit diagram: capacitors $C_1$ through $C_5$ between terminals a and b]

Solution: In order to compute the equivalent capacitance, we successively identify series and parallel combinations, simplifying the circuit as we go along. We first note that capacitors $C_3$ and $C_4$ are arranged in series, so their equivalent capacitance $C_{34}$ is given by

$$\frac{1}{C_{34}} = \frac{1}{C_3} + \frac{1}{C_4} \quad\Rightarrow\quad C_{34} = \frac{C_3 C_4}{C_3 + C_4}$$

We now note that the equivalent capacitor $C_{34}$ is in parallel with capacitor $C_2$, allowing us to find the equivalent capacitance $C_{234}$ of capacitors $C_2$, $C_3$, and $C_4$:

$$C_{234} = C_2 + C_{34} = C_2 + \frac{C_3 C_4}{C_3 + C_4} = \frac{C_2 (C_3 + C_4) + C_3 C_4}{C_3 + C_4}$$

Finally, we note that capacitors $C_1$, $C_{234}$, and $C_5$ are arranged in series. This allows us to find the equivalent capacitance $C_{\text{eq}}$ of the system:

$$\frac{1}{C_{\text{eq}}} = \frac{1}{C_1} + \frac{1}{C_{234}} + \frac{1}{C_5}$$

$$C_{\text{eq}} = \left[ \frac{1}{C_1} + \frac{1}{C_{234}} + \frac{1}{C_5} \right]^{-1} = \left[ \frac{1}{C_1} + \frac{C_3 + C_4}{C_2 (C_3 + C_4) + C_3 C_4} + \frac{1}{C_5} \right]^{-1}$$

We now consider part 2. The overall charge stored in the system is

$$Q = C_{\text{eq}} V_{ab}$$

This is the charge stored in $C_1$, $C_{234}$, and $C_5$, since these capacitors are in series. That is,

$$Q_1 = Q_5 = Q_{234} = Q = C_{\text{eq}} V_{ab}$$

We can now compute the voltages across capacitors $C_1$, $C_5$, and $C_{234}$:

$$V_1 = \frac{Q_1}{C_1} = \frac{C_{\text{eq}} V_{ab}}{C_1}, \qquad V_5 = \frac{Q_5}{C_5} = \frac{C_{\text{eq}} V_{ab}}{C_5}, \qquad V_{234} = \frac{Q_{234}}{C_{234}} = \frac{C_{\text{eq}} V_{ab}}{C_{234}}$$

$V_{234}$, however, is the voltage across capacitors $C_2$ and $C_{34}$, since these capacitors are arranged in parallel. That is,

$$V_2 = V_{34} = V_{234} = \frac{C_{\text{eq}} V_{ab}}{C_{234}}$$

We can now compute the charges across capacitors $C_2$ and $C_{34}$:

$$Q_2 = C_2 V_2 = \frac{C_2 C_{\text{eq}} V_{ab}}{C_{234}}$$
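The series/parallel reduction above is easy to check numerically. A small sketch (the capacitor values and V_ab below are made-up examples; the chapter leaves them symbolic):

```python
def series(*caps):
    """Equivalent capacitance of capacitors in series: 1/Ceq = sum(1/Ci)."""
    return 1.0 / sum(1.0 / c for c in caps)

def parallel(*caps):
    """Equivalent capacitance of capacitors in parallel: Ceq = sum(Ci)."""
    return sum(caps)

# Example values in farads (assumed, for illustration only)
C1, C2, C3, C4, C5 = 2e-6, 3e-6, 6e-6, 3e-6, 4e-6
V_ab = 12.0

C34 = series(C3, C4)         # C3 and C4 in series        -> 2 uF
C234 = parallel(C2, C34)     # ... in parallel with C2    -> 5 uF
C_eq = series(C1, C234, C5)  # ... in series with C1, C5

Q = C_eq * V_ab              # series elements share the same charge
V1, V234, V5 = Q / C1, Q / C234, Q / C5
assert abs((V1 + V234 + V5) - V_ab) < 1e-9  # voltages must sum to V_ab

print(f"C_eq = {C_eq * 1e6:.3f} uF, Q = {Q * 1e6:.3f} uC")  # C_eq = 1.053 uF, Q = 12.632 uC
```

The built-in assertion mirrors the physical check: the three series voltage drops must add up to the applied voltage.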
http://techinterviewsolutions.net/author/admin/
## Invert Binary tree
Google: 90% of our engineers use the software you wrote (Homebrew), but you can't invert a binary tree on a…

## Given a positive integer which fits in a 32 bit signed integer, find if it can be expressed as A^P where P > 1 and A > 0. A and P both should be integers.
Problem: Given a positive integer which fits in a 32 bit signed integer, find if it can be expressed as…

## Given an even number ( greater than 2 ), return two prime numbers whose sum will be equal to given number.
Problem: Given an even number ( greater than 2 ), return two prime numbers whose sum will be equal to…

## Given a linked list, return the node where the cycle begins. If there is no cycle, return null.
Given a linked list, return the node where the cycle begins. If there is no cycle, return null. Try solving…

## Write a function that takes an unsigned integer and returns the number of '1' bits it has (also known as the Hamming weight).
Problem: Write a function that takes an unsigned integer and returns the number of '1' bits it has (also known…

## Given an array of integers, every element appears twice except for one. Find that single one.
Problem: Given an array of integers, every element appears twice except for one. Find that single one. Note: Your algorithm…

## The 3n + 1 problem
Consider the following algorithm to generate a sequence of numbers. Start with an integer n. If n is even, divide…

## SQL Customer segment that generates maximum profit
Select MAX (Profit), Customer Segment From 'TABLE 1' 2. Most expensive shipping mode…
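For the "every element appears twice except for one" problem in the list above, the classic approach (only hinted at by the truncated excerpt) is to XOR-fold the array: since x ^ x == 0 and XOR is associative and commutative, paired values cancel and the lone element survives. A rough sketch:

```python
from functools import reduce
import operator

def single_number(nums):
    """Return the element that appears once when every other element
    appears exactly twice. XOR-folding cancels the duplicate pairs,
    so this runs in O(n) time with O(1) extra memory."""
    return reduce(operator.xor, nums, 0)

print(single_number([4, 1, 2, 1, 2]))  # -> 4
```

The same no-extra-memory idea generalizes poorly to "appears three times" variants, which need per-bit counting instead.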
https://www.percentcalc.net/ppm/11-ppm-to-percent
## ppm to percent conversion calculator

11 ppm to percent = 0.0011%
Formula: 11 / 10,000 = 0.0011%
Reverse formula: percent × 10,000 = ppm

## 11 ppm to percent

See how to convert 11 PPM to percent in a few steps. You can find the result below, along with related conversions near 11. 11 PPM means 11 parts per million. Find out why 11 PPM equals 0.0011%.

### Solution for 11 ppm to percent:

To convert 11 PPM to percent we use the formula below.

Formula: PPM / 10,000

Step 1: 11 / 10,000 — to convert 11 ppm to percent we just divide 11 by 10,000.

Step 2: = 0.0011% — 11 divided by 10,000 equals 0.0011.

The result: 11 ppm equals 0.0011%. In words: eleven ppm equals point zero zero one one percent.

### Similar calculations

11 ppm to percent = 0.0011%
12 ppm to percent = 0.0012%
13 ppm to percent = 0.0013%
14 ppm to percent = 0.0014%
15 ppm to percent = 0.0015%
16 ppm to percent = 0.0016%
17 ppm to percent = 0.0017%
18 ppm to percent = 0.0018%
19 ppm to percent = 0.0019%
20 ppm to percent = 0.002%
21 ppm to percent = 0.0021%
22 ppm to percent = 0.0022%
23 ppm to percent = 0.0023%
24 ppm to percent = 0.0024%
25 ppm to percent = 0.0025%
26 ppm to percent = 0.0026%
27 ppm to percent = 0.0027%
28 ppm to percent = 0.0028%
29 ppm to percent = 0.0029%
30 ppm to percent = 0.003%
31 ppm to percent = 0.0031%
32 ppm to percent = 0.0032%
33 ppm to percent = 0.0033%
34 ppm to percent = 0.0034%
35 ppm to percent = 0.0035%
36 ppm to percent = 0.0036%
37 ppm to percent = 0.0037%
38 ppm to percent = 0.0038%
39 ppm to percent = 0.0039%
40 ppm to percent = 0.004%
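The two formulas above amount to a pair of one-line conversions; a quick sketch:

```python
def ppm_to_percent(ppm):
    """Parts-per-million to percent: 1% = 10,000 ppm, so divide by 10,000."""
    return ppm / 10_000

def percent_to_ppm(percent):
    """Inverse conversion: multiply the percent value by 10,000."""
    return percent * 10_000

print(ppm_to_percent(11))      # -> 0.0011 (i.e. 0.0011%)
print(percent_to_ppm(0.0011))  # back to ~11 ppm
```

The factor of 10,000 comes from 1,000,000 (per million) divided by 100 (per cent).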
http://gamedev.stackexchange.com/questions/24126/3d-procedural-planet-generation?answertab=oldest
# 3D Procedural Planet Generation

I was looking for some inspiration for the voxel-based game I am writing and came across this: http://www.youtube.com/watch?v=rL8zDgTlXso. I would like to know how to go about doing that (or preferably see source examples), in real time, and infinitely. In addition, I was wondering how I would do this with voxel-based terrain: a procedural planet generator in 3D which constructs voxel data; my voxels are the same size as those in Minecraft. Any ideas?

Edit: I ported the simplex noise function i_grok suggested, written in C++/Python, to C#. I sure hope it works :) http://pastebin.com/TZSQwnye

Edit 2:

``````// Fractal ("octave") noise: note '^' is bitwise XOR in C#, not exponentiation,
// so each octave doubles the frequency and damps the amplitude instead.
float noise(float x, float y, float z, float persistence, float amplitude, float frequency, int octaves)
{
    float total = 0;
    for (int i = 0; i < octaves; i++)
    {
        total += SimplexNoise.raw_noise_3d(x * frequency, y * frequency, z * frequency) * amplitude;
        frequency *= 2;
        amplitude *= persistence;
    }
    return total;
}
``````

- The search phrase that rubs the magic djinni lamp is "procedural content generation." – Patrick Hughes Feb 18 '12 at 10:13
- Also keep in mind that depending on your approach (such as your coordinate-system scale and rendering pipeline), you will run into a myriad of issues with finite-precision errors in rendering and positioning objects. Generating noise for terrain is trivial compared to these issues. – KlashnikovKid Feb 18 '12 at 17:12
- -1. I can't figure out whether the question is far too broad, or far too trivial. Or maybe both. Regardless, there is no concrete question here which can be authoritatively answered.
–  Trevor Powell Feb 19 '12 at 5:10 Here are the things to search for, as the video specifically mentions what it is using for the LOD (How it zooms from space to land) and also the algorithm for the terrain generation: "The application uses quadtrees for LOD, and generates terrain using the ridged multifractal algorithm on both CPU and GPU (in a GLSL shader)." –  James Feb 19 '12 at 19:40 These answers may be closer to your question: Voxel heightmap terrain editor Most of the noise functions discussed here are fine for real-time - some can generate a million values a second. I'm not aware of a C# implementation, but what you're looking for is Simplex Noise (sometimes called Improved Perlin Noise). Simplex Noise can scale to any number of dimensions, but most people seem to implement 2D, 3D and 4D. I have implemented Simplex Noise in C and Python if you wish to port from there. There is also a Java implementation of Simplex Noise. - I have already written the voxel engine, so, in my question I asked how I would I go about makeing a procedural infinite world out of voxels in 3D. –  Darestium Feb 19 '12 at 4:34 The post I linked to in my answer briefly touches on the solution. You need to select some reasonable noise functions to generate the terrain. From what I've seen so far, everyone ends up implementing their engine differently but relies on the same sets of procedural functions. –  i_grok Feb 19 '12 at 5:10 Thanks! Got any recomendations for perlin noise functions? I am using one, but it's 2D, and in addition to the 2D noise I would like to do 3D, but I cannot seem to find a tutorail on 3D perlin noise anywhere, let alone how to use it. –  Darestium Feb 19 '12 at 6:53 @Darestium mrl.nyu.edu/~perlin/noise The man's website itself goes over the differences between 2D and 3D noise (just passing in 2 or 3 values IIRC). I think you can also find a few 'globe' type rendering demos on his site if it helps. –  James Feb 19 '12 at 19:44 @i_grok , thanks! 
I managed to convert the java code into c#! But I am wondering how do I actually go about using it? With 2D it is simple just height = noise[x, z] but what does the value returned represent in 3D perlin noise? –  Darestium Feb 29 '12 at 5:46 I was interested in this a while back. There is a (now old, but still relevant) book called Texturing and modelling: a procedural approach. One of the authors is Ken Musgrave aka Dr. Mojo who created mojoworld. There is a chapter in that book about procedural planet generation that you may find helpful. - Great book - I own it! I suspect it's a bit more theoretical than what Darestium is hoping for, though. –  i_grok Feb 19 '12 at 5:04 Thanks, I'll give it a try :) I really hope I can understand it :) –  Darestium Feb 19 '12 at 6:47 @bobobobo Would you recomend me to purchase it? –  Darestium Mar 8 '12 at 9:26
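The loop in Edit 2 has two C#-specific pitfalls: `^` is bitwise XOR, not exponentiation, and the function never returns `total`. A minimal Python sketch of the octave-summation (fractal/fBm) pattern the loop is aiming for, with a hypothetical stand-in for the simplex-noise call:

```python
import math

def raw_noise_3d(x, y, z):
    # Hypothetical stand-in for a real simplex-noise call, just so the
    # octave loop below is runnable on its own (output is in [-1, 1]).
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719)

def fractal_noise(x, y, z, octaves=4, lacunarity=2.0, persistence=0.5):
    # Sum `octaves` layers of noise, multiplying the frequency by
    # `lacunarity` and the amplitude by `persistence` at each layer.
    total = 0.0
    frequency = 1.0
    amplitude = 1.0
    for _ in range(octaves):
        total += raw_noise_3d(x * frequency, y * frequency, z * frequency) * amplitude
        frequency *= lacunarity   # multiply, not `frequency ^ i` (XOR in C#)
        amplitude *= persistence
    return total
```

With persistence 0.5 over 4 octaves the result is bounded by 1 + 0.5 + 0.25 + 0.125 = 1.875 in absolute value, which is why many implementations divide by that sum to normalize.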
1,182
4,517
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.84375
3
CC-MAIN-2013-48
longest
en
0.898701
http://www.mathchimp.com/7.3.2.php
1,506,106,303,000,000,000
text/html
crawl-data/CC-MAIN-2017-39/segments/1505818689102.37/warc/CC-MAIN-20170922183303-20170922203303-00357.warc.gz
511,567,663
7,410
## 7th Grade Games - Solve real-life and mathematical problems using numerical and algebraic expressions and equations.

Solve multi-step real-life and mathematical problems posed with positive and negative rational numbers in any form (whole numbers, fractions, and decimals), using tools strategically. Apply properties of operations to calculate with numbers in any form; convert between forms as appropriate; and assess the reasonableness of answers using mental computation and estimation strategies. For example: If a woman making $25 an hour gets a 10% raise, she will make an additional 1/10 of her salary an hour, or $2.50, for a new salary of $27.50. If you want to place a towel bar 9 3/4 inches long in the center of a door that is 27 1/2 inches wide, you will need to place the bar about 9 inches from each edge; this estimate can be used as a check on the exact computation.

Use variables to represent quantities in a real-world or mathematical problem, and construct simple equations and inequalities to solve problems by reasoning about the quantities.

1. Solve word problems leading to equations of the form px + q = r and p(x + q) = r, where p, q, and r are specific rational numbers. Solve equations of these forms fluently. Compare an algebraic solution to an arithmetic solution, identifying the sequence of the operations used in each approach. For example: the perimeter of a rectangle is 54 cm and its length is 6 cm. What is its width?
2. Solve word problems leading to inequalities of the form px + q > r or px + q < r, where p, q, and r are specific rational numbers. Graph the solution set of the inequality and interpret it in the context of the problem. For example: As a salesperson, you are paid $50 per week plus $3 per sale. This week you want your pay to be at least $100. Write an inequality for the number of sales you need to make, and describe the solutions.
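Both worked examples above reduce to the same `px + q = r` boundary equation; a small Python sketch (the helper name `solve_linear` is mine) applied to the rectangle and salesperson problems:

```python
import math

def solve_linear(p, q, r):
    # Solve p*x + q = r for x (p must be nonzero).
    return (r - q) / p

# Perimeter example: P = 2*width + 2*length with P = 54 cm and length = 6 cm,
# i.e. 2*w + 12 = 54, so w = 21 cm.
width = solve_linear(2, 12, 54)

# Inequality example: pay = 3*sales + 50 >= 100; the boundary 3*n + 50 = 100
# gives n = 50/3, so at least 17 whole sales are needed.
sales_needed = math.ceil(solve_linear(3, 50, 100))
```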
428
1,946
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.6875
5
CC-MAIN-2017-39
longest
en
0.907251
http://www.mathconcentration.com/page/store-2
1,506,274,360,000,000,000
text/html
crawl-data/CC-MAIN-2017-39/segments/1505818690112.3/warc/CC-MAIN-20170924171658-20170924191658-00357.warc.gz
507,829,890
18,199
# Math Limerick

Question: Why is this a mathematical limerick?

( (12 + 144 + 20 + 3 Sqrt[4]) / 7 ) + 5*11 = 9^2 + 0

A dozen, a gross, and a score,
plus three times the square root of four,
divided by seven,
plus five times eleven,
is nine squared and not a bit more.

---Jon Saxton (math textbook author)

Presentation Suggestions: Challenge students to invent their own math limerick!

The Math Behind the Fact: It is fun to mix mathematics with poetry.

Resources: Su, Francis E., et al. "Math Limerick." Math Fun Facts.
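The limerick's arithmetic is easy to verify in a few lines of Python:

```python
from math import sqrt

# "A dozen, a gross, and a score, plus three times the square root of four,
# divided by seven, plus five times eleven" -- should be nine squared.
lhs = (12 + 144 + 20 + 3 * sqrt(4)) / 7 + 5 * 11
rhs = 9 ** 2 + 0
```

(12 + 144 + 20 + 6) / 7 = 182 / 7 = 26, and 26 + 55 = 81 = 9², exactly as the last line of the poem claims.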
321
1,293
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.1875
3
CC-MAIN-2017-39
longest
en
0.867762
https://www.topperlearning.com/doubts-solutions/foe-the-same-value-of-the-angle-of-incidence-thwe-angles-of-refraction-in-the-media-a-b-c-are-15-25-35-degrees-respectively-in-which-medium-would-it-b-2g11iiff/
1,542,774,820,000,000,000
text/html
crawl-data/CC-MAIN-2018-47/segments/1542039747024.85/warc/CC-MAIN-20181121032129-20181121054129-00265.warc.gz
1,003,001,717
50,871
# For the same value of the angle of incidence, the angles of refraction in media A, B, C are 15, 25, 35 degrees respectively. In which medium would the velocity of light be minimum? Give a reason for your answer.

Asked by vasturushi 1st February 2018, 9:52 PM

Answered by Expert

Answer: The speed of light in a medium, v, and the refractive index, μ, of that medium are related by v = c/μ, where c is the speed of light in vacuum. The refractive index is given by μ = sin(i)/sin(r). For a given angle of incidence, a smaller angle of refraction in one medium compared to another means that medium has the higher refractive index. In the given problem, medium A gives the minimum angle of refraction, 15°, hence the refractive index of medium A is highest. Since the refractive index is maximum in medium A, the speed of light is minimum in medium A.

Answered by Expert 1st February 2018, 10:40 PM
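The reasoning can be checked numerically with Snell's law; a sketch assuming the light enters each medium from air/vacuum and taking an arbitrary common angle of incidence:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def refractive_index(i_deg, r_deg):
    # Snell's law with the first medium taken as vacuum/air: n = sin(i) / sin(r)
    return math.sin(math.radians(i_deg)) / math.sin(math.radians(r_deg))

i = 45.0  # any common angle of incidence works for the comparison
speeds = {medium: C / refractive_index(i, r)
          for medium, r in [('A', 15.0), ('B', 25.0), ('C', 35.0)]}
slowest = min(speeds, key=speeds.get)
```

The smallest refraction angle (medium A) yields the largest index and therefore the smallest speed, independent of the particular incidence angle chosen.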
425
1,470
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.65625
3
CC-MAIN-2018-47
longest
en
0.921029
https://www.physicsforums.com/threads/amount-of-energy-stored-in-a-magnetic-field.305479/
1,508,551,238,000,000,000
text/html
crawl-data/CC-MAIN-2017-43/segments/1508187824537.24/warc/CC-MAIN-20171021005202-20171021025202-00738.warc.gz
991,213,181
14,684
# Amount of energy stored in a magnetic field

1. Apr 6, 2009

### rinarez7

1. An air-core solenoid with 57 turns is 4.96 cm long and has a diameter of 1.46 cm. The permeability of free space is 4π×10−7 T·m/A. How much energy is stored in its magnetic field when it carries a current of 0.634 A? Answer in units of μJ.

2. B = mu_0 (I)(N)/L
Inductance = mu_0 (N^2/l) A
U = (1/2) Inductance (I^2)

3. First I calculated Inductance = 4π×10−7 (57 turns^2 / 0.0496 m)(pi(.0146^2)) = 1.3785e-5. Then I used U = (1/2) inductance (0.634 A^2) = 2.7355 μJ. Am I on the wrong path? I thought of calculating the magnetic field as well using my first equation, = 9.09795e-4 T, but I couldn't find the correct equation/relationship to calculate the energy stored. Thanks in advance for any help!

2. Apr 6, 2009

### rl.bhat

0.0146 is the diameter, not the radius.

3. Apr 6, 2009

### rinarez7

My mistake, I did use the diameter/2 in my calculations (just translated it incorrectly), so I still had the same calculation. Is there something else I am missing?

4. Apr 6, 2009

### rl.bhat

Check the calculation of inductance.
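Following rl.bhat's hints, the standard air-core formulas L = μ₀N²A/l and U = ½LI² can be evaluated directly, using the radius (not the diameter) in the cross-sectional area:

```python
import math

mu0 = 4 * math.pi * 1e-7          # permeability of free space, T*m/A
N, length, diameter, I = 57, 4.96e-2, 1.46e-2, 0.634

area = math.pi * (diameter / 2) ** 2   # cross-section from the radius, m^2
L = mu0 * N ** 2 * area / length       # inductance of an air-core solenoid, H
U = 0.5 * L * I ** 2                   # energy stored in the magnetic field, J
```

With these numbers L comes out near 1.378e-5 H and U near 2.77 μJ, so the inductance in the original post looks right and the discrepancy is only in the final energy arithmetic.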
402
1,211
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.171875
3
CC-MAIN-2017-43
longest
en
0.886496
http://gis.stackexchange.com/questions/15650/does-google-maps-use-elevation-to-calculate-travel-distance
1,466,890,780,000,000,000
text/html
crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00138-ip-10-164-35-72.ec2.internal.warc.gz
130,381,480
16,839
Does google maps use elevation to calculate travel distance?

For example, let's say I have two parallel roads with starting point and ending point 1 mile apart. The left hand road is as flat and straight as an arrow. The right hand road is also straight... but has a series of sine-wave shaped hills and valleys. Obviously, if I measure using a surveyor's wheel, the second path will be longer, even though both roads are starting and ending a mile apart. My question: which measurement does google maps use? Does it account for difference in elevation adding to the travel distance?

- migrated from programmers.stackexchange.com Oct 12 '11 at 18:53 This question came from our site for professional programmers interested in conceptual questions about software development.

Your difference there is a difference in road length, not elevation . . . – Wyatt Barnett Oct 12 '11 at 13:44

Difference in road length due to elevational changes is what OP implies. – Chris Oct 12 '11 at 13:52

Clearly you guys do not live in SF – Ragi Yaser Burhum Oct 13 '11 at 2:25

25% grade in San Francisco datapointed.net/visualizations/maps/san-francisco/streets-slope = about a 3% error – user19129 Jun 14 '13 at 18:48

I doubt it. Since the length of a sloped road would be `sqrt(1+x^2)`-times the length of the flat one (where `x` is the slope). For low values of `x`, this is roughly `1+1/2*x^2`, which is rather low, eg. for a 10 % slope, you get an error of 0.5 %. Not considering the actual lane you drive probably has a similar error.
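The answer's estimate is easy to reproduce; a sketch of the `sqrt(1 + x^2)` length factor and its small-slope approximation:

```python
import math

def slope_length_factor(x):
    # Road-length multiplier for a constant grade x (rise over run):
    # exact sqrt(1 + x^2), which is roughly 1 + x^2/2 for small x.
    return math.sqrt(1 + x * x)

for grade in (0.05, 0.10, 0.25):
    exact = slope_length_factor(grade)
    approx = 1 + grade ** 2 / 2
    print(f'{grade:.0%} grade: exact factor {exact:.5f}, approx {approx:.5f}')
```

A 10% grade adds about 0.5% to the distance and even a 25% San Francisco grade adds only about 3%, matching the comment thread above.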
381
1,530
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3
3
CC-MAIN-2016-26
latest
en
0.909739
https://brainmass.com/statistics/regression-analysis/regression-formula-harry-potter-example-383165
1,723,784,093,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722641333615.45/warc/CC-MAIN-20240816030812-20240816060812-00191.warc.gz
110,845,690
7,210
# Simple Regression Model and Harry Potter Book

The students in a modern popular fiction class are complaining that every time Rowling writes another book, it gets longer. Their statistics teacher challenges them to show that their complaints are accurate. Using the Harry Potter data below and a 0.05 level of significance, perform a correlation/regression analysis between the book number and the number of pages. Include a scatterplot and answer the following questions:

a. What is the hypothesis?
b. What is the null hypothesis?
c. Is there a correlation between the number of the book and the number of pages in the book? What type of correlation is it?
d. What is the R² value? What does this mean?
e. If the regression formula holds true and Rowling decided to write book 9, how many pages would it contain?

JK Rowling (Harry Potter)

| Book | Pages |
|------|-------|
| 1    | 309   |
| 2    | 341   |
| 3    | 435   |
| 4    | 754   |
| 5    | 869   |
| 6    | 652   |
| 7    | 739   |

##### Solution Summary

A step-by-step method for computing the simple regression model is given in the answer.
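The fit itself is small enough to do by hand; a pure-Python sketch of the least-squares slope, R², and the book-9 extrapolation (an illustration, not the purchased solution):

```python
books = list(range(1, 8))
pages = [309, 341, 435, 754, 869, 652, 739]

n = len(books)
mean_x = sum(books) / n
mean_y = sum(pages) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(books, pages))
sxx = sum((x - mean_x) ** 2 for x in books)
syy = sum((y - mean_y) ** 2 for y in pages)

slope = sxy / sxx                    # extra pages per book (~84)
intercept = mean_y - slope * mean_x
r_squared = sxy ** 2 / (sxx * syy)   # share of variance explained

book9 = intercept + slope * 9        # extrapolated length of a book 9
```

The positive slope supports the students' complaint, though with only seven points and a dip after book 5 the R² is moderate rather than strong.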
376
1,853
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.984375
3
CC-MAIN-2024-33
latest
en
0.915639
http://www.thecodingforums.com/threads/sampling-using-iterators.642361/
1,474,848,995,000,000,000
text/html
crawl-data/CC-MAIN-2016-40/segments/1474738660467.49/warc/CC-MAIN-20160924173740-00178-ip-10-143-35-109.ec2.internal.warc.gz
740,102,844
13,283
# sampling using iterators

Discussion in 'C++' started by Leon, Oct 29, 2008.

1. ### Leon (Guest)

using
multimap<int,int>::iterator itlow = pq.lower_bound (x);
multimap<int,int>::iterator itup = pq.upper_bound (y);

I obtain lower and upper bound from the multimap, and having these two iterators I would like to sample one element with uniform distribution. Is there a way to do this using iterators? I can of course draw an integer and loop over the sequence until I meet the drawn value, but it is not a very nice solution. Can I sample directly using iterators? thanks

Leon, Oct 29, 2008

2. ### Juha Nieminen (Guest)

Leon wrote:
> using
> multimap<int,int>::iterator itlow = pq.lower_bound (x);
> multimap<int,int>::iterator itup = pq.upper_bound (y);
>
> I obtain lower and upper bound from the multimap, and having these two
> iterators I would like to sample one element with uniform distribution.
> Is there a way to do this using iterators? I can of course draw an integer and
> loop over the sequence until I meet the drawn value, but it is not a
> very nice solution. Can I sample directly using iterators?

I don't really understand what you mean by "sample". If you mean that you want (constant-time) random access to the range above, that's just not possible with multimap iterators, as they are not random access iterators.

If you *really* need that (eg. for efficiency reasons) then one solution might be to instead of using a multimap, use a regular map with a vector (or deque) as element, so that each element with the same key is put into the vector correspondent to that key. Then you can random-access the vector when you need to.

(Of course the downside of this is that inserting and removing elements is not, strictly speaking, O(lg n) anymore... But you can't have everything at once.)

Juha Nieminen, Oct 29, 2008

3.
### Leon (Guest)

Juha Nieminen wrote:
> Leon wrote:
>> using
>> multimap<int,int>::iterator itlow = pq.lower_bound (x);
>> multimap<int,int>::iterator itup = pq.upper_bound (y);
>>
>> I obtain lower and upper bound from the multimap, and having these two
>> iterators I would like to sample one element with uniform distribution.
>> Is there a way to do this using iterators? I can of course draw an integer and
>> loop over the sequence until I meet the drawn value, but it is not a
>> very nice solution. Can I sample directly using iterators?
>
> I don't really understand what do you mean by "sample". If you mean
> that you want (constant-time) random access to the range above, that's
> just not possible with multimap iterators, as they are not random access
> iterators.
>
> If you *really* need that (eg. for efficiency reasons) then one
> solution might be to instead of using a multimap, use a regular map with
> a vector (or deque) as element, so that each element with the same key
> is put into the vector correspondent to that key. Then you can
> random-access the vector when you need to.
>
> (Of course the downside of this is that inserting and removing
> elements is not, strictly speaking, O(lg n) anymore... But you can't
> have everything at once.)

Yes, since the iterator is not random for multimap I have to loop anyway. Thanks!

Leon, Oct 30, 2008

4. ### Andrew Koenig (Guest)

"Leon" <> wrote in message news:geaa1v$8fm$...

> using
> multimap<int,int>::iterator itlow = pq.lower_bound (x);
> multimap<int,int>::iterator itup = pq.upper_bound (y);
>
> I obtain lower and upper bound from the multimap, and having these two
> iterators I would like to sample one element with uniform distribution. Is there
> a way to do this using iterators?

Assume you have a function nrand that takes an integer n and returns a uniform random integer k such that 0 <= k < n.
Then I think this code will do what you want:

    assert (itlow != itup); // necessary for a result to be possible

    multimap<int,int>::iterator result;
    int n = 0;
    while (itlow != itup) {
        if (nrand(++n) == 0)
            result = itlow;
        ++itlow;
    }

Note that the first call to nrand will be nrand(++n) with n initially 0, which is effectively nrand(1). By definition, nrand(1) is always 0, so result will always be initialized the first time through the loop. Moreover, the loop will always execute at least once because of the assert. Therefore, there is no risk that result might not be initialized.

Andrew Koenig, Nov 13, 2008

5. ### James Kanze (Guest)

On Oct 30, 12:13 am, Juha Nieminen <> wrote:
> Leon wrote:
> > using
> > multimap<int,int>::iterator itlow = pq.lower_bound (x);
> > multimap<int,int>::iterator itup = pq.upper_bound (y);
> > I obtain lower and upper bound from the multimap, and having
> > these two iterators I would like to sample one element with
> > uniform distribution. Is there a way to do this using iterators?
> > I can of course draw an integer and loop over the sequence
> > until I meet the drawn value, but it is not a very nice
> > solution. Can I sample directly using iterators?
> I don't really understand what do you mean by "sample". If you mean
> that you want (constant-time) random access to the range
> above, that's just not possible with multimap iterators, as
> they are not random access iterators.
> If you *really* need that (eg. for efficiency reasons) then
> one solution might be to instead of using a multimap, use a
> regular map with a vector (or deque) as element, so that each
> element with the same key is put into the vector correspondent
> to that key. Then you can random-access the vector when you
> need to.

He seems to be looking for a range (lower_bound and upper_bound are called with different arguments), not just a single key. But using a sorted vector, with the library functions lower_bound and upper_bound, would definitely be a possible solution.
As you say, insertion would be more expensive, but a lot depends on the other use he makes of the structure, and how expensive copying or swapping the elements might be. (Using lower_bound on a sorted vector is actually significantly faster than map.lower_bound, at least with the implementations I've tested.) -- James Kanze (GABI Software) email: Conseils en informatique orientée objet/ Beratung in objektorientierter Datenverarbeitung 9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34 James Kanze, Nov 14, 2008
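Koenig's loop is the classic single-element reservoir-sampling trick: keep the n-th element with probability 1/n, which leaves every element equally likely once the pass is over. The same idea in Python, for any one-pass iterable:

```python
import random

def sample_one(iterable, rng=random):
    # One-pass uniform pick from a sequence of unknown length -- the same
    # trick as the nrand loop above. The first element is always taken
    # (randrange(1) == 0), then each later element replaces the current
    # pick with probability 1/n.
    result = None
    seen = 0
    for item in iterable:
        seen += 1
        if rng.randrange(seen) == 0:
            result = item
    if seen == 0:
        raise ValueError('cannot sample from an empty range')
    return result
```

Like the C++ version it needs only forward iteration and O(1) extra space, at the cost of one random draw per element.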
1,556
6,110
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2016-40
longest
en
0.881939
https://www.tutorialspoint.com/what-happens-when-we-try-to-add-a-number-to-undefined-value
1,685,283,739,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224643784.62/warc/CC-MAIN-20230528114832-20230528144832-00229.warc.gz
1,168,036,664
9,724
# What happens when we try to add a number to undefined value?

If you try to add a number to an undefined value, you will get NaN. NaN stands for Not a Number. Following is an example −

## Case 1

var anyVar=10+undefined;
print(anyVar) //Result will be NaN

## Case 2

var anyVar1=10;
var anyVar2;
var anyVar=anyVar1+anyVar2;
print(anyVar) //Result will be NaN

## Case 1

Let us implement the above cases. The query is as follows −

> var result=10+undefined;
> print(result);

This will produce the following output −

NaN

## Case 2

Let us implement the above case −

> var value;
> var value1=10;
> var result=value1+value
> result

This will produce the following output −

NaN
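For comparison, Python has no `undefined`: `10 + None` raises a `TypeError` instead of producing NaN. But a float NaN, once created, propagates through arithmetic the same way JavaScript's does, and it never compares equal to itself:

```python
import math

result = 10 + float('nan')

propagates = math.isnan(result)            # NaN absorbs further arithmetic
self_equal = float('nan') == float('nan')  # always False: test with isnan(),
                                           # never with ==
```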
192
691
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.015625
3
CC-MAIN-2023-23
latest
en
0.512826
disease-progression-modelling.github.io
1,716,870,704,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971059067.62/warc/CC-MAIN-20240528030822-20240528060822-00803.warc.gz
174,449,523
25,519
# Linear Mixed-effects Models

Welcome to the first practical session of the day!

## Objectives

• Get a better idea of medical data, especially longitudinal ones
• Understand mixed-effects models
• Get a taste of state-of-the-art techniques

## The set-up

If you have followed the installation details carefully, you should
• be running this notebook in the leaspy_tutorial conda environment (be sure that the kernel you are using is leaspy_tutorial => check Kernel above)
• have all the needed packages already installed

💬 Question 1 💬 Run the following command lines

```python
import os
import sys

import pandas as pd
import numpy as np
from scipy import stats

import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```

## Part I: The data

We import a functional medical imaging dataset. We have extracted, for each timepoint, the average value of the metabolic activity of the putamen. This brain region is commonly damaged by Parkinson's disease.

IMPORTANT: The values have been normalized such that a normal value is zero and a very abnormal value is one.

💬 Question 2 💬 Run the following cell and look at the head of the dataframe to better understand what the data are.

```python
from leaspy.datasets import Loader

# To complete
# –––––––––––––––– #

# –––––––––––––––– #
```

| ID     | TIME      | SPLIT | PUTAMEN  |
|--------|-----------|-------|----------|
| GS-001 | 71.354607 | train | 0.728492 |
|        | 71.554604 | train | 0.735620 |
|        | 72.054604 | train | 0.757409 |
|        | 73.054604 | train | 0.800754 |
|        | 73.554604 | train | 0.870756 |

ℹ️ Information ℹ️ The SPLIT column already distinguishes the train and test data.

💬 Question 3 💬 Describe the target variable PUTAMEN and the explicative variable TIME.
You can plot:
• Sample size
• Mean, std
• Min & max values
• Quantiles

```python
# To complete
# –––––––––––––––– #

# –––––––––––––––– #
df.reset_index().describe().round(2).T
```

|         | count  | mean  | std   | min   | 25%   | 50%   | 75%   | max   |
|---------|--------|-------|-------|-------|-------|-------|-------|-------|
| TIME    | 1997.0 | 65.37 | 10.05 | 33.14 | 58.66 | 66.48 | 72.06 | 91.24 |
| PUTAMEN | 1997.0 | 0.71  | 0.10  | 0.35  | 0.64  | 0.71  | 0.77  | 0.96  |

💬 Question 4 💬 From these values, what can you say about the disease stage of the population?

Answer: The median and mean value is 0.71, so the average disease stage is high for these subjects.

💬 Question 5 💬 Display the data, where the Putamen (y-axis) is plotted with respect to the Time (x-axis)

```python
# To complete
# –––––––––––––––– #

# –––––––––––––––– #
sns.set_style('whitegrid')
plt.figure(figsize=(14, 6))
sns.scatterplot(data=df.xs('train', level='SPLIT').reset_index(),
                x='TIME', y='PUTAMEN', alpha=.5, s=60)
plt.title('PUTAMEN - Raw data')
plt.show()
```

⚡ Remark ⚡ At first look, the PUTAMEN values do not seem highly correlated to TIME.

## Part II: Linear Regression

As we are some pro ML players, let's make some predictions: let's try to predict the putamen value based on the time alone.

💬 Question 6 💬 Store the train and test data in df_train and df_test

```python
# To complete
# –––––––––––––––– #

# –––––––––––––––– #
pds = pd.IndexSlice
df_train = df.loc[pds[:, :, 'train']].copy()   # one possibility
df_test = df.xs('test', level='SPLIT').copy()  # another one
```

💬 Question 7 💬 Run the linear regression that is in scipy. Be careful, you have to train it only with the train set!
```python
x = # Complete with the appropriate data
y = # Complete with the appropriate data
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
# –––––––––––––––– #

# –––––––––––––––– #
x = df_train.index.get_level_values('TIME').values
y = df_train['PUTAMEN'].values
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
```

ℹ️ Information ℹ️ To run the notebook smoothly, you must comply with the following rules: we are going to try different models to predict the putamen values based on each observation. We will store the results in the dataframe df such that:

| ID     | TIME | SPLIT | PUTAMEN | Model 1 | Model 2 | … |
|--------|------|-------|---------|---------|---------|---|
| GS-001 | 74.4 | train | 0.78    | 0.93    | 0.75    | … |
| GS-003 | 75.4 | train | 0.44    | 0.84    | 0.46    | … |
| GS-018 | 51.8 | test  | 0.71    | 0.73    | 0.78    | … |
| GS-056 | 89.2 | train | 0.76    | 0.56    | 0.61    | … |

This will ease the comparison of the models.

⚡ Remark ⚡ No need to add these predictions to df_train and df_test. You should be able to easily run the notebook by keeping df_train the way it is while appending the results in df.

💬 Question 8 💬 Add the predictions done by the linear regression in the column Linear Regression

```python
df['Linear Regression'] = # Your code here
# –––––––––––––––– #

# –––––––––––––––– #
df['Linear Regression'] = intercept + slope * df.index.get_level_values('TIME')
```

ℹ️ Information ℹ️ Let's introduce an object and a function that will be used to compare the models:
• overall_results will be the dataframe that stores the root mean square error on the train and test set for the different models
• compute_rmse_train_test is the function that, given the dataframe df and a model_name (Linear Regression for instance), computes the root mean square error on the train and test set and stores it in overall_results

💬 Question 9 💬 Run the following cell to see the results

```python
overall_results = pd.DataFrame({'train': [], 'test': []})

def compute_rmse(df, model_name):
    """Compute RMSE between PUTAMEN column and the <model_name> column of df"""
    y = df['PUTAMEN']
    y_hat = df[model_name]
    diff = y - y_hat
    return np.sqrt(np.mean(diff * diff))

def compute_rmse_train_test(df, overall_results, model_name):
    """Inplace modification of <overall_results>"""
    overall_results.loc[model_name, 'train'] = compute_rmse(df.xs('train', level='SPLIT'), model_name)
    overall_results.loc[model_name, 'test'] = compute_rmse(df.xs('test', level='SPLIT'), model_name)

compute_rmse_train_test(df, overall_results, 'Linear Regression')
overall_results
```

|                   | train    | test    |
|-------------------|----------|---------|
| Linear Regression | 0.091403 | 0.10213 |

⚡ Remark ⚡ The RMSE is higher on the test set than on the train set.

Let's look at what we are doing by plotting the data and the linear regression. Throughout the notebook, we will use the function plot_individuals that, given a subset of IDs and a model name (as stored in the df dataframe), plots the individual data and their prediction.

💬 Question 10 💬 Use the following cell.

```python
def get_title(overall_results, model_name):
    """Precise model's name and its RMSE train & test"""
    rmse_train = overall_results.loc[model_name, 'train']
    rmse_test = overall_results.loc[model_name, 'test']
    title = f'PUTAMEN - Raw data vs {model_name:s}\n'
    title += r'$RMSE_{train}$ = %.3f $RMSE_{test}$ = %.3f' % (rmse_train, rmse_test)
    return title

def plot_individuals(df, overall_results, model_name, **kwargs):
    # ---- Input manager
    kind = kwargs.get('kind', 'lines')
    sublist = kwargs.get('sublist', None)
    highlight_test = kwargs.get('highlight_test', True)
    ax = kwargs.get('ax', None)
    if ax is None:
        fig, ax = plt.subplots(figsize=(14, 6))
    else:
        plt.figure(figsize=(14, 6))

    # ---- Select subjects
    if sublist is None:
        sublist = df.index.unique('ID')

    # ---- If too many subjects, do not display them in the legend
    display_id_legend = len(sublist) <= 10

    # ---- Plot
    if kind == 'scatter':
        # -- Used for Linear Regression
        sns.scatterplot(data=df.reset_index(), x='TIME', y='PUTAMEN',
                        alpha=.5, s=60, label='Observations', ax=ax)
        ax.plot(df.index.get_level_values('TIME').values,
                df[model_name].values, label=model_name, c='C1')
        ax.legend(title='LABEL')

        if highlight_test:
            test = df.xs('test', level='SPLIT').loc[sublist].reset_index()
            sns.scatterplot(data=test, x='TIME', y='PUTAMEN', legend=None, ax=ax)

    elif kind == 'lines':
        # -- Used for the other models
        # - Stack observations & reconstructions by the model
        df_stacked = df[['PUTAMEN', model_name]].copy()
        df_stacked.rename(columns={'PUTAMEN': 'Observations'}, inplace=True)
        df_stacked = df_stacked.stack().reset_index().set_index(['ID', 'SPLIT'])
        df_stacked.columns = ['TIME', 'LABEL', 'PUTAMEN']

        # - Plot
        sns.lineplot(data=df_stacked.loc[sublist], x='TIME', y='PUTAMEN',
                     hue='ID', style='LABEL', legend=display_id_legend, ax=ax)

        if highlight_test:
            test = df.xs('test', level='SPLIT').loc[sublist].reset_index()
            sns.scatterplot(data=test, x='TIME', y='PUTAMEN', hue='ID',
                            legend=None, ax=ax)

        if display_id_legend:
            ax.legend(title='LABEL', bbox_to_anchor=(1.05, 1), loc='upper left')
    else:
        raise ValueError('<kind> input accepts only "scatter" and "lines".'
                         f' You gave {kind}')

    ax.set_title(get_title(overall_results, model_name))
    return ax

plot_individuals(df, overall_results, 'Linear Regression',
                 kind='scatter', highlight_test=False)
plt.show()
```

Is the previous plot relevant to assess the quality of our model? We will answer this question in the following cells:

## Part III: The longitudinal aspect

💬 Question 11 💬 Run the cell to have a better understanding of your data:

```python
plot_individuals(df, overall_results, 'Linear Regression',
                 kind='lines', highlight_test=True)
plt.title('Longitudinal aspect')
plt.show()
```

The test data are highlighted with dots.

💬 Question 12 💬 What actually are the test data?

💬 Question 13 💬 Why does the global linear model not describe the temporal evolution of the variable?

## PART IV: Individual Linear Regressions

ℹ️ Information ℹ️ In fact, this is not the best idea to have one general linear regression, because we do not benefit from individual information.
Therefore, let’s do one linear regression per individual πŸ’¬ Question 14 πŸ’¬ Look at what this function is doing and at the result individual_parameters = pd.DataFrame({'INTERCEPT': [], 'SLOPE': []}) subject_idx = 'GS-194' def compute_individual_parameters(df, subject_idx): df_patient = df.loc[subject_idx] x = df_patient.index.get_level_values('TIME').values y = df_patient['PUTAMEN'].values # -- Linear regression slope, intercept, _, _, _ = stats.linregress(x, y) return intercept, slope individual_parameters.loc[subject_idx] = compute_individual_parameters(df_train, subject_idx) individual_parameters INTERCEPT SLOPE GS-194 0.025188 -0.807941 πŸ’¬ Question 15 πŸ’¬ Apply the function to everyone # Your answer # –––––––––––––––– # # –––––––––––––––– # for subject_idx in df_train.index.unique('ID'): slope, intercept = compute_individual_parameters(df_train, subject_idx) individual_parameters.loc[subject_idx] = (intercept, slope) INTERCEPT SLOPE GS-194 -0.807941 0.025188 GS-001 -2.946305 0.051464 GS-002 -0.252949 0.018643 GS-003 0.517972 0.003816 GS-004 -1.076280 0.025680 πŸ’¬ Question 16 πŸ’¬ Now append the result of the model in df using the function below. ⚑ Remark ⚑ Take a close look at what we are doing cause we will use the same syntax if other questions. def compute_individual_reconstruction(x, parameters): subject_idx = x.name[0] slope = parameters.loc[subject_idx]['SLOPE'] intercept = parameters.loc[subject_idx]['INTERCEPT'] time = x.name[1] return intercept + slope * time df['Individual Linear Regression'] = df.apply( lambda x: compute_individual_reconstruction(x, individual_parameters), axis=1) πŸ’¬ Question 17 πŸ’¬ Use the compute_train_test_mean_absolute_error function to get the train and test errors and compare the two models. 
# Your code here # –––––––––––––––– # # –––––––––––––––– # compute_rmse_train_test(df, overall_results, 'Individual Linear Regression') overall_results train test Linear Regression 0.091403 0.102130 Individual Linear Regression 0.017825 0.032197 We clearly see that the RMSE is much better! πŸ’¬ Question 18 πŸ’¬ Create a list with the five patients having the more visits and the five patients having the less visits, then use plot_individuals function to display their observations and reconstructions by the model Hint : use the keyword sublist # sublist = # TODO # plot_individuals(df, overall_results, model_name, sublist=sublist) # –––––––––––––––– # # –––––––––––––––– # visits_per_subjects = df.groupby(df.index.get_level_values('ID')).count().sort_values('PUTAMEN') sublist = visits_per_subjects.tail(5).index.tolist() plot_individuals(df, overall_results, 'Individual Linear Regression', sublist=sublist) plt.show() πŸ’¬ Question 19 πŸ’¬ Explain why $$RMSE_{test} >> RMSE_{train}$$: Answer: the LM overfit for patients with only few data ## Part V : Linear Mixed effects Model with statsmodelsΒΆ With the previous method, we made a significant improvement. However, we suffer fro an overfitting problem. Let’s see what a mixed effect model can do for us! ### Run a LMM with statsmodelsΒΆ ℹ️ Information ℹ️ We will use the statsmodel package to run a Linear Mixed Effect Model (LMM or LMEM in the literature). πŸ’¬ Question 20 πŸ’¬ Load the following lines to import the packages import statsmodels.api as sm import statsmodels.formula.api as smf from statsmodels.regression.mixed_linear_model import MixedLMParams Statsmodels contains several API to create a model. For the ones familiar with R, you will be here in a familiar ground with the formula API. 
• formula='PUTAMEN ~ TIME + 1' means that you want to explain PUTAMEN with TIME and an intercept • groups="ID" means that you want random effect for all levels of ID • re_formula="~TIME + 1" means that you want a random intercept and a random slope for TIME If you go back to the equation you get : $$PUTAMEN_{id,time} = \underbrace{\alpha*TIME_{id,time} + \beta}_\text{formula} + \underbrace{\alpha_{id}*TIME_{id,time} + \beta_{id}}_\text{re_formula}$$ πŸ’¬ Question 21 πŸ’¬ Let’s try a very naive run: lmm = smf.mixedlm(formula='PUTAMEN ~ 1 + TIME', data=df_train.reset_index(), groups="ID", re_formula="~ 1 + TIME").fit() lmm.summary() /home/juliette.ortholand/miniconda3/envs/leaspype/lib/python3.8/site-packages/statsmodels/base/model.py:566: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals warnings.warn("Maximum Likelihood optimization failed to " /home/juliette.ortholand/miniconda3/envs/leaspype/lib/python3.8/site-packages/statsmodels/regression/mixed_linear_model.py:2200: ConvergenceWarning: Retrying MixedLM optimization with lbfgs warnings.warn( /home/juliette.ortholand/miniconda3/envs/leaspype/lib/python3.8/site-packages/statsmodels/regression/mixed_linear_model.py:2237: ConvergenceWarning: The MLE may be on the boundary of the parameter space. warnings.warn(msg, ConvergenceWarning) Model: MixedLM Dependent Variable: PUTAMEN No. Observations: 1415 Method: REML No. Groups: 200 Scale: 0.0007 Min. group size: 2 Log-Likelihood: 2497.0458 Max. group size: 13 Converged: Yes Mean group size: 7.1 Coef. Std.Err. z P>|z| [0.025 0.975] -0.685 0.044 -15.644 0.000 -0.771 -0.599 0.022 0.001 29.528 0.000 0.020 0.023 0.003 0.259 0.000 0.005 0.000 0.000 ⚑ Remark ⚑ Let’s skip the different warning for now and see what happens if we ignore it Let’s try and see. 
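Before reading the fitted parameters out of `lmm` in the next questions, it may help to see on a toy example how fixed and random effects combine into one line per subject. This is only a sketch: the numbers and subject IDs below are made up for illustration, not taken from the fitted model.

```python
import pandas as pd

# Made-up fixed effects and per-subject random effects (illustration only)
fe = {'Intercept': -0.685, 'TIME': 0.022}
random_effects = {
    'GS-001': {'ID': -0.019, 'TIME': -0.001},  # random intercept / random slope
    'GS-002': {'ID': 0.058, 'TIME': 0.004},
}

# Each subject's line is (fixed effect + its own random deviation)
params = pd.DataFrame.from_dict(random_effects, orient='index')
params['INTERCEPT'] = fe['Intercept'] + params['ID']
params['SLOPE'] = fe['TIME'] + params['TIME']

# Prediction for the hypothetical subject GS-002 at age 75
y_hat = params.loc['GS-002', 'INTERCEPT'] + params.loc['GS-002', 'SLOPE'] * 75
```

This is exactly the arithmetic done on the real `lmm.fe_params` and `lmm.random_effects` in the questions that follow.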
💬 Question 22 💬 Run the following commands to get the intercept and slope

```python
print(lmm.fe_params.loc['Intercept'])
print(lmm.fe_params.loc['TIME'])
```
```
-0.6848661069361888
0.021679337332944623
```

💬 Question 23 💬 Run the following commands to get the variations around the mean slope and intercept. Example on a few subjects:

```python
{key: val for key, val in lmm.random_effects.items()
 if key in ['GS-00'+str(i) for i in range(1, 4)]}
```
```
{'GS-001': ID     -0.019149
 TIME   -0.001092
 dtype: float64,
 'GS-002': ID      0.058495
 TIME    0.004182
 dtype: float64,
 'GS-003': ID     -0.011974
 TIME   -0.001002
 dtype: float64}
```

💬 Question 24 💬 From the fixed and random effects, compute for each subject its INTERCEPT and SLOPE:

```python
# df_random_effects['INTERCEPT'] =  # TODO
# df_random_effects['SLOPE'] =  # TODO
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
df_random_effects = pd.DataFrame.from_dict(lmm.random_effects, orient='index')
df_random_effects = df_random_effects.rename({'ID': 'Random intercept', 'TIME': 'Random slope'}, axis=1)
df_random_effects['INTERCEPT'] = df_random_effects['Random intercept'] + lmm.fe_params.loc['Intercept']
df_random_effects['SLOPE'] = df_random_effects['Random slope'] + lmm.fe_params.loc['TIME']
```

| | Random intercept | Random slope | INTERCEPT | SLOPE |
|---|---|---|---|---|
| GS-001 | -0.019149 | -0.001092 | -0.704015 | 0.020588 |
| GS-002 | 0.058495 | 0.004182 | -0.626371 | 0.025861 |
| GS-003 | -0.011974 | -0.001002 | -0.696840 | 0.020677 |
| GS-004 | -0.020968 | -0.001529 | -0.705834 | 0.020150 |
| GS-005 | 0.006285 | 0.000363 | -0.678581 | 0.022043 |

💬 Question 25 💬 Use the `compute_individual_reconstruction` function, but with `df_random_effects`, to compute the predictions with the new individual effects

```python
df['Linear Mixed Effect Model'] = # Your code here
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
df['Linear Mixed Effect Model'] = df.apply(
    lambda x: compute_individual_reconstruction(x, df_random_effects), axis=1)
```

💬 Question 26 💬 Store the results in `overall_results` (thanks to the `compute_rmse_train_test` function) and compare the models

```python
# Your code here
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
compute_rmse_train_test(df, overall_results, 'Linear Mixed Effect Model')
overall_results
```

| | train | test |
|---|---|---|
| Linear Regression | 0.091403 | 0.102130 |
| Individual Linear Regression | 0.017825 | 0.032197 |
| Linear Mixed Effect Model | 0.024533 | 0.039367 |

⚡ Remark ⚡ The result is worse than with the previous model.

💬 Question 27 💬 What do you think happened? Let's check it visually

```python
# Your code here
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
plot_individuals(df, overall_results, 'Linear Mixed Effect Model', sublist=sublist)
plt.show()
```

⚡ Remark ⚡ All the slopes are the same. This is related to the warnings above: these warnings are a way of alerting you that you may be in a non-standard situation. Most likely, one of your variance parameters is converging to zero, which is the case here, as you can see from the TIME variance.

💬 Question 28 💬 Let's rerun it after normalizing the time first. Add a normalized time column for `df_train` and `df_test`. Be careful to normalize only with the known ages from the train set.

Hint: watch out for data leakage!

```python
# Your code

# ---- Split again train & test
df_train = df.xs('train', level='SPLIT')
df_test = df.xs('test', level='SPLIT')
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
# ---- We use only train data to compute mean & std
ages = df_train.index.get_level_values(1).values
age_mean = ages.mean()
age_std = ages.std()
df['TIME_NORMALIZED'] = (df.index.get_level_values('TIME') - age_mean) / age_std

# ---- Split again train & test
df_train = df.xs('train', level='SPLIT')
df_test = df.xs('test', level='SPLIT')
```

💬 Question 29 💬 Rerun the previous mixed LM (some cells above), but with TIME_NORMALIZED instead of TIME in `formula` and `re_formula`.

```python
# YOUR CODE
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
lmm = smf.mixedlm(formula='PUTAMEN ~ TIME_NORMALIZED + 1', data=df_train.reset_index(),
                  groups="ID", re_formula="~TIME_NORMALIZED + 1").fit()
lmm.summary()
```
```
Model:             MixedLM   Dependent Variable:  PUTAMEN
No. Observations:  1415      Method:              REML
No. Groups:        200       Scale:               0.0005
Min. group size:   2         Log-Likelihood:      2629.4084
Max. group size:   13        Converged:           Yes
Mean group size:   7.1

                              Coef.  Std.Err.    z     P>|z|  [0.025  0.975]
Intercept                     0.679     0.016  43.031  0.000   0.648   0.710
TIME_NORMALIZED               0.231     0.011  20.134  0.000   0.209   0.254
Group Var                     0.045     0.288
Group x TIME_NORMALIZED Cov  -0.006     0.111
TIME_NORMALIZED Var           0.017     0.126
```

Ahaaaaah! No warnings!

💬 Question 30 💬 Get the parameters as previously in `df_random_effects_2` and store INTERCEPT_NORMALIZED and SLOPE_NORMALIZED

```python
# TODO
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
df_random_effects_2 = pd.DataFrame.from_dict(lmm.random_effects, orient='index')
df_random_effects_2 = df_random_effects_2.rename({'ID': 'Random intercept',
                                                  'TIME_NORMALIZED': 'Random slope'}, axis=1)
df_random_effects_2['INTERCEPT_NORMALIZED'] = df_random_effects_2['Random intercept'] + \
                                              lmm.fe_params.loc['Intercept']
df_random_effects_2['SLOPE_NORMALIZED'] = df_random_effects_2['Random slope'] + \
                                          lmm.fe_params.loc['TIME_NORMALIZED']
```

| | Random intercept | Random slope | INTERCEPT_NORMALIZED | SLOPE_NORMALIZED |
|---|---|---|---|---|
| GS-001 | -0.222614 | 0.185999 | 0.456428 | 0.417479 |
| GS-002 | 0.233375 | -0.075922 | 0.912417 | 0.155558 |
| GS-003 | 0.009845 | -0.091712 | 0.688887 | 0.139767 |
| GS-004 | -0.093862 | 0.017902 | 0.585179 | 0.249382 |
| GS-005 | 0.055686 | -0.050411 | 0.734728 | 0.181068 |

Here, we computed

$$y = SLOPE_{normalized} \cdot TIME_{normalized} + INTERCEPT_{normalized}$$

which corresponds to

$$y = SLOPE_{normalized} \cdot \frac{TIME - \mu_{ages}}{\sigma_{ages}} + INTERCEPT_{normalized}$$

i.e. $y = SLOPE \cdot TIME + INTERCEPT$ where

$$SLOPE = \frac{SLOPE_{normalized}}{\sigma_{ages}} \quad \text{and} \quad INTERCEPT = INTERCEPT_{normalized} - \frac{SLOPE_{normalized} \cdot \mu_{ages}}{\sigma_{ages}}$$

💬 Question 31 💬 From INTERCEPT_NORMALIZED & SLOPE_NORMALIZED, compute for each subject its INTERCEPT and SLOPE:

```python
# parameters['GS-001']
# >>> {'INTERCEPT': ..., 'SLOPE': ...}

# YOUR CODE
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
df_random_effects_2['SLOPE'] = df_random_effects_2['SLOPE_NORMALIZED'] / age_std
df_random_effects_2['INTERCEPT'] = df_random_effects_2['INTERCEPT_NORMALIZED'] - \
    (df_random_effects_2['SLOPE_NORMALIZED'] * age_mean) / age_std
```

💬 Question 32 💬 Use the `compute_individual_reconstruction` function, but with `df_random_effects_2`, to compute the predictions with the new individual effects

```python
# Your code
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
df['Linear Mixed Effect Model - V2'] = df.apply(
    lambda x: compute_individual_reconstruction(x, df_random_effects_2), axis=1)
```

💬 Question 33 💬 Store the results in `overall_results` (thanks to the `compute_rmse_train_test` function) and compare the models

```python
# Your code
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
compute_rmse_train_test(df, overall_results, 'Linear Mixed Effect Model - V2')
overall_results
```

| | train | test |
|---|---|---|
| Linear Regression | 0.091403 | 0.102130 |
| Individual Linear Regression | 0.017825 | 0.032197 |
| Linear Mixed Effect Model | 0.024533 | 0.039367 |
| Linear Mixed Effect Model - V2 | 0.019081 | 0.028939 |

The RMSE is much better!

💬 Question 34 💬 Display the subjects of `sublist`:

```python
# Your code
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
plot_individuals(df, overall_results, 'Linear Mixed Effect Model - V2', sublist=sublist)
plt.show()
```

💬 Question 35 💬 What is the main advantage of a Linear Mixed-effects Model compared to multiple Linear Models?
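One way to see the answer numerically: with only a few noisy visits per subject, per-subject least-squares slopes are very dispersed, whereas pulling each slope toward the population slope (informally, what the mixed model's random effects do) reduces that dispersion. The toy sketch below uses simulated data and a hand-picked shrinkage weight `w`; a real LMM derives this weight from the estimated variance components instead.

```python
import numpy as np

rng = np.random.default_rng(0)
true_slope = 0.02
n_subjects, n_visits = 50, 3

# Per-subject OLS slopes on few noisy visits
indiv_slopes = []
for _ in range(n_subjects):
    t = np.sort(rng.uniform(60, 80, n_visits))
    y = 0.7 + true_slope * (t - 70) + rng.normal(0, 0.05, n_visits)
    indiv_slopes.append(np.polyfit(t, y, 1)[0])
indiv_slopes = np.array(indiv_slopes)

# Partial pooling: pull each slope toward the population mean
w = 0.3  # hand-picked weight, for illustration only
pooled = indiv_slopes.mean()
shrunk_slopes = w * indiv_slopes + (1 - w) * pooled

# The shrunk estimates are less dispersed than the raw per-subject ones
print(indiv_slopes.std(), shrunk_slopes.std())
```

This dispersion reduction is exactly why the mixed model behaves better than independent per-subject regressions on subjects with few visits.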
💬 Question 36 💬 Let's check the average RMSE per subject depending on the number of timepoints per subject:

```python
models = ['Individual Linear Regression', 'Linear Mixed Effect Model - V2']

def plot_rmse_by_number_of_visit(models, df=df):
    rmse = df[models].copy()
    rmse.columns.names = ['MODEL']
    rmse = rmse.stack()
    rmse = (rmse - df['PUTAMEN']) ** 2
    nbr_of_visits = rmse.xs('train', level='SPLIT').groupby(['MODEL', 'ID']).size()
    rmse = rmse.xs('test', level='SPLIT')
    rmse = rmse.reset_index().rename(columns={0: 'RMSE'})
    rmse = rmse.reset_index()[['ID', 'MODEL', 'RMSE']]
    rmse = rmse.groupby(['MODEL', 'ID']).mean() ** .5
    rmse['Visits number in train'] = nbr_of_visits

    plt.figure(figsize=(12, 5))
    ax = sns.boxplot(data=rmse.reset_index(), y='RMSE', x='Visits number in train',
                     hue='MODEL', showmeans=True, whis=[5, 95], showfliers=False)
    ax = sns.stripplot(data=rmse.reset_index(), y='RMSE', x='Visits number in train',
                       hue='MODEL', dodge=True, alpha=.5, linewidth=1, ax=ax)
    plt.grid(True, axis='both')
    plt.ylim(0, None)
    plt.legend()
    return ax

plot_rmse_by_number_of_visit(models, df)
plt.show()
```

## Part VI: A taste of the future - Linear mixed-effect model with Leaspy

In the next practical sessions, you will learn to use the package developed by the Aramis team. For now, just to be able to compare performances, you will run a few Leaspy methods in advance…

💬 Question 37 💬 Run the following cell to import the Leaspy methods, format the data and fit a model:

```python
# --- Import methods
from leaspy import Leaspy, Data, AlgorithmSettings

# --- Format the data
data = Data.from_dataframe(df_train[['PUTAMEN']])

# --- Fit a model
leaspy_univariate = Leaspy('univariate_linear')
settings_fit = AlgorithmSettings('mcmc_saem', progress_bar=True, seed=0)
leaspy_univariate.fit(data, settings_fit)
```
```
==> Setting seed to 0
|##################################################|   10000/10000 iterations
The standard deviation of the noise at the end of the calibration is: 0.0215
Calibration took: 26s
```

Well, it's a bit slow, so here is a joke while you wait:

• What did the triangle say to the circle? "You're pointless."

… Okay, it was short. Here is another one:

• I had an argument with a 90° angle. It turns out it was right.

💬 Question 38 💬 Run the following two cells to make the predictions:

```python
settings_personalize = AlgorithmSettings('scipy_minimize', progress_bar=True, use_jacobian=True)
individual_parameters = leaspy_univariate.personalize(data, settings_personalize)
```
```
|##################################################|   200/200 subjects
The standard deviation of the noise at the end of the personalization is: 0.0184
Personalization scipy_minimize took: 20s
```
```python
timepoints = {idx: df.loc[idx].index.get_level_values('TIME').values
              for idx in df.index.get_level_values('ID').unique()}
estimates = leaspy_univariate.estimate(timepoints, individual_parameters)
```

💬 Question 39 💬 Add the predictions to `df`

```python
df['Leaspy linear'] = float('nan')
for idx in df.index.unique('ID'):
    df.loc[idx, 'Leaspy linear'] = estimates[idx]
```

💬 Question 40 💬 Compute and add the new RMSE

```python
compute_rmse_train_test(df, overall_results, 'Leaspy linear')
overall_results
```

| | train | test |
|---|---|---|
| Linear Regression | 0.091403 | 0.102130 |
| Individual Linear Regression | 0.017825 | 0.032197 |
| Linear Mixed Effect Model | 0.024533 | 0.039367 |
| Leaspy linear | 0.018394 | 0.027894 |

💬 Question 41 💬 Display the subjects of `sublist`:

```python
# Your code
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
plot_individuals(df, overall_results, 'Leaspy linear', sublist=sublist)
plt.show()
```

## Part VII: A taste of the future - Non-linear mixed-effect model with Leaspy

This time, you will use another model of Leaspy…

💬 Question 42 💬 Fit the model with the data.

```python
leaspy_univariate = Leaspy('univariate_logistic')
settings_fit = AlgorithmSettings('mcmc_saem', progress_bar=True, seed=0)
leaspy_univariate.fit(data, settings_fit)
```
```
==> Setting seed to 0
|##################################################|   10000/10000 iterations
The standard deviation of the noise at the end of the calibration is: 0.0214
Calibration took: 27s
```

💬 Question 43 💬 Run the following two cells to make the predictions:

```python
settings_personalize = AlgorithmSettings('scipy_minimize', progress_bar=True, use_jacobian=True)
individual_parameters = leaspy_univariate.personalize(data, settings_personalize)
```
```
|##################################################|   200/200 subjects
The standard deviation of the noise at the end of the personalization is: 0.0183
Personalization scipy_minimize took: 4s
```
```python
timepoints = {idx: df.loc[idx].index.get_level_values('TIME').values
              for idx in df.index.get_level_values('ID').unique()}
estimates = leaspy_univariate.estimate(timepoints, individual_parameters)
```

💬 Question 44 💬 Add the predictions to `df`

```python
df['Leaspy logistic'] = float('nan')
for idx in df.index.unique('ID'):
    df.loc[idx, 'Leaspy logistic'] = estimates[idx]
```

💬 Question 45 💬 Compute and add the new RMSE

```python
compute_rmse_train_test(df, overall_results, 'Leaspy logistic')
overall_results
```

| | train | test |
|---|---|---|
| Linear Regression | 0.091403 | 0.102130 |
| Individual Linear Regression | 0.017825 | 0.032197 |
| Linear Mixed Effect Model | 0.024533 | 0.039367 |
| Leaspy linear | 0.018394 | 0.027894 |
| Leaspy logistic | 0.018331 | 0.026822 |

💬 Question 46 💬 Display the subjects of `sublist`:

```python
# Your code
```
```python
# –––––––––––––––– #
# –––––––––––––––– #
plot_individuals(df, overall_results, 'Leaspy logistic', sublist=sublist)
plt.show()
```

💬 Question 47 💬 Check the average RMSE per subject depending on the number of timepoints per subject

```python
models = ['Individual Linear Regression', 'Linear Mixed Effect Model - V2',
          'Leaspy linear', 'Leaspy logistic']
plot_rmse_by_number_of_visit(models, df)
plt.show()
```

Here we clearly see that the few subjects who have fewer than 6 timepoints are better reconstructed with a mixed-effects model!

## For the fast-running Zebras: all-in-one with a more realistic split of data

Let's split the data differently, so as to mimic real-world applications. We don't want to fit a new model from scratch every time we predict the future of a new patient. To this aim, we want to calibrate (fit) a model on a large and representative dataset once, and then personalize this model to totally new individuals. We can then make forecasts about these new individuals. The previous split did not take this constraint into account, and this extra part will go through it.

• Train part: all data from some individuals (used for calibration of the model)
• Test part: new individuals
• Present data: partial data of these new individuals that will be known (used for personalization of the model)
• Future data: hidden data from these individuals (not known during personalization, used for prediction)

💬 Question 48 💬 Split the data differently so as to respect the real-world constraint

```python
# be sure that ages are increasing
df.sort_index(inplace=True)

individuals = df.index.unique('ID')
n_individuals = len(individuals)

# split on individuals
fraction = .75
individuals_train, individuals_test = individuals[:int(fraction*n_individuals)], individuals[int(fraction*n_individuals):]

s_train = df.droplevel(-1).loc[individuals_train][['PUTAMEN']]
s_test = df.droplevel(-1).loc[individuals_test][['PUTAMEN']]

# we split again test set in 2 parts: present (known) / future (to predict)
s_test_future = s_test.groupby('ID').tail(2)  # 2 last tpts
s_test_present = s_test.loc[s_test.index.difference(s_test_future.index)]

s_test.loc[s_test_future.index, 'PART'] = 'future'
s_test.loc[s_test_present.index, 'PART'] = 'present'
s_test.set_index('PART', append=True, inplace=True)

# check no intersection in individuals between train/test
assert len(s_train.index.unique('ID').intersection(s_test.index.unique('ID'))) == 0
# check no intersection of (individual, timepoints) between test present/future
assert len(s_test_present.index.intersection(s_test_future.index)) == 0

# Leaspy Data objects creation
data_train = Data.from_dataframe(s_train)
data_test = {
    'all': Data.from_dataframe(s_test.droplevel('PART')),  # all test data [present+future pooled]
    'present': Data.from_dataframe(s_test_present),
    'future': Data.from_dataframe(s_test_future)
}
```

Let's check that the distributions of ages and putamen values between train & test are similar… unlike the previous split!
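Beyond eyeballing histograms, "similar distributions" can be quantified with a two-sample Kolmogorov-Smirnov test (`scipy.stats.ks_2samp`). The sketch below runs it on synthetic ages rather than the notebook's dataframes; the variable names and distribution parameters are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ages_train = rng.normal(75, 7, 500)     # synthetic 'train' ages
ages_test_ok = rng.normal(75, 7, 500)   # same distribution as train
ages_test_bad = rng.normal(60, 7, 500)  # shifted distribution

# Large p-value: no evidence the distributions differ; tiny p-value: they differ
_, p_ok = stats.ks_2samp(ages_train, ages_test_ok)
_, p_bad = stats.ks_2samp(ages_train, ages_test_bad)
print(p_ok, p_bad)
```

The same call on `s_train` vs `s_test` ages would give a formal counterpart to the histogram comparison of the next question.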
πŸ’¬ Question 49 πŸ’¬ Check consistence of new train/test split and compare to previous split df_new_split = pd.concat({ 'Train': s_train, 'Test': s_test.droplevel(-1) }, names=['SPLIT']) for which_split, df_split in {'Previous split': df, 'New split': df_new_split}.items(): fig, axs = plt.subplots(1, 2, figsize=(14,6)) fig.suptitle(which_split, fontsize=20, fontweight='bold') for (var, title), ax in zip({'TIME': 'Ages', 'PUTAMEN': 'Putamen value'}.items(), axs): sns.histplot(data=df_split.reset_index(), x=var, hue='SPLIT', stat='density', common_norm=False, ax.set_title(f'{title} - comparison of distributions') ax.set_xlabel(title) plt.show() #sns.scatterplot(data=s_train.reset_index(), # x='TIME', y='PUTAMEN', alpha=.5, s=60) #sns.scatterplot(data=s_test.reset_index(), # x='TIME', y='PUTAMEN', alpha=.5, s=60) πŸ’¬ Question 50 πŸ’¬ Double check number of individuals and visits in each split of our dataset # Some descriptive stats on number of visits in test set (present part) pd.options.display.float_format = '{:.1f}'.format print('Visits in each split') print({'Train': len(s_train), 'All test': len(s_test), 'Test - known part': len(s_test_present), 'Test - to predict part': len(s_test_future) }) print() print('Visits per individual in each split') pd.concat({ 'Calibration': s_train.groupby('ID').size().describe(percentiles=[]), # min: 3 tpts known 'Personalization': s_test_present.groupby('ID').size().describe(percentiles=[]), # min: 1 tpt known 'Prediction': s_test_future.groupby('ID').size().describe(percentiles=[]), # min: 2 tpts to predict }).unstack(0).rename({'count':'nb of individuals'}, axis=0) Visits in each split {'Train': 1499, 'All test': 498, 'Test - known part': 398, 'Test - to predict part': 100} Visits per individual in each split Calibration Personalization Prediction nb of individuals 150.0 50.0 50.0 mean 10.0 8.0 2.0 std 2.5 2.5 0.0 min 3.0 1.0 2.0 50% 10.0 8.0 2.0 max 18.0 13.0 2.0 πŸ’¬ Question 51 πŸ’¬ Write a all-in-one personalization & 
prediction function thanks to Leaspy api def personalize_model(leaspy_model, settings_perso): """ all-in-one function that: 1. takes a calibrated leaspy model, 2. personalizes it with different splits of data 3. and estimates prediction errors compared to real data (including the hidden parts) """ ips_test = {} rmse_test = {} # personalize on train part just to have a comparison (baseline) ips_test['train'], rmse_test['train'] = leaspy_model.personalize(data_train, settings_perso, return_noise=True) print(f'RMSE on train: {100*rmse_test["train"]:.2f}%') # personalize using different test parts for test_part, data_test_part in data_test.items(): ips_test[test_part], rmse_test[test_part] = leaspy_model.personalize(data_test_part, settings_perso, return_noise=True) print(f'RMSE on test {test_part}: {100*rmse_test[test_part]:.2f}%') # reconstruct using different personalizations made s_train_ix = s_train.assign(PART='train').set_index('PART',append=True).index # with fake part added all_recons_df = pd.concat({ test_part: leaspy_model.estimate(s_test.index if test_part != 'train' else s_train_ix, ips_test_part) for test_part, ips_test_part in ips_test.items() }, names=['PERSO_ON']).reorder_levels([1,2,3,0]) true_vals_same_ix = all_recons_df[[]].join(df.droplevel(-1).iloc[:,0]) pred_errs = all_recons_df - true_vals_same_ix # return everything that could be needed return pred_errs, all_recons_df, ips_test, rmse_test πŸ’¬ Question 52 πŸ’¬ Calibrate, personalize & predict with LMM, univariate linear & univariate logistic thanks to Leaspy ## Leaspy LMM (using integrated exact personalization formula) leaspy_lmm = Leaspy('lme') settings_fit = AlgorithmSettings('lme_fit', with_random_slope_age=True, force_independent_random_effects=True, # orthogonal rand. eff. 
seed=0) # ------- Compute population parameters on train data ------- # (fixed effects, noise level and var-covar of random effects) leaspy_lmm.fit(data_train, settings_fit) # ------- Compute individual parameters (random effects) on different splits ------- pred_errs_lmm, all_recons_df_lmm, _, _ = personalize_model(leaspy_lmm, AlgorithmSettings('lme_personalize')) ==> Setting seed to 0 The standard deviation of the noise at the end of the calibration is: 0.0218 RMSE on train: 1.98% RMSE on test all: 1.91% RMSE on test present: 1.88% RMSE on test future: 1.61% ## Leaspy univariate linear leaspy_lin = Leaspy('univariate_linear') settings_fit = AlgorithmSettings('mcmc_saem', n_iter=8000, progress_bar=True, seed=0) settings_perso = AlgorithmSettings('scipy_minimize', progress_bar=True, use_jacobian=True) # ------- Compute population parameters on train data ------- leaspy_lin.fit(data_train, settings_fit) # ------- Compute individual parameters on different splits ------- pred_errs_lin, all_recons_df_lin, _, _ = personalize_model(leaspy_lin, settings_perso) ==> Setting seed to 0 |##################################################| 8000/8000 iterations The standard deviation of the noise at the end of the calibration is: 0.0216 Calibration took: 22s |##################################################| 150/150 subjects The standard deviation of the noise at the end of the personalization is: 0.0194 Personalization scipy_minimize took: 4s RMSE on train: 1.94% |##################################################| 50/50 subjects The standard deviation of the noise at the end of the personalization is: 0.0186 Personalization scipy_minimize took: 1s RMSE on test all: 1.86% |##################################################| 50/50 subjects The standard deviation of the noise at the end of the personalization is: 0.0178 Personalization scipy_minimize took: 2s RMSE on test present: 1.78% |##################################################| 50/50 subjects The standard deviation 
of the noise at the end of the personalization is: 0.0162 Personalization scipy_minimize took: 1s RMSE on test future: 1.62% ## Leaspy univariate logistic leaspy_log = Leaspy('univariate_logistic') # Calibrate leaspy_log.fit(data_train, settings_fit) # Personalize pred_errs_log, all_recons_df_log, _, _ = personalize_model(leaspy_log, settings_perso) ==> Setting seed to 0 |##################################################| 8000/8000 iterations The standard deviation of the noise at the end of the calibration is: 0.0215 Calibration took: 30s |##################################################| 150/150 subjects The standard deviation of the noise at the end of the personalization is: 0.0194 Personalization scipy_minimize took: 5s RMSE on train: 1.94% |##################################################| 50/50 subjects The standard deviation of the noise at the end of the personalization is: 0.0184 Personalization scipy_minimize took: 1s RMSE on test all: 1.84% |##################################################| 50/50 subjects The standard deviation of the noise at the end of the personalization is: 0.0177 Personalization scipy_minimize took: 2s RMSE on test present: 1.77% |##################################################| 50/50 subjects The standard deviation of the noise at the end of the personalization is: 0.0162 Personalization scipy_minimize took: 1s RMSE on test future: 1.62% πŸ’¬ Question 53 πŸ’¬ Display RMSE on all splits of data depending on models & part where models were personalized on # reconstruct using different personalizations made pd.options.display.float_format = '{:.5f}'.format all_pred_errs = pd.concat({ 'lmm': pred_errs_lmm, 'leaspy_linear': pred_errs_lin, 'leaspy_logistic': pred_errs_log, }, names=['MODEL']) print('RMSE by:\n- model\n- part of data for personalization\n- part of data for reconstruction\n') rmses_by_perso_split = (all_pred_errs**2).groupby(['MODEL','PERSO_ON','PART']).mean() ** .5 
rmses_by_perso_split.unstack('MODEL').droplevel(0, axis=1).sort_index(ascending=[True,False]).rename_axis(index={'PART':'RECONS_ON'}) RMSE by: - model - part of data for personalization - part of data for reconstruction MODEL lmm leaspy_linear leaspy_logistic PERSO_ON RECONS_ON all present 0.01897 0.01841 0.01821 future 0.01944 0.01910 0.01913 future present 0.04730 0.04808 0.04923 future 0.01609 0.01618 0.01616 present present 0.01880 0.01776 0.01775 future 0.02692 0.02666 0.02538 train train 0.01979 0.01942 0.01941 πŸ’¬ Question 54 πŸ’¬ Plot distributions of absolute errors def plot_dist_errs(pred_errs, model_name, grouping_reverse=False): plt.figure(figsize=(14,6)) plt.grid(True,axis='both',zorder=-1) plot_opts = dict( x='PART', order=['train','present','future'], hue='PERSO_ON', hue_order=['all','present','future','train'], ) x_lbl = 'Part of test set we look errors at' legend_lbl = 'Personalization on\nwhich test part?' if grouping_reverse: plot_opts = dict( hue='PART', hue_order=['train','present','future'], x='PERSO_ON', order=['all','present','future','train'], ) x_lbl, legend_lbl = legend_lbl, x_lbl # swap sns.boxplot(data=pred_errs.abs().reset_index(), y='PUTAMEN', **plot_opts, showfliers=False) plt.ylabel(f'Distribution of {model_name} absolute errors', fontsize=14) plt.xlabel(x_lbl, fontsize=14) plt.legend().set_title(legend_lbl) plot_dist_errs(pred_errs_lmm, 'LMM') plot_dist_errs(pred_errs_lmm, 'LMM', True) πŸ’¬ Question 55 πŸ’¬ Plot some individual trajectories with their associated predicted trajectories idx_list = np.random.RandomState(5).choice(s_test.index.unique('ID'), 6).tolist() def plot_recons(all_recons_df, model_name): recontruction_and_raw_data = pd.concat({'truth': s_test, 'prediction': all_recons_df.xs('present', level='PERSO_ON') }, names=['LABEL']).swaplevel(0,1).sort_index() plt.figure(figsize=(14,6)) sns.lineplot(data=recontruction_and_raw_data.loc[idx_list].reset_index(), x='TIME', y='PUTAMEN', hue='ID', style='LABEL', 
style_order=['truth','prediction']) sns.scatterplot(data=s_test_future.loc[idx_list].reset_index(), x='TIME', y='PUTAMEN', hue='ID', legend=None) plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left') title = f'PUTAMEN - Raw data vs {model_name} (test known/to predict)' plt.title(title) plt.grid(True) plt.show() plot_recons(all_recons_df_lmm, 'LMM') #plot_recons(all_recons_df_lin, 'Leaspy linear (univ.)') #plot_recons(all_recons_df_log, 'Leaspy logistic (univ.)') ## Bonus: have a look at the LMM log-likelihood landscape import statsmodels.formula.api as smf from tqdm import tqdm import site from loglikelihood_landspace_lmm import plt_ll_landscape, plt_ll_landscape_ # new model (TIME_norm) lmm_model = smf.mixedlm(formula="PUTAMEN ~ 1 + TIME_NORMALIZED", data=df.xs('train',level='SPLIT').reset_index(), groups="ID", re_formula="~ 1 + TIME_NORMALIZED") lmm_model.cov_pen = None view = dict( vmin=2200, vmid=2500, vmax=2630, levels=[2500, 2600, 2620] ) Rs = {} # placeholder for results 💬 Question 56 💬 Plot log-likelihood landscapes for LMM model, depending on variance-covariance matrix of random effects for fix_k, fix_v in tqdm([ # corr fixed ('corr', -.99), ('corr', -.9), ('corr', -.75), ('corr', -.5), ('corr', -.25), # ~best ('corr', 0), ('corr', .5), ('corr', .75), ('corr', .99), # sx fixed ('sx', 1.), ('sx', 5), ('sx', 10), # ~best ('sx', 20), ('sx', 40), # sy fixed ('sy', 1), ('sy', 6), # ~best ('sy', 10), ('sy', 20), ]): if (fix_k, fix_v) in Rs: plt_ll_landscape_(Rs[(fix_k, fix_v)], **view) else: Rs[(fix_k, fix_v)] = plt_ll_landscape(lmm_model, N=50, **{fix_k: fix_v}, **view) 0%| | 0/18 [00:00<?, ?it/s] 6%|▌ | 1/18 [01:23<23:39, 83.51s/it] 11%|█ | 2/18 [02:58<24:06, 90.39s/it] 17%|█▋ | 3/18 [04:43<24:16, 97.10s/it] 22%|██▏ | 4/18 [06:01<20:53, 89.53s/it] 28%|██▊ | 5/18 [07:27<19:04, 88.05s/it] 33%|███▎ | 6/18 [08:53<17:31, 87.61s/it] 39%|███▉ | 7/18 [10:06<15:08, 82.59s/it] 44%|████▍ | 8/18 [11:19<13:17,
79.72s/it] 50%|█████ | 9/18 [12:31<11:35, 77.33s/it]/Users/etienne.maheux/Documents/repos/disease-course-mapping-solutions/TP1_LMM/utils/loglikelihood_landspace_lmm.py:130: UserWarning: No contour levels were found within the data range. colors='black', alpha=.8, linewidths=1) 56%|█████▌ | 10/18 [13:44<10:07, 75.94s/it] 61%|██████ | 11/18 [14:58<08:46, 75.22s/it] 67%|██████▋ | 12/18 [16:11<07:27, 74.57s/it] 72%|███████▏ | 13/18 [17:24<06:10, 74.09s/it] 78%|███████▊ | 14/18 [18:38<04:56, 74.21s/it] 83%|████████▎ | 15/18 [19:56<03:45, 75.18s/it] 89%|████████▉ | 16/18 [21:14<02:32, 76.14s/it] 94%|█████████▍| 17/18 [22:27<01:15, 75.09s/it] 100%|██████████| 18/18 [23:52<00:00, 79.58s/it]
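The RMSE table in Question 53 comes from squaring the per-visit errors, averaging within each (MODEL, PERSO_ON, PART) group, and taking the square root. A minimal standalone sketch of that aggregation, with toy error values standing in for the notebook's `all_pred_errs` (the labels and numbers below are illustrative only):

```python
import pandas as pd

# Toy signed prediction errors, indexed like the notebook's `all_pred_errs`
# (MODEL / PERSO_ON / PART); the values here are made up for illustration.
errs = pd.DataFrame(
    {'PUTAMEN': [0.03, 0.04, 0.01, -0.01]},
    index=pd.MultiIndex.from_tuples(
        [('lmm', 'all', 'present'), ('lmm', 'all', 'present'),
         ('lmm', 'all', 'future'), ('lmm', 'all', 'future')],
        names=['MODEL', 'PERSO_ON', 'PART'],
    ),
)

# RMSE per group: square, group-average, square-root -- the same recipe as
# `(all_pred_errs**2).groupby([...]).mean() ** .5` in the notebook.
rmse = (errs['PUTAMEN'] ** 2).groupby(['MODEL', 'PERSO_ON', 'PART']).mean() ** 0.5
print(rmse)
```

Passing index level names to `groupby` works because pandas resolves strings against the MultiIndex levels when no matching column exists.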
14,555
44,559
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.046875
3
CC-MAIN-2024-22
longest
en
0.674822
https://books.google.no/books?id=Jtw2AAAAMAAJ&qtid=e27e78ce&lr=&hl=no&source=gbs_quotes_r&cad=6
1,568,943,929,000,000,000
text/html
crawl-data/CC-MAIN-2019-39/segments/1568514573801.14/warc/CC-MAIN-20190920005656-20190920031656-00548.warc.gz
409,578,686
6,429
Books 1–10 of 111 on "When a straight line standing on another straight line, makes the adjacent angles..."

When a straight line standing on another straight line, makes the adjacent angles equal to one another, each of the angles is called a right angle; and the straight line which stands on the other is called a perpendicular to it. 11. An obtuse angle... Geometrical and Graphical Essays: Containing a General Description of the ... - Page 3 by George Adams - 1813 - 534 pages Full view - About this book

## The Young Mathematician's Guide: Being a Plain and Easy Introduction to the ...

John Ward - 1747 - 480 pages ...is, AC, and CB, are Perpendicular to DC, as well as D, C is to either or both of them. 9. An OBTUSE ANGLE is that which is greater than a Right Angle. Such is the Angle included between the Lines AC and CB. 10. An ACUTE ANGLE is that which is lefs...

## The First Six Books: Together with the Eleventh and Twelfth

Euclid - 1781 - 520 pages ...angle; and the ftraight line which ftands on the other is called a perpendicular to it. XI. An obtufe angle is that which is greater than a right angle. XII. An acute angle is that which is lefs than a right angle. XIII. "A term or boundary is the extremity of any thing."...

## The philosophical and mathematical commentaries of Proclus ... on the first ...

...and the infifting Right Line, is called a PERPENDICULAR to that upon which it ftands. DEFINITION XI. An OBTUSE ANGLE is that which is greater than a RIGHT ANGLE. DEFINITION XII. But an ACUTE ANGLE, is that which is lefs than a RIGHT ANGLE. THESE are the triple...

## The shipwright's vade-mecum [by D. Steel].

David Steel - 1805 ...is the angle of ABC. An Obtuse Angle is that which is greater than a right angle, as ABC. An Acute Angle is that which is less than a right angle, as DBC. By an ANGLE of ELEVATION is meant the angle contained between a line of direction, and any plane...

## A New and Enlarged Military Dictionary: Or, Alphabetical Explanation of ...

Charles James - 1805 - 1006 pages ...side, and consequently the arches intercepted either way are equal to 90°, or the quarter of a circle. An Acute ANGLE, is that which is less than a right angle, or 90°. An Obtuse ANGLE, is that which is greater than a right angle. Adjacent ANGLES, are such as...

## Elements of Geometry: Containing the First Six Books of Euclid, with a ...

John Playfair - 1806 - 311 pages ...and the straight line which stands on the other is called a perpendicular to it. VIII, Book I. An obtuse angle is that which is greater than a right angle. IX. An acute angle is that which is less than a right angle. X. A figure is that which is inclosed...

## The Elements of Euclid: Viz. the First Six Books, Together with the Eleventh ...

Euclid, Robert Simson - 1806 - 518 pages ...right angle; and the straight line which stands on the other is called a perpendicular to it. XI. An obtuse angle is that which is greater than a right angle. XII. An acute angle is that which is less than a right angle. XIII. "A term or boundary is the...

## The Tutor's Guide: Being a Complete System of Arithmetic; with Various ...

Charles Vyse - 1806 - 320 pages ...placed at the Angular Point being always wrote in the Middle, as ADC (Fig. 4.) denotes the Angle. I. An Obtuse Angle is that which is greater than a right Angle, as CAB, (Fig. 3.) An acute Angle is that which is less than a right Angle, as DCB, (Fig. 4.) A Superficies...

## The Elements of Euclid; viz. the first six books, together with the eleventh ...

Euclides - 1814 ...called a perpendicular to it. XI. An obtuse angle is that which is greater than a right angle. XII. An acute angle is that which is less than a right angle. XIII. "A term or boundary is the extremity of any thing." XIV. A figure is that which is inclosed...

## A Treatise on Surveying, Containing the Theory and Practice: To which is ...

John Gummere - 1814 - 346 pages ...perpendicular to AB, Fig. 4. 10. An acute angle is that which is less than a right angle, as BDE, Fig. 4. 11. An obtuse angle is that which is greater than a right angle, as ADE, Fig. 4. 13. Parallel straight lines are such as are in the same plane, and which, being produced...
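Every edition quoted above gives the same classification: a right angle is exactly 90 degrees, an obtuse angle is greater than a right angle, and an acute angle is less. A one-function sketch of that rule (the function name is mine, not from any of the quoted texts):

```python
def classify_angle(degrees: float) -> str:
    """Classify an angle (0 < degrees < 180) per the Euclidean definitions:
    right = 90 degrees, obtuse = greater than a right angle, acute = less."""
    if degrees == 90:
        return "right"
    return "obtuse" if degrees > 90 else "acute"
```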
1,298
4,758
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.078125
3
CC-MAIN-2019-39
latest
en
0.852752
https://oeis.org/A134596
1,576,487,134,000,000,000
text/html
crawl-data/CC-MAIN-2019-51/segments/1575541318556.99/warc/CC-MAIN-20191216065654-20191216093654-00120.warc.gz
492,613,601
4,541
(Greetings from The On-Line Encyclopedia of Integer Sequences!) A134596 The largest n-digit primeval number A072857. 2, 37, 137, 1379, 13679, 123479, 1234679, 12345679, 102345679, 1123456789, 10123456789 OFFSET 1,1 COMMENTS Former definition: The least n-digit number m (i.e., m >= 10^(n-1)) which yields A076730(n) = the maximum, for m < 10^n, of A039993(m) = number of primes that can be formed using some or all digits of m. Subsequence of A072857 consisting of the largest terms of given length. - M. F. Hasler, Mar 12 2014 LINKS M. Keith, Integers containing many embedded primes FORMULA a(n) = max { m in A072857, m < 10^n }. - M. F. Hasler, Mar 12 2014 PROG (PARI) A134596(n, A=A072857)=vecmax(select(t->logint(t, 10)+1==n, A)) \\ where A072857 must comprise all n digit terms of that sequence. - M. F. Hasler, Oct 14 2019 CROSSREFS Cf. A039993, A072857, A076730, A134597. Sequence in context: A262182 A142077 A107182 * A139119 A244757 A298476 Adjacent sequences: A134593 A134594 A134595 * A134597 A134598 A134599 KEYWORD nonn,base,more AUTHOR N. J. A. Sloane, Jan 25 2008 EXTENSIONS Link fixed by Charles R Greathouse IV, Aug 13 2009 Definition reworded and values of a(6)-a(11) added by M. F. Hasler, Mar 11 2014 STATUS approved Maintained by The OEIS Foundation Inc.
Last modified December 16 04:05 EST 2019. Contains 330013 sequences. (Running on oeis4.)
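A039993(m), used in the definition above, can be computed by brute force: form every permutation of every subset of m's digits, skip candidates with a leading zero, and count the distinct primes obtained. A sketch in Python rather than the entry's PARI (the helper names are mine):

```python
from itertools import permutations

def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small embedded primes."""
    if n < 2:
        return False
    if n < 4:
        return True
    if n % 2 == 0:
        return False
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def embedded_primes(m: int) -> int:
    """A039993: number of distinct primes formed using some or all digits of m."""
    digits = str(m)
    found = set()
    for r in range(1, len(digits) + 1):
        for perm in permutations(digits, r):
            if perm[0] == '0':
                continue  # no leading zeros
            n = int(''.join(perm))
            if is_prime(n):
                found.add(n)
    return len(found)
```

For example, 137 yields the eleven primes 3, 7, 13, 17, 31, 37, 71, 73, 137, 173, and 317, which is why it appears as a(3) above.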
626
2,057
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.4375
3
CC-MAIN-2019-51
latest
en
0.755593
http://slideplayer.com/slide/4184358/
1,508,543,612,000,000,000
text/html
crawl-data/CC-MAIN-2017-43/segments/1508187824471.6/warc/CC-MAIN-20171020230225-20171021010225-00895.warc.gz
323,305,532
20,564
# Warm Up 1. Evaluate x² + 5x for x = 4 and x = –3. 36; –6

## Presentation on theme: "Warm Up 1. Evaluate x² + 5x for x = 4 and x = –3. 36; –6"— Presentation transcript:

Warm Up 1. Evaluate x² + 5x for x = 4 and x = –3. 36; –6 2. Generate ordered pairs for the function y = x² + 2 with the given domain D: {–2, –1, 0, 1, 2}: x = –2, –1, 0, 1, 2 gives y = 6, 3, 2, 3, 6. 3. Write the prime factorization of 98. 98 = 2 · 7² Learning Targets Students will be able to: Identify quadratic functions and determine whether they have a minimum or maximum and also graph a quadratic function and give its domain and range. The function y = x² is shown in the graph. Notice that the graph is not linear. This function is a quadratic function. A quadratic function is any function that can be written in the standard form y = ax² + bx + c, where a, b, and c are real numbers and a ≠ 0. The function y = x² can be written as y = 1x² + 0x + 0, where a = 1, b = 0, and c = 0. Problem: In Lesson 5-1, you identified linear functions by finding that a constant change in x corresponded to a constant change in y. The differences between y-values for a constant change in x-values are called first differences. Notice that the quadratic function y = x² does not have constant first differences. It has constant second differences. This is true for all quadratic functions. Tell whether the function is quadratic. Explain. [Table of x- and y-values with first and second differences.] The function is not quadratic. The second differences are not constant. Caution! Be sure there is a constant change in x-values before you try to find first or second differences. Tell whether the function is quadratic. Explain. y = 7x + 3 Standard Form y = ax² + bx + c, where a, b, and c are real numbers and a ≠ 0. This is not a quadratic function because the value of a is 0. Tell whether the function is quadratic. Explain. y – 10x² = 9 Standard Form y = ax² + bx + c, where a, b, and c are real numbers and a ≠ 0.
This is a quadratic function because it can be written in the form y = ax² + bx + c where a = 10, b = 0, and c = 9. Only a cannot equal 0. It is okay for the values of b and c to be 0. Tell whether the function is quadratic. Explain. x = –2, –1, 0, 1, 2; y = 4, 1, 0, 1, 4; first differences: –3, –1, +1, +3; second differences: +2, +2, +2. The function is quadratic. The second differences are constant. Tell whether the function is quadratic. Explain. y + x = 2x² Standard Form y = ax² + bx + c, where a, b, and c are real numbers and a ≠ 0. This is a quadratic function because it can be written in the form y = ax² + bx + c where a = 2, b = –1, and c = 0. The graph of a quadratic function is a curve called a parabola. To graph a quadratic function, generate enough ordered pairs to see the shape of the parabola. Then connect the points with a smooth curve. Use a table of values to graph the quadratic function. [Table of x- and y-values.] Use a table of values to graph the quadratic function. y = –4x²: x = –2, –1, 0, 1, 2 gives y = –16, –4, 0, –4, –16. Use a table of values to graph the quadratic function. y = –3x² + 1: x = –2, –1, 0, 1, 2 gives y = –11, –2, 1, –2, –11. As shown in the graphs in Examples 2A and 2B, some parabolas open upward and some open downward. Notice that the only difference between the two equations is the value of a. When a quadratic function is written in the form y = ax² + bx + c, the value of a determines the direction a parabola opens. A parabola opens upward when a > 0. A parabola opens downward when a < 0. Tell whether the graph of the quadratic function opens upward or downward. Explain. Since a > 0, the parabola opens upward. Tell whether the graph of the quadratic function opens upward or downward. Explain. y = 5x – 3x² can be rewritten as y = –3x² + 5x, so a = –3. Since a < 0, the parabola opens downward. The highest or lowest point on a parabola is the vertex. If a parabola opens upward, the vertex is the lowest point.
If a parabola opens downward, the vertex is the highest point. Identify the vertex of each parabola. Then give the minimum or maximum value of the function. A. The vertex is (–3, 2), and the minimum is 2. B. The vertex is (2, 5), and the maximum is 5. Unless a specific domain is given, you may assume that the domain of a quadratic function is all real numbers. You can find the range of a quadratic function by looking at its graph. For the graph of y = x² – 4x + 5, the range begins at the minimum value of the function, where y = 1. All the y-values of the function are greater than or equal to 1. So the range is y ≥ 1. Find the domain and range.
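The first/second-differences test described in these slides translates directly to code: a quadratic sampled at equally spaced x-values has constant, nonzero second differences, while a linear function's second differences are all zero. A sketch (the function names are mine):

```python
def differences(ys):
    """Successive differences of a list of y-values at equally spaced x."""
    return [b - a for a, b in zip(ys, ys[1:])]

def looks_quadratic(ys):
    """Apply the slides' test: quadratic iff the second differences are a
    constant that is not zero (zero would mean the function is linear)."""
    second = differences(differences(ys))
    return len(set(second)) == 1 and second[0] != 0

quad = [x**2 for x in range(-2, 3)]      # y = x^2  -> [4, 1, 0, 1, 4]
lin = [7 * x + 3 for x in range(-2, 3)]  # y = 7x+3 -> constant first differences
```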
1,375
4,793
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.78125
5
CC-MAIN-2017-43
longest
en
0.881139
https://www.karelsavry.us/peak_performance/carbohydrates.html
1,576,081,173,000,000,000
text/html
crawl-data/CC-MAIN-2019-51/segments/1575540531974.7/warc/CC-MAIN-20191211160056-20191211184056-00017.warc.gz
755,127,771
9,063
## Carbohydrates

CHO are found in grains, fruits, and vegetables and are the main source of energy in a healthy diet. CHO provide energy to the body in the form of glucose (stored as glycogen), act as building blocks for chemicals made by the body, and are used to repair tissue damage. Unfortunately, many people think CHO are unhealthy and lead to weight gain. That notion came about because many people add high-fat toppings and sauces to their starchy foods. The two types of CHO are: • Simple CHO - have one or two sugar molecules hooked together. Examples include: glucose, table sugar, sugars in fruits, honey, sugar in milk (lactose), maple syrup, and molasses. Simple sugars are added to some processed foods and provide extra kcals. • Complex CHO - have three or more simple sugars hooked together and are digested into simple sugars by the body. Examples include: whole grains, fruits, vegetables, and legumes (peas, beans). Both starch (digestible) and dietary fiber (indigestible) are forms of complex CHO. Although dietary fiber does not provide any kcals, for health reasons it is recommended that adults eat 20-35 grams of fiber a day. This is achieved by eating more fruits, vegetables, and whole grains (see page 17 and Appendix A). Energy From CHO: 1 gram of CHO supplies 4 kcal. CHO should supply 55-60% of your total daily kcals; e.g., in a 2,000 kcal diet at least 2,000 × 55 ÷ 100 = 1,100 kcals should be from CHO. To convert kcals of CHO into grams of CHO, divide the number of kcals by 4; i.e., 1,100 kcals ÷ 4 kcals per gram = 275 grams of CHO. Worksheet 2-1. Calculate Your CHO Requirements

## Proteins

Proteins are found in meat, fish, poultry, dairy foods, beans and grains. Proteins are used by the body to form muscle, hair, nails, and skin, to provide energy, to repair injuries, to carry nutrients throughout the body, and to contract muscle. Energy from Proteins: 1 gram of protein supplies 4 kcal (the same as CHO).
Proteins should supply 10-15% of your total daily kcals. Your protein needs are determined by your age, body weight, and activity level. Most people eat 100 to 200 g of proteins each day, which is more protein than is actually needed by the body. Many people eat high-protein foods because they think that proteins make them grow "bigger and stronger". Actually, these excess kcals from proteins can be converted to fat and stored. High-protein intakes also increase fluid needs and may be dehydrating if fluid needs are not met (see "Water" on page 14 and Chapter 12). Table 2-1. Determining Your Protein Factor (Grams of Proteins Per Pound of Body Weight). Calculate your daily protein requirements in Worksheet 2-2 using your protein factor from Table 2-1. Worksheet 2-2. Calculate Your Protein Requirements: Body Weight (lbs.) × Protein Factor = grams of proteins per day.

## Fats

Fats are an essential part of your diet, regardless of their bad reputation. Fats provide a major form of stored energy, insulate the body and protect the organs, carry nutrients throughout the body, satisfy hunger, and add taste to foods. However, not all fats are created equal. The three types of fats naturally present in foods are saturated, and mono- and polyunsaturated fats. A fourth type of fat, trans fat, is formed during food processing. • Saturated Fats are solid at room temperature and are found primarily in animal foods (red meats, lard, butter, poultry with skin, and whole milk dairy products); tropical oils such as palm, palm kernel and coconut are also high in saturated fat. • Monounsaturated Fats are liquid at room temperature and are found in olive oil, canola oil and peanuts. • Polyunsaturated Fats are liquid at room temperature and are found in fish, corn, wheat, nuts, seeds, and vegetable oils. Saturated, monounsaturated, and polyunsaturated fats should each be less than or equal to 10% of your total daily kcals.
Therefore, total fat intake should be less than or equal to 30% of your total daily kcal intake. Trans Fats are created when foods are manufactured. Currently, food labels do not list the trans fat content of a food, but if "hydrogenated oils" are listed under ingredients it indicates the presence of trans fats. The more processed foods you eat, the greater your trans fat intake. Trans fats may increase blood cholesterol. A high-fat diet is associated with many diseases, including heart disease, cancer, obesity, and diabetes. On average, people who eat high-fat diets have more body fat than people who eat high-CHO, low-fat diets. On the other hand, a fat-free diet is also very harmful since fat is an essential nutrient. Energy From Fat: 1 gram of fat supplies 9 kcal, more than twice the energy supplied by CHO. Fats should supply no more than 30% of your total daily kcals; e.g., in a 2,000 kcal diet no more than 2,000 × 30 ÷ 100 = 600 kcals should be from fats. To convert kcals of fat into grams of fat, divide the number of kcals by 9; i.e., 600 kcals ÷ 9 kcals per gram ≈ 67 grams of fat.
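The percent-of-kcals arithmetic used throughout this section can be wrapped in one small helper. A sketch (the 4 kcal/g and 9 kcal/g energy densities are the ones quoted in the text; the function name is mine):

```python
def macro_grams(total_kcal, percent, kcal_per_gram):
    """Grams of a macronutrient supplying `percent` of `total_kcal`,
    given its energy density (CHO/protein: 4 kcal/g, fat: 9 kcal/g)."""
    return total_kcal * percent / 100 / kcal_per_gram

cho_g = macro_grams(2000, 55, 4)  # 1,100 kcal / 4 kcal per g -> 275 g
fat_g = macro_grams(2000, 30, 9)  # 600 kcal / 9 kcal per g -> ~67 g
```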
1,196
5,053
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.59375
3
CC-MAIN-2019-51
latest
en
0.953946
https://www.jiskha.com/display.cgi?id=1333991227
1,516,363,989,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084887973.50/warc/CC-MAIN-20180119105358-20180119125358-00607.warc.gz
933,577,372
4,043
# physics posted by . A down jacket has a surface area of 0.82 m^2 and is filled with a 3.8 cm thick layer of down, which has a thermal conductivity of 6.0 × 10^-6 kcal/(s · m · C°). The coat is worn by a person whose skin temperature is maintained at 27° C. The outer surface of the coat is at -23° C. At what rate is heat conducted through the coat? • physics - 6.0 × 10^-6 kcal/(s · m · C°) ≈ 0.025 J/(s · m · C°). Fourier's Law of Conduction: ΔQ/Δt = λ · A · (ΔT/Δx) = 0.025 · 0.82 · (50/0.038) ≈ 27 J/s ## Similar Questions 1. ### Physics A skier wears a jacket filled with goose down that is 15mm thick. Another skier wears a wool sweater that is 7.0mm thick. Both have the same surface area. Assuming the temperature difference between the inner and outer surfaces of … 2. ### Physics A skier wears a jacket filled with goose down that is 15mm thick. Another skier wears a wool sweater that is 7.0mm thick. Both have the same surface area. Assuming the temperature difference between the inner and outer surfaces of … 3. ### physics Animals in cold climates often depend on two layers of insulation: a layer of body fat [of thermal conductivity 0.200 W/mK] surrounded by a layer of air trapped inside fur or down. We can model a black bear (Ursus americanus) as a … 4. ### physics Ice of mass 11.0 kg at 0.00° C is placed in an ice chest. The ice chest has 3.00 cm thick walls of thermal conductivity 1.00 × 10^-5 kcal/s · m · C° and a surface area of 1.25 m^2. (a) How much heat must be absorbed by the ice before … 5. ### physics The thermal conductivity of concrete is 0.80 W/m-C° and the thermal conductivity of wood is 0.10 W/m-C°. How thick would a solid concrete wall have to be in order to have the same rate of flow through it as an 8.0 cm thick wall made … 6. ### physics Assume the muscle is 37C and is separated from the outside air by layers of fat and skin. The layer of fat, at a particular location on the skin, is 2 mm thick and has a conductivity of 0.53 W/mK. Finally, the outermost epidermal layer … 7.
### Physics Assume the muscle is 37C and is separated from the outside air by layers of fat and skin. The layer of fat, at a particular location on the skin, is 2 mm thick and has a conductivity of 0.53 W/mK. Finally, the outermost epidermal layer … 8. ### physics Assume the muscle is 37C and is separated from the outside air by layers of fat and skin. The layer of fat, at a particular location on the skin, is 2 mm thick and has a conductivity of 0.53 W/mK. Finally, the outermost epidermal layer … 9. ### Physics 5. A skier wearing a fully-body ski suit of surface area 1.8 m2 is losing heat by convective and radiation processes off the surface of the ski suit at a rate of 95 W. Given that the suit is filled with goose down 15 mm thick with … 10. ### Physics Ice of mass 12.0 kg at 0.00° C is placed in an ice chest. The ice chest has 3.10 cm thick walls of thermal conductivity 1.00 10-5 kcal/s · m · C° and a surface area of 1.20 m2. (a) How much heat must be absorbed by the ice before … More Similar Questions
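As a numerical check of the reply above: converting the conductivity from kcal to joules (λ ≈ 0.025 J/(s·m·C°)) and applying Fourier's law with A = 0.82 m², ΔT = 27 − (−23) = 50 C°, and Δx = 0.038 m gives a conduction rate of roughly 27 W. A sketch (the helper name is mine):

```python
def conduction_rate(k, area, dT, dx):
    """Fourier's law of conduction: dQ/dt = k * A * dT / dx (SI units)."""
    return k * area * dT / dx

# Convert kcal/(s*m*C) to J/(s*m*C): 1 kcal is about 4186 J.
k_si = 6.0e-6 * 4186                                 # ~0.025 W/(m*K)
rate = conduction_rate(k_si, 0.82, 27 - (-23), 0.038)  # watts
```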
858
3,034
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.171875
3
CC-MAIN-2018-05
latest
en
0.908446
https://math.stackexchange.com/questions/3347787/getting-different-answers-for-a-definite-integral-uncertain-which-is-correct
1,580,008,740,000,000,000
text/html
crawl-data/CC-MAIN-2020-05/segments/1579251684146.65/warc/CC-MAIN-20200126013015-20200126043015-00121.warc.gz
552,586,206
33,427
Getting different answers for a definite integral, uncertain which is correct. For the definite integral $$\int _{-1}^1\:\left(\frac{27}{x^4}-3\right)dx$$ Khan Academy says that the answer is -24, but my TI-84, WolframAlpha, Symbolab, Desmos, pretty much all software says it's undefined. I'm guessing that Khan Academy is correct - the steps make sense and it's written by a human. KA says that the steps are: 1. Power rule: $$\int_{-1}^1\:\left(27x^{-4}-3\right)dx$$ $$= \left.\left(-9x^{-3}-3x\right)\right|_{-1}^1$$ 2. "Plug in limits of integration": $$[-9*1^{-3}-3*1]-[-9*(-1)^{-3}-3*(-1)] = -12-12 = -24$$ $$\int _{-1}^1\:\left(\frac{27}{x^4}-3\right)dx = -24$$ My questions are, is my assumption that KA is correct valid, and what might cause everything else to be wrong? • Let $f(x)=27x^{-4}-3$. Is $f(x)$ integrable across the entire interval of integration? – Andrew Chin Sep 7 '19 at 23:09 • Look at the graph of that function in Desmos and you'll see what Sal missed. – Matthew Daly Sep 7 '19 at 23:13 • This is why you should read your theorems carefully, especially their conditions, and if ever unsure, check if the integral makes sense at least graphically. – Simply Beautiful Art Sep 7 '19 at 23:15 • Not to mention, $27x^{-4} > 3$ for most of that interval. There's no way that can give you a negative number as an answer. – Ninad Munshi Sep 7 '19 at 23:17 • That makes sense. I suppose I was over (or under?) thinking it. Thanks! – tanner Sep 7 '19 at 23:23 The problem with the solution you quote is that the integrand increases without limit as you approach $$x=0$$. You cannot apply this method when the interval contains such a point. You should always check this before substituting limits in this way. If you look it up you will find lots of information about such "improper integrals". First, a plot of your integrand. Notice that this is a positive integrand for the entire interval of integration. If the integral exists, its value is positive.
The graph also makes clear that this is an improper integral -- the integrand is not continuous at $$x = 0$$. We must rewrite it as $$\lim_{r_1 \rightarrow 0^-} \int_{-1}^{r_1} \; \frac{27}{x^4} - 3 \,\mathrm{d}x + \lim_{\ell_2 \rightarrow 0^+} \int_{\ell_2}^{1} \; \frac{27}{x^4} - 3 \,\mathrm{d}x \text{.}$$ Applying the power rule as you did, we end up with $$\lim_{r_1 \rightarrow 0^-} \left( -\frac{9}{r_1^3} - 3r_1 - 12 \right) + \lim_{\ell_2 \rightarrow 0^+} \left( 3\ell_2 + \frac{9}{\ell_2^3} - 12 \right) \text{.}$$ Of course, neither limit exists ... What is assumed in Step 1 is the Fundamental Theorem of Calculus, which consists of two closely related statements: Theorem. (FToC) Suppose that $$f$$ is a continuous function on $$[a, b]$$. Then 1. $$f$$ always has an antiderivative on $$[a, b]$$. 2. For any antiderivative $$F$$ of $$f$$, we have $$\int_{a}^{b} f(x) \, \mathrm{d}x = F(b) - F(a).$$ Now let us return to the problem. The function $$f(x) = 27x^{-4} - 3$$ is not continuous at $$x = 0$$, and so, FToC cannot be applied directly to the integral in question. This is one issue with this step. But most of all, it is not clear whether the integral of $$f$$ even makes sense or not. Since $$f$$ is not even bounded on $$[-1, 1]$$, the integral cannot be interpreted in Riemann-integral sense, rendering it undefined in that sense. One may regard it either as improper Riemann-integral (which is the limit of Riemann integrals) or as Lebesgue integral. In such case, notice that $$\begin{gathered} \int_{-1}^{-\delta} f(x) \, \mathrm{d}x = \frac{9}{\delta^3} - 12 + 3\delta\\ \int_{\epsilon}^{1} f(x) \, \mathrm{d}x = \frac{9}{\epsilon^3} - 12 + 3\epsilon \end{gathered}$$ holds for all $$\delta, \epsilon > 0$$. (This is computed by FToC, which is now applicable since $$f$$ is continuous both on $$[\epsilon, 1]$$ and on $$[-1, -\delta]$$.)
Then letting $$\delta \to 0^+$$ and $$\epsilon \to 0^+$$ simultaneously, this diverges to $$+\infty$$, and so, $$\int_{-1}^{1} \left( \frac{27}{x^4} - 3 \right) \, \mathrm{d}x = \lim_{\delta, \epsilon \to 0^+} \bigg( \int_{-1}^{-\delta} f(x) \, \mathrm{d}x + \int_{\epsilon}^{1} f(x) \, \mathrm{d}x \bigg) = +\infty,$$ either as improper Riemann integral or as Lebesgue integral. Similar to the other answers, consider the two integrals (where $$\epsilon >0$$) $$I_1=\int_{-1}^{-\epsilon} \left( \frac{27}{x^4} - 3 \right) \,dx=-12+3 \epsilon+\frac{9}{\epsilon ^3}$$ $$I_2=\int^{1}_{\epsilon} \left( \frac{27}{x^4} - 3 \right) \,dx=-12+3 \epsilon+\frac{9}{\epsilon ^3}$$ making $$I=\int_{-1}^{1} \left( \frac{27}{x^4} - 3 \right) \,dx=\lim_{ \epsilon \to 0} (I_1+I_2)=\lim_{ \epsilon \to 0}\left(-24+6 \epsilon+\frac{18}{\epsilon ^3} \right)=+\infty$$
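The divergence can also be seen numerically: evaluate the two proper integrals for shrinking ε with the antiderivative F(x) = −9x⁻³ − 3x and watch the total blow up like 18/ε³. A sketch (the function name is mine):

```python
def tail_integral(eps):
    """Sum of the proper integrals of 27/x**4 - 3 over [-1, -eps] and
    [eps, 1], via the antiderivative F(x) = -9*x**-3 - 3*x."""
    def F(x):
        return -9 * x**-3 - 3 * x
    return (F(-eps) - F(-1)) + (F(1) - F(eps))
```

For example, ε = 0.1 already gives 17976.6, and the value grows without bound as ε shrinks, confirming that the improper integral diverges to +∞.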
1,607
4,711
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.65625
4
CC-MAIN-2020-05
latest
en
0.888652
https://wordwall.net/resource/44956/maths/bidmas
1,550,260,056,000,000,000
text/html
crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/warc/CC-MAIN-20190215183319-20190215205319-00288.warc.gz
728,076,424
13,544
7+6\times 2 - 19, 5 \times 3 +4 - 19, 9\div3+5 - 8, 7-10\div 2 - 2, 19-15\div 3 - 14, 12 +18 \div 6 - 15, (3+5)\times 2 - 16, 12 \div(7-3) - 3, 22-6\times 3 - 4, 4\times 5 -12 - 8, 40\div (12-4) - 5, (24-9)\div 3 - 5, 6+12 \div 4 -2 - 7, (3+9)\div (2+1) - 4, 6+4\div 2 + 3^2 - 17, (6+2)^2-1 - 63, 7+5\times (2+5)^2 - 252, 2^2+3\times 7 - 25, 68\div 2+7\times 3 - 55, 7^2\div 7 - 3 \times 2 - 1,
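Since Python applies the same precedence rules as BIDMAS (brackets, indices, division/multiplication, then addition/subtraction), the expression-answer pairs above can be checked mechanically. The sketch below transcribes a few of the quiz items from LaTeX into Python syntax.

```python
# A few of the quiz items above, transcribed from LaTeX: each tuple is
# (expression, expected BIDMAS answer). Python's operator precedence matches
# BIDMAS, so eval() should reproduce every expected answer.
quiz = [
    ("7 + 6*2", 19),
    ("7 - 10/2", 2),
    ("19 - 15/3", 14),
    ("(3 + 5)*2", 16),
    ("6 + 12/4 - 2", 7),
    ("6 + 4/2 + 3**2", 17),
    ("7 + 5*(2 + 5)**2", 252),
    ("7**2/7 - 3*2", 1),
]
for expr, expected in quiz:
    assert eval(expr) == expected, expr
print("all checked items follow BIDMAS")
```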
273
424
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.96875
3
CC-MAIN-2019-09
longest
en
0.386594
http://www.jiskha.com/display.cgi?id=1341681772
1,498,388,077,000,000,000
text/html
crawl-data/CC-MAIN-2017-26/segments/1498128320489.26/warc/CC-MAIN-20170625101427-20170625121427-00154.warc.gz
586,132,848
4,031
# physics

In Rutherford's scattering experiments, alpha particles (charge = +2e) were fired at a gold foil. Consider an alpha particle, very far from the gold foil, with an initial kinetic energy of 3.3 MeV heading directly for a gold atom (charge +79e). The alpha particle will come to rest when all its initial kinetic energy has been converted to electrical potential energy. Find the distance of closest approach between the alpha particle and the gold nucleus. • physics - Set M V^2/2 = Z*2*e^2/d where M is the mass of the alpha particle and Z is the atomic number of gold (79). Solve for the minimum separation, d. V is the velocity associated with the 3.3 MeV energy of the alpha particle. (Or just convert 3.3 MeV to Joules for the left side) • physics - Thank you, but in the above equation what does the e in e^2 equal?
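The tutor's recipe can be carried out numerically; a sketch follows (the constants and the final number are mine, not from the thread). It also answers the follow-up question: the e in e^2 is the elementary charge, about 1.602 x 10^-19 C, and in SI units the right-hand side needs the Coulomb constant k.

```python
# Closest approach: set the 3.3 MeV kinetic energy equal to the Coulomb
# potential energy k * (2e) * (79e) / d and solve for d. SI values below;
# in the tutor's Gaussian-style shorthand the factor k is absorbed into e^2.
k = 8.9875517873681764e9   # Coulomb constant, N m^2 / C^2
e = 1.602176634e-19        # elementary charge, C (the "e" in the follow-up question)
E = 3.3e6 * e              # 3.3 MeV converted to joules

d = k * 2 * 79 * e**2 / E  # distance of closest approach, metres
print(d)                   # roughly 6.9e-14 m (about 69 fm)
```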
208
850
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.734375
3
CC-MAIN-2017-26
latest
en
0.907526
https://numberworld.info/root-of-14377
1,611,491,497,000,000,000
text/html
crawl-data/CC-MAIN-2021-04/segments/1610703548716.53/warc/CC-MAIN-20210124111006-20210124141006-00537.warc.gz
489,044,455
2,961
# Root of 14377 #### [Root of fourteen thousand three hundred seventy-seven] square root 119.9041 cube root 24.3158 fourth root 10.9501 fifth root 6.7847 In mathematics extracting a root is declared as the determination of the unknown "x" in the equation $y=x^n$ The outcome of the extraction of the root is known as a mathematical root. In the case of "n is 2", one talks about a square root or sometimes a second root. Another possibility is n = 3; then one would call it a cube root or simply third root. Considering n being greater than 3, the root is declared as the fourth root, fifth root and so on. In maths, the square root of 14377 is represented as this: $$\sqrt[]{14377}=119.90412836929$$ Additionally it is legit to write every root down as a power: $$\sqrt[n]{x}=x^\frac{1}{n}$$ The square root of 14377 is 119.90412836929. The cube root of 14377 is 24.315848242398. The fourth root of 14377 is 10.95007435451 and the fifth root is 6.7847469513432. Look Up
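The quoted values can be reproduced with fractional exponents, using the identity $$\sqrt[n]{x}=x^\frac{1}{n}$$ stated above; a quick sketch:

```python
# Reproduce the root table above via x**(1/n). The quoted values are rounded
# to four decimal places, so we compare at that precision.
x = 14377
for n, quoted in [(2, 119.9041), (3, 24.3158), (4, 10.9501), (5, 6.7847)]:
    r = x ** (1 / n)
    assert abs(r - quoted) < 5e-4
    print(n, r)
```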
284
989
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.65625
4
CC-MAIN-2021-04
latest
en
0.910406
https://www.answers.com/Q/Cubic_feet_to_cubic_inches
1,606,646,406,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141197593.33/warc/CC-MAIN-20201129093434-20201129123434-00465.warc.gz
562,085,286
32,889
Math and Arithmetic Area Volume

# Cubic feet to cubic inches?

###### 2012-10-22 17:38:46

1 cubic foot = 1728 cubic inches

## Related Questions

1.2 (cubic feet) = 2073.6 cubic inches. FYI: Google "convert 1.2 cubic feet to cubic inches". There are no inches in a cubic foot, but there are 1,728 cubic inches. 1 cubic foot = 1,728 cubic inches; 9 cubic feet = 15,552 cubic inches. 4.5 cubic feet is 7,776 cubic inches (there are 1,728 cubic inches per cubic foot). 1 cubic foot = 1,728 cubic inches; 10 cubic feet = 17,280 cubic inches. 1 cubic foot = 1,728 cubic inches; 13 cubic feet = 22,464 cubic inches. You can't convert cubic feet to inches because cubic feet is a volume and inches is a length. But if you mean cubic inches: 16 cubic feet = 27,648 cubic inches. 1 cubic foot = 1728 cubic inches, so 1.1 cubic feet = 1.1 x 1728 cubic inches = 1900.8 cubic inches. There are about 8,640 cubic inches in five cubic feet. 345,600 cubic inches are in 200 cubic feet. 8,352 cubic inches is about 4.83 cubic feet. Multiply by 12x12x12 to convert cubic feet into cubic inches. 1 cubic ft = 1728 cubic inches (12 inches x 12 inches x 12 inches), so to convert cubic feet into cubic inches multiply the number of cubic feet by 1728. 1 cubic foot = 1,728 cubic inches, therefore 9 cubic feet = 15,552 cubic inches.
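All of the answers above reduce to one constant, 12^3 = 1728; a one-line helper (the function name is mine) captures the conversion:

```python
# 1 cubic foot = 12 in x 12 in x 12 in = 1728 cubic inches.
CUBIC_INCHES_PER_CUBIC_FOOT = 12 ** 3  # 1728

def cubic_feet_to_cubic_inches(cf):
    return cf * CUBIC_INCHES_PER_CUBIC_FOOT

print(cubic_feet_to_cubic_inches(1))    # 1728
print(cubic_feet_to_cubic_inches(4.5))  # 7776.0
print(cubic_feet_to_cubic_inches(13))   # 22464
```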
460
1,607
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.515625
4
CC-MAIN-2020-50
latest
en
0.721172
https://www.convertunits.com/from/water+column/to/femtobar
1,632,073,777,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780056892.13/warc/CC-MAIN-20210919160038-20210919190038-00517.warc.gz
744,369,456
23,442
## ››Convert water column [centimeter] to femtobar water column femtobar Did you mean to convert water column [centimeter] water column [inch] water column [millimeter] to femtobar How many water column in 1 femtobar? The answer is 1.0197162129779E-12. We assume you are converting between water column [centimeter] and femtobar. You can view more details on each measurement unit: water column or femtobar The SI derived unit for pressure is the pascal. 1 pascal is equal to 0.010197162129779 water column, or 10000000000 femtobar. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between water column and femtobars. Type in your own numbers in the form to convert the units! ## ››Quick conversion chart of water column to femtobar 1 water column to femtobar = 980665000000 femtobar 2 water column to femtobar = 1961330000000 femtobar 3 water column to femtobar = 2941995000000 femtobar 4 water column to femtobar = 3922660000000 femtobar 5 water column to femtobar = 4903325000000 femtobar 6 water column to femtobar = 5883990000000 femtobar 7 water column to femtobar = 6864655000000 femtobar 8 water column to femtobar = 7845320000000 femtobar 9 water column to femtobar = 8825985000000 femtobar 10 water column to femtobar = 9806650000000 femtobar ## ››Want other units? You can do the reverse unit conversion from femtobar to water column, or enter any two units below: ## Enter two units to convert From: To: ## ››Definition: Femtobar The SI prefix "femto" represents a factor of 10^-15, or in exponential notation, 1E-15. So 1 femtobar = 10^-15 bars. The definition of a bar is as follows: The bar is a measurement unit of pressure, equal to 1,000,000 dynes per square centimetre (baryes), or 100,000 newtons per square metre (pascals). The word bar is of Greek origin, báros meaning weight.
Its official symbol is "bar"; the earlier "b" is now deprecated, but still often seen especially as "mb" rather than the proper "mbar" for millibars. ## ››Metric conversions and more ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!
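Both units are defined through the pascal, so the quick-conversion chart above can be regenerated by a round trip through SI; a sketch (the constant names are mine):

```python
# 1 cm water column = 98.0665 Pa (the reciprocal of the 0.010197162129779
# quoted above), and 1 femtobar = 1e-15 bar = 1e-15 * 1e5 Pa = 1e-10 Pa.
PA_PER_CM_WATER = 98.0665
PA_PER_FEMTOBAR = 1e-10

def water_column_to_femtobar(cm):
    return cm * PA_PER_CM_WATER / PA_PER_FEMTOBAR

for cm in (1, 2, 10):
    print(cm, water_column_to_femtobar(cm))  # 980665000000 femtobar per unit
```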
693
2,513
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.15625
3
CC-MAIN-2021-39
latest
en
0.794164
https://www.got-it.ai/solutions/excel-chat/excel-help/how-to/p/p-value-excel
1,660,776,940,000,000,000
text/html
crawl-data/CC-MAIN-2022-33/segments/1659882573118.26/warc/CC-MAIN-20220817213446-20220818003446-00608.warc.gz
703,387,154
56,965
# Get instant live expert help on I need help with p value excel “My Excelchat expert helped me in less than 20 minutes, saving me what would have been 5 hours of work!” ## Post your problem and you’ll get expert help in seconds. Our professional experts are available now. Your privacy is guaranteed. ## Here are some problems that our users have asked and received explanations on How to find the p-value of this data in excel. Solved by F. H. in 11 mins Test the hypothesis using the P-value approach. Be sure to verify the requirements of the test. H0: p = 0.46 versus H1: p < 0.46, n = 150, x = 63, alpha = 0.01. Use technology to find the P-value. Solved by K. L. in 24 mins i want to automate a time table in excel, time table has many merged fields some, it is difficult to paste every value in database, or excel row by row. I am searcing a last character of value in excel sheet, e.g. (UCA017 P) I am searching a P excel search it , then I want to copy this value, and three values under this value in another sheet in a b c d e column and additonally i want to copy the header and rowid. How to do it by vba or excel please help Solved by S. F. in 30 mins Create your own Excel model that computes the following calculation using arithmetic operators. (P(1-P) Z^2 )^2 / E Let P = 0.25, Z = 2.33, and E = 0.05 Solved by F. J. in 25 mins want to delete a object which ius starting with LCELW to <managed object) <managedObject class="LCELW" version="WBTS16" distName="RNC-2/WBTS-2752/MRBTS-1/BTSSCW-1/LCELW-27523" operation="create"> <p name="cellRange">20000</p> <p name="defaultCarrier">2037</p> <p name="expirationTime">1</p> <p name="hspaMapping">None</p> <p name="maxCarrierPower">46.0</p> <p name="maxRxLevelDifference">7</p> <p name="rachCapacity">2</p> Solved by O. L. in 25 mins
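The hypothesis-test question above (H0: p = 0.46 versus H1: p < 0.46, n = 150, x = 63) is a standard one-proportion z-test. The sketch below is my own working, not the expert's solution from the page:

```python
from math import erf, sqrt

# Left-tailed one-proportion z-test for H0: p = 0.46 vs H1: p < 0.46.
def normal_cdf(z):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

n, x, p0, alpha = 150, 63, 0.46, 0.01
p_hat = x / n                               # 0.42
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)  # about -0.98
p_value = normal_cdf(z)                     # left tail, about 0.16
print(z, p_value, p_value < alpha)          # p-value > alpha: fail to reject H0
```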
540
1,833
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.734375
3
CC-MAIN-2022-33
longest
en
0.883306
https://oeis.org/A241221
1,670,298,342,000,000,000
text/html
crawl-data/CC-MAIN-2022-49/segments/1669446711069.79/warc/CC-MAIN-20221206024911-20221206054911-00874.warc.gz
467,535,735
4,562
A241221 Primes obtained by merging 5 successive digits in the decimal expansion of sqrt(2) + sqrt(3) + sqrt(5). 1 47441, 87383, 66809, 80953, 87119, 19753, 48163, 81637, 35591, 52967, 96763, 30727, 77621, 80809, 16903, 35051, 14159, 24877, 56437, 24677, 67723, 32077, 29429, 76831, 11257, 57367, 36787, 80207, 61141, 68351, 35129, 47701, 77017, 64579, 24671, 37277, 27701, 56873 (list; graph; refs; listen; history; text; internal format) OFFSET 1,1 COMMENTS All the terms in the sequence are 5-digit primes because leading zeros are not permitted. LINKS K. D. Bajpai, Table of n, a(n) for n = 1..4248 EXAMPLE a(1) = 47441, which is prime. It is the first occurrence of a 5-digit prime in the decimal expansion of sqrt(2) + sqrt(3) + sqrt(5), i.e., 5.3823323(47441)76203873830873445 ... MATHEMATICA With[{len = 5}, Select[FromDigits /@Partition[RealDigits[Sqrt[2] + Sqrt[3] + Sqrt[5], 10, 1000][[1]], len, 1], IntegerLength[#] == len && PrimeQ[#] &]] CROSSREFS Cf. A198161, A198162, A198163, A198164, A198165, A198166, A198169, A241149. Sequence in context: A263067 A234708 A069370 * A261339 A166003 A221017 Adjacent sequences: A241218 A241219 A241220 * A241222 A241223 A241224 KEYWORD nonn,base AUTHOR K. D. Bajpai, Apr 18 2014 STATUS approved
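The Mathematica program in the entry has a plain-Python analogue using the `decimal` module; the sketch below (my code, not from the entry) reproduces the opening terms of A241221:

```python
from decimal import Decimal, getcontext

# Slide a 5-digit window over the decimal expansion of sqrt(2)+sqrt(3)+sqrt(5),
# keeping windows that have no leading zero and are prime -- the rule stated
# in the entry above.

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

getcontext().prec = 60   # 60 significant digits is plenty for the first terms
digits = str(Decimal(2).sqrt() + Decimal(3).sqrt() + Decimal(5).sqrt()).replace(".", "")

terms = [int(digits[i:i + 5])
         for i in range(50)
         if digits[i] != "0" and is_prime(int(digits[i:i + 5]))]
print(terms[:2])  # [47441, 87383], matching a(1) and a(2)
```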
678
2,017
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.421875
3
CC-MAIN-2022-49
latest
en
0.695255
https://myassignmenthelp.com/free-samples/bhs0542-human-nutrition/boiled-cabbage-tissue-file-A1DBE34.html
1,726,707,700,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00667.warc.gz
385,567,345
27,374
Get Instant Help From 5000+ Experts For Writing: Get your essay and assignment written from scratch by PhD expert Rewriting: Paraphrase or rewrite your friend's essay with similar meaning at reduced cost ## Experiment Method Trial Initial burette reading(A) Final burette reading (B) Dye titrated (B-A)ml 1. 4.0 10.1 6.1 2. 12.1 19.5 7.4 i) Concentration of standard ascorbic acid solution used in the experiment = 2 mg/5 ml=0.4mg/ml. ii) Volume of standard ascorbic acid solution given in the experiment= 5 ml iii) Volume of dye solution required =6.75ml. Dye titrated (average) = 6.1+7.4/2 = 13.5ml/2=6.75ml. 2mg/5ml of standard ascorbic acid solution (0.4mg/ml) with volume of 5ml. Formula to be used: Milligram of ascorbic acid =Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. Milligram of ascorbic acid of dye solution= 6.75ml x 0.4mg/ml= 2.7mg Amount of ascorbic acid equivalent to 1ml of dye= 2.7mg---Answer. Weight of spinach =15 gram of fresh spinach leaves were used. The total volume used= 20ml, Aliquot volume used= 10ml. Trial Initial burette reading(A) Final burette reading(B) Dye titrated(ml)(B-A) 1. 12.5ml 19ml 6.5ml 2. 22ml 27.4ml 5.4ml Dye titrated average in ml= 5.95ml 1. The amount of ascorbic acid in mg/100g of spinach fresh tissue= mg of ascorbic acid used in titration sample aliquot x total volume of extract in ml x 100/ volume of aliquot in ml x weight of spinach in grams. 2. Amount of ascorbic acid used in titration experiment sample= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. 3. a) Average amount of ascorbic acid used in the titration in the aliquot= 5.95 ml x 0.4 mg/1ml of dye=2.38mg 4. b) The amount of ascorbic acid content in mg/100g of fresh tissue= 2.38 mg x 20ml x 100/ 10ml x 15 gram. The amount of ascorbic acid in mg/100g of fresh tissue= 4760/150= 31.73mg of ascorbic acid/100 g tissue—Answer. Weight of spinach: 10 g, Total volume used in the experiment: 20 ml, Aliquot volume used in the experiment: 10 ml.
Trial Initial burette reading (A) Final burette reading (B) Dye titrated (ml) (B-A) 1. 17.4ml 20.1ml 2.7ml 2. 22.1ml 25ml 2.9ml Dye titrated average in ml=5.6ml/2=2.8ml. 1. Amount of ascorbic acid content in mg/100g of boiled tissue = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of spinach in grams. 2. a. Average amount of ascorbic acid used in titration experiment aliquot= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. Amount of ascorbic acid used in titration experiment sample aliquot= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. Amount of ascorbic acid used in the titration= 2.8 ml x 0.4mg/1ml of dye=1.12mg 1. The amount of ascorbic acid in mg/ 100g of boiled spinach tissue= 1.12 x 20 x 100/ 10ml x 10g =2240/100=22.4mg The amount of ascorbic acid in mg/ 100g of boiled spinach tissue= 22.4mg—Answer. Boiled water, Weight of spinach leaves: 10 g, Total volume of aliquot taken: 20 ml, Aliquot volume taken for experimentation: 10 ml. Trial Initial burette reading(A) Final burette reading(B) Dye titrated (A-B) 1. 10ml 12ml 2ml 2. 14ml 16ml 2ml Dye titrated average in ml= 2ml 1. Average of ascorbic acid lost in the water after 100g spinach leaves tissue was boiled for 5 minutes = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of spinach in grams. 2. Amount of ascorbic acid used in titration experiment sample= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. 3. Amount of ascorbic acid used in titration experiment sample aliquot= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. 1. Average ascorbic acid lost in the water after 100g spinach leaves tissue was boiled for 5 minutes = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of spinach in grams. 2. 
Average of ascorbic acid lost in the water after 100g spinach leaves tissue was boiled for 5 minutes = 0.8 x 20 x 100/10x 10 = 1600/100=16mg—Answer. Weight of spinach leaves: 1000g (1Kg), Total volume of aliquot taken: 30 ml, Aliquot volume taken for experimentation: 10 ml. Trial Initial burette reading(A) Final burette reading(B) Dye titrated (A-B) 1. 10ml 12ml 2ml 2. 14ml 16ml 2ml Dye titrated average in ml= 2ml 1. Average of ascorbic acid lost in the water after 1000g spinach leaves tissue was boiled for 5 minutes = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of spinach in grams. 2. Amount of ascorbic acid used in titration experiment sample= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. Amount of ascorbic acid used in titration experiment sample= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. 1. Average amount of ascorbic acid used in titration experiment sample= 2ml x 0.4mg/ml=0.8mg 2. Average of ascorbic acid lost in the water after 1000g spinach leaves tissue was boiled for 5 minutes = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of spinach in grams. Average of ascorbic acid lost in the water after 1000g spinach leaves tissue was boiled for 5 minutes = 0.8 x 30 x 100/10x 1000 = 2400/10000=0.24mg—Answer. 1. Cabbage leaves Trial Initial Burette reading Final Burette reading Dye titrated (B-A)ml 1. 8.2ml 15.4ml 7.2ml 2. 13.1ml 20.2ml 7.1ml i) Concentration of standard ascorbic acid solution used in the experiment = 2 mg/5 ml=0.4mg/ml. ii) Volume of standard ascorbic acid solution given in the experiment= 5 ml iii) Volume of dye solution required =7.15ml. Dye titrated (average) = 7.2+7.1/2 = 14.3ml/2=7.15ml. 2mg/5ml of standard ascorbic acid solution (0.4mg/ml) with volume of 5ml. Milligram of ascorbic acid =Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. 
Milligram of ascorbic acid of dye solution= 7.15 ml x 0.4mg/ml= 2.86mg Amount of ascorbic acid equivalent to 1ml of dye= 2.86mg---Answer. Weight of cabbage =20 gram of fresh cabbage leaves were used. The total volume used= 25ml, Aliquot volume used= 15ml. Trial Initial burette reading(A) Final burette reading(B) Dye titrated(ml)(B-A) 1. 14.5ml 20.2ml 5.7ml 2. 21.3ml 28.7ml 7.4ml Dye titrated average in ml= 5.7ml+7.4ml/2= 6.55ml 1. The amount of ascorbic acid in mg/100g of cabbage fresh tissue= mg of ascorbic acid used in titration sample aliquot x total volume of extract in ml x 100/ volume of aliquot in ml x weight of cabbage in grams. 2. Amount of ascorbic acid used in titration experiment sample= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. 3. Average amount of ascorbic acid used in the titration in the aliquot= 6.55 ml x 0.4 mg/1ml of dye=2.62mg 4. The amount of ascorbic acid content in mg/100g of fresh tissue=2.62 mg x 25ml x 100/ 15ml x 20 gram. The amount of ascorbic acid in mg/100g of fresh tissue= 6550/300= 21.83mg of ascorbic acid/100 g tissue—Answer. Weight of cabbage: 20 g, Total volume used in the experiment: 25 ml, Aliquot volume used in the experiment: 15 ml. Trial Initial burette reading (A) Final burette reading (B) Dye titrated (ml) (B-A) 1. 15.2ml 19.1ml 3.9ml 2. 20.3ml 24.8ml 4.5ml Dye titrated average in 3.9ml+4.5ml/2=8.4/2=4.2ml. 1. Amount of ascorbic acid content in mg/100g of boiled tissue = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of cabbage in grams. 2. a. Average amount of ascorbic acid used in titration experiment aliquot= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. Amount of ascorbic acid used in titration experiment sample aliquot= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. Amount of ascorbic acid used in the titration= 4.2ml x 0.4mg/1ml of dye=1.68mg 1. 
The amount of ascorbic acid in mg/ 100g of boiled cabbage tissue= 1.68 x 25 x 100/ 15ml x 20g =4200/300=14.0mg The amount of ascorbic acid in mg/ 100g of boiled cabbage tissue= 14.0mg—Answer. Boiled water, Weight of cabbage leaves: 25 g Total volume of aliquot taken: 25 ml Aliquot volume taken for experimentation: 15 ml. Trial Initial burette reading(A) Final burette reading(B) Dye titrated (A-B) 1. 11.8ml 13.5ml 1.7ml 2. 12.5ml 17.5ml 5ml Dye titrated average in ml= 6.7/2ml=3.35ml 1. Average of ascorbic acid lost in the water after 100g cabbage leaves tissue was boiled for 5 minutes = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of cabbage in grams. 2. Amount of ascorbic acid used in titration experiment sample= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. 3. Amount of ascorbic acid used in titration experiment sample aliquot= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. Amount of ascorbic acid used in titration experiment sample aliquot = 3.35ml x 0.4mg/ml=1.34mg 1. Average ascorbic acid lost in the water after 100g cabbage leaves tissue was boiled for 5 minutes = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of cabbage in grams. 2. Average of ascorbic acid lost in the water after 100g cabbage leaves tissue was boiled for 5 minutes = 1.34 x 25 x 100/15x 20 = 3350/300=11.16mg—Answer. Weight of cabbage leaves: 1000g (1Kg) Total volume of aliquot taken: 30 ml Aliquot volume taken for experimentation: 10 ml. Trial Initial burette reading(A) Final burette reading(B) Dye titrated (A-B) 1. 11.4ml 14.2ml 2.8ml 2. 13.5ml 18.4ml 4.9ml Dye titrated average in ml= 2.8+4.9ml=7.7/2=3.85ml 1. Average of ascorbic acid lost in the water after 1000g cabbage leaves tissue was boiled for 5 minutes = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of cabbage in grams. 2. 
Amount of ascorbic acid used in titration experiment sample= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. 1. Average amount of ascorbic acid used in titration experiment sample= 3.85ml x 0.4mg/ml=1.54mg 2. Average of ascorbic acid lost in the water after 1000g cabbage leaves tissue was boiled for 5 minutes = mg of ascorbic acid used in sample x total volume of extract in ml x 100/ volume of aliquot in ml x weight of cabbage in grams. Average of ascorbic acid lost in the water after 1000g cabbage leaves tissue was boiled for 5 minutes = 1.54 x 30 x 100/10 x 1000 = 4620/10000=0.462mg—Answer. Weight of apple: 75g Total volume used: 350ml Aliquot volume used: 175ml Trial Initial Burette reading(A) Final burette reading (B) Dye titrated B-A ml 1. 14 ml 19.5 ml 5.5 2. 19ml 24ml 5 Dye titration volume average in ml: 5.5+5/2=10.5/2=5.25ml 1. The amount of ascorbic acid in mg/100g of apple fresh tissue= mg of ascorbic acid used in titration sample aliquot x total volume of extract in ml x 100/ volume of aliquot in ml x weight of apple in grams. 2. Amount of ascorbic acid used in titration experiment sample aliquot= Amount of titrated dye (ml) x ascorbic acid (mg) /1ml of dye. 3. Average amount of ascorbic acid used in the titration in the aliquot= 5.25 ml x 0.4 mg/1ml of dye=2.10mg ascorbic acid--Answer 4. The amount of ascorbic acid content in mg/100g of fresh apple tissue= 2.10 mg x 350ml x 100/ 175ml x 75 gram. The amount of ascorbic acid in mg/100g of fresh tissue= 73500/13125= 5.6mg of ascorbic acid/100 g tissue—Answer. Answer 6: The potential error in the protocol is that there may be more loss of the ascorbic acid content in the fruits apple and cabbage due to the degradation of the ascorbic acid at high temperatures.
There is a degradation of ascorbic acid content in different fruits during the time interval of 4 days-7 days when heated at a high temperature (Mussa and El Sharaa, 2014). Answer 7: The intake of vitamin C is important during pregnancy because this vitamin has several important functions like supporting the growth of cells, helpful in cellular repair mechanisms, preventing the occurrence of neural tube defects during pregnancy, and helping in the synthesis of DNA inside the cells. Vitamin C is also essential for the synthesis of collagen which provides structural support to the skin layer and is also essential for the synthesis of the neurotransmitter (Edward and Hans, 2010). In children, the intake of vitamins boosts the immune system and strengthens the nervous system. Vitamin C supports the growth and development of children and provides strong flexible support to the skin layers (Child, 2020). Answer 8: The major sources of vitamins present in the food diet are as follows citrus fruits, red tomatoes green leafy vegetables available in the market like cabbage, broccoli, cauliflower, and lettuce (Ofoedu et al., 2021). Answer 9: Various factors affect the content of Vitamin C in the diet like the heating of food at high temperatures during cooking, exposure of food containing vitamin C to light, and radiation. The change in pH affects the content of Vitamin C present in the different food sources (Lee et al., 2017). 1. Child, T., 2020. Vitamin C for Kids: The 6 Best Benefits - Brauer Health Library. Brauer Natural Medicines. 2. Diengdoh F, D., E R, Dkhar., T, Mukhim. and C L, Nongpiur., 2015. Worldwidejournals.com. 3. Edward, B. and Hans, U., 2010. Regular vitamin C supplementation during pregnancy reduces hospitalization: outcomes of a Ugandan rural cohort study. Pan African Medical Journal, 5(1). 4. Isleroglu, H., Yilmazer, M. and Ertekin, F., 2016. Kinetics of colour, chlorophyll, and ascorbic acid content in spinach baked in different types of oven. 
Taylor & Francis. 5. Koh, E., Charoenprasert, S. and Mitchell, A., 2012. Effect of Organic and Conventional Cropping Systems on Ascorbic Acid, Vitamin C, Flavonoids, Nitrate, and Oxalate in 27 Varieties of Spinach (Spinacia oleracea L.). Journal of Agricultural and Food Chemistry, 60(12), pp.3144-3150. 6. Lee, S., Choi, Y., Jeong, H., Lee, J. and Sung, J., 2017. Effect of different cooking methods on the content of vitamins and true retention in selected vegetables. Food Science and Biotechnology. 7. Mussa, S. and El Sharaa, I., 2014. Analysis of Vitamin C (ascorbic acid) Contents packed fruit juice by UV-spectrophotometry and Redox Titration Methods. IOSR Journal of Applied Physics, 6(5), pp.46-52. 8. N.C, Igwemmar., S.A, Kolawole. and I.A, Imran., 2013. Effect of Heating on Vitamin C Content of Some Selected Vegetables. Ijstr.org. 9. Ofoedu, C., Iwouno, J., Ofoedu, E., Ogueke, C., Igwe, V., Agunwah, I., Ofoedum, A., Chacha, J., Muobike, O., Agunbiade, A., Njoku, N., Nwakaudu, A., Odimegwu, N., Ndukauba, O., Ogbonna, C., Naibaho, J., Korus, M. and Okpala, C., 2021. Revisiting food-sourced vitamins for consumer diet and health needs: a perspective review, from vitamin classification, metabolic functions, absorption, utilization, to balancing nutritional requirements. PeerJ, 9, p.e11940. 10. Samuel, P. and Godrick, E., 1997. Human Metabolism of Vitamin C and Fruit Juice Analysis. 11. Toledo, M., Ueda, Y., Imahori, Y. and Ayaki, M., 2003. l-ascorbic acid metabolism in spinach (Spinacia oleracea L.) during postharvest storage in light and dark. Postharvest Biology and Technology, 28(1), pp.47-57. 12. Varsha T, R. and Padma P, N., 2017. Comparative Studies on Ascorbic acid content in Various Fruits, Vegetables and Leafy Vegetables. 13. Yadav, S. and Sehgal, S., 1995. Effect of home processing on ascorbic acid and β-carotene content of spinach (Spinacia oleracia) and amaranth (Amaranthus tricolor) leaves.
Plant Foods for Human Nutrition, 47(2), pp.125-131 Cite This Work My Assignment Help. (2022). Determining Ascorbic Acid Content In Vegetables And Fruits: An Essay. Retrieved from https://myassignmenthelp.com/free-samples/bhs0542-human-nutrition/boiled-cabbage-tissue-file-A1DBE34.html.
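Every calculation in the write-up above instantiates one formula: mg of ascorbic acid per 100 g tissue = mg titrated in the aliquot x total extract volume x 100 / (aliquot volume x sample weight). A small helper (the function name is mine) makes the repeated arithmetic easy to re-check:

```python
# The one formula behind all the per-100 g answers in the lab report above.
def ascorbic_mg_per_100g(mg_in_aliquot, total_ml, aliquot_ml, sample_g):
    return mg_in_aliquot * total_ml * 100 / (aliquot_ml * sample_g)

# Fresh spinach: 2.38 mg titrated in a 10 ml aliquot of a 20 ml extract, 15 g leaves.
print(round(ascorbic_mg_per_100g(2.38, 20, 10, 15), 2))  # 31.73
# Boiled spinach: 1.12 mg, same volumes, 10 g leaves.
print(round(ascorbic_mg_per_100g(1.12, 20, 10, 10), 2))  # 22.4
# Fresh cabbage: 2.62 mg in a 15 ml aliquot of a 25 ml extract, 20 g leaves.
print(round(ascorbic_mg_per_100g(2.62, 25, 15, 20), 2))  # 21.83
```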
5,297
17,202
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.9375
3
CC-MAIN-2024-38
latest
en
0.826248
https://www.clutchprep.com/physics/practice-problems/103092/a-piece-of-cheese-with-a-mass-of-1-21-kg-is-placed-on-a-vertical-spring-of-negli
1,604,036,035,000,000,000
text/html
crawl-data/CC-MAIN-2020-45/segments/1603107907213.64/warc/CC-MAIN-20201030033658-20201030063658-00583.warc.gz
669,457,711
29,741
Springs & Elastic Potential Energy # Problem: A piece of cheese with a mass of 1.21 kg is placed on a vertical spring of negligible mass and a force constant k = 2100 N/m that is compressed by a distance of 15.1 cm. When the spring is released, how high does the cheese rise from this initial position? (The cheese and the spring are not attached.)
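A quick sketch of the energy-conservation solution (g = 9.8 m/s² assumed here): the elastic potential energy of the compressed spring converts entirely into gravitational potential energy at the highest point, so ½kx² = mgh.

```python
# (1/2) k x^2 = m g h   =>   h = k x^2 / (2 m g)
# The cheese is not attached, so it leaves the spring and coasts upward.

m = 1.21      # kg, mass of the cheese
k = 2100.0    # N/m, spring constant
x = 0.151     # m, compression distance
g = 9.8       # m/s^2, assumed value of gravitational acceleration

h = k * x**2 / (2 * m * g)
print(f"h = {h:.2f} m")  # -> h = 2.02 m (measured from the compressed position)
```

Note that h here is measured from the initial (compressed) position, which is exactly what the problem asks for.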
180
757
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.875
3
CC-MAIN-2020-45
latest
en
0.921768
https://crater.lanecc.edu/banp/bwckctlg.p_disp_catalog_syllabus?cat_term_in=202320&subj_code_in=MTH&crse_numb_in=010
1,701,625,727,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100508.42/warc/CC-MAIN-20231203161435-20231203191435-00750.warc.gz
228,244,368
3,527
## Syllabus Information MTH 010 - Whole Numbers, Fractions, Decimals Associated Term: Fall 2022 Learning Objectives: Upon successful completion of this course, the student will be able to: 1. Add, subtract, multiply, and divide whole numbers 2. Identify characteristics of even, odd, prime, and composite numbers 3. Solve real world application problems using whole numbers 4. Order whole numbers using < and > 5. List factors and multiples of a given number 6. Compute problems using the order of operations 7. Use math vocabulary 8. Compute area and perimeter of rectangles using whole numbers 9. Add, subtract, multiply, and divide fractions with like and unlike denominators 10. Reduce fractions 11. Compare fractions using <, > or = 12. Convert fractions to decimals 13. Solve real world problems using fractions 14. Compute area and perimeter of rectangles using fractions 15. Use vocabulary of fraction terms 16. Add, subtract, multiply, and divide using decimals 17. Identify place value in decimal numbers 18. Compare decimals using <, > or = 19. Convert decimals to fractions 20. Solve real world application problems using decimals 21. Compute area and perimeter of rectangles using decimals 22. Select appropriate math study strategies 23. Monitor and evaluate personal progress and confidence 24. Utilize appropriate math resources
336
1,571
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.765625
3
CC-MAIN-2023-50
longest
en
0.616012
https://diffgeom.com/fr/products/five-tetrahedra-ocean-wall-art-poster
1,702,189,561,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679101282.74/warc/CC-MAIN-20231210060949-20231210090949-00178.warc.gz
234,721,841
51,082
# Five Tetrahedra, Ocean Wall Art Poster

Let $$\phi = \frac{1}{2}(1 + \sqrt{5})$$ be the golden ratio. Start with a cube aligned with the Cartesian axes. Inscribe a regular tetrahedron whose vertices are alternate vertices of the cube. Rotate this tetrahedron by multiples of one-fifth of a turn about the axis through $$(0, 1, \phi)$$ (or any other axis obtained by permuting coordinates). The resulting union of five tetrahedra has icosahedral symmetry. Rotations of space that preserve the union of the five tetrahedra permute the set of five tetrahedra by an "even permutation" (a composition of an even number of swaps). Conversely, every even permutation of the set of five tetrahedra is effected by a unique rotation of space. The group of rotation symmetries of the union is isomorphic to the alternating group on five letters. Printed on high-quality yet economical poster paper with a satin finish.
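As a quick sanity check on the construction described above (an illustrative sketch, not part of the listing): the alternate vertices of the cube [-1, 1]³ do form a regular tetrahedron, and the axis coordinate φ satisfies the golden-ratio identity φ² = φ + 1.

```python
import itertools
import math

# Alternate vertices of the cube with corners at (+/-1, +/-1, +/-1):
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

# All 6 pairwise distances should be equal (edge length 2*sqrt(2)),
# confirming the four points form a regular tetrahedron.
dists = [math.dist(a, b) for a, b in itertools.combinations(verts, 2)]
assert len(dists) == 6
assert all(abs(d - 2 * math.sqrt(2)) < 1e-12 for d in dists)

# The golden ratio used for the rotation axis (0, 1, phi):
phi = (1 + math.sqrt(5)) / 2
assert abs(phi**2 - (phi + 1)) < 1e-12  # defining identity of phi

print("regular tetrahedron edge length:", dists[0])
```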
284
1,138
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.640625
3
CC-MAIN-2023-50
latest
en
0.796605
https://www.jiskha.com/display.cgi?id=1264650967
1,503,394,167,000,000,000
text/html
crawl-data/CC-MAIN-2017-34/segments/1502886110573.77/warc/CC-MAIN-20170822085147-20170822105147-00495.warc.gz
916,041,441
4,501
# Physics On a string instrument of the violin family, the effective length of a string is the distance between the bridge and the nut. For a violin, this distance is 29.3 cm, while for a cello it is 37.8 cm. The string of a violin is placed in a cello with the intention of producing a sound of the same fundamental frequency. To accomplish this, the string on the cello will be under a larger tension than on the violin. By how much should the tension in the cello be increased with respect to the tension in the violin? Express the result as a percentage, and to two significant figures. Only answer in numerical values, without the % sign. For example, an increase of 11% corresponds to Tcello = (1.11) Tviolin, and should be entered as 11 in the answer box. • Physics - You want the answer in a box? Without the % sign? Do you want fries with that? • Physics - If you want to learn the subject and not just fill in the blanks to get some meaningless degree, use the fact that the frequency is proportional to (wave speed)/(string length). To keep the frequency the same, the wave speed must increase by a factor 37.8/29.3 = 1.2901. The string's lineal density remains the same, since it is the same string. Take a look at the formula for wave speed in a string under tension. If you don't know it, look it up. It says that you have to increase the tension so that sqrt(tension) is increased by a factor 1.2901. Take it from there.
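Working the posted hint through numerically (a sketch; values from the question): f = v/(2L) with v = √(T/μ), and μ is unchanged since it is the same string, so keeping f fixed requires T ∝ L².

```python
# Same fundamental frequency f = v / (2 L), wave speed v = sqrt(T / mu).
# Same string => same linear density mu, so T must scale as L^2.

L_violin = 0.293  # m, effective string length on the violin
L_cello = 0.378   # m, effective string length on the cello

ratio = (L_cello / L_violin) ** 2      # T_cello / T_violin
increase_pct = (ratio - 1) * 100       # percentage increase in tension
print(round(increase_pct))  # -> 66
```

So the tension must be increased by about 66% (two significant figures).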
941
3,789
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.875
4
CC-MAIN-2017-34
latest
en
0.926898
http://www.markedbyteachers.com/as-and-a-level/science/in-this-investigation-i-will-be-looking-at-the-resistance-of-a-solution-and-the-different-things-which-affect-it-in-this-experiment-i-have-chosen-one-variable-the-salt-concentration.html
1,606,636,227,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141197278.54/warc/CC-MAIN-20201129063812-20201129093812-00422.warc.gz
140,103,780
23,147
# In this investigation, I will be looking at the resistance of a solution, and the different things which affect it. In this experiment, I have chosen one variable, the salt concentration.

## Introduction

Investigating the resistance of a solution. In this investigation, I will be looking at the resistance of a solution, and the different things which affect it. In this experiment, I have chosen one variable, the salt concentration. I have chosen the following values for the concentration, in a beaker of 50 ml (cm³): 0, 1, 2, 3, 4 and 5 grams. Using the circuit shown below, I will find out the resistance of the salt solution. I will keep the following variables the same: I will keep the temperature at room temperature, by not changing the room temperature at all, by keeping windows shut, and not adjusting the radiators during the experiment. I will monitor the temperature and make a note, just in case it changes. I will monitor the temperature using a thermometer, which will be placed into the beaker during each experiment. I will keep the voltage at 5 V and use 50 cm³ of solution, to keep it a fair test. The circuit will include a power pack, an ammeter, a voltmeter, two iron rods, a beaker, wires with crocodile clips and a measuring tube. I will repeat each experiment once, so that I can find out an average, which will prevent anomalies affecting the graph, and in case there is a difference between the first attempt and the second attempt, I will be able to spot the mistakes.
## Method

Preliminary results (the first column, salt in grams, is inferred from the stated concentrations):

| Salt (g) | Voltage (V) | Current (A) | Input voltage (V) | Resistance (Ω) |
|---|---|---|---|---|
| 0 | 3.86 | 0.00 | 5 | 386.00 |
| 3 | 4.15 | 0.41 | 5 | 10.12 |
| 5 | 4.10 | 0.58 | 5 | 7.06 |

This preliminary experiment showed me that I might have to use a higher voltage, because there would be a wider range of voltage readings, and this would make a graph easier to read and understand. I believe that 7 V would be a better input voltage, because it is neither too high nor too small, being a higher voltage than those used in my preliminary work. It shows me that 50 cm³ is enough water to use, because it will dissolve the amount of salt I plan to use, 5 g. It is important that the salt can dissolve into the solution, because once no more salt can be dissolved into the solution, adding more salt has no effect on the resistance any more, which affects the results.

## Apparatus

| Apparatus | Use for apparatus |
|---|---|
| Power pack | To supply the power |
| Wires | To connect the circuit |
| Voltmeter | To measure the voltage |
| Ammeter | To measure the current |
| Beaker | To hold the solution |

## Conclusion

I would set up the circuit, as shown in my plan. Before putting the solution (in a beaker) into the circuit and placing the rods into the solution, I will heat the beaker on a tripod over a Bunsen burner, with a thermometer in the beaker. When the temperature inside the beaker has reached the required temperature for the experiment, I will remove it from the tripod, connect the circuit, and record results. I will then rinse out the beaker and repeat the experiment, using a different temperature each time. This would show whether temperature affects the rate or not, because in my experiment the temperature remained constant. I could use different types of water for my experiment. I used distilled water, so as not to affect the results. If I were to use tap water, I could monitor whether the particles within the tap water affected the results or not, and whether they conduct electricity and decrease the resistance.
I believe that the experiment would be improved if different solutions were formed, for example using a different material instead of salt, to see if this would produce the same results. I used a D.C. supply because I wanted to keep the particles at the same charge throughout the experiment; I could improve the experiment by using an A.C. current, to see if this affected the resistance. I could use a larger amount of water, because this would enable me to dissolve more salt into the solution, and therefore give a wider spread of results.
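The resistance values in the preliminary table follow directly from Ohm's law, R = V/I, applied to the measured voltage and current; a minimal sketch (figures taken from the write-up, small differences are rounding):

```python
# Resistance from the preliminary readings via Ohm's law, R = V / I.
# The 0 g row is omitted: its current reads 0.00 A to two decimal places,
# so R = V / I cannot be computed reliably from the rounded figures.
readings = [
    # (salt in grams, measured volts, measured amps)
    (3, 4.15, 0.41),
    (5, 4.10, 0.58),
]

resistances = {grams: volts / amps for grams, volts, amps in readings}
for grams, r in resistances.items():
    print(f"{grams} g: R = {r:.2f} ohms")  # -> 3 g: R = 10.12 ohms, 5 g: R = 7.07 ohms
```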
1,714
7,510
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.609375
4
CC-MAIN-2020-50
latest
en
0.916891
https://gmatclub.com/forum/a-clock-gains-5-for-3-days-and-subsequently-loses-2-for-4103.html
1,511,088,588,000,000,000
text/html
crawl-data/CC-MAIN-2017-47/segments/1510934805541.30/warc/CC-MAIN-20171119095916-20171119115916-00775.warc.gz
612,312,321
37,762
# A clock gains 5% for 3 days and subsequently loses 2% for

Posted 10 Jan 2004: A clock gains 5% for 3 days and subsequently loses 2% for the next 7 days. If it was 10 minutes late at 10 A.M. on Monday, what will be the time 10 days from this point? a. 10:04:24 am b. 10:14:24 am c. 9:56:36 am d. 9:45:36 am ***Bloody marvelous question

Reply, 10 Jan 2004: I don't know if this is the correct answer, but I will attempt a solution. The correct answer should be (A) 10:04:24 am, assuming the clock gains 5% over the whole 3-day period, rather than gaining 5% every day for 3 days.
A 5% gain over a period of 3 days (24 × 3 hours) translates to a gain of 3 hours 36 minutes. Since the clock was 10 minutes late at 10:00 am on Monday, the clock time was 9:50 am. After three days and a gain of 3 hours 36 minutes, the clock would read 1:26 pm. The clock then loses 2% over a period of 7 days, which translates to a loss of 3 hours 21 minutes and 36 seconds. So, taking 1:26 pm as the clock time after the 3 days of gain, after the 7 days of loss the clock would read 10:04:24 am. I think that (A) is the right answer, but of course I may have made a mistake.
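The arithmetic in the reply can be checked directly with Python's `datetime` module (a sketch; the calendar date chosen is arbitrary, any Monday works):

```python
from datetime import datetime, timedelta

# Clock reads 10 minutes slow at Monday 10:00 AM.
true_start = datetime(2004, 1, 12, 10, 0)     # some Monday; the date itself is arbitrary
clock = true_start - timedelta(minutes=10)    # clock shows 9:50 AM

gain = timedelta(days=3) * 0.05   # +5% over the first 3 days -> 3 h 36 m
loss = timedelta(days=7) * 0.02   # -2% over the next 7 days  -> 3 h 21 m 36 s

clock_after = clock + timedelta(days=10) + gain - loss
print(clock_after.time())  # -> 10:04:24, matching answer (A)
```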
886
2,985
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.703125
4
CC-MAIN-2017-47
latest
en
0.911341
http://mathematica.stackexchange.com/questions/13361/mollweide-maps-in-mathematica?answertab=active
1,462,124,567,000,000,000
text/html
crawl-data/CC-MAIN-2016-18/segments/1461860116878.73/warc/CC-MAIN-20160428161516-00097-ip-10-239-7-51.ec2.internal.warc.gz
186,396,381
22,838
# Mollweide maps in Mathematica Context In my field of research, many people use the following package: healpix (for Hierarchical Equal Area isoLatitude Pixelization) which has been ported to a few different languages (F90, C,C++, Octave, Python, IDL, MATLAB, Yorick, to name a few). It is used to operate on the sphere and its tangent space and implements amongst other things fast (possibly spinned) harmonic transform, equal area sampling, etc. In the long run, I feel it would be useful for our community to be able to have this functionality as well. As a starting point, I am interested in producing Mollweide maps in Mathematica. My purpose is to be able to do maps such as which (for those interested) represents our Milky Way (in purple) on top of the the cosmic microwave background (in red, the afterglow of the Big Bang) seen by the Planck satellite. Attempt Thanks to halirutan's head start, this is what I have so far: cart[{lambda_, phi_}] := With[{theta = fc[phi]}, {2 /Pi*lambda Cos[theta], Sin[theta]}] fc[phi_] := Block[{theta}, If[Abs[phi] == Pi/2, phi, theta /. FindRoot[2 theta + Sin[2 theta] == Pi Sin[phi], {theta, phi}]]]; which basically allows me to do plots like grid = With[{delta = Pi/18/2}, Table[{lambda, phi}, {phi, -Pi/2, Pi/2, delta}, {lambda, -Pi, Pi, delta}]]; gr1 = Graphics[{AbsoluteThickness[0.05], Line /@ grid, Line /@ Transpose[grid]}, AspectRatio -> 1/2]; gr0 = Flatten[{gr1[[1, 2]][[Range[9]*4 - 1]],gr1[[1, 3]][[Range[18]*4 - 3]]}] // Graphics[{AbsoluteThickness[0.2], #}] &; gr2 = Table[{Hue[t/Pi], Point[{ t , t/2}]}, {t, -Pi, Pi, 1/100}] // Flatten // Graphics; gr = Show[{gr1, gr0, gr2}, Axes -> True] gr /. Line[pts_] :> Line[cart /@ pts] /. Point[pts_] :> Point[cart[ pts]] and project them to a Mollweide representation Question Starting from an image like this one: (which some of you will recognize;-)) I would like to produce its Mollweide view. Note that WorldPlot has this projection. 
In the long run, I am wondering how to link (via MathLink?) to existing F90/C routines for fast harmonic transforms available in healpix. - Perhaps we should host the image on imgur instead of directly embedding it from tpfto.files.wordpress.com, because (i) hotlinking is bad, and (ii) the site could change its URLs or take the image down. – Rahul Oct 20 '12 at 20:52 @RahulNarain I fixed this. This image was produced by J.M. – chris Oct 20 '12 at 20:55 P.S. that spherical Perlin noise image you linked to is indeed an equirectangular projection. :) – J. M. Oct 21 '12 at 2:47 Transform an image under an arbitrary projection? Looks like a job for ImageTransformation :) @halirutan's cart function gives you a mapping from latitude and longitude to the Mollweide projection. What we need here is the inverse mapping, because ImageTransformation is going to look at each pixel in the Mollweide projection and fill it in with the colour of the corresponding pixel in the original image. Fortunately MathWorld has us covered: \begin{align} \phi &= \sin^{-1}\left(\frac{2\theta+\sin2\theta}\pi\right), \\ \lambda &= \lambda_0 + \frac{\pi x}{2\sqrt2\cos\theta}, \end{align} where $$\theta=\sin^{-1}\frac y{\sqrt2}.$$ Here $x$ and $y$ are the coordinates in the Mollweide projection, and $\phi$ and $\lambda$ are the latitude and longitude respectively. The projection is off by a factor of $\sqrt2$ compared to the cart function, so for consistency I'll omit the $\sqrt2$'s in my implementation. I'll also assume that the central longitude, $\lambda_0$, is zero. invmollweide[{x_, y_}] := With[{theta = ArcSin[y]}, {Pi x/(2 Cos[theta]), ArcSin[(2 theta + Sin[2 theta])/Pi]}] Now we just apply this to our original equirectangular image, where $x$ is longitude and $y$ is latitude, to get the Mollweide projection. 
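For reference, the same inverse mapping (in the scaling used in this thread, i.e. with the √2 factors dropped and λ₀ = 0) can be sketched in plain Python; this is an illustrative sketch, not part of the original thread. The forward projection's transcendental equation 2θ + sin 2θ = π sin φ is solved by bisection, which works because the left-hand side is monotone in θ.

```python
import math

def _theta(phi, tol=1e-12):
    """Solve 2*t + sin(2*t) = pi*sin(phi) for t by bisection (monotone)."""
    lo, hi = -math.pi / 2, math.pi / 2
    target = math.pi * math.sin(phi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 2 * mid + math.sin(2 * mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def mollweide(lam, phi):
    """Forward Mollweide projection: (longitude, latitude) -> (x, y)."""
    t = _theta(phi)
    return (2 / math.pi) * lam * math.cos(t), math.sin(t)

def inv_mollweide(x, y):
    """Inverse Mollweide projection: (x, y) -> (longitude, latitude)."""
    t = math.asin(y)
    return math.pi * x / (2 * math.cos(t)), math.asin((2 * t + math.sin(2 * t)) / math.pi)

# Round trip: project and invert an arbitrary point.
lam, phi = 1.0, 0.7
x, y = mollweide(lam, phi)
lam2, phi2 = inv_mollweide(x, y)
assert abs(lam - lam2) < 1e-9 and abs(phi - phi2) < 1e-9
```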
i = Import["http://i.stack.imgur.com/4xyhd.png"] ImageTransformation[i, invmollweide, DataRange -> {{-Pi, Pi}, {-Pi/2, Pi/2}}, PlotRange -> {{-2, 2}, {-1, 1}}] - that pretty much nails it. I thought ImageTransformation only did translation/rotation. Thanks... – chris Oct 20 '12 at 20:51 would you know how to set up the background to white (just to be perfectionist)? – chris Oct 20 '12 at 21:02 @chris Try ImageCompose – Dr. belisarius Oct 20 '12 at 23:33 I must say, I'm amazed at the number of upvotes I've received simply for using a built-in function for its intended purpose! :) – Rahul Oct 22 '12 at 5:10 Rahul, there are lots of functions that people either don't know about, don't know how to use, or have forgotten. Showing a simple, powerful example is often rewarded with votes around here, especially when the result is a pretty picture. Don't undervalue your contribution. – Mr.Wizard Oct 22 '12 at 6:46 Heres an alternative. pic = Import["http://i.stack.imgur.com/4xyhd.png"] Let's say you are lazy and you don't want to write mathematical equations. We can use built-in transformations to create domain, image of transformation, and create InterpolationFunction based on this data. data = Join @@ Table[{lat, long}, {lat, -89, 89}, {long, -179, 179}]; Clear[x, y]; proj = "Bonne"; (* check GeoProjectionData[]*) im = First @ GeoGridPosition[GeoPosition[data], proj]; g[{x_, y_}] = Interpolation[Transpose[{data, im}]][y, x]; ImageForwardTransformation[ pic, g, 250 {1, 1}, DataRange -> {{-1, 1} 180, {-1, 1} 90}, (*expected range may vary with projection ofc*) PlotRange -> Pi {{-1, 1}, {-1, 1}} (*as above*) ] (*plot for Bonne, AzimuthalEquidistant, Albers and WinkelTripel*) - I appreciate lazy, +1. – rcollyer Jan 26 '15 at 19:46 Also, less chance of writing the equations wrong. – Rahul Jan 27 '15 at 1:10 @Rahul unfotunately way slower. but fast enough for playing around on small images. – Kuba Jan 27 '15 at 9:32 Is there an inverse of GeoGridPosition? 
ImageTransformation is much faster than ImageForwardTransformation, I think. – Rahul Jan 27 '15 at 17:43 @Rahul Yes, you have to switch GeoGridPosition with GeoPosition, more or less. I don;t understand why it is faster, it was natural for me to use forward. It should only care about pixels in result but it slows dramatically when the input is larger. – Kuba Jan 27 '15 at 17:54 To summarize various contributions from this post and others (Rahul Narain, halirutan, cormullion, Szabolcs, belisarius, J.M.) into a single plot, see the following definitions invmollweide[{x_, y_}] := With[{theta = ArcSin[y]}, {Pi (x)/(2 Cos[theta]), ArcSin[(2 theta + Sin[2 theta])/Pi]}]; fc[phi_] := Block[{theta}, If[Abs[phi] == Pi/2, phi, theta /. FindRoot[2 theta + Sin[2 theta] == Pi Sin[phi], {theta, phi}]]]; cart[{lambda_, phi_}] := With[{theta = fc[phi]}, {2 /Pi*lambda Cos[theta], Sin[theta]}] colorbar[{min_, max_}, colorFunction_: Automatic, divs_: 15] := DensityPlot[y, {x, 0, 0.1}, {y, min, max}, AspectRatio -> 15, PlotRangePadding -> 0, ColorFunction -> colorFunction, PlotPoints -> {2, divs}, MaxRecursion -> 0, FrameTicks -> {None, Automatic, None, None}]; grid0 = With[{delta = Pi/36}, Table[{lambda, phi}, {phi, -Pi/2, Pi/2, delta}, {lambda, -Pi, Pi, delta}]]; gr1 = Graphics[{AbsoluteThickness[0.1], Line /@ grid0, Line /@ Transpose[grid0]}, AspectRatio -> 1/2]; gr0 = Flatten[{gr1[[1, 2]][[Range[9]*4 - 1]],gr1[[1, 3]][[Range[18]*4 - 3]]}] // Graphics[{AbsoluteThickness[0.4], #}] &; grid = Show[{gr1, gr0}, Axes -> False]; grid = grid /. Line[pts_] :> {White, Line[(cart /@ pts)]}; gr2 = StreamPlot[{-1 - Sin[x]^2 + Sin[3y] + Cos[y]^2, 1 + Sin[2x] - Cos[y]^2}, {x, -Pi, Pi}, {y, -Pi/2, Pi/2}, AspectRatio -> 1/2, Frame -> False, StreamColorFunction -> "ThermometerColors", StreamPoints -> 250]; gr2 = gr2 /. Arrow[pts_] :> Arrow[(cart /@ pts)] /. 
Point[pts_] :> Point[cart[ pts]] // Show[#, PlotRange -> {{-2, 2}, {-1, 1}}] &; img = With[{img=Import["http://i.imgur.com/2ZPBK.jpg"]}, ImageTransformation[img, invmollweide, {512, 256}*4, DataRange -> {{-Pi, Pi}, {-Pi/2, Pi/2}}, PlotRange -> {{-2, 2}, {-1, 1}}, Column[{Style["The earth with some crazy vector field", 16], Graphics[{Inset[img, {-2, -1}, {0, 0}, {4, 2}], First[grid], First[gr2]}, PlotRange -> {{-2, 2}, {-1, 1}}, ImageSize -> 800], Magnify[Rotate[colorbar[img // ImageData // {Min[#], Max[#]} &, "DarkTerrain"], -90 Degree], 1]}, Center] ` yields (after a minute or so) which illustrates the versatility of Mathematica! - Nice. Needs a title? – cormullion Oct 21 '12 at 19:17 @cormullion you mean for the plot? As above? – chris Oct 21 '12 at 19:22 This would be a nice example for the weekly newsletter. +1 – Fred Kline Oct 21 '12 at 19:41 @chris Yes, anything - just to save me wondering whether it's "The migratory patterns of the Arctic Tern" or something... :) – cormullion Oct 21 '12 at 21:14 At least in 10.0.2 the grid lines look thicker and there are more of them than shown in the answer above. Was the code changed or has Mathematica's rendering changed? – Mr.Wizard Jan 26 '15 at 23:12
2,718
8,864
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
3.1875
3
CC-MAIN-2016-18
latest
en
0.838321
https://dearteassociazione.org/how-many-neutrons-are-in-potassium/
1,652,784,401,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00425.warc.gz
258,261,541
4,816
As you might know, atoms are composed of three different types of particles: protons and neutrons, which are found in the nucleus, and electrons, which are found in the electron cloud around the nucleus. Protons and neutrons are responsible for most of the atomic mass. The atomic mass of a proton and of a neutron is the same: each has an atomic mass of 1 u (unified atomic mass unit), while the mass of an electron is very small. Protons have a positive (+) charge, neutrons have no charge (they are neutral), and electrons have a negative (-) charge. The number of protons and the number of electrons are the same in a neutral (uncharged) atom. How can you find out how many protons (= electrons) an element has? You look at the periodic table, where the elements are listed according to their atomic number. Hydrogen is the first element, with an atomic number of 1. The atomic number is also the number of protons in an element, and it is always constant (e.g., H = 1, K = 19, U = 92). So if you look for potassium (symbol K in the periodic table), you will find that it has the atomic number 19. This tells you that potassium has 19 protons and, since the number of protons is the same as the number of electrons, also 19 electrons. To find out the number of neutrons you have to look at the atomic mass (or weight) of the element, which can also be found in the periodic table: it is the number under the element symbol. For potassium it is about 39, meaning that protons and neutrons together account for a mass number of 39. Because we know that the number of protons is 19, we can calculate the number of neutrons (39 - 19) as 20. Things can be a little more complicated, however.
The same element may contain varying numbers of neutrons; atoms that have the same number of protons but different numbers of neutrons are called isotopes. Oxygen, with an atomic number of 8, for example, can have 8, 9, or 10 neutrons. (The atomic number of oxygen is 8, and its atomic mass is 15.9994.) If you read off the atomic weight for potassium, it says 39.098. This tells you that there must be some other isotopes of potassium. In fact, natural potassium has three isotopes, with 20, 21, and 22 neutrons, with abundances of 93.26 %, 0.01 %, and 6.73 % respectively.
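The rule described above, neutrons = mass number minus atomic number, is simple enough to sketch in a few lines of Python (the function name is mine, and the potassium data is hard-coded here just for illustration):

```python
def neutron_count(mass_number, atomic_number):
    """Neutrons = mass number (protons + neutrons) - atomic number (protons)."""
    return mass_number - atomic_number

# Potassium: symbol K, atomic number 19, most common isotope K-39.
print(neutron_count(39, 19))  # 20

# The three natural isotopes of potassium (K-39, K-40, K-41):
for a in (39, 40, 41):
    print(f"K-{a}: {neutron_count(a, 19)} neutrons")
```

The loop prints 20, 21, and 22 neutrons, matching the three natural isotopes mentioned above.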
624
2,748
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.859375
3
CC-MAIN-2022-21
latest
en
0.923743
http://gmatclub.com/forum/at-a-certain-diner-a-hamburger-and-coleslaw-cost-3-59-and-3917.html?fl=similar
1,462,285,773,000,000,000
text/html
crawl-data/CC-MAIN-2016-18/segments/1461860121561.0/warc/CC-MAIN-20160428161521-00060-ip-10-239-7-51.ec2.internal.warc.gz
120,625,916
41,007
# At a certain diner, a hamburger and coleslaw cost $3.59, and
Author Message Senior Manager Joined: 30 Aug 2003 Posts: 324 Location: dallas, tx Followers: 1 Kudos [?]: 21 [0], given: 0 At a certain diner, a hamburger and coleslaw cost $3.59, and [#permalink] ### Show Tags 29 Dec 2003, 19:48 At a certain diner, a hamburger and coleslaw cost $3.59, and a hamburger and french fries cost $4.40. If french fries cost twice as much as coleslaw, how much do french fries cost? (A) $0.30 (B) $0.45 (C) $0.60 (D) $0.75 (E) $0.90 _________________ shubhangi GMAT Club Legend Joined: 15 Dec 2003 Posts: 4302 Followers: 33 Kudos [?]: 314 [0], given: 0 [#permalink] ### Show Tags 29 Dec 2003, 20:16 h = hamburger, c = coleslaw, f = french fries. h + f = 4.40 and h + c = 3.59. But we know that f = 2c. Thus, h + 2c = 4.40. By combining both equations: h + 2c = 4.40; -h - c = -3.59; so c = .81 and f = 1.62. Am I doing anything wrong here? 1.62 is not in the answer choices...
_________________ Best Regards, Paul Senior Manager Joined: 30 Aug 2003 Posts: 324 Location: dallas, tx Followers: 1 Kudos [?]: 21 [0], given: 0 [#permalink] ### Show Tags 29 Dec 2003, 23:25 I got the same answer... I don't know what's wrong? _________________ shubhangi [#permalink] 29 Dec 2003, 23:25
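Paul's algebra above can be checked mechanically. A quick Python sketch of the same elimination (variable names are mine) confirms that the fries come out to $1.62, which indeed matches none of the answer choices:

```python
# System from the problem statement:
#   h + c = 3.59   (hamburger + coleslaw)
#   h + f = 4.40   (hamburger + french fries)
#   f = 2c         (fries cost twice the coleslaw)
#
# Subtracting the first equation from the second eliminates h:
#   f - c = 0.81, and with f = 2c this gives c = 0.81.
c = 4.40 - 3.59   # coleslaw
f = 2 * c         # french fries
h = 3.59 - c      # hamburger
print(round(c, 2), round(f, 2), round(h, 2))  # 0.81 1.62 2.78
```

So the posted solution is internally consistent; the answer choices themselves appear not to match the numbers in the question.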
711
2,562
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.09375
4
CC-MAIN-2016-18
latest
en
0.836617
http://www.cfd-online.com/Forums/main/744-body-forces.html
1,477,225,231,000,000,000
text/html
crawl-data/CC-MAIN-2016-44/segments/1476988719273.37/warc/CC-MAIN-20161020183839-00481-ip-10-171-6-4.ec2.internal.warc.gz
378,245,188
14,395
# Body Forces
April 21, 1999, 10:04 Body Forces #1 Thomas P. Abraham Guest Posts: n/a Hello Everyone: I have a question on the dominance of body forces. In a given problem, the Rayleigh number gives the strength of free convection. When the Rayleigh number is more than 1.0e+7, we say that the turbulent effects need to be accounted for. That does not necessarily mean that the body forces are dominant too. Under what limit do the effects of body forces need to be accounted for? Thanks, Thomas
April 21, 1999, 21:24 Re: Body Forces #2 Duane Baker Guest Posts: n/a Hello Thomas, 1. Rayleigh No = Gr*Pr; its physical meaning is the ratio of "natural advection" to diffusion terms in the energy (temperature) equation, natural advection being the motion driven by the body force of buoyancy. 2. The analogy in forced convection is the Peclet No = Re*Pr, and as you stated, the influence and onset of turbulence is related to Pe for forced and Ra for natural convection. 3. How can one have a flow with a large Ra if the effect of buoyancy is not large (i.e. Gr is not large)? To quote Eckert from p. 525 of "Analysis of Heat and Mass Transfer": "A flow situation is called free- or natural convection flow when it is created solely by body forces." Have a read through Eckert's discussion! Good luck!....................................Duane
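Duane's point, that Ra = Gr*Pr couples the strength of buoyancy (Gr) to the onset of turbulence, can be sketched numerically. A rough Python helper is below; the Grashof formula is the standard g·β·ΔT·L³/ν² definition, and the 1.0e+7 threshold is the one Thomas quotes for this problem class, not a universal constant:

```python
import math

def grashof(g, beta, delta_t, length, nu):
    """Grashof number: ratio of buoyancy to viscous forces.
    g [m/s^2], beta [1/K], delta_t [K], length [m], nu [m^2/s]."""
    return g * beta * delta_t * length**3 / nu**2

def rayleigh(gr, pr):
    """Rayleigh number: Gr * Pr, i.e. natural advection vs. diffusion
    in the energy equation."""
    return gr * pr

# Example: air (Pr ~ 0.71) near a 10 K warmer wall, 0.1 m tall.
gr = grashof(g=9.81, beta=3.4e-3, delta_t=10.0, length=0.1, nu=1.5e-5)
ra = rayleigh(gr, pr=0.71)
print(f"Gr = {gr:.3g}, Ra = {ra:.3g}, turbulence expected: {ra > 1.0e7}")
```

For this small example Ra stays around 1e6, below the quoted turbulence threshold, even though buoyancy is the sole driver of the flow, which is exactly the distinction the thread is drawing.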
523
2,010
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.734375
3
CC-MAIN-2016-44
latest
en
0.908889
pearsonexams.com
1,686,007,230,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224652184.68/warc/CC-MAIN-20230605221713-20230606011713-00404.warc.gz
497,038,936
116,229
# Future Value Of Annuity Formula With Calculator It can be either the 'present value annuity formula' or the 'future value annuity formula'. Before we learn how to use the annuity formula to calculate annuities, we need to be conversant with these terms. Annuities are basically loans that are paid back over a set period of time at a set interest rate with consistent payments each period. Basically, the future value of an annuity estimates how much cash you would have in the future at a defined rate of return. In other words, you can use a special formula to anticipate how the money you invest today will grow over time. The higher your annuity's discount rate, the higher your annuity's future value will be. To understand how to calculate an annuity, it's useful to understand the variables that impact the calculation. An annuity is essentially a loan, a multi-period investment that is paid back over a fixed period of time. The amount paid back over time depends on the amount of time it takes to pay it back, the interest rate being applied, and the principal. That is how much interest earnings you will be giving up by paying for the data plan for the next 30 years (of course, your loss will be the data plan company's gain). ## Calculating Present And Future Value Of Annuities More specifically, an annuity formula helps find the values for annuity payments and annuity due. It's typically based on the present value of an annuity due, the effective interest rate, and the number of periods. As such, the formula is based on an ordinary annuity, which is a series of payments made at the end of a period. It's calculated on the present value of an ordinary annuity, the effective interest rate, and the number of periods. An annuity, as used here, is a series of regular, periodic payments to or withdrawals from an investment account. The investor may make deposits weekly, monthly, quarterly, yearly, or at any other regular interval of time.
An annuity creates a guaranteed income for your retirement. There will then be multiple time segments that require you to work left to right by repeating steps 3 through 5 in the procedure. The future value at the end of one time segment becomes the present value in the next time segment. The future value of an annuity is the value of a group of recurring payments at a certain date in the future, assuming a particular rate of return, or discount rate. The higher the discount rate, the greater the annuity's future value. ## How To Calculate The Future Value Of An Annuity Due Placing the two types of annuities next to each other in the next figure demonstrates the key difference between the two examples. The following routines can be used to calculate the present and future values of an annuity that increases at a constant rate at equal intervals of time. Routines are included for both END and BEGIN mode calculations. Let's imagine you decide to save by depositing $2,000 in an account each year for five years. The initial deposit happens at the end of the first year. If a deposit was made right away then the future value of annuity due formula would be used. Our future value annuity formula example is going to take you back to those fun word problems during 4th-grade math class. Back then you thought word problems were useless, but your future self is thankful you paid attention. ## A Guide To Selling Your Structured Settlement Payments This is a great tool that provides future projected cash values. Selecting the "Exact/Simple" option sets the calculator so it will not compound the interest. Also, the exact number of days between withdrawal dates is used to calculate the interest for the period. The "Daily" option uses the exact number of days between dates, but daily compounding is assumed. (The interest earned each day is added to the principal amount each day.) The "Exact/Simple" compounding option is the most conservative setting.
That is, using it will result in the lowest future value. In ordinary annuities, payments are made at the end of each period. With annuities due, they're made at the beginning of the period. ## Ordinary Annuity Or Deferred Annuity An Annuity Due indicates payments are received at the beginning of each period, whereas an Ordinary Annuity indicates payments are received at the end of each period. Plus, the calculator will calculate future value for either an ordinary annuity, or an annuity due, and display an annual growth chart so you can see the growth on a year-to-year basis. You can find the PV of an ordinary annuity with any calculator that has an exponential function, even regular (non-financial) calculators. This refers to the amount of money you deposit into an account each period. In the examples in this article, a person invested $4,000 per year for 8 years and deposited $500 per quarter for 10 years. The amount you deposit in a given period is called the periodic investment amount. The value of an annuity due at some future time evaluated at a given interest rate, assuming that compounding takes place more than one time in a year. The value of an annuity due at some future time evaluated at a given interest rate, assuming that compounding takes place one time in a year. ## Hp 10b Calculator This section covers the first two, which calculate future values for both ordinary annuities and annuities due. These formulas accommodate both simple and general annuities. The insurance agent won't need to break out the annuity formulas to make those calculations. They should be able to use an annuity table, especially if you're buying a fixed rate annuity. The table will reveal exactly how much the annuity is worth at each stage of the accumulation phase. I have a sum invested and I would like to know how much I can draw from that sum every month whilst keeping the inflation adjusted value of the sum the same.
Assuming you have some amount, call it "X", and you want to make withdrawals, set the Schedule Type to "savings". Create two rows, the first row as a deposit with value "X" and the second row with value "Y" for the number of withdrawals you expect. If Rounding is set to "Open Balance", the balance will go negative. You may be considering purchasing an annuity product and want to know how much your annuity would be worth at some point in the future based on what you can afford to pay into it each month. Number Of Years To Calculate Present Value – This is the number of years over which the annuity is expected to be paid or received. It is also entered as a negative number, since you paid it in. In this example, you can see that both the payment and the present value are entered as negative values. State and federal Structured Settlement Protection Acts require factoring companies to disclose important information to customers, including the discount rate, during the selling process. Turn your future payments into cash you can use right now. • An annuity creates a guaranteed income for your retirement. • Obviously this is one of the reasons 401ks are so popular. • Additionally, you can use a spreadsheet application such as Excel and its built-in financial formulas. • It's important to remember the time value of money when calculating the present value of an annuity because it incorporates inflation. • I have a sum invested and I would like to know how much I can draw from that sum every month whilst keeping the inflation adjusted value of the sum the same. Annuities help both the creditor and debtor have predictable cash flows, and it spreads payments of the investment out over time. You can also use the FV formula to calculate other annuities, such as a loan, where you know your fixed payments, the interest rate charged, and the number of payments.
## The Nitty Gritty: Future Value Of Annuity Formula Derivation In Sheets, amounts that you pay out are considered negative numbers and amounts you receive are positive amounts. Another difference is that the present value of an annuity due is higher than one for an ordinary annuity. It is a result of the time value of money principle, as annuity due payments are received earlier. An annuity is a series of equal payments in equal time periods. Usually, the time period is 1 year, which is why it is called an annuity, but the time period can be shorter, or even longer. Let's assume you want to sell five years' worth of payments, or $5,000, and the factoring company applies a 10 percent discount rate. Note that you do not end up with the same balance of $3,310 achieved under the ordinary annuity. The value of an annuity at some future time evaluated at a given interest rate, assuming that compounding takes place more than one time in a year (intra-year). The value of an annuity at some future time evaluated at a given interest rate, assuming that compounding takes place one time in a year. The starting value is the starting principal, which is the amount you initially invested in the annuity, plus any compounded interest from the beginning until the annuitization point. Let us take another example of Nixon's plans to accumulate enough money for his MBA. The FV function is a financial function that returns the future value of an investment, given periodic, constant payments with a constant interest rate. The PV function returns the present value of an investment. Have you ever had to make a series of fixed payments over a set period of time? If so, you're probably already familiar with the concept of annuities, even if you're not so clued up on the terminology. Simply put, annuities are recurring or ongoing payments over a period of time, like rent or payments for a car. There are a couple of different ways that you can measure the cost or value of these annuities.
Find out everything you need to know about calculating the present value of an annuity and the future value of an annuity with our helpful guide. This will return the formula shown on the top of the page. When the payments are all the same, this can be considered a geometric series with 1+r as the common ratio. Euler's number is a mathematical constant with many applications in science and finance, usually denoted by the lowercase letter e. Note that the one-cent difference in these results, $5,525.64 vs. $5,525.63, is due to rounding in the first calculation. The offers that appear in this table are from partnerships from which Investopedia receives compensation. This compensation may impact how and where listings appear. Investopedia does not include all offers available in the marketplace. ## Future Value Annuity Calculator To Calculate Future Value Of Ordinary Or Annuity Due The annual interest rate is in cell B3 and the number of periods per year is in cell B7. We need to get the interest rate per period by typing B3/B7. You can also click in cell B3, type a /, and then click the cursor in cell B7. An individual makes rental payments of $1,200 per month and wants to know the present value of their annual rentals over a 12-month period. The present value of an annuity due uses the basic present value concept for annuities, except we should discount cash flow to time zero. The first payment is received at the start of the first period, and thereafter, at the beginning of each subsequent period.
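The two future-value formulas discussed throughout, FV = PMT * ((1+r)^n - 1)/r for an ordinary annuity, with an extra factor of (1+r) for an annuity due, can be sketched in a few lines of Python (function names are mine):

```python
def fv_ordinary_annuity(pmt, rate, periods):
    """Future value of equal payments made at the END of each period."""
    return pmt * ((1 + rate) ** periods - 1) / rate

def fv_annuity_due(pmt, rate, periods):
    """Payments at the BEGINNING of each period: every deposit earns
    interest for one extra period, hence the (1 + rate) factor."""
    return fv_ordinary_annuity(pmt, rate, periods) * (1 + rate)

# $1,000 per year for 5 years at 5% -- the $5,525.63 figure quoted above.
print(round(fv_ordinary_annuity(1000, 0.05, 5), 2))  # 5525.63
print(round(fv_annuity_due(1000, 0.05, 5), 2))       # 5801.91
```

This also illustrates the point made earlier that an annuity due is worth more than an ordinary annuity with the same payments, rate, and term.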
2,378
11,319
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.203125
3
CC-MAIN-2023-23
latest
en
0.942343
http://en.academic.ru/dic.nsf/enwiki/61994
1,527,443,278,000,000,000
text/html
crawl-data/CC-MAIN-2018-22/segments/1526794869732.36/warc/CC-MAIN-20180527170428-20180527190428-00058.warc.gz
100,721,455
17,755
# Accretion disc An accretion disc (or accretion disk) is a structure (often a circumstellar disk) formed by diffuse material in orbital motion around a central body. The central body is typically either a young star, a protostar, a white dwarf, a neutron star, or a black hole. Instabilities within the disc redistribute angular momentum, causing material in the disc to spiral inward towards the central body. Gravitational energy released in that process is transformed into heat and emitted at the disk surface in the form of electromagnetic radiation. The frequency range of that radiation depends on the central object. Accretion discs of young stars and protostars radiate in the infrared, those around neutron stars and black holes in the X-ray part of the spectrum. Accretion disc physics In the 1940s, models were first derived from basic physical principles (Weizsäcker 1948, Z. Naturforsch. 3a, 524-539). In order to agree with observations, those models had to invoke a then-unknown mechanism for angular momentum redistribution. If matter is to fall inwards it must lose not only gravitational energy but also angular momentum. Since the total angular momentum of the disc is conserved, the angular momentum lost by the mass falling into the center has to be compensated by an angular momentum gain of the mass far from the center. In other words, angular momentum should be "transported" outwards for matter to accrete. According to the Rayleigh stability criterion, $\frac{\partial (R^{2}\Omega)}{\partial R} > 0$, where $\Omega$ represents the angular velocity of a fluid element and $R$ its distance to the rotation center, an accretion disc is expected to be a laminar flow. This prevents the existence of a hydrodynamic mechanism for angular momentum transport.
On one hand, it was clear that viscous stresses would eventually cause matter to heat up and radiate away part of the gravitational energy. On the other hand, viscosity itself was not enough to explain the transport of angular momentum to the exterior parts of the disc. Turbulence-enhanced viscosity was the mechanism thought to be responsible for such angular-momentum redistribution, although the origin of the turbulence itself was not well understood. The conventional phenomenological approach introduces an adjustable parameter $\alpha$ describing the effective increase of viscosity due to turbulent eddies within the disc (Shakura & Sunyaev 1973, Astron. Astrophys. 24, 337-355; Lynden-Bell & Pringle 1974, Mon. Not. R. Astron. Soc. 168, 603-637). In 1991, with the rediscovery of the magnetorotational instability (MRI), S. A. Balbus and J. F. Hawley established that a weakly magnetized disc accreting around a heavy, compact central object would be highly unstable, providing a direct mechanism for angular-momentum redistribution (Balbus & Hawley 1991, Astrophys. J. 376, 214-233). $\alpha$-Disc Model Shakura and Sunyaev (1973) proposed turbulence in the gas as the source of an increased viscosity.
Assuming subsonic turbulence and the disc height as an upper limit for the size of the eddies, the disc viscosity can be estimated as $\nu = \alpha c_{\rm s} H$, where $c_{\rm s}$ is the sound speed, $H$ is the disc height, and $\alpha$ is a free parameter between zero (no accretion) and approximately one. By using the equation of hydrostatic equilibrium, combined with conservation of angular momentum and assuming that the disc is thin, the equations of disc structure may be solved in terms of the $\alpha$ parameter. Many of the observables depend only weakly on $\alpha$, so this theory is predictive even though it has a free parameter. Using Kramers' law for the opacity it is found that $H = 1.7\times 10^{8}\,\alpha^{-1/10}\,\dot{M}_{16}^{3/20}\,m_{1}^{-3/8}\,R_{10}^{9/8}\,f^{3/5}\ {\rm cm}$, $T_{c} = 1.4\times 10^{4}\,\alpha^{-1/5}\,\dot{M}_{16}^{3/10}\,m_{1}^{1/4}\,R_{10}^{-3/4}\,f^{6/5}\ {\rm K}$, and $\rho = 3.1\times 10^{-8}\,\alpha^{-7/10}\,\dot{M}_{16}^{11/20}\,m_{1}^{5/8}\,R_{10}^{-15/8}\,f^{11/5}\ {\rm g\,cm^{-3}}$, where $T_{c}$ and $\rho$ are the mid-plane temperature and density respectively, $\dot{M}_{16}$ is the accretion rate in units of $10^{16}\ {\rm g\,s^{-1}}$, $m_{1}$ is the mass of the central accreting object in units of a solar mass, $R_{10}$ is the radius of a point in the disc in units of $10^{10}\ {\rm cm}$, and $f = \left[1 - \left(R_{*}/R\right)^{1/2}\right]^{1/4}$, where $R_{*}$ is the radius where angular momentum stops being transported inwards.
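The three Shakura-Sunyaev thin-disc scaling relations quoted here evaluate directly once the scaled inputs are known. A small Python sketch (function name and argument conventions are mine; all quantities in cgs units as in the text):

```python
def alpha_disc(alpha, mdot16, m1, r10, f=1.0):
    """Shakura-Sunyaev thin-disc scalings: returns disc height H [cm],
    mid-plane temperature T_c [K] and density rho [g/cm^3], given the
    viscosity parameter alpha, the accretion rate in units of 1e16 g/s,
    the central mass in solar masses, the radius in units of 1e10 cm,
    and the boundary factor f = [1 - (R_*/R)^(1/2)]^(1/4)."""
    h   = 1.7e8  * alpha**-0.1 * mdot16**0.15 * m1**-0.375 * r10**1.125  * f**0.6
    t_c = 1.4e4  * alpha**-0.2 * mdot16**0.3  * m1**0.25   * r10**-0.75  * f**1.2
    rho = 3.1e-8 * alpha**-0.7 * mdot16**0.55 * m1**0.625  * r10**-1.875 * f**2.2
    return h, t_c, rho

# With all scaled parameters equal to one, the prefactors come straight
# back out: H = 1.7e8 cm, T_c = 1.4e4 K, rho = 3.1e-8 g/cm^3.
print(alpha_disc(1.0, 1.0, 1.0, 1.0))
```

The exponents are the fractional powers from the text written as decimals (e.g. 3/20 = 0.15, -15/8 = -1.875), which keeps the sketch compact.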
This theory breaks down when gas pressure is not significant. For example, if the accretion rate approaches the Eddington limit, radiation pressure becomes important and the disc will "puff up" into a torus or some other three-dimensional solution such as an advection-dominated accretion flow (ADAF). Another extreme is the case of Saturn's rings, where the disc is so gas-poor that its angular momentum transport is dominated by solid-body collisions and disc-moon gravitational interactions. Magnetorotational Instability Balbus and Hawley (1991) proposed a mechanism which involves magnetic fields to generate the angular momentum transport. A simple system displaying this mechanism is a gas disc in the presence of a weak axial magnetic field. Two radially neighboring fluid elements will behave as two mass points connected by a massless spring, the spring tension playing the role of the magnetic tension. In a Keplerian disc the inner fluid element orbits more rapidly than the outer, causing the spring to stretch. The inner fluid element is then forced by the spring to slow down, correspondingly reducing its angular momentum and causing it to move to a lower orbit. The outer fluid element, being pulled forward, speeds up, increasing its angular momentum, and moves to a larger-radius orbit. The spring tension increases as the two fluid elements move further apart, and the process runs away (Balbus 2003, Annu. Rev. Astron. Astrophys. 41, 555-597). It can be shown that in the presence of such a spring-like tension the Rayleigh stability criterion is replaced by $\frac{\partial \Omega^{2}}{\partial R} > 0$. Most astrophysical discs do not meet this criterion and are therefore prone to the magnetorotational instability.
The magnetic fields present in astrophysical objects (required for the instability to occur) are believed to be generated via dynamo action (Rüdiger & Hollerbach 2004, The Magnetic Universe: Geophysical and Astrophysical Dynamo Theory, Wiley-VCH, ISBN 3-527-40409-0). Analytic models of sub-Eddington accretion discs (thin discs, ADAFs) When the accretion rate is sub-Eddington and the opacity very high, the standard thin accretion disc is formed. It is geometrically thin in the vertical direction (has a disc-like shape), and is made of a relatively cold gas, with a negligible radiation pressure. The gas goes down on very tight spirals, resembling almost circular, almost free (Keplerian) orbits. Thin discs are relatively luminous and they have thermal electromagnetic spectra, i.e. not much different from that of a sum of black bodies. Radiative cooling is very efficient in thin discs. The classic 1974 work by Shakura and Sunyaev on thin accretion discs is one of the most often quoted papers in modern astrophysics. Thin discs were independently worked out by Lynden-Bell, Pringle and Rees. Pringle contributed in the past thirty years many key results to accretion disc theory, and wrote the classic 1981 review that for many years was the main source of information about accretion discs, and is still very useful today. When the accretion rate is sub-Eddington and the opacity very low, an ADAF is formed. This type of accretion disc was prophesied in 1977 by Ichimaru in a paper that was ignored by almost everybody for twenty years. (Some elements of the ADAF model were present in the influential 1982 ion-tori paper by Rees, Phinney, Begelman and Blandford, however.)
Analytic models of super-Eddington accretion discs (slim discs, Polish doughnuts) The theory of highly super-Eddington black hole accretion, $\dot{M} \gg \dot{M}_{\rm Edd}$, was developed in the 1980s by Abramowicz, Jaroszynski, Paczynski, Sikora and others in terms of "Polish doughnuts" (the name was coined by Rees). Polish doughnuts are low-viscosity, optically thick, radiation-pressure-supported accretion discs cooled by advection. They are radiatively very inefficient. Polish doughnuts resemble in shape a fat torus (a doughnut) with two narrow funnels along the rotation axis. The funnels collimate the radiation into beams with highly super-Eddington luminosities. Slim discs (name coined by Kolakowska) have only moderately super-Eddington accretion rates, $\dot{M} \gtrsim \dot{M}_{\rm Edd}$, rather disc-like shapes, and almost thermal spectra. They are cooled by advection, and are radiatively inefficient. They were introduced by Abramowicz, Lasota, Czerny and Szuszkiewicz in 1988. Manifestations Accretion discs are a ubiquitous phenomenon in astrophysics; active galactic nuclei, protoplanetary discs, and gamma ray bursts all involve accretion discs. These discs very often give rise to jets coming from the vicinity of the central object. Jets are an efficient way for the star-disc system to shed angular momentum without losing too much mass. The most spectacular accretion discs found in nature are those of active galactic nuclei and of quasars, which are believed to be massive black holes at the center of galaxies. As matter spirals into a black hole, the intense gravitational gradient gives rise to intense frictional heating; the accretion disc of a black hole is hot enough to emit X-rays just outside of the event horizon. The large luminosity of quasars is believed to be a result of gas being accreted by supermassive black holes. This process can convert about 10 percent of the mass of an object into energy, as compared to around 0.5 percent for nuclear fusion processes.
In close binary systems the more massive primary component evolves faster and has already become a white dwarf, a neutron star, or a black hole by the time the less massive companion reaches the giant state and overflows its Roche lobe. A gas flow then develops from the companion star to the primary. Conservation of angular momentum prevents the gas from flowing straight from one star to the other, and an accretion disc forms instead. Accretion discs surrounding T Tauri stars or Herbig stars are called protoplanetary discs because they are thought to be the progenitors of planetary systems. The accreted gas in this case comes from the molecular cloud out of which the star has formed rather than from a companion star.

See also: * Accretion (science) * Circumstellar disk * Solar Nebula * Dynamo Theory * Planetary ring * Singularity

References:
* Frank, Juhan; King, Andrew; Raine, Derek: Accretion Power in Astrophysics. Cambridge University Press, 3rd edition, 2002. ISBN 0-521-62957-8.
* Krolik, Julian H.: Active Galactic Nuclei. Princeton University Press, 1999. ISBN 0-691-01151-6.

External links:
* [http://www.astro.virginia.edu/~jh8h/ Professor John F. Hawley homepage]
* [http://www.astro.virginia.edu/VITA/papers/nraf2/section1.html Nonradiative Black Hole Accretion]
* [http://www.scholarpedia.org/article/Accretion_Discs Accretion Discs on Scholarpedia]
* [http://www.newscientistspace.com/article.ns?id=mg19025574.600&feedId=online-news_rss20 Magnetic fields snare black holes' food] – New Scientist

Wikimedia Foundation. 2010.
4,019
15,609
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2018-22
latest
en
0.882142
http://primes.utm.edu/glossary/page.php?sort=SmithNumber
1,386,357,915,000,000,000
text/html
crawl-data/CC-MAIN-2013-48/segments/1386163052382/warc/CC-MAIN-20131204131732-00004-ip-10-33-133-15.ec2.internal.warc.gz
140,847,881
3,190
Smith number (another of the Prime Pages' Glossary entries) Glossary: Prime Pages: Top 5000: In 1982, when Albert Wilansky called his brother-in-law, he noticed that the phone number was composite and that the sum of the digits in the phone number equals the sum of the digits in its prime factors:

4937775 = 3 · 5 · 5 · 65837
4 + 9 + 3 + 7 + 7 + 7 + 5 = 3 + 5 + 5 + 6 + 5 + 8 + 3 + 7

Composite numbers with this property are now called Smith numbers, after the brother-in-law Wilansky was calling. Trivially, all prime numbers have this property, so they are excluded. The Smith numbers less than 1000 are: 4, 22, 27, 58, 85, 94, 121, 166, 202, 265, 274, 319, 346, 355, 378, 382, 391, 438, 454, 483, 517, 526, 535, 562, 576, 588, 627, 634, 636, 645, 648, 654, 663, 666, 690, 706, 728, 729, 762, 778, 825, 852, 861, 895, 913, 915, 922, 958, and 985. In 1987, Wayne McDaniel showed that there are infinitely many Smith numbers by constructing a sequence of them: if Rn is a repunit prime, then 1540·Rn is a Smith number (with digital sum 18+n). Note that 1540 is not the only possible multiplier here; others include: 1540, 1720, 2170, 2440, 5590, 6040, 7930, 8344, 8470, 8920, 23590, 24490, 25228, 29080, 31528, 31780, 33544, 34390, 35380, 39970, 40870, 42490, 42598, 43480, 44380, 45955, 46270, 46810, 46990, 47908, 48790, and 49960.

See also: Economical number. Related pages (outside of this work): Smith Numbers; Smith Numbers (Wikipedia).

References:
Lewis1986 K. Lewis, "Smith numbers: an infinite subset of N," Master's thesis, M.S., Eastern Kentucky University, (1994)
McDaniel87 W. McDaniel, "The existence of infinitely many k-Smith numbers," Fibonacci Quart., 25 (1987) 76--80. MR 88d:11007
McDaniel87b W. McDaniel, "Palindromic Smith numbers," J. Recreational Math., 19:1 (1987) 34--37.
OW83 S. Oltikar and K. Wayland, "Construction of Smith numbers," Math. Mag., 56 (1983) 36--37. MR 84e:10015
Wilansky82 A. Wilansky, "Smith numbers," Two-Year College Math. J., 13 (1982) 21.
Yates1991 S. Yates, "Welcome back, Dr. Matrix," J. Recreational Math., 23:1 (1991) 11--12.
Yates82 S. Yates, Repunits and repetends, Star Publishing Co., Inc., Boynton Beach, Florida, 1982. pp. vi+215, MR 83k:10014

Chris K. Caldwell © 1999-2013 (all rights reserved)
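The definition can be verified mechanically: factor the number by trial division, then compare its digit sum with the digit sum of all its prime factors counted with multiplicity. The sketch below is an illustration added to this entry (not code from the Prime Pages):

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def prime_factors(n):
    """Prime factorization of n by trial division, with multiplicity."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_smith(n):
    """True if n is composite and digit_sum(n) equals the combined digit
    sum of its prime factors (primes are excluded by definition)."""
    factors = prime_factors(n)
    if len(factors) < 2:          # primes (and 0, 1) are not Smith numbers
        return False
    return digit_sum(n) == sum(digit_sum(f) for f in factors)
```

Wilansky's phone number checks out, and the values below 100 that the function accepts are exactly 4, 22, 27, 58, 85 and 94, matching the list above.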
817
2,231
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.453125
3
CC-MAIN-2013-48
latest
en
0.82183
https://studyadda.com/sample-papers/jee-main-sample-paper-26_q41/280/300372
1,642,378,693,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320300253.51/warc/CC-MAIN-20220117000754-20220117030754-00433.warc.gz
628,068,332
21,262
A circular coil of 20 turns and radius 10 cm is placed in a uniform magnetic field of 0.10 T normal to the plane of the coil. If the current in the coil is 5 A, then the torque acting on the coil will be: A) $31.4$ N-m B) $3.14$ N-m C) $0.314$ N-m D) zero

The torque acting on a loop placed in a magnetic field B is given by $\tau \,=nBIA\,\sin \theta$, where A is the area of the loop, I the current through it, n the number of turns, and $\theta$ the angle which the axis of the loop makes with the magnetic field B. Since the field is normal to the plane of the coil, it is parallel to the axis of the coil; hence $\theta =0{}^\circ$ and $\sin 0{}^\circ =0$, so $\tau \,=0$. The answer is D.
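The result is easy to double-check numerically. The sketch below was written for this note (the inputs come from the problem statement); it evaluates τ = nBIA sin θ for the stated geometry (θ = 0°, field along the coil's axis) and also for a hypothetical field lying in the plane of the coil (θ = 90°), which is where option C's 0.314 N-m would come from:

```python
import math

def coil_torque(n_turns, B, I, radius, theta_deg):
    """Torque on a flat circular coil: tau = n * B * I * A * sin(theta),
    where theta is the angle between the coil's axis and the field."""
    area = math.pi * radius ** 2
    return n_turns * B * I * area * math.sin(math.radians(theta_deg))

tau_axis    = coil_torque(20, 0.10, 5.0, 0.10, 0)    # field normal to the plane: 0 N-m
tau_inplane = coil_torque(20, 0.10, 5.0, 0.10, 90)   # field in the plane: ~0.314 N-m
```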
213
669
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.078125
3
CC-MAIN-2022-05
latest
en
0.809635
https://artofproblemsolving.com/wiki/index.php?title=2007_AMC_12A_Problems/Problem_25&curid=5304&diff=135349&oldid=125282
1,606,239,468,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141176922.14/warc/CC-MAIN-20201124170142-20201124200142-00071.warc.gz
197,917,055
17,045
2007 AMC 12A Problems/Problem 25

Problem

Call a set of integers spacy if it contains no more than one out of any three consecutive integers. How many subsets of $\{1,2,3,\ldots,12\},$ including the empty set, are spacy?

$\mathrm{(A)}\ 121 \qquad \mathrm{(B)}\ 123 \qquad \mathrm{(C)}\ 125 \qquad \mathrm{(D)}\ 127 \qquad \mathrm{(E)}\ 129$

Solution 1

Let $S_{n}$ denote the number of spacy subsets of $\{ 1, 2, \ldots, n \}$. We have $S_{0} = 1, S_{1} = 2, S_{2} = 3$. The spacy subsets of $\{ 1, 2, \ldots, n + 1 \}$ can be divided into two groups:

• $A =$ those not containing $n + 1$. Clearly $|A|=S_{n}$.
• $B =$ those containing $n + 1$. We have $|B|=S_{n - 2}$, since removing $n + 1$ from any set in $B$ produces a spacy set with all elements at most equal to $n - 2,$ and each such spacy set can be constructed from exactly one spacy set in $B$.

Hence, $S_{n + 1} = S_{n} + S_{n - 2}$. From this recursion, we find that

n:      0  1  2  3  4  5  6   7   8   9   10  11  12
S(n):   1  2  3  4  6  9  13  19  28  41  60  88  129

And so the answer is $\mathrm{(E)}\ 129$.

Solution 2

Since the chosen elements must pairwise differ by at least three, a divider counting argument can be used. From the set $\{1,2,3,4,5,6,7,8,9,10,11,12\}$ we choose at most four numbers. Let those numbers be represented by balls. Between each pair of consecutive balls there are at least two dividers. So, for example, o | | o | | o | | o | | represents $\{1,4,7,10\}$. For subsets of size $k$ there must be $2(k - 1)$ dividers between the balls, leaving $12 - k - 2(k - 1) = 14 - 3k$ dividers to be placed in the $k + 1$ gaps before, between, and after the balls. The number of ways this can be done is $\binom{(14 - 3k) + (k + 1) - 1}{k} = \binom{14 - 2k}{k}$. Therefore, the number of spacy subsets is $\binom{6}{4} + \binom{8}{3} + \binom{10}{2} + \binom{12}{1} + \binom{14}{0} = \boxed{129}$.

Solution 3

A shifting argument is also possible, and is similar in spirit to Solution 2.
Clearly we can have at most $4$ elements. Given any arrangement, we subtract $2i-2$ from the $i$-th element of our subset, when the elements are arranged in increasing order. This creates a bijection with the size-$k$ subsets of the set of the first $14-2k$ positive integers. For instance, the arrangement o | | o | | o | | | o | corresponds to the arrangement o o o | o |. Notice that there is no longer any restriction on consecutive numbers. Therefore, we can easily plug in the possible integers $0, 1, 2, 3, 4$ for $k$:

${14 \choose 0} + {12 \choose 1} + {10 \choose 2} + {8 \choose 3} + {6 \choose 4} = \boxed{129}$

In general, the number of subsets of a set with $n$ elements that contain no more than one out of any $k$ consecutive integers is $\sum^{\lfloor{\frac{n}{k}}\rfloor}_{i=0}{{n-(k-1)(i-1) \choose i}}$.

Solution 4 (Casework)

Let us consider each size of subset individually. Since each integer in the subset must be at least $3$ away from any other integer in the subset, the largest spacy subset contains $4$ elements. First, it is clear that there is $1$ spacy set with $0$ elements in it, the empty set. Next, there are $12$ spacy subsets with $1$ element in them, one for each integer $1$ through $12$. Now, let us consider the spacy subsets with $2$ elements in them. If the smaller integer is $1$, the larger integer is any of the $9$ integers from $4$ to $12$. If the smaller integer is $2$, the larger integer is any of the $8$ integers from $5$ to $12$. This continues, up to a smaller integer of $9$ and $1$ choice for the larger integer, $12$. This means that there are $9 + 8 + \cdots + 1 = 45$ spacy subsets with $2$ elements. For spacy subsets with $3$ elements, we first consider the middle integer. The smallest such integer is $4$, and it allows for $1$ possible value for the smaller integer ($1$) and $6$ possible values for the larger integer ($7$ through $12$), for a total of $1 \cdot 6 = 6$ possible subsets.
The next middle integer, $5$, allows for $2$ smaller integers and $5$ larger integers, and this pattern continues up until the middle integer of $9$, which has $6$ values for the smaller integer and $1$ value for the larger integer. This means that there are $1 \cdot 6 + 2 \cdot 5 + \cdots + 6 \cdot 1 = 56$ spacy subsets with $3$ elements. Lastly, there are $3$ main categories for spacy subsets with $4$ elements, defined by the difference between their smallest and largest values. The difference ranges from $9$ to $11$. If it is $9$, there is only $1$ set of places to put the two middle values ($n + 3$ and $n + 6$, where $n$ is the smallest value). Since there are $3$ possible sets of smallest and largest values, there are $1 \cdot 3 = 3$ sets in this category. If the difference is $10$, there are now $3$ sets of places to put the two middle values ($n + 3$ and $n + 6$ or $7$, and $n + 4$ and $n + 7$). There are $2$ possible sets of smallest and largest values, so there are $3 \cdot 2 = 6$ sets in this category. Finally, if the difference is $11$, there are $6$ possible sets of places to put the two middle values ($n + 3$ and $n + 6$, $7$, or $8$, $n + 4$ and $n + 7$ or $8$, and $n + 5$ and $n + 8$) and one possible set of smallest and largest values, meaning that there are $6 \cdot 1 = 6$ sets in this category. Adding them up, there are $3 + 6 + 6 = 15$ spacy subsets with $4$ elements. Adding these all up, we have a total of $1 + 12 + 45 + 56 + 15 = \boxed{\mathrm{(E)}\ 129}$ spacy subsets. ~emerald_block
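Both the recursion of Solution 1 and the closed-form counts of Solutions 2–4 are easy to cross-check by machine. The sketch below is an illustration added here (not part of the original solutions); it computes $S_n$ from $S_{n+1} = S_n + S_{n-2}$ and confirms the value against a brute-force enumeration of all $2^{12}$ subsets:

```python
from itertools import combinations

def spacy_count_recursive(n):
    """S[m] = number of spacy subsets of {1..m}, via S[m] = S[m-1] + S[m-3]."""
    S = [1, 2, 3]                      # S[0], S[1], S[2]
    for m in range(3, n + 1):
        S.append(S[m - 1] + S[m - 3])
    return S[n]

def spacy_count_bruteforce(n):
    """Count subsets of {1..n} whose elements pairwise differ by at least 3."""
    count = 0
    for r in range(n + 1):
        for c in combinations(range(1, n + 1), r):
            if all(b - a >= 3 for a, b in zip(c, c[1:])):
                count += 1
    return count
```

Both functions return 129 for n = 12, matching answer (E).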
1,779
5,469
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 123, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.4375
4
CC-MAIN-2020-50
latest
en
0.791845
https://egvideos.com/video/oklahoma/grade-1/math/1.g.3/partitioning-into-equal-shares
1,675,397,093,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00347.warc.gz
238,570,224
9,803
# Oklahoma - Grade 1 - Math - Geometry - Partitioning Into Equal Shares - 1.G.3 ### Description Partition circles and rectangles into two and four equal shares, describe the shares using the words halves, fourths, and quarters, and use the phrases half of, fourth of, and quarter of. Describe the whole as two of, or four of the shares. Understand for these examples that decomposing into more equal shares creates smaller shares. • State - Oklahoma • Standard ID - 1.G.3 • Subjects - Math Common Core • Math • Geometry ## More Oklahoma Topics Given a two-digit number, mentally find 10 more or 10 less than the number, without having to count; explain the reasoning used. Understand that the two digits of a two-digit number represent amounts of tens and ones. Understand the following as special cases: A. 10 can be thought of as a bundle of ten ones — called a “ten.” B. The numbers from 11 to 19 are composed of a ten and one, two, three, four, five, six, seven, eight, or nine ones. C. The numbers 10, 20, 30, 40, 50, 60, 70, 80, 90 refer to one, two, three, four, five, six, seven, eight, or nine tens (and 0 ones). Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem. Express the length of an object as a whole number of length units, by laying multiple copies of a shorter object (the length unit) end to end; understand that the length measurement of an object is the number of same-size length units that span it with no gaps or overlaps. Limit to contexts where the object being measured is spanned by a whole number of length units with no gaps or overlaps. 
Distinguish between defining attributes (e.g., triangles are closed and three-sided) versus non-defining attributes (e.g., color, orientation, overall size); build and draw shapes to possess defining attributes.
475
2,039
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.5625
5
CC-MAIN-2023-06
latest
en
0.90698
https://www.go4expert.com/articles/converting-2-dimemsional-array-3-t9341/
1,561,433,951,000,000,000
text/html
crawl-data/CC-MAIN-2019-26/segments/1560627999787.0/warc/CC-MAIN-20190625031825-20190625053825-00499.warc.gz
756,028,368
11,849
# Converting a 2 dimensional array to a 3 dimensional array and vice versa in C++

Discussion in 'C++' started by bashamsc, Mar 14, 2008.

### Introduction

This article discusses two-dimensional to three-dimensional array conversion, and vice versa, in C++. The following program converts a two-dimensional array to a three-dimensional array using TwoDimToThree() and converts a three-dimensional array to a two-dimensional array using ThreeDimToTwo().

Code:
```
#include <iostream>
using namespace std;

void TwoDimToThree();
void ThreeDimToTwo();

int main()
{
    cout << "If you want to convert a 2d array to a 3d array enter 1" << endl;
    cout << endl << "If you want to convert a 3d array to a 2d array enter 2" << endl;
    int n;
    cin >> n;
    if (n == 1)
        TwoDimToThree();
    else
        ThreeDimToTwo();
    return 0;
}

// Replicate a 2d array a[Row][Col] as each "page" of a 3d array b[Row][Row][Col].
void TwoDimToThree()
{
    cout << "Enter the no. of rows and columns for the 2d array" << endl;
    int Row, Col;
    cin >> Row >> Col;
    cout << "Enter " << Row * Col << " elements for the 2d array" << endl;
    // Variable-length arrays are a compiler extension (g++/clang);
    // use std::vector for strictly standard C++.
    int i, j, k, a[Row][Col], b[Row][Row][Col];
    for (i = 0; i < Row; i++)
        for (j = 0; j < Col; j++)
            cin >> a[i][j];
    cout << "The elements of the 2d array are " << endl;
    for (i = 0; i < Row; i++)
        for (j = 0; j < Col; j++)
            cout << "a[" << i << "][" << j << "] = " << a[i][j] << endl;
    cout << "The elements of the 3d array are " << endl;
    for (i = 0; i < Row; i++)
        for (j = 0; j < Row; j++)
            for (k = 0; k < Col; k++)
            {
                b[i][j][k] = a[j][k];
                cout << "b[" << i << "][" << j << "][" << k << "] = " << b[i][j][k] << endl;
            }
}

// Flatten a 3d array a[Row][Col][Len] into a 2d array b[Row][Col*Len],
// one Col x Len slice per row.
void ThreeDimToTwo()
{
    cout << "Enter the no. of rows, columns and length of the 3d array" << endl;
    int Row, Col, Len;
    cin >> Row >> Col >> Len;
    cout << "Enter " << Row * Col * Len << " elements" << endl;
    int a[Row][Col][Len], b[Row][Col * Len];
    int i, j, k;
    for (i = 0; i < Row; i++)
        for (j = 0; j < Col; j++)
            for (k = 0; k < Len; k++)
                cin >> a[i][j][k];
    cout << "Elements of the three dimensional array are " << endl;
    for (i = 0; i < Row; i++)
        for (j = 0; j < Col; j++)
            for (k = 0; k < Len; k++)
            {
                b[i][j * Len + k] = a[i][j][k];
                cout << "a[" << i << "][" << j << "][" << k << "] = " << a[i][j][k] << endl;
            }
    cout << endl << "Elements of the two dimensional array are " << endl;
    for (i = 0; i < Row; i++)
        for (j = 0; j < Col * Len; j++)
            cout << "b[" << i << "][" << j << "] = " << b[i][j] << endl;
}
```
823
2,437
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2019-26
longest
en
0.426811
https://softmath.com/math-book-answers/adding-exponents/grade-10-practice-balancing.html
1,726,000,505,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651318.34/warc/CC-MAIN-20240910192923-20240910222923-00106.warc.gz
511,222,340
9,165
## What our customers say... Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences: I can no longer think of math without the Algebrator. It is so easy to get spoiled you enter a problem and here comes the solution. Recommended! Tina Washington, TX The complete explanations, a practical approach, low price and good assignments make it my best professional tutor. Tara Fharreid, CA We bought it for our daughter and it seems to be helping her a whole bunch. It was a life saver. Daniel Thompson, CA ## Search phrases used on 2012-05-08: Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them? • least common multiple of variable • adding and subtracting positive and negative decimals worksheet • simplify algebra equations over x • free online test paper • 8th grade math proportion word problems worksheet • "math problems" + expressions • ti 84 find x value given y value • adding and subtracting double digit integers • Week 1 of Mat 116 - Algebra 1A • interactive line graphs 6th grade • log 7 ti 89 • how to solve algebraic fraction equations • math test on decimals grade 8 • year 8 math test sample(print) • teaching factoring of quadratics ppt • converting mixed numbers to a percent • algebra 1 practice workbook with examples Mcdougal Littell answers • how to Solve Algebraic Equations • free printables multipy and divide work sheets • vertex formula TI-84 • solving radicals on the ti 83 • turn decimal into fraction generator • Pre-algebra with pizzazz worksheet answers • how to factor cubed trinomials • how to take inverse of log on calculator • solving non linear equations using matlab • solving eqations • integer worksheets • how to graph standard form on ti-84 • algebra 2 square root calculator • Write net Bronsted equations and determine the equilibrium constants for 
the acid-base reactions that occur when aqueous solutions of the following are mixed. • free fractions worksheets with show your work • ti-89 laplace • holt algebra 1 + flash cards • +worksheets solving 2 step equations • factor cube calculator • algebra yr. 10 • long division to thousandths place worksheets • ASSOCIATIVE PROPERTY WORKSHEET • finding the common denominator • Show Solving Algebra • Step by step LU Decomposition using TI-89 • simultaneous equation solver • fun algebra worksheets • Algebra trivia • free printable math sheets for integers • scale factor grade 7 math test • ti 83 test if a number is a square root • sum of integers in range • holt algebra 1 + flash cards + chapters 2-5 • solving Fraction inequalities free worksheets • mcdougal littell science north carolina grade 8 • properties of addition free worksheets • free printable worksheets for 10th grade • solving multiple functions in matlab • "algebra and trigonometry structure and method book 2 answers" • calculate bisection using java • multivariable multipication • printable maths year 8 questions • holt science and technology chapter 3 test "seventh grade" book m • Factoring Trinomials Calculator • SIMULTANeous equations EXCEL • algebra worksheets like terms • sample kumon problems • completing the square calculator • root of square equations • math and "OPERATIONS RULES" and quiz • lessons on solving equations by adding, subtracting, multiplying, dividing • calculator for factoring group • algebra ks3 yr 8 • how to find slope calculator Prev Next
832
3,557
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.59375
3
CC-MAIN-2024-38
latest
en
0.840538
http://lwn.net/Articles/238463/
1,369,437,633,000,000,000
text/html
crawl-data/CC-MAIN-2013-20/segments/1368705284037/warc/CC-MAIN-20130516115444-00017-ip-10-60-113-184.ec2.internal.warc.gz
164,290,512
5,039
# Strong correlation?

Posted Jun 15, 2007 19:08 UTC (Fri) by vaurora (guest, #38407)
In reply to: Strong correlation? by joern
Parent article: KHB: Real-world disk failure rates: surprises, surprises, and more surprises

Perhaps my wording was influenced by my personal disk failure rate of 100% within one week of the first SMART error - but I stand by it. :) The story is slightly more complex than 7% -> 15-30% annual failure rate. The disk failure rates in the study from CMU averaged a yearly failure rate of 3%, varying from 0.5% to 13.5% (after throwing out a 7-year-old batch of disks with a failure rate of 24%). The failure rate of the Google disks varied from 1.7% to 8.6%, depending on the age of the disks. I can't find the average in the paper, but eyeballing it and doing the math gives me 6.34% overall. So we can call it 3-7% average. More importantly, the failure rate of a disk with no errors is lower than the overall average of 3-7% a year. Figures 6 and 7 in the Google paper show the different failure probabilities for disks with and without scan errors. A disk less than 6 months old with no scan errors has only a 2% probability of failure, while a disk with one or more scan errors has a 33% failure probability. Beginning on page 8 of the Google paper, the authors break down the consequences of scan errors based on time since last error, age of disk, and number of errors. For example, a single scan error on a disk older than 2 years results in a nearly 40% probability of failure in the next 6 months. Take a closer look at those graphs; there's more data than I could summarize in the article. Finally, whether you consider a change in failure rate even from 2% to 33% significant really depends on how much you value your data and how hard it is to get it back.
For the average user, the answers are "A lot," and "Nearly impossible." Raise your hand if you've backed up in the last week. Strong correlation? Posted Jun 15, 2007 19:26 UTC (Fri) by joern (subscriber, #22392) [Link] > Finally, whether you consider a change in failure rate even from 2% to 33% significant really depends on how much you value your data and how hard it is to get it back. For the average user, the answers are "A lot," and "Nearly impossible." Raise your hand if you've backed up in the last week. I've done that today. Admittedly, having shell access to four separate servers in different locations is uncommon for normal users. In the end it is a matter of definition where a weak correlation ends and a strong correlation starts. I wouldn't speak of a strong correlation if I'd lose money when betting on the correlated event. So for me it would have to be x% -> 50+%. Strong correlation? Posted Jun 15, 2007 21:26 UTC (Fri) by giraffedata (subscriber, #1954) [Link] In an operation of the size these papers talk about, gut feelings about "strong" and "weak" correlation and the pain of data loss aren't even significant. It's pure numbers. Somebody somewhere has decided how much a data loss costs, and probability, repair costs, and interest rates fill out the equation. Sometimes the cost of data loss is really simple. I had a telephone company customer years ago who said an unreadable tape cost him exactly $16,000. The tapes contained billing records of calls; without the record, the company simply couldn't bill for the call. Another, arguing against backing up his product source code, showed the cost of hiring engineers to rewrite a megabyte of code from scratch. In the Google situation, I believe single drive data loss is virtually cost-free. That's because of all that replication and backup. In that situation, the cost of the failure is just the cost of service interruption (or degradation) and drive replacement.
And since such interruptions and replacements happen regularly, the only question is whether it's cheaper to replace a drive earlier and thereby suffer the interruption later. Anyway, my point is that with all the different ways disk drives are used, I'm sure there are plenty where replacing the drive when its expected failure rate jumps to 30% is wise and plenty where doing so at 90% is unwise. Strong correlation? Posted Jun 16, 2007 2:16 UTC (Sat) by vaurora (guest, #38407) [Link] This is an excellent point - the utility of failure probability data depends on the use case. Google in general has all data replicated a minimum of three times (see the GoogleFS paper) and as a result, it is not cost-effective to replace a drive before it actually fails in practice in most situations. For any sort of professional operation with regular backups and/or replication, this data is not particularly useful except as input into how many thousands of new hard drives to order next month. But for an individual user without automated backup systems, it can provide a valuable hint on the utility of conducting that long-delayed manual backup within the next few hours.
1,156
5,086
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.9375
3
CC-MAIN-2013-20
latest
en
0.942814
https://www.teacherspayteachers.com/Product/Graphing-Questions-Answers-wGraphics-Pocket-Chart-Set-1-CCSS-274714
1,628,051,054,000,000,000
text/html
crawl-data/CC-MAIN-2021-31/segments/1627046154500.32/warc/CC-MAIN-20210804013942-20210804043942-00444.warc.gz
1,011,483,923
35,779
# Graphing Questions & Answers w/Graphics Pocket Chart - Set 1 - CCSS

Grades PreK–2. Subject: Math (Geometry). Format: PDF, 35 pages. Price: $3.00 (list price $3.75).

### Description

★ 10 sets (illustrated graphing questions and possible illustrated responses to help young learners or make it more appealing for older students)
★ Graphing for Pocket Chart - 10 Qs & Answers w/Graphics - Set 1 - CCSS
★ 35 page download - each graphing thumbnail is actually 6 full pages…
★ Graphing can help your students to get to know one another while working on math concepts at the same time when we go back to school.
★ Graph the 10 questions listed below, complete with colorful graphics for the graphing question and colorful graphics for the possible graphing responses.
★ Size of each graphing pocket chart card is 2" x 11". Leave graphing questions as a rectangle/square, or slice into strips (2" x 11") and place along one row of the pocket chart. Just cut on the paper cutter.
★ Each graphing set can be kept in a zip lock bag.
★ These graphing questions can be used year after year… not just for back to school graphing. ★ You can simply have square cards with your students' names on them, or photos of each student to use on the graphs as they place their cards next to the response of their choice. ★ Graphing is a great way to get to know one another when students come back to school. Common Core State Standard: Represent and interpret data. CCSS.Math.Content.1.MD.C.4 Organize, represent, and interpret data with up to three categories; ask and answer questions about the total number of data points, how many in each category, and how many more or less are in one category than in another. *How do you get to school? *Where do you like to go in Summer? *How many letters are in your name? *What is your favorite ice cream? *What sport do you like best? *What do you like best in school? *What kind of weather is best? *10 generic graphing questions for pocket chart. I have ANOTHER COMPLETE 10 question set: Back to School Graphing for Pocket Chart-Set 2 OTHER BACK TO SCHOOL PRODUCTS INCLUDE: (Just click on any of the underlined links to view the product in more detail. Thank you…) Back to School Bundle for Primary Grades This Back to School Bundle contains 6 Back to School products to help with your back to school classroom set up and planning. 2 of the products are Back to School classroom centers: A Back To School Shared Reading Songs packet and a Back to School Graphing Pocket Chart Center. Back to School Name Labels Value Pack Back to School Name Labels Value Pack- 10 Sets for the Price of 5. a \$10.00 Value for just \$5.00. 10 Different Sets. Simply highlight the name Elizabeth's, and type in the name you need on each label, and hit the "return" key. Names will automatically be centered.8 Large size: Run on plain white paper or oak tag and cut up OR run on Avery 2" x 4" ink jet labels. 2 Small size Avery 1" by 4". Two sets in Black and White for students to color to conserve ink. 
Back to School Door Hangers Back to School Door Hangers: We are at... -For Primary Classrooms - (one for intermediate in my product listings) 10 Door Hangers to tell everyone where your class is at any given moment now that you are back to school. Just cut on the red lines, and the door hangers will be ready to slip onto a door knob, or just tape onto your door. Back to School First Name Bingo! Back To School First Name Bingo - 8 page file contains 3 versions of the Back to School First Name Bingo game board, 25, 20 and 16 students, calling card template and directions ready for you to edit with your students' names. Back to School Graphing Questions and Responses Back to School Graphing Cards for Pocket Chart - 10 sets - 35 Page Download - Each thumbnail is actually 6 full pages... Help your students to get to know one another when they come back to school while working on math concepts at the same time. Back to School “My Work Cards” How Am I Working? Back to School - My Work Cards - "How Am I Working?"-2 styles - Flat or Tented - Get your students into good work habits early in the school year with these classroom management cards. These simple cards can alert teachers to those students who need help while working. The traffic light symbol, green thumbs up, red thumbs down and colored question marks should be easily understood b Total Pages 35 pages N/A Teaching Duration Lifelong tool Report this Resource to TpT Reported resources will be reviewed by our team. Report this resource to let us know if this resource violates TpT’s content guidelines. ### Standards to see state-specific standards (only available in the US). Organize, represent, and interpret data with up to three categories; ask and answer questions about the total number of data points, how many in each category, and how many more or less are in one category than in another.
1,355
5,828
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.109375
3
CC-MAIN-2021-31
longest
en
0.858099
http://www.creativenumerology.com/destiny-path-number/
1,501,194,880,000,000,000
text/html
crawl-data/CC-MAIN-2017-30/segments/1500549429548.55/warc/CC-MAIN-20170727222533-20170728002533-00205.warc.gz
400,032,078
17,033
# Destiny Path ### Your birthday is so much more than just a measure of your age. Your DESTINY PATH is derived from all the numbers in your DATE OF BIRTH. It represents the energy into which you were born, and the nature of your individual journey. Your Destiny Path contains the essential qualities you need to fulfill your own desires and intentions. This is the main path you will travel in this lifetime, and is the strongest of all your personal numbers. It helps you to recognize your true self – and feel comfortable simply being YOU. This is invaluable knowledge on the path to self-acceptance! ### How to calculate your Destiny Path Number EXAMPLE: If you were born on February 26, 1989, add as follows: Month: February = 2 Day: 2+6 = 8 Year: 1+9+8+9 = 27, and 2+7 = 9 (Be sure to add all four numbers of the year. Do NOT abbreviate to ’89) Now add the subtotals together. If you don’t immediately get a single digit, keep adding until you do: 2+8+9 = 19, then 1+9 = 10, then 1+0 = 1. In this instance, your Destiny Path would be 1. It is also a good idea to familiarize yourself with the destiny numbers of other people in your life. Relationships of every kind can benefit by the acceptance of each other’s individuality.
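The reduction rule above (add the digits, and keep reducing until a single digit remains) is easy to automate. A minimal Python sketch, with function names of my own choosing:

```python
def digit_sum(n: int) -> int:
    """Repeatedly add the digits of n until a single digit remains."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def destiny_path(month: int, day: int, year: int) -> int:
    # Reduce month, day, and the full four-digit year separately,
    # then reduce the total of the three subtotals, as the article does.
    return digit_sum(digit_sum(month) + digit_sum(day) + digit_sum(year))

# February 26, 1989: month 2, day 2+6=8, year 1+9+8+9=27 -> 9; 2+8+9=19 -> 1
print(destiny_path(2, 26, 1989))  # -> 1
```

Reducing each subtotal first gives the same final digit as summing everything at once, since repeated digit sums preserve the value modulo 9.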
318
1,208
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.171875
3
CC-MAIN-2017-30
latest
en
0.884175
https://in.mathworks.com/matlabcentral/answers/409493-how-to-fit-a-custom-equation
1,591,383,553,000,000,000
text/html
crawl-data/CC-MAIN-2020-24/segments/1590348502204.93/warc/CC-MAIN-20200605174158-20200605204158-00419.warc.gz
381,991,977
24,765
# How to fit a custom equation? 323 views (last 30 days) madhuri dubey on 9 Jul 2018 Answered: Alex Sha on 18 Feb 2020 My equation is y = a*(1-exp(-b*(c+x))), with x = [0,80,100,120,150] and y = [2195,4265,4824,5143.5,5329]. When I solve it in MATLAB, I am not getting a proper fit; in addition, SSE = 6.5196e+05 and R-square = 0.899. Although the R-square value is acceptable, the SSE is too high. Therefore kindly help to get minimum SSE. Further, I have tried the Curve Fitting Tool but I got the same thing. Star Strider on 9 Jul 2018 I get good results with this:
yf = @(b,x) b(1).*(1-exp(-b(2)*(b(3)+x)));
B0 = [5000; 0.01; 10];
[Bm,normresm] = fminsearch(@(b) norm(y - yf(b,x)), B0);
SSE = sum((y - yf(Bm,x)).^2)
which returns Bm = [6677.76372320411; 0.0084077646869843; 47.1622210493944], normresm = 195.173589996072, SSE = 38092.7302319547. Star Strider on 10 Jul 2018 As always, my pleasure. I would be tempted to use polyfit to get initial estimates of ‘a’, ‘c’, and ‘d’ (estimated as [-0.09, 34, 2200] when I did it), then let your nonlinear parameter estimation routine (similar to my code) estimate them and ‘b’ (that I would initially estimate as 10). I usually create my own initial population for ga. I would be tempted here to use a matrix of 500 individuals, defined as: init_pop = randi(5000, 500, 4); using the appropriate options function (linked to in the See Also section in the ga documentation) to define it as such. The ga function is efficient; however, since it has to search the entire parameter space, it will take time for it to converge. Note that you have 5 data pairs and you are now estimating 4 parameters. madhuri dubey on 11 Jul 2018 When I use polyfit to get initial estimates of ‘a’, ‘c’, and ‘d’, I got [-0.0001, 0.0346, 2.1807]. Why is there a difference in constant values for the same data? Star Strider on 11 Jul 2018 There isn’t. You’re ignoring the constant multiplication factor 1.0E+03.
The full result: p = 1.0e+03 * -0.000088159928538 0.034559355475118 2.180742845451099 Image Analyst on 11 Jul 2018 For what it's worth, I used fitnlm() (Fit a non-linear model) because that's the function I'm more familiar with. You can see it gives the same results as Star's method in the image below. I'm attaching the full demo to determine the coefficients and plot the figure. Alex Sha on 18 Feb 2020 If don't care the type of fitting function, try the below function, much simple but with much better result: y = b1+b2*x^1.5+b3*x^3; Root of Mean Square Error (RMSE): 18.0563068929128 Sum of Squared Residual: 1630.15109305524 Correlation Coef. (R): 0.999873897456616 R-Square: 0.999747810815083 Determination Coef. (DC): 0.999747810815083 Chi-Square: 0.165495427247465 F-Statistic: 3964.27710070773 Parameter Best Estimate ---------- ------------- b1 2195.84396843687 b2 3.66989234203779 b3 -0.00107101963512847
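Not part of the thread: the accepted answer's SSE figure can be reproduced outside MATLAB by evaluating the reported parameters directly. A short Python sketch using only the standard library (no fitting, just the residual computation):

```python
import math

# Data from the question
x = [0, 80, 100, 120, 150]
y = [2195, 4265, 4824, 5143.5, 5329]

def model(a, b, c, xi):
    """y = a*(1 - exp(-b*(c + xi))), the saturating model being fitted."""
    return a * (1.0 - math.exp(-b * (c + xi)))

def sse(a, b, c):
    """Sum of squared residuals for the given parameters."""
    return sum((yi - model(a, b, c, xi)) ** 2 for xi, yi in zip(x, y))

# Parameter values reported by fminsearch in the accepted answer
a, b, c = 6677.76372320411, 0.0084077646869843, 47.1622210493944
print(round(sse(a, b, c), 2))  # -> 38092.73, matching the SSE in the answer
```

As a sanity check, the reported residual norm squared, 195.173589996072**2, equals the same SSE.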
912
2,812
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.265625
3
CC-MAIN-2020-24
latest
en
0.826661
https://www.ehow.co.uk/how_2214583_calculate-spacing-deck-joists.html
1,606,239,893,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141176922.14/warc/CC-MAIN-20201124170142-20201124200142-00364.warc.gz
679,491,917
34,575
# How to calculate spacing on deck joists Ryan McVay/Photodisc/Getty Images The spacing between deck joists is commonly referred to as joist span in professional lingo. It doesn't matter what you call it -- it's still a critical calculation when it comes to building your new deck. Though it may seem complicated to the first-time deck builder, the way to calculate the spacing on deck joists is based on common sense and a knowledge of your materials. Determine what kind of wood you will be using for your joists. Different woods expand and contract at different rates and will vary in softness, which will affect how much weight the wood can bear before beginning to sag. Softer woods like cedar will give more than Brazilian Ipê or ironwood, which are harder and less flexible. Determine the size of the timber you will be using for the joists. Longer, thinner joists will need to be set closer together for optimal structural integrity, while shorter, wider joists can be set further apart. Measure the length of the floorboards you will be using. Most decks are built with long, consistent pieces of lumber for the deck flooring. However, some patterned floors require short, angled pieces of wood to create intriguing designs. The designs that use shorter floorboards will require the joists to be set closer together to accommodate these patterns. Take into consideration the climate where you will be building your deck. Not only will wood warp or crack under less than optimal conditions for their species, but if you live in an area that gets heavy snowfall each winter, then you must also take into account the overall dead weight that the deck must hold, sometimes for months at a time.
In cooler climates, keep the joist distances smaller than you would in warmer climates. Consult a deck joist span table to determine the best spacing. You can find tables and calculators online at sites like Ace Hardware and Best Deck Site (see Resources). You can also check with the local planning department since it will be able to help make sure that the joist spans you calculate fall within its regulations.
518
2,564
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2020-50
longest
en
0.950182
http://nationsenergycorp.com/click-here-for-more-info-electricity-price-comparison-sites.html
1,544,859,938,000,000,000
text/html
crawl-data/CC-MAIN-2018-51/segments/1544376826800.31/warc/CC-MAIN-20181215061532-20181215083532-00253.warc.gz
206,338,317
3,336
There was a time when electricity was electricity. Like so many other places around America, in Houston, electricity didn’t mean “cheap electricity”. But you moved into your home and you called the utility and they turned on the power and the bill came in and you paid it every month. Oh, sure, you might grumble at the amount but then you’d go around and yell at the kids for leaving the lights on and the TV blaring with nobody in the room or maybe you’d look into buying more energy-efficient appliances. When it came down to it, the Bill was the Bill. Either you paid the bill or you ate dry packet meals, had cold showers, and watched TV by peering through the neighbor’s window after dark (preferably once they’d turned the TV on). What’s that? You want cheap electricity? Sure thing: call 1-800-WHO-CARES any time during regular business hours of 2:17am to 3:04am Sundays only. Prepaid electricity plans are yet another option available to Texas customers. Prepaid plans let you avoid credit checks and deposits by pre-paying for your electricity. Prepaid electricity plans typically do not have a fixed duration and operate on a pay-as-you-go basis. Shopping for prepaid electricity can often yield relatively cheap electricity with no deposit. See Prepaid Electricity: Is It Right For Me? for more. When the energy market became deregulated in a majority of Texas in 2002, residents and business owners in these regions earned the power to choose which retail electric provider would supply their electricity. In a deregulated energy market, you could have a range of options for electricity supply rates, which means doing research to find the best one for your needs. How did we get this number? This total is calculated by taking the wattage and daily usage of your common appliances and converting this into a monthly kilowatt-hour (kWh) usage figure.
To figure out the estimated cost based on this rate, multiply your kWh per month by the cost of your energy (an average rate is \$.12 per kWh). You can learn more about calculating your energy consumption by following the steps on this page. Unlike with long-term plans, monthly, variable rate (no-contract) plans have no cancellation fees. You won’t have to pay a penalty if you decide to take your business elsewhere because you found a better deal. Plus, you won’t be left paying more than you should if the market rate for energy trends down. However, if the market prices rise, you’ll have to pay more than those who are in-contract.
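The two-step conversion described above (wattage and daily hours to monthly kWh, then kWh to dollars) can be sketched as a small Python function. The 100 W / 5 h example below is illustrative, not a figure from the page, and a 30-day month is assumed:

```python
def monthly_cost(watts: float, hours_per_day: float,
                 rate_per_kwh: float = 0.12, days: int = 30) -> float:
    """Convert an appliance's wattage and daily usage into a monthly cost.

    Uses the page's quoted average rate of $0.12/kWh by default and
    assumes a 30-day month.
    """
    kwh_per_month = watts / 1000.0 * hours_per_day * days
    return kwh_per_month * rate_per_kwh

# A 100 W appliance running 5 hours a day: 0.1 kW * 5 h * 30 d = 15 kWh
print(round(monthly_cost(100, 5), 2))  # -> 1.8  (15 kWh * $0.12)
```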
542
2,522
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2018-51
latest
en
0.964535
https://se.mathworks.com/matlabcentral/cody/players/3266883/badges
1,606,990,893,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141727627.70/warc/CC-MAIN-20201203094119-20201203124119-00385.warc.gz
468,720,655
16,453
Cody George Berken Rank Score 1 – 60 of 64 Project Euler I Master+50 Earned on 14 Apr 2014 for solving all the problems in Project Euler I. Cody Problems in Japanese Master+50 Earned on 12 Dec 2017 for solving all the problems in Cody Problems in Japanese. CUP Challenge Master+50 Earned on 2 Jan 2014 for solving all the problems in CUP Challenge. Solver+10 Earned on 9 Dec 2013 for solving Problem 1. Times 2 - START HERE. Promoter+10 Earned on 11 Dec 2013 for liking Solution 290489. Earned on 2 Jan 2014 for submitting the best solution Solution 377425. Cody Challenge Master+50 Earned on 10 Feb 2014 for solving all the problems in Cody Challenge. ASEE Challenge Master+50 Earned on 12 Feb 2014 for solving all the problems in ASEE Challenge. Commenter+10 Earned on 12 Feb 2014 for commenting on Solution 401941. Tiles Challenge Master+50 Earned on 16 Feb 2014 for solving all the problems in Tiles Challenge. Scholar+50 Earned on 10 Mar 2014 for solving 500 problems. Speed Demon+50 Earned on 2 Apr 2014 for first solving Problem 2267. Sales Prediction. Community Group Solver+50 Solve a community group Introduction to MATLAB Master+50 Solve all the problems in Introduction to MATLAB problem group. Creator+20 Create a problem. Curator+50 25 solvers for the group curated by the player Quiz Master+20 Must have 50 solvers for a problem you created. Draw Letters Master+50 Solve all the problems in Draw Letters problem group. Cody5:Easy Master+50 Solve all the problems in Cody5:Easy problem group. Indexing I Master+50 Solve all the problems in Indexing I problem group. Puzzler+50 Create 10 problems. Indexing II Master+50 Solve all the problems in Indexing II problem group. Matrix Manipulation I Master+50 Solve all the problems in Matrix Manipulation I problem group. Magic Numbers Master+50 Solve all the problems in Magic Numbers problem group. Sequences & Series I Master+50 Solve all the problems in Sequences & Series I problem group. 
Famous+20 Must receive 25 total likes for the problems you created. Computational Geometry I Master+50 Solve all the problems in Computation Geometry I problem group. Likeable+20 Must receive 10 likes for a problem you created. Strings I Master+50 Solve all the problems in Strings I problem group. Number Manipulation I Master+50 Solve all the problems in Number Manipulation I problem group. Matrix Patterns I Master+50 Solve all the problems in Matrix Patterns problem group. Divisible by x Master+50 Solve all the problems in Divisible by x problem group. Matrix Manipulation II Master+50 Solve all the problems in Matrix Manipulation II problem group. Matrix Patterns II Master+50 Solve all the problems in Matrix Patterns II problem group. R2016b Feature Challenge Master+50 Solve all the problems in R2016b Feature Challenge problem group. Magic Numbers II Master+50 Solve all the problems in Magic Numbers II problem group. Sequences & Series II Master+50 Solve all the problems in Sequences & Series II problem group. Indexing III Master+50 Solve all the problems in Indexing III problem group. Cody5:Hard Master+50 Solve all the problems in Cody5:Hard problem group. Functions I Master+50 Solve all the problems in Functions I problem group. Matrix Patterns III Master+50 Solve all the problems in Matrix Patterns III problem group. Indexing V Master+50 Solve all the problems in Indexing V problem group. Card Games Master+50 Solve all the problems in Card Games problem group. Number Manipulation II Master+50 Solve all the problems in Number Manipulation II problem group. Sequences & Series III Master+50 Solve all the problems in Sequences & Series III problem group. Strings II Master+50 Solve all the problems in Strings II problem group. Matrix Manipulation III Master+50 Solve all the problems in Matrix Manipulation III problem group. Indexing IV Master+50 Solve all the problems in Indexing IV problem group. 
Celebrity+20 Must receive 50 total likes for the solutions you submitted. Strings III Master+50 Solve all the problems in Strings III problem group. Computational Geometry II Master+50 Solve all the problems in Computational Geometry II problem group. Renowned+20 Must receive 10 likes for a solution you submitted. Computational Geometry IV Master+50 Solve all the problems in Computational Geometry IV problem group. Logic Master+50 Solve all the problems in Logic problem group. Fundamentals of robotics: 2D problems Master+50 Solve all the problems in Fundamentals of robotics problem group. Combinatorics I Master+50 Solve all the problems in Combinatorics - I problem group. Word Puzzles Master+50 Solve all the problems in Word Puzzles problem group. Computational Geometry III Master+50 Solve all the problems in Computational Geometry III problem group. Computer Games I Master+50 Solve all the problems in Computer Games I problem group. Modeling & Simulation Challenge Master+50 Solve all the problems in Modeling and Simulation Challenge problem group.
1,219
5,068
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2020-50
latest
en
0.856147
https://www.teacherspayteachers.com/Product/Ratios-and-Proportional-Relationships-Foldable-3424328
1,539,616,261,000,000,000
text/html
crawl-data/CC-MAIN-2018-43/segments/1539583509326.21/warc/CC-MAIN-20181015142752-20181015164252-00398.warc.gz
1,119,351,949
20,808
# Ratios and Proportional Relationships Foldable PDF (Acrobat) Document File, 740 KB | 5 pages Product Description This awesome Foldable will help your students learn the definitions of the Essential Vocabulary Terms for the unit on Ratios and Proportions. I provided the definitions and examples for each, PLUS there is space under each flap for you to personalize with a second example to reinforce their skills and use your own creativity to fit the needs of your classroom. Check out my other products on the Unit covering 6th Grade Ratios and Proportional Relationships! This is part of my BUNDLE on Standards 6.RP.A.1, 6.RP.A.2 and 6.RP.A.3 (a-d), which includes activities, assessments, PowerPoint Notes and Printable Cornell Notes as well! -Enjoy! ******************************************************************************************* Other Math Games and Center Ideas Ratios and Proportional Relationships Paired Puzzle Students will be paired with a partner and each will have 12 separate problems to solve for. After finding the Unit Rate for each, they will then compare and match up their answers to find the Letter/Number to solve for a Riddle. Percent Proportion Color by Solution Perfect activity to supplement your lesson on Percent Proportion. This includes 11 questions, with 3 ensuring students can convert a fraction into a percent. ******************************************************************************************* ♥ Please follow my store to stay up-to-date with new product releases! Don’t forget to leave feedback to gain TpT Credits for future purchases! ♥ → If you have questions or problems, please contact me through the Product Q & A and I will respond as quickly as I can! © Personal Copyright: The purchase of this product allows the teacher to use in their personal classroom. Please do not share with other educators unless additional licenses have been purchased.
Site and District licenses are also available. Total Pages 5 pages N/A Teaching Duration N/A Report this Resource \$1.75
424
2,115
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.828125
3
CC-MAIN-2018-43
longest
en
0.878458
https://republicofsouthossetia.org/question/select-all-that-are-asymptotes-of-the-tangent-graph-pi-4-pi-2-pi-3pi-2-3pi-15007118-39/
1,637,980,047,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964358078.2/warc/CC-MAIN-20211127013935-20211127043935-00201.warc.gz
607,286,760
13,510
## Select all that are asymptotes of the tangent graph. x = pi/4, x = pi/2, x = pi, x = 3pi/2, x = 3pi Question asked 2021-10-20; 2 answers. Answer 1: x = pi/2 and x = 3pi/2. Step-by-step explanation: The tangent function is tan(x) = sin(x)/cos(x). Its graph has vertical asymptotes at the values of x where the denominator, cos(x), is zero; that is, at x = pi/2 + k*pi for any integer k. Of the choices given, only x = pi/2 and x = 3pi/2 have that form. Answer 2: B and D (x = pi/2 and x = 3pi/2).
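The asymptote condition can also be checked numerically: tan(x) blows up exactly where cos(x) vanishes. A small Python sketch (the candidate labels are my own):

```python
import math

def is_tan_asymptote(x: float, tol: float = 1e-9) -> bool:
    """tan(x) = sin(x)/cos(x) has a vertical asymptote wherever cos(x) = 0,
    i.e. at x = pi/2 + k*pi for integer k."""
    return abs(math.cos(x)) < tol

candidates = {"pi/4": math.pi / 4, "pi/2": math.pi / 2, "pi": math.pi,
              "3pi/2": 3 * math.pi / 2, "3pi": 3 * math.pi}
print([name for name, x in candidates.items() if is_tan_asymptote(x)])
# -> ['pi/2', '3pi/2']
```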
162
475
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.875
3
CC-MAIN-2021-49
latest
en
0.816313
http://puzzles.blainesville.com/2017/01/
1,513,456,581,000,000,000
text/html
crawl-data/CC-MAIN-2017-51/segments/1512948589177.70/warc/CC-MAIN-20171216201436-20171216223436-00429.warc.gz
224,595,740
17,594
## Sunday, January 29, 2017 ### NPR Sunday Puzzle (Jan 29, 2017): Take six different letters NPR Sunday Puzzle (Jan 29, 2017): Take six different letters: Q: Take six different letters. Repeat them in the same order. Then repeat them again — making 18 letters altogether. Finally add "tebasket" at the end. If you have the right letters and you space them appropriately, you'll complete a sensible sentence. What is it? I'm not sure I'd call it "sensible" unless it was spoken by someone a little nutty. My hint was to nuts, bolts and washers. A: HERWAS --> HER WASHER WAS HER WASTEBASKET ## Sunday, January 22, 2017 ### NPR Sunday Puzzle (Jan 22, 2017): Think of a Number... NPR Sunday Puzzle (Jan 22, 2017): Think of a Number...: Q: This week's challenge is unusual. The numbers 5,000, 8,000, and 9,000 share a property that only five integers altogether have. Identify the property and the two other integers that have it. The hard part isn't figuring out the pattern, it's figuring out how we are supposed to extend it to find integers four and five. Edit: The title contains each of the vowels (a,e,i,o,u) exactly once. The 4 and 5 in my hint refer to the number of digits in the two other answers. A: When spelled out in English, the numbers contain the 5 vowels (a, e, i, o, u, but not y) exactly once. The other two numbers would be 6,010 (six thousand ten) and 10,006 (ten thousand six). I discounted answers like 80,000 and 90,000 which also contain y and wouldn't preclude 26,000 as an answer. ## Sunday, January 15, 2017 ### NPR Sunday Puzzle (Jan 15, 2017): Gods of Comedy NPR Sunday Puzzle (Jan 15, 2017): Gods of Comedy: Q: Take the first and last names of a famous comedian. The first three letters of the first name and the first letter of the last name, in order, spell the name of a god in mythology. The fourth letter of the first name and the second through fourth letters of the last name, in order, spell the name of another god. Who's the comedian, and what gods are these? 
Here's a long list of comedians and a list of gods to help you out. Edit: The antonym of long is... A: MARTIN SHORT --> MARS, THOR ## Sunday, January 08, 2017 ### NPR Sunday Puzzle (Jan 8, 2017): The Cat's Away... NPR Sunday Puzzle (Jan 8, 2017): The Cat's Away... I'm unable to post the puzzle this week, but I didn't want to leave you without a place to post comments on the puzzle. Somebody help me out by posting a copy here. Then feel free to add your *hints*. Here's my standard reminder... don't post the answer or any outright spoilers before the deadline of Thursday at 3pm ET. If you know the answer, click the link and submit it to NPR, but don't give it away here. Thank you. Update: Q: Think of a two-word phrase you might see on a clothing label. Add two letters to the end of the first word, and one letter to the end of the second word. The result is the name of a famous writer. Who is it? A: VIRGIN WOOL --> VIRGINIA WOOLF ## Sunday, January 01, 2017 ### NPR Sunday Puzzle (Jan 1, 2017): Start the Year with a Word Square Puzzle NPR Sunday Puzzle (Jan 1, 2017): Start the Year with a Word Square Puzzle: Q: Take the four-letter men's names TODD, OMAR, DAVE and DREW. If you write them one under the other, they'll form a word square, spelling TODD, OMAR, DAVE and DREW reading down as well. Can you construct a word square consisting of five five-letter men's names? Any such square using relatively familiar men's names will count. Will has an answer using four relatively common names and one less familiar one. This list of 5-letter names or this list of 5-letter boys names should help you get started. A: Will's intended answer was: KEMAL EMILE MILAN ALAIN LENNY One of the many possible answers, and the answer of the person chosen to play on the air was: ABRAM BLANE RANDY ANDRE MEYER
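Not part of the blog post: the Jan 22 vowel property (each of a, e, i, o, u appearing exactly once when the number is spelled out, with y-containing spellings such as 80,000 excluded, as Blaine notes) can be verified by brute force. A Python sketch with a small number-speller of my own:

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell(n: int) -> str:
    """Spell out 0 <= n < 1,000,000 in English (US style, no 'and')."""
    if n < 20:
        return ONES[n]
    if n < 100:
        return TENS[n // 10] + ("" if n % 10 == 0 else " " + ONES[n % 10])
    if n < 1000:
        rest = "" if n % 100 == 0 else " " + spell(n % 100)
        return ONES[n // 100] + " hundred" + rest
    rest = "" if n % 1000 == 0 else " " + spell(n % 1000)
    return spell(n // 1000) + " thousand" + rest

def qualifies(n: int) -> bool:
    """Each of a, e, i, o, u exactly once, and no y (Blaine's reading)."""
    s = spell(n)
    return "y" not in s and all(s.count(v) == 1 for v in "aeiou")

hits = [n for n in range(1, 100001) if qualifies(n)]
print(hits)  # -> [5000, 6010, 8000, 9000, 10006]
```

With the no-y condition dropped, extra solutions such as 26,000, 80,000 and 90,000 appear, which is exactly why the blog discounts them.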
1,025
3,830
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.8125
4
CC-MAIN-2017-51
latest
en
0.92326
http://mathforum.org/library/drmath/view/56222.html
1,498,708,640,000,000,000
text/html
crawl-data/CC-MAIN-2017-26/segments/1498128323864.76/warc/CC-MAIN-20170629033356-20170629053356-00126.warc.gz
267,457,827
3,443
Associated Topics || Dr. Math Home || Search Dr. Math ### Checkout Registers and Customers ``` Date: 02/27/2001 at 23:17:52 From: Phillip Kirkman Subject: Permutations I have two checkout registers, and twenty customers. What formula will find how many different ways I can arrange them? Order does matter. I've tried two checkouts and three people and have 24 different ways I can arrange them. I also tried the same with four people and got 121 ways to arrange them, and five people and 478 ways to arrange them. I don't know what the formula is... Help! ``` ``` Date: 02/28/2001 at 08:05:01 From: Doctor Anthony Subject: Re: Permutations This type of problem can be modelled in the following way: Ten balls are numbered 1,2, ... ,10. In how many ways can these balls be dropped into five different slots, any number into a slot, and with order being important? We let f(10,5) represent the required number of ways. Suppose that a distribution of 9 of the numbers gives f(9,5) possible distributions, and suppose this gives i(1) numbers in box 1, i(2) numbers in box 2 and so on, so that i(1) + i(2) + i(3) + i(4) + i(5) = 9 Then the 10th object can go into box 1 in [i(1)+1] ways box 2 in [i(2)+1] ways and so on. So that [i(1)+1] + [i(2)+1] + ..... + [i(5)+1] = 9 + 5 = 14 ways Since this number is independent of the particular distribution of the nine numbers, we have the relation f(10,5) = 14.f(9,5) = 14.13.f(8,5) = 14.13.12.f(7,5) = 14.13.12.11.f(6,5) = 14.13.12.11.10.f(5,5) = etc, etc, .... = 14.13.12.11.10.9.8.7.6.5 [f(1,5) = 5] = 3632428800 = P(14,10) If m = the number of boxes and n = the number of numbered balls, then the required number of ways = P(m+n-1,n) This can also be written [m]^n = m(m+1)(m+2).....(m+n-1) In the question of the twenty customers and two checkouts, the number of arrangements is P(2+20-1,20) = P(21,20) = 5.109 x 10^19 We can check your work with 3, 4 and 5 customers. 
With 3 customers P(2+3-1,3) = P(4,3) = 24 agrees with your answer With 4 customers P(2+4-1,4) = P(5,4) = 120 differs by 1 With 5 customers P(2+5-1,5) = P(6,5) = 720 no agreement - Doctor Anthony, The Math Forum http://mathforum.org/dr.math/ ``` Associated Topics: High School Permutations and Combinations
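Dr. Anthony's closed form P(m+n-1, n) can be checked by brute-force enumeration for small cases, confirming his figures of 24, 120 and 720 for two registers and 3, 4 and 5 customers. A Python sketch (function names are my own):

```python
from itertools import product
from math import factorial, perm  # math.perm requires Python 3.8+

def count_arrangements_brute(n_customers: int, m_registers: int) -> int:
    """Count ordered queue arrangements directly: choose a register for
    each customer, then count the distinct orderings within every queue."""
    total = 0
    for assignment in product(range(m_registers), repeat=n_customers):
        # Orderings factorize per register: k1! * k2! * ... for queue sizes k_i
        ways = 1
        for r in range(m_registers):
            ways *= factorial(assignment.count(r))
        total += ways
    return total

def count_arrangements_formula(n_customers: int, m_registers: int) -> int:
    """Dr. Anthony's closed form: P(m + n - 1, n) = (m+n-1)! / (m-1)!"""
    return perm(m_registers + n_customers - 1, n_customers)

for n in (3, 4, 5):
    print(n, count_arrangements_brute(n, 2), count_arrangements_formula(n, 2))
# -> 3 24 24 / 4 120 120 / 5 720 720
```

The brute force agrees with the formula, so the asker's hand counts of 121 and 478 were miscounts; the correct values are 120 and 720.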
866
2,637
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.34375
4
CC-MAIN-2017-26
longest
en
0.893569