doc_id | text | source |
|---|---|---|
6,100 | Hubble's law predicts that galaxies that are beyond Hubble distance recede faster than the speed of light. However, special relativity applies only to motion through space; it does not constrain the expansion of space itself. Hubble's law describes velocity that results from expansion "of" space, rather than "through" space. | https://en.wikipedia.org/wiki?curid=4116 |
6,101 | Astronomers often refer to the cosmological redshift as a Doppler shift which can lead to a misconception. Although similar, the cosmological redshift is not identical to the classically derived Doppler redshift because most elementary derivations of the Doppler redshift do not accommodate the expansion of space. Accurate derivation of the cosmological redshift requires the use of general relativity, and while a treatment using simpler Doppler effect arguments gives nearly identical results for nearby galaxies, interpreting the redshift of more distant galaxies as due to the simplest Doppler redshift treatments can cause confusion. | https://en.wikipedia.org/wiki?curid=4116 |
6,102 | Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture. | https://en.wikipedia.org/wiki?curid=4116 |
6,103 | The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom" while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, specific laws of nature most likely came into existence in a random way, but as inflation models show, some combinations of these are far more probable. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created. | https://en.wikipedia.org/wiki?curid=4116 |
6,104 | The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density may be a physical impossibility. However, the physical theories of general relativity and quantum mechanics as currently realized are not applicable before the Planck epoch, and correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang. | https://en.wikipedia.org/wiki?curid=4116 |
6,105 | While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony". | https://en.wikipedia.org/wiki?curid=4116 |
6,106 | Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse. | https://en.wikipedia.org/wiki?curid=4116 |
6,107 | Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. | https://en.wikipedia.org/wiki?curid=4116 |
6,108 | Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. | https://en.wikipedia.org/wiki?curid=4116 |
6,109 | Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip. | https://en.wikipedia.org/wiki?curid=4116 |
6,110 | As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous. | https://en.wikipedia.org/wiki?curid=4116 |
6,111 | In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,112 | The parameter $\mu$ is the mean or expectation of the distribution (and also its median and mode), while the parameter $\sigma$ is its standard deviation. The variance of the distribution is $\sigma^2$. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. | https://en.wikipedia.org/wiki?curid=21462 |
6,113 | Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal. | https://en.wikipedia.org/wiki?curid=21462 |
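As an illustration, here is a minimal Monte Carlo sketch of the central limit theorem (Python standard library only; the uniform source distribution, sample size, and trial count are arbitrary choices, not anything prescribed by the text):

```python
import random
import statistics

# Averages of n i.i.d. uniform draws approach a normal distribution
# as n grows. The sizes below are arbitrary illustrative choices.
n, trials = 50, 10_000
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]

# A Uniform(0, 1) variable has mean 1/2 and variance 1/12, so the
# sample mean should be approximately N(1/2, 1/(12 n)).
print(statistics.fmean(means))   # ~ 0.5
print(statistics.stdev(means))   # ~ (1 / (12 * 50)) ** 0.5 ~ 0.0408
```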
6,114 | Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed. | https://en.wikipedia.org/wiki?curid=21462 |
6,115 | A normal distribution is sometimes informally called a bell curve. However, many other distributions are bell-shaped (such as the Cauchy, Student's "t", and logistic distributions). For other names, see Naming. | https://en.wikipedia.org/wiki?curid=21462 |
6,116 | The univariate probability distribution is generalized for vectors in the multivariate normal distribution and for matrices in the matrix normal distribution. | https://en.wikipedia.org/wiki?curid=21462 |
6,117 | The simplest case of a normal distribution is known as the "standard normal distribution" or "unit normal distribution". This is a special case when $\mu = 0$ and $\sigma = 1$, and it is described by this probability density function (or density): $\varphi(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,118 | The variable $z$ has a mean of 0 and a variance and standard deviation of 1. The density $\varphi(z)$ has its peak $\frac{1}{\sqrt{2\pi}}$ at $z = 0$ and inflection points at $z = +1$ and $z = -1$. | https://en.wikipedia.org/wiki?curid=21462 |
6,119 | Although the density above is most commonly known as the "standard normal," a few authors have used that term to describe other versions of the normal distribution. Carl Friedrich Gauss, for example, once defined the standard normal as $\varphi(z) = \frac{e^{-z^2}}{\sqrt{\pi}}$, which has a variance of 1/2. | https://en.wikipedia.org/wiki?curid=21462 |
6,120 | Every normal distribution is a version of the standard normal distribution, whose domain has been stretched by a factor $\sigma$ (the standard deviation) and then translated by $\mu$ (the mean value): $f(x \mid \mu, \sigma^2) = \frac{1}{\sigma}\,\varphi\!\left(\frac{x-\mu}{\sigma}\right).$ | https://en.wikipedia.org/wiki?curid=21462 |
6,121 | If $Z$ is a standard normal deviate, then $X = \sigma Z + \mu$ will have a normal distribution with expected value $\mu$ and standard deviation $\sigma$. This is equivalent to saying that the "standard" normal distribution $Z$ can be scaled/stretched by a factor of $\sigma$ and shifted by $\mu$ to yield a different normal distribution, called $X$. Conversely, if $X$ is a normal deviate with parameters $\mu$ and $\sigma^2$, then this $X$ distribution can be re-scaled and shifted via the formula $Z = (X - \mu)/\sigma$ to convert it to the "standard" normal distribution. This variate is also called the standardized form of $X$. | https://en.wikipedia.org/wiki?curid=21462 |
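The scaling/shifting relation is easy to check numerically; the following sketch (with arbitrary example parameters $\mu = 3$, $\sigma = 2$) is illustrative only:

```python
import random
import statistics

# X = mu + sigma * Z turns standard normal draws into N(mu, sigma^2)
# draws; standardizing recovers mean ~0 and standard deviation ~1.
mu, sigma = 3.0, 2.0                       # arbitrary example parameters
z = [random.gauss(0.0, 1.0) for _ in range(100_000)]
x = [mu + sigma * zi for zi in z]          # X = mu + sigma Z
print(statistics.fmean(x), statistics.stdev(x))        # ~ 3.0, ~ 2.0

back = [(xi - mu) / sigma for xi in x]     # Z = (X - mu) / sigma
print(statistics.fmean(back), statistics.stdev(back))  # ~ 0.0, ~ 1.0
```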
6,122 | The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter $\phi$ (phi). The alternative form of the Greek letter phi, $\varphi$, is also used quite often. | https://en.wikipedia.org/wiki?curid=21462 |
6,123 | The normal distribution is often referred to as $N(\mu, \sigma^2)$ or $\mathcal{N}(\mu, \sigma^2)$. Thus when a random variable $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, one may write $X \sim \mathcal{N}(\mu, \sigma^2).$ | https://en.wikipedia.org/wiki?curid=21462 |
6,124 | Some authors advocate using the precision $\tau$ as the parameter defining the width of the distribution, instead of the deviation $\sigma$ or the variance $\sigma^2$. The precision is normally defined as the reciprocal of the variance, $\tau = 1/\sigma^2$. The formula for the distribution then becomes $f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\frac{\tau (x-\mu)^2}{2}}.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,125 | This choice is claimed to have advantages in numerical computations when $\sigma$ is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution. | https://en.wikipedia.org/wiki?curid=21462 |
6,126 | Alternatively, the reciprocal of the standard deviation $\tau' = 1/\sigma$ might be defined as the "precision", in which case the expression of the normal distribution becomes $f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-\frac{(\tau')^2 (x-\mu)^2}{2}}.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,127 | According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution. | https://en.wikipedia.org/wiki?curid=21462 |
6,128 | Normal distributions form an exponential family with natural parameters $\theta_1 = \frac{\mu}{\sigma^2}$ and $\theta_2 = -\frac{1}{2\sigma^2}$, and natural statistics $x$ and $x^2$. The dual expectation parameters for the normal distribution are $\eta_1 = \mu$ and $\eta_2 = \mu^2 + \sigma^2$. | https://en.wikipedia.org/wiki?curid=21462 |
6,129 | The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter $\Phi$ (phi), is the integral $\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,130 | The related error function $\operatorname{erf}(x)$ gives the probability of a random variable with normal distribution of mean 0 and variance 1/2 falling in the range $[-x, x]$. That is: $\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,131 | These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more. | https://en.wikipedia.org/wiki?curid=21462 |
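Since the standard normal CDF satisfies $\Phi(x) = \tfrac{1}{2}\bigl(1 + \operatorname{erf}(x/\sqrt{2})\bigr)$, it can be evaluated with a library error function; a minimal Python sketch:

```python
import math

# The standard normal CDF via the error function:
# Phi(x) = (1 + erf(x / sqrt(2))) / 2. math.erf is in the standard library.
def std_normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(std_normal_cdf(0.0))    # 0.5
print(std_normal_cdf(1.96))   # ~ 0.975
```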
6,132 | For a generic normal distribution with density $f$, mean $\mu$ and deviation $\sigma$, the cumulative distribution function is $F(x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right].$ | https://en.wikipedia.org/wiki?curid=21462 |
6,133 | The complement of the standard normal CDF, $Q(x) = 1 - \Phi(x)$, is often called the Q-function, especially in engineering texts. It gives the probability that the value of a standard normal random variable $X$ will exceed $x$: $Q(x) = P(X > x)$. Other definitions of the $Q$-function, all of which are simple transformations of $\Phi$, are also used occasionally. | https://en.wikipedia.org/wiki?curid=21462 |
6,134 | The graph of the standard normal CDF $\Phi$ has 2-fold rotational symmetry around the point (0, 1/2); that is, $\Phi(-x) = 1 - \Phi(x)$. Its antiderivative (indefinite integral) can be expressed as follows: $\int \Phi(x)\, dx = x\,\Phi(x) + \varphi(x) + C.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,135 | An asymptotic expansion of the CDF for large "x" can also be derived using integration by parts. For more, see Error function#Asymptotic expansion. | https://en.wikipedia.org/wiki?curid=21462 |
6,136 | A quick approximation to the standard normal distribution's CDF can be found by using a Taylor series approximation: $\Phi(x) \approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \sum_{k=0}^{n} \frac{(-1)^k\, x^{2k+1}}{2^k\, k!\, (2k+1)}.$ | https://en.wikipedia.org/wiki?curid=21462 |
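A sketch of that truncated series in Python, compared against the erf-based value (the truncation length is an arbitrary choice; accuracy degrades for large |x|):

```python
import math

# Taylor-series approximation of the standard normal CDF around 0,
# truncated after n_terms terms.
def phi_taylor(x: float, n_terms: int = 20) -> float:
    s = sum((-1) ** k * x ** (2 * k + 1) / (2 ** k * math.factorial(k) * (2 * k + 1))
            for k in range(n_terms))
    return 0.5 + s / math.sqrt(2.0 * math.pi)

exact = 0.5 * (1.0 + math.erf(1.0 / math.sqrt(2.0)))
print(phi_taylor(1.0), exact)   # both ~ 0.8413
```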
6,137 | About 68% of values drawn from a normal distribution are within one standard deviation σ away from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. This fact is known as the 68-95-99.7 (empirical) rule, or the "3-sigma rule". | https://en.wikipedia.org/wiki?curid=21462 |
6,138 | More precisely, the probability that a normal deviate lies in the range between $\mu - n\sigma$ and $\mu + n\sigma$ is given by $P(\mu - n\sigma \le X \le \mu + n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\!\left(\frac{n}{\sqrt{2}}\right).$ | https://en.wikipedia.org/wiki?curid=21462 |
6,139 | The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function: $\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \quad p \in (0, 1).$ | https://en.wikipedia.org/wiki?curid=21462 |
6,140 | The quantile $\Phi^{-1}(p)$ of the standard normal distribution is commonly denoted as $z_p$. These values are used in hypothesis testing, construction of confidence intervals and Q–Q plots. A normal random variable $X$ will exceed $\mu + z_p \sigma$ with probability $1 - p$, and will lie outside the interval $\mu \pm z_p \sigma$ with probability $2(1 - p)$. In particular, the quantile $z_{0.975}$ is 1.96; therefore a normal random variable will lie outside the interval $\mu \pm 1.96\sigma$ in only 5% of cases. | https://en.wikipedia.org/wiki?curid=21462 |
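These quantiles are available directly in Python's standard library (statistics.NormalDist, Python 3.8+); a short illustration:

```python
from statistics import NormalDist

# Standard normal quantiles and tail probabilities.
z = NormalDist()                       # mean 0, sigma 1
print(z.inv_cdf(0.975))                # ~ 1.9600, the 97.5% quantile
print(z.cdf(1.96) - z.cdf(-1.96))      # ~ 0.95: mass inside mu +/- 1.96 sigma
```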
6,141 | The following table gives the quantile $z_p$ such that $X$ will lie in the range $\mu \pm z_p \sigma$ with a specified probability $p$. These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions. Note that the following table shows $\Phi^{-1}\!\big(\tfrac{p+1}{2}\big) = \sqrt{2}\,\operatorname{erf}^{-1}(p)$, not $\Phi^{-1}(p)$ as defined above. | https://en.wikipedia.org/wiki?curid=21462 |
6,142 | The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other. | https://en.wikipedia.org/wiki?curid=21462 |
6,143 | The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution. | https://en.wikipedia.org/wiki?curid=21462 |
6,144 | The value of the normal distribution is practically zero when the value $x$ lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of outliers—values that lie many standard deviations away from the mean—and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied. | https://en.wikipedia.org/wiki?curid=21462 |
6,145 | The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed random variables whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy distribution. | https://en.wikipedia.org/wiki?curid=21462 |
6,146 | The normal distribution with density $f(x)$ (mean $\mu$ and standard deviation $\sigma > 0$) has the following properties: | https://en.wikipedia.org/wiki?curid=21462 |
6,147 | Furthermore, the density $\varphi$ of the standard normal distribution (i.e. $\mu = 0$ and $\sigma = 1$) also has the following properties: | https://en.wikipedia.org/wiki?curid=21462 |
6,148 | The plain and absolute moments of a variable $X$ are the expected values of $X^p$ and $|X|^p$, respectively. If the expected value $\mu$ of $X$ is zero, these parameters are called "central moments;" otherwise, these parameters are called "non-central moments." Usually we are interested only in moments with integer order $p$. | https://en.wikipedia.org/wiki?curid=21462 |
6,149 | If $X$ has a normal distribution, the non-central moments exist and are finite for any $p$ whose real part is greater than −1. For any non-negative integer $p$, the plain central moments are: $E\!\left[(X-\mu)^p\right] = \begin{cases} 0 & \text{if } p \text{ is odd,} \\ \sigma^p (p-1)!! & \text{if } p \text{ is even.} \end{cases}$ | https://en.wikipedia.org/wiki?curid=21462 |
6,150 | Here $(p-1)!!$ denotes the double factorial, that is, the product of all numbers from $p-1$ to 1 that have the same parity as $p-1$. | https://en.wikipedia.org/wiki?curid=21462 |
6,151 | The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer $p$, $E\!\left[|X-\mu|^p\right] = \sigma^p (p-1)!! \cdot \begin{cases} \sqrt{2/\pi} & \text{if } p \text{ is odd,} \\ 1 & \text{if } p \text{ is even.} \end{cases}$ | https://en.wikipedia.org/wiki?curid=21462 |
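A Monte Carlo spot-check of the odd-order case (here $p = 3$ and $\sigma = 2$, both arbitrary example choices):

```python
import math
import random
import statistics

# Check E|X - mu|^p = sigma^p (p-1)!! sqrt(2/pi) for odd p, using p = 3.
sigma, p = 2.0, 3
draws = [random.gauss(0.0, sigma) for _ in range(200_000)]
empirical = statistics.fmean(abs(x) ** p for x in draws)

double_fact = math.prod(range(p - 1, 0, -2))   # (p-1)!!; empty product = 1
theoretical = sigma ** p * double_fact * math.sqrt(2.0 / math.pi)
print(empirical, theoretical)   # both ~ 12.77
```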
6,152 | The last formula is valid also for any non-integer $p > -1$. When the mean $\mu = 0$, the plain and absolute moments can be expressed in terms of the confluent hypergeometric functions ${}_1F_1$ and $U$. | https://en.wikipedia.org/wiki?curid=21462 |
6,153 | These expressions remain valid even if $p$ is not an integer. See also generalized Hermite polynomials. | https://en.wikipedia.org/wiki?curid=21462 |
6,154 | The expectation of $X$ conditioned on the event that $X$ lies in an interval $[a, b]$ is given by $E[X \mid a < X < b] = \mu - \sigma^2\, \frac{f(b) - f(a)}{F(b) - F(a)},$ | https://en.wikipedia.org/wiki?curid=21462 |
6,155 | where $f$ and $F$ respectively are the density and the cumulative distribution function of $X$. For $b = \infty$ this is known as the inverse Mills ratio. Note that above, the density $f$ of $X$ is used instead of the standard normal density as in the inverse Mills ratio, so here we have $\sigma^2$ instead of $\sigma$. | https://en.wikipedia.org/wiki?curid=21462 |
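The identity can be sanity-checked by simulation; the interval endpoints and parameters below are arbitrary example values:

```python
import math
import random
import statistics

# Monte Carlo check of E[X | a < X < b] = mu - sigma^2 (f(b) - f(a)) / (F(b) - F(a)),
# where f and F are the N(mu, sigma^2) density and CDF.
mu, sigma, a, b = 1.0, 2.0, 0.0, 3.0   # arbitrary example values

def f(x):  # normal density
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def F(x):  # normal CDF via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

theory = mu - sigma ** 2 * (f(b) - f(a)) / (F(b) - F(a))
draws = [x for x in (random.gauss(mu, sigma) for _ in range(500_000)) if a < x < b]
print(theory, statistics.fmean(draws))  # should agree to ~2 decimal places
```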
6,156 | The Fourier transform of a normal density $f$ with mean $\mu$ and standard deviation $\sigma$ is $\hat{f}(t) = e^{-i\mu t}\, e^{-\frac{1}{2}(\sigma t)^2},$ | https://en.wikipedia.org/wiki?curid=21462 |
6,157 | where $i$ is the imaginary unit. If the mean $\mu = 0$, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and standard deviation $1/\sigma$. In particular, the standard normal distribution $\varphi$ is an eigenfunction of the Fourier transform. | https://en.wikipedia.org/wiki?curid=21462 |
6,158 | In probability theory, the Fourier transform of the probability distribution of a real-valued random variable $X$ is closely connected to the characteristic function $\varphi_X(t)$ of that variable, which is defined as the expected value of $e^{itX}$, as a function of the real variable $t$ (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-valued variable $t$. The relation between both is $\varphi_X(t) = \hat{f}(-t)$. | https://en.wikipedia.org/wiki?curid=21462 |
6,159 | The moment generating function of a real random variable $X$ is the expected value of $e^{tX}$, as a function of the real parameter $t$. For a normal distribution with density $f$, mean $\mu$ and deviation $\sigma$, the moment generating function exists and is equal to $M(t) = E[e^{tX}] = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.$ The cumulant generating function is its logarithm, $\ln M(t) = \mu t + \tfrac{1}{2}\sigma^2 t^2$. | https://en.wikipedia.org/wiki?curid=21462 |
6,160 | Since this is a quadratic polynomial in $t$, only the first two cumulants are nonzero, namely the mean $\mu$ and the variance $\sigma^2$. | https://en.wikipedia.org/wiki?curid=21462 |
6,161 | Within Stein's method the Stein operator and class of a random variable $X \sim \mathcal{N}(\mu, \sigma^2)$ are $\mathcal{A}f(x) = \sigma^2 f'(x) - (x - \mu) f(x)$ and $\mathcal{F}$, the class of all absolutely continuous functions $f : \mathbb{R} \to \mathbb{R}$ such that $E[|f'(X)|] < \infty$. | https://en.wikipedia.org/wiki?curid=21462 |
6,162 | In the limit when $\sigma$ tends to zero, the probability density $f(x)$ eventually tends to zero at any $x \ne \mu$, but grows without limit if $x = \mu$, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary function when $\sigma = 0$. | https://en.wikipedia.org/wiki?curid=21462 |
6,163 | However, one can define the normal distribution with zero variance as a generalized function; specifically, as Dirac's "delta function" $\delta(x)$ translated by the mean $\mu$, that is, $f(x) = \delta(x - \mu)$. | https://en.wikipedia.org/wiki?curid=21462 |
6,164 | Of all probability distributions over the reals with a specified mean $\mu$ and variance $\sigma^2$, the normal distribution $N(\mu, \sigma^2)$ is the one with maximum entropy. If $X$ is a continuous random variable with probability density $f(x)$, then the entropy of $X$ is defined as $H(X) = -\int_{-\infty}^{\infty} f(x) \ln f(x)\, dx,$ | https://en.wikipedia.org/wiki?curid=21462 |
6,165 | where $f(x) \ln f(x)$ is understood to be zero whenever $f(x) = 0$. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified variance, by using variational calculus. A function with two Lagrange multipliers is defined: $L = -\int_{-\infty}^{\infty} f(x) \ln f(x)\, dx + \lambda_0\!\left(\int_{-\infty}^{\infty} f(x)\, dx - 1\right) + \lambda\!\left(\int_{-\infty}^{\infty} f(x)(x-\mu)^2\, dx - \sigma^2\right),$ | https://en.wikipedia.org/wiki?curid=21462 |
6,166 | where $f(x)$ is, for now, regarded as some density function with mean $\mu$ and standard deviation $\sigma$. | https://en.wikipedia.org/wiki?curid=21462 |
6,167 | At maximum entropy, a small variation $\delta f(x)$ about $f(x)$ will produce a variation $\delta L$ about $L$ which is equal to 0: $\delta L = \int_{-\infty}^{\infty} \delta f(x)\left[-\ln f(x) - 1 + \lambda_0 + \lambda (x-\mu)^2\right] dx = 0.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,168 | Since this must hold for any small $\delta f(x)$, the term in brackets must be zero, and solving for $f(x)$ yields: $f(x) = e^{\lambda_0 - 1 + \lambda (x-\mu)^2}.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,169 | Using the constraint equations to solve for $\lambda_0$ and $\lambda$ yields the density of the normal distribution: $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,170 | The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where $X_1, \ldots, X_n$ are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance $\sigma^2$, and $Z$ is their mean scaled by $\sqrt{n}$, $Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)$. | https://en.wikipedia.org/wiki?curid=21462 |
6,171 | Then, as $n$ increases, the probability distribution of $Z$ will tend to the normal distribution with zero mean and variance $\sigma^2$. | https://en.wikipedia.org/wiki?curid=21462 |
6,172 | The theorem can be extended to variables $X_i$ that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions. | https://en.wikipedia.org/wiki?curid=21462 |
6,173 | Many test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions. | https://en.wikipedia.org/wiki?curid=21462 |
6,174 | The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example: | https://en.wikipedia.org/wiki?curid=21462 |
6,175 | Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. | https://en.wikipedia.org/wiki?curid=21462 |
6,176 | A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions. | https://en.wikipedia.org/wiki?curid=21462 |
6,177 | This theorem can also be used to justify modeling the sum of many uniform noise sources as Gaussian noise. See AWGN. | https://en.wikipedia.org/wiki?curid=21462 |
6,178 | The probability density, cumulative distribution, and inverse cumulative distribution of any function of one or more independent or correlated normal variables can be computed with the numerical method of ray tracing. In the following sections we look at some special cases. | https://en.wikipedia.org/wiki?curid=21462 |
6,179 | If $X_1$ and $X_2$ are two independent standard normal random variables with mean 0 and variance 1, then their sum and difference are distributed normally with mean zero and variance two, $X_1 \pm X_2 \sim \mathcal{N}(0, 2)$; their product $Z = X_1 X_2$ follows the product distribution with density $f_Z(z) = \pi^{-1} K_0(|z|)$, where $K_0$ is the modified Bessel function of the second kind; and their ratio $X_1/X_2$ follows the standard Cauchy distribution. | https://en.wikipedia.org/wiki?curid=21462 |
6,180 | The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function. | https://en.wikipedia.org/wiki?curid=21462 |
6,181 | For any positive integer $n$, any normal distribution with mean $\mu$ and variance $\sigma^2$ is the distribution of the sum of $n$ independent normal deviates, each with mean $\mu/n$ and variance $\sigma^2/n$. This property is called infinite divisibility. | https://en.wikipedia.org/wiki?curid=21462 |
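A quick numerical illustration of infinite divisibility (the values of $\mu$, $\sigma$, and $n$ below are arbitrary):

```python
import random
import statistics

# The sum of n independent N(mu/n, sigma^2/n) deviates has the same
# N(mu, sigma^2) distribution; here we check mean and spread.
mu, sigma, n = 5.0, 3.0, 7   # arbitrary example values
sums = [sum(random.gauss(mu / n, (sigma ** 2 / n) ** 0.5) for _ in range(n))
        for _ in range(100_000)]
print(statistics.fmean(sums), statistics.stdev(sums))   # ~ 5.0, ~ 3.0
```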
6,182 | Conversely, if $X_1$ and $X_2$ are independent random variables and their sum $X_1 + X_2$ has a normal distribution, then both $X_1$ and $X_2$ must be normal deviates. | https://en.wikipedia.org/wiki?curid=21462 |
6,183 | This result is known as Cramér's decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely. | https://en.wikipedia.org/wiki?curid=21462 |
6,184 | Bernstein's theorem states that if $X$ and $Y$ are independent and $X + Y$ and $X - Y$ are also independent, then both $X$ and $Y$ must necessarily have normal distributions. | https://en.wikipedia.org/wiki?curid=21462 |
6,185 | More generally, if $X_1, \ldots, X_n$ are independent random variables, then two distinct linear combinations $\sum_k a_k X_k$ and $\sum_k b_k X_k$ will be independent if and only if all $X_k$ are normal and $\sum_k a_k b_k \sigma_k^2 = 0$, where $\sigma_k^2$ denotes the variance of $X_k$. | https://en.wikipedia.org/wiki?curid=21462 |
6,186 | The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case. All these extensions are also called "normal" or "Gaussian" laws, so a certain ambiguity in names exists. | https://en.wikipedia.org/wiki?curid=21462 |
6,187 | where $\mu$ is the mean and $\sigma_1$ and $\sigma_2$ are the standard deviations of the distribution to the left and right of the mean respectively. | https://en.wikipedia.org/wiki?curid=21462 |
6,188 | One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such cases, a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are: | https://en.wikipedia.org/wiki?curid=21462 |
6,189 | It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample $(x_1, \ldots, x_n)$ from a normal $N(\mu, \sigma^2)$ population we would like to learn the approximate values of parameters $\mu$ and $\sigma^2$. The standard approach to this problem is the maximum likelihood method, which requires maximization of the "log-likelihood function": $\ln L(\mu, \sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,190 | Taking derivatives with respect to $\mu$ and $\sigma^2$ and solving the resulting system of first order conditions yields the "maximum likelihood estimates": $\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2.$ | https://en.wikipedia.org/wiki?curid=21462 |
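A minimal sketch of these estimates on synthetic data (the true parameters are arbitrary choices):

```python
import random

# Maximum likelihood estimates for a normal sample: mu_hat is the
# sample mean, sigma2_hat the (biased) mean of squared deviations.
data = [random.gauss(10.0, 2.0) for _ in range(10_000)]

mu_hat = sum(data) / len(data)
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)
print(mu_hat, sigma2_hat ** 0.5)   # ~ 10.0, ~ 2.0
```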
6,191 | Estimator $\hat{\mu}$ is called the "sample mean", since it is the arithmetic mean of all observations. The statistic $\bar{x}$ is complete and sufficient for $\mu$, and therefore by the Lehmann–Scheffé theorem, $\hat{\mu}$ is the uniformly minimum variance unbiased (UMVU) estimator. In finite samples it is distributed normally: $\hat{\mu} \sim \mathcal{N}(\mu, \sigma^2/n).$ | https://en.wikipedia.org/wiki?curid=21462 |
6,192 | The variance of this estimator is equal to the $\mu\mu$-element of the inverse Fisher information matrix $\mathcal{I}^{-1}$. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of $\hat{\mu}$ is proportional to $1/\sqrt{n}$, that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations. | https://en.wikipedia.org/wiki?curid=21462 |
6,193 | From the standpoint of the asymptotic theory, $\hat{\mu}$ is consistent, that is, it converges in probability to $\mu$ as $n \to \infty$. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples: $\sqrt{n}(\hat{\mu} - \mu) \xrightarrow{d} \mathcal{N}(0, \sigma^2).$ | https://en.wikipedia.org/wiki?curid=21462 |
6,194 | The estimator $\hat{\sigma}^2$ is called the "sample variance", since it is the variance of the sample $(x_1, \ldots, x_n)$. In practice, another estimator is often used instead of the $\hat{\sigma}^2$. This other estimator is denoted $s^2$, and is also called the "sample variance", which represents a certain ambiguity in terminology; its square root $s$ is called the "sample standard deviation". The estimator $s^2$ differs from $\hat{\sigma}^2$ by having $n - 1$ instead of $n$ in the denominator (the so-called Bessel's correction): $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,195 | The difference between $s^2$ and $\hat{\sigma}^2$ becomes negligibly small for large $n$. In finite samples however, the motivation behind the use of $s^2$ is that it is an unbiased estimator of the underlying parameter $\sigma^2$, whereas $\hat{\sigma}^2$ is biased. Also, by the Lehmann–Scheffé theorem the estimator $s^2$ is uniformly minimum variance unbiased (UMVU), which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator $\hat{\sigma}^2$ is "better" than the $s^2$ in terms of the mean squared error (MSE) criterion. In finite samples both $s^2$ and $\hat{\sigma}^2$ have scaled chi-squared distribution with $(n-1)$ degrees of freedom: $s^2 \sim \frac{\sigma^2}{n-1}\chi^2_{n-1}, \qquad \hat{\sigma}^2 \sim \frac{\sigma^2}{n}\chi^2_{n-1}.$ | https://en.wikipedia.org/wiki?curid=21462 |
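The bias/MSE trade-off described above can be seen in a small simulation (sample size and trial count are arbitrary choices):

```python
import random
import statistics

# Compare the biased MLE variance estimator (divide by n) with
# Bessel-corrected s^2 (divide by n - 1) on many small samples:
# s^2 is unbiased, while the MLE has lower mean squared error here.
sigma2, n, trials = 4.0, 5, 200_000
bias_mle = bias_s2 = mse_mle = mse_s2 = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    m = statistics.fmean(xs)
    ss = sum((x - m) ** 2 for x in xs)
    mle, s2 = ss / n, ss / (n - 1)
    bias_mle += mle - sigma2
    bias_s2 += s2 - sigma2
    mse_mle += (mle - sigma2) ** 2
    mse_s2 += (s2 - sigma2) ** 2
print(bias_mle / trials, bias_s2 / trials)  # ~ -0.8 (biased), ~ 0 (unbiased)
print(mse_mle / trials, mse_s2 / trials)    # MLE MSE < s^2 MSE
```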
6,196 | The first of these expressions shows that the variance of $s^2$ is equal to $\frac{2\sigma^4}{n-1}$, which is slightly greater than the $(\sigma^2, \sigma^2)$-element of the inverse Fisher information matrix $\mathcal{I}^{-1}$. Thus, $s^2$ is not an efficient estimator for $\sigma^2$, and moreover, since $s^2$ is UMVU, we can conclude that the finite-sample efficient estimator for $\sigma^2$ does not exist. | https://en.wikipedia.org/wiki?curid=21462 |
6,197 | Applying the asymptotic theory, both estimators $s^2$ and $\hat{\sigma}^2$ are consistent, that is, they converge in probability to $\sigma^2$ as the sample size $n \to \infty$. The two estimators are also both asymptotically normal: $\sqrt{n}(s^2 - \sigma^2) \xrightarrow{d} \mathcal{N}(0, 2\sigma^4), \qquad \sqrt{n}(\hat{\sigma}^2 - \sigma^2) \xrightarrow{d} \mathcal{N}(0, 2\sigma^4).$ | https://en.wikipedia.org/wiki?curid=21462 |
6,198 | By Cochran's theorem, for normal distributions the sample mean $\hat{\mu}$ and the sample variance $s^2$ are independent, which means there can be no gain in considering their joint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between $\hat{\mu}$ and $s^2$ can be employed to construct the so-called "t-statistic": $t = \frac{\hat{\mu} - \mu}{s/\sqrt{n}}.$ | https://en.wikipedia.org/wiki?curid=21462 |
6,199 | This quantity $t$ has the Student's t-distribution with $(n-1)$ degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t-statistic will allow us to construct the confidence interval for $\mu$; similarly, inverting the $\chi^2$ distribution of the statistic $s^2$ will give us the confidence interval for $\sigma^2$: | https://en.wikipedia.org/wiki?curid=21462 |
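For concreteness, the resulting two-sided intervals at confidence level $1-\alpha$ take the following standard form (a sketch using conventional quantile notation, which the excerpt itself does not spell out):

```latex
% t_{k,q} and chi^2_{k,q} denote the q-quantiles of the Student's t and
% chi-squared distributions with k degrees of freedom.
\mu \in \left[\hat\mu - t_{n-1,\,1-\alpha/2}\,\frac{s}{\sqrt{n}},\;
              \hat\mu + t_{n-1,\,1-\alpha/2}\,\frac{s}{\sqrt{n}}\right],
\qquad
\sigma^2 \in \left[\frac{(n-1)s^2}{\chi^2_{n-1,\,1-\alpha/2}},\;
                   \frac{(n-1)s^2}{\chi^2_{n-1,\,\alpha/2}}\right].
```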