where formula_18 are the observed values of the sample items, and formula_19 is the mean value of these observations, while the denominator "N" stands for the size of the sample: this is the square root of the sample variance, which is the average of the squared deviations about the sample mean.
https://en.wikipedia.org/wiki?curid=27590
This is a consistent estimator (it converges in probability to the population value as the number of samples goes to infinity), and is the maximum-likelihood estimate when the population is normally distributed. However, this is a biased estimator, as the estimates are generally too low. The bias decreases as sample size grows, dropping off as 1/"N", and thus is most significant for small or moderate sample sizes; for formula_20 the bias is below 1%. Thus for very large sample sizes, the uncorrected sample standard deviation is generally acceptable. This estimator also has a uniformly smaller mean squared error than the corrected sample standard deviation.
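As a minimal Python sketch (simulation parameters are illustrative, not from the article), the uncorrected estimator and its downward bias can be seen directly:

```python
import math
import random

def uncorrected_sd(xs):
    """Uncorrected sample standard deviation: divide by N, not N - 1."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / n)

# Average many small-sample estimates to expose the downward bias.
random.seed(0)
true_sigma = 1.0
estimates = [uncorrected_sd([random.gauss(0, true_sigma) for _ in range(5)])
             for _ in range(20000)]
mean_estimate = sum(estimates) / len(estimates)
print(mean_estimate)  # noticeably below 1.0 for N = 5
```

With only five observations per sample, the average estimate falls well short of the true value, illustrating why the bias matters mainly for small samples.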
If the "biased sample variance" (the second central moment of the sample, which is a downward-biased estimate of the population variance) is used to compute an estimate of the population's standard deviation, the result is
Here taking the square root introduces further downward bias, by Jensen's inequality, due to the square root's being a concave function. The bias in the variance is easily corrected, but the bias from the square root is more difficult to correct, and depends on the distribution in question.
An unbiased estimator for the "variance" is given by applying Bessel's correction, using "N" − 1 instead of "N" to yield the "unbiased sample variance," denoted "s"²:
This estimator is unbiased if the variance exists and the sample values are drawn independently with replacement. "N" − 1 corresponds to the number of degrees of freedom in the vector of deviations from the mean, formula_23
Taking square roots reintroduces bias (because the square root is a nonlinear function which does not commute with the expectation, i.e. often formula_24), yielding the "corrected sample standard deviation," denoted by "s:"
As explained above, while "s"² is an unbiased estimator for the population variance, "s" is still a biased estimator for the population standard deviation, though markedly less biased than the uncorrected sample standard deviation. This estimator is commonly used and generally known simply as the "sample standard deviation". The bias may still be large for small samples ("N" less than 10), but it decreases as the sample size increases: with more information, the difference between formula_26 and formula_27 becomes smaller.
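The two divisors can be compared directly; Python's standard library provides both estimators (the data set here is illustrative):

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Uncorrected (divide by N) vs. Bessel-corrected (divide by N - 1).
pop_sd = statistics.pstdev(data)    # sqrt(sum of squared deviations / N)
sample_sd = statistics.stdev(data)  # sqrt(sum of squared deviations / (N - 1))

print(pop_sd)     # 2.0
print(sample_sd)  # ~2.138
```

The corrected value is always the larger of the two, reflecting the upward adjustment that compensates for the downward bias.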
For unbiased estimation of standard deviation, there is no formula that works across all distributions, unlike for the mean and variance. Instead, "s" is used as a basis, and is scaled by a correction factor to produce an unbiased estimate. For the normal distribution, an unbiased estimator is given by "s"/"c"₄, where the correction factor "c"₄ (which depends on "N") is given in terms of the Gamma function, and equals:
This arises because the sampling distribution of the sample standard deviation follows a (scaled) chi distribution, and the correction factor is the mean of the chi distribution.
The error in this approximation decays quadratically (as 1/"N"²), and it is suited for all but the smallest samples or highest precision: for "N" = 3 the bias is equal to 1.3%, and for "N" = 9 the bias is already less than 0.1%.
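For the normal case, the correction factor can be written explicitly with the Gamma function. The sketch below uses the standard expression c₄("N") = √(2/("N" − 1)) · Γ("N"/2) / Γ(("N" − 1)/2); the function name follows the usual notation:

```python
import math

def c4(n):
    """Correction factor for normally distributed samples:
    E[s] = c4(N) * sigma, so s / c4(N) is unbiased for sigma."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

for n in (2, 5, 10, 100):
    print(n, c4(n))  # approaches 1 as N grows
```

As expected, the factor is far from 1 for tiny samples and negligible for large ones.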
For other distributions, the correct formula depends on the distribution, but a rule of thumb is to use the further refinement of the approximation:
where "γ" denotes the population excess kurtosis. The excess kurtosis may be either known beforehand for certain distributions, or estimated from the data.
The standard deviation we obtain by sampling a distribution is itself not absolutely accurate, both for mathematical reasons (explained here by the confidence interval) and for practical reasons of measurement (measurement error). The mathematical effect can be described by the confidence interval or CI.
To show how a larger sample will make the confidence interval narrower, consider the following examples:
A small population of "N" = 2 has only 1 degree of freedom for estimating the standard deviation. The result is that a 95% CI of the SD runs from 0.45 × SD to 31.9 × SD; the factors here are as follows:
where formula_34 is the "p"-th quantile of the chi-square distribution with "k" degrees of freedom, and formula_35 is the confidence level. This is equivalent to the following:
With "k" = 1, formula_37 and formula_38. The reciprocals of the square roots of these two numbers give us the factors 0.45 and 31.9 given above.
A larger population of "N" = 10 has 9 degrees of freedom for estimating the standard deviation. The same computations as above give us in this case a 95% CI running from 0.69 × SD to 1.83 × SD. So even with a sample population of 10, the actual SD can still be almost a factor of 2 higher than the sampled SD. For a sample population of "N" = 100, this is down to 0.88 × SD to 1.16 × SD. To be more certain that the sampled SD is close to the actual SD, we need to sample a large number of points.
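The "N" = 10 factors can be reproduced numerically. The sketch below swaps in the Wilson–Hilferty approximation for the chi-square quantile (an assumption made here to stay within the standard library; an exact routine such as `scipy.stats.chi2.ppf` could be used instead):

```python
import math
from statistics import NormalDist

def chi2_quantile(p, k):
    """Wilson-Hilferty approximation to the chi-square p-quantile
    with k degrees of freedom (accurate for moderate k)."""
    z = NormalDist().inv_cdf(p)
    return k * (1 - 2 / (9 * k) + z * math.sqrt(2 / (9 * k))) ** 3

k = 9            # degrees of freedom for N = 10
alpha = 0.05     # 95% confidence level
lo = math.sqrt(k / chi2_quantile(1 - alpha / 2, k))
hi = math.sqrt(k / chi2_quantile(alpha / 2, k))
print(lo, hi)    # about 0.69 and 1.83: the CI runs from 0.69*SD to 1.83*SD
```

The approximation is poor for "k" = 1, so the "N" = 2 factors (0.45 and 31.9) would need an exact quantile routine.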
These same formulae can be used to obtain confidence intervals on the variance of residuals from a least squares fit under standard normal theory, where "k" is now the number of degrees of freedom for error.
For a set of "N" > 4 data spanning a range of values "R", an upper bound on the standard deviation "s" is given by "s" ≤ 0.6"R".
An estimate of the standard deviation for "N" > 100 data taken to be approximately normal follows from the heuristic that 95% of the area under the normal curve lies roughly two standard deviations to either side of the mean, so that, with 95% probability, the total range of values "R" represents four standard deviations, giving "s" ≈ "R"/4. This so-called range rule is useful in sample size estimation, as the range of possible values is easier to estimate than the standard deviation. Other divisors "K"("N") of the range, such that "s" ≈ "R"/"K"("N"), are available for other values of "N" and for non-normal distributions.
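The range rule can be tried on simulated data (a rough sketch; the sample size and parameters are illustrative, and the rule is only a crude heuristic):

```python
import random
import statistics

random.seed(1)
sample = [random.gauss(50, 10) for _ in range(120)]

r = max(sample) - min(sample)
range_rule_sd = r / 4  # heuristic: the range spans roughly four SDs
actual_sd = statistics.stdev(sample)
print(range_rule_sd, actual_sd)
```

The rule gives the right order of magnitude, which is all it is intended for when planning a sample size.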
The standard deviation is invariant under changes in location, and scales directly with the scale of the random variable. Thus, for a constant "c" and random variables "X" and "Y":
The standard deviation of the sum of two random variables can be related to their individual standard deviations and the covariance between them:
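Both properties can be verified numerically (the data and constant below are illustrative); the sum identity sd("X" + "Y") = √(var "X" + var "Y" + 2 cov("X", "Y")) holds exactly on any data set:

```python
import math
import random
import statistics

random.seed(2)
n = 10000
xs = [random.gauss(0, 3) for _ in range(n)]
ys = [random.gauss(0, 4) for _ in range(n)]

def pvar(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

def pcov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

c = -2.5
shift_sd = statistics.pstdev([x + c for x in xs])  # equals sd(X): shift-invariant
scale_sd = statistics.pstdev([c * x for x in xs])  # equals |c| * sd(X)
sum_sd = statistics.pstdev([x + y for x, y in zip(xs, ys)])
combined = math.sqrt(pvar(xs) + pvar(ys) + 2 * pcov(xs, ys))
print(shift_sd, scale_sd, sum_sd, combined)
```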
The calculation of the sum of squared deviations can be related to moments calculated directly from the data. In the following formula, the letter E is interpreted to mean expected value, i.e., mean.
which means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value.
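This identity, σ = √(E["X"²] − (E["X"])²), can be checked against the deviations-from-mean computation (data set illustrative):

```python
import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)

mean = sum(data) / n
mean_of_squares = sum(x * x for x in data) / n

# sigma = sqrt(average of squares - square of the average)
sd = math.sqrt(mean_of_squares - mean ** 2)
print(sd)  # 2.0, matching the direct deviations-from-mean computation
```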
See computational formula for the variance for proof, and for an analogous result for the sample standard deviation.
A large standard deviation indicates that the data points can spread far from the mean and a small standard deviation indicates that they are clustered closely around the mean.
For example, each of the three populations {0, 0, 14, 14}, {0, 6, 8, 14} and {6, 6, 8, 8} has a mean of 7. Their standard deviations are 7, 5, and 1, respectively. The third population has a much smaller standard deviation than the other two because its values are all close to 7. These standard deviations have the same units as the data points themselves. If, for instance, the data set {0, 6, 8, 14} represents the ages of a population of four siblings in years, the standard deviation is 5 years. As another example, the population {1000, 1006, 1008, 1014} may represent the distances traveled by four athletes, measured in meters. It has a mean of 1007 meters, and a standard deviation of 5 meters.
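The three populations above can be checked directly with the standard library:

```python
import statistics

populations = [[0, 0, 14, 14], [0, 6, 8, 14], [6, 6, 8, 8]]
for pop in populations:
    print(statistics.mean(pop), statistics.pstdev(pop))
# each mean is 7; the standard deviations are 7, 5, and 1
```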
Standard deviation may serve as a measure of uncertainty. In physical science, for example, the reported standard deviation of a group of repeated measurements gives the precision of those measurements. When deciding whether measurements agree with a theoretical prediction, the standard deviation of those measurements is of crucial importance: if the mean of the measurements is too far away from the prediction (with the distance measured in standard deviations), then the theory being tested probably needs to be revised. This makes sense because such measurements fall outside the range of values that could reasonably be expected to occur if the prediction were correct and the standard deviation appropriately quantified. See prediction interval.
While the standard deviation does measure how far typical values tend to be from the mean, other measures are available. An example is the mean absolute deviation, which might be considered a more direct measure of average distance, compared to the root mean square distance inherent in the standard deviation.
The practical value of understanding the standard deviation of a set of values is in appreciating how much variation there is from the average (mean).
For example, in industrial applications the weight of products coming off a production line may need to comply with a legally required value. By weighing some fraction of the products, an average weight can be found, which will always be slightly different from the long-term average. By using standard deviations, a minimum and maximum value can be calculated such that the averaged weight will fall within the range some very high percentage of the time (99.9% or more). If it falls outside the range, then the production process may need to be corrected. Statistical tests such as these are particularly important when the testing is relatively expensive, for example when the product needs to be opened, drained, and weighed, or when the product is otherwise used up by the test.
In experimental science, a theoretical model of reality is used. Particle physics conventionally uses a standard of "5 sigma" for the declaration of a discovery. A five-sigma level translates to one chance in 3.5 million that a random fluctuation would yield the result. This level of certainty was required in order to assert that a particle consistent with the Higgs boson had been discovered in two independent experiments at CERN, and it was also the significance level leading to the declaration of the first observation of gravitational waves.
As a simple example, consider the average daily maximum temperatures for two cities, one inland and one on the coast. It is helpful to understand that the range of daily maximum temperatures for cities near the coast is smaller than for cities inland. Thus, while these two cities may each have the same average maximum temperature, the standard deviation of the daily maximum temperature for the coastal city will be less than that of the inland city as, on any particular day, the actual maximum temperature is more likely to be farther from the average maximum temperature for the inland city than for the coastal one.
In finance, standard deviation is often used as a measure of the risk associated with price-fluctuations of a given asset (stocks, bonds, property, etc.), or the risk of a portfolio of assets (actively managed mutual funds, index mutual funds, or ETFs). Risk is an important factor in determining how to efficiently manage a portfolio of investments because it determines the variation in returns on the asset and/or portfolio and gives investors a mathematical basis for investment decisions (known as mean-variance optimization). The fundamental concept of risk is that as it increases, the expected return on an investment should increase as well, an increase known as the risk premium. In other words, investors should expect a higher return on an investment when that investment carries a higher level of risk or uncertainty. When evaluating investments, investors should estimate both the expected return and the uncertainty of future returns. Standard deviation provides a quantified estimate of the uncertainty of future returns.
For example, assume an investor had to choose between two stocks. Stock A over the past 20 years had an average return of 10 percent, with a standard deviation of 20 percentage points (pp), while Stock B, over the same period, had average returns of 12 percent but a higher standard deviation of 30 pp. On the basis of risk and return, an investor may decide that Stock A is the safer choice, because Stock B's additional two percentage points of return are not worth the additional 10 pp standard deviation (greater risk or uncertainty of the expected return). Stock B is likely to fall short of the initial investment (but also to exceed the initial investment) more often than Stock A under the same circumstances, and is estimated to return only two percent more on average. In this example, Stock A is expected to earn about 10 percent, plus or minus 20 pp (a range of 30 percent to −10 percent), in about two-thirds of future years. When considering more extreme possible returns or outcomes, an investor should expect results of as much as 10 percent plus or minus 60 pp, or a range from 70 percent to −50 percent, which covers outcomes within three standard deviations of the average return (about 99.7 percent of probable returns).
Calculating the average (or arithmetic mean) of the return of a security over a given period will generate the expected return of the asset. For each period, subtracting the expected return from the actual return results in the difference from the mean. Squaring the difference in each period and taking the average gives the overall variance of the return of the asset. The larger the variance, the greater risk the security carries. Finding the square root of this variance will give the standard deviation of the investment tool in question.
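The procedure just described can be sketched step by step (the return series below is hypothetical):

```python
import math

# Hypothetical yearly returns (in percent) for an asset.
returns = [4.0, -2.0, 10.0, 6.0, 2.0]

# 1. Average return = expected return.
expected_return = sum(returns) / len(returns)
# 2. Differences from the mean, squared and averaged = variance.
deviations = [r - expected_return for r in returns]
variance = sum(d * d for d in deviations) / len(deviations)
# 3. Square root of the variance = standard deviation (the risk measure).
risk = math.sqrt(variance)
print(expected_return, risk)
```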
Population standard deviation is used to set the width of Bollinger Bands, a technical analysis tool. For example, the upper Bollinger Band is given as formula_46. The most commonly used value for "n" is 2; there is about a five percent chance of the price going outside the bands, assuming a normal distribution of returns.
Financial time series are known to be non-stationary series, whereas the statistical calculations above, such as standard deviation, apply only to stationary series. To apply the above statistical tools to non-stationary series, the series first must be transformed to a stationary series, enabling use of statistical tools that now have a valid basis from which to work.
To gain some geometric insights and clarification, we will start with a population of three values, "x"₁, "x"₂, "x"₃. This defines a point "P" = ("x"₁, "x"₂, "x"₃) in R³. Consider the line "L" = {("r", "r", "r") : "r" ∈ R}. This is the "main diagonal" going through the origin. If our three given values were all equal, then the standard deviation would be zero and "P" would lie on "L". So it is not unreasonable to assume that the standard deviation is related to the "distance" of "P" to "L". That is indeed the case. To move orthogonally from "L" to the point "P", one begins at the point:
A little algebra shows that the distance between "P" and "M" (which is the same as the orthogonal distance between "P" and the line "L"), formula_56, is equal to the standard deviation of the vector ("x"₁, "x"₂, "x"₃), multiplied by the square root of the number of dimensions of the vector (3 in this case).
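This geometric fact is easy to verify numerically (the three values below are illustrative); "M" is the projection of "P" onto the diagonal, whose coordinates are all equal to the mean:

```python
import math
import statistics

x = (3.0, 7.0, 14.0)
mean = statistics.mean(x)

# M is the orthogonal projection of P onto the main diagonal L.
M = (mean, mean, mean)
dist = math.sqrt(sum((xi - mi) ** 2 for xi, mi in zip(x, M)))

# The distance equals pstdev(x) * sqrt(3), the number of dimensions.
print(dist, statistics.pstdev(x) * math.sqrt(3))
```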
An observation is rarely more than a few standard deviations away from the mean. Chebyshev's inequality ensures that, for all distributions for which the standard deviation is defined, the amount of data within a number of standard deviations of the mean is at least as much as given in the following table.
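Chebyshev's bound, that at least 1 − 1/"k"² of the data lies within "k" standard deviations of the mean, can be checked on an arbitrary (here deliberately skewed, illustrative) data set:

```python
import statistics

# A skewed data set; Chebyshev's inequality holds for any distribution.
data = [1, 1, 1, 2, 2, 3, 3, 4, 5, 20]
mu = statistics.mean(data)
sigma = statistics.pstdev(data)

for k in (2, 3):
    within = sum(1 for x in data if abs(x - mu) < k * sigma) / len(data)
    bound = 1 - 1 / k**2
    print(k, within, bound)  # observed fraction is at least the bound
```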
The central limit theorem states that the distribution of an average of many independent, identically distributed random variables tends toward the famous bell-shaped normal distribution with a probability density function of
where "μ" is the expected value of the random variables, "σ" equals their distribution's standard deviation divided by the square root of "n", and "n" is the number of random variables. The standard deviation therefore is simply a scaling variable that adjusts how broad the curve will be, though it also appears in the normalizing constant.
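The σ/√"n" scaling can be checked by simulation (the parameters below are illustrative): the spread of sample means shrinks with the square root of the number of variables averaged.

```python
import math
import random
import statistics

random.seed(3)
n = 50          # number of variables averaged
sigma = 10.0    # standard deviation of each variable

# The standard deviation of the sample mean should be near sigma / sqrt(n).
means = [statistics.mean(random.gauss(0, sigma) for _ in range(n))
         for _ in range(5000)]
print(statistics.pstdev(means), sigma / math.sqrt(n))
```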
If a data distribution is approximately normal, then the proportion of data values within "z" standard deviations of the mean is defined by:
where formula_59 is the error function. The proportion that is less than or equal to a number, "x", is given by the cumulative distribution function:
If a data distribution is approximately normal then about 68 percent of the data values are within one standard deviation of the mean (mathematically, "μ" ± "σ", where "μ" is the arithmetic mean), about 95 percent are within two standard deviations ("μ" ± 2"σ"), and about 99.7 percent lie within three standard deviations ("μ" ± 3"σ"). This is known as the "68–95–99.7 rule", or "the empirical rule".
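The three percentages follow directly from the error function, since the proportion within "z" standard deviations is erf("z"/√2):

```python
import math

# Proportion of a normal distribution within z standard deviations:
# P(|X - mu| <= z*sigma) = erf(z / sqrt(2))
for z in (1, 2, 3):
    print(z, math.erf(z / math.sqrt(2)))
# roughly 0.6827, 0.9545, 0.9973: the 68-95-99.7 rule
```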
For various values of "z", the percentage of values expected to lie in and outside the symmetric interval, CI = (−"zσ", "zσ"), are as follows:
The mean and the standard deviation of a set of data are descriptive statistics usually reported together. In a certain sense, the standard deviation is a "natural" measure of statistical dispersion if the center of the data is measured about the mean. This is because the standard deviation from the mean is smaller than from any other point. The precise statement is the following: suppose "x"₁, ..., "x"ₙ are real numbers and define the function:
Using calculus or by completing the square, it is possible to show that "σ"("r") has a unique minimum at the mean:
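The minimization property can be checked numerically; a small sketch with illustrative data, evaluating the root-mean-square deviation σ("r") about several points:

```python
import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)

def rms_deviation(r):
    """sigma(r): root-mean-square deviation of the data about the point r."""
    return math.sqrt(sum((x - r) ** 2 for x in data) / n)

mean = sum(data) / n
# sigma(r) is strictly larger at any point other than the mean.
print(rms_deviation(mean), rms_deviation(mean + 1), rms_deviation(mean - 2))
```

Completing the square shows why: σ("r")² = σ² + ("r" − mean)², so the value at mean + 1 is √(σ² + 1).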
Variability can also be measured by the coefficient of variation, which is the ratio of the standard deviation to the mean. It is a dimensionless number.
Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:
where "N" is the number of observations in the sample used to estimate the mean. This can easily be proven with (see basic properties of the variance):
In order to estimate the standard deviation of the mean formula_68 it is necessary to know the standard deviation of the entire population formula_69 beforehand. However, in most applications this parameter is unknown. For example, if a series of 10 measurements of a previously unknown quantity is performed in a laboratory, it is possible to calculate the resulting sample mean and sample standard deviation, but it is impossible to calculate the standard deviation of the mean. However, one can estimate the standard deviation of the entire population from the sample, and thus obtain an estimate for the standard error of the mean.
The following two formulas can represent a running (repeatedly updated) standard deviation. A set of two power sums "s"₁ and "s"₂ is computed over the "N" observed values of "x":
Given the results of these running summations, the values "N", "s"₁, "s"₂ can be used at any time to compute the "current" value of the running standard deviation:
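A minimal sketch of this running computation, using the power sums "s"₁ = Σ"x" and "s"₂ = Σ"x"² and the population form σ = √("N"·"s"₂ − "s"₁²)/"N" (class and method names are illustrative):

```python
import math

class RunningStd:
    """Running standard deviation from the two power sums
    s1 = sum(x) and s2 = sum(x**2)."""
    def __init__(self):
        self.n = 0
        self.s1 = 0.0
        self.s2 = 0.0

    def add(self, x):
        self.n += 1
        self.s1 += x
        self.s2 += x * x

    def std(self):
        # population form: sqrt(N*s2 - s1^2) / N
        return math.sqrt(self.n * self.s2 - self.s1 ** 2) / self.n

rs = RunningStd()
for x in [2, 4, 4, 4, 5, 5, 7, 9]:
    rs.add(x)
print(rs.std())  # 2.0
```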
In a computer implementation, as the two "s" sums become large, we need to consider round-off error, arithmetic overflow, and arithmetic underflow. The method below computes the running sums with reduced rounding errors. This is a "one pass" algorithm for calculating variance of "n" samples without the need to store prior data during the calculation. Applying this method to a time series will result in successive values of standard deviation corresponding to "n" data points as "n" grows larger with each new sample, rather than a constant-width sliding window calculation.
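The reduced-rounding-error one-pass method referred to here is commonly implemented as Welford's algorithm; the following is a sketch under that assumption (the function name is illustrative):

```python
import math

def welford_std(stream):
    """One-pass (Welford-style) running standard deviation with
    reduced rounding error; population form (divide by n)."""
    n = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the running mean
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return math.sqrt(m2 / n)

print(welford_std([2, 4, 4, 4, 5, 5, 7, 9]))  # 2.0
```

Unlike the raw power sums, this form never subtracts two large nearly equal quantities, which is where the rounding error arises.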
When the values "x"ᵢ are weighted with unequal weights "w"ᵢ, the power sums "s"₀, "s"₁, "s"₂ are each computed as:
And the standard deviation equations remain unchanged. "s"₀ is now the sum of the weights and not the number of samples "N".
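A sketch of the weighted form, with "s"₀ = Σ"w", "s"₁ = Σ"wx", "s"₂ = Σ"wx"², so that σ = √("s"₀"s"₂ − "s"₁²)/"s"₀ (function name illustrative):

```python
import math

def weighted_std(values, weights):
    """Weighted standard deviation from the power sums
    s0 = sum(w), s1 = sum(w*x), s2 = sum(w*x*x)."""
    s0 = sum(weights)
    s1 = sum(w * x for w, x in zip(weights, values))
    s2 = sum(w * x * x for w, x in zip(weights, values))
    return math.sqrt(s0 * s2 - s1 ** 2) / s0

# With all weights equal to one, this reduces to the unweighted formula.
print(weighted_std([2, 4, 4, 4, 5, 5, 7, 9], [1] * 8))  # 2.0
```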
The incremental method with reduced rounding errors can also be applied, with some additional complexity.
These formulas reduce to the simpler unweighted formulas given above when the weights are all taken equal to one.
The term "standard deviation" was first used in writing by Karl Pearson in 1894, following his use of it in lectures. This was as a replacement for earlier alternative names for the same idea: for example, Gauss used "mean error".
In two dimensions, the standard deviation can be illustrated with the standard deviation ellipse (see "Multivariate normal distribution § Geometric interpretation").
The Dunning–Kruger effect is a cognitive bias whereby people with low ability, expertise, or experience regarding a certain type of task or area of knowledge tend to overestimate their ability or knowledge. Some researchers also include in their definition the opposite effect for high performers: their tendency to underestimate their skills.
https://en.wikipedia.org/wiki?curid=2288777
The Dunning–Kruger effect is usually measured by comparing self-assessment with objective performance. For example, the participants in a study may be asked to complete a quiz and then estimate how well they performed. This subjective assessment is then compared with how well they actually performed. This can happen either in relative or in absolute terms, i.e., in comparison with one's peer group as the percentage of peers outperformed or in comparison with objective standards as the number of questions answered correctly. The Dunning–Kruger effect appears in both cases, but is more pronounced in relative terms; the bottom quartile of performers tend to see themselves as being part of the top two quartiles.
The initial study was published by David Dunning and Justin Kruger in 1999. It focused on logical reasoning, grammar, and social skills. Since then, various other studies have been conducted across a wide range of tasks, including skills from fields such as business, politics, medicine, driving, aviation, spatial memory, examinations in school, and literacy.
The Dunning–Kruger effect is usually explained in terms of metacognitive abilities. This approach is based on the idea that poor performers have not yet acquired the ability to distinguish between good and bad performances. They tend to overrate themselves because they do not see the qualitative difference between their performances and the performances of others. This has also been termed the "dual-burden account", since the lack of skill is paired with the ignorance of this lack. Some researchers include the metacognitive component as part of the definition of the Dunning–Kruger effect and not just as an explanation distinct from it.
Many debates surrounding the Dunning–Kruger effect and criticisms of it focus on the metacognitive explanation without denying the empirical findings. The statistical explanation interprets these findings as statistical artifacts. Some theorists hold that the way low and high performers are distributed makes assessing their skill level more difficult for low performers, thereby explaining their erroneous self-assessments independent of their metacognitive abilities. Another account sees the lack of incentives to give accurate self-assessments as the source of error.
The Dunning–Kruger effect has been described as relevant for various practical matters, but disagreements exist about the magnitude of its influence. Inaccurate self-assessment can lead people to make bad decisions, such as choosing a career for which they are unfit or engaging in dangerous behavior. It may also inhibit the affected from addressing their shortcomings to improve themselves. In some cases, the associated overconfidence may have positive side effects, like increasing motivation and energy.
The Dunning–Kruger effect is defined as the tendency of people with low ability in a specific area to give overly positive assessments of this ability. This is often understood as a cognitive bias, i.e. as a systematic tendency to engage in erroneous forms of thinking and judging. Biases are systematic in the sense that they occur consistently in different situations. They are tendencies since they concern certain inclinations or dispositions that may be observed in groups of people, but are not manifested in every performance. In the case of the Dunning–Kruger effect, this applies mainly to people with low skill in a specific area trying to evaluate their competence within this area. The systematic error concerns their tendency to greatly overestimate their competence or to see themselves as more skilled than they are.
Some researchers emphasize the metacognitive component in their definition. In this view, the Dunning–Kruger effect is the thesis that those who are incompetent in a given area tend to be ignorant of their incompetence, i.e. they lack the metacognitive ability to become aware of their incompetence. This definition lends itself to a simple explanation of the effect; incompetence often includes being unable to tell the difference between competence and incompetence, which is why it is difficult for the incompetent to recognize their incompetence. This is sometimes termed the "dual-burden" account since two burdens come paired: the lack of skill and the ignorance of this lack. But most definitions focus on the tendency to overestimate one's ability and see the relation to metacognition as a possible explanation independent of one's definition. This distinction is relevant since the metacognitive explanation is controversial and various criticisms of the Dunning–Kruger effect target this explanation, but not the effect itself when defined in the narrow sense.
The Dunning–Kruger effect is usually defined specifically for the self-assessments of people with a low level of competence. Some definitions, though, do not restrict it to the bias of people with low skill, and instead see it as pertaining to false self-evaluations on different skill levels. So it is sometimes claimed to include the reverse effect for people with high skill. On this view, the Dunning–Kruger effect also concerns the tendency of highly skilled people to underestimate their abilities relative to the abilities of others. Arguably, the source of this error is not the self-assessment of one's skills, but an overly positive assessment of the skills of others. This phenomenon has been categorized as a form of the false-consensus effect.
The most common approach to measuring the Dunning–Kruger effect is to compare self-assessment with objective performance. The self-assessment is sometimes called "subjective ability" in contrast to the "objective ability" corresponding to the actual performance. The self-assessment may be done before or after the performance. If done afterward, it is important that the participants receive no independent clues during the performance as to how well they did. So, if the activity involves answering quiz questions, no feedback is given as to whether a given answer was correct. The measurement of the subjective and the objective abilities can be in absolute or relative terms. When done in absolute terms, self-assessment and performance are measured according to absolute standards, e.g. concerning how many quiz questions were answered correctly. When done in relative terms, the results are compared with a peer group. In this case, participants are asked to assess their performances in relation to the other participants, for example in the form of estimating the percentage of peers they outperformed. The Dunning–Kruger effect is present in both cases, but tends to be significantly more pronounced when done in relative terms. So people are usually more accurate when predicting their raw score than when assessing how well they did relative to their peer group.
The main point of interest for researchers is usually the correlation between subjective and objective ability. To provide a simplified form of analysis of the measurements, objective performances are often divided into four groups, starting from the bottom quartile of low performers to the top quartile of high performers. The strongest effect is seen for the participants in the bottom quartile, who tend to see themselves as being part of the top two quartiles when measured in relative terms. Some researchers focus their analysis on the difference between the two abilities, i.e. on subjective ability minus objective ability, to highlight the negative correlation.
https://en.wikipedia.org/wiki?curid=2288777
5,275
The Dunning–Kruger effect has been studied across a wide range of tasks. The initial study focused on logical reasoning, grammar skills, and social abilities, such as emotional intelligence and judging which jokes are funny. While many studies are conducted in laboratories, others take place in real-world settings. The latter include assessing the knowledge hunters have of firearms and safety or laboratory technicians' knowledge of medical lab procedures. More recent studies have also engaged in large-scale attempts to collect the relevant data online. Various studies focus on students, for example by asking them to self-assess their performance just after completing an exam. In some cases, these studies gather and compare data from many different countries. Other fields of research include business, politics, medicine, driving skills, aviation, spatial memory, literacy, debating skills, and chess.
https://en.wikipedia.org/wiki?curid=2288777
5,276
David Dunning and Justin Kruger published the initial study in 1999 under the title "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments". It examines the performance and self-assessment of undergraduate students of introductory courses in psychology in the fields of inductive, deductive, and abductive logical reasoning, English grammar, and personal sense of humor. Across four studies, the research indicates that the participants who scored in the bottom quartile overestimated their test performance and their abilities; despite test scores that placed them in the 12th percentile, the participants estimated they ranked in the 62nd percentile. It proposes the metacognitive explanation of the observed tendencies and points out that training in a task, such as solving a logic puzzle, increases people's ability to accurately evaluate how good they are at it. It does not yet contain the term "Dunning–Kruger effect", which was introduced later. Dunning was inspired to engage in this research after reading a newspaper report about incompetent bank robbers and set up a research program soon afterward together with Kruger, who was his graduate student at the time.
https://en.wikipedia.org/wiki?curid=2288777
5,277
Dunning, Kruger, and various other researchers published many subsequent studies. In the 2003 paper "Why People Fail to Recognize Their Own Incompetence", the relation between incorrect self-assessment of competence and the person's ignorance of a given activity's standards of performance is discussed. The 2003 study "How Chronic Self-Views Influence (and Potentially Mislead) Estimates of Performance" examines how a person's self-view causes inaccurate self-assessments of their abilities and why such misperceptions are maintained. The 2004 study "Mind-Reading and Metacognition: Narcissism, not Actual Competence, Predicts Self-estimated Ability" extends the research to test subjects' emotional sensitivity to other people and their own perceptions of them. In the 2005 paper "Self-insight: Roadblocks and Detours on the Path to Knowing Thyself", Dunning calls the Dunning–Kruger effect "the anosognosia of everyday life", referring to a neurological condition in which disabled persons either deny or seem unaware of their disability. He writes: "If you're incompetent, you can't know you're incompetent ... The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is."
https://en.wikipedia.org/wiki?curid=2288777
5,278
The 2006 study "Skilled or Unskilled, but Still Unaware of It: How Perceptions of Difficulty Drive Miscalibration in Relative Comparisons" tries to show that it is not true of all activities that poor performers give more inaccurate self-assessments than strong performers. The study investigates 13 different tasks and concludes that the Dunning–Kruger effect obtains only in tasks that feel easy. Nonetheless, the 2008 study "Why the Unskilled are Unaware: Further Explorations of (Absent) Self-insight Among the Incompetent" applies the research to many additional fields and confirms that the Dunning–Kruger effect is seen in a great variety of tasks. In his 2011 article "The Dunning–Kruger Effect: On Being Ignorant of One's Own Ignorance", Dunning summarizes many of the earlier studies and reasserts the metacognitive explanation of these findings. As he writes, "[i]n short, those who are incompetent, for lack of a better term, should have little insight into their incompetence—an assertion that has come to be known as the Dunning–Kruger effect". In 2014, Dunning and Helzer wrote that the Dunning–Kruger effect "suggests that poor performers are not in a position to recognize the shortcomings in their performance" but added that self-assessment can be improved by becoming a better performer. A 2022 study found, consistent with the Dunning–Kruger effect, that people who reject the scientific consensus on issues think they know the most about them but actually know the least. The study assessed this on climate change, genetically modified organisms, vaccines, nuclear power, homeopathy, evolution, the Big Bang theory, and COVID-19.
https://en.wikipedia.org/wiki?curid=2288777
5,279
Various explanations have been proposed to account for the Dunning–Kruger effect. The initial and most common account is based on metacognitive abilities. It rests on the assumption that part of acquiring a skill consists in learning to distinguish between good and bad performances of that skill. Since people with low skill have not yet acquired this discriminatory ability, they are unable to properly assess their performance. This leads them to believe that they are better than they are because they do not see the qualitative difference between their own performances and those of others. So they lack the metacognitive ability to recognize their incompetence. This account has also been called the "dual-burden account" or the "double-burden of incompetence" since the burden of regular incompetence is paired with the burden of metacognitive incompetence. It is usually combined with the thesis that the relevant metacognitive abilities are acquired as one's skill level increases. But the metacognitive lack may also hinder some people from becoming better by hiding their flaws from them. This can then be used to explain how self-confidence is sometimes higher for unskilled people than for people with average skill: only the latter are aware of their flaws. Some attempts have been made to measure metacognitive abilities directly to confirm this hypothesis. The findings suggest that a reduced metacognitive sensitivity exists among poor performers, but it is not clear that its extent is sufficient to explain the Dunning–Kruger effect. An indirect argument for the metacognitive account is based on the observation that training people in logical reasoning helps them make more accurate self-assessments.
https://en.wikipedia.org/wiki?curid=2288777
5,280
Not everyone agrees with the assumptions on which the metacognitive account is based. Many criticisms of the Dunning–Kruger effect have the metacognitive account as their main focus, but agree with the empirical findings themselves. This line of argument usually proceeds by providing an alternative approach that promises a better explanation of the observed tendencies. Some explanations focus only on one specific factor, while others see a combination of various factors as the source. One such account is based on the idea that both low and high performers have in general the same metacognitive ability to assess their skill level. But given the assumption that the skill levels of many low performers are very close to each other, i.e., that "many people [are] piled up at the bottom rungs of skill level", they find themselves in a more difficult position to assess their skills in relation to their peers. So, the reason for the increased tendency to give false self-assessments is not lack of metacognitive ability, but a more challenging situation in which this ability is applied. Thus, the increased error can be explained without a dual-burden account. One criticism of this approach is directed against the assumption that this type of distribution of skill levels can always be used as an explanation. While it can be found in various fields where the Dunning–Kruger effect has been researched, it is not present in all of them. Another criticism rests on the fact that this account can explain the Dunning–Kruger effect only when the self-assessment is measured relative to one's peer group, not when measured relative to absolute standards.
https://en.wikipedia.org/wiki?curid=2288777
5,281
Another account, sometimes given by theorists with an economic background, focuses on the fact that participants in the corresponding studies usually lack incentive to give accurate self-assessments. In such cases, intellectual laziness or a desire to look good to the experimenter may motivate participants to give overly positive self-assessments. For this reason, some studies were conducted with additional incentives to be accurate. One study, for example, gave participants a monetary reward based on how accurate their self-assessments were. These studies failed to show any significant increase in accuracy for the incentive group in contrast to the control group.
https://en.wikipedia.org/wiki?curid=2288777
5,282
A different approach, further removed from psychological explanations, sees the Dunning–Kruger effect as mainly a statistical artifact. It is based on the idea that the statistical effect known as regression toward the mean accounts for the empirical findings. In the case of the quality of performances, this effect rests on the idea that the quality of a given performance depends not just on the agent's skill level, but also on the good or bad luck involved on an occasion. So, even if participants with average skill give an accurate self-assessment of their skill, their performance may be unlucky on that occasion, putting them in the category of low performers who overestimated their skill. According to this approach, the randomness of luck explains the discrepancy between self-assessed ability and objective performance, especially in extreme cases.
https://en.wikipedia.org/wiki?curid=2288777
5,283
Most researchers acknowledge that regression toward the mean is a relevant statistical effect that must be taken into account when interpreting the empirical findings. This can be achieved by various methods. But such adjustments do not eliminate the Dunning–Kruger effect, which is why the view that regression toward the mean is sufficient to explain it is usually rejected. However, it has been suggested that regression toward the mean in combination with other cognitive biases, like the better-than-average effect, can almost completely explain the empirical findings. This type of explanation is sometimes called "noise plus bias". According to the better-than-average effect, people have a general tendency to rate their abilities, attributes, and personality traits as better than average. For example, the average IQ is 100, but people on average think their IQ is 115. The better-than-average effect differs from the Dunning–Kruger effect since it does not track how this overly positive outlook relates to the skill of the people assessing themselves, while the Dunning–Kruger effect mainly focuses on how this type of misjudgment happens for poor performers. When the better-than-average effect is paired with regression toward the mean, it can explain both that unskilled people tend to greatly overestimate their competence and that the reverse effect for highly skilled people is much less pronounced. By choosing the right variables for the randomness due to luck and a positive offset to account for the better-than-average effect, it is possible to simulate experiments that show almost the same relation between self-assessed ability and objective performance as the empirical findings. This means that the Dunning–Kruger effect may still have a role to play, if only a minor one.
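A minimal "noise plus bias" simulation can be sketched as follows. The skill distribution, noise level, and flat +10 offset are illustrative assumptions, not parameters from any published study; the point is only that the model contains no skill-dependent misjudgment, yet grouping by measured score reproduces the asymmetric pattern:

```python
import random
from statistics import mean

random.seed(0)  # deterministic run for reproducibility

N = 10_000
# Each simulated participant has a latent skill. The measured score adds
# luck (noise), and the self-assessment adds a flat better-than-average
# offset -- there is no skill-dependent misjudgment anywhere in the model.
skill = [random.gauss(50, 15) for _ in range(N)]
score = [s + random.gauss(0, 15) for s in skill]  # skill plus luck
self_est = [s + 10 for s in skill]                # uniform +10 bias

# Group participants by measured-score quartile, as the studies do.
order = sorted(range(N), key=lambda i: score[i])
gaps = []
for q in range(4):
    idx = order[q * N // 4:(q + 1) * N // 4]
    gaps.append(mean(self_est[i] - score[i] for i in idx))

for q, gap in enumerate(gaps, start=1):
    print(f"Quartile {q}: mean self-estimate minus score = {gap:+.1f}")
```

Because the bottom-quartile scores are low partly through bad luck, the latent skill of that group regresses toward the mean, so its self-estimates sit far above its scores; the effect shrinks and reverses toward the top quartile, mirroring the asymmetry in the empirical data even though the simulated bias is uniform.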
https://en.wikipedia.org/wiki?curid=2288777
5,284
Opponents of this approach have argued that this explanation can account for the Dunning–Kruger effect only when assessing one's ability relative to one's peer group, not when the self-assessment is relative to an objective standard. But even proponents of this explanation agree that this does not explain the empirical findings in full. Moreover, questions have been raised about whether the conditions under which the Dunning–Kruger effect is assessed meet the criteria for a regression-toward-the-mean explanation; in short, regression toward the mean occurs under conditions of repeated assessment, which is not a feature of a Dunning–Kruger effect experiment. Another statistical-artifact-based challenge to the Dunning–Kruger effect is the demonstration that a form of the effect can emerge when the errors of the self-assessment are randomly created. But the feature of the Dunning–Kruger effect that is not present in analyses of random data is the finding that the magnitude of the errors of self-assessment is larger for those with a low score on the performance assessment than for high scorers on that assessment.
https://en.wikipedia.org/wiki?curid=2288777
5,285
Various claims have been made about the Dunning–Kruger effect's practical significance or why it matters. They often focus on how it causes the affected people to make decisions that lead to dire consequences for them or others. This is especially relevant for decisions that have long-term effects. For example, it can lead poor performers into careers for which they are unfit. High performers underestimating their skills, though, may forego viable career opportunities matching their skills in favor of less promising ones that are below their skill level. In other cases, the wrong decisions can also have severe short-term effects, as when overconfidence leads pilots to operate a new aircraft for which they lack adequate training or to engage in flight maneuvers that exceed their proficiency. Emergency medicine is another area where the correct assessment of one's skills and the risks of treatment is of central importance. The tendencies of physicians in training to be overconfident must be considered to ensure the appropriate degree of supervision and feedback. The Dunning–Kruger effect can also have negative implications for the agent in various economic activities, in which the price of a good, such as a used car, is often lowered by the buyers' uncertainty about its quality. An overconfident agent unaware of their lack of knowledge may be willing to pay a much higher price because they are unaware of all the potential flaws and risks relevant to the price.
https://en.wikipedia.org/wiki?curid=2288777
5,286
Another implication concerns fields in which self-assessments play an essential role in evaluating skills. They are commonly used, for example, in vocational counseling or to estimate the information literacy skills of students and professionals. The Dunning–Kruger effect indicates that such self-assessments often do not correspond to the underlying skills, thereby rendering them unreliable as a method for gathering this type of data. Independent of the field of the skill in question, the metacognitive ignorance often associated with the Dunning–Kruger effect may inhibit low performers from improving themselves. Since they are unaware of many of their flaws, they may have little motivation to address and overcome them.
https://en.wikipedia.org/wiki?curid=2288777
5,287
Not all accounts of the Dunning–Kruger effect focus on its negative sides. Some also concentrate on its positive side, e.g., ignorance can sometimes be bliss. In this sense, optimism can lead people to experience their situation more positively, and overconfidence may help them achieve even unrealistic goals. To distinguish the negative from the positive sides, two phases have been suggested as relevant to realizing a goal: preparatory planning and the execution of the plan. Overconfidence may be beneficial in the execution phase by increasing motivation and energy, but it can be detrimental in the planning phase since the agent may ignore bad odds, take unnecessary risks, or fail to prepare for contingencies. For example, being overconfident may be advantageous for a general on the day of battle because of the additional inspiration passed on to his troops, but disadvantageous in the weeks before by ignoring the need for reserve troops or protective gear.
https://en.wikipedia.org/wiki?curid=2288777
5,288
In 2000, Kruger and Dunning were awarded a satiric Ig Nobel Prize in recognition of the scientific work recorded in "their modest report".
https://en.wikipedia.org/wiki?curid=2288777
5,289
"The Dunning–Kruger Song" is part of "The Incompetence Opera", a mini-opera that premiered at the Ig Nobel Prize ceremony in 2017. The mini-opera is billed as "a musical encounter with the Peter principle and the Dunning–Kruger Effect".
https://en.wikipedia.org/wiki?curid=2288777
5,290
Space Exploration Technologies Corp. (SpaceX) is an American spacecraft manufacturer, launch service provider, and satellite communications company headquartered in Hawthorne, California. It was founded in 2002 by Elon Musk with the stated goal of reducing space transportation costs to enable the colonization of Mars. The company manufactures the Falcon 9, Falcon Heavy, and Starship launch vehicles, several rocket engines, the Cargo Dragon and Crew Dragon spacecraft, and Starlink communications satellites.
https://en.wikipedia.org/wiki?curid=832774
5,291
SpaceX is developing a satellite internet constellation named Starlink to provide commercial internet service. In January 2020, the Starlink constellation became the largest satellite constellation ever launched, and as of December 2022 comprises over 3,300 small satellites in orbit.
https://en.wikipedia.org/wiki?curid=832774
5,292
The company is also developing Starship, a privately funded, fully reusable, super heavy-lift launch system for interplanetary and orbital spaceflight. It is intended to become SpaceX's primary orbital vehicle once operational, supplanting the existing Falcon 9, Falcon Heavy, and Dragon fleet. It will have the highest payload capacity of any orbital rocket ever built on its debut, which is scheduled for early 2023 pending a launch license.
https://en.wikipedia.org/wiki?curid=832774
5,293
SpaceX's achievements include the first privately developed liquid-propellant rocket to reach orbit around Earth; the first private company to successfully launch, orbit, and recover a spacecraft; the first private company to send a spacecraft to the International Space Station; the first vertical take-off and vertical propulsive landing of an orbital rocket booster; the first reuse of such a booster; and the first private company to send astronauts to orbit and to the International Space Station. SpaceX has flown and landed the Falcon 9 series of rockets over one hundred times.
https://en.wikipedia.org/wiki?curid=832774
5,294
In early 2001, Elon Musk donated $100,000 to the Mars Society and joined its board of directors for a short time. He was offered a plenary talk at their convention, where he announced "Mars Oasis", a project to land a miniature experimental greenhouse and grow plants on Mars, to revive public interest in space exploration. Musk initially attempted to acquire a Dnepr ICBM for the project through Russian contacts from Jim Cantrell. However, two months later, the United States withdrew from the ABM Treaty and created the Missile Defense Agency, increasing tensions with Russia and generating new strategic interest in rapid and reusable launch capability similar to the DC-X.
https://en.wikipedia.org/wiki?curid=832774
5,295
When Musk returned to Moscow, Russia with Michael Griffin (who led the CIA's venture capital arm In-Q-Tel), they found the Russians increasingly unreceptive. On the flight home Musk announced that he could start a company to build the affordable rockets they needed instead. By applying vertical integration, using cheap commercial off-the-shelf components when possible, and adopting the modular approach of modern software engineering, Musk believed SpaceX could significantly cut launch price. Griffin would later be appointed NASA administrator and award SpaceX a $396 million contract in 2006 before SpaceX had flown a rocket.
https://en.wikipedia.org/wiki?curid=832774
5,296
In early 2002, Musk started to look for staff for his new space company, soon to be named SpaceX. Musk approached rocket engineer Tom Mueller (later SpaceX's CTO of propulsion) and invited him to become his business partner. Mueller agreed to work for Musk, and thus SpaceX was born. SpaceX was first headquartered in a warehouse in El Segundo, California. Early SpaceX employees such as Tom Mueller (CTO), Gwynne Shotwell (COO) and Chris Thompson (VP of Operations) came from neighboring TRW and Boeing corporations following the cancellation of the Brilliant Pebbles program. By November 2005, the company had 160 employees. Musk personally interviewed and approved all of SpaceX's early employees.
https://en.wikipedia.org/wiki?curid=832774
5,297
Musk has stated that one of his goals with SpaceX is to decrease the cost and improve the reliability of access to space, ultimately by a factor of ten.
https://en.wikipedia.org/wiki?curid=832774
5,298
The total development cost of Falcon 1 was approximately US$90 million to US$100 million. The Falcon name was adopted from the DARPA Falcon Project, part of the Prompt Global Strike program of the US military.
https://en.wikipedia.org/wiki?curid=832774
5,299
In 2005, SpaceX announced plans to pursue a human-rated commercial space program through the end of the decade, a program that would later become the Dragon spacecraft.
https://en.wikipedia.org/wiki?curid=832774