confidence intervals
|
Using binomial confidence intervals for multinomial confidence intervals?
|
https://stats.stackexchange.com/questions/333933/using-binomial-confidence-intervals-for-multinomial-confidence-intervals
|
<p>Below is sample code showing the widths of binomial confidence intervals (using a <a href="https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Normal_approximation_interval" rel="nofollow noreferrer">simple normal approximation</a>) and multinomial "simultaneous confidence intervals" (from <a href="https://cran.r-project.org/web/packages/MultinomialCI/index.html" rel="nofollow noreferrer">MultinomialCI</a>). I picked counts high enough that I hope the simple normal approximation for a 95% confidence interval is reasonable. The multinomial widths are ~0.007, and the binomial widths ~0.006 (narrower?! I would have expected them to be wider).</p>
<p>Can I trust the binomial confidence intervals? It looks like they are more conservative because they don't take advantage of the extra information that the proportions sum to 1.</p>
<p>Some <a href="https://stats.stackexchange.com/questions/74767/multinomial-confidence-interval">previous</a> <a href="https://stats.stackexchange.com/questions/175756/multinomial-proportion-confidence-interval">questions</a> say I can just use the binomial confidence intervals.</p>
<pre><code>library(MultinomialCI)
library(tidyverse)
count_vec = c(23000, 12000, 44000)
m = multinomialCI(count_vec, 0.05)
print(paste("First class: [", m[1,1], m[1,2], "]"));
# [1] "First class: [ 0.28746835443038 0.294817894364861 ]"
print(paste("First width: ", m[1,2] - m[1,1]));
# [1] "First width: 0.0073495399344809"
print(paste("Second class: [", m[2,1], m[2,2], "]"));
# [1] "Second class: [ 0.148227848101266 0.155577388035747 ]"
print(paste("Second width: ", m[2,2] - m[2,1]));
# [1] "Second width: 0.00734953993448095"
print(paste("Third class: [", m[3,1], m[3,2], "]"));
# [1] "Third class: [ 0.553291139240506 0.560640679174987 ]"
print(paste("Third width: ", m[3,2] - m[3,1]));
# [1] "Third width: 0.00734953993448095"
the_counts = data.frame(name=c('a', 'b', 'c'), count=count_vec)
the_counts = the_counts %>% mutate(
frac = count / sum(count),
c95 = 1.96 * sqrt(frac * (1-frac) / sum(count)),
ul = frac + c95,
ll = frac - c95)
> the_counts
  name count      frac         c95        ul        ll
1    a 23000 0.2911392 0.003167914 0.2943072 0.2879713
2    b 12000 0.1518987 0.002502900 0.1544016 0.1493958
3    c 44000 0.5569620 0.003463983 0.5604260 0.5534980
</code></pre>
<p><strong>EDIT</strong>: Or should <code>frac * (1-frac) / count</code> be <code>frac * (1-frac) / sum(count)</code>? That is, should the denominator be all the observations in all the groups? I think no?</p>
<p><strong>EDIT 2</strong>: <a href="https://blogs.sas.com/content/iml/2017/02/15/confidence-intervals-multinomial-proportions.html" rel="nofollow noreferrer">Here</a> is a blog post with six methods to compute multinomial confidence intervals. What I don't need is more suggestions of methods. What I do need is to know whether to trust one or both of the ones I already mentioned. Thanks.</p>
<p><strong>EDIT 3</strong>: <a href="https://stats.stackexchange.com/questions/20555/simultaneous-confidence-intervals-for-multinomial-parameters-for-small-samples#20589">Another related question</a>.</p>
<p><strong>EDIT 4</strong>: The two-class results go the other way: the multinomial intervals are MUCH smaller (~0.0000035) than the binomial intervals (~0.005) in this example. This doesn't make sense; they should be the same for two classes.</p>
<pre><code>count_vec = c(23000, 12000)
m = multinomialCI(count_vec, 0.05)
print(paste("First class: [", m[1,1], m[1,2], "]"));
print(paste("First width: ", m[1,2] - m[1,1]));
print(paste("Second class: [", m[2,1], m[2,2], "]"));
print(paste("Second width: ", m[2,2] - m[2,1]));
# [1] "First class: [ 0.657142857142857 0.657146383075623 ]"
# [1] "First width: 3.5259327655357e-06"
# [1] "Second class: [ 0.342857142857143 0.342860668789908 ]"
# [1] "Second width: 3.52593276548019e-06"
the_counts = data.frame(name=c('a', 'b'), count=count_vec)
the_counts = the_counts %>% mutate(
frac = count / sum(count),
c95 = 1.96 * sqrt(frac * (1-frac) / sum(count)),
ul = frac + c95,
ll = frac - c95)
> the_counts
name count frac c95 ul ll
1 a 23000 0.6571429 0.004972886 0.6621157 0.6521700
2 b 12000 0.3428571 0.004972886 0.3478300 0.3378843
</code></pre>
<p><strong>EDIT 5</strong>: Fix the denominator of the binomial to be <code>sum(counts)</code>.</p>
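<p>One way to judge whether the normal-approximation binomial intervals can be trusted is to check their empirical coverage by simulation. A Python sketch (the proportions and total count below mirror the example; the MultinomialCI side would need the R package, so only the per-class binomial interval is checked):</p>

```python
import numpy as np

# Empirical coverage check for the per-class normal-approximation binomial CI.
rng = np.random.default_rng(0)
p_true = np.array([0.29, 0.15, 0.56])  # class proportions, roughly as in the example
n = 79000                              # total count, as in the example
z = 1.96                               # ~95% normal quantile
reps = 2000

covered = np.zeros(3)
for _ in range(reps):
    counts = rng.multinomial(n, p_true)
    frac = counts / n
    half = z * np.sqrt(frac * (1 - frac) / n)   # binomial normal-approx half-width
    covered += (frac - half <= p_true) & (p_true <= frac + half)

print(covered / reps)   # per-class coverage, each close to the nominal 0.95
```

<p>With counts this large, per-class coverage sits near the nominal level, so the per-class binomial intervals are trustworthy when each class is examined on its own. Simultaneous coverage of all classes at once is a stricter requirement, which is what the multinomial procedure targets.</p>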
|
|
confidence intervals
|
Classical Confidence Intervals vs. Bootstrap Confidence Intervals
|
https://stats.stackexchange.com/questions/595103/classical-confidence-intervals-vs-bootstrap-confidence-intervals
|
<p>Suppose I have some data that includes height and weight measurements for 1000 people - I am interested in calculating the Correlation Coefficient to see if there exists some correlation between height and weight, and if this correlation is statistically significant.</p>
<p>I was curious in learning more about how the Confidence Intervals of the Correlation Coefficient is calculated. When reading about this online, I found some links which included something called the "Fisher Transform" and outlined (what seemed to me as) a complicated procedure for calculating the Confidence Interval of the Correlation Coefficient.</p>
<p>This got me thinking about the Bootstrap Procedure. Suppose I performed "Random Sampling With Replacement" and made 1000 draws from the data I have, and then calculated the Correlation Coefficient. Now, imagine I repeat this process 1000 times and produce a list of 1000 Correlation Coefficients calculated using random draws from this data. Could I not then find the 5th and the 95th quantiles and use these as a pseudo confidence interval?</p>
<p>Although I have feeling that this <em>might</em> work, I am not sure if this is a statistically valid approach. Is it possible that using the "classical" formulas for the Confidence Intervals of the Correlation Coefficient would be "more realistic and better suited" compared to this "bootstrap approach"?</p>
<p>Thank you!</p>
<p>Notes: <a href="https://stats.stackexchange.com/questions/498413/clt-based-confidence-interval-vs-bootstrap-based-confidence-interval">CLT-based confidence interval vs. Bootstrap based confidence interval</a></p>
|
<p>Yes, you can bootstrap the correlation coefficient and get the confidence intervals you are looking for but:</p>
<p>you should resample joint observations (pairs of observations, i.e. weight and height together) rather than sampling independently from weight and height.</p>
<p>Although this makes sense, let me suggest a different approach:</p>
<ol>
<li>Fit a linear model (for example <span class="math-container">$\text{weight} = \alpha + \beta \text{height}$</span>);</li>
<li>Estimate the residuals;</li>
<li>Bootstrap the residuals with replacement and calculate bootstrapped fitted values;</li>
<li>Calculate correlation between the bootstrapped fitted values and the independent variable (height);</li>
<li>Repeat steps 3 and 4 N times to get the confidence intervals for the correlation coefficient.</li>
</ol>
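<p>The pairs bootstrap described above can be sketched directly (Python here, with simulated height/weight data and a percentile interval; the data-generating numbers are illustrative assumptions):</p>

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated height/weight pairs with a built-in correlation (illustrative)
n = 1000
height = rng.normal(170, 10, n)
weight = 0.9 * (height - 170) + rng.normal(70, 8, n)

def pairs_bootstrap_corr_ci(x, y, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI for the correlation, resampling (x, y) pairs jointly."""
    m = len(x)
    corrs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, m, m)          # resample row indices with replacement
        corrs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return np.quantile(corrs, [alpha / 2, 1 - alpha / 2])

lo, hi = pairs_bootstrap_corr_ci(height, weight)
print(f"95% bootstrap CI for r: [{lo:.3f}, {hi:.3f}]")
```

<p>Resampling the index, not height and weight separately, is what preserves the joint structure that the correlation measures.</p>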
|
confidence intervals
|
Confidence intervals on model parameters
|
https://stats.stackexchange.com/questions/310056/confidence-intervals-on-model-parameters
|
<p>I have a very basic question about interpreting confidence intervals. Say I use R to fit a linear model to the Nile water flow time series (from the <code>datasets</code> package), like this: </p>
<pre><code># Flow of the Nile river (10^8 cubic metres per year) between 1871-1971
nile.ts <- data.frame(year = do.call(seq, as.list(attributes(Nile)$tsp)),
flow = as.numeric(Nile))
# Fit linear function
nile.fit <- gls(flow ~ year, data = nile.ts)
</code></pre>
<p>Next, I look at the confidence intervals on the model parameters:</p>
<pre><code># Look at 95% confidence intervals
intervals(nile.fit)
</code></pre>
<blockquote>
<pre><code>Approximate 95% confidence intervals
Coefficients:
lower est. upper
(Intercept) 4144.217893 6132.173579 8120.129266
year -3.749313 -2.714305 -1.679298
attr(,"label")
[1] "Coefficients:"
Residual standard error:
lower est. upper
132.1041 150.5522 175.0363
</code></pre>
</blockquote>
<p>Here, the coefficient of year in the model is estimated to be $-2.7$ and has a 95% confidence interval of $[-3.7, -1.7]$. I'd read this as meaning I can be pretty confident that there has been a decline in the flow over time. But what if the confidence interval had been $[-3.7, +1.7]$? Since it straddles zero, does this mean that I can't say anything about the trend? Or, can I state that there is no statistically significant trend?</p>
|
<blockquote>
<p>Or, can I state that there is no statistically significant trend?</p>
</blockquote>
<p>Yes. The coefficient for <code>year</code> would then not be significantly different from 0 at the 5% level. Note that this is a failure to detect a trend, not evidence that there is none.</p>
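<p>The duality this rests on (a 95% CI excludes 0 exactly when the coefficient's two-sided p-value is below 0.05) can be checked numerically. A Python sketch on simulated data, not the Nile series itself, which lives in R's <code>datasets</code> package:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated regression with a weak trend (illustrative data)
n = 100
x = np.arange(n, dtype=float)
y = 0.02 * x + rng.normal(0, 5, n)

res = stats.linregress(x, y)
t_crit = stats.t.ppf(0.975, df=n - 2)
lo = res.slope - t_crit * res.stderr
hi = res.slope + t_crit * res.stderr

# The CI excludes 0 exactly when the slope's two-sided p-value is below 0.05
print((lo > 0 or hi < 0) == (res.pvalue < 0.05))   # prints True
```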
|
confidence intervals
|
t-statistic confidence intervals vs wilson score confidence intervals
|
https://stats.stackexchange.com/questions/91612/t-statistic-confidence-intervals-vs-wilson-score-confidence-intervals
|
<p>I don't have too formal of a grounding in statistics, so sorry if this doesn't make much sense. But:</p>
<p>What are the differences between using a t-distribution to generate confidence intervals for small samples vs using a wilson score confidence intervals? Can they even both be used for this purpose, or am I misunderstanding one (or both)? Are t-intervals more appropriate in certain situations and wilson intervals in others?</p>
<hr>
<p>Just to confirm my understanding, given $$\bar{x} = \frac{x_1 + \cdots + x_n}{n}$$</p>
<p>we use a t-score interval if we believe $x_i \sim\mathcal{N}(\mu,\sigma^2)$, and we use a Wilson interval if we believe $x_i \sim \mathcal{B}(1,p)$?</p>
|
<blockquote>
<p>Are t-intervals more appropriate in certain situations and wilson intervals in others?</p>
</blockquote>
<p>Precisely this. They apply to two different situations: the first to (at least approximately) normally distributed values, the second to proportions based on binomially distributed counts.</p>
<p>There are a number of t-intervals, but generally speaking they are all used, in essence, when you are trying to construct an interval for a population mean with unknown variance (so both mean and variance are estimated from the sample). Discussion of a basic case is <a href="http://en.wikipedia.org/wiki/Confidence_interval#Theoretical_example" rel="nofollow">here</a>. </p>
<p>The <a href="http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Wilson_score_interval" rel="nofollow">Wilson score interval</a>, on the other hand, is used when dealing with proportions. It's for constructing an interval for a population proportion (a proportion is itself a kind of mean, but one where the variance is related to the mean). It's one of a number of intervals used for binomial population proportions. It's used for situations where the basic data is counts.</p>
<hr>
<p>A common way to derive intervals is via <a href="http://en.wikipedia.org/wiki/Pivotal_quantity" rel="nofollow">pivotal quantities</a> (a good term to search on here, there are a number of answers that discuss simple examples). A pivotal quantity is a function of observations and unobservable parameters whose probability distribution does not depend on the unknown parameters.</p>
<p>So in the case of a t-interval, the interval is based on t-distributions because $\frac{\bar x -\mu}{s/\sqrt n}$* is a pivotal quantity which has a t-distribution.</p>
<p>* or a structurally similar statistic for other t-intervals</p>
<p>That this has a t-distribution relies on the independence of $\bar x$ and $s$, which you have under normality. In the case of a proportion, you don't have a separate estimate of variance, you only have the proportion itself. The independence isn't there, so there's no basis on which to construct a t-interval.</p>
<p>Instead, the Wilson interval is based on the statistic in a score test, which will be asymptotically normal.</p>
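<p>To make the contrast concrete, here is a sketch computing one interval of each kind on made-up data (Python; the sample values and counts are purely illustrative):</p>

```python
import numpy as np
from scipy import stats

# t-interval: mean of (approximately) normal measurements, variance unknown
x = np.array([4.2, 5.1, 3.8, 4.9, 5.3, 4.4, 4.7, 5.0])
n = len(x)
m, s = x.mean(), x.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 1)
t_interval = (m - t_crit * s / np.sqrt(n), m + t_crit * s / np.sqrt(n))

# Wilson score interval: a binomial proportion, e.g. 13 successes in 20 trials
k, n2 = 13, 20
z = stats.norm.ppf(0.975)
p_hat = k / n2
center = (p_hat + z**2 / (2 * n2)) / (1 + z**2 / n2)
half = (z / (1 + z**2 / n2)) * np.sqrt(p_hat * (1 - p_hat) / n2 + z**2 / (4 * n2**2))
wilson = (center - half, center + half)

print(t_interval, wilson)
```

<p>Note how the Wilson center is pulled from <code>p_hat</code> toward 1/2, a consequence of inverting the score test rather than plugging the estimate into a normal approximation.</p>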
|
confidence intervals
|
Interpreting confidence intervals
|
https://stats.stackexchange.com/questions/12321/interpreting-confidence-intervals
|
<p>Suppose I have a $95 \%$ confidence interval for $\mu$ with population variance $\sigma^{2}$ known. It will be of the form $$[\bar{X}-1.96 \sigma/\sqrt{n}, \bar{X}+1.96 \sigma/\sqrt{n}]$$ </p>
<p>Is the statistical interpretation of this that $95\%$ of such confidence intervals will contain the true mean? I know that any individual realized interval either contains the mean or it does not. But collectively, is my sentence above valid?</p>
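<p>The collective statement is exactly what a simulation shows. A Python sketch under an assumed normal population with known <span class="math-container">$\sigma$</span> (the particular <span class="math-container">$\mu$</span>, <span class="math-container">$\sigma$</span>, and <span class="math-container">$n$</span> are arbitrary):</p>

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n = 10.0, 2.0, 25   # assumed population parameters and sample size
reps = 10_000

hits = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    half = 1.96 * sigma / np.sqrt(n)   # sigma known, as in the question
    if x.mean() - half <= mu <= x.mean() + half:
        hits += 1

print(hits / reps)   # close to 0.95 across the repetitions
```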
|
|
confidence intervals
|
Minimizing 95% Confidence interval extent for bootstrapped confidence intervals
|
https://stats.stackexchange.com/questions/135058/minimizing-95-confidence-interval-extent-for-bootstrapped-confidence-intervals
|
<p>I'm new to CrossValidated so please excuse any shortcomings in my question.</p>
<p>Suppose I have a sample of 500 from a population of 500,000. I asked my sample of 500 what day of the week they go grocery shopping, and assume I do not know the actual distribution of which day of the week people shop. Because I do not know the distribution, and I cannot assume it is normal, I calculate the 95% confidence intervals via bootstrapping (using the boot package in R). Now I want to know how to minimize the extent of the confidence intervals for this data. I looked first at sample size. So I randomly grabbed (without replacement) a subset of the sample and calculated the confidence intervals on that subset. I repeated this several times for different subset sizes and plotted the (sub)sample size on the x-axis and the extent of the confidence intervals on the y-axis:</p>
<p><img src="https://i.sstatic.net/gwbwv.png" alt="Extent of Bootstrapped Confidence Intervals vs Sample Size"></p>
<p>I found this plot surprising. I had hoped to see that as my sample size increased, the extent of my confidence intervals would decrease. Instead I just see randomness. Does anyone have any insight into, first, why I am observing this, and second, how I can determine what sample size I need to see a decrease in my 95% confidence interval extent? Please forgive me if this is a dumb question!</p>
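<p>For reference, a sketch of the expected behaviour (Python rather than R, with made-up "shopping day" data): when the bootstrapped statistic is a smooth function of the sample, such as a single day's proportion, the percentile interval's width should shrink roughly like <span class="math-container">$1/\sqrt{n}$</span>. Seeing pure noise instead often indicates too few bootstrap replicates or a statistic that isn't smooth in the sample.</p>

```python
import numpy as np

rng = np.random.default_rng(3)
# Made-up population: day-of-week indices 0-6 with unequal probabilities
population = rng.choice(7, size=500_000, p=[.10, .10, .12, .13, .15, .20, .20])

def ci_width(sample, n_boot=2000):
    """Width of a 95% percentile bootstrap CI for P(day == 5)."""
    boot = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(sample, size=len(sample), replace=True)
        boot[b] = np.mean(resample == 5)
    lo, hi = np.quantile(boot, [0.025, 0.975])
    return hi - lo

for n in (50, 200, 500):
    sub = rng.choice(population, size=n, replace=False)
    print(n, round(ci_width(sub), 3))   # width should roughly halve as n quadruples
```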
|
|
confidence intervals
|
Confidence Bands vs. Simultaneous Confidence Intervals
|
https://stats.stackexchange.com/questions/177110/confidence-bands-vs-simultaneous-confidence-intervals
|
<p>This may be a dumb question, but when talking about multiple regression analyses are "simultaneous" confidence intervals and confidence "bands" the same thing?</p>
<p>I'm still having trouble figuring this out, and how to compute the two things if in fact they are different. From what I can tell, simultaneous confidence intervals usually refer to confidence intervals set such that, across a finite number of tests, the family-wise type I error rate is controlled for the number of comparisons being made.</p>
<p>It seems to me that a confidence band is essentially the same thing, but that there are an infinite number of comparisons that can be made along the entire regression curve, since any number along the $x$ matrix can be used to make comparison with any other number along $x$.</p>
<p>Is my understanding of this correct? Are there differences in the calculations of confidence bands vs. simultaneous confidence intervals?</p>
<p><strong>UPDATE</strong></p>
<p>I'm still struggling to find an answer to this seemingly basic question, but no one has bitten yet. I'd be grateful for any comments.</p>
|
|
confidence intervals
|
Are confidence intervals open or closed intervals?
|
https://stats.stackexchange.com/questions/15872/are-confidence-intervals-open-or-closed-intervals
|
<p>I have a question about confidence intervals.</p>
<blockquote>
<p>In general, are confidence intervals open or closed?</p>
</blockquote>
|
<p>The short answer is "Yes".</p>
<p>The longer answer is that it does not really matter much, because the ends of the intervals are random variables based on the sample (and assumptions, etc.), and if we are talking about a continuous variable then the probability of a bound exactly equaling the true parameter is 0.</p>
<p>Confidence intervals are the range of null values that would not be rejected, so what do you do if you compute a p-value that is exactly <span class="math-container">$\alpha$</span>? (another probability 0 event for continuous cases). If you reject when p=<span class="math-container">$\alpha$</span> exactly then your CI is open, if you don't reject then the CI is closed. For practical purposes, it doesn't matter that much.</p>
|
confidence intervals
|
Are confidence intervals useful?
|
https://stats.stackexchange.com/questions/390093/are-confidence-intervals-useful
|
<p>In frequentist statistics, a 95% confidence interval is an interval-producing procedure that, if repeated an infinite number of times, would contain the true parameter 95% of the time. Why is this useful?</p>
<p>Confidence intervals are often misunderstood. They are <em>not</em> an interval that we can be 95% certain the parameter is in (unless you are using the similar Bayesian credibility interval). Confidence intervals feel like a bait-and-switch to me.</p>
<p>The one use case I can think of is to provide the range of values for which we could not reject the null hypothesis that the parameter is that value. Wouldn't p-values provide this information, but better? Without being so misleading?</p>
<p>In short: Why do we need confidence intervals? How are they, when correctly interpreted, useful?</p>
|
<p>So long as the confidence interval is treated as <em>random</em> (i.e., looked at from the perspective of treating the data as a set of random variables that we have not seen yet) then we can indeed make useful probability statements about it. Specifically, suppose you have a confidence interval at level <span class="math-container">$1-\alpha$</span> for the parameter <span class="math-container">$\theta$</span>, and the interval has bounds <span class="math-container">$L(\mathbf{x}) \leqslant U(\mathbf{x})$</span>. Then we can say that:</p>
<p><span class="math-container">$$\mathbb{P}(L(\mathbf{X}) \leqslant \theta \leqslant U(\mathbf{X}) | \theta) = 1-\alpha
\quad \quad \quad \text{for all } \theta \in \Theta.$$</span></p>
<p>Moving outside the frequentist paradigm and marginalising over <span class="math-container">$\theta$</span> for any prior distribution gives the corresponding (weaker) marginal probability result:</p>
<p><span class="math-container">$$\mathbb{P}(L(\mathbf{X}) \leqslant \theta \leqslant U(\mathbf{X})) = 1-\alpha.$$</span></p>
<p>Once we fix the bounds of the confidence interval by fixing the data to <span class="math-container">$\mathbf{X} = \mathbf{x}$</span>, we no longer appeal to this probability statement, because we now have fixed the data. However, <em>if the confidence interval is treated as a random interval</em> then we can indeed make this probability statement --- i.e., with probability <span class="math-container">$1-\alpha$</span> the parameter <span class="math-container">$\theta$</span> will fall within the (random) interval. </p>
<p>Within frequentist statistics, probability statements are statements about relative frequencies over infinitely repeated trials. But that is true of <em>every probability statement</em> in the frequentist paradigm, so if your objection is to relative frequency statements, that is not an objection that is specific to confidence intervals. If we move outside the frequentist paradigm then we can legitimately say that a confidence interval contains its target parameter with the desired probability, so long as we make this probability statement marginally (i.e., not conditional on the data) and we thus treat the confidence interval in its random sense.</p>
<p>I don't know about others, but that seems to me to be a pretty powerful probability result, and a reasonable justification for this form of interval. I am more partial to Bayesian methods myself, but the probability results backing confidence intervals (in their random sense) are powerful results that are not to be sniffed at. Moreover, even within the context of Bayesian analysis, where we let <span class="math-container">$\theta$</span> be a random variable with a prior distribution, we can see that the prior predictive probability that the confidence interval contains the parameter is equal to the confidence level. Thus, even within this alternative paradigm, the confidence interval can be regarded as an estimator that has powerful <em>a priori</em> prediction properties.</p>
|
confidence intervals
|
Confidence interval for the confidence interval?
|
https://stats.stackexchange.com/questions/194499/confidence-interval-for-the-confidence-interval
|
<p>I'm studying confidence intervals, and I'm curious about how one might generate a confidence interval for the confidence interval, if that even makes sense.</p>
<p>For example, let's say I draw simple random samples of n=100 from some population, calculate sample means and standard deviations, and construct 95% confidence intervals. I repeat this procedure 100 times. I know I expect about 95 of these intervals to capture the population mean, and about 5 of them not to. However, can I construct a confidence interval around this expectation? If I were to repeat this entire "100 samples of 100 samples" procedure over and over again, what could I say about the distribution of how often the intervals capture the mean?</p>
<p>Essentially, could I construct a confidence interval for the confidence interval? Would that even make any sense?</p>
<p>Thanks!</p>
|
<blockquote>
<p>what could I say about the distribution of how often the intervals capture the mean?</p>
</blockquote>
<p>Treating each interval containing the parameter as a <a href="https://en.wikipedia.org/wiki/Bernoulli_process" rel="noreferrer">Bernoulli process</a> with each trial having some coverage probability $p$, the number of "coverages" should be $\text{Binomial}(n,p)$.</p>
<p>The potential problem is whether the $p$ one actually has is really the $p$ one was hoping for (whether due to the extent of the failure of assumptions or because of approximations involved in obtaining the intervals). </p>
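<p>The Binomial<span class="math-container">$(n,p)$</span> observation gives a concrete interval for the question's own setup (100 intervals at nominal coverage 0.95). A Python sketch using the central binomial quantiles:</p>

```python
from scipy import stats

n, p = 100, 0.95   # 100 intervals, each covering with probability 0.95
lo = int(stats.binom.ppf(0.025, n, p))   # central 95% range for the number
hi = int(stats.binom.ppf(0.975, n, p))   # of intervals that cover the mean
print(lo, hi)   # roughly 90 to 99 of the 100 intervals
```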
|
confidence intervals
|
95% Confidence Intervals
|
https://stats.stackexchange.com/questions/439860/95-confidence-intervals
|
<p>Statistics textbooks go out of their way to say that 95% Confidence Intervals (CIs) do not mean that you can be 95% sure that the population parameter of interest is somewhere between the high and low end of the interval. Rather, if your sample was drawn an infinite number of times, 95% of the intervals would contain the population parameter (while 5% would not). </p>
<p>I fail to see the distinction. If I draw one of the infinite number of samples for which 95% CIs are calculated, aren’t I 95% certain that I’ve drawn one of the ones whose CI contains the population parameter? Thus, I’m 95% certain that my CI contains the population parameter.</p>
<p>If someone can explain why my thinking is incorrect, I’d really appreciate it.
Thank you.</p>
<hr>
<p>Just to cause more confusion, I went to my old Statistical Methods textbook by Snedecor and Cochran (8th edition), and found the following section on Confidence Intervals:</p>
<p><a href="https://i.sstatic.net/i9niN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i9niN.png" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/Ou57b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ou57b.png" alt="enter image description here"></a></p>
<p>Notice that they provide a mathematical proof for the inequality relating a population parameter value to a sample confidence interval. In addition, in their example in the middle of page 56, they explicitly state that the population parameter lies within the 95% confidence interval given, except in a 1 in 20 chance.</p>
<p>Snedecor and Cochran's book educated several generations of statisticians, at least here in the US. And the mathematical proof seems pretty convincing. So now what? Do we believe what the current textbooks are saying (which do not help us in making a statement about the population parameter)? Or do we go with Snedecor and Cochran and state that we are 95% certain that the population parameter is within our 95% CIs? </p>
<p>Anyone who wishes to comment, please do...I'm at a loss. </p>
|
<p>The clue to all of this is realizing that the population parameter <span class="math-container">$\theta$</span> is a fixed, unknown number, and that (loosely speaking) the "randomness" in all of this comes from the confidence intervals. Each confidence interval is linked to a sample, so for different samples we get (ideally slightly) different CIs.</p>
<p>Now, given a population <span class="math-container">$X$</span>, consider a simple random sample (SRS) of size <span class="math-container">$n$</span> <span class="math-container">$\underline{X}_n=(X_1,X_2,\ldots,X_n)$</span> that depend on the unknown parameter <span class="math-container">$\theta$</span>.</p>
<p>A confidence interval estimator for <span class="math-container">$\theta$</span> at a confidence level of <span class="math-container">$95\%$</span> is an interval <span class="math-container">$(T_1(\underline{X}_n), T_2(\underline{X}_n))$</span> that satisfies
<span class="math-container">$$P(\theta\in (T_1(\underline{X}_n), T_2(\underline{X}_n))) = 95\%$$</span></p>
<p>Now <span class="math-container">$\underline{X}_n$</span> was a SRS so for this SRS I obtain a specific sample <span class="math-container">$\underline{x}_n=(x_1,x_2,\ldots x_n)$</span>. While <span class="math-container">$\underline{X}_n$</span> was a bunch of random variables, <span class="math-container">$\underline{x}_n$</span> is a bunch of specific numbers. So I use this specific sample and I obtain one specific confidence interval linked to this sample <span class="math-container">$CI(\theta)_{95}=(a,b)$</span> where now <span class="math-container">$a\in\mathbb{R}$</span> and <span class="math-container">$b\in\mathbb{R}$</span>.</p>
<p>Taking into account that <span class="math-container">$\theta$</span> is a fixed number, there are two possible results: either <span class="math-container">$\theta$</span> is inside this CI or it is outside it:</p>
<ol>
<li>If <span class="math-container">$\theta\in(a,b)$</span> then in this case <span class="math-container">$P(\theta\in(a,b))=1$</span></li>
<li>If <span class="math-container">$\theta\notin(a,b)$</span>. then in this case <span class="math-container">$P(\theta\in(a,b))=0$</span></li>
</ol>
<h1>EDIT adding example</h1>
<p>In the end, it is simply a problem of language. Consider that the parameter under study is <span class="math-container">$\mu$</span>, the mean height of all people in the world. It doesn't make much sense to say that the probability of this mean height being between 160cm and 170cm is 95%, because either this height is a number between 160cm and 170cm or it is not. Even if we cannot calculate this mean height, because we would need to survey all the people in the world, <span class="math-container">$\mu$</span> is still a fixed, though unknown, quantity. Talking about probabilities for fixed numbers does not make much sense.</p>
<p>What we can do is take a sample of people and obtain a CI. A change of sample implies a change of CI. For this reason, if we obtain <span class="math-container">$100$</span> samples and compute <span class="math-container">$100$</span> confidence intervals at a <span class="math-container">$95\%$</span> level (one CI per sample), roughly speaking we can say that more or less <span class="math-container">$95$</span> confidence intervals would cover the unknown parameter <span class="math-container">$\mu$</span> and <span class="math-container">$5$</span> would not cover it. We do not know the value of <span class="math-container">$\mu$</span>, so we do not know which are the CIs covering it. The only thing that we can say is that the probability that a confidence interval covers <span class="math-container">$\mu$</span> is <span class="math-container">$95\%$</span></p>
|
confidence intervals
|
Combining confidence intervals
|
https://stats.stackexchange.com/questions/60844/combining-confidence-intervals
|
<p>I have a collection of efficiency curves i.e. numbers between 0 and 1 as a function of a physical variable. Each efficiency point on the curve has an associated Clopper Pearson 1-sigma confidence interval.</p>
<p>I now need to combine these to obtain a total efficiency as a function of said variable. This means I multiply the corresponding efficiency points together, so that <code>eff_total</code> at point i is <code>eff_1*eff_2*...*eff_n</code>.</p>
<p>How do I propagate the uncertainty in the form of these confidence intervals to <code>eff_total</code>? </p>
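<p>Since no answer is attached here, one standard first-order (delta-method) treatment, sketched in Python with made-up numbers: for a product, relative variances add in quadrature, and each Clopper-Pearson half-width can be treated as an approximate 1-sigma error (an approximation that degrades for efficiencies near 0 or 1, where those intervals are strongly asymmetric):</p>

```python
import numpy as np

# Made-up per-stage efficiencies and 1-sigma Clopper-Pearson half-widths
eff = np.array([0.92, 0.85, 0.97])
sig = np.array([0.02, 0.03, 0.01])

eff_total = np.prod(eff)
# Delta method for a product: relative variances add in quadrature
rel_var = np.sum((sig / eff) ** 2)
sig_total = eff_total * np.sqrt(rel_var)

print(round(eff_total, 4), round(sig_total, 4))
```

<p>When the intervals are noticeably asymmetric, a Monte Carlo alternative (sampling each efficiency within its interval and multiplying) propagates the bounds without the symmetry assumption.</p>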
|
|
confidence intervals
|
Difference Between Confidence Bands and Confidence Intervals
|
https://stats.stackexchange.com/questions/655191/difference-between-confidence-bands-and-confidence-intervals
|
<p>I've read other answers on this stackexchange for this topic, but I have a few questions I would like clarified:</p>
<p>Confidence bands are usually visualized as the connected lines ('bands') formed from confidence intervals throughout the regression. This leads me to believe (possibly incorrectly) that the confidence band is simply the 'family' of confidence intervals throughout the entirety of the regression (i.e. the lines which envelop our regression line with confidence <span class="math-container">$1-\alpha$</span>)</p>
<p>My gripe (which leads me to confusion about my above deduction), is that the formula to compute a confidence band uses the Working-Hotelling Constant, which results in a wider interval than the confidence interval at a given point (for example in SLR):</p>
<p><span class="math-container">$
\begin{align}
C.B = \hat y_{i} \pm W \cdot se(\hat y_{i})
\end{align}
$</span></p>
<p>at the <span class="math-container">$x$</span> value associated with the observation <span class="math-container">$i$</span></p>
<p>So is my conclusion about the confidence bands incorrect? What is their relation with confidence intervals and why then do we use the <span class="math-container">$F$</span> distribution rather than the <span class="math-container">$t$</span> distribution? Any help would be greatly appreciated!</p>
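<p>The widening can be computed directly: in simple linear regression the Working-Hotelling multiplier is <span class="math-container">$W=\sqrt{2F_{2,\,n-2;\,1-\alpha}}$</span>, which always exceeds the pointwise <span class="math-container">$t$</span> multiplier. A Python sketch (the sample size is an assumption for illustration):</p>

```python
import numpy as np
from scipy import stats

n, alpha = 30, 0.05
t_mult = stats.t.ppf(1 - alpha / 2, df=n - 2)       # pointwise CI multiplier
W = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))   # Working-Hotelling band multiplier
print(round(t_mult, 3), round(W, 3))   # W is the larger of the two
```

<p>The <span class="math-container">$F$</span> distribution enters because the band must hold simultaneously over the whole line, i.e. over the joint (2-dimensional) confidence region for intercept and slope, rather than at a single <span class="math-container">$x$</span>.</p>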
|
|
confidence intervals
|
How important are confidence intervals?
|
https://stats.stackexchange.com/questions/520577/how-important-are-confidence-intervals
|
<p>I am writing a research paper about time series forecasting using neural networks. In my results I created tables containing error values (RMSE, MAE and RMSSE) for the predictions and made plots showing the predicted values over the original data.</p>
<p>Now, I have been told that I need to add confidence intervals to my predictions because limited amounts of data can make confidence intervals very wide. My time series is approximately one year long.</p>
<p>I understand what confidence intervals are, but I don't understand how they will make my results better.</p>
<p>How important are confidence intervals? How do they change how you analyse the predictions of a model?</p>
|
<p>Consider you have some money to invest. Your bank says that investment A will give a guaranteed annual profit of 2%, whereas investment B may give you a profit somewhere between -8% and +12% with an expected value of 2%. Will you throw dice over which investment to take, because both have the same expected profit, or does it make a difference that in one case the profit is fixed while the other one is taking chances?</p>
<p>Imagine the same scenario, but in addition, you can borrow money for 1.6%. You could borrow as much as possible, invest it in A and have a guaranteed .4% for yourself. With investment B there is a risk of losing money so you should probably not borrow as much as possible but only so much that you could afford the possible loss. So different decisions will be taken for the same expected gain depending on its uncertainty.</p>
<p>So very often it is of utmost importance to not only give a point prediction but to also communicate some measure of uncertainty around that point prediction. Does this necessarily have to be confidence intervals or predictive intervals or credible intervals or standard errors? Depending on the circumstances and the habits within your field people may have certain expectations partly based on tradition. If you can meet these expectations it will improve your communication of results.</p>
| 613
|
confidence intervals
|
Are 50% confidence intervals more robustly estimated than 95% confidence intervals?
|
https://stats.stackexchange.com/questions/248113/are-50-confidence-intervals-more-robustly-estimated-than-95-confidence-interva
|
<p>My question flows out of <a href="http://andrewgelman.com/2016/11/05/why-i-prefer-50-to-95-intervals/#comment-342062" rel="noreferrer">this comment</a> on an Andrew Gelman's blog post in which he advocates the use of 50% confidence intervals instead of 95% confidence intervals, although not on the grounds that they are more robustly estimated:</p>
<blockquote>
<p>I prefer 50% to 95% intervals for 3 reasons:</p>
<ol>
<li><p>Computational stability,</p>
</li>
<li><p>More intuitive evaluation (half the 50% intervals should contain the true value),</p>
</li>
<li><p>A sense that in applications it’s best to get a sense of where the parameters and predicted values will be, not to attempt an unrealistic near-certainty.</p>
</li>
</ol>
</blockquote>
<p>The commenter's idea seems to be that problems with the assumptions underlying the construction of the confidence interval will have more an impact if it's a 95% CI than if it's a 50% CI. However, he doesn't really explain why.</p>
<blockquote>
<p>[...] as you go to larger intervals, you become more sensitive in general to details or assumptions of your model. For example, you would never believe that you had correctly identified the 99.9995% interval. Or at least that’s my intuition. If it’s right, it argues that 50-percent should be better estimated than 95-percent. Or maybe “more robustly” estimated, since it is less sensitive to assumptions about the noise, perhaps?</p>
</blockquote>
<p>Is it true? Why/why not?</p>
|
<p>This answer analyzes the meaning of the quotation and offers the results of a simulation study to illustrate it and help understand what it might be trying to say. The study can easily be extended by anybody (with rudimentary <code>R</code> skills) to explore other confidence interval procedures and other models.</p>
<p><strong>Two interesting issues emerged in this work.</strong> One concerns how to evaluate the accuracy of a confidence interval procedure. The impression one gets of robustness depends on that. I display two different accuracy measures so you can compare them.</p>
<p>The other issue is that although a confidence <em>interval</em> procedure with low confidence may be robust, the corresponding confidence <em>limits</em> might not be robust at all. Intervals tend to work well because the errors they make at one end often counterbalance the errors they make at the other. As a practical matter, you can be pretty sure that around half of your $50\%$ confidence intervals are covering their parameters, <em>but the actual parameter might consistently lie near one particular end of each interval,</em> depending on how reality departs from your model assumptions.</p>
<hr>
<p><em>Robust</em> has a standard meaning in statistics:</p>
<blockquote>
<p>Robustness generally implies insensitivity to departures from assumptions surrounding an underlying probabilistic model.</p>
</blockquote>
<p>(Hoaglin, Mosteller, and Tukey, <em>Understanding Robust and Exploratory Data Analysis</em>. J. Wiley (1983), p. 2.)</p>
<p>This is consistent with the quotation in the question. To understand the quotation we still need to know the intended <em>purpose</em> of a confidence interval. To this end, let's review what Gelman wrote. </p>
<blockquote>
<p>I prefer 50% to 95% intervals for 3 reasons:</p>
<ol>
<li><p>Computational stability,</p></li>
<li><p>More intuitive evaluation (half the 50% intervals should contain the true value),</p></li>
<li><p>A sense that in applications it’s best to get a sense of where the parameters and predicted values will be, not to attempt an unrealistic near-certainty.</p></li>
</ol>
</blockquote>
<p>Since getting a sense of <em>predicted values</em> is not what confidence intervals (CIs) are intended for, I will focus on getting a sense of <em>parameter</em> values, which is what CIs do. Let's call these the "target" values. Whence, <em>by definition,</em> a CI is intended to cover its target with a specified probability (its confidence level). Achieving intended coverage rates is the minimum criterion for evaluating the quality of any CI procedure. (Additionally, we might be interested in typical CI widths. To keep the post to a reasonable length, I will ignore this issue.)</p>
<p>These considerations invite us to study <em>how much a confidence interval calculation could mislead us concerning the target parameter value.</em> The quotation could be read as suggesting that lower-confidence CIs might retain their coverage even when the data are generated by a process different than the model. That's something we can test. The procedure is:</p>
<ul>
<li><p>Adopt a probability model that includes at least one parameter. The classic one is sampling from a Normal distribution of unknown mean and variance.</p></li>
<li><p>Select a CI procedure for one or more of the model's parameters. An excellent one constructs the CI from the sample mean and sample standard deviation, multiplying the latter by a factor given by a Student t distribution.</p></li>
<li><p>Apply that procedure to various <em>different</em> models--departing not too much from the adopted one--to assess its coverage over a range of confidence levels.</p></li>
</ul>
<p>As an example, I have done just that. I have allowed the underlying distribution to vary across a wide range, from almost Bernoulli, to Uniform, to Normal, to Exponential, and all the way to Lognormal. These include symmetric distributions (the first three) and strongly skewed ones (the last two). For each distribution I generated 50,000 samples of size 12. For each sample I constructed two-sided CIs of confidence levels between $50\%$ and $99.8\%$, which covers most applications.</p>
<p>An interesting issue now arises: <strong>How should we measure how well (or how badly) a CI procedure is performing?</strong> A common method simply evaluates the difference between the actual coverage and the confidence level. This can look suspiciously good for high confidence levels, though. For instance, if you are trying to achieve 99.9% confidence but you get only 99% coverage, the raw difference is a mere 0.9%. However, that means your procedure fails to cover the target ten times more often than it should! For this reason, a more informative way of comparing coverages ought to use something like odds ratios. I use differences of logits, which are the logarithms of odds ratios. Specifically, when the desired confidence level is $\alpha$ and the actual coverage is $p$, then</p>
<p>$$\log\left(\frac{p}{1-p}\right) - \log\left(\frac{\alpha}{1-\alpha}\right)$$</p>
<p>nicely captures the difference. When it is zero, the coverage is exactly the value intended. When it is negative, the coverage is too low--which means the CI is too <em>optimistic</em> and underestimates the uncertainty.</p>
<p>The question, then, is <em>how do these error rates vary with confidence level as the underlying model is perturbed?</em> We can answer it by plotting the simulation results. <em>These plots quantify how "unrealistic" the "near-certainty" of a CI might be</em> in this archetypal application.</p>
<p><a href="https://i.sstatic.net/3VqIC.png"><img src="https://i.sstatic.net/3VqIC.png" alt="Figure"></a></p>
<p>The graphics show the same results, but the one at the left displays the values on logit scales while the one at the right uses raw scales. The Beta distribution is a Beta$(1/30,1/30)$ (which is practically a Bernoulli distribution). The lognormal distribution is the exponential of the standard Normal distribution. The normal distribution is included to verify that this CI procedure really does attain its intended coverage and to reveal how much variation to expect from the finite simulation size. (Indeed, the graphs for the normal distribution are comfortably close to zero, showing no significant deviations.)</p>
<p>It is clear that <strong>on the logit scale, the coverages grow more divergent as the confidence level increases.</strong> There are some interesting exceptions, though. If we are unconcerned with perturbations of the model that introduce skewness or long tails, then we can ignore the exponential and lognormal and focus on the rest. Their behavior is erratic until $\alpha$ exceeds $95\%$ or so (a logit of $3$), at which point the divergence has set in.</p>
<p><strong>This little study brings some concreteness to Gelman's claim</strong> and illustrates some of the phenomena he might have had in mind. In particular, when we are using a CI procedure with a low confidence level, such as $\alpha=50\%$, then even when the underlying model is strongly perturbed, it looks like the coverage will still be close to $50\%$: our feeling that such a CI will be correct about half the time and incorrect the other half is borne out. That is <em>robust</em>. If instead we are hoping to be right, say, $95\%$ of the time, which means we really want to be wrong only $5\%$ of the time, then we should be prepared for our error rate to be much greater in case the world doesn't work quite the way our model supposes.</p>
<p>Incidentally, this property of $50\%$ CIs holds in large part because we are studying symmetric confidence <em>intervals</em>. For the skewed distributions, the individual confidence <em>limits</em> can be terrible (and not robust at all), <em>but their errors often cancel out.</em> Typically one tail is short and the other long, leading to over-coverage at one end and under-coverage at the other. I believe that $50\%$ confidence <em>limits</em> will not be anywhere near as robust as the corresponding intervals.</p>
<hr>
<p>This is the <code>R</code> code that produced the plots. It is readily modified to study other distributions, other ranges of confidence, and other CI procedures.</p>
<pre class="lang-R prettyprint-override"><code>#
# Zero-mean distributions.
#
distributions <- list(Beta=function(n) rbeta(n, 1/30, 1/30) - 1/2,
Uniform=function(n) runif(n, -1, 1),
Normal=rnorm,
#Mixture=function(n) rnorm(n, -2) + rnorm(n, 2),
Exponential=function(n) rexp(n) - 1,
Lognormal=function(n) exp(rnorm(n, -1/2)) - 1
)
n.sample <- 12
n.sim <- 5e4
alpha.logit <- seq(0, 6, length.out=21); alpha <- signif(1 / (1 + exp(-alpha.logit)), 3)
#
# Normal CI.
#
CI <- function(x, Z=outer(c(1,-1), qt((1-alpha)/2, n.sample-1)))
mean(x) + Z * sd(x) / sqrt(length(x))
#
# The simulation.
#
#set.seed(17)
alpha.s <- paste0("alpha=", alpha)
sim <- lapply(distributions, function(dist) {
x <- matrix(dist(n.sim*n.sample), n.sample)
x.ci <- array(apply(x, 2, CI), c(2, length(alpha), n.sim),
dimnames=list(Endpoint=c("Lower", "Upper"),
Alpha=alpha.s,
NULL))
covers <- x.ci["Lower",,] * x.ci["Upper",,] <= 0
rowMeans(covers)
})
(sim)
#
# The plots.
#
logit <- function(p) log(p/(1-p))
colors <- hsv((1:length(sim)-1)/length(sim), 0.8, 0.7)
par(mfrow=c(1,2))
plot(range(alpha.logit), c(-2,1), type="n",
main="Confidence Interval Accuracies (Logit Scales)", cex.main=0.8,
xlab="Logit(alpha)",
ylab="Logit(coverage) - Logit(alpha)")
abline(h=0, col="Gray", lwd=2)
legend("bottomleft", names(sim), col=colors, lwd=2, bty="n", cex=0.8)
for(i in 1:length(sim)) {
coverage <- sim[[i]]
lines(alpha.logit, logit(coverage) - alpha.logit, col=colors[i], lwd=2)
}
plot(range(alpha), c(-0.2, 0.05), type="n",
main="Raw Confidence Interval Accuracies", cex.main=0.8,
xlab="alpha",
ylab="coverage-alpha")
abline(h=0, col="Gray", lwd=2)
legend("bottomleft", names(sim), col=colors, lwd=2, bty="n", cex=0.8)
for(i in 1:length(sim)) {
coverage <- sim[[i]]
lines(alpha, coverage - alpha, col=colors[i], lwd=2)
}
</code></pre>
| 614
|
confidence intervals
|
Topology of Confidence Intervals
|
https://stats.stackexchange.com/questions/200647/topology-of-confidence-intervals
|
<p>I hope this is the right site to post this.</p>
<p>The example I have in my mind is a GLMM model, where we infer random effects, and a random effect caterpillar plot (with confidence intervals):</p>
<p><a href="https://i.sstatic.net/R4Wu6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R4Wu6.png" alt="enter image description here"></a></p>
<p>Now, suppose we start applying <a href="https://en.wikipedia.org/wiki/Topological_data_analysis" rel="nofollow noreferrer">topological data analysis</a> tools to the above plot, for example the <a href="https://research.math.osu.edu/tgda/mapperPBG.pdf" rel="nofollow noreferrer">Mapper algorithm</a>, with the filter determined by: if two confidence intervals overlap by at least X%, they are then grouped into one node. For example if we pick a $X=40$% in the above picture, we should get three different components. </p>
<p>Question 1: Is there anything interesting, from a statistics point of view, about the persistent components that arise from the above reasoning, as we tune $X$ from 100% to 0 (i.e. looking at the resulting <a href="https://www.math.upenn.edu/~ghrist/preprints/barcodes.pdf" rel="nofollow noreferrer">barcodes</a>)?</p>
<p>Question 2: The above reasoning for example, could be done with something simpler like an ANOVA study. Are there any other interesting examples of statistical models which would yield non-trivial topological structures from their confidence intervals? I picked the GLMM model in particular because it has nice non-trivial variance structure. </p>
<p>My main reason for interest in this is to better conceptualize the notion of assigning rankings to each random effect. This obviously depends both on whether or not two confidence intervals overlap and how we define cutoffs of percentiles, whether it's by fixing cutoff regions or by demanding some maximum confidence interval overlap. To this end, is there any reasonable argument that can be made for picking a particular value of $X$ above to declare that the resulting groupings (and therefore rankings) are optimal?</p>
| 615
|
|
confidence intervals
|
Where two confidence intervals meet?
|
https://stats.stackexchange.com/questions/192756/where-two-confidence-intervals-meet
|
<p>I have two means that refer to two different samples from different populations. Let's say, for example, that the two means are 1.0 and 3.15. I can compute confidence intervals about these two means and, logically, the higher the confidence level, the wider the two intervals (their widths tend to infinity).
So, the question is: is there a way to find the point at which these two confidence intervals meet?</p>
|
<p>The exact answer depends on what are the intervals about and other circumstances but I'm going to made a couple of assumptions in the hope they fit your problem:</p>
<ul>
<li>You are computing confidence intervals on the mean.</li>
<li>Your samples are large (let's say, at least over 100). If that assumption is false, then the answer here is just an approximation.</li>
</ul>
<p>(For statisticians reading this, the second assumption means that for the sake of simplicity I'm going to use the normal distribution instead of Student's t, and the sample standard deviation in place of the population one.)</p>
<p>You can compute means and standard deviation of both samples, and let's denote them mean1, mean2, sd1 and sd2.</p>
<p>Now you can use the following formula:</p>
<p>$$
z=\frac{(mean2-mean1)}{sd1+sd2}
$$</p>
<p>That $z$ is the value of a standard normal distribution such that the interval from $-z$ to $+z$ contains a probability equal to the confidence level at which your two intervals just start to overlap.</p>
<p>If you are interested in the point of overlapping, you just need to do:</p>
<p>$$
point=mean1+z*sd1
$$</p>
<p>or</p>
<p>$$
point=mean2-z*sd2
$$</p>
<p>and both should yield the same point.</p>
<p>If you need to know the confidence level of your intervals, you just need to check $z$ in a standard normal distribution table (like <a href="http://math.arizona.edu/~rsims/ma464/standardnormaltable.pdf" rel="nofollow">this one</a>). If $p$ is the probability you get in the table, you can find the confidence level by:</p>
<p>$$
CL=1-2*(1-p)
$$</p>
<p>If your samples are not large, and especially if they are small and of different sizes, the problem is a little different and it needs to be solved numerically - although an Excel spreadsheet can be enough to handle it. </p>
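A minimal R sketch of the answer's formulas, using the question's means of 1.0 and 3.15; the spreads are made-up, and note that for intervals on the means one would typically plug in standard errors (sd / sqrt(n)) where the answer writes sd:

```r
mean1 <- 1.0;  s1 <- 0.8   # hypothetical spread of interval 1
mean2 <- 3.15; s2 <- 1.2   # hypothetical spread of interval 2

z <- (mean2 - mean1) / (s1 + s2)   # z at which the intervals just touch
point <- mean1 + z * s1            # meeting point; equals mean2 - z * s2
CL <- 1 - 2 * (1 - pnorm(z))       # confidence level at which they meet

c(z = z, point = point, CL = CL)
```

Both expressions for the meeting point agree, which is a handy sanity check on the algebra.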
| 616
|
confidence intervals
|
Are confidence intervals scale-invariant?
|
https://stats.stackexchange.com/questions/419406/are-confidence-intervals-scale-invariant
|
<p>Suppose I estimate a mean and construct some sort of confidence intervals (e.g. based on normal approximation or bootstrapped) around the mean. I now wish to rescale my mean from, say, the mean number of infections per hundred persons to the mean number of infections per thousand persons by multiplying the mean by 10.</p>
<p>Does multiplying the endpoints of the confidence intervals by 10 produce the correct confidence intervals?</p>
|
<p>If you have a scale family everything works:</p>
<p>If the probability that a random interval <span class="math-container">$[a,b]$</span> includes <span class="math-container">$\theta$</span> is some value <span class="math-container">$q \geq 1-\alpha$</span> then the probability that a random interval <span class="math-container">$[ka,kb]$</span> includes <span class="math-container">$k\theta$</span> is also <span class="math-container">$q$</span> (and so also <span class="math-container">$\geq 1-\alpha$</span>).</p>
<p>Consequently, generally when you have units, like meters or kilograms, then you can rescale to other units (say, mm) and the CI generally carries through (scale equivariance being the apparent intent of the question). </p>
<p>It's when the thing is unit-free that you don't tend to get it (e.g. with a Poisson count, or a proportion, say). </p>
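Not from the answer itself, but a quick empirical check of the scale-equivariance claim using a normal-theory t interval on simulated data:

```r
set.seed(1)
x <- rnorm(50, mean = 3)           # e.g. infections per hundred (simulated)
k <- 10                            # rescale to "per thousand"

ci   <- t.test(x)$conf.int         # CI for the mean of x
ci.k <- t.test(k * x)$conf.int     # CI for the mean of k * x

# The endpoints scale exactly by k, as the answer says:
all.equal(as.numeric(k * ci), as.numeric(ci.k))   # TRUE
```

This works because both the sample mean and the sample standard deviation scale by k, so the whole interval does too.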
| 617
|
confidence intervals
|
Test of confidence intervals?
|
https://stats.stackexchange.com/questions/152750/test-of-confidence-intervals
|
<p>In one of my assignments I have to "test" whether the confidence intervals (CIs) for a set of parameters in a mixed effect model are accurate. I'm asked to simulate from the fitted parameters and then refit the same model many times. Lastly, I need to take the 2.5% and 97.5% quantiles of the refitted estimates and compare them with the original CIs. My question is, how does this procedure in any way measure how accurate my original confidence intervals were?</p>
| 618
|
|
confidence intervals
|
Confidence Intervals - What's going on?
|
https://stats.stackexchange.com/questions/480991/confidence-intervals-whats-going-on
|
<p>I'm looking for help with this figure.</p>
<p>Plotted are percentages and confidence intervals for each year (I didn't collect the data).</p>
<p>I am wondering, under what circumstances would the confidence intervals be the same as the percentage for that year... What's going on in the data that allows this to happen? I thought that Confidence Intervals would appear either side of the mean, but in this case, they are not (highlighted in the graph below).</p>
<p><a href="https://i.sstatic.net/31VWA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/31VWA.png" alt="Graph with Confidence Intervals" /></a></p>
| 619
|
|
confidence intervals
|
Joint confidence intervals for probabilities
|
https://stats.stackexchange.com/questions/28605/joint-confidence-intervals-for-probabilities
|
<p>I have two probabilities $p$ and $q$. $p>q$, and they aren't correlated. I'm going to calculate $i$ such that $p^i=q$, which is easily done as $\log_p(q)$.</p>
<p>Now, I'd like to also calculate a confidence interval for $i$, which is necessarily going to be a function of both $p$ and $q$'s confidence intervals. My first approach was to do</p>
<pre><code>p.min <- qbeta(0.025, 152, 29)
p.max <- qbeta(0.975, 152, 29)
q.min <- qbeta(0.025, 37, 19)
q.max <- qbeta(0.975, 37, 19)
## q.max and p.min have the smallest difference
i.min <- log(q.max, base = p.min)
## q.min and p.max have the largest difference
i.max <- log(q.min, base = p.max)
</code></pre>
<p>But it occurs to me that 95% confidence intervals for $p$ and $q$ independently probably produces too large a confidence interval for $i$, because the joint confidence interval for $p$ and $q$ will be narrower. </p>
<p>So, how do I go about figuring out the joint confidence interval of $p$ and $q$? They're uncorrelated, which should make things easier. Is it as simple as narrowing the quantiles in <code>qbeta()</code>? By how much?</p>
|
<p>I think there is a confusion between <em>confidence interval</em> and <em>probability interval</em> here. </p>
<p>In the R code, you are indicating that $p\sim Beta(152,29)$ and $q\sim Beta(37,19)$, then you can calculate the distribution of $i=log(q)/log(p)$ using a change of variable and then obtain the corresponding probability interval for $i$ using this distribution. </p>
<p>Another possibility is to approximate this probability interval by Monte Carlo simulation. In this case this interval is approximately $(1.30, 4.23)$</p>
<pre><code>i=log(rbeta(100000, 37, 19))/log(rbeta(100000, 152, 29))
c(quantile(i,0.025),quantile(i,0.975))
</code></pre>
<p>In order to construct a confidence interval for $i$ you would require that $p$ and $q$ are parameters of a sampling model.</p>
| 620
|
confidence intervals
|
Interpreting overlap of bootstrapped confidence intervals
|
https://stats.stackexchange.com/questions/467331/interpreting-overlap-of-bootstrapped-confidence-intervals
|
<p>Assuming two samples of numeric values for two groups of unequal group sizes (e.g. 100 opinion scores collected from group A and 15 opinion scores collected from group B), I understand that non-overlapping 95% confidence intervals of the opinion scores indicate that there is a statistically significant difference in the scores of the two groups, while the opposite is not necessarily true: overlapping confidence intervals do not necessarily indicate the lack of statistically significant difference (see e.g. <a href="https://www.cscu.cornell.edu/news/statnews/stnews73.pdf" rel="nofollow noreferrer">https://www.cscu.cornell.edu/news/statnews/stnews73.pdf</a>).</p>
<p>My question is: What if the confidence intervals are computed not from the observed opinion scores, but by bootstrapping mean scores for both groups? Do overlapping confidence intervals obtained from bootstrapping rather than from the observed scores also indicate that the difference might be statistically significant despite the confidence interval overlap? Or does an overlap of bootstrapped confidence intervals necessarily mean that there is no statistically significant difference between the two groups?</p>
<p>My own attempt to answer this question:
Given that judging statistical significance from confidence intervals is only meaningful when the intervals are computed on the <strong>differences between groups rather than the group means</strong>, with significance indicated when the confidence interval of the differences does not include zero (see e.g. <a href="https://statisticsbyjim.com/hypothesis-testing/confidence-intervals-compare-means" rel="nofollow noreferrer">https://statisticsbyjim.com/hypothesis-testing/confidence-intervals-compare-means</a>), I conclude that bootstrapping in itself does not change this: an overlap of bootstrapped confidence intervals of the group means does not by itself establish a lack of statistical significance.
Is my thinking correct? Or does bootstrapping from the two samples allow one to safely assume a lack of statistical significance when the confidence intervals overlap?</p>
| 621
|
|
confidence intervals
|
Hypothesis testing: 95% confidence intervals of group means overlapping zero or 83% confidence intervals?
|
https://stats.stackexchange.com/questions/146342/hypothesis-testing-95-confidence-intervals-of-group-means-overlapping-zero-or
|
<p>I am trying to select between the following two methods to look for significant differences between two abundance estimates: 1) calculation of the 95% confidence interval for the difference between the two group means to determine if confidence intervals overlap zero, and 2) 83% confidence intervals. I realise that interpretation of the overlap in 83% confidence intervals should only be undertaken when standard errors are approximately equal. I was wondering whether there are any factors that might affect the reliability of the first method for hypothesis testing?</p>
| 622
|
|
confidence intervals
|
Confidence interval for ratio between two values without confidence intervals
|
https://stats.stackexchange.com/questions/616113/confidence-interval-for-ratio-between-two-values-without-confidence-intervals
|
<p>I have two numeric variables (death rates), each without confidence intervals, and I want to calculate the CI of the ratio between the two values.</p>
<p>I was using the MOVERR method developed by Donner & Zhou but this was only applicable to values with confidence intervals.
<a href="https://rdrr.io/cran/pairwiseCI/man/MOVERR.html" rel="nofollow noreferrer">https://rdrr.io/cran/pairwiseCI/man/MOVERR.html</a> (This is the information on the function and package)
<a href="https://pubmed.ncbi.nlm.nih.gov/20826501/" rel="nofollow noreferrer">https://pubmed.ncbi.nlm.nih.gov/20826501/</a> (This is the link to the article)</p>
<p>Is there a way to calculate the ratio's CI from two values without CI, using <code>R</code>?</p>
<p>An example would be the annual average mortality rate of A virus = 2.2 and the annual average mortality rate of B virus = 12.2 and I want to calculate the ratio of A against B with a 95% confidence interval.</p>
<p>Thank you</p>
| 623
|
|
confidence intervals
|
Boxplots vs. Confidence Intervals
|
https://stats.stackexchange.com/questions/137543/boxplots-vs-confidence-intervals
|
<p>I designed a heuristic that solves a problem concerning network graphs. It was tested on thousands of instances with a variety of characteristics: topology, template, number and position of users, capacities, ... It produced more than 300,000 results, which also depend on the random seed that was used.</p>
<p>In order to evaluate the results, I decided to use boxplots that I created with JFreeChart. I created different diagrams for the different topologies and with separate plots for every template. I felt that was a good way to visually summarize the results.</p>
<p>I was asked why I didn't use confidence intervals instead to give an estimate. From what I know, these depend on an underlying distribution of a population parameter while boxplots don't. However, I summarize results across different seeds, numbers of users and capacities, all of which influence the results. So I think it would not be possible to use confidence intervals unless I distinguished every single network characteristic.</p>
<p>Is that true? What are other advantages and disadvantages? And how could I argue, that I only use boxplots and not confidence intervals?</p>
|
<p>Choosing box plots means that you print the 25th and 75th percentiles. Why not choose to print 2.5 and 97.5 percentiles? At n=300000 and unknown distribution that would be the most sensible definition of a confidence interval. You might even consider printing both in just one plot.</p>
<p>The purpose of the data evaluation is not perfectly clear and thus there is no better or worse to advise. If this is all about description, I personally feel that both descriptors contain too little of the available information. Have you considered violin plots? They might tell a lot more about the data's distribution than a boxplot or a confidence interval and take no more space than boxplots.</p>
| 624
|
confidence intervals
|
Confidence intervals for multinomial proportions
|
https://stats.stackexchange.com/questions/569073/confidence-intervals-for-multinomial-proportions
|
<p>I know that there are a lot of methods to find the confidence intervals for binomial proportions using methods like Agresti-Coull, but is there any important papers mentioning to find confidence intervals for multinomial case?</p>
|
<p>Not sure about important papers, but there's the obvious traditional <a href="https://en.wikipedia.org/wiki/Conjugate_prior#Table_of_conjugate_distributions" rel="nofollow noreferrer">conjugate Bayesian updating</a> option (using a Dirichlet prior for the multinomial parameters).</p>
<p>In the binomial case there's of course a direct match between using different Beta priors (a <span class="math-container">$\text{Beta}(1,0)$</span> and <span class="math-container">$\text{Beta}(0,1)$</span> for the upper and lower limits) to get exact binomial confidence intervals, but I'm not sure whether there's a directly corresponding approach for the multinomial. However, a Jeffreys <span class="math-container">$\text{Beta}(0.5, 0.5)$</span> prior often performs pretty similarly. The Jeffreys prior for the multinomial is a <span class="math-container">$\text{Dirichlet}(0.5, \ldots, 0.5)$</span> distribution. After observing <span class="math-container">$y_c$</span> items falling into categories <span class="math-container">$c=1,\ldots,C$</span>, the posterior distribution is then given by <span class="math-container">$\text{Dirichlet}(0.5+y_1, 0.5+y_2, \ldots, 0.5+y_C)$</span>.</p>
<p>If you are then interested in any particular multinomial proportion, its marginal distribution is just a Beta distribution. E.g. if you are interested in proportion <span class="math-container">$c$</span>, then its marginal distribution is a <span class="math-container">$\text{Beta}(0.5+y_c, \sum_{i\neq c} (0.5+y_i))$</span> distribution. If you need any more unusual transformations of the proportions, you can always sample from the Dirichlet posterior, apply the transformations to the samples, et voilà, you've got posterior samples for the transformations of these proportions.</p>
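A small sketch of the Jeffreys–Dirichlet recipe above; the counts are hypothetical:

```r
y <- c(23, 12, 44)                 # hypothetical category counts
a <- 0.5 + y                       # Dirichlet posterior parameters

# Marginal 95% interval for the first proportion via its Beta marginal:
qbeta(c(0.025, 0.975), a[1], sum(a[-1]))

# Equivalently, sample the Dirichlet (independent gammas, normalized)
# and take quantiles -- handy for transformed proportions too:
g <- matrix(rgamma(3e4, shape = rep(a, each = 1e4)), ncol = 3)
ps <- g / rowSums(g)
quantile(ps[, 1], c(0.025, 0.975))
```

The two intervals agree up to Monte Carlo error, and the sampling route extends directly to arbitrary functions of the proportions.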
<p>If you need things like covariate adjustments, you can use <a href="https://bookdown.org/content/3686/nominal-predicted-variable.html" rel="nofollow noreferrer">Bayesian nominal / multinomial regression</a> (e.g. in R the <code>brms</code> package covers this nicely) with similar nice properties for the posterior samples.</p>
| 625
|
confidence intervals
|
Do likelihood-based confidence intervals avoid general criticisms of confidence intervals?
|
https://stats.stackexchange.com/questions/539413/do-likelihood-based-confidence-intervals-avoid-general-criticisms-of-confidence
|
<p>In the literature, I see post-sample criticisms of frequentist confidence intervals — but the usual targets are intervals that use somewhat weak methods such as assumed normality or other long-run/asymptotic methods.</p>
<p>However, Bayesian credibility intervals simply average over a number of likelihood intervals.</p>
<p>If we construct an approximate CI based on the likelihood function, does this address issues like post-sampling nonsense intervals?</p>
| 626
|
|
confidence intervals
|
Simple example of confidence intervals
|
https://stats.stackexchange.com/questions/119805/simple-example-of-confidence-intervals
|
<p>I'm looking for an example of a confidence interval similar to the following, but with only one continuous interval. The following confidence interval consists of two separate sets, each of which contains only one value, which seems rather odd for a confidence interval:</p>
<p><img src="https://i.sstatic.net/Nivwo.png" alt="enter image description here"></p>
<p>Any ideas?</p>
| 627
|
|
confidence intervals
|
Confidence Intervals for ICC
|
https://stats.stackexchange.com/questions/578301/confidence-intervals-for-icc
|
<p>I was wondering if anyone might know of a way to calculate confidence intervals around an ICC(1) value? I'm running a multilevel model using the lmer() function in lme4 where I'm interested in seeing if there is a significant amount of within-person variation in a particular construct. I've run a null model for the variable and calculated the ICC and residual ICC but I was hoping to get a confidence interval around the ICC (and subtract this from 1 to get the confidence interval for the residual ICC) to see if it includes 0. However, I haven't been able to figure out a way to do this yet.</p>
| 628
|
|
confidence intervals
|
QAP regression and confidence intervals
|
https://stats.stackexchange.com/questions/663864/qap-regression-and-confidence-intervals
|
<p>I've run a Poisson QAP (quadratic assignment procedure) regression in R on my network data. As expected, the output provides permutation-based p-values, but not standard errors or confidence intervals.</p>
<p>To communicate the uncertainty around my estimates, I’d like to explore whether it's possible to derive confidence intervals or some equivalent measure of uncertainty in this context. Are there any established methods for obtaining confidence intervals for coefficient estimates in QAP regression models? Or is this inherently limited by the permutation framework?</p>
<p>I've searched for answers, but haven’t been able to find anything conclusive so far. Any references would be greatly appreciated!</p>
| 629
|
|
confidence intervals
|
When are confidence intervals useful?
|
https://stats.stackexchange.com/questions/3911/when-are-confidence-intervals-useful
|
<p>If I understand correctly a confidence interval of a parameter is an interval constructed by a <em>method</em> which yields intervals containing the true value for a specified proportion of samples. So the 'confidence' is about the method rather than the interval I compute from a particular sample. </p>
<p>As a user of statistics I have always felt cheated by this since the space of all samples is hypothetical. All I have is one sample and I want to know what that sample tells me about a parameter.</p>
<p>Is this judgement wrong? Are there ways of looking at confidence intervals, at least in some circumstances, which would be meaningful to users of statistics?</p>
<p>[This question arises from second thoughts after dissing confidence intervals in a math.se answer <a href="https://math.stackexchange.com/questions/7564/calculating-a-sample-size-based-on-a-confidence-level/7572#7572">https://math.stackexchange.com/questions/7564/calculating-a-sample-size-based-on-a-confidence-level/7572#7572</a> ]</p>
|
<p>I like to think of CIs as a way to escape the Hypothesis Testing (HT) framework, at least the binary decision framework following <a href="http://j.mp/awJEkH" rel="noreferrer">Neyman</a>'s approach, and to keep in line with the theory of measurement in some way. More precisely, I view them as closer to the reliability of an estimation (a difference of means, for instance), whereas HT is closer to hypothetico-deductive reasoning, with its pitfalls (we cannot accept the null, the alternative is often stochastic, etc.). Still, with both interval estimation and HT we have to rely on distributional assumptions most of the time (e.g. a sampling distribution under $H_0$), which allow us to make inferences from our sample to the general population or a representative one (at least in the frequentist approach).</p>
<p>In many contexts, CIs are complementary to usual HT, and I view them as in the following picture (it is under $H_0$):</p>
<p><img src="https://i.sstatic.net/9VSib.png" alt="alt text"></p>
<p>that is, under the HT framework (left), you look at how far your statistic is from the null, while with CIs (right) you are looking at the null effect "from your statistic", in a certain sense. </p>
<p>Also, note that for certain kinds of statistics, like odds ratios, HT is often meaningless and it is better to look at the associated CI, which is asymmetric and provides more relevant information about the direction and precision of the association, if any.</p>
| 630
|
confidence intervals
|
Confidence intervals calculated from other confidence intervals (binomial problem)?
|
https://stats.stackexchange.com/questions/616470/confidence-intervals-calculated-from-other-confidence-intervals-binomial-proble
|
<p>In a binomial experiment, I have an estimate for the probability of 3 independent events A, B & C, each with a 95% confidence interval.</p>
<p>(Trivial example values)</p>
<p><code>P(A) = .12 (.05, .29)</code><br />
<code>P(B) = .16 (.08, .25)</code><br />
<code>P(C) = .06 (.02, .14)</code></p>
<p>I need to calculate <code>P (no event) = P (no A) * P (no B) * P (no C)</code></p>
<p>which is <code>(1 - P(A)) * (1 - P(B)) * (1 - P(B))</code>, or <code>(1 - .12) * (1 - .16) * (1 - .06)</code>.</p>
<p>Now, my question arises when I do the same calculation using the lower and upper bounds of the confidence intervals to calculate a CI around <code>P (no success)</code>. It seems logical to do it, but I know that in some circumstances, you can't just add or subtract lower or upper bounds of C.I.'s without affecting the width, or rather the confidence level of your newly calculated interval. (Adding two 95% C.I.'s would lead to a close to 98% C.I., I've read somewhere recently).</p>
<p>I'm just not sure if this is one of those circumstances, and if it is, how do I find / calculate the proper confidence level (85%? 90%?) to use in the first step in order to end up with a truly 95% C.I. at the end?</p>
<p>EDIT:
This is an epidemiological study. Sample proportions for A, B, and C were obtained from the same sampled individuals; however, the three events are assumed independent (finding A does not impact chance of finding B in the same individual).</p>
|
<ul>
<li><p>You have estimates <span class="math-container">$\hat{q}_a$</span>, <span class="math-container">$\hat{q}_b$</span> and <span class="math-container">$\hat{q}_c$</span>, which are (presumably) approximate independent estimates of the probabities for the independent events 'no A', 'no B' and 'no C'.</p>
</li>
<li><p>You have related standard errors for these estimates (which can be derived from confidence intervals).</p>
</li>
<li><p>You want to compute an estimate for the product <span class="math-container">$q = q_aq_bq_c$</span>, the probability that none of A, B and C occurs, assuming a model where they are independent.</p>
</li>
</ul>
<p>You can estimate this by <span class="math-container">$$\hat{q} = \hat{q}_a\hat{q}_b\hat{q}_c$$</span></p>
<p>For the standard deviation, and the associated confidence interval, you can use as a propagation-of-errors approximation the formula for the <a href="https://stats.stackexchange.com/questions/52646/variance-of-product-of-multiple-independent-random-variables">variance of independent variables when they are multiplied</a>.</p>
<p><span class="math-container">$$\sigma_{XYZ}^2 = \mu_{X}^2 \mu_{Y}^2 \sigma_{Z}^2 + \mu_{X}^2 \sigma_{Y}^2 \mu_{Z}^2 + \sigma_{X}^2 \mu_{Y}^2 \mu_{Z}^2 + \mu_{X}^2 \sigma_{Y}^2 \sigma_{Z}^2 + \sigma_{X}^2 \mu_{Y}^2 \sigma_{Z}^2 + \sigma_{X}^2 \sigma_{Y}^2 \mu_{Z}^2 + \sigma_{X}^2 \sigma_{Y}^2 \sigma_{Z}^2$$</span></p>
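<p>As a sanity check of this propagation formula, here is a Python sketch comparing the analytic variance of a product of three independent normals against a brute-force simulation (the means and standard deviations below are made-up stand-ins, not values derived from the question's intervals):</p>

```python
# Variance of X*Y*Z for independent X, Y, Z: all seven cross terms.
import random
import statistics

random.seed(1)

def product_var(mx, sx, my, sy, mz, sz):
    vx, vy, vz = sx ** 2, sy ** 2, sz ** 2
    return (mx**2 * my**2 * vz + mx**2 * vy * mz**2 + vx * my**2 * mz**2
            + mx**2 * vy * vz + vx * my**2 * vz + vx * vy * mz**2
            + vx * vy * vz)

# hypothetical estimates q_a, q_b, q_c with their standard errors
mx, sx = 0.88, 0.05
my, sy = 0.84, 0.04
mz, sz = 0.94, 0.03

analytic = product_var(mx, sx, my, sy, mz, sz)
sim = [random.gauss(mx, sx) * random.gauss(my, sy) * random.gauss(mz, sz)
       for _ in range(200_000)]
sim_var = statistics.variance(sim)
print(analytic, sim_var)  # the two should agree closely
```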
<hr />
<h3>Simulation</h3>
<p>I did a simulation with <span class="math-container">$n=100$</span> and <span class="math-container">$p_a=p_b=p_c=0.5$</span>, and interestingly, computing <span class="math-container">$\hat{q}$</span> indirectly via <span class="math-container">$\hat{q}_a\hat{q}_b\hat{q}_c$</span> leads to a smaller variance of the estimate, in comparison to using the raw data directly (counting the cases of no A, no B and no C). This is because we are effectively using more data, 300 data points instead of 100.</p>
<p><a href="https://i.sstatic.net/GCrVK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GCrVK.png" alt="simulation for difference of methods" /></a></p>
<p>So the indirect estimate using the product <span class="math-container">$\hat{q} = \hat{q}_a\hat{q}_b\hat{q}_c$</span> has less variance than using counts of the events directly. But it might potentially be biased when the events A, B, C are not truly independent.</p>
| 631
|
confidence intervals
|
Monte-Carlo error Confidence Intervals
|
https://stats.stackexchange.com/questions/287939/monte-carlo-error-confidence-intervals
|
<p>I am constructing 95% confidence intervals on some metric of interest using MC simulation. These intervals can be constructed for example using bootstrapping. </p>
<p>Does it mean that if I repeat the same MC simulation 100 times with a different seed, 95% of my results should be inside the confidence interval? </p>
|
<p>$[a,b]$ is a 95% CI if the probability that the true value of your metric lies between $a$ and $b$ is 95%. </p>
<p>In statistics, we believe that a thing called the 'true value' actually exists (given by God, Nature, the Spaghetti Monster or who/what-ever). We are aware that we can never tell its exact value, but we try to approximate it.</p>
<p>So the above definition means, in practice, that if you run your MC simulation $n$ times and obtain $n$ different 95% CIs, about $0.95n$ of them should contain the true value of your metric.</p>
<p>Of course bootstrap CIs are just approximations of the exact CIs that could be calculated if we knew the distribution of your metric.</p>
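<p>The coverage interpretation above is easy to check by simulation; here is a minimal Python sketch, assuming a normal-theory interval and a known true mean:</p>

```python
# Coverage sketch: repeat an experiment many times, build a normal-theory
# 95% CI each time, and count how often it contains the true mean (here 0).
import random
import statistics

random.seed(0)
true_mean, n, reps = 0.0, 50, 2000
covered = 0
for _ in range(reps):
    x = [random.gauss(true_mean, 1.0) for _ in range(n)]
    m, se = statistics.fmean(x), statistics.stdev(x) / n ** 0.5
    if m - 1.96 * se <= true_mean <= m + 1.96 * se:
        covered += 1
print(covered / reps)  # close to 0.95
```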
| 632
|
confidence intervals
|
Confidence interval and confidence region
|
https://stats.stackexchange.com/questions/138603/confidence-interval-and-confidence-region
|
<p>Could you please tell me what is the difference between confidence interval and confidence region in the following sense? </p>
<p>For example, we have s multiple linear regression model. For individual confidence intervals, we use $t$-statistics to find individual confidence intervals for regression parameters but I found in many books that when authors write confidence intervals based on $F$ distribution,they call it confidence region not the intervals.</p>
<p>What does it mean by region and how does it relate to confidence intervals,
specifically in view of multiple regression?</p>
|
<p>As @NBrouwer says, a confidence interval is for an individual variable, so it is a (one-dimensional) interval. This is the case for e.g. the confidence interval for an individual regression coefficient. </p>
<p>However, if you build 'confidence intervals' for more than one variable at a time, i.e. for a multivariate parameter $\beta=(\beta_0, \beta_1, \dots \beta_n)$, then you get a region in an (n+1)-dimensional space. In two dimensions this could be a rectangular region, an ellipse, or another shape. </p>
<p>The Bonferroni correction e.g. requires you (in the two-variable case), for a 0.95 confidence level, to construct two intervals - one for each dimension - using a 0.975 confidence level. The confidence region then looks like $\{(x,y) | \bar{x}_L \le x \le \bar{x}_H \& \bar{y}_L \le y \le \bar{y}_H \}$, which defines a rectangular region (the 'bar' means a fixed value and the subscripts L and H mean Low and High). Such a rectangular region could be seen as a (two-dimensional) interval. </p>
<p>For confidence 'intervals' based on statistics like a $\chi^2$-statistic (in 2 dimensions) your region will be an ellipse (see also <a href="https://stats.stackexchange.com/questions/164741/how-to-find-the-maximum-axis-of-ellipsoid-given-the-covariance-matrix/164744#164744">How to find the maximum axis of ellipsoid given the covariance matrix?</a> and <a href="https://stats.stackexchange.com/questions/171074/chi-square-test-why-is-the-chi-squared-test-a-one-tailed-test/171084#171084">Chi-Square-Test: Why is the chi-squared test a one-tailed test?</a> - where the equation for an ellipse can be seen in the definition of the $X^2$). </p>
<p>Similarly for the F-statistic: you build an interval for all the coefficients jointly, so it is a multidimensional region. </p>
<p>So a one-dimensional confidence region is a confidence interval. Rectangular confidence regions might be called confidence intervals, depending on your definition of a (multi-dimensional) interval. In other words, ''confidence region'' is the more general name, and ''confidence interval'' is a special case of a ''confidence region''. </p>
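<p>To make the Bonferroni construction concrete, here is a small Python sketch (the coefficient estimates and standard errors are hypothetical) of the per-coordinate critical value needed for a joint 95% rectangular region in two dimensions:</p>

```python
# Bonferroni sketch for a joint 95% rectangular confidence region in 2D:
# each coordinate gets a 97.5% two-sided interval, i.e. the 0.9875 quantile.
from statistics import NormalDist

z_joint = NormalDist().inv_cdf(1 - 0.05 / 4)   # per-coordinate critical value
z_single = NormalDist().inv_cdf(1 - 0.05 / 2)  # usual one-dimensional value

# hypothetical estimates and standard errors for two coefficients
est, se = [1.2, -0.4], [0.3, 0.1]
rectangle = [(e - z_joint * s, e + z_joint * s) for e, s in zip(est, se)]
print(z_single, z_joint)  # ~1.96 vs ~2.24: the joint region is wider per axis
print(rectangle)
```

<p>Each side of the rectangle is wider than the corresponding one-dimensional interval (about 2.24 instead of 1.96 standard errors), which is the price of the joint coverage guarantee.</p>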
| 633
|
confidence intervals
|
Different methods, different confidence intervals
|
https://stats.stackexchange.com/questions/191133/different-methods-different-confidence-intervals
|
<p>A <a href="http://en.wikipedia.org/wiki/Confidence_interval#Statistical_theory" rel="nofollow">definition</a> of a confidence interval could be:</p>
<blockquote>
<p><strong>A confidence interval</strong> for the parameter θ, with confidence level or
confidence coefficient γ, is an interval with random endpoints ($u(X)$,
$v(X)$), determined by the pair of random variables $u(X)$ and $v(X)$, with
the property:</p>
<p>${\Pr}_{\theta,\varphi}(u(X)<\theta<v(X))=\gamma\text{ for all
}(\theta,\varphi). $</p>
<p>Here $Pr(θ,φ)$ indicates the probability distribution of X
characterised by $(θ, φ)$. </p>
</blockquote>
<p>Implicit in this definition is the notion that one can build <strong>different</strong> confidence intervals using <strong>different</strong> pairs of methods $\left(u, v\right)$, for the <strong>same</strong> confidence coefficient $\gamma$.</p>
<p>What would be a good example to show this? Say we have a sample of 100 from a 1D normal distribution, and we seek <strong>two</strong> 95% confidence intervals for the <strong>mean</strong>, obtained through two different methods $\left(u_1, v_1\right)$ and $\left(u_2, v_2\right)$. What methods could we use here?</p>
|
<p>First 95% Confidence Interval: Symmetric 2-sided, so sample mean +/- 1.960 * (standard error)</p>
<p>Second 95% Confidence Interval: Upper 1-sided: [$-\infty$, sample mean + 1.645 * (standard error)]</p>
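<p>A Python sketch of both procedures on one simulated sample (the data are synthetic; both are valid 95% procedures, yet they produce different intervals):</p>

```python
# Two different valid 95% confidence procedures for the same normal mean.
import random
import statistics
from statistics import NormalDist

random.seed(7)
x = [random.gauss(10, 2) for _ in range(100)]
m = statistics.fmean(x)
se = statistics.stdev(x) / len(x) ** 0.5

z2 = NormalDist().inv_cdf(0.975)   # ~1.960 for the symmetric two-sided CI
z1 = NormalDist().inv_cdf(0.95)    # ~1.645 for the one-sided CI

ci_symmetric = (m - z2 * se, m + z2 * se)
ci_one_sided = (float("-inf"), m + z1 * se)  # unbounded below
print(ci_symmetric)
print(ci_one_sided)
```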
| 634
|
confidence intervals
|
Are Prophet's "uncertainty intervals" confidence intervals or prediction intervals?
|
https://stats.stackexchange.com/questions/619860/are-prophets-uncertainty-intervals-confidence-intervals-or-prediction-interva
|
<blockquote>
<p>By default Prophet will return uncertainty intervals for the forecast <code>yhat</code>.</p>
</blockquote>
<p>Unfortunately, <a href="https://facebook.github.io/prophet/docs/uncertainty_intervals.html" rel="noreferrer">the documentation</a> about those "uncertainty intervals" is extremely vague, and it doesn't help that statisticians have precise definitions of both <a href="https://stats.stackexchange.com/tags/confidence-interval/info">confidence intervals</a> and <a href="https://stats.stackexchange.com/tags/prediction-interval/info">prediction intervals</a> (<a href="https://stats.stackexchange.com/tags/prediction-interval/info">there is a big difference!</a>), but not of "uncertainty intervals". In fact, the documentation could be taken to refer to either one, although a "prediction interval interpretation" sounds a little more likely to me.</p>
<p>Can anyone tell us (e.g., by digging through the code) whether these are confidence or prediction intervals?</p>
|
<p>From digging through the code as you suggest it seems that they are prediction intervals. Specifically the model is fit by sampling a posterior with Stan (line 1249-1266 in <a href="https://github.com/facebook/prophet/blob/374676500795aec9d5cbc7fe5f7a96bf00489809/R/R/prophet.R#L1249" rel="noreferrer">prophet.R</a>) and the interval is based on draws from the posterior predictive distribution. See line 1542 of the file <a href="https://github.com/facebook/prophet/blob/main/R/R/prophet.R#L1542-L1542" rel="noreferrer">prophet.R</a>, which I reproduce below.</p>
<pre class="lang-r prettyprint-override"><code>#' Prophet uncertainty intervals for yhat and trend
#'
#' @param m Prophet object.
#' @param df Prediction dataframe.
#'
#' @return Dataframe with uncertainty intervals.
#'
#' @keywords internal
predict_uncertainty <- function(m, df) {
sim.values <- sample_posterior_predictive(m, df)
# Add uncertainty estimates
  lower.p <- (1 - m$interval.width)/2
  upper.p <- (1 + m$interval.width)/2
intervals <- cbind(
    t(apply(t(sim.values$yhat), 2, stats::quantile, c(lower.p, upper.p),
            na.rm = TRUE)),
    t(apply(t(sim.values$trend), 2, stats::quantile, c(lower.p, upper.p),
na.rm = TRUE))
)
colnames(intervals) <- paste(rep(c('yhat', 'trend'), each=2),
c('lower', 'upper'), sep = "_")
return(dplyr::as_tibble(intervals))
}
</code></pre>
<p>See also the <code>sample_model</code> function that is called from <code>sample_posterior_predictive</code>, which shows exactly how the <code>yhat</code>s are calculated <a href="https://github.com/facebook/prophet/blob/374676500795aec9d5cbc7fe5f7a96bf00489809/R/R/prophet.R#L1573" rel="noreferrer">on line 1573</a>:</p>
<pre class="lang-r prettyprint-override"><code>sample_model <- function(m, df, seasonal.features, iteration, s_a, s_m) {
trend <- sample_predictive_trend(m, df, iteration)
  beta <- m$params$beta[iteration,]
  Xb_a = as.matrix(seasonal.features) %*% (beta * s_a) * m$y.scale
  Xb_m = as.matrix(seasonal.features) %*% (beta * s_m)
  sigma <- m$params$sigma_obs[iteration]
noise <- stats::rnorm(nrow(df), mean = 0, sd = sigma) * m$y.scale
return(list("yhat" = trend * (1 + Xb_m) + Xb_a + noise,
"trend" = trend))
}
</code></pre>
| 635
|
confidence intervals
|
Confidence intervals for integer parameters
|
https://stats.stackexchange.com/questions/579228/confidence-intervals-for-integer-parameters
|
<p>I'm interested, purely out of curiosity, in what methods can be used to calculate confidence intervals for discrete integer model parameters.</p>
<p>As an example, consider the model (which I can flesh out with code if needs be)</p>
<p><span class="math-container">$$
y_i \in \{0, 1\}; \\
P(y_i = 1) =
\begin{cases}
.25 \text{ if } i \lt \theta \\
.75 \text{ otherwise };
\end{cases} \\
i \in \{1, 2, \dots, n\};\\
\theta \in \{1, 2, \dots, n\};\\
$$</span></p>
<p>where the change point <span class="math-container">$\theta$</span> is an integer parameter.</p>
<p>I can think of a few approaches one could take here:</p>
<ul>
<li>Bootstrapping</li>
<li>Using Bayesian methods, with a Uniform prior, to sample from the posterior <span class="math-container">$P(\theta | Y) \propto P(Y | \theta)$</span>, and then treat the percentiles as bounds of the corresponding confidence interval. I think this is appropriate, given that for many models credible intervals with uniform priors are equivalent to frequentist confidence intervals.</li>
<li>Treat <span class="math-container">$\theta$</span> as a continuous variable...
<ul>
<li>...and use Fisher information to calculate standard errors. This won't work here because <span class="math-container">$P(Y | \theta)$</span> is a step function.</li>
<li>For other models, it may be possible to calculate continuous (decimal) standard errors (e.g. <a href="https://stats.stackexchange.com/q/374961/42952">here</a>). I guess the resulting confidence interval, <span class="math-container">$\hat \theta \pm 1.96 \times SE$</span> could be rounded off to integer values.</li>
</ul>
</li>
</ul>
<p>So, what methods are available for calculating confidence intervals for integer parameters, either for models of this form or in general?</p>
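<p>One further option, sketched below in Python for a simulated instance of this model, is to invert the likelihood-ratio statistic: keep every integer <span class="math-container">$\theta$</span> whose log-likelihood is within <span class="math-container">$\tfrac{1}{2}\chi^2_{1,0.95}\approx 1.92$</span> of the maximum. The usual <span class="math-container">$\chi^2$</span> calibration is questionable for change-point problems (the likelihood is a step function, as noted above), so this should be treated as a heuristic:</p>

```python
# Likelihood-ratio sketch for the change-point model above: keep every
# integer theta whose log-likelihood is within 0.5 * chi^2_{1,0.95} ~= 1.92
# of the maximum (the chi^2 calibration is heuristic here).
import math
import random

random.seed(3)
n, true_theta = 200, 80
y = [int(random.random() < (0.25 if i + 1 < true_theta else 0.75))
     for i in range(n)]

def loglik(theta):
    ll = 0.0
    for i, yi in enumerate(y, start=1):
        p = 0.25 if i < theta else 0.75
        ll += math.log(p if yi else 1 - p)
    return ll

lls = {t: loglik(t) for t in range(1, n + 1)}
mle = max(lls, key=lls.get)
ci = [t for t, ll in lls.items() if lls[mle] - ll <= 1.92]
print(mle, min(ci), max(ci))  # MLE and a range of plausible change points
```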
<blockquote>
<p><a href="https://stats.stackexchange.com/q/8844/42952">This</a> question might also be relevant.</p>
</blockquote>
| 636
|
|
confidence intervals
|
Confidence intervals for autocorrelation function
|
https://stats.stackexchange.com/questions/368404/confidence-intervals-for-autocorrelation-function
|
<p>Given a time series data sample I have computed autocorrelation coefficients for various lags, the result looks something like this</p>
<p><a href="https://i.sstatic.net/VEDsn.jpg" rel="noreferrer"><img src="https://i.sstatic.net/VEDsn.jpg" alt="enter image description here"></a></p>
<p>How do I compute the confidence intervals around the sample autocorrelation curve?</p>
<p>The reason for that is to see if another autocorrelation curve computed from samples generated by some model is within those confidence intervals.</p>
|
<p>A quick google search with "confidence intervals for acfs" yielded</p>
<p><a href="https://books.google.fi/books?id=WLaLBgAAQBAJ&pg=PA38&dq=Confidence+intervals+for+acfs&hl=fi&sa=X&ved=0ahUKEwiTvrPqotPdAhUriaYKHRUFDmAQ6AEIJjAA#v=onepage&q&f=false" rel="noreferrer">Janet M. Box-Steffensmeier, John R. Freeman, Matthew P. Hitt, Jon C. W. Pevehouse: Time Series Analysis for the Social Sciences</a>.</p>
<p>In there, on page 38, the standard error of an AC estimator at lag k is stated to be</p>
<p><span class="math-container">$AC_{SE,k} = \sqrt{N^{-1}\left(1+2\sum_{i=1}^k[AC_i^2] \right)}$</span></p>
<p>where <span class="math-container">$AC_i$</span> is the AC estimate at lag <span class="math-container">$i$</span> and <span class="math-container">$N$</span> is the number of time steps in your sample. This assumes that the true underlying process is actually MA. Assuming asymptotic normality of the AC estimator, you can then calculate the confidence interval at each lag as</p>
<p><span class="math-container">$CI_{AC_{k}} = [AC_{k} - 1.96\times AC_{SE,k},\ AC_{k} + 1.96\times AC_{SE,k}]$</span></p>
<p>(note that the <span class="math-container">$N^{-1}$</span> factor is already included in <span class="math-container">$AC_{SE,k}$</span>, so there is no need to divide by <span class="math-container">$\sqrt{N}$</span> again).</p>
<p>For some further info, see also <a href="https://support.minitab.com/en-us/minitab/18/help-and-how-to/modeling-statistics/time-series/how-to/autocorrelation/methods-and-formulas/methods-and-formulas/" rel="noreferrer">this</a> and <a href="https://www.ibm.com/support/knowledgecenter/SSLVMB_22.0.0/com.ibm.spss.statistics.algorithms/alg_acf-pacf_basic-stats_se-sample-autocorrelation.htm" rel="noreferrer">this</a>.</p>
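<p>A stdlib-only Python sketch of this standard error and the resulting pointwise band, on simulated white noise (the <span class="math-container">$N^{-1}$</span> factor sits inside the standard error, so it is not divided by <span class="math-container">$\sqrt{N}$</span> again):</p>

```python
# Sample ACF, Bartlett-style SE, and the pointwise 95% band.
import math
import random

random.seed(5)
N = 500
x = [random.gauss(0, 1) for _ in range(N)]  # white noise for illustration

def acf(x, k):
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + k] - m) for i in range(len(x) - k))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

results = []
for k in range(1, 6):
    ac = acf(x, k)
    se = math.sqrt((1 + 2 * sum(acf(x, i) ** 2
                                for i in range(1, k + 1))) / N)
    results.append((k, ac, ac - 1.96 * se, ac + 1.96 * se))
    print(f"lag {k}: {ac:+.3f}  CI [{ac - 1.96*se:+.3f}, {ac + 1.96*se:+.3f}]")
```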
| 637
|
confidence intervals
|
Given Two 95% Confidence Intervals
|
https://stats.stackexchange.com/questions/41141/given-two-95-confidence-intervals
|
<p>Suppose we are given two $95 \%$ confidence intervals, for $X_1$ and $X_2$, which are normally distributed. From this, how would we get a $95 \%$ confidence interval for $X_{1}/X_{2}$?</p>
|
<p>I don't think you can get the 95% CI of mean(x1/x2) just from their separate 95% CIs.
Maybe you can do a simulation to get the empirical distribution of (x1/x2), if you also know the correlation between x1 and x2. </p>
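<p>For example, a Python sketch of such a simulation, assuming (hypothetically) that the two estimates are independent normals whose standard errors are recovered from made-up 95% CIs:</p>

```python
# Monte-Carlo interval for X1/X2, assuming (hypothetically) that the two
# estimates are independent normals recovered from made-up 95% CIs.
import random

random.seed(11)
# hypothetical 95% CIs: X1 in (8, 12), X2 in (4, 6)
m1, se1 = 10.0, (12 - 8) / (2 * 1.96)
m2, se2 = 5.0, (6 - 4) / (2 * 1.96)

draws = sorted(random.gauss(m1, se1) / random.gauss(m2, se2)
               for _ in range(100_000))
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"approximate 95% interval for X1/X2: [{lo:.2f}, {hi:.2f}]")
```

<p>If the two estimates are correlated, draw from their joint distribution (e.g. a bivariate normal) instead; the percentile step is unchanged.</p>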
| 638
|
confidence intervals
|
Confusion regarding Confidence Intervals
|
https://stats.stackexchange.com/questions/461378/confusion-regarding-confidence-intervals
|
<p>Suppose we have <span class="math-container">$2$</span> independent population parameters <span class="math-container">$p_1$</span> and <span class="math-container">$p_2$</span>, such that the <span class="math-container">$90$</span>% ( symmetric ) confidence intervals for <span class="math-container">$p_1$</span> and <span class="math-container">$p_2$</span> are given by <span class="math-container">$ (0.411, 0.498) $</span> and <span class="math-container">$(0.473, 0.567)$</span> respectively. Now, according to the original question ( copied verbatim ) , supposing that the <span class="math-container">$90 $</span>% interval for <span class="math-container">$p_1$</span> lies entirely below the interval for <span class="math-container">$p_2$</span>, how confident are we to conclude that <span class="math-container">$p_1 < p_2$</span>?</p>
<p>Now, assuming that I have understood the problem correctly, I feel that this question does not make sense to me, because <span class="math-container">$(0.411, 0.498)$</span> does not lie entirely below <span class="math-container">$(0.473,0.567)$</span>. The question is: am I right in claiming so?</p>
<p>Next, assuming the problem is simply "How confident are we to conclude that <span class="math-container">$p_1 < p_2 $</span> ", given their <span class="math-container">$2$</span> <span class="math-container">$90$</span>% symmetric confidence intervals as above, how would we go about approaching it, if it is even possible?</p>
|
<p>When we say that an interval has confidence 95% we mean that, in repeated sampling, intervals constructed in the same way would cover the true value of the parameter about 95% of the time: the confidence is the probability of coverage in repeated experimentation.</p>
<p>Now, I do not think you can talk about "how confident we are to conclude that <span class="math-container">$p_1<p_2$</span>" when no probability statement is involved: <span class="math-container">$p_1 < p_2$</span> is simply either true or false. Perhaps from a Bayesian point of view you could talk about the posterior probability that <span class="math-container">$p_1 < p_2$</span> given the data, but that is an entirely different sort of statement from what goes under the name of confidence.</p>
| 639
|
confidence intervals
|
Particle Filter: Confidence Intervals
|
https://stats.stackexchange.com/questions/271099/particle-filter-confidence-intervals
|
<p><strong>Context</strong></p>
<p>This is a basic question about confidence intervals, specifically about the standard way to estimate one. Assume we have a set of $N$ i.i.d. random variables $\{X^i\}$. By the central limit theorem, the sample mean converges in distribution to $N(\mu,\frac{\sigma^2}{N})$, where $\mu$ is the true mean of $X^i$ and $\sigma^2$ is the true variance of $X^i$. Note: we are not assuming the $X^i$ are normally distributed.
The upper limit of the confidence interval can be estimated as</p>
<p>$$ CI_{upper}=\bar{x}+1.96\frac{s}{\sqrt{N}}$$</p>
<p>where $s$ is the square root of the sample variance. The idea here is quite simple. Assuming $N$ is large enough, we carry out an experiment ONCE, compute $\bar{x}=\frac{1}{N}\sum x^i$, which is an unbiased estimate of $\mu$, and compute $s^2=\frac{1}{N-1} \sum (x^i-\bar{x})^2 $, which is an unbiased estimate of $\sigma^2$. So our confidence interval gives us the uncertainty of repeating the experiment $M$ times, i.e. if we repeated it $M$ times we would expect $\mu$ to be inside the confidence interval 95% of the time. In code this can be written as (this can also be seen as estimating only the mean and variance of a single random variable)</p>
<pre><code>N <- 100
x <- rnorm(N)
mean(x)
var(x)
mean(x) + 1.96*sd(x)/sqrt(N)  # upper 95% confidence limit
</code></pre>
<p>Note: I am assuming i.i.d and I will come to a situation where I am uncertain whether i.i.d applies and if it doesn't how do we compute non-i.i.d confidence intervals.</p>
<p><strong>Particle Filter</strong></p>
<p>I am constructing a particle filter estimating $p(x_t|y_{1:t})$. I want to estimate confidence intervals for the mean estimates of my particles at time $t$. I hypothesize that my mean estimates are the best estimate of the state, although this is not entirely true. In effect I have an approximation of $p(x_t|y_{1:t})$ by a set of weighted samples $\{x^i_t, w^i_t\}$.
Therefore, I compute the weighted mean </p>
<p>$$\bar{x}_t=\frac{1}{N}\sum w^i_tx^i_t$$</p>
<p>I intend to compute two quantities: the variance of $\bar{x}_t$ and the confidence intervals around $\bar{x}_t$. The purpose of this is really to see how the frequency of resampling affects the variance; therefore, keep in mind that I may have resampled my particles $\{x^i_t, w^i_t\}$ from the previous time step and the index $i$ is not representative of the particle at time $t-1$.</p>
<p><strong>Some theoretical consequences</strong></p>
<p>There exist central limit theorems which state that my weighted approximation, $E(\hat{p}(x_{t}|y_{1:t}))$, converges in distribution to a normal distribution with mean $E(p(x_{t}|y_{1:t}))$ and some unknown variance $\sigma_{pf}^2$, where $\hat{p}(x_{t}|y_{1:t})=\frac{1}{N}\sum w^i_t\delta(x_t-x^i_t)$. I am unaware of any independence or identical-distribution assumptions, although I read somewhere that $\{x^i_t,w^i_t\}$ are independent but not identically distributed.</p>
<p><strong>How do I calculate the variance?</strong></p>
<p>I tried to calculate the variance using simulation of the data $\{y_t\}$ (I can generate $y_t$ because I know $x_t$) to compute $\bar{x}_t$. These are the steps I took: 1) generate $\{y_t\}$; 2) run the filter; 3) obtain $\bar{x}_t$;
4) repeat 1)-3) $M$ times; 5) with my collection of $M$ values of $\bar{x}_t$, which we will denote by $\{\bar{x}_t\}$, I compute $mean(\{\bar{x}_t\})$ to get an estimate of $E(p(x_{t}|y_{1:t}))$ and $var(\{\bar{x}_t\})$ to get an estimate of $var(p(x_{t}|y_{1:t}))$. My confidence interval is therefore</p>
<p>$$CI_{upper}=mean(\{\bar{x}_t\})+1.96var(\{\bar{x}_t\})$$</p>
<p>Notice I do not divide by $\sqrt{M}$.</p>
<p><strong>QUESTION</strong></p>
<ol>
<li><p>Have I computed the variance and confidence intervals correctly? I am really treating $\bar{x_t}$ as a random variable and getting Monte Carlo estimates from it. Also, when I computed the confidence interval, I did not use the number of simulations. And in consideration of the fact that $\{x^i_t\}$ might not be identically distributed, am I estimating $var(\{ \bar{x}_t\})$ correctly?</p></li>
<li><p>Is there a way I can compute confidence interval of $\bar{x}_t$ without needing to simulate?</p></li>
<li><p>Was it correct that I obtained instances of $\bar{x}_t$ by generating a new $\{y_t\}$ at each simulation. Does this not change the distribution I am estimating in the first place, $p(x_t|y_{1:t})$?</p></li>
</ol>
<p>EDIT: in response to @Taylors comments </p>
<p><a href="https://i.sstatic.net/YXuPR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YXuPR.png" alt="Conf with $\sqrt{N}$"></a>
<a href="https://i.sstatic.net/VquYI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VquYI.png" alt="Conf without $\sqrt{N}$ here"></a></p>
<p>As you can see, my constructed confidence intervals, shown by the two red lines, are too narrow. My true states, shown by black dots, are outside of the interval too frequently. I am using a simple random walk with a fixed process variance of 1 and measurement variance of 3. The narrow interval is when I divide by $\sqrt{N}$ and the wide interval is when I don't divide by $\sqrt{N}$.</p>
|
<p>You might want to check your formulas for the sample means and sample standard deviations once you start talking about particle filters. Also, the particles are not independent if you are resampling, but they are identically distributed. I think you have that backwards. </p>
<p>Otherwise, if you weren't resampling, you would just be using the Law of Large Numbers, more basic Central Limit Theorems, and Slutsky's theorem to guarantee that sample means and standard deviations are the way to go. You've been asking a lot of questions about the <a href="http://www.stats.ox.ac.uk/~doucet/doucet_johansen_tutorialPF2011.pdf" rel="nofollow noreferrer">Doucet/Johansen tutorial</a> lately, and that tutorial mentions a few sources on the theorems you're asking about. Specifically they say:</p>
<blockquote>
<p>SMC methods involve systems of particles which interact (via the
resampling mechanism) and, consequently, obtaining convergence results
is a much more difficult task than it is for SIS where standard
results (iid asymptotics) apply. However, there are numerous sharp
convergence results available for SMC; see [10] for an introduction to
the subject and the monograph of Del Moral [11] for a complete
treatment of the subject. An explicit treatment of the case in which
resampling is performed adaptively is provided by [12].</p>
</blockquote>
<p>To answer (1), you can drop the $1/N$ because your normalized weights sum to $1$ already:
$$
\bar{X}_t = \sum_i W_t^iX^i_t = \sum_i \frac{w_t^i}{\sum_jw_t^j} X^i_t
$$
$$
S^2_t = \sum_i W_t^i (X^i_t - \bar{X}_t)^2.
$$</p>
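<p>As a quick illustration of why the <code>1/N</code> drops out (a plain NumPy sketch, not tied to any particular filter implementation; the function name is my own):</p>

```python
import numpy as np

def weighted_mean_var(x, w):
    """Self-normalized particle estimates: no 1/N factor is needed,
    because the normalized weights already sum to one."""
    x = np.asarray(x, dtype=float)
    W = np.asarray(w, dtype=float)
    W = W / W.sum()                    # normalize the raw weights w_t^i
    xbar = np.sum(W * x)               # weighted mean
    s2 = np.sum(W * (x - xbar) ** 2)   # weighted variance
    return xbar, s2
```

<p>With equal weights this reduces to the ordinary sample mean and the (population-style) variance.</p>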
<p>Regarding (2) and (3), you probably never want to simulate your observed data. </p>
| 640
|
confidence intervals
|
Confidence intervals for ordered probit
|
https://stats.stackexchange.com/questions/144636/confidence-intervals-for-ordered-probit
|
<p>I'm attempting to compute confidence intervals for an ordered probit. I am a graduate student, and this was suggested as one of the tasks to add to my final paper. I have found a few papers discussing it, but I'm not sure whether this has to be done manually. I am using Stata.</p>
<p>The confidence intervals are for the predicted probabilities. My model has 3 discrete outcomes.</p>
<p>I.e., with 95% confidence the outcome will be y=1 or y=2.</p>
|
<p>This is a partial solution, but maybe it will be a useful start.</p>
<p>The usual way of doing this is with <code>predictnl</code>'s <code>ci</code> option, which will give you a predicted probability and a confidence interval for each observation. But this will give you some CI endpoints that fall outside the [0,1] interval, since Stata is not aware that it is dealing with probabilities when applying the delta method:</p>
<pre><code>use "http://www.stata-press.com/data/r13/fullauto", clear
oprobit rep77 foreign length mpg
/* Automated: CIs outside [0,1] */
predictnl poor = predict(outcome(1)), ci(poor_lb poor_ub)
predictnl fair = predict(outcome(2)), ci(fair_lb fair_ub)
predictnl avg = predict(outcome(3)), ci(avg_lb avg_ub)
predictnl good = predict(outcome(4)), ci(good_lb good_ub)
predictnl exc = predict(outcome(5)), ci(exc_lb exc_ub)
</code></pre>
<p>I don't believe that simply setting the problematic endpoints to zero or one is the correct thing to do. </p>
<p>Next I tried defining <a href="http://www.stata.com/manuals13/roprobitpostestimation.pdf#page=4" rel="nofollow">the expressions by hand</a>, but that had the same problem:</p>
<pre><code>/* Manual #1: CIs still outside [0,1] */
predictnl exc2 = normal(xb() - _b[/cut4]), ci(exc_lb2 exc_ub2)
predictnl good2 = normal(_b[/cut4]-xb()) - normal(_b[/cut3]-xb()), ci(good_lb2 good_ub2)
predictnl avg2 = normal(_b[/cut3]-xb()) - normal(_b[/cut2]-xb()), ci(avg_lb2 avg_ub2)
predictnl fair2 = normal(_b[/cut2]-xb()) - normal(_b[/cut1]-xb()), ci(fair_lb2 fair_ub2)
predictnl poor2 = normal(_b[/cut1]-xb()), ci(poor_lb2 poor_ub2)
</code></pre>
<p>Here <code>xb()</code> is shorthand for the linear index function of the coefficients and the predictors. </p>
<p>Calculating the linear index and getting CIs for that, and then taking the normal transform seems to work much better, but I am not sure how to apply that approach on the middle outcomes, where the predicted probability is a difference of two normal CDFs:</p>
<pre><code>/* Manual #2 */
predictnl exc3 = xb()-_b[/cut4], ci(exc_lb3 exc_ub3)
predictnl poor3 = _b[/cut1]-xb(), ci(poor_lb3 poor_ub3)
foreach var of varlist *3 {
replace `var' = normal(`var')
}
</code></pre>
<p>This will get you 2 of the 3 CIs for each observation.</p>
| 641
|
confidence intervals
|
Confidence intervals for population
|
https://stats.stackexchange.com/questions/442655/confidence-intervals-for-population
|
<p>These might be slightly basic questions for confidence intervals but I can't think exactly how to resolve them.</p>
<p>Considering an example where I have access to the entire population e.g. the annual revenue for a company over the last decade <span class="math-container">$R_i, i\in{1,..10}$</span> where revenue is generated from sales.</p>
<p>1) Is it a valid approach to estimate confidence intervals for e.g. each year's revenue by considering the distribution of revenue by week for each year? For example, weekly revenue is <span class="math-container">$r_{ij}, j\in1...52$</span>, with mean <span class="math-container">$\mu_i$</span> and standard deviation <span class="math-container">$\sigma_i$</span> then use the CLT to generate a CI for the revenue sum, i.e. using <span class="math-container">$R_i\sim N\big(n\mu_i, \sqrt{n}\sigma_i\big)$</span> to get <span class="math-container">$n\mu_i\pm1.96\sqrt{n}\sigma_i$</span> for the 95% CI. What is the difference between this weekly approach and an approach based on considering the distribution of revenue across sales?</p>
<p>2) In such a case where data for the entire population is available, what is the interpretation of the confidence interval? i.e. what is the true parameter that the CI seeks to enclose in this case?</p>
| 642
|
|
confidence intervals
|
Monte-Carlo Quantile Confidence Intervals
|
https://stats.stackexchange.com/questions/287710/monte-carlo-quantile-confidence-intervals
|
<p>I am looking at a Monte-Carlo engine with N simulations. This Monte-Carlo engine builds a distribution from which I would like to read the 99th percentile (called the VaR). The problem can be interpreted as Monte-Carlo VaR (value at risk). I use a confidence interval on this quantile obtained with 2 different methods:</p>
<ol>
<li>Asymptotic CIs (see Glasserman, Monte Carlo Methods, p. 491):
$$ \hat{x_p} \pm z_{\alpha/2} \frac{\sqrt{p(1-p)}}{f(x_p)\sqrt{N}}$$
where $f(x_p)$ is the density function and $x_p$ the quantile (VaR).
To estimate the $f(x_p)$ one can use kernel density estimation or finite-differences.</li>
<li>Non-parametric CIs: Another approach is to use batching where the N samples are divided in non overlapping batches $b$ of size $\frac{N}{b}$ and we compute the empirical mean and standard deviation across batches and infer a confidence interval.</li>
</ol>
<p>Having implemented both methods, I find they give similar results, with batching providing slightly wider confidence intervals. Note that I am constructing 95% confidence intervals. However, when I repeat the MC simulation $M$ times ($M$ MC runs) and look at the percentage of VaR values from these $M$ runs that land outside a given confidence interval (say, the confidence interval obtained on the first run), this percentage is much higher than 5%.</p>
<p>Shouldn't I expect only around 5% of the VaRs from the MC runs to finish outside any given 95% confidence interval? Or am I missing something?</p>
<p>Your help would be greatly appreciated. </p>
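<p>A minimal sketch of method 1 for a standard normal loss distribution (parameter choices are illustrative; the density at the quantile is estimated by a finite difference of the empirical quantile function, one of the options mentioned above):</p>

```python
import numpy as np

def quantile_ci(x, p=0.99, z=1.96):
    """Asymptotic 95% CI for the p-quantile:
    x_p +/- z * sqrt(p(1-p)) / (f(x_p) * sqrt(N))."""
    n = len(x)
    x_p = np.quantile(x, p)
    h = (1.0 - p) / 2.0  # probability-scale bandwidth (heuristic choice)
    # central finite difference of the quantile function estimates 1/f(x_p)
    f_hat = 2.0 * h / (np.quantile(x, p + h) - np.quantile(x, p - h))
    half = z * np.sqrt(p * (1.0 - p)) / (f_hat * np.sqrt(n))
    return x_p - half, x_p + half

rng = np.random.default_rng(5)
lo, hi = quantile_ci(rng.standard_normal(100_000))  # true 99% quantile ~ 2.326
```

<p>Repeating this over many independent runs and counting how often each run's interval contains the true quantile is the check that should come out near 95%; comparing other runs' estimates against the first run's interval answers a different question.</p>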
| 643
|
|
confidence intervals
|
Confidence Intervals Around a Mean: biased (non-centered) confidence interval? (an exercise using R)
|
https://stats.stackexchange.com/questions/115554/confidence-intervals-around-a-mean-biased-non-centered-confidence-interval
|
<p>I've been playing around with the module "Confidence Intervals Around a Mean" (meanCI) from statsTeachR (www.statsteachr.org), authored by Eric A Cohen (unfortunately, the author's contact information was not available).</p>
<p>It "focuses on understanding and calculating confidence intervals around a sample mean. This includes understanding the concept of a confidence interval; understanding when using either the normal or the t-distribution (or neither) is appropriate; writing R code to calculate confidence intervals; and writing code to test coverage of the true population mean by confidence intervals" (www.statsteachr.org/modules/19).</p>
<p>Building on this module, I produced this plot:</p>
<p><img src="https://i.sstatic.net/HRYtN.png" alt="plot"></p>
<p>It shows the average number of "mistakes" (confidence intervals that do not contain the population mean) as the sample size (n) increases from 2 to 1000 (the population size N), distinguishing between "below mistakes" and "above mistakes" (above mistake: the lower limit of the CI is above the population mean; below mistake: the upper limit of the CI is below the population mean).</p>
<p>The figure has 4 panels:
Top left: When the population follows a chi square distribution, and confidence interval using the normal distribution.
Top right: When the population follows a chi square distribution, and confidence interval using the t distribution.
Bottom left: When the population is normally distributed, and confidence interval using the normal distribution.
Bottom right: When the population is normally distributed, and confidence interval using the t distribution.</p>
<p>My question is: why does there seem to be some sort of bias, with below mistakes consistently higher than above mistakes when the population follows a chi-squared distribution, and above mistakes consistently higher than below mistakes when the population is normally distributed?</p>
<p>Is it really like that? I mean, is this a standard result, or is it just an error in my code? (I swear I've checked my code many, many times, and I think everything is OK.) If there is no error in my code, doesn't it mean that we could improve the confidence intervals' performance by trying to correct for such 'bias'?</p>
<p>I think I've been doing my homework, and I cannot reconcile what I find in my exercise using R with the theory. Checking my introductory statistics book, it doesn't say anything about non-centered confidence intervals. In fact, it says that the mean estimator is consistent and thus gets closer to the true population mean as the sample size increases, and given that the confidence interval has the mean estimator at its center, it seems to me that the above and below mistakes should be relatively similar.</p>
<p>I also searched here in CrossValidated and found this VERY interesting post-answers (<a href="https://stats.stackexchange.com/questions/6652/what-precisely-is-a-confidence-interval">What, precisely, is a confidence interval?</a>), but nobody there discussed anything about whether the confidence interval is indeed centered or not.</p>
<p>Any thought is more than welcome.</p>
<p>Here's my code:</p>
<p>There are three core functions as follows. The first two (CIs_normal_dist, CIs_t_dist) calculate "by hand" the confidence interval using the normal approximation and the t distribution respectively. Note that both functions assume the population variance is unknown and thus, estimate the standard error using the sample values (as discussed here <a href="https://stats.stackexchange.com/questions/44661/confidence-intervals-for-mean-when-variance-is-unknown">Confidence intervals for mean, when variance is unknown</a> and here <a href="http://www.r-tutor.com/elementary-statistics/interval-estimation/interval-estimate-population-mean-unknown-variance" rel="nofollow noreferrer">http://www.r-tutor.com/elementary-statistics/interval-estimation/interval-estimate-population-mean-unknown-variance</a>). The third function produces sets of CIs and calculates how many mistakes (population mean not in the CI) there are in a set, and repeat that many times to calculate the average number of mistakes.</p>
<pre><code># BASED ON:
# statsTeachR module: Confidence Intervals around a Mean
# Author: Eric A. Cohen
# Following statsTeachR module, this function takes many (num_CIs) samples
# from the data and calculates the sample mean and the confidence interval
# around it, using the normal approximation.
CIs_normal_dist <- function(data, num_CIs=100, alpha=0.1, sample_size=40,
replace_sampling=FALSE){
CIs <- matrix(data=NA, nrow=num_CIs, ncol=3)
for (i in 1:num_CIs) {
sample_i <- sample(x=data, size=sample_size, replace=replace_sampling)
CIs[i, 1] <- mean(sample_i)
# Calculate the sample standard deviations
sample_sd <- sd(sample_i)
sample_mean_sd <- sample_sd/sqrt(sample_size)
# Use the normal distribution to calculate the confidence interval:
delta <- qnorm(p=1-alpha/2, mean=0, sd=1) * sample_mean_sd
CIs[i, 2] <- CIs[i, 1] - delta
CIs[i, 3] <- CIs[i, 1] + delta
#just to check that the CI generated by our function
#is the same as the z.test function in the BSDA package
#print(c(CIs[i, 2], CIs[i, 3]))
#if(sample_size>3){
#print(z.test(x=sample_i, conf.level=1-alpha, sigma.x=sample_sd)$conf.int)
#}
}
CIs
}
# Following statsTeachR module, this function takes many (num_CIs) samples
# from the data and calculates the sample mean and the confidence interval
# around it, using t distribution
CIs_t_dist <- function(data, num_CIs=100, alpha=0.1, sample_size=40,
replace_sampling=FALSE){
CIs <- matrix(data=NA, nrow=num_CIs, ncol=3)
for (i in 1:num_CIs) {
sample_i <- sample(x=data, size=sample_size, replace=replace_sampling)
CIs[i, 1] <- mean(sample_i)
# Calculate the sample standard deviations
sample_sd <- sd(sample_i)
sample_mean_sd <- sample_sd/sqrt(sample_size)
# Use the t distribution to calculate the confidence interval:
delta <- qt(p=1-alpha/2, df=sample_size-1) * sample_mean_sd
CIs[i, 2] <- CIs[i, 1] - delta
CIs[i, 3] <- CIs[i, 1] + delta
#just to check that the CI generated by our function
#is the same as the built-in t.test function
#print(c(CIs[i, 2], CIs[i, 3]))
#print(t.test(x=sample_i, conf.level=1-alpha)$conf.int)
}
CIs
}
# Use the previous function (CIs_normal_dist) to produce sets of CIs
# Calculate how many mistakes (population mean not in the CI) there are
# in a set of num_CIs CIs (distinguish mistakes above and below mistakes,
# above: the lower limit of the CI is above the population mean
# below: the upper limit of the CI is below the population mean
# Do that num_CI_sets times, for each sample size between 1 and max_sample_size
# And take the average number of mistakes for each sample size
# The function returns a data frame with 4 columns
# mean_mistakes[,1] sample size
# mean_mistakes[,2] average number of mistakes above
# mean_mistakes[,3] average number of mistakes below
# mean_mistakes[,4] average number of total mistakes
# (averages of mistakes in num_CI_sets CIs)
mistakes_mean <- function(data, population_mean,
max_sample_size = 2, num_CI_sets = 100,
num_CIs = 100, alpha = 0.1, replace_sampling = FALSE,
FUN = CIs_t_dist){
mean_mistakes <- matrix(data=NA, nrow=max_sample_size-1, ncol=4)
for(sample_size in seq(from=2, to=max_sample_size, by=1)){
mistakes <- matrix(data=NA, nrow=num_CI_sets, ncol=4)
for(i in 1:num_CI_sets){
CIs <- FUN(data = data, num_CIs = num_CIs, alpha = alpha,
               sample_size = sample_size, replace_sampling = replace_sampling)
mistakes[i,1] <- sample_size
mistakes[i,2] <- sum(CIs[,2]>population_mean) #above mistake
mistakes[i,3] <- sum(CIs[,3]<population_mean) #below mistake
}
mistakes[,4] <- mistakes[,2]+mistakes[,3]
mean_mistakes[sample_size-1,1] <- sample_size
mean_mistakes[sample_size-1,2] <- mean(mistakes[,2])
mean_mistakes[sample_size-1,3] <- mean(mistakes[,3])
mean_mistakes[sample_size-1,4] <- mean(mistakes[,4])
}
df <- as.data.frame(mean_mistakes)
names(df) <- c("Sample size","Above mistakes", "Below mistakes", "Total mistakes")
df
}
</code></pre>
<p>Now, I just use those functions for the exercise and summarize the results in a plot:</p>
<pre><code>rm(list = ls(all = TRUE))
source("CE_bias_func.R") #I saved the functions above in this file
library(BSDA)
library(Hmisc)
# set the parameters for the exercise
pop_mean <- 2
alpha <- 0.05
N <- 1000
# Let's generate observations (our finite universe) chisq distributed
universe <- rchisq(n=N, df=pop_mean)
hist(universe)
mm1 <- mistakes_mean(data = universe, population_mean = pop_mean,
max_sample_size = N,
num_CI_sets = 100, num_CIs = 100,
alpha = 0.05, replace_sampling = FALSE,
FUN = CIs_t_dist)
mm2 <- mistakes_mean(data = universe, population_mean = pop_mean,
max_sample_size = N,
num_CI_sets = 100, num_CIs = 100,
alpha = 0.05, replace_sampling = FALSE,
FUN = CIs_normal_dist)
# generate observations (our finite universe), now, normally distributed
pop_mean <- -15
universe <- rnorm(n=N, mean=pop_mean, sd=10)
mm3 <- mistakes_mean(data = universe, population_mean = pop_mean,
max_sample_size = N,
num_CI_sets = 100, num_CIs = 100,
alpha = 0.05, replace_sampling = FALSE,
FUN = CIs_t_dist)
mm4 <- mistakes_mean(data = universe, population_mean = pop_mean,
max_sample_size = N,
num_CI_sets = 100, num_CIs = 100,
alpha = 0.05, replace_sampling = FALSE,
FUN = CIs_normal_dist)
# Describe each simulation
mm1$"Description" <- "Population: chisq dist, Confidence Interval: t dist"
mm2$"Description" <- "Population: chisq dist, Confidence Interval: normal dist"
mm3$"Description" <- "Population: normal dist, Confidence Interval: t dist"
mm4$"Description" <- "Population: normal dist, Confidence Interval: normal dist"
#
mm <- rbind(mm1, mm2, mm3, mm4)
mm$"Total mistakes" <- NULL
library(ggplot2) # to plot
library(reshape2) # to reshape the data as ggplot2 like
dfm <- melt(mm, id.vars=c("Sample size", "Description"),
measure.vars=c("Below mistakes", "Above mistakes"))
names(dfm) <- c("Sample_size", "Description", "variable", "value")
gp <- ggplot(data=dfm, aes(x=Sample_size, y=value, color=variable))
gp <- gp + geom_point()
gp <- gp + labs(x="Sample size", y="Avg. # of mistakes in 100 trials of 100 CIs",
            title="Average number of mistakes in 100 trials of 100 confidence intervals each")
gp <- gp + theme(legend.title=element_blank()) #turn off legend title
gp <- gp + theme(legend.key=element_rect(fill=NA)) #get rid of the legend's boxes
gp <- gp + theme(legend.position="top")
gp <- gp + facet_wrap(~Description, nrow=2, ncol=2)
gp
</code></pre>
<p><strong>UPDATE:</strong> After very useful comments from @swmo and @heropup I re-run the exercise, now taking samples from the whole distribution instead of samples from a finite population, so as to have truly independent samples.</p>
<p>And this is the result: pretty much what you would expect in the first place. i) no considerable differences in above and below mistakes and ii) total number of CIs that do not include the true parameter (above+below mistakes) close to 5%.</p>
<p><img src="https://i.sstatic.net/UdnlC.png" alt="updates exercise"></p>
<p>Here's my code (for now, I only did it for a normal dist and using a t.test, but very easily could be adapted to cover the 4 cases in my original post):</p>
<pre><code># A function that takes 100 samples from the distribution,
# given the sample size (n) received as argument
# then calculate the CI for each of them
# and calculate the number of mistakes (above and below) for each of them
# It returns a data.frame with one row and three variables for sample size,
# above mistakes and below mistakes
trial <- function(n){
# Take 100 samples, of sample size n, from the whole distribution normal dist
# and not from a finite population
samples <- lapply(rep(n, 100), FUN=rnorm, mean=2, sd=1)
# For each sample, calculate the confidence interval
CIs <- lapply(samples, FUN = function(x) t.test(x=x, conf.level=0.95)$conf.int)
# For each confidence interval, determine above and below mistakes
above_mistakes <- sapply(CIs, FUN = function(x) if (x[1] > 2) 1 else 0)
below_mistakes <- sapply(CIs, FUN = function(x) if (x[2] < 2) 1 else 0)
data.frame(sample_size = n,
above_mistakes = sum(above_mistakes),
below_mistakes = sum(below_mistakes))
}
# Now create a function that replicates the trial 100 times
# for a given sample size (n) and aggregates the results (rbind) in a data.frame
trialx100 <- function(n) do.call("rbind", replicate(trial(n), n=100, simplify=FALSE))
# Finally, for sample sizes between 10 and 1000, use the trailx100 function.
sample_sizes <- seq(from = 10, to = 1000, by = 10)
mistakes_all <- do.call("rbind", sapply(sample_sizes, FUN = trialx100, simplify=FALSE))
# And take the mean of the mistakes
mistakes_agg <- aggregate(mistakes_all, by = list(mistakes_all$sample_size), FUN = mean)
library(ggplot2) # to plot
library(reshape2) # to reshape the data as ggplot2 like
# Melt the data as ggplot fancy
dfm <- melt(mistakes_agg, id.vars=c("sample_size"),
measure.vars=c("above_mistakes", "below_mistakes"))
gp <- ggplot(data=dfm, aes(x=sample_size, y=value, color=variable))
gp <- gp + geom_line(fill="black")
gp <- gp + labs(x="Sample size", y="Avg. # of mistakes in 100 trials of 100 CIs",
            title="Average number of mistakes in 100 trials of 100 confidence intervals each")
gp <- gp + theme(legend.title=element_blank()) #turn off legend title
gp <- gp + theme(legend.position="bottom")
gp
</code></pre>
|
<p>First of all, I agree with the comments left by heropup. I'll add some details.</p>
<p>The reason why your simulation breaks down may be a little subtle; at least, I spent some time reading your code to find the source of the problem. Notice that you only simulate once for each of the cases. Your CI functions then resample this initial data set, which clearly introduces a lot of dependence between all of the samples. For instance, if you draw a sample of 1000 from the original data set of size 1000, there is only one way to do this. If you draw a sample of 999, an overwhelming majority of the data set will still be shared between resamples. You'll need to do independent sampling; otherwise, the 100 samples are essentially the same once you let $n$ get large.</p>
<p>Turning to your question: a confidence interval like the ones above is based on a distributional assumption, for instance that your observations are normally distributed. If that is the case, the confidence interval will be 'centered' in the sense you describe, since you construct it symmetrically. This is evident from the symmetry of the distribution and of the procedure for constructing the interval.
In the above, you also calculate a confidence interval when the distributional assumption you make is not correct. Then a confidence interval need not be centered, even if the assumed distribution is symmetric. This can be seen by simulating observations from a chi-squared distribution and calculating confidence intervals based on a normal distribution.
However, using a central limit theorem, we can argue that the mean of the chi-squared observations will be approximately normally distributed for large enough sample sizes.</p>
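<p>The asymmetry is easy to reproduce with truly independent samples (a small NumPy simulation; all numbers are illustrative):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, mu = 10, 40_000, 2.0                 # chi-square(df=2) has mean 2
samples = rng.chisquare(df=2, size=(reps, n))
m = samples.mean(axis=1)
se = samples.std(axis=1, ddof=1) / np.sqrt(n)
above = np.mean(m - 1.96 * se > mu)           # entire CI above the true mean
below = np.mean(m + 1.96 * se < mu)           # entire CI below the true mean
```

<p>For a right-skewed population, samples that happen to miss the rare large values have both a low mean and a small standard deviation, so the interval sits entirely below the true mean far more often than above it.</p>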
<p>Finally, I just want to note that a confidence interval (or more generally, a confidence set) is basically and loosely speaking just some subset of a parameter set such that when you calculate this set a lot of times (hypothetically) it will contain the true parameter value for example 95% of the times. There's no claim of this set being 'centered' or symmetric around a parameter estimate. It can be chosen to have all sorts of strange forms. This is just not very intuitive and most of the time not very helpful.</p>
| 644
|
confidence intervals
|
Confidence intervals in beta regressions
|
https://stats.stackexchange.com/questions/656994/confidence-intervals-in-beta-regressions
|
<p>I am using a mixed-effects beta regression model in my study because my values are bounded between 0 and 1. When I run the same analysis using a linear mixed-effects model, I obtain similar results (predictors have the same directions and are significant). In the linear model, the confidence intervals remain within the bounds of the data (i.e., no negative values or values greater than 1), making them straightforward to interpret. Although the effects are significant, the confidence intervals suggest a small effect size, which I would like to report.</p>
<p>However, from what I’ve read, beta regression is more appropriate for my data. I can report the confidence intervals from the beta regression model, but is there a way to transform them to make interpretation easier? Additionally, is there a metric in beta regression that is similar to an effect size?</p>
|
<p>Beta regression models typically use a logit link, which means that the effects (parameter magnitudes or CIs) can't easily be translated to the probability scale without picking a baseline value (unlike identity-link models or log-link models such as Poisson regression). That's why epidemiologists spend so much time learning about how the logit link works (e.g. Breen et al 2018, ref below). You can report the coefficients and their CIs, or you can report the scaled coefficients (Schielzeth 2010), which at least makes the coefficients unitless. For interpretation, see also <a href="https://stats.stackexchange.com/questions/351114/rule-of-thumb-for-log-odds-ratios-effect-size-interpretation">here</a>, or <a href="https://www.google.ca/books/edition/Data_Analysis_Using_Regression_and_Multi/lV3DIdV0F9AC?hl=en&gbpv=1" rel="noreferrer">section 5.2 of Gelman and Hill <em>Data Analysis Using Regression and Multilevel/Hierarchical Models</em></a>. It may be simplest to compute <em>marginal effects</em>, as in the <code>marginaleffects</code> package: as discussed in the comments, <code>avg_slopes()</code> would give the average derivatives of the conditional expectation function with respect to the parameters, and their confidence intervals; if you want to scale the predictors to get a unitless result (i.e., expected average change in probability per 1-SD change in the predictor), you can use <code>avg_comparisons(model, variables = list("x" = "sd"))</code></p>
<p><a href="https://x.com/stephenjwild/status/1774434308540760220" rel="noreferrer">https://x.com/stephenjwild/status/1774434308540760220</a>
<a href="https://i.sstatic.net/FQ2q7XVo.jpg" rel="noreferrer"><img src="https://i.sstatic.net/FQ2q7XVo.jpg" alt="American Chopper meme. frame 1: "You should report the marginal effects". frame 2: "I reported the coefficient values. What more do you want?" frame 3: "People have a hard time interpreting those values." frame 4: "Minus means less and plus means more. That's good enough!" frame 5: "Interpreting effects on the scale of interest is natural and beneficial"" /></a></p>
<hr />
<p>Breen, Richard, Kristian Bernt Karlson, and Anders Holm. “Interpreting and Understanding Logits, Probits, and Other Nonlinear Probability Models.” Annual Review of Sociology 44, no. Volume 44, 2018 (July 30, 2018): 39–54. <a href="https://doi.org/10.1146/annurev-soc-073117-041429" rel="noreferrer">https://doi.org/10.1146/annurev-soc-073117-041429</a>.</p>
<p>Schielzeth, Holger. “Simple Means to Improve the Interpretability of Regression Coefficients: Interpretation of Regression Coefficients.” Methods in Ecology and Evolution 1, no. 2 (February 10, 2010): 103–13. <a href="https://doi.org/10.1111/j.2041-210X.2010.00012.x" rel="noreferrer">https://doi.org/10.1111/j.2041-210X.2010.00012.x</a>.</p>
| 645
|
confidence intervals
|
Standard error and confidence intervals
|
https://stats.stackexchange.com/questions/647158/standard-error-and-confidence-intervals
|
<p>About theoretical concepts... We use confidence intervals to make inferences, extrapolating results from a study to a similar sample. Is this true?</p>
<p>What is the difference between a confidence interval and dividing the result by <span class="math-container">$\sqrt{n}$</span>? How does the standard error arise?</p>
|
<p>We use confidence intervals to estimate where the true parameter behind a point estimate might be. Technically, the confidence interval has a certain coverage, e.g. 95% (think of it as: asymptotically, in 95% of the cases in which you compute a confidence interval, the true value lies in the estimated interval).</p>
<p>The standard error (with the division by sqrt(N)) arises from looking at the sampling distribution of the mean and computing the standard deviation of this mean estimator.</p>
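<p>A quick simulation of that idea (illustrative numbers): draw many samples of size <code>n</code> and compare the spread of the resulting sample means with the formula.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, sigma = 50, 20_000, 3.0
# sampling distribution of the mean: many samples of size n
means = rng.normal(loc=10.0, scale=sigma, size=(reps, n)).mean(axis=1)
empirical_se = means.std(ddof=1)
theoretical_se = sigma / np.sqrt(n)  # the standard-error formula
```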
| 646
|
confidence intervals
|
Two questions about confidence intervals
|
https://stats.stackexchange.com/questions/368347/two-questions-about-confidence-intervals
|
<p>I am learning about confidence intervals, but don't think I understand them very welll.</p>
<p>Assume <span class="math-container">$$(\mu - \hat{\mu}) \sqrt{\frac{n}{\sigma(\mu)}}$$</span>
is asymptotically standard normal.
So I guess we can say that a 95 % CI is <span class="math-container">$\hat{\mu} \pm 1.96 \sqrt{\sigma(\mu)/n}$</span>.</p>
<p>I am a bit confused with respect to this, since <span class="math-container">$\sigma$</span> is a function of <span class="math-container">$\mu$</span>.</p>
<ol>
<li><p>Assume <span class="math-container">$\mu$</span> is unknown. How can we form a confidence interval given that <span class="math-container">$\sigma$</span> depends on <span class="math-container">$\mu$</span>? I know we can just plug in <span class="math-container">$\hat{\mu}$</span> and pray it's close, but what if we don't want to do that? What are the alternatives, if any? What do people do in practice?</p></li>
<li><p>Assume <span class="math-container">$\mu$</span> is known. What is the interpretation of <span class="math-container">$\hat{\mu} \pm 1.96 \sqrt{\sigma(\mu)/n}$</span>? I mean, if I know what <span class="math-container">$\mu$</span> is, does it still make sense to talk about confidence intervals around <span class="math-container">$\mu$</span>? Isn't a "100 %" confidence interval then <span class="math-container">$[\mu, \mu]$</span>, if that makes sense?</p></li>
</ol>
|
<p>The whole game is about learning something about <span class="math-container">$\mu$</span>, so you're right in your first question: the starting point is that <span class="math-container">$\mu$</span> is unknown and you want to use data and an estimator to learn about it. Using a given dataset, you can form an estimator <span class="math-container">$\hat \mu$</span> to estimate the value of <span class="math-container">$\mu$</span>. </p>
<p>Formally, we usually think about <span class="math-container">$\mu$</span> as being a parameter (for instance, the expectation of a random variable, or the coefficient in a linear regression), and the estimator <span class="math-container">$\hat \mu$</span> as being a random variable, which depends on the realisation of the data. For a given dataset, you will obtain a given estimate. </p>
<p>Let us assume (as you do) that <span class="math-container">$\hat \mu$</span> is normally distributed. Its expectation is equal to <span class="math-container">$\mu$</span> (which happens when the estimator is consistent). Expectation, in this case, means that if you were to observe not one dataset, but a large number of datasets, the average value of the <span class="math-container">$\hat \mu$</span> over these datasets would be equal to <span class="math-container">$\mu$</span>. </p>
<p>The variance of <span class="math-container">$\hat \mu$</span> is a function of two quantities: <span class="math-container">$\sigma$</span> the asymptotic/underlying variance, and <span class="math-container">$n$</span> the number of observations. <span class="math-container">$\sigma$</span> essentially depends on the data generating process of the random variable (for instance, the variance of the underlying random variable). Variance of <span class="math-container">$\hat \mu$</span> means: how would my <span class="math-container">$\hat \mu$</span> vary if I computed it many times, on many datasets of size <span class="math-container">$n$</span>? If <span class="math-container">$n$</span> was very large, all the <span class="math-container">$\hat \mu$</span> would be pretty close to each other (and pretty close to <span class="math-container">$\mu$</span>).</p>
<p>Now that we have all this, let's answer your question. You're right that the formula of the variance (and CI) of <span class="math-container">$\hat \mu$</span> depends on <span class="math-container">$\sigma$</span> and we don't observe <span class="math-container">$\sigma$</span>. What is usually done is to plug an estimator of <span class="math-container">$\sigma$</span> instead. In many cases, just like you can form an estimator of <span class="math-container">$\mu$</span>, you can form an estimator of <span class="math-container">$\sigma$</span>, that you call <span class="math-container">$\hat \sigma$</span>, which can be computed as a function of the data. </p>
<p>For instance, the canonical problem is that you have a random variable <span class="math-container">$Y_i$</span> distributed as <span class="math-container">$\mathcal N(\mu, \sigma^2)$</span>. In this case, we can take <span class="math-container">$\hat \mu$</span> to be the average of the observed <span class="math-container">$Y_i$</span>. A consistent (though not exactly unbiased) estimator of the variance <span class="math-container">$\sigma^2$</span> is then:
<span class="math-container">$$
\hat \sigma = \frac{\sum_i Y_i^2}{n} - \hat \mu^2
$$</span>
Note that <span class="math-container">$\hat \sigma$</span> depends on <span class="math-container">$n$</span> and on the observations of the dataset <span class="math-container">$\{Y_i\}$</span> and on the estimator <span class="math-container">$\hat \mu$</span> (which also depends on <span class="math-container">$\{Y_i\}$</span> and <span class="math-container">$n$</span>), but not on <span class="math-container">$\mu$</span>. <span class="math-container">$\hat \sigma$</span> is the quantity you will plug into your confidence interval. </p>
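<p>A small illustration of the plug-in step (all numbers are made up; this uses the usual sample standard deviation rather than the exact formula above, which only changes a factor of <span class="math-container">$n/(n-1)$</span>):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=5.0, scale=2.0, size=1_000)
mu_hat = y.mean()
sigma_hat = y.std(ddof=1)            # plug-in estimate of the unknown sigma
half = 1.96 * sigma_hat / np.sqrt(len(y))
ci = (mu_hat - half, mu_hat + half)  # feasible CI: no unknown quantities left
```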
| 647
|
confidence intervals
|
Understanding lincom confidence intervals (STATA)
|
https://stats.stackexchange.com/questions/231067/understanding-lincom-confidence-intervals-stata
|
<p>I'm trying to understand confidence intervals for linear combinations of parameters (lincom command in STATA). Let's say I'm interested in whether smoking is associated with low birth weight (using the lbw dataset, see example in help logit).</p>
<pre><code>webuse lbw
logit low age lwt i.race smoke ptl ht ui
age -.0271003 .0364504 -0.74 0.457 -.0985418 .0443412
lwt -.0151508 .0069259 -2.19 0.029 -.0287253 -.0015763
race
black 1.262647 .5264101 2.40 0.016 .2309024 2.294392
other .8620792 .4391532 1.96 0.050 .0013548 1.722804
smoke .9233448 .4008266 2.30 0.021 .137739 1.708951
ptl .5418366 .346249 1.56 0.118 -.136799 1.220472
ht 1.832518 .6916292 2.65 0.008 .4769494 3.188086
ui .7585135 .4593768 1.65 0.099 -.1418484 1.658875
_cons .4612239 1.20459 0.38 0.702 -1.899729 2.822176
</code></pre>
<p>As I understand, the logit for a non-smoker is .46, which has a 95% confidence interval of -1.9 ; 2.82 (_cons). The additional effect for being a smoker is .92, which has a 95% confidence interval of .14 ; 1.71 (smoke), and this effect is significant at the 95% level (p = .021). </p>
<p>I want to calculate the logit and confidence interval of low birth weight for: 1) non-smokers 2) smokers, so I turn to the lincom command.</p>
<pre><code>lincom _cons (redundant as this is the _cons coefficient)
( 1) [low]_cons = 0
.4612239 1.20459 0.38 0.702 -1.899729 2.822176
lincom _cons+smoke
[low]smoke + [low]_cons = 0
1.384569 1.155967 1.20 0.231 -.8810857 3.650223
</code></pre>
<p>The 95% confidence interval for smokers overlaps with that of non-smokers, even though the 95% confidence interval for the additional effect of smoking excludes 0. I'm finding it difficult to understand why this is; surely the combined estimate for smokers should not overlap with the constant?</p>
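<p>One way to see what is going on: the standard error lincom reports for the sum uses Var(a+b) = Var(a) + Var(b) + 2·Cov(a,b), not the two individual intervals. A sketch (in Python, using the standard errors printed above) that backs out the implied covariance:</p>

```python
se_cons = 1.20459     # SE of _cons from the logit output
se_smoke = 0.4008266  # SE of smoke
se_sum = 1.155967     # SE reported by lincom for _cons + smoke

# Var(a + b) = Var(a) + Var(b) + 2 Cov(a, b)  =>  solve for Cov(a, b)
cov = (se_sum ** 2 - se_cons ** 2 - se_smoke ** 2) / 2
# cov works out negative: _cons and smoke are negatively correlated
```

<p>Because the two quantities share the (imprecisely estimated, negatively correlated) constant, their intervals can overlap even though smoke alone excludes 0; overlapping intervals for two correlated quantities say little about the significance of their difference.</p>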
| 648
|
|
confidence intervals
|
Calculating group mean and confidence interval from single-subject means and confidence intervals
|
https://stats.stackexchange.com/questions/205359/calculating-group-mean-and-confidence-interval-from-single-subject-means-and-con
|
<p>I have a sample of 20 subjects. I have two continuous variables, <strong><em>X</em></strong> and <strong><em>Y</em></strong> which are linearly related. I use linear regression to estimate the regression coefficient relating <strong><em>X</em></strong> and <strong><em>Y</em></strong>.</p>
<p>For each subject, I estimate a regression coefficient relating <strong><em>X</em></strong> to <strong><em>Y</em></strong>, and corresponding confidence intervals. So, I have a set of 20 regression coefficients and confidence intervals. </p>
<p>From this, how can I calculate a regression coefficient and confidence intervals for the entire sample?</p>
<p>Any suggestions much appreciated!</p>
|
<p>In cases like yours, two extreme approaches can be taken: (a) calculate <em>independent</em> models for each of the individuals, or (b) calculate an <em>aggregate</em> estimate for the whole sample, ignoring the individual variability. Unfortunately, both approaches can give you misleading results. When you calculate independent models, you ignore the fact that they come from a single population (so they may not be independent). When you calculate an aggregated model, you ignore the individual differences. The patterns that can be observed on the individual level <a href="https://stats.stackexchange.com/questions/125683/pearson-correlation-has-quizzy-results/125686#125686">are <em>not the same</em></a> as the ones that can be seen on the global level; these are sometimes called the <a href="http://jech.bmj.com/content/56/8/588.full" rel="nofollow noreferrer">atomistic and ecological fallacies</a>.</p>
<p>If there is some kind of hierarchy or nesting, e.g. students nested within classes, classes within schools (see here for <a href="https://stats.stackexchange.com/questions/97115/should-i-bootstrap-at-the-cluster-level-or-the-individual-level/186079#186079">examples in bootstrap context</a>), there is a much wiser approach: <em>hierarchical/multilevel regression.</em> Such models simultaneously estimate effects at different levels (e.g. students, classes, schools). You can find an example of such a model <a href="https://stats.stackexchange.com/questions/134159/analysis-of-many-companies-over-a-period/134182#134182">described here</a>.</p>
<p>You can also check some of the multiple good and accessible handbooks on such models (check references below). There is also a friendly introductory paper by Bolker et al (2009) on (generalized) linear mixed models. Check also here to learn about the differences between <a href="https://stats.stackexchange.com/questions/120964/fixed-effect-vs-random-effect-when-all-possibilities-are-included-in-a-mixed-eff/137837#137837">random and fixed effects</a> as knowing it may be helpful in the future.</p>
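<p>As a small illustration of why weighting by precision matters when combining per-subject estimates (this is simple inverse-variance pooling, a rougher cousin of the multilevel approach recommended above, not a substitute for it; all numbers are hypothetical):</p>

```python
import math

# Hypothetical per-subject slope estimates and their standard errors
betas = [0.8, 1.1, 0.9, 1.3, 1.0]
ses = [0.2, 0.3, 0.25, 0.4, 0.2]

# Inverse-variance weights: more precise estimates count for more
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Normal-approximation 95% CI for the pooled coefficient
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

<p>A multilevel model goes further by also estimating the between-subject variance and partially pooling the individual estimates toward the group mean.</p>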
<hr>
<p>Bolker, B. M., Brooks, M. E., Clark, C. J., Geange, S. W., Poulsen, J. R., Stevens, M. H. H., & White, J. S. S. (2009). <a href="http://avesbiodiv.mncn.csic.es/estadistica/curso2011/regm26.pdf" rel="nofollow noreferrer">Generalized linear mixed models: a practical guide for ecology and evolution.</a> Trends in ecology & evolution, 24(3), 127-135.</p>
<p>Snijders, T.A.B. & Bosker, R.J. (2012). Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. London: Sage Publishers.</p>
<p>Hox, J. (2010). Multilevel Analysis: Techniques and Applications. New York: Routledge.</p>
<p>Gelman, A. & Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.</p>
<p>Pinheiro, J., & Bates, D. (2006). Mixed-effects models in S and S-PLUS. Springer.</p>
| 649
|
confidence intervals
|
Regression: Mean Response Confidence Interval vs Confidence Intervals of Each Predictor
|
https://stats.stackexchange.com/questions/83951/regression-mean-response-confidence-interval-vs-confidence-intervals-of-each-pr
|
<p>I have a regression of costs on volume and some interactions (costs ~ volume + volume:year + year)</p>
<p>Oftentimes when I do a regression, I expect a negatively sloped relationship, and the model validates that. I can add up the volume coefficient and the associated interaction term for a given point and get a negative value (as expected). Additionally, I calculate the mean response confidence interval by applying the covariance matrix to each exogenous point. The mean response confidence interval is acceptable.</p>
<p>However, if I look at the High end of the 95% confidence intervals of each parameter and add them up, the result is usually positive.</p>
<p>I'm wondering conceptually what's the difference between the mean response confidence interval and the value you get by calculating the endogenous variable at each coefficient's low or high within its respective confidence interval. I imagine it has something to do with independence: the chance that the coefficient on volume is at the high point on its confidence interval AND the interaction coefficient is at its high is .025^2. Is this right? How should I think about this?</p>
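<p>The contrast described above can be made concrete with a small sketch (all numbers hypothetical): the mean-response interval uses Var(c'b) = c'Σc, which includes the covariance between coefficients, while summing per-coefficient interval endpoints ignores that covariance and assumes both extremes occur at once.</p>

```python
import math

# Hypothetical estimates for volume and volume:year, negatively covarying
cov = [[1.0, -0.6],
       [-0.6, 0.5]]
c = [1.0, 1.0]  # combined effect when the interaction applies

# Correct SE of the linear combination: sqrt(c' Sigma c)
var_comb = sum(c[i] * cov[i][j] * c[j] for i in range(2) for j in range(2))
correct_hw = 1.96 * math.sqrt(var_comb)

# Naive approach: add the two individual 95% half-widths
naive_hw = 1.96 * math.sqrt(cov[0][0]) + 1.96 * math.sqrt(cov[1][1])
# naive_hw is much wider: it drops the negative covariance and treats
# both worst cases as happening simultaneously
```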
<p>Thanks.</p>
| 650
|
|
confidence intervals
|
Confidence Intervals Intuition
|
https://stats.stackexchange.com/questions/73875/confidence-intervals-intuition
|
<p>I am new to statistics and have run into some trouble understanding computing confidence intervals and am seeking some help. I will outline the motivating example in my textbook and hopefully someone can offer some guidance. </p>
<p>Example </p>
<p>There is a population of mean values and your goal is to figure out the true mean (as best you can). In order to accomplish this, a number of samples are taken, each of which has a mean value. </p>
<p>Next, because we know by the central limit theorem that as the number of samples increase, the sampling distribution will be normally distributed, we use the equation $z = \frac{X - \bar{X}}{s}$ (noting that in this case s = standard error) to compute a lower and upper bound taking each sample mean as the mean for the z-score equation and z-scores of -1.96 and +1.96, for example, to compute a 95% confidence interval. </p>
<p>I’ve included a graph from my textbook in attempt to add clarity.</p>
<p><img src="https://i.sstatic.net/VrE41.png" alt="enter image description here"></p>
<p>So I do not understand how it is you can use each sample mean as the mean value in our z equation to compute intervals. We know that the sample distribution is normally distributed so isn’t it the case that only the mean of all the samples can be used? How can we compute an interval around each mean value that contributes to the sampling distribution?</p>
<p>Any help with this would be much appreciated </p>
<p>Note: I'm reading "Discovering Statistics Using IBM SPSS Statistics 3rd Edition" by Andy Field and this example is from pg 43-45 </p>
| 651
|
|
confidence intervals
|
Difference between confidence intervals and prediction intervals and data set
|
https://stats.stackexchange.com/questions/336736/difference-between-confidence-intervals-and-prediction-intervals-and-data-set
|
<p>I read <a href="https://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals">Difference between confidence intervals and prediction intervals</a> and <a href="https://www.graphpad.com/support/faq/the-distinction-between-confidence-intervals-prediction-intervals-and-tolerance-intervals/" rel="nofollow noreferrer">https://www.graphpad.com/support/faq/the-distinction-between-confidence-intervals-prediction-intervals-and-tolerance-intervals/</a> </p>
<p>However, I still had some unsolved questions:</p>
<p>Confidence intervals tell you about how well you have determined the mean and Prediction intervals tell you where you can expect to see the next data point sampled.</p>
<p>However, the textbook I read (Applied Linear Statistical Models) seemed to indicate that if I were working on the same set of data, I should use a confidence interval, and whenever I wanted to use the data to "predict" a new set of data, I should use a prediction interval.</p>
<p>My question was that:</p>
<p>Does the choice between a confidence interval and a prediction interval depend on whether it's the same set of data or not?
If so, what's the difference, and what cautions apply?</p>
| 652
|
|
confidence intervals
|
confidence intervals for dependent observations
|
https://stats.stackexchange.com/questions/97638/confidence-intervals-for-dependent-observations
|
<p><a href="https://stats.stackexchange.com/questions/61266/why-is-dependence-a-problem">The answer to this question</a> discusses problems associated with calculating P-values for dependent observations. Let's say you have observations from two different groups that are dependent. You consider carrying out a t-test to compare means of the two groups. However, aware that the calculated P-values in a t-test will likely be inaccurate, you calculate means of each group and their 95% confidence intervals. Does calculating confidence intervals avoid problems associated with calculating P-values when observations are dependent?</p>
|
<p>No. Since confidence intervals convey the same inferential information as $p$-values ($\mu_0 \in \mathrm{CI} \iff p_{H_0:\mu_0=\mu}\geq \alpha$), they also share the same difficulties dealing with dependence.</p>
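<p>This duality is easy to verify numerically: placing the null value exactly on a normal-theory 95% CI endpoint gives a two-sided p-value of α (a sketch with illustrative numbers):</p>

```python
import math
from statistics import NormalDist

xbar, sigma, n = 10.0, 2.0, 25
se = sigma / math.sqrt(n)

# Take the upper 95% CI endpoint as the null value
mu0 = xbar + 1.96 * se
# Two-sided p-value for H0: mu = mu0
z = (xbar - mu0) / se
p = 2 * NormalDist().cdf(-abs(z))
# p is (essentially) 0.05: mu0 on the CI boundary <=> p = alpha
```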
| 653
|
confidence intervals
|
Calculating confidence intervals for two samples
|
https://stats.stackexchange.com/questions/86509/calculating-confidence-intervals-for-two-samples
|
<p>Let's say I have two samples and I want to calculate confidence intervals for the means of each sample.</p>
<pre><code>x = rnorm(10)
y = rnorm(10)
</code></pre>
<p>Using the t.test command I'm able to get the following output.</p>
<pre><code>> t.test(x, y)
Welch Two Sample t-test
data: x and y
t = -0.0104, df = 17.17, p-value = 0.9918
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-1.077822 1.067245
sample estimates:
mean of x mean of y
-0.2235438 -0.2182557
</code></pre>
<p>However, what if I want the 95% confidence intervals for each sample. Is that a manual calculation I have to do. I want to generate confidence intervals for each sample so that I can compare the two in regards to whether I should reject the null hypothesis.</p>
<p>Basically, I want to mimic <a href="http://www2.sas.com/proceedings/sugi22/STATS/PAPER270.PDF" rel="nofollow">this</a> in R.</p>
|
<p>Your question leaves considerable doubt about what, exactly, <em>this</em> in 'mimic this' consists of. You should be more explicit.</p>
<p>Do you want one sample confidence intervals for the means? </p>
<p>Then <code>t.test</code> can do it easily, by doing it one sample at a time.</p>
<pre><code>t.test(x)  # the 95% interval is part of the default output; change it with conf.level
t.test(y)
</code></pre>
<p>You can even extract the confidence interval part of the output and if you want, assign it to a variable:</p>
<pre><code> t.test(y)$conf.int
</code></pre>
<p>(Or do you want to know how to produce a particular display? That's easy enough, once the values are calculated.)</p>
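<p>For comparison, the interval <code>t.test</code> reports can be reproduced by hand. A sketch (in Python rather than R; 2.262 is the 97.5% quantile of the t distribution with 9 degrees of freedom, taken from tables; the data are illustrative):</p>

```python
import math
import statistics

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # illustrative sample
n = len(x)
xbar = statistics.fmean(x)
s = statistics.stdev(x)               # sample SD (n - 1 denominator)

t_crit = 2.262                        # qt(0.975, df = 9), from tables
half_width = t_crit * s / math.sqrt(n)
ci = (xbar - half_width, xbar + half_width)
```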
| 654
|
confidence intervals
|
Confidence Intervals and Hypothesis testing
|
https://stats.stackexchange.com/questions/575615/confidence-intervals-and-hyptothesis-testing
|
<p>I am currently reading up on basic statistics and I am somewhat confused about the computation/inference of confidence intervals and hypothesis tests.
As far as I have understood, there are several techniques for both confidence interval estimation and hypothesis testing:</p>
<p>confidence intervals are computed by:</p>
<ul>
<li>estimate +- margin of error <strong>or</strong></li>
<li>bootstrapping</li>
</ul>
<p>hypothesis testing is performed by:</p>
<ul>
<li>parametric tests (for nearly normal distributed data) or non-parametric test <strong>or</strong></li>
<li>permutation (often in combination with bootstrapping)</li>
</ul>
<p>When do I have to use which method? Does it depend on the sample size?</p>
<p>Thanks in advance for any help,</p>
<p>cheers,</p>
<p>Michael</p>
| 655
|
|
confidence intervals
|
Calculating confidence intervals for mode?
|
https://stats.stackexchange.com/questions/240563/calculating-confidence-intervals-for-mode
|
<p>I am looking for references about calculating confidence intervals for the mode (in general). The bootstrap may seem a natural first choice, but as discussed by Romano (1988), the standard bootstrap fails for the mode, and the paper does not provide any simple solution. Has anything changed since this paper? What is the best way to calculate confidence intervals for the mode? What is the best bootstrap-based approach? Can you provide any relevant references?</p>
<hr>
<p>Romano, J.P. (1988). <a href="http://www.ism.ac.jp/editsec/aism/pdf/040_3_0565.pdf">Bootstrapping the mode.</a> Annals of the Institute of Statistical Mathematics, 40(3), 565-586.</p>
|
<p>While it appears there hasn't been too much research into this specifically, there is a paper that did delve into this on some level. The paper <a href="https://link.springer.com/article/10.1007/PL00003988" rel="nofollow noreferrer">On bootstrapping the mode in the nonparametric regression model with random design</a> (Ziegler, 2001) suggests the use of a smoothed paired bootstrap (SPB). In this method, to quote the abstract, "bootstrap variables are generated from a smooth bivariate density based on the pairs of observations."</p>
<p>The author claims that SPB "is able to capture the correct amount of bias if the pilot estimator for <em>m</em> is over smoothed." Here, <em>m</em> is the regression function for two i.i.d. variables.</p>
<p>Good luck, and hope this gives you a start!</p>
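<p>A minimal univariate sketch of the smoothed-bootstrap recipe (an illustration of the general idea, not Ziegler's regression estimator; the data, bandwidth, and replication count are all assumptions): resample the data with replacement, add kernel noise with bandwidth h, and take the mode of a kernel density estimate each time.</p>

```python
import math
import random

random.seed(1)
data = [random.gauss(0, 1) for _ in range(100)]  # hypothetical sample
h = 0.4                                          # kernel bandwidth (assumed)

def kde_mode(sample, bandwidth, grid_size=100):
    """Locate the mode of a Gaussian kernel density estimate by grid search."""
    lo, hi = min(sample), max(sample)
    grid = [lo + (hi - lo) * i / (grid_size - 1) for i in range(grid_size)]
    def dens(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in sample)
    return max(grid, key=dens)

# Smoothed bootstrap: resample with replacement, then add kernel noise
modes = sorted(
    kde_mode([random.choice(data) + random.gauss(0, h) for _ in data], h)
    for _ in range(60)
)
# Approximate 90% percentile interval for the mode
ci = (modes[2], modes[56])
```

<p>The added noise is what distinguishes the smoothed bootstrap from the standard one that Romano showed to fail for the mode.</p>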
| 656
|
confidence intervals
|
Confidence Intervals for Non-normal data?
|
https://stats.stackexchange.com/questions/140481/confidence-intervals-for-non-normal-data
|
<p>I have a dataset where the response is the number of successes and I have two factor variables A, B (A has 6 levels and B has 4 levels) and a quantitative variable H (H is hours so it is non-negative). The number of trials is fixed for different levels of A -- A1 always has 20 trials, A2 always 12, etc). The dataset looks like the following:</p>
<pre><code>A B H Success Trials
A1 B2 0.5 5 20
A2 B3 0.75 7 12
A4 B1 3 0 15
A6 B4 2 3 4
... ... ... ... ...
</code></pre>
<p>I'm mainly interested in predicting the expected number of successes. I fit a binomial regression in R with interactions to this data in order to get the probability of success:
<code>(Success, Trials - Success) ~ A + B + H + A:H + B:H</code></p>
<p>From this GLM I can get confidence intervals for the success probability given A, B, H. Since the number of trials is fixed for different levels of A, I can also get a confidence interval on the expected number of successes. However, the specifications for this project also call for confidence intervals for the number of successes given A alone (ignoring B and H) and confidence intervals for number of successes given B alone (ignoring A and H).</p>
<p>Basically they want a confidence intervals on the number of successes for the different levels of A, ignoring other variables and then confidence intervals on number of successes for different levels of B, ignoring other variables.</p>
<p>I want to know the best way to get these confidence intervals. One approach I have tried so far is to take the vcov from my GLM and I have basically averaged out the coefficients for the levels of factor variables I'm "ignoring" and for H I'm just using the overall average number of hours.</p>
<p>Slightly more technically to get the SE for my confidence intervals, I'm making a C vector and taking <code>sqrt(C %*% vcov(glm_model) %*% t(C))</code>. Suppose my vcov matrix has rows that correspond to the following factor levels:</p>
<pre><code>(intercept)
A1
A2
A3
A4
A5
A6
B1
B2
B3
B4
H
</code></pre>
<p>The C-vector associated with the first row of my data frame above would be: <code>(1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0.5)</code></p>
<p>The approach I have tried so far if I wanted a confidence interval for A1, ignoring all levels of B and H is to use the following C-vector: <code>(1, 0, 0, 0, 0, 0, 0.25, 0.25, 0.25, 0.25, Overall Average H)</code>. This will use an average value of H and average out the effect of variable B.</p>
<p>Is there a better way to get confidence intervals than what I've done above (including non-GLM ways)? I've looked at things like Clopper-Pearson and I'm not sure if it's applicable here.</p>
<p>Edit: I guess I forgot to mention that the number of successes is not normal (not even close) so the normal mean +/- 1.96 * SE won't work. The number of successes in my actual dataset has a lot of zeroes.</p>
| 657
|
|
confidence intervals
|
Confidence intervals for mean correlation
|
https://stats.stackexchange.com/questions/497609/confidence-intervals-for-mean-correlation
|
<p>What is the correct way to define confidence intervals for the mean of multiple correlations? I understand how to calculate CIs for individual correlation coefficients, and I also understand how to calculate the mean correlation through Fisher's transformation. But what are the confidence intervals for this mean correlation value?
Surely it cannot be just ±(the z-score for the chosen CI level) multiplied by the SE of the final mean Fisher-transformed correlation, since each individual Fisher-transformed correlation is constructed from multiple data points and has its own precision and SE.</p>
|
<p>Actually, what you describe is just fine.</p>
<p>If <span class="math-container">$\bf{x}$</span> <span class="math-container">$= [x_1, x_2, \dots, x_n]$</span>, your set of data points, is approximately normally distributed, <span class="math-container">$\text{mean}(x) \pm \alpha \times\text{se}(x)$</span> produces a valid confidence interval, with <span class="math-container">$\alpha \approx 1.96$</span> for a 95% CI. As you note, since correlation coefficients aren't typically normally distributed, you can transform them using the Fisher z-transformation, calculate the mean and CI of the transformed values, and then back-transform to the original scale.</p>
<p>You're right that this is throwing away information: each of the coefficients has a different mean and SE, and you could obtain a better estimate by giving more weight to the more precisely-estimated correlations. This can be done, for instance, by fitting a multilevel model. However, the simple approach still yields calibrated confidence intervals.</p>
<p>The same thing happens when you want to calculate confidence intervals for a grand mean (a mean of means), for instance when you have multiple data points per group, and want to calculate a mean across groups.</p>
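<p>A sketch of this simple approach (the correlation values are illustrative; each coefficient is given equal weight, which is exactly the information-discarding simplification discussed above):</p>

```python
import math
import statistics

rs = [0.2, 0.3, 0.4, 0.35, 0.25]    # illustrative per-subject correlations

zs = [math.atanh(r) for r in rs]    # Fisher z-transform
z_mean = statistics.fmean(zs)
z_se = statistics.stdev(zs) / math.sqrt(len(zs))

# CI on the z scale, then back-transform to the correlation scale
r_mean = math.tanh(z_mean)
ci = (math.tanh(z_mean - 1.96 * z_se), math.tanh(z_mean + 1.96 * z_se))
```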
<hr />
<p>Update: It's just occurred to me that you could, comments about the assumptions of your analysis notwithstanding, reframe this problem as a linear mixed model.</p>
<p>Let's say you currently have <span class="math-container">$n$</span> independent variables, <span class="math-container">$\bf x_1, x_2, \dots, x_n$</span>, and <span class="math-container">$n$</span> dependant variables <span class="math-container">$\bf y_1, y_2, \dots, y_n$</span>, and you're currently calculating <span class="math-container">$n$</span> correlation coefficients, <span class="math-container">$r_1, r_2, \dots, r_n$</span>. As noted in the comments, I assume you believe that these correlations are independently sampled from a common distribution, and you're interested in calculating the mean and standard error of that distribution, for instance that you've collected <span class="math-container">$x$</span> and <span class="math-container">$y$</span> for each of the <span class="math-container">$n$</span> participants in an experiment, calculated <span class="math-container">$r$</span> for each participant, and wish to estimate <span class="math-container">$r$</span> at the population level.</p>
<p>This can be recast as a linear mixed model</p>
<p><span class="math-container">$$y_i = \alpha_p + \beta_p x_i + \epsilon_i$$</span></p>
<p>where <span class="math-container">$\alpha_p$</span> and <span class="math-container">$\beta_p$</span> are the intercept and slope term
for participant <span class="math-container">$p$</span>, <span class="math-container">$\epsilon_i$</span> is residual noise, and <span class="math-container">$\alpha_p \sim \text{Normal}(A, \sigma_A)$</span>, <span class="math-container">$\beta_p \sim \text{Normal}(B, \sigma_B)$</span>.
You're interested in estimating <span class="math-container">$B$</span>, the population-level
fixed effect for the slope parameter.</p>
<p>This can be done easily using <code>lme4</code> by pivoting your data to a long format with one row per observation and columns for <code>x</code>, <code>y</code>, and <code>participant</code>, and using the command</p>
<pre><code>lmer(y ~ 1 + x + (1 + x|participant), data=your_long_data)
</code></pre>
<p>Good luck!</p>
| 658
|
confidence intervals
|
Calculating confidence intervals
|
https://stats.stackexchange.com/questions/28809/calculating-confidence-intervals
|
<p>Body mass index was compared for two groups, people with elevated triglycerides (above 1.7 mmol/L) and people with normal triglyceride levels. The 10 people in the group with elevated triglyceride had body mass index mean = 26.1 and standard deviation = 3.72 and the 15 people in the normal group had body mass index mean = 24.3 and S = 3.45. </p>
<p>What are the confidence intervals for these means and without doing any further calculations, decide whether a test of the null hypothesis of no difference against a two-sided alternative at 5% would lead to rejection or not</p>
|
<p>What part are you stuck on? Try to solve it one step at a time.</p>
<p>The data is: </p>
<p>$\overline x_1 = 26.1, s_1 = 3.72, n_1 = 10\\
\overline x_2 = 24.3, s_2 = 3.45, n_2 = 15$</p>
<p>Find the confidence interval for $\mu_1$. Then, separately, find the confidence interval for $\mu_2$.</p>
<p>The next part is basically saying: What would be the result of a test of $H_0: \mu_1 = \mu_2$ vs $H_1:\mu_1 \ne \mu_2$?</p>
| 659
|
confidence intervals
|
Confidence intervals for exponential smoothing
|
https://stats.stackexchange.com/questions/43501/confidence-intervals-for-exponential-smoothing
|
<p>I'm using exponential smoothing (Brown's method) for forecasting. The forecast can be calculated for one or more steps (time intervals). Is there any way to calculate confidence intervals for such prognosis (ex-ante)?</p>
|
<p>Exponential smoothing methods as such have no underlying statistical model, so prediction intervals cannot be calculated. However, when we do want to add a statistical model, we naturally arrive at state space models, which are generalizations of exponential smoothing - and which allow calculating prediction intervals. See <a href="https://otexts.com/fpp3/ets-forecasting.html" rel="nofollow noreferrer">section 8.7 in this free online textbook using R</a>, or look into <em>Forecasting with Exponential Smoothing: The State Space Approach</em>. Both books are by Rob Hyndman and (different) colleagues, and both are very good.</p>
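<p>To make the state-space point concrete, here is a sketch (in Python; the series and the smoothing parameter are illustrative, and α is assumed known rather than estimated) of prediction intervals for <em>simple</em> exponential smoothing, i.e. the ETS(A,N,N) model, whose h-step forecast variance is σ²(1 + (h−1)α²):</p>

```python
import math
import statistics

# Illustrative series; alpha is an assumed (not estimated) smoothing weight
y = [10.2, 10.8, 10.5, 11.0, 10.7, 11.2, 10.9, 11.4]
alpha = 0.3

level = y[0]
residuals = []
for obs in y[1:]:
    residuals.append(obs - level)        # one-step-ahead forecast error
    level += alpha * (obs - level)

sigma = statistics.stdev(residuals)
forecast = level                         # SES forecast is flat in h

# Under ETS(A,N,N): Var(h-step error) = sigma^2 * (1 + (h - 1) * alpha^2),
# so the intervals widen with the horizon
def interval(h, z=1.96):
    hw = z * sigma * math.sqrt(1 + (h - 1) * alpha ** 2)
    return (forecast - hw, forecast + hw)
```

<p>Brown's double smoothing has an analogous state-space counterpart with a different variance formula; see the references for the general case.</p>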
| 660
|
confidence intervals
|
Confidence intervals for repeatability
|
https://stats.stackexchange.com/questions/11898/confidence-intervals-for-repeatability
|
<p>I have calculated the repeatability of individuals' responses to a stimulus using the methodology of <a href="http://www.univet.hu/users/jkis/education/Kutatastervezes/Lessells_Boag_Auk_87_Unrepeatable_repeatabilities_-_a_common_mistake.pdf" rel="nofollow">Lessells & Boag (1987) Auk 104:116</a>, where repeatability r = among-groups variance component / (among-groups variance component + within-groups variance component).</p>
<p>How do I assign confidence intervals to my estimate of r?</p>
|
<p>I would go for bootstrap to compute 95% CIs. This is what is generally done with coefficient of heritability or intraclass correlation. (I found no other indication in Falconer's book.) There is an example in the <a href="http://cran.r-project.org/web/packages/gap/index.html" rel="nofollow">gap</a> package of an handmade bootstrap (see <code>help(h2)</code>) in case of the correlation-based heritability coefficient, $h^2$. IMO, you're better off computing the variance components yourself, and using the <a href="http://cran.r-project.org/web/packages/boot/index.html" rel="nofollow">boot</a> package. Briefly, the idea is to write a small function that returns your MSs ratio and then call the <code>boot()</code> function, e.g.</p>
<pre><code>library(boot)
repeat.boot <- function(data, x) { foo(data[x,])$ratio }
res.boot <- boot(yourdata, repeat.boot, 500)
boot.ci(res.boot, type="bca")
</code></pre>
<p>where <code>foo(x)</code> is a function that take a data.frame, compute the variance ratio, and return it as <code>ratio</code>.</p>
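<p>For readers who want the mechanics that <code>boot()</code> wraps, the same percentile-bootstrap skeleton can be sketched in Python (the variance-component statistic below follows the Lessells &amp; Boag ANOVA formula for equal group sizes; the data are simulated placeholders, and resampling is done over whole individuals):</p>

```python
import random
import statistics

random.seed(42)
# Hypothetical data: 10 individuals, 4 repeated measurements each
mus = [random.gauss(0, 2) for _ in range(10)]
groups = [[random.gauss(mu, 1.0) for _ in range(4)] for mu in mus]

def repeatability(groups):
    """ANOVA-based r = s2_among / (s2_among + s2_within) (Lessells & Boag)."""
    k = len(groups)
    n0 = len(groups[0])                  # equal group sizes assumed
    grand = statistics.fmean(x for g in groups for x in g)
    ms_within = statistics.fmean(statistics.variance(g) for g in groups)
    ms_among = n0 * sum((statistics.fmean(g) - grand) ** 2
                        for g in groups) / (k - 1)
    s2_among = (ms_among - ms_within) / n0
    return s2_among / (s2_among + ms_within)

# Percentile bootstrap: resample whole individuals, recompute r each time
boots = sorted(
    repeatability([random.choice(groups) for _ in groups]) for _ in range(500)
)
ci = (boots[12], boots[487])             # approximate 95% percentile interval
```

<p>This is the plain percentile method; <code>boot.ci(type="bca")</code> as used above additionally corrects for bias and skewness.</p>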
<p><strong>Sidenote:</strong> I just checked on <a href="http://rseek.org" rel="nofollow">http://rseek.org</a> and found this project, <a href="http://rptr.r-forge.r-project.org/" rel="nofollow">rptR: Repeatability estimation for Gaussian and non-Gaussian data</a>. I don't know if the above is not simpler.</p>
| 661
|
confidence intervals
|
Why are confidence intervals valid?
|
https://stats.stackexchange.com/questions/650430/why-are-confidence-intervals-valid
|
<p>Recently, I began my statistics journey to understand the field better. Previously, my experience with statistics consisted of memorizing formulas, conditions, and applications of the latter. While one can often get away with such a superficial understanding, I overlooked the intuition behind most statistical practices.</p>
<p>In particular, I have begun to question the mathematical merit of confidence intervals. My theoretical understanding is as follows. Underlying any confidence interval is the notion of a sampling distribution, a theoretical distribution where we include the sample statistic for every possible sample of size n; the mean of this distribution is the true population parameter. From there, we select a desired confidence level (represented by a t or z value), and, using the standard deviation of our theoretical sampling distribution, we construct an interval. By definition, our confidence level will represent the percentage of total samples from our sampling distribution that are within the margin of error of the true population statistic. In other words, if we were to repeatedly collect samples of size n and construct confidence intervals of the same confidence level, we would expect that <span class="math-container">$C\%$</span> of these intervals would capture the true population parameter.</p>
<p>In theory, the math checks out. But, in practice, we do things that diverge from this theoretical underpinning. I will illustrate three cases where I have doubts:</p>
<p>Suppose, for instance, that we are attempting to create a confidence interval for a population proportion. The central limit theorem for proportions and mathematical reasoning leads us to conclude that the sampling distribution for a sample of sufficiently large size n is normally distributed with a mean of <span class="math-container">$p$</span> and a standard deviation of <span class="math-container">$\sqrt{\frac{p(1-p)}{n}}$</span>. In theory, if we were to take our <span class="math-container">$\hat{p}$</span> and create an interval using our desired confidence and the standard deviation outlined above, we would expect that <span class="math-container">$C\%$</span> of these intervals would capture the true population parameter. But, we do not do this because we do not know what <span class="math-container">$p$</span> is and thus do not know the sampling distribution's standard deviation. Instead, we use <span class="math-container">$\hat{p}$</span> as a stand-in in the standard deviation equation: <span class="math-container">$\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$</span>. Even though I know <span class="math-container">$\hat{p}$</span> is an unbiased estimator of <span class="math-container">$p$</span>, how can we be sure that <span class="math-container">$C\%$</span> of hypothetical intervals will capture the true population parameter?</p>
<p>Of course, I have the same doubts when it comes to estimating a population mean. The central limit theorem and mathematical reasoning lead us to conclude that the sampling distribution for a sample of sufficiently large size n is normally distributed with a mean of <span class="math-container">$\mu$</span> and a standard deviation of <span class="math-container">$\frac{\sigma}{\sqrt{n}}$</span>. In theory, if we were to take our <span class="math-container">$\bar{x}$</span> and create an interval using our desired confidence and the standard deviation outlined above, we would expect that <span class="math-container">$C\%$</span> of these intervals would capture the true population parameter. But, we do not do this because we often do not know what <span class="math-container">$\sigma$</span> is and thus do not know the sampling distribution's standard deviation. In these cases, we use the standard error to approximate the standard deviation: <span class="math-container">$\frac{s_x}{\sqrt{n}}$</span>. Given this change, how can we be sure that <span class="math-container">$C\%$</span> of hypothetical intervals will capture the true population parameter?</p>
<p>For the more general case, where the central limit theorem may not apply, statisticians use bootstrapping to calculate a confidence interval. In summary, they create many, many samples with replacement of size n from their original sample of size n. By plotting the sample statistic of each of these samples, we can create a pseudo-sampling distribution. From there, we can create a confidence interval by selecting two percentiles that are equidistant from the 50th percentile. For instance, creating an interval with the values at the 5th and 95th percentiles represents a 90% confidence interval. Yet, this seems like a huge stretch. That is, we are assuming that the standard deviation of our pseudo-distribution <em>is</em> the standard deviation of our true sampling distribution. How can we be sure that <span class="math-container">$C\%$</span> of hypothetical intervals will capture the true population parameter?</p>
| 662
|
|
confidence intervals
|
Transformation of Confidence Interval = Confidence Interval of Transformation?
|
https://stats.stackexchange.com/questions/439994/transformation-of-confidence-interval-confidence-interval-of-transformation
|
<p>I am wondering about the following situation: I have a confidence interval estimator <span class="math-container">$\delta(x)=[lb, ub]$</span> which returns valid <span class="math-container">$a\%$</span> confidence intervals for a value <span class="math-container">$\theta \in \mathbb{R}$</span> (not necessarily a parameter). How can I obtain a confidence interval for a value <span class="math-container">$f(\theta)$</span>? In particular, I am interested in </p>
<ul>
<li>f(x)=2*x-1</li>
<li>f(x)=x/(1-x)</li>
</ul>
<p>The naive approach of simply transforming the bounds using <span class="math-container">$f$</span>, that is, <span class="math-container">$\delta_f(x)=[f(lb), f(ub)]$</span> seems to produce confidence intervals with the correct <span class="math-container">$a\%$</span> coverage. However, given the existence of more complex procedures, like the delta method, this seems too good to be true.</p>
|
<p>Assuming <span class="math-container">$f$</span> is strictly monotone, this method works:</p>
<p><span class="math-container">$$lb < \theta < ub \implies f(lb) < f(\theta) < f(ub)$$</span></p>
<p><span class="math-container">$$\theta \in [lb, ub] \implies f(\theta) \in [f(lb), f(ub)]$$</span></p>
<p><span class="math-container">$$P(\theta \in [lb, ub]) \leq P(f(\theta) \in [f(lb), f(ub)])$$</span></p>
<p>Your second example is monotone when restricted to either x>1 or x<1, so if your CI doesn't cross 1, then you're good. If it crosses 1, then LOL, let's talk. </p>
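<p>A quick sketch of the monotone case in R (the proportion, sample size, and Wald interval here are illustrative assumptions, not from the question): build a 95% CI for a proportion, then push the endpoints through <span class="math-container">$f(x)=x/(1-x)$</span>, which is strictly increasing on <span class="math-container">$(0,1)$</span>:</p>

```r
p_hat <- 0.30; n <- 200
se <- sqrt(p_hat * (1 - p_hat) / n)
ci_p <- p_hat + c(-1, 1) * qnorm(0.975) * se  # 95% Wald CI for the proportion
# f(x) = x / (1 - x) is strictly increasing on (0, 1), so transform the endpoints
ci_odds <- ci_p / (1 - ci_p)
ci_odds
```

<p>Because the interval lies entirely inside (0, 1), the transformed endpoints form a valid 95% CI for the odds; no delta-method approximation is needed.</p>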
| 663
|
confidence intervals
|
Excel using confidence intervals when forecasting
|
https://stats.stackexchange.com/questions/618679/excel-using-confidence-intervals-when-forecasting
|
<p>My understanding is that when forecasting if you want to quantify the level of uncertainty of your model, one would typically use predictive intervals. However in Microsoft Excel, when using the 'Forecast Sheet' tool, it appears to be expressing uncertainty using confidence intervals instead.</p>
<p>Are confidence intervals more suited to the Exponential Triple Smoothing algorithm they use, or is there some other reason?</p>
| 664
|
|
confidence intervals
|
Confidence Intervals for Normalized Random Variables
|
https://stats.stackexchange.com/questions/596267/confidence-intervals-for-normalized-random-variables
|
<p>I think I have a pretty simple question about constructing confidence intervals for normalized random variables.</p>
<p>If I have i.i.d random variables <span class="math-container">$X_1, X_2, X_3, ..., X_n \sim F$</span> for some distribution <span class="math-container">$F$</span>, and say i have the standard mean estimator <span class="math-container">$\mu = \frac{1}{n}\sum_{i} X_i $</span>, then it is easy to construct confidence intervals for this estimator (either through Central Limit Theorem or Bootstrapping).</p>
<p>However, say that I now normalize each random variable. In other words, there is some function <span class="math-container">$\gamma = f(X_1, X_2, \ldots) \in \mathbb{R}^{+} $</span> such that I am now interested in the random variables</p>
<p><span class="math-container">$$\frac{X_1}{\gamma}, \frac{X_2}{\gamma}, \ldots,\frac{X_n}{\gamma},$$</span></p>
<p>For example, <span class="math-container">$\gamma$</span> could be the sum, sample mean or the sample variance of the original random variables, AKA (<span class="math-container">$\gamma = \frac{1}{n-1} \sum_i (X_i - \bar{\mu})^2$</span>), where <span class="math-container">$\bar{\mu}$</span> is the sample mean. Then</p>
<ol>
<li><p>is it still true that I can use CLT to construct my confidence intervals for the estimator <span class="math-container">$\frac{1}{n} \sum_{i} \frac{X_i}{\gamma}$</span>? I presume the answer is no, since the independence assumption is broken.</p>
</li>
<li><p>If CLT no longer works, would bootstrapping work? Isn't independence also required when using bootstrapping to construct confidence intervals?</p>
</li>
<li><p>Finally, if neither technique works, what would be the method to construct the confidence intervals for these normalized random variables?</p>
</li>
</ol>
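<p>One standard way out, sketched here purely as an illustration (the <span class="math-container">$\gamma$</span> below is the sample standard deviation, one of the examples mentioned in the question): bootstrap the entire pipeline, recomputing <span class="math-container">$\gamma$</span> inside every resample so that the dependence it induces is reflected in the bootstrap distribution:</p>

```r
set.seed(1)
x <- rgamma(200, shape = 2)
stat <- function(z) mean(z / sd(z))  # gamma = sample SD, recomputed per resample
boot_stats <- replicate(4000, stat(sample(x, replace = TRUE)))
ci <- quantile(boot_stats, c(0.025, 0.975))  # 95% percentile interval
ci
```

<p>Independence is only required across the original observations, not across the terms of the statistic, so resampling whole observations and recomputing the normalizer each time remains valid.</p>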
| 665
|
|
confidence intervals
|
pooled proportions with confidence intervals
|
https://stats.stackexchange.com/questions/316083/pooled-proportions-with-confidence-intervals
|
<p>I have some data like:</p>
<pre><code> Var1 Var2
Study1 20/23 3/23
Study2 30/34 4/34
Study3 1/30 29/30
</code></pre>
<p>I would like to calculate pooled proportions with confidence intervals using R.</p>
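<p>No answer was recorded for this question; as a minimal starting point (a naive pooling that ignores between-study heterogeneity, which a proper random-effects meta-analysis would address), the counts can simply be summed and passed to <code>prop.test()</code>:</p>

```r
events <- c(20, 30, 1)  # Var1 numerators from the table above
n      <- c(23, 34, 30)
# Naive pooled proportion with a 95% CI (chi-squared based, continuity-corrected)
res <- prop.test(sum(events), sum(n))
res
```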
| 666
|
|
confidence intervals
|
Confidence intervals for glmer() from lme4
|
https://stats.stackexchange.com/questions/60448/confidence-intervals-for-glmer-from-lme4
|
<p>I'm running a mixed model on some data. I want to calculate confidence intervals for my model. </p>
<p>For this I have adapted the following code section from <a href="http://glmm.wikidot.com/faq" rel="nofollow">Predictions and/or confidence (or prediction) intervals on predictions (lme4)</a>. The problem is that I have a hard time understanding what the code actually does, but so far it's the only way I have found to calculate some reasonable confidence intervals for my result.</p>
<p>So my question is: what are <code>pvar1</code> and <code>tvar1</code>? </p>
<p>I'm guessing they are something like $\text{SE}^2$, but I can't quite see it, and I can't describe their difference or their standard statistical names. Is the difference that one gives a CI while the other gives a PI? </p>
<p>Further when the code then calculates the confidence intervals it says <code>prediction +/- 2*sqrt(pvar1)</code>. Shouldn't it be the t.crit value or 1.96 for a normal distribution?</p>
<pre><code>library(lme4)
library(ggplot2) # Plotting
data("Orthodont",package="MEMSS")
fm1 <- lmer(
formula = distance ~ age*Sex + (age|Subject)
, data = Orthodont
)
newdat <- expand.grid(
age=c(8,10,12,14)
, Sex=c("Male","Female")
, distance = 0
)
mm <- model.matrix(terms(fm1),newdat)
newdat$distance <- mm %*% fixef(fm1)
pvar1 <- diag(mm %*% tcrossprod(vcov(fm1),mm))
tvar1 <- pvar1+VarCorr(fm1)$Subject[1] ## must be adapted for more complex models
newdat <- data.frame(
newdat
, plo = newdat$distance-2*sqrt(pvar1)
, phi = newdat$distance+2*sqrt(pvar1)
, tlo = newdat$distance-2*sqrt(tvar1)
, thi = newdat$distance+2*sqrt(tvar1)
)
#plot confidence
g0 <- ggplot(newdat, aes(x=age, y=distance, colour=Sex))+geom_point()
g0 + geom_errorbar(aes(ymin = plo, ymax = phi))+
labs(title="CI based on fixed-effects uncertainty ONLY")
#plot prediction
g0 + geom_errorbar(aes(ymin = tlo, ymax = thi))+
labs(title="CI based on FE uncertainty + RE variance")
</code></pre>
<p>Thank you for your help.</p>
|
<blockquote>
<p>the code then calculates the confidence intervals it says prediction +/- 2*sqrt(pvar1). Shouldn't it be the t.crit value or 1.96 for a normal distribution?</p>
</blockquote>
<p>This is correct: 1.96 is the more accurate critical value. However, statistical inference in mixed models is plagued with problems due to the presence of the random effects, and it is difficult to quantify the uncertainty in the random effects.</p>
<p>2 is often used, as Ben Bolker points out in the comments, in order to emphasize that these are approximate values.</p>
<p>Accordingly, any p values or confidence intervals should be considered approximate.</p>
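<p>As a side note (not part of the original answer): current versions of lme4 also provide a <code>confint()</code> method for fitted models, so approximate intervals for the coefficients can be requested directly:</p>

```r
library(lme4)
data("Orthodont", package = "MEMSS")
fm1 <- lmer(distance ~ age * Sex + (age | Subject), data = Orthodont)
# "Wald" is fastest; method = "profile" or "boot" is slower but usually more accurate
ci_wald <- confint(fm1, method = "Wald")
ci_wald
```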
| 667
|
confidence intervals
|
Confidence Interval - Revisit
|
https://stats.stackexchange.com/questions/539778/confidence-interval-revisit
|
<p>I've read in many articles about Confidence Interval as below</p>
<p>One such article link: <a href="https://www.statisticssolutions.com/misconceptions-about-confidence-intervals/" rel="nofollow noreferrer">https://www.statisticssolutions.com/misconceptions-about-confidence-intervals/</a></p>
<ol>
<li><p>[FALSE] - There is a 95% chance that the true population mean falls within the confidence interval.</p>
</li>
<li><p>[TRUE] - 95% of the confidence intervals calculated from these random samples will contain the true population mean.</p>
</li>
</ol>
<p>Aren't both the statements the same?</p>
<p><strong>Thought Process</strong>
We plan to take 100 sets of random samples. From the TRUE point 2, around 95 of those intervals (good set) will contain the True population mean while around 5 intervals (bad set) will not contain the mean.</p>
<p>For the first set, we got a confidence interval, say [c1_start, c1_end]. The chance of this confidence interval belonging to the good set is 95 out of 100, and if it's in this set, it'll contain the true population mean. Thus, there is 95% chance/probability that the confidence interval [c1_start, c1_end] will contain the true population mean which is the 1st statement.</p>
<p><strong>How come then the first statement is considered FALSE? Which part of my thought process is incorrect?</strong></p>
<p>Based on the following excerpt from <em>Introduction to Statistical Learning</em>, the first point seems true, or did I understand it wrong?
"A 95% confidence interval is defined as a range of values such that with 95% probability, the range will contain the true unknown value of the parameter."</p>
<p><strong>My other question is if Confidence Interval can only be explained if we do the experiments many-many times, what does a single confidence interval tells me?</strong></p>
|
<p>I view this as a philosophical question with no uniformly satisfactory answer.</p>
<p>Consider a 95% CI for <span class="math-container">$\mu$</span> based on a random sample of size <span class="math-container">$n$</span> from <span class="math-container">$\mathsf{Norm}(\mu, \sigma),$</span> where <span class="math-container">$\sigma$</span> is known. Before data are observed, there is agreement among frequentist statisticians that <span class="math-container">$$P\left(\frac{|\bar X - \mu|}{\sigma/\sqrt{n}} \le 1.96\right)= P\left(\bar X - 1.96\frac{\sigma}{\sqrt{n}} \le \mu \le \bar X + 1.96\frac{\sigma}{\sqrt{n}}\right) = 0.95.$$</span></p>
<p>However, after data are available, there is disagreement whether the above remains a true probability statement. One can argue that the word "confidence" (instead of "probability") came into use because it is sufficiently vague to avoid arguments. About all one can say for sure is that, over the long run, the procedure giving rise to the interval <span class="math-container">$\bar X \pm 1.96\frac{\sigma}{\sqrt{n}}$</span> will cover <span class="math-container">$\mu$</span> 95% of the time. References to what this interval means for the specific experiment at hand are bound to provoke unproductive arguments.</p>
<p>When frequentist statistical consultants tell clients that a given CI is "95% sure" to include the true value of <span class="math-container">$\mu,$</span> they can feel safe because the exact true value of <span class="math-container">$\mu$</span> is typically never known.</p>
<p>In a Bayesian context, a prior distribution establishes a probability framework. A Bayesian posterior probability (or 'credible') interval, based on the prior and the likelihood function from the data, can be taken as a true probability statement about the current data. If you believe the prior and the integrity of the likelihood, there can be no quibbling about the resulting probability statement about the posterior distribution.</p>
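<p>The long-run reading described above is easy to check by simulation (an illustrative sketch with arbitrary <span class="math-container">$\mu$</span>, <span class="math-container">$\sigma$</span>, and <span class="math-container">$n$</span>): draw many samples, build a t-interval from each, and count how often the fixed true mean is covered:</p>

```r
set.seed(7)
mu <- 10; n <- 25; reps <- 10000
covered <- replicate(reps, {
  x <- rnorm(n, mean = mu, sd = 3)
  ci <- t.test(x, conf.level = 0.95)$conf.int
  ci[1] <= mu && mu <= ci[2]
})
mean(covered)  # long-run coverage, close to 0.95 by construction
```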
| 668
|
confidence intervals
|
Confidence intervals vs sample size?
|
https://stats.stackexchange.com/questions/38676/confidence-intervals-vs-sample-size
|
<p>I am totally new to stats and the field of confidence intervals. So this might be very trivial or even sound stupid. I would appreciate if you could help me understand or point me to some literature/text/blog that explains this better.</p>
<p>I see on various news sites like CNN, Fox news, Politico etc about their polls regarding the US Presidential race 2012. Each agency conducts some polls and reports some statistics of the form:</p>
<p>CNN: The popularity of Obama is X% with margin of error +/- x1%. Sample size 600.
FOX: The popularity of Obama is Y% with margin of error +/- y1%. Sample size 800.
XYZ: The popularity of Obama is Z% with margin of error +/- z1%. Sample size 300.</p>
<p>Here are my doubts:</p>
<ol>
<li><p>How do I decide which one to trust? Should it be based on the confidence interval, or should I assume that since Fox has a larger sample size, its estimate is more reliable? Is there an implicit relationship between confidence intervals and sample size such that specifying one obviates the need to specify the other?</p></li>
<li><p>Can I determine standard deviation from confidence intervals? If so, is it valid always or valid only for certain distributions (like Gaussian)?</p></li>
<li><p>Is there a way I can "merge" or "combine" the above three estimates and obtain my own estimate along with confidence intervals? What sample size should I claim in that case?</p></li>
</ol>
<p>I have mentioned CNN/Fox only to better explain my example. I have no intention to start a Democrats vs Republicans debate here.</p>
<p>Please help me understand the issues that I have raised.</p>
|
<p>In addition to Peter's great answer, here are some answers to your specific questions:</p>
<ol>
<li><p>Who to trust will also depend on who is doing the poll and what effort they put into getting a good-quality poll. A bigger sample size is not better if the sample is not representative: taking a huge poll in only one non-swing state would not give very good results.</p>
<p>There is a relationship between sample size and the width of the confidence interval, but other things also influence the width, such as how close the percentage is to 0, 1, or 0.5; what bias adjustments were used, how the sample was taken (clustering, stratification, etc.). The general rule is that the width of the confidence interval will be proportional to <span class="math-container">$\frac{1}{\sqrt{n}}$</span>, so to halve the interval you need 4 times the sample size.</p>
</li>
<li><p>If you know enough about how the sample was collected and what formula was used to compute the interval then you could solve for the standard deviation (you also need to know the confidence level being used, usually 0.05). But the formula is different for stratified vs. cluster samples. Also most polls look at percentages, so would use the binomial distribution.</p>
</li>
<li><p>There are ways to combine the information, but you would generally need to know something about how the samples were collected, or be willing to make some form of assumptions about how the intervals were constructed. A Bayesian approach is one way.</p>
</li>
</ol>
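<p>The <span class="math-container">$\frac{1}{\sqrt{n}}$</span> rule in point 1 can be made concrete with a short sketch (a 50% proportion is assumed here as the worst case for the margin of error):</p>

```r
margin <- function(p, n) qnorm(0.975) * sqrt(p * (1 - p) / n)
m600  <- margin(0.5, 600)   # roughly the size of the smaller polls above
m2400 <- margin(0.5, 2400)  # four times the sample size
c(m600 = m600, m2400 = m2400, ratio = m600 / m2400)  # ratio is exactly 2
```

<p>Quadrupling the sample size halves the margin of error, which is why the reported margins shrink only slowly as polls get bigger.</p>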
| 669
|
confidence intervals
|
What are "ABC" boostrap confidence intervals?
|
https://stats.stackexchange.com/questions/545316/what-are-abc-boostrap-confidence-intervals
|
<p>I am reading the <a href="https://astrostatistics.psu.edu/su07/R/html/boot/html/abc.ci.html" rel="nofollow noreferrer">documentation</a> of <code>boot::abc.ci</code> and feel I am missing something. It sounds like the "ABC" method is just an approximation of BCa bootstrap confidence intervals.</p>
<p>Is this the case? If so, when would we want to use that approximation instead of calculating the full BCa confidence interval? (The <code>boot</code> package calculates BCa intervals.)</p>
|
<p>Basically, there are situations where BCa intervals can become quite computationally expensive and ABC intervals offer a more feasible alternative. With the computational power usually at hand nowadays, I don't think there is much need for ABC intervals anymore, though.</p>
<p>As reference, there is a brief statement about ABC methods in Puth et al. (2015):</p>
<blockquote>
<p>[ABC methods] were developed as approximations to the BCa method that require much less computational effort. Increasing availability of computing power reduces concerns about computational effort [...] Further details of these methods can be found in Efron & Tibshirani (1993) or Manly (2007).</p>
</blockquote>
<p>Puth, M. T., Neuhäuser, M., & Ruxton, G. D. (2015). On the variety of methods for calculating confidence intervals by bootstrapping. Journal of Animal Ecology, 84(4), 892-897. <a href="https://besjournals.onlinelibrary.wiley.com/doi/10.1111/1365-2656.12382#jane12382-bib-0010" rel="noreferrer">Link</a>.</p>
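<p>For reference, a minimal call might look like this (a sketch; <code>abc.ci()</code> requires the statistic in weighted form, i.e. as a function of the data and a vector of weights, here a weighted mean, and the data and confidence level are arbitrary):</p>

```r
library(boot)
set.seed(3)
x <- rexp(50)
# ABC works with the statistic expressed in weighted form; this is the weighted mean
wmean <- function(data, w) sum(data * w) / sum(w)
ci <- abc.ci(x, wmean, conf = 0.95)
ci  # confidence level followed by the lower and upper limits
```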
| 670
|
confidence intervals
|
R Confidence Intervals for quantiles from Generalized Lambda Distribution
|
https://stats.stackexchange.com/questions/518733/r-confidence-intervals-for-quantiles-from-generalized-lambda-distribution
|
<p>I'd like to compute confidence intervals in R for quantiles from generalized lambda distribution.</p>
<p><a href="https://www.sciencedirect.com/science/article/abs/pii/S0167947309000437" rel="nofollow noreferrer">Steve Su (2009)</a> introduces below 2 ways to calculate confidence intervals. I think I could understand method of below (1). But I cannot interpret method (2) clearly. Can anybody help to code in R?</p>
<p>(1) Normal-GLD Approximation Method</p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=s=%5Cfrac%7B%5Csqrt%7Bp(1-p)%7D%7D%7Bf_%7BX%7D(%5Chat%7Bq%7D_%7Bn%7D(p))%7D" alt="" /></p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=%5Chat%7Bq%7D_%7Bn%7D(p)%5Cpm%7Bz%7D_%7B%5Calpha/2%7D%5Cfrac%7Bs%7D%7B%5Csqrt%7Bn%7D%7D" alt="" /></p>
<pre><code># p : probability point to calculate confidence intervals
# ci : confidence interval such as 0.95 or 0.99
alpha <- 1 - ci
n <- length(data)
# fmkl GLD parameters
lambda1
lambda2
lambda3
lambda4
q <- gld::qgl(
p = p, lambda1 = lambda1, lambda2 = lambda2, lambda3 = lambda3,
lambda4 = lambda4, param = "fkml", lambda5 = NULL
)
s <- sqrt(p * (1 - p)) / gld::dgl(
x = q, lambda1 = lambda1, lambda2 = lambda2, lambda3 = lambda3,
lambda4 = lambda4, param = "fkml", lambda5 = NULL,
inverse.eps = .Machine$double.eps, max.iterations = 500
)
z <- qnorm(p = 1 - alpha / 2)  # upper-tail critical value; qnorm(alpha / 2) is negative and flips the bounds
# Confidence Intervals
q + c(-1, 1) * z * s / sqrt(n)
</code></pre>
<p>I think I can calculate confidence interval by above code.</p>
<p>(2) Analytical-Maximum Likelihood GLD Approach</p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=%7Bg%7D(x)=%5Cfrac%7B%5CGamma(n%2B1)%7D%7B%5CGamma(m%2B1)%5CGamma(n-m)%7D(F_%7BX%7D(x))%5E%7Bm%7D(1-F_%7BX%7D(x))%5E%7Bn-m-1%7Df_%7BX%7D(x)" alt="" /></p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=%5CGamma(y)=%5Cint_%7B0%7D%5E%7B%5Cinfty%7Du%5E%7By-1%7De%5E%7B-u%7Ddu" alt="" /></p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=m=%5Clceil%7Bnp%7D%5Crceil" alt="" /></p>
<p>Consequently, to find the confidence interval analytically, all that is required is to solve the following equations:</p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=%5Cint_%7B0%7D%5E%7BUpperLimit%7Dg(x)dx=1-%5Cfrac%7B%5Calpha%7D%7B2%7D" alt="" /></p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=%5Cint_%7B0%7D%5E%7BLowerLimit%7Dg(x)dx=%5Cfrac%7B%5Calpha%7D%7B2%7D" alt="" /></p>
<p>In above formula,
<img src="https://chart.googleapis.com/chart?cht=tx&chl=%5Cint_%7B0%7D%5E%7Bx_%7B0%7D%7Dg(x)dx=B(m%2B1,n-m)%7C_%7B0%7D%5E%7BF_%7BX%7D(x_%7B0%7D)%7D" alt="" />
where
<img src="https://chart.googleapis.com/chart?cht=tx&chl=B" alt="" />
is Euler's incomplete beta function normalized by the complete Beta function.</p>
<p>I don't understand the above method clearly. What I tried so far is the code below, but it seems wrong. I hope someone can teach me how to calculate this in R.</p>
<pre><code># fmkl GLD parameters
lambda1
lambda2
lambda3
lambda4
n <- length(data) # does n mean length of input data??
p <- # probability point to calculate confidence intervals
m <- ceiling(n * p)
intervals <- qbeta(p = c(alpha / 2, 1 - alpha /2), shape1 = m + 1, shape2 = n - m, ncp = 0, lower.tail = TRUE, log.p = FALSE)
q <- gld::qgl(
p = p, lambda1 = lambda1, lambda2 = lambda2, lambda3 = lambda3,
lambda4 = lambda4, param = "fkml", lambda5 = NULL
)
# Confidence Intervals
q + intervals # Correct???
</code></pre>
|
<p>Here, I'm going to reproduce example 3.1.1 in Su (2009) where he calculates 95% confidence intervals for the 99th quantile for the speed of light data from Michelson 1879.</p>
<p>It basically boils down to implementing the formulas (4), (5) and (6) from Su (2009). In the following <code>R</code> code, I used the <a href="https://cran.r-project.org/package=gld" rel="nofollow noreferrer"><code>gld</code></a> package to fit the generalized lambda distribution (FMKL). The dataset called <a href="https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/morley.html" rel="nofollow noreferrer">morley</a> can be found in the <code>datasets</code> package in <code>R</code>. The code is certainly not very optimized but it seems to work.</p>
<p>Here are the formulas (4), (5) and (6):</p>
<pre><code># Formula (4) in Su (2009)
gx <- function(x, n, p, lambda) {
m <- n*p
gamma(n + 1)/(gamma(m + 1)*gamma(n - m))*(pgl(x, lambda1 = lambda[1], lambda2 = lambda[2], lambda3 = lambda[3], lambda4 = lambda[4]))^m*(1 - pgl(x, lambda1 = lambda[1], lambda2 = lambda[2], lambda3 = lambda[3], lambda4 = lambda[4]))^(n - m - 1)*dgl(x, lambda1 = lambda[1], lambda2 = lambda[2], lambda3 = lambda[3], lambda4 = lambda[4])
}
# Formula (5) in Su (2009)
int_fun_up <- function(a, g, n, p, alpha, lambda, lower_lim) {
integrate(gx, lower = lower_lim, upper = a, lambda = lambda, n = n, p = p, subdivisions = 1e4L, rel.tol = 15e-10)$value - (1 - (alpha/2))
}
# Formula (6) in Su (2009)
int_fun_low <- function(a, g, n, p, alpha, lambda, lower_lim) {
integrate(gx, lower = lower_lim, upper = a, lambda = lambda, n = n, p = p, subdivisions = 1e4L, rel.tol = 15e-10)$value - (alpha/2)
}
</code></pre>
<p>Alternatively, you could use the beta function mentioned in the paper:</p>
<pre><code># Formula (5) in Su (2009)
int_fun_up <- function(a, g, n, p, alpha, lambda, lower_lim) {
Fx <- pgl(a, lambda1 = lambda[1], lambda2 = lambda[2], lambda3 = lambda[3], lambda4 = lambda[4])
(pbeta(Fx, n*p + 1, n - n*p) - pbeta(0, n*p + 1, n - n*p)) - (1 - (alpha/2))
}
# Formula (6) in Su (2009)
int_fun_low <- function(a, g, n, p, alpha, lambda, lower_lim) {
Fx <- pgl(a, lambda1 = lambda[1], lambda2 = lambda[2], lambda3 = lambda[3], lambda4 = lambda[4])
(pbeta(Fx, n*p + 1, n - n*p) - pbeta(0, n*p + 1, n - n*p)) - (alpha/2)
}
</code></pre>
<p>Now, we're going to fit the FMKL GLD distribution to the data:</p>
<pre><code>library(gld)
data(morley, package = "datasets")
morley<span class="math-container">$Speed2 <- (morley$</span>Speed + 299000)/1000
fit <- fit.fkml(morley$Speed2, return.data = TRUE)
plot(fit, one.page = TRUE) # Check fit
</code></pre>
<p><a href="https://i.sstatic.net/MJk7h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MJk7h.png" alt="GLD_fit" /></a></p>
<p>The fit seems adequate. Finally, use a root-solver to solve equations (5) and (6) to get the confidence intervals for the 99th quantile:</p>
<pre><code>uniroot(int_fun_low, interval = c(299.5, 301), g = gx, lower_lim = 299.5, n = 100, p = 0.99, alpha = 0.05, lambda = fit<span class="math-container">$lambda)$</span>root.
[1] 299.9937
uniroot(int_fun_up, interval = c(299.5, 301), g = gx, lower_lim = 299.5, n = 100, p = 0.99, alpha = 0.05, lambda = fit<span class="math-container">$lambda)$</span>root
[1] 300.141
</code></pre>
<p>The confidence interval is <span class="math-container">$(299.9937;\,300.141)$</span>. This is very close to the values reported in Su (2009) which are <span class="math-container">$(299.9936;\,300.1412)$</span>.</p>
| 671
|
confidence intervals
|
Ranking by score with confidence intervals
|
https://stats.stackexchange.com/questions/182405/ranking-by-score-with-confidence-intervals
|
<p>I am using simulation to compute a unique score for every college basketball team, and ranking these teams based on that score.</p>
<p>I am sensitive to the fact that the score sometimes differs by a tiny amount (1 part in 1,000), which is unlikely to be meaningfully different. Therefore, I am planning to use permutation to generate confidence intervals for these scores.</p>
<p>I planned to use the intersection of confidence intervals to determine if two teams had truly different rankings. For example:</p>
<pre><code>1 (Team A) 0.1234 (CI 0.1232-0.1236)
2 (Team B) 0.1232 (CI 0.1228-0.1236)
3 (Team C) 0.1229 (CI 0.1227-0.1231)
</code></pre>
<p>In this example, however, I am uncertain about whether 2 teams should share the same rank, whether all 3 should, or whether they should all be ranked differently. Arguments for each:</p>
<p><em>Why to rank all the same:</em></p>
<p>Team A's confidence intervals overlap with those of Team B, and Team B's confidence intervals overlap with Team C. I think this can be dismissed because Team A's confidence intervals don't overlap with Team C, so it is clear that A and C should have different ranks, regardless of B's rank.</p>
<p><em>Why to rank A and B #1, and C #3:</em></p>
<p>Team A and B's confidence intervals overlap. Team A's and Team C's do not. We will define our ranking procedure to state that each rank terminates when the lower confidence bound of the team with the highest mean rank no longer overlaps with that of a subsequent team. In this case, this would cause A and B to tie for a rank of #1, and C to fall out into the next rank (#3). </p>
<p><em>Why to rank all separately:</em></p>
<p>Team A and B overlap, and team B and C overlap. If Team A and Team B share an equal rank, then Team C deserves to share the same rank because of its equality with B. Therefore, either A and C share the same rank, or A and B cannot share the same rank. Because the confidence bounds of A and C do not overlap, the former cannot be true, so A, B, and C will each be ranked separately (1, 2, and 3, respectively).</p>
<p>I think each can have arguments made for them, but I'm curious if there is a literature which has addressed the merits of these (and perhaps other) approaches to grouping rankings by using confidence intervals.</p>
|
<p>Not disagreeing with @Kontorus, but putting in some more context.</p>
<p>You are making "multiple comparisons"; A with B, A with C, B with C. In such circumstances overlapping groups are common, often indicated by grouping symbols e.g.</p>
<pre><code> value Grouping
1 (Team A) 0.1234 (CI 0.1232-0.1236) 1
2 (Team B) 0.1232 (CI 0.1228-0.1236) 1 2
3 (Team C) 0.1229 (CI 0.1227-0.1231) 2
Values that do not share a grouping number are significantly different
</code></pre>
<p>As @Kontorus remarked Minitab produces such "Grouping Information Tables" from multiple comparison tests following ANOVA, but you can use a similar display more generally.</p>
<p>There is a lot of information out there on multiple comparisons, you might want to read up on it. You should be aware that making multiple comparisons increases the risk of Type 1 errors. </p>
<p>Edited to add: you should also be aware that checking for non-overlapping confidence intervals is not the same as a significance test of the difference; see <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC99228/" rel="nofollow">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC99228/</a></p>
<p>Edited to add: this may be of interest "An Algorithm for a Letter-Based Representation of All-Pairwise Comparisons" <a href="http://www.tandfonline.com/doi/abs/10.1198/1061860043515" rel="nofollow">http://www.tandfonline.com/doi/abs/10.1198/1061860043515</a></p>
| 672
|
confidence intervals
|
Family-wise confidence intervals
|
https://stats.stackexchange.com/questions/1972/family-wise-confidence-intervals
|
<p>I have a bunch of variables organized into 10 different levels of a grouping factor. I'm doing some ANCOVA on particular variables and also plotting the data using boxplots. I'd like to add 84% confidence intervals to all the groups (since non-overlapping 84% CIs indicate a significant difference at alpha .05 - at least for two groups). I can do all this quite easily in R.</p>
<p>My question is - should I be applying a "family-wise" 84% CIs to all the groups? In other words, just as one would devalue an alpha level by the number of groups to obtain a family-wise alpha, should I inflate the CI a reciprocal amount to achieve a family-wise interval? This seems reasonable to me, but I haven't seen this discussed in the literature.</p>
<p>If alpha and CI level were interchangeable for two or more groups, the family-wise 84% CI would be 99.5%, but I've read that alpha and CI are only interchangeable for one-sample situations. If this is the case, how would I go about calculating the family-wise confidence intervals for 10 (or any number of) groups?</p>
<p>Any advice would be welcome.</p>
<p>best,</p>
<p>Steve</p>
|
<p>It sounds like a reasonable solution <strong>if this is what is important for you to present in the plot</strong>.</p>
<p>What this will give you (besides many questions, in case you are working with people who like statistics less than you) is a CI that is applicable to your situation, which requires correction for multiple hypotheses.</p>
<p>What this won't give you, is the ability to compare difference between groups based on the CI.</p>
<p>Regarding the computation of the CI, you could use p.adjust() with a method such as Simes, which will still control your FWE (family-wise error) but will give you a wider interval.</p>
<p>As to why you didn't find people writing about this, that is a good question, I don't know.</p>
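<p>To make the adjustment idea concrete (a sketch using the simpler Bonferroni correction rather than Simes): for <span class="math-container">$k$</span> groups, compute each interval at level <span class="math-container">$1-\alpha/k$</span>, which widens the critical multiplier:</p>

```r
k <- 10; alpha <- 0.05
z_single <- qnorm(1 - alpha / 2)        # 1.96 for one unadjusted 95% CI
z_family <- qnorm(1 - alpha / (2 * k))  # Bonferroni multiplier for 10 intervals
c(per_interval_level = 1 - alpha / k, z_single = z_single, z_family = z_family)
```

<p>The family-wise coverage of the widened intervals is then at least <span class="math-container">$1-\alpha$</span>, at the cost of each interval being noticeably wider.</p>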
| 673
|
confidence intervals
|
visualizing confidence intervals in longitudinal models
|
https://stats.stackexchange.com/questions/597869/visualizing-confidence-intervals-in-longitudinal-models
|
<p>I have a longitudinal dataset with continuous variables for 6 different time points nested within each ID. I prepared a longitudinal Poisson model with ID as a random effect for the intercepts, which in itself is working fine. In preparing the figures for the report, I prepared a figure with the predicted values for each of the 6 timepoints and the confidence intervals derived from the model.
Then I grew uncertain and started looking in the literature. I could not really decipher what other people had done for their confidence intervals, since it was not shared in the methods section or figure legend. Looking at some of these figures, I am left wondering whether some have just reported means or medians with 95% CIs or IQRs. I guess median/IQR could be defended, but if means and CIs are reported, wouldn't the confidence interval be biased due to within-subject correlation?</p>
<p>So I guess my question is if I should use my calculated confidence intervals from the model (with ID as a random intercept) or if there is another universally accepted way of doing this that I have missed?</p>
<p>Thanks for the input regarding confidence and prediction intervals. What I am looking for is the most appropriate way of demonstrating the uncertainty of our measurements over time in three groups - see figure - whether I should include the fact that the measurements are correlated over time or not.
<a href="https://i.sstatic.net/QPD9z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QPD9z.png" alt="" /></a></p>
<p>Left is Poisson based fitted population level values and to the right is means and confidence intervals for each time point disregarding the within-subject correlation</p>
|
<p>The question is whether you want to display "the uncertainty of our measurements over time" (essentially the raw data) or the uncertainty of the <em>model estimates</em> over time.</p>
<p>If all you have is a simple Poisson model of counts versus time and random intercepts for individuals, and you have complete data, then it might make sense to take the original data and show means with standard errors. Boxplots can be even more useful for illustrating the variability among measurements. Then your model evaluates whether any differences apparent in the raw data are statistically reliable.</p>
<p>Particularly if there are other covariates in your model, however, plots of raw data alone can be misleading. Then you should report means <em>estimated from the model</em> for each condition; showing 95% confidence intervals (CI) for those <em>estimates</em> is frequent practice. Those would be determined from the model coefficients and the variance-covariance matrix of the estimates. For your data, a logarithmic y-axis might make sense. Make it clear to your audience precisely what you are displaying.</p>
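<p>As a hedged illustration of the last point (the numbers and the two-coefficient model below are invented, not taken from the question), a CI for a model estimate at a covariate vector <code>x0</code> can be built from the fitted coefficients and their variance-covariance matrix, here sketched in Python:</p>

```python
import numpy as np

# Hypothetical sketch: beta and V would come from your fitted model
# (e.g. coef() and vcov() in R, or params / cov_params() in statsmodels).
beta = np.array([1.2, -0.3])          # assumed fitted coefficients
V = np.array([[0.04, -0.01],
              [-0.01, 0.02]])         # assumed vcov of the estimates
x0 = np.array([1.0, 2.0])             # intercept + time point of interest

eta = x0 @ beta                        # linear predictor (log scale for Poisson)
se = np.sqrt(x0 @ V @ x0)              # standard error of the estimate
lo, hi = eta - 1.96 * se, eta + 1.96 * se
# For a Poisson model, exponentiate to put the CI on the count scale:
print(np.exp([lo, hi]))
```

<p>The same recipe, applied at each time point, gives the pointwise CI band for the figure.</p>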
<p>A few warnings.</p>
<p>First, as comments indicate, a CI for a model estimate is not the same as a prediction interval. Use of the phrase "prediction interval" isn't always consistent, but it typically includes both the error in the model estimate and the additional error in sampling: for example the Poisson variance for observations here, or the <a href="https://en.wikipedia.org/wiki/Mean_and_predicted_response" rel="nofollow noreferrer">residual variance in an ordinary least squares model</a>.</p>
<p>Second, it's tempting to equate overlap of CI with lack of significance. That's incorrect. <a href="https://stats.stackexchange.com/q/18215/28500">This page</a> explains the relationship for simple <em>t</em>-tests, but the principles hold in general: non-overlap of CI is too stringent a test for "significance." The <a href="https://cran.r-project.org/package=emmeans" rel="nofollow noreferrer"><code>emmeans</code> package</a> can provide a visual display for which non-overlap of arrows is approximately related to significance of differences among model estimates.</p>
<p>Third, @usεr11852 raises an interesting point about displaying pointwise CI (like you seem to have) versus "simultaneous" CI. The latter involve a correction for <a href="https://en.wikipedia.org/wiki/Multiple_comparisons_problem" rel="nofollow noreferrer">multiple comparisons</a> so that the CI represent the entire family of comparisons. Provided that you are clear in describing what you show, one could argue either way about what to plot.</p>
| 674
|
confidence intervals
|
Confidence Intervals for Dice Rolls?
|
https://stats.stackexchange.com/questions/600395/confidence-intervals-for-dice-rolls
|
<p>Suppose I roll a 6-sided die 100 times and observe the following data - let's say that I don't know the probability of getting any specific number (but I am assured that each "trial" is independent from the previous "trial").</p>
<p>Below, here is some R code to simulate this experiment:</p>
<pre><code># Set the probabilities for each number (pretend this is unknown in real life)
probs <- c(0.1, 0.2, 0.3, 0.2, 0.1, 0.1)
# Generate 100 random observations
observations <- sample(1:6, size = 100, replace = TRUE, prob = probs)
# Print the observations
print(observations)
[1] 2 4 2 2 4 6 2 2 6 6 3 4 6 4 2 1 3 6 3 1 2 5 3 6 4 6 1 3 4 2 6 2 4 1 3 3 3 5 2 5 2 3 5 1 4 6 1 6 4 2
[51] 2 3 2 3 3 5 6 5 4 3 2 3 2 1 2 3 2 2 5 3 2 1 1 1 3 3 2 4 4 3 1 4 4 6 3 3 5 5 2 2 1 3 2 1 6 3 4 3 3 3
</code></pre>
<p>As we know, the above experiment corresponds to the Multinomial Probability Distribution Function (<a href="https://en.wikipedia.org/wiki/Multinomial_distribution" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Multinomial_distribution</a>):</p>
<p><span class="math-container">$$
P(X_1 = x_1, X_2 = x_2, \dots, X_k = x_k) = \frac{n!}{x_1!x_2!\dots x_k!}p_1^{x_1}p_2^{x_2} \dots p_k^{x_k}
$$</span></p>
<p>Using Maximum Likelihood Estimation (<a href="https://en.wikipedia.org/wiki/Maximum_likelihood_estimation" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Maximum_likelihood_estimation</a> MLE), the estimate for the probability for getting any number on this die is given by (e.g. what is the probability that this above die gives you a "3"?):</p>
<p><span class="math-container">$$
\hat{p}_{i,\text{MLE}} = \frac{x_i}{n}
$$</span></p>
<p>Next, the Variance for each of these parameters can be written as follows :</p>
<p><span class="math-container">$$
\text{Var}(\hat{p}_{i,\text{MLE}}) = \frac{p_i(1 - p_i)}{n}
$$</span></p>
<p><strong>From here, I am interested in estimating the "spreads" of these probabilities</strong> - for example, there might be a 0.2 probability of getting a "6" - but we can then "bound" this estimate and say there is a <strong>0.2 ± 0.05 probability</strong> of rolling a 6. Effectively, this "bounding" corresponds to a Confidence Interval (<a href="https://en.wikipedia.org/wiki/Confidence_interval" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Confidence_interval</a>).</p>
<p>Recently, I learned that when writing Confidence Intervals for "proportions and probabilities", we might not be able to use the "classic" notion of the Confidence Interval (i.e. parameter ± z-alpha/2*sqrt(var(parameter))), <strong>because this could result in these bounds going over "1" and below "0", thus violating the fundamental definitions of probability.</strong></p>
<p>Doing some reading online, I found different methods that might be applicable for writing the Confidence Intervals for the parameters of a Multinomial Distribution.</p>
<ul>
<li><p><strong>Bootstrapping</strong> (<a href="https://en.wikipedia.org/wiki/Bootstrapping_(statistics)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Bootstrapping_(statistics)</a>): By virtue of the Law of Large Numbers (<a href="https://en.wikipedia.org/wiki/Law_of_large_numbers" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Law_of_large_numbers</a>), Bootstrapping works by repeatedly resampling your observed data and using the MLE formulas to calculate the parameters of interest on each of these re-samples. Then, you would sort the parameter estimates in ascending order and take the estimates corresponding to the 5th and 95th percentiles. These estimates from the 5th and 95th percentiles would now correspond to the desired Confidence Interval. As I understand, this is an <strong>approximate method</strong>, but I have heard that the Law of Large Numbers argues that for an infinitely sized population and an infinite number of resamples, the bootstrap estimates will converge to the actual values. It is important to note that in this case, the "Sequential Bootstrap" approach needs to be used such that the chronological order of the observed data is not interrupted.</p>
</li>
<li><p><strong>Delta Method</strong> (<a href="https://en.wikipedia.org/wiki/Delta_method" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Delta_method</a>): The Delta Method uses a Taylor Approximation (<a href="https://en.wikipedia.org/wiki/Taylor%27s_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Taylor%27s_theorem</a>) for the function of interest (i.e. MLE variance estimate). Even though this is also said to be an <strong>approximate method</strong> (i.e. the Delta Method relies on the Taylor APPROXIMATION), there supposedly exists mathematical theory (e.g. <a href="https://en.wikipedia.org/wiki/Continuous_mapping_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Continuous_mapping_theorem</a>) which can demonstrate that estimates from the Delta Method "converge in probability" to the actual values. This being said, I am not sure how the Delta method can directly be used to calculate Confidence Intervals.</p>
</li>
<li><p>Finally, very recently I learned about the <strong>Wilson Interval</strong> (<a href="https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval</a>), which is said to be more suitable for writing Confidence Intervals in the case of proportions and probabilities. In the case of the Multinomial Probability Distribution, I think the Wilson Interval for 95% Confidence Intervals on parameter estimates can be written as follows:</p>
</li>
</ul>
<p><span class="math-container">$$
\left( \frac{\hat{\theta} + z_{\alpha/2}^2/(2n)}{1+z_{\alpha/2}^2/n} - \frac{z_{\alpha/2}}{1+z_{\alpha/2}^2/n}\sqrt{\frac{\hat{\theta}(1-\hat{\theta})}{n}+\frac{z_{\alpha/2}^2}{4n^2}},\; \frac{\hat{\theta} + z_{\alpha/2}^2/(2n)}{1+z_{\alpha/2}^2/n} + \frac{z_{\alpha/2}}{1+z_{\alpha/2}^2/n}\sqrt{\frac{\hat{\theta}(1-\hat{\theta})}{n}+\frac{z_{\alpha/2}^2}{4n^2}} \right)
$$</span></p>
<p>However, I am still learning about the details of this.</p>
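<p>For reference, the per-category Wilson interval (treating each face as a binomial "this face vs. the rest") is easy to compute directly. This sketch is in Python rather than R, and the counts are just hypothetical tallies from one run of 100 rolls:</p>

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a single proportion x/n (95% by default)."""
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Treat each face of the die as "this face vs. everything else":
counts = [8, 18, 32, 22, 11, 9]   # hypothetical tallies from 100 rolls
n = sum(counts)
for face, x in enumerate(counts, start=1):
    lo, hi = wilson_ci(x, n)
    print(face, round(lo, 3), round(hi, 3))
```

<p>Note the bounds can never leave [0, 1], which is exactly the property the normal-approximation interval lacks.</p>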
<p>This brings me to my question: <strong>What are the advantages and disadvantages of using any of these approaches for calculating the Confidence Interval for parameter estimates in the Multinomial Distribution?</strong></p>
<p>It seems like many of these methods are approximations - but I am willing to guess that perhaps some of these approximate methods might have better properties than others. As an example:</p>
<ul>
<li>Perhaps some of these methods might take longer to calculate in terms of computational power for more complex functions and larger sample sizes?</li>
<li>Perhaps some of these methods might be less suitable for smaller sample sizes?</li>
<li>Perhaps some of these methods are known to "chronically" overestimate or underestimate the confidence intervals?</li>
<li>Perhaps some of these methods are simply "weaker" - i.e. the guarantee of the true parameter estimate lying between predicted ranges is not "as strong a guarantee"?</li>
</ul>
<p>In any case, I would be interested in hearing about opinions on this matter - and in general, learning about <strong>which approaches might be generally more suitable for evaluating the Confidence Intervals on parameter estimates for the Multinomial Distribution.</strong></p>
<p>Thanks!</p>
<p>Note: Or perhaps all these differences in real life applications might be negligible and they are all equally suitable?</p>
|
<p><em>See also link in @jbowman comment, method 5.</em></p>
<p>Are you willing to entertain a Bayesian approach? If so, you could specify a uniform prior on the five-dimensional surface <span class="math-container">$\sum\limits_{x=1}^{6} p_x = 1$</span>, sample from the posterior distribution, and identify the <span class="math-container">$100 (1-\alpha)$</span>% region with the highest posterior density.</p>
<hr />
<p><em>Algorithm sketch</em></p>
<p>loop:</p>
<ul>
<li>Gibbs sample parameter vector <span class="math-container">$\theta_i$</span> from prior surface.</li>
<li>Compute likelihood <span class="math-container">$w_i \propto \mathcal{L}(\theta_i | X)$</span>.</li>
<li>Accumulate sums
<ul>
<li>weight <span class="math-container">$\sum w_i$</span></li>
<li>weighted vectors <span class="math-container">$\sum w_i \theta_i$</span></li>
<li>weighted squared vectors <span class="math-container">$\sum w_i \theta_i \theta_i^\textrm{T}$</span></li>
</ul>
</li>
</ul>
<p>output:</p>
<ul>
<li>multivariate posterior sample mean and variance</li>
</ul>
<hr />
<pre><code>library(dplyr)
# Set the probabilities for each number (pretend this is unknown in real life)
probs <- c(0.1, 0.2, 0.3, 0.2, 0.1, 0.1)
# Generate 100 random observations
observations <- sample(1:6, size = 100, replace = TRUE, prob = probs)
x <- tibble(value=observations) %>% count(value) %>% rename(x=n) %>% pull(x)
print(x)
## [1] 8 18 32 22 11 9
theta = rep(1 / 6, 6) # starting point for Gibbs sampler
sum_wt <- 0
sum_wt_x <- rep(0,6)
sum_wt_xxt <- matrix(0,nrow=6,ncol=6)
for (i in 1:1e5) {
# Gibbs sampler - reallocate between two elements of vector
idx <- sample(1:6, 2)
w <- sum(theta[idx])
u <- runif(1)
theta[idx] <- c(u * w, (1 - u) * w)
# weight sampled theta proportional to likelihood (ignore constant)
wt <- exp((log(theta) %*% x))[1]
sum_wt <- sum_wt + wt
sum_wt_x <- sum_wt_x + wt*theta
sum_wt_xxt <- sum_wt_xxt + wt* (theta %*% t(theta))
}
mu <- sum_wt_x / sum_wt[1]
sigma <- (sum_wt_xxt/sum_wt[1] - mu %*% t(mu))
print(mu)
## [1] 0.08614837 0.18522926 0.31384024 0.21021317 0.11096186 0.09360710
print(sigma,digits=5)
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 6.5268e-04 -0.00011527 -0.00022615 -0.00015234 -9.4170e-05 -6.4751e-05
## [2,] -1.1527e-04 0.00155075 -0.00063112 -0.00044187 -2.0724e-04 -1.5525e-04
## [3,] -2.2615e-04 -0.00063112 0.00201363 -0.00062085 -2.1323e-04 -3.2228e-04
## [4,] -1.5234e-04 -0.00044187 -0.00062085 0.00160344 -2.2775e-04 -1.6063e-04
## [5,] -9.4170e-05 -0.00020724 -0.00021323 -0.00022775 8.0827e-04 -6.5879e-05
## [6,] -6.4751e-05 -0.00015525 -0.00032228 -0.00016063 -6.5879e-05 7.6879e-04
</code></pre>
<hr />
<p><code>mu</code> is the posterior mean, <code>sigma</code> is the posterior covariance matrix. Note off diagonal entries of <code>sigma</code> are negative. This is expected. A positive error in one element of <code>mu</code> is offset by negative errors in the other elements of <code>mu</code>.</p>
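<p>One remark on the sketch above: a uniform prior on the simplex is Dirichlet(1, ..., 1), so by conjugacy the posterior is Dirichlet(1 + x) in closed form and can be sampled directly, without Gibbs steps. A minimal Python version (counts taken from the printed <code>x</code> above):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([8, 18, 32, 22, 11, 9])       # observed counts from the answer

# Uniform prior on the simplex = Dirichlet(1,...,1); posterior = Dirichlet(1 + x).
draws = rng.dirichlet(1 + x, size=100_000)

post_mean = draws.mean(axis=0)              # close to (1 + x) / (6 + n)
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
print(np.round(post_mean, 3))
print(np.round(lo, 3))
print(np.round(hi, 3))
```

<p>The percentile rows give equal-tailed marginal credible intervals for each <span class="math-container">$p_x$</span>, which is often what is wanted in place of the full covariance matrix.</p>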
| 675
|
confidence intervals
|
Confidence Intervals and Probability Relationship
|
https://stats.stackexchange.com/questions/340257/confidence-intervals-and-probability-relationship
|
<p>Suppose that 20 students visit a farmer's market and each pick (a random sample of) 25 oranges, weigh them, then create a 95% confidence interval for the true mean weight of an orange at the market. What is the probability that 5 of these intervals contain the true
mean weight of an orange at the market and the rest don't?</p>
<p>According to <a href="https://stats.stackexchange.com/questions/25992/probability-that-multiple-confidence-intervals-contain-the-true-population-mean">Probability that multiple confidence intervals contain the true population mean</a>,
according to a comment, the answer should be $\binom{20}{5}(0.95)^5(0.05)^{15}$. However, according to the answer at the end, we can't just say that the probability of finding the true mean inside an interval is $1 - \alpha$. </p>
<p>I'm not sure which of these statements are right.</p>
|
<p>I think the confusion comes from how we interpret frequentist confidence intervals. There is NOT a 95% probability that the true mean lies in the interval. In a frequentist approach, the mean is fixed and not random. The interval is the random aspect. For a given interval, the true mean is either in the interval or not, there is no probability involved. However, the intervals are constructed in such a way that at least 95% of them contain the true mean. </p>
<p>That being said, I think it's fair to say if $\theta$ is our fixed true mean and $I(data)$ is a random 95% confidence interval (which depends on the data), then $P(\theta \in I) = 0.95$. Then the random variable "Exactly X out of 20 95% confidence intervals contain the true mean" can be viewed as a Binomial distribution with parameters (20, 0.95). Therefore $P(X = 5 | n=20, p=0.95) = \binom{20}{5}0.95^5 0.05^{15}$ as you originally stated.</p>
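<p>A quick numeric check of that expression (in Python; the value is tiny because having 15 of 20 intervals miss the mean is extremely unlikely at 95% coverage):</p>

```python
from math import comb

p = 0.95  # probability a single 95% CI contains the true mean
# P(exactly 5 of 20 intervals contain the mean), Binomial(20, 0.95):
prob = comb(20, 5) * p**5 * (1 - p)**15
print(prob)  # on the order of 1e-16
```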
| 676
|
confidence intervals
|
ARIMA forecast confidence intervals
|
https://stats.stackexchange.com/questions/431467/arima-forecast-confidence-intervals
|
<p>Can someone explain how confidence intervals for ARIMA forecasts are derived? I can't seem to find any good explanation of it. From what I've read it seems like because an ARIMA process can be expressed as an infinite valued MA process then the forecast values are normally distributed. If this is true then how do you derive the standard deviation of the forecast? </p>
|
<p>For reference, let your model be:</p>
<p><span class="math-container">$$X_t=\phi X_{t-1} + \epsilon_t$$</span></p>
<p>Say you have data from 1 to <span class="math-container">$T$</span>. So your point forecast for <span class="math-container">$X_{T+1}$</span> would be <span class="math-container">$$E(X_{T+1}|\{X_t\}_{t=1}^T) = \phi X_T$$</span> and the confidence interval of <span class="math-container">$X_{T+1}$</span> would depend on the distribution you assume for <span class="math-container">$\epsilon_T$</span>. So forecast follows normal if you assume so for the error term.</p>
<p>There is another important point to be made here. If you are modelling your data, then you actually don't know the model parameter, <span class="math-container">$\phi$</span>. So you estimate <span class="math-container">$\phi$</span> by some method (OLS, MLE, etc.). Based on this <em>estimate</em> you have a point forecast which is <span class="math-container">$\hat{E}(X_{T+1}|\{X_t\}_{t=1}^T)$</span>, i.e., you have an estimate of <span class="math-container">$E(X_{T+1}|\{X_t\}_{t=1}^T)$</span> and so another layer of uncertainty in your forecast. In general the parameter estimates are asymptotically normal, so this is another place where normality comes into the picture. Note, however, that here we define the confidence interval of the forecast as that coming from uncertainty in the parameter estimate. In contrast, the prediction interval of the forecast is the one that also includes the uncertainty from <span class="math-container">$\epsilon$</span>.</p>
<p>For excellent discussion on this see <a href="https://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals">here</a>.</p>
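<p>To make the MA(&#8734;) point concrete for the AR(1) above: the <span class="math-container">$h$</span>-step forecast error is <span class="math-container">$\sum_{j=0}^{h-1}\phi^j \epsilon_{T+h-j}$</span>, so its variance is <span class="math-container">$\sigma^2\sum_{j=0}^{h-1}\phi^{2j}$</span>. A small check in Python (the parameter values are arbitrary):</p>

```python
phi, sigma2, h = 0.7, 1.0, 5   # assumed AR(1) coefficient, noise variance, horizon

# Variance of the h-step forecast error, term by term:
var_sum = sigma2 * sum(phi**(2 * j) for j in range(h))
# Same thing via the geometric-series closed form:
var_closed = sigma2 * (1 - phi**(2 * h)) / (1 - phi**2)
print(var_sum, var_closed)
```

<p>The square root of this variance is the forecast standard deviation used for the normal interval; as <span class="math-container">$h \to \infty$</span> it converges to the unconditional standard deviation of the process.</p>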
| 677
|
confidence intervals
|
Confidence intervals for beta regression
|
https://stats.stackexchange.com/questions/234254/confidence-intervals-for-beta-regression
|
<p>I have used the betareg package in R to fit a regression. My question is: how do I calculate confidence intervals for betaregression in R? </p>
|
<p>The beta likelihood is not a regular exponential family, so constructing interval estimates for such two parameter families is not easily done. I think Zeileis was wise not to implement any de-facto methods for <code>confint</code>. The cited article Ospina suggests that bootstrap interval estimates perform best. The package <code>boot</code> has some methods, but bootstrapping is also easily done "by-hand" in R.</p>
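<p>To illustrate the "by hand" route: a generic percentile bootstrap refits the model on resampled rows and takes quantiles of the refitted coefficients. The sketch below is in Python with OLS standing in for the beta-regression fit (purely illustrative; in R you would call <code>betareg()</code> inside the loop instead):</p>

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(y, X, fit, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the coefficients returned by fit(y, X)."""
    n = len(y)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample rows with replacement
        stats.append(fit(y[idx], X[idx]))     # refit on the resample
    stats = np.array(stats)
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)

# Toy example; a real application would swap in the beta-regression fitter:
X = np.column_stack([np.ones(100), np.linspace(0, 1, 100)])
y = X @ np.array([0.2, 0.5]) + rng.normal(0, 0.05, 100)
ols = lambda y, X: np.linalg.lstsq(X, y, rcond=None)[0]
lo, hi = bootstrap_ci(y, X, ols)
print(lo, hi)
```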
| 678
|
confidence intervals
|
Alternatives calculus for proportions confidence intervals
|
https://stats.stackexchange.com/questions/184908/alternatives-calculus-for-proportions-confidence-intervals
|
<p>I have two related questions:</p>
<p>a) Is there any other way to calculate a confidence interval for the proportion, in addition to the "classical" form?</p>
<p><a href="https://i.sstatic.net/egPJM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/egPJM.png" alt="enter image description here"></a></p>
<p>b) Can we apply this "classical" method when the proportion was calculated from a weighted data base?</p>
<p>I will put in context these questions so you can better understand me.</p>
<p>Reviewing the methodology applied on a survey (<a href="http://fra.europa.eu/sites/default/files/fra-2014-vaw-survey-technical-report-1_en.pdf" rel="nofollow noreferrer">http://fra.europa.eu/sites/default/files/fra-2014-vaw-survey-technical-report-1_en.pdf</a>) I found a table showing proportions and their confidence intervals. I calculated the confidence intervals by myself using the "classical" formula, because they looked excessively wide, and they didn't match. The only explanation I can find is that these proportions have been calculated from a weighted base data. But I'm not convinced with this self-explanation, since the "classical" formula takes into account the size of the sample, and that has not changed with the weighting.</p>
<p>Why don't my confidence intervals match those shown in the table? This question is what led me to ask the two questions above.</p>
<p>Can please anybody help me?</p>
|
<p>One explanation could be that the survey uses clusters and weights, so effectively your n is smaller because of the correlation of people within a cluster. That is, people within a group are similar, so not independent. If the effective n is smaller, then your confidence intervals are wider.</p>
<p>One good thing about the survey is that it has a fairly large sample size, as it is at the fringes (i.e. proportions near 0% or 100%) that problems arise with confidence intervals.</p>
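<p>To see how much clustering can matter, the usual back-of-the-envelope correction divides n by the design effect 1 + (m &#8722; 1)&#961;, where m is the average cluster size and &#961; the intra-cluster correlation. The values below are made up for illustration:</p>

```python
import math

n = 1000          # nominal sample size
m = 20            # assumed average cluster size
rho = 0.05        # assumed intra-cluster correlation

deff = 1 + (m - 1) * rho          # design effect
n_eff = n / deff                   # effective sample size

p = 0.30
se_srs = math.sqrt(p * (1 - p) / n)        # "classical" SE, ignoring clustering
se_clust = math.sqrt(p * (1 - p) / n_eff)  # SE using the effective n
print(deff, round(se_srs, 4), round(se_clust, 4))
```

<p>Here the clustered SE is about 40% larger, which would widen the published intervals relative to the "classical" calculation even though the nominal n is unchanged.</p>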
| 679
|
confidence intervals
|
Confidence intervals for cross-validated statistics
|
https://stats.stackexchange.com/questions/69831/confidence-intervals-for-cross-validated-statistics
|
<p>How does one calculate confidence intervals of cross-validated estimates?</p>
<p>For an epidemiological paper we use cat. and cont. NRI, IDI, and difference in C index for comparison of two Cox models. The reviewer suggested showing only cross-validated estimates <strong>and their 95% confidence intervals</strong>.</p>
<p>My ideas include taking the appropriate quantiles of the CV resamples, calculating the SE of those resamples and constructing Wald intervals, or bootstrapping the CI of the resamples' mean or median. But somehow these all seem phony.</p>
|
<p>For our <a href="http://www.sciencedirect.com/science/article/pii/S0377221711009064" rel="noreferrer">credit risk paper</a> on predicting loan defaults, a reviewer also suggested we produce confidence intervals for cross validation estimates and in particular recommended bootstrapping of the resampled mean.</p>
<p>Bootstrapped CIs were produced for risk ranking measures including the AUC, <em>H</em>-measure and the Kolmogorov-Smirnov (K-S) statistic. They were used to compare discrimination performance of two survival models - Mixture Cure, Cox with logistic regression.</p>
<p>It would be interesting to learn of other approaches to such CIs.</p>
<p>Tong, E.N.C., Mues, C. & Thomas, L.C. (2012) Mixture cure models in credit scoring: If and when borrowers default. European Journal of Operational Research, 218, (1), 132-139.</p>
| 680
|
confidence intervals
|
Confidence intervals - how to interpret and report
|
https://stats.stackexchange.com/questions/303970/confidence-intervals-how-to-interpret-and-report
|
<p>Just to state that I don't belong to the statistics field or anything related (I am a medical scientist). I've been trying to understand more about confidence intervals and how to interpret them for biological studies (in the case of biological experiments). I was advised to report confidence intervals in order to reinforce my statistical tests and consequently the results achieved. Could anyone help me with a reference or a brief explanation of how to apply confidence intervals in biological experiments and how results can be interpreted based on them?</p>
<p>Thanks a lot</p>
<p>Julian</p>
| 681
|
|
confidence intervals
|
Combining multiple confidence intervals
|
https://stats.stackexchange.com/questions/555302/combining-multiple-confidence-intervals
|
<p>Suppose that we have 10 90% confidence intervals resulting from 10 large samples showing the percentage of 3rd-graders who don't know how to sum. Means, sample sizes and other statistical data are not given.</p>
<p>(1.10, 1.12)
(1.01, 1.04)
(1.01, 1.15)
(1.11, 1.12)
(1.03, 1.04)
(1.04, 1.07)
(1.05, 1.20)
(1.08, 1.17)
(1.09, 1.12)
(1.13, 1.25)</p>
<p>Where does the true percentage lie in the population?</p>
|
<p>One possible approach:</p>
<p>These ten CIs may be of the form <span class="math-container">$\hat p \pm 1.645\sqrt{\frac{\hat p(1-\hat p)}{n}},$</span> where <span class="math-container">$\hat p = x/n$</span> for <span class="math-container">$x$</span> arithmetic-deficient students out of <span class="math-container">$n.$</span>
Taking 'percentages' into account, the first CI is <span class="math-container">$(0.0110, 0.0112).$</span>
That allows you to solve (approximately) for <span class="math-container">$x$</span> and <span class="math-container">$n$</span> of the first of ten groups.</p>
<p>Combine the ten <span class="math-container">$x$</span>'s and <span class="math-container">$n$</span>'s, assuming no overlapping of groups.
Then make a 90% CI for the combined data to estimate the true population
proportion of such students, and convert it to a percentage.</p>
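<p>A sketch of the back-solving step for the first interval (in Python; z = 1.645 for a 90% interval, and the normal-approximation form of the CI is assumed):</p>

```python
import math

z = 1.645   # two-sided 90% critical value

def recover_x_n(lo, hi):
    """Back out approximate (x, n) from a normal-approximation CI for a proportion."""
    p_hat = (lo + hi) / 2              # point estimate sits at the centre
    half = (hi - lo) / 2               # half-width = z * sqrt(p(1-p)/n)
    n = z**2 * p_hat * (1 - p_hat) / half**2
    return p_hat * n, n                # x = p_hat * n

# First interval, converted from percentages to proportions:
x, n = recover_x_n(0.0110, 0.0112)
print(round(x), round(n))
```

<p>The very narrow width (0.0002 on the proportion scale) implies an enormous implied n, which is a useful sanity check on whether the intervals really came from this formula.</p>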
| 682
|
confidence intervals
|
confidence intervals in linear regression
|
https://stats.stackexchange.com/questions/263115/confidence-intervals-in-linear-regression
|
<p>I am trying to understand the confidence interval for linear regression parameters. At this link <a href="https://stats.stackexchange.com/questions/88461/derive-variance-of-regression-coefficient-in-simple-linear-regression">Derive Variance of regression coefficient in simple linear regression</a> an answer is provided. However, I did not fully understand the derivation. Do we consider the regression parameters as random variables? </p>
<p>That is what are the random variables in the following regression:
$$
y=\beta_1 x+\beta_0 + \epsilon
$$
From what I understand is that just the $\epsilon$ is random variable with mean 0 and variance $\sigma^2$. Considering $\beta_1, x, \beta_0$ as constants $y$ also becomes random variable with mean $\beta_1 x+\beta_0$ and variance $\sigma^2$, from which confidence interval can be calculated.</p>
<p>However,for regression coefficients confidence intervals are also calculated. Can you please clarify the random variables in regression please?</p>
<p>Thanks.</p>
|
<p>I am from a different domain, and use somewhat different language, but maybe this will help.</p>
<p>Imagine doing an experiment. $x$ is a set of given values, an "independent variable". Not random. For each of these values you measure a dependent variable, $y$. Presumably, $y$ depends on $x$ in a deterministic (non-random) way, but your measurements are also affected by some noise, $\epsilon$. Therefore, $y$ is a random variable.</p>
<p>When you calculate coefficients of linear regression, $\hat{\beta}_0$ and $\hat{\beta}_1$ (which are estimates of "true", non-random $\beta_0$ and $\beta_1$), their values depend on $y$, and therefore they are also random. If you repeat your experiment once again, for the same $x$ values, you will get (slightly, or not slightly) different $y$, and then will calculate different $\hat{\beta}_0$ and $\hat{\beta}_1$.</p>
<p>Finally, note that $\hat{\beta}_0$ and $\hat{\beta}_1$ are expressed in terms of $y$ <em>linearly</em>. Therefore, variances of $\hat{\beta}_0$ and $\hat{\beta}_1$ are proportional to $var(y) = var(\epsilon)$.</p>
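<p>A small simulation makes the randomness of <span class="math-container">$\hat{\beta}_1$</span> visible: keep <span class="math-container">$x$</span> fixed, redraw the noise, refit, and compare the spread of the slope estimates to the theoretical variance <span class="math-container">$\sigma^2/\sum_i(x_i-\bar{x})^2$</span>. The sketch is in Python with arbitrary parameter values:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)            # fixed design: x is NOT random
beta0, beta1, sigma = 2.0, 0.5, 1.0   # true (non-random) parameters

slopes = []
for _ in range(2000):                  # re-run the "experiment" many times
    y = beta0 + beta1 * x + rng.normal(0, sigma, size=x.size)
    b1, b0 = np.polyfit(x, y, 1)       # OLS estimates for this realization
    slopes.append(b1)

# Theoretical Var(beta1_hat) = sigma^2 / sum((x - xbar)^2)
theory = sigma**2 / np.sum((x - x.mean())**2)
print(np.var(slopes), theory)
```

<p>The empirical variance of the 2000 slope estimates matches the formula closely, which is exactly the quantity the confidence interval for <span class="math-container">$\hat{\beta}_1$</span> is built from.</p>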
| 683
|
confidence intervals
|
Are confidence intervals useful for fitting data?
|
https://stats.stackexchange.com/questions/442521/are-confidence-intervals-useful-for-fitting-data
|
<p>I recently directed to a very good explanation on the difference between an error band and a confidence intervals, <a href="https://stats.stackexchange.com/questions/217374/real-meaning-of-confidence-ellipse/217377#217377">here</a>.</p>
<p>My question arose from the context of using error bars/bands or confidence intervals as a means of weighting a fit of some data.</p>
<p>Given that I now more clearly understand the difference between an error bar and a confidence interval, my question is:</p>
<p>If one understands the error distribution of some data well enough, can this knowledge be used to generate a confidence interval and in turn can this confidence interval be used to weight a fit for that data set?</p>
<p>Does it even make sense to think/use a confidence interval in this way, or is a confidence interval better used for describing the <strong>result</strong> of a fit?</p>
|
<p>It depends on what the error estimates represent: errors in the measurements going into the model, or expected errors in predictions from a fitted model. For terminology, it's simplest to discuss in terms of the error variance estimates (which for a given study size bear a one-to-one relationship with the confidence-interval widths). </p>
<p>Standard linear regression, for example, assumes that the error variance is constant. If you have examined the distribution of measured outcome values and find that error variance actually depends on the values being measured, say with error increasing as the values increase, then you could consider a <a href="https://en.wikipedia.org/wiki/Weighted_least_squares" rel="nofollow noreferrer">weighted linear regression</a> to take that into account. In that case, you could be properly using information about observation variances to weight a model.</p>
<p>If you fit a standard linear regression, however, the estimated variance of <em>predictions from</em> the model are not constant across the range of predicted values even if the variances of observations going into the model and estimated variances of observed minus predicted values are constant. See this <a href="https://en.wikipedia.org/wiki/Mean_and_predicted_response#Predicted_response" rel="nofollow noreferrer">Wikipedia page</a> for the formula for a one-predictor model. As the values of the predictor variable get farther from the mean of the original predictor-variable values, the variance of the <em>predictions from</em> the model necessarily increase. In that case the associated confidence intervals are best used to describe the result of the model; it would be inappropriate to use the variances of such predictions from the model to weight the results for a revised regression.</p>
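<p>The one-predictor formula from the linked Wikipedia page can be checked directly; the design points and error variance below are invented for illustration:</p>

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5., 6., 7., 8.])  # toy predictor values
sigma2 = 0.25                                    # assumed error variance
xbar = x.mean()
sxx = np.sum((x - xbar)**2)
n = x.size

def var_mean_prediction(x0):
    """Variance of the fitted mean at x0 for simple linear regression:
    sigma^2 * (1/n + (x0 - xbar)^2 / Sxx)."""
    return sigma2 * (1 / n + (x0 - xbar)**2 / sxx)

# The variance grows as x0 moves away from the mean of the design points:
print(var_mean_prediction(xbar), var_mean_prediction(xbar + 3))
```

<p>This non-constant variance of predictions is why those confidence intervals describe the fitted model and should not be recycled as observation weights for a second fit.</p>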
| 684
|
confidence intervals
|
Question about confidence intervals and prediction intervals
|
https://stats.stackexchange.com/questions/517730/question-about-confidence-intervals-and-prediction-intervals
|
<p>Considering following linear multiple regression model:
<span class="math-container">\begin{equation}
y=X\beta + e,
\end{equation}</span>
where observations <span class="math-container">$y\in\Re^n$</span>, coefficients <span class="math-container">$\beta\in\Re^p$</span> and <span class="math-container">$e\sim N(0,\sigma^2 I)$</span> is a white Gaussian noise term.</p>
<p>By the definition of the confidence interval:
<span class="math-container">\begin{equation}
\hat{y} \pm t_{n-p}^{\alpha/2} \times \mathrm{SE}_\bar{y}
\end{equation}</span>
where <span class="math-container">$t_{n-p}^{\alpha/2}$</span> is the critical value, and standard error
<span class="math-container">\begin{equation}
\mathrm{SE}_\bar{y} = \frac{\sigma_y}{\sqrt{n}}
\end{equation}</span>
I understand that
<span class="math-container">\begin{equation}
\mathrm{Var}(\hat{\beta})=\hat{\sigma}^2 (X^\top X)^{-1} \qquad\text{and} \qquad \mathrm{Var}(\hat{y}_{n+1})=\mathrm{Var}(x_{n+1}\hat{\beta})=\hat{\sigma}^2x_{n+1}^\top(X^\top X)^{-1}x_{n+1}
\end{equation}</span>
My question is that why in post:<a href="https://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals">Difference between confidence intervals and prediction intervals</a>
and
<a href="https://stats.stackexchange.com/questions/9131/obtaining-a-formula-for-prediction-limits-in-a-linear-model-i-e-prediction-in">Obtaining a formula for prediction limits in a linear model (i.e.: prediction intervals)</a>
they use square root of Variance
<span class="math-container">\begin{equation}
\hat{y}_{n+1} \pm t_{n-p}^{\alpha/2} \times \sqrt{\mathrm{Var}(\hat{y}_{n+1})}
\end{equation}</span>
rather than the standard error?? Where is the <span class="math-container">$\frac{1}{\sqrt{n}}$</span>?</p>
| 685
|
|
confidence intervals
|
Cox multi-state model - CIF confidence intervals
|
https://stats.stackexchange.com/questions/638037/cox-multi-state-model-cif-confidence-intervals
|
<p>Is it possible to get confidence intervals for the estimated CIF from a <code>survival::coxph</code> multi-state model?</p>
<p>This code produces the cumulative incidence function, but there are no options to get confidence intervals:</p>
<pre><code>library(survival)
# 0 = Censored, 1 = Relapse, 2 = Death
rotterdam$status = ifelse(rotterdam$recur == 1, 1,
                          ifelse(rotterdam$death == 1, 2, 0))
rotterdam$time = pmin(rotterdam$rtime, rotterdam$dtime)
# Competing risks Cox model
m = coxph(Surv(time, as.factor(status)) ~ hormon, data = rotterdam, id = pid)
# Plot CIF for people without hormone replacement therapy
s = survfit(m, newdata = data.frame(hormon = c(0)))
plot(s, col = c('red', 'blue'), xlab = 'Time since diagnosis (days)',
ylab = 'CIF') # Plots s$pstate[, 1, ] against time
legend(x = 0, y = 0.58, legend = c('Relapse', 'Relapse free death'),
col = c('red', 'blue'), lty = 1)
</code></pre>
<p>Essentially I'm after adding confidence intervals to the lines in the plots.</p>
<p>I'm happy to use a different package if needed, but it needs to be a Cox model & not a Fine-Grey model.</p>
<p>Thanks in advance!</p>
|
<p>I am not familiar with this stuff, so I'm really not sure of what I do below. See <a href="https://cran.r-project.org/web/packages/survival/vignettes/compete.pdf" rel="nofollow noreferrer">the competing risks vignette</a> and <code>?finegray</code>. Below I follow the <code>?finegray</code> example.</p>
<pre class="lang-r prettyprint-override"><code>etime <- with(rotterdam, ifelse(recur == 0, dtime, rtime))
event <- with(rotterdam, ifelse(recur == 0, 2*death, 1))
event <- factor(event, 0:2, labels = c("censor", "recur", "death"))
pdata <- finegray(Surv(etime, event) ~ ., data = rotterdam)
fgfit <- coxph(Surv(fgstart, fgstop, fgstatus) ~ hormon,
weight = fgwt, data = pdata)
sf <- survfit(
fgfit,
newdata = data.frame(hormon = c(0, 1)),
se.fit = TRUE,
conf.int = 0.95
)
summary_sf <- summary(sf)
# confidence bounds:
summary_sf$lower
summary_sf$upper
</code></pre>
| 686
|
confidence intervals
|
Forecast confidence intervals from multiple realizations
|
https://stats.stackexchange.com/questions/477167/forecast-confidence-intervals-from-multiple-realizations
|
<p>I have a forecast which involves sampling a probability distribution and therefore each time I run the forecast there is some random variation between results. If I run the forecast many times, how do I compute the expected forecast, 5% and 95% confidence intervals using the ensemble of results?</p>
<p>Two options I have tried</p>
<p>(1) At each time step compute the 0.5, 0.025 and 0.975th quantiles across all forecasts.</p>
<p>(2) Take the sum over all time steps and use the forecasts where this sum corresponds to 0.5, 0.025, 0.975th quantile of all sums?</p>
<p>I am pretty sure both methods are incorrect.</p>
<p>The first because it involves choosing from each forecast at each time step. Each forecast is an independent realization and so it feels like I should be considering each forecast independently. In any case, the confidence intervals I get when I use this method are very wide, much wider than the max variation in the individual forecasts.</p>
<p>The second option also seems incorrect. When I use this method the confidence intervals may cross. Furthermore, who is to say the forecast I choose to represent the ith confidence interval will still represent the ith confidence interval when I run the ith+1 time step.</p>
<p>Hoping someone can explain the flaws in my logic and help me figure out the correct procedure.</p>
|
<p>If I have understand the question correctly, this is a case of <a href="https://en.wikipedia.org/wiki/Ensemble_forecasting" rel="nofollow noreferrer">Ensemble Forecasting</a>, and I believe the goal is to find <em>Prediction</em> rather than <em>Confidence</em> Intervals. Given a set of <span class="math-container">$n$</span> predictions <span class="math-container">$\hat{y}_{1, t+h}, \hat{y}_{2, t+h}, .., \hat{y}_{n, t+h}$</span> for <span class="math-container">$h$</span> timesteps ahead:</p>
<ul>
<li>Expected forecast: average of the set of predictions</li>
<li>50% Prediction Interval: median of the predictions</li>
<li>95% Prediction Interval: Find the 2.5% and 97.5% quantiles of the inverse empirical cumulative distribution of <span class="math-container">$\hat{y}_{i, t+h}$</span>, there is 95% chance that the prediction falls in this interval.</li>
</ul>
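<p>A minimal sketch of the quantile computation described above (in Python rather than R, with a made-up Gaussian ensemble standing in for the forecast runs):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy ensemble: 200 independent forecast runs over a 12-step horizon
runs = rng.normal(loc=np.arange(12), scale=1.0, size=(200, 12))

expected = runs.mean(axis=0)               # pointwise expected forecast
lower = np.quantile(runs, 0.025, axis=0)   # pointwise 2.5% quantile
upper = np.quantile(runs, 0.975, axis=0)   # pointwise 97.5% quantile
```

<p>Each time step is summarised across ensemble members, so the band is pointwise rather than a joint band for whole trajectories.</p>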
| 687
|
confidence intervals
|
Why do we need confidence intervals?
|
https://stats.stackexchange.com/questions/143772/why-do-we-need-confidence-intervals
|
<p>I am following a video lecture on Statistics, which introduces the concept of confidence intervals in the following way:</p>
<p>"A bank vice president is interested in the average checking account balance for all personal accounts. A random sample of 500 accounts is selected, and the average is calculated. What level of "confidence" for the mean will the VP be satisfied with?"</p>
<p>What I don't understand is, why are confidence intervals even necessary? I learned previously that the mean of the sampling distribution of the sample mean is exactly equal to the population mean. So, wouldn't it be better for the VP to do that i.e. take many samples and calculate the mean of the sampling distribution of the sample mean (and thus get the exact population mean), instead of just using one sample to calculate confidence intervals (which will not give you the exact population mean)?</p>
<p>Thanks.</p>
<p>(Question source: <a href="https://www.youtube.com/watch?v=9GtaIHFuEZU" rel="nofollow">https://www.youtube.com/watch?v=9GtaIHFuEZU</a>)</p>
|
<p>You forget that accuracy comes at the cost of effort. He'd need to gather all the data from thousands of accounts. And what if five accounts hold billions while the rest are in the hundreds? The confidence interval is a faster way to get a reasonable answer in a reasonable amount of time.
The value of a confidence interval is even more apparent with surveys of a city or country: you can't put a gun to everyone's head and force them to complete the survey.</p>
| 688
|
confidence intervals
|
Confidence intervals for group means (R)
|
https://stats.stackexchange.com/questions/210515/confidence-intervals-for-group-means-r
|
<p>Here are some sample data in R:</p>
<pre><code>set.seed(42)
df <- data.frame(g = factor(rep(1:2, each= 50)), y = rnorm(100)+rep(0:1, each=50))
</code></pre>
<p>One can easily get group means using e.g. <code>with(df, tapply(y,g,mean))</code> but there is no such easy way to get the confidence intervals for group means. This is why I tried:</p>
<pre><code>lm(y ~ g-1, df)
# correct group means:
# -0.03567 1.10070
confint(lm(y~g-1, df))
# too narrow CI's
# 2.5 % 97.5 %
# g1 -0.3287751 0.2574315
# g2 0.8075981 1.3938047
</code></pre>
<p>That is, one can estimate the group means using a linear model with dummy group indicators, omitting the overall intercept. But the confidence intervals of these regression parameters are narrower than the confidence intervals of group means. The latter could be found group by group with the same function:</p>
<pre><code>confint(lm(y~1, data=df, subset=g==1))
# 2.5 % 97.5 %
# (Intercept) -0.3629177 0.2915742
</code></pre>
<p>Or a manual check using textbook formulae:</p>
<pre><code>ci.mean <- function(x, alfa=0.05){
n <- length(x)
a<-qt(1-alfa/2, n-1)
m<-mean(x); s<-sd(x)
se<-s/sqrt(n)
res <- c(m, m-a*se,m+a*se)
setNames(res, c("mean", paste(100*alfa/2, "%"), paste(100*(1-alfa/2), "%")))
}
ci.mean(with(df, y[g==1]))
# mean 2.5 % 97.5 %
# -0.03567178 -0.36291775 0.29157418
</code></pre>
<p>There is probably an easy answer to the question of why the CIs of seemingly the same parameters are different. (The answer, obviously, has to start with the difference in standard errors.) But I would be interested in the interpretation: why is it that I can trust the group means found with <code>lm(y~g-1)</code> but I can't trust the confidence intervals around those "means" found with <code>confint(lm(y~g-1))</code>? And another naive question, why is the standard error for a group mean smaller if another group is present? That is:</p>
<pre><code>coef(summary(lm(y~g-1, df)))[1,2]
# [1] 0.1476987
coef(summary(lm(y~1, df, subset=g==1)))[1,2]
# [1] 0.1628433
</code></pre>
<p>Again, I am more interested in the substantial interpretation than the formula showing why this is so. (I suppose after some sleep I could figure out the formula but would still be in trouble with interpretation).</p>
<p>Thanks in advance!</p>
|
<p>The answer to your "naive question" contains the solution to your problem.</p>
<p>In the linear model on all the data, the residual variance is estimated from all 100 data points, based on the difference of each value from its associated group mean. Thus you will note that the difference between the top and bottom CI is the same for both groups (0.5862 for your data set).</p>
<p>In the subset analysis, only the data points in the selected group are used, so one would expect a different CI range than in the pooled analysis. In this case the CI range is larger, 0.6545.</p>
<p>If you had analyzed the group 2 subset separately instead, its individual CI would have been <em>less than</em> its CI in the pooled analysis:</p>
<pre><code>confint(lm(y~1, data=df, subset=g==2))
2.5 % 97.5 %
(Intercept) 0.8378242 1.363579
</code></pre>
<p>The CI range here is only 0.5258.</p>
<p>One group analyzed individually has a narrower CI band than in pooled analysis, one has a wider band when analyzed individually. The pooling of variance estimates in the combined linear model explains your results.</p>
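<p>The pooling effect is easy to reproduce numerically. A Python sketch (with simulated two-group data standing in for the question's <code>df</code>):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
# Two groups of 50 with different spreads, as in the question's setup
g1 = rng.normal(0, 1.2, 50)
g2 = rng.normal(1, 0.8, 50)

s1 = g1.std(ddof=1)   # SD used by the subset model for group 1
s2 = g2.std(ddof=1)   # SD used by the subset model for group 2

# Pooled residual SD used by lm(y ~ g - 1): residuals from both group
# means, divided by n minus the 2 estimated parameters
resid = np.concatenate([g1 - g1.mean(), g2 - g2.mean()])
s_pooled = np.sqrt((resid ** 2).sum() / (len(resid) - 2))

se1_subset = s1 / np.sqrt(len(g1))        # SE of group-1 mean, subset fit
se1_pooled = s_pooled / np.sqrt(len(g1))  # SE of group-1 mean, pooled fit
```

<p>With equal group sizes the pooled variance is exactly the average of the two group variances, so one group's pooled SE shrinks and the other's grows relative to the subset analyses.</p>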
| 689
|
confidence intervals
|
Frequentist confidence intervals = constant trapping probability?
|
https://stats.stackexchange.com/questions/44018/frequentist-confidence-intervals-constant-trapping-probability
|
<p>In the case of estimating an unknown mean of a normal distribution with known variance, if I'm not mistaken, the confidence interval contains $\theta$ with probability $1 - \alpha$, regardless of the actual value of $\theta$. In other cases (e.g. when the variance is not necessarily constant), is it still the case that the confidence interval contains the actual value with probability $1 - \alpha$? Or is it only the case that for any value of $\theta$, the confidence interval contains the actual value with probability <strong>at least</strong> $1 - \alpha$? (which if I'm not mistaken is how confidence intervals are defined).</p>
<p>Thanks!</p>
|
<p>It depends on how strictly <em>you</em> want to define it; most, I believe, would accept that if the interval has at least 1 - alpha coverage for all parameter values, it can be taken as a confidence interval (no matter how you came up with it). There is an interesting post on this right now at <a href="http://normaldeviate.wordpress.com/2012/11/17/what-is-bayesianfrequentist-inference/" rel="nofollow">Normal Deviate</a></p>
<p>One could define it more strictly as always equal to 1 - alpha (I believe the technical term is "similar"), which is sometimes achieved by post-study randomization (or something equivalent), or by not admitting relevant subsets, etc. </p>
<p>Not a bad idea to do a simulation and plot the coverage over the parameter space. For instance, the odds ratio from a two-group experiment with binary outcomes has a very interesting plot. </p>
<p>As per comment below </p>
<blockquote>
<p>choose your CI to make the coverage 1−α for some parameters and
greater for others?</p>
</blockquote>
<ul>
<li>this is the usual case. Normaldeviate's plot was an example with just one parameter, and that is very misleading when there is more than one parameter. But there are always caveats; for instance, the whole real line, plane, etc. has 100% coverage for all parameter values. One of the more famous unresolved problems is a very simple problem to state and is discussed on wiki: <a href="http://en.wikipedia.org/wiki/Behrens%E2%80%93Fisher_problem" rel="nofollow">Behrens-Fisher</a>.</li>
</ul>
<p>Doing some simulations and plotting coverage over the full parameter space is highly recommended ;-)</p>
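<p>A sketch of such a coverage simulation in Python, for the simple Wald interval of a binomial proportion (the sample size and the grid of true values are arbitrary choices for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n, nsim = 50, 2000

def wald_coverage(p):
    """Fraction of simulated 95% Wald intervals that contain the true p."""
    x = rng.binomial(n, p, nsim)
    phat = x / n
    half = 1.96 * np.sqrt(phat * (1 - phat) / n)
    return np.mean((phat - half <= p) & (p <= phat + half))

grid = np.linspace(0.05, 0.95, 19)
coverage = np.array([wald_coverage(p) for p in grid])
# Plotting coverage against grid shows oscillation around, and dips
# below, the nominal 0.95, especially near the boundaries
```

<p>This is the kind of plot the answer alludes to: actual coverage varies over the parameter space rather than sitting at exactly 1 - alpha.</p>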
| 690
|
confidence intervals
|
Confidence intervals of bounded variable
|
https://stats.stackexchange.com/questions/249115/confidence-intervals-of-bounded-variable
|
<p>Given 1000 observations that come from a distribution that is bounded between 0 and 1. How do you calculate correct 95% Confidence intervals when dealing with a bounded distribution?</p>
<pre><code>set.seed(10)
data = runif(1000, min=0, max=1)
mean(data)
mean(data) + 1.96*sd(data)/sqrt(length(data)) # usual CIs
mean(data) - 1.96*sd(data)/sqrt(length(data)) # usual CIs
</code></pre>
|
<p>This is a later answer but perhaps may be useful to someone. I have an R package on github (<a href="https://github.com/mattelisi/mlisi" rel="nofollow noreferrer"><code>mlisi</code></a>) with a set of convenient functions, including one that calculates bootstrapped confidence intervals using the bias-corrected and accelerated method (Efron, 1987).</p>
<pre><code>> set.seed(10)
> data = runif(1000, min=0, max=1)
> library(mlisi)
> bootMeanCI(data, nsim=10^4)
2.797862% 97.7708%
0.4874827 0.5240060
</code></pre>
<p>Although the BCa method is the default, you can also use the percentile method by setting the argument 'method'</p>
<pre><code>> bootMeanCI(data, nsim=10^4, method="percentile")
2.5% 97.5%
0.4871504 0.5236511
</code></pre>
<p>You can install the package from github using <code>devtools</code></p>
<pre><code>library(devtools)
install_github("mattelisi/mlisi")
</code></pre>
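<p>If you would rather avoid a GitHub-only dependency, the percentile method is only a few lines in any language. A Python sketch (the uniform data mirrors the question's example; BCa would need extra bookkeeping):</p>

```python
import numpy as np

rng = np.random.default_rng(10)
data = rng.uniform(0, 1, 1000)

# Percentile bootstrap for the mean: resample with replacement many
# times and take the empirical quantiles of the resampled means
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.quantile(boot_means, [0.025, 0.975])
```

<p>Because the mean of a bounded variable is itself bounded, the percentile interval can never spill outside [0, 1], unlike the normal-approximation interval.</p>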
| 691
|
confidence intervals
|
Interpreting 95% confidence intervals for relative risk (known true RR, 100 samples)
|
https://stats.stackexchange.com/questions/306149/interpreting-95-confidence-intervals-for-relative-risk-known-true-rr-100-samp
|
<p>Let's assume that we are investigating how tobacco smoking is associated with incident lung cancer in a population. In the full population, the true relative risk of lung cancer associated with tobacco smoking is 2.</p>
<p>Next, we collect 100 random samples from the population. For each sample, we calculate 95% confidence intervals for the relative risk of lung cancer associated with tobacco smoking.</p>
<p>What can we expect from the distribution of confidence intervals here:</p>
<pre><code>A. 95/100 confidence intervals will include the true value RR=2
5/100 confidence intervals will not include the true value RR=2
A2. As "A". Additionally, confidence intervals can freely include RR=1
B. 95/100 confidence intervals will not include RR=1
5/100 confidence intervals will include RR=1
</code></pre>
<p>I also provided a simplified picture that presents 20 (100 would be too much) confidence intervals as vertical bars.</p>
<p><a href="https://i.sstatic.net/NWqDa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NWqDa.png" alt="enter image description here"></a></p>
<p>Edit: <a href="https://stats.stackexchange.com/questions/158570/interpretation-of-confidence-interval">This</a> post seemed to reach the consensus that "A" was true, but calculated means was used as an example instead of relative risk measures. </p>
|
<p>The answer is A2. Confidence intervals are agnostic to the "null value" of a parameter. They may freely include it or not. There may be some confusion stemming from the idea that a 95% CI will cause you to incorrectly reject H0 5% of the time; but this is only true if the true RR is the null RR (i.e., 1). This is not the case here because it was stated as a known premise that H0 is false and the true RR is not the null RR. If the true RR was 1.05, you might expect a 95% CI to include the null RR almost every time.</p>
| 692
|
confidence intervals
|
Calculate Confidence Intervals for Lognormal Distribution
|
https://stats.stackexchange.com/questions/299135/calculate-confidence-intervals-for-lognormal-distribution
|
<p>Surprisingly, I can't find a discussion on calculating confidence intervals for the mean $EY=e^{\mu+\sigma^2/2}$ of the lognormal distribution. My question goes beyond what is covered in the link below, and is specific to the package <code>EnvStats</code>.</p>
<p><a href="https://stackoverflow.com/questions/21843745/confidence-interval-for-mu-in-a-log-normal-distributions-in-r">Confidence Interval for Mu in a Log normal Distributions in R</a></p>
<p>Say I have some lognormal data:</p>
<pre><code>mydat <- data.frame(value = rlnorm(1000, meanlog = 6, sdlog = .5))
</code></pre>
<p>That looks like:
<a href="https://i.sstatic.net/30BBN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/30BBN.png" alt="enter image description here"></a></p>
<p>I use <code>EnvStats::elnormAlt</code> to estimate parameters for the lognormal distribution <code>mydat</code>. </p>
<pre><code>elnormAlt(mydat$value, method = "mvue", ci = FALSE, ci.type = "two-sided",
ci.method = "land", conf.level = 0.95)
</code></pre>
<p>And obtain:</p>
<pre><code>Results of Distribution Parameter Estimation
--------------------------------------------
Assumed Distribution: Lognormal
Estimated Parameter(s): mean = 454.7097844
cv = 0.5359667
Estimation Method: mvue
Data: mydat$value
Sample Size: 1000
</code></pre>
<p>When I change the argument <code>ci = TRUE</code>, I get the error:</p>
<pre><code>Error in integrate(density.fcn.qlands.t, -pi/2, theta, nu = nu, zeta = zeta) :
non-finite function value
</code></pre>
<p><strong>My questions are twofold:</strong> </p>
<ol>
<li>Can someone succinctly explain the mathematical meaning of <code>cv</code>?</li>
<li>What is the meaning of the error message I'm getting, and how can I calculate confidence intervals using the Land (Cox) method?</li>
</ol>
| 693
|
|
confidence intervals
|
Confidence intervals for derivatives from GAM predictions
|
https://stats.stackexchange.com/questions/503819/confidence-intervals-for-derivatives-from-gam-predictions
|
<p>I have built the following GAM model in <code>mgcv</code></p>
<pre><code>wt9 <- gam(weight_t ~
tagged +
sex_t0 +
s(age.x, by = tagged, k = 5) +
s(age.x, by = sex_t0, k = 5) +
s(scale_id, bs = "re") +
s(age.x, scale_id, bs = "re"),
data = long,
method = "REML")
</code></pre>
<p>I then made population averaged predictions from this model.</p>
<pre><code># Create new data frame to predict from
pred.dat <- data.frame(tagged = c(rep(0, 752), rep(1, 752)),
sex_t0 = c(rep("f", 376), rep("m", 376), rep("f", 376), rep("m", 376)),
age.x = c(rep(seq(9, 384, 1), 4)),
scale_id = rep(1, 1504))
# Define factors in new data frame
pred.dat$tagged <- factor(pred.dat$tagged)
pred.dat$sex_t0 <- factor(pred.dat$sex_t0)
pred.dat$scale_id <- factor(pred.dat$scale_id)
# Population averaged predictions from fitted gam wt9
preds <- predict(wt9,
newdata = pred.dat,
exclude = c("s(scale_id)",
"s(age.x,scale_id)"),
se = T)
# Combine predictions to new data frame for plotting
pred.dat <- cbind(pred.dat, fit = preds$fit)
pred.dat <- cbind(pred.dat, se.fit = preds$se.fit)
# Calculate 95% CI for predictions from predicted standard errors
pred.dat$lci <- pred.dat$fit - (1.96*pred.dat$se.fit)
pred.dat$uci <- pred.dat$fit + (1.96*pred.dat$se.fit)
# Plot predicted weight for tagged and untagged, and male and female, animals through time (+/- 95% CI)
mycolours1 <- brewer.pal(4, "Blues")[3:4]
mycolours2 <- brewer.pal(4, "Greens")[3:4]
f2a.1 <- ggplot(pred.dat, aes(x = age.x, y = fit, colour = tagged:sex_t0, fill = tagged:sex_t0)) +
geom_line(size = 1.5) +
geom_ribbon(aes(ymin = lci, ymax = uci), alpha = 0.2, colour = NA) +
scale_colour_manual(labels = c("Untagged female", "Untagged male", "Tagged female", "Tagged male"), values = c(mycolours1, mycolours2)) +
scale_fill_manual(labels = c("Untagged female", "Untagged male", "Tagged female", "Tagged male"), values = c(mycolours1, mycolours2)) +
theme_classic() +
theme(axis.title.x = element_text(face = "bold", size = 14),
axis.title.y = element_text(face = "bold", size = 14),
axis.text.x = element_text(size = 12),
axis.text.y = element_text(size = 12),
legend.text = element_text(size = 12), legend.title = element_blank()) +
xlab("Age (days)") +
ylab("Body mass (g)"); f2a.1
</code></pre>
<p><a href="https://i.sstatic.net/DPimC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DPimC.png" alt="Predictions from fitted GAM" /></a></p>
<p>I would now like to create an equivalent figure for the first derivatives of these curves.
I can create and plot the derivatives manually but am struggling to get confidence intervals. I have followed <a href="https://stackoverflow.com/questions/14207250/determining-derivatives-from-gam-smooth-object">this post</a> to obtain the first derivatives manually.</p>
<p>I understand that there is no function to automatically calculate first derivatives and confidence intervals from model predictions. The <code>derivatives()</code> function from the <code>gratia</code> package will calculate first derivatives and confidence intervals from a fitted GAM but not from model predictions, see answer <a href="https://stats.stackexchange.com/questions/495775/first-derivative-of-fitted-gam-changes-according-to-specified-model-distribution">here</a>.</p>
<pre><code>eps <- 1e-7
X0 <- predict(wt9,
newdata = pred.dat,
exclude = c("s(scale_id)",
"s(age.x,scale_id)"),
se = T,
type = 'lpmatrix')
pred.datFeps_p <- pred.dat
pred.datFeps_p$age.x <- pred.datFeps_p$age.x + eps
X1 <- predict(wt9,
newdata = pred.datFeps_p,
exclude = c("s(scale_id)",
"s(age.x,scale_id)"),
se = T,
type = 'lpmatrix')
# finite difference approximation of first derivative
# the design matrix
Xp <- (X1 - X0) / eps
# first derivative
fd_d1 <- Xp %*% coef(wt9)
test <- cbind(pred.dat, fd_d1)
ggplot(test, aes(x = age.x, y = fd_d1, colour = tagged:sex_t0, fill = tagged:sex_t0)) +
geom_line(size = 1.2) +
scale_colour_manual(labels = c("Untagged female", "Untagged male", "Tagged female", "Tagged male"), values = c(mycolours1, mycolours2))
</code></pre>
<p><a href="https://i.sstatic.net/8MkkG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MkkG.png" alt="First derivatives plot" /></a></p>
<p><strong>Q: How can I obtain confidence intervals for these first derivative curves? Noting that these are model predictions and not plotted directly from the fitted GAM.</strong></p>
|
<p>You can do posterior simulation to draw a large set of samples from the posterior distribution of the model, and then for each sample (which is one set of curves if you are predicting for all your groups over a grid of values in <code>age.x</code>) compute the derivatives and store the values.</p>
<p>This gives you a posterior distribution for the derivative of each curve for each value of <code>age.x</code>. You can then summarise those posterior distributions using some suitable quantiles (0.025 and 0.975 probability quantiles say for a 95% interval) to directly estimate the confidence interval on the derivative.</p>
<p>This is all assuming a Gaussian approximation to the posterior, which works OK a lot of the time but can fail in some others. Simon Wood has implemented INLA and a simple Metropolis Hastings sampler to also generate samples from the posterior distribution of the model, which shouldn't have the same problems that Gaussian approximation has in some situations.</p>
<p>If you want to simplify the process (currently only for Gaussian approximation) you can use <code>fitted_samples()</code> from <strong>gratia</strong> to get the posterior draws for the input data you specify. Then you can compute the derivatives from those posterior draws, etc.</p>
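<p>A schematic Python version of this posterior-simulation recipe; here <code>beta_hat</code>, <code>Vp</code> and <code>Xp</code> are random stand-ins (hypothetical values, for illustration only) for <code>coef(wt9)</code>, <code>vcov(wt9)</code> and the finite-difference <code>lpmatrix</code> already built in the question:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
p = 5                               # number of model coefficients (stand-in)
beta_hat = rng.normal(size=p)       # stand-in for coef(wt9)
A = rng.normal(size=(p, p))
Vp = A @ A.T + np.eye(p)            # stand-in for vcov(wt9): any SPD matrix
Xp = rng.normal(size=(100, p))      # stand-in for (X1 - X0) / eps

# Draw coefficient vectors from the Gaussian approximation to the posterior
betas = rng.multivariate_normal(beta_hat, Vp, size=5000)   # (5000, p)
deriv_draws = betas @ Xp.T                                  # (5000, 100)

# Pointwise 95% interval for the derivative, plus the point estimate
lower = np.quantile(deriv_draws, 0.025, axis=0)
upper = np.quantile(deriv_draws, 0.975, axis=0)
point = Xp @ beta_hat
```

<p>In the real application you would replace the stand-ins with the actual mgcv objects; the quantile step at the end is unchanged.</p>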
| 694
|
confidence intervals
|
Confidence intervals with penalized likelihood
|
https://stats.stackexchange.com/questions/314677/confidence-intervals-with-penalized-likelihood
|
<p>I am trying to perform parameter estimation using something like a maximum likelihood ratio method, however I need to add a penalty term to constrain nuisance parameters which describe certain systematic uncertainties in the measurement process. So I have been digging around in the literature to try and better understand what is known about penalized likelihood estimation, but I cannot find anything I can understand regarding the construction of confidence intervals for my parameters of interest.</p>
<p>I get that a penalty term in the likelihood is quite similar to a Bayesian prior, and indeed if I wanted to then I could easily analyse my situation in a Bayesian way and use credible intervals instead of confidence intervals, however for this instance I want my intervals to have correct frequentist coverage properties, at least asymptotically (as in the usual profile likelihood case where I could rely on Wilks' theorem).</p>
<p>So, are there theorems similar to Wilks' that work with penalized likelihoods? Or with particular kinds of penalty terms? In the literature it looks to me like people mostly do numerical studies of coverage rather than rely on theorems, does this mean that there are no known "general purpose" theorems for this?</p>
<p>Edit: Alternatively, I suppose that I would be happy to be able to treat the penalty term in a fully Bayesian way, but nevertheless, in the end, still construct frequentist confidence intervals based on whatever estimator resulted, say the maximum of the marginal likelihood (i.e. with the nuisance parameter marginalised out). Looking around I do find some literature discussing the frequency properties of Bayesian point estimates, so I guess that it is possible that somewhere in this literature exists the sort of construction that I want?</p>
|
<p>I don't think it's possible to provide meaningful confidence intervals in that case in a natural way.</p>
<p>Call $\theta$ the parameter and $\hat\theta$ the penalized likelihood estimator. Confidence intervals at 95% say that $P(|\theta-\hat\theta|\leq d|\theta)\geq0.95$ <strong>for all $\theta$</strong>. If you reframe the Bayesian idea in a black and white fashion: the reason why you need penalization is that not <strong>all $\theta$</strong> are expected to really be "possible": you focus on the ones that you consider to be "possible". </p>
<p>Outside of the "possible" zone, $\hat\theta$ is a very poor estimate. If the real $\theta$ is far from the possible zone $\hat\theta$ can be far from the real $\theta$ with high probability. For such $\theta$, $P(|\theta-\hat\theta|\leq d|\theta)\geq0.95$ is only true if $d$ is big. Since $d$ is required to not depend on $\theta$, you would just use the worst case scenario $d$ yielding useless confidence intervals. Actually, I think $d=+\infty$ if you consider $\theta$ going to infinity.</p>
<p>A way to fall back on a frequentist analysis in this case is to define a "possible" region $\Theta$ inspired from the penalization. One possibility is a region containing 99% of the weight of the prior. With $L^2$ regularized MLE for example, if this region is a ball whose radius is exactly the norm of the penalized estimate, then the frequentist MLE raw estimate is the same as the penalized one and lies on the border (sphere). With this method, you can say: if $\theta\in \Theta$ then $P(|\theta-\hat\theta|\leq d|\theta)\geq0.95$ with a meaningful $d$. It is a confidence interval with condition $\theta\in \Theta$.</p>
| 695
|
confidence intervals
|
Question about calculating confidence intervals
|
https://stats.stackexchange.com/questions/457649/question-about-calculating-confidence-intervals
|
<p>I am reading about confidence intervals and got stuck with this example from L. Wasserman's book titled "All of Statistics". Could anybody explain why P<sub>Q</sub>(θ ∈ C) = 3/4 in this example? Below is the paragraph from the book:</p>
<blockquote>
<p>Let θ be a fixed, known real number and let X<sub>1</sub>,
X<sub>2</sub> be independent random variables such that
P(X<sub>i</sub> = 1) = P(X<sub>i</sub> = -1) = 1/2. Now define
Y<sub>i</sub> = θ + X<sub>i</sub> and suppose that you only observe
Y<sub>1</sub> and Y<sub>2</sub>. Define the following interval that
actually contains only one point:</p>
</blockquote>
<p><a href="https://i.sstatic.net/PgXmT.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PgXmT.gif" alt="enter image description here"></a></p>
<blockquote>
<p>You can check that, no matter what θ is, we have P<sub>θ</sub>(θ ∈ C)
= 3/4 so this is a 75 percent confidence interval. Suppose we now do the experiment and we get Y<sub>1</sub> = 15 and Y<sub>2</sub> = 17.
Then our 75 percent confidence interval is {16}. However, we are
certain that θ = 16. If you wanted to make a probability statement
about θ you would probably say that P(θ ∈ C|Y<sub>1</sub>,
Y<sub>2</sub>) = 1. There is nothing wrong with saying that {16} is a
75 percent confidence interval. But is it not a probability statement
about θ.</p>
</blockquote>
|
<p>You can dissociate cases : </p>
<ul>
<li><p>if <span class="math-container">$X_1 \neq X_2$</span>, which happens with probability <span class="math-container">$\frac{1}{2}$</span>, then <span class="math-container">$X_1 = -X_2$</span> (since <span class="math-container">$X$</span> can only be <span class="math-container">$1$</span> or <span class="math-container">$-1$</span>) and <span class="math-container">$Y_1 \neq Y_2$</span>. So <span class="math-container">$C = \{\frac{Y_1 + Y_2}{2}\} = \{\theta\}$</span>. So <span class="math-container">$\theta \in C$</span>.</p></li>
<li><p>if <span class="math-container">$X_1 = X_2 = 1$</span>, which happens with probability <span class="math-container">$\frac{1}{4}$</span>, then <span class="math-container">$Y_1 = Y_2 = \theta + 1$</span> and <span class="math-container">$C = \{\theta\}$</span>. So <span class="math-container">$\theta \in C$</span>.</p></li>
<li><p>if <span class="math-container">$X_1 = X_2 = -1$</span>, which happens with probability <span class="math-container">$\frac{1}{4}$</span>, then <span class="math-container">$Y_1 = Y_2 = \theta - 1$</span>, and <span class="math-container">$C = \{\theta - 2\}$</span>. So <span class="math-container">$\theta \notin C$</span>.</p></li>
</ul>
<p>So the only possible case in which <span class="math-container">$\theta \notin C$</span> is when <span class="math-container">$X_1 = X_2 = -1$</span>, which happens with probability <span class="math-container">$\frac{1}{4}$</span>, so <span class="math-container">$P(\theta \in C) = 1 - \frac{1}{4} = \frac{3}{4}$</span></p>
<p>I think the point of Wasserman here is that the randomness lies in <span class="math-container">$C$</span> and not <span class="math-container">$\theta$</span>. And indeed in the different cases considered, each time it is the confidence interval <span class="math-container">$C$</span> which changes, not <span class="math-container">$\theta$</span>.</p>
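<p>A quick simulation (in Python) confirms the 3/4 coverage. The rule below, C = {Y1 - 1} if Y1 = Y2 and C = {(Y1 + Y2)/2} otherwise, matches the interval defined in the question:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 16
nsim = 100_000

x = rng.choice([-1, 1], size=(nsim, 2))   # X1, X2 each +/-1 with prob 1/2
y = theta + x                              # observed Y1, Y2
same = y[:, 0] == y[:, 1]

# C = {Y1 - 1} if Y1 == Y2, else {(Y1 + Y2)/2}
c = np.where(same, y[:, 0] - 1, y.mean(axis=1))
coverage = np.mean(c == theta)             # close to 0.75
```

<p>Only the case X1 = X2 = -1 (probability 1/4) misses, in agreement with the case analysis above.</p>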
| 696
|
confidence intervals
|
Distance between Vectors with Confidence Intervals
|
https://stats.stackexchange.com/questions/159820/distance-between-vectors-with-confidence-intervals
|
<p>I have a machine learning application where I extract numerical features $a_{i1}, a_{i2}, \dots, a_{ik}$ for each object $a_i$ to study. Objects are then compared using standard euclidean distance. </p>
<p>The problem is that the features entail uncertainty. The good message is that I have confidence intervals, meaning that I can tell with probability $\alpha$ that $a_{ij}-c_{ij} \le a_{ij} \le a_{ij}+c_{ij}$. Since these confidence intervals are independent of each other, I end up with k-dimensional boxes instead of k-dimensional points (see below).</p>
<p>My question is whether there's a standard approach to extend euclidean distance to account for uncertainty. The right way to go is probably using standard euclidean distance, and deriving new confidence intervals for that distance. Maybe it wouldn't even be too hard to derive them, but I'm also interested in a paper that I could cite.</p>
<p><img src="https://i.sstatic.net/Udwzx.jpg" alt="confidence intervals"></p>
|
<p>Bayesian machine learning relies on probability distribution to represent uncertainty. In the present case, using multivariate normal distributions instead of finite boxes may lead to simpler calculations. </p>
<p>Currently, you assume that the interval <span class="math-container">$a_{ij}−c_{ij} \leq a_{ij} \leq a_{ij}+c_{ij}$</span>
contains an amount <span class="math-container">$\alpha$</span> of the uncertainty mass. This can be converted to a normal distribution by using the z-table. Basically, it consists in finding the standard deviation <span class="math-container">$\sigma$</span> of a normal distribution such that the following equality holds:<span class="math-container">$$\alpha = \int_{-c_{ij}}^{c_{ij}}\mathcal{N}(x \mid 0, \sigma^2)\,dx$$</span></p>
<p>This change in uncertainty representation turns the problem into computing the Euclidean distance between 2 multivariate normal distributions. Kettani and Ostrouchov (2005) computed the distribution of the distance resulting from this operation under various hypotheses regarding 2 multivariate normal random variables. </p>
<p>Kettani, H., & Ostrouchov, G. (2005). <strong>On the distribution of the distance between two multivariate normally distributed points</strong>. Department of Computer Science and Information Systems Engineering, Fort Hays State University, Fort Hays (KS).</p>
| 697
|
confidence intervals
|
Confidence intervals for frequency tables
|
https://stats.stackexchange.com/questions/298398/confidence-intervals-for-frequency-tables
|
<p>I am analyzing the results of a survey on R. The questionnaire is a series of questions that participants answer using a Likert scale (<a href="https://en.wikipedia.org/wiki/Likert_scale" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Likert_scale</a>). </p>
<p>I have obtained frequency tables for each question (i.e. what percentage of participants chose "strongly agree", what percentage chose "somewhat agree," etc.) and I am now interested in obtaining confidence intervals for those percentages. </p>
<p>Since I don't want to assume that my data is normally distributed, I was thinking of using bootstrapping to get the confidence intervals. However, I am not entirely sure how using bootstrapping works in this context. I am familiar with how to use bootstrapping when dealing with means, but not really when dealing with frequency tables such as these. Specifically, I am not sure how to go about coding the bootstrapping. </p>
<p>Thank you,
Best,</p>
|
<p>There are different methods for calculating confidence intervals for proportions without using bootstrapping.</p>
<p>For a multinomial proportion, you might try the methods in the <code>DescTools</code> package.</p>
<pre><code>### Adapted from http://rcompanion.org/handbook/H_02.html
if(!require(DescTools)){install.packages("DescTools")}
library(DescTools)
SA = 10
A = 9
N = 20
D = 5
SD = 1
observed = c(SA, A, N, D, SD)
MultinomCI(observed,
conf.level=0.95,
method="sisonglaz")
### Methods: "sisonglaz", "cplus1", "goodman"
### est lwr.ci upr.ci
### [1,] 0.22222222 0.08888889 0.3807871
### [2,] 0.20000000 0.06666667 0.3585648
### [3,] 0.44444444 0.31111111 0.6030093
### [4,] 0.11111111 0.00000000 0.2696759
### [5,] 0.02222222 0.00000000 0.1807871
</code></pre>
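<p>Since the question asks specifically how to code the bootstrap, here is a minimal percentile-bootstrap sketch for category proportions (in Python rather than R, using the same hypothetical counts as above; the function name is just for illustration). The idea: reconstruct the raw responses from the frequency table, resample them with replacement many times, and take percentiles of each category's resampled proportion.</p>

```python
import random

def bootstrap_proportion_cis(counts, n_boot=2000, conf=0.95, seed=42):
    """Percentile-bootstrap CIs for each category's proportion."""
    rng = random.Random(seed)
    n = sum(counts)
    # Reconstruct the raw responses from the frequency table.
    data = [k for k, c in enumerate(counts) for _ in range(c)]
    props = [[] for _ in counts]
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        for k in range(len(counts)):
            props[k].append(resample.count(k) / n)
    lo_i = int((1 - conf) / 2 * n_boot)
    hi_i = int((1 + conf) / 2 * n_boot) - 1
    return [(sorted(p)[lo_i], sorted(p)[hi_i]) for p in props]

# Same counts as the DescTools example: SA, A, N, D, SD
print(bootstrap_proportion_cis([10, 9, 20, 5, 1]))
```

<p>Note that these are per-category (marginal) intervals; they are not simultaneous intervals like those from <code>MultinomCI</code>, and with a rare category (the single "strongly disagree" response) the bootstrap interval can be unreliable.</p>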
| 698
|
confidence intervals
|
Significant difference from regression confidence intervals
|
https://stats.stackexchange.com/questions/55687/significant-difference-from-regression-confidence-intervals
|
<p>I have a question about statistical significance in relation to confidence intervals from linear regression. I'm obviously far from a stats expert, and I've been searching for the answer to this, probably simple, question for a while now without any luck.</p>
<p>I've made an example to clarify my question:
I'm interested in looking at the treatment effect of doing a change (e.g. spraying with pesticide) on one area, and use an untreated area for control. Before the treatment a "calibration line" is established between the two areas as the correlation between some observed response (e.g. annual crop yield, white open circles below, with regression line and 95% confidence intervals drawn)</p>
<p><img src="https://i.sstatic.net/43d0P.jpg" alt="Link to example plot: (sorry, not rep to post image)http://imgur.com/M95dqEk"></p>
<p>[Fig: X and Y axis show the same response (e.g. annual crop yield), but for the control area(X axis) and for the treatment area (Y axis). Each data point is then the annual crop yield for both the control and the treatment area, a total of 10 years of data]</p>
<p>After the treatment this response is measured again (red open triangles), and the "treatment effect" is defined as the difference between the observed response in the treatment and the predicted response (the regression line). </p>
<p>My question is if you can say that the treatment effect is statistically significant if the data point is outside of the 95% confidence interval of the calibration regression? And why/why not ? (so in the plot example 3 of the observed treatment effects are significantly different from the predicted response to a 95% confidence level (p=0.05)?)</p>
<p>Thanks</p>
<p>Edit1, additional question:
Would prediction intervals, instead of confidence intervals, be more suitable to describe whether there has been a change in the relationship between the two areas after treatment?</p>
<p>Edit2:
Would it be right to say that the confidence intervals can be used to check whether the mean of the treatment effects is significantly different (and, as @Glen_b suggests, use the regression line/confidence band for the treatment points instead of single points)?
But when asking whether a single sample is significantly different (as in my comment below to Glen_b), is it better to use the prediction interval?</p>
|
<p>You don't compare the individual points to conclude a treatment effect. You see whether the lines for the treatment and control are different.</p>
<p>In some circumstances, the fitted lines might be parallel, and just the difference in intercept is of interest. In others, both the intercept and slope might differ, and any difference would be of interest. </p>
<p>Testing point vs line in ordinary regression (not errors-in-variables, which is more complicated):</p>
<p>It's not correct to check whether data values from the other sample fall inside the confidence interval, because the data values themselves carry noise.</p>
<p>Call the first sample $(\underline{x}_1,\underline{y}_1)$, and the second one $(\underline{x}_2,\underline{y}_2)$. Your model for the first sample is $y_1(i) = \alpha_1 + \beta_1 x_{1,i} + \varepsilon_i$, with the usual iid $N(0,\sigma^2)$ assumption on the errors. </p>
<p>You want to see if a particular point $(x_{2,j},y_{2,j})$ is consistent with the first sample. Equivalently, to check whether an interval for $y_{2,j} - \left(\alpha_1 + \beta_1 x_{2,j}\right)$ includes 0 (notice the points are second-sample, the line is first-sample).</p>
<p>The usual way to obtain such a CI would be to construct a pivotal quantity, though one could simulate or bootstrap as well.</p>
<p>However, since in this illustration we're doing it for a single point, under normal assumptions and with ordinary regression conditions, we can save some effort: this is a solved problem. It corresponds to (assuming sample 1 and sample 2 have a common population variance) checking whether one of the sample 2 observations lies within a <em>prediction interval</em> based on sample 1, rather than a confidence interval.</p>
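<p>The prediction-interval check described above can be sketched as follows (in Python; this is an illustrative implementation under standard OLS assumptions, and it uses a normal quantile as a large-sample stand-in for the exact t quantile with <span class="math-container">$n-2$</span> degrees of freedom):</p>

```python
from statistics import NormalDist, fmean

def outside_prediction_interval(x1, y1, x0, y0, conf=0.95):
    """Fit y = a + b*x by least squares on sample 1, then check whether
    (x0, y0) falls outside the prediction interval at x0.
    Normal quantile used as a large-sample stand-in for the t quantile."""
    n = len(x1)
    xbar, ybar = fmean(x1), fmean(y1)
    sxx = sum((x - xbar) ** 2 for x in x1)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(x1, y1)) / sxx
    a = ybar - b * xbar
    resid_ss = sum((y - (a + b * x)) ** 2 for x, y in zip(x1, y1))
    s = (resid_ss / (n - 2)) ** 0.5               # residual std. error
    # Prediction SE includes the "+1" term for the noise in a new point.
    se_pred = s * (1 + 1 / n + (x0 - xbar) ** 2 / sxx) ** 0.5
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return abs(y0 - (a + b * x0)) > z * se_pred

# Calibration data lying near y = x; a treatment point far below the line
x1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [1.1, 1.9, 3.2, 3.8, 5.1, 6.0, 6.9, 8.1, 8.8, 10.1]
print(outside_prediction_interval(x1, y1, 5.0, 2.0))  # True
print(outside_prediction_interval(x1, y1, 5.0, 5.0))  # False
```

<p>The <code>1 +</code> term inside <code>se_pred</code> is what distinguishes the prediction interval from the confidence interval: it accounts for the noise in the new observation itself, not just the uncertainty in the fitted line.</p>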
| 699