| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
hypothesis testing
|
Hypothesis testing for equivalence of two arrangements
|
https://stats.stackexchange.com/questions/79380/hypothesis-testing-for-equivalence-of-two-arrangements
|
<p>I have two arrangements (i.e., permutations) of numbers. The first is the target/real arrangement; the second is the observed arrangement.</p>
<blockquote>
<p>e.g.</p>
<p>Target := 1,2,3,4,5,6,7</p>
<p>Observed := 4,1,7,3,2,5,6</p>
</blockquote>
<p>No two elements within an arrangement are equal. What kind of test should I use?</p>
<p>p.s.
I am not good at statistics. I am trying to evaluate a simulation model against real-world data. The target arrangement is the sequence of real-world events, while the observed arrangement is the sequence of events that occurred in a simulation. My hypothesis is that the two are similar.</p>
<p>--EDIT--</p>
<p>Can this be done using Sequence Alignment methods used in Bioinformatics?</p>
<p>--EDIT--
Actually, I have 30 samples (30 subjects participated). All target and observed values for a particular sample lie in the same range [1, n], where n ≈ 15.</p>
|
<p>First of all, I would suggest reconsidering the names you chose for your data. In most places, 'observed' refers to the actual observed/measured/real value, so in your case I would expect it to be used for the sequence of real-world events.</p>
<p>I would also suggest switching 'Target' to 'predicted' or 'estimated', or anything that signifies that it comes out of your simulation model.</p>
<p>To attempt to answer what I think you are asking, have a look at this Stack Overflow question:</p>
<p><a href="https://stackoverflow.com/questions/10845114/algorithm-to-measure-similarity-between-two-sequences-of-strings">LINK</a></p>
<p>Levenshtein distance might be what you are looking for.</p>
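<p>To make that suggestion concrete, here is a minimal sketch (mine, not part of the original answer) of the standard dynamic-programming computation of the Levenshtein distance between two sequences; a smaller distance means the observed ordering is closer to the target:</p>

```python
def levenshtein(a, b):
    """Edit distance between two sequences via dynamic programming."""
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (x != y),  # substitution
            ))
        prev = curr
    return prev[-1]

# Comparing the two arrangements from the question:
target   = [1, 2, 3, 4, 5, 6, 7]
observed = [4, 1, 7, 3, 2, 5, 6]
print(levenshtein(target, observed))
```

<p>Note that when both sequences are permutations of the same items, a rank-based measure such as Kendall's tau (which counts pairwise order inversions) may be more natural than edit distance.</p>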
| 300
|
hypothesis testing
|
Dependent vs Independent sample
|
https://stats.stackexchange.com/questions/276323/dependent-vs-independent-sample
|
<p>I am quite confused about a question that came up in my exam.</p>
<p><strong>Question:</strong> <em>The travel times on two alternative routes through a network are recorded on 20 working days during a month. The results of this are given in Table 4.
Perform an appropriate hypothesis test to investigate the difference between the mean travel times on the two routes, and comment on the outcome of this.</em></p>
<p>I initially believed the samples would be independent; however, my peers have said they are dependent.</p>
<pre><code>EDIT adding the data table from comments:
Route
Day 1 2
1 401 420
2 433 451
3 355 378
4 436 456
5 580 616
6 497 549
7 401 433
8 413 430
9 353 368
10 449 480
11 341 369
12 402 413
13 423 441
14 438 462
15 358 361
16 470 489
17 392 420
18 369 387
19 394 417
20 368 385
</code></pre>
|
<p>If the same driver in the same car travels both routes over the 20 days, then the samples are dependent. If different drivers in different cars travel the two routes, then the samples are independent. Samples can only be dependent if there is some common factor linking them, such as a single driver; otherwise the samples are independent.</p>
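<p>Since the two travel times share a day, a paired analysis of the day-by-day differences is the natural choice. A minimal sketch (standard-library Python, using the 20 day pairs from the question's table) of the paired t statistic:</p>

```python
import math
from statistics import mean, stdev

# Travel times from the question's table: (route 1, route 2) per day
times = [(401, 420), (433, 451), (355, 378), (436, 456), (580, 616),
         (497, 549), (401, 433), (413, 430), (353, 368), (449, 480),
         (341, 369), (402, 413), (423, 441), (438, 462), (358, 361),
         (470, 489), (392, 420), (369, 387), (394, 417), (368, 385)]

diffs = [r2 - r1 for r1, r2 in times]  # per-day differences
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # paired t statistic, df = n - 1
print(f"mean difference = {mean(diffs):.1f}, t = {t:.2f} on {n - 1} df")
```

<p>Every difference is positive, so the paired analysis detects the route effect that an unpaired comparison of means would partly wash out in the day-to-day variation.</p>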
| 301
|
hypothesis testing
|
Comparing a proportion to a 'mean' proportion. Which test?
|
https://stats.stackexchange.com/questions/473018/comparing-a-proportion-to-a-mean-proportion-which-test
|
<p>Suppose there are 24 factories who all fabricate the same product with a certain percentage of that product being faulty. We have a table of data:</p>
<p>Factory, Produced, Faulty</p>
<p>F1, 212, 31</p>
<p>F2, 1021, 145</p>
<p>…, …, …</p>
<p>F24, 480, 40</p>
<p>Now I want to check whether factory F1 has a different proportion of faulty product than the average (!!) proportion of the other factories. With 2 factories I could make a two-sample proportion test / chi-squared test work, but now I am not interested in the difference between, say, F3 and F17; I just want to know, considering F2 through F24 together, does F1 perform worse?</p>
<p>I was thinking that I could do a two-sample proportion test in which F2 through F24 are treated as one giant factory (summing their faulty and non-faulty products into one set, to test against the faulty and non-faulty products from F1).</p>
<p>But I want to be sure. Any help?</p>
|
<p>The question does not clarify whether the data arise from random sampling or not.</p>
<p>You can check the following images for a more detailed description.</p>
<p><a href="https://i.sstatic.net/XY9FE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XY9FE.png" alt="Chi-Square Test" /></a></p>
<p><a href="https://i.sstatic.net/f8EHu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f8EHu.png" alt="Analysis of Variance (ANOVA)" /></a></p>
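<p>The pooling idea described in the question amounts to a two-sample proportion (z) test. A minimal sketch in Python; note that only F1's counts appear in the question, so the pooled totals for F2–F24 below are hypothetical placeholders to be replaced with the real sums:</p>

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sample z test for equality of proportions (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# F1's counts are from the question; the pooled F2..F24 totals are made up
z, p = two_prop_z(31, 212, 1200, 9000)
print(f"z = {z:.2f}, p = {p:.4f}")
```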
| 302
|
hypothesis testing
|
Question about p.adjust in R and BH correction
|
https://stats.stackexchange.com/questions/473580/question-about-p-adjust-in-r-and-bh-correction
|
<p>In multiple testing we adjust the significance level using BH. I used p.adjust in R to adjust my p-values, but when I print them out some are 0 and 1, and I don't understand what that means. Also, does p.adjust default to a 0.05 significance level? To my knowledge, BH should change the significance level rather than the p-values, so I am confused.</p>
|
<p>In the documentation for p.adjust there is this for "value"</p>
<blockquote>
<p>A numeric vector of corrected p-values (of the same length as p, with
names copied from p).</p>
</blockquote>
<p>So, it is doing what it says it should. Of course, p-values can't be below 0 or above 1. Adjusted values of exactly 1 are expected, because the BH adjustment min(1, p·n/rank) is capped at 1; exact 0's can only come from p-values that are themselves 0, or so small that they display as 0.</p>
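<p>For intuition, the BH adjustment that p.adjust performs can be sketched in a few lines of Python (this mirrors, but is not, the R implementation): each sorted p-value is multiplied by n/rank, a running minimum taken from the largest p-value down enforces monotonicity, and everything is capped at 1:</p>

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, mirroring R's p.adjust(method='BH')."""
    n = len(pvals)
    # visit p-values from largest to smallest so one pass gives the running minimum
    order = sorted(range(n), key=lambda i: pvals[i], reverse=True)
    adjusted = [0.0] * n
    running_min = 1.0
    for rank_from_top, i in enumerate(order):
        rank = n - rank_from_top  # rank in ascending order of p-value
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = min(1.0, running_min)
    return adjusted

print(bh_adjust([0.005, 0.04, 0.5]))
```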
| 303
|
hypothesis testing
|
How can I show that the hypothesis $\mu = \mu_0$ is true exactly if $\mathbf a'\mu = \mathbf a'\mu_0$ for all vectors $\mathbf a \in \mathbb R^p$?
|
https://stats.stackexchange.com/questions/473620/how-can-i-show-that-the-hypothesis-mu-mu-0-is-true-exactly-if-mathbf-a
|
<p>How can I show that the hypothesis <span class="math-container">$\mu = \mu_0$</span> is true exactly if <span class="math-container">$\mathbf a'\mu = \mathbf a'\mu_0$</span> for all vectors <span class="math-container">$\mathbf a \in \mathbb R^p$</span>?</p>
|
<p>Consider that <span class="math-container">$A \implies B \iff B^c \implies A^c$</span>.</p>
<p>The direction <span class="math-container">$\mu = \mu_0 \implies a'\mu = a'\mu_0$</span> is immediate. For the converse, argue by contrapositive: suppose <span class="math-container">$\mu \neq \mu_0$</span>; then it is enough to exhibit one <span class="math-container">$a \in \mathbb{R}^p$</span> such that <span class="math-container">$a'\mu \neq a'\mu_0$</span>.</p>
<p>Let <span class="math-container">$p=2$</span> and let <span class="math-container">$\mu = \begin{bmatrix} 2 \cr -2 \end{bmatrix}$</span> and <span class="math-container">$\mu_0 = \begin{bmatrix} 1 \cr -1 \end{bmatrix}$</span>.</p>
<p>Then <span class="math-container">$a'\mu = 2(a_1-a_2)$</span> and <span class="math-container">$a'\mu_0 = (a_1-a_2)$</span>, which are not equal for any <span class="math-container">$a_1 \neq a_2$</span>.</p>
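<p>For completeness (this is a general version of the argument, not part of the original answer): the example above is for <span class="math-container">$p=2$</span>, but for any <span class="math-container">$p$</span> one valid choice is <span class="math-container">$a = \mu - \mu_0$</span>:</p>

```latex
% Suppose \mu \neq \mu_0 and take a = \mu - \mu_0 \neq 0. Then
a'\mu - a'\mu_0 = (\mu - \mu_0)'(\mu - \mu_0) = \lVert \mu - \mu_0 \rVert^2 > 0,
% since the squared norm of a nonzero vector is strictly positive,
% hence a'\mu \neq a'\mu_0 for this particular a.
```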
| 304
|
hypothesis testing
|
What does it mean to fail to reject in a one-sided hypothesis test?
|
https://stats.stackexchange.com/questions/475402/what-does-it-mean-to-fail-to-reject-in-a-one-sided-hypothesis-test
|
<p>Let's say we want to test the following hypotheses:</p>
<p><span class="math-container">$H_0: \mu = 0$</span> <span class="math-container">$H_1: \mu > 0$</span></p>
<p>for a random sample <span class="math-container">$\{X_1, \dots , X_n\}$</span> that is normally distributed <span class="math-container">$X_i \sim N(\mu, 25)$</span> (so variance is known).</p>
<p>Why is it that we fail to reject <span class="math-container">$H_0$</span> if <span class="math-container">$\bar{X} = -300$</span> for example? What does 'fail to reject' even mean in a one-sided test? What would be the meaning of such a p-value even?</p>
|
| 305
|
hypothesis testing
|
Alpha error vs Beta error
|
https://stats.stackexchange.com/questions/475535/alpha-error-vs-beta-error
|
<p>If my main hypothesis is that there will be no difference between two different designs in an experiment, should I be more concerned about the alpha error (rejecting H0 when it is true) or beta error (accepting H0 when it is false)?
Also, would my null hypothesis in this experiment be that there is a difference between the two designs?</p>
|
<p>It depends on the situation. For example, if you work in the medical field and want to check whether a patient has a critical disease like cancer, a false negative would be terrible: if you miss a sick patient, the patient might die. In this case, you want to reduce false negatives as much as possible.</p>
<p>One issue is that if you reduce false positives, false negatives tend to increase; the two work in opposite directions. In many cases, people want a balanced approach that reduces both errors to a reasonable level at the same time, and they compute scores like F1 that consider both aspects at once.</p>
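<p>As a concrete aside (not part of the original answer), the F1 score mentioned above is just the harmonic mean of precision and recall, computed from the confusion counts:</p>

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from confusion counts."""
    precision = tp / (tp + fp)  # fraction of flagged cases that are real
    recall = tp / (tp + fn)     # fraction of real cases that are flagged
    return 2 * precision * recall / (precision + recall)

# e.g. 80 true positives, 10 false positives, 20 false negatives
print(round(f1_score(80, 10, 20), 3))
```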
<hr />
<p>The hypothesis you make is the null hypothesis; in this case, that there is no difference between the designs. A 'positive' result means a difference is detected against the null hypothesis, and a 'negative' result means no difference is detected at the given significance level. Usually, the null hypothesis is set as no difference, since that is the simpler hypothesis to test.</p>
<p>As Alexis mentioned in a comment, a hypothesis test doesn't prove or disprove the null hypothesis with certainty. Rejecting the null hypothesis doesn't mean that the situation it states is impossible; the hypothesis test is a probabilistic statement.</p>
| 306
|
hypothesis testing
|
Hypothesises testing
|
https://stats.stackexchange.com/questions/476733/hypothesises-testing
|
<p>I want to test the hypothesis that 50% of employees in a company are happy. A survey of 100 people was conducted, and 41 say that they are happy.
My questions: I can't understand what I am given.
(1) Is the hypothesis <span class="math-container">$H_0: \mu_0 = 50$</span>, <span class="math-container">$H_1:\mu_1 \ne 50$</span>?
(2) Do I have a standard deviation <span class="math-container">$\sigma$</span>?
(3) The theory says I need a percentile for the hypothesis. Do I assume it's 95%?</p>
|
<ol>
<li><p>The null and alternative would be <span class="math-container">$H_0: p = 0.5$</span> and <span class="math-container">$H_A: p \neq 0.5$</span>. Here, <span class="math-container">$p$</span> is the proportion of the population which are happy.</p>
</li>
<li><p>The standard deviation is a function of the sample proportion. If you sample 100 people and 41 say they are happy, then <span class="math-container">$\hat{p}=0.41$</span> and the standard deviation of a single observation is <span class="math-container">$\hat{\sigma} = \sqrt{\hat{p}(1-\hat{p})}$</span>. For the test itself, the standard error of <span class="math-container">$\hat{p}$</span> under <span class="math-container">$H_0$</span> is <span class="math-container">$\sqrt{p_0(1-p_0)/n}$</span>.</p>
</li>
<li><p>I'm not sure what a "percentile for the hypothesis" means. For a given hypothesis test, we have to decide on a false positive rate <span class="math-container">$\alpha$</span> which by convention is 0.05, but that is free to change.</p>
</li>
</ol>
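<p>Putting the pieces together, a common convention is to use the null value <span class="math-container">$p_0$</span> in the standard error. A minimal sketch (mine, not part of the original answer) of the resulting one-sample proportion z test:</p>

```python
import math

# One-sample proportion z test: H0: p = 0.5 vs HA: p != 0.5
n, successes, p0 = 100, 41, 0.5
p_hat = successes / n
se = math.sqrt(p0 * (1 - p0) / n)  # standard error under H0
z = (p_hat - p0) / se
# two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```

<p>Here the p-value is just above 0.05, so at the conventional level we would (narrowly) fail to reject <span class="math-container">$H_0$</span>.</p>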
| 307
|
hypothesis testing
|
Can you create an example showing when the p-value $P(D|H_0)$ does not imply probability of $H_0$ being true given the observed data $P(H_0|D)$?
|
https://stats.stackexchange.com/questions/487928/can-you-create-an-example-showing-when-the-p-value-pdh-0-does-not-imply-pro
|
<p>I'm reading <a href="https://journals.sagepub.com/doi/abs/10.1177/106591299905200309" rel="nofollow noreferrer">The Insignificance of Null Hypothesis Significance Testing</a> and on page 654, the author states that most people incorrectly think that the null hypothesis significance test produces <span class="math-container">$\mathbb{P}(H_0|D)$</span>: the probability of <span class="math-container">$H_0$</span> being true given the observed data.</p>
<p>But the test actually produces <span class="math-container">$\mathbb{P}(D|H_0)$</span>. And by Bayes law, these two are not the same unless <span class="math-container">$\mathbb{P}(H_0) = \mathbb{P}(D)$</span>. What is the intuitive meaning of this result?</p>
<p>Can someone give me an example where using typical hypothesis i.e. <span class="math-container">$H_0: \mu = 0$</span> and <span class="math-container">$H_1: \mu \ne 0$</span>, <span class="math-container">$\bar{X}$</span> (sample mean) test statistic and the normal distribution, where hypothesis tests start to fail?</p>
|
<p>It is not possible to create the example you are looking for. The problem is that <span class="math-container">$P(H_0|D)$</span> is not a well-defined quantity here. Even though in a practical case you do not know whether <span class="math-container">$H_0$</span> is true or false, under the paradigm behind the p-value (ML and/or LS, i.e. the <em>frequentist approach</em>) parameters are unknown constants, not random variables.</p>
<p>So in your example <span class="math-container">$\mu$</span> is a fixed constant (known or unknown), and a statement like <span class="math-container">$P(-x<\mu<x)$</span> is meaningless; as a consequence, so is <span class="math-container">$P(H_0|D)$</span>. On the other hand, it is possible to show that the p-value <span class="math-container">$=P(D|H_0)$</span> does make sense.</p>
<p>Therefore, intuitive as it may seem, it is not possible to use <em>Bayes' rule</em> here to link <span class="math-container">$P(D|H_0)$</span> and <span class="math-container">$P(H_0|D)$</span>.</p>
<p>So, as for:</p>
<blockquote>
<p>But the test actually produces <span class="math-container">$P(D|H_0)$</span>. And by Bayes law, these two
are not the same unless <span class="math-container">$P(H_0)=P(D)$</span>. What is the intuitive meaning of
this result?</p>
</blockquote>
<p>It has no proper meaning.</p>
| 308
|
hypothesis testing
|
Is a hypothesis test useful if our null hypothesis is not the true value?
|
https://stats.stackexchange.com/questions/487972/is-a-hypothesis-test-useful-if-our-null-hypothesis-is-not-the-true-value
|
<p>Assume we are testing if the true average weight of milk cartons is 100g. We may specify <span class="math-container">$H_0: \mu = 100$</span> and <span class="math-container">$H_1: \mu \ne 100$</span>.
Let's assume the true weight is 102.</p>
<p>In the course of testing we may calculate metrics, such as the type 1 error for example. This is the probability that we reject a null hypothesis given that it is true. But isn't this a non-sensical number if the true <span class="math-container">$\mu$</span> isn't the same as the null in our test? And given that this is the likely situation in the real world, what information does a hypothesis test really give us if we do not correctly specify the null hypothesis?</p>
|
<p>If the power of your test of <span class="math-container">$H_0: \mu=100$</span> against <span class="math-container">$H_a: \mu\ne 100$</span> is sufficient, you will likely reject <span class="math-container">$H_0.$</span> So the test has not been useless. Furthermore, it is
good statistical practice to accompany this test with a CI for <span class="math-container">$\mu.$</span> For example, such a CI is included in the R output for <code>t.test</code>.</p>
<p>Also, ideally, the test would have been preceded by a power computation to find the
probability of rejection when <span class="math-container">$H_0$</span> is false by various amounts <span class="math-container">$\Delta.$</span></p>
<p>You are correct that the situation, in which <span class="math-container">$H_0$</span> does not exactly specify
the true value of <span class="math-container">$\mu,$</span> is commonly encountered in practice.</p>
<p>If the variability among contents of milk cartons is given by <span class="math-container">$\sigma=0.1$</span> and
we sample <span class="math-container">$n = 12$</span> cartons, we might get results as shown for the simulated
sample below:</p>
<pre><code>set.seed(917)
x = rnorm(12, 102, .1)
t.test(x, mu = 100)
One Sample t-test
data: x
t = 66.027, df = 11, p-value = 1.193e-15
alternative hypothesis:
true mean is not equal to 100
95 percent confidence interval:
101.9421 102.0760
sample estimates:
mean of x
102.0091
</code></pre>
<p>In this case, <span class="math-container">$H_0$</span> is strongly rejected with a P-value very nearly <span class="math-container">$0.$</span>
The 95% CI <span class="math-container">$(101.9, 102.1)$</span> gives a good indication that the true value
is near <span class="math-container">$\mu = 102.$</span></p>
<ul>
<li><p>If it is the firm's intention to overfill cartons slightly in order to avoid complaints or regulatory fines for selling
cartons that don't have the <span class="math-container">$100$</span>g promised on the carton, then the result of
the experiment and the test and CI in R will assure them that all is well.</p>
</li>
<li><p>If it is the firm's intention to put just barely enough in each carton
to avoid underfilling the vast majority of the time, then these results might suggest a target fill amount of something like
<span class="math-container">$100.1$</span>g or <span class="math-container">$100.2$</span>g, depending on the particulars and pending ongoing monitoring.</p>
</li>
</ul>
<p><strong>Addendum:</strong> Because you ask about power computations in a Comment, I will
illustrate how one can simulate the power for a two-tailed, one-sample t test, at the 5% level, of <span class="math-container">$H_0: \mu = 100$</span> vs. <span class="math-container">$H_a: \mu = 101$</span> (specific value different from
100) when <span class="math-container">$n = 12, \sigma = 1.$</span> (The result can be found using a noncentral t distribution, but <span class="math-container">$n$</span> is too small for a good normal approximation.)</p>
<p>The power is about <span class="math-container">$88\%.$</span> That is, when <span class="math-container">$\mu_a$</span> differs by <span class="math-container">$\Delta = 1$</span> from <span class="math-container">$\mu_0 = 100,$</span> we have probability about <span class="math-container">$0.88$</span> of rejecting <span class="math-container">$H_0.$</span></p>
<pre><code>set.seed(2020)
pv = replicate(10^5, t.test(rnorm(12, 101, 1), mu=100)$p.val)
mean(pv <= 0.05)
[1] 0.88404
</code></pre>
<p>The result is essentially the same for this two-tailed test if data are
<span class="math-container">$\mathsf{Norm}(99,1).$</span> With 100,000 samples of size <span class="math-container">$n = 12,$</span> one can
expect about 2-place accuracy for rejection probability.</p>
<pre><code>set.seed(1234)
pv = replicate(10^5, t.test(rnorm(12, 99, 1), mu=100)$p.val)
mean(pv <= 0.05)
[1] 0.88219
</code></pre>
| 309
|
hypothesis testing
|
interdependence of type 1 error and type 2 error in p-Value based hypothesis tests
|
https://stats.stackexchange.com/questions/225183/interdependence-of-type-1-error-and-type-2-error-in-p-value-based-hypothesis-tes
|
<p>Investigating a t-test, I ran some "experiments": I generated randomly distributed values around a given mean and ran a whole bunch of t-tests (always with new data) under conditions where the null hypothesis is true. I indeed found that the type I error rate $\alpha$, i.e. the ratio of the number of times the t-test incorrectly rejected the null hypothesis to the total number of runs, matched the significance threshold I used.</p>
<p>Then I changed the script, choosing a different mean for the random-number generator, so I set up the experiment in a way that the alternative hypothesis must be true. I then calculated the empirical rate for a type II error $\beta$.</p>
<p>I then varied the p-value threshold that I used in the t-test and realized that the lower I chose the threshold, the higher my type II error got.</p>
<p>I understand the meaning of both types of errors individually (I believe), but I have somewhat of a hard time reasoning about their interdependence. </p>
<ul>
<li>Is this a general property of type 1 and type 2 errors? </li>
<li>Is it specific to the t-Test?</li>
<li>is there a good way to quantify this?</li>
</ul>
|
<p>To summarize, you find that if you use a lower p-value as threshold to reject hypotheses, the type I error goes down and the type II error goes up.</p>
<p>This should make sense; if you use a lower threshold (in terms of p-values) for rejecting a null hypothesis, you will be rejecting fewer hypotheses. For one, that will make it less likely that you will falsely reject (reject even though the null is true) as you are being more conservative. Hence, the Type I error should go down (indeed it should be exactly the threshold you use for the p-value).</p>
<p>On the other hand, since you are being more conservative in rejecting the null hypothesis, you are also more likely to <strong>not reject</strong> when the null hypothesis is false. In a sense, you are requiring "more evidence" against the null hypothesis. That is, the Type II error rate goes up.</p>
<p>Maybe it is also good to consider the extremes: if you never reject the null hypothesis (p-value threshold of 0), then your type I error rate is 0. You never reject so you never make a mistake. On the other hand, your Type II error rate is 1.</p>
<p>On the other hand, if you always reject, then your Type II error rate is 0, because you never make a mistake not rejecting. But, your Type I error rate is 1 in this case.</p>
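<p>The experiment described in the question can be reproduced with a short simulation. The sketch below is mine, not part of the original answer, and uses a z test with known variance instead of a t test to stay dependency-free; it estimates both error rates at two thresholds and shows the trade-off:</p>

```python
import math
import random

def z_reject(sample, mu0, sigma, crit):
    """Two-sided z test with known sigma: True if H0: mu = mu0 is rejected."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return abs(z) > crit

random.seed(0)
n, reps = 30, 5000
crits = {0.05: 1.959964, 0.01: 2.575829}  # two-sided critical values
rates = {}

for alpha, crit in crits.items():
    # Type I rate: H0 true (mu = 0); Type II rate: H0 false (mu = 0.3)
    type1 = sum(z_reject([random.gauss(0.0, 1.0) for _ in range(n)], 0.0, 1.0, crit)
                for _ in range(reps)) / reps
    type2 = sum(not z_reject([random.gauss(0.3, 1.0) for _ in range(n)], 0.0, 1.0, crit)
                for _ in range(reps)) / reps
    rates[alpha] = (type1, type2)
    print(f"alpha = {alpha}: type I ~ {type1:.3f}, type II ~ {type2:.3f}")
```

<p>As expected, the estimated type I rate tracks the chosen threshold, while the type II rate rises as the threshold is tightened.</p>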
| 310
|
hypothesis testing
|
Hypothesis testing options on non-normal populations
|
https://stats.stackexchange.com/questions/226282/hypothesis-testing-options-on-non-normal-populations
|
<p>Can a hypothesis test be performed if I have a non-normal population, small sample size, but population standard deviation is known? We are testing if the mean differs from the given mean.</p>
|
<p>The <a href="https://en.wikipedia.org/wiki/Student%27s_t-test" rel="nofollow">t-test</a> is <a href="http://thestatsgeek.com/2013/09/28/the-t-test-and-robustness-to-non-normality/" rel="nofollow">quite robust</a> to departures from the assumption of normality. Intuitively, the reason for this is that the T statistic is based on averages, which are asymptotically normal under mild conditions (Central limit theorem), and typically they converge fast to normality.</p>
<p>Alternative nonparametric tests (for equality of distributions or means) include the Kolmogorov-Smirnov test, permutation tests, and Wilcoxon signed-rank test.</p>
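<p>Of these, a one-sample permutation test is particularly easy to sketch. The version below (mine, with made-up data) tests H0: μ = μ0 by randomly flipping the signs of the centered observations, which is valid when the distribution is symmetric about μ0:</p>

```python
import random
from statistics import mean

def sign_flip_test(data, mu0, n_perm=5000, seed=1):
    """One-sample permutation (sign-flip) test of H0: mu = mu0."""
    rng = random.Random(seed)
    centered = [x - mu0 for x in data]
    observed = abs(mean(centered))
    hits = 0
    for _ in range(n_perm):
        # under H0 (symmetry about mu0), each sign is equally likely
        flipped = [c if rng.random() < 0.5 else -c for c in centered]
        if abs(mean(flipped)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid a p-value of 0

# Made-up small sample, clearly shifted away from mu0 = 0
sample = [2.1, 1.8, 2.5, 1.9, 2.2, 2.4, 1.7, 2.0]
print(sign_flip_test(sample, mu0=0))
```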
| 311
|
hypothesis testing
|
How can I test whether the mean return of stock indices is 0?
|
https://stats.stackexchange.com/questions/229270/how-can-i-test-whether-the-mean-return-of-stock-indices-is-0
|
<p>I have daily return data for SPX over 50 years. And I calculate the mean return by just taking arithmetic average. I want to test the hypothesis whether the mean is 0.</p>
<p>Can I use the t statistic, which is (mean − 0)/(sample standard deviation/sqrt(n)), to test whether the mean return is 0? If not, what statistic should I use? Thanks.</p>
|
<p><strong>Can I use the t statistic to test whether the mean return is 0? If not, what statistic should I use? Thanks.</strong></p>
<p>Yes you can, but probably you shouldn't. Stock prices cannot be negative; consequently, the differences between stock prices are bounded below by what you paid for them. For stock prices one can use the <a href="http://financetrain.com/why-lognormal-distribution-is-used-to-describe-stock-prices/" rel="nofollow">LogNormal distribution</a>. That is, one takes the logarithm of the prices, and that logarithm <em>may</em> be normally distributed. If the data are LogNormal, then the differences between the logarithms of the prices are also normally distributed; but the difference of logarithms is the logarithm of the ratio of prices, not the difference in prices themselves. Still, applying the t-test to the differences of logarithms would tell us whether the start price differed from the end price. </p>
<p>However, LogNormality is only one possibility, and one should always let the data tell us what distribution they follow. Another approach that works (almost) all the time, because it does not depend on the distribution, is a nonparametric test such as the Wilcoxon signed-rank test against an assumed difference of zero.</p>
<p>In fact, since the probability of losing the entire investment is not negligible, the logged data are not perfectly normally distributed, as there will be nonzero mass at minus infinity. So Wilcoxon testing may generally be better than t-testing the logarithms. The Wilcoxon test is (almost) the same whether you compare the quantities themselves or their logarithms: it compares rankings, and ranks are invariant under monotone transformations such as the logarithm, so the test is versatile. Moreover, comparing the before and after prices rather than their logarithms removes ties at negative infinity, but with enough data there will be ties at zero, so use the best available software for the Wilcoxon test, as the treatment of ties can differ.</p>
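<p>For concreteness, a minimal sketch of the log-return computation referred to above (mine, with a tiny made-up price series):</p>

```python
import math
from statistics import mean, stdev

prices = [100.0, 101.0, 102.5, 101.8, 103.0]  # made-up daily closes
log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]

# One-sample t statistic for H0: mean log return = 0
n = len(log_returns)
t = mean(log_returns) / (stdev(log_returns) / math.sqrt(n))
print(f"mean log return = {mean(log_returns):.5f}, t = {t:.2f}")
```

<p>Note that the log returns telescope: their sum equals the log of the end-to-start price ratio, which is why testing them against zero compares the start and end prices.</p>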
| 312
|
hypothesis testing
|
setting up hypothesis testing problems
|
https://stats.stackexchange.com/questions/230891/setting-up-hypothesis-testing-problems
|
<p>I'm trying to do some hypothesis testing for work, but I have to admit that it's a bit trickier when you have to formulate the question yourself.</p>
<p>I have some data of the number of errors in a software we provide in the first 3 months after going live. I also have the number of those errors that are "critical errors". The hypothesis I want to test is that the number of critical errors in the first three months is equal to zero. I'm used to doing this in R, but my boss wanted it done in excel. Here's what I've got so far, but I'm not sure if I'm setting things up correctly:</p>
<pre><code>Total Errors | Criticals
24 | 1
31 | 0
2 | 1
8 | 3
2 | 0
0 | 0
2 | 0
4 | 0
4 | 0
5 | 0
5 | 0
9 | 0
6 | 0
7 | 1
0 | 0
12 | 0
10 | 1
13 | 0
19 | 0
Totals 163 | 7
|
s.e. criticals | 0.74059196
mean criticals | 0.04294479
h0 | 0
t-value | 0.74032982
p-value | 0.76991425
</code></pre>
<p>Sorry for the horrible format. I tried to copy it directly from excel. For SD and mean I just used excel's functions. I then calculated the t-statistic manually, and used excel's t-test function to find the p-value.</p>
<p>From what I've got so far I would say that I can't reject the null hypothesis that mean of critical errors in the first three months is equal to zero.</p>
|
<p>pnuts' comment is correct. </p>
<p>If the true rate of critical errors is actually zero, you won't see a single one. (Even then, you can't <em>prove</em> the rate is zero, you can at best give an upper bound -- in a confidence interval sense -- on the rate.)</p>
<p>Conversely if you observe <em>any</em> critical failures in the three month period, you know for sure that the rate of critical failures cannot be zero.</p>
<p><a href="https://i.sstatic.net/pbpSY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pbpSY.png" alt="extract of data from question showing the Totals row"></a></p>
<p>Seven is more than 0. The rate of critical failures is not 0.</p>
<p>It's not clear to me why the other failures are relevant, but maybe you expressed the hypothesis of interest differently than you intended.</p>
<p>You might be able to do something similar to an equivalence test (though in this case only a single one-sided test would be needed). This would require specifying a proportion of total failures, or a rate per unit time, that is "practically" close enough to zero: some acceptable level of critical failures that you can demonstrate you're below (in effect, because a confidence interval for the parameter of interest would be contained inside the "acceptable region"). Your clients may have very different views from you on what rate of errors is acceptable, though.</p>
<p>An alternative would be to forget hypothesis tests or trying to give a bound on what's "acceptable" and just quote a one-sided interval for the parameter of interest. I wouldn't use t-statistics for that.</p>
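<p>One way to make the "acceptable level" idea concrete is an exact one-sided binomial test. In the sketch below (mine, not from the original answer), the 7 criticals out of 163 errors are the totals from the question, while the 10% threshold is a purely hypothetical "acceptable" proportion:</p>

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# One-sided exact test: is the critical-error proportion below an
# "acceptable" level p0? (p0 = 0.10 is a hypothetical threshold.)
n, critical, p0 = 163, 7, 0.10
p_value = binom_cdf(critical, n, p0)  # P(X <= 7 | p = 0.10)
print(f"p = {p_value:.4f}")
```

<p>A small p-value here is evidence that the true critical-error proportion lies below the chosen threshold, which is the kind of statement that can actually be supported by these data.</p>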
| 313
|
hypothesis testing
|
What to compare - means or variances?
|
https://stats.stackexchange.com/questions/233648/what-to-compare-means-or-variances
|
<p>I have a dataset about vehicles that crossed a certain signalized intersection (each record is one vehicle). I want to model the relationship between the entrance time relative to the yellow onset (independent variable) and the number of vehicles (dependent variable). To this end, I use the following logistic model:</p>
<p><span class="math-container">$y_{i}=\frac{A}{1+\exp\left(B\left(x_{i}-C\right)\right)}$</span></p>
<p>I divided the dataset into two by vehicle length (short or long) and fitted the above model to each subset. I want to perform a hypothesis test on the B parameter (which represents the slope at the inflection point). In simple words, I want to compare the slopes of the two models, but I do not know what to compare: means or variances?</p>
<p>In what cases is it better to perform a test that compares two variances (instead of comparing means)? What is the motivation for using a test that compares two variances?</p>
|
<p>Let us look at a simple possible example. Suppose you divide your sample of patients into two groups and give one group your new experimental treatment and the other your old boring control treatment. Even the new exciting treatment will not work equally well for everybody so it may well be that when you measure serum whatever after the treatment the results for your treatment group are more variable than for the control group. In such a case you could test for differences in variance.</p>
<p>Having said that, in practice people do not seem to do such a test but just test for differences in location.</p>
| 314
|
hypothesis testing
|
Measuring Difference when AB Testing is not ideal
|
https://stats.stackexchange.com/questions/234213/measuring-difference-when-ab-testing-is-not-ideal
|
<p>Normally it is best to be able to set up a randomized AB test to measure if some change is actually better than the original. What about in situations where it is not ideal to AB test? </p>
<p>For example, we are a ride-sharing company operating in an area where the number of drivers is much lower than the number of people looking for rides, and we want to test a new pricing algorithm and measure its impact. Splitting our users into a control and a test group would make it even harder to meet demand that already exceeds supply, and different pricing between drivers may incite negative feedback and complaints about uneven payments. There can also be many difficulties in controlling variables, such as distances from drivers to riders, while trying to randomize our test and control groups. </p>
<p>In events like this, which technique would be ideal to be able to measure a difference with some degree of certainty?</p>
|
<p><strong>Edited to reflect revised interpretation of question</strong></p>
<p>If the algorithms were selected based on some mathematical model of consumer behavior, that same model might be used to evaluate the new algorithm and compare its projections directly to the old algorithm. If no model currently exists, one could be developed and, if it performs well enough predicting usage under the current algorithm, then used to estimate effects of the new algorithm.</p>
<p>Without such a model, I might do something like conduct a survey of potential customers asking about how much they might use the ride share service under the current surge pricing algorithm as well as the new one. Their responses regarding the current algorithm can be compared against data collected from real observations of use (also under the current algorithm) to estimate how well the stated preferences match actual use. The answers to questions about the new algorithm allow for estimation of the new algorithm's performance, mediated by the differences between stated and revealed preferences regarding the current algorithm.</p>
<p>I don't doubt that there are a lot of other approaches, but this strikes me as a question that's hard to answer in the general case. Modeling is a formal way of examining the relevant factors' effects on the outcome, but your predictions will only be as accurate as the underlying model. Surveys are effective in some applications and less so in others. I don't know if there is a single "ideal" technique for projecting the effect of a potential change like you describe unless the question is substantially narrowed.</p>
| 315
|
hypothesis testing
|
I have to compare pre and post training effectiveness for group of Managers statistically
|
https://stats.stackexchange.com/questions/234287/i-have-to-compare-pre-and-post-training-effectiveness-for-group-of-managers-stat
|
<p>I have to compare pre- and post-training effectiveness for a group of managers who have undergone a training programme. I have data based on rank order from 1 to 4, where 1 is the most preferred and 4 the least preferred. Based on the pre and post data I have to report the percentage improvement in the managers. Which statistical method should be used for measuring the managers' effectiveness?</p>
|
<p>If you are only checking whether there are differences within the same group, such as before versus after the training, you can use a repeated-measures ANOVA. </p>
<p>If you simply want to quantify the improvement between the two time points, a paired t-test is the more direct choice.</p>
<p>This source may interest you for looking at which test to use: <a href="http://www.csun.edu/~amarenco/Fcs%20682/When%20to%20use%20what%20test.pdf" rel="nofollow">http://www.csun.edu/~amarenco/Fcs%20682/When%20to%20use%20what%20test.pdf</a></p>
| 316
|
hypothesis testing
|
Hypothesis testing Type I and Type II with erroneous error
|
https://stats.stackexchange.com/questions/245745/hypothesis-testing-type-i-and-type-ii-with-erroneous-error
|
<p>I am in a stats class with a brilliant professor, but unfortunately I do not follow everything they say or do. I have one of their questions below, and I would like to see how others would answer it. I am having trouble working out how to identify a Type I versus a Type II error and how the test should be set up, especially when the word "erroneously" is thrown in. Please help explain the best way to understand and interpret the answers for A and B. (Note: I already have the answers; it's how to understand and process them that I need help with.)</p>
<p>Question</p>
<p>The manufacturer of an over-the-counter pain reliever claims that its product brings pain relief to headache sufferers in less than 3.5 minutes.</p>
<p>A. What null hypothesis is Mary testing if she commits a type I error when she erroneously concludes the manufacturer's claim is correct?</p>
<p>B. What null hypothesis is Mary testing if she commits a type II error when she erroneously concludes the manufacture's claim is correct?</p>
|
<p>A. A Type I error means asserting an effect that does not exist. The manufacturer claims there is an effect, namely pain relief. So if agreeing with the manufacturer is a Type I error, the null hypothesis must be that there is no effect.</p>
<p>B. The manufacturer still claims pain relief and Mary is still wrong in agreeing with him, but this time her agreement is a Type II error: failing to detect an existing effect. Therefore the "effect" must be defined as the absence of pain relief, and the null hypothesis must be that there <em>is</em> pain relief. This second case is very counterintuitive.</p>
<p>PS. The "less than 3.5 minutes" part is meant as a hint at one-sided testing.</p>
| 317
|
hypothesis testing
|
Hypothesis Testing of means
|
https://stats.stackexchange.com/questions/245929/hypothesis-testing-of-means
|
<p>Can anyone help me out with this question? My notes & textbooks just aren't giving me the explanations I need.</p>
<p>The average household size in a certain region several years ago was 3.14 persons. A sociologist wishes to test, at the 5% level of significance, whether it has decreased. Perform the test using the information collected by the sociologist: in a random sample of 75 households, the average size was 2.98 persons, with a sample standard deviation of 0.82 persons.</p>
<p>
1. State the null and the alternative hypotheses.</p>
<p>A. Ho: = 3.14 & Ha: < 3.14</p>
<p>B. Ho: = 3.14 & Ha: > 3.14</p>
<p>C. Ho: = 3.14 & Ha: ≠ 3.14</p>
<p>D. Ho: = 2.98 & Ha: ≠ 2.98</p>
<p>E. Ho: = 2.98 & Ha: > 2.98</p>
<p>
I believe the answer would be B, but I am unsure if that's correct.</p>
<p>
2. Compute the value of the test statistic.</p>
<p>
A. -1.69</p>
<p>B. -2.73</p>
<p>C. 1.69</p>
<p>D. 2.73</p>
<p>E. -0.195</p>
<p>I tried using this formula, but I am not getting any of the above answers.
t= mean - standard deviation/ s/√n</p>
<p>I believe the answer is A. I plugged the numbers from the problem into the STAT-TESTS- Z-Int. </p>
<ol start="3">
<li>What is the rejection region?</li>
</ol>
<p>A. (- infinity, -1.96]</p>
<p>B. [1.96, + infinity)</p>
<p>C. (- infinity, -1.64]</p>
<p>D. [1.64, + infinity)</p>
<p>E. (- infinity, -1.96] ∪ [1.96, + infinity)</p>
<p>I believe the answer would be A, because it is a left-tailed test, since the test statistic is negative.</p>
<p>
4. Based on the evidence can you make a decision for the sociologist.</p>
<p>I believe you do not reject Ho.</p>
|
<ol>
<li><p>This would actually be A since the alternative hypothesis is that the household size has decreased. </p></li>
<li><p>Test statistic = (observed − expected)/(standard deviation/sqrt(75)).
This is (2.98 − 3.14)/(0.82/8.66), which = −1.69.</p></li>
</ol>
<p>check your parentheses with this calculation. </p>
<ol start="3">
<li>You have a left-tail test. The rejection region would be t less than or equal to the critical t value for your alpha level (5%). You can calculate the critical t values with free online calculators like this: <a href="http://www.mathcracker.com/t_critical_values.php" rel="nofollow noreferrer">http://www.mathcracker.com/t_critical_values.php</a></li>
</ol>
<p>This website could be helpful for you to visualize left and right tail tests: <a href="https://onlinecourses.science.psu.edu/stat500/node/44" rel="nofollow noreferrer">https://onlinecourses.science.psu.edu/stat500/node/44</a></p>
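<p>To double-check the arithmetic (and the parenthesis placement the answer warns about), the statistic and the one-sided critical value can be computed directly. A sketch in Python with scipy, using the numbers from the problem:</p>

```python
from math import sqrt

from scipy import stats

# Values from the problem
mu0, xbar, s, n = 3.14, 2.98, 0.82, 75

# Test statistic: t = (sample mean - hypothesized mean) / (s / sqrt(n))
t_stat = (xbar - mu0) / (s / sqrt(n))   # about -1.69 (choice A)

# Left-tailed critical value at the 5% level, t distribution with n - 1 df
t_crit = stats.t.ppf(0.05, n - 1)       # about -1.67

reject = t_stat <= t_crit               # the statistic falls in the rejection region
```

<p>Note that the exact t critical value with 74 degrees of freedom (about −1.67) differs slightly from the normal-approximation value of −1.64 quoted in the multiple-choice options.</p>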
| 318
|
hypothesis testing
|
Design/invention of Statistical Tests
|
https://stats.stackexchange.com/questions/246253/design-invention-of-statistical-tests
|
<p>I was wondering how statisticians come up with the statistical tests used in hypothesis testing and the corresponding tables/distributions? Take the Wilcoxon rank-sum test, for example. Any references with concrete examples would be appreciated.</p>
| 319
|
|
hypothesis testing
|
Which statistical test should I apply?
|
https://stats.stackexchange.com/questions/248544/which-statistical-test-should-i-apply
|
<p>I have 2 sets of data.</p>
<p>First set is historical data and samples are taken up to a date. It has 3000 samples.</p>
<p>Second set of new samples are taken after that particular date. It has 400 samples.</p>
<p>I want to compare these two sets of data statistically.</p>
<p>Which test should I apply? Student-T or Z test?
How can I do it in MATLAB?</p>
|
<p>You can perform Welch's $t$-test in MATLAB using the function <a href="https://ch.mathworks.com/help/stats/ttest2.html" rel="nofollow noreferrer"><code>ttest2</code></a>. Welch's $t$-test is a version of Student's $t$-test adapted to the case where variances and/or sample sizes may not be equal (both populations should still have a normal distribution, however!). See here:</p>
<p><a href="https://en.wikipedia.org/wiki/Welch's_t-test" rel="nofollow noreferrer">Wikipedia: Welch's $t$-test</a></p>
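<p>For readers without MATLAB, the same test can be sketched in Python with scipy; the sample values below are simulated placeholders, not the asker's data:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-ins for the two samples described in the question
historical = rng.normal(loc=10.0, scale=2.0, size=3000)
recent = rng.normal(loc=10.0, scale=2.0, size=400)

# equal_var=False gives Welch's t-test, the analogue of MATLAB's
# ttest2(x, y, 'Vartype', 'unequal')
t_stat, p_value = stats.ttest_ind(historical, recent, equal_var=False)
```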
| 320
|
hypothesis testing
|
probability of data ratio increasing
|
https://stats.stackexchange.com/questions/251393/probability-of-data-ratio-increasing
|
<p>I have a data set of control and paired test values in which the control variability can be relatively high. I'd like to determine whether the test values have increased relative to controls. Specifically, I wanted to examine the increase as a function (percentage/ratio) of the control value (i.e. divide observations by the individual control values). I am having trouble coming up with an appropriate statistical test.</p>
<p>To clarify, let's consider the following data example. Control/test values are paired.</p>
<pre><code>Controls: 0.5 1 2
Tests: 1 2 3.5
</code></pre>
<p>In this case, the test values have nearly all doubled relative to controls. Normalizing test values to each individual control, would yield:</p>
<pre><code>Norm Test: 2, 2, 1.75
</code></pre>
<p>In general, the question is how to determine if the test values have increased relative to the controls, whether that be through normalization or some test done on the original (non-normalized) data.</p>
|
<p>The ratio can't differ from 1 without the difference differing from 0. Whether you would have more statistical power to detect differences or ratios will depend on the nature of the distributions of those quantities. A $t$-test would be appropriate if they are normal, or sufficiently close and you have a lot of data. However, searching through various possible quantities to test, and transformations of them, is not generally advised (cf., <a href="https://stats.stackexchange.com/q/121852/">here</a>). As a result, I would just use the values as they are and use a test that does not depend on the distribution. More specifically, I would use the <a href="https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test" rel="nofollow noreferrer">Wilcoxon signed rank test</a> for your data. </p>
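<p>As an illustration, the signed-rank test on the three pairs from the question can be run in Python with scipy. Note that with only three pairs the smallest attainable one-sided p-value is 1/8 = 0.125, so no result can reach significance at the 5% level; in practice you would need more pairs:</p>

```python
from scipy import stats

controls = [0.5, 1.0, 2.0]
tests = [1.0, 2.0, 3.5]

# Paired test on the differences test - control; 'greater' asks whether
# the test values tend to exceed the controls
res = stats.wilcoxon(tests, controls, alternative='greater')
# res.pvalue is 0.125: every difference is positive, but three pairs
# cannot produce anything smaller
```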
| 321
|
hypothesis testing
|
Choosing the right regression model
|
https://stats.stackexchange.com/questions/252302/choosing-the-right-regression-model
|
<p>I'm trying to see whether the number of salesmen per customer entering the store affects the sales amount, using a simple OLS regression. Which is the better model to test the hypothesis? Can you also tell me why?</p>
<p>Model 1: sales = B1 + B2 × (salesmen/customers entering the store)</p>
<p>Model 2: sales = B1 + B2 × salesmen + B3 × customers entering the store + B4 × (salesmen/store size)</p>
|
<p>Split your sample in two: training and validation. Estimate both models on the training set and pick the model that performs best (in mean squared error) on the validation dataset. </p>
| 322
|
hypothesis testing
|
Are significance level and critical value the same thing in hypothesis testing?
|
https://stats.stackexchange.com/questions/252655/are-significance-level-and-critical-value-the-same-thing-in-hypothesis-testing
|
<p>It seems to me that the alpha value is used behind both concepts. It is the cut off point where you determine whether to reject the null hypothesis or not.</p>
<p>So why are there two names for the same concept?</p>
|
<p>They are not the same concept. They are, however, related.</p>
<p>For a simple null hypothesis, your significance level is the type I error rate that you choose, which is the long-run proportion of times you would reject the null hypothesis when the null hypothesis was true (and the other assumptions all held true).</p>
<p>(When the type I error rate differs across different parts of the null space - as with a compound null hypothesis - it's the largest type I error rate under the null.)</p>
<p>The critical value is the value of the test statistic that marks the boundary of your rejection region. It's the least "extreme" value of the test statistic that is still in the rejection region (i.e. the value which would cause you to <em>just</em> reject). Any test statistic that is more extreme (less consistent with the null hypothesis in the direction of the alternative) will be in the rejection region and any that is less extreme (more consistent with the null than this) will not be in the rejection region.</p>
<p>The critical value is the most extreme (in the above sense) value available that would lead to a rejection region whose total probability under the null doesn't exceed the desired type I error rate. The actual type I error rate you get* with using that critical value will be your significance level.</p>
<p>(* or again, with a complex null, the largest of the rates you can get)</p>
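<p>The relationship between the two can be illustrated numerically for a one-sided z test; a small Python sketch:</p>

```python
from scipy import stats

alpha = 0.05  # chosen significance level (type I error rate)

# Critical value for an upper-tail z test: the boundary of the rejection
# region implied by alpha
z_crit = stats.norm.ppf(1 - alpha)    # about 1.645

# Going the other way recovers the type I error rate from the critical value
alpha_back = stats.norm.sf(z_crit)
```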
| 323
|
hypothesis testing
|
Why aren't cox regression models validated against independent test sets in medical literature
|
https://stats.stackexchange.com/questions/254620/why-arent-cox-regression-models-validated-against-independent-test-sets-in-medi
|
<p>It has been the standard in many machine learning journals for many years that models should be evaluated on a test set that is identically distributed but sampled independently of the training data, and authors report averages over many iterations of random train/test partitions of a full dataset.</p>
<p>When looking at epidemiology research papers (e.g. risk of future stroke given lab results), I see that a huge proportion of papers build Cox proportional hazards models, from which they report hazard ratios, coefficients, and confidence intervals directly from a single training of a model, and do not evaluate the accuracy of the model on an independent test set. Is this, in general, reasonable? </p>
|
<p>Finding independent survival datasets in the public domain for validation is often quite difficult. In addition to requiring all the same features, you need to find a dataset with time and event information. Many studies don't collect this information, and if they do, they probably already did the survival analysis, and your study is therefore less novel. </p>
| 324
|
hypothesis testing
|
Interpreting hypothesis testing result (assuming that the null hypothesis is true)
|
https://stats.stackexchange.com/questions/255169/interpreting-hypothesis-testing-result-assuming-that-the-null-hypothesis-is-tru
|
<p>I have a doubt on how to interpret a result of a hypothesis test. For example, a scenario where I have an existing configuration and also a new configuration. I am trying to check if with the new configuration the program is faster.</p>
<p>The execution of the program in the existing configuration is 70.20 and in the new configuration is 65.10.</p>
<p>My hypothesis is</p>
<p>$H_0:$ The old configuration is better than or the same as the new configuration ($\mu \ge 0$)</p>
<p>$H_1:$ The new configuration is faster ($\mu < 0$)</p>
<p>And I get a p-value of 3%.</p>
<p>Does this mean that getting 65.10 when the null hypothesis is true would be unlikely, so we reject the null hypothesis? Or does getting 65.10 somehow make the null hypothesis true? I'm not understanding very well this part about assuming that the null hypothesis is true.</p>
| 325
|
|
hypothesis testing
|
hypothesis testing for this simple problem
|
https://stats.stackexchange.com/questions/257413/hypothesis-testing-for-this-simple-problem
|
<p>I want to know whether my system is functional or not based on 30 trials. I have one group of 30 trials; the variable is categorical (success or fail), and in 30 trials the system had 30 successes.</p>
<p>How do I do the hypothesis testing? I feel like I'm getting it very wrong. </p>
<p>Can I say this, using a binomial test?<br>
Null hypothesis: the probability of success is 1, which means the system is functional.<br>
Alternative hypothesis: the probability of success is not 1, which means the system is not functional.</p>
<p>Using a 5% significance level, I compute a critical value with n = 30 equal to 30 × 0.05 = 1.5. The number of trials in which the system fails is 0, and since 0 < 1.5 I conclude that the null hypothesis is accepted and the alternative hypothesis is rejected. </p>
|
<p>Under H0, the probability of success in each run equals the probability of failure, i.e. $p = 0.5$. The probability of obtaining $N$ successes in $N$ runs is then $0.5^N$; in your case the p-value is $0.5^{30} \approx 9.3 \times 10^{-10}$. </p>
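<p>This figure can be reproduced with an exact binomial test (scipy ≥ 1.7 provides <code>binomtest</code>):</p>

```python
from scipy import stats

# 30 successes in 30 trials, against H0: P(success) = 0.5
res = stats.binomtest(30, n=30, p=0.5, alternative='greater')
p_value = res.pvalue    # equals 0.5 ** 30, about 9.3e-10
```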
| 326
|
hypothesis testing
|
Negative t-value
|
https://stats.stackexchange.com/questions/258169/negative-t-value
|
<p>I have run a test on a treatment where the mean (of a particular performance measure) after treatment is greater than before, and the standard deviation has decreased. My t value is around −22, and I have not encountered a negative t value before. Should I just use the absolute value and check it against the t-table, i.e. compare 22 against the critical value for my df and confidence level? </p>
<p>Does a negative t reflect a positive or negative effect?</p>
|
<p>The t distribution is symmetric and centered around 0, so it is perfectly possible to get a negative t statistic. What was the null hypothesis?</p>
| 327
|
hypothesis testing
|
Can we use t-test for large sample..??
|
https://stats.stackexchange.com/questions/260270/can-we-use-t-test-for-large-sample
|
<p>How can I use a t-test? My sample size is 368; the samples are dependent in nature, as they were collected from the same population at two points in time, and the population s.d. is unknown.</p>
|
<p>This is a <a href="https://en.wikipedia.org/wiki/Student's_t-test#Paired_samples" rel="nofollow noreferrer">paired t-test (ref Wikipedia)</a>. It forms one sample by examining the before-and-after differences.</p>
<p>To implement this in Excel, use the code <code>=TTEST()</code> with the <code>type</code> option (last argument) equal to 1 for paired data, i.e. <code>=TTEST(A1:A368,B1:B368,2,1)</code></p>
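<p>As a cross-check outside Excel, the same paired test can be sketched in Python; the before/after data below are simulated placeholders, not the asker's measurements:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative before/after measurements on the same 368 subjects
before = rng.normal(50, 10, size=368)
after = before + rng.normal(1, 5, size=368)   # simulated mean shift of 1

# Paired t-test on the before-and-after differences
t_stat, p_value = stats.ttest_rel(before, after)
```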
| 328
|
hypothesis testing
|
What is the right hypothesis test for this problem?
|
https://stats.stackexchange.com/questions/259126/what-is-the-right-hypothesis-test-for-this-problem
|
<p>I would like to discuss and analyze what is the best hypothesis test for this problem: </p>
<blockquote>
<p>We have data with the distance that each football player of each team runs in a match. Now we want to find two teams with the most similar pattern (by comparing any combination of teams). </p>
</blockquote>
<p>I agree the word "similar" is not clear in the above statement. My interpretation is to find a hypothesis test that helps decide whether two teams' distances come from the same underlying distribution.</p>
<p>Then, to formalize the question:
suppose x and y are vectors representing the distances that the players of each team run. Also, suppose the players are sorted by position, from goalkeeper to forward. To prevent confusion, let's assume there are only 11 players and we are not considering substitutes.</p>
<p>Null hypothesis (H0): x and y do not have the same distribution</p>
<p>Alternative hypothesis (Ha): x and y have the same distribution</p>
<p>Then we need to find a test such that the returned p-value is less than a significance level alpha (for example alpha = 0.05), so that we have evidence against the null hypothesis and (Ha) can be accepted.</p>
<p>I guess the data are paired (dependent) because we are comparing two groups with the same structure; however, I am not sure about it.</p>
<p>Link to previous related question: <a href="https://stats.stackexchange.com/q/259112/96725">Interpretation of p-value in Mann-Whitney rank test</a></p>
|
<p>"<em>Now we want to find two teams with the most similar pattern</em>" doesn't seem to be a hypothesis testing problem.</p>
<p>Once you define what "most similar" among pairs of patterns for teams is (or conversely, what most dissimilar is), it seems to be a matter of calculation to find the most (or least, if you measure dissimilarity) extreme case.</p>
<p>[If you had <em>one</em> pair for comparison, you might do an equivalence test (rather than a more typical hypothesis test), but you might end up with several that are equivalent, or none; it doesn't pick a "most".]</p>
| 329
|
hypothesis testing
|
statistical test for 3 response answer satisfied , not satisfied , can't say
|
https://stats.stackexchange.com/questions/264122/statistical-test-for-3-response-answer-satisfied-not-satisfied-cant-say
|
<p>The survey is about different online payment methods; for each method, respondents reply that they are satisfied, not satisfied, or can't say. Which test should be used for this question?</p>
|
<p>It's not completely clear what you are trying to do, but if your dependent variable is the satisfaction measure, then I think ordinal logistic regression would be a good starting point. Or perhaps classification trees. Or multinomial logistic. Or random forests. Or a neural network might work best. </p>
| 330
|
hypothesis testing
|
Understanding statistical hypothesis tests in paper
|
https://stats.stackexchange.com/questions/264696/understanding-statistical-hypothesis-tests-in-paper
|
<p>I'm a stats noob, so I don't really understand the statistic tests that the authors Ross, Greene, and House use to justify their results in their paper <a href="http://web.mit.edu/curhan/www/docs/Articles/biases/13_J_Experimental_Social_Psychology_279_%28Ross%29.pdf" rel="nofollow noreferrer">"The 'false consensus effect'"</a>.</p>
<p>For example, in their first study they are looking at how personal agreement affects how much a participant thinks other people would agree or disagree. It gives the counts for agreement and disagreement for four scenarios, the average estimated percentage of peers that participants would say that would agree or disagree, and a mysterious "F" value (page 283). Additionally, when discussing the results they say the following (page 284):</p>
<blockquote>
<p>When each story was treated as a “fixed” variable in an analysis of variance combining the data for all four stories, the main effect of Rater’s Choice
was highly significant, F(1, 312) = 49.1, p < .001, while the Story x Rater’s
Choice interaction was trivial, F(1, 312) = 1.37, p > .10.</p>
</blockquote>
<p>This is probably a very basic statistics question, but I have no context for what this "F" function is, and where they are getting the numbers they are feeding into it. Thank you!</p>
|
<p>The paper includes the phrase "F-ratio". This is almost certainly the <a href="http://www.statisticshowto.com/f-statistic/" rel="nofollow noreferrer">F-statistic</a> produced during analysis of variance <a href="https://en.wikipedia.org/wiki/Analysis_of_variance" rel="nofollow noreferrer">(ANOVA)</a>. The <a href="https://en.wikipedia.org/wiki/Analysis_of_variance#The_F-test" rel="nofollow noreferrer">F-ratio test</a> takes the ratio $F = \frac{\text{variance between treatments}}{\text{variance within treatments}}$ and uses a lookup table (i.e., a conversion between the F-statistic and a probability) to find the probability that the explanatory value of the treatments could occur by chance alone. In the paper, the notation $F(d_1, d_2)$ gives the numerator and denominator degrees of freedom, as implied on page 285 by comparing the text with the footnote on that page.</p>
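<p>The F-to-p conversion that the paper's lookup table performs can be reproduced directly from the reported values, using the survival function of the F distribution:</p>

```python
from scipy import stats

# stats.f.sf(F, numerator df, denominator df) gives P(F' > F) under the null
p_main = stats.f.sf(49.1, 1, 312)         # main effect of Rater's Choice
p_interaction = stats.f.sf(1.37, 1, 312)  # Story x Rater's Choice interaction
# p_main is far below .001 and p_interaction is above .10,
# matching the paper's "p < .001" and "p > .10"
```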
| 331
|
hypothesis testing
|
What kind of Non-parametric test to use when huge difference between two sample sizes?
|
https://stats.stackexchange.com/questions/265360/what-kind-of-non-parametric-test-to-use-when-huge-difference-between-two-sample
|
<p>I have Type I CD protein data consisting of 27 enzymes and 217 non-enzymes. I want to determine whether there is a significant difference in length between enzymes and non-enzymes. What type of non-parametric test should I use when there is a huge difference between the sample sizes of the two groups?</p>
|
<p>Use whichever nonparametric test is suited for your particular null and alternative. </p>
<p>None of the usual tests suitable for two independent samples will care that one sample is larger than the other.</p>
<p>Note that "difference in the length" is sort of vague -- if that's as specific as you can be, I'd lean toward a general one like a Wilcoxon Mann-Whitney (I presume you have ties though), but if you have a particular measure of location you're interested in that can be done fairly easily (e.g. via permutation tests).</p>
<p>So you if you want to see if there's a difference in mean length, you could do that with a nonparametric test (or by adding some assumptions to an existing test). </p>
<p>[What's perhaps less immediately clear, though, is how to construe this as a suitable situation for a hypothesis test in terms of random selection from populations (since clearly there's no random assignment to treatment); it seems to be fairly standard to just charge ahead regardless. I hope someone has constructed appropriate justifications <em>somewhere</em>.]</p>
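<p>A sketch of the Wilcoxon-Mann-Whitney option in Python, with simulated lengths standing in for the real data; the unequal group sizes (27 vs 217) are no obstacle, and ties are handled via the tie-corrected approximation:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-ins for the protein lengths: 27 enzymes vs 217 non-enzymes
enzyme_len = rng.integers(100, 600, size=27)
non_enzyme_len = rng.integers(100, 600, size=217)

u_stat, p_value = stats.mannwhitneyu(enzyme_len, non_enzyme_len,
                                     alternative='two-sided')
```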
| 332
|
hypothesis testing
|
If $p\text{-value}<\alpha$ does the observed test statistic always belong to the critical region?
|
https://stats.stackexchange.com/questions/265584/if-p-text-value-alpha-does-the-observed-test-statistic-always-belongs-to-cr
|
<p>Assume we carry out a hypothesis test at the 5% significance level. We have an observed test statistic $t$ with calculated p-value $0.03$. Does that imply that the observation has to lie in the critical region? I mean, $3\%$ of the distribution is at least as extreme, and the critical region is the most extreme 5% of the distribution, so $t$ must be contained in the critical region?</p>
|
<blockquote>
<p>If p-value < α, does the observed test statistic always belong to the critical region?</p>
</blockquote>
<p>Yes, that's right. </p>
<p>(It doesn't depend on whether the t-test is appropriate as suggested in comments -- the appropriateness of the assumptions doesn't come into this at all; this is a question of the decision you make when presented with a p-value. The appropriateness of the assumptions would matter when interpreting the p-value and it would matter in relation to the decision-process yielding the properties you desire, but none of that is at issue.)</p>
<blockquote>
<p>I mean 3% of the distribution is at least as extreme and the critical region is the most extreme 5% of the distribution</p>
</blockquote>
<p>This is correct. Anything up to (and including) <em>5%</em> is at least as extreme as 5%. </p>
| 333
|
hypothesis testing
|
Which test to use for generalizations
|
https://stats.stackexchange.com/questions/267033/which-test-to-use-for-generalizations
|
<p>I'm currently conducting research in linguistics. The goal is to show audience preferences regarding different translation strategies in subtitling. The experimental design has one independent variable with two levels (2 different translations of the same clip), and the dependent variable is the reception (like it, don't like it). The expected number of participants is 100. As I am a complete novice in statistics, which test would you suggest using to test the hypothesis and make generalizations?</p>
|
<p>You will get a 2x2 table as a result of the experiment. If each person in your sample sees only one version, you could use the chi-square test.</p>
<p>If instead you have the same 100 people watch both translations and rate them, you might use McNemar's test.</p>
<p>You could read more here: <a href="https://stats.stackexchange.com/questions/76875/what-is-the-difference-between-mcnemars-test-and-the-chi-squared-test-and-how">What is the difference between McNemar's test and the chi-squared test, and how do you know when to use each?</a></p>
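<p>For the independent-groups design, a sketch of the chi-square test in Python; the counts below are made up purely for illustration:</p>

```python
import numpy as np
from scipy import stats

# Hypothetical counts: 50 viewers per version, liked / didn't like
#                  liked  didn't
table = np.array([[35, 15],    # translation A
                  [22, 28]])   # translation B

# Chi-square test of independence on the 2x2 table
chi2, p_value, dof, expected = stats.chi2_contingency(table)
```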
| 334
|
hypothesis testing
|
hypotheses testing
|
https://stats.stackexchange.com/questions/188433/hypotheses-testing
|
<p>I'm new to statistics and am not very comfortable with it yet. This may be a very simple question, but I'm finding it very difficult to understand.
In research papers the hypotheses are mostly stated in a particular direction,
like "satisfaction is positively related to customer retention".
If this particular hypothesis is rejected, does it mean that satisfaction is negatively related to customer retention, or that satisfaction is unrelated to customer retention?
Please clarify. </p>
|
<p>Then all you know is that you failed to show that satisfaction is positively related to customer retention; it does not follow that the relationship is negative, nor that there is no relationship at all. In statistical terms, the directional research hypothesis is usually the alternative H1, with H0 being "no positive relation". Failing to reject H0 only means "we could not find evidence for H1"; it is not proof of H0 (refer to the power of the test).</p>
| 335
|
hypothesis testing
|
How to test whether the average return on S(USD/AUD) of the last 30 days is significantly different from zero at the 5% level of significance?
|
https://stats.stackexchange.com/questions/188948/how-to-test-whether-the-average-return-on-susd-aud-of-the-last-30-days-is-sign
|
<p>I have daily returns on S(USD/AUD). How can I test whether the average return over the last 30 days is significantly different from zero at the 5% level of significance?</p>
|
<p>You may be looking for a so-called "HAC"-test, a "heteroskedasticity and autocorrelation consistent" test for $\mu=\mu_0$:
$$
t_{HAC}=\frac{\bar{Y}_T-\mu_0}{\sqrt{\frac{\sum_{j=-\infty}^{\infty}\gamma_j}{T}}}
$$
We can then approximately (i.e. for $T$ sufficiently large) argue that, under $H_0:\mu=\mu_0$
$$
t_{HAC}\stackrel{a}{\sim}N(0,1)
$$
How can we estimate $J=\sum_{j=-\infty}^{\infty}\gamma_j$? In practice, for a sample size $T$, we can calculate autocovariances up to order at most $T-1$. A plausible idea then is
$$
J_T\equiv\hat{\gamma}_0+2\sum_{j=1}^{T-1}\hat{\gamma}_j
$$
Clearly the higher order terms will be estimated from very few observations. It turns out that using all possible $\hat{\gamma}_j$ as above leads to an inconsistent estimator. </p>
<p>Consistent estimators obtain for so-called "nonparametric" <em>kernel</em> estimators
$$
\hat{J_T}\equiv\hat{\gamma}_0+2\sum_{j=1}^{T-1}k\left(\frac{j}{\ell_T}\right)\hat{\gamma}_j
$$
$k$ is a kernel or weighting function, that among other things must be symmetric and have $k(0)=1$. $\ell_T$ is a bandwidth parameter that has to be chosen "appropriately".</p>
<p>The literature proposes a variety of choices for $k$. A popular one is the <em>Bartlett</em> kernel
$$k\left(\frac{j}{\ell_T}\right) = \begin{cases}
\bigl(1 - \frac{j}{\ell_T}\bigr)
\qquad &\mbox{for} \qquad 0 \leqslant j \leqslant \ell_T-1 \\
0 &\mbox{for} \qquad j > \ell_T-1
\end{cases}
$$
It is often the case that the choice of $k$ does not matter too much. Choosing $\ell_T$ appropriately is more important. Again, there are many rules (<a href="https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwjs-ZSxo5DKAhUG4w4KHcJ8BGIQFgguMAE&url=http%3A%2F%2Fwww.ssc.wisc.edu%2F~kwest%2Fpublications%2F1990%2FAutomatic%2520Lag%2520Selection%2520in%2520Covariance%2520Matrix%2520Estimation.pdf&usg=AFQjCNEREpJozDPteukQ2zqnGRRysRPjtA" rel="nofollow">Newey and West, Review of Economic Studies 1994</a>). An easy one is
$$
\ell_T=\lfloor 4(T/100)^{2/9}\rfloor
$$ </p>
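<p>A minimal sketch of this estimator in Python, using the Bartlett kernel and the bandwidth rule above (small-sample refinements such as prewhitening are omitted, and the illustration data are simulated, not actual exchange-rate returns):</p>

```python
import numpy as np

def hac_t_stat(y, mu0=0.0):
    """t statistic for H0: mean = mu0, with a Bartlett-kernel HAC variance."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    d = y - ybar
    # Bandwidth rule quoted above: floor(4 * (T/100)^(2/9)), at least 1
    ell = max(int(4 * (T / 100.0) ** (2.0 / 9.0)), 1)
    # Sample autocovariances gamma_0 .. gamma_{ell-1}
    gamma = [d[: T - j] @ d[j:] / T for j in range(ell)]
    # Bartlett weights k(j/ell) = 1 - j/ell for 0 <= j <= ell - 1
    J_hat = gamma[0] + 2 * sum((1 - j / ell) * gamma[j] for j in range(1, ell))
    return (ybar - mu0) / np.sqrt(J_hat / T)

# Illustration on simulated data
rng = np.random.default_rng(0)
t_obs = hac_t_stat(rng.normal(size=200))
```

<p>Under the null, <code>t_obs</code> is compared against standard normal critical values, per the asymptotic result above.</p>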
| 336
|
hypothesis testing
|
How to Compare Mortality Rates Among 5 Groups
|
https://stats.stackexchange.com/questions/189639/how-to-compare-mortality-rates-among-5-groups
|
<p>What statistical test would I use to compare differences in mortality among 5 independent groups? I know chi square can be used for comparing 2 groups.</p>
|
<p>You want to see whether the distribution of frequencies among the five groups is consistent with a discrete uniform distribution (null hypothesis) or whether they are different enough to reject the null. To do so, you can use a chi-squared goodness-of-fit test.</p>
<p>If you are using R, you can do something like this:</p>
<pre><code>group_mortality <- c(15, 21, 11, 09, 22)
expected_freq <- rep(1/5,5)
(ch_sq <- chisq.test(group_mortality, p= expected_freq))
Chi-squared test for given probabilities
data: group_mortality
X-squared = 8.6667, df = 4, p-value = 0.06999
</code></pre>
<p>In a case like this the $\chi^2$ value wouldn't be extreme enough to reject the null with a risk alpha of $<5\%$.</p>
| 337
|
hypothesis testing
|
Proper way to test hypothesis of random selection?
|
https://stats.stackexchange.com/questions/193301/proper-way-to-test-hypothesis-of-random-selection
|
<p>Suppose I have $N$ urns, each containing various mixes of red and green balls.
A subject is to make a random selection without replacement of $M$ balls from each of the urns, whereupon a count is made of the red and green for that urn, resulting in a count of these for each urn.</p>
<p>The hypothesis is the selection was random, vs the subject peeking and selecting one color preferentially over all the urns.</p>
<p>Would summing the probabilities for all possible selection permutations over the urns (that total to the grand total of the subject's selections or more extreme) be a proper significance test, or should <em>each</em> selection be tested that way and some kind of multiple test correction/meta-analysis be used to arrive at an overall result, or...?</p>
<p>Edit: A toy example for clarification:</p>
<p>Suppose there are only two urns. Urn 1 has 10 red & 10 green while urn 2 has 12 red & 13 green. Then subject makes 5 draws without replacement from each urn, and reports 2 red for urn 1, 0 red for urn 2. The low value of red count raises suspicion that subject peeked, and picked green preferentially.</p>
<p>Using the first test idea, I take the possible permutations of urn counts that could lead to a total of 2 red or less - {{0, 0}, {1, 0}, {0, 1}, {2, 0}, {0, 2}, {1, 1}}, calculate the individual probability products, and sum those, arriving at the probability of ~0.04 of getting 2 or fewer total red for the draws if the draws were actually random.</p>
<p>For the second, I'd calculate the probability of getting 2 or fewer for urn 1 (0.5), and that of getting zero for urn 2 (~0.024) and do a meta-analysis on those p-values (say using Fisher's method), getting ~0.07.</p>
<p>Both methods seem reasonable, but arrive at opposing results for significance - the first is significant at the 0.05 level, the second is not.</p>
<p>Thoughts?</p>
|
<p>Summing over all the probabilities that are as or more extreme is perfectly sufficient here, but there are a few things to keep in mind. First, you have to be careful about what you mean by "as or more extreme" here. If you have a particular reason to believe a cheater would prefer picking green over red, then the probability that two or fewer red would be chosen by chance may suffice, but often you will want to use two-sided p-values: i.e. what are the chances that two or fewer red or green would be picked by random chance? (which here would give a p value greater than 5%)</p>
<p>Another important point is, as djma mentioned, that a .05 p-value threshold is arbitrary and has created some serious problems for the reproducibility of scientific experiments. If you set a p-value of .05 as the bar, you will reject on average one of every twenty true null hypotheses! So really, you should base your threshold for significance on a number of factors, particularly:</p>
<ul>
<li>How bad would it be to reject a true null hypothesis? (In your toy model, this would be considering the consequences of accusing someone falsely of cheating)</li>
<li>How bad would it be to accept a false null hypothesis? (In your toy model, this would be the consequences of failing to catch a cheater)</li>
<li>Prior probabilities: a priori, how likely do you consider it is that the null hypothesis is false?</li>
<li>Are you subject to the look-elsewhere effect? (In your toy model, if you are a casino subjecting many people to this test, at a significance level like .05 you are bound to falsely accuse many of cheating, so a stricter level would be warranted; a less strict level may be reasonable if this is a one-off thing)</li>
<li>Many more points I'm sure I'm not thinking of.</li>
</ul>
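Returning to the toy example in the question, the first method (summing the probabilities of all outcomes as or more extreme than 2 total red) can be sketched with nothing but the standard library; a rough Python illustration of that calculation:

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    # P(exactly k red) when drawing n balls without replacement
    # from an urn of N balls, K of which are red
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Urn 1: 10 red / 10 green; urn 2: 12 red / 13 green; 5 draws from each.
# One-sided p-value: probability of 2 or fewer red in total.
p_total = sum(
    hypergeom_pmf(r1, 20, 10, 5) * hypergeom_pmf(r2, 25, 12, 5)
    for r1 in range(3)
    for r2 in range(3 - r1)   # all pairs with r1 + r2 <= 2
)
print(p_total)  # about 0.042, matching the ~0.04 reported in the question
```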
| 338
|
hypothesis testing
|
How to determine Uniformly most powerful test?
|
https://stats.stackexchange.com/questions/199027/how-to-determine-uniformly-most-powerful-test
|
<p>In practice, how do you find the uniformly most powerful test? Would you essentially brute-force all possible hypothesis tests?</p>
<p>Could we prove that there exists a uniformly most powerful test and the one that we are using is sub-optimal?</p>
|
<p>For the case of testing simple hypotheses, there's the <a href="https://en.wikipedia.org/wiki/Neyman%E2%80%93Pearson_lemma" rel="nofollow">Neyman–Pearson lemma</a>, whereas for the case of composite hypotheses we have the Karlin–Rubin theorem, which is a bit limiting (to scalar parameters and scalar measurements). There are probably more general results than the Karlin–Rubin theorem, but unfortunately they're unknown to me.
I recommend having a look at the book by E.L. Lehmann and J.P. Romano, <em>Testing Statistical Hypotheses</em> (the whole 3rd chapter is about UMP tests, at least in the 3rd edition).</p>
<p>Let's have a glance at the Neyman–Pearson lemma and go through a working example that should give you some insight into how to construct such a UMP test <strong>in some cases</strong>:</p>
<p>We're considering a random variable $X \sim \mathcal{N}(0, \sigma^2)$ and a one-sided test:
$$H_0: \sigma^2 \leq \sigma_0^2 ~~~ \text{against} ~~~ H_A: \sigma^2 > \sigma_0^2 $$
We have a sample of $n$ independent random variables of common distribution
$$X_1, \dots, X_n \sim \mathcal{N}(0, \sigma^2)$$</p>
<p>Now, we're computing a likelihood ratio for this sample:
$$
\frac{L(\sigma_2^2|X_1, \dots, X_n)}{L(\sigma_1^2|X_1, \dots, X_n)} = \frac{\frac{1}{(\sqrt{2 \pi\sigma_2^2})^n} \cdot \exp(-\frac{1}{2\sigma_2^2} \sum\limits_{k=1}^n X_k^2)}{\frac{1}{(\sqrt{2 \pi\sigma_1^2})^n} \cdot \exp(-\frac{1}{2\sigma_1^2} \sum\limits_{k=1}^n X_k^2)}
$$</p>
<p>$$
\frac{L(\sigma_2^2|X_1, \dots, X_n)}{L(\sigma_1^2|X_1, \dots, X_n)} =
(\frac{\sigma_1}{\sigma_2})^n \cdot
\exp[(\frac{1}{2\sigma_1^2} - \frac{1}{2\sigma_2^2})
\cdot \sum\limits_{k=1}^n X_k^2]
$$
In this form, we clearly see that the likelihood ratio is monotonically increasing with respect only to the statistic
$$
T = \sum\limits_{k=1}^n X_k^2
$$</p>
<p>Using the Neyman–Pearson lemma (see this particular form: <a href="http://ocw.mit.edu/courses/economics/14-381-statistical-method-in-economics-fall-2013/lecture-notes/MIT14_381F13_lec10.pdf" rel="nofollow">Theorem 1: Neyman–Pearson Lemma</a>) it can be said that there's a critical region of the form
$$
C = \{ (X_1, \dots, X_n) | \sum\limits_{k=1}^n X_k^2 \geq CritVal_{\alpha}\}
$$
for a Uniformly Most Powerful Test with significance level of $\alpha$.</p>
<p>Now, we must only find a critical value for given $\alpha$ level.
It's easy to see that
$$
\frac{1}{\sigma_0^2} \sum\limits_{k=1}^n X_k^2 \sim \chi_n^2 ~~~~
(\chi^2~\text{with $n$ degrees of freedom})
$$</p>
<p>We can introduce an auxiliary random variable $A \sim \chi_n^2$ and
find such value of $t$ satisfying
$$
P(A > t) = \alpha
$$
then we can state $CritVal_{\alpha} = t \cdot \sigma_0^2$.</p>
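As a numerical sketch (my addition, not part of the original answer), the critical value for, say, $\alpha=0.05$, $n=10$ and $\sigma_0^2=1$ can be found with only the Python standard library, using the closed-form chi-square survival function for even degrees of freedom plus bisection (scipy.stats.chi2.ppf would give the same number directly):

```python
import math

def chi2_sf_even(x, n):
    """P(chi2_n > x) for even n = 2m, via the closed Poisson-sum form."""
    m = n // 2
    return math.exp(-x / 2) * sum((x / 2) ** k / math.factorial(k) for k in range(m))

def chi2_critical(alpha, n, lo=0.0, hi=200.0):
    """Bisect for t such that P(chi2_n > t) = alpha (sf is decreasing in t)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if chi2_sf_even(mid, n) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha, n, sigma0_sq = 0.05, 10, 1.0
crit_val = chi2_critical(alpha, n) * sigma0_sq  # reject H0 when sum(X_k^2) >= crit_val
print(round(crit_val, 3))  # about 18.307
```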
<p>To sum up, we've just constructed a critical region, so we have an actual UMP test for this particular task.</p>
| 339
|
hypothesis testing
|
Why we also consider the opposite signed value of test statistic in two tailed test to calculate P value?
|
https://stats.stackexchange.com/questions/200768/why-we-also-consider-the-opposite-signed-value-of-test-statistic-in-two-tailed-t
|
<p>Why is the p-value of a two-tailed test multiplied by 2 [= 2 × P(Z > t_cal)]? I am looking for an answer which explains the underlying reason, beyond 'because it is a two-tailed test'. Why do we also consider the opposite-signed value of the test statistic in a two-tailed test when calculating the p-value?</p>
|
<blockquote>
<p>why is the P value of a two-tailed test multiplied by 2?</p>
</blockquote>
<p>Not all two-tailed tests have the property that a p-value of a two-tailed test should be double the p-value of a one-tailed test.</p>
<p>However, tests where </p>
<p>i. the events of falling in the two tails are mutually exclusive*, and</p>
<p>ii. the distribution of the test statistic is symmetric</p>
<p>will have the property that to compute the p-value of a two-tailed test you double the smaller p-value of the two one-tailed tests. This follows because a p-value is the probability of a test statistic at least as extreme as the one you observe under the null hypothesis; the two tailed test considers alternatives in either direction so "at least as extreme" can be in either tail; given the two conditions above, you get that doubling of the one-tailed p-value (which only considers one of the tails).</p>
<p>* consider the two one-tailed Kolmogorov-Smirnov tests compared to the two-tailed version to see a case where being in either tail are not mutually exclusive events.</p>
<p>For the asymmetric case, some discussion <a href="https://stats.stackexchange.com/questions/140107/p-value-in-a-two-tail-test-with-asymmetric-null-distribution">here</a> may be relevant.</p>
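A tiny numerical illustration of the symmetric case (my addition, using only the Python standard library): with a standard normal test statistic, the two-tailed p-value is exactly the sum of the two tail areas, which by symmetry equals twice the one-tailed value.

```python
from statistics import NormalDist

z = NormalDist()   # standard normal null distribution of the test statistic
t_obs = 2.0        # observed value of the statistic

p_upper = 1 - z.cdf(t_obs)   # one-tailed: P(Z >= 2)
p_lower = z.cdf(-t_obs)      # mirror-image tail: P(Z <= -2)
p_two = p_upper + p_lower    # "at least as extreme" in either direction

print(round(p_upper, 5), round(p_two, 5))  # 0.02275 and 0.0455
```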
| 340
|
hypothesis testing
|
Is the choice of test statistics in hypothesis testing a completely philosophical one?
|
https://stats.stackexchange.com/questions/203868/is-the-choice-of-test-statistics-in-hypothesis-testing-a-completely-philosophica
|
<p>Is the choice of test statistic in hypothesis testing a completely philosophical one? In other words, is the choice of test statistic and rejection/acceptance region entirely a judgement call, not bounded by any requirement?</p>
|
<p>I am assuming that you are asking about the choice of test statistic within a specific statistical model rather than asking about the choice of statistical model. I am also assuming that you are asking about the test statistic to be used in a classical hypothesis test in the accept/reject manner.</p>
<p>The choice of test statistic is made on the basis of the properties of the resulting test. There is good reason to choose the test statistic to optimise the power to discriminate between a true and false test hypothesis, but it is also useful that the distribution of the test statistic be known. </p>
<p>Student (Gossett) wanted to devise a significance test for means from small samples. His resulting t-test uses a particular test statistic, Student's t, not because he wanted to test the ratio of the mean and standard error, but because the distribution of that test statistic is derivable. </p>
<p>Whether you wish to call the choice of test statistic a "philosophical one" depends on what you mean by that. ;-)</p>
| 341
|
hypothesis testing
|
To use the right model and analysis
|
https://stats.stackexchange.com/questions/204690/to-use-the-right-model-and-analysis
|
<p>I have a dataset, <code>data</code>, that contains <code>user</code>, <code>game_played</code>, <code>amount_spent</code> and <code>amount_won</code>. So <code>head(data)</code> gives</p>
<pre><code>user game amount_spent amount_won
14 4 186 120
14 2 200 80
10 2 65 100
</code></pre>
<p>I want to investigate why a <code>user</code> stops playing: is it because of the <code>game</code> or/and if a <code>user</code> loses? </p>
<p>What would be the right way to do this?</p>
<p>My approach would be this:
We divide <code>data</code> in two groups <code>good</code> and <code>bad</code> where <code>good</code> contains users that play for a long time whereas <code>bad</code> contains user that stop playing very fast. Then one way is to find the most popular game for a fixed user in the two group and test if there is a difference. </p>
<p>Another way is to calculate <code>amount_spent</code>-<code>amount_won</code> for users in the two group and then test if there is a difference between the two groups.</p>
<p>Is this the right approach or is there a better one?</p>
|
<p>It is worth noting that a statistician should not look for "causation" as much as "association." There is no statistical method for identifying causation. In addition, the data set does not appear to have a variable that indicates when the user begins playing a game and when he stops playing a game. A user is always going to stop playing a game at some point since nobody lives forever, so what you actually want to know is whether or not the type of game/amount spent/amount won have an association with the length of time that a user plays a specific game or with the length of time that the user plays in total between games. So, the first two points that need to be understood are these: 1) when reporting findings, report them in terms of association, not causation, and 2) a variable needs to be created that indicates the length of play.</p>
<p>Once this variable is created, you should analyze scatter plot relationships of your variables, i.e. game type versus length of play, amount spent versus length of play, etc. to see if there is an identifiable relationship between your dependent variable (length of play) and any of your independent variables. The scatter plot analysis will also help you identify what kind of relationship that the two may exhibit. For example, what if the length of play has a logarithmic relationship to amount spent? Or, what if the relationship is non-linear in such a way that it cannot be transformed into a linear regression equation without also transforming the dependent variable? Knowing these details prior to fitting a model is important. </p>
<p>Once you have made an educated guess on your relationships after visual analysis of your scatter plots, you can then fit a regression model to the relationship. I would strongly recommend keeping the model linear if at all possible due to the wealth of theory that exists to support interpretation of the model. Several transformations exist, such as exponential, quadratic, and reciprocal transformations, that will allow you to preserve the relationship in a linear format. If the relationship is truly non-linear and cannot be transformed in a way that preserves standard linear regression format (such as a power transformation), I would recommend using a generalized additive model to describe the relationship--the interpretation of such a model, though, is a different question entirely. Lastly, if you plan on creating a multiple regression model relating several independent variables to your dependent variable, I would analyze correlations between your independent variables to see if any multi-collinearity may exist, in which case you would probably want to exclude one of the highly correlated variables (although this, too, can be a separate question).</p>
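To make the transformation idea concrete, here is a minimal sketch (my addition; the data are invented and noiseless purely for illustration): if length of play grows logarithmically with amount spent, regressing on log(amount) recovers a straight line.

```python
import math

def least_squares(x, y):
    """Ordinary least squares for y ~ intercept + slope * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Hypothetical relationship: length_of_play = 1 + 2 * log(amount_spent)
amount_spent = [10, 20, 50, 100, 200, 500]
length_of_play = [1 + 2 * math.log(a) for a in amount_spent]

# After the log transform, ordinary linear regression fits perfectly.
intercept, slope = least_squares([math.log(a) for a in amount_spent], length_of_play)
print(round(intercept, 3), round(slope, 3))  # 1.0 and 2.0
```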
<p>I hope that this answer helps, and good luck!</p>
| 342
|
hypothesis testing
|
One Sided Null Hypothesis - 2 Interpretations
|
https://stats.stackexchange.com/questions/205958/one-sided-null-hypothesis-2-interpretations
|
<p>I've been reading around about hypothesis testing. I don't understand why the following one sided tests are equivalent:</p>
<p>$H_0:\mu \leq \mu_0$; $H_a:\mu > \mu_0$</p>
<p>and</p>
<p>$H_0:\mu = \mu_0$; $H_a:\mu > \mu_0$</p>
<p>Any thoughts?</p>
<p>Edit:
I think I understand why they are equivalent. Anything that's rejected by the second hypothesis will be also rejected by the first (at least for Z tests and T tests you learn about in a first course in stats). Maybe a better question to ask is -- are there scenarios where these two inferences are not equivalent?</p>
|
<p>I do not think those are equivalent and in fact I believe one of them </p>
<blockquote>
<p>H0:μ=μ0; Ha:μ>μ0
is incorrect. </p>
</blockquote>
<p>Philosophically, the 'rules' for forming the H0 and the Ha are that they be
(a) mutually exclusive and (b) exhaustive, and so I think technically that form of the null is incorrect because it's not exhaustive (e.g. it omits the case in which, using your single-sample example, the true mean is actually lower).</p>
<p>Pragmatically, you are correct that there aren't any cases where something rejected by the second version of the hypotheses won't also be rejected by the first version, because the critical value for the rejection region would go in the tail corresponding to the alternative hypothesis, leaving the entire other part of the distribution in the zone of the null. But the fact that the practical implication is invariant doesn't make the expression of the hypothesis correct (for the reason stated above: it fails one of the rules of hypothesis formation).</p>
| 343
|
hypothesis testing
|
Which Hypothesis Test should I apply in this case?
|
https://stats.stackexchange.com/questions/206240/which-hypothesis-test-should-i-apply-in-this-case
|
<p>A system produces samples. I have historical data as thousands of samples. I know the number of samples, their mean and their standard deviation. This data was collected by using the same system.</p>
<p>But recently the system was modified. And I have around 30 new samples after the system update. I know the number of new samples, their mean and their standard deviation. </p>
<p>I want to make a test by using null hypothesis and verify if the system update had any significant effect or not.</p>
<p>I’m confused by the many different types of tests. Some are used when the standard deviation of the population is not known, some use one tail, etc.
In my case I have all the historical samples and the new samples; I have the means and standard deviations of both.</p>
<p>Which test should I apply in this case? (I'm almost novice in the field)</p>
|
<p>I think that a single-sample Z-test would be appropriate, as you suggest that the samples you are in possession of have a known mean and standard deviation. Conceptually, you would be treating the population of samples which you have as a true population. Its mean would be equal to the mean of all the samples' means. Then test your new sample's mean against this population using the Z-test formula:
Z = (your sample mean - known population mean) / (standard deviation of the population / SQRT(size of new sample)). </p>
<p>This test would tell you if the new system deviates from the old one in producing samples with a different than expected mean.</p>
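A quick sketch of this computation (my addition; the numbers are invented for illustration, with the historical system treated as the population):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical values: historical samples have mean 50 and SD 8;
# after the update, n = 30 new samples have mean 53.
pop_mean, pop_sd = 50.0, 8.0
n_new, new_mean = 30, 53.0

# Z = (sample mean - population mean) / (population SD / sqrt(n))
z = (new_mean - pop_mean) / (pop_sd / sqrt(n_new))
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value

print(round(z, 3), round(p_two_sided, 3))  # z of about 2.054, p of about 0.04
```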
| 344
|
hypothesis testing
|
What is p-value in simple words and good non-mathematical examples?
|
https://stats.stackexchange.com/questions/205653/what-is-p-value-in-simple-words-and-good-non-mathematical-examples
|
<p>Can someone explain in simple words and with many good down-to-earth examples what is the p-value and how do we find it? </p>
<p>Is it true that it shows the probability that the result we have tested arose not simply by chance? Isn't the point of the alpha level to guarantee that, for instance, 95% of the time it is not by chance that our hypothesis appears true, and that for the remaining 5%, if it appears true, it is by chance?</p>
<p>I will greatly appreciate it if you can explain the difference between the p-value and the alpha level (significance level), how they are connected, what they tell me about the "real world", and give me examples! (Because any mathematical explanation seems to confuse me more) </p>
<p>Thanks in advance!</p>
<p>PS: I am aware that there are questions about p-value with good explanations out there, would then just like to have answers for the other questions.</p>
| 345
|
|
hypothesis testing
|
hypothesis test_ determine null and alternative
|
https://stats.stackexchange.com/questions/208622/hypothesis-test-determine-null-and-alternative
|
<p>What is the most suitable null and alternative hypothesis for following problem? </p>
<blockquote>
<p>It is believed that the average level of Prothrombin in a normal
population is 20 mg/100 ml of blood plasma, with a standard deviation
of 4 mg/100 ml. To verify this, a sample is taken from 40
individuals in whom the average is 18.5 mg/100 ml.</p>
</blockquote>
<p>My answer is</p>
<p><strong>H0: average level of prothrombin in a normal population = 20 mg/100ml</strong></p>
<p><strong>H1: average level of prothrombin in a normal population != 20 mg/100ml</strong></p>
<p>but in the answer sheet they have stated as follows</p>
<p><strong>H0:sample is not taken from 40 individuals in whom the average is 18.5 mg/100 ml.</strong></p>
<p><strong>H1:sample is taken from 40 individuals in whom the average is 18.5 mg/100 ml.</strong></p>
<p>is my answer incorrect?</p>
<p>How do we determine the exact null and alternative hypotheses for a given problem?</p>
|
<p>The answer sheet seems to suggest an answer that I think is wrong for several reason:</p>
<ol>
<li>the tested hypotheses depend on the observed data, </li>
<li>the tested hypotheses include the sample size (why?!!) and </li>
<li>are seemingly unrelated to hypothesized value that is to be verified (20 mg/100 ml).</li>
<li>Rejection of the null hypothesis of the mean not being 18.5 mg/100 ml (or failure to reject it) would not say much about whether the mean is 20 mg/100ml.</li>
<li>I am also uncertain whether there even exists a sensible test for testing this null hypothesis versus the stated alternative. Point null hypothesis versus point alternative (or interval alternative) is quite normal, interval null hypothesis versus interval alternative is quite normal (see my proposal below), but interval null hypothesis versus point alternative seems difficult to me.</li>
</ol>
<p>In short, that answer does not seem to make any sense to me, unless I am missing something.</p>
<p>However, your answer is also not suitable to achieve the stated goal of verifying that the mean is 20 mg/100 ml (I assume the standard deviation is not the thing we are trying to verify) or at least reasonably close to it. A failure to reject the null hypothesis of mean = 20mg/100ml (your answer) does not mean that the mean is actually 20mg/100ml. </p>
<p>In practice, I guess one would say that values between $20-\delta_1$ mg/100ml and $20+\delta_2$ mg/100ml are considered practically equivalent to 20 mg/100ml. The null hypothesis would then be that the true mean lies outside this interval, and the alternative would be that the true mean lies inside this interval (a so-called equivalence test).</p>
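As a sketch of such an equivalence test (my addition), using the numbers from the question (n = 40, observed mean 18.5, sigma = 4) and a made-up margin of 2 mg/100ml on each side, one can run two one-sided z-tests and take the larger p-value:

```python
from math import sqrt
from statistics import NormalDist

mu0, delta = 20.0, 2.0        # target value and (made-up) equivalence margin
n, xbar, sigma = 40, 18.5, 4.0
se = sigma / sqrt(n)
z = NormalDist()

# Two one-sided tests: try to reject "mean <= 18" and "mean >= 22".
p_low = 1 - z.cdf((xbar - (mu0 - delta)) / se)   # H0: mean <= mu0 - delta
p_high = z.cdf((xbar - (mu0 + delta)) / se)      # H0: mean >= mu0 + delta
p_equiv = max(p_low, p_high)

print(round(p_equiv, 3))  # about 0.215: equivalence to 20 +/- 2 is not established
```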
| 346
|
hypothesis testing
|
Comparing difference between two subsets
|
https://stats.stackexchange.com/questions/213652/comparing-difference-between-two-subsets
|
<p>I have a large sample of 800 participants who completed a measure of relationship at 2 time points. The result of a paired-samples t-test indicates that there was no statistically significant difference between the two time points. I then split the group into high-risk and low-risk subsets of participants according to how likely they are to develop relationship problems over time. The analysis then indicated a statistically significant result for both subsets! I do not think that this is correct, as I feel that one subset should be significant relative to the other.
Am I right in thinking this, or is there a better way of splitting the group into subsets than my approach of creating a dummy variable for each subset, selecting each subset, and carrying out the paired-samples t-test twice?</p>
|
<blockquote>
<p>I feel that one subset should be significant relative to the other?</p>
</blockquote>
<p>From what you posted it doesn't sound like that's the comparison you were making but rather that you were comparing time t1 with time t2 in both subgroups. If that is the case I don't see any reason why the subgroups can't show a significant difference even if the overall population did not.</p>
<p>Assuming that your null hypothesis is that the mean difference between t1 and t2 is 0 it's not hard to imagine situations that might produce such a result: if the high-risk group has greater risk at t2 relative to t1 and the low-risk group has lower risk at t2 relative to t1, the overall group difference between measures at t1 and t2 could be close enough to 0 that you would not reject the null. If you are unsure about this you can always run some simulations with random data to see if this situation arises-- I threw one together in R using rnorm and found 10 such cases out of 10,000 trials.</p>
<p>The method for grouping your subsets seems fine to me, though it's possible that your criteria for subsetting the data could cause an issue (depending on how you categorize people as high- or low-risk).</p>
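Here is a rough Python version of such a simulation (my addition; parameters invented): the high-risk group drifts up between the two time points and the low-risk group drifts down, so both subgroup paired tests come out clearly significant while the pooled differences average out near zero.

```python
import random
from math import sqrt
from statistics import mean, stdev

def t_stat(diffs):
    """One-sample t statistic for H0: mean of the paired differences is 0."""
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

random.seed(1)
n = 400
high_risk = [random.gauss(0.5, 1.0) for _ in range(n)]   # t2 - t1 differences
low_risk = [random.gauss(-0.5, 1.0) for _ in range(n)]

t_high, t_low = t_stat(high_risk), t_stat(low_risk)
t_all = t_stat(high_risk + low_risk)

# |t_high| and |t_low| land far beyond ~1.96, while t_all stays near 0.
print(round(t_high, 2), round(t_low, 2), round(t_all, 2))
```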
| 347
|
hypothesis testing
|
a hypothesis test for evidence that one thing is dependant on another
|
https://stats.stackexchange.com/questions/215070/a-hypothesis-test-for-evidence-that-one-thing-is-dependant-on-another
|
<p>I'm not sure what the null hypothesis for this would be, or what the correct symbols are.
The data are unpaired, and I need to find whether there is evidence (at the 5% level of significance) that one thing is dependent on another.</p>
<p>For example, is there evidence that the size of a banana is dependent on the plantation it's grown in?</p>
<p>H0:
H1: </p>
|
<p>You might use the chi-square test for independence.</p>
<p>Consider the independence of the size of a banana and the plantation it's grown in.
For simplicity, let's assume that the bananas are divided into $r$ groups according to their size and are grown in $s$ plantations. Denote by $A_1, A_2, \dots, A_r$ the size levels and by $B_1, B_2, \dots, B_s$ the plantation levels. Then we obtain an $r \times s$ contingency table.</p>
<p><a href="https://i.sstatic.net/o5O3Y.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o5O3Y.jpg" alt="contingency table"></a></p>
<p>Denote $p_{ij}$ for the probability that bananas are from $A_i$ and $B_j$, $p_{i.}$ for the probability that bananas are from $A_i$, $p_{.j}$ for the probability that bananas are from $B_j$.</p>
<p>Hypotheses: $H_0: p_{ij} = p_{i.}p_{.j}$ and $H_1: p_{ij} \neq p_{i.}p_{.j}$</p>
<p>The test statistic is
$$
T_n = \sum_{i = 1}^{r} \sum_{j = 1}^{s} \frac {(nX_{ij} - m_in_j)^2}{nm_in_j},
$$
where $X_{ij}$ is the observed count in cell $(i,j)$, $m_i$ and $n_j$ denote the row and column totals, and $n$ is the grand total.</p>
<p>If $H_0$ is correct, $T_n \xrightarrow{d} {\chi}^2_{(r-1)(s-1)}$ (convergence in distribution) as $n \to \infty$.</p>
<p>You might see <a href="https://en.wikipedia.org/wiki/Pearson's_chi-squared_test#Test_of_independence" rel="nofollow noreferrer">wiki</a> for more information.</p>
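A small worked sketch (my addition; the counts are invented) computing $T_n$ for a $2 \times 3$ table with the Python standard library. With $(r-1)(s-1) = 2$ degrees of freedom the chi-square survival function happens to reduce to $e^{-T/2}$, so no special library is needed for the p-value:

```python
import math

# Hypothetical counts: 2 size classes (rows) x 3 plantations (columns).
table = [[30, 20, 10],
         [20, 30, 40]]

r, s = len(table), len(table[0])
row = [sum(t) for t in table]                       # m_i, row totals
col = [sum(t[j] for t in table) for j in range(s)]  # n_j, column totals
n = sum(row)

# Pearson statistic: sum of (observed - expected)^2 / expected,
# where expected_ij = m_i * n_j / n under independence.
T = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
        for i in range(r) for j in range(s))
dof = (r - 1) * (s - 1)
p = math.exp(-T / 2)   # chi-square survival function, valid for dof = 2 only

print(round(T, 3), dof, p)  # T of about 16.667, dof = 2, p far below 0.05
```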
| 348
|
hypothesis testing
|
Basic Hypothesis Testing
|
https://stats.stackexchange.com/questions/220246/basic-hypothesis-testing
|
<p>I'm having trouble with a basic hypothesis testing question that I just thought of. The question is the following: suppose you know that a certain lawnmower manufacturing company (called Company A) makes lawnmowers that run on average 300 minutes before running out of gas with a standard deviation of 30 minutes. Suppose you found a lawnmower and it only ran for 230 minutes before running out of gas. Did this lawnmower come from Company A?</p>
<p>Would this be a valid way to approach this problem? First, construct the 95% confidence interval $(300 - 1.96 * 30, 300 + 1.96 * 30)$. Since 230 minutes lie outside this confidence interval, then we can say that at the 95% confidence level, we reject the null hypothesis that this lawnmower comes from Company A.</p>
<p>Is that correct?</p>
|
<p>Regarding your approach:
A confidence interval is a <em>random</em> interval $I$ constructed from the data that contains the unknown parameter of interest $\theta$ with specified probability $1-\alpha$, $\mathbb{P}(\theta\in I)=1-\alpha$. Your interval $(300 - 1.96 * 30, 300 + 1.96 * 30)$ is not random; it is constructed from population parameters, so it is not really a confidence interval.</p>
<p>Let's try to solve the problem "from scratch." Assume that the distribution of running times is normal $\mathcal{N}(\mu, \sigma^2)$, with mean $\mu=300$ and standard deviation $\sigma=30$. Given the context of the problem, the normality assumption is reasonable (although we should keep in mind that we work under this assumption). Given the data $X=230$, our null hypothesis, the lawnmower came from Company A, can be formalized as follows: $H_0: X\sim \mathcal{N}(\mu,\sigma^2)$. </p>
<p>To test this hypothesis, we need to choose a statistic $s$, a function of the data, with the following property: the larger $s$, the more tempting it is to reject the null. It seems intuitive to reject $H_0$ whenever $$s(X)=|X-\mu|$$ is large, i.e. whenever $X$ is far from the mean $\mu$. So, our <em>rejection region</em> is
$$
s(X)>c.
$$</p>
<p>How do we choose $c$? We choose $c$ to control the probability of a <em>type I error</em>, i.e. the error of rejecting $H_0$ when it is true. To construct a test of size $\alpha$, we choose $c$ such that
$$
\mathbb{P}(\mbox{Reject } H_0|H_0)=\alpha,
$$
or
$$
\alpha=\mathbb{P}(|X-\mu|>c \hspace{1mm}| \hspace{1mm} X\sim \mathcal{N}(\mu,\sigma^2))=\mathbb{P}\left(\left.|Z|>\frac{c}{\sigma} \right| Z\sim\mathcal{N}(0,1)\right)=2\Phi\left(-\frac{c}{\sigma}\right),
$$
where $\Phi$ is the standard normal CDF. Solving this equation for $c$ yields
$$
c=-\sigma\Phi^{-1}\left(\frac{\alpha}{2}\right).
$$
For example, if we want to construct a test of size $\alpha=0.05$, then $c\approx58.8$. Since the value of the test statistic $s(X)=70$, our size $0.05$ test will reject the null. </p>
<p>Reporting the p-value is more informative than simply reporting whether the test accepts or rejects the null. Recall that the p-value is the smallest size $\alpha^*$ at which the test rejects the null. To find the p-value, we need to solve
$$
s(X)=-\sigma\Phi^{-1}\left(\frac{\alpha^*}{2}\right)
$$
for $\alpha^*$. The solution is
$$
\mbox{p-value}\equiv \alpha^*=2\Phi\left(-\frac{|X-\mu|}{\sigma}\right)\approx0.0196.
$$
This p-value is small (less than the usual 0.05), and, therefore, the data provides strong evidence against the null hypothesis. </p>
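The critical value $c \approx 58.8$ and the p-value $\approx 0.0196$ above are easy to check numerically; a minimal sketch (my addition) using the Python standard library:

```python
from statistics import NormalDist

mu, sigma, x = 300.0, 30.0, 230.0
alpha = 0.05
z = NormalDist()   # standard normal

c = -sigma * z.inv_cdf(alpha / 2)   # critical value of the size-0.05 test
s = abs(x - mu)                     # observed test statistic s(X)
p_value = 2 * z.cdf(-s / sigma)     # two-sided p-value

print(round(c, 1), s, round(p_value, 4))  # 58.8, 70.0, 0.0196
```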
| 349
|
hypothesis testing
|
What is appropriate statistical test for one condition?
|
https://stats.stackexchange.com/questions/144265/what-is-appropriate-statistical-test-for-one-condition
|
<p>There is one chemical for plants; in its absence (control), all 3 of them live. In its presence, 5 of 6 die and only 1 lives. So how can we show whether the effect is significant or not? It may be basic, but I appreciate your help. </p>
|
<p>I suggest to use the Fisher's Exact test in order to test if there is a statistically significant difference between the proportions of survivors in the two samples (treatment and control).</p>
<p>Check <a href="http://en.wikipedia.org/wiki/Fisher%27s_exact_test" rel="nofollow">http://en.wikipedia.org/wiki/Fisher%27s_exact_test</a></p>
<p>The contingency table will look like:</p>
<pre><code>            Alive | Dead | Row total
Treatment     1   |  5   |     6
Control       3   |  0   |     3
Col total     4   |  5   |     9
</code></pre>
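For a table this small, the two-sided Fisher exact p-value can be computed by hand (or with scipy.stats.fisher_exact); a standard-library-only Python sketch (my addition) that sums all hypergeometric probabilities no larger than that of the observed table:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d
    def pmf(k):   # hypergeometric probability of k in the top-left cell
        return comb(row1, k) * comb(n - row1, col1 - k) / comb(n, col1)
    p_obs = pmf(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

# Treatment: 1 alive, 5 dead.  Control: 3 alive, 0 dead.
p = fisher_exact_two_sided(1, 5, 3, 0)
print(p)  # 6/126, about 0.0476: just under the conventional 0.05 threshold
```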
| 350
|
hypothesis testing
|
Why do different statistical tests differ?
|
https://stats.stackexchange.com/questions/145803/why-do-different-statistical-tests-differ
|
<p>I heard through the grapevine that when somebody runs statistical tests on the same data, some tests may indicate one type of significance while others do not. The tests themselves may be inconsistent. My question is, what is the fundamental problem that causes the tests to break down? </p>
<p>My answer, which I am not sure is correct, is that it comes down to the problem of not knowing how many terms one needs to get "close" to convergence. To use an example from analysis, we may know that a sequence of functions converges, but we may not know how many terms are necessary to get "close". In statistics, the sample points are used to estimate the mass function of the random variable. By various limit theorems (strong law of large numbers, central limit theorem, etc.), these sample points need to converge to the mass function. However, as we are doing inverse-probability theory, we do not know how many terms are necessary to converge "close" enough. Sometimes 20,000 sample points seems like a lot, but we do know that there are sequences that converge painfully slowly. Perhaps this is why the tests are inconsistent? </p>
|
<p>[Beware using the term 'inconsistent' in this context, as <em>inconsistency</em> has a particular technical meaning when applied to hypothesis tests. I'll use it because you did, but with the clear stipulation that it's not taking its technical meaning in this discussion.]</p>
<p>Different test statistics - even when attempting to test quite similar hypotheses - respond to different aspects of the data.</p>
<p>That doesn't necessarily make them "inconsistent" with each other, since they're sensitive to different things and make different assumptions. It's like taking pictures through different filters ... they don't necessarily look the same.</p>
<p>That doesn't mean anything "broke down".</p>
| 351
|
hypothesis testing
|
Interpretation of empirical frequency of null hypothesis rejections
|
https://stats.stackexchange.com/questions/145286/interpretation-of-empirical-frequency-of-null-hypothesis-rejections
|
<p>assume I know a theoretical distribution that is quite non-normal.</p>
<p>I simulate many (N) samples of given size (T).</p>
<p>Then for each sample I test if the sample average is equal to the theoretical one (t-test).</p>
<p>Then I look at the frequency of rejections as a function of T.</p>
<p>Let T* the smallest T such that the rejection frequency equals the significance level used in the test.</p>
<p>Does it make sense to say that T* is the minimum sample size such that a random sample is informative about the mean of the theoretical distribution?</p>
<p>In general, what is the most appropriate way to identify the minimum size of a random sample such that I can consider it informative, given a non-normal theoretical distribution? </p>
<p>Thanks a lot</p>
|
<p>Sounds like you want to do a <a href="http://en.wikipedia.org/wiki/Statistical_power" rel="nofollow">power analysis</a>. There is a large literature on that, so you may want to read that first. However, if you worry about non-normality, then I would start with worrying whether a t-test is appropriate in the first place before looking at the power of that test.</p>
| 352
|
hypothesis testing
|
A basic question on hypothesis testing
|
https://stats.stackexchange.com/questions/145293/a-basic-question-on-hypothesis-testing
|
<p>In hypothesis testing a hypothesis is generally defined to be "a statement about the value of a population parameter". For example the mean value of the height of people living in a certain city.</p>
<p>I do not understand how this applies to the classical example of coin tossing. In coin tossing we test the hypothesis that the coin is fair. What is the population here and which parameter of this population is under concern?</p>
<p>Thanks</p>
|
<p>Let's say you toss your coin 100 times. Then you count how often you've got heads. </p>
<p>Doing this very often, you may draw a graph that shows how often you got one head, two heads, three heads ... up to 100 heads. Your x-axis is the number of heads; your y-axis is how often you got each count when repeating the 100-toss experiment very many times.
With a perfect coin, you will find that 50 heads occurs most often. This is your measure of central tendency. But of course sometimes there will be more or fewer than 50 heads, even with a perfect coin; it is just very unlikely to get, say, only one head in 100 tosses. Your graph shows how (un)likely each outcome is.</p>
<p>When doing an empirical experiment with a specific coin, you may toss 100 times and count how often you actually found heads. Given the H0 that the coin is fair, you can determine how likely your empirical result is: just look it up on your graph.</p>
<p>The second parameter (spread) depends on how many times you toss your coin in each trial. If a trial consists of just 10 tosses rather than 100, you will get relatively more variance (and of course your mean value is 5, not 50). So your H0 graph depends on how many tosses make up each trial.</p>
<p>The coin itself is not the point; it just helps convey the idea. In an actual study you have a concrete number of observations ("tosses per trial"), you can calculate the parameters, and you can determine how likely your findings are, given the H0.</p>
| 353
|
hypothesis testing
|
Would a t-test apply? Statistical Test Suggestion
|
https://stats.stackexchange.com/questions/146175/would-a-t-test-apply-statistical-test-suggestion
|
<p>Given two lists of characters. For example </p>
<p>List #1</p>
<p>A
C
O
P</p>
<p>List #2</p>
<p>A
O
R
T</p>
<p>How would you test whether the two lists differ significantly or not. I feel like a t-test can be applied, but I'm not sure how given I have character values as opposed to numerical values (i.e. I cant compute means). </p>
<p><strong>Question: Is there a way to show that the two lists of characters are generated from the same population given their similarities</strong></p>
<p>Thanks for any help</p>
|
<p>Your proposed approach makes little sense. You're <strong>NOT</strong> directly comparing the amino acids or nucleotides. You're trying to conduct a differential-expression test between two experiments. The dependent variable is the abundance of each RNA sample, and the values are integers.</p>
<p>Directly comparing amino acids or nucleotides is more like pairwise alignment, but that has nothing to do with what you're trying to do.</p>
<p>Let's get back to the test. You <strong>could</strong> use a t-test for the differences, but it's not recommended. It's not that it's invalid; rather, sample sizes in medical research are small, so a simple t-test gives little power.</p>
<p>We typically use other statistical tests. I don't know which one is the best because this depends on your data and assumptions. I have been using the edgeR bioconductor package which models a negative-binomial GLM. I've also seen other bioinformatic packages using F-test.</p>
<p>Read <a href="http://www.bioconductor.org/packages/release/bioc/vignettes/edgeR/inst/doc/edgeRUsersGuide.pdf" rel="nofollow">http://www.bioconductor.org/packages/release/bioc/vignettes/edgeR/inst/doc/edgeRUsersGuide.pdf</a> for more details. Pay close attention to "2.7 Negative binomial models" section.</p>
<p>Read <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC154570/pdf/gb-2003-4-4-210.pdf" rel="nofollow">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC154570/pdf/gb-2003-4-4-210.pdf</a> on why a t-test isn't recommended.</p>
| 354
|
hypothesis testing
|
Which test to use to compare calculated percentages?
|
https://stats.stackexchange.com/questions/146479/which-test-to-use-to-compare-calculated-percentages
|
<p>I have data in the following form:</p>
<pre><code>subject PercA PercB PercC PercD
A1 0.12 0.33 0.40 0.15
A2 0.14 0.31 0.38 0.17
...
B1 0.18 0.30 0.35 0.17
B2 0.17 0.29 0.39 0.15
...
</code></pre>
<p>The percentages in each row sum up to 1 because the percentages are calculated like this: PercA=A/(A+B+C+D), PercB=B/(A+B+C+D) and so on.<br>
So, now I want to test whether this percentage "profiles" differ between subjects from group A and B. What kind of statistical test is applicable for this scenario?</p>
|
<p>You can possibly rearrange the data and use regression like this: </p>
<pre><code>> library(reshape2)  # provides melt()
> mydf
subject num PercA PercB PercC PercD
A 1 0.12 0.33 0.40 0.15
A 2 0.14 0.31 0.38 0.17
B 1 0.18 0.30 0.35 0.17
B 2 0.17 0.29 0.39 0.15
> mm = melt(mydf, id=c('subject','num'))
> mm
subject num variable value
1 A 1 PercA 0.12
2 A 2 PercA 0.14
3 B 1 PercA 0.18
4 B 2 PercA 0.17
5 A 1 PercB 0.33
6 A 2 PercB 0.31
7 B 1 PercB 0.30
8 B 2 PercB 0.29
9 A 1 PercC 0.40
10 A 2 PercC 0.38
11 B 1 PercC 0.35
12 B 2 PercC 0.39
13 A 1 PercD 0.15
14 A 2 PercD 0.17
15 B 1 PercD 0.17
16 B 2 PercD 0.15
> summary(lm(value~subject+variable, data=mm))
Call:
lm(formula = value ~ subject + variable, data = mm)
Residuals:
Min 1Q Median 3Q Max
-0.03250 -0.01063 0.00125 0.01188 0.02750
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.525e-01 1.186e-02 12.86 5.69e-08 ***
subjectB -2.631e-17 1.061e-02 0.00 1.000
variablePercB 1.550e-01 1.500e-02 10.33 5.32e-07 ***
variablePercC 2.275e-01 1.500e-02 15.17 1.01e-08 ***
variablePercD 7.500e-03 1.500e-02 0.50 0.627
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.02121 on 11 degrees of freedom
Multiple R-squared: 0.9683, Adjusted R-squared: 0.9568
F-statistic: 84.03 on 4 and 11 DF, p-value: 3.599e-08
</code></pre>
| 355
|
hypothesis testing
|
How to estimate relative risk for a small group which has 0 members with the outcome?
|
https://stats.stackexchange.com/questions/148406/how-to-estimate-relative-risk-for-a-small-group-which-has-0-members-with-the-out
|
<p>I have a contingency table that looks like this:</p>
<pre><code> Disease Not Disease
Exposed 372 870
Not Exposed 0 23
</code></pre>
<p>What methods would I use to estimate if there is a statistically significant difference between the exposed and not exposed?</p>
|
<p>Expected counts:</p>
<pre><code> Disease Not Disease
Exposed 365.24 876.76
Not Exposed 6.76 16.24
</code></pre>
<p>As the smallest expected cell count (6.76) is still large enough, you can use a chi-squared contingency test.</p>
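<p>For illustration, this test can be run directly (a sketch in Python with scipy, assuming the table above; with a zero observed cell, Fisher's exact test is a common cross-check):</p>

```python
# Chi-squared test of independence on the 2x2 exposure/disease table.
from scipy.stats import chi2_contingency, fisher_exact

table = [[372, 870],   # exposed:     disease, no disease
         [0,   23]]    # not exposed: disease, no disease

chi2, p, dof, expected = chi2_contingency(table)
print(expected)   # smallest expected count is about 6.76, above the usual cutoff of 5
print(chi2, p)

# With a zero observed cell, Fisher's exact test is a sensible cross-check.
odds_ratio, p_exact = fisher_exact(table)
print(p_exact)
```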
| 356
|
hypothesis testing
|
Testing Hypothesis
|
https://stats.stackexchange.com/questions/148562/testing-hypothesis
|
<p>I have a sample of 40 observations of the time a person waits in a gas station line.</p>
<p>Descriptive statistics of this sample are as follows:
N= 40, mean= 115sec, std.dev.=11sec, min=90sec, max=147sec.</p>
<p>I decided to test the following hypothesis:
H0: waiting time = 115sec
H1: waiting time is not 115sec</p>
<p>Please tell me how I can test my hypothesis?</p>
|
<p>You can test that hypothesis with the standard t-test. Simply compute:</p>
<p>$$
\frac{t_0 - t_1}{se(t_0)}
$$</p>
<p>Where $t_1$ is the value under the null (the value you think $t$ should have), and $t_0$ is the value of $t$ estimated from your data (the sample mean of 115). $se(t_0)$ is the standard error of the sample mean, i.e. the sample standard deviation divided by $\sqrt{n}$. Under very general assumptions, this statistic follows a t-distribution with $n-1$ degrees of freedom.</p>
<p>The point made in the comments is that since $t_0 = 115$, testing $H_0: t_0 = 115$ can never be rejected. It is often much more interesting to compute the confidence interval, that way you can say: "I am $1- \alpha\%$ sure that (the true but unknown parameter) $t_0$ falls within this interval". <a href="http://en.wikipedia.org/wiki/Confidence_interval" rel="nofollow">This wiki has a neat discussion of CI</a> </p>
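<p>For the numbers in the question (n = 40, mean 115, sd 11), the interval is easy to compute; a minimal sketch in Python, emphasizing that the formula uses the standard error $s/\sqrt{n}$ rather than the raw standard deviation:</p>

```python
# 95% confidence interval for the mean waiting time (n=40, mean=115s, sd=11s).
from math import sqrt
from scipy.stats import t

n, xbar, s = 40, 115.0, 11.0
se = s / sqrt(n)                     # standard error of the mean
tcrit = t.ppf(0.975, df=n - 1)       # two-sided 95% critical value
ci = (xbar - tcrit * se, xbar + tcrit * se)
print(ci)                            # roughly (111.5, 118.5) seconds
```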
| 357
|
hypothesis testing
|
Test two 1-10 ranking processes on same data and test for statistical significance in the two processes
|
https://stats.stackexchange.com/questions/149089/test-two-1-10-ranking-processes-on-same-data-and-test-for-statistical-significan
|
<p>I have one set of data. I have two procedures for making 1-10 ranking of the data (one was used previously and one is a new procedure). I want to do a hypothesis test to see if the ranking is the same or not.</p>
|
<p>I would advise you to have a look at one of the following tests:</p>
<ul>
<li>Wilcoxon signed-rank test </li>
<li>Mann–Whitney U test </li>
<li>Kruskal–Wallis test</li>
</ul>
<p>These tests are generally used when testing with rankings. </p>
<p>I'm pretty sure one of them matches your data and does everything you ask for. </p>
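<p>As a sketch of what this might look like in practice (Python/scipy with made-up rankings; the Wilcoxon signed-rank test fits the paired case here, where both procedures rank the same ten items):</p>

```python
# Wilcoxon signed-rank test on paired 1-10 rankings of the same ten items.
from scipy.stats import wilcoxon

old_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # hypothetical: previous procedure
new_rank = [3, 1, 2, 6, 4, 5, 9, 7, 10, 8]   # hypothetical: new procedure

stat, p = wilcoxon(old_rank, new_rank)
print(stat, p)   # a small p would suggest the two procedures rank items differently
```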
| 358
|
hypothesis testing
|
Hypothesis test null hypothesis equality
|
https://stats.stackexchange.com/questions/152928/hypothesis-test-null-hypothesis-equality
|
<p>If we want to test whether the sample mean is at least 5 meters, how should we state the null hypothesis and alternative hypothesis? Here is what I think:
H0: µ>=5
H1: µ<5
But when determining the p value, I will be looking at a left tail probability isn't it? It just feels weird. So is my H0 and H1 correct?</p>
|
<p>You do not want to take H0 as µ>5 because it does not pin down a single distribution under which to compute the test; this is the point of null hypothesis testing. <strong>Only H0: µ=5 allows you to do that</strong>. Moreover, in null hypothesis testing <strong>you want to reject H0</strong> (not accept it), so in any case µ>5 was the wrong way to go.
More precisely, I would advocate for:</p>
<ul>
<li>H0 : µ=5</li>
<li>H1 : µ>5</li>
<li>You look at the <strong>right tail</strong> of your <strong>z-distribution</strong> under H0 (in order to reject H0 to the benefit of H1).</li>
</ul>
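<p>With known variance, the computation might look like this (a minimal Python sketch; the sample summary numbers are made up for illustration):</p>

```python
# Right-tailed z-test of H0: mu = 5 against H1: mu > 5, with sigma known.
from math import sqrt
from scipy.stats import norm

xbar, mu0, sigma, n = 5.8, 5.0, 2.0, 50    # hypothetical sample summary
z = (xbar - mu0) / (sigma / sqrt(n))
p = norm.sf(z)                             # right-tail probability under H0
print(z, p)                                # reject H0 at the 5% level if p < 0.05
```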
| 359
|
hypothesis testing
|
Test that a group of probabilities is different from chance - in either direction
|
https://stats.stackexchange.com/questions/153877/test-that-a-group-of-probabilities-is-different-from-chance-in-either-directio
|
<p>Suppose I have subjects sort 10 images into the categories "Group A" or "Group B". I want the null hypothesis to be that subjects are randomly assigning the images, and the alternate hypothesis that certain images tend to be assigned to certain categories. Importantly, I do not have an a priori hypothesis about the category of a given image.</p>
<p>How would you test whether the probability of assigning an image to Group A is <em>different</em> from 50%? I.e., for one image, it could be 30% chance of assignment to Group A, for another, it could be 70% chance of assignment to group A, and I would want to treat those as equally-powerful pieces of evidence for the alternative hypothesis.</p>
<p>My initial thought was to do a chi squared test of homogeneity, but such a test would be punished as the number of images increases, whereas it seems intuitively that my chosen test should become more powerful the more images I use.</p>
|
<p>Conducting a $\chi^2$ test is totally appropriate. Your last sentence:</p>
<blockquote>
<p>My initial thought was to do a chi squared test of homogeneity, but such a test would be punished as the number of images increases, whereas it seems intuitively that my chosen test should become more powerful the more images I use.</p>
</blockquote>
<p>can be interpreted a few different ways. One way to do the test would be to have, say, 30 subjects conduct your experiment, then simply bin the observations into either A or B, which would result in the table:</p>
<pre><code> A B
-------
100 200
</code></pre>
<p>The expected number for each bin would be $n \cdot 0.5 = 300 \cdot 0.5 = 150$, and running the R code below shows that in this example one would reject the null hypothesis that each bin has equal probability.</p>
<pre><code>dat <- c(100,200)
chisq.test(dat)
</code></pre>
<p>Conducting this test <em>would</em> be more powerful if you gave the same number of subjects more images. Another way to conduct the test though would be to create a 10 by 2 table, where each row is for a different image. e.g.:</p>
<pre><code> Image A B
--------------
1 6 4
2 etc...
3
4
5
6
7
8
9
10
</code></pre>
<p>This approach has the advantage that you can examine the residuals from the table and see if any particular image is more likely to be classified into the A or B category. Since you fix the number of images shown, to correspond to the <a href="https://stats.stackexchange.com/a/14230/1036">conservative rule of thumb</a> that the expected value for any cell should be at least 5, all you need to do is to conduct your experiment on at least ten people. I'm not sure if this approach gains power to reject the null with more images, as you are adding rows to the table - it would take more investigation. (I would guess no for a very low number of people, but after say 20 people I would guess more images does increase the power.) You may also consider Fisher's exact test on such a table (although I presume the test statistic would need to be estimated via simulation).</p>
<p>You can do the same type of "x by 2" table for people as well, in which case each row is a person. This has the same exploratory advantage in which you can see if any persons are more likely to classify images in the A or B category. This approach will increase in power with the more images you show to persons. And finally you may consider a logistic regression model predicting the categories based on individual or image random effects. This last suggestion requires the largest sample size, but gains in power both when increasing persons and increasing images.</p>
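<p>To illustrate the residual-inspection idea on the 10 by 2 table (a Python sketch with hypothetical counts, ten subjects classifying ten images; standardized residuals flag images that lean toward one category):</p>

```python
# Chi-squared test on a 10x2 images-by-category table, then standardized residuals.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: how often each of ten images was put in category A or B.
obs = np.array([[6, 4], [5, 5], [9, 1], [4, 6], [5, 5],
                [2, 8], [5, 5], [6, 4], [5, 5], [3, 7]])

chi2, p, dof, expected = chi2_contingency(obs)
resid = (obs - expected) / np.sqrt(expected)   # standardized (Pearson) residuals
print(p)
print(resid.round(2))   # |residual| > 2 suggests an image leans toward A or B
```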
| 360
|
hypothesis testing
|
Is power always associated with hypothesis testing?
|
https://stats.stackexchange.com/questions/157296/is-power-always-associated-with-hypothesis-testing
|
<p>Suppose I know that the true population proportion of a mutation is p = 0.3493119. I want to know that given power = 0.8, what's the proportion of of mutation in my sample of n = 30? Here's what I have so far: </p>
<p>1 - cdf((x-p)/sqrt(p*(1-p)/30)) = 0.8
Since p = 0.3493119, I can solve for x and I get x = 0.276055. So does this mean that the probability of having 27.6% of the sample be mutated = 0.8? Is it correct to say that?</p>
<p>From my experience I know that power is the probability of correctly rejecting the null. But since I already know the true population proportion of the mutation, is it still necessary to conduct a hypothesis test? I am leaning towards the negative, but without a hypothesis test, how can I calculate the power? </p>
<p>If I had to perform a hypothesis test to get the power...would it be something like this:</p>
<p>H0: p = 0.5 vs. H1: p > 0.5</p>
<p>power = P(Z_statistic > critical_value | p > 0.5)</p>
<p>power = 1 - normal_CDF(critical_value | p > 0.5)</p>
<p>Now I'm confused by how to deal with the p > 0.5, if it were just p = 0.5, I could've standardized the critical value so that it follows a N(0, 1) distribution. </p>
<p>Overall I think I just want to know the answer to the question, given power = 0.8, what is the proportion of mutation in my sample of n = 30? (true population proportion = 0.3493119). </p>
|
<p>I would say "Yes"power is alway associated with hypothesis testing it relates to sample size, type I error and both H1 and H0. </p>
<p>The followings are R code to show the power curve for OP's cases (one tailed at 0.05 level, by approximate a normal distribution. </p>
<pre><code>mu<-30*0.35
sd<-sqrt(30*0.35*(1-0.35))
c<-qnorm(0.95,mu,sd)
mus<-seq(10,30,1)
power = 1-pnorm(c, mus, sd)
plot(mus, power, type="l")
</code></pre>
<p><img src="https://i.sstatic.net/46YpH.jpg" alt="enter image description here"></p>
<p>Two tailed case by approximating normal distribution</p>
<pre><code>mu<-30*0.35
sd<-sqrt(30*0.35*(1-0.35))
mu
sd
c1<-qnorm(0.025,mu,sd)
c2<-qnorm(0.975,mu,sd)
mus<-seq(0,30,1)
power2 <- pnorm(c1,mus,sd)+1-pnorm(c2,mus,sd)
plot(mus, power2, type="l")
</code></pre>
<p><img src="https://i.sstatic.net/O97jp.jpg" alt="enter image description here"></p>
| 361
|
hypothesis testing
|
hypothesis test one-sample vs two-sample
|
https://stats.stackexchange.com/questions/163543/hypothesis-test-one-sample-vs-two-sample
|
<p>An industrial process is in place that increases the strength of a metal component. We are tuning a couple of settings on the system to optimize the strength. There are already some settings in place, but I have found some new settings that I would like to make a recommendation to change the system to. The new settings that I found to be an improvement were found using a sample that I tested different settings on. I would like to perform a hypothesis test to see if my recommended changes makes an improvement to the system. My question is whether I will be able to do a one sample hypothesis test, or if I must do a two sample hypothesis test.</p>
<p>Method 1:
If I do a one-sample hypothesis test, I would use the average strength of the component from the previous sample under the current settings as the null-hypothesis value, and test against it the average strength of the metal component from the new sample under my recommended settings.</p>
<p>Method 2:
If I do a two-sample hypothesis test, I would run one new sample under the old parameter settings and the other sample under the new settings and test the difference in average strength to determine if there was a significant improvement.</p>
<p>I fear that if I use a one-sample test in this way (method 1), that I am introducing some sort of bias into the test. Which method is best and why?</p>
|
<p>You should do a two sample test since you are comparing two samples, each of which has variability that should be modeled. A one sample test would be if you were comparing the mean of a sample to some fixed value.</p>
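<p>A sketch of Method 2 in Python (the strength measurements are made up; Welch's version of the t-test avoids assuming equal variances in the two samples):</p>

```python
# Welch two-sample t-test: strengths under old vs. new settings (hypothetical data).
from scipy.stats import ttest_ind

old_settings = [101.2, 99.8, 100.5, 102.1, 98.9, 100.7]    # hypothetical strengths
new_settings = [103.4, 104.1, 102.8, 105.0, 103.9, 104.6]  # hypothetical strengths

stat, p = ttest_ind(new_settings, old_settings, equal_var=False)
print(stat, p)   # a small p is evidence that the new settings change mean strength
```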
| 362
|
hypothesis testing
|
Measure the performance of this year
|
https://stats.stackexchange.com/questions/164740/measure-the-performance-of-this-year
|
<p>I am going to measure the performance of this year. The average of past years' records (e.g. 5 years) and this year's record will be used to judge how many standard deviations away it is and whether this year's performance has improved. However, I am unsure whether the average should also include this year's record.</p>
|
<p>From a practical standpoint, by including the current year in whatever calculation you want to do, you are probably reducing the chance that the current year's value deviates from whatever baseline expectation you have. </p>
| 363
|
hypothesis testing
|
Which type of analysis to use?
|
https://stats.stackexchange.com/questions/166375/which-type-of-analysis-to-use
|
<p>I'm trying to predict my DV based on IV (predictor variable) scores.
I have a sample size of 62.
DV is categorical (addicted or not addicted).
8 IVs are all continuous (at a push I can lose 3 IVs)
I only have access to SPSS.
Could you help me decide, please, the most appropriate and correct statistical analysis to use given the above information?
Thank you.</p>
|
<ul>
<li>Use continuous data for DV as forcing it to categorical (2 categories -addicted or not addicted) leads to information loss</li>
<li>For 8 (or even 5) predictors, the sample size is quite small (you may need more sophisticated techniques to handle such data)</li>
<li>Either try to gather more data (recommended)</li>
<li>Or try to use dimension reduction techniques (such as PCA) for variable selection (Literature review may also help/support)</li>
<li>Try using both of the above mentioned options</li>
</ul>
| 364
|
hypothesis testing
|
consequences of rejected/accepted hypothesis
|
https://stats.stackexchange.com/questions/166934/consequences-of-rejected-accepted-hypothesis
|
<p>A and B are some statements such that A implies B. I test the null hypothesis that A is true. If my test fails to reject A, does that result say anything about B? Analogously, if instead I test the null hypothesis that B is true, and my test rejects B, can I conclude that A is rejected as well?</p>
|
<p>A implies B, understood as <em>logical implication</em>, means that if A is true, then B is true. However if A is false, this says nothing about B, and if B is true, this says nothing about A.</p>
<p>According to that definition, concluding that B is true will shed no light over A. Also, concluding that A is false will give you no information about B.</p>
<p>Finally, if you conclude that A is true, then you are safe say that B is true, though this conclusion should not be reached from failing to reject the null, since this doesn't mean that the null is true (see <a href="http://blog.minitab.com/blog/understanding-statistics/things-statisticians-say-failure-to-reject-the-null-hypothesis" rel="nofollow">this</a> for more information).</p>
| 365
|
hypothesis testing
|
Test if the difference is statistically significant in A/B test
|
https://stats.stackexchange.com/questions/174787/test-if-the-difference-is-statistically-significant-in-a-b-test
|
<p>Let's say we did an A/B testing, and the click rate for 1 group was 0.4 and for the other group, it was 0.3.</p>
<p>How can we go about testing whether this difference is statistically significant?
I'm thinking getting a p-value from t-test, but what would the null and alternative hypothesis be?</p>
|
<p>You can do a test for difference of binomial proportions. You will need to know how many people were presented with A and with B.</p>
<p>For example, if there were 50 shown each, Stata gives the following result:</p>
<pre><code>. prtesti 50 .4 50 .3
Two-sample test of proportions x: Number of obs = 50
y: Number of obs = 50
------------------------------------------------------------------------------
Variable | Mean Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x | .4 .069282 .2642097 .5357903
y | .3 .0648074 .1729798 .4270202
-------------+----------------------------------------------------------------
diff | .1 .0948683 -.0859385 .2859385
| under Ho: .0953939 1.05 0.295
------------------------------------------------------------------------------
diff = prop(x) - prop(y) z = 1.0483
Ho: diff = 0
Ha: diff < 0 Ha: diff != 0 Ha: diff > 0
Pr(Z < z) = 0.8527 Pr(|Z| < |z|) = 0.2945 Pr(Z > z) = 0.1473
</code></pre>
<p>Imagining that you wanted a one-tailed hypothesis test to see if the "new group" was better than the old group, you would look at <code>Ha: diff > 0</code> which gives a p-value of 0.15, i.e., not statistically significant (at standard significance level of 0.05).</p>
<p>If you instead showed to 500 each the p-value is 0.0005, so very much statistically significant.</p>
<p>See the <a href="http://www.stata.com/manuals13/rprtest.pdf" rel="nofollow">Stata documentation for prtest</a> to understand the methods used by the command.</p>
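<p>The same numbers can be reproduced without Stata (a Python sketch, assuming 50 users per arm as in the example above); the z statistic uses the proportion pooled under H0:</p>

```python
# Two-sample test of proportions for the A/B click rates (50 users per arm assumed).
from math import sqrt
from scipy.stats import norm

n1, p1 = 50, 0.4
n2, p2 = 50, 0.3
p_pool = (n1 * p1 + n2 * p2) / (n1 + n2)              # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_two = 2 * norm.sf(abs(z))                           # two-sided p-value
print(z, p_two)   # about 1.05 and 0.29, matching the Stata output
```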
| 366
|
hypothesis testing
|
test if two Binomial distribution are significantly different
|
https://stats.stackexchange.com/questions/179148/test-if-two-binomial-distribution-are-significantly-different
|
<p>I have two groups of people.<br>
Group one with 16 choosing 1 and 33 choosing 2.<br>
Group two with 10 choosing 1 and 49 choosing 2.<br>
I assume they both follow a binomial distribution. So how can I find out if they are significantly different from each other?<br>
(I have used a nonparametric method to check that neither is randomly picked. Besides, I think a t-test is infeasible for this question.)</p>
|
<p>You can use a likelihood-ratio test.</p>
<p>There is an example which corresponds exactly to your problem on the Wikipedia page:
<a href="https://en.wikipedia.org/wiki/Likelihood-ratio_test" rel="nofollow">https://en.wikipedia.org/wiki/Likelihood-ratio_test</a></p>
<p>The correspondence with your question is: Group 1/2 = Coin 1/2, and choice 1/2 = Heads/Tails.</p>
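<p>For the counts in the question (16 of 49 in group one and 10 of 59 in group two choosing option 1), the likelihood-ratio test might be sketched like this in Python; twice the log-likelihood ratio is compared to a chi-squared distribution with one degree of freedom:</p>

```python
# Likelihood-ratio test: do two groups share the same binomial success probability?
from math import log
from scipy.stats import chi2

x1, n1 = 16, 49    # group one: 16 of 49 chose option 1
x2, n2 = 10, 59    # group two: 10 of 59 chose option 1

def loglik(x, n, p):
    # Binomial log-likelihood, dropping the constant that cancels in the ratio.
    return x * log(p) + (n - x) * log(1 - p)

p1, p2 = x1 / n1, x2 / n2                 # separate estimates under H1
p0 = (x1 + x2) / (n1 + n2)                # pooled estimate under H0

G = 2 * (loglik(x1, n1, p1) + loglik(x2, n2, p2)
         - loglik(x1, n1, p0) - loglik(x2, n2, p0))
p_value = chi2.sf(G, df=1)
print(G, p_value)
```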
| 367
|
hypothesis testing
|
p value in one-sided test
|
https://stats.stackexchange.com/questions/177792/p-value-in-one-sided-test
|
<p>$$H_0:\mu=0 \quad\quad H_1:\mu>0 $$
we assume the distribution of the sample is Gaussian.<br>
If the $p$-value is very big and the sample mean very small, we still accept the null hypothesis.<br>
Isn't that counterintuitive? Why do we still use it?</p>
<p>E.g. $x_1,x_2,\ldots,x_n$, sample mean is $-5$, variance is known as 1.<br>
$$H_0:\mu=0 \quad\quad H_1:\mu>0 $$
It's easy to see that the $p$-value is very big, but we still accept $\mu = 0$.
But intuitively, isn't $\mu$ less than 0, or at least not equal to zero?</p>
|
<p>You have written the null hypothesis incorrectly. For a one-sided test it is</p>
<p>$H_0: \mu \le 0 \\
H_1: \mu > 0$</p>
<p>So, in your example, where $\bar{X} = -5$ it is clearly much less than 0 and the null cannot be rejected. That's the price of a one tailed test. </p>
<p>Also, you don't <em>accept</em> the null you only fail to reject it. </p>
| 368
|
hypothesis testing
|
What test should I use for hypothesis with many variables?
|
https://stats.stackexchange.com/questions/56435/what-test-should-i-use-for-hypothesis-with-many-variables
|
<p>I have a survey with a list of questions in which respondents can pick their perceived personality traits. Then I have a question that asks respondents about their favorite in-game character's perceived personality and a question that asks respondents about their least favorite in-game character's perceived personality trait.
My hypothesis is: People will prefer characters that they perceive to be similar to themselves, and dislike ones that they perceive to be different.
What is the best way to conduct this statistical test?</p>
<p>Example:
One respondent may answer like this:</p>
<ul>
<li>Extrovert (1-7): 2</li>
<li>Dependable (1-7): 5</li>
<li>Reserved (1-7): 4</li>
</ul>
<p>Favorite Character:</p>
<ul>
<li>Extrovert: no</li>
<li>Dependable: yes</li>
<li>Reserved: no</li>
</ul>
<p>Least Favorite Character:</p>
<ul>
<li>Extrovert: no</li>
<li>Dependable: no</li>
<li>Reserved: no</li>
</ul>
<p>I know how to test simple hypotheses about two simple numeric variables, but this looks a lot more complex.</p>
|
<p>There are many ways to approach this problem. One simple and flexible approach is as follows:</p>
<p>1) Come up with a way of computing a coefficient c for each user, such that this coefficient should usually be bigger if your hypothesis is true than if not. For example, you could use</p>
<p>c = sum_{feature in {Extrovert, Dependable, Reserved}} (respondent's own score on the feature) × (favorite character has feature - least-favorite character has feature).</p>
<p>Designing this coefficient is up to you. The important thing is that you expect this coefficient to be big when your hypothesis is true.</p>
<p>2) Then compute the average of this coefficient over your data.</p>
<p>3) The question you now need to answer is: Is the coefficient you got above unusually big? You can do this with a permutation test. Specifically, permute all the answers to the three questions (i.e., for each question individually, reassign all the answers to this question to new people) and then re-compute your coefficient. Do this repeatedly. If the coefficient you got in part (2) is bigger than 97.5 of the coefficients you get by permuting your data, then you may be able to report a significant effect.</p>
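<p>The procedure above might be sketched like this in Python (the data and the exact coefficient are hypothetical; here the coefficient weights each trait agreement by the respondent's own score on that trait):</p>

```python
# Permutation test: is the observed similarity coefficient unusually large?
import random

random.seed(0)
# Hypothetical data per respondent: own 1-7 trait scores, and whether the
# favorite / least-favorite character has each trait (1 = yes, 0 = no).
likes = [[2, 5, 4], [6, 3, 2], [4, 4, 6]]
fav   = [[0, 1, 0], [1, 0, 0], [0, 1, 1]]
least = [[0, 0, 0], [0, 1, 0], [1, 0, 0]]

def coef(likes, fav, least):
    # Bigger when highly-scored traits show up in favorites but not least favorites.
    return sum(s * (f - g)
               for S, F, G in zip(likes, fav, least)
               for s, f, g in zip(S, F, G))

observed = coef(likes, fav, least)
count = 0
n_perm = 2000
for _ in range(n_perm):
    fav_p = random.sample(fav, len(fav))        # reassign answers across people
    least_p = random.sample(least, len(least))
    if coef(likes, fav_p, least_p) >= observed:
        count += 1
p_value = (count + 1) / (n_perm + 1)            # one-sided permutation p-value
print(observed, p_value)
```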
| 369
|
hypothesis testing
|
Rejection regions nested or not?
|
https://stats.stackexchange.com/questions/59122/rejection-regions-nested-or-not
|
<p>When varying the significance level, the rejection regions can be chosen to be nested or not nested. I was wondering what some theoretical and practical considerations are in using either nested or non-nested rejection regions? Thanks and regards!</p>
|
<p>It is not quite true that the rejection regions can be "chosen" to be nested or not. For simple hypotheses and a continuous test statistic, the rejection regions of maximal-power tests are surely nested via the Neyman-Pearson Lemma. The same goes for composite hypotheses and UMP tests.</p>
<p>GLR tests do not guarantee maximal power, but I do not know any practitioner that would not reject starting with the upper values of the GLR statistic. Since most common parametric tests are ultimately GLR tests, there is not much room for choice left.</p>
<p>Also note that you can grow an $\alpha$ level rejection region around any observed outcome. For this reason, I believe it will be very hard to justify non-nested regions. </p>
| 370
|
hypothesis testing
|
Seeking to understand asymmetry in hypothesis testing
|
https://stats.stackexchange.com/questions/21967/seeking-to-understand-asymmetry-in-hypothesis-testing
|
<p>I need to have one understanding on statistical hypothesis testing. In a typical hypothesis test, we have 2 opposite hypotheses; namely Null and Alternative. Here my textbook says that "those 2 hypotheses are not symmetrical in the sense that if we swap the hypotheses then the result will alter".</p>
<p>Here I am unable to grasp the point the textbook is trying to make. Can somebody explain it to me in detail? It would be helpful if someone could also give an example of that asymmetry.</p>
<p>Appreciate your help.</p>
<p>Thanks,</p>
|
<p>I suspect it means that if you perform a test for H1 with null H0 and are not able to reject the null hypothesis, that does not imply that if you performed a test for H0 with H1 as the null that you would be able to reject H1.</p>
<p>The reason is that failing to reject the null hypothesis does not mean that the null hypothesis is true, it could just mean that there isn't enough data to be confident that the null hypothesis is false. </p>
| 371
|
hypothesis testing
|
Hypothesis tests for exponential growth?
|
https://stats.stackexchange.com/questions/649267/hypothesis-tests-for-exponential-growth
|
<p>Is a standard test applicable for situations involving exponential growth? I don't have a 'problem on my desk' that I need to solve. This is just a curiosity. Examples might include mitosis of bacteria or compounding interest for an investment.</p>
<p>In this context, we would have panel data for two different groups, observing the value, volume, cell count, etc. for each. We’d either be inferring the rates as latent variables, or otherwise testing the null hypothesis of rates being equal.</p>
<p>My naive approach would be</p>
<ol>
<li>Take the logarithm <span class="math-container">$\ln y$</span> of the response variable <span class="math-container">$y$</span> for groups one and two at different times.</li>
<li>Use regression to fit the model <span class="math-container">$\ln y =\beta_1t + \beta_2 \text{group}_2 t + \beta_0$</span>, conditioning on time and group two assignment.</li>
<li>Look at the t-test value for <span class="math-container">$\beta_2$</span> to determine how "unlikely" is a coefficient value of 0.</li>
</ol>
|
<p>With hypothesis testing for exponential growth, your proposed approach of using a log-transformed response variable and linear regression seems reasonable to me. What follows is a more detailed explanation and a refined version of the model considering the comments provided in the OP, particularly those of MattF.</p>
<h3>Model and Hypothesis</h3>
<p>Given two groups with panel data observed over time, we want to test if the growth rates differ between the two groups. The exponential growth can be modeled as:</p>
<p><span class="math-container">$$y_i(t) = y_{i0} e^{\beta_i t}$$</span></p>
<p>where <span class="math-container">$y_i(t)$</span> is the response variable (eg volume, cell count) for group <span class="math-container">$i$</span> at time <span class="math-container">$t$</span>, <span class="math-container">$y_{i0}$</span> is the initial value at <span class="math-container">$t = 0$</span>, and <span class="math-container">$\beta_i$</span> is the growth rate for group <span class="math-container">$i$</span>.</p>
<h3>Log-Transformation</h3>
<p>To linearise the exponential growth, we take the natural logarithm of the response variable:</p>
<p><span class="math-container">$$
\ln(y_i(t)) = \ln(y_{i0}) + \beta_i t
$$</span>
For two groups, the model can be expressed as:<br />
<span class="math-container">$$
\ln(y) = \beta_0 + \beta_1 t + \beta_2 (\text{group}_2) + \beta_3 (\text{group}_2 \cdot t)
$$</span>
where:</p>
<ul>
<li><span class="math-container">$\beta_0$</span> is the intercept (log-initial value for group 1),</li>
<li><span class="math-container">$\beta_1$</span> is the growth rate for group 1,</li>
<li><span class="math-container">$\beta_2$</span> is the difference in initial values between groups (group effect at <span class="math-container">$t = 0$</span>),</li>
<li><span class="math-container">$\beta_3$</span> is the interaction term representing the difference in growth rates between the two groups.</li>
</ul>
<h3>Hypothesis Testing</h3>
<p>We are particularly interested in testing whether the growth rates are the same for both groups. This corresponds to testing the null hypothesis:
<span class="math-container">$H_0: \beta_3 = 0$</span>
versus the alternative hypothesis:
<span class="math-container">$H_a: \beta_3 \neq 0$</span></p>
<h3>Regression Model</h3>
<p>The linear regression model becomes:
<span class="math-container">$$\ln(y) = \beta_0 + \beta_1 t + \beta_2 (\text{group}_2) + \beta_3 (\text{group}_2 \cdot t) + \epsilon
$$</span>
where <span class="math-container">$\epsilon$</span> is the error term.</p>
<h3>Statistical Inference</h3>
<ul>
<li>Estimate the parameters using ordinary least squares (OLS) regression.</li>
<li>Conduct a t-test on the coefficient <span class="math-container">$\beta_3$</span>. The t-test value will tell us how unlikely it is to observe the estimated <span class="math-container">$\beta_3$</span> under the null hypothesis <span class="math-container">$\beta_3 = 0$</span>.</li>
</ul>
<h3>Comments and Refinements</h3>
<ul>
<li>Including <span class="math-container">$\beta_2$</span> allows the model to account for different starting points between the two groups, which is realistic and often necessary.</li>
</ul>
<p>In many practical situations, the initial values of the response variable (eg cell count, investment amount) might differ between the two groups being compared. For example, in an experiment comparing the growth of bacteria under two different conditions, the initial number of bacteria might not be identical in both groups. The term <span class="math-container">$\beta_2$</span> in the model represents the difference in the initial values (starting points) between the two groups at time <span class="math-container">$t = 0$</span>. By including <span class="math-container">$\beta_2$</span>, we can account for this difference, ensuring that any observed difference in growth rates is not confounded by the difference in starting points. This adjustment makes the comparison more realistic and accurate.</p>
<ul>
<li>In randomised controlled trials (RCTs), if the initial volumes are equal or known, the model can be simplified by setting <span class="math-container">$\beta_2$</span> to 0 or adjusting the intercept accordingly.</li>
</ul>
<p>In RCTs researchers often have control over the initial conditions of the experiment. If the initial values of the response variable are equal or standardised across the groups at the start of the study, the term <span class="math-container">$\beta_2$</span>, which accounts for differences in initial values, may become unnecessary. In such cases, <span class="math-container">$\beta_2$</span> can be set to 0, simplifying the model to:
<span class="math-container">$$
\ln(y) = \beta_0 + \beta_1 t + \beta_3 (\text{group}_2 \cdot t) + \epsilon
$$</span>
Alternatively, if the initial values are known but different, the intercept <span class="math-container">$\beta_0$</span> can be adjusted accordingly to reflect these known initial values, effectively incorporating <span class="math-container">$\beta_2$</span> into the intercept term. This simplification reduces the complexity of the model while still accurately representing the growth dynamics.</p>
<ul>
<li>The model <span class="math-container">$\ln(y / y_0) = \beta_1 t + \beta_2 (\text{group}_2 \cdot t)$</span> suggested by MattF can be used if the initial values are normalised, providing a simplified approach to focus solely on the growth rates.</li>
</ul>
<h3>Final Model for Hypothesis Testing</h3>
<p><span class="math-container">$$
\ln(y) = \beta_0 + \beta_1 t + \beta_2 (\text{group}_2) + \beta_3 (\text{group}_2 \cdot t) + \epsilon
$$</span></p>
<p>The critical step is to test <span class="math-container">$\beta_3$</span> using the t-test, where:</p>
<ul>
<li><span class="math-container">$\beta_3$</span> represents the differential growth rate between the two groups.</li>
<li>A significant t-test for <span class="math-container">$\beta_3$</span> (p-value < 0.05, for example) would lead us to reject <span class="math-container">$H_0$</span> and conclude that the growth rates differ between the groups.</li>
</ul>
<h3>Practical Steps</h3>
<ul>
<li>Fit the regression model using software (e.g., R, Python).</li>
<li>Examine the t-test for the interaction term (<span class="math-container">$\beta_3$</span>).</li>
<li>Interpret the results to determine if there is a difference in the growth rates between the two groups.</li>
</ul>
<p>This approach provides a robust framework for testing hypotheses about exponential growth rates using linear regression on log-transformed data.</p>
<h3>Simulation and Visualisation</h3>
<p>In the following we simulate some applicable data and visualise the final model and also the normalised one.</p>
<pre class="lang-r prettyprint-override"><code>library(tidyverse)
library(broom)
# Set seed for reproducibility
set.seed(15)
# Simulate some data
n <- 100
time <- seq(0, 10, length.out = n)
group <- sample(0:1, n, replace = TRUE)
# Parameters for the simulation
beta_0 <- 1
beta_1_group1 <- 0.3
beta_1_group2 <- 0.5
initial_value <- 10
noise <- rnorm(n, mean = 0, sd = 0.5)
# Simulated response variable
y_group1 <- initial_value * exp(beta_1_group1 * time + noise)
y_group2 <- initial_value * exp(beta_1_group2 * time + noise)
y <- ifelse(group == 0, y_group1, y_group2)
# Create DataFrame
df <- data.frame(
time = time,
group = group,
y = y,
log_y = log(y),
log_y_y0 = log(y / initial_value)
)
# Adding interaction term
df <- df %>%
mutate(time_group = time * group)
# Fit the first model: log(y) = beta_0 + beta_1 * time +
# beta_2 * group + beta_3 * (group * time)
model1 <- lm(log_y ~ time + group + time_group, data = df)
summary_model1 <- summary(model1)
# Fit the second model: log(y / y0) = beta_1 * time +
# beta_2 * (group * time)
model2 <- lm(log_y_y0 ~ time + time_group, data = df)
summary_model2 <- summary(model2)
# Print summaries
summary_model1
</code></pre>
<p>which results in:</p>
<pre><code>Call:
lm(formula = log_y ~ time + group + time_group, data = df)
Residuals:
Min 1Q Median 3Q Max
-1.2459 -0.3199 -0.0007 0.3204 1.2385
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.39206 0.14358 16.660 < 2e-16 ***
time 0.27294 0.02474 11.034 < 2e-16 ***
group -0.18698 0.21940 -0.852 0.396
time_group 0.23555 0.03794 6.208 1.36e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.5469 on 96 degrees of freedom
Multiple R-squared: 0.8423, Adjusted R-squared: 0.8374
F-statistic: 170.9 on 3 and 96 DF, p-value: < 2.2e-16
</code></pre>
<p>And <code>summary_model2</code> results in:</p>
<pre><code>Call:
lm(formula = log_y_y0 ~ time + time_group, data = df)
Residuals:
Min 1Q Median 3Q Max
-1.25893 -0.31570 0.02893 0.33794 1.22882
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.009395 0.108411 0.087 0.931
time 0.284849 0.020381 13.977 <2e-16 ***
time_group 0.207612 0.019076 10.883 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.5461 on 97 degrees of freedom
Multiple R-squared: 0.8411, Adjusted R-squared: 0.8379
F-statistic: 256.8 on 2 and 97 DF, p-value: < 2.2e-16
</code></pre>
<p>Now the plots:</p>
<pre class="lang-r prettyprint-override"><code>ggplot(df, aes(x = time, y = log_y, color = factor(group))) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm", se = FALSE, aes(group = group)) +
labs(title = "Model 1: log(y) vs. time", x = "Time", y = "log(y)") +
theme_minimal() +
theme(legend.title = element_blank())
</code></pre>
<p><a href="https://i.sstatic.net/3HjRcylD.png" rel="noreferrer"><img src="https://i.sstatic.net/3HjRcylD.png" alt="enter image description here" /></a></p>
<p>This plot is linearised (as expected, given the log-transformed response) and clearly shows the different growth rates as different slopes on the plot. Now we show a plot for the untransformed data:</p>
<pre class="lang-r prettyprint-override"><code># Plot for Model 1 with untransformed data
ggplot(df, aes(x = time, y = y, color = factor(group))) +
geom_point(alpha = 0.5) +
geom_line(stat = "smooth", method = "lm", formula = y ~ exp(x),
se = FALSE, aes(group = group)) +
labs(title = "Model 1: y vs. time", x = "Time", y = "y") +
theme_minimal() +
theme(legend.title = element_blank()) +
scale_y_continuous(trans = 'log10') # Log scale for y-axis
</code></pre>
<p><a href="https://i.sstatic.net/iVuQqJzj.png" rel="noreferrer"><img src="https://i.sstatic.net/iVuQqJzj.png" alt="enter image description here" /></a></p>
| 372
|
hypothesis testing
|
Evaluating total effect on population after conducting a/b test
|
https://stats.stackexchange.com/questions/659770/evaluating-total-effect-on-population-after-conducting-a-b-test
|
<p>I conducted an A/B test where the treatment involved offering discounts to customers. However, after the test concluded, I discovered that the discount was not consistently displayed to customers in the test group due to an additional logic layer that determined whether the discount was significant enough to be shown.</p>
<p>To address this, I narrowed the analysis to only include customers for whom the discount was deemed significant enough to be displayed (in both the control and test groups). When I tested for statistical significance between these groups, I found the difference to be significant.</p>
<p>To estimate the overall effect on the entire population, I assumed that for the subset of customers who did not receive a visible discount, there would be no difference in conversion rates between the control and test groups (since they were effectively untreated). I then calculated the total effect by combining this assumption with the actual observed results for the group that received the visible discount.</p>
<p>Does this approach hold up from a statistical perspective?
Thanks</p>
|
<p>What actually happened in your experiment is that, instead of the "treatment" being "offer discount to customers", it was "offer discount to customers only if discount is <em>substantial</em>" (I would not use the word <em>significant</em> here on CV, to avoid confusion between colloquial and statistical definitions).</p>
<p>So you should eliminate all customers which were not offered a discount from your test group (because they did not get the treatment). Which seems to be what you did; but you say that you did so "<em>in both the control and test groups</em>". How would/could you do so in the control group? No-one in the control group was offered a discount? Anyway...</p>
<p>Now you are trying to guess-timate the effect you would have seen, if only all test group customers had been offered a discount. The only honest answer is that you have no idea (because you do not have this data).</p>
<p>You could however come up with a conservative (worst case) estimate, which is what you tried to do. For that, you used the conversion rate of the control group. But that is incorrect...</p>
<p>There is a reason why these test group customers were not offered a discount (I am guessing because the discount amount would have been below some threshold?). So, these customers are not randomly selected, but selected on some very specific criteria. Therefore they are not distributed as the control group (which is a random sample from all your possible customers).</p>
<p>The proper way to estimate a worst-case effect is to use whatever conversion rate was observed for these test group customers who did not get a discount (i.e. you assume that the discount would have had 0 effect).</p>
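<p>As a concrete sketch of that worst-case calculation (all numbers below are hypothetical, purely for illustration): keep the observed lift for the subgroup that saw the discount, assume zero effect for the subgroup that did not, and blend the two by subgroup size.</p>

```python
# Hypothetical illustration of the worst-case blended estimate:
# customers shown the discount keep their observed lift; customers whose
# discount was hidden are assumed to have zero treatment effect.

# Test-group customers who were actually shown the discount
n_shown, conv_shown = 4000, 0.12          # observed conversion with discount
conv_matched_control = 0.10               # conversion of comparable control customers

# Test-group customers whose discount was below the display threshold
n_hidden, conv_hidden = 6000, 0.08        # observed conversion, assumed unaffected

n_total = n_shown + n_hidden

# Worst-case overall test-group rate: observed rates in both subgroups
worst_case_rate = (n_shown * conv_shown + n_hidden * conv_hidden) / n_total

# Counterfactual "no treatment" rate for the same mix of customers
baseline_rate = (n_shown * conv_matched_control + n_hidden * conv_hidden) / n_total

worst_case_lift = worst_case_rate - baseline_rate
print(round(worst_case_rate, 4), round(baseline_rate, 4), round(worst_case_lift, 4))
```

<p>Note that the hidden-discount subgroup's own observed rate is used on both sides, so the worst-case lift comes entirely from the subgroup that actually received the treatment.</p>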
<p>Beyond this, you need to explain to your audience that there was a design issue, which modified the intended treatment. You need to report the results for what was actually used as a treatment. And you need to state that the overall effect on the whole population is only a "worst case" effect.</p>
<p>And you may suggest to continue the A/B test for a bit longer, to collect actual data on these specific customers which were not offered a discount (as opposed to assuming "no effect").</p>
| 373
|
hypothesis testing
|
Given big enough sample size, a test will always show significant result unless the true effect size is exactly zero. Why?
|
https://stats.stackexchange.com/questions/323862/given-big-enough-sample-size-a-test-will-always-show-significant-result-unless
|
<p>I am curious about a claim made in Wikipedia's article on <a href="https://en.wikipedia.org/wiki/Effect_size#Relationship_to_test_statistics" rel="noreferrer">effect size</a>.
Specifically:</p>
<blockquote>
<p>[...] a non-null statistical comparison will always show a statistically
significant results unless the population effect size is exactly zero</p>
</blockquote>
<p>I am not sure what this means/implies, let alone an argument to back it up. I guess, after all, an effect is a statistic, i.e., a value calculated from a sample, with its own distribution. Does this mean that effects are never due
to just random variation (which is what I understand it means to not be significant)? Do we then just consider whether the effect is strong enough -- having a high absolute value?</p>
<p>I am considering the effect I am most familiar with: the Pearson correlation coefficient $r$, which seems to contradict this. Why would any $r$ be statistically significant? If $r$ is small, our regression line is
$$ y = ax + b, \qquad a = r\left(\frac {s_y}{s_x}\right) = \epsilon $$ </p>

<p>For small $\epsilon$ the slope is close to 0, so an F-test will likely yield a confidence interval for the slope that contains 0. Isn't this a counterexample?</p>
|
<p>As @Kodiologist points out, this is really about what happens for large sample sizes. For small sample sizes there's no reason why you can't have false positives or false negatives. </p>
<p>I think the $z$-test makes the asymptotic case clearest. Suppose we have $X_1, \dots, X_n \stackrel{\text{iid}}\sim \mathcal N(\mu, 1)$ and we want to test $H_0: \mu = 0$ vs $H_A: \mu \neq 0$. Our test statistic is
$$
Z_n = \frac{\bar X_n - 0}{1 / \sqrt n} = \sqrt n\bar X_n.
$$</p>
<p>$\bar X_n \sim \mathcal N(\mu, \frac 1n)$ so $Z_n = \sqrt n \bar X_n \sim \mathcal N(\mu \sqrt n, 1)$. We are interested in $P(|Z_n| \geq \alpha)$.
$$
P(|Z_n| \geq \alpha) = P(Z_n \leq -\alpha)+ P(Z_n \geq \alpha)
$$
$$
= 1 + \Phi(-\alpha - \mu\sqrt n) - \Phi(\alpha - \mu \sqrt n).
$$
Let $Y \sim \mathcal N(0,1)$ be our reference variable. Under $H_0$ $\mu = 0$ so we have $P(|Z_n| \geq \alpha) = 1 - P(-\alpha \leq Y \leq \alpha)$ so we can choose $\alpha$ to control our type I error rate as desired. But under $H_A$ $\mu \sqrt n \neq 0$ so
$$
P(|Z_n| \geq \alpha) \to 1 + \Phi(\pm\infty) - \Phi(\pm\infty) = 1
$$
so with probability 1 we will reject $H_0$ if $\mu \neq 0$ (the $\pm$ is in case of $\mu < 0$, but either way the infinities have the same sign).</p>
<p>The point of this is that if $\mu$ <em>exactly</em> equals $0$ then our test statistic has the reference distribution and we'll reject 5% (or whatever we choose) of the time. But if $\mu$ is not exactly $0$, then the probability that we'll reject heads to $1$ as $n$ increases. The idea here is the <a href="https://en.wikipedia.org/wiki/Consistency_(statistics)#Tests" rel="noreferrer">consistency</a> of a test, which is that under $H_A$ the power (probability of rejecting) heads to $1$ as $n \to \infty$.</p>
<p>It's the exact same story with the test statistic for testing $H_0 : \rho = \rho_0$ versus $H_A: \rho \neq \rho_0$ with the Pearson correlation coefficient. If the null hypothesis is false, then our test statistic gets larger and larger in probability, so the probability that we'll reject approaches $1$.</p>
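<p>This consistency result is easy to check empirically. A minimal simulation sketch (illustrative values: true mean $\mu = 0.2$, known variance $1$, level $\alpha = 0.05$) shows the rejection rate of the two-sided $z$-test climbing toward $1$ as $n$ grows:</p>

```python
import numpy as np

# Empirical rejection rate of the two-sided z-test of H0: mu = 0
# at alpha = 0.05, when the true mean is mu = 0.2 and sigma = 1.
# (Illustrative numbers; any nonzero mu gives the same qualitative picture.)
rng = np.random.default_rng(0)
mu, reps = 0.2, 2000
crit = 1.959964  # upper 2.5% point of N(0, 1)

rates = {}
for n in (10, 100, 1000):
    x = rng.normal(mu, 1.0, size=(reps, n))
    z = np.sqrt(n) * x.mean(axis=1)          # test statistic, known sigma = 1
    rates[n] = float(np.mean(np.abs(z) >= crit))

print(rates)  # rejection rate increases with n, approaching 1
```

<p>With $\mu = 0.2$ the power is modest at $n = 10$, roughly a coin flip at $n = 100$, and essentially $1$ at $n = 1000$, exactly as the $\mu\sqrt n \to \infty$ argument predicts.</p>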
| 374
|
hypothesis testing
|
Equivalence test for two multivariate normal distributions?
|
https://stats.stackexchange.com/questions/55298/equivalence-test-for-two-multivariate-normal-distributions
|
<p>I'm trying to compare two samples from multivariate normal distributions to see if their distributions are equivalent (within a factor of epsilon).</p>
<p>The standard version of this test is the <a href="http://en.wikipedia.org/wiki/Energy_distance#Testing_for_equal_distributions" rel="nofollow">energy test</a>, but it is not useful for my purposes because it uses $P = Q$ as the null hypothesis, whereas I need $P \neq Q$ to be the null.</p>
<p>Some previous work has been done on this, mostly in <a href="http://books.google.ca/books/about/Testing_Statistical_Hypotheses_of_Equiva.html?id=4WitzyJFkyoC&redir_esc=y" rel="nofollow">this book</a>, but nothing for my situation.</p>
<p>How can I extend the interval inclusion method described in this book to use the energy statistic? Should the test statistic used to get the confidence interval be the original test statistic (i.e. based on the null hypothesis $P = Q$), or do I have to derive my own based on the new null hypothesis? I think deriving my own is definitely beyond my capabilities at this point.</p>
|
<p>If you actually have the confidence interval, your first option is right. It's the TOST. Please remember to take the $1-2\alpha$-confidence interval to get an $\alpha$-level test. If this confidence interval is a subset of your prespecified equivalence region, you may conclude that the distributions are equal up to your prespecified error. Otherwise, you cannot conclude anything.</p>
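<p>For a univariate mean difference the interval-inclusion (TOST) logic is short to sketch (illustrative data and margin below; the multivariate case with the energy statistic needs more machinery, but the decision rule is the same): build the $1-2\alpha$ confidence interval and check that it lies inside the prespecified equivalence region.</p>

```python
import numpy as np

# Interval-inclusion (TOST) sketch for a univariate mean difference,
# with simulated data. Alpha-level (0.05) equivalence test: conclude
# equivalence iff the (1 - 2*alpha) = 90% CI for the difference lies
# inside the prespecified equivalence region (-eps, eps).
rng = np.random.default_rng(1)
eps = 0.5                         # prespecified equivalence margin
z90 = 1.6449                      # z_{0.95}: half-width factor for a 90% CI

x = rng.normal(0.0, 1.0, 200)
y = rng.normal(0.1, 1.0, 200)     # true difference 0.1, inside the margin

diff = x.mean() - y.mean()
se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
lo, hi = diff - z90 * se, diff + z90 * se

equivalent = (lo > -eps) and (hi < eps)
print(round(lo, 3), round(hi, 3), bool(equivalent))
```

<p>If the interval is not contained in $(-\epsilon, \epsilon)$, you cannot conclude equivalence (and, as the answer says, you cannot conclude anything else either).</p>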
| 375
|
hypothesis testing
|
Why can't we accept the null hypothesis, but we can accept the alternative hypothesis?
|
https://stats.stackexchange.com/questions/587383/why-cant-we-accept-the-null-hypothesis-but-we-can-accept-the-alternative-hypot
|
<p>I understand it's reasonable only to not reject the null hypothesis. But why can we accept the alternative hypothesis?</p>
<p>What's the difference?</p>
|
<p>I'll start with a quote for context and to point to a helpful resource that might have an answer for the OP. It's from V. Amrhein, S. Greenland, and B. McShane. Scientists rise up against statistical significance. <em>Nature</em>, 567:305–307, 2019. <a href="https://doi.org/10.1038/d41586-019-00857-9" rel="noreferrer">https://doi.org/10.1038/d41586-019-00857-9</a></p>
<blockquote>
<p>We must learn to embrace uncertainty.</p>
</blockquote>
<p>I understand it to mean that there is no need to state that we <em>reject a hypothesis</em>, <em>accept a hypothesis</em>, or <em>don't reject a hypothesis</em> to explain what we've learned from a statistical analysis. The accept/reject language implies certainty; statistics is better at quantifying uncertainty.</p>
<p><em>Note</em>: I assume the question refers to making a binary reject/accept choice dictated by the significance (P ≤ 0.05) or non-significance (P > 0.05) of a p-value P.</p>
<p>The simplest way to understand hypothesis testing (NHST) — at least for me — is to keep in mind that p-values are probabilities about the data (not about the null and alternative hypotheses): Large p-value means that the data is consistent with the null hypothesis, small p-value means that the data is inconsistent with the null hypothesis. NHST doesn't tell us what hypothesis to reject and/or accept so that we have 100% certainty in our decision: hypothesis testing doesn't <em>prove</em> anything<sup>٭</sup>. The reason is that a p-value is computed by <em>assuming the null hypothesis is true</em> [3].</p>
<p>So rather than wondering if, on calculating P ≤ 0.05, it's correct to declare that you "reject the null hypothesis" (technically correct) or "accept the alternative hypothesis" (technically incorrect), don't make a reject/don't reject determination but report what you've learned from the data: report the p-value or, better yet, your estimate of the quantity of interest and its standard error or confidence interval.</p>
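<p>As a minimal sketch of that reporting style (made-up data, normal approximation for brevity): compute and report the estimate, its standard error, a confidence interval, and the p-value, rather than a reject/accept verdict.</p>

```python
import math

# Made-up sample; report estimate, SE, 95% CI and p-value for H0: mu = 0
# instead of a binary reject/accept decision (normal approximation).
xs = [2.1, 1.4, 2.9, 0.3, 1.8, 2.2, 0.9, 1.6, 2.4, 1.2]
n = len(xs)
mean = sum(xs) / n
var = sum((v - mean) ** 2 for v in xs) / (n - 1)
se = math.sqrt(var / n)

ci = (mean - 1.96 * se, mean + 1.96 * se)

# two-sided p-value via the standard normal CDF
stat = mean / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(stat) / math.sqrt(2))))

print(f"estimate={mean:.2f}, se={se:.2f}, "
      f"95% CI=({ci[0]:.2f}, {ci[1]:.2f}), p={p:.2g}")
```

<p>A reader given the estimate and interval can judge both the size of the effect and the uncertainty around it, which a bare "significant/not significant" label hides.</p>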
<p>٭ Probability ≠ proof. For illustration, see this story about a small p-value at CERN leading scientists to announce they <em>might</em> have discovered a brand new force of nature: <a href="https://theconversation.com/new-physics-at-the-large-hadron-collider-scientists-are-excited-but-its-too-soon-to-be-sure-157871" rel="noreferrer">New physics at the Large Hadron Collider? Scientists are excited, but it’s too soon to be sure</a>. Includes a bonus explanation of p-values.</p>
<p><em>References</em></p>
<p>[1] S. Goodman. A dirty dozen: Twelve p-value misconceptions. <em>Seminars in Hematology</em>, 45(3):135–140, 2008. <a href="https://doi.org/10.1053/j.seminhematol.2008.04.003" rel="noreferrer">https://doi.org/10.1053/j.seminhematol.2008.04.003</a></p>
<p>All twelve misconceptions are important to study, understand and avoid. But Misconception #12 is particularly relevant to this question: It's <em>not</em> the case that <em>A scientific conclusion or treatment policy should be based on whether or not the P value is significant.</em></p>
<p>Steven Goodman explains: "This misconception (...) is equivalent to saying that the magnitude of effect is not relevant, that only evidence relevant to a scientific conclusion is in the experiment at hand, and that both beliefs and actions flow directly from the statistical results."</p>
<p>[2] <a href="https://lakens.github.io/statistical_inferences/pvalue.html" rel="noreferrer">Using p-values to test a hypothesis</a> in <a href="https://lakens.github.io/statistical_inferences/index.html" rel="noreferrer">Improving Your Statistical Inferences</a> by Daniël Lakens.</p>
<p>This is my favorite explanation of p-values, their history, theory and misapplications. Has lots of examples from the social sciences.</p>
<p>[3] <a href="https://stats.stackexchange.com/questions/31/what-is-the-meaning-of-p-values-and-t-values-in-statistical-tests">What is the meaning of p values and t values in statistical tests?</a></p>
| 376
|
hypothesis testing
|
Why the probability of rejecting the null hypothesis tends to 1 in this case?
|
https://stats.stackexchange.com/questions/627209/why-the-probability-of-rejecting-the-null-hypothesis-tends-to-1-in-this-case
|
<p>Suppose we have an estimator <span class="math-container">$\hat\mu$</span> of population parameter <span class="math-container">$\mu$</span> and we know that</p>
<p><span class="math-container">$$\sqrt{N}(\hat\mu-\mu)\overset{d}{\to}N(0,1).$$</span></p>
<p>We are interested in the following hypothesis scheme:</p>
<p><span class="math-container">$$H_0: \mu=0$$</span>
<span class="math-container">$$H_1: \mu\ne0$$</span></p>
<p>Suppose that <span class="math-container">$\mu=\delta$</span> for some arbitrarily small <span class="math-container">$\delta>0$</span>. I need to show that the probability of rejecting <span class="math-container">$H_0$</span> tends to 1 as the sample size goes to <span class="math-container">$\infty$</span>. Why is this so? I think it has to do with the fact that the convergence in distribution implies (though I am not completely positive of that) that <span class="math-container">$\hat\mu\overset{p}{\to}\mu>0$</span>, and the probability of our statistic <span class="math-container">$\hat\mu$</span> being exactly <span class="math-container">$\mu$</span> is zero; thus with a large sample any value of <span class="math-container">$\hat\mu$</span> that is slightly different from <span class="math-container">$\mu$</span> will lead to the rejection of <span class="math-container">$H_0$</span>. Is this reasoning correct?</p>
<p>Any help is appreciated.</p>
<p>Thanks.</p>
|
<p>Sort of, but not quite: <span class="math-container">$\hat\mu$</span> being exactly zero isn't needed and you've left out some important information.</p>
<p>Consider the distributions. If <span class="math-container">$\mu=\delta$</span>, then <span class="math-container">$$\sqrt{N}(\hat\mu-\delta)\stackrel{d}{\to}N(0,1)$$</span> but if <span class="math-container">$\mu=0$</span> then <span class="math-container">$$\sqrt{N}(\hat\mu-0)\stackrel{d}{\to}N(0,1).$$</span>
So under <span class="math-container">$H_0$</span>, <span class="math-container">$\hat \mu$</span> is close to 0 in large samples and otherwise <span class="math-container">$\hat\mu$</span> is close to <span class="math-container">$\delta$</span> in large samples.</p>
<p>Now, one reasonable kind of test would be the test that rejects if <span class="math-container">$|\hat\mu_N|>C$</span> for some <span class="math-container">$C>0$</span>. If <span class="math-container">$0<C<\delta$</span>, then <span class="math-container">$P(|\hat\mu|<C)\to 1$</span> under <span class="math-container">$H_0$</span> and <span class="math-container">$P(\hat\mu>C)\to 1$</span> under <span class="math-container">$H_1$</span>. So for any test of <em>this</em> sort the probability of rejecting <span class="math-container">$H_0$</span> at any level <span class="math-container">$\alpha$</span> goes to 1.</p>
<p>We can then argue that since this test has power converging to 1 no-one would use a test that doesn't have power converging to 1, and so we are done.</p>
<p>We do need an argument of this kind, because you didn't specify what test you wanted to use and there are bad tests out there. Suppose we took a test that rejected <span class="math-container">$H_0$</span> if <span class="math-container">$\hat\mu\neq 0$</span>. For this test, the probability of rejection is zero for all <span class="math-container">$N$</span>. It's an unbiased test, but it's a bad test. Or we could reject <span class="math-container">$H_0$</span> if <span class="math-container">$\hat\mu\in [-0.42,\,0.69]$</span>. That's a bad test and whether the probability of rejection goes to 1 depends on <span class="math-container">$\delta$</span>. Or we could generate an independent <span class="math-container">$U$</span> from a uniform distribution on <span class="math-container">$[0,1]$</span> and reject if <span class="math-container">$U<\alpha$</span> for some specified level <span class="math-container">$\alpha$</span>. That's an unbiased exact test and extremely bad, and the probability of rejection doesn't go to 1.</p>
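<p>The contrast above is easy to simulate (illustrative values <span class="math-container">$\delta = 0.5$</span>, <span class="math-container">$C = 0.25$</span>): the threshold test's rejection probability climbs to 1 with <span class="math-container">$N$</span>, while the randomized "reject if <span class="math-container">$U < \alpha$</span>" test stays stuck at <span class="math-container">$\alpha$</span> no matter how much data we collect.</p>

```python
import numpy as np

# Rejection probabilities under mu = delta for two tests:
#   "good": reject when |mu_hat| > C, with 0 < C < delta (consistent)
#   "bad":  reject when an independent U ~ Uniform(0,1) is < alpha
# (illustrative parameter values)
rng = np.random.default_rng(2)
delta, C, alpha, reps = 0.5, 0.25, 0.05, 2000

for n in (20, 200, 2000):
    x = rng.normal(delta, 1.0, size=(reps, n))
    mu_hat = x.mean(axis=1)
    good = float(np.mean(np.abs(mu_hat) > C))      # threshold test
    bad = float(np.mean(rng.uniform(size=reps) < alpha))  # randomized "test"
    print(n, round(good, 3), round(bad, 3))
```

<p>The randomized test is exact and unbiased, yet useless: its rejection probability never leaves <span class="math-container">$\alpha$</span>, which is why a consistency argument is needed on top of size and unbiasedness.</p>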
| 377
|
hypothesis testing
|
What if both null hypothesis and alternative hypothesis are wrong?
|
https://stats.stackexchange.com/questions/365604/what-if-both-null-hypothesis-and-alternative-hypothesis-are-wrong
|
<p>In hypothesis testing, alternative hypothesis doesn't have to be the opposite of null hypothesis. For example, for $H_0: \mu=0$, $H_a$ is allowed to be $\mu>1$, or $\mu=1$. My question: <em>Why is this allowed</em>? What if in reality, $\mu=-1$ or $\mu=2$, in which case if one applies, say, likelihood ratio test, one may (wrongly) conclude that $H_0$ is accepted, or $H_0$ is rejected and hence $H_a$ is accepted? </p>
<p>What about this proposal: $H_a$ should always be the opposite of $H_0$? That is, $H_a: H_0$ is not true. This way, we are effectively testing only a single hypothesis $H_0$, rejecting it if the p-value is below a predefined significance level, and we do not have to test two hypotheses at the same time that can both be wrong. </p>
|
<p>What you've identified is one of the fundamental flaws with this approach to hypothesis testing: namely, that the statistical tests you are doing do not assess the validity of the statement you are actually interested in assessing the truth of.</p>
<p>In this form of hypothesis testing, $H_a$ is <em>never</em> accepted, you can only ever reject $H_0$. This is widely misunderstood and misrepresented by users of statistical testing.</p>
| 378
|
hypothesis testing
|
Why do we need alternative hypothesis?
|
https://stats.stackexchange.com/questions/386982/why-do-we-need-alternative-hypothesis
|
<p>When we do testing we end up with two outcomes.</p>
<p>1) We reject null hypothesis</p>
<p>2) We fail to reject null hypothesis. </p>
<p>We do not talk about accepting alternative hypotheses. If we do not talk about accepting alternative hypothesis, why do we need to have alternative hypothesis at all? </p>
<p><strong>Here is an update:</strong>
Could somebody give me two examples:</p>
<p>1) rejecting null hypothesis is equal to accepting alternative hypothesis </p>
<p>2) rejecting null hypothesis is not equal to accepting alternative hypothesis</p>
|
<p>There was, historically, disagreement about whether an alternative hypothesis was necessary. Let me explain this point of disagreement by considering the opinions of Fisher and Neyman, within the context of frequentist statistics, and a Bayesian answer.</p>
<ul>
<li><p><em>Fisher</em> - We do not need an alternative hypothesis; we can simply test a null hypothesis using a goodness-of-fit test. The outcome is a <span class="math-container">$p$</span>-value, providing a measure of evidence for the null hypothesis.</p></li>
<li><p><em>Neyman</em> - We must perform a hypothesis test between a null and an alternative. The test is such that it would result in type-1 errors at a fixed, pre-specified rate, <span class="math-container">$\alpha$</span>. The outcome is a decision - to reject or not reject the null hypothesis at the level <span class="math-container">$\alpha$</span>. </p>
<p>We need an alternative from a decision theoretic perspective - we are making a choice between two courses of action - and because we should report the power of the test
<span class="math-container">$$
1 - p\left(\textrm{Accept $H_0$} \, \middle|\, H_1\right)
$$</span>
We should seek the most powerful tests possible to have the best chance of rejecting <span class="math-container">$H_0$</span> when the alternative is true.</p>
<p>To satisfy both these points, the alternative hypothesis cannot be the vague 'not <span class="math-container">$H_0$</span>' one.</p></li>
<li><p><em>Bayesian</em> - We must consider at least two models and update their relative plausibility with data. With only a single model, we simply have
<span class="math-container">$$
p(H_0) = 1
$$</span>
no matter what data we collect. To make calculations in this framework, the alternative hypothesis (or model as it would be known in this context) cannot be the ill-defined 'not <span class="math-container">$H_0$</span>' one. I call it ill-defined since we cannot write the model <span class="math-container">$p(\text{data}|\text{not }H_0)$</span>.</p></li>
</ul>
| 379
|
hypothesis testing
|
What is the difference between a hypothesis test of $P(|Z|\geq z)$ versus $P(|Z|\geq |z|)$?
|
https://stats.stackexchange.com/questions/549552/what-is-the-difference-between-a-hypothesis-test-of-pz-geq-z-versus-pz
|
<p>For a Z test, in say a Normal Z test with known variance, what is the difference between rejection areas being represented by</p>
<p><span class="math-container">$P(|Z|\geq z)$</span></p>
<p>versus</p>
<p><span class="math-container">$P(|Z|\geq |z|)$</span></p>
<p>For <span class="math-container">$P(|Z|\geq z)$</span>, the rejection regions seem to correspond to the sets <span class="math-container">$Z\geq z$</span> and <span class="math-container">$Z\leq z$</span>. What about when <span class="math-container">$P(|Z|\geq |z|)$</span>?</p>
|
<p>Well, if you have chosen <span class="math-container">$z$</span> to be a positive number, there is no difference between the two tests and the <em>description</em> of the rejection region as the set
<span class="math-container">$$\{Z \geq z\}\cup \{Z \leq -z\}.\tag{1}$$</span>
Note that what you have <em>stated</em> in your question as the rejection region is missing a <span class="math-container">$-$</span> sign.</p>
<p>But if you have chosen <span class="math-container">$z$</span> to be a negative number, then you need to describe the rejection region as
<span class="math-container">$$\{Z \geq |z|\}\cup \{Z \leq -|z|\}.\tag{2}$$</span>
Since <span class="math-container">$(2)$</span> is the same as <span class="math-container">$(1)$</span> when <span class="math-container">$z>0$</span>, <em>you</em> should always use <span class="math-container">$(2)$</span> to describe the rejection region so that <em>you</em> never need be confused.</p>
| 380
|
hypothesis testing
|
Why type I error rate is rejection area in hypothesis testing?
|
https://stats.stackexchange.com/questions/561321/why-type-i-error-rate-is-rejection-area-in-hypothesis-testing
|
<p>In hypothesis testing, we set up a rejection area for rejecting <span class="math-container">$H_0$</span> in favor of <span class="math-container">$H_1$</span> with <span class="math-container">$\alpha$</span>. I don't understand why type I error (rejecting <span class="math-container">$H_0$</span>, when <span class="math-container">$H_0$</span> is actually true) is the area that we choose to reject <span class="math-container">$H_0$</span>. Why does it makes senses that we reject something from error rates?</p>
|
<p>By the <a href="https://en.wikipedia.org/wiki/Type_I_and_type_II_errors" rel="nofollow noreferrer">Wikipedia definition</a>,</p>
<blockquote>
<p>a type I error is the mistaken rejection of an actually true null hypothesis</p>
</blockquote>
<p><span class="math-container">$\alpha$</span> has the same value as the type 1 error, but you can distinguish the two by working through the following story.</p>
<p>In your hypothesis test of recovery rate of a drug, you first assume your <span class="math-container">$H_0$</span> is correct, which means the drug gives you the same recovery rate as not using the drug. In this case, you assume the recovery rate distribution of the drug is the same as the distribution of not using drug.</p>
<p>Then you calculate the average recovery rate of patients using the drug, and find a value <span class="math-container">$r$</span>. Next you look at where <span class="math-container">$r$</span> falls in the <span class="math-container">$H_0$</span> distribution, and you calculate <span class="math-container">$\alpha$</span> by summing the area under the distribution for values <span class="math-container">$\geq r$</span>; this <span class="math-container">$\alpha$</span> value is the total probability of observing a recovery rate <span class="math-container">$\geq r$</span> assuming that <span class="math-container">$H_0$</span> is correct. Let's say this value is 0.021, meaning that if <span class="math-container">$H_0$</span> is true, you have a 2.1% chance of observing <span class="math-container">$r$</span> or greater. That is quite unlikely, judging from the size of the value, but it is <em><strong>NOT IMPOSSIBLE</strong></em>.</p>
<p>Then you need to decide whether you want to reject <span class="math-container">$H_0$</span>. You can reject it outright, or you can reject it by saying "OK, <span class="math-container">$\alpha$</span> is smaller than a threshold which I decided to be 0.05, so let's reject <span class="math-container">$H_0$</span>."</p>
<p>Now we finally introduce "type 1 error".</p>
<p>We can reject <span class="math-container">$H_0$</span> because we think the chance of observing <span class="math-container">$r$</span> is small, so it is <em><strong>merely not likely</strong></em>, but <em><strong>not impossible</strong></em>. Therefore we can be wrong!</p>
<p>If we are wrong, then <span class="math-container">$H_0$</span> is <em><strong>actually true</strong></em>, and how likely is it that we are wrong (in error)? It is how likely it is for us to observe <span class="math-container">$r$</span> or greater than <span class="math-container">$r$</span>, which is <span class="math-container">$\alpha$</span>, which is 0.021. Therefore the error rate has the same value as <span class="math-container">$\alpha$</span>, and this error is called the type 1 error.</p>
| 381
|
hypothesis testing
|
Is a test with small effect size and high sensitivity meaningful or useful?
|
https://stats.stackexchange.com/questions/67676/is-a-test-with-small-effect-size-and-high-sensitivity-meaningful-or-useful
|
<p>From <a href="https://stats.stackexchange.com/a/2519/1005">a reply by John</a></p>
<blockquote>
<p>What is true is that trivially small effects can be found with very large sample sizes. That does not suggest that you shouldn't have such large sample sizes. What it means is that the way you interpret your finding is dependent upon the effect size and sensitivity of the test. <strong>If you have a very small effect size and highly sensitive test you have to recognize that the statistically significant finding may not be meaningful or useful.</strong></p>
</blockquote>
<p>Why "If you have a very small effect size and highly sensitive test you have to recognize that the statistically significant finding may not be meaningful or useful"?</p>
| 382
|
|
hypothesis testing
|
Statistical Significance Dependent Populations
|
https://stats.stackexchange.com/questions/69259/statistical-significance-dependent-populations
|
<p>I am hoping to understand best way to test statistical significance between 2 dependent population groups. </p>
<p>For example, consider a usability test. When 100 subjects were tested, 50 of them clicked (=50% click rate). However, 50 of the subjects were male, 40 of whom clicked for an 80% click rate for males.</p>
<p>The question is: is that 80% statistically significant? In other words, do men click more than the population as a whole? I think I need to use a paired $t$-test, but I am unsure what I would use as the mean, since these are all population proportions.</p>
|
<p>You seem to have gone on a convoluted route to asking for how to assess independent count data. You've got 100 independent items. There are 40 males that clicked, 10 males that didn't click, 10 non-males that clicked, and 40 non-males that didn't click. You can easily construct what is called a contingency table (below) from those data and do a $\chi^2$ (chi-square) test for independence. </p>
<pre><code> male non-male
click 40 10
no click 10 40
</code></pre>
<p>Searching for the chi-square test on the internet will show you the formulas, logic, and even online calculators that can solve the problem.</p>
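<p>As an illustration (not part of the original answer), the test can be run in Python with <code>scipy</code>, using the table above:</p>

```python
from scipy.stats import chi2_contingency

# Contingency table from the answer: rows = click / no click, columns = male / non-male
table = [[40, 10],
         [10, 40]]

# chi2_contingency applies Yates' continuity correction for 2x2 tables by default
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, dof, p)  # large statistic, 1 df, very small p-value
```

<p>The very small p-value indicates that clicking and being male are not independent in these data.</p>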
| 383
|
hypothesis testing
|
What statistic procedure to use for analyzing my data?
|
https://stats.stackexchange.com/questions/74684/what-statistic-procedure-to-use-for-analyzing-my-data
|
<p>I'm currently a fourth year university student. As part of my studies, I'm taking a class called Capstone, where students design and carry out a research project. An essential part of formulating this research is choosing a statistical procedure with which to analyze and present your results.</p>
<p>My study focuses on studying the increase of middle school students' awareness on the subject of bullying. </p>
<p>To do that, we will have a group of students who will take an initial questionnaire which has multiple choice questions about different situations and what type of bullying they represent. After that test, the same students will be giving a workshop where we will discuss bullying: the types that exist, how to recognize them and the negative impact they can have. After those workshops, the students will take another test, consisting of exactly the same questions as the first. </p>
<p>The goal is that, by comparing the answers on both tests, we will find that students answers on that second test correspond to a better identification and understanding of what bullying is.</p>
<p>My question the is: what type of statistical test would you recommend I use to sort and analyze the data I recollect?</p>
|
<p>I think you should do a simple pairwise difference comparison (before and after workshop) for each question separately.</p>
<p>Since you will probably use some Likert scale in your questionnaire (such as "Strongly agree", "Agree", etc.) your data will be ordinal.</p>
<p>You can use the Wilcoxon signed rank test, to estimate whether there was a significant change in the responses for each question after the workshop.</p>
<p>I think any serious statistical package will support it. I'm sure you will have for example SPSS at school.</p>
<p>If you want to be able to claim that it was the workshop that caused the change, I would recommend you to go a step further. Let the class fill out the questionnaire, then split the class in two parts randomly, and send only half of the class to the workshop. Then let the entire class repeat the questionnaire. The part of the class not taking your workshop will be your control group. You can check whether there will be significant difference even without the workshop.</p>
<p>(if the workshop offers some real additional value for the students, send the other half to the workshop after they have finished filling out the questionnaire a second time)</p>
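<p>A sketch of the Wilcoxon signed rank test in Python with <code>scipy</code>, using hypothetical before/after Likert responses for one question (the data below are invented for illustration):</p>

```python
from scipy.stats import wilcoxon

# Hypothetical 1-5 Likert responses of 10 students to one question,
# before and after the workshop
before = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]
after  = [4, 4, 3, 5, 4, 3, 4, 5, 3, 4]

# Paired test on the per-student differences
stat, p = wilcoxon(before, after)
print(stat, p)  # small p-value: responses shifted after the workshop
```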
| 384
|
hypothesis testing
|
Testing on non-normal distributed discrete values
|
https://stats.stackexchange.com/questions/79653/testing-on-non-normal-distributed-discrete-values
|
<p>I have the following observations</p>
<p>Observation ; Count </p>
<p>-1.67 ; 726 </p>
<p>18.33 ; 33</p>
<p>148.33 ; 15</p>
<p>This is obviously not normally distributed :S</p>
<p>How can I make a test for $H_0: \mu = 0$ or even better is it possible to make a confidence interval for the mean?</p>
|
<p>While the original distribution is clearly non-normal, the sample size is so large that the distribution of the mean will be approximately normal:</p>
<p><img src="https://i.sstatic.net/opAdp.png" alt="enter image description here"></p>
<p>(that's the distribution of the sample mean for 10000 samples of the same size as your sample from the empirical cdf. Which is to say, it's the bootstrap distribution of the sample mean).</p>
<p>Further, the distribution of the standard error of the mean is pretty tight, so that you could reasonably treat the null distribution as normal with $\sigma = s$. So you could do a z-test.</p>
<p>Or you could base a test directly off the bootstrap distribution above; it suggests a very small p-value.</p>
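<p>For concreteness, here is a sketch of that z-test in Python (reconstructing the sample from the counts in the question), together with a normal-approximation confidence interval for the mean:</p>

```python
import numpy as np
from scipy import stats

# Rebuild the sample from the tabulated (value, count) pairs in the question
values = np.repeat([-1.67, 18.33, 148.33], [726, 33, 15])
n = values.size  # 774 observations

# z-test of H0: mu = 0, treating the sample sd as sigma (reasonable at this n)
se = values.std(ddof=1) / np.sqrt(n)
z = values.mean() / se
p = 2 * stats.norm.sf(abs(z))

# 95% confidence interval for the mean, on the same normal approximation
ci = (values.mean() - 1.96 * se, values.mean() + 1.96 * se)
print(z, p, ci)
```

<p>The p-value comes out small (on the order of 0.005), and the confidence interval excludes zero, in line with the bootstrap distribution pictured above.</p>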
| 385
|
hypothesis testing
|
Am I doing my quantitative study on GitHub right?
|
https://stats.stackexchange.com/questions/79957/am-i-doing-my-quantitative-study-on-github-right
|
<p>I am trying to do a small case study in 24 hours + change.</p>
<p>For a dataset, I'm using <a href="http://ghtorrent.org" rel="nofollow">GHTorrent.org</a>.</p>
<p>A general assumption about virtual work is that richer media leads to greater productivity. I have decided to focus on <a href="http://github.com" rel="nofollow">GitHub</a> and to examine the effects of @mentions on issue resolution.</p>
<p>My hypothesis is that mentions are correlated with shorter time to issue resolution.</p>
<p>To see if this is true, I figure I can take a look at when an issue opened, when it closed, and how many mentions there were divided by how many comments there were.</p>
<p>Does this sound reasonable? I am a final-year master student and this is for a small assignment to get us familiar with writing scientific papers. Any advice is much appreciated.</p>
|
<p>Seems like an interesting project, and your approach does seem reasonable, though with some caveats. Here are some thoughts.</p>
<p>First, you will encounter right-censoring, that is, there will be reported bugs that are still not resolved past the end of your dataset. You <em>absolutely cannot</em> just use the resolution time of resolved bugs and ignore the unresolved bugs - this will give biased results.</p>
<p>You could strictly enforce a set observation time (e.g., one year past the date the issue is opened); check the resolution status as a binary variable (resolved/not resolved) at the end of that period; and then run a logistic regression. However this potentially loses a lot of information, most obviously status after the cut-off period, and since things like number of mentions (and rate of mentions) during the observation period vary over time, you lose that information as well. Given that you only have 24 hours or so, I'd go with this.</p>
<p>Probably the "most correct" way to approach this is to treat it as a survival analysis problem, and use something like a Cox proportional hazards model with time-varying covariates, with bugs that are still not resolved at the end of your dataset treated as censored. Your data would be something like one (binary - resolved/not resolved) observation per day/week/month per bug, and the relevant measured covariates for that day/week/month. I'm not too familiar with these, so I'll avoid further discussion.</p>
<p>Second, don't use a ratio as an explanatory variable when the two underlying variables are available! Use the two underlying variables and an interaction term instead. Consider the models<br>
$Y = \beta_1A + \beta_2B+\beta_{12}AB$<br>
$Y = \beta_{12}AB$<br>
The second model is implied when you only utilize the ratio. There are quite a few good <a href="https://stats.stackexchange.com/search?tab=votes&q=interaction%20main%20effect">answers</a> to the question of excluding main effects while including an interaction.</p>
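<p>The point about ratios versus interactions can be illustrated with simulated data (a sketch; the variables and coefficients below are invented). A model with the two underlying variables plus their interaction nests the product-only model, so it always fits at least as well:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
A = rng.uniform(0, 1, n)                 # first underlying variable
B = rng.uniform(0, 1, n)                 # second underlying variable
y = 1.0 * A + 2.0 * B + 0.5 * A * B + rng.normal(0, 0.3, n)

# Full model: main effects plus interaction term
X_full = np.column_stack([A, B, A * B])
beta, rss_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Product-only model: what using only a composite of A and B implies
X_prod = (A * B).reshape(-1, 1)
_, rss_prod, *_ = np.linalg.lstsq(X_prod, y, rcond=None)

print(beta)                      # recovers roughly (1.0, 2.0, 0.5)
print(rss_full[0], rss_prod[0])  # full model has the smaller residual sum of squares
```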
| 386
|
hypothesis testing
|
Statistical test for a random die roll?
|
https://stats.stackexchange.com/questions/80555/statistical-test-for-a-random-die-roll
|
<p>Suppose I roll a six-sided die 1000 times and write down the number of times each face comes up. How do I test whether the die is fair? Can I use a chi-squared test where the expected number of each face is 1000/6 ≈ 167?</p>
<p>There also appears to be a <a href="http://en.wikipedia.org/wiki/Multinomial_test" rel="nofollow noreferrer">multinomial test</a>, but that seems less likely to be baked into stats packages and software.</p>
<p><a href="https://stats.stackexchange.com/questions/80499/can-i-use-a-chi-squared-test-to-compare-two-empirical-distributions">Related question</a>.</p>
|
<p>Apply the chi-square goodness-of-fit test with (number of possible cases − 1) degrees of freedom and the discrete uniform distribution as the null hypothesis, as you pointed out. This is a textbook example of that test.</p>
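<p>A quick sketch in Python with <code>scipy</code>, using hypothetical counts from 1000 rolls (by default <code>chisquare</code> takes the expected counts to be uniform):</p>

```python
from scipy.stats import chisquare

# Hypothetical face counts from 1000 rolls of a six-sided die
observed = [166, 168, 170, 160, 172, 164]

# Expected counts default to 1000/6 per face; df = 6 - 1 = 5
stat, p = chisquare(observed)
print(stat, p)  # small statistic, large p: no evidence the die is unfair
```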
| 387
|
hypothesis testing
|
Simple Question About Hypothesis Testing
|
https://stats.stackexchange.com/questions/82752/simple-question-about-hypothesis-testing
|
<p>I'm trying to test whether or not it's true that people of similar heights tend to marry each other, and I'm a bit confused how exactly to go about it.</p>
<p>I have a data set with 96 pairs of the heights of husband and wife, so I thought I could just take the mean of the differences in height between husband and wife (specifically, height of the wife subtracted from height of the husband), which comes out to 10.43 cm, and then do hypothesis testing with that as my test-statistic. </p>
<p>This is where I'm confused, however. I thought I could have the null hypothesis be $|\mu| \geq 15$ and the alternative hypothesis $|\mu|<15$ where $\mu$ is the average difference in height between husband and wife. Calculating the $t$-value for $\hat{\mu}=10.43$ with respect to 15 yields -6.77. This is obviously less than the critical value so can I therefore reject the null hypothesis and conclude that the heights of husband and wife are, on average, similar?</p>
<p>Thank you.</p>
|
<p>One question: why did you set up your null hypothesis like this? (How did you arrive at a mean of 15?)</p>
<p>Basically, for any statistical test, the null hypothesis is defined in advance. In your case, a natural null hypothesis would be <span class="math-container">$\mu=0$</span> (no average height difference); you would then calculate the $t$-statistic and, based on the p-value, either reject or fail to reject the null hypothesis.</p>
| 388
|
hypothesis testing
|
Rejection region or p-value
|
https://stats.stackexchange.com/questions/82754/rejection-region-or-p-value
|
<p>I am writing a research paper where I am using an hypothesis test.</p>
<p>Is it better to give a p-value for this test or use a 5% two-tailed rejection region?</p>
<p>Thanks in advance!</p>
|
<p>In situations like these - it's best to look at things from the reader's perspective. Would the reader care about the actual value of the test statistic? Do you want the reader to know that the $T$-statistic is $2.79$ or $F = 8.91$? In most cases, the reader would not be interested in these values, so just give the p-value along with the test that you used and an estimate of the magnitude of your effect size.</p>
| 389
|
hypothesis testing
|
How can I determine which of two complementary hypotheses should be the null?
|
https://stats.stackexchange.com/questions/88200/how-can-i-determine-which-of-two-complementary-hypotheses-should-be-the-null
|
<p>How can you determine the direction of the test by looking at a pair of hypotheses? How can you tell which direction (or no direction) to make the hypothesis by looking at the problem statement (research question)?</p>
|
<p>Since we test assuming that the null is true, the null has to include a way to determine a sampling distribution. This means that if one of the hypotheses includes an $=$ and the other does not, then the one with the $=$ is the null. For example if the hypotheses are $\mu \ge 0$ and $\mu < 0$ then the 1st is the null and the 2nd is the alternative.</p>
<p>If there is not a clear $=$ then the null generally represents the ideas of "no difference", "no change", or "status quo" while the alternative is generally one of "there is a difference" or "things have changed".</p>
<p>The alternative is usually what we would like to show and the null is what we want to reject, so there are cases where the equality can be reversed (with some additional assumptions). For example, we may want to show that a new, cheaper method is just as effective as the old method, so we are trying to show equality. For this we do an equivalence study: without infinite data we cannot prove exact equality, but we can show equivalence if we decide on a range of values we consider to be equivalent. In this case the equality condition is part of the alternative that we are trying to show, and the null is a difference. In practice we test with the null at the equivalence boundaries, so the null still contains equality; usually we simply check whether the confidence interval lies within the equivalence range.</p>
<p>Your question is very general and the answer can change with the details. If you want a more precise answer then we need a more detailed question.</p>
| 390
|
hypothesis testing
|
Which statistical test?
|
https://stats.stackexchange.com/questions/90339/which-statistical-test
|
<p>I am designing my study, but I am a little stuck in which test I eventually should use. I have a between-subject design with 6 conditions (let's say A, B, C, D, E, F), with each having 6 responses (let's call these a, b, c, d, e, f), on a 7-point scale:</p>
<pre><code>IV (condition) -> DV
A -> a, b, c, d, e, f
B -> a, b, c, d, e, f
C -> a, b, c, d, e, f
D -> a, b, c, d, e, f
E -> a, b, c, d, e, f
F -> a, b, c, d, e, f
</code></pre>
<p>So, each condition has 6 dependent variables. I don't want to test difference between the conditions, but only the effect of the condition on the different DV's. For example, when someone is in condition A, I would expect them to score higher on DV a than on DV b, c, d, e, and f. When someone is in condition B, I would expect them to score higher on another DV, and so forth.</p>
<p>Later on, I expect also some moderator effects, but I guess that's only adding interaction terms to the model.</p>
<p>Can I use separate tests for each condition? Or should I test it in one big model? What kind of test would be most suitable for this. Most tables that should help you choose a statistical test only show options with 1 DV.</p>
|
<p>Can you please clarify your variables and their measurement scales?</p>
<p>Do the <strong>6 conditions (let's say A, B, C, D, E, F)</strong> represent 6 input variables or 6 categories of one input variable?</p>
<p><strong>Each having 6 responses (let's call these a, b, c, d, e, f), on a 7-point scale</strong>: is this one outcome variable measured on a 7-point Likert scale, or 6 outcome variables (each measured on a 7-point Likert scale)?</p>
<p>Techniques which can be used:</p>
<p>6 inputs, 6 outcomes - multivariate multiple regression or MANCOVA</p>
<p>1 input (with 6 categories), 6 outcomes - multivariate regression or MANOVA</p>
<p>6 inputs, 1 outcome - multiple regression </p>
<p>1 input (with 6 categories), 1 outcome - simple regression with dummy variable or ANOVA</p>
| 391
|
hypothesis testing
|
How to do this test?
|
https://stats.stackexchange.com/questions/90859/how-to-do-this-test
|
<p>Suppose I have a sample $\{(x_{1i}, x_{2i}, \dots, x_{mi}), i=1,\dots, n\}$of $m$ unknown random variables $X_1, X_2,\dots, X_m$. </p>
<p>How can I test if $X_1 = X_2 =\dots = X_m$?</p>
<p>Furthermore, if there is a nonrandom explanatory variable $Y$ such that $Y=y_j$ for $X_j, j=1, \dots, m$, how can I test if $Y$ has an influence on the difference between $X_1, X_2,\dots, X_m$?</p>
|
<p>The first question is addressed by one-way ANOVA (between- or within-subjects, depending on whether your $m$ samples are independent or linked). </p>
<p>Are the values of your explanatory variable pairwise comparable? If they are, you could, for example, reorder your samples in such a way that $y_1\leq\dots\leq y_m$ and use the Jonckheere trend test to check whether $\mathrm{med}\, X_1 \leq \dots \leq \mathrm{med}\, X_m$.</p>
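<p>For the first question, a one-way ANOVA for independent samples is a one-liner in Python with <code>scipy</code> (a sketch with made-up data):</p>

```python
from scipy.stats import f_oneway

# Three hypothetical independent samples with clearly different means
x1 = [1, 2, 3, 4, 5]
x2 = [11, 12, 13, 14, 15]
x3 = [21, 22, 23, 24, 25]

F, p = f_oneway(x1, x2, x3)
print(F, p)  # large F, tiny p: reject equality of the three means
```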
| 392
|
hypothesis testing
|
Hypothesis testing with the geometric distribution for dummies
|
https://stats.stackexchange.com/questions/93244/hypothesis-testing-with-the-geometric-distribution-for-dummies
|
<p>I'm new to stats and I need some help. Can anybody tell me, in the most "beginner friendly" way, how to perform hypothesis testing with a geometric distribution for 2 samples?
please take into account that, until a month ago, Statistics for me was mean, mode and std dev... </p>
|
<p>I'm going to assume you want to test equality of the $p$ parameter against the two sided alternative.</p>
<p>The usual way to construct a test would be to make a test statistic from the likelihood ratio, but it's not the only choice. </p>
<p>The LRT takes the ratio of the likelihood for the null to the likelihood for the alternative. </p>
<p>To go into details of the calculation, it would help if you said which of the two common forms of <a href="http://en.wikipedia.org/wiki/Geometric_distribution" rel="nofollow">geometric</a> you were looking at - the "number of failures" or the "number of trials" version. (Otherwise you'll be reading through two sets of explanations only one of which is of interest to you.)</p>
<p>Frankly, if I was trying to do such a problem, I'd actually do it using a GLM (which will take care of the LRT calculations). </p>
<p>Specifically, I'd do it in R using the negative binomial functions for GLMs supplied in the package <code>MASS</code> (which comes with R) such as <code>glm.nb</code> to fit the GLM (and possibly <code>anova.negbin</code> to do testing, though in your particular example one can get it from the glm summary output). For those, you can supply the parameter of the negative binomial which specifies the geometric.</p>
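<p>For concreteness, here is a direct sketch of the likelihood ratio test for the "number of trials" geometric in Python (the samples and parameter values are invented; under the null of a common p, the LRT statistic is approximately chi-square with 1 df):</p>

```python
import numpy as np
from scipy.stats import chi2

def geom_loglik(x):
    # Maximized log-likelihood for the "number of trials" geometric: p_hat = 1/mean
    n, s = len(x), int(np.sum(x))
    p_hat = n / s
    return n * np.log(p_hat) + (s - n) * np.log(1 - p_hat)

rng = np.random.default_rng(1)
x1 = rng.geometric(0.5, size=200)  # sample 1, true p = 0.5
x2 = rng.geometric(0.2, size=200)  # sample 2, true p = 0.2

# 2 * (separate-p log-likelihood - common-p log-likelihood) ~ chi2(1) under H0
lrt = 2 * (geom_loglik(x1) + geom_loglik(x2) - geom_loglik(np.concatenate([x1, x2])))
p_value = chi2.sf(lrt, df=1)
print(lrt, p_value)  # large statistic, small p-value: the two p parameters differ
```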
| 393
|
hypothesis testing
|
What kind of test should I use?
|
https://stats.stackexchange.com/questions/95235/what-kind-of-test-should-i-use
|
<p>I am designing a study for my project.
I wanted to test if music affects reading comprehension. This study will be a between group design. The independent variable is type of music and the dependent variable is the score on a reading comprehension task. </p>
<p>Half of participants will be randomly assigned to a room with classical (soft) music, and the remaining half of participants will be assigned to the room with rock (hard) music.</p>
<p>Would this be a t-test? If so, what kind of t-test should I use?</p>
<p>Or is this a one way ANOVA? </p>
|
<p>Your design involves two independent samples, so you will be conducting an <em>unpaired</em> test. An unpaired <em>t</em> test would be appropriate to infer mean difference. You could also use a rank sum test, and if the two groups have similar (univariate) distributions of scores, you could infer median difference.</p>
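<p>Both tests are readily available in <code>scipy</code>; a sketch with invented scores:</p>

```python
from scipy.stats import ttest_ind, mannwhitneyu

# Hypothetical reading comprehension scores for the two independent groups
classical = [80, 85, 78, 90, 88]
rock      = [70, 72, 68, 75, 71]

t, p_t = ttest_ind(classical, rock)  # unpaired t test (equal variances assumed)
u, p_u = mannwhitneyu(classical, rock, alternative="two-sided")  # rank sum test
print(p_t, p_u)
```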
| 394
|
hypothesis testing
|
proper statistical test to check differences
|
https://stats.stackexchange.com/questions/95431/proper-statistical-test-to-check-differences
|
<p>I have 3 replicates of a value in different individuals. Each of these values is a ratio $a/b$, where $a$ and $b$ are each means from a sample pool of $n=20$. Thus there are 3 values of the ratio $a/b$ for each individual. When comparing for differences, is Student's t-test correct?</p>
<pre><code> 1st sampling - 2nd - 3rd - 4th
individual 1 - 0.7164165213 - 0.6057539083 - 0.5242174359 - 0.7670756899
individual 2 - 0.6540839702 - 0.8140762612 - 0.6057645321 - X
individual 3 - 0.611493629 - 0.7270260938 - 0.5255522645 - 0.9964242368
</code></pre>
<p>as said each sampling value comes from A/B, whereas A and B are means for treated and non treated for a given variable with n ranging from 15-25. </p>
|
<p>This question is not clear at all, in particular what the ratios mean. A sample of the code/structure of the data would help provide proper context and a definition of the problem.</p>
<p>Generally speaking, when you have more than two means to compare, you should not use the t-test because you inflate your Type I error. If you want to know which pairs of means are statistically different from one another, you should be using a factorial design and the resulting ANOVA table.</p>
| 395
|
hypothesis testing
|
How to compare two samples without knowledge of the distribution
|
https://stats.stackexchange.com/questions/95963/how-to-compare-two-samples-without-knowledge-of-the-distribution
|
<p>lets say I have the mean of two different measurements of something, a and b. I also have the standard deviation on a, and b, but I do not have access to all the individual measurements of a and b. Is it possible to compare the two values to see if the means are statistically different (i.e., reject the null hypothesis that they are the same)? Any help would be greatly appreciated</p>
|
<p>If, as you now say, you don't know the sample sizes but you have sample means and sample standard deviations, the best you can guarantee is say that the sample sizes are at least two (since if either were smaller than two you couldn't compute standard deviations).</p>
<p>So you can at least test for that 'worst case' and see if there's a difference on that basis (that would be a <a href="http://en.wikipedia.org/wiki/Student%27s_t-test" rel="nofollow">two-sample t-test</a>).</p>
<p>[A possible alternative is to figure out which combinations of sample sizes in the two samples would imply a significant difference.]</p>
<p>If you know <em>anything</em> about sample sizes - lower bounds, upper bounds, likely ranges, their ratio, whatever, that information is likely to be at least somewhat useful.</p>
<hr>
<p>As you suggest, the t-test relies on the normality assumption. But without the individual observations you will pretty much have to make some kind of assumption, and it's at least somewhat robust to the assumption, more so if the true sample sizes are not actually very small. </p>
| 396
|
hypothesis testing
|
What test should I use to see correlations between overlapping groups and a score?
|
https://stats.stackexchange.com/questions/96381/what-test-should-i-use-to-see-correlations-between-overlapping-groups-and-a-scor
|
<p>To put it simply, I have the courses students have taken and scores on an exam. The students come from different course backgrounds. (Some have taken only course A some only D some A and D some A and B some B C and D etc etc)</p>
<p>What can test can I use to account for this?</p>
|
<p>You want to see if grade relates to previous courses taken. That puts you in the regression group. But some students have taken one earlier course, some more than one. There are a couple options. </p>
<p>If the total number of courses that you are interested in is small and your sample is relatively large, then you could look at all combinations of courses. You list 4 courses, and if that reflects reality, then there are 16 combinations of courses. If you have enough people and they are spread over those combinations, you could look at all 16.</p>
<p>If there are many more courses, then you will have to combine them into groups. Also, if some of the combinations are rare then you may have to combine groups.</p>
<p>Another option is to ignore the multiplicity of courses and just look at the effect of each course. </p>
<p>The first option is similar to including all interactions. The second is like looking only at main effects.</p>
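<p>A small sketch of the two options with simulated data (the course indicators and effect sizes below are invented):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
courses = rng.integers(0, 2, size=(n, 4))  # binary indicators for courses A-D

# Option 1: one group per distinct combination of courses (up to 2^4 = 16)
combo_id = courses @ np.array([8, 4, 2, 1])  # encode each row as 0..15
groups, counts = np.unique(combo_id, return_counts=True)
print(len(groups), counts.min())  # check whether any combination is rare

# Option 2: main effects only - regress the exam score on the four indicators
score = courses @ np.array([5.0, 3.0, 0.0, 1.0]) + 60 + rng.normal(0, 5, n)
X = np.column_stack([np.ones(n), courses])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(beta)  # intercept followed by one estimated effect per course
```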
| 397
|
hypothesis testing
|
Hypothesis testing for non-linearity
|
https://stats.stackexchange.com/questions/96803/hypothesis-testing-for-non-linearity
|
<p>I have $n_X$ observations of variable X, $n_Y$ of variable Y, and $n_Z$ of variable Z. I'd like to test the hypothesis that the true mean of $X$ is equal to the sum of the means of Y and Z.
$$H_0: \mu_X - (\mu_Y+\mu_Z) = 0$$</p>
<p><strong>Initial thoughts</strong></p>
<p>I can use the sample means to define an estimator $\hat{\gamma} = \bar{y}_X - (\bar{y}_Y+\bar{y}_Z)$. Can I estimate the variance using $$\hat{\sigma}^2(\hat{\gamma}) = \sigma^2 (1/n_1+1/n_2+1/n_3)?$$
If so, is the test statistic $\hat{\gamma}/\sqrt{\hat{\sigma}^2(\hat{\gamma})}$ t-distributed?</p>
|
<p>Do you know $\sigma^2$? If not, then you will need to estimate it from the data (and assume that all 3 variances are equal). So your final formula will probably be a little more complicated than you show, but it can be reduced to something that is (at least approximately) t-distributed (though probably computed using $t^2=F$) if the 3 variables are normal or the sample sizes are large enough. One way to test this is with the general linear hypothesis (search for it).</p>
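<p>If the variances are not assumed equal, a Welch-style version of the proposed statistic is straightforward; here is a sketch with simulated data (the means, variances, and sample sizes are invented, and the statistic is treated as approximately normal given the sample sizes):</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(10, 1, 100)  # X, true mean 10
y = rng.normal(2, 1, 100)   # Y, true mean 2
z = rng.normal(3, 1, 100)   # Z, true mean 3

# gamma_hat = mean(X) - (mean(Y) + mean(Z)), with variances allowed to differ
gamma = x.mean() - (y.mean() + z.mean())
se = np.sqrt(x.var(ddof=1)/len(x) + y.var(ddof=1)/len(y) + z.var(ddof=1)/len(z))

stat = gamma / se
p = 2 * stats.norm.sf(abs(stat))  # H0: mu_X - (mu_Y + mu_Z) = 0
print(gamma, stat, p)
```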
| 398
|
hypothesis testing
|
Is a data point significantly larger than a certain distribution average?
|
https://stats.stackexchange.com/questions/99803/is-a-data-point-significantly-larger-than-a-certain-distribution-average
|
<p>I have a simulated distribution with mean 12.53% and standard deviation 11.83%. The sample size is big enough (10,000) to assume it is a Normal distribution. </p>
<p>How do I properly test if the value "26.05%" is significantly larger than the mean 12.53%?
Can anyone please help me to write the null hypothesis, as well as the test, or just give me any reference that I'm not being able to find (or most probably to recognize) on the web? </p>
|
<p>One fairly simple way is to create a predictor series (x) and place '0' in it except at the point where the specific value is to be challenged, where you place a '1'. Estimate a regression model (OLS) between the y values (profit) and the x series. The t value for the predictor series will test the hypothesis that the challenged value is significantly different from the mean profit (the mean of y excluding the challenged value). It is clear that if you select a value close to the overall mean, the t ratio will be approximately zero, suggesting that the null hypothesis should not be rejected.</p>
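<p>The t-ratio from that dummy-variable regression is essentially the standardized distance of the challenged value from the mean of the remaining observations; here is a sketch with simulated data matching the question's figures:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sims = rng.normal(12.53, 11.83, 10000)  # simulated distribution from the question

challenged = 26.05
# Standardized distance of the challenged value from the simulated distribution;
# essentially the t-ratio of the dummy-variable regression described above
zscore = (challenged - sims.mean()) / sims.std(ddof=1)
p_one_sided = stats.norm.sf(zscore)
print(zscore, p_one_sided)
```

<p>Here the z-score is only about 1.1, so 26.05% is well within the range one would expect from this distribution, and the null hypothesis would not be rejected.</p>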
| 399
|