| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
probability distributions
|
Expected distribution of random draws
|
https://stats.stackexchange.com/questions/3650/expected-distribution-of-random-draws
|
<p>I have a two-part question.</p>
<p>First Part:</p>
<p>I have an urn with 20 balls, 2 of those balls are purple, and I pull out 6 balls at random. I witness 100 realizations of this process. </p>
<p>Given the observed frequency at which I drew purple balls, how do I determine if I am really pulling balls out at random? Also, given that there are 2 purple balls, I have a hunch that if purple balls are pulled out disproportionately, both purple balls would be pulled out (i.e. I'm more interested in seeing if 2 purple balls are pulled out disproportionately than I am if 1 purple ball is pulled out more frequently than expected).</p>
<p>Second Part:</p>
<p>I have an urn with a variable number of balls, a variable number of purple balls within that urn, and a variable number of draws. I witness 100 realizations of this process, and I observe in each realization how many balls there were, how many of those balls were purple, and how many balls I drew from the urn.</p>
<p>Same questions; Given the observed frequency at which I drew purple balls, how do I determine if I am really pulling balls out at random? Again I'm more interested to see if higher frequencies of purple balls are disproportionately drawn than I am to see if 1 purple ball being drawn happens more than expected by chance.</p>
<p>(I'm open to suggestions for title of question and tags)</p>
<p><strong>Edit:</strong></p>
<p>Srikant suggested I may need to make distributional assumptions about my variables, which I am willing to do.</p>
<p>Let's say the number of balls in the urn is uniform between 20 and 30, the number of purple balls is uniform between 0 and 4, and the number of draws is uniform between 6 and 12.</p>
<p>See my <a href="https://stats.stackexchange.com/questions/3650/expected-distribution-of-random-draws/3873#3873">answer</a> that describes my motivation for asking this question.</p>
|
<p>The expected frequency of observing $k$ purple balls in $d$ draws (without replacement) from an urn of $p$ purple balls and $n-p$ other balls is obtained by counting and equals</p>
<p>$$\frac{{p \choose k} {n-p \choose d-k} }{{n \choose d}}.$$</p>
<p>Test a sample of (say) $100$ such experiments with a chi-squared statistic, using these probabilities as the reference.</p>
<p>In the second case, integrate over the prior distributions. There is no nice formula for that, but the integration (actually a sum for these discrete variables) can be carried out exactly if you wish. In the example given in the edited section -- independent uniform distributions of $n$ from $20$ to $30$ (thus having a one in 11 chance of being any value between $20$ and $30$ inclusive), of $p$ from $0$ to $4$, and of $d$ from $6$ to $12$ -- the result is a probability distribution on the possible numbers of purples ($0, 1, 2, 3, 4$) with values</p>
<p>$0: 69728476151/142333251060 = 0.489896$</p>
<p>$1: 8092734193/24540215700 = 0.329774$</p>
<p>$2: 36854/258825 = 0.14239$</p>
<p>$3: 169436/4917675 = 0.0344545$</p>
<p>$4: 17141/4917675 = 0.00348559$.</p>
<p>Use a chi-squared test for this situation, too. As usual when conducting chi-squared tests, you will want to lump the last two or three categories into one because their expectations are less than $5$ (for $100$ repetitions).</p>
<p>There is no problem with zero values.</p>
<hr>
<p><strong>Edit</strong> (in response to a followup question)</p>
<p>The integrations are performed as multiple sums. In this case, there is some prior distribution for $n$, a prior distribution for $p$, and a prior distribution for $d$. Together they give a probability $\Pr(n,p,d)$ for each possible ordered triple of outcomes $(n,p,d)$. (With uniform distributions as above this probability is a constant equal to $1/((30-20+1)(4-0+1)(12-6+1))$.) One forms the sum over all possible values of $(n,p,d)$ (a triple sum in this case) of</p>
<p>$$\Pr(n,p,d) \frac{{p \choose k} {n-p \choose d-k} }{{n \choose d}}.$$</p>
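<p>The triple sum is small enough to evaluate exactly; a sketch using only the standard library, with the ranges from the edited question:</p>

```python
from fractions import Fraction
from math import comb

ns = range(20, 31)   # urn size n: uniform on 20..30
ps = range(0, 5)     # purple count p: uniform on 0..4
ds = range(6, 13)    # draws d: uniform on 6..12
weight = Fraction(1, len(ns) * len(ps) * len(ds))  # Pr(n, p, d), constant here

# probability of drawing k purples, averaged over the priors
# (math.comb returns 0 when k exceeds p, which handles impossible cases)
probs = [sum(weight * Fraction(comb(p, k) * comb(n - p, d - k), comb(n, d))
             for n in ns for p in ps for d in ds)
         for k in range(5)]
```

Exact rational arithmetic (`Fraction`) reproduces the fractions listed above, e.g. `float(probs[0])` is about 0.489896.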
| 500
|
probability distributions
|
Find cumulative probability from given formula
|
https://stats.stackexchange.com/questions/192926/find-cumulative-probability-from-given-formula
|
<p>Given a formula to calculate instantaneous probability of an event.<br/>
<code>f(i) = 0.0222 * e ^ (-i / 11.5)</code></p>
<p>For instance, 0.0222 * e ^ (-4 / 11.5) is the probability of the event occurring exactly during the fourth month, given that it hasn’t happened before. Calculate the cumulative probability at 18 months, i.e. the probability that the event happens within 18 months. </p>
<p>I attempted it and concluded:
<code>0.0222 * ( e ^ (-1 / 11.5) + e ^ (-2 / 11.5) + e ^ (-3 / 11.5) + e ^ (-4 / 11.5) +..... + e ^ (-17 / 11.5) + e ^ (-18 / 11.5) )</code></p>
<p>Is this right, and what is the best and most efficient way to solve this?</p>
|
<p>The cumulative distribution is simply the integral of the pdf</p>
<p>$F(i)=\int_{-\infty}^i f(k) dk$</p>
<p>in this case</p>
<p>$F(i)=\int_{0}^i \frac{1}{11.5}e^{-\frac{k}{11.5}}\, dk$ (the lower limit is $0$ because the density vanishes for $k \lt 0$)</p>
<p>$F(i)=\left[-e^{-\frac{k}{11.5}}\right]_{0}^{i} = -e^{-\frac{i}{11.5}} + 1$</p>
<p>$F(i)=1-e^{-\frac{i}{11.5}} $</p>
<p>Oops, didn't realize this was discrete - which is why 0.0222 is used instead of 1/11.5. Give me 5 and I'll try again</p>
<p>I don't think that the function is a valid pmf. Doing a quick numerical example in MATLAB, I get a sum of 0.26666. Perhaps there is a 73.3% chance the event never occurs?</p>
<p>Assuming the pmf is meant as given, the cmf is just the sum (versus integral above - where I was thinking pdf).</p>
<p>$F(i)=\sum_{k=0}^{i} f(k)$</p>
<p>in this case</p>
<p>$F(i)=\sum_{k=0}^i 0.0222*e^{-\frac{k}{11.5}} $</p>
<p>$F(i)=0.0222*\frac{1-e^{-\frac{i+1}{11.5}}}{1-e^{-\frac{1}{11.5}}} $</p>
<p>$F(i)=0.2666-0.2666*e^{-\frac{i+1}{11.5}} $</p>
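<p>A quick numeric check of the geometric-series closed form (a sketch; it sums from $k=0$, matching the answer's convention rather than the question's $k=1$ start):</p>

```python
from math import exp

f = lambda k: 0.0222 * exp(-k / 11.5)   # the given pmf

# brute-force cumulative sum vs. the closed form, at i = 18 months
i = 18
brute = sum(f(k) for k in range(i + 1))
closed = 0.0222 * (1 - exp(-(i + 1) / 11.5)) / (1 - exp(-1 / 11.5))

# total mass over all k: about 0.2666, so the event need never occur
total = 0.0222 / (1 - exp(-1 / 11.5))
```

This confirms both the closed form and the observation that the pmf sums to roughly 0.2666 rather than 1.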
| 501
|
probability distributions
|
Assessing the validity of a PMF?
|
https://stats.stackexchange.com/questions/485938/assessing-the-validity-of-a-pmf
|
<p>How would one go about solving the following given that the function h(x) isn’t provided in the question? I’m at a loss on where to begin.</p>
<p>Suppose h(x) is such that h(x) > 0 for x = 1,2,3,...,I. Argue that <span class="math-container">$p(x) = h(x)/ \sum_{i=1}^I h(i)$</span> is a valid pmf</p>
|
<p>The fact that <span class="math-container">$h(x)$</span> isn't provided is a strong hint: it <em>doesn't matter</em> what <span class="math-container">$h(x)$</span> is, except that it's always positive (<span class="math-container">$h(x) \geq 0$</span> would also be fine, provided at least one <span class="math-container">$h(i)$</span> is positive, so the denominator is nonzero).</p>
<p>First, why is <span class="math-container">$p(x)>0$</span> always true? Second, what's the other property that a pmf has to have?</p>
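<p>The normalization argument works for any positive $h$; here is a tiny sketch with an arbitrary (purely illustrative) choice of $h$:</p>

```python
# arbitrary positive function on x = 1..I (hypothetical example)
I = 6
h = lambda x: x**2 + 0.5

Z = sum(h(i) for i in range(1, I + 1))       # normalizing constant
p = {x: h(x) / Z for x in range(1, I + 1)}   # candidate pmf

# the two pmf properties: positivity and summing to one
assert all(px > 0 for px in p.values())
assert abs(sum(p.values()) - 1) < 1e-12
```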
| 502
|
probability distributions
|
Joint distributions and Function of a random variable
|
https://stats.stackexchange.com/questions/231094/joint-distributions-and-function-of-a-random-variable
|
<p>In a probability distribution, is it true that $XX$ is NOT $X^2$? That is, is $XX$ a joint distribution of $X$ and $X$, while $X^2$ is a function of $X$?</p>
|
<p>I think it is a notation problem: what does $XX$ represent? Note that $XX$ is not a widely used notation.</p>
<p>Are you trying to use $XX$ to represent two outcomes from two random events, or are you trying to use $XX$ to represent a product of two random variables?</p>
<p>If $XX$ represents two outcomes, then the distribution can be described with a joint distribution. But most people will use a different notation, with $X_1, X_2$. For example, in a coin flip, people will write $P(X_1=H, X_2=H)$ to represent two heads in a row.</p>
<p>On the other hand, if you use $XX$ to represent the product of the random variable with itself, the new random variable is $X^2$.</p>
| 503
|
probability distributions
|
How to find difference between multiple probability distributions?
|
https://stats.stackexchange.com/questions/232259/how-to-find-difference-between-multiple-probability-distributions
|
<p>I have few <code>vectors</code> (of length <code>1000</code>) representing <strong>frequency of 1000 elements for different situations</strong>.</p>
<p>e.g. Vector for situation 1 is </p>
<p><code>s1=(12, 0, 3, 4, 0, ...., 10)</code></p>
<p>and is of length <code>1000</code> (as there are <strong>1000 distinct elements</strong>).</p>
<p>I have 10 such vectors <code>s1, s2, ..., s10</code>.</p>
<p>How do I statistically show that the probability distributions over all <code>1000</code> elements for these <code>10</code> vectors are <strong>significantly different</strong>?</p>
<p>e.g. One way would be to use <code>KL-Divergence</code>. Also, how to use <code>KL-Divergence</code> for such <code>10</code> distributions?</p>
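<p>One possible starting point (a sketch, not the only valid approach): normalize each count vector to a probability distribution and compute pairwise smoothed KL divergences, giving a matrix of divergences for the 10 vectors. The smoothing constant below is an assumption, added so that zero counts don't make the divergence infinite:</p>

```python
from math import log

def normalize(v, eps=1e-10):
    # turn raw frequencies into probabilities; eps guards against zeros
    s = sum(v) + eps * len(v)
    return [(x + eps) / s for x in v]

def kl(p, q):
    # KL(p || q) for two discrete distributions over the same support
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q))

# hypothetical short vectors standing in for s1..s10 (length-1000 in the question)
s1, s2 = [12, 0, 3, 4, 0, 10], [10, 1, 4, 4, 1, 9]
p, q = normalize(s1), normalize(s2)
d12 = kl(p, q)   # one entry of the pairwise divergence matrix
```

Note that KL divergence is a descriptive measure, not a significance test; turning "different" into "significantly different" needs a test such as a chi-squared test on the count vectors or a permutation test on the divergences.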
| 504
|
|
probability distributions
|
Is there a constructive approach to creating a distribution achieving the Chebyshev lower bound?
|
https://stats.stackexchange.com/questions/236248/is-there-a-constructive-approach-to-creating-a-distribution-achieving-the-chebys
|
<p>I came across the following question and am wondering if there's a simple way to cook up a distribution achieving the Chebyshev lower bound:</p>
<p>Suppose $X$ has $\mu_X = \sigma^2_X = 9$. Find the lower bound for</p>
<p>$$\mathbb{P}[3 \leq X \leq 15]$$</p>
<p>Chebyshev tells us the answer is $\frac34$.</p>
<p>Can we come up with a specific distribution for $X$ that follows this?</p>
<p>More specifically, I'd prefer $X$ be discrete and restricted to take non-negative values.</p>
<p>A more general constructive proof would of course be more instructive!</p>
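<p>One standard construction (a sketch; the support points are the usual extremal choice, not taken from the question): put mass $1/8$ at each of $\mu \pm 2\sigma = 3$ and $15$, and mass $3/4$ at $\mu = 9$. This is discrete and non-negative, has the required mean and variance, and attains equality in Chebyshev's bound in the sense that $P(|X-9| \geq 6) = 1/4$ exactly:</p>

```python
from fractions import Fraction

# three-point distribution: mass 1/8 at 3, 3/4 at 9, 1/8 at 15
dist = {3: Fraction(1, 8), 9: Fraction(3, 4), 15: Fraction(1, 8)}

mean = sum(x * p for x, p in dist.items())                    # 9
var = sum((x - mean) ** 2 * p for x, p in dist.items())       # 9
tail = sum(p for x, p in dist.items() if abs(x - 9) >= 6)     # 1/4
```

Note that $P(3 \le X \le 15) = 1$ here, since the endpoints carry mass; Chebyshev's $3/4$ is attained by the open interval, $P(3 \lt X \lt 15) = 3/4$.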
| 505
|
|
probability distributions
|
Is there any way of estimating the value of a variable when you know its probability distribution?
|
https://stats.stackexchange.com/questions/246951/is-there-any-way-of-estimating-the-value-of-a-variable-when-you-know-its-probabi
|
<p>I have one question regarding the estimation of an unknown variable.</p>
<p>Is there any way of estimating the value of a variable when you know its probability distribution?
In this case I have a variable which is distributed on the interval (1, 2), with a 25% probability of 2, and uniformly distributed otherwise.</p>
|
<p>A random variable doesn't have a single unique value (unless it's <a href="https://en.wikipedia.org/wiki/Degenerate_distribution" rel="nofollow noreferrer">degenerate</a>, and your variable isn't). On the contrary, random variables exist to provide a mathematical formalism for something that might take on any of a variety of values.</p>
<p>Perhaps you mean to ask what the mean or median of this variable is. See the Wikipedia article "<a href="https://en.wikipedia.org/wiki/Central_tendency" rel="nofollow noreferrer">Central tendency</a>".</p>
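<p>For this particular mixed distribution (point mass of 0.25 at 2, the remaining 0.75 spread uniformly on the interval), those summaries are easy to compute; a sketch, with a small Monte Carlo cross-check:</p>

```python
import random

# analytic: E[X] = 0.25*2 + 0.75*1.5 ; the median m solves 0.75*(m-1) = 0.5
mean = 0.25 * 2 + 0.75 * 1.5          # 1.625
median = 1 + 0.5 / 0.75               # 5/3

# Monte Carlo cross-check of the mean
random.seed(0)
draws = [2 if random.random() < 0.25 else random.uniform(1, 2)
         for _ in range(200_000)]
mc_mean = sum(draws) / len(draws)
```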
| 506
|
probability distributions
|
Factorizing a probability distribution
|
https://stats.stackexchange.com/questions/247536/factorizing-a-probability-distribution
|
<p>I am trying to read a paper on factor graphs and coming from a cs background I am a little lost on the following proposition.</p>
<p>Not all distributions can be factored into a product of clique potentials.</p>
<p>The example they cite is the uniform distribution over binary vectors with an even number of ones. Why can't that be factored as such?</p>
<p>To start, what does the uniform distribution over binary vectors with an even number of ones even look like? What conditional dependencies exist to even try and encode into a PGM?</p>
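<p>For intuition, the distribution is easy to enumerate for small vectors; a sketch for length 3 (the length is just an illustrative choice):</p>

```python
from itertools import product

n = 3
# all binary vectors of length n with an even number of ones
support = [v for v in product([0, 1], repeat=n) if sum(v) % 2 == 0]
# the uniform distribution puts equal mass on each of them
pmf = {v: 1 / len(support) for v in support}
```

Exactly half of the $2^n$ vectors have even parity, so each gets probability $2^{1-n}$. Any $n-1$ coordinates are jointly independent uniform bits, yet they determine the last one completely; it is this global parity constraint that resists encoding as a product of local clique potentials.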
| 507
|
|
probability distributions
|
Finding the number of trials required for a value to be within a certain range in a discrete distribution at a certain probability
|
https://stats.stackexchange.com/questions/250765/finding-the-number-of-trials-required-for-a-value-to-be-within-a-certain-range-i
|
<p>In a survey of random questions, Jane can reply with either of the two possible answers; 'Yes' or 'No'.
The theoretical probability of Jane selecting 'Yes' for any random question in the survey is (p). </p>
<p>0 < (p) < 1 and (p) is a decimal with two decimal places (i.e. (p) can be 0.00, 0.01, 0.02, 0.03, ..., 0.98, 0.99, 1.00)</p>
<p>However, since neither you nor Jane know the value of (p), you decide to conduct an experiment.</p>
<p>Whenever Jane answers a question, you are presented with:</p>
<ul>
<li>a running total of the number of questions Jane has answered (Q)</li>
</ul>
<p>and</p>
<ul>
<li>a running total of the number of times Jane has said 'Yes' (Y)</li>
</ul>
<p>Therefore the experimental probability (e) of Jane saying 'Yes' is (Y)/(Q), and as the number of trials approaches infinity, the experimental probability (e) should approach the value of (p). However you can only conduct a finite number of trials to approximate the value of (p).</p>
<p>So for example:</p>
<p>The first question Jane says Yes. (e = 1/1 = 1)</p>
<p>The second question Jane says Yes. (e = 2/2 = 1)</p>
<p>The third question Jane says No. (e = 2/3 = 0.667)</p>
<p>The fourth question Jane says Yes. (e = 3/4 = 0.75)</p>
<p>The fifth question Jane says Yes. (e = 4/5 = 0.8)</p>
<p>The sixth question Jane says No. (e = 4/6 = 0.667), etc.</p>
<p>So when Jane has answered (Q) questions, the total number of possible experimental probabilities will be (Q)+1. For example when (Q) = 4, the possible experimental probabilities (e) at that point are 0, 0.25, 0.50, 0.75 and 1.</p>
<p>However, obviously it would be expected that experimental probabilities with values closer to (p) would have a higher probability of being observed in the probability distribution for a given value of (Q) of all the possible experimental probabilities. So in the above example, if (p) = 0.27, then the experimental probability of 0.25 would have a higher probability of being observed in the probability distribution out of all possible probabilities (0, 0.25, 0.50, 0.75 and 1).</p>
<p>Assuming the experimental probabilities are distributed evenly around the theoretical probability (p) (meaning it may be similar to a normal distribution but can be skewed in one direction and all points on the distribution are discrete), what is the value of (Q) at which the experimental probability (e) is within 0.02 of the theoretical probability (p) at least 90% of the time?</p>
<p>For example, if (p) = 0.13, what value of (Q) will result in there being a 90% chance or greater that in a distribution of all possible values of (e) for that value of (Q) around the theoretical probability (p),</p>
<p>0.11 <= (e) <= 0.15?</p>
<p>If this question is a bit unclear, please let me know as I'm new to this and not an expert on probability. </p>
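<p>Under the usual normal approximation to the binomial (an assumption; exact binomial coverage can require a slightly different $Q$), the standard sample-size formula gives $Q \ge z_{0.95}^2\, p(1-p)/0.02^2$. A sketch for the $p = 0.13$ example:</p>

```python
from math import ceil

z = 1.645                     # two-sided 90% normal quantile (z_{0.95})
p, margin = 0.13, 0.02

# smallest Q with P(|Y/Q - p| <= margin) >= 0.90 under the normal approximation
q = ceil(z**2 * p * (1 - p) / margin**2)   # 766
```

In practice $p$ is unknown; using the worst case $p(1-p) = 1/4$ gives a $Q$ that works for every $p$.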
| 508
|
|
probability distributions
|
How to model cache hits
|
https://stats.stackexchange.com/questions/260692/how-to-model-cache-hits
|
<p>I'd like to model the behavior of a server A, which caches results from an upstream server B. Queries sent to server A will be forwarded to server B if the former has not already seen this query for an entry. </p>
<p>Relaxations and parameters:</p>
<ul>
<li>There are <code>x</code> total entries in server B, which can be cached in server A</li>
<li>A query for an entry is chosen uniformly from the set of <code>x</code> entries</li>
<li>We will query <code>y</code> entries to server A</li>
<li>Server A has an empty cache initially</li>
<li>Server A has enough space to cache at least <code>x</code> entries</li>
<li>no further cache invalidation</li>
</ul>
<p>Now what I'd like to be able to calculate is, after <code>y</code> queries how many of those queries are expected to be cached at server A.
I'm having trouble decomposing the problem in, what seems like elementary statistics problems.</p>
| 509
|
|
probability distributions
|
Determine $x,x∈R^+$ such that $φ(x)=0,9505$
|
https://stats.stackexchange.com/questions/262191/determine-x-x%e2%88%88r-such-that-%cf%86x-0-9505
|
<p><em>I tried to use the definition:</em>
$$\displaystyle φ(x) = \frac{1}{\sqrt{2\pi}}
\int_{-\infty}^x e^{-{s^2}/{2}}\,\mathrm ds$$</p>
<p><em>So, according to <a href="http://es.symbolab.com/solver/step-by-step/%5Cint%20e%5E%7B-%5Cfrac%7Bx%5E2%7D%7B2%7D%7D" rel="nofollow noreferrer">this</a> site:</em>
$$\int \:e^{-{x^2}/{2}}\mathrm dx=\frac{\sqrt{\pi }}{\sqrt{2}}\text{erf}\left(\frac{x}{\sqrt{2}}\right)+C$$</p>
<p><em>But by definition:</em>
$${\displaystyle \operatorname {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-s^{2}}\,\mathrm {d} s}$$</p>
<hr>
<p>I do not know how to proceed once I reach the function $\operatorname{erf}(\ldots)$.</p>
<p>Maybe the value can only be obtained from tables?</p>
<p><strong>How to determine $x, x \in \mathbb{R}^+$ such that $φ(x)=0.9505$?</strong></p>
<p>Thank you very much.</p>
|
<p>Tables of $\Phi(x)$ can be found in many textbooks, on-line (e.g. <a href="https://www.mathsisfun.com/data/standard-normal-distribution-table.html" rel="nofollow noreferrer">here</a>), etc, and you simply look in the table for the value of $x$ for which $\Phi(x)$ equals $0.9505$. Alternatively, there are various on-line calculators (e.g. <a href="http://stattrek.com/online-calculator/normal.aspx" rel="nofollow noreferrer">this one</a>) that can find the value of $x$ for you. If you want to know <em>how</em> these calculators find the answer, well, one possibility is that they use a formula such as
<a href="http://people.math.sfu.ca/~cbm/aands/page_933.htm" rel="nofollow noreferrer">26.2.22</a> in the well-known reference book <em>Handbook of Mathematical Functions</em> by Abramowitz and Stegun.</p>
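<p>If you would rather compute it than read a table, many environments expose the inverse of $\Phi$ directly; for example, in Python (a minimal sketch using the standard library):</p>

```python
from statistics import NormalDist

# inverse standard-normal CDF: find x with Phi(x) = 0.9505
x = NormalDist().inv_cdf(0.9505)   # about 1.65
```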
| 510
|
probability distributions
|
Tricky Question re Customer Purchase Probabilities
|
https://stats.stackexchange.com/questions/183714/tricky-question-re-customer-purchase-probabilities
|
<p>I would like to calculate some parameters relating to customer purchasing in a retail situation. </p>
<p>I have some basic information which I can use:</p>
<p>Customer visit frequency in the form of probability distribution (I can generate Excel poisson tables using average visit frequency and these work well) for number of customer visits in a given period (1 month), e.g.:</p>
<p>0 Customers: 14%<br>
1 Customer: 27%<br>
2 Customers: 27%<br>
3 Customers: 18%<br>
4 Customers: 9%<br>
5 Customers: 4%<br>
6 Customers: 1%<br>
.... etc</p>
<p>Customer purchase quantity per visit (based on observation), e.g.:</p>
<p>1 unit: probability = 60%<br>
2 units: probability = 25%<br>
3 units: probability = 10%<br>
4 units: probability = 4%<br>
5 units: probability = 1%<br>
(max 5 units)</p>
<p>The data points above are provided by way of example but will differ from case to case.</p>
<p>I would like to calculate the probability of a total of 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11…. etc, units being purchased in the given time period. I think I can do this if the max number of customers is very small (2 or less!) but am struggling to see how to achieve this in a more general case where the numbers of potential customers n are larger! Once I know how to do the calculation, I would like to implement this in Excel.</p>
<p>To make this clearer: Case for 1 customer is simple, being 5 options (using my data: probability of 1 customer is 27% who is likely to purchase 1, 2, 3, 4 or 5 units with probabilities 60%, 25%, 10%, 4% ,1%). I can then calculate probability of selling 1, 2, 3, 4, or 5 units by multiplying. </p>
<p>The case for n = 2 customers has 25 options (I think) (customer 1 with 5 options x customer 2 with 5 options). For this case I may sell 2,3,4,5,6,7,8,9, or 10 units and I can multiply and sum the probabilities manually. This result would then be added to the result for 1 customer.</p>
<p>However, with the case for n = 3 customers there would be 125 options and the adding and multiplying already starts to become hairy. In practice, my customer visit table may extend to many more than 3 customers so the problem quickly becomes difficult to manage without having some kind of general formula. </p>
|
<p>What you are looking for is the sum of independent random variables. This can be calculated either by convolution or by using moment generating functions.</p>
<p><a href="https://www.stat.wisc.edu/courses/st311-rich/convol.pdf" rel="nofollow">Discrete convolution formula</a>: in Excel you should be able to do it in a spreadsheet or in VBA. Basically you split the problem up: calculate the sum of 2 variables, then repeat with the pdf of that sum and the next variable.</p>
<p>Alternatively, you calculate the mgf of the number of objects purchased by 1 customer (see the calculation for a discrete probability mass in the Wikipedia article), then use the fact that the mgf of a sum of independent RVs is the product of their mgfs to get the mgf for $n$ customers. Finally, you get the distribution of $n$ customers' purchases by inverting the mgf (which will again be a discrete distribution).
<a href="https://en.wikipedia.org/wiki/Moment-generating_function" rel="nofollow">wiki: mgf</a></p>
<p>Updated. So what I am suggesting is to use the discrete convolution formula for $Z=X+Y$. You have worked out how to calculate it for two customers, and all I am suggesting is to do that repeatedly. Take two customers, then $X$ is customer 1 and $Y$ is customer 2, and use the formula to calculate the pdf of $Z$ for 2 customers. Now to calculate the pdf for 3 customers, you use the pdf for 2 customers which you just calculated (call that now $X$) and the pdf of 1 customer's purchase ($Y$) in the same formula. So in terms of VBA you might write a function with the declaration <code>conv(pdf_x, pdf_y)</code>, which given two 2-D arrays (one column for quantity and one for probability) would produce a new 2-D array <code>pdf_z</code>. Then you would call it repeatedly to get the pdfs for 3, 4, 5, 6, ... customers.</p>
<p>$P(Z=z)=\sum_{x=0}^z f_X(x)f_Y(z-x)$</p>
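<p>The repeated-convolution recipe can also be sketched in Python, using the example numbers from the question (the visit distribution is truncated at 6 customers as listed):</p>

```python
# pmf of customer count per period (index = number of customers)
visits = [0.14, 0.27, 0.27, 0.18, 0.09, 0.04, 0.01]
# pmf of units bought by one customer (index = units; 0 units has probability 0)
units = [0.0, 0.60, 0.25, 0.10, 0.04, 0.01]

def conv(px, py):
    """Discrete convolution: pmf of X + Y for independent X, Y."""
    pz = [0.0] * (len(px) + len(py) - 1)
    for i, a in enumerate(px):
        for j, b in enumerate(py):
            pz[i + j] += a * b
    return pz

# mixture over the number of customers n of the n-fold convolution of `units`
total = [0.0] * ((len(visits) - 1) * (len(units) - 1) + 1)
per_n = [1.0]                      # 0 customers: point mass at 0 units
for p_n in visits:
    for k, pk in enumerate(per_n):
        total[k] += p_n * pk
    per_n = conv(per_n, units)

# sanity check: mean of total units equals E[customers] * E[units per customer]
mean = sum(k * p for k, p in enumerate(total))
```

`total[k]` is then the probability of selling exactly `k` units in the period, for `k` from 0 up to 30.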
| 511
|
probability distributions
|
Probability of median > 1.5
|
https://stats.stackexchange.com/questions/194123/probability-of-median-1-5
|
<p>I was asked a probability question: </p>
<p>Given three numbers i.i.d as $\text{uniform}(0,2)$, what is the probability of the median greater than $1.5$?</p>
<p>My hunch is that each number has $P(X > 1.5) = (2-1.5)/(2-0) = 0.25$, the probability of $\text{median}> 1.5$ is equivalent to "at least two of the three numbers are greater than 1.5", which can be derived as the complement of 'exactly one number is greater than 1.5 or none of them is greater than 1.5'. This could be formulated as a binomial distribution:</p>
<p>$1 - C(\text{choose 1 from 3})\times(0.25)\times(1-0.25)^2 - C(\text{choose 0 from 3})\times(1-0.25)^3$</p>
<p>Is this the correct approach?</p>
|
<p>I would suggest looking at the order statistics. If your three observations $X_1, X_2, X_3$ follow a $Uniform(0,2)$, then the median is $X_{(2)}$, where $X_{(2)}$ is the second order statistic. Your problem then becomes much simpler, as you want to find $P(X_{(2)}>1.5)$. You can easily look up the PDF of order statistics (and, in particular, order statistics for the Uniform distribution). Once you have that, it should become a (reasonably) basic calculus problem to integrate to find the probability that your median is greater than 1.5.</p>
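<p>Both routes give the same number, $5/32 = 0.15625$; a quick sketch checking the asker's binomial argument against a numerical integration of the median's order-statistic pdf, $f_{(2)}(x) = 3!\,F(x)\bigl(1-F(x)\bigr)f(x)$ with $F(x)=x/2$, $f(x)=1/2$ on $(0,2)$:</p>

```python
from math import comb

# binomial route: P(at least 2 of 3 exceed 1.5), with P(X > 1.5) = 0.25
q = 0.25
binomial = sum(comb(3, k) * q**k * (1 - q)**(3 - k) for k in (2, 3))

# order-statistic route: midpoint-rule integration of the median's pdf on (1.5, 2)
f2 = lambda x: 6 * (x / 2) * (1 - x / 2) * 0.5
n = 100_000
h = 0.5 / n
integral = sum(f2(1.5 + (i + 0.5) * h) * h for i in range(n))
```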
| 512
|
probability distributions
|
Looking for a distribution with very specific properties
|
https://stats.stackexchange.com/questions/210039/looking-for-a-distribution-with-very-specific-properties
|
<p>I'm looking for a <strong>continuous</strong> distribution which I can parameterize such that </p>
<ol>
<li>The expected value is roughly zero</li>
<li>The expected maximum given $x$ draws from that distribution is only very weakly increasing in $x$</li>
<li>$Prob(\max $ of $ x $ draws $ > 0)$ is small for "large" $x$, say around $x\in (30, 100)$</li>
<li>(nice to have): The expected maximum of $x$ variables drawn from the distribution has a nice closed-form solution</li>
<li>(nice to have): The distribution has not too many parameters and is not a very peculiar one that is only known to probability fetishists.</li>
</ol>
<p><strong>Phenomenon</strong> I am trying to capture a phenomenon which I can here describe as a rigged lottery with continuous outcomes. Most lottery tickets are of (varying) negative value. However, as people take part in the lottery, the expected value must be non-negative. Second, there are lottery insiders which, instead of the unconditional value, use insider information to attain the maximum of their draws. These are observed to purchase many tickets. Properties 2 and 3 control that these insiders indeed purchase many tickets (and the value that they expect from that).</p>
<p>Now I try to back out the lottery distribution consistent with this behavior.</p>
<p>I was starting off with Frechet (also because it has convenient properties regarding the maximum), but it doesn't allow enough freedom. I manage to calibrate it such that 1. and 2. hold, but then 3. breaks very quickly. </p>
<p>I looked in the related distributions but couldn't find anything. In general, how does one find (search) distributions when having so specific demands? In specific here, is there any?</p>
|
<p>How about $y \sim \mathcal{N}(-a, fa)$ for small $a>0$ and $f>1$?</p>
<ol>
<li>E$[y]$ = $-a$.</li>
<li>E[max of $x$ draws] = $-a + 2fa\sqrt{\ln x}$ = $a[2f\sqrt{\ln x}-1]$. </li>
<li>Probability of maximum > 0 is similarly increasing in x, and controlled by $f$. The density of the maximum is $f_Z(z) = x F_Y(z)^{x-1}f_Y(z)$, where $Y$ is your variable ($\mathcal{N}(-a, fa)$), $Z$ is the max of $x$ draws, $F$ denotes cdf, and $f$ denotes pdf. You can work it out.</li>
<li>Property 2 has a closed form solution.</li>
</ol>
| 513
|
probability distributions
|
How to properly initialize a stochastic vector?
|
https://stats.stackexchange.com/questions/213644/how-to-properly-initialize-a-stochastic-vector
|
<p>I'd like create a stochastic vector $$\mathbf{v} = (v_1, \dots, v_n)$$ of length $n$, so that its elements are assigned weights according to a given parameter ("entropy"): the weights are either uniformly distributed with $p_i = 1/n$, or there's a "bump" at a particular position and other elements decay to 0 quickly -- with the speed specified by the parameter.</p>
<p>In my head, it should look similar to Poisson's pdf:</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/16/Poisson_pmf.svg/220px-Poisson_pmf.svg.png" alt="pois_pdf"></p>
| 514
|
|
probability distributions
|
How Many Random Choices Before They Have All Been Picked About The Same # Of Times?
|
https://stats.stackexchange.com/questions/221745/how-many-random-choices-before-they-have-all-been-picked-about-the-same-of-tim
|
<p>Given a set of numbers 1 through N, how many times do I have to randomly pick a number before all of the numbers have been picked a roughly equivalent (within 5% of each other) amount of times, and each one has at least been picked 100 times?</p>
<p>For example:
Given numbers 1-10, how many times do I have to randomly pick one before 1,2,3,4,5,6,7,8,9,10 have all been picked about the same number of times?</p>
<p>I thought I was onto something:
Each number will have the same chance of being picked, which is 1/N.</p>
<p>So in X picks,
N will have X/N picks.
N-1 will have X/N picks.
...
N-(N-1) will have X/N picks.
N-N will have X/N picks.</p>
<p>So if they all have X/N picks, that means as long as X > N, they will all be picked the same amount? It doesn't make sense, I'm missing something having to do with confidence level I think.</p>
<p>Update 7/7/16:
This <a href="https://stats.stackexchange.com/questions/47362/randomly-picking-from-n-choices-roughly-n-times-whats-the-resulting-freque?rq=1">question</a> shows that the probability that a particular number was chosen exactly k times when choosing a number randomly m times is:
$p(k)={m\choose k}\frac{(n-1)^{(m-k)}}{n^m}$.<br>
I think I'm getting closer, but not sure what to do with this information.</p>
<p>Update 7/7/16: Not a duplicate of "How many times to roll a die where all have been selected once...".</p>
|
<p>Multinomial problems can be tricky. But when the number of observations grows large, approximations can work very well. This post explains how to use mental arithmetic (or, at worst, the back of a napkin) to obtain a reasonable answer. The beauty of this approach lies in how one can solve challenging statistical problems like this one, using little more than mental arithmetic, by means of a passing familiarity with logarithms and the standard Normal distribution.</p>
<hr>
<h3>Notation</h3>
<p>Let there be $d$ (rather than $N$) distinct objects, each with an equal probability of being drawn in a sample, and let $N$ here denote the number of draws (with replacement). Let $X_1, X_2, \ldots, X_d$ designate the frequencies of these objects, indexed by $1$ through $d$.</p>
<h3>Framing the question</h3>
<p>Because the frequencies are random, there will always be some chance that they differ by more than $5\%$. Therefore the question should be posed as "how large should the sample be so there is at least a $1−\alpha$ chance that all $d$ frequencies will be within a range that is no greater than $100\gamma\%$ of the average frequency?" With $d=10, \gamma=0.05,$ and $1-\alpha=.95$, elementary approximations suggest the answer is somewhat larger than $70,000$, as we will see.</p>
<h3>The univariate distributions</h3>
<p>Each $X_i$ has a Binomial$(N, 1/d)$ distribution. When the average frequency $N/d$ is sufficiently large (how large depends on $d$), these distributions are approximately Normal. To work with this approximation we need to recall that the mean and variance of a Binomial distribution are $N/d$ and $N(1/d)(1-1/d)$, respectively. Those are the mean and variance of the approximating Normal distribution.</p>
<h3>The bivariate distributions</h3>
<p>The $X_i$ are not independent, because they sum to $N$. What effect might this have on our calculations? To find out, let's compute their correlation. That calculation starts with the covariances $\operatorname{Cov}(X_i,X_j)$. Since the $X_i$ are completely interchangeable, all these covariances are equal, say to some number $U = \operatorname{Cov}(X_1,X_2)$. We can obtain this with a simple calculation:</p>
<p>$$\eqalign{
0=\operatorname{Var}(N)=\operatorname{Var}\left(X_1+\cdots+X_d\right) &= d\operatorname{Var}(X_1) + d(d-1)\operatorname{Cov}(X_1,X_2) \\
&= dN\left(\frac{1}{d}\right)\left(1 - \frac{1}{d}\right) + d(d-1)U.
}$$</p>
<p>The solution is</p>
<p>$$\operatorname{Cov}(X_i,X_j) = U = -\frac{N}{d^2}.$$</p>
<p>This corresponds to a mutual correlation</p>
<p>$$\rho = \frac{-N/d^2}{N(1/d)(1-1/d)} = -\frac{1}{d-1}.$$</p>
<p>For small $d$, this is large enough in size to be important: for instance, with $d=2$, $X_1$ and $X_2$ are perfectly negatively correlated (obviously, since $X_1 = N-X_2$). For sufficiently large $d$, though, we can expect that neglecting it might be a good approximation.</p>
<h3>Distribution of the range</h3>
<p>Let's suppose $d$ is large enough that we may approximate the correlation $-1/(d-1)$ by zero. Uncorrelated Normal variables are independent. We may recenter and rescale them to be standard Normal (with zero mean and unit variance), provided we remember to undo this change of units at the end.</p>
<p>Let the distribution function of a standard Normal variable be $F$ with density function $f$. The density function of the range $R = \max(X_i) - \min(X_i)$ is a little hard to work with, but it can be useful, so here it is for $r \ge 0$:</p>
<p>$$f_R(r) = d(d-1)\int_{-\infty}^\infty f(x)\left(F(x+r)-F(x)\right)^{d-2}f(x+r)\, \text{d}x.\tag{1}$$</p>
<p>This can be numerically integrated with a computer. In the meantime, notice that the symmetry of the standard Normal distribution around $0$ implies the distribution of the maximum is the same as the distribution of the negative of the minimum. The distribution function of the maximum is just $F^n$. Intuitively, the max and min must be positively correlated: when the maximum is large, that means all the values are a little larger than they might be predicted to be, which suggests even the minimum is a little larger than its average. But for even modest values of $d$, that correlation is not great. (For $d=10$ it's around $0.077$.) Therefore, we might consider approximating the range as the difference between two <em>independent</em> variables, one distributed like the maximum and the other like the minimum.</p>
<p>Once again we may resort to a (crude) approximation. Although the distribution of the maximum is not Normal--it has some positive skew--it's not that far off. We could estimate its mean and variance, then replace the range by a variable with twice that mean and twice that variance (which means its standard deviation will be $\sqrt{2}$ times as great).</p>
<p>That's still too hard to calculate exactly, but we can approximate the mean as the <em>median</em> and the standard deviation as half the distance between the 84th and 16th percentiles (which holds perfectly for a Normal distribution and is still pretty good for sort-of-Normal distributions).</p>
<p><em>Now we have something that's easy to work with.</em> Consider the median (50th percentile) of the maximum. This is the value $z$ for which</p>
<p>$$F(z)^{10} = F_{\max}(z) = 0.50.$$</p>
<p>Equivalently, taking roots, it is the $z$ for which</p>
<p>$$F(z) = 0.50^{1/10}.$$</p>
<p>Since $\log(0.50) = -\log(2)\approx -0.7$, $\log(0.50^{1/10}) = (1/10)\log(0.50) \approx - 0.07$, implying $0.50^{1/10} \approx \exp(-0.07) \approx 1-0.07 = 0.93$. A basic familiarity with the Normal distribution indicates $z$ will lie between $1.28$ and $1.64$, perhaps around $1.5$ (interpolating). Therefore the expected value of the range is around $2\times 1.5=3.0$. (Numerical integration of $(1)$ gives $3.077$.)</p>
<p>Using similar back-of-the-napkin calculations we may solve the equations</p>
<p>$$F(z)^{10} = F_{\max}(z) = 0.84;\quad F(z)^{10} = F_{\max}(z) = 0.16$$</p>
<p>by approximating $\log(0.84) \approx -(1-0.84) = -0.16$ and $\log(0.16) \approx \log(1/(2\times 3)) = -\log(2) - \log(3) \approx -0.7 - 1.05 = -1.75$. These yield</p>
<p>$$F^{-1}_{\max}(0.84) \approx F^{-1}(1 - 0.016) \approx 2.2$$</p>
<p>and</p>
<p>$$F^{-1}_{\max}(0.16) \approx F^{-1}(1 - 0.175) \approx 1.0.$$</p>
<p>Therefore the standard deviation of the maximum is approximately $(2.2-1)/2 = 0.6$ and the SD of the range will be estimated as $$0.6\times \sqrt{2} \approx 0.8.$$ (The correct value is $0.8125\ldots$.)</p>
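The back-of-the-napkin percentiles can be checked against exact Normal quantiles; a sketch in Python/SciPy (an assumed toolchain), confirming the approximations above:

```python
from scipy.stats import norm

n = 10
# median of the maximum of n iid standard Normals: solve F(z)^n = 0.50
z_med = norm.ppf(0.50**(1 / n))

# SD estimated as half the 84th-16th percentile spread of the maximum
sd_max = (norm.ppf(0.84**(1 / n)) - norm.ppf(0.16**(1 / n))) / 2
sd_range = sd_max * 2**0.5

print(round(z_med, 2), round(sd_range, 2))  # roughly 1.50 and 0.81
```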
<p>From these numbers--a mean range of $3.0$ and standard deviation of $0.8$--we may find upper limits for the range. For instance, because $95\%$ of the standard Normal distribution is less than $1.65$, the range should have around a $95\%$ chance of being less than $3.0 + 1.65(0.8) \approx 4.3.$ (A more accurate answer, also taking into account the correlation between maximum and minimum, is $4.47$.)</p>
<h3>Application of the results</h3>
<p>We have seen that in a sample of size $n$, by making many approximations and neglecting various correlations, the range of the $X_i$ has around a $95\%$ chance of being less than $4.3$ times the common standard deviation of the $X_i$, which is $\sqrt{n(1/d)(1-1/d)} = \sqrt{n(10-1)}/10$. The average of the $X_i$ obviously is $n/d$. We ask, then, how large must $n$ be in order that this limiting range not exceed $5\%$ of the average? In symbols, the inequality is</p>
<p>$$4.3 \frac{\sqrt{n(10-1)}}{10} \le \frac{5}{100} \frac{n}{10}$$</p>
<p>with the easy solution</p>
<p>$$70000\approx \left(\frac{4.3}{5/100}\right)^2(10-1) \le n.$$</p>
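The arithmetic of this bound is quick to reproduce (a Python sketch of the inequality's solution):

```python
# n must satisfy 4.3 * sqrt((d-1) n) / d <= (5/100) * n / d for d = 10
d, z, tol = 10, 4.3, 0.05
n_min = (z / tol)**2 * (d - 1)
print(round(n_min))  # 66564, i.e. roughly 70000
```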
<p>By neglecting the negative correlations among the $X_i$, we have surely underestimated this sample size.</p>
<h3>Checking <em>via</em> simulation</h3>
<p>This gives us the starting value for a quick <code>R</code> simulation. In a few seconds, we can perform the experiment (of making 70,000 draws from ten equally likely values) over and over again 5,000 times:</p>
<pre><code>n.sim <- 5e3   # number of repetitions of the experiment
d <- 10        # number of equally likely values
n <- 7e4       # draws per experiment
sim <- replicate(n.sim,
                 diff(range(tabulate(sample.int(d, n, replace=TRUE), nbins=d))))
(quantile(sim*d/n, .95))   # 95th percentile of range/mean
</code></pre>
<p>The output is around $0.0535$, showing that <em>all $d=10$ counts were within $5.3\%$ of each other in $95\%$ of samples from $n=70,000$ draws.</em> We're in the right ballpark and we were correct in supposing that slightly more draws would be needed.</p>
<p>Note that these recommended values of $n$ are so large that many of the Normal approximations used to derive them are amply justified.</p>
<hr>
<h3>Summary</h3>
<p>The methods described here work for large $d$, $\alpha$ not too small, and $\gamma$ not too large. How to check? We will run into trouble if $\alpha$ is much smaller than $1\%$. When $nd$ is large--hundreds or more--you're probably ok using these approximations.</p>
| 515
|
probability distributions
|
How can I determine if a weighed random function is working as expected?
|
https://stats.stackexchange.com/questions/223217/how-can-i-determine-if-a-weighed-random-function-is-working-as-expected
|
<p>I am writing a series of program utilities which heavily utilize weighted random functions, where the user defines a probability density function as a piecewise curve with a series of <code>(x, y)</code> coordinates and gets back numbers which conform to that function. What I've been doing so far to verify the working of the program is to crudely output a large number of weighted random numbers, plot them on a histogram, and eyeball to see if the curve looks as expected. I would like to be able to verify this more thoroughly and automatically. </p>
<p>Given a piecewise probability distribution function and a significant number of sample points, how can I determine if the sample points reasonably fall within the function?</p>
| 516
|
|
probability distributions
|
In a statistical experiment involving independent coin tosses, what is the number of heads required to get $m$ tails?
|
https://stats.stackexchange.com/questions/136794/in-a-statitical-experiment-involving-independent-coin-tosses-what-is-the-number
|
<p>I know that the solution is a negative binomial distribution. However, I was looking for a proof along the following lines. If the number of coin tosses is fixed to be $n$, then the distribution of the number of heads follows a binomial distribution, that is
\begin{equation}
P(N_h = n_h) = \frac{n!}{n_h!(n-n_h)!} p^{n_h}(1-p)^{n-n_h}\:,
\end{equation}
where $p$ is the probability of observing a head. </p>
<p>Hence, if we condition on the number of tails observed $m$, we should be able to get the desired distribution. Hence,</p>
<p>\begin{align*}
P(N_h = n_h| N_t = m) &= \frac{P(N_h = n_h, N_t = m)}{P(N_t = m)} \\
&= \mathbf{1}\{n_h+m =n\}
\end{align*}</p>
<p>I know that this is the wrong way to this problem. I was wondering about how to proceed correctly.</p>
|
<p>Assume that we have observed $m$ tails. The last coin toss must have given us a tail - otherwise we would have stopped earlier. We can now count in how many ways this can occur with $n_h$ heads. Thus, we have $n_h + m$ tosses, and the last one is fixed (tail). We then have to place $m - 1$ tails in $n_h + m - 1$ places. We can do this in </p>
<p>\begin{equation}
k_{n_h} =\pmatrix{n_h + m - 1 \\ m - 1}
\end{equation}</p>
<p>ways, using binomial coefficients. Each of them has equal probability, namely $p^{n_h}(1-p)^m$, using the independence of the tosses. The $k_{n_h}$ different ways are obviously disjoint events, so we can get the desired probability simply by summing their respective probabilities. All in all,</p>
<p>\begin{equation}
P(N_h = n_h | N_t = m) = \pmatrix{n_h + m - 1 \\ m - 1} \cdot p^{n_h}(1-p)^m,\ n_h \in \mathbb{N}_0,\ m \in \mathbb{N}.
\end{equation}</p>
<p>This is exactly the negative binomial distribution with parameters $m$ and $p$.</p>
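This pmf matches the standard negative binomial parameterization in, for example, SciPy (a sketch; the particular values of $p$ and $m$ are arbitrary choices for illustration):

```python
from math import comb
from scipy.stats import nbinom

p, m = 0.6, 3  # hypothetical head probability and target number of tails
for n_h in range(8):
    direct = comb(n_h + m - 1, m - 1) * p**n_h * (1 - p)**m
    # scipy counts n_h "failures" (heads) before the m-th "success" (tail)
    assert abs(direct - nbinom.pmf(n_h, m, 1 - p)) < 1e-12
print("pmf matches")
```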
| 517
|
probability distributions
|
Given hit probability and number of projectiles, randomly determine number of hits
|
https://stats.stackexchange.com/questions/155412/given-hit-probability-and-number-of-projectiles-randomly-determine-number-of-hi
|
<p>I'm working on a game where I'd like damage dealt to a target to be randomized, but having troubles working out how to go about it.</p>
<p><code>N</code> projectiles are fired at a target with probability, <code>p</code>, that each missile will hit the target and damage dealt is based on the number of projectiles that hit.</p>
<p>So, I'd like to randomly generate the number of projectiles that hit. Fractional values would likely be considered as partial hits for this purpose and thus seem acceptable. Specifically, I see, say, 20.2 hits as some combination of direct hits vs indirect hits that total up to the damage that would come from 20.2 direct hits. </p>
<p>I could simply use the expected value of <code>N*p</code> hits, and fuzz it a bit, but that seems rather less than ideal. I could also simply do the hit calculations for each projectile, but the number of projectiles could be upwards of millions or more, so it would be rather inefficient.</p>
<p>I know how to compute the probabilities of a success after N tries, and R successes after N tries. I'm thinking this is related to probability distributions, but I'm not familiar enough with them to determine which I should use, let alone what the input parameters should be.</p>
<p>Any help, hints, or tips would be appreciated.</p>
<p><strong>A bit more details:</strong></p>
<p>To be more exact, I'm trying to simulate the damaging effects of a single wave of missiles, launched from a fleet of spaceships, against a target, another fleet of spaceships. The fleets have an overall health and damage to that decreases the effectiveness of the fleet during later waves.</p>
<p>For simplicity, I'm simply using a defense rating (DEF) of a target and attack rating, <code>ATK</code>, of the missile to determine probability of hit, <code>PHIT</code>. Specifically, <code>PHIT = ATK / (DEF + ATK)</code>, where <code>DEF</code> and <code>ATK</code> are <code>1</code> or higher, so it becomes <code>50:50</code> if the <code>DEF</code> and <code>ATK</code> ratings are the same. The <code>DEF</code> is intended to summarized all of the reasons why the missile might miss, while the <code>ATK</code> does the same for why the missile might hit.</p>
<p><strong>Yet more details</strong>:</p>
<p>Haven't had internet so only now able to check it. I'll answer some questions.</p>
<p>I'm hoping to keep ATK/DEF values fairly close so p should typically be in the 0.25 to 0.75 range, but there'll probably be plenty of cases closer than 0.1 to 0 or 1.</p>
<p>I've had no intention of computing each missile separately, directly. Though, I do want the number of hits to be based on the probability of each missile hitting. </p>
<p>There could be up to millions or even billions of missiles per wave, and as many as 1000 waves (but it's much more likely to be in the single digits). I'll actually probably just put a limit on the number of waves and call it a draw if reached. So looking for an approach that is O(1) or O(log(N)) in time and space. Although, I'm willing to lower the limit on missiles if it's not possible to obtain random numbers from a binomial distribution or a hypergeometric distribution (the Wikipedia on this gave me some interesting ideas to expand the game) efficiently. I'm looking at the <code>scipy</code> functions for them, and they seem suitable, though I haven't yet installed <code>scipy</code> to determine how efficient they are.</p>
<p>The game will likely be turn based, though there may be a game-mode that isn't.</p>
<p>The game will basically be similar to the Kenway's Fleet mini-game in Assassin's Creed Black Flag and its relative in Assassin's Creed Rogue. However, it'll allow more customization and flexibility. The game is intended for web or mobile. It's also inspired by a number of novels that have fleet battles between spaceships, where waves of missiles are exchanged. Therefore the number of ships in the fleet is simply there to help justify increases in ATK, DEF and number of missiles per wave. This is why randomization is required.</p>
<p>I'd rather avoid simply using the mean, as I think variance would make things more interesting than a given scenario always dealing a specific amount of damage. That said, I tried out the normal with the mean, and a bit of tweaking to the variance formula does yield results that seem suitable. As it was, cases with only a 60-70% chance of a hit quite often resulted in a wave of 100 missiles scoring 100 hits (100% hits), which is why I'm thinking the binomial distribution should work better.</p>
|
<blockquote>
<p>N projectiles are fired at a target with probability, p, that each missile will hit the target and damage dealt is based on the number of projectiles that hit.</p>
<p>I'd like to randomly generate the number of projectiles that hit. Fractional values would likely be considered as partial hits for this purpose and thus seem acceptable.</p>
</blockquote>
<p>If the outcome for each missile is independent of all other missiles (not necessarily realistic) and <span class="math-container">$p$</span> is the same for all missiles, then the number of missiles that hit is binomial<span class="math-container">$(N,p)$</span>.</p>
<p>If <span class="math-container">$X$</span> is the number of hits then
<span class="math-container">$$P(X=x)={N\choose x} p^x (1-p)^{N-x}\,,\quad x=0,1,2,...,N$$</span></p>
<p>See Wikipedia on the <a href="http://en.wikipedia.org/wiki/Binomial_distribution" rel="nofollow noreferrer">binomial distribution</a>.</p>
<p>Similarly, under the same assumptions, the probability of at least one hit is 1-P(0 hits), and P(0 hits) is just <span class="math-container">$P(X=0)$</span> above.</p>
<p>To randomly generate the number of hits, you can use functions that randomly generate from that binomial distribution (I don't know what you have access to but libraries for this sort of thing are commonplace).</p>
<p>Alternatively, since partial hits are okay, you could randomly generate from a normal with mean <span class="math-container">$Np$</span>, and variance <span class="math-container">$Np(1-p)$</span>, and if the answer is below 0 call it 0, and if it's above N call it N. Nearly always you'll get a non-integer number in between.</p>
<p>You could then round to the nearest integer, or you can say "if it's within some distance <span class="math-container">$d$</span> of <span class="math-container">$x+\frac{1}{2}$</span>, call it <span class="math-container">$x$</span> hits and one partial hit". So you might say d=0.25, for example, which means if you got a result of 5.2 you'd say 'five hits' but a result of 5.4 would be '5 hits and one partial hit', while 5.9 would be "six hits'. By playing with <span class="math-container">$d$</span> you can increase or decrease the frequency with which any partial hits show up. This will never give more than a single partial hit though.</p>
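Both approaches are one-liners with NumPy (a sketch; the particular `N`, `p`, and seed are hypothetical choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
N, p = 10**7, 0.6  # hypothetical wave size and hit probability

# exact integer number of hits, without simulating each missile individually
hits = rng.binomial(N, p)

# normal approximation: allows fractional "partial" hits, clipped to [0, N]
x = rng.normal(N * p, np.sqrt(N * p * (1 - p)))
hits_approx = float(np.clip(x, 0, N))
print(hits, hits_approx)
```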
<p>If you don't want to assume independence, you'll have to specify the dependence you want.</p>
| 518
|
probability distributions
|
How to work out covariance using the expected value of x,y and xy
|
https://stats.stackexchange.com/questions/156165/how-to-work-out-covariance-using-the-expected-value-of-x-y-and-xy
|
<blockquote>
<p>Cov(x,y) if e(x) = 2.60 and e(y) = 2.35 and e(xy) = 7.06</p>
</blockquote>
<p>I'm confused because all I know is
$$\operatorname{cov}(x,y) = \operatorname{corr}(x,y)\,\sqrt{\operatorname{var}(x)}\,\sqrt{\operatorname{var}(y)}$$</p>
| 519
|
|
probability distributions
|
How to determine if random variables are distributed according to a multivariate normal distribution?
|
https://stats.stackexchange.com/questions/173440/how-to-determine-if-random-variables-are-distributed-according-to-a-multivariate
|
<p>Suppose $(x_1, x_2, x_3)\sim N(\mu, \Sigma)$ where $\mu\in\mathbb{R}^3$ and $\Sigma$ is a $3\times 3$ covariance matrix, are the variables $A = x_1 + x_2$ and $B = x_2 + x_3$ necessarily distributed according to a multivariate normal distribution? That is, do there exist $\nu\in\mathbb{R}^2$ and a $2\times 2$ covariance matrix $M$ such that $(A, B) \sim N(\nu, M)$</p>
|
<p>Yes and here is how to see it. Write $A$ and $B$ as</p>
<p>$$\begin{bmatrix} A \\B \end{bmatrix}=\begin{bmatrix} 1 & 1 & 0 \\0 & 1 &1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$</p>
<p>and exploit the fact that <a href="https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Affine_transformation" rel="nofollow">linear combinations of multivariate normal variables are themselves normal</a>.</p>
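In matrix terms, $\nu$ and $M$ follow directly from that affine-transformation fact; a NumPy sketch with a hypothetical $\mu$ and $\Sigma$:

```python
import numpy as np

mu = np.array([1.0, 2.0, 3.0])            # hypothetical mean vector
Sigma = np.array([[2.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.5]])        # hypothetical covariance matrix
C = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])            # (A, B) = C x

nu = C @ mu            # mean of (A, B)
M = C @ Sigma @ C.T    # covariance of (A, B)
print(nu, M, sep="\n")
```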
| 520
|
probability distributions
|
Why doesn't 1/2 always equal the value of X's CDF at E[X]?
|
https://stats.stackexchange.com/questions/177178/why-doesnt-1-2-always-equal-the-value-of-xs-cdf-at-ex
|
<p>Doesn't exactly half the probability fall on either side of the mean?</p>
|
<blockquote>
<p>Doesn't exactly half the probability fall on either side of the mean?</p>
</blockquote>
<p>No!</p>
<p>The value half the probability falls either side of (at least with continuous distributions) is called the <strong><em>median</em></strong> (with discrete distributions you have to rephrase as something like "at least half is equal to or above and equal to or below").</p>
<p>The mean is a different thing with a different definition, and in general mean and median are different.</p>
<p>Consider a distribution that has 99% of the probability at 0 and 1% of the probability at 100. The mean is 1, but 99% of the probability is below the mean. </p>
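That two-point example is easy to verify directly (a small Python sketch):

```python
values = [0.0, 100.0]
probs = [0.99, 0.01]

mean = sum(v * p for v, p in zip(values, probs))
frac_below_mean = sum(p for v, p in zip(values, probs) if v < mean)

print(mean, frac_below_mean)  # mean is 1.0, yet 99% of the mass lies below it
```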
| 521
|
probability distributions
|
pdf and their relation to moments
|
https://stats.stackexchange.com/questions/484434/pdf-and-their-relation-to-moments
|
<p>I read that the Gaussian distribution is defined by only 2 moments, i.e., mean and variance. Are there distributions defined by just the mean? For that matter, are there distributions defined by the first three moments, or only by infinitely many moments? Can someone give examples of such?</p>
|
<p>To reframe the question a bit, let us recall that (raw or absolute) moments of a distribution <span class="math-container">$p$</span> are quantities related to expectations (when defined): <span class="math-container">$E_p[(x-a)^\alpha]$</span> or <span class="math-container">$E_p[|x-a|^\alpha]$</span> and they can be centered, standardized, etc.</p>
<p>The range of <span class="math-container">$\alpha$</span> can be restricted to integers, or often positive integers <span class="math-container">$n$</span>.</p>
<p>For the Gaussian, once the first two raw moments <span class="math-container">$\mu_1,\mu_2$</span> (equivalently the mean and the variance <span class="math-container">$\sigma^2=\mu_2-\mu_1^2$</span>) are known, the other integer centered moments are determined directly: the odd ones are zero, the even ones are <span class="math-container">$(n-1)!!\,\sigma^n$</span>. <strong>So, the Gaussian moments are determined by the first two moments</strong> (more at: <a href="https://arxiv.org/pdf/1209.4340.pdf" rel="nofollow noreferrer">Moments and Absolute Moments of the Normal Distribution</a>). <strong>But I would not say that they are defined by them</strong>. Because you can derive the others... <em>by knowing that the distribution IS Gaussian</em>.</p>
<p>Given a sequence of moments <span class="math-container">$\mu_n$</span>, can we characterize the underlying <span class="math-container">$p$</span> (and vice versa)? This is one of the questions under the hood of <strong>the problem of moments</strong>.</p>
<p>Given a sequence of moments, is a distribution uniquely defined? No, there is a counter-example provided with a lognormal distribution and a periodically "perturbed" lognormal distribution; <span class="math-container">$p(x) := \frac{1}{x\sqrt{2\pi}} \exp \left(- \frac{(\log x)^2}{2} \right)$</span> and
<span class="math-container">$q(x) := p(x)\left(1+ \sin(2\pi \log(x))\right)$</span>, see:</p>
<ul>
<li>in MathOverflow: <a href="https://mathoverflow.net/q/3525/102954">When are probability distributions completely determined by their moments?</a></li>
<li>in SE.stats: <a href="https://stats.stackexchange.com/a/84213/83945">How is the kurtosis of a distribution related to the geometry of the density function?</a></li>
<li>in SE.maths: <a href="https://math.stackexchange.com/a/1166855/257503">Do moments define distributions?</a> .</li>
</ul>
<p>Under which additional properties a distribution is uniquely determined by its moments seems to remain an open problem, even under additional conditions (positivity, etc.). In <a href="https://mathoverflow.net/q/3525/102954">When are probability distributions completely determined by their moments?</a>, they say:</p>
<blockquote>
<p>Roughly speaking, if the sequence of moments doesn't grow too quickly,
then the distribution is determined by its moments. One sufficient
condition is that if the moment generating function of a random
variable has positive radius of convergence, then that random variable
is determined by its moments.</p>
</blockquote>
<p>One can check: <a href="https://web.williams.edu/Mathematics/sjmiller/public_html/book/papers/jcmp.pdf" rel="nofollow noreferrer">The moment problem</a> for references and an introduction to the classical moment problem on the real line with special focus on the indeterminate case.</p>
| 522
|
probability distributions
|
Probability distribution of random variables
|
https://stats.stackexchange.com/questions/271458/probability-distribution-of-random-variables
|
<p>The SAT is used as an aid in determining college admissions. This test is a multiple choice test. To discourage random guessing, points are subtracted for wrong answers. Each question has 5 possible answers, and the test taker must pick one answer or choose not to answer the question. One point is awarded for each correct answer, and for each wrong answer 1/4 point is subtracted. Construct a table of probability distribution of the random variable x, which represents the point for each answer. Then calculate the expected point value of a random guess.</p>
|
<p>So, if I understand correctly, you are asking for the probability distribution of the number of points for a <em>single question.</em></p>
<p>The use of a Bernoulli (0 or 1) random variable can help here. Let $Y \sim Bern(p)$, where $p$ is the probability of getting a question correct. Then the number of points earned or lost on a single question is:
\begin{equation}
X = Y - \frac{1}{4}(1-Y)
\end{equation}</p>
<p>So in order to answer your question, first figure out what $p$ is in the case where we are just guessing. Then you must calculate the expected value of $X$. It's a linear function of $Y$, and $\mathbb{E}(Y) = p$.</p>
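With random guessing among 5 choices the computation is short; a Python sketch of the linearity step:

```python
p = 1 / 5                     # probability a random guess is correct
E_Y = p                       # E(Y) for Y ~ Bern(p)
E_X = E_Y - 0.25 * (1 - E_Y)  # linearity: E(X) = E(Y) - (1/4)(1 - E(Y))
print(E_X)  # the expected point value of a random guess is 0
```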
| 523
|
probability distributions
|
Central limit theorem for non-identical distributed random variables
|
https://stats.stackexchange.com/questions/461272/central-limit-theorem-for-non-identical-distributed-random-variables
|
<p>Suppose <span class="math-container">$x_i, i=1, ..., n$</span> are independently distributed with mean <span class="math-container">$0$</span> and variance <span class="math-container">$\sigma^2$</span>. Then <span class="math-container">$\frac{1}{\sqrt n}\sum_{i=1}^n a_ix_i$</span> converges to a normal distribution with mean 0 and variance <span class="math-container">$\sigma^2 p$</span> if <span class="math-container">$\frac{1}{n}\sum_{i=1}^n a_i^2 \to p > 0$</span>.</p>
<p>How can this be proved? What I learned was that the central limit theorem is for independent, identically distributed random variables. But that is not the case here. </p>
|
<p>Necessary and sufficient conditions for a CLT of variables that are independent but not iid are known: the Lindeberg-Feller CLT.</p>
<p>The conditions for a CLT to hold for an independent zero-mean sequence <span class="math-container">$Y_n$</span> (or triangular array <span class="math-container">$Y_{in}$</span>) are</p>
<ul>
<li>The 'Lindeberg' condition on the tails: with <span class="math-container">$s_n^2=\sum_{i=1}^n\mathrm{var}[Y_i]$</span>, for any <span class="math-container">$\epsilon>0$</span>, <span class="math-container">$$\lim_{n\to\infty}\frac{1}{s_n^2} \sum_{i=1}^n E[Y_i^2\,\mathbf{1}\{|Y_i|>\epsilon s_n\}]=0$$</span></li>
<li>The 'uniform asymptotic negligibility' condition <span class="math-container">$$\max_{i\leq n} \mathrm{var}[Y_i]/s_n^2\to 0$$</span></li>
</ul>
<p>The Lindeberg condition is implied by a bound on <span class="math-container">$E[|Y_i|^{2+\delta}]$</span> for some <span class="math-container">$\delta>0$</span>, and that's often how it's proved. The Lindeberg condition is sufficient; adding the UAN condition makes it necessary as well.</p>
<p>The basic idea behind the proof of the sufficiency half of theorem is to take a sequence of Normal random variables with the same means and variances as the <span class="math-container">$Y_i$</span>. The sum of this is (trivially) Normal. Now replace them one at a time by the <span class="math-container">$Y_i$</span> and show that the expectation of a suitable set of functions of the partial sums doesn't change much, so that the limiting distribution is still Normal. The details are a bit annoying but widely available.</p>
<p>We will want to take <span class="math-container">$Y_i=a_iX_i$</span>. Whether your conditions imply these conditions is not clear. For a start, I'm not sure whether you mean your variables <span class="math-container">$X_i$</span> to iid or just zero mean and constant variance. If they aren't iid, it's possible that the result fails even for <span class="math-container">$a_i\equiv 1$</span>. For example, if the skewness of the <span class="math-container">$X_i$</span> increases fast enough with <span class="math-container">$i$</span> the skewness of the partial sums might not go to zero.</p>
<p>If your <span class="math-container">$X_i$</span> are iid, we'd only need to worry about the impact of the <span class="math-container">$a_i$</span>. Your condition
<span class="math-container">$$\frac{1}{n}\sum_{i\leq n} a_i^2\to p$$</span>
is equivalent to
<span class="math-container">$$\sum_{i\leq n} \frac{\mathrm{var}[Y_i]}{n\sigma^2}=\sum_{i\leq n} \frac{a_i^2\sigma^2}{n\sigma^2}=\frac{1}{n}\sum_{i\leq n}a_i^2\to p$$</span>
which implies
<span class="math-container">$$\max_{i\leq n} \frac{\mathrm{var}[Y_i]}{n\sigma^2}\to 0$$</span>
the UAN condition.</p>
<p>Again if your <span class="math-container">$X_i$</span> are iid
<span class="math-container">$$E[Y_i^2\,\mathbf{1}\{|Y_i|>\epsilon s_n\}]=a_i^2E[X_i^2\,\mathbf{1}\{|X_i|>\epsilon \tau_{n,i}\}]$$</span>
where <span class="math-container">$\tau_{n,i}=s_n/|a_i|$</span> and <span class="math-container">$s_n=\sigma\sqrt{\sum_{i=1}^na_i^2}$</span>. Your condition on the <span class="math-container">$a_i$</span> implies <span class="math-container">$s_n^2/n$</span> is close to <span class="math-container">$\sigma^2 p$</span> (within an arbitrarily short interval except finitely often), while the UAN condition forces <span class="math-container">$\tau_{n,i}\to\infty$</span> uniformly in <span class="math-container">$i$</span>, so the truncated expectations vanish; this implies Lindeberg's condition.</p>
<p>Note that convergence of <span class="math-container">$\frac{1}{n}\sum_i a_i^2$</span> was used; a <span class="math-container">$\limsup$</span> would not be enough.</p>
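A quick simulation illustrates the conclusion. The sketch below (Python/NumPy, an assumed toolchain) uses hypothetical weights <span class="math-container">$a_i$</span> alternating between 1 and 2, so <span class="math-container">$\frac1n\sum a_i^2 \to p = 2.5$</span>, and deliberately non-Normal iid <span class="math-container">$X_i$</span>:

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 1000, 2000
a = np.where(np.arange(1, n + 1) % 2 == 0, 2.0, 1.0)  # (1/n) sum a_i^2 = 2.5

# iid uniform X_i with mean 0 and variance sigma^2 = 1 (deliberately non-Normal)
x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(reps, n))
s = (x * a).sum(axis=1) / np.sqrt(n)

print(round(s.mean(), 2), round(s.var(), 2))  # mean near 0, variance near 2.5
```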
| 524
|
probability distributions
|
Phase space distribution of heads and tails (coin toss)
|
https://stats.stackexchange.com/questions/70670/phase-space-distribution-of-heads-and-tails-coin-toss
|
<p>So I have the equation </p>
<p>$$h(t) = 1 + vt - \frac{1}{2} gt^2 \pm \sin(\omega t) $$</p>
<p>to describe the motion of a flipped coin. It is just a kinematics equation with an angular component added to it, where $h$ is the height of the coin, $v$ is the upwards velocity of the coin, $\omega$ is the angular velocity, and $t$ is time. </p>
<p>What I want to show is a 2D graph of angular velocity against velocity ($\omega$ vs. $v$) so that for all combinations of $v$, $\omega$, and $t$ such that $h = 0$ (when the coin lands), the graph is divided into regions that indicate whether the coin landed on heads or tails.</p>
<p>What is the approach for doing this? I prefer Mathematica if possible.</p>
| 525
|
|
probability distributions
|
What distribution would be expected for the number of rapists (vs number of victims)?
|
https://stats.stackexchange.com/questions/71641/what-distribution-would-be-expected-for-number-rapists-vs-number-of-victims
|
<p>This is a horrible question to ask. But it would be useful to know (rather than someone spouting an <em>opinion</em> that 99.999% are/are not.).</p>
<p>It has been estimated that 18.3% of women will be raped at some point in their life time (<a href="http://www.cdc.gov/ViolencePrevention/pdf/sv-datasheet-a.pdf" rel="nofollow">CDC: Sexual Violence, Facts at a glance</a>).</p>
<p>So to answer "how many men are (likely to be) rapists", more info is required.</p>
<p>There would be an upper bound of 18.3%, however some men would rape more than one woman.</p>
<p>So to estimate the percentage of men that may have committed rape against one or more women, you would need to know what type of distribution there would be.</p>
<p>So what would likely be the distribution for rapist-like behaviour?
(One-tailed normal, Pareto, something else?)</p>
<p>*To make things simple, let's assume only the M -> F case.</p>
<p>This is rather subjective, so I am more than happy to rephrase if someone has suggestions for how.
Also links to actual research would be ideal.</p>
|
<p>How about a zero-inflated model?</p>
<p>It seems reasonable to assume that the population contains a (hopefully large) number of good folks who never rape anyone, mixed together with a subpopulation of rapists that attack people according to some other as-of-yet-unspecified distribution. </p>
<p>Fortunately for me, I know nothing about rapists, but it might be reasonable to start by imagining that for the rapists, the rapes are "generated" by a Poisson process. The underlying model--rapists attack new victims at a fixed rate--isn't quite right, but it's probably close enough that a zero-inflated Poisson model would work reasonably well. </p>
<p>Since the Poisson model only has one parameter ($\lambda$ controls both the mean and variance), you may want to consider a zero-inflated negative binomial instead, since it can handle over-dispersed (variance > mean) data too.</p>
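For concreteness, sampling from a zero-inflated Poisson is a two-step draw; a Python sketch (the mixing proportion and rate below are hypothetical illustration values, not estimates of anything):

```python
import numpy as np

rng = np.random.default_rng(11)
pi0, lam, n = 0.95, 2.0, 100_000  # hypothetical: 95% structural zeros, rate 2

# step 1: decide membership in the always-zero subpopulation;
# step 2: otherwise draw a Poisson count
structural_zero = rng.random(n) < pi0
counts = np.where(structural_zero, 0, rng.poisson(lam, n))

# mean of the mixture is (1 - pi0) * lam = 0.1
print(round(counts.mean(), 2))
```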
| 526
|
probability distributions
|
Calculating the probability of an inequality with two random variables
|
https://stats.stackexchange.com/questions/78030/calculating-the-probability-of-an-inequality-with-two-random-variables
|
<p>I am analyzing a timing circuit I designed, and I need to calculate the probability of a certain event (bit error). For example, I have derived this equation:</p>
<p>$(1 + x) / d < 1 / M$,</p>
<p>where x is a random variable with a normal distribution, d is a random variable with uniform distribution, and M is a constant.</p>
<p>I would like to be able to calculate the probability that this inequality is true, for a given distribution for x, a given distribution for d, and a given M. For example, let's say x has a normal distribution with mean = 0 and SD = 0.05, d has a uniform distribution between 0 and 1, and M = 3. How would I calculate the probability that the inequality is true? </p>
<p>If there was a single random variable, I could do it. But I am lost and don't know how to solve it. I tried searching and reading a statistics textbook, but I don't know what to look for.</p>
|
<p>I think that your ratio is slash distributed. </p>
<p><a href="http://en.wikipedia.org/wiki/Slash_distribution" rel="nofollow">http://en.wikipedia.org/wiki/Slash_distribution</a></p>
<p>It should be possible to derive the probability $<1/M$ from there. Another option perhaps is bootstrapping, where you would take random samples from your data and evaluate the criterion. The proportion of cases in which it is true would be the probability of your event.</p>
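The simulation idea is straightforward to sketch in Python/NumPy (hypothetical seed; note that with the question's example numbers, $M(1+x)$ is almost surely greater than 1 while $d < 1$, so the event is essentially impossible):

```python
import numpy as np

rng = np.random.default_rng(5)
size = 10**6
x = rng.normal(0.0, 0.05, size)   # x ~ N(0, 0.05^2)
d = rng.uniform(0.0, 1.0, size)   # d ~ U(0, 1)
M = 3

# proportion of simulated cases in which the inequality holds
prob = np.mean((1 + x) / d < 1 / M)
print(prob)  # essentially 0 for these particular parameters
```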
| 527
|
probability distributions
|
Expected number of trials before k successes where multiple successes can occur at each trial
|
https://stats.stackexchange.com/questions/94891/expected-number-of-trials-before-k-successes-where-multiple-successes-can-occur
|
<p>Let's say there are $N$ balls, $l$ of them have unique colours, $N-l$ of them are black.</p>
<p>$D$ people uniformly randomly sample $n$ balls without replacement. Each person's sampling is independent of the others'. So they take $n$ balls, make copies of them, and then put the originals back for the next person.</p>
<p>Now I want to find at least $k$ of the $l$ uniquely coloured balls. I'm going to ask each person one at a time (in random order) for all of their non-black balls. What's the expected number of people I have to ask before I have collected at least $k$ uniquely coloured balls?</p>
<p>If I only ask one person then the probability that I collect at least $k$ comes from a binomial distribution ($X \sim B(l, \frac{n}{N})$). I imagine this as asking "what is the probability that the person sampled at least $k$ of the $l$ balls".</p>
<p>Where I come unstuck is that if I now ask a second person, the probability that I find more coloured balls is dependent on how many, and which, coloured balls I found the first time.</p>
<p>I wrote out the possibilities for small numbers by hand and it looks like $(zp)^k$ (where $p$ is the probability that a particular coloured ball was sampled) gives the probability that $k$ balls have been found after asking $z$ people. </p>
<p>I got that 'result' by asking, "what's the probability that between 2 people I find 2 different balls". If I use $p_{ij}$ as the probability that person $i$ gave me ball $j$ then the answer is:</p>
<p>$P(\textrm{2 balls between 2 people}) = p_{11}p_{12}+p_{11}p_{22}+p_{21}p_{12}+p_{21}p_{22} = 4p^2$</p>
<p>My reasoning is that the four terms are the four possible ways that I could get 2 balls from 2 people, each way is independent so I can add them up.</p>
<p>I did this for a couple of different numbers and saw the $(zp)^k$ result.</p>
<p>I'm not sure that my reasoning is sound or that my result is even useful in answering the original question. I've been staring at this for an afternoon now and I don't think I'm getting much further.</p>
<p>So, what should I do?</p>
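<p>In the meantime, I can at least simulate the process directly to sanity-check any candidate formula (the parameter values below are just illustrative):</p>

```python
import random

def people_needed(N, l, n, k, rng):
    """One realization of the process described above: each person holds
    a uniform n-subset of the N balls (ids 0..l-1 are the coloured ones);
    ask people until at least k distinct coloured balls have been seen."""
    found = set()
    people = 0
    while len(found) < k:
        people += 1
        sample = rng.sample(range(N), n)            # one person's draw
        found.update(b for b in sample if b < l)    # keep the coloured ones
    return people

rng = random.Random(0)
trials = [people_needed(N=20, l=5, n=4, k=3, rng=rng) for _ in range(2000)]
expected_people = sum(trials) / len(trials)
```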
| 528
|
|
probability distributions
|
Cumulative probability of 2 independent events
|
https://stats.stackexchange.com/questions/97539/cumulative-probability-of-2-independent-events
|
<p>I’m trying to find the cumulative probability of 2 independent events (at least I think it’s cumulative probability). </p>
<p>I will miss work on rainy days (event 1) where my car won’t start (event 2).
The probability that my car won’t start on any given day is 20%.
The probability of rain on any given day in a month is 10 in 30 (33.3%).
The two events are independent of each other.</p>
<p>The probability that my car won’t start on a rainy day is 0.2 * 0.333 ≈ 0.067.
In a 30 day month I would expect to miss work 0.067 * 30 = 2.0 days (rounded) </p>
<p>But how can I find the probability of missing 1 days, 2 days, 3 days … all 30 days etc. during a 30 day month? </p>
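<p>My guess is that, with days independent, the count of missed days follows a Binomial(30, 1/15) distribution, which would give exactly these per-count probabilities. A quick sketch of that idea (my own attempt, not verified):</p>

```python
from math import comb

p = 0.2 * (10 / 30)   # P(rain AND car won't start on a given day) = 1/15
n = 30                # days in the month

def pmf(m):
    """P(missing exactly m days of work in the month)."""
    return comb(n, m) * p**m * (1 - p)**(n - m)

dist = [pmf(m) for m in range(n + 1)]                # m = 0, 1, ..., 30
mean_days = sum(m * q for m, q in enumerate(dist))   # = n * p = 2 days
at_least_one = 1 - dist[0]                           # P(missing 1+ days)
```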
| 529
|
|
probability distributions
|
How do I determine/compute a cutoff for chance level/not chance level?
|
https://stats.stackexchange.com/questions/102643/how-do-i-determine-compute-a-cutoff-for-chance-level-not-chance-level
|
<p>I am in the process of describing my research design for my dissertation and ran into a roadblock. In my design, I am converting 20 y/n responses from 190 participants to two dichotomous groups: 1). Chance level and 2). Above chance level.</p>
<p>If chance level is 50% or 10 responses correct/incorrect, how do I determine what would not be chance? How do I determine the cutoff for placing an individual into the above chance level group?</p>
<p>I have heard a few things such as 25% over chance is no longer chance. But even here, I am not sure how to calculate it...is 50% the base or is 100% the base?</p>
<p>I could really use your help with this matter.</p>
<p>Thank you for your consideration.</p>
<p>======================================================================================</p>
<p>Hi Glen,</p>
<p>I asked a question about probabilities and cut-offs, to which you replied with a great answer. I appreciate the table you provided as it gives me a choice. However, now my dilemma is that I would like to know the source so I can cite it. My mentor wants to know my source!</p>
<p>Skootz</p>
|
<p>Here's one possibility:</p>
<p>If you assume the answers are like independent coin-flips, you can work out the probability of getting any number correct. For example, if the answers were 50-50 coin flips, the chance of getting at least 15 correct is about 2%. You might say "well, that's pretty unlikely, let's say that 15 or more is 'above chance' since there's such a low probability of it happening by chance".</p>
<p>Indeed, that's how hypothesis tests are constructed.</p>
<p>Here's a table of values so you can choose which cutoff appeals to you:</p>
<pre><code>Number Probability of
correct at least that
many correct (%)
13 13.1588
14 5.7659
15 2.0695
16 0.5909
17 0.1288
18 0.0201
19 0.0020
20 0.0001
</code></pre>
<p>I'd suggest choosing one of 14, 15 or 16.</p>
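<p>For reference, the table above is just the binomial tail probability: P(at least k correct) is the sum of C(20, j)/2^20 over j = k, ..., 20. A short sketch that reproduces it:</p>

```python
from math import comb

def tail_percent(k, n=20):
    """P(at least k correct out of n fair coin-flip answers), in percent."""
    return 100 * sum(comb(n, j) for j in range(k, n + 1)) / 2**n

# Reproduces the table above for k = 13, ..., 20.
table = {k: round(tail_percent(k), 4) for k in range(13, 21)}
```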
| 530
|
probability distributions
|
Proof of conditional probability
|
https://stats.stackexchange.com/questions/104351/proof-of-conditional-probability
|
<p>$X$ and $Y$ are real-valued random variables such that the distribution of $(X,Y)$ is absolutely continuous with density function $p$ and let $p_x$ denote the marginal density function of $X$. Suppose that there exists a point $x_0$ such that $p_x(x_0) > 0$, $p_x$ is continuous at $x_0$, and for almost all $y$, $p(\cdot,y)$ is continuous at $x_0$. Let $A$ denote a subset of $\mathbb{R}$. For each $\epsilon \gt 0$, let</p>
<p>$$d(\epsilon)=\Pr(Y \in A | x_0 \leq X \leq x_0 + \epsilon).$$</p>
<p>Show that $\Pr[Y \in A|X = x_0] = \lim_{\epsilon \to 0} d(\epsilon).$</p>
<p><strong>Attempt:</strong></p>
<p>$$\eqalign{
\lim_{\epsilon \to 0} d(\epsilon)
&= \lim_{\epsilon \to 0}\Pr(Y \in A|x_0≤ X≤ x_0+\epsilon)\\
&= \Pr(Y∈A|x_0≤X≤x_0+0)\\
&= \Pr(Y \in A|x_0≤ X ≤x_0)\\
&= \Pr(Y \in A|X = x_0).
}$$</p>
<p>I realize this attempt is wrong, but I'm not sure how to approach this problem.</p>
|
<p>Recalling the definition of conditional density we have
$$
\Pr(Y\in A\mid X=x_0) = \int_A f_{Y\mid X}(y\mid x_0)\,dy = \frac{1}{f_X(x_0)} \int_A f_{X,Y}(x_0,y)\,dy \, .
$$
For $\epsilon>0$, consider
$$
\Pr(Y\in A\mid x_0\leq X\leq x_0+\epsilon) = \frac{\Pr(Y\in A, x_0\leq X\leq x_0+\epsilon)}{\Pr(x_0\leq X\leq x_0+\epsilon)} \, .
$$
Very informally (drawing a figure may help), if $\epsilon$ is "sufficiently small" and, for fixed $y$, $f_{X,Y}(\,\cdot\,,y)$ "varies slowly" inside the interval $[x_0,x_0+\epsilon]$, we can approximate
$$
\begin{align}
\Pr(Y\in A, x_0\leq X\leq x_0+\epsilon) &= \int_{\{(x,y):x_0\leq x\leq x_0+\epsilon, y\in A\}} f_{X,Y}(x,y)\,dxdy \\
&\approx \epsilon \int_A f_{X,Y}(x_0,y)\, dy \, .
\end{align}
$$
Analogously,
$$
\Pr(x_0\leq X\leq x_0+\epsilon) \approx \epsilon\,f_X(x_0)\, .
$$
Hence,
$$
\lim_{\epsilon\to 0} \Pr(Y\in A\mid x_0\leq X\leq x_0+\epsilon) = \Pr(Y\in A\mid X=x_0) \, .
$$</p>
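<p>A quick numerical illustration of the limit (a concrete choice of distributions of my own, not part of the exercise): take $X \sim N(0,1)$, $Y = X + Z$ with $Z \sim N(0,1)$ independent, and $A = (0, \infty)$, so that $\Pr(Y \in A \mid X = x_0) = \Phi(x_0)$.</p>

```python
import math
import random

rng = random.Random(0)
x0, eps, n = 0.5, 0.05, 400_000

hits = total = 0
for _ in range(n):
    x = rng.gauss(0, 1)
    if x0 <= x <= x0 + eps:          # condition on the thin band
        total += 1
        y = x + rng.gauss(0, 1)
        hits += (y > 0)

band_estimate = hits / total                            # estimate of d(eps)
exact_limit = 0.5 * (1 + math.erf(x0 / math.sqrt(2)))   # Phi(0.5) ~ 0.6915
```

<p>Shrinking eps moves the band estimate toward the exact conditional probability, as the argument above predicts.</p>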
| 531
|
probability distributions
|
What is the difference between dexp and pexp
|
https://stats.stackexchange.com/questions/111970/what-is-the-difference-between-dexp-and-pexp
|
<p>I understand probability distributions, but I am having a hard time grasping probability density functions, specifically the difference between dexp (the density of the exponential distribution) and pexp (the cumulative distribution function of the exponential distribution).</p>
|
<p>Suppose X is an exponential random variable.
pexp(c) is the probability that X is less than or equal to c. pexp is always non-decreasing. To prove this, let m>0; then pexp(c+m) = P(X ≤ c+m) ≥ P(X ≤ c) = pexp(c).</p>
<p>dexp(c) is the derivative of pexp(c), but intuitively, it is the probability that X is 'near' c, or the 'density' of the probability mass. The chance that X lands on exactly any given number is zero, but when we integrate the density over any interval, we get a finite probability that X falls in that interval. The density is decreasing, since an exponential random variable is more likely to be between 1 and 2 than between 100 and 101. </p>
<p>A CDF is NOT a density function. A density is a mass divided by a length. A CDF doesn't define the length where the probability mass is, just that it is left of c.</p>
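<p>The relationship is easy to check numerically: for the rate-1 exponential, pexp(c) = 1 - e^(-c) and dexp(c) = e^(-c). A Python sketch of the two R functions (rate fixed at 1 by default, as in R), with a finite-difference check that the density is the slope of the CDF:</p>

```python
import math

def pexp(c, rate=1.0):
    """CDF: P(X <= c) for X ~ Exponential(rate)."""
    return 1.0 - math.exp(-rate * c) if c >= 0 else 0.0

def dexp(c, rate=1.0):
    """Density: the derivative of the CDF."""
    return rate * math.exp(-rate * c) if c >= 0 else 0.0

# Central finite difference of the CDF recovers the density.
c, h = 1.3, 1e-6
numeric_slope = (pexp(c + h) - pexp(c - h)) / (2 * h)
```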
| 532
|
probability distributions
|
joint probability distribution with a constant
|
https://stats.stackexchange.com/questions/124620/joint-probability-distribution-with-a-constant
|
<p>If $X \sim N(0,1)$, then what is the joint probability distribution of $(X+1,X)$?</p>
<p>An attempt: $f(x,x+1)=f(x|x+1)f(x+1)=f(x+1)$, so $N((0,0),(0,0;0,1))$. Not sure though...</p>
|
<p>First, define $Y=X+1$ to make the notation easier. What you are looking for is the pdf of YX. Here it is:</p>
<p>$$ f_{YX}(y,x) = f_{Y|X}(y|x)f_X(x).$$</p>
<p>We know $f_X(x)$, so the question is what is $f_{Y|X}(y|x)$ ?</p>
<p>The answer is the following. If we know that $X=x$, then $Y=X+1 = x+1$. So $Y$ is a constant, with pdf given by $\delta(y-(x+1))$. Note that this is <a href="http://en.wikipedia.org/wiki/Dirac_delta_function" rel="nofollow">Dirac's delta</a>. </p>
<p>The overall pdf:</p>
<p>$$ f_{YX}(y,x) = \delta(y-(x+1)) \frac{1}{\sqrt{2\pi}}\exp(-x^2/2). $$</p>
<p>In other words, the pdf is zero whenever $y\neq x+1$. (You could also say that for $y=x+1$ it is equal to "infinity", but this is not very informative).</p>
<p>You can verify this by marginalizing w.r.t. either $X$ or $Y$, to see that you get the correct pdfs.</p>
| 533
|
probability distributions
|
Comparing values of random variables from sampled distributions
|
https://stats.stackexchange.com/questions/128732/comparing-values-of-random-variables-from-sampled-distributions
|
<p>This is a novice question which I am struggling to answer. I want to find P(A>B), where A and B are random variables from two different distributions. The two distributions I have are not classical variants (e.g. normal or binomial) but sampled, empirical distributions, so I cannot use an analytical approach where I could look at the PDFs and reason about them; hence I would like to find a numerical solution. </p>
<p>I was wondering if I could use this formula from this answer (<a href="https://stats.stackexchange.com/questions/24693/probability-that-random-variable-b-is-greater-than-random-variable-a">Probability that random variable B is greater than random variable A</a>)</p>
<p>\begin{align*}
\theta=P(A<B) = \Phi\left(\dfrac{\mu_B-\mu_A}{\sqrt{\sigma_A^2+\sigma_B^2}}\right)
\end{align*}</p>
<p>I could easily estimate
\begin{align*}
(μA,μB,σA,σB)
\end{align*}</p>
<p>Are there other solutions for it? </p>
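<p>One numerical approach I could try: estimate P(A&lt;B) directly from the samples, with no normality assumption, as the fraction of cross-pairs (a, b) with a &lt; b (this is the Mann–Whitney statistic). A sketch:</p>

```python
from itertools import product

def prob_a_less_b(a_samples, b_samples):
    """Empirical P(A < B): fraction of all cross-pairs (a, b) with a < b.
    For very large samples, compare a random subset of pairs instead."""
    pairs = list(product(a_samples, b_samples))
    return sum(a < b for a, b in pairs) / len(pairs)

p = prob_a_less_b([1, 3], [2, 4])   # pairs (1,2) (1,4) (3,2) (3,4) -> 3/4
```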
| 534
|
|
probability distributions
|
rate of convergence of sample mean
|
https://stats.stackexchange.com/questions/134325/rate-of-convergence-of-sample-mean
|
<p>The law of large numbers ensures the convergence of the sample mean to the population mean, but it does not say anything about the rate of convergence. The CLT does tell us about the rate of convergence. So what is the need for the Berry–Esseen theorem, which also gives us a rate of convergence?</p>
| 535
|
|
probability distributions
|
Distribution of the ratio of dependent magnitude square of complex Gaussians
|
https://stats.stackexchange.com/questions/137192/distribution-of-the-ratio-of-dependent-magnitude-square-of-complex-gaussians
|
<p>Assume that $X=X_1 + X_2 +...+X_n$, where $X_i \sim CN(0,\sigma^2)$ and independent. Here $CN$ means circular complex Gaussian.</p>
<p>The question is, what is the distribution for</p>
<p>$Z = \frac{\left|X\right|^2}{\left|X_1\right|^2 + \left|X_2\right|^2+...+\left|X_n\right|^2}$</p>
<p>How can we benefit from the results obtained here:
<a href="https://stats.stackexchange.com/questions/88628/distribution-of-the-ratio-of-dependent-chi-square-random-variables">Distribution of the ratio of dependent chi-square random variables</a></p>
| 536
|
|
probability distributions
|
Are first passage times in a Brownian motion the same as the ARL in a CUSUM?
|
https://stats.stackexchange.com/questions/32784/are-first-passage-times-in-a-brownian-motion-the-same-as-the-arl-in-a-cusum
|
<p>I am investigating a link between a random walk with drift (call it Brownian process or difusion with drift) and the CUSUM statistic.<br>
The CUSUM procedure accumulates deviations from the process mean over time; thus, if a change in the mean occurs for some reason, the CUSUM will steadily increase over time, eventually crossing some pre-determined control limit, at which point an alarm is raised. </p>
<p>Can anybody enlighten me as to whether this is in any way similar to calculating the first passage time for the CUSUM, that is the time it takes to cross a given barrier. Is the CUSUM ARL the inverse of the probability of crossing this barrier? How do I go about calculating this probability? </p>
<p>...so many questions!!
Any thoughts are appreciated!!</p>
|
<p>Brownian motion with drift is a linear function with Gaussian noise added to it. Generally speaking, CUSUM charts are used to detect a process going out of control. What is assumed is that the process starts out under control and something could go wrong. This would show up as a change in the mean of some measure of the process (e.g. the rate of production of defective parts). Such changes often start at some point in time with a level shift, or with a shift occurring over a short period of time. The CUSUM chart crossing a control boundary is a sequential hypothesis-testing problem, where crossing rejects the hypothesis that the mean has not changed. There are many conditions that could indicate a change in mean. Brownian motion with drift is one of several possible ways that a process could go out of control, so the CUSUM deals more generally with a process going out of control and not just with Brownian motion with drift. </p>
| 537
|
probability distributions
|
Fit a distribution to a combinatorial problem
|
https://stats.stackexchange.com/questions/34517/fit-a-distribution-to-a-combinatorial-problem
|
<p>In my previous question titled <a href="https://math.stackexchange.com/questions/183614/conditional-combinations-of-balls-in-bowls">"Conditional combinations of balls in bowls"</a>, is there a distribution to fit $k$ when $d \gg M$? I mean, when $d$ is so large, what is the distribution of the total number of balls?</p>
<p>$M$ can be a number between $4$ to $64$ and $d$ (number of bowls) is very large, about $10000$. Each bowl can have $0,1,2,\ldots,M$ balls in it with equal probability. What is the distribution of total number of balls $k$? $\Pr(k=k_0)=$? </p>
|
<p>Using uppercase $K$ for a random variable and lowercase $m,d,k_0$ for constants:</p>
<p>The number of balls in a particular bowl has a <a href="http://en.wikipedia.org/wiki/Uniform_distribution_%28discrete%29" rel="nofollow">discrete uniform distribution</a> with mean $\frac{m}{2}$ and standard deviation $\sqrt{\frac{m^2+2m}{12}}$.</p>
<p>Add up a large number $d$ of these i.i.d. variables; then the distribution of the sum $K$, with mean $\frac{md}{2}$ and standard deviation $\sqrt{\frac{(m^2+2m)d}{12}}$, can be approximated using the <a href="http://en.wikipedia.org/wiki/Central_limit_theorem" rel="nofollow">central limit theorem</a>, remembering that $K$ is discrete with integer spacing. </p>
<p>So $$\Pr(K=k_0) \approx \Phi \left( \frac{k_0 +\frac12 -\frac{md}{2}}{\sqrt{\frac{(m^2+2m)d}{12}}} \right)- \Phi \left( \frac{k_0-\frac12 -\frac{md}{2} }{\sqrt{\frac{(m^2+2m)d}{12}}} \right) \approx \phi \left( \frac{k_0-\frac{md}{2}}{\sqrt{\frac{(m^2+2m)d}{12}}} \right) $$ where $\Phi$ is the cumulative distribution function and $\phi$ is the probability density function of a <a href="http://en.wikipedia.org/wiki/Standard_normal_distribution#Properties" rel="nofollow">standard normal distribution</a>. </p>
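<p>The quality of this approximation can be checked against the exact distribution, which is cheap to compute by repeated convolution for moderate $d$ (the values $m=4$, $d=50$ below are just an illustration):</p>

```python
import math

def exact_sum_dist(m, d):
    """Exact distribution of K = sum of d i.i.d. uniforms on {0,...,m},
    computed by repeated convolution."""
    dist = {0: 1.0}
    for _ in range(d):
        new = {}
        for k, p in dist.items():
            for j in range(m + 1):
                new[k + j] = new.get(k + j, 0.0) + p / (m + 1)
        dist = new
    return dist

def normal_approx(m, d, k0):
    """phi((k0 - md/2)/s)/s with s = sqrt((m^2 + 2m) d / 12)."""
    s = math.sqrt((m * m + 2 * m) * d / 12)
    z = (k0 - m * d / 2) / s
    return math.exp(-z * z / 2) / (s * math.sqrt(2 * math.pi))

m, d = 4, 50
exact = exact_sum_dist(m, d)
k0 = m * d // 2                      # the mode, md/2 = 100
err = abs(exact[k0] - normal_approx(m, d, k0))
```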
| 538
|
probability distributions
|
Does any one know what is the P value when F=37.45; df1=5; df2=40 in a one way anova F test?
|
https://stats.stackexchange.com/questions/41476/does-any-one-know-what-is-the-p-value-when-f-37-45-df1-5-df2-40-in-a-one-way-a
|
<p>in a one way anova F test
when F=37.45; df1=5; df2=40
what is the P value?
I tried several software packages, and the result is <0.0001. I know it sounds weird that I need such a small probability, but I really need it for a publication.
I would greatly appreciate it if anyone could help with this issue. Please let me know the exact number if you can calculate it.</p>
<p>thanks</p>
|
<pre><code>0.00000000000004551914400963141815737
</code></pre>
<p>Is that exact enough?</p>
<p>Seriously, even though in a publication it asks for an exact amount there's a point where just saying it's less than 0.0001 is all you can reasonably do. It's a very very very small number. I strongly suggest that you not try to send that number in for publication and you stick with p < 0.0001.</p>
| 539
|
probability distributions
|
Distributions similar to the family of stable distributions
|
https://stats.stackexchange.com/questions/49413/distributions-similar-to-the-family-of-stable-distributions
|
<p>Are there any other distributions with similar properties to the family of <a href="http://en.wikipedia.org/wiki/Stable_distributions" rel="nofollow">stable distributions</a>? That is, $\alpha$-stable, normal tempered stable, classical tempered stable, etc., where the distribution can be heavy-tailed, leptokurtic, symmetric or asymmetric, but can still retain some of the Gaussian properties? </p>
<p>For instance, setting $\alpha = 2$ in any of the stable distributions results in a normal distribution with no leptokurtic or heavy-tailed properties.</p>
<p>I know of the <a href="http://en.wikipedia.org/wiki/Generalised_hyperbolic_distribution" rel="nofollow">generalized hyperbolic distribution</a> family which is similar in features and also, for a few special cases when the degrees of freedom are very high decays to the gaussian.</p>
| 540
|
|
probability distributions
|
Probability distribution to simulate number of users
|
https://stats.stackexchange.com/questions/52987/probability-distribution-to-simulate-number-of-users
|
<p>I need to simulate, with a probability distribution, the number of users listening to the radio during the day. Given data: the maximum number of users is 1000, the day has 24 hours (from 0 to 24), and the highest number of users should occur from 11 to 15. Which probability distribution would be best, and which parameters of the distribution should I use? </p>
|
<p>You may use a <a href="http://en.wikipedia.org/wiki/Birth%E2%80%93death_process" rel="nofollow">Birth-Death</a> process: a "birth" is a new user tuning in, a "death" is a listener tuning out. If you adjust the rates of the process properly, you will not need to introduce the maximum users cut-off explicitly.</p>
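<p>As a sketch of that idea in discrete time (all rates below are invented for illustration; tune them to your data): each minute, every idle potential listener tunes in with a small probability that peaks between hours 11 and 15, and every current listener tunes out with a constant probability. The 1000-user cap is automatic because births can only come from the idle pool.</p>

```python
import random

def simulate_day(max_users=1000, seed=0):
    """Discrete-time birth-death sketch of listener counts per minute."""
    rng = random.Random(seed)
    death = 0.01                     # per-minute tune-out probability
    listeners, path = 0, []
    for minute in range(24 * 60):
        hour = minute / 60
        birth = 0.004 if 11 <= hour < 15 else 0.001   # peak 11:00-15:00
        joins = sum(rng.random() < birth for _ in range(max_users - listeners))
        leaves = sum(rng.random() < death for _ in range(listeners))
        listeners += joins - leaves
        path.append(listeners)
    return path

path = simulate_day()
```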
| 541
|
probability distributions
|
Probability density function scalar-valued non-linear function of continuous random variables
|
https://stats.stackexchange.com/questions/546723/probability-density-function-scalar-valued-non-linear-function-of-continuous-ran
|
<p>Is it possible to numerically calculate the probability density function for a scalar-valued function where each variable is independent and has a known distribution? For example in the simple case of <span class="math-container">$z=f(x,y)$</span>:
<span class="math-container">$$
z=y*e^x
$$</span>
if <span class="math-container">$x$</span> and <span class="math-container">$y$</span> have gaussian probability density functions, how can one calculate <span class="math-container">$Pz(Z)$</span>?
For the 1-D case, transformation of variables is a good approach. Using the same example for 1-D i.e., <span class="math-container">$z=f(x)$</span>:
<span class="math-container">$$
z=e^x
$$</span>
We would have:
<span class="math-container">$$
Pz(Z)=Px(x)\left|\frac{dx}{dz}\right|
$$</span>
But I'm not sure how to handle this for the case <span class="math-container">$z=f(x,y)$</span>. I've seen some solutions out there for multiple-to-multiple change of variables, but these all involve using the determinant of a jacobian, which I don't believe is possible in this case as the jacobian would not be square. Thanks in advance for any help!</p>
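<p>One idea I had (a sketch of my own, assuming for concreteness that <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are standard normal): condition on <span class="math-container">$x$</span>. For each fixed <span class="math-container">$x$</span>, the map <span class="math-container">$y \mapsto y e^{x}$</span> is exactly the 1-D change of variables above, so <span class="math-container">$p_Z(z)=\int p_X(x)\, p_Y(z e^{-x})\, e^{-x}\, dx$</span>, which can be evaluated by simple numerical quadrature:</p>

```python
import math

def phi(u):
    """Standard normal density."""
    return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

def pz(z, xs, dx):
    """p_Z(z) = integral over x of p_X(x) * p_Y(z*e^-x) * e^-x dx,
    i.e. the 1-D change-of-variables formula applied for each fixed x."""
    return sum(phi(x) * phi(z * math.exp(-x)) * math.exp(-x) for x in xs) * dx

# Quadrature grid over x (the normal density is negligible beyond +/-8).
nx, xlo, xhi = 801, -8.0, 8.0
dx = (xhi - xlo) / (nx - 1)
xs = [xlo + i * dx for i in range(nx)]

# Sanity check: the resulting density should integrate to about 1.
zs = [-50 + i * 0.1 for i in range(1001)]
total = sum(pz(z, xs, dx) for z in zs) * 0.1
```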
| 542
|
|
probability distributions
|
Probability distributions in an ordered set of random extracted elements
|
https://stats.stackexchange.com/questions/548420/probability-distributions-in-an-ordered-set-of-random-extracted-elements
|
<p>I tried looking for an existing solution for the following problem:</p>
<ul>
<li>Assume that S is a set of d elements, and R is a total order relation on S. Assume that n elements are randomly extracted from S, and then they are ordered according to R. Which is the probability that in the i-th position of the ordered set of n elements there is the x-th element of S? (where the x-th element of S is intended according to the R). In other words, which is the probability distribution function for the i-th position of the ordered set of n elements?</li>
</ul>
<p>Do you know if such a problem has already been solved? I have not found any specific solution in the literature or elsewhere. I tried to find my own solution, but I would like to know if something already exists.</p>
| 543
|
|
probability distributions
|
When do I have 80% chance of 6 rolls of either 1, 2 or 3?
|
https://stats.stackexchange.com/questions/550125/when-do-i-have-80-chance-of-6-rolls-of-either-1-2-or-3
|
<p>Problem:
Every roll has 4 outcomes: 1, 2, 3 and 4
There is a 50% chance of 1-3 at every roll (and that means 50% for 4).
If a roll did not give 1-3, then there is a 100% chance at the next roll. (So every second roll is guaranteed to be either 1, 2 or 3.)
If 1, 2 or 3 is rolled, each of 1, 2 and 3 is equally likely.</p>
<p>So:
1/6 chance of 1,
1/6 chance of 2,
1/6 chance of 3,
1/2 chance of 4,
unless the last roll was a 4; these probabilities apply to the first roll and to any roll following a 1, 2 or 3.</p>
<p>Success:
six 1's, six 2's or six 3's. So I need 6 rolls of either 1, 2 or 3.</p>
<p>Example:
1,4,1,2,3,4,1,1,1,1 (now there is 6 1's)</p>
<p>Notes:
I know that after 32 rolls there is a 100% chance of six rolls of either 1, 2 or 3.</p>
<p>Question:
When do I have 80% chance of 6 rolls of either 1, 2 or 3?</p>
<p><strong>Edit 1:</strong>
If we ignore that I cannot roll two fours in a row then:</p>
<p>I could work backwards:
there is a 1-((1/6)^5)*(1/2)=0.9999356996 chance of not getting it in the first 6 rolls.</p>
<p>Then what about 7 rolls?
There is 5/6 chance of not hitting a number that matches.
(1-((1/6)^5)*(1/2))*(5/6)=0.8332797497</p>
<p>So with 9 extra rolls:
(1-((1/6)^5)*(1/2))*(5/6)^9=0.1937942376
I hit below 20% probability of not hitting 6 equals.
That is 15 rolls.</p>
<p>If I did consider that I cannot get two fours in a row, then it would take even fewer extra rolls, because the probability would be even better. I just don't know exactly how much.</p>
<p>Right?</p>
<p><strong>Edit 2</strong>
I see some great answers below that I will look at soon. Here is just an idea I was thinking about.</p>
<p>Initially we forget the 4's. That would give:</p>
<pre><code>(1-((1/3)^5)*(1/2))*(5/3)^4=0.1975308642
</code></pre>
<p>Meaning 6+4=10 rolls if a four is never rolled.</p>
<p>If there were a 100% chance of rolling a four, but not two fours in a row, then we would just multiply by two: 20 rolls.</p>
<p>But since the probability is 50%, I must ask how many 4's there are "in between" non-four rolls after 10 rolls. Again it must be a range, so I would like to know the 80% chance again.</p>
<pre><code>(1-((1/2)^8))*(1/2)^(10-8)=0.2490234375
</code></pre>
<p>But that does not seem right, and maybe my whole method is wrong then.</p>
<p>I will take a look at the others answers now :)</p>
|
<p>To take <span class="math-container">$r$</span> rolls, you have to get</p>
<ul>
<li><span class="math-container">$6$</span> rolls of one of the three numbers <span class="math-container">$1,2,3$</span>,</li>
<li>and <span class="math-container">$a$</span> rolls of the next with <span class="math-container">$0\le a \le 5$</span>,</li>
<li>and <span class="math-container">$b$</span> rolls of the other with <span class="math-container">$0\le b \le 5$</span>,</li>
</ul>
<p>being able to do that <span class="math-container">${5+a+b \choose 5}{a+b \choose a}$</span> different ways since the final attempt must be the desired number,</p>
<ul>
<li>and <span class="math-container">$r-6-a-b$</span> rolls of the number <span class="math-container">$4$</span></li>
</ul>
<p>out of the <span class="math-container">$6+a+b$</span> places you might have done it so in <span class="math-container">${6+a+b \choose r-6-a-b}$</span> ways <span class="math-container">$\big($</span>or <span class="math-container">$0$</span> ways if <span class="math-container">$r <6+a+b$</span> or if <span class="math-container">$6+a+b < \frac{r}{2}\big)$</span>.</p>
<p>So I think the probability you take <span class="math-container">$r$</span> rolls is <span class="math-container">$$\sum\limits_{a=0}^{5} \sum\limits_{b=0}^{5} \frac{3}{6^{6+a+b}} {5+a+b \choose 5}{a+b \choose a}{6+a+b \choose r-6-a-b}$$</span></p>
<p>Using R to calculate this</p>
<pre><code>probs <- function(r){
f <- function(a,b){
3 * choose(5+a+b, 5) * choose(a+b, a) *
choose(6+a+b, r-6-a-b) / 6^(6+a+b)
}
sum(outer(0:5,0:5,f))
}
probrolls <- numeric(32)
for (n in 1:32){
probrolls[n] <- probs(n)
}
sum(probrolls) # check adds up to 1
# 1
sum(probrolls*(1:32)) # expected number
# 18.27188
</code></pre>
<p>The probabilities and cumulative probabilities are as follows, so to have at least an <span class="math-container">$80\%$</span> chance of <span class="math-container">$6$</span> rolls of either <span class="math-container">$1$</span>, <span class="math-container">$2$</span> or <span class="math-container">$3$</span> appears to need <span class="math-container">$22$</span> rolls.</p>
<pre><code>cbind(probrolls, cumprob=cumsum(probrolls))
probrolls cumprob
[1,] 0.000000e+00 0.000000e+00
[2,] 0.000000e+00 0.000000e+00
[3,] 0.000000e+00 0.000000e+00
[4,] 0.000000e+00 0.000000e+00
[5,] 0.000000e+00 0.000000e+00
[6,] 6.430041e-05 6.430041e-05
[7,] 5.144033e-04 5.787037e-04
[8,] 2.014746e-03 2.593450e-03
[9,] 5.320264e-03 7.913714e-03
[10,] 1.096679e-02 1.888051e-02
[11,] 1.915676e-02 3.803727e-02
[12,] 2.974389e-02 6.778115e-02
[13,] 4.227031e-02 1.100515e-01
[14,] 5.602002e-02 1.660715e-01
[15,] 7.001649e-02 2.360880e-01
[16,] 8.299626e-02 3.190842e-01
[17,] 9.345859e-02 4.125428e-01
[18,] 9.987578e-02 5.124186e-01
[19,] 1.010285e-01 6.134471e-01
[20,] 9.633151e-02 7.097786e-01
[21,] 8.604229e-02 7.958209e-01
[22,] 7.132705e-02 8.671480e-01
[23,] 5.417352e-02 9.213215e-01
[24,] 3.707282e-02 9.583943e-01
[25,] 2.240174e-02 9.807961e-01
[26,] 1.167466e-02 9.924707e-01
[27,] 5.105289e-03 9.975760e-01
[28,] 1.811591e-03 9.993876e-01
[29,] 4.989406e-04 9.998865e-01
[30,] 9.978811e-05 9.999863e-01
[31,] 1.287589e-05 9.999992e-01
[32,] 8.047428e-07 1.000000e+00
</code></pre>
<hr />
<p>As a check, here is a simulation of <span class="math-container">$10^5$</span> cases that comes close to that result and those probabilities</p>
<pre><code>runofrolls <- function(){
appearances <- numeric(4)
while (max(appearances[1:3]) < 6){
roll <- sample(1:4, 1, prob=c(1/6,1/6,1/6,1/2))
appearances[roll] <- appearances[roll] + 1
if (roll == 4){
nextroll <- sample(1:3, 1, prob=c(1/3,1/3,1/3))
appearances[nextroll] <- appearances[nextroll] + 1
}
}
return(sum(appearances))
}
set.seed(2021)
sims <- replicate(10^5, runofrolls())
mean(sims)
# 18.26183
quantile(sims, 0.8)
# 80%
# 22
mean(sims <= 21)
# 0.79601
mean(sims <= 22)
# 0.86814
</code></pre>
| 544
|
probability distributions
|
Why do we need weighted distributions?
|
https://stats.stackexchange.com/questions/550520/why-do-we-need-weighted-distributions
|
<p>I have read some papers about Weighted Distribution. Suppose <span class="math-container">$X$</span> is a non-negative continuous random variable with pdf <span class="math-container">$f(x)$</span>. The pdf of the weighted random variable <span class="math-container">$X_w$</span> is given by:</p>
<p><span class="math-container">$f_w (x) = \frac{w(x)f(x)}{E[w(X)]}$</span>, where <span class="math-container">$w(x)$</span> is a non-negative weight function.</p>
<p>My question is: why do we need to add weight function <span class="math-container">$w(x)$</span> to the <span class="math-container">$f(x)$</span>? I mean, what is the idea behind the weighted distribution? What are the effects after adding the <span class="math-container">$w(x)$</span> to the <span class="math-container">$f(x)$</span>? I still can't find the answer that I want.</p>
|
<p>Situations in which weighted distributions occur or have some use:</p>
<ol>
<li><p>Mixture models of the type <span class="math-container">$f(x)=\sum_{k=1}^K \pi_kf_k(x)$</span>. To clarify: not the mixture itself is a weighted distribution, rather a mixture component <span class="math-container">$f_k$</span> is <span class="math-container">$f$</span> weighted by <span class="math-container">$w(x)$</span> being the probability <span class="math-container">$p_k$</span> that <span class="math-container">$x$</span> has been generated by mixture component <span class="math-container">$f_k$</span>. Formally: <span class="math-container">$f_k(x)=f(x|Z=k)=\frac{p_k(x)f(x)}{Ep_k(x)}$</span>, where <span class="math-container">$Z$</span> is a random variable giving the component memberships with probabilities <span class="math-container">$\pi_1,\ldots,\pi_K$</span> (in fact <span class="math-container">$Ep_k(x)=\pi_k$</span>). This can be used in an algorithm (EM) to iteratively estimate the parameters of <span class="math-container">$f_k$</span> given the observation weights, and then the observation weights given the estimated parameters.</p>
</li>
<li><p>Some methods for identifying outliers and robust estimators estimate robustness weights (sometimes but not necessarily interpreted as probabilities that observations are not outliers). One may be interested in the distribution of non-outliers, which would be the overall distribution weighted by robustness weights.</p>
</li>
</ol>
<p>More generally one may be interested in a subpopulation of the data without having "hard" information about which observations belong to this subpopulation, and either known or estimated weights (once more these can be probabilities but don't necessarily have to) that specify the degree to which the observations belong to the subpopulation of interest.</p>
<p>There's also an application in sampling theory (although I'm not an expert for this). If you want to represent an underlying population, but you have more observations than proportionally required of one part of it and less of another part, you have observed a weighted form of the original distribution with higher weights for parts that are more likely to be in your sample (and you may want to downweight these when estimating the underlying population distribution).</p>
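<p>A small numerical illustration of the definition (my own example, not from the papers mentioned): take <span class="math-container">$f$</span> to be the Exp(1) density and <span class="math-container">$w(x) = x$</span> (size-biasing). Then <span class="math-container">$f_w(x) = x e^{-x}$</span>, a Gamma(2, 1) density whose mean is 2 rather than 1, and one can sample from it by resampling base draws with probability proportional to <span class="math-container">$w$</span>:</p>

```python
import random

def weighted_sample(base_samples, w, k, rng):
    """Draw k values from f_w = w(x) f(x) / E[w(X)] by resampling draws
    from f with probability proportional to w(x)."""
    weights = [w(x) for x in base_samples]
    return rng.choices(base_samples, weights=weights, k=k)

rng = random.Random(42)
base = [rng.expovariate(1.0) for _ in range(200_000)]      # f = Exp(1), mean 1
sized = weighted_sample(base, lambda x: x, 100_000, rng)   # w(x) = x
mean_w = sum(sized) / len(sized)                           # ~ 2, Gamma(2,1) mean
```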
| 545
|
probability distributions
|
Distribution of number of balls in a subset of bins
|
https://stats.stackexchange.com/questions/289244/distribution-of-number-of-balls-in-a-subset-of-bins
|
<p>I have $ B $ bins and $ G $ balls, and each of the balls has a weight. I toss the balls uniformly into the bins. I select the $ K $ bins that have the highest weights as determined by the weights of the balls that are in them, and I then record the total number of balls that appear in these $ K $ bins. </p>
<p>I want to model the sum of the number of balls that appear in these $ K $ bins. Can I create a "worst-case" model of this quantity? E.g., can I say that this quantity's variance can be no worse than some other computable value?</p>
<p>Intuitively, I believe that I can because, as $ K \rightarrow B $, this quantity's expected value becomes $ G $ and its variance goes to 0. </p>
| 546
|
|
probability distributions
|
What is the probability that the best N people come from China?
|
https://stats.stackexchange.com/questions/494227/what-is-the-probability-that-the-best-n-people-come-from-china
|
<p>Consider two countries competing in a game like chess, and suppose the abilities of all individuals are distributed according to a Uniform[0, 1] distribution.</p>
<p>Say country <code>A</code> has population <span class="math-container">$P$</span> and country <code>B</code> has population <span class="math-container">$Q$</span>.</p>
<p>Let <span class="math-container">$N$</span> be the number of people in country <code>A</code> whose abilities are all higher than those of everyone in country <code>B</code>.</p>
<p>So <span class="math-container">$N$</span> must take the value of <span class="math-container">$0, 1, 2, ...., P$</span>.</p>
<p>Is there a formula for <span class="math-container">$P(N > k)$</span>?</p>
|
<p>Thanks to uniform randomness, that would be simply <span class="math-container">$$P(N>k)=P(N\geq k+1)=\frac{{P \choose k+1}}{P+Q\choose k+1}$$</span></p>
<p>The event <span class="math-container">$N \ge k+1$</span> means the best <span class="math-container">$k+1$</span> people all come from country A's population of <span class="math-container">$P$</span>; since every size-<span class="math-container">$(k+1)$</span> subset of the <span class="math-container">$P+Q$</span> people is equally likely to be the top group, the probability is the ratio of these two binomial coefficients.</p>
| 547
|
probability distributions
|
Support of a probability distribution
|
https://stats.stackexchange.com/questions/555657/support-of-a-probability-distribution
|
<p>Consider a bivariate probability distribution <span class="math-container">$G$</span> and a random vector <span class="math-container">$(X_1,X_2)$</span>. Should <span class="math-container">$G$</span> satisfy any specific <strong>support</strong> restrictions in order to be an admissible probability distribution for each of these 3 vectors contemporaneously
<span class="math-container">$$
\begin{pmatrix}
X_1\\
X_2
\end{pmatrix}\quad\begin{pmatrix}
X_1-X_2\\
-X_2
\end{pmatrix}\quad \begin{pmatrix}
-X_1\\
X_2-X_1
\end{pmatrix}\quad
$$</span>
I understand that <span class="math-container">$G$</span> should satisfy some <strong>shape</strong> restrictions in order to be an admissible distribution of the 3 vectors above. For example, the first marginal of <span class="math-container">$G$</span> should be symmetric around zero. Here, however, I'm wondering about support restrictions. For instance, should <span class="math-container">$G$</span> have support "degenerate" in some dimension? Or should <span class="math-container">$G$</span> have full support?</p>
| 548
|
|
probability distributions
|
Probability finishing on a given turn two independent draw
|
https://stats.stackexchange.com/questions/557482/probability-finishing-on-a-given-turn-two-independent-draw
|
<p>I have two decks of cards, and on each turn I draw one card from each deck independently. I need to find a specific card in each deck. What are the odds that on turn <span class="math-container">$n$</span> I have found the card in each deck? (This is not an assignment.)</p>
<p>From what I understand, the probability of finishing on the first turn is <span class="math-container">$P(1) = 1/52^{2}$</span>. On the second turn, <span class="math-container">$P(2) = 2*1/52*1/51 + 1/51^2$</span>, where the first term is the probability that the card was found in one of the decks on the first turn multiplied by the probability of finding the other on the second turn, and the second term is that of finding both cards on the same turn.</p>
<p>I'd like to express the probability in an easy way to calculate. If I had to expand this for the <span class="math-container">$n^{th}$</span> turn, things would get messy. How would one proceed to do so?</p>
|
<p>I am not quite sure what your <span class="math-container">$P(2)$</span> is supposed to be,</p>
<p>but if it is the probability that you have seen both individual cards by the second turn drawing from both decks but not both by the first turn and you are drawing without replacement,</p>
<p>then it should be <span class="math-container">$\left(\frac{2}{52}\right)^2 - \left(\frac{1}{52}\right)^2$</span>.</p>
<p>If so, then more generally you get <span class="math-container">$P(n)= \dfrac{2n-1}{2704}$</span> for <span class="math-container">$n \in \{1,2,\ldots,52\}$</span>, which is a discrete triangular distribution.</p>
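<p>A quick Monte Carlo check of this triangular formula (a sketch I am adding, assuming the draw-without-replacement model, under which each target card's position in its shuffled deck is uniform on 1&ndash;52):</p>

```python
import random

def finish_turn(rng):
    """Turn on which the second of the two target cards appears,
    drawing one card per turn from each independently shuffled deck."""
    pos1 = rng.randrange(52) + 1  # position of the target card in deck 1
    pos2 = rng.randrange(52) + 1  # position of the target card in deck 2
    return max(pos1, pos2)

def exact_pmf(n):
    """P(n) = (2n - 1) / 52**2, the claimed triangular distribution."""
    return (2 * n - 1) / 2704

rng = random.Random(0)
trials = 200_000
counts = [0] * 53
for _ in range(trials):
    counts[finish_turn(rng)] += 1

freq_52 = counts[52] / trials  # should be close to exact_pmf(52)
```

<p>The empirical frequencies track <span class="math-container">$(2n-1)/2704$</span> closely for every <span class="math-container">$n$</span> from 1 to 52.</p>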
| 549
|
probability distributions
|
pdf of the product of two independent uniform random variables $X,Y \sim U(-1,1)$
|
https://stats.stackexchange.com/questions/560253/pdf-of-the-product-of-two-independent-uniform-random-variables-x-y-sim-u-1-1
|
<p>Using the product distribution. I have <span class="math-container">$Z = XY$</span> with <span class="math-container">$X,Y \sim U(-1,1)$</span> and independent. Thus
<span class="math-container">\begin{align}
f_Z(z) & = \frac{1}{4} \int_{-1}^1 I_{[-1 < \frac{z}{x} < 1]} \ \frac{dx}{|x|} \\
& = \frac{1}{4} \left[ \int_0^1 I_{[-x < z < x]} \ \frac{dx}{x} + \int_{-1}^0 I_{[-x > z > x]} \ \frac{dx}{-x} \right].
\end{align}</span>
I am not sure what to do to finish since it seems I will end up with an <span class="math-container">$\ln z$</span> with <span class="math-container">$z < 0$</span>...</p>
| 550
|
|
probability distributions
|
Are $U + V$ and $UV$ independent when $U,V$ are independent and standard uniform?
|
https://stats.stackexchange.com/questions/560260/are-u-v-and-uv-independent-when-u-v-are-independent-and-standard-uniform
|
<p>This is related to a previous question I posted on the product of two independent variables <a href="https://stats.stackexchange.com/questions/560253/pdf-of-the-product-of-two-independent-uniform-random-variables-x-y-sim-u-1-1">here</a>. As an alternative method, one could note that if <span class="math-container">$X,Y \sim U(-1,1)$</span> and <span class="math-container">$U,V \sim U(0,1)$</span> then <span class="math-container">$Z = XY = 4UV - 2(U+V) + 1 = S - T + 1$</span>. It is easy to show that
<span class="math-container">$$ f_S(s) = -\frac{1}{4} \ln \frac{s}{4}, \quad f_T(t) = \frac{1}{4} \begin{cases} t & \text{if } 0 < t < 2 \\ 4-t & \text{if } 2 < t < 4 \end{cases}. $$</span>
Then, it remains to consider the difference of these variables and then the shift (+1). At this point, I realize that (while maybe not necessary), it is unclear to me whether <span class="math-container">$S$</span> and <span class="math-container">$T$</span> are independent. Intuitively, I would think potentially not since <span class="math-container">$T > 2$</span> implies that <span class="math-container">$S > 0$</span> and so there should be some dependence.</p>
<p>So this question really has two parts: 1) Is there some way to show dependence? 2) Regardless, is there a way to proceed here to arrive at the distribution of <span class="math-container">$Z$</span>?</p>
|
<p><strong>Here's a very simple solution.</strong> It involves no integration and only the easiest algebra.</p>
<p>Let <span class="math-container">$X=2U-1$</span> and <span class="math-container">$Y=2V-1.$</span> These are <em>iid</em> uniform random variables on <span class="math-container">$[-1,1]$</span> and therefore are symmetric about <span class="math-container">$0:$</span> that is, <span class="math-container">$-X$</span> and <span class="math-container">$-Y$</span> have the same distribution, too. Thus</p>
<p><span class="math-container">$$\begin{aligned}
\operatorname{Cov}(XY, X+Y) &= \operatorname{Cov}((-X)(-Y),\ (-X)+(-Y)) \\
&= \operatorname{Cov}(XY,\ -(X+Y)) \\
&= -\operatorname{Cov}(XY,\ X+Y).
\end{aligned}$$</span></p>
<p>Since <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are bounded, their covariance exists and is finite. The only number equal to its own negative is <span class="math-container">$0:$</span> that must be the covariance.</p>
<p>Now exploit the basic rules of covariance (bilinearity) to relate this result to what you want:</p>
<p><span class="math-container">$$\begin{aligned}
0 &= \operatorname{Cov}(XY,\ X+Y) = \operatorname{Cov}((2U-1)(2V-1),\ (2U-1)+(2V-1)) \\
&= \operatorname{Cov}(4UV-2(U+V),\ 2U+2V)\\
&= 8\operatorname{Cov}(UV,\ U+V) - 4\operatorname{Cov}(U+V,\ U+V)
\end{aligned}$$</span></p>
<p>Because <span class="math-container">$U+V$</span> is not constant, <span class="math-container">$\operatorname{Cov}(U+V,\ U+V)$</span> is its variance and is strictly positive. Since the whole expression equals zero, <span class="math-container">$8\operatorname{Cov}(UV,\ U+V) = 4\operatorname{Var}(U+V) \gt 0.$</span> This means <span class="math-container">$UV$</span> and <span class="math-container">$U+V$</span> have positive covariance and therefore cannot be independent, <em>QED.</em></p>
<p>BTW, if you wish to do the integrals you can compute that the common variance of <span class="math-container">$U$</span> and <span class="math-container">$V$</span> must be <span class="math-container">$1/12$</span> and conclude (from the independence of <span class="math-container">$(U,V)$</span>) that <span class="math-container">$$\operatorname{Cov}(UV, U+V)=\frac{4}{8}\operatorname{Cov}(U+V,U+V) = \frac{4}{8}\left(\frac{1}{12}+\frac{1}{12}\right)=1/12.$$</span></p>
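<p>A short simulation confirming this value (my own sketch, assuming <span class="math-container">$U,V$</span> are independent standard uniforms):</p>

```python
import random

def simulate_cov(trials=200_000, seed=1):
    """Monte Carlo estimate of Cov(UV, U + V) for independent U, V ~ U(0, 1)."""
    rng = random.Random(seed)
    sum_p = sum_s = sum_ps = 0.0
    for _ in range(trials):
        u, v = rng.random(), rng.random()
        p, s = u * v, u + v
        sum_p += p
        sum_s += s
        sum_ps += p * s
    n = trials
    return sum_ps / n - (sum_p / n) * (sum_s / n)

estimate = simulate_cov()  # should be close to 1/12, about 0.0833
```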
<hr />
<p><strong>To find the distribution of <span class="math-container">$Z=XY,$</span></strong> observe that it, too, must be symmetric about <span class="math-container">$0$</span> and the distribution of its positive part must be that of <span class="math-container">$UV.$</span> That distribution function is</p>
<p><span class="math-container">$$\Pr(UV \le t) = \iint^{(1,1)}_{uv \le t} \mathrm{d}u\mathrm{d}v = t(1-\log(t))$$</span></p>
<p>for <span class="math-container">$0 \lt t \le 1.$</span> Consequently</p>
<p><span class="math-container">$$\Pr(-t \le Z \le t) = t(1-\log(t)),$$</span></p>
<p>which is a useful formula for the distribution of <span class="math-container">$Z.$</span> In particular, its density at both <span class="math-container">$\pm t$</span> must be half the derivative of this expression, giving</p>
<p><span class="math-container">$$f_Z(t) = -\frac{1}{2}\log|t|,\ -1 \le t \le 1; t\ne 0.$$</span></p>
<p>(The density is not defined at <span class="math-container">$0.$</span>)</p>
<p>The figure is a histogram of a million draws of <span class="math-container">$XY$</span>, over which is plotted in red a graph of <span class="math-container">$f_Z.$</span> They match.</p>
<p><a href="https://i.sstatic.net/gSwzx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gSwzx.png" alt="Figure" /></a></p>
| 551
|
probability distributions
|
Probability distribution of questions in a forum
|
https://stats.stackexchange.com/questions/11749/probability-distribution-of-questions-in-a-forum
|
<p>I would like to simulate the appearance of new posts in a forum, and I need to know the probability distribution of new questions being asked. In my first simulation I used a normal distribution, but I now think an exponential distribution may be the better choice.</p>
|
<p>The exponential distribution might be a good starting point for the <em>waiting time</em> between new posts. This would be equivalent to assuming a Poisson distributed number of posts in a given time period. There are some pretty strong assumptions behind a model like that, but it might make sense for your application.</p>
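<p>A minimal sketch of what that model looks like in practice (the rate of 3 posts per hour is an arbitrary assumption for illustration):</p>

```python
import random

def simulate_post_times(rate, horizon, rng):
    """Arrival times of new posts on [0, horizon) when the waiting time
    between consecutive posts is Exponential(rate)."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # exponential inter-arrival time
        if t >= horizon:
            return times
        times.append(t)

rng = random.Random(0)
rate, horizon = 3.0, 1000.0  # assumed: 3 posts/hour, simulated over 1000 hours
times = simulate_post_times(rate, horizon, rng)
posts_per_hour = len(times) / horizon  # close to `rate`, per the Poisson link
```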
| 552
|
probability distributions
|
If a sequence of distributions converges to a degenerate, does that imply the variance strictly decreases?
|
https://stats.stackexchange.com/questions/16872/if-a-sequence-of-distributions-converges-to-a-degenerate-does-that-imply-the-va
|
<p>If $F_i = G(F_{i-1}), F_0 = x$ is a sequence of distributions that converges to a degenerate distribution as $i \to \infty$, does that imply that the variance of $F_i$ decreases with $i$? </p>
<p>Specifically, I am interested in the inverse Kumaraswamy distribution: $G(x) = (1-(1-x)^a)^b$ when $a=b$... Note that $G$ produces the cdf of the Max of b draws from the distribution of the Min of a draws from its argument (which is going to be a cdf). </p>
<p>So if it's not true for the general case, but you have insights on how/why it would be true in this case, I would appreciate it. I can intuit (and graph for specific values of a) that the mean and variance decrease in this case -- taking the Min of $a$ draws from $F$ decreases the mean, then taking the max of $a=b$ draws from the new distribution increases the mean, but not by as much as the first operation lowered it, so the final mean is less than that of the original. But I am shaky on how to prove it (and my other questions today have basically been trying to get at this point).</p>
<p>PS -- sorry to overload with questions today! As I mentioned, they've all been in the goal of this question here.</p>
|
<p><strong>Here is a simple counterexample.</strong></p>
<p>Assume the CDF $F$ is supported on $[0,1]$ and define a new CDF $G[F]$ as follows. If $\mathbb{E}[F] \gt 1/2$, let</p>
<p>$$G[F](x) = 1 - F(3/2 - 2 x).$$</p>
<p>Otherwise, let</p>
<p>$$G[F](x) = 1 - F(5/6 - 2x/3).$$</p>
<p>The first operation squeezes the distribution by a factor of $2$ towards $1/2$ and flips it around $1/2$, while the second expands the distribution by a factor of $3/2$ away from $1/2$ and also flips it around $1/2$. The flipping guarantees that if the mean of $F$ is other than $1/2$, then iterating $G$ will alternate between these two operations, because the expectations will alternately be greater than and less than $1/2$. The net result nevertheless is to compress the support down towards $1/2$, converging to a degenerate distribution. However, the variance in the first case is multiplied by $1/4$ and in the second case it is multiplied by $9/4$, whence it does not uniformly decrease: it alternately jumps up and down. Therefore convergence of a sequence of distributions $(G^n[F])$ to an atom does not imply monotonic decrease of the variances.</p>
| 553
|
probability distributions
|
A recurrence relation in an elementary problem in probability?
|
https://stats.stackexchange.com/questions/489852/a-recurrence-relation-in-an-elementary-problem-in-probability
|
<p>While considering solving the following standard question in probability in an alternative way, I get stuck with a recurrence relation. The problem is as follows:</p>
<blockquote>
<p>A bag contains <span class="math-container">$N$</span> ropes. We pick up two randomly chosen ends and tie them together until no untied ropes are left. The problem is to find the expected number of loops.</p>
</blockquote>
<p>Now there is a <a href="https://math.stackexchange.com/questions/2209/expected-number-of-loops">standard and neat solution</a> to the above problem. I tried to solve it instead by first finding the probability <span class="math-container">$p_{k,l}$</span> of getting <span class="math-container">$l$</span> loops at the <span class="math-container">$k$</span>th stage, where both <span class="math-container">$k$</span> and <span class="math-container">$l$</span> can take the values <span class="math-container">$0,1,2,...N$</span>. Further, it follows that at any time <span class="math-container">$l\le k$</span>. Since with <span class="math-container">$N$</span> ropes the probability of adding a loop at the first step is <span class="math-container">${N}/{\binom{2N}{2}}$</span>, and at the <span class="math-container">$k$</span>th stage we have effectively <span class="math-container">$N-k$</span> ropes (possibly some consisting of more than one of the original ropes tied together), we have the following recurrence relation:</p>
<p><span class="math-container">$$p_{k,l}=p_{k-1,l} \cdot \text{Prob(No loop is added at $k$th stage)} +p_{k-1,l-1}\cdot\text{Prob(a loop is added at $k$th stage)}.$$</span></p>
<p>The above gives us
<span class="math-container">$$p_{k,l}=p_{k-1,l} \frac{2N-2k-2}{2N-2k-1} + p_{k-1,l-1} \frac{1}{2N-2k-1}$$</span></p>
<p>Now, with the base conditions <span class="math-container">$p_{1,1}=\frac{1}{2N-1}$</span> and <span class="math-container">$p_{1,0}=\frac{2N-2}{2N-1}$</span>, how can we find <span class="math-container">$p_{N,l}$</span> in terms of <span class="math-container">$N$</span> and <span class="math-container">$l$</span>? Is it even possible to find an approximate formula for <span class="math-container">$p_{N,l}$</span> by simulation?</p>
| 554
|
|
probability distributions
|
Joint distribution given normalized gamma distributed components
|
https://stats.stackexchange.com/questions/497368/joint-distribution-given-normalized-gamma-distributed-components
|
<p>Consider <span class="math-container">$N = 2^{n}$</span> random variables <span class="math-container">$X_{1}, X_{2}, \ldots, X_{N}$</span>, such that for each <span class="math-container">$i \in [N]$</span>,</p>
<p><span class="math-container">$$X_{i} \sim \Gamma\left(\frac{1}{2}, 2^{-n+1}\right). $$</span></p>
<p>We are also given that
<span class="math-container">$$\sum_{i = 1}^{N}X_{i} = 1$$</span></p>
<p>What is the pdf for the joint distribution for the <span class="math-container">$N$</span>-tuple <span class="math-container">$(X_{1}, X_{2}, \ldots X_{N})$</span>? Does it follow a Dirichlet distribution? Note that the random variables are identically distributed but not independent.</p>
| 555
|
|
probability distributions
|
Basic question about PDFs: multi-parameter PDF, not multi-variate
|
https://stats.stackexchange.com/questions/501787/basic-question-about-pdfs-multi-parameter-pdf-not-multi-variate
|
<p>I'm kinda new to stats and trying to find info on PDFs that depend on multiple parameters, but I keep finding info only on multi-variate distributions.</p>
<p>The point is that <strong>I have only one random variable</strong> <span class="math-container">$R$</span>, but it depends on two or more parameters, say <span class="math-container">$(x_1,x_2,x_3)$</span>.
My actual goal is: I know the dependency of R on the separate params, and I need to create a joint PDF for it.</p>
<p>An example can be: say you have the distribution of heights based on age, and separately based on weight, how do you combine those to one PDF such that given (age, weight) will give the correct height distribution?
Any help will be very appreciated!</p>
<p><strong>Edit</strong>: it seems that there's not enough data to “build” a joint PDF. Will the answer be any different if I can assume everything is distributed normally? Meaning, can the “joint” mean and variance be determined given age + weight?</p>
|
<p>If you only have the <code>age</code> and <code>weight</code> densities, you cannot do much more than multiply them, and then you end up with the same <code>age</code> distribution for every <code>weight</code>, for example, which is probably not what you are looking for.</p>
<p>The problem is that the information about how <code>weight</code> differs across <code>age</code> groups is not contained in either marginal distribution, so you have nothing to build the dependence from.</p>
<p>Let's say <code>age ~ uniform(0, 100)</code> and <code>weight ~ N(60, 20)</code>. Neither of those can tell you that young people are lighter, so you'd need additional information (for example, a joint sample or a conditional model of <code>weight</code> given <code>age</code>).</p>
| 556
|
probability distributions
|
How many times will I get exactly two heads if I toss 2 unfair coins (chance of heads 0.68) 100 times?
|
https://stats.stackexchange.com/questions/502338/how-many-times-will-get-exactly-two-heads-if-i-toss-2-not-fair-coins-chances-of
|
<p>I want to calculate how many times I will get exactly two heads if I toss 2 unfair coins (the chance of a head is 0.68) 100 times.</p>
<p>Can somebody give me a formula that generalizes this question? (n unfair coins, where p is the chance of heads, tossed m times.)</p>
|
<p>This is just a binomial distribution where “success” is getting two heads. Let’s calculate the probability of such an event.</p>
<p><span class="math-container">$$P(HH) = P(H)P(H)=0.4624$$</span></p>
<p>So this is our “p” in the binomial distribution. The other parameter in the binomial distribution is <span class="math-container">$n$</span>, the number of attempts, which is <span class="math-container">$100$</span>. Now you can calculate the probability of getting exactly <span class="math-container">$x$</span> “successes”, which is exactly what the binomial PMF tells you.</p>
<p><span class="math-container">$$f(x)=\binom{100}{x}
0.4624^x (1-0.4624)^{100-x}$$</span></p>
<p>The trick to generalizing this is recognizing the “p” parameter in the binomial distribution. It's always the probability of the single-toss event you are counting as a “success” (here, both coins landing heads).</p>
<p><strong>EDIT</strong></p>
<p>Binomial PMF in general:</p>
<p><span class="math-container">$$
f(x\vert n,p)=\binom{n}{x}p^x(1-p)^{n-x}
$$</span></p>
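<p>A hedged sketch of the computation in code (the 0.4624 success probability and the 100 tosses come from the question):</p>

```python
from math import comb

def binom_pmf(x, n, p):
    """P(exactly x successes in n independent trials, success prob p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

p_two_heads = 0.68 * 0.68        # "success" = both coins land heads
expected = 100 * p_two_heads     # expected number of double-head tosses
prob_exactly_46 = binom_pmf(46, 100, p_two_heads)
```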
| 557
|
probability distributions
|
Cumulative distribution function equals almost surely
|
https://stats.stackexchange.com/questions/506072/cumulative-distribution-function-equals-almost-surely
|
<p>Let <span class="math-container">$F_1, F_2$</span> - two continuous CDF.</p>
<p>Suppose <span class="math-container">$F_1 = F_2$</span> <span class="math-container">$F_2$</span>-almost surely (i.e. the set of <span class="math-container">$x$</span> where <span class="math-container">$F_1(x)\neq F_2(x)$</span> has probability zero with respect to the probability measure with CDF <span class="math-container">$F_2$</span>).</p>
<p>Then <span class="math-container">$F_1 = F_2$</span> (everywhere).</p>
|
<p>We argue by contraposition: suppose there exists <span class="math-container">$x$</span> such that <span class="math-container">$F_1(x) \neq F_2(x)$</span>.</p>
<p>If <span class="math-container">$F_2(x) < F_1(x)$</span>, then by the continuity of <span class="math-container">$F_2$</span> choose <span class="math-container">$y>x$</span> such that <span class="math-container">$F_2(x) < F_2(y) < F_1(x)$</span>.
Then <span class="math-container">$F_2 < F_1$</span> on <span class="math-container">$[x,y]$</span> (by monotonicity) and <span class="math-container">${\Pr}_{F_2}([x,y]) \ge F_2(y) - F_2(x) > 0$</span>.</p>
<p>If <span class="math-container">$F_2(x) > F_1(x)$</span>, then choose <span class="math-container">$y<x$</span> such that <span class="math-container">$F_1(x) < F_2(y) < F_2(x)$</span>.
Then <span class="math-container">$F_2 > F_1$</span> on <span class="math-container">$[y,x]$</span> (by monotonicity) and <span class="math-container">${\Pr}_{F_2}([y,x]) \ge F_2(x) - F_2(y) > 0$</span>.</p>
<p>Note that only the continuity of <span class="math-container">$F_2$</span> is used; the statement holds for arbitrary <span class="math-container">$F_1$</span>.</p>
| 558
|
probability distributions
|
Probability of trial sequence
|
https://stats.stackexchange.com/questions/508660/probability-of-trial-sequence
|
<p>A trader learns to predict whether the stock price will rise or fall on a particular day of trading. To do this, he calls one hundred friends and asks them to toss a coin once a day, thus receiving one hundred signs of the type "heads / tails from the n-th friend." What is the probability that within a week of such analysis there will be a sign that correlates 100% with the dynamics of stocks?</p>
<p>Which distribution should I consider to solve that task? Bernoulli?</p>
|
<p><strong>Yes, Bernoulli distributions are involved.</strong> But they are combined in a complex way; and it is important to be clear about how you understand the question. Here is one proposal.</p>
<h3>Framing the problem</h3>
<p><strong>Let us say that "100% correlates with the dynamics of stocks" means the following:</strong></p>
<ol>
<li><p><em>Before</em> conducting this experiment, you designate a set of frequently traded assets (the "market").</p>
</li>
<li><p>Over the course of the seven days, you will compare the sum of closing prices of those assets to the same sum the previous day. On day <span class="math-container">$d,$</span> set <span class="math-container">$Y(d)=1$</span> when the new sum is greater than the old one and otherwise let <span class="math-container">$Y(d)=0.$</span></p>
</li>
<li><p>Let <span class="math-container">$Z(d,f)$</span> (which depends on the day <span class="math-container">$d$</span> and the friend <span class="math-container">$f$</span>) equal <span class="math-container">$1$</span> when at the beginning of that day the friend flips a head and otherwise equal <span class="math-container">$0.$</span></p>
</li>
<li><p>Assume every one of those coins is "fair:" this means the tosses are independent and, each time, have a <span class="math-container">$1/2$</span> chance of landing heads.</p>
</li>
</ol>
<p>For each day <span class="math-container">$d$</span> and friend <span class="math-container">$f,$</span> let <span class="math-container">$X(d,f)=1$</span> when <span class="math-container">$Z(d,f)=Y(d):$</span> that is, your friend <span class="math-container">$f$</span>'s flip agrees with (or "predicts") the direction in which the market moves on day <span class="math-container">$d$</span>.</p>
<p>Assumption (4) implies all the <span class="math-container">$Z(d,f)$</span> are independent random variables <em>and therefore the <span class="math-container">$X(d,f)$</span> are independent, too.</em> Moreover, every one of the 700 <span class="math-container">$X(d,f)$</span> variables has a Bernoulli distribution with probability <span class="math-container">$1/2.$</span></p>
<p><strong>The preceding observations are key, so make sure you understand them and can explain them to others.</strong></p>
<h3>Solution</h3>
<p><strong>To solve this problem,</strong> you now have to demonstrate:</p>
<ol>
<li><p>A friend <span class="math-container">$f$</span> achieves "100% correlation" when the sum of all the friend's <span class="math-container">$X(d,f)$</span> values (over all seven days) equals <span class="math-container">$7.$</span> This means their coin flips correctly "predicted" the movement of the market every day of the week.</p>
</li>
<li><p>The sum of any particular friend's <span class="math-container">$X(d,f)$</span> has a Binomial<span class="math-container">$(7,1/2)$</span> distribution.</p>
</li>
<li><p>Therefore, the chance that <em>at least one</em> friend achieves 100% correlation is the chance that the largest of <span class="math-container">$100$</span> independent draws from a Binomial<span class="math-container">$(7,1/2)$</span> distribution equals <span class="math-container">$7.$</span></p>
</li>
<li><p>You can compute this chance as <span class="math-container">$1 - (1 - (1/2)^7)^{100} \approx 1 - \exp(-100/2^7) \approx 0.54.$</span></p>
</li>
</ol>
<p>You may conclude that at the end of this experiment, <em>it is more likely than not that at least one of your friends' coin flips will have predicted the market movements for a week.</em></p>
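<p>The chain of reasoning above can be checked by brute force (a simulation sketch under assumption 4, fair independent coins):</p>

```python
import random

def p_some_friend_perfect(n_friends=100, days=7, trials=20_000, seed=0):
    """Estimated probability that at least one friend's coin matches the
    market's direction on every one of `days` days."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # each day's match is a fair Bernoulli trial per assumption (4)
        if any(all(rng.random() < 0.5 for _ in range(days))
               for _ in range(n_friends)):
            hits += 1
    return hits / trials

estimate = p_some_friend_perfect()
exact = 1 - (1 - 0.5**7)**100  # about 0.54, as derived above
```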
<h3>Application</h3>
<p>Before the experiment begins, ask your friends to document the times and results of their coin flips. If one friend does achieve 100% correlation, send an email to a million gullible investors telling them about <em>this friend only,</em> along with that friend's documentation ("this market watcher predicted the market's movements correctly every day last week: you can have exclusive access to their next predictions!"). Offer advance information about this friend's future coin flips for a small price. <strong>You could make a fortune.</strong></p>
<p>Extra credit: explain the allegory. Hint: a common term for friend is "market guru."</p>
<hr />
<p><strong>NB:</strong> Nothing in this post constitutes recommendation or advice. Invest (and send emails) at your own risk, after consulting a lawyer about which of your country's laws you might be breaking if you carry out this program.</p>
| 559
|
probability distributions
|
trials from a multinomial distribution required to obtain 1 item in particular
|
https://stats.stackexchange.com/questions/269611/trials-from-a-multinomial-distribution-required-to-obtain-1-item-in-particular
|
<p>I have a set of outcomes $\{A, 2A, B, C\}$, which on a roll appear with probabilities $\{0.10, 0.15, 0.40, 0.35\}$ respectively.</p>
<p>How many rolls do I need to get 30 $A$ elements?</p>
| 560
|
|
probability distributions
|
Adjust probabilities to make it equal to 1
|
https://stats.stackexchange.com/questions/273673/adjust-probabilities-to-make-it-equal-to-1
|
<p>Is there a way to normalize the given probabilities? Let's say for example I have this probability for the following words:</p>
<pre><code> I am the best today
</code></pre>
<p>$$1/5 + 1/5 + 1/5 + 1/5 + 1/5 = 1$$</p>
<p>Given that people typed the word</p>
<pre><code>am
</code></pre>
<p>50% of the time, the distribution would adjust to</p>
<p>$$ 1/5 + 1/2 + 1/5 + 1/5 + 1/5$$
It is obvious that the equation wouldn't be equal to $1$. How can I force the equation to give a distribution wherein the probability of am would stay at $1/2$ while the others would adjust to make the equation be equal to $1$?</p>
|
<p>If I understand you correctly, you want "am" to have probability 0.5, keep the relations between the other probabilities, and make all probabilities sum to 1. Am I right?</p>
<p>If so, you should first normalize the four probabilities you want to adjust so that they sum to 1. You can do this by dividing each of them by their sum (each becomes 0.2/0.8 = 0.25, giving 0.25, 0.25, 0.25, 0.25).</p>
<p>Then note that all probabilities other than that of "am" must sum to 0.5 (that is, 1 − 0.5), so multiply each of the normalized probabilities by this 0.5.</p>
<p>So the result would be 0.25 × 0.5 = 0.125, and all probabilities would be: 0.125, 0.5, 0.125, 0.125, 0.125.</p>
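<p>In code, the whole adjustment is one rescaling step (a sketch; the word list and the pinned 0.5 come from the question):</p>

```python
def pin_and_renormalize(probs, pinned_index, pinned_value):
    """Hold one probability fixed and rescale the rest to sum to 1."""
    rest_total = sum(p for i, p in enumerate(probs) if i != pinned_index)
    scale = (1 - pinned_value) / rest_total
    return [pinned_value if i == pinned_index else p * scale
            for i, p in enumerate(probs)]

words = ["I", "am", "the", "best", "today"]
adjusted = pin_and_renormalize([0.2] * 5, pinned_index=1, pinned_value=0.5)
# adjusted is (up to floating point) [0.125, 0.5, 0.125, 0.125, 0.125]
```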
| 561
|
probability distributions
|
Probable value of a
|
https://stats.stackexchange.com/questions/274005/probable-value-of-a
|
<p>I'm working on writing an excel program that logs a series of measurements. The measurements are then used in 6 different models, all approximating the same unknown value.</p>
<p>I'd like to take the results of these 6 models and approximate the most probable value of the unknown. I've used the mean and the median, but given the range of values in my set, I don't think these are necessarily appropriate.</p>
<p>I've checked out several other similar questions on this stack, but the solutions seem to be written for higher level programming languages than I have access to. I'm not sure what the math is that I'm looking for, but I remember something similar to this from my undergrad courses. And I seem to remember the math being relatively easy to compute by hand. Ideally I'd like to be able to translate this into an excel style solution.</p>
<p>Thanks!</p>
|
<p>If you have several estimators of the same value, you can combine them by using a weighted average of all the estimates where the weights are given based on the variance of each estimator. Check for example: <a href="https://en.m.wikipedia.org/wiki/Inverse-variance_weighting" rel="nofollow noreferrer">https://en.m.wikipedia.org/wiki/Inverse-variance_weighting</a> </p>
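<p>The weighted average is simple enough to translate into a spreadsheet formula; as a sketch (the six estimates and variances below are made-up placeholders):</p>

```python
def inverse_variance_combine(estimates, variances):
    """Combine independent estimates of one quantity by weighting each
    with 1/variance; returns the combined estimate and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    return combined, 1.0 / total

estimates = [10.2, 9.8, 10.5, 9.9, 10.1, 10.4]   # hypothetical model outputs
variances = [0.5, 0.2, 1.0, 0.3, 0.4, 0.8]       # hypothetical variances
combined, combined_var = inverse_variance_combine(estimates, variances)
```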
| 562
|
probability distributions
|
Is the following inequality correct? How to prove it?
|
https://stats.stackexchange.com/questions/274566/is-the-following-inequality-correct-how-to-prove-it
|
<p>Suppose $X$ and $Y$ are two arbitrary random variables, and we have the following inequality that conditional on $Y=y$,
$$\textbf{Pr}(X \ge a_0 | Y=y)\le f(y),$$
where $\textbf{Pr}(\cdot)$ denotes the probability of the event, $a_0$ is a constant, and $f(y)$ is an increasing function with respect to $y$. I want to know whether the following inequality is correct,
$$\textbf{Pr}(X \ge a_0 , Y\le b_0)\le f(b_0),$$
where $b_0$ is a constant.</p>
<p>If it is wrong, is there any counter example? Thanks a lot.</p>
|
<p>Apply the law of total probability, conditioning on $Y$:</p>
<p>$$P(X \ge a_0 , Y \le b_0) = \int_{y \le b_0} P(X \ge a_0 \mid Y = y)\, dF_Y(y).$$</p>
<p>Bounding the integrand with the assumed inequality gives:</p>
<p>$$\int_{y \le b_0} P(X \ge a_0 \mid Y = y)\, dF_Y(y) \le \int_{y \le b_0} f(y)\, dF_Y(y).$$</p>
<p>Because $f$ is increasing, $f(y) \le f(b_0)$ throughout the region of integration, so the right-hand side is at most $f(b_0)\, P(Y \le b_0) \le f(b_0)$, and the inequality of interest holds.</p>
<p>By the way, in the context of probability $f$ is normally used for PDFs, so you may want to use a different letter to avoid confusion.</p>
| 563
|
probability distributions
|
What is meant by dynamic range when talking about probability distribution?
|
https://stats.stackexchange.com/questions/278299/what-is-meant-by-dynamic-range-when-talking-about-probability-distribution
|
<p>To provide some context, the exact sentence is</p>
<blockquote>
<p>for right-skewed data of the kind we consider here the method is especially sensitive to slight deviations of the data from the power-law model around xmin because most of the data, and hence most of the dynamic range of the CDF, lie in this region.</p>
</blockquote>
<p>Thank you.</p>
<p>EDITS: I thought this was a common term used in statistics, so I didn't provide too much context, sorry about that. This is the paper this was taken from (it's at the end of section 3.3), <a href="https://arxiv.org/pdf/0706.1062.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/0706.1062.pdf</a>. The author was discussing the performance of the Kolmogorov-Smirnov statistic in determining a estimate for the xmin parameter when fitting a power law, and mentions how this method is sensitive to fluctuations at a certain range of the data.</p>
| 564
|
|
probability distributions
|
A distribution on binary vectors defined by a sum
|
https://stats.stackexchange.com/questions/296643/a-distribution-on-binary-vectors-defined-by-a-sum
|
<p>The following distribution came up in my research and I'm wondering if the general class has a name and if there are computationally efficient sampling methods for it.</p>
<p>Let $\mathcal{X} = \{0,1\}^{n}$ be the sample space; these are simply $n$-dimensional vectors with each component either $0$ or $1$. Let $a_1,...,a_n$ be non-negative coefficients with at least one $a_i$ positive. Then define the measure
$$P(x_1,...,x_n) = \frac{1}{Z}\sum\limits_{i=1}^{n}a_ix_i$$
where $Z$ is a normalization constant (a simple recursion shows $Z = 2^{n-1}\sum\limits_{i=1}^{n}a_i)$.</p>
<p>Does this distribution look familiar to anyone? Please include a reference to a textbook or scholarly article in your response.</p>
|
<p>It's a multinomial distribution. It is the same as repeating a coin toss n times and counting the number of times heads appears. In this case you also have a coefficient for each i-th toss (i = 1..n), which means that you treat the i-th toss differently from the (i+1)-th toss. </p>
| 565
|
probability distributions
|
Probability of false negative in uniform distribution test
|
https://stats.stackexchange.com/questions/287874/probability-of-false-negative-in-uniform-distribution-test
|
<p>Let's say I have a set of $n$ objects, and I select an object from the set with uniform probability. I do this many times, and record how many times I select each object. These counts will tend toward a uniform distribution, but an exact uniform distribution is obviously unlikely. </p>
<p>What's the chance that the difference between the most-selected object and least-selected object is greater than $d$? Assume I perform $s$ total selections.</p>
<p>If you're curious, I'm writing a test for a piece of software that should uniformly select objects in this manner, and I want to verify that it is actually selecting them uniformly. I want to know the test's false negative rate (the chance that it thinks the distribution is not uniform when it is). False negatives are significantly worse for me than false positives. If there's a better way to check if a distribution is uniform than simply comparing the max and min, I'd be happy to hear about it.</p>
| 566
|
|
probability distributions
|
Given two means and deviations, how can I compute the probability that x < y?
|
https://stats.stackexchange.com/questions/297927/given-two-means-and-deviations-how-can-i-compute-the-probability-that-x-y
|
<p>I've experimentally achieved my goal by running many random trials, generating two points according to scaled and translated gaussian distributions and counting how many times x is less than y.</p>
<p>However this is becoming a problem from a performance point of view for my application.</p>
<p>Is there a straightforward way to compute P(X < Y) where X and Y are variables with a given mean and deviation?</p>
|
<p>If your assumptions regarding the normality of $X$ and $Y$ are correct, then you have two normally distributed random variables $X\sim\mathcal{N}(\mu_X, \sigma_X^2)$ and $Y\sim\mathcal{N}(\mu_Y, \sigma_Y^2)$, and you are searching for $\mathrm{Pr}(X < Y)$, aka $\mathrm{Pr}(X-Y < 0)$.</p>
<p>Assuming $X$ and $Y$ are independent, this is easy to analyze, because in that case $X-Y\sim\mathcal{N}(\mu_X-\mu_Y, \sigma^2_X+\sigma^2_Y)$. Therefore your desired probability can be obtained from the standard normal cdf: $\mathrm{Pr}(X-Y < 0) = \Phi(\frac{\mu_Y-\mu_X}{\sqrt{\sigma^2_X+\sigma^2_Y}})$.</p>
<p>To link this to your approach of simulating many samples from the random variables, consider the case where $X\sim\mathcal{N}(1, 3^2)$ and $Y\sim\mathcal{N}(2, 4^2)$. Then we can calculate that $\mathrm{Pr}(X < Y) = \Phi(\frac{2-1}{\sqrt{3^2+4^2}})\approx 0.57926$. We can obtain a similar result via simulation in R:</p>
<pre><code>set.seed(144)
x.samples <- rnorm(1e6, 1, 3)
y.samples <- rnorm(1e6, 2, 4)
mean(x.samples < y.samples)
# [1] 0.579655
</code></pre>
<p>If $X$ and $Y$ are not independent, then we need more information to calculate $\mathrm{Pr}(X < Y)$. For instance, if they are jointly normally distributed with correlation coefficient $\rho$, then $X-Y\sim\mathcal{N}(\mu_X-\mu_Y, \sigma_X^2+\sigma_Y^2+2\rho\sigma_X\sigma_Y)$, meaning $\mathrm{Pr}(X < Y) = \Phi(\frac{\mu_Y-\mu_X}{\sqrt{\sigma_X^2+\sigma_Y^2+2\rho\sigma_X\sigma_Y}})$. If they are not jointly normally distributed, then $X-Y$ may have some other distribution, and that will impact the calculation of $\mathrm{Pr}(X < Y)$.</p>
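<p>The closed-form value above can also be checked with only the standard library, using $\Phi(z)=\tfrac12\left(1+\operatorname{erf}(z/\sqrt2)\right)$ (a sketch of mine, not part of the original answer):</p>

```python
import math

def p_x_less_y(mu_x, sd_x, mu_y, sd_y):
    # P(X < Y) for independent normals = Phi((mu_y - mu_x) / sqrt(sd_x^2 + sd_y^2))
    z = (mu_y - mu_x) / math.hypot(sd_x, sd_y)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(p_x_less_y(1, 3, 2, 4))  # ≈ 0.57926, matching the simulation above
```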
| 567
|
probability distributions
|
Calculate probability of number based on previous numbers in a non-random sequence
|
https://stats.stackexchange.com/questions/300718/calculate-probability-of-number-based-on-previous-numbers-in-a-non-random-sequen
|
<p>I have a large number of number sequences of different lengths consisting of 1s and 0s indicating whether an action has occurred, like:</p>
<pre><code>0111001111101
00010001111
01111111011111
</code></pre>
<p>The digits are not independent, as they represent whether an action has been performed or not, which is dependent on whether that action was carried out on the previous occasion.</p>
<p>What I would like to calculate is the probability of finding a 0 after a sequence of 1, such as the probability of finding a 0 after a single 1, after two 1s, after three 1s, etc, across all these sequences. I would basically like to know whether the probability of a 0 increases or decreases with a longer sequence of 1s. In other words, in this specific instance, does the repeated performance of an action increase or decrease the probability that a competing action will be performed?</p>
<p>I have looked at conditional probabilities but haven't found a solution that matches my problem.</p>
<p>This might be a simple question, but I haven't been able to find the solution by myself so hopefully someone here can help.</p>
| 568
|
|
probability distributions
|
Taking expectations over uniformly distributed random variable
|
https://stats.stackexchange.com/questions/303134/taking-expectations-over-uniformly-distributed-random-variable
|
<p>I have a probably very silly question, but somehow I am lost and don't get to the solution. Either it is my mistake or there is a mistake in the paper. Hopefully you can help me out!</p>
<p>The problem is as follows: The density of random variable q that is uniformly distributed between 0 and $\bar{q}$ is given by:</p>
<p>$$
\frac{\frac{1+q}{2}}{\frac{2+\bar{q}}{4\bar{q}}}
$$</p>
<p>Now taking expectations over q, the paper gets $\frac{\bar{q}(3+2\bar{q})}{3(2+\bar{q})}$. However, if I integrate over the above density from 0 to $\bar{q}$, I get $\frac{\bar{q}(2\bar{q}+\bar{q}^2)}{2+\bar{q}}$. It would be really cool if somebody could either uncover my mistake or confirm my solution. Thanks a lot in advance!</p>
| 569
|
|
probability distributions
|
Distribution of X³/Y where X and Y are uniformly distributed
|
https://stats.stackexchange.com/questions/298607/distribution-of-x%c2%b3-y-where-x-and-y-are-uniformly-distributed
|
<p>$X$ and $Y$ are two uniformly distributed variables in interval $(a,b)$ and $(c,d)$ respectively (a>0 and c>0). </p>
<p>What is the distribution of the variable $Z$ defined as $Z=X^3/Y$?</p>
| 570
|
|
probability distributions
|
Distribution of a random variable to the nth power
|
https://stats.stackexchange.com/questions/302187/distribution-of-a-random-variable-to-the-nth-power
|
<p>If we know the distribution of $X$ is symmetric, do we know anything about the shape of the distribution of $X^n$, where $n\geq 0$ but is not necessarily an integer?</p>
<p>I'm not referring to the moments of $X$, but rather the distribution of its products (but more generally, raised to the $n$th power).</p>
<p>Edit: as many have pointed out, I should clarify further that $x$ is assumed to have the domain $[a,b]$, where $a$ and $b$ are positive numbers. Symmetric refers to symmetric around the mean, a positive number</p>
| 571
|
|
probability distributions
|
Probability distribution for random walk samples of Voronoi decomposition of high-dimensional space of exponentially-distributed points
|
https://stats.stackexchange.com/questions/309510/probability-distribution-for-random-walk-samples-of-voronoi-decomposition-of-hig
|
<p>I've got a protein I'm modeling. One thing I do with it is randomly perturb the protein for a fixed amount of time at a chosen temperature, and then minimize it so that its configuration reaches a local energy minimum. Higher temperatures can reach energy minima further and further away from the starting configuration. I repeat this hundreds of thousands of times at different temperatures.</p>
<p>I've noticed that energy minima are sparse enough that at lower temperatures, I'll get repeats of the same minimum. In fact, at the lowest temperature I sample at (<span class="math-container">$kT = 1\,\text{REU}$</span>), less than 1% of samples are unique. The portion of uniques as a function of temperature looks vaguely like the CDF of a gamma distribution (not to say that that's what it actually is).</p>
<p>How can one use the temperature vs. portion of uniques data to model the distribution of energy minima as a function of configuration-space distance from the starting point (which, by the way, is the global minimum)? I've gotten to the end of the analysis, but I have no background in statistics and would appreciate it if someone with more stats knowledge than me went over it.</p>
<ul>
<li>The minima space is a 1000+-dimensional space, with each minimum as a point in the space. The density of minima decreases exponentially as you get further away from the origin; we can model this with the function
<span class="math-container">$$N(r) = N_{0}(1-e^{-\alpha r})$$</span>
representing the number of minima within a distance of <span class="math-container">$r$</span> from the origin, for some <span class="math-container">$\alpha$</span>. (This is perhaps the most weakly-motivated approximation here. In fact, it can't possibly be true, not only because there is certifiably just 1 minimum at the center, but also because the data has an inflection point, meaning that the mode of the minima density is probably somewhere past <span class="math-container">$r=0$</span>. But I'm at a loss as to what else it could be.)</li>
<li>The random perturbation search is a random walk in the minima space, with the number of steps in the random walk linearly increasing with temperature. This means that the probability of the random walk ending up at any particular distance from the origin is approximately
<span class="math-container">$$P(r; T) = \frac{2r}{\beta T}e^{-r^{2}/\beta T}$$</span>
for some <span class="math-container">$\beta$</span>.</li>
<li>Wherever the random walk ends up, the minimization will then carry it to the nearest minimum. This is equivalent to a random walk on a nearest-neighbor partition (aka Voronoi decomposition) of the minima space, and then taking a sample of whichever partition you end up in. At a given <span class="math-container">$r$</span> for a finite number of samples, this will look like multinomial sampling with probabilities proportional to the hypervolumes of the cross-sections of the partitions that intersect the hypershell of that <span class="math-container">$r$</span>. And because we're assuming that the minima are evenly distributed within such a hypershell, the probability of getting any particular minimum is the same, i.e. <span class="math-container">$\left(\frac{dN(r)}{dr}\right)^{-1}$</span>. And this means that, for <span class="math-container">$n$</span> samples at a temperature <span class="math-container">$T$</span>, our expected number of unique minima hit should be
<span class="math-container">$$ M(n, T) = \int_0^\infty \frac{dN(r)}{dr}\left(1-\left(1-\left(\frac{dN(r)}{dr}\right)^{-1}\right)^{n}\right)P(r;T)\,dr $$</span>
where <span class="math-container">$N(r)$</span> and <span class="math-container">$P(r;T)$</span> are the functions defined above. Mathematica couldn't do the integral, unfortunately, so I guess it's something really weird.</li>
</ul>
| 572
|
|
probability distributions
|
Independent joint conditional probability
|
https://stats.stackexchange.com/questions/315227/independent-joint-conditional-probability
|
<p>I am new to conditional probability. I would like to do some inference. If we know p(z|x) and p(z|y), can we infer p(z|x,y)? What can we deduce if x and y are independent? Thank you very much.</p>
|
<p>There is not enough information in $p(z|x)$ and $p(z|y)$ to determine $p(z|x,y)$. In particular, when $X$ and $Y$ are independent, they may become dependent conditional on $Z$: take for instance the exponential variates $X$, $Y$, and $Z$, related by
$$X\sim\mathcal{E}(1)\qquad Y\sim\mathcal{E}(1)\qquad Z|X,Y\sim\mathcal{E}(XY)$$
In this case,
$$Y|X,Z\sim f(y|x,z)\propto xy\exp\{-y-xyz\}$$
is a Gamma$(2,1+xz)$ that depends on $x$.</p>
| 573
|
probability distributions
|
Is $Pr(x \leq C)$ equal to $Pr(\sqrt{x} \leq \sqrt{C})$?
|
https://stats.stackexchange.com/questions/321935/is-prx-leq-c-equal-to-pr-sqrtx-leq-sqrtc
|
<p>Where x is non-negative continuous random variable and C is a constant.</p>
|
<p>Yes, because $x \leq C \Leftrightarrow \sqrt{x} \leq \sqrt{C}$ for $x, C \geq 0$. </p>
<p>This means that the set of events $\{A \in \Omega: x(A) \leq C\}$ equals the set $\{A \in \Omega: \sqrt{x(A)} \leq \sqrt{C}\}$, and so their probabilities are equal.</p>
| 574
|
probability distributions
|
On upper bounds on the distance between the population density and a chosen density
|
https://stats.stackexchange.com/questions/324440/on-upper-bounds-on-the-distance-between-the-population-density-and-a-chosen-dens
|
<p>Say One has a certain number of observations from a population density $p(x)$ and based on the observations decides to utilize a density $q(x)$ to approximate the unknown population density $p(x)$.</p>
<p>Can one choose $q(x)$ in such a way to get some upper bound for the $L^2$ distance or $L^1$ distance between the two densities:</p>
<p>$$\int_{-\infty}^{+\infty} (p(x) - q(x))^2 \, dx \qquad \int_{-\infty}^{+\infty} |p(x) - q(x)| \, dx$$</p>
<p>so that one can estimate how far off he can be on his approximation of the unknown population density?</p>
| 575
|
|
probability distributions
|
Probability distribution, n dice
|
https://stats.stackexchange.com/questions/330508/probability-distribution-n-dice
|
<p>Suppose I have $n$ fair dice with 6 faces each (numbers from $1$ to $6$). I define a random variable $X_n$ - sum of numbers on dice after the throw. Is there any probability distribution I can use to calculate exact probabilities for every value of this random variable? I'm interested in a distribution that gives exact values, not something like CLT.</p>
| 576
|
|
probability distributions
|
How to compute hypergeometric distribution probabilities for complex events?
|
https://stats.stackexchange.com/questions/350461/how-to-compute-hypergeometric-distribution-probabilities-for-complex-events
|
<p>How would I calculate the sums of two (or more) hypergeometric distributions? If, using a standard deck of cards, I want to determine the probability of drawing 2 red cards and one Black Queen, I cannot just change my "good card" size and use one formula, because that wouldn't tell me what I need.</p>
<p>So, given 52 cards, 26 of which are red, and given a draw of 5 cards, the hypergeometric probability of drawing 2 red cards is 0.32. With Black Queens, there are 2 "successes" in the population and I want to draw 1; the probability is 0.17. </p>
<p>In Python using Scipy I can use the hypergeom function</p>
<pre><code>[M,n,N] = [52,26,5] # M=Population,n=Successes,N=drawn
rv = hypergeom(M,n,N)
pR = rv.pmf(2) # probability of two reds in a hand of 5
n1 = 2
rv = hypergeom(M,n1,N)
pBQ = rv.pmf(1) # probability of 1 Black Queen in a hand a 5
</code></pre>
<p>How do I then calculate the probability that out of a 5 card draw I get 2 reds and 1 black Queen?</p>
|
<p>Since drawing more red cards implies you have drawn fewer black cards, the red card count and black queen count are not independent. That makes it difficult to combine the probabilities of each event in any simple way to obtain the answer.</p>
<p>Instead, do it the old-fashioned way: count every hand having two red cards, one black queen, and (presumably) two other black cards. Divide that by the count of all possible five-card hands, because each such hand has the same probability (under a fair draw, anyway).</p>
<p>The number of such hands is counted by taking the number of two-card subsets of all $26$ cards, written $\binom{26}{2}$, multiplying that by the number of one-card subsets of the two black queens, written $\binom{2}{1},$ and multiplying the result by the number of two-card subsets of the remaining $24$ black cards, written $\binom{24}{2}.$ This will be divided by $\binom{52}{5}$ to obtain the probability.</p>
<p>Applying the formula</p>
<p>$$\binom{n}{k} = \frac{n!}{(n-k)!k!} = \frac{n(n-1)\cdots(n-k+1)}{k(k-1)\cdots(1)}$$</p>
<p>yields the answer</p>
<p>$$\frac{\binom{26}{2}\binom{2}{1}\binom{24}{2}}{\binom{52}{5}} = \frac{5\times 23}{2\times 7^2 \times 17}=\frac{115}{1666}\approx 6.9\%.$$</p>
<hr>
<p>This reasoning applies, <em>mutatis mutandis,</em> to any event that specifies how many elements of each non-overlapping subset of a population must appear in a sample (without replacement) of a given size.</p>
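<p>The arithmetic above can be verified directly with <code>math.comb</code> (my own sketch, not part of the original answer):</p>

```python
from math import comb

# hands with 2 red cards, 1 black queen, 2 other black cards,
# divided by all possible 5-card hands
p = comb(26, 2) * comb(2, 1) * comb(24, 2) / comb(52, 5)
print(p)           # ≈ 0.069
print(115 / 1666)  # the same value, as derived above
```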
| 577
|
probability distributions
|
probability of repeated events
|
https://stats.stackexchange.com/questions/371379/probability-of-repeated-events
|
<p>I have a website and I want to calculate the probability of clicks on the ads.</p>
<p>Let the probability that each user clicks on a link be <code>p</code> (something like <code>1%</code>)</p>
<p>If we have <code>N</code> users in total, what is the formula that computes the probability of exactly <code>n</code> clicks?
Of course we have</p>
<blockquote>
<p>0< = n <= N</p>
</blockquote>
<p>Each users can click only once</p>
|
<p>Consider each user as a trial. For every trial you have two outcomes, they are success (clicks on ad) and failure (does not click on ad). <span class="math-container">$P[success] = p $</span> and <span class="math-container">$P[failure] = 1-p$</span>. </p>
<p>The total number of ways in which <span class="math-container">$n$</span> users can be selected from <span class="math-container">$N$</span> users is <span class="math-container">$\binom{N}{n}$</span>. So, the probability of exactly <span class="math-container">$n$</span> clicks is <span class="math-container">$\binom{N}{n}*p^n*(1-p)^m$</span> where <span class="math-container">$m = N-n$</span>, as the clicks are independent and exactly <span class="math-container">$n$</span> clicks mean exactly <span class="math-container">$N-n$</span> 'not clicks'.</p>
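<p>A sketch of this binomial formula in code (my own illustration, with made-up values of <span class="math-container">$N$</span> and <span class="math-container">$p$</span>):</p>

```python
from math import comb

def p_exact_clicks(n, N, p):
    """Probability of exactly n clicks among N independent users,
    each clicking with probability p (the binomial pmf)."""
    return comb(N, n) * p**n * (1 - p)**(N - n)

print(p_exact_clicks(2, 100, 0.01))  # ≈ 0.185
```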
| 578
|
probability distributions
|
Probabilities under the log normal distribution, as well as mean and sd
|
https://stats.stackexchange.com/questions/372729/probabilities-under-the-log-normal-distribution-as-well-as-mean-and-sd
|
<p>I have what are probably some pretty basic stats questions. I heard that under the log normal distribution, the mean = variance. Is this true? Or is this another distribution? I am having trouble finding this information online.</p>
<p>Second, if I have the mean and sd for a log normal distribution, how do I calculate the probability of a value under this distribution? I am assuming it can be transformed to the normal, but I don't remember the steps to do this.</p>
|
<p>"I heard that under the log normal distribution, the mean = variance. Is this true?" No. For the normal distribution, the mean and variance have no relation. For the lognormal distribution, the mean and variance are related, but neither determines the other.</p>
<p>"Or is this another distribution?" One distribution with that property is the Poisson distribution, whose mean equals its variance.</p>
<p>"How do I calculate the probability of a value under this distribution?" The probability of any single value under a continuous distribution, such as the lognormal, is zero. If you want the probability of falling between a and b, take log(a) and log(b), and then proceed the way you would to get a probability from a normal distribution.</p>
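<p>Concretely, if $\ln X \sim \mathcal N(\mu,\sigma^2)$, then $P(a &lt; X &lt; b) = \Phi\left(\frac{\ln b-\mu}{\sigma}\right) - \Phi\left(\frac{\ln a-\mu}{\sigma}\right)$. A small sketch of mine (note: $\mu$ and $\sigma$ here are the parameters of $\ln X$, not the mean and sd of $X$ itself):</p>

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_prob(a, b, mu, sigma):
    """P(a < X < b) when log(X) ~ Normal(mu, sigma^2)."""
    return phi((math.log(b) - mu) / sigma) - phi((math.log(a) - mu) / sigma)

# With mu=0, sigma=1, half the mass lies below 1 (since log 1 = 0):
print(lognormal_prob(1e-12, 1.0, 0.0, 1.0))  # ≈ 0.5
```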
| 579
|
probability distributions
|
pdf of combined models
|
https://stats.stackexchange.com/questions/376201/pdf-of-combined-models
|
<p>In a question, I have 5 systems. At a given time x, the probability that they're working is based on an exponential distribution. The combination of all systems will work if any single system is working</p>
<ul>
<li>Systems fail independently</li>
</ul>
<p>How can I determine the pdf of the system reliability? (ie. the lifetime of the system)</p>
<hr>
<p>My current process</p>
<p>The exponential function: <span class="math-container">$\lambda e^{-\lambda x}$</span><br>
Therefore, the probability of a system failing is <span class="math-container">$1-\lambda e^{-\lambda x}$</span><br>
Therefore, the probability of all systems failing at time x is <span class="math-container">$\prod\limits_{i=1}^{5}(1-\lambda_i e^{-\lambda_i x})$</span><br>
So all systems working would be <span class="math-container">$p = 1- \prod\limits_{i=1}^{5}(1-\lambda_i e^{-\lambda_i x})$</span></p>
<p>Very unclear to me</p>
|
<p>Let <span class="math-container">$T_i$</span> be the lifetime of system <span class="math-container">$i$</span>; then your total system lifetime is <span class="math-container">$T=\max(T_1..T_5)$</span>. First write <span class="math-container">$F_T(t)=P(T\leq t)=\prod_{i=1}^5 P(T_i\leq t)$</span>, then differentiate with respect to <span class="math-container">$t$</span>. </p>
<p>Your expression for the <em>probability of a system failing</em> is wrong: you seem to confuse probability with density. P(a system fails before time <span class="math-container">$t$</span>) = <span class="math-container">$P(T_i \leq t)=\int_0^{t}{\lambda_i e^{-\lambda_i x}dx}=1-e^{-\lambda_i t}$</span>.</p>
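<p>A sketch of that derivation in code (mine, with made-up rates), checking the differentiated pdf <span class="math-container">$f_T(t)=\sum_i \lambda_i e^{-\lambda_i t}\prod_{j\neq i}(1-e^{-\lambda_j t})$</span> against a numerical derivative of <span class="math-container">$F_T$</span>:</p>

```python
import math

rates = [0.5, 1.0, 1.5, 2.0, 2.5]  # made-up lambda_i values

def cdf(t):
    # F_T(t) = product of P(T_i <= t) for independent exponential lifetimes
    out = 1.0
    for lam in rates:
        out *= 1.0 - math.exp(-lam * t)
    return out

def pdf(t):
    # d/dt of the product, by the product rule
    total = 0.0
    for i, lam_i in enumerate(rates):
        term = lam_i * math.exp(-lam_i * t)
        for j, lam_j in enumerate(rates):
            if j != i:
                term *= 1.0 - math.exp(-lam_j * t)
        total += term
    return total

t, h = 1.3, 1e-6
numeric = (cdf(t + h) - cdf(t - h)) / (2 * h)
assert abs(pdf(t) - numeric) < 1e-6  # closed form matches central difference
```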
| 580
|
probability distributions
|
When drawing three number from the same distribution, what is the probability of the first to be between the two?
|
https://stats.stackexchange.com/questions/383688/when-drawing-three-number-from-the-same-distribution-what-is-the-probability-of
|
<p>If I draw 3 numbers: <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span> from the exact same distribution (unknown, but the same for each of the numbers).
I want to know the probability that <span class="math-container">$a$</span> is between <span class="math-container">$b$</span> and <span class="math-container">$c$</span>.
That is: <span class="math-container">$b < a < c$</span> or <span class="math-container">$c < a < b$</span>.</p>
<p>Naturally, I expect the probability to be <span class="math-container">$\frac{1}{3}$</span>.
This is because it is easy to prove that for uniform distribution it is true, and we can simulate any distribution using a uniform one.</p>
<p>Can this be proven formally?</p>
|
<p>Any of the three can be the one in the middle. If you think it may be more probable that <span class="math-container">$a$</span> is the one in the middle than <span class="math-container">$b,$</span> then what happens if you rename them so that the one called <span class="math-container">$a$</span> is then called <span class="math-container">$b$</span> and vice-versa? Does <span class="math-container">$b$</span> then become the more probable one?</p>
<p>If they're independent and the distribution is continuous, then the probability of a tie is <span class="math-container">$0.$</span> If the distribution is continuous and they're not assumed independent, then there is nothing in the way you stated the problem to prevent the first and second outcomes to be equal ALWAYS.</p>
<p>If they're not independent and the distribution is discrete, then it can still happen that the probability of two being equal is zero and each has probability <span class="math-container">$1/3$</span> of being the one in the middle. As follows: Choose WITHOUT REPLACEMENT from the set <span class="math-container">$\{1,2,3\}.$</span></p>
<p>As long as the distribution of the three random variables is <em>exchangeable</em> and the probability of a tie is <span class="math-container">$0,$</span> then the three have equal probability of being the one in the middle.</p>
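<p>A quick simulation illustrating the exchangeable i.i.d. case (my own sketch; the normal distribution here is an arbitrary choice):</p>

```python
import random

random.seed(0)
n = 100_000
middle = 0
for _ in range(n):
    a, b, c = (random.gauss(0, 1) for _ in range(3))
    if b < a < c or c < a < b:  # a is strictly between b and c
        middle += 1
print(middle / n)  # close to 1/3
```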
| 581
|
probability distributions
|
Calculate $P(X>10)$ where $X$ have Poisson distribution
|
https://stats.stackexchange.com/questions/384461/calculate-px10-where-x-have-poisson-distribution
|
<p>Calculate <span class="math-container">$P(X>10)$</span> where <span class="math-container">$X$</span> have Poisson distribution <span class="math-container">$Poisson(7,2)$</span>. using <span class="math-container">$R$</span></p>
<p><b>My attempt</b>
By the theory of probability we know <span class="math-container">$P(X>10)=1-P(X\leq 10)$</span></p>
<p>Then using R, the probability is:</p>
<pre><code>1-ppois(10,2,lower.tail = TRUE)
</code></pre>
<p>But i have the doubt:</p>
<pre><code>1-qpois(10,2)
</code></pre>
<p>What is the difference?</p>
<blockquote>
<p>Moreover, I need to calculate <span class="math-container">$P(2\leq X\leq 8)$</span></p>
</blockquote>
<p>I thought of this:</p>
<pre><code>sum(qpois(2:8,2))
</code></pre>
<p>Is this correct? Thanks.</p>
|
<p><code>qpois</code> returns a quantile, while <code>ppois</code> returns a probability.</p>
<p>Since you’re looking for a probability, your first formula (with <code>ppois</code>) is the correct one.</p>
<p>For the second question, use <code>ppois(8, lambda) - ppois(1, lambda)</code>, where <code>lambda</code> is the rate parameter of your variable.</p>
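<p>The same quantities can be computed without R from the Poisson pmf directly (my own Python sketch, mirroring <code>ppois</code>):</p>

```python
import math

def ppois(k, lam):
    """P(X <= k) for X ~ Poisson(lam), summing the pmf."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

lam = 2
print(1 - ppois(10, lam))             # P(X > 10)
print(ppois(8, lam) - ppois(1, lam))  # P(2 <= X <= 8)
```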
| 582
|
probability distributions
|
How to find the value for average rate
|
https://stats.stackexchange.com/questions/387084/how-to-find-the-value-for-average-rate
|
<p>I'm doing some textbook problems on my own and there are no steps given to the solutions to some of the problems. If anyone could help me solve the following problem, much would be appreciated.</p>
<p>"A liquid culture medium contains on the average <em>m</em> bacteria per ml. A large number of samples is taken, each of 1 ml, and bacteria are found to be present in 90% of the samples. Estimate <em>m</em>.</p>
<p>The answer given:
<em>m</em> = 2.3026</p>
<p>The knowledge we're supposed to use is a bit limited (what we have learned so far in this chapter: Binomial, Poisson, Exponential and Normal distributions), so if it is possible to provide an explanation within those constraints, that would be ideal.</p>
<p>Thank you in advance.</p>
|
<p>Since you figured it out, I will post the answer. Poisson is the easiest option, as you figured out, since it has the least number of parameters. So using the Poisson formula</p>
<p><span class="math-container">$$P(k \leq 0)=P(k=0)=0.1$$</span></p>
<p>Which means</p>
<p><span class="math-container">$$e^{-\lambda}\frac{\lambda^k}{k!}=0.1$$</span></p>
<p>and since k=0</p>
<p><span class="math-container">$$e^{-\lambda}=0.1$$</span>
<span class="math-container">$$\lambda=-\ln(0.1)$$</span></p>
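<p>Numerically (a one-line check of mine):</p>

```python
import math

m = -math.log(0.1)  # lambda = -ln(0.1) = ln(10)
print(m)  # ≈ 2.3026, matching the textbook answer
```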
| 583
|
probability distributions
|
How can I calculate the probability of a increasingly likely positive outcome?
|
https://stats.stackexchange.com/questions/388611/how-can-i-calculate-the-probability-of-a-increasingly-likely-positive-outcome
|
<p>I'm not even sure I'm phrasing the question properly, please let me know if there is any standard terms around this type of problem.</p>
<p>I'm trying to find an average number of attempts it would take to find a randomly placed pixel within a fixed size element. If a wrong pixel is chosen, it is removed from the choices.</p>
<p>Scenario:</p>
<p>There is a 100 x 100 pixel square.
1 pixel is the winning pixel, all others are losers.
Remove each losing pixel as they are chosen.</p>
<p>A 100x100 square has a total of 10,000 pixels. Is the probability simply determined by multiplying 1/10000 * 1/9999 * 1/9998 * 1/9997 ....</p>
<p>How can I arrive at a number that gives me an idea of an average number of clicks it would take to find the winning pixel? I.e. can I run 1000 of these scenarios and get a fairly accurate estimate?</p>
|
<p>You're looking to calculate expected value of a variable <span class="math-container">$X$</span>, which we define as number of clicks until success.</p>
<p><span class="math-container">$$E[X] = \sum_{i=1}^{10,000} i \cdot Pr(X = i)$$</span> </p>
<p>Note that for each possible <span class="math-container">$i \in \{1,2,...,10000\}$</span>, <span class="math-container">$Pr(X=i) = \frac{1}{10,000}$</span>. Thus,</p>
<p><span class="math-container">$$E[X] = \sum_{i=1}^{10,000} \frac{i}{10,000} = 5000.5$$</span> </p>
<p>Running the scenario 1000 times could get you close-ish to this answer. In R:</p>
<pre><code>set.seed(1)
mean(replicate(1000, which(sample(1:10000, replace = F) == 1)))
</code></pre>
| 584
|
probability distributions
|
Probability Theory and distribution
|
https://stats.stackexchange.com/questions/394331/probability-theory-and-distribution
|
<p>I am studying probability theory and one of the questions that I have faced is this. The problem is that I either don't know where to go about with this question or even if I do do something about it, I have no way of knowing if I'm on the right path. Is it possible to please help explain this?</p>
<p>Q. In Germany, every year 125 people out of 100 000 report the occurrence of Parkinson's Disease.
a) Which kind of distribution would you use to model the occurrence of new cases per year? </p>
<p>b) Garching has a population of 17500, compute the probability that no more than 2 cases of the disease are encountered per year.(don’t calculate numeric value, but show the pdf or cdf computation that you do)</p>
|
<blockquote>
<p>In Germany, every year 125 people out of 100 000 report the occurrence of Parkinson's Disease. a) Which kind of distribution would you use to model the occurrence of new cases per year?</p>
</blockquote>
<p>This can be modeled as <a href="https://en.wikipedia.org/wiki/Poisson_distribution" rel="nofollow noreferrer">Poisson distribution</a> with parameter <span class="math-container">$\lambda$</span> which is the average number of events per interval. Poisson distribution expresses the probability of a given number of events occurring <em>in a fixed interval of time or space</em> if these events occur with <em>a known constant rate</em> and <em>independently of the time since the last event</em>.</p>
<blockquote>
<p>Garching has a population of 17500, compute the probability that no more than 2 cases of the disease are encountered per year.</p>
</blockquote>
<p>to answer this one can make a simplifying assumption that the <span class="math-container">$125$</span> cases in Germany are uniformly distributed over Germany and the population is also uniformly distributed in Germany. Then you can find out the average number of cases in Garching, which should be <span class="math-container">$125*(17500/100,000)$</span>. Then you can model it similarly as above and can find out the relevant probabilities.</p>
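<p>Sketching the suggested computation (my own code; the uniformity assumption is the answer's, and the numeric value is shown only for illustration):</p>

```python
import math

lam = 125 * 17500 / 100_000  # expected cases per year in Garching = 21.875
# P(X <= 2) = e^{-lam} * (1 + lam + lam^2/2)
p_at_most_2 = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(3))
print(lam, p_at_most_2)  # the probability is astronomically small
```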
| 585
|
probability distributions
|
What happens if my Anderson Darling score is more than 10?
|
https://stats.stackexchange.com/questions/394819/what-happens-if-my-anderson-darling-score-is-more-than-10
|
<p>I am using Crystal Ball to fit distribution curves. Based on the fitting, the AD score is 20 and the p value is 0. What does it mean? Does it mean the fit is suitable?</p>
<p>Appreciate any help!</p>
<p>Thanks!</p>
|
<p>With an exceptionally high AD statistic of 20 and a p-value that is essentially 0, you can reject the null hypothesis that your data come from the same population as the distribution you are testing against. </p>
<p>I suspect you used the AD test to check whether your data are normally distributed. They are not. For normality to remain plausible (i.e., for you to be unable to reject the null hypothesis of normality), the AD statistic should typically be much less than 1.00. </p>
| 586
|
probability distributions
|
Biased coins: Probability such that the first player throws heads
|
https://stats.stackexchange.com/questions/403974/biased-coins-probability-such-that-the-first-player-throws-heads
|
<p>Let <span class="math-container">$A$</span>, <span class="math-container">$B$</span> be the two players. Each has a coin with probability <span class="math-container">$p_i$</span> of landing heads. Player <span class="math-container">$A$</span> always goes first. What is the probability that <span class="math-container">$A$</span> wins?</p>
<p>Ex. Both coins land heads on average 1 out of 2 times.
The solution gives <span class="math-container">$\frac{1}{2}$</span> as the result.</p>
<p>My approach was to draw a probability tree and compute the probability that <span class="math-container">$A$</span> throws heads. We know that this is geometrically distributed, but I don't know how to get the result.</p>
|
<p>If you draw the tree, you can see that either <span class="math-container">$A$</span> wins straight away, or <span class="math-container">$A$</span> flips tails and <span class="math-container">$B$</span> also flips tails and then <span class="math-container">$A$</span> gets heads, ..., or <span class="math-container">$A$</span> and <span class="math-container">$B$</span> each get <span class="math-container">$j$</span> tails in a row and then <span class="math-container">$A$</span> gets heads.</p>
<p>So, the probability of <span class="math-container">$A$</span> winning is given by summing all these probabilities:
<span class="math-container">$$
\sum_{j=0}^{+\infty} \big((1 - p_A)(1 - p_B)\big)^{j} p_A
$$</span>
Let <span class="math-container">$c = (1 - p_A)(1 - p_B) \in [0; 1]$</span>. The above sum can be written as:
<span class="math-container">$$
p_A \sum_{j=0}^{+\infty} c^{j}
$$</span></p>
<p>Can you compute the result?</p>
<hr>
<p><strong>Edit:</strong> For future reference, I'm adding the result for the probability of <span class="math-container">$A$</span> winning:
<span class="math-container">$$
\frac{p_A}{1-c} = \frac{p_A}{p_A + p_B - p_A p_B}
$$</span></p>
<hr>
<p>And, just for fun, I wondered which should be the probability when <span class="math-container">$p_A = p_B$</span> that makes the probability of <span class="math-container">$A$</span> winning equal to <span class="math-container">$1/2$</span>.</p>
<p><span class="math-container">$$
\frac{p_A}{2 p_A - p_A ^2} = \frac{1}{2}
$$</span>
<span class="math-container">$$
2p_A = 2 p_A - p_A ^2
$$</span>
<span class="math-container">$$
p_A ^2 = 0
$$</span>
<span class="math-container">$$
p_A = 0
$$</span></p>
<p>Hence,
<span class="math-container">$$
\lim_{p_A \to 0} \frac{p_A}{2 p_A - p_A ^2} = \frac{1}{2}
$$</span>
So, if the game goes on forever (in the limit <span class="math-container">$p_A \to 0$</span>) then both players have <span class="math-container">$1/2$</span> probability of winning.</p>
<p>Furthermore, if <span class="math-container">$p_A = p_B$</span> the first player to toss the coin has the highest probability of winning. I.e., for <span class="math-container">$p_A \in [0; 1]$</span>,
<span class="math-container">$$
\frac{p_A}{2 p_A - p_A ^2} \geq \frac{1}{2}
$$</span></p>
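<p>A quick numeric sketch (function names are mine) that checks the closed form against a truncated geometric series:</p>

```python
def win_prob(p_a, p_b):
    """Closed-form probability that A (who tosses first) wins."""
    return p_a / (p_a + p_b - p_a * p_b)

def win_prob_series(p_a, p_b, terms=1000):
    """Partial sum of p_a * sum_j ((1 - p_a)(1 - p_b))^j."""
    c = (1 - p_a) * (1 - p_b)
    return p_a * sum(c**j for j in range(terms))

print(win_prob(0.5, 0.5))         # fair coins: A wins with probability 2/3
print(win_prob_series(0.5, 0.5))  # should agree with the closed form
```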
| 587
|
probability distributions
|
Accounting for membership in set, but also quantity in event
|
https://stats.stackexchange.com/questions/403084/accounting-for-membership-in-set-but-also-quantity-in-event
|
<p>Say we have a sample space
<span class="math-container">$$\Omega_1 = \{\text{"alpha"},\text{"beta"},\text{"gamma"},\text{"delta"}\}$$</span>
if we only care about the (binary) membership in set in each event, the event space would be the power set of <span class="math-container">$\Omega_1$</span>:
<span class="math-container">$$\mathcal{F_1} = \{\emptyset, \{\text{"alpha"}\},\{\text{"beta"}\},...,\{\text{"alpha"},\text{"beta"},\text{"gamma"}\}, \Omega_1\}$$</span></p>
<p>But what if, in each event, I want to also include the (real-valued) "quantity" of each element in set, in addition to the binary membership?</p>
<p>One approach that came to mind is to have a separate, real-valued continuous sample space <span class="math-container">$\Omega_2 = [0,100]$</span> </p>
<p>Then the joint sample space would be <span class="math-container">$\Omega_\text{1,2}=\Omega_1\times\Omega_2$</span></p>
<p>And an example event will look something like: <span class="math-container">$\{\text{"alpha"}, 32.8,\text{"beta"},43.6,\text{"gamma"},7.21\}$</span></p>
<p>Does this approach make sense? Or are there better ways to do this?</p>
<p>And what would be the appropriate notation?</p>
|
<p>What you want here can be accomplished by using the sample space:</p>
<p><span class="math-container">$$\Omega = \{ 0,1 \}^4 \times \mathbb{R}^4.$$</span></p>
<p>This sample space allows for four binary indicators and four corresponding real values. The event space <span class="math-container">$\mathscr{G}$</span> would then consist of the class of all <a href="https://en.wikipedia.org/wiki/Borel_set" rel="nofollow noreferrer">Borel sets</a> on <span class="math-container">$\Omega$</span>.</p>
| 588
|
probability distributions
|
how to obtain the distribution of random variables?
|
https://stats.stackexchange.com/questions/416234/how-to-obtain-the-distribution-of-random-variables
|
<p>Suppose we have two independent random variables, each modeled as a triangular distribution centered around a mean on the interval (mean - a, mean + a), where 'a' is a finite value. They are then discretized at fixed sub-intervals. What will be the distribution of their max?</p>
|
<p>Your question can be answered by considering the question: "in what ways can <span class="math-container">$k$</span> be the maximum of these two random variables?"</p>
<p>Case 1: <span class="math-container">$X < k$</span> and <span class="math-container">$Y = k$</span>. This happens with probability <span class="math-container">$P(X<k)P(Y=k)$</span>, since the random variables are independent. This is indeed the first summand in <span class="math-container">$(1)$</span>.</p>
<p>Case 2: <span class="math-container">$Y <k$</span> and <span class="math-container">$X=k$</span>. Similarly, this happens with probability <span class="math-container">$P(Y<k)P(X=k)$</span>. This is the second summand.</p>
<p>Case 3: It can also be the case (since the transformed distributions are discrete) that <span class="math-container">$Y=k$</span> and <span class="math-container">$X=k$</span>. This happens with probability <span class="math-container">$P(Y=k)P(X=k)$</span>. This is where the third summand comes into play, and there you have formula <span class="math-container">$(1)$</span>.</p>
<p>In case the variables had been continuous, then the relevant formula would have been (with appropriate modifications) <span class="math-container">$(2)$</span>, as case three happens with probability measure <span class="math-container">$0$</span>.</p>
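<p>A small sketch of the three-case formula for two iid discrete variables (the pmf here is an arbitrary example of mine), cross-checked by brute-force enumeration:</p>

```python
# Example pmf on support {0, 1, 2} for iid X and Y.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def cdf_strict(k):
    """P(X < k)."""
    return sum(p for v, p in pmf.items() if v < k)

def pmf_max(k):
    """P(max(X, Y) = k): cases X<k,Y=k; Y<k,X=k; X=Y=k."""
    return 2 * cdf_strict(k) * pmf[k] + pmf[k] * pmf[k]

# Cross-check against enumeration of all (x, y) pairs.
for k in pmf:
    brute = sum(pmf[x] * pmf[y] for x in pmf for y in pmf if max(x, y) == k)
    assert abs(pmf_max(k) - brute) < 1e-12

print([round(pmf_max(k), 4) for k in sorted(pmf)])
```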
| 589
|
probability distributions
|
Is a transformation of a conditional distribution identical to the conditional of the transformation?
|
https://stats.stackexchange.com/questions/407932/is-a-transformation-of-a-conditional-distribution-identical-to-the-conditional-o
|
<h1>Problem</h1>
<p>Say that we can find a random vector <span class="math-container">$(Y_1, \cdots, Y_n)$</span> whose distribution is identical to the conditional distribution of <span class="math-container">$(X_1, \cdots, X_n)$</span> under <span class="math-container">$f(X_1, \cdots, X_n) \le 1$</span>, where <span class="math-container">$f(\cdot)$</span> is some function. </p>
<p><span class="math-container">$$(Y_1, \cdots, Y_n) \overset{d}{\equiv} (X_1, \cdots, X_n) \Big| f(X_1, \cdots, X_n) \le 1 $$</span></p>
<p>Now say we have been given <span class="math-container">$g : \mathbb{R}^n \to \mathbb{R}^n$</span> : <span class="math-container">$1$</span>-to-<span class="math-container">$1$</span> function. </p>
<hr>
<h2>Question</h2>
<blockquote>
<p>Is is true that
<span class="math-container">$$
g(Y_1, \cdots, Y_n) \overset{d}{\equiv} g(X_1, \cdots, X_n) \Big| f(X_1, \cdots, X_n) \le 1
$$</span>?</p>
</blockquote>
<hr>
<h2>Try</h2>
<p>Answering the above question is equivalent to showing the following</p>
<p><span class="math-container">$$
g\left( (X_1, \cdots, X_n) \Big| f(X_1, \cdots, X_n) \le 1 \right) \overset{d}{\equiv} g (X_1, \cdots, X_n) \Big| f(X_1, \cdots, X_n) \le 1
$$</span></p>
<p>since by definition <span class="math-container">$g(Y_1, \cdots, Y_n) \overset{d}{\equiv} g\left( (X_1, \cdots, X_n) \Big| f(X_1, \cdots, X_n) \le 1 \right)$</span>.</p>
<p>But I'm not sure how it can be rigorously shown. </p>
<p>Any help will be appreciated. </p>
| 590
|
|
probability distributions
|
Total variation of a distribution
|
https://stats.stackexchange.com/questions/413757/total-variation-of-a-distribution
|
<p>The wikipedia page for <a href="https://en.wikipedia.org/wiki/Total_variation#Total_variation_of_probability_measures" rel="nofollow noreferrer">Total Variation</a> says that "The total variation of any probability measure is exactly one" (and is therefore not interesting).</p>
<p>I don't get why. </p>
<p>For example, if I take a discrete distribution with <code>p[0]=0.5, p[1]=0.1, p[2]=0.3, p[3]=0.1</code>, its total variation appears to be 0.4+0.2+0.2=0.8<1. If I add <code>p[x<0]=p[x>3]=0</code> for good measure, I get 0.5+0.4+0.2+0.2+0.1=1.4>1.</p>
<p>What am I getting wrong? Is a discrete probability distribution not a probability measure?</p>
|
<p>If you use the definition <span class="math-container">$\Delta = \sup_i \sum_j |\mu(E_j^i)|$</span> you get exactly one. In fact, for your example, let <span class="math-container">$E = \{0, 1, 2, 3\}$</span> and <span class="math-container">$E_i:=\{E^i_j\}$</span> be a subset of <span class="math-container">$E$</span>. We have <span class="math-container">$$E_i \in \mathcal{P}[E]:= \{\{\},\{0\},\{1\},\{2\},\{3\},\{0,1\},\{0,2\},\{0,3\},\{1,2\},\{1,3\},\{2,3\},\{0,1,2\},\{0,1,3\},\{0,2,3\},\{1,2,3\},\{0,1,2,3\}\}$$</span></p>
<p>The set of values of <span class="math-container">$\sum_j |\mu(E_j^i)|$</span> is <span class="math-container">$\{0, 0.5, 0.1, 0.3, 0.1, 0.6, 0.8, 0.6, 0.4, 0.2, 0.4, 0.9, 0.7, 0.9, 0.5, 1 \} = \{0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1 \}$</span></p>
<p>The supremum is then <span class="math-container">$1$</span>.</p>
<p>And if you add <span class="math-container">$p[x<0]=p[x>3]=0$</span>, these sets have probability 0, so the supremum does not change.</p>
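<p>The enumeration above can be sketched in a few lines (enumerating every subset of the support and taking the supremum of the sums):</p>

```python
from itertools import combinations

p = {0: 0.5, 1: 0.1, 2: 0.3, 3: 0.1}

# Collect the measure of every subset of the support.
sums = set()
for r in range(len(p) + 1):
    for subset in combinations(p, r):
        sums.add(round(sum(p[k] for k in subset), 10))

print(sorted(sums))  # 0, 0.1, 0.2, ..., 1
print(max(sums))     # the supremum is exactly 1
```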
| 591
|
probability distributions
|
For any distributions, are mean of the sum of the same distributions equals the sum of the means of the distributions?
|
https://stats.stackexchange.com/questions/420123/for-any-distributions-are-mean-of-the-sum-of-the-same-distributions-equals-the
|
<p>I have several random variables with the same distribution, for example a Gaussian distribution or a Beta distribution. My question: is the mean of the sum of these variables equal to the sum of their means?</p>
|
<p>Yes, mostly. By the property called linearity of expectation, the mean of a sum of random variables equals the sum of their individual means. </p>
<p>This holds so long as finitely many random variables go into the sum and their means are finite. It is also true in some other cases; see the comment below. </p>
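<p>A tiny illustration with two discrete distributions (invented for the example; independence is assumed here only to build the joint distribution, linearity itself does not require it):</p>

```python
from itertools import product

# Two independent discrete pmfs.
px = {0: 0.3, 1: 0.7}
py = {10: 0.5, 20: 0.5}

mean_x = sum(v * p for v, p in px.items())
mean_y = sum(v * p for v, p in py.items())

# E[X + Y] computed directly from the joint distribution.
mean_sum = sum((x + y) * px[x] * py[y] for x, y in product(px, py))

print(mean_sum, mean_x + mean_y)  # both equal 15.7
```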
| 592
|
probability distributions
|
An example of a random variable whose both marginal distributions and itself has the same probability distribution?
|
https://stats.stackexchange.com/questions/421577/an-example-of-a-random-variable-whose-both-marginal-distributions-and-itself-has
|
<p>To ask what I want to, let me start with an example:</p>
<p>Let <span class="math-container">$\chi(x)$</span> and <span class="math-container">$\eta(t)$</span> be two independent random variables. Then we can define a new random variable such as
<span class="math-container">$$\epsilon_1(x,t) = \chi(x) + \eta(t),$$</span>
or
<span class="math-container">$$\epsilon_2(x,t) = \chi(x) \eta(t).$$</span></p>
<p>Now, it is clear that, even if the underlying probability distributions of <span class="math-container">$\chi$</span> and <span class="math-container">$\eta$</span> are the same, <span class="math-container">$\epsilon$</span> (probably) will have a different probability distribution.</p>
<hr>
<p>Now, the real question:</p>
<p>Let <span class="math-container">$\epsilon = \epsilon(x,t)$</span> be a random variable of two variables. Is there an example of a random variable such that both of its marginal distributions (i.e. the probability distributions along each single dimension) and the variable itself have the same probability distribution?</p>
<p><strong>Edit:</strong></p>
<p>The interpretation that @whuber put it in the comments is correct, and I'm looking for non-trivial cases (i.e for example not <span class="math-container">$\epsilon (x,y) = x$</span>).</p>
| 593
|
|
probability distributions
|
How probability distributions help a statistical analyst/data scientist
|
https://stats.stackexchange.com/questions/431238/how-probability-distributions-help-a-statistical-analyst-data-scientist
|
<p>How exactly do probability distributions help a statistician/data scientist in modelling/decision making? </p>
<p>Or, how does a data scientist use distributions to derive inferences or make decisions when modelling? </p>
<p>I understand probability distributions theoretically but am not sure how they are utilised in day-to-day business operations. Some day-to-day business examples would be very helpful.</p>
<p>Thanks.</p>
|
<p>I suppose you are referring to the standard probability distributions? In that case, an example would be the arrival times for a certain service. </p>
<p>For example, suppose that a hospital has a limited number of beds to accommodate patients. The hospital wants to know how many beds it should have at any given time so that the probability of running out of beds is 0.01. To model this, we can look at the arrival times of patients. We note that patients do not influence each other's arrival times and that during our period of measurement, say, 12 hours from 8 AM to 8 PM, patients arrive randomly. </p>
<p>Now, if we observe the number of patients that arrive, say, every 5 minutes, we notice something very peculiar: it follows a Poisson distribution. Finding the number of beds the hospital needs is now straightforward: from our observations we can obtain an 'arrival rate', which is the lambda in a Poisson distribution, and with this arrival rate we can compute the required probability using this particular distribution.</p>
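<p>A sketch of that bed computation (the arrival rate of 10 per period is an invented example): find the smallest number of beds <span class="math-container">$b$</span> such that <span class="math-container">$P(X > b) \le 0.01$</span> for <span class="math-container">$X \sim \mathrm{Poisson}(\lambda)$</span>.</p>

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def beds_needed(lam, risk=0.01):
    """Smallest b with P(X > b) <= risk, i.e. P(X <= b) >= 1 - risk."""
    b = 0
    while poisson_cdf(b, lam) < 1 - risk:
        b += 1
    return b

print(beds_needed(10))  # beds required for an average of 10 arrivals per period
```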
| 594
|
probability distributions
|
Drawing curve from events that have probabilities
|
https://stats.stackexchange.com/questions/430480/drawing-curve-from-events-that-have-probabilities
|
<p>I'm having some problems plotting my data, and understanding what I can do with it</p>
<p>I have a dataset that looks something like:</p>
<pre><code>| Event | Score | Prob |
+-------+-------+------+
| 1 | 5 | 30% |
| 2 | 2 | 90% |
| 3 | 1 | 20% |
| 4 | 9 | 30% |
| ... | | |
+-------+-------+------+
</code></pre>
<p>Each event has a probability of happening, and a score associated with it. If more than 1 event occurs, then the scores sum.</p>
<p>I would like to make a plot that shows the most 'likely' score that could be achieved, and a curve that shows the distribution of scores against total probability. I feel this should be possible as I have the probabilities for each event, however, I also feel like I am misunderstanding something.</p>
<p>Can anyone please advise me on what to do, or any resources I should read to understand my problem better? Thanks.</p>
|
<p>You could work out the mean score by multiplying each score by its probability, and summing.</p>
<p>This might not be the most likely (mode) score if the distribution is not symmetrical, or if it is multi-modal.</p>
<p>You could run a simulation to get this plot (assuming the events are independent):</p>
<ol>
<li>Iterate 10000 times.</li>
<li>At each iteration, draw a random number between 0 and 1 for each event.</li>
<li>If the random number is less than or equal to Prob for that event, include the score; otherwise don't.</li>
<li>Store the sum of included scores for that iteration.</li>
</ol>
<p>After all the iterations are done, plot the density of the summed scores.</p>
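<p>The simulation just described can be sketched as follows (seed and iteration count are my choices; the density plot is replaced by simple summaries):</p>

```python
import random
from collections import Counter

events = [(5, 0.30), (2, 0.90), (1, 0.20), (9, 0.30)]  # (score, prob)

random.seed(42)
totals = []
for _ in range(10000):
    # Include each event's score with its own probability.
    total = sum(score for score, prob in events if random.random() <= prob)
    totals.append(total)

mean_total = sum(totals) / len(totals)
mode_total = Counter(totals).most_common(1)[0][0]
print(mean_total)  # close to the expected value 6.2
print(mode_total)  # most frequently simulated total score
```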
| 595
|
probability distributions
|
Is it appropriate use the Binomial Theorem to analyze the problem of rolling dice?
|
https://stats.stackexchange.com/questions/431476/is-it-appropriate-use-the-binomial-theorem-to-analyze-the-problem-of-rolling-dic
|
<p>In mathematics, the <a href="https://en.wikipedia.org/wiki/Multinomial_theorem" rel="nofollow noreferrer">multinomial theorem</a> </p>
<blockquote>
<p>describes how to expand a power of a sum in terms of powers of the terms in that sum. It is the generalization of the binomial theorem from binomials to multinomials.</p>
</blockquote>
<p>which means that not every multinomial distribution is a binomial distribution, e.g. rolling dice.</p>
<p>This <a href="https://stats.stackexchange.com/a/3618/250190">post</a> uses the binomial theorem to analyze the problem of rolling dice; is that appropriate?</p>
<p>In other words, is rolling dice a multinomial distribution that is not a binomial distribution?</p>
|
<p>This really depends on exactly what you are looking at. "Rolling dice" is a specification of <em>an activity</em>, not a specification of a <em>numerical outcome</em> that constitutes a random variable having a distribution. If you roll a set of standard six-sided dice and you look at the counts of the six possible outcomes, then (under standard assumptions), this vector of count outcomes will have a multinomial distribution. On the other hand, if you look at the count of only one outcome, then (under standard assumptions), this value will have a binomial distribution. There are many other distributions you could get from "rolling dice", depending on what numerical outcome you look at.</p>
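<p>For instance, a sketch of the binomial marginal: under standard assumptions, the count of one face (say sixes) in <span class="math-container">$n$</span> rolls of a fair die has pmf <span class="math-container">$\binom{n}{k}(1/6)^k(5/6)^{n-k}$</span>:</p>

```python
from math import comb

def pmf_sixes(k, n=10):
    """P(exactly k sixes in n rolls of a fair die): a binomial pmf."""
    return comb(n, k) * (1 / 6) ** k * (5 / 6) ** (n - k)

probs = [pmf_sixes(k) for k in range(11)]
print(sum(probs))                      # the pmf sums to 1
print(max(range(11), key=pmf_sixes))   # the most likely count of sixes
```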
| 596
|
probability distributions
|
Distribution of the top $m$ of $n$ samples of a Gaussian distribution?
|
https://stats.stackexchange.com/questions/435061/distribution-of-the-top-m-of-n-samples-of-a-gaussian-distribution
|
<p>I was wondering if there was an analytic description of the distribution of the largest <span class="math-container">$m$</span> of <span class="math-container">$n$</span> samples of a Gaussian distribution, where <span class="math-container">$n \geq m$</span>.</p>
<p>(As an example, I generated 100 samples from <span class="math-container">$\mathcal{N}(\mu = 0, \sigma = 1)$</span>. The average of the top 100 samples came to <span class="math-container">$0.0404$</span>, the average of the top 50 samples came to <span class="math-container">$0.832$</span>, the average of the top 10 samples came to <span class="math-container">$1.842$</span>, and the top 1 sample was 2.88.)</p>
<p>Intuitively, as <span class="math-container">$n$</span> increases and <span class="math-container">$m$</span> stays constant, or <span class="math-container">$m$</span> decreases and <span class="math-container">$n$</span> stays constant, the expected value increases. It's like taking the top <span class="math-container">$m$</span> applicants.</p>
<p>Is there a name or analytic description of such a distribution?</p>
<p>(I haven't been able to figure out how to do this, and I spent a lot of time searching. This seems like a simple / common problem and I'm surprised I haven't found anything on it.)</p>
<p>EDIT: After the comments below, I was able to find a good answer here: </p>
<p><a href="https://stats.stackexchange.com/questions/9001/approximate-order-statistics-for-normal-random-variables">Approximate order statistics for normal random variables</a></p>
| 597
|
|
probability distributions
|
Distribution of uniform RVs under sum constraint
|
https://stats.stackexchange.com/questions/438090/distribution-of-uniform-rvs-under-sum-constraint
|
<p>Suppose I generate <span class="math-container">$x_1,x_2,x_3,x_4$</span> through the following procedure:</p>
<ol>
<li>Sample <span class="math-container">$x_1,x_2,x_3 \sim \text{unif}(0, 1)$</span>, iid </li>
<li>While <span class="math-container">$x_1+x_2+x_3 > 1$</span>, resample them all</li>
<li>Let <span class="math-container">$x_4 = 1 - x_1 - x_2 - x_3$</span></li>
</ol>
<p>What is the distribution of the <span class="math-container">$x_1,x_2,x_3,$</span> and <span class="math-container">$x_4$</span> we end up with afterwards? I empirically found that they all seem to follow the same distribution, but can't figure out how to derive this distribution analytically. </p>
|
<p>The answer is: that's not true.
The procedure is equivalent to:</p>
<ol>
<li>generate <span class="math-container">$(X_1,X_2,X_3)$</span> such that <span class="math-container">$S=X_1+X_2+X_3 \leq 1$</span></li>
<li><span class="math-container">$X_4=(1-S)$</span></li>
</ol>
<p>In other words, <span class="math-container">$X_4= (1-S|S<1) $</span>.
Now we want to find the conditional distribution <span class="math-container">$(1-S|S<1)$</span>.</p>
<p><span class="math-container">$S=X_1+X_2+X_3$</span> and according to <a href="https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution</a></p>
<p><span class="math-container">\begin{eqnarray}
f_S(a)=\left\{
\begin{array}{cc}
\frac{1}{2} a^2 & 0\leq a <1 \\
\frac{1}{2}(-2a^2+6a-3) & 1\leq a<2 \\
\frac{1}{2} (a-3)^2 & 2\leq a<3
\end{array}
\right.
\end{eqnarray}</span></p>
<p>so <span class="math-container">$P(S\leq 1)=\int_{0}^{1} \frac{1}{2} a^2 da=\frac{1}{6}$</span> </p>
<p>and</p>
<p><span class="math-container">$P( 1-x \leq S\leq 1)=\int_{1-x}^{1} \frac{1}{2} a^2 da$</span>
<span class="math-container">$=\frac{1}{6} a^3|_{1-x}^{1}=\frac{1}{6}(1-(1-x)^3)$</span></p>
<p>so <span class="math-container">$F_{X_4}(x)=P(X_4\leq x)=P(1-S\leq x|S<1)=P(S\geq 1-x|S\leq 1)$</span></p>
<p><span class="math-container">$=\frac{P( 1-x \leq S\leq 1)}{P(S\leq 1)}=1-(1-x)^3 $</span> <span class="math-container">$,x<1$</span></p>
<p>This does not look like a uniform distribution, and <span class="math-container">$f_{X_4}(x)=3(1-x)^2$</span>, i.e. a <span class="math-container">$beta(1,3)$</span> distribution.</p>
<p>A simple simulation confirms that <span class="math-container">$X_4$</span> does not follow a uniform distribution:</p>
<pre><code>x4<-c()
count<-1
simu<-6*1000
for(i in 1:simu){
x<-runif(3)
s<-sum(x)
if(s<1) {x4[count]<-1-s;count<-count+1}
}
plot(density(x4),type="l",xlim=c(0,1))
> ks.test(x4,runif(length(x4)))
Two-sample Kolmogorov-Smirnov test
data: x4 and runif(length(x4))
D = 0.37026, p-value < 2.2e-16
alternative hypothesis: two-sided
</code></pre>
<p><a href="https://i.sstatic.net/BNdQO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BNdQO.png" alt="enter image description here"></a></p>
<p>It is similarly simple to check that <span class="math-container">$1-S|S>1$</span> does not have a uniform distribution.</p>
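<p>In the same spirit as the R simulation above, a Python re-sketch (seed mine) can check the derived CDF <span class="math-container">$F_{X_4}(x)=1-(1-x)^3$</span> directly:</p>

```python
import random

random.seed(0)
x4 = []
for _ in range(60000):
    xs = [random.random() for _ in range(3)]
    s = sum(xs)
    if s < 1:  # keep only accepted triples
        x4.append(1 - s)

# Acceptance rate should be about P(S <= 1) = 1/6.
accept_rate = len(x4) / 60000

# Empirical CDF at x = 0.5 vs the analytic value 1 - (1 - 0.5)^3 = 0.875.
ecdf_half = sum(v <= 0.5 for v in x4) / len(x4)
print(accept_rate, ecdf_half)
```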
| 598
|
probability distributions
|
If the odds are uniformly distributed, what is the distribution of the proportion?
|
https://stats.stackexchange.com/questions/448872/if-odd-is-uniformly-distributed-what-is-the-distribution-of-proportion
|
<p>Suppose <span class="math-container">$\pi=\frac{\theta}{1-\theta}$</span> where <span class="math-container">$\theta$</span> is in <span class="math-container">$[0,1]$</span>. </p>
<p>If we set a uniform prior for <span class="math-container">$\pi$</span> (<span class="math-container">$p(\pi) \propto 1$</span>), what is the induced prior on <span class="math-container">$\theta=
\frac{\pi}{1+\pi}$</span>? Is this prior proper?</p>
<p>I'm stuck on this problem. Can someone help me out?</p>
|
<p>If <span class="math-container">$\pi$</span> is uniformly distributed over <span class="math-container">$(0,a)$</span> then <span class="math-container">$\theta\in\left(0,\frac{a}{1+a}\right)$</span>. Then for <span class="math-container">$0<x<\frac{a}{1+a}$</span>
<span class="math-container">$$
F_\theta(x)=\mathbb P(\theta\leq x)=\mathbb P\left(\frac{\pi}{1+\pi}\leq x\right) = \mathbb P\left(\pi\leq \frac{x}{1-x}\right)=F_\pi\left(\frac{x}{1-x}\right)=\frac{x}{a(1-x)}.
$$</span>
And the pdf of <span class="math-container">$\theta$</span> should be <span class="math-container">$f_\theta(x)=\frac{1}{a(1-x)^2}\mathbb 1_{\left(0,\frac{a}{1+a}\right)}$</span>.</p>
<p>Note that <span class="math-container">$\left(0,\frac{a}{1+a}\right)\subset (0,1)$</span>. Say, for <span class="math-container">$a=1$</span>, <span class="math-container">$\left(0,\frac{a}{1+a}\right)=\left(0,\frac12\right)$</span>. </p>
<p>You cannot obtain <span class="math-container">$\theta$</span> with positive pdf over whole <span class="math-container">$(0,1)$</span> if <span class="math-container">$\pi$</span> is uniformly distributed since there cannot be uniform distribution over positive halfline. </p>
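<p>A quick numeric sanity check of the transformation for <span class="math-container">$a=1$</span> (seed mine): sample <span class="math-container">$\pi \sim U(0,1)$</span>, set <span class="math-container">$\theta = \pi/(1+\pi)$</span>, and compare the empirical CDF of <span class="math-container">$\theta$</span> with <span class="math-container">$\frac{x}{1-x}$</span>:</p>

```python
import random

random.seed(1)
n = 100000
theta = []
for _ in range(n):
    pi = random.random()          # pi ~ Uniform(0, 1), i.e. a = 1
    theta.append(pi / (1 + pi))   # the induced theta

x = 0.25
empirical = sum(t <= x for t in theta) / n
analytic = x / (1 - x)            # x / (a(1 - x)) with a = 1, here 1/3
print(empirical, analytic)
```

<p>As derived, all the mass of <span class="math-container">$\theta$</span> lies below <span class="math-container">$\frac{a}{1+a}=\frac12$</span>.</p>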
| 599
|