Question: <p>Let <span class="math-container">$X \sim \operatorname{N}\left(0,1\right)$</span> be a standard normally distributed random variable. Can someone please show me, step by step, how to find the density of <span class="math-container">$\left\vert X\right\vert$</span>?</p> <p><em>My tries:</em> I found pages where hints are given, and I want to do it with the cumulative distribution function, but the hints are brief and I didn't understand them. Can someone please explain how to do it for someone who has never worked with densities before?</p> <p>I only know the density of <span class="math-container">$X$</span>: <a href="https://en.wikipedia.org/wiki/Normal_distribution" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Normal_distribution</a></p> <p>I would be very thankful for any help.</p> Answer: <p>A standard normally distributed random variable has a distribution symmetric about <span class="math-container">$0$</span>, so <span class="math-container">$|X|$</span> has zero density below <span class="math-container">$0$</span> and double the standard normal density above <span class="math-container">$0$</span>.</p> <p>If you want to deal with CDFs, then if <span class="math-container">$Y=|X|$</span> where <span class="math-container">$F_X(x)=\Phi(x)$</span> then</p> <ul> <li><span class="math-container">$F_Y(x)=P(|X| \le x)$</span> <ul> <li>which is <span class="math-container">$0$</span> when <span class="math-container">$x &lt;0$</span></li> <li>and is <span class="math-container">$P( -x \le X \le x) = \Phi(x)-\Phi(-x)$</span> when <span class="math-container">$x\ge 0$</span></li> </ul> </li> <li>the density is the derivative of this <ul> <li><span class="math-container">$f_Y(x)=0$</span> when <span class="math-container">$x &lt;0$</span></li> <li><span class="math-container">$f_Y(x)= \phi(x)+\phi(-x) = 2\phi(x)$</span> when <span class="math-container">$x\ge 0$</span></li> </ul> </li> </ul> <p>This is called a <a 
href="https://en.wikipedia.org/wiki/Half-normal_distribution" rel="nofollow noreferrer">half-normal distribution</a></p>
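The CDF argument above can be checked numerically: differentiate $F_Y(x)=\Phi(x)-\Phi(-x)$ and compare with the closed form $2\phi(x)$. A minimal Python sketch (added for illustration, using only the standard library, with $\Phi$ built from `math.erf`):

```python
import math

def phi(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # standard normal CDF, written via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f_Y(x, h=1e-6):
    # density of Y = |X| for x > 0, as the numerical derivative of
    # F_Y(x) = Phi(x) - Phi(-x)
    return (Phi(x + h) - Phi(-(x + h)) - Phi(x - h) + Phi(-(x - h))) / (2 * h)

# compare against the half-normal density 2*phi(x) at a few points
max_err = max(abs(f_Y(x) - 2 * phi(x)) for x in (0.1, 0.5, 1.0, 2.0))
```

The numerical derivative agrees with $2\phi(x)$ to within the step-size error.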
https://math.stackexchange.com/questions/4163938/how-to-find-densitiy-of-x
Question: <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be two events such that <span class="math-container">$\Pr[A\cap B]=0.2$</span> and <span class="math-container">$0.3&lt;\Pr[\bar{A}\cap B]&lt;0.4$</span>. Find <span class="math-container">$a,b\in\mathbb{R}$</span> such that <span class="math-container">$a\leq \Pr[A]\leq b$</span>.</p> <p>I tried to use the fact that <span class="math-container">$$\Pr[A]=\Pr[A\cap B]+\Pr[A\cap \bar{B}]=0.2+\Pr[A\cap \bar{B}],$$</span> but I am not sure how to obtain a bound on <span class="math-container">$\Pr[A\cap \bar{B}]$</span>.</p> Answer: <p>First observe that</p> <p><span class="math-container">$$(A\cap B)\cup(\overline{A}\cap B)=B$$</span></p> <p>Thus</p> <p><span class="math-container">$$0.5&lt; \mathbb{P}[B]&lt; 0.6$$</span></p> <p>Since <span class="math-container">$\mathbb{P}[A]=\mathbb{P}[A\cap B]+\mathbb{P}[A\cap \overline{B}]$</span> and <span class="math-container">$0\leq \mathbb{P}[A\cap \overline{B}]\leq \mathbb{P}[\overline{B}]&lt;0.5$</span>, it follows that</p> <p><span class="math-container">$$0.2\leq \mathbb{P}[A]\leq 0.7$$</span></p> <p>Here is a Venn diagram showing the situation</p> <p><a href="https://i.sstatic.net/pq6Zg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pq6Zg.jpg" alt="enter image description here" /></a></p>
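A numeric sanity check of these bounds (an added sketch, not part of the original answer): write the four atoms of the Venn diagram as $P(A\cap B)$, $P(A\cap \bar{B})$, $P(\bar{A}\cap B)$, $P(\bar{A}\cap \bar{B})$, sweep the feasible assignments, and record every attainable value of $P(A)$.

```python
# Sweep probability assignments satisfying P(A∩B) = 0.2 and
# 0.3 < P(~A∩B) < 0.4, recording the attainable values of P(A).
p_AB = 0.2
attained = []
steps = 100
for i in range(1, steps):
    p_nAB = 0.3 + 0.1 * i / steps          # P(~A∩B), strictly inside (0.3, 0.4)
    free = 1.0 - p_AB - p_nAB              # mass left for A∩~B and ~A∩~B
    for j in range(steps + 1):
        p_AnB = free * j / steps           # any split of the free mass is legal
        attained.append(p_AB + p_AnB)      # P(A) = P(A∩B) + P(A∩~B)

lo, hi = min(attained), max(attained)
```

The sweep attains $P(A)=0.2$ exactly and approaches (but never reaches) $0.7$, matching $0.2\leq \Pr[A]\leq 0.7$ with $0.7$ as the supremum.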
https://math.stackexchange.com/questions/4165711/from-pra-cap-b-and-pr-bara-cap-b-deduce-pra
Question: <p>I'm trying to develop an algorithm for finding biased coins. The basic problem formulation is this:</p> <ol> <li>There are an infinite number of coins</li> <li>Some proportion <span class="math-container">$t$</span> of the coins is biased (this number is known)</li> <li>All biased coins have the same probability <span class="math-container">$p_b$</span> of coming up heads (this number is also known)</li> <li>All other coins are fair</li> <li>Biased coins are otherwise indistinguishable from fair ones</li> </ol> <p>The task is to find <strong>one</strong> biased coin, with some confidence, using the fewest number of coin flips.</p> <p>I know the basic solution to a related problem, i.e. determining whether a <em>single</em> coin is biased. Following the formulation in <a href="https://en.wikipedia.org/wiki/Checking_whether_a_coin_is_fair" rel="noreferrer">https://en.wikipedia.org/wiki/Checking_whether_a_coin_is_fair</a>, I can set my desired maximum error <span class="math-container">$E$</span> to be equal to <span class="math-container">$|p_b - 0.5|$</span> and use the equation <span class="math-container">$n = \frac{Z^2}{4E^2}$</span> to get the number of coin tosses <span class="math-container">$n$</span> required to determine whether the coin is indeed fair using a given <span class="math-container">$Z$</span> value.</p> <p>However, I'm curious how the method might change given my formulation, where there are multiple coins and a known proportion are biased. A brute force algorithm, I suppose, would be to select a coin, flip it <span class="math-container">$n$</span> times, select another coin, flip it <span class="math-container">$n$</span> times, etc. until a biased one is found. But this feels sub-optimal.</p> <p>Is it possible, for instance, to abandon a coin before <span class="math-container">$n$</span> flips is reached based on some criteria, i.e. 
using the evidence collected so far to judge whether it is worthwhile to keep flipping that coin or move on to another? It seems like the value of <span class="math-container">$t$</span>, particularly if it is low, should be a useful prior that I can leverage.</p> <p>I'm also concerned that if I test multiple coins, I am at risk of inadvertently finding significance where there is none.</p> Answer: <blockquote> <p>Is it possible, for instance, to abandon a coin before n flips is reached based on some criteria, i.e. using the evidence collected so far to judge whether it is worthwhile to keep flipping that coin or move on to another? It seems like the value of t, particularly if it is low, should be a useful prior that I can leverage.</p> </blockquote> <p>A Bayesian approach might go like this, for a single coin ... Let <span class="math-container">$\widetilde p$</span> be the unknown probability of success, with prior given by <span class="math-container">$P(\widetilde p =p_b)=t$</span> and <span class="math-container">$P(\widetilde p =1/2)=1-t$</span>. Then the posterior, after <span class="math-container">$n$</span> conditionally i.i.d. 
tosses <span class="math-container">$X_1,...X_n$</span>, with <span class="math-container">$k$</span> being the number of successes, is given by <span class="math-container">$$\begin{align} P(\widetilde p =p_b\mid X_1=x_1,...X_n=x_n) &amp;= {P(X_1=x_1,...X_n=x_n\mid \widetilde p =p_b)\cdot P(\widetilde p =p_b)\over P(X_1=x_1,...X_n=x_n)}\\[3mm] &amp;= {p_b^k(1-p_b)^{n-k}\cdot t\over P(X_1=x_1,...X_n=x_n)}\\ \\ \end{align}$$</span> and <span class="math-container">$$\begin{align}P(\widetilde p =1/2\mid X_1=x_1,...X_n=x_n) &amp;= {P(X_1=x_1,...X_n=x_n\mid \widetilde p =1/2)\cdot P(\widetilde p =1/2)\over P(X_1=x_1,...X_n=x_n)}\\[3mm] &amp;= {(1/2)^n\cdot (1-t)\over P(X_1=x_1,...X_n=x_n)} \end{align}$$</span></p> <p>Then we could, for example, use the posterior odds-ratio, <span class="math-container">$$R:={P(\widetilde p =p_b\mid X_1=x_1,...X_n=x_n) \over P(\widetilde p =1/2\mid X_1=x_1,...X_n=x_n)}={p_b^k(1-p_b)^{n-k}\over(1/2)^n }{t\over 1-t}$$</span> with a decision rule like this:</p> <p><span class="math-container">$$\ \begin{cases}R&gt;r &amp;\implies \text{decide "biased"}\\ R&lt;1/r &amp;\implies \text{decide "unbiased"} \end{cases} $$</span> ... tossing until one alternative becomes sufficiently more probable than the other, as judged by some threshold ratio value <span class="math-container">$r$</span>. To guarantee a bound on the number of tosses, one would probably want to set some maximum number <span class="math-container">$n_{max}$</span>, such that if <span class="math-container">$n_{max}$</span> were ever reached without a decision being made, then a default rule would be applied to stop and decide in favor of whichever alternative had the higher posterior probability at that time. 
(A hybrid Bayesian-frequentist approach could also set <span class="math-container">$n_{max}$</span> based on the frequentist estimation value you quoted.)</p> <hr /> <p>EDIT:</p> <p>Note that if we let <span class="math-container">$P_n := P(\widetilde p =p_b\mid X_1=x_1,...X_n=x_n)$</span> and subscript the corresponding <span class="math-container">$R$</span>-value, then <span class="math-container">$R_n$</span> and <span class="math-container">$P_n$</span> are in 1-1 correspondence, each being an increasing function of the other:</p> <p><span class="math-container">$$P_n = {R_n\over R_n+1},\quad R_n = {P_n\over 1-P_n}.$$</span></p> <p>Consequently the decision criteria can be written equivalently in terms of either quantity:</p> <p><span class="math-container">$$\text{using $(r_{lo}, r_{hi})$:}\quad\quad\begin{cases}R_n&gt;r_{hi}\\[6mm] R_n&lt;r_{lo}\end{cases}\iff\begin{cases}P_n&gt;{r_{hi}\over r_{hi}+1}\\[3mm] P_n&lt;{r_{lo}\over r_{lo}+1}\end{cases} $$</span> and <span class="math-container">$$\text{using $(p_{lo}, p_{hi})$:}\quad\quad\begin{cases}P_n&gt;p_{hi}\\[6mm] P_n&lt;p_{lo}\end{cases}\iff \begin{cases}R_n&gt;{p_{hi}\over 1-p_{hi}}\\[3mm] R_n&lt;{p_{lo}\over 1-p_{lo}}\end{cases} $$</span></p> <p>I've written <span class="math-container">$lo$</span> and <span class="math-container">$hi$</span> quantities in case you want to explore setting the two criteria independently of one another, instead of my example above using <span class="math-container">$(r_{lo},r_{hi})=({1\over r}, r)\iff (p_{lo},p_{hi})=({1\over r+1},{r\over r+1})$</span>. (E.g., <a href="https://math.stackexchange.com/a/4162623/16397">@quasi's answer</a> uses <span class="math-container">$(p_{lo},p_{hi})=(t, a)\iff (r_{lo}, r_{hi})=({t\over 1-t},{a\over 1-a})$</span> seeking to minimize the expected total number of flips required to decide that a biased coin has been found.)</p>
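The sequential rule is easy to simulate. The Python sketch below is illustrative only: the values of $t$, $p_b$, $r$, and $n_{max}$ are assumptions, not from the question; it tosses a single coin until the posterior odds ratio crosses a threshold, applying the default rule at $n_{max}$.

```python
import random

random.seed(1)
t, p_b = 0.1, 0.7        # assumed prior fraction of biased coins and their heads rate
r, n_max = 99.0, 200     # assumed odds threshold and cap on tosses per coin

def decide(is_biased):
    """Toss one coin until R > r, R < 1/r, or n_max tosses; return the decision."""
    R = t / (1 - t)                              # prior odds in favour of "biased"
    for _ in range(n_max):
        heads = random.random() < (p_b if is_biased else 0.5)
        # each toss multiplies R by the likelihood ratio of the observed outcome
        R *= (p_b if heads else 1 - p_b) / 0.5
        if R > r:
            return "biased"
        if R < 1 / r:
            return "unbiased"
    return "biased" if R > 1 else "unbiased"     # default rule at n_max

trials = 2000
accuracy = sum(decide(True) == "biased" for _ in range(trials)) / trials
```

With $r=99$ the rule classifies a truly biased coin correctly in nearly every run, usually well before the cap, rather than after a fixed number of tosses.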
https://math.stackexchange.com/questions/4160412/finding-a-rare-biased-coin-from-an-infinite-set
Question: <p>A bag contains 6 white balls, 5 black balls and 2 red balls. If two balls are drawn at random, what is the probability that neither of them are white?</p> <p>For this question, the method that I used was to consider the four possible cases, BB, RR, BR, RB.</p> <p>Therefore <span class="math-container">$$P = P(BB)+P(RR)+P(BR)+P(RB)=\frac{5}{13} \times \frac{4}{12} + \frac{2}{13} \times \frac{1}{12} + \frac{5}{13} \times \frac{2}{12} + \frac{2}{13} \times \frac{5}{12} = \frac{7}{26} $$</span></p> <p>which gives the correct answer, but I found there are two other ways to do this question, neither of which I understand.</p> <p>Alternative method 1):</p> <p><span class="math-container">$P = P(\text{red/black first pick}) \times P(\text{red/black second pick}) =\frac{5+2}{13} \times \frac{(5+2)-1}{13-1} = \frac{7}{26}$</span></p> <p>This obviously kind of makes sense, but I've just never seen a question done this way before, I would appreciate if someone could explain logic behind this.</p> <p>Alternative method 2) <span class="math-container">$$P = \frac{{}_5C{}_2 + {}_2C{}_2 + {}_5C{}_1 \times {}_2C{}_1}{_{13}C{}_2}=\frac{7}{26}$$</span></p> <p>What I don't understand about this method is why we didn't view BR and RB as two different cases, i.e., why we didn't do</p> <p><span class="math-container">$$P = \frac{{}_5C{}_2 + {}_2C{}_2 + {}_5C{}_1 \times {}_2C{}_1 + {}_2C{}_1 \times {}_5C{}_1 }{_{13}C{}_2} $$</span></p> Answer:
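A brute-force check of the $\frac{7}{26}$ computed in the question (an added enumeration sketch in Python):

```python
from fractions import Fraction
from itertools import permutations

bag = ["W"] * 6 + ["B"] * 5 + ["R"] * 2        # 6 white, 5 black, 2 red
# all ordered ways to draw two distinct balls from the 13
draws = list(permutations(range(13), 2))
no_white = sum(bag[i] != "W" and bag[j] != "W" for i, j in draws)
p = Fraction(no_white, len(draws))             # probability neither ball is white
```

The unordered count $\binom{7}{2}/\binom{13}{2}$ gives the same value, which is why alternative method 2 does not treat BR and RB as separate cases: the denominator $\binom{13}{2}$ counts unordered pairs, so the favourable cases must be counted unordered as well.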
https://math.stackexchange.com/questions/4165726/a-bag-contains-6-white-balls-5-black-balls-and-2-red-balls
Question: <p>I studied math many years ago (I am an engineer) and don't remember exactly how to calculate the probability for such a problem. I choose n values from a set of m and want to calculate the probability that all of them are unique, or that there is at most <span class="math-container">$1$</span> repeat, at most <span class="math-container">$2$</span> repeats, or in general at most <span class="math-container">$k$</span> repeats.</p> <p>I have a set of values <span class="math-container">$0 .. m$</span> and I make <span class="math-container">$n \geq 1$</span> choices from this set. The number of choices is less than or equal to the set size, <span class="math-container">$n \leq m$</span>. After every choice I return the selected value to the set <span class="math-container">$0 .. m$</span>.</p> <p>Maybe it is a simple problem, but I have forgotten the theory (or it is hidden deep in memory) and I cannot solve it easily.</p> <p>Examples:</p> <ol> <li>if the set is <span class="math-container">$m = 2$</span> and we choose <span class="math-container">$n = 2$</span>, the probability of not choosing the same value twice is <span class="math-container">$\frac{1}{2}.$</span></li> <li>if the set is <span class="math-container">$m = 3$</span> and we choose <span class="math-container">$n = 2$</span>, the probability is <span class="math-container">$\frac{6}{9}$</span>, because there are <span class="math-container">$3*3$</span> ordered outcomes and <span class="math-container">$3$</span> distinct pairs, each in <span class="math-container">$2$</span> orders.</li> <li>if the set is <span class="math-container">$m = 3$</span> and we choose <span class="math-container">$n = 3$</span> <span class="math-container">$-&gt;$</span> <span class="math-container">$p = \frac{6}{27}$</span> (<span class="math-container">$3*3*3$</span> ordered outcomes and <span class="math-container">$3*2$</span> favourable ones)</li> </ol> <p>How can this be generalized with a probability equation?</p> Answer: <p>To calculate the probability of no repeats, observe</p> <p><em>Total ordered sequences</em> = <span class="math-container">$m\times m\times \dots\times m = m^n $</span></p> <p><em>Number of sequences with each element distinct</em> = <span class="math-container">$\binom{m}{n} \times n!=\frac{m!}{(m-n)!} $</span></p> <p>The probability is thus <span class="math-container">$$ \frac{m!}{m^n(m-n)!} $$</span></p>
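A quick check of this closed form against the three examples in the question (an added Python sketch using exact fractions):

```python
from fractions import Fraction
from math import factorial

def p_all_distinct(m, n):
    # probability that n draws with replacement from m values are all distinct:
    # m! / (m^n * (m-n)!)
    return Fraction(factorial(m), m**n * factorial(m - n))

ok = (p_all_distinct(2, 2) == Fraction(1, 2)        # example 1
      and p_all_distinct(3, 2) == Fraction(6, 9)    # example 2
      and p_all_distinct(3, 3) == Fraction(6, 27))  # example 3
```

This is the birthday-problem formula; for instance, with $m=365$ and $n=23$ the probability of all-distinct values drops below $1/2$.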
https://math.stackexchange.com/questions/4166183/what-is-probability-of-choose-already-choosen-when-we-choose-n-values-from-set-l
Question: <p>How do we show that if $\sum E(|X_n - X|^r) &lt; \infty$ then $X_n {\to} X$ almost surely for $r &gt; 0$?</p> <p>I know that it's true for $\sum P(|X_n - X| &gt; \epsilon) &lt; \infty$, but how do we extend this to account for the $r$-th mean?</p> Answer: <p>By Tonelli's theorem (the summands are non-negative), $\sum E(|X_{n}-X|^{r}) = E(\sum |X_{n}-X|^{r}) &lt; \infty$. A non-negative random variable with finite expectation is finite almost surely, so $\sum|X_{n}-X|^{r} &lt; \infty$ a.s. Since the terms of a convergent series tend to zero, $|X_{n}-X|^{r} \rightarrow 0$ a.s., and therefore $X_{n} \rightarrow X$ a.s.</p>
https://math.stackexchange.com/questions/2770386/sum-ex-n-xr-converges-implies-almost-sure-convergence
Question: <p>I recently did a question: Numbers are selected at random, one at a time, from the two-digit numbers {00-99} with replacement. An event E occurs if and only if the product of the two digits of a selected number is 18. If four numbers are selected, find the probability that the event E occurs at least 3 times.</p> <p>The possible numbers are (29, 36, 63, 92), and the probability was calculated by considering cases. For the case of 3 successes and one failure, the binomial probability distribution formula was used.</p> <p>In this question it does make sense to use it, since the numbers are drawn one by one and with replacement. If instead the numbers were drawn simultaneously, would it still make sense to use the binomial probability distribution? Since now the order of successes and failures would not matter!?</p> Answer: <p>The <em>count of successes</em> among a <em>specified number</em> of <em>Bernoulli trials</em>, each with an <em>independent and identically distributed</em> success rate, is a <strong>Binomially Distributed</strong> random variable.</p> <p>Order of the sequence is not one of these criteria.</p> <p><strong>However</strong>, when drawing simultaneously, the results <em>may not</em> be independent. If they are not, you may instead have a <em>hypergeometric distribution</em>.</p>
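For the worked example in the question the binomial computation is short: $E$ has probability $p=4/100$ on a single draw (the four numbers 29, 36, 63, 92), so with four independent draws the count of occurrences is $\text{Binomial}(4, p)$. An added Python sketch with exact fractions:

```python
from fractions import Fraction
from math import comb

p = Fraction(4, 100)   # P(E) on one draw: {29, 36, 63, 92} out of 100 numbers
n = 4
# P(E occurs at least 3 times) = P(exactly 3) + P(exactly 4)
p_at_least_3 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (3, 4))
```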
https://math.stackexchange.com/questions/4166345/use-of-binomial-probability-distribution-if-events-occur-simultaneously
Question: <p>I'm stuck at solving the following problem: toss 3 fair coins independently. Let A be the event &quot;you get at least one head&quot; and B the event &quot;you get exactly one tail&quot;. What is the probability of the event <span class="math-container">$A \cup B$</span>?</p> Answer: <p>Note that <span class="math-container">$\overline{A\cup B}=\overline{A}\cap \overline{B}$</span>. Now <span class="math-container">$\overline{A}$</span> is &quot;you get no heads&quot; and <span class="math-container">$\overline{B}$</span> is &quot;you do not get exactly one tail&quot;. Only one outcome fits <span class="math-container">$\overline{A}$</span>, namely TTT, and it also fits <span class="math-container">$\overline{B}$</span>.</p> <p>So <span class="math-container">$\Pr(\overline{A}\cap\overline{B})=\tfrac{1}{2^3}$</span> and <span class="math-container">$\Pr(A\cup B)=1-\tfrac{1}{2^3}$</span>.</p>
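The complement argument can be confirmed by brute force, enumerating all $2^3$ outcomes (an added Python sketch):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=3))           # all 8 equally likely outcomes
A = {o for o in outcomes if "H" in o}              # at least one head
B = {o for o in outcomes if o.count("T") == 1}     # exactly one tail
p_union = Fraction(len(A | B), len(outcomes))
```

Only TTT lies outside $A\cup B$ (indeed $B\subset A$, since exactly one tail means two heads), so the probability is $7/8$.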
https://math.stackexchange.com/questions/4107396/coin-tosses-probability-to-calculate
Question: <p><strong>Q.1)</strong> A family has $n$ children, $n\geq2$. We ask the father, "Do you have at least one daughter named Lilia?" He replies, "Yes!". What is the probability that all of their children are girls? </p> <p>In other words, we want to find the probability that all $n$ children are girls, given that the family has at least one daughter named Lilia. </p> <p>Here we can assume that if a child is a girl, her name will be Lilia with probability $\alpha\ll1$ independently from other children's names. If the child is a boy, his name will not be Lilia.</p> <p><strong>Q.2)</strong> In a family of $n$ children, we pick one of them and find that she is a girl. What is the probability that all children are girls?</p> <hr> <p>My solution to Q.1)</p> <p>$$ \begin{equation} \begin{split} P(\text{all are girls | at-least one named Lilia}) &amp;= \frac{P(\text{at-least one named Lilia | all are girls})\ \times\ P(\text{all are girls})}{P(\text{at-least one named Lilia})}\\ &amp;= \frac{{n\choose1}\ \alpha\ (1-\alpha)^{n-1}\ \times\ \frac{1}{2^n}}{{n\choose1}\ \alpha \ \frac{1}{2^{n-1}}} \end{split} \end{equation}$$</p> <p>My solution to Q.2)</p> <p>$$\begin{equation} \begin{split} P(\text{all are girls | at-least one girl}) &amp;= \frac{P(\text{at-least one girl | all are girls})\ \times\ P(\text{all are girls})}{P(\text{at-least one girl})}\\ &amp;= \frac{1\ \times\ \frac{1}{2^n}}{{n\choose1}\ \frac{1}{2} \ \frac{1}{2^{n-1}}} \end{split} \end{equation}$$</p> Answer: <p>Alright, so the first question seems to be confusing some people. Let $B$ be the event of all boys. The complement of $B$ is $B^c$, which is the event of at least one girl. Here is where the confusion then plays out: is $B^c$ the same as a girl named Lilia? Does this mean that if you have at least one girl, then her name will be Lilia? 
No, because her name could have been any other name; the fact that the child is named Lilia simply indicates that in addition to having a child named Lilia, you now have at least one girl. Now let $L$ be the event that none of the $n$ children is named Lilia. There is some more hidden information. When the problem states that the probability of Lilia is $\alpha\ll1$, the problem actually assumes that you are expected to see the names of other girls repeat, such as Mary, before the name Lilia is first encountered. That being said, let $L^c$ be the event of at least 1 Lilia. Let $G$ be the event that all children are girls, and let $c_i$ denote the $i_{th}$ child. Finally consider Bayes' Rule: $$P(G|(B^c\cap{L^c})){\times}P(B^c\cap{L^c})= P((B^c\cap{L^c})|G){\times}P(G)$$ The problem specifically asks for $P(G|(B^c\cap{L^c}))$. So now it is time to find all the other pieces of the equation. Assuming that child $c_i$ has an equal probability of being a girl or a boy, then $P(G) = ({\frac{1}{2}})^n$. Moving on to the next term $P((B^c\cap{L^c})|G)$, which denotes the probability of at least one girl, and at least one Lilia, given that all the children are girls. If a child that is also a girl has a probability $\alpha$ of being named Lilia, then that child has a probability of $1-\alpha$ of not being named Lilia. $L$ is the event that none of the children are named Lilia, and in this particular case, there are n girls, so now consider the probability that none of the n girls are named Lilia, which is $(1-\alpha)^n$. However, in reality, the event of at least one Lilia is sought, thus $P((B^c\cap{L^c})|G) = (1-(1-\alpha)^n)$. Here you can ignore the $B^c$, the event of at least one girl, because you are given n girls as per the conditional. </p> <p>Finally consider the last portion of the problem, $P(B^c\cap L^c)$. This is the event of at least one girl, and at least one girl named Lilia. 
One basic rule in Set theory essentially allows this problem to be reformulated as $P((B \cup L)^c)$, the complement of the event that all children are boys or no child is named Lilia. Then it follows that $P((B \cup L)^c) = 1 - P(B \cup L)$. To find how these quantities interact, consider the Addition Rule of non-disjoint sets such that $P(B \cup L) = P(B)+P(L)-P(B \cap L)$. The event of all boys is equal to $P(B) = (\frac{1}{2})^n$, and then $P(B \cap L) = P(L|B) \times P(B) = 1 \times (\frac{1}{2})^n$. This is true because if you have all boys, then the probability of no Lilias is subsequently 1. This means that $P(B \cup L) = P(L)$. So what is the probability that no child will be named Lilia? It is important to consider that this includes both boys and girls, because boys will certainly not be named Lilia, and only a select number of girls will be named Lilia. Let $L_i$ be the event where the $i_{th}$ child is not named Lilia. Then $$P(L_i) = P(L_i \cap c_i=g)+P(L_i \cap c_i=b) = P(L_i|c_i=g) \times P(c_i=g) + P(L_i|c_i=b) \times P(c_i=b)$$ This yields $P(L_i) = (1-\alpha) \times (\frac{1}{2})+(1) \times (\frac{1}{2}) = (\frac{1}{2})(2-\alpha)$. Then the probability that all n children are not named Lilia is $P(L) = ((\frac{1}{2})(2-\alpha))^n = \frac{(2-\alpha)^n}{2^n}$. Going back a little, $P((B \cup L)^c) = 1-P(L) = 1-\frac{(2-\alpha)^n}{2^n}= \frac{2^n-(2-\alpha)^n}{2^n} $</p> <p>Finally, $$P(G|(B^c\cap{L^c})) = \frac{P((B^c\cap{L^c})|G){\times}P(G)}{P(B^c\cap{L^c})}$$</p> <p>$$= \left((1-(1-\alpha)^n) \times \frac{1}{2^n}\right) \div \frac{2^n-(2-\alpha)^n}{2^n}$$</p> <p>$$= \frac{1-(1-\alpha)^n}{2^n} \times \frac{2^n}{2^n-(2-\alpha)^n} = \frac{1-(1-\alpha)^n}{2^n-(2-\alpha)^n}$$</p> <p>Obviously the answer is completely dependent on being given a girl named Lilia. It might not seem intuitive at first, but consider it this way: instead of boys and girls, use blue and red colored shapes. Blue shapes have an odd number, and red shapes have an even number. 
Also, imagine having way more lower numbers than larger numbers. So if I tell you that there is at least one red colored shape with a really large even number, then the predicted number of reds should go up from your initial estimate. </p> <p><strong>Problem 2</strong> Okay, and for problem 2, I am assuming that a girl is picked at random. If that's the case, then the problem is quickly reduced. First let $E$ be the event of randomly picking a girl. However this isn't all that is actually said. In reality, you picked one of the children, and then it turns out that the child may be either a boy or a girl. Let $p_i$ be the event of selecting the $i_{th}$ child. This is all equivalent to $E = (C_1 \cap p_1) \cup (C_2 \cap p_2) \cup ... \cup (C_n \cap p_n)$. Note that these events are disjoint; that is, picking the 1st child is separate from picking the 2nd child and so on. Thus $$P(E) = P(C_1 \cap p_1) +P(C_2 \cap p_2) + ... +P(C_n \cap p_n) = P(C_1 | p_1) \times P(p_1) + P(C_2 | p_2) \times P(p_2) + ... + P(C_n | p_n) \times P(p_n) $$ Thus, $$P(E) = (\frac{1}{2} \times \frac{1}{n}) + (\frac{1}{2} \times \frac{1}{n}) + ... + (\frac{1}{2} \times \frac{1}{n}) = (\frac{1}{2n}) \times (1+1...+1) = \frac{n}{2n} = \frac{1}{2}$$</p> <p>Now use Bayes' Rule to solve the remainder of the problem. Let $G$ be the event that all children are girls. So $$P(G | (B^c \cap E)) \times P(B^c \cap E) = P((B^c \cap E) | G) \times P(G)$$ where $P(B^c \cap E) = P(B^c | E) \times P(E)$ and $P((B^c \cap E) | G) \times P(G) = 1 \times \frac{1}{2^n}$. So if you are given all girls, the probability of at least 1 girl AND picking a girl at random is now 1. Hopefully the probability of G is obvious. It should also be obvious that if you pick any girl at random, then you have at least 1 girl, so that is simply 1. 
Then, $$P(G | (B^c \cap E)) = (\frac{1}{2^n}) \div (1 \times \frac{1}{2}) = (\frac{1}{2^n}) \times 2 = \frac{1}{2^{n-1}}$$ A similar but more complicated approach can be reached using the binomial expansion although it can be tedious to follow along. </p>
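A Monte Carlo check of the closed form for Q.1, $\frac{1-(1-\alpha)^n}{2^n-(2-\alpha)^n}$, is straightforward; the values of $n$ and $\alpha$ in this added Python sketch are illustrative assumptions:

```python
import random

random.seed(0)
n, alpha = 3, 0.05
trials = 200_000
families_with_lilia = all_girls = 0

for _ in range(trials):
    girls = [random.random() < 0.5 for _ in range(n)]     # True = girl
    # each girl is independently named Lilia with probability alpha
    has_lilia = any(g and random.random() < alpha for g in girls)
    if has_lilia:
        families_with_lilia += 1
        all_girls += all(girls)

estimate = all_girls / families_with_lilia
exact = (1 - (1 - alpha) ** n) / (2 ** n - (2 - alpha) ** n)
```

For $n=3$, $\alpha=0.05$ the exact value is about $0.244$, noticeably above the unconditional $1/2^3$, and the simulation lands within sampling error of it.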
https://math.stackexchange.com/questions/1893041/conditional-probability-what-is-the-prob-that-all-are-girls-given-that-there
Question: <p>I was wondering, if you flip a fair coin $5$ times, whether the probability of getting at least one head can be calculated like this:</p> <p>The complement of getting at least one head is TTTTT: $\dfrac1{2^5} =\dfrac1{32}$</p> <p>Then you do $$1-\frac1{32}= \frac{31}{32}\;,$$ so that's the probability of getting at least one head in the five flips?</p> <p>Thanks! </p> Answer: <p>Yes, that's correct. This technique is often called <em>complementary counting</em>.</p>
https://math.stackexchange.com/questions/144499/probability-of-heads-in-a-coin
Question: <p>I am trying to calculate the percentage of winning for a certain event but cannot find the right approach or an easier way to exclude special cases. </p> <p>Problem: In many Trading Card Games (TCG) players are given the option to enter tournaments that reward them based on the number of wins they can achieve. Because said tournaments charge an initial entry fee and only give it back if one reaches a certain number of wins, I am trying to determine the rate at which one comes out even or ahead. </p> <p>Tournament Rules: A player plays until incurring 3 losses or reaching 7 wins, whichever comes first. I am basing the following math on a deck with a win rate of 50%.</p> <p>At first I calculated it through binomial probability; however, due to the nature of the problem the number of trials changes. When you go 7-0 you technically don't play the 3 losses, so that's 7 successes out of 7 trials, right? The probability of going 7-0 would then be:<br> .5 * .5 * .5 * .5 * .5 * .5 *.5 = 0.0078125 </p> <p>I calculated the probabilities for going 7-1 with the number of trials at 8, 7-2 at 9, 6-3 at 9, 5-3 at 8 and so forth until going 0-3 at 3. I know I did something wrong because when I added the probabilities it gave me ~1.57. </p> <p>When I asked a friend he suggested that I map out all the combinations for each scenario, so I did that. Using binomial coefficients I then came up with 238 possible combinations:</p> <p>7-0 [1] </p> <p>7-1 [8]</p> <p>7-2 [36]</p> <p>6-3 [84]</p> <p>5-3 [56]</p> <p>4-3 [35]</p> <p>2-3 [10]</p> <p>1-3 [4]</p> <p>0-3 [1]</p> <p>Is this the correct approach? Even with this approach there are cases in the combinations that are invalid. For example, for 6-3 it gives me the combination L-L-L-W-W-W-W; while this sequence of losses and wins is counted, it is not valid, since the game would kick you out after the 3 losses. </p> <p>My head is stumped as to how to find the percentages, and if someone could shed some light on how to approach the problem I would greatly appreciate it. Thanks. </p> Answer: <p>I'm not sure I understand the problem, but what I think you're saying is that you play a series of games, each of which you have a probability of 1/2 of winning, and the series ends whenever your cumulative score reaches -3 or +7, whichever comes first.</p> <p>If this interpretation is correct, then another way to look at the problem is that you have 3 coins and "the system" has 7 coins. You give the system a coin each time you lose, and the system gives you a coin each time you win. The series is over when either of you runs out of coins. You would like to know the probability that you win, i.e. you win all the system's coins.</p> <p>This is a well-known problem in probability called the <a href="https://en.wikipedia.org/wiki/Gambler%27s_ruin" rel="nofollow noreferrer">Gambler's Ruin Problem</a>. When Player 1 has <span class="math-container">$n_1$</span> coins and Player 2 has <span class="math-container">$n_2$</span> coins, it turns out that Player 1's probability of winning all of Player 2's coins is <span class="math-container">$$P_1 = \frac{n_1}{n_1+n_2} \tag{*}$$</span> In your case, we have <span class="math-container">$n_1=3$</span> and <span class="math-container">$n_2=7$</span>, so your probability of winning is <span class="math-container">$3/10$</span>.</p> <p>One simple way to see (*) is to imagine that Player 1 has a team of <span class="math-container">$n_1$</span> little players on his side, and Player 2 has a team of <span class="math-container">$n_2$</span> little players. Each game, two of the little players are chosen at random to compete, and each has a probability of <span class="math-container">$1/2$</span> of winning. This goes on until the series ends. 
One of the little players must be the winner of the final game, and by symmetry, all the little players are equally likely. So the probability that the winner of the final game is one of Player 1's team is <span class="math-container">$n_1/(n_1+n_2)$</span>.</p>
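The gambler's-ruin reading is easy to simulate (an added Python sketch; the walk tracks the cumulative score, ending at $+7$ or $-3$):

```python
import random

random.seed(7)

def player_wins(n1=3, n2=7):
    # fair random walk starting at 0; True if it hits +n2 before -n1
    score = 0
    while -n1 < score < n2:
        score += 1 if random.random() < 0.5 else -1
    return score == n2

trials = 100_000
p_win = sum(player_wins() for _ in range(trials)) / trials  # theory: 3/10
```

Note this models the answer's net-score interpretation; the tournament as originally stated (stop at 3 total losses or 7 total wins) is a different stopping rule.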
https://math.stackexchange.com/questions/2972892/how-to-calculate-at-most-with-special-cases-removed
Question: <p>I'm trying to understand the following question:</p> <blockquote> <p>An engineer conducts tests to find out if circuits of a certain type are prone to overheating. 30% of all such circuits are prone to overheating. If the circuit is prone to overheating, the test will report it is not prone to overheating with probability 0.1, prone to overheating with probability 0.7, and produce an inconclusive result with probability 0.2. If it is not prone to overheating, the test will report it is not prone to overheating with probability 0.6, prone to overheating with probability 0.3, and produce an inconclusive result with probability 0.1. The experiment is performed twice on a particular circuit; the first time it produces an inconclusive result and the second time it reports that the circuit is prone to overheating. Assuming the results of the two tests are independent, what is the probability the circuit is prone to overheating, given the outcome of the tests?</p> </blockquote> <p>This is how I tried to solve the question:</p> <blockquote> <p>$$P(O) = 0.3 \ \quad P(O^c)= 0.7 \\ P(N|O) = 0.1 \quad P(P|O) = 0.7 \quad P(I|O) = 0.2 \\ P(N|O^c) = 0.6 \quad P(P|O^c) = 0.3 \quad P(I|O^c) = 0.1 \\ \\ P(O|I)= \frac {P(I|O)P(O)}{P(I|O)P(O) + P(I|O^c)P(O^c)} = \frac{0.2*0.3}{0.2*0.3 + 0.1*0.7} = \frac{6}{13}\\ P(O|P)= \frac {P(P|O)P(O)}{P(P|O)P(O) + P(P|O^c)P(O^c)} = \frac{0.7*0.3}{0.7*0.3 + 0.3*0.7} = \frac{1}{2}\\$$ So the probability that the circuit is prone to overheating is $\frac{6}{13}* \frac{1}{2} = \frac{3}{13} $</p> </blockquote> <p>My answer was incorrect. The actual method is:</p> <blockquote> <p>The probability that the circuit is prone to overheating and we observe the test results we have seen is: 0.3 × 0.2 × 0.7 = 0.042. The probability that the circuit is not prone to overheating and we observe the test results we have seen is: 0.7 × 0.1 × 0.3 = 0.021. The probability that we observe the test results we have seen is: 0.042 + 0.021 = 0.063. 
Therefore, the conditional probability that the circuit is prone to overheating given the outcomes of the tests is: $\frac {0.042}{0.063} = \frac{2}{3}$</p> </blockquote> <p>My understanding is clearly not correct. Could someone explain why my method doesn't work?</p> Answer: <p>Your method doesn't work because you have to find</p> <p>P(O | I on $1^{st}$ test $\cap$ P on $2^{nd}$ test).</p> <p>What you have calculated is $P(O | \text{I on a test}) * P (O | \text{P on a test})$, which isn't the probability of a specific event.</p> <p>The first part:</p> <p>$P(O | \text{I on a test})$</p> <p>includes cases where you get I on a test but not P on the other, and the second part:</p> <p>$P (O | \text{P on a test})$</p> <p>includes cases where you get P on a test but not I on the other.</p> <p><strong>You need to find the conditional probability given both I and P happen: $I \cap P$.</strong></p> <hr> <p>Calculation for completeness:</p> <p>For simplicity I'll call the events I and P.</p> <p>$P(O | I \cap P) = \frac{P(O \cap I \cap P)}{P(I \cap P)}$</p> <p>$P(O | I \cap P) = \frac{P(O \cap I \cap P)}{P(O \cap I \cap P) + P(O^c \cap I \cap P)}$</p> <p>$P(O \cap I \cap P) = P(O)*P(I \cap P | O) = 0.3 * 0.2 * 0.7 = 0.042 $</p> <p>$P(O^c \cap I \cap P) = P(O^c)*P(I \cap P | O^c) = 0.7 * 0.1 * 0.3 = 0.021 $</p> <p>$P(O | I \cap P) = \frac{0.042}{0.042 + 0.021} = \frac{0.042}{0.063} = \frac{2}{3} $</p>
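The corrected calculation is mechanical enough to write down exactly (an added Python sketch using `fractions`):

```python
from fractions import Fraction

P_O = Fraction(3, 10)                                # prior: prone to overheating
P_I_O, P_P_O = Fraction(2, 10), Fraction(7, 10)      # P(I|O), P(P|O)
P_I_Oc, P_P_Oc = Fraction(1, 10), Fraction(3, 10)    # P(I|O^c), P(P|O^c)

# joint probability of (I on test 1, P on test 2) under each hypothesis
num = P_O * P_I_O * P_P_O                  # 0.3 * 0.2 * 0.7 = 0.042
den = num + (1 - P_O) * P_I_Oc * P_P_Oc    # + 0.7 * 0.1 * 0.3 = 0.063
posterior = num / den                      # 2/3
```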
https://math.stackexchange.com/questions/1737564/conditional-probability-question-understanding-mistake
Question: <p>I'm new to probability and I'm currently studying its axiomatic definition. I'm having a really hard time trying to understand the following exercise:</p> <p>" Tomorrow there is an exam. Esther has studied really hard, and she only has <span class="math-container">$\frac 1 5$</span> probability of not passing the exam.</p> <p>David has studied less, and he has <span class="math-container">$\frac 1 3$</span> probability of not passing the exam. We know that the probability of both not passing the exam is <span class="math-container">$\frac 1 8$</span>.</p> <p>What is the probability that at least one of them does not pass the exam? "</p> <p>From the statement, we know that <span class="math-container">$P(A\cap B)=\dfrac{1}{8}$</span></p> <p>My question is: How is that value achieved? How is it that the intersection of <span class="math-container">$\dfrac{1}{5}$</span> and <span class="math-container">$\dfrac{1}{3}$</span> equals <span class="math-container">$\dfrac{1}{8}$</span>?</p> <p>Thanks in advance for all your help! </p> Answer: <blockquote> <p>My question is: How is that value achieved? How is it that the intersection of <span class="math-container">$\tfrac{1}{5}$</span> and <span class="math-container">$\tfrac{1}{3}$</span> equals <span class="math-container">$\tfrac{1}{8}$</span>?</p> </blockquote> <p>Well, <span class="math-container">$\frac 18&lt;\min\{\frac 15,\frac 13\}$</span> so this is possible. &nbsp; Knowing <span class="math-container">$\mathsf P(A)$</span> and <span class="math-container">$\mathsf P(B)$</span> does not <em>alone</em> tell you what <span class="math-container">$\mathsf P(A\cap B)$</span> is; just that <span class="math-container">$0\leq\mathsf P(A\cap B)\leq\min\{\mathsf P(A),\mathsf P(B)\}$</span>.
&nbsp; The intersection of two events <em>may</em> be anything from empty to being the entirety of the smallest event.</p> <p>Okay, <em>when</em> the events are independent, <em>then</em> the probability of their intersection is the product of their probabilities. &nbsp; However, <em>because</em> this probability is not that, <em>therefore</em> David's and Esther's performances on the exam are not independent.</p> <p>Perhaps they shared faulty study material. &nbsp; It doesn't really matter. </p> <p>You want to find <span class="math-container">$\mathsf P(A\cup B)$</span> knowing <span class="math-container">$\mathsf P(A), \mathsf P(B),$</span> and <span class="math-container">$\mathsf P(A\cap B)$</span>. &nbsp; You <em>can</em> do that.</p>
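Both the feasibility bound and the final inclusion-exclusion step can be checked mechanically. A sketch with the exercise's numbers (variable names are ours):

```python
from fractions import Fraction

p_a = Fraction(1, 5)   # P(Esther does not pass)
p_b = Fraction(1, 3)   # P(David does not pass)
p_ab = Fraction(1, 8)  # P(both do not pass), given in the problem

# The given joint probability is feasible but is NOT the independent product:
assert 0 <= p_ab <= min(p_a, p_b)
assert p_ab != p_a * p_b   # 1/8 != 1/15, so the events are dependent

# Inclusion-exclusion: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
p_union = p_a + p_b - p_ab
print(p_union)  # 49/120
```

The printed value answers the exercise's actual question: the probability that at least one of them does not pass.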
https://math.stackexchange.com/questions/2978525/axiomatic-probability-intersection-formula
Question: <p>Question: A fair coin is independently flipped <span class="math-container">$n$</span> times, <span class="math-container">$k$</span> times by <span class="math-container">$A$</span> and <span class="math-container">$n − k$</span> times by <span class="math-container">$B$</span>. Show that the probability that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> flip the same number of heads is equal to the probability that there are a total of <span class="math-container">$k$</span> heads.</p> <p>I know the probability of getting heads or tails is the same for each because the coin is fair. I also know the probability of an arbitrary number, say, <span class="math-container">$m$</span> heads is equal to the probability of getting <span class="math-container">$m$</span> tails. </p> <p>So I know <span class="math-container">$P(A$</span> gets <span class="math-container">$x$</span> tails) = <span class="math-container">$P(B$</span> gets <span class="math-container">$x$</span> heads)</p> <p>However, I'm confused as to where to go and how to apply this to the problem. Any help appreciated!</p> Answer: <p>By symmetry we could assume <span class="math-container">$k \le n-k$</span>.
The probability that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> flip the same number of heads would be <span class="math-container">$\sum_{i=0}^{k}{\binom{k}{i}\binom{n-k}{i}(\frac{1}{2})^n} = (\frac{1}{2})^n\sum_{i=0}^{k}{\binom{k}{k-i}\binom{n-k}{i}} = (\frac{1}{2})^n\binom{n}{k}$</span>, which is exactly the probability of getting <span class="math-container">$k$</span> heads.</p> <p>The second equality is Vandermonde's identity, a basic combinatorics formula: choosing <span class="math-container">$k$</span> of <span class="math-container">$n$</span> balls can be done by choosing <span class="math-container">$k-i$</span> from the first <span class="math-container">$k$</span> balls and then choosing <span class="math-container">$i$</span> from the remaining <span class="math-container">$n-k$</span> balls.</p>
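Vandermonde's identity can be spot-checked by brute force with exact integer arithmetic. A sketch (the helper name is ours):

```python
from math import comb

def tie_count(n, k):
    """Number of outcomes (out of 2^n) in which A's k coins and
    B's n-k coins show the same number of heads."""
    return sum(comb(k, i) * comb(n - k, i) for i in range(min(k, n - k) + 1))

# Vandermonde's identity says this count equals C(n, k), the number of
# outcomes with k heads in total.
for n in range(1, 13):
    for k in range(n + 1):
        assert tie_count(n, k) == comb(n, k)
print("identity verified for all n <= 12")
```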
https://math.stackexchange.com/questions/2978537/showing-probability-that-a-and-b-flip-the-same-number-of-heads-is-equal-to-a
Question: <p>If $q$ of the elements must always stay together, without having any specific order among themselves, what is the number of permutations of $r$ elements taken from $n$ elements?</p> <p>For example, suppose we have the 5 letters $A, B, C, E, F$. If A and E always stay together, how many permutations are possible if we use 3 characters at a time?</p> <p>In this case, 12 results are possible: $AEB$, $AEC$, $AEF$, $BAE$, $BEA$, $CAE$, $CEA$, $EAB$, $EAC$, $EAF$, $FAE$, $FEA$?</p> <p>What would be the general formula for this kind of problem?</p> Answer: <p>Suppose that out of the $n$ elements, $q$ elements stay together. We can then treat them as a single group, leaving $n - q$ individual elements plus that one group. Since the $q$ elements can be permuted among themselves, every arrangement containing the group carries an extra factor of $q!$; by your statement, the group is always included.</p> <p>So instead of permuting $r$ elements, we permute $r - q$ elements chosen from the $n - q$ remaining ones, which can be done in $^{n-q}P_{r-q}$ ways. The group can then be placed in any of the $r - q + 1$ positions relative to those elements, and its members can be arranged internally in $q!$ ways.</p> <p>So overall we have $q! \times (r - q + 1) \times {}^{n-q}P_{r-q}$.</p> <p>I am assuming $q \le r$, i.e. you can't group more elements than the ones to be chosen, since you always include that group.</p>
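The worked example can be verified by enumeration. A sketch (we interpret "stay together" as: both letters appear and are adjacent, which matches the 12 listed results):

```python
from itertools import permutations
from math import factorial, perm

letters = "ABCEF"

# Count length-3 permutations in which A and E both appear, adjacent.
count = 0
for p in permutations(letters, 3):
    if "A" in p and "E" in p and abs(p.index("A") - p.index("E")) == 1:
        count += 1
print(count)  # 12

# General formula from the answer: q! * (r - q + 1) * P(n - q, r - q)
n, r, q = 5, 3, 2
formula = factorial(q) * (r - q + 1) * perm(n - q, r - q)
assert count == formula
```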
https://math.stackexchange.com/questions/1495199/what-should-be-the-general-formula-for-the-following-permutation-problem
Question: <p>How to show that without using Venn Diagram</p> <p><span class="math-container">$P(A) + P\left( {\bar A} \right)P\left( {B|\bar A} \right) = 1 - P\left( {\bar A \cap \bar B} \right)$</span> ?</p> <p>Effort so far</p> <p><span class="math-container">$P(A) + P\left( {\bar A} \right)P\left( {B|\bar A} \right) = P(A) + P\left( {B,\bar A} \right)$</span></p> <p>I have a feeling it has something to do with Boolean Logic Absorption's Law ( X + Y Z = (X + Y) • (X + Z)) but I am stuck.</p> Answer: <p><span class="math-container">$$P(A) + P(\overline{A}) P(B\mid \overline{A})$$</span><span class="math-container">$$ = P(A) + P(\overline{A}) \frac{P(B\cap \overline{A})}{P(\overline{A})}$$</span> <span class="math-container">$$= P(A) + P(B \cap \overline{A}) $$</span> <span class="math-container">$$=P(A \cup (B \cap \overline{A}))$$</span> <span class="math-container">$$= P(A \cup B)$$</span> <span class="math-container">$$=1 - P(\overline{A\cup B})$$</span> <span class="math-container">$$= 1 - P(\overline{A}\cap \overline{B})$$</span></p>
https://math.stackexchange.com/questions/3832297/how-to-prove-that-pa-p-left-bar-a-rightp-left-b-bar-a-right-1
Question: <p>Recently, I encountered a probability question which can be phrased differently:</p> <p>Q1: A letter is chosen at random from the word <strong>MISSISSIPPI</strong>. What is the sample space.</p> <p>Q2: The letters from the word <strong>MISSISSIPPI</strong> are put into a bag. What is the sample space.</p> <p>Q3: Picking a letter at random from a box containing identical cards with letters that spell the word <strong>MISSISSIPPI</strong>. What is the sample space.</p> <p>I can understand that the S's, I's and P's are indistinguishable for Q1, and hence the sample space is {M, I, S, P}.</p> <p>However, my teacher told me that the sample space for Q2 is also {M, I, S, P} which I do not understand. Since each letter is now distinct and distinguished from one another, i.e., the four S's are distinguishable, then why shouldn't the sample space be {M, I1, S1, S2, I2, S3, S4, I3, P1, P2, I4} ?</p> <p>To make things even more confusing, she said that the sample space for Q3 is {M, I1, S1, S2, I2, S3, S4, I3, P1, P2, I4}.</p> <p>Thank you in advance.</p> Answer: <p>For Q2, it's still the same sample space because the letters are still indistinguishable. If you pull an <code>S</code> at random from the bag (I'm imagining Scrabble tiles here), you still don't know whether it was the first <code>S</code> in <code>MISSISSIPPI</code> or one of the others.</p> <p>If we let the random variable <span class="math-container">$X$</span> represent the letter drawn, the sample space is still <span class="math-container">$X\in \{M,I,S,P\}$</span>, with probabilities <span class="math-container">$P(X=M) = \frac{1}{11}$</span>, <span class="math-container">$P(X=I) = \frac{4}{11}$</span>, <span class="math-container">$P(X=S) = \frac{4}{11}$</span>, and <span class="math-container">$P(X=P) = \frac{2}{11}$</span>.</p> <p>For Q3, the distinction comes from the cards presumably being ordered. Then, once you've drawn a card, you can tell what position it was in. 
Each card would have a distinguishable letter <em>and</em> position, so each element of the sample space would consist of the letter and its position.</p>
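For Q1/Q2, the reduced sample space and its (non-uniform) probabilities can be tabulated directly. A sketch in Python:

```python
from collections import Counter
from fractions import Fraction

word = "MISSISSIPPI"
counts = Counter(word)   # multiplicities: M:1, I:4, S:4, P:2

# Sample space = distinct letters; each outcome carries its own probability.
probs = {letter: Fraction(c, len(word)) for letter, c in counts.items()}
print(probs)  # M -> 1/11, I -> 4/11, S -> 4/11, P -> 2/11

# The probabilities of the four outcomes sum to 1, as they must.
assert sum(probs.values()) == 1
```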
https://math.stackexchange.com/questions/4044378/probability-question-on-sample-space
Question: <p>A bit surprisingly I can't find the answer to exactly my question. I am looking for the formula for the probability of at least k successes in n draws without replacement.</p> <p>For example, take the bag/balls problem. Let's say there are 250000 balls in the bag, 250 white and 249750 blue. If you draw 8500 balls, what is the probability of drawing at least 1 white ball? </p> Answer: <p>The count of favoured items in a sample selected from a population <em>without replacement</em> has a <strong>hypergeometric distribution</strong>.</p> <p>When the population is size $N$ with $K$ favoured items, and the sample is of size $n$, then the count $W$ of favoured items in the sample equals $k$ with probability:</p> <p>$$\mathsf P(W=k) = \dfrac{\dbinom{K}{k}~\dbinom{N-K}{n-k}}{\dbinom{N}{n}} \qquad \Big[0\leq k\leq \min(K, n) \leq \max(K, n) \leq N\Big]$$</p> <p>$$\mathsf P(W\geqslant k) = \sum_{x=k}^{\min(K, n)} \dfrac{\dbinom{K}{x}~\dbinom{N-K}{n-x}}{\dbinom{N}{n}} \qquad \Big[0\leq k\leq \min(K, n) \leq \max(K, n) \leq N\Big]$$</p> <p>For particular values it might be more appropriate to use an approximation, or work with the complement, to ease the computation load.</p> <blockquote> <p>For example, take the bag/balls problem. Let's say there are 250000 balls in the bag, 250 white and 249750 blue. If you draw 8500 balls, what is the probability of drawing at least 1 white ball? </p> </blockquote> <p>This is most easily calculated using the complement. &nbsp; It is the probability of <em>not</em> drawing zero white balls.</p> <p>$$\mathsf P(W\geqslant 1) = 1-\mathsf P(W=0) = 1-\dfrac{\binom{250}{0}\binom{249750}{8500}}{\binom{250000}{8500}}\\ \approx 0.999{\small 825266071400062267418017708833099206480271885565627713\ldots}$$</p>
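The complement computation is feasible exactly with big-integer binomial coefficients. A sketch:

```python
from fractions import Fraction
from math import comb

N, K, n = 250_000, 250, 8_500   # population, white balls, draws

# P(W >= 1) = 1 - P(W = 0), computed exactly and only then converted to float.
p_none = Fraction(comb(N - K, n), comb(N, n))
p_at_least_one = 1 - p_none
print(float(p_at_least_one))  # ~0.999825266...
```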
https://math.stackexchange.com/questions/1680387/at-least-k-successes-in-n-tries-without-replacement
Question: <p>I see that there is a "fact" $P(A|B)=1−P(A^{c}|B)$. Can this be deduced, or what is the intuition? I can see that the "domain" is reduced in both cases to $B$ and that we use $A$ and $A^{c}$, and this makes sense; I just don't know where this "fact" comes from.</p> Answer: <p>$$\begin{align}\mathsf P(A^\complement\mid B) ={}&amp; \dfrac{\mathsf P(A^\complement\cap B)}{\mathsf P(B)}&amp;&amp;\text{by definition of conditional probability}\\[1ex] ={}&amp; \dfrac{\mathsf P(B)-\mathsf P(A\cap B)}{\mathsf P(B)} &amp;&amp; \raise{2ex}{\text{via the Law for Total Probability}\\{\small \mathsf P(B)=\mathsf P(A\cap B)+\mathsf P(A^\complement\cap B)}} \\[1ex]={}&amp; 1-\mathsf P(A\mid B)&amp;&amp;\text{by definition of conditional probability}\end{align}$$</p> <p>That is all.</p>
https://math.stackexchange.com/questions/2484262/proof-of-pab-1%e2%88%92pacb
Question: <p>I am reliably informed that the probability of getting 3 of a kind in 5 rolls of a 6-sized dice is approximately <span class="math-container">$0.1929$</span>.</p> <p>I'm assuming this excludes 4-of a kind and 5-of a kind but not full-house (3 of a kind plus two of a kind).</p> <p>Trying to check this I reasoned: Given an initial value for the first roll, I need two of the same and 2 different values.</p> <p><span class="math-container">$1/6 * 1/6 * 5/6 * 5/6 = 0.01929...$</span></p> <p>This value seems bizarrely similar to the correct answer but off by a factor of 10.</p> <p>Could someone please explain what I'm doing wrong? </p> Answer: <p>You calculated the probability that the first three rolls are the same, but the last two rolls produce a different result. However, there are <span class="math-container">$$\binom{5}{3} = 10$$</span> sequences in which exactly three of the five rolls yield the same value. This is your missing factor of <span class="math-container">$10$</span>.</p> <p>Based on your description of the problem, you want to find the probability that exactly three of the rolls are the same.</p> <ol> <li>Choose which of the six values appears three times.</li> <li>Choose which three of the five rolls are the same.</li> <li>Multiply by the probability that that value occurs three times.</li> <li>Multiply by the probability that the remaining two rolls produce a different value than those three rolls.</li> </ol> <p><span class="math-container">$$\binom{6}{1}\binom{5}{3}\left(\frac{1}{6}\right)^3\left(\frac{5}{6}\right)^2$$</span></p>
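A brute-force enumeration of all 6^5 = 7776 equally likely outcomes confirms the count. A sketch ("exactly three the same" here allows a pair on the remaining two dice, i.e. full houses count, as in the question):

```python
from collections import Counter
from itertools import product

# Count outcomes where some face appears exactly three times
# (max multiplicity 3 excludes four- and five-of-a-kind).
hits = sum(
    1
    for roll in product(range(1, 7), repeat=5)
    if max(Counter(roll).values()) == 3
)
print(hits, hits / 6**5)  # 1500 outcomes, probability ~0.19290
```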
https://math.stackexchange.com/questions/3476371/3-of-a-kind-5-rolls-6-sized-dice
Question: <p>If I have four dice I calculate the chances of getting at least one 2 as 864 ÷ 1296 = 66.66%, since if a 2 comes up on one die then it does not matter what comes up on the other 3 dice (1×6×6×6 possible outcomes × 4 = 864): I have still thrown a two.</p> <p>If I now work out the chances of not throwing at least one two, it is 5×5×5×5 ÷ 1296 = 48.23%.</p> <p>When I add these two together I get way over 100%. What am I doing wrong here?</p> Answer: <p>The probability of no die showing a two is <span class="math-container">$\left(\frac{5}{6}\right)^4\approx 48.22\%$</span>. The probability of at least one die showing a two consequently is <span class="math-container">$1-\left(\frac{5}{6}\right)^4\approx 51.77\%$</span>. You can also obtain this number combinatorially. Start with one die. The probability of getting at least one two is <span class="math-container">$\frac{1}{6}$</span>. If you have two dice, you have the cases 2-1, 2-2, 2-3, 2-4, 2-5, 2-6, 1-2, 3-2, 4-2, 5-2, 6-2, which are eleven out of the thirty-six possible cases. Note that you get <span class="math-container">$\frac{6}{36}+\frac{5}{36}$</span> since you can't count the case 2-2 twice. This is basically the error you made in your calculations. Working things out properly, you obtain only <span class="math-container">$671$</span> cases with at least one two.</p>
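Enumeration confirms the 671 count and the complement rule. A sketch:

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=4))   # all 6^4 = 1296 rolls
with_a_two = sum(1 for roll in outcomes if 2 in roll)

print(with_a_two)                  # 671, not 864 (no double counting)
print(with_a_two / len(outcomes))  # ~0.5177, matching 1 - (5/6)^4
assert with_a_two == 6**4 - 5**4
```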
https://math.stackexchange.com/questions/2980503/odds-of-winning-plus-odds-of-losing-do-not-equal-100
Question: <p>Given two cdf's <span class="math-container">$F_1, F_2\colon [0,1]\to\mathbb{R}$</span>, it is always possible to find two real-valued random variables <span class="math-container">$X_1, X_2$</span> such that <span class="math-container">$X_i$</span> is distributed according to <span class="math-container">$F_i$</span>, and <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> are independent. (At least, I would be shocked to learn this is not the case.)</p> <p>I am wondering if there are (nontrivial) instances where this is the &quot;only possibility.&quot;</p> <p><strong>Question</strong>: Do there exist two cdf's <span class="math-container">$F_1, F_2\colon [0,1]\to\mathbb{R}$</span> such that:</p> <ul> <li>neither <span class="math-container">$F_i$</span> is the distribution of a deterministic random variable;</li> <li>for any <span class="math-container">$X_1,X_2$</span> such that <span class="math-container">$X_i$</span> is distributed according to <span class="math-container">$F_i$</span>, <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> are independent?</li> </ul> Answer: <p>If <span class="math-container">$F$</span> is a cdf let <span class="math-container">$G$</span> be the &quot;inverse&quot; <span class="math-container">$G(u):=\inf\{x: F(x)&gt; u\}$</span>. If <span class="math-container">$U$</span> is uniformly distributed on <span class="math-container">$(0,1)$</span> then the cdf of <span class="math-container">$G(U)$</span> is <span class="math-container">$F$</span>. So <span class="math-container">$Y_i:=G_i(U)$</span> will have cdf <span class="math-container">$F_i$</span> (for <span class="math-container">$i=1,2$</span>) but <span class="math-container">$Y_1$</span> and <span class="math-container">$Y_2$</span> won't be independent outside of degenerate cases that you have outlawed.
The idea is to simulate both cdfs using the same randomness <span class="math-container">$U$</span>.</p>
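The construction is easy to see numerically. A sketch (the two distributions are our own illustrative choices: Uniform(0,1) and Exponential(1), both simulated from the same uniform draw):

```python
import math
import random

random.seed(0)

# Quantile ("inverse cdf") functions: G1 for Uniform(0,1), G2 for Exponential(1).
def g1(u):
    return u

def g2(u):
    return -math.log(1.0 - u)

# Feed the SAME uniform randomness U into both quantile functions.
samples = [(g1(u), g2(u)) for u in (random.random() for _ in range(10_000))]

# Y1 and Y2 have the intended marginal cdfs, yet Y2 is a deterministic
# function of Y1 -- about as far from independent as possible.
assert all(abs(y2 - (-math.log(1.0 - y1))) < 1e-12 for y1, y2 in samples)
```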
https://math.stackexchange.com/questions/4172926/inherently-independent-distributions
Question: <p>The taxi driver drives through four villages <span class="math-container">$W,Z,X,Y.$</span> These roads form a quadrangle where <span class="math-container">$WX = 5, WZ = 10, ZY = 5$</span> and <span class="math-container">$XY = 10.$</span> With a probability of <span class="math-container">$\dfrac{1}{4},$</span> a call (order) may come. Then, with a probability of <span class="math-container">$\dfrac{1}{3}$</span>, he can go to any village. Let <span class="math-container">$D$</span> (random variable) be the distance he needs to travel to pick up and take one passenger. Find the distribution of this random variable.</p> <p>Solution:</p> <p>Let the sides of the quadrangle be <span class="math-container">$a = 5, b = 10, c = 5, d = 10.$</span> <span class="math-container">$$\begin{cases} 0 + a=5, \text{with the probability } 1/12\\ 0+ b=10,\text{with the probability } 1/12\\ a + a = 10, \text{with the probability } 1/12\\ a + b = 15, \text{with the probability } 1/4 \\ b + b = 20, \text{with the probability } 1/12 \\ 2a + b = 20, \text{with the probability } 1/6 \\ b + 2b = 25, \text{with the probability } 1/6 \\ 2a + 2b = 25, \text{with the probability } 1/12 \\ \end{cases}$$</span></p> <p>Why is this line correct, namely <span class="math-container">$2a + b = 20, \text{with the probability } 1/6 $</span>?</p> Answer:
https://math.stackexchange.com/questions/4163648/one-problem-about-the-villagesprobability-theory
Question: <p>Three spinners are marked with equal amounts of Red, Blue and Yellow. At a particular instance, all three are spun together. What is the probability that at least two of the spinners land on red?</p> <p>The at least part is confusing me.</p> <p>My attempt: So if all three lands on red the probability will be: <span class="math-container">$P(All\space red)=\frac{1}{3}\times\frac{1}{3}\times\frac{1}{3}=\frac{1}{27}$</span> Having at least two will have a higher probability; two of the spinners should have red and the other can have any colour: <span class="math-container">$$P(Two\space red\space and\space one\space any)=\frac{1}{3}\times\frac{1}{3}\times\frac{3}{3}=\frac{1}{9}$$</span></p> Answer: <p>i) if two of them land on red - choose which two spinners land on red, the third can be either blue or yellow. So if <span class="math-container">$X$</span> is the event of a spinner landing on red,</p> <p><span class="math-container">$ \displaystyle P(X=2) = {3 \choose 2} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{9}$</span></p> <p>ii) All spinners land on red,</p> <p><span class="math-container">$ \displaystyle P(X=3) = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} = \frac{1}{27}$</span></p> <p>Adding them,</p> <p><span class="math-container">$\displaystyle P(X \geq 2) = \frac{7}{27}$</span></p>
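Enumerating the 27 equally likely colour triples confirms the result. A sketch:

```python
from fractions import Fraction
from itertools import product

colours = ["R", "B", "Y"]
outcomes = list(product(colours, repeat=3))   # 27 equally likely spins

# "At least two red" = exactly two red (6 outcomes) + all three red (1 outcome).
at_least_two_red = sum(1 for o in outcomes if o.count("R") >= 2)
prob = Fraction(at_least_two_red, len(outcomes))
print(prob)  # 7/27
```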
https://math.stackexchange.com/questions/4171461/what-is-the-probability-that-at-least-two-spinner-land-on-red
Question: <p>An urn contains <span class="math-container">$4$</span> blue and <span class="math-container">$4$</span> red marbles. At first a marble is drawn (without looking) and removed from the urn. Then, a marble is drawn from the urn, its color recorded and put back in the bag. This process is repeated <span class="math-container">$1000$</span> times.</p> <ol> <li>Stochastically model this experiment with an appropriate probability space. Give a precise mathematical description of the random variable <span class="math-container">$X_i$</span>, <span class="math-container">$i=1,\ldots,1000$</span> that takes the value <span class="math-container">$1$</span> only if the <span class="math-container">$i$</span>-th drawn marble out of the <span class="math-container">$1000$</span> is blue, <span class="math-container">$0$</span> else.</li> </ol> <hr /> <p>I am really having trouble specifying the <a href="https://en.wikipedia.org/wiki/Probability_space" rel="nofollow noreferrer">probability space</a> <span class="math-container">$(\Omega,\mathcal F, P)$</span>. Here is my attempt:</p> <p>The sample space is the set of all possible outcomes. Basically I can only draw a blue or a red marble so I was thinking:</p> <p><span class="math-container">$$\Omega=\{R,B\} \,\,\,\,\text{or}\,\,\,\, \Omega=\{1,0\}$$</span></p> <p>Assuming that <span class="math-container">$1$</span> corresponds to blue and <span class="math-container">$0$</span> corresponds to red. The sample space in this example would be the set of all possible <span class="math-container">$n$</span>-tuples of length <span class="math-container">$1000$</span>.</p> <p><span class="math-container">$$\mathcal F:=\{(a_1,a_2,\ldots,a_{1000}), a_j\in \Omega \}$$</span></p> <p>I am not really sure what to write for <span class="math-container">$P$</span>. Is this supposed to be some sort of rule for a probability? 
I also tried writing an expression for the random variable <span class="math-container">$X_i$</span> with the indicator function:</p> <p><span class="math-container">$$X_i=\sum_{n=1}^{1000} \mathbf{1}_{a_i=1}$$</span></p> <p>But it's not really specific. I wanted to write down something that would take a string from the event space <span class="math-container">$\mathcal F$</span> and return <span class="math-container">$1$</span> if the <span class="math-container">$i$</span>-th component is <span class="math-container">$1$</span> but I can't seem to write it down properly.</p> Answer:
https://math.stackexchange.com/questions/4174914/drawing-without-replacement-problem-formally-specifying-the-probability-space
Question: <p>So this is an example straight from a book, like an example to help teach the material yet it makes absolutely no sense or I am just not seeing where they make the jump at.</p> <p>We have the following: Consider the experiment consisting of 2 rolls of a fair 4-sided die. Let X be a random variable, equal to the maximum of the 2 rolls. It then says complete the following table (I don't know how to make a table on here)</p> <p>| $\space$$\space$$\space$$\space$x$\space$$\space$$\space$$\space\,\,$ | 1 | 2 | 3 | 4 |<br/>|Pr(X=x)| ? | ? | ? | ? |</p> <p>With the sample space S ={(1, 1),(1, 2),(1, 3),(1, 4),(2, 1),(2, 2),(2, 3),(2, 4),(3, 1),(3, 2),(3, 3), (3, 4),(4, 1),(4, 2),(4, 3),(4, 4)}</p> <p>I don't see how Pr(X=1) = $1\over16$, Pr(X=2) = $3\over16$, Pr(X=3) = $5\over16$ and Pr(X=4) = $7\over16$.</p> <p>I can see that there is a total of seven 4's in the sample space but there's also seven 1's in the sample space, same with seven 2's and so on. Could anyone give a bit of a hint on what I'm missing? I've read the definitions and everything leading up to this example but they don't help.</p> Answer: <p>For example, $X=3$ consists of the sample points $(1,3), (2,3), (3,1), (3,2), (3,3)$. There are $5$ of them, while the sample space has $16$ points, all equally probable, so $P(X=3) = 5/16$.</p>
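Enumerating the 16 equally likely pairs produces the table at once. A sketch:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# X is the maximum of two independent rolls of a fair 4-sided die.
counts = Counter(max(a, b) for a, b in product(range(1, 5), repeat=2))
pmf = {x: Fraction(c, 16) for x, c in sorted(counts.items())}
print(pmf)  # 1 -> 1/16, 2 -> 3/16, 3 -> 5/16, 4 -> 7/16
```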
https://math.stackexchange.com/questions/2159690/what-is-the-prx-x
Question: <p>I am solving the following probability exercise. The solution I have found is very counterintuitive and I feel it is wrong, but I can't seem to understand why.</p> <p>A fair coin is tossed twice; you have to decide whether it is more likely that two heads showed up given that: 1) at least one toss is head, 2) the second toss was head.</p> <p><strong>Solution</strong></p> <p>Let <span class="math-container">$A$</span> be the event &quot;the first toss is head&quot; and let <span class="math-container">$B$</span> be the event &quot;the second toss is head&quot;.</p> <p>For case 1: <span class="math-container">$$ P(A \cap B \vert A \cup B) = \frac{P(A \cap B \cap (A \cup B))}{P(A \cup B)} = \frac{P(A\cap B)}{P(A) + P(B) - P(A \cap B)} = \frac{1/4}{3/4} = 1/3$$</span></p> <p>For case 2:</p> <p><span class="math-container">$$ P(A \cap B \vert B) = \frac{P(A\cap B)}{P(B)} = \frac{1/4}{1/2} = \frac{1}{2}$$</span></p> <p>Is this right? I feel like case <span class="math-container">$1$</span> should be more probable, given that &quot;at least&quot; may mean there are already two heads?</p> <p>Can someone shed some light?</p> Answer: <p>It is correct.</p> <p>There are four equally likely outcomes (HH, HT, TH, TT) of which one outcome has two heads (HH).</p> <p>In question (1) there are two other possibilities (HT, TH).</p> <p>In question (2) there is only one other possibility (TH). This makes HH conditionally more likely by excluding consideration of one of the partial successes that question (1) would consider.</p>
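Both conditional probabilities can be read off the four equally likely outcomes. A sketch (the helper function is ours):

```python
from fractions import Fraction

outcomes = ["HH", "HT", "TH", "TT"]   # equally likely

def cond_prob(event, given):
    """P(event | given) by counting equally likely outcomes."""
    both = [o for o in outcomes if event(o) and given(o)]
    cond = [o for o in outcomes if given(o)]
    return Fraction(len(both), len(cond))

p1 = cond_prob(lambda o: o == "HH", lambda o: "H" in o)      # at least one head
p2 = cond_prob(lambda o: o == "HH", lambda o: o[1] == "H")   # second toss head
print(p1, p2)  # 1/3 1/2
```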
https://math.stackexchange.com/questions/4174777/coin-tossing-whats-more-probable
Question: <p>When we write <span class="math-container">$X\stackrel d= Y$</span>, does this mean that <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> have exactly the same distribution?</p> <p>For example, <span class="math-container">$X\sim\mathcal N(\mu,\sigma^{2}) \ \text{and}\ X\stackrel d= Y\implies Y\sim\mathcal N(\mu,\sigma^{2})$</span> with the same values of <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma$</span>?</p> <p>From what I understand in this <a href="https://en.wikipedia.org/wiki/Convergence_of_random_variables" rel="nofollow noreferrer">Wikipedia</a> article, <span class="math-container">$X\stackrel d= Y\implies F_X(t)=F_Y(t)$</span> for all <span class="math-container">$t$</span>, which would seem to imply the answer to my question is yes. But then it goes on to say that equality of distribution is the weakest form of equality/convergence usually discussed. What about this type of equality makes it <em>weak</em>?</p> Answer: <p><span class="math-container">$X\overset{d}{=}Y$</span> does mean equality in distribution, which implies that the distribution functions of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> agree, that is, <span class="math-container">$$ F_X(t)=F_Y(t),\quad\forall t\in\Bbb R. $$</span> Note that equality in distribution also sometimes goes by the name <em>equality in law</em>.</p> <p>Since the notation <span class="math-container">$X\sim\mathcal N(\mu,\sigma^2)$</span> is just a statement about the distribution of <span class="math-container">$X$</span>, you can rightfully conclude <span class="math-container">$$ X\overset{d}{=}Y\,\land\, X\sim\mathcal N(\mu,\sigma^2)\implies Y\sim\mathcal N(\mu,\sigma^2). 
$$</span></p> <p>In regards to the strength (or weakness) of such an equality, note that there are <a href="https://en.wikipedia.org/wiki/Convergence_of_random_variables" rel="nofollow noreferrer">multiple ways</a> we can describe the &quot;equalness&quot; of two random variables. The reason why equality in distribution is considered weak is because it does not imply actual observed values of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> will be similar or close in value.</p> <p>For example, suppose <span class="math-container">$X\sim\operatorname{Uniform}(-1/2,1/2)$</span> and <span class="math-container">$Y=-X$</span>. Then by change of variables (or just upon inspection) we can see that <span class="math-container">$Y\sim\operatorname{Uniform}(-1/2,1/2)$</span> and thus <span class="math-container">$X\overset{d}{=}Y$</span>. But if you were to then draw observations from these distributions they would never by similar as they would always be opposite in sign.</p>
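The uniform example can be illustrated numerically. A sketch (the sample size and tolerances are our own choices):

```python
import random

random.seed(1)

# X ~ Uniform(-1/2, 1/2) and Y = -X share the same distribution, yet their
# realized values always have opposite signs.
xs = [random.uniform(-0.5, 0.5) for _ in range(100_000)]
ys = [-x for x in xs]

# Empirical quantiles of the two samples agree (same distribution)...
xs_sorted, ys_sorted = sorted(xs), sorted(ys)
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    i = int(q * len(xs))
    assert abs(xs_sorted[i] - ys_sorted[i]) < 0.01

# ...but the paired observations are never close: Y = -X exactly.
assert all(y == -x for x, y in zip(xs, ys))
```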
https://math.stackexchange.com/questions/4086903/x-stackrel-d-y-what-does-it-exactly-mean
Question: <p>Is there any way of analytically determining the expected value of <span class="math-container">$Z=e^{\alpha X e^{\beta X}}$</span>, where <span class="math-container">$X\sim\mathcal N(0,1)$</span> and <span class="math-container">$\alpha,\beta$</span> are known constants?</p> <p><strong>A couple of the methods I tried:</strong></p> <p>I tried evaluating the integral of <span class="math-container">$Z$</span> against the pdf of <span class="math-container">$X$</span> directly without managing to achieve much. <span class="math-container">$$ E[Z]=\int \frac{1}{\sqrt 2\pi}e^{\alpha x e^{\beta x}-\frac{1}{2}x^2}\,dx. $$</span></p> <p>Among others (which look even less neat in terms of outcome), I tried the substitution <span class="math-container">$y=e^x$</span>, s.t. <span class="math-container">$dy=y dx$</span>, to yield <span class="math-container">$$ E[Z]=\int\frac{1}{\sqrt 2\pi}y^{1+\alpha y^\beta}e^{-\frac{1}{2}(\log y)^2}dy. $$</span></p> <p>Noting that <span class="math-container">$$ e^{-\frac{1}{2}(\log y)^2}=e^{\log (y^{-\frac{1}{2} \log y})}=y^{-\frac{1}{2} \log y} $$</span></p> <p>That &quot;simplifies&quot; the integral to <span class="math-container">$$ E[Z]=\int\frac{1}{\sqrt 2\pi}y^{1+\alpha y^\beta-\frac{1}{2} \log y}dy, $$</span> which I wasn't able to transform into anything I could evaluate.</p> <p>One more interesting route I tried was to consider the variable <span class="math-container">$Y=\alpha Xe^{\beta X}$</span> and perform a Taylor expansion on <span class="math-container">$Z=e^Y$</span>: <span class="math-container">$$ E[Z]=E[e^Y]=\sum_{n=0}^\infty E[\frac{1}{n!}Y^n]=\sum_{n=0}^\infty \frac{\alpha ^n}{n!}E[X^ne^{n\beta X}]=\sum_{n=0}^\infty \frac{\alpha ^n}{n!}E[\frac{1}{n^n}\frac{\partial^n}{\partial \beta^n} e^{n\beta X}]= $$</span></p> <p><span class="math-container">$$=\sum_{n=0}^\infty \frac{\alpha ^n}{n! n^n}\frac{\partial^n}{\partial \beta^n}E[ e^{n\beta X}]=\sum_{n=0}^\infty \frac{\alpha ^n}{n! 
n^n}\frac{\partial^n}{\partial \beta^n}e^{\frac{1}{2}n^2\beta^2}$$</span></p> <p>At which point I got lost in the algebra trying to simplify this expression into a manageable infinite sum.</p> <p>Any hints would be highly appreciated.</p> Answer:
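Lacking a closed form, special cases can at least be sanity-checked numerically. Below is a sketch using composite Simpson's rule on a truncated range (the function names and truncation are our choices). One caveat worth noting first: for $\alpha, \beta > 0$ the exponent $\alpha x e^{\beta x} - x^2/2 \to +\infty$ as $x \to +\infty$, so the integral diverges and $E[Z] = \infty$; the quadrature is only meaningful when the exponent is eventually dominated by $-x^2/2$ (e.g. $\alpha = 0$, or $\alpha$ and $\beta$ of opposite signs).

```python
import math

def integrand(x, alpha, beta):
    # Z's value at x times the standard normal pdf.
    return math.exp(alpha * x * math.exp(beta * x) - 0.5 * x * x) / math.sqrt(2.0 * math.pi)

def expectation(alpha, beta, lo=-12.0, hi=12.0, n=24_001):
    """Composite Simpson's rule on [lo, hi]; n must be odd."""
    h = (hi - lo) / (n - 1)
    total = integrand(lo, alpha, beta) + integrand(hi, alpha, beta)
    for i in range(1, n - 1):
        total += (4 if i % 2 else 2) * integrand(lo + i * h, alpha, beta)
    return total * h / 3.0

print(expectation(0.0, 0.3))    # alpha = 0 recovers E[1] = 1
print(expectation(0.1, -0.2))   # finite: alpha, beta have opposite signs
```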
https://math.stackexchange.com/questions/4176572/expected-value-of-the-exponential-of-a-normal-lognormal-mixture-in-the-special-c
Question: <p>If four people are in a room for 1 hour, each on their own very old laptop and each laptop has a 10% chance of crashing during that time, then I thought the probability of at least one laptop crashing would be:</p> <p>1/10 + 1/10 + 1/10 + 1/10 = 2/5</p> <p>But then if the chance of crashing was 25% it would then be:</p> <p>1/4 + 1/4 + 1/4 + 1/4 = 1</p> <p>Obviously there is not a 100% chance of at least one crashing, so I have got this completely wrong.</p> <p>Is this the kind of thing where I should be calculating the chances of no laptop crashing and working it out that way?</p> <p>Thanks</p> Answer: <p>It is just a binomial distribution where <span class="math-container">$n=4,p=\frac{1}{10}$</span>.</p> <p><span class="math-container">$P[n,k]=\binom{n}{k}*p^k*(1-p)^{n-k}$</span></p> <p><span class="math-container">$P[4,1]=\binom{4}{1}*(0.1)^1*(0.9)^3=0.2916$</span></p> <p><span class="math-container">$P[4,2]=\binom{4}{2}*(0.1)^2*(0.9)^2=0.0486$</span></p> <p><span class="math-container">$P[4,3]=\binom{4}{3}*(0.1)^3*(0.9)^1=0.0036$</span></p> <p><span class="math-container">$P[4,4]=\binom{4}{4}*(0.1)^4*(0.9)^0=0.0001$</span></p> <p>Adding all these probabilities, we get the probability that at least one old laptop will crash during the one-hour time period.</p>
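The direct sum and the complement shortcut agree, as a quick check shows. A sketch:

```python
from math import comb

n, p = 4, 0.1   # four laptops, each with a 10% crash probability

def pmf(k):
    """Binomial probability of exactly k crashes."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_at_least_one = sum(pmf(k) for k in range(1, n + 1))
print(p_at_least_one)     # 0.3439
print(1 - (1 - p) ** n)   # same number via the complement
```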
https://math.stackexchange.com/questions/2988464/basic-probability-of-at-least-one-independent-event-happening
Question: <p>I have the following question, but I fail to get the right answer. There are two boxes - box <span class="math-container">$A$</span> and box <span class="math-container">$B$</span>. Box <span class="math-container">$A$</span> has <span class="math-container">$5$</span> red balls and <span class="math-container">$3$</span> blue ones. Box <span class="math-container">$B$</span> has <span class="math-container">$6$</span> red balls and <span class="math-container">$2$</span> blue ones. If one randomly chooses one of the boxes and picks up <span class="math-container">$3$</span> balls randomly without returning them, I wish to find the probability of the third one being red considering the first two were blue. What I did was: <span class="math-container">$$P=P\left(\left(\text{third red}\mid\text{first two blue}\right)\mid\text{box A}\right)P\left(\text{box A}\right)+P\left(\left(\text{third red}\mid\text{first two blue}\right)\mid\text{box B}\right)P\left(\text{box B}\right)=\frac{5}{6}\cdot\frac{1}{2}+\frac{6}{6}\cdot\frac{1}{2}=\frac{11}{12}$$</span> But the answer seems to be <span class="math-container">$\frac{7}{8}$</span>. What have I done wrong? 
I fail to see my mistake (it could be that the answers sheet contains a mistake I guess, but I tend to think I did one).</p> Answer: <p>You have to attack the problem in stages.</p> <p>First, you have to calculate <span class="math-container">$f(a), f(b)$</span> where</p> <p><span class="math-container">$f(a) =$</span> probability that balls are being drawn from box A and</p> <p><span class="math-container">$f(b) =$</span> probability that balls are being drawn from box B.</p> <p>Naturally, this will involve <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem" rel="nofollow noreferrer">Bayes Theorem</a>, and you should expect <span class="math-container">$f(a) + f(b) = 1.$</span></p> <p>Once you do that, you then have to calculate</p> <p><span class="math-container">$g(a) =$</span> chance that next ball is red, given that balls are being taken from box A and</p> <p><span class="math-container">$g(b) =$</span> chance that next ball is red, given that balls are being taken from box B</p> <p>Then, the final computation will be</p> <p><span class="math-container">$$\left[f(a) \times g(a)\right] + \left[f(b) \times g(b)\right].$$</span></p> <hr /> <p><span class="math-container">$f(a) = \frac{R}{S}$</span></p> <p>where <span class="math-container">$R = \frac{1}{2} \left[\frac{3}{8} \times \frac{2}{7}\right]$</span></p> <p>and <span class="math-container">$S = \left\{\frac{1}{2} \left[\frac{3}{8} \times \frac{2}{7}\right]\right\} + \left\{\frac{1}{2} \left[\frac{2}{8} \times \frac{1}{7}\right]\right\}.$</span></p> <p>This works out to <span class="math-container">$f(a) = (3/4), f(b) = (1/4).$</span></p> <p>Then, you also have that <span class="math-container">$g(a) = (5/6)$</span> and <span class="math-container">$g(b) = 1$</span>.</p> <p>The numbers immediately above are explained by reasoning that if the balls are being drawn from box A, there are 6 balls left, 5 of which are red. 
If the balls are being drawn from box B, then there are 6 balls left, all of which are red.</p> <p>Thus, the final computation is</p> <p><span class="math-container">$$\left[(3/4) \times (5/6)\right] + \left[(1/4) \times (1)\right] = (21/24) = (7/8).$$</span></p>
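The staged computation above can be reproduced with exact arithmetic; a minimal sketch using Python's `fractions` module:

```python
from fractions import Fraction as F

# P(first two draws are blue | box), drawing without replacement
two_blue_A = F(3, 8) * F(2, 7)  # box A: 3 blue of 8, then 2 of 7
two_blue_B = F(2, 8) * F(1, 7)  # box B: 2 blue of 8, then 1 of 7

# Bayes: posterior probability of each box given two blue draws
f_a = (F(1, 2) * two_blue_A) / (F(1, 2) * two_blue_A + F(1, 2) * two_blue_B)
f_b = 1 - f_a

# P(third draw is red | box, two blue already removed)
g_a = F(5, 6)  # box A: 5 of the 6 remaining balls are red
g_b = F(6, 6)  # box B: all 6 remaining balls are red

answer = f_a * g_a + f_b * g_b
print(answer)  # 7/8
```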
https://math.stackexchange.com/questions/4177140/probability-picking-up-from-a-box
Question: <p>If say 2 people individually had a .05 chance of survival, 2 individually had a .10 chance of survival, and 2 individually had a .20 chance of survival (pulled these numbers out of a hat)</p> <p>What is the chance that the family has more than 3 deaths?</p> <p>The method I have is quite tedious and I was wondering if there was a better way than just going case by case of the combinations and summing the results. I'm not even sure if that gives me the right answer, and I will check later using Monte Carlo.</p> <p>Does this have a well known probability distribution? I know some small parts are binomial.</p> <p>I wanted to know the probability distribution of death in our family, which has people in different age groups, if everyone in the family got covid. Other than by using simulation, I don't know a reasonable way to scale up what I did to include more age groups and more people. So I was curious to know if there is a way to do that, or at least approximate the probability.</p> Answer: <p>Kinda morbid mate. Nevertheless. If the chances of survival are <span class="math-container">$p_i$</span> for <span class="math-container">$i=1,2,\dots,6$</span>, then the chance of</p> <ul> <li>only the <span class="math-container">$i$</span>-th person surviving is <span class="math-container">$p_i\prod_{j\neq i}(1-p_j)$</span></li> <li>exactly the <span class="math-container">$i$</span>-th and <span class="math-container">$j$</span>-th person surviving is <span class="math-container">$p_ip_j\prod_{\substack{z\neq i\\ z\neq j}}(1-p_z)$</span></li> <li>exactly the <span class="math-container">$i$</span>-th, <span class="math-container">$j$</span>-th, <span class="math-container">$k$</span>-th person surviving is <span class="math-container">$p_ip_jp_k\prod_{u\in\{1,2,\dots,6\}\setminus\{i,j,k\}}(1-p_u)$</span></li> </ul> <p>So the probability of at most 3 survivals, i.e. at least 3 deaths, is the sum of all the above.
You can select <span class="math-container">$i$</span> in <span class="math-container">$6$</span> ways in the first case, <span class="math-container">$i$</span> and <span class="math-container">$j$</span> in <span class="math-container">$\binom{6}{2}$</span> ways the second case, and <span class="math-container">$i,j,k$</span> in <span class="math-container">$\binom{6}{3}$</span> ways in the third case, then sum them all up. The cases are mutually exclusive as the surivors differ.</p> <p>Edit: if you fancy Python:</p> <pre><code>from sympy import * import itertools p_1, p_2, p_3, p_4, p_5, p_6 = symbols('p_1 p_2 p_3 p_4 p_5 p_6') s = [p_1,p_2,p_3,p_4,p_5,p_6] pairs = itertools.combinations(s,2) triples = itertools.combinations(s,3) prodneg = prod([1-y for y in s]) def one(x): return x/(1-x)*prodneg def two(i,j): return i*j/(1-i)/(1-j)*prodneg def three(i,j,k): return i*j*k/(1-i)/(1-j)/(1-k)*prodneg allones = sum([one(x) for x in s]) alltwos = sum(two(i[0],i[1]) for i in pairs) allthrees = sum(three(i[0],i[1],i[2]) for i in triples) atleastthreedeaths = allones+alltwos+allthrees </code></pre> <p>I'm sorry if you are a programmer and I just made your eyes bleed.</p> <p>Your final formula is the atrocious <span class="math-container">$$p_{1} p_{2} p_{3} \left(1 - p_{4}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{1} p_{2} p_{4} \left(1 - p_{3}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{1} p_{2} p_{5} \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{6}\right) + p_{1} p_{2} p_{6} \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) + p_{1} p_{2} \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{1} p_{3} p_{4} \left(1 - p_{2}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{1} p_{3} p_{5} \left(1 - p_{2}\right) \left(1 - p_{4}\right) \left(1 - p_{6}\right) + p_{1} p_{3} p_{6} \left(1 - p_{2}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) + p_{1} p_{3} 
\left(1 - p_{2}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{1} p_{4} p_{5} \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{6}\right) + p_{1} p_{4} p_{6} \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{5}\right) + p_{1} p_{4} \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{1} p_{5} p_{6} \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) + p_{1} p_{5} \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{6}\right) + p_{1} p_{6} \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) + p_{1} \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{2} p_{3} p_{4} \left(1 - p_{1}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{2} p_{3} p_{5} \left(1 - p_{1}\right) \left(1 - p_{4}\right) \left(1 - p_{6}\right) + p_{2} p_{3} p_{6} \left(1 - p_{1}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) + p_{2} p_{3} \left(1 - p_{1}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{2} p_{4} p_{5} \left(1 - p_{1}\right) \left(1 - p_{3}\right) \left(1 - p_{6}\right) + p_{2} p_{4} p_{6} \left(1 - p_{1}\right) \left(1 - p_{3}\right) \left(1 - p_{5}\right) + p_{2} p_{4} \left(1 - p_{1}\right) \left(1 - p_{3}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{2} p_{5} p_{6} \left(1 - p_{1}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) + p_{2} p_{5} \left(1 - p_{1}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{6}\right) + p_{2} p_{6} \left(1 - p_{1}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) + p_{2} \left(1 - p_{1}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{3} p_{4} p_{5} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{6}\right) + p_{3} p_{4} p_{6} \left(1 - 
p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{5}\right) + p_{3} p_{4} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{3} p_{5} p_{6} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{4}\right) + p_{3} p_{5} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{4}\right) \left(1 - p_{6}\right) + p_{3} p_{6} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) + p_{3} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{4} p_{5} p_{6} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{3}\right) + p_{4} p_{5} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{6}\right) + p_{4} p_{6} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{5}\right) + p_{4} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{5}\right) \left(1 - p_{6}\right) + p_{5} p_{6} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) + p_{5} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{6}\right) + p_{6} \left(1 - p_{1}\right) \left(1 - p_{2}\right) \left(1 - p_{3}\right) \left(1 - p_{4}\right) \left(1 - p_{5}\right)$$</span></p>
https://math.stackexchange.com/questions/4177704/what-is-the-probability-that-you-have-more-than-3-deaths-in-an-infected-family-o
Question: <p>I have a fold-up table at home with six legs and three areas. Each area takes 2 legs to keep the table up, but when stored three legs are positioned on the left and three on the right.</p> <p>Every time I set up the table I randomly take the three legs on the left, put two on the left side and one in the middle. Then I take the three legs from the right and put two on the right and also one in the middle.</p> <p>When folding it up I randomly pick one leg in the middle to go left and one to go right.</p> <p>So basically a leg that started on the left when folded could end up in the middle when set up and end on the right when folded again.</p> <p>I've set up this table maybe a hundred times and every time I wonder "Has each leg been in every spot (six spots when set up) at least once?"</p> <p>I guess there is a 1/3 chance that a leg ends up in the middle and then a 1/2 chance it ends up on the other side, which by my uneducated mind results in a 16.6% probability it changes sides, but I have no idea how to go from there.</p> <p>If you perfectly rotate the legs around, you could get them in each spot in 6 setups. But I don't know if that is even relevant at all.</p> <p>So, what is the probability each leg has been in every spot?</p> Answer: <p>Suppose the legs are numbered and ordered in a list, for example [1,2,3,4,5,6]. When packed away the first three legs in the list are on the left, the last three are on the right. When erected the first two are on the left, the middle two in the middle and the last two on the right. </p> <p>Let's consider the possible permutations that occur between a table being packed away and then re-erected. If we start with a table with the order [1,2,3,4,5,6], the possible orders after being packed are [1,2,3,4,5,6] or [1,2,4,3,5,6].
After being reassembled the 18 possible permutations are:</p> <p>P = { [1,2,3,4,5,6], [1,3,2,4,5,6], [2,3,1,4,5,6], [1,2,3,5,4,6], [1,3,2,5,4,6], [2,3,1,5,4,6], [1,2,3,6,4,5], [1,3,2,6,4,5], [2,3,1,6,4,5], [1,2,4,3,5,6], [1,4,2,3,5,6], [2,4,1,3,5,6], [1,2,4,5,3,6], [1,4,2,5,3,6], [2,4,1,5,3,6], [1,2,4,6,3,5], [1,4,2,6,3,5], [2,4,1,6,3,5]} </p> <p>As some commenters mentioned, working out precisely the probability that all legs end up in all spots is (probably) very hard, but I think this is a great example of a situation where running randomised simulations in a computer should give a pretty good real world answer. </p> <p>For a given number of disassemblies and erections between 1 and 100, I ran 10000 simulations and counted for how many simulations each of the legs occurred in each of the positions. The results:</p> <p><a href="https://i.sstatic.net/bjbah.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bjbah.png" alt="enter image description here"></a></p> <p>So, if you have set up the table 100 times I would say it's very likely indeed (> 99%) that all the legs would have been in all positions!</p>
https://math.stackexchange.com/questions/3379150/probability-each-table-leg-was-in-each-spot
Question: <p>I fail to see what I have done wrong solving the following problem: Consider a system with 3 parts A,B,C. Part A works with probability 0.8, part B with 0.8 and part C with 0.9 (they are independent). The system is considered to work only if there are at least two parts working. I wish to calculate the probability of part A working, given that the system doesn't work. What I did is: <span class="math-container">$$P\left(\text{part A working}\mid\text{system isn't working}\right)=P\left(\text{system isn't working}\mid\text{part A working}\right)\frac{P\left(\text{part A working}\right)}{P\left(\text{system isn't working}\right)}=\frac{0.8\cdot0.2\cdot0.1\cdot0.8}{1-\left[\left(0.8\cdot0.8\cdot0.1\right)+\left(0.8\cdot0.8\cdot0.9\right)+\left(0.2\cdot0.8\cdot0.9\right)+\left(0.8\cdot0.2\cdot0.9\right)\right]}$$</span> <span class="math-container">$$\approx0.177$$</span></p> <p>But I seem to get the wrong answer. I went over it again and again and I fail to find my mistake (I used Bayes' formula).</p> Answer: <p>Your mistake is in the numerator, in finding the probability that the system does not work but part <span class="math-container">$1$</span> works (I am calling it part <span class="math-container">$1$</span> instead of part <span class="math-container">$A$</span>).</p> <p>If <span class="math-container">$B$</span> is the event of the system not working,</p> <p><span class="math-container">$P(B) = 1 - [0.2 \times 0.8 \times 0.9 + 0.8 \times 0.2 \times 0.9 + 0.8 \times 0.8 \times 0.1 + 0.8 \times 0.8 \times 0.9] = 0.072$</span></p> <p>If <span class="math-container">$A$</span> is the event of part <span class="math-container">$1$</span> working,</p> <p><span class="math-container">$P(A \cap B) = 0.8 \times 0.2 \times 0.1 = 0.016$</span></p> <p>So <span class="math-container">$P(A \mid B) = \displaystyle \frac{P(A\cap B)}{P(B)} = \frac{2}{9}$</span></p>
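The corrected numbers are easy to double-check by enumerating all eight work/fail states; a minimal sketch:

```python
from itertools import product

p_works = {"A": 0.8, "B": 0.8, "C": 0.9}  # probability each part works

p_fail = 0.0        # P(system isn't working)
p_a_and_fail = 0.0  # P(part A works and system isn't working)

for state in product([True, False], repeat=3):
    works = dict(zip("ABC", state))
    prob = 1.0
    for part in "ABC":
        prob *= p_works[part] if works[part] else 1 - p_works[part]
    if sum(works.values()) < 2:  # fewer than two parts working: system fails
        p_fail += prob
        if works["A"]:
            p_a_and_fail += prob

answer = p_a_and_fail / p_fail
print(answer)  # ≈ 0.2222 = 2/9
```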
https://math.stackexchange.com/questions/4180245/conditional-probability-a-system-with-3-parts
Question: <p>Shuffle a standard deck of cards and cut it into three piles. What is the probability that a face card will turn up on top of one of the piles? </p> <p>There are 12 face cards (four jacks, four queens and four kings) in the deck.</p> Answer: <p>As has been mentioned in the comments, the split into three decks is just a red herring - the problem is equivalent to just picking the three top cards assuming the deck has been shuffled well.</p> <p>The probability of getting at least one court card is equal to one minus the probability of the inverse case: getting no court cards at all. </p> <p>The probability of getting no court cards is fairly easy to compute. First, we have 40 non-court cards to choose from a deck of 52, then 39 non-court cards from a deck of 51 and finally 38 non-court cards from a deck of 50. This makes the probability of getting no court cards in three picks $\frac{40}{52} \times \frac{39}{51} \times \frac{38}{50} = \frac{59280}{132600}$. </p> <p>This means probability of getting at least one court card is $1 - \frac{59280}{132600} = \frac{73320}{132600}$ which is approximately $0.55$.</p>
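A quick exact check of the complement computation (the fraction reduces to 47/85):

```python
from fractions import Fraction as F

# 40 of the 52 cards are not face (court) cards
p_no_face = F(40, 52) * F(39, 51) * F(38, 50)
p_at_least_one = 1 - p_no_face

print(p_at_least_one, float(p_at_least_one))  # 47/85 ≈ 0.553
```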
https://math.stackexchange.com/questions/1415911/classical-probability-and-combinatorics
Question: <p>My brother brings a certain number of his marbles to play with in my room. Each marble is distinct. He has 8 total marbles that are either red or blue. One day, I spotted two red marbles in my room. The probability that any two of his marbles (of those that he plays in my room), randomly chosen, both being red is 1/2. How many marbles does he bring into my room?</p> <p>I tried doing this:</p> <p>let x = number of red marbles</p> <p>So <span class="math-container">$(x/8)$</span> = probability of picking red marble</p> <p>and then <span class="math-container">$(x-1)/(8 - 1)$</span> = probability of picking second red marble.</p> <p><span class="math-container">$(x/8)(x-1)/(7) = 1/2$</span>, but I got x to be a decimal which is not possible.</p> <p>EDIT: I kept guessing and checking <span class="math-container">$\frac{x}{b}\cdot\frac{x-1}{b-1}=\frac{1}{2}$</span>, where <span class="math-container">$x =$</span> number of red balls, and <span class="math-container">$b=$</span> number of balls he brings into my room to get that <span class="math-container">$b=4$</span> and <span class="math-container">$x=3$</span>, but unsure how to get this solution formally.</p> Answer: <p>Assume your brother brings <span class="math-container">$x+2$</span> red and <span class="math-container">$y$</span> blue marbles (the <span class="math-container">$+2$</span> accounts for the two red marbles you spotted) and leaves the remaining <span class="math-container">$6-(x+y)$</span> marbles behind, where <span class="math-container">$0\leq x+y\leq 6$</span>.</p> <p>Then the probability that two randomly chosen marbles are both red is: <span class="math-container">${\binom{x+2}{2}/\binom{x+2+y}{2}}$</span>, which is claimed to be <span class="math-container">$1/2$</span><br> <span class="math-container">$$\dfrac{(x+2)(x+1)}{(x+y+2)(x+y+1)}=\dfrac{1}{2}$$</span></p> <p>Find the integer solution.
</p> <hr> <p>Hunt and seek is a viable method.</p> <p>Note: Increasing the number of red marbles (<span class="math-container">$x$</span>) always raises the probability closer to one, if not already there, while increasing the number of blue marbles (<span class="math-container">$y$</span>) will always decrease the probability. &nbsp; Start at <span class="math-container">$x=0,y=0$</span> and search with this as a guide.</p> <blockquote class="spoiler"> <p> So, indeed your brother brought three red and one blue marble into the room. <span class="math-container">$$\dfrac{(1+2)(1+1)}{(1+1+2)(1+1+1)}=\dfrac{6}{12}$$</span></p> </blockquote>
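The hunt-and-seek can be automated by scanning every feasible split of at most 8 marbles; a minimal sketch:

```python
from fractions import Fraction as F

solutions = []
for total in range(2, 9):            # marbles brought into the room
    for red in range(2, total + 1):  # at least the two red marbles that were spotted
        # Is P(two randomly chosen marbles are both red) equal to 1/2?
        if F(red * (red - 1), total * (total - 1)) == F(1, 2):
            solutions.append((red, total))

print(solutions)  # [(3, 4)]: three red marbles out of four brought in
```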
https://math.stackexchange.com/questions/3397057/probability-and-marbles
Question: <p>Consider families with two children, in which both parents have been identified as carriers of an autosomal recessive allele (Aa). At least one of the children shows the corresponding phenotype. When adding all the children of such families, what proportion of them will show this phenotype?</p> <p>Why is the correct answer 4/7? The answer was given by a genetics teacher where I'm studying</p> Answer: <pre><code> A a A AA Aa a Aa aa </code></pre> <p>Based on the Punnett square above there is a <span class="math-container">$1/4$</span> chance that a child will show the phenotype. Let <span class="math-container">$X$</span> be the number of children in the family of <span class="math-container">$2$</span> that show the phenotype. <span class="math-container">$X$</span> has pmf <span class="math-container">$\Pr(X=0)=9/16,\Pr(X=1)=6/16,\Pr(X=2)=1/16$</span>. Then the conditional probability can be found <span class="math-container">$\Pr(X=0\mid X=1\text{ or }X=2)=0, \Pr(X=1\mid X=1\text{ or }X=2)=\frac{6/16}{6/16+1/16}=6/7,\Pr(X=2\mid X=1\text{ or }X=2)=\frac{1/16}{6/16+1/16}=1/7$</span>. Then the expected value of <span class="math-container">$X/2$</span> given <span class="math-container">$X=1\text{ or }X=2$</span> is <span class="math-container">$1/2\cdot \left(0+6/7\cdot 1+1/7\cdot 2\right)=4/7$</span>.</p> <p>So given that at least one child has the phenotype (aa), the expected proportion of children in the family of <span class="math-container">$2$</span> that shows the phenotype is <span class="math-container">$4/7$</span>. Hey looks like the teacher was right after all.</p>
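The conditional expectation can be reproduced with exact fractions; a minimal sketch:

```python
from fractions import Fraction as F

# X = number of affected (aa) children among 2; each child is aa with probability 1/4
pmf = {0: F(9, 16), 1: F(6, 16), 2: F(1, 16)}

# Condition on at least one affected child
p_at_least_one = pmf[1] + pmf[2]
expected_proportion = sum(F(x, 2) * pmf[x] / p_at_least_one for x in (1, 2))

print(expected_proportion)  # 4/7
```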
https://math.stackexchange.com/questions/4139143/genetic-combination-exercise
Question: <p>There is the following question: There is a coin with probability <span class="math-container">$p$</span> to be <span class="math-container">$H$</span> and <span class="math-container">$q$</span> to be <span class="math-container">$T$</span>. I'm asked what is the expectation of the tossing number, considering I toss the coin until I get both <span class="math-container">$H$</span> and <span class="math-container">$T$</span>. I thought I could do it the following way (X - number of coin tossing after the first toss): <span class="math-container">$$E\left[X\right]=E\left[E\left[X\mid Y\right]\right]=p\left(\text{first toss is H}\right)\cdot E\left[X\mid\text{first toss is H}\right]+p\left(\text{first toss is T}\right)\cdot E\left[X\mid\text{first toss is T}\right]=p\cdot\frac{1}{q}+q\cdot\frac{1}{p}$$</span> as the random variable is distributed geometrically after the first toss. The total number is then: <span class="math-container">$$1+\frac{p}{q}+\frac{q}{p}$$</span></p> <p>But that's not the answer. Have I done something wrong?</p> Answer:
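One way to sanity-check the derivation numerically: for <span class="math-container">$n\ge 2$</span>, <span class="math-container">$P(N=n)=p^{n-1}q+q^{n-1}p$</span> (the first <span class="math-container">$n-1$</span> tosses show the same face and the <span class="math-container">$n$</span>-th differs), and summing <span class="math-container">$n\,P(N=n)$</span> does reproduce <span class="math-container">$1+\frac{p}{q}+\frac{q}{p}$</span>, which equals <span class="math-container">$\frac{1-pq}{pq}$</span>, a common alternative form of the same answer. A minimal sketch:

```python
# N = number of tosses until both H and T have appeared at least once
p = 0.3
q = 1 - p

# Truncated sum of n * P(N = n); terms decay geometrically, so 2000 is plenty
expected = sum(n * (p**(n - 1) * q + q**(n - 1) * p) for n in range(2, 2000))

formula = 1 + p / q + q / p
print(expected, formula)  # both ≈ 3.7619
```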
https://math.stackexchange.com/questions/4182140/law-of-total-expectation-a-toss-of-a-coin
Question: <p>I am interested in learning deeper about the number 1.96 used in the test of 95% confidence with a normal distribution. </p> <p>More specifically, I am interested in whether someone could provide a numerical example of this, and how 1.96 is calculated using the 97.5th percentile, or anybody knows somewhere where it is shown in more detail? </p> <p>Would be really appreciated,</p> <p>Best,</p> <p>Andrew</p> Answer: <p><span class="math-container">$$X \sim N(\mu,\sigma^2)$$</span></p> <p><span class="math-container">$$P \bigg( \mu - 1.96\sigma &lt; X &lt; \mu + 1.96\sigma\bigg) = 0.95$$</span></p> <p><span class="math-container">$$P\bigg( \mu - 1.96\sigma &gt; X\bigg) = 0.025$$</span></p> <p><span class="math-container">$$P\bigg( \mu + 1.96\sigma &gt; X\bigg) = 0.975$$</span></p> <p>In English, if you go 1.96 standard deviations from the mean in both directions, you account for 95% of the density. By symmetry, you end up with 2.5% of the density in either tail that is further than 1.96 standard deviations from the mean.</p> <p>What this means in statistics is that, when you have a sampling distribution of a mean of a normal variable, the standard deviation of the sampling distribution is the standard error of the estimate, so you go 1.96 standard errors in either direction from the estimate to get 95% of the probability.</p>
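The constant itself is just the 97.5th percentile of the standard normal, so it can be computed directly; a minimal sketch using Python's standard library:

```python
from statistics import NormalDist

z = NormalDist(mu=0, sigma=1).inv_cdf(0.975)  # 97.5th percentile
print(z)  # ≈ 1.95996

# Round trip: the density within ±z of the mean is 95%,
# leaving 2.5% in each tail
coverage = NormalDist().cdf(z) - NormalDist().cdf(-z)
print(coverage)  # ≈ 0.95
```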
https://math.stackexchange.com/questions/3336298/1-96-and-the-standard-normal-distribution
Question: <p>I have a bag that contains coins; these coins could be biased, and each coin has a certain pre-determined probability of head/tail (independently of the other coins). This pre-determined probability is derived from a uniform distribution over <span class="math-container">$[0,1].$</span></p> <p>I draw a coin at random and flip it until I get a tail, then I throw it away and withdraw another coin and flip it again and again until I get a tail, and so on.</p> <p>Then the expected distance between consecutive tails is given by: <span class="math-container">$$\int_0^1 \frac{1}{1-p}\cdot1\,dp$$</span> where <span class="math-container">$p$</span> is the probability of getting a head, <span class="math-container">$1$</span> is the density of uniform <span class="math-container">$[0,1],$</span> <span class="math-container">$\frac{1}{1-p}$</span> is the expected value of a geometric with success probability <span class="math-container">$1-p$</span> (success = tail).</p> <p>I didn't understand how we characterized the expected distance between consecutive tails to be in the form shown above. Does anyone know the logic behind coming up with the integral above? Thank you.</p> Answer: <p><strong>Hint:</strong> Compute the expected number of flips needed to get a tail using a single biased coin whose probability of producing a head is $p$.</p>
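Following the hint: for a fixed coin with head-probability $p$, the number of flips to the first tail is geometric with success probability $1-p$, so its mean is $\frac{1}{1-p}$; averaging that over $p\sim U[0,1]$ gives exactly the integral in the question. A minimal numerical check of the inner expectation:

```python
# E[flips until first tail] for a coin with P(head) = p:
# P(first tail on flip k) = p**(k-1) * (1-p), so the mean is 1/(1-p)
for p in (0.2, 0.5, 0.9):
    expected = sum(k * p**(k - 1) * (1 - p) for k in range(1, 5000))
    assert abs(expected - 1 / (1 - p)) < 1e-9
    print(p, expected)
```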
https://math.stackexchange.com/questions/358904/biased-coins-that-are-uniformly-distributed
Question: <p>I'm having some trouble with this problem</p> <blockquote> <p>Suppose you flip a biased coin until a head appears. The coin has a $75\%$ chance of coming up tails. Let $n$ be the number of flips that you need to do. What is the probability of the following events:</p> <p>a) $n$ is at most $3$?</p> <p>b) $n$ is even (note that a geometric series of the form $a + ar + ar^2 + ar^3 + ...$ is equal to $\frac{a}{1-r}$)</p> </blockquote> <p>We really haven't talked much about biased coins, or even how to use variables like $n$ in answers.</p> Answer: <p>The biased coin in your case only means that $P(T)=\frac{3}{4}$. <br>First note that to stop at a single Head in $n$ tosses, you need to get Tail in the first $n-1$ tosses and a Head in the last (that is $n^{th}$ toss).</p> <p>$1.$ When $n\leq3$. You can get a head in these ways - $H,TH,TTH$. <br>(Here $TH$ implies first toss gives Tail and second toss gives Head)</p> <p>So the probability is $$P(H) + P(TH) + P(TTH)=\frac{1}{4}+\frac{3}{4}\cdot\frac{1}{4}+\frac{3}{4}\cdot\frac{3}{4}\cdot\frac{1}{4}$$</p> <p>$2.$ When $n$ is even.<br> Here you can see that your probability will be given by $$P(TH)+P(TTTH)+P(TTTTTH)+\dots$$ Write the probabilities in a similar way to the first case and you will obtain an infinite G.P., the formula for the sum of which is given in the question.</p>
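Both parts can be checked with exact fractions; part (b) is the geometric series with first term $a=\frac{3}{16}$ and ratio $r=\frac{9}{16}$. A minimal sketch:

```python
from fractions import Fraction as F

p_head, p_tail = F(1, 4), F(3, 4)

# (a) P(n <= 3) = P(H) + P(TH) + P(TTH)
part_a = p_head + p_tail * p_head + p_tail**2 * p_head
print(part_a)  # 37/64

# (b) P(n even) = P(TH) + P(TTTH) + ... = a / (1 - r)
a, r = p_tail * p_head, p_tail**2
part_b = a / (1 - r)
print(part_b)  # 3/7
```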
https://math.stackexchange.com/questions/1216842/flip-a-biased-coin-until-a-head-appears
Question: <p>So, I'm going to reformulate the problem because I think I did a bad job in the title because of the word limit:<br/> We have 8 coins in a box - 1 has &quot;heads&quot; on both sides and the remaining 7 are normal. We pick 1 coin randomly and toss it 7 times. If we got &quot;heads&quot; all 7 times, what is the probability the coin is normal?<br/><br/> I have two &quot;solutions&quot; which I think I should combine but I'm not sure how.<br/> 1) The probability of picking a normal coin is <span class="math-container">$7/8$</span></p> <p>2) The probability of getting &quot;heads&quot; on a normal coin 7 times in a row is <span class="math-container">$1/2^7$</span></p> <p>Is it the case that both of these events need to occur so I am supposed to multiply the two probabilities and get a <span class="math-container">$7/2^{10}$</span> chance?</p> Answer: <p>This can be solved with Bayes' Theorem.</p> <p>The probability of having picked a fake coin is <span class="math-container">$\displaystyle P(f) = \frac 18$</span>. The probability of having picked a real coin is <span class="math-container">$\displaystyle P(r) = \frac 78$</span>.</p> <p>In the event you had picked a fake coin, the probability of your observation (<span class="math-container">$\displaystyle o$</span>) would be <span class="math-container">$\displaystyle P(o|f) = 1$</span>.</p> <p>In the event you had picked a real coin, the probability of your observation would be <span class="math-container">$\displaystyle P(o|r) = \frac 1{2^7} = \frac 1{128}$</span>.</p> <p>You want to determine <span class="math-container">$\displaystyle P(f|o)$</span>.</p> <p>By Bayes' Theorem,</p> <p><span class="math-container">$\displaystyle P(f|o) = \frac{P(o|f)\cdot P(f)}{P(o|f)\cdot P(f) + P(o|r)\cdot P(r)} = \frac{1\cdot \frac 18}{1\cdot \frac 18 + \frac 1{128}\cdot \frac 78}= \frac{128}{135}$</span></p> <p>Since the question asks for the probability that the coin is normal, take the complement: <span class="math-container">$\displaystyle P(r|o) = 1 - \frac{128}{135} = \frac{7}{135}$</span>.</p> <p>In other words, it is close to certain your pick was fake.</p>
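A quick exact-arithmetic check of the Bayes computation, including the complementary probability that the coin is normal:

```python
from fractions import Fraction as F

p_fake, p_real = F(1, 8), F(7, 8)
p_obs_given_fake = F(1)        # double-headed coin always shows heads
p_obs_given_real = F(1, 2)**7  # normal coin: heads 7 times in a row

p_fake_given_obs = (p_obs_given_fake * p_fake) / (
    p_obs_given_fake * p_fake + p_obs_given_real * p_real
)
p_real_given_obs = 1 - p_fake_given_obs

print(p_fake_given_obs)  # 128/135
print(p_real_given_obs)  # 7/135
```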
https://math.stackexchange.com/questions/4183975/what-is-the-probability-we-picked-a-fake-coin-if-we-have-7-proper-and-1-with-hea
Question: <p>I am confused about this. Do we need to factor in the household distribution in the world?</p> <p>The context is the movie <em>Avengers: Infinity War.</em> The important part of the question is a movie spoiler:</p> <blockquote class="spoiler"> <p> At the end of the movie, Thanos gets all six infinity stones, snaps his fingers, and half of the universe dies.</p> </blockquote> Answer: <p>Consider each person getting turned to dust as an independent event, which happens with probability $1/2$. Then the probability of an entire family of four getting it is $(1/2)^4 = 1/16.$</p> <p>And to answer your question about the household distribution, if half of the family <em>has</em> to get it, then you already have your answer.</p>
https://math.stackexchange.com/questions/2875086/what-is-the-probability-that-a-random-family-of-4-gets-wiped-out-by-thanos-i-e
Question: <p>I would calculate it as <span class="math-container">$\frac{10^4-9^4-9^4+8^4}{10^4}$</span>. But it may be incorrect because I am summing the <span class="math-container">$10^4-9^4$</span> and <span class="math-container">$9^4-8^4$</span> counts. Is it correct? </p> Answer: <p>Let's consider "common digits" to mean the digit appears in both plate numbers in any order. Let's find the probability that the two plates have no common digit.</p> <p>If the first plate has one distinct digit, this can happen in any of 9 ways (<span class="math-container">$0000$</span> is not possible), so there are <span class="math-container">$9^4-1$</span> ways to choose the numbers for the second plate.</p> <p>If the first plate has two distinct digits, there are 14 possible orders for the two digits: <span class="math-container">$\dfrac{4!}{3!1!}+\dfrac{4!}{2!2!}+\dfrac{4!}{1!3!}$</span>. There are <span class="math-container">$\dbinom{10}{2}$</span> ways to choose the two digits. There are <span class="math-container">$8^4$</span> ways to choose a four digit number for the second plate without those two digits. We have to subtract off the number of ways to choose the first plate with no zeros and the second plate with all zeros, since that is a forbidden number: <span class="math-container">$14\dbinom{9}{2}$</span>.</p> <p>If the first plate has three distinct digits, there are <span class="math-container">$\dfrac{4!}{2!1!1!}+\dfrac{4!}{1!2!1!}+\dfrac{4!}{1!1!2!} = 36$</span> ways to order the numbers, and <span class="math-container">$\dbinom{10}{3}$</span> ways to choose them. Then there are <span class="math-container">$7^4$</span> ways to choose the number for the second plate without any of the three digits from the first plate.
However, this overcounts when the second plate is <span class="math-container">$0000$</span>, so we have to subtract <span class="math-container">$36\dbinom{9}{3}$</span>.</p> <p>Finally, if the first plate has four distinct digits: <span class="math-container">$\dbinom{10}{4}4!6^4-\dbinom{9}{4}4!$</span></p> <p>Complement of the total probability that there are no common digits (which means at least one shared digit):</p> <p><span class="math-container">$$1-\dfrac{9(9^4-1)+14\dbinom{10}{2}8^4-14 \dbinom{9}{2}+36\dbinom{10}{3}7^4-36\dbinom{9}{3}+24\dbinom{10}{4}6^4-24\dbinom{9}{4}}{(10^4-1)^2} = \dfrac{8,938,097}{11,108,889} \approx 80\%$$</span></p>
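Because the casework is intricate, a brute-force cross-check is reassuring. Grouping the 9999 plates by their set of digits keeps the pair count small (a few hundred distinct digit sets, rather than $9999^2$ plate pairs); a minimal sketch:

```python
from collections import Counter

# Group plates 0001..9999 by the set of digits they contain
by_digits = Counter(frozenset(f"{n:04d}") for n in range(1, 10000))

total = 9999 * 9999  # ordered pairs of plates
no_common = sum(
    c1 * c2
    for s1, c1 in by_digits.items()
    for s2, c2 in by_digits.items()
    if s1.isdisjoint(s2)
)

p_common = 1 - no_common / total
print(p_common)  # ≈ 0.80
```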
https://math.stackexchange.com/questions/3502794/probability-that-two-plates-with-numbers-ranging-from-0001-9999-have-at-least
Question: <p>An urn of 4 balls with 2 colors. Pick 2 balls and place them back 4 times. What's the probability of picking 2 balls of the same color twice in a row?</p> <p>So the probability of picking 2 balls of the same color is <span class="math-container">$2\choose1$$2\choose2$</span>/<span class="math-container">$4\choose2$</span>, but I don't know the probability of getting that twice in a row out of 4 draws</p> Answer: <p>Assumptions: </p> <ol> <li>There are two balls of one color and two balls of a second color</li> <li>Suppose the colors are Green and Red. A successful outcome occurs in any of these cases when you choose two green followed by two green, two green followed by two red, two red followed by two green, or two red followed by two red.</li> </ol> <p>The probability of picking two balls of the same color in a single draw is as you found <span class="math-container">$\dfrac{1}{3}$</span>. A single draw will be labeled <span class="math-container">$S$</span> for both balls are the same color and <span class="math-container">$D$</span> for the balls are different colors. 
Here are the possible outcomes:</p> <p><span class="math-container">$$\begin{array}{c|c|c}\text{Draws} &amp; \text{Probability} &amp; \text{2-in-a-row?} \\ \hline DDDD &amp; \left(\dfrac{2}{3}\right)^4\left(\dfrac{1}{3}\right)^0 &amp; \text{No} \\ DDDS &amp; \left(\dfrac{2}{3}\right)^3\left(\dfrac{1}{3}\right)^1 &amp; \text{No} \\ DDSD &amp; \left(\dfrac{2}{3}\right)^3\left(\dfrac{1}{3}\right)^1 &amp; \text{No} \\ DDSS &amp; \left(\dfrac{2}{3}\right)^2\left(\dfrac{1}{3}\right)^2 &amp; \text{Yes} \\ DSDD &amp; \left(\dfrac{2}{3}\right)^3\left(\dfrac{1}{3}\right)^1 &amp; \text{No} \\ DSDS &amp; \left(\dfrac{2}{3}\right)^2\left(\dfrac{1}{3}\right)^2 &amp; \text{No} \\ DSSD &amp; \left(\dfrac{2}{3}\right)^2\left(\dfrac{1}{3}\right)^2 &amp; \text{Yes} \\ DSSS &amp; \left(\dfrac{2}{3}\right)^1\left(\dfrac{1}{3}\right)^3 &amp; \text{Yes} \\ SDDD &amp; \left(\dfrac{2}{3}\right)^3\left(\dfrac{1}{3}\right)^1 &amp; \text{No} \\ SDDS &amp; \left(\dfrac{2}{3}\right)^2\left(\dfrac{1}{3}\right)^2 &amp; \text{No} \\ SDSD &amp; \left(\dfrac{2}{3}\right)^2\left(\dfrac{1}{3}\right)^2 &amp; \text{No} \\ SDSS &amp; \left(\dfrac{2}{3}\right)^1\left(\dfrac{1}{3}\right)^3 &amp; \text{Yes} \\ SSDD &amp; \left(\dfrac{2}{3}\right)^2\left(\dfrac{1}{3}\right)^2 &amp; \text{Yes} \\ SSDS &amp; \left(\dfrac{2}{3}\right)^1\left(\dfrac{1}{3}\right)^3 &amp; \text{Yes} \\ SSSD &amp; \left(\dfrac{2}{3}\right)^1\left(\dfrac{1}{3}\right)^3 &amp; \text{Yes} \\ SSSS &amp; \left(\dfrac{2}{3}\right)^0\left(\dfrac{1}{3}\right)^4 &amp; \text{Yes}\end{array}$$</span></p> <p>Adding this up, I get: <span class="math-container">$$3\left(\dfrac{2}{3}\right)^2\left(\dfrac{1}{3}\right)^2+4\left(\dfrac{2}{3}\right)^1\left(\dfrac{1}{3}\right)^3+\left(\dfrac{2}{3}\right)^0\left(\dfrac{1}{3}\right)^4 = \dfrac{7}{27}$$</span></p>
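<p>The table can be reproduced by enumerating the sixteen <span class="math-container">$S$</span>/<span class="math-container">$D$</span> sequences in a few lines of Python (a quick check of the sum, using the <span class="math-container">$1/3$</span> per-draw probability derived above):</p>

```python
from itertools import product
from fractions import Fraction

p_same = Fraction(1, 3)  # P(both balls same color) on a single draw, as above
total = Fraction(0)
for seq in product("SD", repeat=4):
    prob = Fraction(1)
    for c in seq:
        prob *= p_same if c == "S" else 1 - p_same
    if "SS" in "".join(seq):   # two same-color draws in a row
        total += prob
print(total)  # 7/27
```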
https://math.stackexchange.com/questions/3509877/an-urn-of-4-balls-with-2-colors-pick-2-balls-and-place-them-back-4-times-what
Question: <p>In a horse race there are 10 horses. Bob wants to make a "trifecta Box bet". A trifecta box bet is when you choose the first three horses that finish the race in ANY order. What is the probability to win a single trifecta box bet assuming every horse has equal chances to win.</p> <p>My solution: <span class="math-container">$\ P = {10!/(7!3!) \over 10!}$</span></p> <p>Given solution: <span class="math-container">$\ P = {7!3! \over 10!}$</span></p> <p>What am i doing wrong?</p> <p>Thanks!</p> Answer: <p>JMoravitz showed that you can use your probability space as all possible trifectas (ignoring specific orders of the horses). You attempted the problem by setting the probability space as all possible orders of the horses. For the numerator, you chose all possible ways of choosing three horses to finish in the first three spots. However, you do not win just because three horses finish in the first three spots. You only win if your chosen horses finish in the first three spots in some order.</p> <p>So, a winning outcome is your chosen three horses come in the first three spots in some order in <span class="math-container">$3!$</span> ways. The remaining seven horses come in the last seven places in <span class="math-container">$7!$</span> ways, and the total probability is:</p> <p><span class="math-container">$$\dfrac{3!7!}{10!}$$</span></p>
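<p>Both counting viewpoints give the same number, which a one-line computation confirms (a sketch; the equality with <span class="math-container">$1/\binom{10}{3}$</span> reflects that exactly one of the <span class="math-container">$\binom{10}{3}$</span> equally likely top-three sets wins):</p>

```python
from math import comb, factorial

# 3!*7! favorable orderings out of 10!, or one winning set out of C(10,3).
p = factorial(3) * factorial(7) / factorial(10)
assert p == 1 / comb(10, 3)
print(p)  # 1/120
```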
https://math.stackexchange.com/questions/3534016/probability-of-trifecta-box-bet
Question: <p>All fair coins.</p> <p>You pick 2 out of the bag and look at them, they are all heads.</p> <p>So what is the probability of it?</p> <p>I made a table under but it does not seem to work.</p> <pre><code>A = Coin #1, B = Coin #2, C = Coin #3, D = Coin #4. A B C D 1 H H H H &gt;&gt;&gt; AB, BC, CD, DA = 4 chances of head? 2 H H H T &gt;&gt;&gt; AB, BC, CA = 3 chances of head? 3 H H T H &gt;&gt;&gt; AB, BD, DA = 3 chances of head? 4 H H T T &gt;&gt;&gt; AB = 1 chance of head? 5 H T H H &gt;&gt;&gt; AC, CD, DA = 3 chances of head? 6 H T H T &gt;&gt;&gt; AC = 1 chance of head? 7 H T T H &gt;&gt;&gt; DA = 1 chance of head? 8 H T T T 9 T H H H &gt;&gt;&gt; BC, CD, DB = 3 chances of head? 10 T H H T &gt;&gt;&gt; BC = 1 chance of head? 11 T H T H &gt;&gt;&gt; BD = 4 chance of head? 12 T H T T 13 T T H H &gt;&gt;&gt; CD = 1 chance of head? 14 T T H T 15 T T T H 16 T T T T ... and so on? </code></pre> <p>This problem I created myself but I don't know if I created it right or not. Please help me solve this, thank you!</p> Answer: <p>There are six possible combinations of coins: <span class="math-container">$$AB,AC,AD,BC,BD,CD$$</span></p> <p>For each row, you need to check if both of them are heads or not. It breaks down to this:</p> <p><span class="math-container">$$\begin{array}{c|c}\text{Row} &amp; \text{Number Pairs With Two Heads} \\ \hline 1 &amp; 6 \\ 2 &amp; 3 \\ 3 &amp; 3 \\ 4 &amp; 1 \\ 5 &amp; 3 \\ 6 &amp; 1 \\ 7 &amp; 1 \\ 8 &amp; 0 \\ 9 &amp; 3 \\ 10 &amp; 1 \\ 11 &amp; 1 \\ 12 &amp; 0 \\ 13 &amp; 1 \\ 14 &amp; 0 \\ 15 &amp; 0 \\ 16 &amp; 0\end{array}$$</span></p> <p>Each row has <span class="math-container">$\dfrac{1}{16}$</span> chance of being the actual row, and independently, there is an <span class="math-container">$\dfrac{x}{6}$</span> chance that you wind up with a pair of heads (where <span class="math-container">$x$</span> is the number of pairs out of six for that row). 
You can just add up all of the <span class="math-container">$x$</span>'s, which add to 24.</p> <p><span class="math-container">$$\dfrac{1}{16}\cdot \dfrac{24}{6} = \dfrac{1}{4}$$</span></p>
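<p>The tally of 24 favorable (row, pair) combinations can be verified by enumeration (a short check, counting each of the 16 rows against each of the 6 pairs):</p>

```python
from itertools import combinations, product

favorable = 0
for flips in product("HT", repeat=4):        # the 16 equally likely rows
    for i, j in combinations(range(4), 2):   # the 6 equally likely pairs
        if flips[i] == "H" and flips[j] == "H":
            favorable += 1
print(favorable, favorable / (16 * 6))  # 24 0.25
```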
https://math.stackexchange.com/questions/3536847/probability-of-picking-out-two-heads-in-a-bag-of-four-coins
Question: <p>A weekly lottery consists of 3 numbers drawn from the digits 0 through 9 with no repetition of digits. The first prize goes to the person with the correct sequence. Second prizes go to people with the correct digits in some other sequence. You buy a ticket.</p> <p>a) What is the probability of winning the first prize?</p> <p>b) What is the probability of winning the second prize? </p> <p>c) If you win a first or second prize last week, what is your probability of winning a first or second prize this week?</p> <p>I know that the odds of winning are 9!/(9-3)!=504, 1/504 but besides that, I am lost. Any help would be greatly appreciated. </p> Answer: <p>a) <span class="math-container">$$\dfrac{1}{_{10}P_3} = \dfrac{(10-3)!}{10!} = \dfrac{1}{720}$$</span></p> <p>b) <span class="math-container">$$\dfrac{3!-1}{_{10}P_3} = \dfrac{(3!-1)(10-3)!}{10!} = \dfrac{1}{_{10}C_3} - \dfrac{1}{_{10}P_3} = \dfrac{5}{720}$$</span></p> <p>c) Because winning in the past is independent of winning in the future: <span class="math-container">$$\begin{align*}P(\text{Win 1st or 2nd}|\text{Won 1st or 2nd last week}) &amp; = \dfrac{P(\text{Win 1st or 2nd} \cap \text{Won 1st or 2nd last week})}{P(\text{Won 1st or 2nd last week})} \\ &amp; = \dfrac{P(\text{Win 1st or 2nd})P(\text{Won 1st or 2nd last week})}{P(\text{Won 1st or 2nd last week})} \\ &amp; = P(\text{Win 1st or 2nd}) \\ &amp; = \dfrac{1}{_{10}P_3} + \left( \dfrac{1}{_{10}C_3} - \dfrac{1}{_{10}P_3} \right) \\ &amp; = \dfrac{1}{_{10}C_3} = \dfrac{1}{120} \end{align*}$$</span></p>
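<p>The three parts fit together exactly, which exact rational arithmetic confirms (a quick sketch of the computations above):</p>

```python
from fractions import Fraction
from math import comb, factorial, perm

p_first = Fraction(1, perm(10, 3))                  # exact sequence
p_second = Fraction(factorial(3) - 1, perm(10, 3))  # right digits, wrong order
p_either = p_first + p_second

assert p_first == Fraction(1, 720)
assert p_second == Fraction(5, 720)
assert p_either == Fraction(1, comb(10, 3))         # = 1/120
print(p_first, p_second, p_either)
```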
https://math.stackexchange.com/questions/3546985/what-is-the-probability-of-winning-first-prize-given-the-following-information
Question: <p>Friends, I found trouble understanding this sentence. This is an exercise from a homework on The Central Limit Theorem. Can someone explain what this question is trying to ask? Much thanks. </p> Answer: <p>Let <span class="math-container">$X_i$</span> be the number of eyes showing on the <span class="math-container">$i$</span>-th roll. Then:</p> <p><span class="math-container">$$X = \sum_{i=1}^{1000}X_i$$</span></p> <p><span class="math-container">$$E[X] = \sum_{i=1}^{1000}E[X_i] = 1000E[X_i] = 3500$$</span></p> <p><span class="math-container">$$V[X] = \sum_{i=1}^{1000}V[X_i] = 1000\left(\dfrac{35}{12}\right) = \dfrac{35000}{12}$$</span></p> <p>Therefore, the standard deviation is <span class="math-container">$\sqrt{\dfrac{35000}{12}}$</span>.</p> <p>So, you want <span class="math-container">$P(3500-k\sigma \le X \le 3500+k\sigma) \ge 0.99$</span>. Find <span class="math-container">$k$</span>.</p> <p>For <span class="math-container">$0.99$</span>, you need <span class="math-container">$k \approx 2.58$</span>. So, this gives the range <span class="math-container">$$3360 \le X \le 3640$$</span></p>
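<p>A numerical sketch of the final step, using the slightly more precise quantile <span class="math-container">$k \approx 2.5758$</span> (the answer rounds to <span class="math-container">$2.58$</span>, which widens the interval marginally to <span class="math-container">$3360$</span>–<span class="math-container">$3640$</span>):</p>

```python
from math import sqrt

n, mu, var = 1000, 3.5, 35 / 12   # per-roll mean and variance of a fair die
sigma = sqrt(n * var)             # sd of the total, about 54.0
k = 2.5758                        # P(|Z| <= k) is about 0.99 for standard normal Z
lo, hi = n * mu - k * sigma, n * mu + k * sigma
print(round(lo), round(hi))
```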
https://math.stackexchange.com/questions/3601681/a-die-is-thrown-1000-times-find-the-limits-within-which-the-number-of-eyes-comi
Question: <p>im trying to find a mathematical way to calculate the percentage chance that there is at least 1 Cheater in any given match chosen at random.</p> <p>Game has 100 players per match</p> <p>Total Players (including the Cheaters): 3000</p> <p>there are 3 scenarios needing to be tested, a group of 500, 100, 25 Cheaters</p> <p>i need to calculate the chance that there is at least 1 cheater in any random match, if someone could supply at least a formula and an example this would be great, thanks!</p> Answer: <p>I will use the following notation.</p> <p>Population of Players: <span class="math-container">$P$</span></p> <p>Number of Cheaters: <span class="math-container">$C$</span></p> <p>Number of players in a match: <span class="math-container">$M$</span></p> <p>The probability of at least one cheater in a match is <span class="math-container">$1$</span> minus the probability that there are no cheaters in the match.</p> <p><span class="math-container">$$1-\dfrac{\dbinom{P-C}{M}}{\dbinom{P}{M}}$$</span></p> <p>In the scenarios you chose:</p> <p><span class="math-container">$$P=3000, M=100, C=500: \\ \\ 1-\dfrac{\dbinom{2500}{100}}{\dbinom{3000}{100}} \approx 100\%$$</span></p> <p><span class="math-container">$$P=3000, M=100, C=100: \\ \\ 1-\dfrac{\dbinom{2900}{100}}{\dbinom{3000}{100}} \approx 96.82\%$$</span></p> <p><span class="math-container">$$P=3000, M=100, C=25: \\ \\ 1-\dfrac{\dbinom{2975}{100}}{\dbinom{3000}{100}} \approx 57.3\%$$</span></p>
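<p>The formula evaluates directly with exact binomial coefficients; the helper below is a sketch using the same parameter names as above:</p>

```python
from math import comb

def p_at_least_one_cheater(P, C, M):
    # 1 minus the probability that all M players in the match are non-cheaters
    return 1 - comb(P - C, M) / comb(P, M)

for C in (500, 100, 25):
    print(C, p_at_least_one_cheater(3000, C, 100))
```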
https://math.stackexchange.com/questions/4047746/probability-of-at-least-1-person-in-group-a-being-in-a-group-of-x-size-with-a-ge
Question: <p><strong>Question</strong>: How many times should we throw a die if we want the sum of points obtained to be at least 4500 with probability <span class="math-container">$p \geq 0.975$</span>? (use the central limit theorem).</p> <p>I know that the probability of getting a given value for the total on the dice may be calculated by taking the total number of ways that value can be produced and dividing it by the total number of distinguishable outcomes. So the probability of a <span class="math-container">$7$</span> on the dice is <span class="math-container">$\frac{1}{6}$</span> because it can be produced in <span class="math-container">$6$</span> ways out of a total of <span class="math-container">$36$</span> possible outcomes.</p> <p>Could someone help me with how to solve this problem? Thanks!</p> Answer: <p>Outline:</p> <p>For a single roll of the die the mean number of points is <span class="math-container">$\mu = 3.5$</span> and the variance of the number of points is <span class="math-container">$\sigma^2 = 2.916667.$</span> These values can be found using the definitions of the mean and variance of a discrete random variable.</p> <p>Let <span class="math-container">$T_n$</span> be the total points on <span class="math-container">$n$</span> independent rolls of the die. 
Then <span class="math-container">$E(T_n) = n\mu,$</span> <span class="math-container">$Var(T_n) = n\sigma^2,$</span> and <span class="math-container">$SD(T_n) = \sigma\sqrt{n}.$</span></p> <p>According to the CLT, you want <span class="math-container">$$P(T_n \ge 4500) = P\left(\frac{T_n -n\mu}{\sigma\sqrt{n}}\ge\frac{4500 -n\mu}{\sigma\sqrt{n}}\right)\\ \approx P\left(Z\ge\frac{4500 -n\mu}{\sigma\sqrt{n}} = -1.96 \right)= 0.975.$$</span></p> <p>In the next-to-last member of the above equation, use known values of <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma,$</span> solve for <span class="math-container">$n$</span> and round up to the next higher integer.</p> <p><em>Note:</em> On account of the following simulation in R, I'm guessing <span class="math-container">$n$</span> is not far from <span class="math-container">$1320.$</span></p> <pre><code>d = replicate(10^6, sum(sample(1:6, 1320, rep=T))) mean(d &gt;= 4500) [1] 0.973992 </code></pre>
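<p>Since <span class="math-container">$P(Z \ge -1.96) \approx 0.975$</span>, the quantile equation is a quadratic in <span class="math-container">$\sqrt{n}$</span>; a small Python sketch solves it explicitly:</p>

```python
from math import sqrt, ceil

mu, sigma = 3.5, sqrt(35 / 12)   # mean and sd of a single fair-die roll
z = -1.959964                    # P(Z >= -1.96) is about 0.975

# (4500 - n*mu) / (sigma*sqrt(n)) = z  is quadratic in x = sqrt(n):
#   mu*x**2 + z*sigma*x - 4500 = 0
a, b, c = mu, z * sigma, -4500.0
x = (-b + sqrt(b * b - 4 * a * c)) / (2 * a)
n = ceil(x * x)
print(n)  # 1321, in line with the simulated estimate of about 1320
```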
https://math.stackexchange.com/questions/4137950/probability-of-throwing-a-die
Question: <p>I'm in grade 10, and I've just started to learn about complementary events. I am rather perplexed with this question. Isn't this question kinda contradictory, since <span class="math-container">$P(A) + P(A') = 1$</span>?</p> <p>This is what I got to:</p> <p><span class="math-container">$P(A) + P(B) = 1$</span></p> <p><span class="math-container">$P(A) + P(A') = 1$</span></p> <p>How could it be proven that <span class="math-container">$B$</span> isn't the complement of <span class="math-container">$A$</span>?</p> <p>Help would be greatly appreciated.</p> Answer: <p>Take any event of probability <span class="math-container">$\frac 1 2$</span> and take <span class="math-container">$B=A$</span>.</p>
https://math.stackexchange.com/questions/3160157/by-means-of-an-example-show-that-pa-pb-1-does-not-mean-that-b-is-th
Question: <p>Say I have <span class="math-container">$S_n = X_1 + X_2 +...+X_n$</span> where <span class="math-container">$X_i \sim Ber(1/2)$</span>. Then: <span class="math-container">$$\lim_{n\to\infty}P(|S_n-\frac{n}{2}|&lt;\frac{\sqrt n}{2})\implies \lim_{n\to\infty} P(|M_n-\frac{1}{2}|&lt;\frac{1}{2\sqrt n}) \to 0$$</span> Where I denoted <span class="math-container">$M_n = S_n/n$</span> and the probability goes to zero because for <span class="math-container">$n\to \infty$</span>, <span class="math-container">$\frac{1}{2\sqrt n} \to 0$</span>. But if I instead use the central limit theorem: <span class="math-container">$$\lim_{n\to\infty}P(|S_n-\frac{n}{2}|&lt;\frac{\sqrt n}{2})\implies \lim_{n\to\infty} P(|\frac{S_n-\frac{n}{2}}{\frac{\sqrt n}{2}}|&lt;1) = 2\phi(1)-1$$</span> And what I received doesn't go to <span class="math-container">$0$</span> when <span class="math-container">$n\to \infty$</span>. What have I done wrong?</p> Answer: <p>The law of large numbers tells you that <span class="math-container">$$\lim_{n\to\infty} M_n-1/2=0$$</span> almost surely, i.e., <span class="math-container">$$\mathsf P\left(\lim_{n\to\infty} M_n-1/2=0\right)=1,$$</span> but this does not imply that <span class="math-container">$$\lim_{n\to\infty}\mathsf P\left(\lvert M_n-1/2\rvert&gt;1/(2\sqrt n)\right)=0.$$</span></p>
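<p>The answer's point can also be seen numerically: the exact binomial probability of the event stays near <span class="math-container">$2\Phi(1)-1$</span> instead of tending to <span class="math-container">$0$</span>. A short check for <span class="math-container">$n=10{,}000$</span> (for finite <span class="math-container">$n$</span> the strict-inequality window makes the value slightly below the limit):</p>

```python
from math import comb, erf, sqrt

n = 10000
# k with |k - n/2| < sqrt(n)/2, i.e. |S_n - n/2| < sqrt(n)/2
window = [k for k in range(n + 1) if abs(2 * k - n) < sqrt(n)]
prob = sum(comb(n, k) for k in window) / 2 ** n

clt_limit = erf(1 / sqrt(2))  # equals 2*Phi(1) - 1, about 0.6827
print(prob, clt_limit)
```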
https://math.stackexchange.com/questions/4187056/something-unclear-about-central-limit-theorem-law-of-large-numbers
Question: <p>You are a military executioner tasked with eliminating some of the most dangerous criminals on Earth. You are handed 100 such criminals for immediate termination. However, just as you are about to execute them, word comes from a highly reliable source that 1 of the 100 is not a criminal at all. In fact he is one of the greatest examples of humanity and killing him will be a great loss to us all. You are also told that since he finds killing a grotesque and unfitting punishment, he refuses to identify himself in a hope for bringing amnesty to the others and stopping the executions. </p> <p>Your commanding officer however, says that you should then kill half of the prisoners and release the rest. He thinks based on his math knowledge that you have a very low chance of killing the innocent person if you kill 50% of the prisoners. </p> <p>However, while you are not a math whiz it seems like there is a great chance to kill the innocent person even if you kill a handful of people. If only you knew more math, you could prove to your commanding officer that killing just a few would already result in a very high chance of killing the innocent person and hopefully you can convince him of the futility of the executions. In your long career as an executioner you begin to wonder how many times an innocent was handed down and now that you know with certainty there is such a person in danger you would rather not have it on your conscience. </p> <p>Summary of problem:</p> <p>Your goal is to convince your superior to avoid the executions. To do this show that with every sequential killing the chance that you "miss" the innocent person decreases and the chance that you kill them increases, similar to the chance of avoiding a 6 on a dice decreases with each roll.</p> <p>You post on Mathematics Stack Exchange and hope for the best!</p> <p>(What... He's got an iPhone. I never said he's in the stone ages)</p> Answer: <p>Let's say you pick $n$ people out of $100$. 
There is a $\frac{n}{100}$ probability that the good person is in the set of $n$ people and thus a $\frac{n}{100}$ probability that the good person will be killed.</p> <p>This is a classic example of a very long word problem with lots of extraneous information that has an extremely simple solution because it's a very simple problem, just with a lot of other details. Don't let these other details distract you from what the relevant information is: How many people you pick, how many people there are, and how many good people there are.</p> <hr> <p>Here is an example with $n=3$.</p> <ul> <li>First, you kill one person. The probability that you kill the good person is $\frac 1{100}$.</li> <li><ul> <li>Now, the probability that this doesn't happen is $\frac{99}{100}$, so anything after this needs to be multiplied by that factor. If you kill another person, the probability that you kill the good person is now $\frac{1}{99}$.</li> </ul></li> <li><ul> <li><ul> <li>Now, the probability that this doesn't happen is $\frac{98}{99}$, so anything after this needs to be multiplied by that factor. If you kill another person, the probability that you kill the good person is now $\frac{1}{98}$.</li> </ul></li> </ul></li> </ul> <p>Now, if you follow this tree and add the cases while multiplying the conditions, we get: $$\frac{1}{100}+\frac{99}{100}\left(\frac{1}{99}+\frac{98}{99}\cdot \frac{1}{98}\right)=\frac{3}{100}$$</p>
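<p>The closed form <span class="math-container">$\frac{n}{100}$</span> can be cross-checked against the sequential tree computation (a sketch in exact rational arithmetic; the survival product telescopes):</p>

```python
from fractions import Fraction

def p_innocent_killed(n, total=100):
    """Chance the single innocent is among the first n executed,
    built up one execution at a time as in the tree above."""
    p_hit, p_missed = Fraction(0), Fraction(1)
    for k in range(n):
        remaining = total - k
        p_hit += p_missed * Fraction(1, remaining)     # innocent picked now
        p_missed *= Fraction(remaining - 1, remaining) # innocent survives
    return p_hit

assert p_innocent_killed(3) == Fraction(3, 100)
print(p_innocent_killed(50))  # 1/2: killing half is a coin flip
```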
https://math.stackexchange.com/questions/1832020/the-executioner-conundrum
Question: <p>In Casella and Berger, the definition of almost sure convergence of <span class="math-container">$\{X_n\}$</span> to <span class="math-container">$X$</span> is</p> <p><span class="math-container">$$P(\lim\limits_{n \rightarrow \infty}|X_n - X| &lt; \epsilon) = 1$$</span></p> <p>for all <span class="math-container">$\epsilon &gt; 0$</span>.</p> <p>I cannot figure why the inequality is necessary? Why not define it as:</p> <p><span class="math-container">$$P(\lim\limits_{n \rightarrow \infty}|X_n - X| = 0) = 1$$</span></p> <p>Assume some sequence is almost sure converges. Then, if there is some point in the sample space <span class="math-container">$s \in S$</span> such that the limit is not <span class="math-container">$0$</span>, then there exists <span class="math-container">$\epsilon$</span> s.t. this limit is larger than this <span class="math-container">$\epsilon$</span>. Thus, this <span class="math-container">$s$</span> must NOT belong to the subset of almost sure convergence. As such, we can remove all <span class="math-container">$s$</span> with limits which are not equal to zero. Thus, the second definition is created.</p> <p>PS. Plz explain without measure theory. Set theory allowed ofc.</p> Answer:
https://math.stackexchange.com/questions/4187882/definition-of-almost-sure-convergence-from-casella-berger
Question: <p>If I have a coin then chances of getting a <span class="math-container">$head$</span> or a <span class="math-container">$tail$</span> is <span class="math-container">$50-50$</span>. But why don't we take in account the case where coin is neither head and tail, where coin is standing upright, or it is making an angle with ground? Some people may object that when we are talking about probability of getting a <span class="math-container">$head$</span> or <span class="math-container">$tail$</span>, what we mean is that if we are doing experiment in a controlled room, where there is no matter except the coin and that coin is fully uniform, it is <span class="math-container">$50-50$</span> chance that I will get a <span class="math-container">$head$</span> or <span class="math-container">$tail$</span>. This argument is not a very satisfying one to me, it don't even answer the question:How can we know that only two possible cases are there? Why consider only head and tail? Actually it raises one more question: Why the probability of getting a <strong><span class="math-container">$head$</span></strong> or <span class="math-container">$tail$</span> is <span class="math-container">$50-50$</span>? </p> Answer: <p>Usually, when mathematicians talk about 'a coin flip' they mean the idea of an ideal coin flip, where the probability of getting 'head' or 'tails' is exactly one half for either of the possibilities. This may or may not be a good model for the outcomes of a real coin flip. But there is nothing stopping you from thinking of a model where the outcome can either be 'head', 'tails' or 'indeterminate', where also 'head' or 'tails' may not have the same probability.</p>
https://math.stackexchange.com/questions/3506173/a-question-related-to-probability
Question: <p>How can I find the probability of waking up at a precise minute? Say I fall asleep at $10$ PM and wake up at $6:01$ AM. There's a total of $481$ minutes we are dealing with so the odds of waking up at an exact minute would be $1:480$ right? And the probability would be $0.2$% ($1/481$), correct? But what about external factors. Like what if we didn't know the exact amount of time I would be asleep? Or what if there was a noise that woke me up in the middle of the night. How would these things be added to the equation if you were trying to find the probability of someone waking up at an exact minute? </p> Answer: <p>For the average British population, the duration of the sleep is well modeled by a Gaussian with $\mu=7.04$h and $\sigma=1.55$h [<a href="https://dx.doi.org/10.1111/j.1365-2869.2004.00418.x" rel="nofollow noreferrer">Groeger et al. 2004</a>]. To estimate the probability to wake-up after $m$ minutes,</p> <p>$$W_m:=\mathbb P(m\le t&lt;m+1)=\int_m^{m+1}\frac{e^{-(t-\mu)^2/2\sigma^2}}{\sqrt{2\pi}\sigma}dt=\left.\frac12\left(1+\text{erf}\left(\frac{t-\mu}{\sqrt2\sigma}\right)\right)\right|_{t=m}^{m+1},$$</p> <p>where $\text{erf}$ denotes the error function.</p> <p>This is well approximated by</p> <p>$$\frac{e^{-(m-\mu+1/2)^2/2\sigma^2}}{\sqrt{2\pi}\sigma}.$$</p> <hr> <p>If we model the occurrence of a nightly wakening noise with an exponential law,</p> <p>$$N_m:=\mathbb P(t&lt;m)=1-e^{-\lambda m},$$</p> <p>(say $\lambda=(\ln2/3.5)h^{-1}$ so that it occurs in the middle of the night with probability $1/2$) the probability to wake-up is the product of the probability to spontaneously wake-up in the minute $m$ by the probability of not having been woken-up earlier, i.e.</p> <p>$$W'_m:=W_m(1-N_m)=\frac12\left(\text{erf}\left(\frac{m+1-\mu}{\sqrt2\sigma}\right)-\text{erf}\left(\frac{m-\mu}{\sqrt2\sigma}\right)\right)e^{-\lambda m}.$$</p> <hr> <p>If I am right, under these assumptions,</p> <p>$$W_{481'}=0.003505,\\ N_{481'}=0.795591,\\ 
W'_{481'}=0.000717.$$</p> <hr> <p>Note that $W_m$ nearly follows a Gaussian, which is damped by an exponential, giving for $W'_m$ another Gaussian which is a shifted version of the former. So the effect of the noise is to shorten the average sleep duration by $\lambda\sigma^2\approx 28.5$ minutes.</p> <hr> <p>Finally, note that this development is valid for the "average British". The constants and even the distribution can be different for a particular individual, for instance using an alarm clock (resulting in a truncated Gaussian).</p>
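<p>A quick numerical sketch of these formulas, working entirely in minutes with the same constants (the values are only as good as the modelling assumptions above):</p>

```python
from math import erf, exp, log, sqrt

mu, sigma = 7.04 * 60, 1.55 * 60   # Gaussian sleep-length model, in minutes
lam = log(2) / (3.5 * 60)          # exponential noise rate: median at 3.5 h

def z(t):
    return (t - mu) / (sqrt(2) * sigma)

m = 481
W = 0.5 * (erf(z(m + 1)) - erf(z(m)))  # spontaneous wake-up during minute m
N = 1 - exp(-lam * m)                  # noise occurred before minute m
print(W, N, W * (1 - N))
```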
https://math.stackexchange.com/questions/2080754/how-can-i-find-the-probability-of-waking-up-at-a-precise-minute
Question: <p>A random number generator generates a number between 0-9. Single digit, totally random. </p> <p>We have the list of previous digits generated. </p> <p>I would like to calculate what is the probability for each number between 0-9 to be the next number generated. </p> <p>So we have something like: 0,2,3,4,6,4,9,1,3,5,8,7,2 generated</p> <p>And would like to get something like</p> <ul> <li>nr: probability to come next</li> <li>0: 10.125%</li> <li>1: 9.25%</li> <li>2: 6,58%</li> <li>3: 9.58%</li> <li>4: 6.23%</li> <li>5: 9.23%</li> <li>etc</li> </ul> <p>Thank you very much!</p> Answer: <p>Assuming this is a uniform distribution, each number is equally likely. That is, $\Pr(X=n) = \frac1{10}$, for $n=0,1,2,\ldots,9$.</p> <p>The expected value for the next number, regardless of the history, is always $$E[X] = 0(\tfrac1{10}) + 1(\tfrac1{10}) + \cdots + 9(\tfrac1{10}) = 4.5$$</p> <p>although that's pretty meaningless.</p> <p>It's like rolling a fair 10-sided die. </p>
https://math.stackexchange.com/questions/2082137/probability-of-the-next-random-number-based-on-previous-numbers
Question: <p>Take a look at this document: <a href="http://hal.in2p3.fr/in2p3-01082914v2/document" rel="nofollow noreferrer"><em>Functions of random variables</em>; Abdel-Hamid Soubra, Emilio Bastidas-Arteaga</a></p> <p>In the section <strong><span class="math-container">$2.2$</span></strong>, they have given an example about application for an exact distribution of a <strong>function of a single random variable</strong> using Normal Distribution where,</p> <p><span class="math-container">$$f_X(x) = \frac{1}{\sigma \sqrt{2 \pi}} \cdot e^{\displaystyle - \frac{1}{2} \cdot \left (\frac{x - \mu}{\sigma}\right) ^2 }$$</span></p> <p>and,</p> <p><span class="math-container">$$f_Y(y) = \frac{1}{\sigma \sqrt{2 \pi}} \cdot e^{\displaystyle \left [\frac{ \frac{1}{2}( \sigma y + \mu -\mu) ^2 }{\sigma^2}\right] } \left| \sigma \right| $$</span></p> <p><sup><strong>Note:</strong> <em>to me, the above equation is apparently incorrect.</em></sup></p> <p>Can anyone explain the same concept of function of random variables using a simpler equation? 
How about <span class="math-container">$y=mx+c$</span> ?</p> Answer: <p>The function used in the book is precisely a linear one, written</p> <p><span class="math-container">$$y=\frac{x-\mu}\sigma$$</span> or <span class="math-container">$$x=\sigma y+\mu.$$</span></p> <p>If you plug this into the initial distribution,</p> <p><span class="math-container">$$f_X(x)=\frac{1}{\sigma \sqrt{2 \pi}} \cdot e^{\displaystyle - \frac{1}{2} \cdot \left (\frac{x - \mu}{\sigma}\right) ^2 }$$</span></p> <p>the expression simplifies to</p> <p><span class="math-container">$$f_X(x)=f_X(\sigma y+\mu)=\frac{1}{\sigma \sqrt{2 \pi}} \cdot e^{\displaystyle - \frac{y^2}{2} }.$$</span></p> <p>The purpose of this section is to explain why the <span class="math-container">$\sigma$</span> at the denominator needs to vanish to obtain the distribution <span class="math-container">$f_Y(y)$</span> from <span class="math-container">$f_X(x)$</span> (this is because the distribution must remain normalized.)</p> <p><span class="math-container">$$f_Y(y)=\sigma f_X(x)$$</span> (absolute value omitted.)</p>
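<p>The cancellation of <span class="math-container">$\sigma$</span> can be checked numerically; the sketch below uses arbitrary illustrative values <span class="math-container">$\mu=2$</span>, <span class="math-container">$\sigma=1.5$</span> (any values would do):</p>

```python
from math import exp, pi, sqrt

mu, sigma = 2.0, 1.5   # arbitrary illustrative parameters

def f_X(x):
    """N(mu, sigma^2) density."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def f_Y(y):
    """Density of Y = (X - mu)/sigma: sigma * f_X at x = sigma*y + mu."""
    return sigma * f_X(sigma * y + mu)

def phi(y):
    """Standard normal density."""
    return exp(-0.5 * y * y) / sqrt(2 * pi)

for y in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(f_Y(y) - phi(y)) < 1e-12
print("sigma * f_X(sigma*y + mu) matches the standard normal density")
```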
https://math.stackexchange.com/questions/2932875/example-of-function-of-a-single-random-variable
Question: <blockquote> <p>A bin has $2$ balls, one is black and one is white. Every round a uniformly chosen ball is drawn from the bin. If the color of the ball is white, then the ball is returned to the bin with an additional white ball. If the ball is black the experiment is over. Let $X$ be the number of rounds in the above experiment.</p> <ol> <li>Compute the distribution of $X$.</li> </ol> </blockquote> <p>$$P(X=k)= ?$$</p> <p>I have 3 questions regarding my (partial) solution.</p> <p>My (partial) solution:</p> <p>For $1&lt;=i&lt;=k$, I defined $X_i$ to be the number of white balls drawn in the $i$th round.</p> <ol> <li><p><strong>Is it valid to conclude without any explanation or proof that (and everything I did so far):</strong> $$ P(X=k)=P(X_1=1,X_2=1,....X_{k-1}=1,X_k=0) $$</p></li> <li><p>for every $1&lt;=i&lt;j&lt;=k$, $X_i$ and $X_j$ seems to be dependent because if I draw a white ball then in the next round, the probability to draw a white ball is smaller. Also, the only way to go to the next round is if we drew a white ball. Then, logically, why they are independent? 
(in the following section I'm trying to prove that they are independent mathematically).</p></li> </ol> <p>I was trying to prove that for every $1&lt;=i&lt;j&lt;=k-1$, $X_i$ and $X_j$ are independent (Eventually, I will prove for $1&lt;=i&lt;j&lt;=k$).</p> <p>$$ P(X_i=1,X_j=1)=P(X_j=1 | X_i=1)*P(X_i=1)=\frac{j}{j+1} * \frac{i}{i+1} $$ The above is true because $i&lt;j$, so we can conclude that because $P(X_i=1)$ then the experiment isn't over so the event $X_i=1$ doesn't have any influence over $P(X_j=1)$.</p> <p>But the opposite seems to me not true: $$ P(X_i=1,X_j=1)=P(X_i=1 | X_j=1)*P(X_j=1)= 1 * \frac{j}{j+1} $$</p> <p>The above is true because $i&lt;j$ so the experiment wasn't over at the round number $i$, so at the round number $i$ we must have drawn a white ball - and there is no other option.</p> <ol start="3"> <li>$ P(X_i=1,X_j=1)=P(X_j=1 | X_i=1)*P(X_i=1)=P(X_i=1 | X_j=1)*P(X_j=1)$. <strong>Why am I wrong?</strong></li> </ol> Answer: <p>A difficulty encountered by your argument is that you do not have a clear definition of $X_j$ from the problem statement.</p> <p>You seem to be tempted to say $X_j = 0$ if the experiment ends before the $j$th drawing. But if there is no $j$th drawing, is "the number of white balls in the $j$th drawing" even defined?</p> <p>One way to resolve this is to make it part of the definition of $X_j$ that $X_j = 0$ if the experiment ends before $j$ balls are drawn. Then $X_i$ and $X_j$ clearly are <em>not</em> independent. In particular, because $X_2 = 1 \implies X_1 = 1,$ $$P(X_1 = 1 \cap X_2 = 1) = P(X_2 = 1) \neq P(X_1 = 1)P(X_2 = 1).$$</p> <p><em>Both</em> of your attempts to evaluate $P(X_1 = 1 \cap X_2 = 1)$ by means of conditional probability were incorrect, because you assumed that in general $P(X_i = 1) = \frac{i}{i+1}$ for every positive integer $i.$ In fact, according to the way you defined $X_i,$ $P(X_i = 1) &lt; \frac{i}{i+1}$ whenever $i &gt; 1.$</p> <p>Here is a way out of this difficulty. 
The value of $X_j$ is irrelevant to the outcome of the experiment when there is no $j$th drawing. So you can define $X_j$ any way you like in that case. Define $X_j$ as follows for any positive integer $j$: $$ X_j = \begin{cases} 1 &amp; \text{at least $j$ balls are drawn and the $j$th ball is white,} \\ 0 &amp; \text{at least $j$ balls are drawn and the $j$th ball is black,} \\ 1 &amp; \text{with probability $\frac{j}{j+1}$ if fewer than $j$ balls are drawn,}\\ 0 &amp; \text{otherwise.}\\ \end{cases} $$</p> <p>When you define $X_j$ this way, it turns out that indeed $P(X_j = 1) = \frac{j}{j+1},$ and $X_j$ is independent of $X_i$ whenever $i \neq j.$</p> <p>Now you can proceed to compute $P(X = k)$ without getting hung up on the dependence of $X_j$ on $X_i$ or on whether $X_j$ is even defined.</p>
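<p>Whichever convention is adopted for <span class="math-container">$X_j$</span>, the distribution of <span class="math-container">$X$</span> itself follows from the sequential product: in round <span class="math-container">$j$</span> the bin holds <span class="math-container">$j$</span> white balls and <span class="math-container">$1$</span> black ball, and the product telescopes to <span class="math-container">$P(X=k)=\frac{1}{k(k+1)}$</span>. A short exact-arithmetic sketch:</p>

```python
from fractions import Fraction

def p_X(k):
    """P(X = k): the first k-1 draws are white, the k-th is black."""
    p = Fraction(1)
    for j in range(1, k):          # round j: j white balls, 1 black ball
        p *= Fraction(j, j + 1)    # draw a white ball
    return p * Fraction(1, k + 1)  # round k: draw the black ball

for k in range(1, 8):
    assert p_X(k) == Fraction(1, k * (k + 1))  # the product telescopes
print([str(p_X(k)) for k in range(1, 5)])  # ['1/2', '1/6', '1/12', '1/20']
```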
https://math.stackexchange.com/questions/2795511/what-independent-events-actually-means
Question: <p>Link for the description: <a href="http://us.battle.net/hearthstone/en/blog/20324471/introducing-heroic-tavern-brawl-10-17-2016" rel="nofollow">http://us.battle.net/hearthstone/en/blog/20324471/introducing-heroic-tavern-brawl-10-17-2016</a></p> <p>Let's assume that you build a very good deck with a 60% winrate.</p> <p>The problem worded more mathematically:<br> You have a $60\%$ chance to win a game. If you lose $3$ times you are out. What are the chances of winning $12$ games while having $2$ or fewer losses between the wins? (The order of losses does not matter.) So you will play 12, or 13 (if you lose once), or 14 (if you lose twice) games.</p> <p>Bonus question: let's assume that the $10$ dollar entry fee is worth it (you can get more rewards than just buying them from the shop) if you win $7$ or more games while losing $2$ or fewer times. What are the chances that your $10$ dollars was worth it? (Again assuming a 60% win chance.)</p> Answer: <p>Let us make the following simplification: We <strong>continue</strong> to play games <strong>after</strong> reaching the twelve-win maximum or the three-loss maximum.</p> <p>Recognize then that to have reached twelve wins, in the first fourteen games you play you will have lost at most two times.</p> <p>$\binom{14}{0}0.6^{14}0.4^0 + \binom{14}{1}0.6^{13}0.4^1 + \binom{14}{2}0.6^{12}0.4^2 \approx 0.03979$</p> <p>Similarly, to have reached at least 7 wins, within the first nine games you will have lost at most two times.</p> <p>$\binom{9}{0}0.6^{9}0.4^0+\binom{9}{1}0.6^80.4^1+\binom{9}{2}0.6^70.4^2\approx 0.231787$</p>
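A quick numerical check of the two binomial sums above (the helper name `p_at_most_two_losses` is mine):

```python
from math import comb

def p_at_most_two_losses(wins, p=0.6):
    """P(at most 2 losses in the first wins + 2 games); under the
    keep-playing simplification this equals P(reaching `wins` wins
    before a third loss)."""
    n = wins + 2
    return sum(comb(n, k) * p ** (n - k) * (1 - p) ** k for k in range(3))

print(round(p_at_most_two_losses(12), 5))  # 0.03979
print(round(p_at_most_two_losses(7), 6))   # 0.231787
```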
https://math.stackexchange.com/questions/1972993/what-is-the-probability-of-winning-hearthstones-heroic-tavern-brawl
Question: <p>There are 6 white beads and 5 black beads in your pocket. You randomly pull the beads one by one out of your pocket and place them on a table. What is the probability that the third bead drawn is the first white one?</p> <p>Now the solution is: the prob. of drawing a 1st black bead (5÷11) × the prob. of drawing a 2nd black bead (4÷10) × the prob. of drawing a 1st white bead (6÷9), which equals (4÷33) [by the product rule]. But doesn't drawing the 1st black bead affect the probability of drawing the 2nd black bead, so that the events are dependent?</p> <p>I would be grateful if someone could clear my doubt. Thanks in advance.</p> Answer: <p>Let <span class="math-container">$E_1$</span> denote the event that the 1st bead drawn is black.</p> <p>Let <span class="math-container">$E_2$</span> denote the event that the 2nd bead drawn is black.</p> <p>Let <span class="math-container">$E_3$</span> denote the event that the 3rd bead drawn is white.</p> <p><span class="math-container">$E_1$</span> and <span class="math-container">$E_2$</span> are <strong>definitely not independent</strong> events.</p> <p>However, the solution is somewhat poorly worded. The solution might be better expressed as</p> <p><span class="math-container">$$p(E_1) \times p(E_2 | E_1) \times p(E_3 | E_1, E_2).\tag1 $$</span></p> <p>The second factor above represents the chance of event <span class="math-container">$E_2$</span> occurring <strong>given</strong> that event <span class="math-container">$E_1$</span> occurred.</p> <p>The third factor above represents the chance of event <span class="math-container">$E_3$</span> occurring <strong>given</strong> that events <span class="math-container">$E_1$</span> and <span class="math-container">$E_2$</span> both occurred.</p> <p>The probabilities expressed in (1) above are consistent with the solution's math. 
For example, once event <span class="math-container">$E_1$</span> occurs, then you have <span class="math-container">$(10)$</span> beads left, of which <span class="math-container">$(4)$</span> are black.</p>
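The chain-rule product in (1) can also be confirmed by brute force over all equally likely ordered draws (this enumeration is my own check, not part of the original solution):

```python
from fractions import Fraction
from itertools import permutations

beads = ["W"] * 6 + ["B"] * 5   # 6 white, 5 black, labelled 0..10

# chain rule: P(E1) * P(E2 | E1) * P(E3 | E1, E2)
p_chain = Fraction(5, 11) * Fraction(4, 10) * Fraction(6, 9)

# brute force: all ordered draws of three distinct beads are equally likely
favorable = sum(1 for a, b, c in permutations(range(11), 3)
                if beads[a] == "B" and beads[b] == "B" and beads[c] == "W")
p_brute = Fraction(favorable, 11 * 10 * 9)

print(p_chain, p_brute)  # 4/33 4/33
```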
https://math.stackexchange.com/questions/4190549/confusion-in-identifying-independent-events
Question: <p>Flip a coin repeatedly. Let <span class="math-container">$E_s$</span> be the number of coin flips it takes before seeing <span class="math-container">$s$</span> heads in a row. What is <span class="math-container">$P(E_s=n)$</span>? Specifically, I am concerned with <span class="math-container">$P(E_4=E_3+k)$</span> (specifically for <span class="math-container">$k=1,9$</span>) but to calculate this I find that <span class="math-container">$$\begin{aligned} P(E_4=E_3+k) &amp;= \sum_{n=3}^{\infty}P(E_3=n\;\land\; E_4=n+k) \\ &amp;= \sum_{n=3}^{\infty}P(E_4=n+k\;|\; E_3=n)P(E_3=n) \\ &amp;= \sum_{n=3}^{\infty}\left(\begin{cases}1/2&amp; k=1 \\ P(E_4=k-1)&amp; k\ge 4 \\ 0 &amp; 1&lt;k&lt;4 \end{cases}\right)P(E_3=n) \\ &amp;= \begin{cases}1/2&amp; k=1 \\ P(E_4=k-1)&amp; k\ge 4 \\ 0 &amp; 1&lt;k&lt;4 \end{cases} \end{aligned}$$</span> where on the last line since <span class="math-container">$E_s\in [s,\infty]$</span> so <span class="math-container">$1=P(E_s=\infty)+\sum_{n=s}^{\infty}P(E_s=n)=\sum_{n=s}^{\infty}P(E_s=n)$</span>. Thus, we are left having to calculate <span class="math-container">$P(E_4=k-1)$</span>. Is the work for my specific case correct? And if so how do we finish the problem, or is there a method that avoid direct calculation?</p> Answer: <p>EDIT: The answer below isn't really an answer, as it failed to address the "in a row" component. See the comments for details.</p> <p>So there are a total of <span class="math-container">$2^n$</span> possible sequences of flipping a fair coin <span class="math-container">$n$</span> times, and of those there are <span class="math-container">$n-1\choose s-1$</span> that both contain exactly <span class="math-container">$s$</span> heads and end with a head (with the convention that binomial coefficients that don't make sense are all equal to <span class="math-container">$0$</span>). 
Thus, <span class="math-container">$P(E_s = n)={n-1\choose s-1}/2^n$</span> (you can verify that this statement is true by summing up all of these probabilities to show that you get back <span class="math-container">$1$</span>).</p> <p>If you need help with the specific problem, comment and let me know, but presumably this is what the doctor ordered.</p>
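As the edit above concedes, the binomial count tracks sequences with $s$ heads in total, not $s$ heads in a row. For the actual "in a row" stopping time, a small simulation (mine, illustrative only) can be compared against the known fair-coin mean $\mathbb E[E_s] = 2^{s+1}-2$:

```python
import random

random.seed(1)

def first_run_time(s):
    """Number of fair-coin flips until the first run of s consecutive heads."""
    run = flips = 0
    while run < s:
        flips += 1
        run = run + 1 if random.random() < 0.5 else 0
    return flips

N = 100_000
est = sum(first_run_time(3) for _ in range(N)) / N
print(est)  # should be close to 2**4 - 2 = 14
```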
https://math.stackexchange.com/questions/3022791/probability-that-first-s-heads-in-a-row-occurs-after-n-flips
Question: <p>So I would like to know the probability of the following scenario happening.</p> <p>I am from India. Jane (say) is from Brazil. I moved to Canada to work four months ago and have been hopping from one airbnb to another. Jane moves from Brazil to Canada to study and happens to be in the room next to me in the same airbnb. Now Jane and I are really good friends. Before this event actually happened, what would have been the chances (probability) of this meeting occurring?</p> <p>I can provide any other details if required. Not looking for an accurate answer, but at least an approximate one.</p> Answer: <p>I will try to answer the question I suspect is behind your question.</p> <p>The probability of that sequence of coincidences, if specified in advance of the observation that they happened, is extremely small. Estimating it would require lots of assumptions I wouldn't even try to specify.</p> <p>That's because you did not ask in advance of the event. In fact, rare things happen all the time. If by chance you shared your B&amp;B with a tall red-headed man from Norway with the same first name as your brother, that would be surprising - you didn't know it would happen.</p> <p>Think about the lottery: the chance that any particular ticket will win is minuscule - but some ticket will win. The owner of that ticket will feel singled out by fate - as you do about your meeting with Jane. But there are people all over the world sharing B&amp;Bs with a particular other person who don't take notice.</p>
https://math.stackexchange.com/questions/2898661/probability-of-two-people-from-two-different-countries-meeting-in-a-different-co
Question: <p>Assume a table with dimensions <span class="math-container">$n$</span>x<span class="math-container">$n$</span>. In each of the <span class="math-container">$n^2$</span> spaces, a random number (<span class="math-container">$m$</span>) such that <span class="math-container">$m\in\mathbb{N}$</span> and <span class="math-container">$1\leq m\le9$</span> will be placed. My question is two-fold:</p> <ol> <li>What are the odds that any two numbers next to each other along the cardinal directions (ie, vertically and horizontally) will have a sum of <span class="math-container">$10$</span>?</li> <li>What are the odds that the whole table will be filled with numbers that sum to <span class="math-container">$10$</span> with at least one of its neighbours?</li> </ol> <p>(If it helps, assume that <span class="math-container">$10 \le n \le 1000$</span>. Generalisation is appreciated, but specific answers are also okay!)</p> <p>I'm not very good at probability, but I know that the first square (take the upper-left, for instance) in the table would have <span class="math-container">$9$</span> possible numbers and the square right next to it will have only <span class="math-container">$1$</span> possibility each to make the <span class="math-container">$10$</span>. We could then move to <span class="math-container">$(2,2)$</span> in the table and repeat the process. However, if the number in <span class="math-container">$(2,2)$</span> is the same as the number in <span class="math-container">$(1,1)$</span>, then it already meets the criteria for the puzzle. If we expand this outwards, we have found one possible solution out of the <span class="math-container">$9^{n^2}$</span> possibilities. 
I don't know how to solve it in general, particularly if we were to start, say, at <span class="math-container">$(\lfloor\frac{n}{2}\rfloor,\lfloor\frac{n}{3}\rfloor)$</span></p> Answer: <p>Q1: As I understand it, you're asking for the probability that, in the table we get, every two neighbours (vertically or horizontally) add up to 10.</p> <p>If (1,1) contains a number <span class="math-container">$a$</span>, then (1,2) and (2,1) must contain <span class="math-container">$(10-a)$</span>, then (1,3), (2,2) and (3,1) must contain <span class="math-container">$10 - (10-a)$</span> = <span class="math-container">$a$</span> again, and so on, forming a chessboard pattern. As you can see, the number in (1,1) determines the whole table, so there are 9 total possible tables that satisfy our condition (one for each starting number from 1 to 9).</p> <p>Then the probability that every two neighbours in a random table add up to 10 is 9 over the total number of possible tables, which is <span class="math-container">$9^{(n^2)}$</span>.</p> <p>Thus, for <span class="math-container">$n \geq 2$</span>: <span class="math-container">$$P = \frac{9}{9^{(n^2)}} = 9^{(1-n^2)}$$</span></p> <hr /> <p>Q2: I am going to attempt to calculate the probability for <span class="math-container">$n = 3$</span> by hand.</p> <p>To start off, I will represent each table cell with a dot, and connect two neighbouring dots along the cardinal directions if they sum up to 10, and not connect them if they do not.</p> <p><a href="https://i.sstatic.net/iEV5O.jpg" rel="nofollow noreferrer">See this picture for an example</a>.</p> <p>So we get a way to represent the connections in any table visually. Notice also that <a href="https://i.sstatic.net/aMaAO.jpg" rel="nofollow noreferrer">this pattern is impossible</a>. 
Thus, as long as the dots arranged in a 3x3 table do not contain such patterns, they represent the connections of an existing 3x3 table of numbers.</p> <p>Now, like in Q1 I will calculate the number of tables that satisfy our condition and then divide it by the total number of possible tables (<span class="math-container">$9^{(3^2)}$</span>).</p> <p>The central dot can be connected to its neighbours in 15 different ways: <a href="https://i.sstatic.net/TjrQ0.jpg" rel="nofollow noreferrer">see this picture</a> (I've grouped them into 5 categories by rotational symmetry).</p> <p>I've selected one representative from each category and constructed <a href="https://i.sstatic.net/sFBRg.jpg" rel="nofollow noreferrer">all possible variants that satisfy our condition with these representatives as central dots</a>. Notice that to get all the variants that satisfy our condition it's enough to rotate all the members of the groups 1, 2, 4 by 90° thrice, and of the group 3: by 90° once. (Because of the rotational symmetry of these groups.)</p> <p>Now I will calculate the number of actual tables (populated with digits from 1 to 9) that satisfy our condition AND are represented by one of these variants. <a href="https://i.sstatic.net/DSGM6.jpg" rel="nofollow noreferrer">This picture will be helpful</a>:</p> <ul> <li>If there is 1 piece (i.e. there exist paths from any dot to any other dot) - there are 9 ways to number this piece. Thus, there are exactly 9 possible tables that can be represented by such a connection of dots.</li> <li>If there are 2 pieces - then they are definitely &quot;touching&quot; or are &quot;adjacent to&quot; each other (i.e. there exist two dots that are neighbours and that belong to different pieces), so while the first piece can be numbered in 9 ways, only 8 ways are left to number the second piece in order for it to remain separate from the first one. 
So there are <span class="math-container">$9\cdot8 = 72$</span> possible tables.</li> <li>If there are 3 pieces and all of them &quot;touch&quot; each other - then there are 9 ways to number the first piece, 8 left for the second one and 7 for the third. So there are <span class="math-container">$9\cdot8\cdot7 = 504$</span> possible tables.</li> <li>If there are 3 pieces but 2 of them do not &quot;touch&quot; each other - then there are 9 ways to number the first piece, 8 left for the second one and 8 for the third. So there are <span class="math-container">$9\cdot8\cdot8 = 576$</span> possible tables.</li> <li>If there are 4 pieces and 2 of them do not &quot;touch&quot; each other - then by the same logic there are <span class="math-container">$9\cdot8\cdot7\cdot7 = 3528$</span> possible tables.</li> </ul> <p>Now, summing it all up and multiplying by <span class="math-container">$4$</span> (for groups 1, 2, 4) and by <span class="math-container">$2$</span> (for group 3):</p> <p>Group 1: <span class="math-container">$\, 4 \cdot (3528 \cdot 5 + 504 \cdot 12 + 72 \cdot 4) = 95,904$</span></p> <p>Group 2: <span class="math-container">$\, 4 \cdot (504 \cdot 4 + 72 \cdot 4) = 9,216 $</span></p> <p>Group 3: <span class="math-container">$\, 2 \cdot (576 \cdot 7) = 8,064 $</span></p> <p>Group 4: <span class="math-container">$\, 4 \cdot (72 \cdot 3) = 864 $</span></p> <p>Group 5: <span class="math-container">$\, 9 $</span></p> <p>Then the total number of 3x3 tables that satisfy our condition is:</p> <p><span class="math-container">$$ 95,904 + 9,216 + 8,064 + 864 + 9 = 114,057 $$</span></p> <p>Finally,</p> <p><span class="math-container">$$ P = \frac {114,057} {9^{(3^2)}} \approx 0.0002944 \approx 0.03\% $$</span></p> <p>I feel like this probability will dwindle down to <span class="math-container">$0$</span> quite fast with growing n. 
The probability for <span class="math-container">$n = 2$</span> that I've got by a similar (but easier) process is <span class="math-container">$ \frac {72 + 72 + 9} {9^4} \approx 0.02332 \approx 2.3\% $</span></p> <hr /> <p>P.S.</p> <ul> <li>I did not check this with a computer program so there might be some mistakes, but I tried not to make any.</li> <li>Where are these questions from? It might be easier to find a solution if I know the topic of this problem (if it is from a textbook).</li> <li>This method of representing tables with connected dots generalizes to larger tables, but then you also need to check that 9 numbers is enough for assigning values to different touching pieces. (For example, it may be impossible to construct a table corresponding to some specific arrangement of dots with 10 different pieces all &quot;touching&quot; each other. By <a href="https://en.wikipedia.org/wiki/Pigeonhole_principle" rel="nofollow noreferrer">Pigeonhole principle</a> there simply may be not enough numbers from 1 to 9 to prevent all the neighbouring cells of these pieces from summing up to 10, so some of the pieces will HAVE TO BE a single piece.)</li> </ul>
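For $n = 2$ the whole space is only $9^4 = 6561$ tables, so the hand count above can be checked by brute force (my own script; it reproduces the $72+72+9 = 153$ tally):

```python
from itertools import product

def sums10(x, y):
    return x + y == 10

count = 0
for a, b, c, d in product(range(1, 10), repeat=4):
    # 2x2 table:  a b
    #             c d    (neighbour pairs: a-b, a-c, b-d, c-d)
    if (sums10(a, b) or sums10(a, c)) and (sums10(a, b) or sums10(b, d)) \
       and (sums10(a, c) or sums10(c, d)) and (sums10(b, d) or sums10(c, d)):
        count += 1

print(count, count / 9**4)  # 153, ~0.0233
```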
https://math.stackexchange.com/questions/4156924/probability-of-random-numbers-in-a-table-summing-to-10
Question: <p>Imagine this. Robert goes on vacation. Upon his arrival at the destination, he is unexpectedly greeted by a good friend of him, Jeremy. Weirdly enough, they happened to go on the same trip, be neighbors at the site, at the same time and place, and for the same duration. Keep in mind, none of them knew of the other's plan. I know the math is close to impossible, for the amount of details to account for is just overwhelming. But, I think it would be interesting if we can find a sort of approximation for this kind of scenario (or have some sort of universal probability). Let's say this: Robert and Jeremy are best friends, there are about 300 houses for rental, they went on vacation for a whole week and it's summer.</p> Answer:
https://math.stackexchange.com/questions/4191840/how-would-someone-go-about-calculating-the-probability-of-the-unlikely-scenario
Question: <p>My wife left on a business trip this morning. 20 people from the same company caught two consecutive flights. Each person checked in independently, yet my wife ended up sitting next to the same colleague on both flights!</p> <p>What are the odds?</p> <p>Assume both aeroplanes had 150 seats, in 3+3 configuration, in 25 rows. It's not exactly right but will do for the purposes of the exercise. Assume also that all other passengers checked in independently, so there are no couples choosing or being assigned seats next to each other, thus changing the odds. This isn't right either, but will also do for the purposes of the exercise.</p> <p>My wife and colleague were sitting next to each other, not across the aisle from each other.</p> Answer: <p>Your wife has probability $\frac23$ to sit in a chair that has only $1$ chair next to it, and has probability $\frac13$ to sit in a chair that has $2$ other chairs next to it. </p> <p>The probability that at the first flight your wife had no colleagues sitting next to her is:$$p_0:=\frac23\frac{\binom{148}{19}\binom10}{\binom{149}{19}}+\frac13\frac{\binom{147}{19}\binom{2}{0}}{\binom{149}{19}}=\frac23\frac{130}{149}+\frac13\frac{130\cdot129}{149\cdot148}=\frac{55250 }{66156}\approx0.835147228 $$</p> <p>The probability that at the first flight your wife had $2$ colleagues sitting next to her is: $$p_2:=\frac13\frac{\binom{147}{17}\binom22}{\binom{149}{19}}=\frac13\frac{19\cdot18}{149\cdot148}=\frac{342}{66156}\approx0.005169599 $$</p> <p>The probability that at the first flight your wife had exactly $1$ colleague sitting next to her is:$$p_1:=1-p_0-p_2=\frac{10564}{66156}\approx0.159683173 $$</p> <p>The probability that at the first flight your wife had a colleague sitting next to her, and that this same person was sitting next to her the second flight 
is:$$p_1\times\left(\frac23\frac1{149}+\frac13\frac2{149}\right)+p_2\times\left(\frac23\frac2{149}+\frac13\left(1-\frac{147}{149}\frac{146}{148}\right)\right)=$$$$\frac{10564}{66156}\frac{4}{447}+\frac{342}{66156}\frac{1182}{66156}=\frac{6658132 }{4376616336 }\approx0.001521$$</p>
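The arithmetic above is easy to mistype, so here is an exact recomputation in rational arithmetic (the structure follows the answer; the variable names are mine):

```python
from fractions import Fraction as F

p0 = F(2, 3) * F(130, 149) + F(1, 3) * F(130 * 129, 149 * 148)  # no colleague adjacent
p2 = F(1, 3) * F(19 * 18, 149 * 148)                            # two colleagues adjacent
p1 = 1 - p0 - p2                                                # exactly one colleague

same_neighbour = (
    p1 * (F(2, 3) * F(1, 149) + F(1, 3) * F(2, 149))
    + p2 * (F(2, 3) * F(2, 149) + F(1, 3) * (1 - F(147, 149) * F(146, 148)))
)
print(float(same_neighbour))  # ≈ 0.001521
```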
https://math.stackexchange.com/questions/2903649/what-are-the-odds-of-sitting-next-to-the-same-person-on-two-flights
Question: <p>How can you solve conditional probability without a formula - simply by logic and intuition?</p> <p>For example, this problem has been circulated here and we all know the formal way to do it. Could anyone show how to logically solve it?</p> <p>At a workplace 1% of the staff were injured during a year. 60% of all injured were men. 30% of the employees were women. Is it male or female employees that have the bigger risk of getting injured?</p> Answer: <p>Women make up $30\%$ of the workforce, but sustain $40\%$ of the injuries. So women get injured at a higher rate.</p>
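The logic can be made concrete with an assumed headcount (any number works; 1000 is purely illustrative):

```python
staff = 1000                  # assumed size; the rates don't depend on it
injured = 0.01 * staff        # 10 injuries in the year
women = 0.30 * staff          # 300 women
men = staff - women           # 700 men

women_rate = (0.40 * injured) / women  # 40% of injuries / 30% of staff
men_rate = (0.60 * injured) / men      # 60% of injuries / 70% of staff
print(women_rate, men_rate)            # ≈ 0.0133 vs ≈ 0.0086
```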
https://math.stackexchange.com/questions/2904005/solving-conditional-probability-without-formula
Question: <p>If there is a 1 in 8 chance of an event and there is a further 25% reduction in this event happening, what is the answer expressed in terms of a 1 in X chance?</p> <p>My first calculation I worked out as 1 in 32: 0.125 x 0.25 = 0.03125.</p> <p>Then 1 in 12 (more guesswork), and then 3 in 32: 0.125 x 0.25 = 0.03125; 0.125 - 0.03125 = 0.09375.</p> <p>But now I just have no idea. Any thoughts?</p> <p>Any help would be appreciated!</p> Answer: <p>The probability starts out as $\frac 18$. If we reduce that by $25\%$ we multiply it by $\frac 34,$ getting a probability of $\frac 3{32}$. If you want to express that as $1$ in something it is $1$ in $\frac {32}3=10\frac 23$. There is no whole number answer.</p>
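In exact arithmetic, the whole answer is a two-line check:

```python
from fractions import Fraction

p = Fraction(1, 8) * Fraction(3, 4)  # a 25% reduction multiplies by 3/4
print(p, 1 / p)  # 3/32 32/3  -> "1 in 10 2/3", no whole number
```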
https://math.stackexchange.com/questions/2907512/1-in-8-chance-of-an-event-decreases-by-25-what-is-it
Question: <blockquote> <p>Suppose we have a bag containing $m$ white and $n$ black caramels. We pick a caramel and if it is white, we eat it, otherwise we put it back in the bag. If we take out $r$ black caramels successively, then we believe that we have eaten all the white caramels and we throw away the bag. What is the distribution of the number of white caramels thrown away?</p> </blockquote> <p><strong>Attempt</strong> Let $W, B$ be the events we pick a white, a black caramel, respectively. The scenario in which we throw away $m-w$ white caramels, where $w=0,1,2,\ldots,m$, may be described as:</p> <p>$$B^{[r-1]}W B^{[r-1]} W \ldots B^{[r-1]}WB^k$$ </p> <p>($w$ appearances of $W$) where $B^{[r-1]}=B^0$ or $B$ or $B^2$ or $\ldots$ or $B^{r-1}$ </p> <p>and $B^k=\underbrace{BB\ldots B}_{k ~times}$ for $k=0,1,\ldots,r-1.$</p> <p>There are $0+1+\ldots+(r-1)=(r-1)r/2$ possible combinations for $B$ each time so the desired probability is:</p> <p>$$\frac{(r-1)r}{2}\frac{n}{n+m}\cdot \frac{m}{n+m}\cdot \frac{(r-1)r}{2}\frac{n}{n+m-1} \cdot\frac{m-1}{n+m-1}\,\ldots \frac{(r-1)r}{2}\frac{n}{n+m-(w-1)} \cdot\frac{m-(w-1)}{n+m-(w-1)}\cdot \bigg(\frac{n}{n+m-w}\bigg)^r.$$</p> <p>Is my solution correct? </p> <p>Thanks in advance.</p> Answer: <p>The probability that all of white caramels will be thrown away is $(\frac{n}{m+n})^r$.</p> <p>The probability that exactly one will be eaten is $(1-(\frac{n}{m+n})^r)(\frac{n}{m+n-1})^r.$</p> <p>The probability that exactly two will be eaten is $(1-(\frac{n}{m+n})^r)(1-(\frac{n}{m+n-1})^r)(\frac{n}{m+n-2})^r.$</p> <p>In general, letting $p_k = (\frac{n}{m+n-k})^r$, then the probability that $k$ white caramels are eaten is $(1-p_0)\cdot (1-p_1)\cdots (1-p_{k-1})\cdot p_k$, for $0\le k\le m$.</p>
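The answer's recursion is easy to tabulate, and since $p_m = (n/n)^r = 1$ the probabilities automatically sum to $1$. A quick sketch (the function name and the example parameters are mine):

```python
def eaten_dist(m, n, r):
    """probs[k] = P(exactly k white caramels eaten), k = 0..m, so that
    m - k whites are thrown away.  p_k = (n / (m + n - k))**r is the
    chance of r straight blacks once k whites have been eaten."""
    probs, survive = [], 1.0
    for k in range(m + 1):
        p_k = (n / (m + n - k)) ** r
        probs.append(survive * p_k)
        survive *= 1 - p_k
    return probs

dist = eaten_dist(m=6, n=4, r=3)
print(sum(dist))  # 1.0 up to floating-point error
```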
https://math.stackexchange.com/questions/2907888/distribution-of-white-caramels-thrown-away-from-a-bag-of-white-and-black-caramel
Question: <p>Let X and Y be jointly continuous with joint probability density function<br> $$f_{X,Y}(x,y)=\frac{1}{x},0\leq y\le x\le1$$<br> Find the pdf of $Z=X+Y$ </p> <p>Here is my solution:<br> $$F_Z(z)=P(Z\leq z)=P(X+Y\leq z)$$<br> $$=\int_{-\infty}^{\infty}P(X+Y\leq z|X=x)f_X(x)dx$$<br> $$=\int_{-\infty}^{\infty}P(Y\leq z-x)f_X(x)dx$$<br> $$=\int_{-\infty}^{\infty}F_Y(z-x)f_X(x)dx$$<br> since $f_X(x)=1$, $f_Y(y)=-ln(y)$, and $F_Y(y)=y-yln(y)$,<br> $$=\int_{0}^{1}(z-x)-(z-x)ln(z-x)dx$$ $$=\frac{z}{2}-\frac{1}{2}+\frac{1}{2}\biggl((z-1)^2ln(z-1)-zln(z)\biggl)$$ So here is the answer I got:<br> $$f_Z(z)=\frac{d}{dz}F_Z(z)$$ $$=(z-1)ln(z-1)+\frac{z}{2}-\frac{1}{2}ln(z)-\frac{1}{2}$$</p> <p>I don't think my answer is correct since $0\leq z\leq2$, when I plug 2 into $F_Z(z)$, it doesn't show 1. I'm not sure which part I did wrong.</p> Answer: <p>Notice that $x,y$ are <em>not</em> independent, so stick with the joint functions. Use the Jacobian transformation.</p> <p>Always keep your eye on the supports.</p> <p>$$\begin{split}f_Z(z) &amp;=\int_\Bbb R f_{X,Z}(x,z)\mathsf d x\\ &amp;= \int_\Bbb R f_{X,Y}(x,z-x) \begin{Vmatrix}\dfrac{\partial (x,z-x)}{\partial (x,z)}\end{Vmatrix}\mathsf d x \\ &amp;=\int_\Bbb R \dfrac 1{x}\mathbf 1_{0\leq (z-x)\leq x\leq 1}\mathsf d x~\\ &amp;= \int_{\max(0,z/2)}^{\min(1,z)} \dfrac 1x \mathbf 1_{z\in(0;2]}\mathrm d x\\ &amp;= \mathbf 1_{z\in(0;1)}\int_{z/2}^{z} \dfrac 1x \mathrm d x+\mathbf 1_{z\in[1;2]}\int_{z/2}^{1} \dfrac 1x \mathrm d x\end{split}$$</p>
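Evaluating the last two integrals gives $f_Z(z)=\ln 2$ for $0<z<1$ and $f_Z(z)=\ln(2/z)$ for $1\le z\le 2$. A numerical check that this density integrates to $1$ (my own verification, not part of the answer):

```python
import math

def f_Z(z):
    """Density obtained from the integrals above."""
    if 0 < z < 1:
        return math.log(2)
    if 1 <= z <= 2:
        return math.log(2 / z)
    return 0.0

# midpoint Riemann sum over (0, 2)
N = 100_000
h = 2 / N
total = sum(f_Z((i + 0.5) * h) for i in range(N)) * h
print(round(total, 6))  # 1.0
```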
https://math.stackexchange.com/questions/2908104/find-the-pdf-of-z-xy
Question: <p>If someone gets $13$ mails over a period of $5$ weekdays, what is the probability that he gets at least one mail on each day?</p> Answer: <p>HINT - I would say:</p> <p>If the number of solutions of the equation $i_1+i_2+i_3+i_4+i_5 = 13$</p> <ul> <li><p>where $i_1,i_2,i_3,i_4,i_5\in \{1,2,3,\cdots,13\}$ is $\omega$,</p></li> <li><p>where $i_1,i_2,i_3,i_4,i_5\in \{0,1,2,3,\cdots,13\}$ is $\Omega$,</p></li> </ul> <p>then the sought probability is $P=\frac{\omega}{\Omega}$</p> <p>[I would say: $\quad \omega=495,\quad \Omega=2380$]</p>
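Both counts in the hint can be verified by exhaustive enumeration (the counting model, where every composition is taken as equally likely, is the hint's, not necessarily the only reading of the problem):

```python
from itertools import product

solutions = [c for c in product(range(14), repeat=5) if sum(c) == 13]
Omega = len(solutions)                            # non-negative solutions
omega = sum(1 for c in solutions if min(c) >= 1)  # strictly positive ones

print(omega, Omega)  # 495 2380, matching C(12,4) and C(17,4)
```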
https://math.stackexchange.com/questions/2912079/what-is-the-probability-that-he-gets-at-least-one-mail-in-each-day
Question: <p>Imagine I have a real random variable $X$ with some distribution (continuous, discrete or continuous with atoms)</p> <p>Now imagine I have i.i.d. copies $X_1,...,X_n$, all independent and identically distributed as $X$</p> <p>My claim is:</p> <p>$$\mathbb{P}(X_2&gt;X_1)=\mathbb{P}(X_2&lt;X_1)$$ My second claim is the following: if I order them by size, so that $X_{(1)}&lt;X_{(2)}&lt;\ldots&lt;X_{(n)}$ and I define the interval $I_n=[X_{(1)},X_{(n)}]$, then I claim:</p> <p>$$\mathbb{P}(X_{n+1}&lt;X_{(1)})=\mathbb{P}(X_{n+1}&gt;X_{(n)})$$</p> <p>So the probability that the $n+1$-th number exceeds the interval on the left equals the probability it exceeds on the right</p> <p>I guess the first one is true, but the second one not;</p> <p>E.g. Assume X can take the values 0 and 1; and assume $n=3$; Then</p> <p>$$\mathbb{P}(X_3&gt;{X_1,X_2})=\mathbb P (X_3=1)\mathbb P (X_2=0)\mathbb P (X_1=0)=\mathbb P (X=1)\mathbb P (X=0)\mathbb P (X=0)$$ but also $$\mathbb{P}(X_3&lt;{X_1,X_2})=\mathbb P (X_3=0)\mathbb P (X_2=1)\mathbb P (X_1=1)=\mathbb P (X=0)\mathbb P (X=1)\mathbb P (X=1)$$</p> <p>which is generally not the same. But what I am wondering is whether there are simple conditions under which it would become true</p> Answer: <p>Let $Y_n =\min(X_1,X_2 \cdots X_n)$, and let $y_n=\sum_{i=1}^n[X_i=Y_n]$ count the number of elements that attain that minimum. Analogously, let $Z_n$ and $z_n$ be the maximum and maximum-count.</p> <p>Then, by symmetry $P( X_{n} = Y_n \wedge y_n=1)=P(X_n=Y_n) P(y_n=1 \mid X_n=Y_n)=\frac{1}{n} P(y_n=1)$</p> <p>Then, essentially you are asking if $P(y_n=1)=P(z_n=1)$ , that is, if the probability of having a single maximum equals the probability of having a single minimum. This is not true in general.</p> <p>It's true for a continuous variable (continuous CDF) because in that case the probability of having a single extremum equals $1$. It's also true for a symmetric (around the median) random variable. 
I'm not sure if there's a simple characterization for its CDF to be true in general.</p> <p>Added:</p> <p>Let $F(x) = P(X \le x)$ be the CDF, and let $p(x)= F(x) - F(x^-)$. </p> <p>Then the probability of having a single minimum in $n+1$ realizations equals</p> <p>$$A=p(y_{n+1}=1)= \int \left(\frac{1-F(x)}{1-F(x^-)}\right)^n dF(x)= \int \left(1-\frac{p(x)}{1+p(x)-F(x)}\right)^n dF(x) \tag{1}$$</p> <p>Similarly, for the maximum:</p> <p>$$B=p(z_{n+1}=1)= \int \left(\frac{F(x^-)}{F(x)}\right)^n dF(x) = \int \left(1- \frac{p(x)}{F(x)}\right)^n dF(x) \tag{2}$$</p> <p>If $F(x)$ has finitely many discontinuities at $x_i$, $i=1,2\cdots k$ (perhaps the result is also valid for more general settings), we can write $F(x)=F_c(x) + \sum_i p(x_i)u(x-x_i)$ where $F_c(x)$ is continuous and $u(\cdot)$ is the unit-step function. Then</p> <p>$$\begin{align} A &amp;=\sum_i p(x_i) \left(1-\frac{p(x_i)}{1+p(x_i)-F(x_i)}\right)^n +F_c(+\infty)\\ &amp;=1- \sum_i p(x_i)\left[1- \left(1-\frac{p(x_i)}{1+p(x_i)-F(x_i)}\right)^n \right]\tag{3} \end{align} $$</p> <p>$$\begin{align} B&amp;=\sum_i p(x_i) \left(1- \frac{p(x_i)}{F(x_i)}\right)^n +F_c(+\infty)\\ &amp;=1- \sum_i p(x_i)\left[1- \left(1- \frac{p(x_i)}{F(x_i)}\right)^n \right] \tag{4} \end{align}$$</p> <p>Of course, $A=B=1$ if $F(x)$ is continuous. Also, $A=B$ if the probability (both the continuous and the discrete parts!) is symmetric. There's not much more to say in general, I think...</p>
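A concrete asymmetric example makes $P(y_n=1)\neq P(z_n=1)$ explicit. For $X\sim\text{Bernoulli}(p)$ with $P(X=1)=p$, a single minimum among $n$ draws means exactly one $0$, and a single maximum means exactly one $1$ (this worked example is mine, added for illustration):

```python
from fractions import Fraction

p = Fraction(1, 3)  # P(X = 1); asymmetric unless p = 1/2
n = 3

single_min = n * (1 - p) * p ** (n - 1)  # exactly one 0 among n draws
single_max = n * p * (1 - p) ** (n - 1)  # exactly one 1 among n draws
print(single_min, single_max)  # 2/9 vs 4/9
```

The two probabilities coincide only when $p = 1/2$, i.e. when the distribution is symmetric, matching the answer's conclusion.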
https://math.stackexchange.com/questions/2913017/is-it-true-that-the-probability-for-both-events-is-always-equal-if-yes-how-to
Question: <p>I am struggling with this interview prep question... SOS</p> <p>Two players pick cards from a standard 52 card deck without replacement: the 1st player picks a card, then the 2nd, then again the 1st, then the 2nd, etc. They stop once somebody picks a king (of any suit); the player who picks the king wins. What is the probability that the first player wins? The second player?</p> <p>Which player has more chances of winning: the first or the second? It is sufficient to set up the correct formula for the probabilities, no need to evaluate it numerically. The last question can be answered without numerical evaluation, by analysis of the formulas</p> <p><strong>What is the probability that the first player wins? the second player?</strong> couldn't be far off from 50% for both... right?</p> <p><strong>Which player has more chances of winning: the first or the second?</strong> Intuition tells me the first person but I'm not sure if that follows or how to set up a formula -- EDIT: for this I'm thinking certainly the first player because they have one more opportunity to win by choosing a king first</p> Answer: <p>In a given round of two draws, you start with $n$ cards of which $4$ are kings. The first player wins with probability $\frac 4n$. The second player wins with probability $\frac {n-4}n\cdot \frac 4{n-1}=\frac 4n\cdot\frac {n-4}{n-1}$ because they need the first player not to draw a king and there is one less card in the pack for their draw. As the last factor is less than $1$, on each round the first player has a greater chance to win than the second, so the first player has a greater chance to win overall. </p> <p>I made a spreadsheet to compute the probability. I find the first player wins about $51.98\%$ of the time.</p>
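The spreadsheet figure can be reproduced exactly by summing over the position of the first king (my own implementation of the round-by-round setup above, in rational arithmetic):

```python
from fractions import Fraction

p_first = Fraction(0)
no_king_yet = Fraction(1)    # P(no king among the first i-1 cards)
for i in range(1, 50):       # the first king is at position 1..49
    p_here = no_king_yet * Fraction(4, 52 - (i - 1))
    if i % 2 == 1:           # odd positions belong to player 1
        p_first += p_here
    no_king_yet *= Fraction(48 - (i - 1), 52 - (i - 1))

print(p_first, float(p_first))  # 433/833 ≈ 0.5198
```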
https://math.stackexchange.com/questions/2916365/two-players-pick-cards-from-standard-52-card-deck-without-replacement
Question: <p>This is related to my previous question <a href="https://math.stackexchange.com/questions/2920891/find-the-number-of-ways-of-constructing-8-using-three-distinct-integers-from">here</a>.</p> <blockquote> <p>The numbers $0, 1, 2, 3, \ldots , 8$ are written on individual cards and placed in a bag. Three cards are chosen at random. What is the probability that their sum is $8$? </p> </blockquote> <p>The more I read this question, the more I am thinking that the question is ambiguous. </p> <p><strong>My reasoning:</strong></p> <p>In general for a probability scenario, the ordering <em>does</em> matter because the more ways an event can occur (more orderings), the more likely it is to occur. But in the question posed above, it is not clear if the cards are chosen one-by-one, or if they are chosen all in one go, or if that even matters. If ordering is taken into account, I get an answer of $1/4$ but if ordering is not taken into account, I get an answer of $1/24$. </p> <p><strong>My question:</strong></p> <p>Is the question ambiguous? What would the answer be, in this case?</p> Answer: <p>The problem is unambiguous. It does not matter whether your three numbers are chosen "all in one go" or "one-by-one," at the end of the day you have three random numbers either way. You are hung up on whether or not order matters. The thing is this: you can choose whether or not order matters when you do your computation, and as long as you are consistent, you will get the same answer. </p> <p>Essentially, if you do your computations while caring about the order the numbers are drawn, then you are keeping track of extra information. There will be more possibilities in the same space, but also proportionally more elements in the event $\{\text{sum of numbers }=8\}$, so the result is the same.</p> <ul> <li><p>If order does not matter, then the sample space consists of all $\binom{9}3=84$ unordered subsets of $\{0,1,2,\dots,8\}$. 
The event that the sum is $8$ consists of just $5$ sets, namely $$ \{0,1,7\},\{0,2,6\},\dots,\{1,3,4\} $$ Therefore, the probability is $\frac{5}{84}$.</p></li> <li><p>If order does matter, then the sample space consists of all $9\cdot8\cdot 7=504$ ordered lists of three distinct elements of $\{0,1,\dots,8\}$. The event that the sum equals $8$ consists of the $5\cdot 3!$ possible permutations of one of the $5$ sets listed before, like $$ (0,1,7),(0,7,1),\dots,(7,1,0),(0,2,6),(0,6,2),\dots $$ Therefore, the probability is $\frac{30}{504}=\frac{5}{84}$.</p></li> </ul>
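Both counts in the answer are small enough to verify by brute force. The sketch below (plain Python, standard library only) enumerates both the unordered and the ordered sample spaces and confirms that the two conventions give the same probability, $5/84$:

```python
from fractions import Fraction
from itertools import combinations, permutations

cards = range(9)  # the cards 0, 1, ..., 8

# Order does not matter: the 84 unordered 3-subsets
subsets = list(combinations(cards, 3))
good_subsets = [s for s in subsets if sum(s) == 8]
p_unordered = Fraction(len(good_subsets), len(subsets))

# Order matters: the 504 ordered 3-tuples of distinct cards
tuples = list(permutations(cards, 3))
good_tuples = [t for t in tuples if sum(t) == 8]
p_ordered = Fraction(len(good_tuples), len(tuples))

print(len(good_subsets), len(good_tuples), p_unordered, p_ordered)
```

There are $5$ good subsets and $5 \cdot 3! = 30$ good tuples, and both ratios reduce to $5/84$.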
https://math.stackexchange.com/questions/2920913/is-the-following-probability-question-ambiguous
Question: <p>There exist 7 doors numbered in order from 1 to 7 (going from left to right). A mouse is initially placed at center door 4. The mouse can only move 1 door at a time to either adjacent door and does so, but is twice as likely to move to a lower numbered door than to a higher numbered door each time it moves 1 door. There are cats waiting at doors 1 and 7 that will eat the mouse immediately after the mouse moves to either of those 2 doors.</p> <p>So for example, the mouse starts at door 4. He could then move to door 3, then to door 2, then back to 3, then back to 2, then to door 1 where he gets eaten. That counts as 5 moves total. Skipping doors is not allowed. </p> <p>So there are 2 questions I have regarding this:</p> <p>1) What is the expected number of moves before the mouse gets eaten? (do not count the initial start at door 4 as a move but count any final move to doors 1 or 7 and any "intermediate" moves between those 2 states).</p> <p>2) What is the probability that the mouse will survive for 100 or more moves?</p> Answer: <p>I'll take a shot at this........</p> <p>1) The eventual fateful outcome occurs when the total number of moves, lower versus higher, differs by 3.</p> <p>So, a way to look at this as an <span class="math-container">$E(x) = n\cdot p$</span> type problem is to sum the <span class="math-container">$(n\cdot p)$</span>s for all possible outcomes.</p> <p>This will be:</p> <p><span class="math-container">$3(\frac{1}{3})+5(\frac{2}{9})+7(\frac{4}{27})+9(\frac{8}{81})+ .....\text{etc}$</span> which is an infinite arithmetico-geometric series whose infinite sum is: <span class="math-container">$$S = \frac{dg_2}{(1-r)^2} + \frac{a}{1-r} = \frac{2\cdot (\frac{1}{3}\cdot \frac{2}{3})}{\left(\frac{1}{3}\right)^2} + \frac{3\cdot \frac{1}{3}}{\frac{1}{3}} = 2\cdot 2 + 3 = 7$$</span></p> <p>2) <span class="math-container">$$P(n\ge 100) = 1 - P(n&lt;100)$$</span></p> <p><span class="math-container">$$P(n&lt;100) = S_n = 
\frac{1}{3}+\frac{2}{9}+\frac{4}{27}+ .......+\frac{2^{n-1}}{3^n}$$</span> </p> <p>This turns out to be a geometric series, where <span class="math-container">$a_1 = \frac{1}{3}, r = \frac{2}{3}$</span> and <span class="math-container">$n = 49$</span> (odd from <span class="math-container">$3$</span> to <span class="math-container">$99$</span>), a different <span class="math-container">$n$</span> from the <span class="math-container">$n$</span> moves.</p> <p>An example calculation for the <span class="math-container">$3$</span>rd term is: <span class="math-container">$9(\frac{2}{3})^5(\frac{1}{3})^2 + 9(\frac{1}{3})^5(\frac{2}{3})^2 = \frac{4}{27}$</span></p> <p><span class="math-container">$$P(n\ge 100) = 1 - \frac{\frac{1}{3}(1-(\frac{2}{3})^{49})}{1-(\frac{2}{3})}$$</span></p> <p><span class="math-container">$$P(n\ge 100) = 1 - .9999999976 = 2.4\cdot 10^{-9}$$</span></p>
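The expected value of $7$ can be confirmed independently by first-step analysis of the underlying random walk. The sketch below sets up the equations $E_d = 1 + \frac{2}{3}E_{d-1} + \frac{1}{3}E_{d+1}$ for the interior doors (with $E_1 = E_7 = 0$) and solves the small linear system exactly over the rationals; indices $0..4$ stand for doors $2..6$:

```python
from fractions import Fraction

# First-step analysis: E[d] = expected moves to absorption from door d,
#   E[d] = 1 + (2/3) E[d-1] + (1/3) E[d+1]   for d = 2..6,  E[1] = E[7] = 0.
n = 5
A = [[Fraction(0)] * n for _ in range(n)]
b = [Fraction(1)] * n
for row, door in enumerate(range(2, 7)):
    A[row][row] = Fraction(1)
    if door - 1 >= 2:
        A[row][door - 3] -= Fraction(2, 3)   # jump to the lower-numbered door
    if door + 1 <= 6:
        A[row][door - 1] -= Fraction(1, 3)   # jump to the higher-numbered door

# Gaussian elimination with pivoting, then back-substitution (exact rationals)
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        b[r] -= f * b[col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]

E = [Fraction(0)] * n
for r in reversed(range(n)):
    E[r] = (b[r] - sum(A[r][c] * E[c] for c in range(r + 1, n))) / A[r][r]

print(E[2])  # expected number of moves starting from door 4
```

Starting from door 4 (index 2) the exact answer comes out as the integer $7$, matching the series computation above.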
https://math.stackexchange.com/questions/2924838/cat-mouse-probability-question
Question: <p>I have collected data on the time duration between consecutive occurrences of a particular event ("success"), and the amount of time between consecutive "successes" (in days) seems to be distributed using a Gamma distribution. Intuitively, this is not a Bernoulli trial because the probability of a "success" in a given day seems to increase the longer it has been since the previous "success". I would like to know how to determine the probability that the next "success" will happen <strong>today</strong>. That is, each day, it is known how long it has been since the last "success", but I would like to determine the probability that (given the knowledge of the last "success") the next "success" occurs <strong>today</strong>.</p> <p>I would appreciate any insight into solving this problem.</p> <p><strong>-- EDIT --</strong></p> <p>I need to determine the probability that the next "success/arrival" occurs between n and n+1 days, i.e., the interval [n,n+1), after the last known "success", i.e., at day 0, <strong>assuming that</strong> no "successes" have occurred in the first n days, i.e., in the interval [0,n).</p> <p>The collected data is the number of days between consecutive "successes", and this "arrival/waiting time" is Gamma distributed with parameters k and θ. Is a non-homogeneous Poisson process the correct method? If so, how can I determine the Poisson rate λ(t) from the Gamma distribution? If I understand the Poisson process correctly, then I only want to know P{N[k,k+1)>0 | N[0,k)=0}.</p> <p>Thoughts?</p> Answer:
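No answer was posted, but the quantity asked for has a direct expression: if the waiting time between successes has CDF $F$, the probability that the next success lands in $[n, n+1)$ given none in $[0, n)$ is $\big(F(n+1)-F(n)\big)/\big(1-F(n)\big)$. The sketch below assumes an integer shape $k$ (an Erlang distribution), so the Gamma CDF has a closed form; for non-integer $k$ one would use a library's regularized incomplete gamma function instead. Function names are mine:

```python
import math

def erlang_cdf(t, k, theta):
    """CDF of Gamma(shape=k, scale=theta) for integer k (an Erlang distribution)."""
    if t <= 0:
        return 0.0
    x = t / theta
    return 1.0 - math.exp(-x) * sum(x**i / math.factorial(i) for i in range(k))

def prob_success_today(n, k, theta):
    """P(next success in [n, n+1) | no success in [0, n))."""
    surv = 1.0 - erlang_cdf(n, k, theta)
    return (erlang_cdf(n + 1, k, theta) - erlang_cdf(n, k, theta)) / surv
```

For $k=1$ this reduces to the memoryless exponential case (the probability does not depend on $n$), while for $k>1$ it grows with $n$, matching the intuition in the question that a success becomes more likely the longer the wait.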
https://math.stackexchange.com/questions/2925412/probability-of-next-occurence
Question: <p>Bill sat the entrance exam for a certain gymnasium. <span class="math-container">$602$</span> students took part and were ranked after the exam, and the first <span class="math-container">$108$</span> students are offered places. Each student who is offered a place declines to enter with a small probability <span class="math-container">$p=0.02$</span>, the same for all students and independent of the rest. Bill is at position <span class="math-container">$113$</span>, so he will be accepted if at least <span class="math-container">$5$</span> students from the first <span class="math-container">$112$</span> do not enter the gymnasium. I want to give an exact expression for the probability <span class="math-container">$q$</span> that Bill gets accepted, and also an approximate expression for <span class="math-container">$q$</span>.</p> <p>Is the probability that Bill gets accepted equal to</p> <p><span class="math-container">$$5 \cdot 0.02?$$</span></p> <p>Or do we also have to take something else into consideration?</p> Answer: <p>This is a binomial distribution with <span class="math-container">$p=.02$</span>, <span class="math-container">$n=112$</span>, and five successes required. So the simplest way to find the answer is to use a binomial calculator. 
For instance, <a href="https://stattrek.com/online-calculator/binomial.aspx" rel="nofollow noreferrer">https://stattrek.com/online-calculator/binomial.aspx</a> gives 7.49%</p> <p>If you want to do it by hand, you can take </p> <p><span class="math-container">$\sum_5^{112} \binom{112}{n}(.02)^n(.98)^{112-n}=1-\sum_0^4 \binom{112}{n}(.02)^n(.98)^{112-n}$</span></p> <p>You can also treat this as being approximated by a Poisson distribution with <span class="math-container">$\lambda = 112*.02=2.24$</span> and find the probability <span class="math-container">$x\geq5$</span>, which gives 7.68%, which is close to the exact answer of 7.49%.</p>
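Both figures quoted above can be reproduced with a few lines of standard-library Python, no binomial calculator needed:

```python
from math import comb, exp, factorial

n, p = 112, 0.02

# Exact: P(X >= 5) for X ~ Binomial(112, 0.02), via the complement
p_exact = 1 - sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(5))

# Poisson approximation with lambda = n * p = 2.24
lam = n * p
p_pois = 1 - sum(exp(-lam) * lam**j / factorial(j) for j in range(5))

print(round(p_exact, 4), round(p_pois, 4))  # about 0.0749 and 0.0769
```

The exact value rounds to $7.49\%$ and the Poisson approximation to roughly $7.7\%$, in line with the answer.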
https://math.stackexchange.com/questions/2929025/entrance-at-gymnasium
Question: <blockquote> <p>A point is chosen uniformly at random inside the triangle with vertices at <span class="math-container">$(0, 0), (0, 1)$</span> and <span class="math-container">$(1, 0)$</span>, meaning that the probability that the point lies in a certain region inside the triangle is proportional to the area of that region. Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be respectively the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> coordinates of the point and let <span class="math-container">$Z = \max{\{X, Y\} }$</span>. Compute the density of <span class="math-container">$Z$</span>.</p> </blockquote> <p>MY WORK: I drew the triangle out and its of height and base length <span class="math-container">$1$</span> so its area is <span class="math-container">$\frac{1}{2}$</span>.</p> <p>I know <span class="math-container">$W=\sqrt{X^2+Y^2}$</span> is a random variable from class.</p> <p>I begin like all the examples <span class="math-container">$P(Z \leq a)=P(\max{\{X,Y\}}\leq a)$</span> </p> <p>now, on the picture, if we draw the <span class="math-container">$x=y$</span> line I know that the area of the triangle where <span class="math-container">$y&gt;x$</span> is <span class="math-container">$\frac{1}{4}$</span>, and vice versa.</p> <p>MY ATTEMPT TO FOLLOW USER @Jsevillamol's HINT BELOW:</p> <p><span class="math-container">$$F_Z(a)=P(Z\leq z)=P(\max{\{X,Y\}} \leq a)$$</span></p> <p>If <span class="math-container">$a \leq \frac{1}{2}$</span> we see that the area of the region is a square, as noted by user @Jsevillamol below (thank you).</p> <p>If <span class="math-container">$a &gt; \frac{1}{2}$</span> then we see that the area of the region is a square with a corner cut out so I get</p> <p><span class="math-container">$$P(Z \leq a)=\begin{cases} \frac{a^2}{\frac{1}{2}} \quad 0&lt;a\leq\frac{1}{2} \\ \frac{a^2-\frac{(2a-1)^2}{2}}{\frac{1}{2}} \quad 1/2 &lt;a \leq 
1\end{cases}$$</span></p> <p>Thus we have to take a derivative so we get </p> <p><span class="math-container">$$f_Z:= \begin{cases} 4a \quad 0 &lt;a\leq \frac{1}{2} \\ 4-4a \quad \frac{1}{2}&lt;a\leq 1 \end{cases} $$</span></p> Answer: <p>Let us work through this example together.</p> <p>We want to find the relative size of the region of the triangle where <span class="math-container">$Z = \max(X,Y)$</span> is less than or equal to some <span class="math-container">$a$</span>.</p> <p>If <span class="math-container">$a\le 1/2$</span>, the region is a square, and its area is easy to compute.</p> <p>If however <span class="math-container">$a &gt; 1/2$</span>, the region is a square missing a corner which we need to take into account. This corner has the shape of an isosceles right triangle whose legs are <span class="math-container">$a-(1-a)=2a-1$</span> long. </p> <p>Doing the calculations carefully we arrive at:</p> <p><span class="math-container">$$ P(Z\le a) = \frac{\text{area where $Z\le a$ }}{\text{area of whole triangle}} = \begin{cases} \frac{a^2}{1/2} \text{ if $a \le 1/2$} \\ \frac{a^2 - \frac{(2a-1)^2}{2}}{1/2} \text{ if $a &gt; 1/2$} \end{cases} $$</span></p> <p>Can you finish the calculation and compute the PDF on your own?</p>
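The piecewise CDF can also be checked by simulation (a sketch; it samples uniformly from the triangle with the standard reflection trick, and the seed is fixed only for reproducibility):

```python
import random

random.seed(0)
N = 200_000
hits_04 = hits_075 = 0
for _ in range(N):
    x, y = random.random(), random.random()
    if x + y > 1:          # reflect points from the unit square into the triangle
        x, y = 1 - x, 1 - y
    z = max(x, y)
    hits_04 += z <= 0.4
    hits_075 += z <= 0.75

# Compare with F_Z(a) = 2a^2 for a <= 1/2 and 2a^2 - (2a-1)^2 for a > 1/2
print(hits_04 / N)   # should be close to 2 * 0.4**2 = 0.32
print(hits_075 / N)  # should be close to 2 * 0.75**2 - 0.5**2 = 0.875
```

Both empirical frequencies agree with the piecewise formula to within the Monte Carlo noise.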
https://math.stackexchange.com/questions/2932126/density-of-a-random-variable
Question: <p>A group of $2n$ boys is to be divided into two groups of $n$ boys . What is the probability that the two tallest boys are in different groups ? </p> <p>This is how I attempted it: </p> <p>The probability that the two boys are in same group can be obtained as follows:</p> <p>First we separate those two particular boys leaving us with $2n-2$ boys. We then form a group of $n$ boys not containing the two particular boys giving us a group $n-2$ boys in which the two boys can be accommodated. The probability of forming such groups is $\frac{\binom{2n-2}{n}}{\binom{2n}{n}}$. Thus the actual probability of forming the groups with the two boys in different groups is $1- \frac{\binom{2n-2}{n}}{\binom{2n}{n}}$ . However there seems a problem with this. Could you please point out where I was wrong ? </p> <p>Also , please note that I already know the correct solution to this problem. I just wanted to correct my mistake. </p> <p>Thanks for your help !</p> Answer: <p>Let us assume that the tallest boys are Andrew and Bruce. The configurations of this kind <span class="math-container">$$ (A\text{ together with }n-1\text{ other people })\quad (B\text{ together with }n-1\text{ other people }) $$</span> are <span class="math-container">$\binom{2n-2}{n-1}$</span> (it is enough to select Andrew's mates), while the configurations of this kind <span class="math-container">$$ (A,B\text{ together with }n-2\text{ other people })\quad (n\text{ other people }) $$</span> are <span class="math-container">$\binom{2n-2}{n-2}=\binom{2n-2}{n}$</span> (it is enough to select Andrew <em>and</em> Bruce's mates). The wanted probability is so</p> <p><span class="math-container">$$ \frac{\binom{2n-2}{n-1}}{\binom{2n-2}{n-1}+\binom{2n-2}{n}}=\frac{\binom{2n-2}{n-1}}{\binom{2n-1}{n-1}}=\color{red}{\frac{n}{2n-1}}. $$</span></p>
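The closed form $\frac{n}{2n-1}$ is easy to sanity-check by brute-force enumeration for small $n$ (a sketch; boys $0$ and $1$ play the roles of Andrew and Bruce, and the function name is mine):

```python
from fractions import Fraction
from itertools import combinations

def prob_tallest_split(n):
    """Pick group A of size n out of 2n boys; boys 0 and 1 are the two tallest.
    Return P(they end up in different groups) by enumerating all splits."""
    boys = range(2 * n)
    total = hits = 0
    for group_a in combinations(boys, n):
        total += 1
        hits += (0 in group_a) != (1 in group_a)  # exactly one of them in A
    return Fraction(hits, total)

print(prob_tallest_split(3))  # 3/5, i.e. n/(2n-1) for n = 3
```

For $n = 2, 3, 4$ the enumeration returns $2/3$, $3/5$, $4/7$, matching $\frac{n}{2n-1}$.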
https://math.stackexchange.com/questions/2936889/probability-of-dividing-boys-in-2-groups
Question: <p>Suppose we play a game where we roll a six-sided die. If a <span class="math-container">$4$</span>, <span class="math-container">$5$</span>, or <span class="math-container">$6$</span> is rolled, I get <span class="math-container">$1$</span> point. If a <span class="math-container">$1$</span>, <span class="math-container">$2$</span>, or <span class="math-container">$3$</span> is rolled, you get <span class="math-container">$1$</span> point. We play first to <span class="math-container">$5$</span>, but the winner must also win by <span class="math-container">$2$</span>. If we are tied at <span class="math-container">$5$</span>, what’s the probability that you win the game?</p> <p>My approach has been that for "you" to win <span class="math-container">$2$</span> consecutive games, the probability is <span class="math-container">$\frac{1}{2}*\frac{1}{2} = \frac{1}{4}$</span>. </p> <p>I'm not sure of it though. Is this approach correct?</p> Answer: <p>Well, sure one could take that approach, and consider all of the edge cases, and take large infinite sums over all of them, or one could make the following observation. (An observation that could make life a bit easier is to realize that this dice game is equivalent to a coin flipping game, but even so, the infinite sum still isn't so fun.)</p> <p>The game is symmetrical for you and your opponent. Hence, your probability of winning is as good as your opponent's, i.e., you have a <span class="math-container">$\color{red}{50\%}$</span> chance of winning.</p>
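For completeness, the "win by 2 from a tie" situation can also be solved with a one-step recursion instead of an infinite sum: from a tie you win the next two points with probability $p^2$, lose them with probability $q^2$, and otherwise return to a tied state, which gives $P = p^2/(p^2+q^2)$. A minimal sketch (the function name is mine):

```python
def deuce_win(p):
    """P(you win a 'win by 2' tiebreak when you win each point with prob p).
    Condition on the next two points: win both (p^2), lose both (q^2),
    or split them and return to the tied state, so P = p^2 / (p^2 + q^2)."""
    q = 1 - p
    return p**2 / (p**2 + q**2)

print(deuce_win(0.5))  # 0.5, agreeing with the symmetry argument
```

With a fair die ($p = 1/2$) this recovers the $50\%$ answer, and it also handles biased versions, e.g. $p = 2/3$ gives $0.8$.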
https://math.stackexchange.com/questions/2937271/probability-of-winning-die-game
Question: <p>Let (Ω, <span class="math-container">$\mathcal{F}$</span>, <span class="math-container">$\mathbb{P}$</span>) be a probability space and, for each t ∈ [0, 1], let <span class="math-container">$X_t$</span> be a random variable on (Ω, F, P). For <span class="math-container">$\omega \in \Omega$</span></p> <p>Y(<span class="math-container">$\omega$</span>) := sup <span class="math-container">$X_t(\omega)$</span> with <span class="math-container">$t \in [0,1]$</span></p> <p>Is Y a random variable on (Ω, <span class="math-container">$\mathcal{F}$</span>, <span class="math-container">$\mathbb{P}$</span>)?</p> <p>I know it is a r.v. when it is unbounded but why in this situation too?</p> Answer: <p>You want to check <span class="math-container">$Y$</span> is a measurable function, right?</p> <p>There's a theorem saying that if a family of functions <span class="math-container">$f_t(x)$</span> is measurable, then so is <span class="math-container">$\sup_t f_t(x)$</span>. So if you consider <span class="math-container">$\omega$</span> as <span class="math-container">$x$</span> here, this theorem applies.</p>
https://math.stackexchange.com/questions/2938028/lim-sup-as-a-random-variable-in-a-bounded-interval
Question: <p>I am helping a friend with a study guide for a class, and one of the problems is asking about the theoretical mean and standard deviation. Two 8-sided dice with equal probabilities for 1, 2, 3, 4, 5, 6, 7, and 8, are rolled, and the sum of the two dice are recorded. </p> <p>So I have a dataset that is the sum of the 8-sided dice rolled 10000 times. That is, I have a dataset of 10000 values between 2 and 16. This is what I did:</p> <p>Mean: <span class="math-container">$\mu = \frac{\sum(x_i)}{N} = \frac{\sum_{x = 2}^{16}(\frac{10000}{15})(x)}{10000}$</span> = 9</p> <p>Standard deviation: <span class="math-container">$\sigma = \sqrt{\frac{\sum(x_i-\mu)^2}{N}} = \sqrt{\frac{\sum_{x = 2}^{16}(\frac{10000}{15})(x-9)^2}{10000}} \approx$</span> 4.32</p> <p>My friend's study guide has the numerical answers to all the questions, but they are not in order. I do not see 4.32 on this list of answers so I was wondering if I did it wrong.</p> <p>Any help would be appreciated!</p> Answer: <p>You have assumed that each sum from <span class="math-container">$2$</span> through <span class="math-container">$16$</span> is equally probable, so each one shows up <span class="math-container">$\frac 1{15}$</span> of the time. That is not true. There are <span class="math-container">$64$</span> different rolls and a sum of <span class="math-container">$9$</span> shows up <span class="math-container">$8$</span> times so has a chance of <span class="math-container">$\frac 18$</span> while <span class="math-container">$2$</span> and <span class="math-container">$16$</span> each only show up <span class="math-container">$\frac 1{64}$</span> of the time. Because of the symmetry your calculation of the mean came out, even though you calculated it incorrectly. The variance is smaller than you calculate. 
</p> <p>What you should do is <span class="math-container">$\sum_{x=2}^{16}P(x)(x-9)^2$</span> where <span class="math-container">$P(x)$</span> is the probability of rolling a sum of <span class="math-container">$x$</span>. This is <span class="math-container">$$\sqrt{\frac {1(2-9)^2+2(3-9)^2+3(4-9)^2+\ldots+8(9-9)^2+7(10-9)^2+\ldots+1(16-9)^2}{64}}\approx 3.24$$</span></p>
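The mean of $9$ and standard deviation of about $3.24$ can be reproduced by enumerating all $64$ equally likely rolls directly:

```python
from itertools import product
from math import sqrt

# All 64 equally likely ordered rolls of two 8-sided dice
sums = [a + b for a, b in product(range(1, 9), repeat=2)]
mu = sum(sums) / len(sums)
var = sum((s - mu) ** 2 for s in sums) / len(sums)
sigma = sqrt(var)
print(mu, sigma)  # prints 9.0 and about 3.2404
```

The variance comes out as exactly $10.5$ (twice the single-die variance $63/12$), so $\sigma = \sqrt{10.5} \approx 3.24$.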
https://math.stackexchange.com/questions/2941691/standard-deviation-of-two-8-sided-dice-rolled-10000-times
Question: <p>n white and k black balls are randomly and independently distributed amongst m boxes. There is no limit to the number of balls a box can contain.</p> <p>As a result, there are four possible states for each box:</p> <ol> <li>Empty</li> <li>black only</li> <li>white only</li> <li>black and white</li> </ol> <p>What is the expected fraction of boxes in each state given n, m and k?</p> <p>Obviously the expected distribution of states depend on the expected frequency of boxes containing at least one black or one white ball, but I do not even know how to calculate that given that there are no restrictions on the number of balls in a box. </p> <p>I had calculated this whithout taking into account the possibility that multiple balls with the same colour would end up in the same box. In that case it would simply be:</p> <ol> <li>1 - α - β + α∗β</li> <li>α − α∗β</li> <li>β − α∗β</li> <li>α∗β</li> </ol> <p>with α = m/k and β = m/n. But this is obviously wrong if there is no limitation on the number of balls per box.</p> Answer: <p>It is important to be clear about the process for distributing balls between boxes. Just clearing this up can help you think about how to work out the probabilities.</p> <p>Let's assume the procedure for assigning balls to boxes "randomly and independently" is this:</p> <ul> <li>Consider each of the <span class="math-container">$(n + k)$</span> balls, one at a time.</li> <li>For each ball, select 1 of the <span class="math-container">$m$</span> boxes uniformly at random (regardless of its contents).</li> <li>Place the ball in the box.</li> </ul> <p><strong>Case 1:</strong> What is the probability that a particular box is empty?</p> <p>To be empty the box had to <strong>not</strong> be selected <span class="math-container">$(n + k)$</span> times in the procedure above. 
The probability of this box not being selected once is <span class="math-container">$$\frac{m-1}{m}$$</span> Think of this as the probability that this box got missed, or equivalently that a different box got selected. The probability of this box not being selected <span class="math-container">$(n+k)$</span> times in a row is <span class="math-container">$$\left(\frac{m-1}{m}\right)^{n+k}$$</span></p> <p><strong>Case 2:</strong> What is the probability that a particular box contains only black balls?</p> <p>To have at least one black ball and no white balls we require both that all white balls missed the box, and that not all black balls missed the box. This gives the product below where the first factor is the probability that all white balls miss the box and the second factor is the probability that not all black balls miss the box.</p> <p><span class="math-container">$$\left(\frac{m-1}{m}\right)^{n}\left(1-\left(\frac{m-1}{m}\right)^{k}\right)$$</span></p> <p><strong>Case 3:</strong> What is the probability that a particular box contains only white balls?</p> <p>The reasoning here is the same as in Case 2, but with "black" and "white" swapped. Not all white balls can miss this box, but all black balls need to miss the box.</p> <p><span class="math-container">$$\left(1-\left(\frac{m-1}{m}\right)^{n}\right)\left(\frac{m-1}{m}\right)^{k}$$</span></p> <p><strong>Case 4:</strong> What is the probability that a particular box contains black and white balls?</p> <p>Here we need the probability that not all white balls can miss this box, and also that not all black balls miss the box. 
Then the box would contain at least 1 white and at least 1 black ball.</p> <p><span class="math-container">$$\left(1-\left(\frac{m-1}{m}\right)^{n}\right)\left(1-\left(\frac{m-1}{m}\right)^{k}\right)$$</span></p> <p><strong>Summary:</strong> The 4 cases above are mutually exclusive and their probability sum to 1.</p> <p>Since the expected <strong>fraction</strong> of boxes in each state is the same as the probability of a box being in each of the possible states, the four probabilities above answer the question.</p> <p>To get the expected <strong>number</strong> of boxes in each state you would need to multiply each probability by <span class="math-container">$m$</span>, the number of boxes.</p>
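The four cases can be wrapped up in a short helper (a sketch; the function name is mine), which also makes the sum-to-one check from the summary immediate:

```python
def box_state_probs(n, k, m):
    """Expected fraction of boxes that are empty / black-only / white-only / mixed
    when n white and k black balls land independently and uniformly in m boxes."""
    miss_white = ((m - 1) / m) ** n   # all n white balls miss a given box
    miss_black = ((m - 1) / m) ** k   # all k black balls miss it
    return {
        "empty":      miss_white * miss_black,
        "black_only": miss_white * (1 - miss_black),
        "white_only": (1 - miss_white) * miss_black,
        "mixed":      (1 - miss_white) * (1 - miss_black),
    }

print(box_state_probs(1, 1, 2))  # each of the four states has probability 0.25
```

Multiplying each entry by $m$ gives the expected number of boxes in each state.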
https://math.stackexchange.com/questions/2949935/n-white-and-k-black-balls-in-m-boxes-probability-of-co-occurence
Question: <p>Let <span class="math-container">$\theta_n$</span> be a random variable that can be <span class="math-container">$\{\frac{1}{n},\frac{2}{n},...,\frac{n}{n}\}$</span> with equal probability <span class="math-container">$\frac{1}{n}$</span>. My question is where does <span class="math-container">$\theta_n$</span> converge in distribution? My guess is <span class="math-container">$\theta=U[0,1]$</span> where <span class="math-container">$U$</span> is the uniform distribution. </p> <p>How can I show that?</p> <p>I tried to sketch a proof like this: </p> <p>We know that <span class="math-container">$P(\theta_n\leq\frac{i_n}{n})=\frac{i_n}{n}$</span>, for a sequence <span class="math-container">$i_n=xn$</span> we have that <span class="math-container">$P(\theta_n\leq x)=x,\forall n$</span>, this is the same distribution of the CDF of a uniform distribution, hence <span class="math-container">$\theta_n \rightarrow_d\theta \sim U[0,1]$</span>.</p> <p>Thank you all!</p> Answer: <p>You can't restrict attention to the nice cases, you have to handle arbitrary <span class="math-container">$x \in \mathbb{R}$</span>. To do that, note that for <span class="math-container">$x \in [0,1]$</span>:</p> <p><span class="math-container">$$P(\theta_n \leq x)=\frac{1}{n} |\{ k \in \mathbb{N} : k/n \leq x \}| \\ = \frac{1}{n} |\{ k \in \mathbb{N} : k \leq nx \}|.$$</span></p> <p>How many elements are in that set when <span class="math-container">$n$</span> is finite?</p> <p>The case <span class="math-container">$x \not \in [0,1]$</span> is easy of course.</p>
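To make the hint concrete: for finite $n$ the set has $\lfloor nx \rfloor$ elements, so $P(\theta_n \le x) = \lfloor nx \rfloor / n$, which differs from $x$ by less than $1/n$ and hence converges to the $U[0,1]$ CDF at every point. A small numeric illustration (the function name is mine):

```python
import math

def cdf_theta_n(x, n):
    """P(theta_n <= x) for theta_n uniform on {1/n, 2/n, ..., n/n}."""
    if x < 0:
        return 0.0
    if x >= 1:
        return 1.0
    return math.floor(n * x) / n
```

For any $x \in [0,1]$, `cdf_theta_n(x, n)` is within $1/n$ of $x$, so the CDFs converge pointwise to that of $U[0,1]$.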
https://math.stackexchange.com/questions/2952037/let-theta-n-be-a-random-variable-that-can-be-frac1n-frac2n
Question: <blockquote> <p>What numbers in the interval <span class="math-container">$[0,1]$</span> can be generated by tossing a fair coin? By generating a number using a coin, we mean finding an event that its probability is the given number.</p> </blockquote> <p>I think that any number in <span class="math-container">$[0,1]$</span> can be generated by tossing a fair coin for an infinite number of times because we can generate the binary expansion. And by generating, I mean finding an event that gives the desired probability. </p> <p>So, it seems that if tossing a coin for an infinite number is allowed, the problem's done. However, what if we disallowed tossing for infinitely many times? Then I think only those numbers whose denominator are a power of <span class="math-container">$2$</span> can be expressed. Others cannot be expressed. But I am not sure. Any help is appreciated.</p> Answer: <p>Write <span class="math-container">$0.$</span> and now start tossing the coin. Write <span class="math-container">$1$</span> if it comes up heads and <span class="math-container">$0$</span> for tails. You will gradually spell out a binary representation of a number in the range <span class="math-container">$[0, 1]$</span>. As you say, if you stop after a finite number of throws then it will represent <span class="math-container">$\frac{m}{2^n}$</span> but you could get an event of probability <span class="math-container">$\frac{1}{3}$</span> by comparing against <span class="math-container">$0.01010101...$</span>. Stop when you get a value definitely above or below this. If you are incredibly unlucky you might never stop.</p>
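The stopping procedure in the answer can be sketched in code (an assumption: `p_bit(i)` returns the $i$-th binary digit of the target probability, so for $p = 1/3 = 0.0101\ldots_2$ the digits alternate $0,1,0,1,\ldots$). The first toss that disagrees with the expansion decides whether the tossed number fell below $p$:

```python
import random

def event_with_prob(p_bit, rng):
    """Toss fair bits until one differs from the binary expansion of p;
    the tossed number is below p exactly when the coin shows 0 where
    p's digit is 1. Returns True with probability p."""
    i = 0
    while True:
        coin = rng.getrandbits(1)
        if coin != p_bit(i):
            return coin < p_bit(i)
        i += 1

rng = random.Random(42)          # fixed seed, only for reproducibility
third = lambda i: i % 2          # binary digits of 1/3: 0, 1, 0, 1, ...
freq = sum(event_with_prob(third, rng) for _ in range(100_000)) / 100_000
print(freq)  # close to 1/3
```

The loop terminates with probability $1$ (each toss disagrees with probability $1/2$), though, as the answer notes, an incredibly unlucky run could go on arbitrarily long.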
https://math.stackexchange.com/questions/2953555/what-numbers-in-0-1-can-be-generated-by-tossing-a-fair-coin
Question: <p>Is there a nice rule describing how the values of <span class="math-container">$P(C|B)$</span> and <span class="math-container">$P(B|A)$</span> jointly constrain <span class="math-container">$P(C|A)$</span>? In particular, if I know that both <span class="math-container">$P(C|B)$</span> and <span class="math-container">$P(B|A)$</span> are above some threshold value <span class="math-container">$t$</span>, what does that tell us about <span class="math-container">$P(C|A)$</span>?</p> Answer: <p>You cannot conclude anything about <span class="math-container">$P(C|A)$</span>, it can still be <span class="math-container">$0$</span>. For example, if I am rolling a <span class="math-container">$6$</span> sided die and the events <span class="math-container">$A,B,C$</span> are</p> <ul> <li><span class="math-container">$A$</span>... I roll an odd number</li> <li><span class="math-container">$B$</span>... I roll a number between <span class="math-container">$1$</span> and <span class="math-container">$6$</span></li> <li><span class="math-container">$C$</span>... I roll an even number</li> </ul> <p>Then <span class="math-container">$P(C|B)=\frac12, P(B|A)=1$</span> (they are above the threshold <span class="math-container">$t=\frac12$</span>), however <span class="math-container">$P(C|A)=0$</span>.</p> <p>There is also no upper bound, since you can replace <span class="math-container">$A$</span> above with "rolling an even number", and <span class="math-container">$P(C|A)$</span> is then <span class="math-container">$1$</span>.</p>
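The die counterexample can be checked mechanically by treating each event as a set of outcomes under a uniform roll (a sketch; the helper name is mine):

```python
from fractions import Fraction

die = set(range(1, 7))
A = {x for x in die if x % 2 == 1}   # roll an odd number
B = die                              # roll a number between 1 and 6
C = {x for x in die if x % 2 == 0}   # roll an even number

def cond(X, Y):
    """P(X | Y) for a uniform roll of the die."""
    return Fraction(len(X & Y), len(Y))

print(cond(C, B), cond(B, A), cond(C, A))  # prints: 1/2 1 0
```

Both $P(C\mid B)$ and $P(B\mid A)$ clear the threshold $t = \frac12$, yet $P(C\mid A) = 0$, exactly as claimed.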
https://math.stackexchange.com/questions/2957640/transitive-conditional-probability-constraints
Question: <p>How can I solve this one? Ten cats are on a chessboard; every cat is on one of the squares, and cats may share a square or sit on different squares. Every turn each cat jumps to an adjacent square with equal probability. They won't jump outside the board. So in a corner a cat can jump to three squares, each jump happening with probability <span class="math-container">$1/3$</span>, and in the center of the board each jump happens with probability <span class="math-container">$1/8$</span>. How many turns does it take on average until all the cats are on the same square?</p> Answer: <p>There are three different types of squares: </p> <p>1) <span class="math-container">$6\times6=36$</span> central squares, each reachable from 8 different directions. Denote the probability of a cat taking a central square with <span class="math-container">$p_1$</span>.</p> <p>2) <span class="math-container">$4\times6=24$</span> edge squares, each reachable from 5 different directions. The probability of a cat taking an edge square is <span class="math-container">$p_2=\frac58p_1.$</span></p> <p>3) <span class="math-container">$4$</span> corner squares, each reachable from 3 different directions. 
The probability of a cat taking a corner square is <span class="math-container">$p_3=\frac38p_1$</span>.</p> <p>Sum of all proabilities has to be equal to 1:</p> <p><span class="math-container">$$36p_1+24\times\frac58p_1+4\times\frac38p_1=1$$</span></p> <p>...which implies:</p> <p><span class="math-container">$$p_1=\frac{2}{105},\quad p_2=\frac{1}{84}, \quad p_3=\frac{1}{140}$$</span></p> <p>The probability that 10 cats will end up on the same square is:</p> <p><span class="math-container">$$P=36p_1^{10}+24p_2^{10}+4p_3^{10}$$</span></p> <p>Expected number of turns <span class="math-container">$E$</span> is:</p> <p><span class="math-container">$$E=\frac{1}{P}=\frac{1}{36p_1^{10}+24p_2^{10}+4p_3^{10}}=\frac{1}{36\left(\frac{2}{105}\right)^{10}+24\left(\frac{1}{84}\right)^{10}+4\left(\frac{1}{140}\right)^{10}}=4.392\times10^{15}$$</span></p> <p>Assuming that cat needs one second to do the jump, we'll have to wait about 140 million years before all cats jump to the same square at the same time :)</p> <p><strong>EDIT:</strong> To confirm probabilities <span class="math-container">$p_1$</span>, <span class="math-container">$p_2$</span> and <span class="math-container">$p_3$</span> I have made a simulation with a single cat making one billion random jumps on a chessboard. 
Here is the number of visits the cat made to each square:</p> <pre><code> 7143004 11912628 11903356 11905677 11900317 11904718 11910367  7145275
11907889 19060966 19055158 19043601 19046747 19047165 19046961 11914199
11912438 19052604 19048672 19048660 19047571 19058482 19049581 11903917
11906619 19048517 19047432 19055252 19053011 19045127 19044827 11897243
11900140 19042740 19047619 19048125 19040301 19042256 19043828 11901244
11903306 19038343 19042442 19045611 19044110 19053048 19048414 11907292
11901422 19044287 19039872 19045175 19046148 19049024 19048627 11904469
 7139570 11907338 11901830 11895860 11905404 11904611 11907035  7142529 </code></pre> <p>The above values agree with the calculated values beautifully.</p> <p><strong>EDIT 2:</strong> I was curious to see what happens if the cats' movements are restricted so that only horizontal or vertical jumps are allowed (no diagonal moves at all). My gut feeling was that the cats would meet faster if you restrict their movements but I was wrong (if my calculations were correct, of course). "Meeting time" almost doubled to 280 million years :)</p>
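The probabilities $p_1, p_2, p_3$ are the stationary distribution of the jump chain, which is proportional to each square's number of reachable neighbours (total degree $36\cdot8 + 24\cdot5 + 4\cdot3 = 420$). A short exact check of the figures above:

```python
from fractions import Fraction

# A cat's long-run position follows the stationary distribution of its jump
# chain, proportional to each square's degree (number of reachable neighbours).
def degree(r, c):
    return sum(1 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0) and 0 <= r + dr < 8 and 0 <= c + dc < 8)

total = sum(degree(r, c) for r in range(8) for c in range(8))   # 420
p1, p2, p3 = Fraction(8, total), Fraction(5, total), Fraction(3, total)

P = 36 * p1**10 + 24 * p2**10 + 4 * p3**10   # all 10 cats on one square
E = 1 / P
print(p1, p2, p3, float(E))  # 2/105 1/84 1/140 and about 4.39e15 turns
```

Note this estimates the waiting time by treating the ten positions as independent draws from the stationary distribution each turn, the same simplification the answer makes.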
https://math.stackexchange.com/questions/2957812/how-many-turns-it-takes-to-cats-in-a-chessboard-jumps-to-the-same-square
Question: <p>Say that Amy tosses a coin 6 times, and Bob tosses a coin 5 times. What's the probability that Amy gets more heads than Bob does? </p> Answer:
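No answer was posted, so here is a hedged sketch of a brute-force computation. The classical symmetry argument says the answer is exactly $\frac12$ whenever Amy has exactly one more toss than Bob: either Amy has strictly more heads than Bob or Amy has strictly more tails than Bob, and exactly one of the two always occurs. The enumeration below confirms it for $6$ vs $5$ tosses:

```python
from fractions import Fraction
from math import comb

def p_more_heads(n_amy, n_bob):
    """Exact P(Amy's heads > Bob's heads) for fair coins, by enumeration."""
    total = Fraction(0)
    for a in range(n_amy + 1):
        for b in range(n_bob + 1):
            if a > b:
                total += Fraction(comb(n_amy, a) * comb(n_bob, b),
                                  2 ** (n_amy + n_bob))
    return total

print(p_more_heads(6, 5))  # 1/2
```

The same code returns exactly $\frac12$ for any pair $(n+1, n)$, in line with the symmetry argument.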
https://math.stackexchange.com/questions/2965489/probability-that-amy-gets-more-heads-than-bob
Question: <p>We are playing with two dice until we get <span class="math-container">$10$</span> as the sum of the results two times in a row.</p> <p>(i) What is the probability of having gotten this in <span class="math-container">$8$</span> throws?</p> <p>(ii) What's the probability of having thrown a sum less than <span class="math-container">$10$</span> exactly <span class="math-container">$8$</span> times before we stopped?</p> <p>My attempt:</p> <p>i) The ordered pairs that add up to <span class="math-container">$10$</span> are <span class="math-container">$(4,6), (6,4)$</span> and <span class="math-container">$(5,5)$</span> that appear on the <span class="math-container">$7$</span>th and <span class="math-container">$8$</span>th throw and therefore its probability is <span class="math-container">$3/36$</span>. I don't know whether I tackled the problem correctly. </p> <p>The second one is a little bit confusing. Any help please. </p> Answer: <p>(i) What is the probability of having gotten this in 8 throws?</p> <p>Solution: You didn't consider all the possible events. In order to calculate the required probability, you have to consider the throws before <span class="math-container">$7th$</span> for not getting the sum of <span class="math-container">$10$</span> in consecutive throws. 
</p> <p>Since you've got the sum in the <span class="math-container">$7$</span>th &amp; <span class="math-container">$8$</span>th throws, in the <span class="math-container">$6$</span>th throw you have a constraint of not getting a sum of <span class="math-container">$10$</span>, but all throws from <span class="math-container">$1$</span> to <span class="math-container">$5$</span> may or may not give a sum of <span class="math-container">$10$</span> (just not consecutively).</p> <p>Let <span class="math-container">$p$</span> = prob of getting a sum of <span class="math-container">$10$</span>.</p> <p><span class="math-container">$p$</span> = <span class="math-container">$3/36$</span></p> <p>and <span class="math-container">$\bar p$</span> = prob of <span class="math-container">$NOT$</span> getting a sum of <span class="math-container">$10$</span>. </p> <p><span class="math-container">$\bar p$</span> = <span class="math-container">$33/36$</span></p> <p><span class="math-container">$p+\bar p= 1$</span></p> <p>Now there are two possibilities: </p> <p>(i) <span class="math-container">$1,~~~3,~~~5~~$</span> have the option of getting a sum of <span class="math-container">$10$</span> or not, but <span class="math-container">$2,4$</span> are constrained not to give a sum of <span class="math-container">$10$</span>.</p> <p><span class="math-container">$\color{red}{1} ~~~2 ~~~\color{red}{3}~~~4~~~\color{red}{5}~~~6~~~7~~~8$</span></p> <p>probability = <span class="math-container">$(p+\bar p)*\bar p*(p+\bar p)*\bar p*(p+\bar p)*\bar p*p*p = p^2 \bar p^3$</span> </p> <p>(ii) <span class="math-container">$~~1~~~\color{red}{2}~~~3~~~\color{red}{4}~~~5~~~6~~~7~~~8$</span></p> <p>probability = <span class="math-container">$\bar p*(p+\bar p)*\bar p*(p+\bar p)*\bar p*\bar p*p*p = p^2 \bar p^4$</span></p> <p>Therefore the required probability = <span class="math-container">$p^2 \bar p^3 +p^2 \bar p^4 = ~p^2 \bar p^3 (1+\bar p) $</span></p> <p>PS: long but easy to understand :) .</p>
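Case analyses of this kind are easy to cross-check with a small recursion: let $a(n)$ be the probability that $n$ throws contain no two consecutive sums of $10$; stopping exactly at throw 8 then requires throws 7 and 8 to be $10$, throw 6 not to be, and throws 1 to 5 to be free of consecutive pairs. A sketch (the recursion and helper names are my own, not from the answer above):

```python
from fractions import Fraction

p = Fraction(3, 36)   # P(sum of two dice is 10)
q = 1 - p             # P(sum is not 10)

def no_consecutive(n: int) -> Fraction:
    """P(a sequence of n throws contains no two consecutive sums of 10)."""
    if n < 1:
        return Fraction(1)
    a, b = Fraction(1), Fraction(1)   # a(0) = a(1) = 1
    for _ in range(n - 1):
        # a(k+1) = q*a(k) + p*q*a(k-1): last throw is not a 10, or it is a 10
        # preceded by a non-10
        a, b = b, q * b + p * q * a
    return b

# stop exactly at throw 8: throws 1-5 clean, throw 6 not a 10, throws 7-8 both 10
prob = no_consecutive(5) * q * p**2
print(prob, float(prob))
```

For $p = 1/12$ this gives an exact rational value that can be compared against the case analysis above.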
https://math.stackexchange.com/questions/2965715/what-is-the-probability-of-obtaining-a-sum-of-10-in-two-consecutive-throws-of
Question: <p>If <span class="math-container">$\exists k&gt;1$</span> odd such that <span class="math-container">$E(X-E(X))^k=0$</span>, then is X symmetric?</p> <p>I know that the converse is true, in fact, if X is symmetric, then all odd moments will be zero.</p> Answer: <p>You can find a real-valued random variable <span class="math-container">$X$</span> that is not symmetric (I am guessing that it means <span class="math-container">$X-a$</span> and <span class="math-container">$a-X$</span> are identically distributed for some constant <span class="math-container">$a$</span>) and, for a given odd integer <span class="math-container">$k&gt;1$</span>, <span class="math-container">$$\mathbb{E}\left[\big(X-\mathbb{E}[X]\big)^k\right]=0\,.$$</span> Fix an odd integer <span class="math-container">$k&gt;1$</span>. Take for example a random variable <span class="math-container">$X$</span> with three possible values <span class="math-container">$-2$</span>, <span class="math-container">$1$</span>, and <span class="math-container">$3$</span> such that <span class="math-container">$$\mathbb{P}[X=-2]=\frac{3^k-3}{3^{k+1}-2^{k+1}-5}\,,$$</span> <span class="math-container">$$\mathbb{P}[X=1]=\frac{2\cdot 3^k-3\cdot2^k}{3^{k+1}-2^{k+1}-5}\,,$$</span> and <span class="math-container">$$\mathbb{P}[X=3]=\frac{2^k-2}{3^{k+1}-2^{k+1}-5}\,.$$</span> Then, <span class="math-container">$$\mathbb{E}[X]=0\text{ and }\mathbb{E}\left[X^k\right]=\mathbb{E}\left[\big(X-\mathbb{E}[X]\big)^k\right]=0\,.$$</span> Nonetheless, I think this should be true: if <span class="math-container">$X$</span> is an integrable real-valued random variable and <span class="math-container">$\mathbb{E}\left[\big(X-\mathbb{E}[X]\big)^k\right]=0$</span> for every odd integer <span class="math-container">$k&gt;1$</span>, then <span class="math-container">$X$</span> is symmetric about <span class="math-container">$\mathbb{E}[X]$</span>. </p> <hr> <p>My claim in the paragraph above indeed holds. 
Without loss of generality, suppose that <span class="math-container">$\mathbb{E}[X]=0$</span>. Let <span class="math-container">$\varphi_X$</span> denote the characteristic function of <span class="math-container">$X$</span>, namely, <span class="math-container">$\varphi_X(t)=\mathbb{E}\big[\exp(\text{i}tX)\big]$</span> for all real numbers <span class="math-container">$t$</span>. Thus, <span class="math-container">$$\varphi_X(t)=\int_{-\infty}^{+\infty}\,\exp(\text{i}tx)\,\text{d}P_X(x)\,,$$</span> where <span class="math-container">$P_X$</span> is the probability measure of <span class="math-container">$X$</span>. Observe that <span class="math-container">$\varphi_X$</span> is an even function. Let <span class="math-container">$E$</span> be a Lebesgue measurable subset of <span class="math-container">$\mathbb{R}$</span> and <span class="math-container">$\chi_E$</span> denotes the indicator function of <span class="math-container">$E$</span>. Then, we see that <span class="math-container">$$P_X(E)=\frac{1}{2\pi}\,\int_{-\infty}^{+\infty}\,\int_{-\infty}^{+\infty}\,\exp(-\text{i}ts)\,\varphi_X(t)\,\chi_E(s)\,\text{d}s\,\text{d}t\,.$$</span> Since <span class="math-container">$\varphi_X$</span> is even, it follows that <span class="math-container">$$P_X(-E)=P_X(+E)$$</span> for any measurable subset <span class="math-container">$E$</span> of <span class="math-container">$\mathbb{R}$</span>. Consequently, <span class="math-container">$X$</span> is symmetric about <span class="math-container">$0$</span>.</p>
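The three-point counterexample above can be checked mechanically with exact rational arithmetic; a sketch, for a few odd values of $k$ (the helper name `check` is my own):

```python
from fractions import Fraction

def check(k: int) -> None:
    """Verify the three-point counterexample for a given odd k > 1:
    probabilities sum to 1, E[X] = 0, and E[X^k] = 0."""
    d = 3**(k + 1) - 2**(k + 1) - 5
    probs = {-2: Fraction(3**k - 3, d),
              1: Fraction(2 * 3**k - 3 * 2**k, d),
              3: Fraction(2**k - 2, d)}
    assert sum(probs.values()) == 1
    assert sum(x * pr for x, pr in probs.items()) == 0          # E[X] = 0
    assert sum(x**k * pr for x, pr in probs.items()) == 0       # E[X^k] = 0

for k in (3, 5, 7):
    check(k)
print("counterexample verified for k = 3, 5, 7")
```

For $k=3$, for instance, the three probabilities come out as $2/5$, $1/2$, and $1/10$, and the distribution is visibly not symmetric about its mean $0$.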
https://math.stackexchange.com/questions/2967381/if-exists-k1-odd-such-that-ex-exk-0-then-is-x-symmetric
Question: <p>A and B draw coins in turn without replacement from a bag containing <span class="math-container">$3$</span> dimes and <span class="math-container">$4$</span> nickels. A draws first. It is known that A drew the first dime. Find the probability that A drew it on the first draw.</p> <p>I know that the probability of drawing the first dime on the first draw must be <span class="math-container">$\frac{3}{7}$</span>. Is this the correct answer?</p> Answer: <p>P<span class="math-container">$[$</span>A draws dime on first draw<span class="math-container">$|$</span> A draws first dime<span class="math-container">$]=\dfrac{P(\mbox{A draws dime on the first draw })}{P(\mbox{A draws first dime})}$</span></p> <p>So, <span class="math-container">$$P(\mbox{A draws dime on first draw})=\dfrac37$$</span></p> <p>Since there are only <span class="math-container">$3$</span> dimes, in order for <span class="math-container">$A$</span> to draw the first dime, this must happen on <span class="math-container">$A$</span>'s first, second, or third draw. Thus, we need<span class="math-container">$$P(\mbox{A draws first dime})=P(\mbox{A draws dime on first draw})+P(\mbox{A draws first dime on second draw})+P(\mbox{A draws first dime on third draw.})$$</span></p> <p><span class="math-container">$$P(\mbox{A draws first dime on second draw})=\dfrac47\cdot\dfrac36\cdot\dfrac35=\dfrac{6}{35}$$</span>Because <span class="math-container">$A$</span>'s first draw is one of the four non-dimes and <span class="math-container">$B$</span>'s first draw is one of the three remaining non-dimes after <span class="math-container">$A$</span>'s draw, and <span class="math-container">$A$</span>'s second draw is one of the three dimes among the five remaining coins. 
Similarly, <span class="math-container">$$P(\mbox{A draws first dime on the third draw})=\dfrac47\cdot\dfrac36\cdot\dfrac25\cdot\dfrac14=\dfrac{1}{35}$$</span></p> <p>Then, <span class="math-container">$$P(\mbox{A draws first dime})=\dfrac37+\dfrac{6}{35}+\dfrac{1}{35}=\dfrac{22}{35}$$</span></p> <p><span class="math-container">$$P(\mbox{A draws dime on first draw}|\mbox{A draws first dime})=\dfrac{\dfrac37}{\dfrac{22}{35}}=\dfrac{15}{22}$$</span></p>
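The final $15/22$ can be confirmed by brute-force enumeration of all draw orders (a sketch; each of the $7!$ orderings, with duplicate coins counted separately, is equally likely, and A draws at positions 1, 3, 5, 7):

```python
from fractions import Fraction
from itertools import permutations

coins = ['D', 'D', 'D', 'N', 'N', 'N', 'N']
a_first_dime = 0      # first dime lands on one of A's draws
on_first_draw = 0     # first dime is A's very first draw

for order in permutations(coins):
    i = order.index('D')          # position of the first dime, 0-based
    if i % 2 == 0:                # A draws at 0-based positions 0, 2, 4, 6
        a_first_dime += 1
        if i == 0:
            on_first_draw += 1

print(Fraction(on_first_draw, a_first_dime))  # 15/22
```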
https://math.stackexchange.com/questions/2969428/find-the-probability-that-a-drew-it-on-the-first-draw
Question: <p>I'm not sure what field of math this is; I'm just interested in mathematics in daily life. Here's a question that kept me thinking:</p> <p>Let's say that for a range between <span class="math-container">$1$</span> and <span class="math-container">$200$</span>, I randomly pick a number, for example <span class="math-container">$63$</span>; then all numbers less than or equal to <span class="math-container">$63$</span> are discarded, and on the next iteration I pick from <span class="math-container">$64$</span> to <span class="math-container">$200$</span>. What is the probability that I will arrive at <span class="math-container">$200$</span> in some number of picks?</p> <p>E.g. what's the probability I will pick <span class="math-container">$200$</span> after <span class="math-container">$5$</span> picks?</p> Answer: <p>This is a nice question. Let <span class="math-container">$X=\{\text{number of picks needed to get to }200\}$</span>. The first question is how many possible outcomes there are. </p> <p>Well, if we play the game until we get to <span class="math-container">$200$</span>, then we could either pick <span class="math-container">$200$</span> on the first try (<span class="math-container">$1$</span> outcome), or pick <span class="math-container">$1$</span> number less than <span class="math-container">$200$</span> and then pick <span class="math-container">$200$</span> (<span class="math-container">$199$</span> outcomes), et cetera. </p> <p>If you need to pick <span class="math-container">$k$</span> numbers <span class="math-container">$$1\leq n_1&lt; n_2&lt;\cdots &lt; n_k&lt;200$$</span> before you pick <span class="math-container">$200$</span>, this is the same as choosing <span class="math-container">$k$</span> distinct numbers less than or equal to <span class="math-container">$199$</span>, so there are <span class="math-container">${199\choose k}$</span> possible games that end after <span class="math-container">$k+1$</span> picks. 
Since we have to win in at most <span class="math-container">$200$</span> turns, the total number of games is <span class="math-container">$$ 1+199+\cdots+{199\choose k}+\cdots +{199\choose 198}+1=2^{199},$$</span> where I used the binomial theorem. Then the probability that you arrive at <span class="math-container">$200$</span> in <span class="math-container">$k$</span> picks is the number of such outcomes divided by the total number of outcomes, so that <span class="math-container">$$P(X=k) =\frac{{199\choose k-1}}{2^{199}}.$$</span></p> <p>In particular, the probability that you arrive at <span class="math-container">$200$</span> in exactly <span class="math-container">$5$</span> picks is <span class="math-container">$$P(X=5) = \frac{{199\choose 4}}{2^{199}}\sim 7.88\times 10^{-53}.$$</span></p>
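The process itself is easy to simulate directly, which makes a useful sanity check on counting arguments like the one above (a sketch for a smaller range; `picks_to_reach` and the range size are my own choices). Note that the simulation samples the actual process, in which different pick sequences occur with different probabilities:

```python
import random

def picks_to_reach(n: int, rng: random.Random) -> int:
    """Run the game on 1..n: repeatedly pick uniformly from the remaining
    upper range until n itself is picked; return the number of picks."""
    low, count = 1, 0
    while True:
        pick = rng.randint(low, n)
        count += 1
        if pick == n:
            return count
        low = pick + 1

rng = random.Random(0)
trials = 100_000
first_try = sum(picks_to_reach(10, rng) == 1 for _ in range(trials)) / trials
print(first_try)  # should be close to 1/10, since the first pick is uniform
```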
https://math.stackexchange.com/questions/2976252/probability-of-arriving-at-a-number-after-a-certain-number-of-tries-with-the-fol
Question: <p>Suppose we have code with <span class="math-container">$n = 100$</span> pages. Let <span class="math-container">$X_i$</span> be the number of errors on page <span class="math-container">$i$</span>, which is Poisson distributed with mean <span class="math-container">$1$</span>; the number of errors on each page is independent of the other pages. The total number of errors is <span class="math-container">$Y = \sum X_i$</span>. Using the central limit theorem, estimate <span class="math-container">$P(Y &lt; 90)$</span>.</p> <p>Since this is a normal approximation to a Poisson problem, should I use <span class="math-container">$Z=(Y-\lambda)/\sqrt{\lambda}\longrightarrow N(0,1)$</span>, and therefore <span class="math-container">$P(Y&lt;90) = P(Z&lt;\frac{90-1}{1})$</span>? </p> <p>And how can I get <span class="math-container">$P(Z&lt;89)$</span> from a z table? </p> Answer: <p>The key fact to use here is that the sum of independent Poisson RVs also has a Poisson distribution with a rate that is the sum of the individual rates. Specifically, since <span class="math-container">$X_i \sim Pois(1)$</span>, <span class="math-container">$Y = \sum_{i=1}^{100} X_i \sim Pois(100)$</span>. Since <span class="math-container">$Y$</span> is Poisson with parameter 100, both its mean and variance are 100. You can approximate <span class="math-container">$Y$</span> with a normal random variable <span class="math-container">$Z \sim N(100, 100)$</span>. </p> <p>Now, to estimate the probability of interest, <span class="math-container">$\mathbb{P}(Y &lt; 90) \approx \mathbb{P}(Z &lt; 90) = \mathbb{P}(Z_{norm} &lt; (90-100)/10) = \mathbb{P}(Z_{norm} &lt; -1)$</span>, where <span class="math-container">$Z_{norm}$</span> is a standardized normal RV with mean 0 and variance 1. You can look this up from standard tables to get the answer 0.1587.</p>
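The table lookup can be reproduced with a few lines of standard-library Python; a sketch, with the exact Poisson tail included only for comparison (summed in log space to avoid overflow):

```python
from math import erf, exp, lgamma, log, sqrt

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# CLT approximation from the answer: P(Y < 90) ~ P(Z_norm < -1)
approx = norm_cdf((90 - 100) / 10)
print(round(approx, 4))  # 0.1587

# exact Poisson(100) probability P(Y <= 89) for comparison
exact = sum(exp(k * log(100) - 100 - lgamma(k + 1)) for k in range(90))
print(round(exact, 4))
```

The gap between the two values is the usual price of approximating a discrete distribution without a continuity correction.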
https://math.stackexchange.com/questions/2982204/normal-approximation-to-poisson-problem
Question: <p><span class="math-container">$$\mathbb P\{X\leq x\}=\int_{\Omega }\boldsymbol 1_{X^{-1}(-\infty,x]}(\omega )d\mathbb P(\omega )=\int_{\Omega }\boldsymbol 1_{(-\infty,x]}(X(\omega ))d\mathbb P(\omega ),$$</span></p> <p>how can I continue ? I guess I have to do the substitution <span class="math-container">$t=X(\omega )$</span>, but I don't really know how to manage with the <span class="math-container">$d\mathbb P$</span>. </p> Answer:
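The step the question is missing is the change of variables to the pushforward measure $P_X=\mathbb P\circ X^{-1}$ (so no literal substitution $t = X(\omega)$ is needed); a sketch in outline, assuming $X$ admits a density $f_X$:

```latex
% Change of variables / pushforward: if P_X := \mathbb P \circ X^{-1} is the
% law of X, then for every bounded measurable g,
%   \int_\Omega g(X(\omega)) \, d\mathbb P(\omega)
%     = \int_{\mathbb R} g(t) \, dP_X(t).
% Applying this with g = \mathbf 1_{(-\infty,x]}:
\mathbb P\{X \le x\}
  = \int_{\Omega} \mathbf 1_{(-\infty,x]}\bigl(X(\omega)\bigr)\, d\mathbb P(\omega)
  = \int_{\mathbb R} \mathbf 1_{(-\infty,x]}(t)\, dP_X(t)
  = \int_{-\infty}^{x} f_X(t)\, dt,
% the last equality using the assumption that X has density f_X,
% i.e. dP_X(t) = f_X(t)\,dt.
```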
https://math.stackexchange.com/questions/2984419/how-can-i-prove-that-mathbb-p-x-leq-x-int-inftyx-f-xtdt
Question: <p>I have difficulty understanding random variables. Let <span class="math-container">$(\Omega ,\mathcal F,\mathbb P)$</span> be a probability space. Say <span class="math-container">$\Omega =[0,1]$</span>, <span class="math-container">$\mathbb P=m$</span> the Lebesgue measure, and <span class="math-container">$\mathcal F=\mathcal B_{[0,1]}$</span> the Borel <span class="math-container">$\sigma -$</span>algebra. Take for example <span class="math-container">$X(\omega )=1$</span> or <span class="math-container">$X(\omega )=\omega $</span>. In what sense is it random? If I fix <span class="math-container">$\omega \in \Omega $</span>, then <span class="math-container">$X(\omega )=\omega $</span> is known, not unknown. </p> <p>Another example: if I want a random number in <span class="math-container">$[0,1]$</span>, then <span class="math-container">$X(\omega )=\omega $</span> should work. So if I fix <span class="math-container">$\omega $</span>, I know <span class="math-container">$X(\omega )$</span>... why is it random? Also, if <span class="math-container">$Y$</span> is a uniform r.v. in <span class="math-container">$[0,1]$</span>, don't we have that <span class="math-container">$X(\omega )=Y(\omega )=\omega $</span> a.s.? </p> Answer: <p>Here's a perspective that was imparted to me by my thesis advisor: You have a function (i.e. <span class="math-container">$X$</span>, the random variable) on a domain (i.e. <span class="math-container">$\Omega$</span>, the sample space). It behaves just like all other functions you've ever encountered behave. The catch, though, is that you never really pick an individual <span class="math-container">$\omega$</span> and evaluate <span class="math-container">$X(\omega)$</span>. Instead, Tyche, the Greek goddess of fortune, chooses an input <span class="math-container">$\omega$</span> for you, and she will make her choices according to the measure <span class="math-container">$\mathbb P$</span>. 
Your task is to describe broadly what will happen to the outputs <span class="math-container">$X(\omega)$</span>. </p> <p>You're quite right that if you fix a particular <span class="math-container">$\omega \in \Omega$</span>, then nothing is really random; you're just evaluating a function. The "random" component is the part where you surrender control over which particular <span class="math-container">$\omega$</span> you choose back to Tyche.</p> <blockquote> <p>Also, if Y is a uniform r.v. in [0,1] don't we have that X(ω)=Y(ω)=ω a.s. ?</p> </blockquote> <p>Not necessarily; it depends on how <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> map <span class="math-container">$\Omega$</span> onto the real number line. Consider this example, where the sample space is <span class="math-container">$\Omega = [0, 1]$</span> and the probability measure is the Borel / Lebesgue measure: <span class="math-container">$$X(\omega) = \begin{cases} \omega, &amp; 0 \leq \omega \leq 1 \\ 0, &amp;\text{otherwise} \end{cases}$$</span> <span class="math-container">$$Y(\omega) = \begin{cases} 1-\omega, &amp; 0 \leq \omega \leq 1 \\ 0, &amp;\text{otherwise} \end{cases}$$</span> Both <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> have an equally legitimate claim to the title "uniformly-distributed random variable on <span class="math-container">$[0, 1]$</span>". Consider why; your only concern is the probability of <span class="math-container">$X$</span> or <span class="math-container">$Y$</span> being in particular regions after Tyche has chosen her input. The essential characteristic is that if you choose <span class="math-container">$a, b$</span> such that <span class="math-container">$0 \leq a \leq b \leq 1$</span>, that <span class="math-container">$\mathbb P(a \leq X \leq b) = b-a$</span>. It is also true that <span class="math-container">$\mathbb P(a \leq Y \leq b) = b-a$</span>. 
Yet, it also happens to be true that <span class="math-container">$\mathbb P(X = Y) = 0$</span>. Here's a third variable that has the same property: <span class="math-container">$$Z(\omega) = \begin{cases} \omega, &amp; 0 \leq \omega \leq 1 \text{ and } \omega \not \in \mathbb Q \\ 0, &amp;\text{otherwise} \end{cases}$$</span> Here, <span class="math-container">$Z$</span> again has just as much claim to be a "uniformly distributed variable on <span class="math-container">$[0, 1]$</span>" as <span class="math-container">$X$</span> or <span class="math-container">$Y$</span> do. This function is not terribly well-behaved, since it has infinitely many discontinuities. Yet, all we care about are the distributions after <span class="math-container">$Z$</span> has done its mapping.</p> <p>How <span class="math-container">$X$</span> maps the particular sample spaces onto <span class="math-container">$\mathbb R$</span> is not relevant; to characterize a random variable, your only concern is to characterize statements like <span class="math-container">$\mathbb P(a \leq X \leq b)$</span>. There are many ways to accomplish this; an equivalent one would be to successfully characterize all statements like <span class="math-container">$\mathbb P(X \leq c)$</span>. But the point is that you don't care about how the actual inputs are structured, and you don't care about how exactly they're mapped onto the reals by <span class="math-container">$X$</span>; you only care about things like <span class="math-container">$\mathbb P(X \in A) = \mathbb P(\omega: X(\omega) \in A)$</span>.</p>
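The point that $X$ and $Y$ share one distribution while disagreeing as functions is easy to see numerically; a sketch, with Tyche played by a seeded RNG:

```python
import random

rng = random.Random(42)
samples = [rng.random() for _ in range(200_000)]  # Tyche's draws of omega

X = samples                      # X(omega) = omega
Y = [1 - w for w in samples]     # Y(omega) = 1 - omega

def prob_in(vals, a, b):
    """Empirical P(a <= V <= b)."""
    return sum(a <= v <= b for v in vals) / len(vals)

# Both satisfy P(a <= V <= b) ~ b - a, yet X(omega) = Y(omega) only at 1/2.
print(prob_in(X, 0.2, 0.5), prob_in(Y, 0.2, 0.5))  # both near 0.3
print(sum(x == y for x, y in zip(X, Y)))  # 0 with overwhelming probability
```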
https://math.stackexchange.com/questions/2985937/problem-to-understand-random-variable-for-example-x-omega-1-is-really-ra
Question: <p>I was initially thinking about who would die first, Sean Connery or Roger Moore. Roger Moore has since passed on, so I got to thinking: could you calculate the odds for a single person versus a whole group? In my example, which would happen first: an original cast member of the Simpsons dying, or Sean Connery dying? Could you predict which would happen first?</p> Answer: <p>I cannot address your specific example because I would have to know lots of details about the lives of Sean Connery and the cast members of The Simpsons to determine which people are in better/worse health and are thus more/less likely to die first, etc.</p> <p>However, I will formalize, generalize, and answer your general question:</p> <blockquote> <p>Suppose that of two sets of people, set <span class="math-container">$A$</span> and set <span class="math-container">$B$</span>, the people in set <span class="math-container">$A$</span> have average life expectancy <span class="math-container">$a$</span> and the people in set <span class="math-container">$B$</span> have average life expectancy <span class="math-container">$b$</span>. What is the probability that at least one person from <span class="math-container">$B$</span> outlives all people from <span class="math-container">$A$</span>?</p> </blockquote> <p>To solve the problem, we shall assume that the lifespan of a person follows the <a href="https://en.wikipedia.org/wiki/Exponential_distribution" rel="noreferrer">Exponential Distribution</a> with mean equal to the life expectancy of that person (because this distribution is typically used to model the amount of time that passes before an event occurs); that is, if the random variable <span class="math-container">$X$</span> represents the lifespan of a person with life expectancy <span class="math-container">$x$</span>, then we shall assume that <span class="math-container">$X\sim \text{Exp}(1/x)$</span>. 
</p> <p>Consider now a set <span class="math-container">$A$</span> of people each with life expectancy <span class="math-container">$a$</span>. Let the random variables <span class="math-container">$X_1,...,X_{|A|}\sim \text{Exp}(1/a)$</span> represent the length of each person's respective life. Then the amount of time that passes before <em>all</em> of them are dead will be the maximum of all of those variables. As we can see from <a href="https://math.stackexchange.com/questions/1114516/probability-density-function-of-maxx-y">this</a> old post, if <span class="math-container">$M,N$</span> are random variables with respective PDFs <span class="math-container">$p_M$</span> and <span class="math-container">$p_N$</span> and respective cumulative PDFs <span class="math-container">$P_M$</span> and <span class="math-container">$P_N$</span>, then the probability density function of the maximum of <span class="math-container">$M,N$</span> is <span class="math-container">$$p_M(t)P_N(t)+P_M(t)p_N(t)$$</span> Note that for any one of the variables <span class="math-container">$X_i$</span>, we have that <span class="math-container">$p_{X_i}(t)=(1/a)e^{-t/a}$</span> and <span class="math-container">$P_{X_i}(t)=1-e^{-t/a}$</span>. 
Thus, if we let <span class="math-container">$M_k$</span> represent the maximum value of <span class="math-container">$X_1,X_2,...,X_{k}$</span>, then we may establish the recursion <span class="math-container">$$p_{M_{k+1}}(t)=(1-e^{-t/a})p_{M_k}(t)+\frac{e^{-t/a}}{a}\int_0^t p_{M_k}(s)ds$$</span> To solve this recurrence, it should be safe to assume that <span class="math-container">$p_{M_k}(t)$</span> is always in the form <span class="math-container">$$p_{M_k}(t)=\sum_{i=1}^{k} c(i,k) e^{-it/a}$$</span> where <span class="math-container">$c(i,k)$</span> is some sequence of coefficients, with <span class="math-container">$c(i,k)=0$</span> for <span class="math-container">$i\gt k$</span> or <span class="math-container">$i\lt 1$</span>, that we can hopefully find a recurrence for. We have that <span class="math-container">$$(1-e^{-t/a})\sum_{i=1}^k c(i,k) e^{-it/a}=\sum_{i=1}^{k+1} (c(i,k)-c(i-1,k))e^{-it/a}$$</span> and <span class="math-container">$$\frac{e^{-t/a}}{a}\int_0^t \sum_{i=1}^k c(i,k) e^{-is/a} ds=((1/1)c(1,k)+...+(1/k)c(k,k))e^{-t/a}+\sum_{i=2}^{k+1} \frac{-c(i-1,k)}{i-1}e^{-it/a}$$</span> Thus, we have the following: <span class="math-container">$$\sum_{i=1}^{k+1} c(i,k+1) e^{-it/a}=(2c(1,k)+(1/2)c(2,k)...+(1/k)c(k,k))e^{-t/a}+\sum_{i=2}^{k+1} \big(c(i,k)-\frac{i}{i-1}c(i-1,k)\big)e^{-it/a}$$</span> and as for our recurrence, we have that <span class="math-container">$$c(1,k+1)=2c(1,k)+(1/2)c(2,k)...+(1/k)c(k,k)$$</span> and, for <span class="math-container">$i\gt 1$</span>, <span class="math-container">$$c(i,k+1)=c(i,k)-\frac{i}{i-1}c(i-1,k)$$</span> This is a tricky recurrence. 
However, it <em>can</em> be shown by induction (though I won't do so here) that <span class="math-container">$$c(i,k)=(-1)^{i+1}\frac{k}{a}\binom{k-1}{i-1}$$</span> Thus, we have the following awesome formula: <span class="math-container">$$p_{M_k}(t)=\sum_{i=1}^{k} (-1)^{i+1}\frac{k}{a}\binom{k-1}{i-1} e^{-it/a}=\frac{k}{a}\frac{(1-e^{-t/a})^k}{e^{t/a}-1}$$</span></p> <hr> <p>What does this mean for the problem? Well, it means the following: if <span class="math-container">$A$</span> is a set of <span class="math-container">$|A|$</span> people so that each person has mean life expectancy <span class="math-container">$a$</span> and their lifespans are distributed exponentially, and if <span class="math-container">$X$</span> is a random variable modeling the amount of time passed before all people in <span class="math-container">$A$</span> die, then <span class="math-container">$$p_X(t)=\frac{|A|}{a}\frac{(1-e^{-t/a})^{|A|}}{e^{t/a}-1}$$</span> Similarly, if the set <span class="math-container">$B$</span> is defined analogously and <span class="math-container">$Y$</span> represents the amount of time passed before all people in <span class="math-container">$B$</span> die, then <span class="math-container">$$p_Y(t)=\frac{|B|}{b}\frac{(1-e^{-t/b})^{|B|}}{e^{t/b}-1}$$</span> To answer our original question, we seek the probability <span class="math-container">$P(Y\gt X)$</span>. This probability is given by <span class="math-container">$$P(Y\gt X)=\int_0^\infty P_X(t)p_Y(t)dt=\int_0^\infty\int_0^t \frac{|A|}{a}\frac{(1-e^{-s/a})^{|A|}}{e^{s/a}-1}\cdot \frac{|B|}{b}\frac{(1-e^{-t/b})^{|B|}}{e^{t/b}-1} dsdt$$</span> Of course, there's no way I'm going to evaluate that, but we can at least approximate it using Wolfram. The average life expectancy of an American man is about <span class="math-container">$75$</span> years, but Sean Connery is no mere man, so we'll give him a life expectancy of <span class="math-container">$85$</span> years. 
The regular cast members of The Simpsons can be found <a href="https://en.wikipedia.org/wiki/List_of_The_Simpsons_cast_members" rel="noreferrer">here</a>, and there are about <span class="math-container">$17$</span> of them. These guys are wasting their lives away in a dark animation studio, so we'll give each of them about <span class="math-container">$70$</span> years to live. This gives us the integral <span class="math-container">$$\int_0^\infty\int_0^t \frac{17}{70}\frac{(1-e^{-s/70})^{17}}{e^{s/70}-1}\cdot \frac{1}{85}\frac{(1-e^{-t/85})}{e^{t/85}-1} dsdt\approx 0.08712$$</span> ...thanks, Wolfram! So, even if we assume that Sean Connery has the magical ability to extend his life to ten years longer than the average life expectancy through sheer stardom, he still has less than a <span class="math-container">$10 \%$</span> chance to live longer than the entire regular cast of "The Simpsons" (keep in mind that I'm comparing their ages upon death, not who dies last chronologically). Does that answer your question, @user209627?</p> <p><strong>ADDENDUM:</strong> Investigation of these formulae lead to some interesting paradoxes surrounding my assumption that these variables are distributed exponentially. If <span class="math-container">$A$</span> is a set of people each with life expectancy <span class="math-container">$a$</span>, then it can be shown from the above formulae that the expected lifespan of the longest-living person in set <span class="math-container">$A$</span> is <span class="math-container">$$\mathbb E[\max\{X_i\}]=aH_{|A|}$$</span> where <span class="math-container">$H_n$</span> is the nth harmonic number. It is well known that the harmonic numbers diverge to infinity, and so as our set of people grows large, the average lifespan approaches infinity. 
In fact, if we assume that the average lifespan of a person is <span class="math-container">$75$</span> years and that lifespan is distributed exponentially, then we have that the longest-living person of <span class="math-container">$100$</span> identical people is expected to live about <span class="math-container">$389$</span> years! This flaw may reside in the fact that we assumed the exponential distribution of our variables, allowing them to take on arbitrarily large values, or it may reside in the fact that the arithmetic mean is an imperfect measure of central tendency.</p>
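The addendum's $aH_{|A|}$ formula and the 389-year figure are easy to reproduce (a sketch; `expected_max_lifespan` is my own name for the quantity):

```python
import math
import random

def expected_max_lifespan(a: float, n: int) -> float:
    """E[max of n iid Exp(mean a) lifespans] = a * H_n (nth harmonic number)."""
    return a * sum(1.0 / i for i in range(1, n + 1))

# the addendum's figure: 100 people, mean lifespan 75 years
print(round(expected_max_lifespan(75, 100)))  # 389

# Monte Carlo check of the a*H_n formula for a small group
rng = random.Random(1)
n, a, trials = 5, 1.0, 200_000
est = sum(max(rng.expovariate(1 / a) for _ in range(n))
          for _ in range(trials)) / trials
print(est, expected_max_lifespan(a, n))  # both near H_5 ~ 2.283
```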
https://math.stackexchange.com/questions/2991957/could-you-calculate-the-odds-of-which-would-happen-first-sean-connery-or-an-ori
Question: <blockquote> <p>A machine has <span class="math-container">$5$</span> components and needs at least 3 working components to function. Suppose that their lifetimes are independent exponential(1). Find the density function for the time to failure <span class="math-container">$T$</span>.</p> </blockquote> <h3>attempt</h3> <p>Let <span class="math-container">$T$</span> be the time to failure. We need <span class="math-container">$f_T(t)$</span>. Let's find <span class="math-container">$P(T \leq t)$</span>. Let <span class="math-container">$X_i$</span> be the lifetime of each of the five components; each <span class="math-container">$X_i$</span> is exp(1). Thus, <span class="math-container">$f_{X_i} (x) = e^{-x} $</span> for <span class="math-container">$x&gt;0$</span>.</p> <p>Notice that P(one component not working) is <span class="math-container">$P(X_i &gt; t ) = e^{-t} $</span></p> <p>So, here is my confusion. Isn't <span class="math-container">$T$</span> a binomial random variable with <span class="math-container">$n=5$</span> and <span class="math-container">$p=e^{-t}$</span>? And so</p> <p><span class="math-container">$$ P(number \; of \; components \; not \; working &lt; 3) = (1-e^{-t})^5 + 5e^{-t}(1-e^{-t})^4 + {5 \choose 2} e^{-2t} (1-e^{-t})^3 $$</span></p> <p>Can we consider this to be our distribution, so that <span class="math-container">$f_T(t)$</span> would just be the derivative of the above expression?</p> <p>Best regards,</p> <p>Jimmy</p> Answer: <p>No, <span class="math-container">$T$</span> cannot be binomial because it is a <strong>time</strong> to failure (of the system). 
If you want to <strong>count</strong> the number of components still operating at a given time, that at least is compatible with a discrete distribution with finite support.</p> <p>If each component's lifetime is independent and identically exponentially distributed with mean <span class="math-container">$1$</span>, then the time to failure of the system occurs when the <strong>third</strong> component fails (the system can operate with two failures, but not three). So this should immediately suggest that we consider the <strong>order statistics</strong>. In other words, let <span class="math-container">$X_{(1)}, X_{(2)}, X_{(3)}, X_{(4)}, X_{(5)}$</span> represent the failure times of the five components in ascending order, so in particular <span class="math-container">$X_{(i)} \le X_{(j)}$</span> for any <span class="math-container">$i &lt; j$</span>. Then <span class="math-container">$T = X_{(3)}$</span>, the time to the third failure.</p> <p>So how do we compute <span class="math-container">$\Pr[T \le t] = \Pr[X_{(3)} \le t]$</span>? What this means is that at least three failures (but possibly more) have occurred before time <span class="math-container">$t$</span>. If only two failures have happened by time <span class="math-container">$t$</span>, then the event <span class="math-container">$X_{(3)} \le t$</span> did not occur. So we note by the independence of individual failure times <span class="math-container">$$\Pr[X_{(3)} \le t] = \binom{5}{3} \Pr[X \le t]^3 \Pr[X &gt; t]^2 + \binom{5}{4} \Pr[X \le t]^4 \Pr[X &gt; t] + \binom{5}{5} \Pr[X \le t]^5.$$</span> Why? 
Partition the LHS probability into three cases:</p> <ol> <li>Exactly 3 components have failed by time <span class="math-container">$t$</span> and 2 have not; </li> <li>Exactly 4 components have failed by time <span class="math-container">$t$</span> and 1 has not;</li> <li>All 5 components have failed by time <span class="math-container">$t$</span>.</li> </ol> <p>Since each component's lifetime is independent, the first case gives us <span class="math-container">$\Pr[X \le t]^3 \Pr[X &gt; t]^2$</span> for a given ordering of components. But there are <span class="math-container">$\binom{5}{3}$</span> ways to choose the three components that fail. Similarly, we must account for the <span class="math-container">$\binom{5}{4}$</span> ways to choose four components that fail.</p> <p>Then, once you have computed the above, we simply take the derivative to find the density of <span class="math-container">$T$</span>. Your computation agrees with mine, except for some minor errors and the confusion between a time-to-event variable and a counting variable, as I pointed out at the start.</p>
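Carrying out that derivative gives $f_T(t) = 30(1-e^{-t})^2 e^{-3t}$, which also matches the general order-statistic density $\frac{n!}{(k-1)!\,(n-k)!}F^{k-1}(1-F)^{n-k}f$ for $n=5$, $k=3$. A quick numeric check of the differentiation (a sketch):

```python
from math import comb, exp

def cdf_T(t: float) -> float:
    """P(T <= t) = P(at least 3 of 5 Exp(1) components have failed by t)."""
    F, S = 1 - exp(-t), exp(-t)
    return sum(comb(5, k) * F**k * S**(5 - k) for k in (3, 4, 5))

def pdf_T(t: float) -> float:
    """Closed form obtained by differentiating the CDF above."""
    return 30 * (1 - exp(-t))**2 * exp(-3 * t)

# central-difference derivative of the CDF should match the closed form
h = 1e-6
for t in (0.3, 1.0, 2.5):
    numeric = (cdf_T(t + h) - cdf_T(t - h)) / (2 * h)
    print(t, numeric, pdf_T(t))
```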
https://math.stackexchange.com/questions/2993505/find-the-density-function-for-the-time-to-failure-t
Question: <p>How to combine the probability of liking something with the one of it being liked?</p> <p>I'd like to estimate the probability of a person liking a dish by only having the following two bits of information given:</p> <ul> <li>The person likes 70% (<code>like_rate = 0.7</code>) of all dishes you offer him.</li> <li>The dish is liked by 80% (<code>liked_rate = 0.8</code>) of all people that try it.</li> </ul> <p>So I'm looking for a function <code>f(like_rate, liked_rate)</code>. I think it has to fulfill the following properties:</p> <ol> <li><code>f(1.0, ?) ≈ 1.0</code> (If the person likes everything, the dish does not matter much.)</li> <li><code>f(0.0, ?) ≈ 0.0</code> (If the person hates everything, the dish does not matter much.)</li> <li><code>f(?, 1.0) ≈ 1.0</code> (If everybody likes the dish, the person does not matter much.)</li> <li><code>f(?, 0.0) ≈ 0.0</code> (If nobody likes the dish, the person does not matter much.)</li> </ol> <p>Of course we then run into a problem for the following two cases:</p> <ol start="5"> <li><code>f(1.0, 0.0)</code></li> <li><code>f(0.0, 1.0)</code></li> </ol> <p>For practical consideration these are very unlikely to happen, so maybe their results could just be <code>0.5</code>.</p> <p>Another property that should be fulfilled in any case, at least from how I understand it, is:</p> <ol start="7"> <li><code>f(0.5, 0.5) = 0.5</code></li> </ol> <p>But how to calculate these?</p> <ol start="8"> <li><code>f(0.5, 0.0) = ?</code></li> <li><code>f(0.5, 1.0) = ?</code></li> <li><code>f(0.0, 0.5) = ?</code></li> <li><code>f(1.0, 0.5) = ?</code></li> </ol> <p>There are many possible functions in 3D space that fulfill the given conditions. But is one of them the correct one? If so, which one is it?</p> Answer: <p>At the moment the question is a bit underspecified, as there is a lot of subjectivity in how one is to interpret it.
For instance </p> <ol> <li><p>Is the person who likes 70% of dishes from the same population as the people who have already tried it? I.e. is it the case that in general any member of the population likes 70% of the dishes, and in this case 80% who tried it like it?</p></li> <li><p>If the person is representative of the population, then it seems like a reasonable assumption to say that he has an 80% chance of liking it. This however is contrary to the scenarios which you give where they might have a 0% chance of liking it. So that makes me think you do not see them as coming from the same population.</p></li> <li><p>If we suppose they are not from the same population, then we need a way to relate their tastes to the population's. If they are completely independent, then knowing that 80% of the population like the dish will not impact this person's likelihood of enjoying it.</p></li> </ol> <p>So, with all those caveats, I outline below one interpretation of the question.</p> <hr> <p><strong>Reformulating the problem</strong></p> <p>In my interpretation, I will assume that the person in question comes from the same population as those who have already tried the dish. To make it clear what I mean by this, I will rephrase the problem in terms of tossing a biased coin.</p> <p>Suppose you have a collection of identical coins, and that each of them is biased in a different way. Since the coins appear identical, if you pick one out at random you do not know the overall probability that it will fall heads. You do, however, know from past experience that in general the average coin will land heads 70% of the time.</p> <p>[<strong>Pause:</strong> <em>To clarify the analogy: here each coin represents one of your dishes, and landing heads is analogous to the dish being liked.
What we are saying is that from past experience a random person from the population will like 70% of your dishes</em>] </p> <p>Now you start tossing the coin and after a number of throws observe that in total it has landed heads 80% of the time.</p> <p>[<strong>Pause:</strong> <em>i.e. a number of people from your population eat the dish and 80% like it</em>]</p> <p>You would now like to know: given the prior knowledge about the collection of coins, and the new information about how many of these coin tosses have been heads, what is the probability that the next coin I throw will be a head?</p> <hr> <p><strong>A Bayesian Solution</strong></p> <p>This problem formulation in terms of combining prior knowledge with new observations is often tackled by turning to <a href="https://en.wikipedia.org/wiki/Bayesian_statistics" rel="nofollow noreferrer">Bayesian probability</a>.</p> <p>I do not have the space, nor authority, to give a complete introduction to Bayesian theory but there are plenty of places for you to read up on it. Instead I give a fairly brief summary to get to the end conclusion.</p> <p><em>The Prior</em></p> <p>The first thing we need to decide is how strongly we believe that the bias of the coin is exactly 70%. Suppose there is just a single coin, and when we obtained it we'd been told it would land heads up 70% of the time: then our belief would be very strong that this coin would land heads up 70% of the time. And just because we'd observed a run of 80% heads, we would still believe that the next toss would have a 70% chance of being heads: because of how strong our belief was.</p> <p>Suppose instead that we'd obtained 5 coins, and been told that they respectively land heads up 50/60/70/80/90% of the time: then if you picked a coin at random, before throwing it you'd expect it to land heads with 70% chance.
And after a few throws you would be far more willing to move away from the 70% assumption.</p> <p>This demonstrates how subjective the question is, and how dependent it is on your prior belief.</p> <p>The prior distribution is a probability distribution on the set <span class="math-container">$[0,1]$</span> of possible biases. For instance, in the first example above (where we are adamant that the bias is 70%) this prior is</p> <p><span class="math-container">$$p(\theta) = \begin{cases} 1 &amp; \text{if $\theta = 0.7$,} \\ 0 &amp; \text{else.} \end{cases} $$</span></p> <p>In the second example, the prior would be</p> <p><span class="math-container">$$p(\theta) = \begin{cases} \frac15 &amp; \text{if $\theta \in\{0.5,0.6,0.7,0.8,0.9\}$,} \\ 0 &amp; \text{else.} \end{cases} $$</span></p> <p>Note that both priors have the property that the expected value of the distribution is <span class="math-container">$0.7$</span>.</p> <p>To make the problem computationally tractable we will make a very specific assumption about the form of the prior, and suppose that it is Beta distributed with parameters <span class="math-container">$\alpha, \beta &gt; 0$</span></p> <p><span class="math-container">$$ p(\theta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \theta^{\alpha-1}(1-\theta)^{\beta-1} $$</span></p> <p>There is insufficient space to fully justify this choice, but there are a few important points.</p> <ol> <li><p>The mean is <span class="math-container">$\alpha / (\alpha + \beta)$</span>; so we will want to choose parameters <span class="math-container">$\alpha,\beta$</span> such that this is equal to <span class="math-container">$0.7$</span>.</p></li> <li><p>These have a natural interpretation as <span class="math-container">$\alpha$</span> being the number of heads that have occurred when throwing coins from this set on previous occasions (i.e. before the current experiment), and <span class="math-container">$\beta$</span> the number of tails (so that <span
class="math-container">$\alpha + \beta$</span> is the total number of tosses). In terms of dishes, this would be equivalent to saying that of all other dishes you've served, not including the one currently in question, there were a total of <span class="math-container">$\alpha$</span> that were liked, and <span class="math-container">$\beta$</span> that weren't.</p></li> <li><p>The larger <span class="math-container">$\alpha, \beta$</span> are, the more certainty (less variance) you have in your prior assumption.</p></li> </ol> <p>You will have to choose what parameters <span class="math-container">$\alpha,\,\beta$</span> make sense in your context. For instance (in the land of dishes, not coins): if you have only ever served 10 dishes, and 7 were liked, then you'd use <span class="math-container">$\alpha = 7,\,\beta = 3$</span> which would give a moderate prior belief for the next dish. If you were basing it on restaurant reviews where there were 700 positive reviews, and 300 negative (<span class="math-container">$\alpha = 700, \,\beta =300$</span>) that would be a much more confident prior belief.</p> <p><em>The Posterior</em></p> <p>Now that we have arrived at a prior distribution, we can combine this with the newly observed data to derive the posterior. This is done through Bayes' rule by the formula:</p> <p><span class="math-container">$$p(\theta | x) = \frac{p(x|\theta) p(\theta)}{p(x)}.$$</span></p> <p>Here <span class="math-container">$p(x|\theta)$</span> is the probability of observing 80% heads, if you were to assume that the coin had a bias of <span class="math-container">$\theta$</span>. Again, space is insufficient to give full detail, but there are again some important points:</p> <ol> <li><p>The more data you have (i.e. the more coin tosses that made up the 80% figure) the better your posterior knowledge will be.
We will suppose that there were a total of <span class="math-container">$n$</span> tosses, <span class="math-container">$x_1,\ldots, x_n$</span> and that <span class="math-container">$x_i = 1$</span> denotes that coin <span class="math-container">$i$</span> was heads. We have from the problem set up that <span class="math-container">$n^{-1}\sum_i x_i = 0.8$</span>.</p></li> <li><p>The <em>magic</em> of choosing the prior to be Beta distributed is that the posterior <span class="math-container">$p(\theta |x)$</span> is also Beta distributed. In particular, if the prior parameters were <span class="math-container">$\alpha, \beta$</span> then the posterior parameters are</p></li> </ol> <p><span class="math-container">$$ \alpha' = \alpha + \sum_i x_i, \qquad \beta' = \beta + n - \sum_i x_i.$$</span></p> <ol start="3"> <li>This fits with the interpretation we gave before: when we had the prior information a total of <span class="math-container">$\alpha + \beta$</span> dishes had been tasted, of which <span class="math-container">$\alpha$</span> were liked. Now a further <span class="math-container">$n$</span> have been tasted, of which <span class="math-container">$\sum_i x_i$</span> are liked. 
So in total <span class="math-container">$\alpha + \beta +n = \alpha' + \beta'$</span> have been tasted, of which <span class="math-container">$\alpha + \sum_i x_i$</span> have been liked.</li> </ol> <p>So, in all: combining our prior belief and our observations, the posterior mean for <span class="math-container">$\theta$</span>, which is the expected probability that the next coin will be a head, is</p> <p><span class="math-container">$$\frac{ \alpha + \sum_{i} x_i }{\alpha + \beta + n}$$</span></p> <hr> <p><strong>A worked example</strong></p> <p>To make all of this a bit more concrete, and back in the context of your original question, we consider a specific example.</p> <p>Suppose that you'd come to the conclusion that 70% of your dishes were liked after 10 people had tried them and 7 of them had liked their dishes. Then you would choose your prior parameters <span class="math-container">$\alpha = 7, \, \beta = 3$</span>.</p> <p>Now suppose that a further 5 people came to eat your new dish, of which 4 enjoyed it (i.e. 80%). Then your posterior prediction for the popularity of the new dish would have parameters <span class="math-container">$\alpha' = 7 + 4$</span> and <span class="math-container">$\beta' = 3 + 1$</span>, so that the posterior probability of liking the dish would be <span class="math-container">$11/15 \sim 73.3$</span>%.</p> <p>Suppose instead that it was based on restaurant reviews, and previously you'd had 700 people like the dishes, and 300 not (<span class="math-container">$\alpha = 700$</span>, <span class="math-container">$\beta = 300$</span>). Now you check your latest reviews and see that you have a further 100 reviews, of which <span class="math-container">$80$</span> were positive.
Then the posterior probability would be <span class="math-container">$780/1100 \sim 70.9$</span>%</p> <p>So we see that in the second example when there is significantly more evidence to support our initial belief, we are more reluctant to move away from it.</p> <hr> <p><strong>Final Thoughts</strong></p> <ol> <li>Hopefully it is clear from this that the percentages of 70% and 80% on their own are not strong enough to let us update our belief that someone will like the dish. We need not just the percentage, but the weight of evidence behind them.</li> <li>Although we've invoked a lot of Bayesian machinery to do this, the final answer does not actually use any of the Bayesian theory itself, and is of itself hopefully quite interpretable.</li> </ol> <p>Finally, we link this back to the specific examples you gave. Hopefully the above motivates you to think of the function <span class="math-container">$f$</span> not in terms of <span class="math-container">$f(\text{likes}, \, \text{liked})$</span>, but rather as <span class="math-container">$f( (\alpha,\,\beta) \colon \, (n,x) )$</span>, and from the above this would have the formula</p> <p><span class="math-container">$$ f( (\alpha,\,\beta) \colon \, (n,x) ) = \frac{\alpha + x}{\alpha + \beta + n}.$$</span></p> <p>Your statements numbered 1-4 still hold, where the extent to which <span class="math-container">$\approx$</span> is accurate depends on the level of belief in the priors, and the amount of new evidence.</p> <p>Statements 5-6 do now have an interpretation which makes perfect sense.</p> <p>Statement 7 holds, and in fact more generally we have that if <span class="math-container">$\alpha/(\alpha + \beta) = x/n$</span> then</p> <p><span class="math-container">$$ f( (\alpha,\,\beta) \colon \, (n,x) ) = \frac{\alpha}{\alpha + \beta},$$</span></p> <p>i.e.
if the prior mean and the mean of the evidence are equal, then the posterior mean takes that same value.</p> <p>All the remaining statements are just special cases of the above formula.</p>
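The posterior-mean update derived above is a one-liner in code. The sketch below (function and variable names are mine, not the answer's) reproduces both worked examples:

```python
from fractions import Fraction

def posterior_like_probability(alpha, beta, n, x):
    """Posterior mean after a Beta(alpha, beta) prior sees x likes out of n tries.

    The posterior is Beta(alpha + x, beta + n - x); its mean
    (alpha + x) / (alpha + beta + n) is the predicted probability
    that the next person likes the dish.
    """
    return Fraction(alpha + x, alpha + beta + n)

# First worked example: prior from 7 likes out of 10, then 4 of 5 like the new dish.
print(posterior_like_probability(7, 3, 5, 4))        # 11/15, about 73.3%

# Second: prior from 700/1000 positive reviews, then 80 of 100 new reviews positive.
print(posterior_like_probability(700, 300, 100, 80)) # 780/1100 = 39/55, about 70.9%
```

With no new observations (`n = x = 0`) the function returns the prior mean, matching the intuition that the 70% belief stands until evidence arrives.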
https://math.stackexchange.com/questions/2995012/how-to-combine-the-probability-of-liking-something-with-the-one-of-it-being-like
Question: <p>Suppose that X and Y are integer valued random variables with joint probability mass function given by <span class="math-container">$p_{X,Y}(a, b)=\begin{cases} \frac{1}{4a}, &amp; 1\leq b\leq a\leq 4\\ 0, &amp; \text{otherwise}. \end{cases}.$</span></p> <p>(b) Find the marginal probability mass functions of X and Y. (c) Find <span class="math-container">$P(X=Y+1)$</span></p> <p>Since this is a discrete distribution, I want to construct a table of the joint distribution and sum the rows and columns to get the pmfs of X and Y, but I'm having trouble with it. Can anyone help with parts (b) and (c)?</p> Answer: <p>The table is not hard. &nbsp; In each row <span class="math-container">$a$</span> (for <span class="math-container">$1\leq a\leq 4$</span>), the first <span class="math-container">$a$</span> cells (<span class="math-container">$1\leq b\leq a$</span>) each contain <span class="math-container">$\tfrac 1{4a}$</span>; the rest of the cells are zero.</p> <p><span class="math-container">$$p_{X,Y}(a,b)=\tfrac 1{4a}\mathbf 1_{1\leq b\leq a\leq 4}\\\boxed{\begin{array}{c|c:c:c:c|c}a\backslash b &amp; 1 &amp; 2 &amp; 3 &amp; 4 &amp;\\\hline 1 &amp; \tfrac 14 &amp;0&amp;0&amp;0\\ \hdashline 2 &amp; \tfrac 18&amp;\tfrac 18&amp;0&amp;0\\ \hdashline 3 &amp;\tfrac 1{12}&amp;\tfrac 1{12}&amp;\tfrac 1{12}&amp;0\\ \hdashline 4 &amp; \tfrac 1{16}&amp;\tfrac 1{16}&amp;\tfrac 1{16}&amp;\tfrac 1{16}\\\hline ~&amp;&amp;&amp;&amp;&amp;1\end{array}}$$</span></p>
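Since the answer stops at the table, here is a short enumeration (an addition of mine, not part of the original answer) that reads the marginals off the table and also settles part (c):

```python
from fractions import Fraction

# Joint pmf from the question: p(a, b) = 1/(4a) for 1 <= b <= a <= 4.
joint = {(a, b): Fraction(1, 4 * a) for a in range(1, 5) for b in range(1, a + 1)}

# (b) Marginals: sum across each row for p_X, down each column for p_Y.
p_X = {a: sum(p for (i, _), p in joint.items() if i == a) for a in range(1, 5)}
p_Y = {b: sum(p for (_, j), p in joint.items() if j == b) for b in range(1, 5)}
print(p_X)  # each row sums to a * 1/(4a) = 1/4
print(p_Y)  # 25/48, 13/48, 7/48, 3/48

# (c) P(X = Y + 1): the cells (2,1), (3,2), (4,3) just below the diagonal.
print(sum(joint.get((b + 1, b), Fraction(0)) for b in range(1, 5)))  # 13/48
```

So part (c) comes out to <span class="math-container">$\tfrac18+\tfrac1{12}+\tfrac1{16}=\tfrac{13}{48}$</span>, and both marginals sum to 1 as a check.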
https://math.stackexchange.com/questions/2997775/find-px-y1