What is the best out-of-the-box 2-class classifier for your application? [closed]
Gradient Boosted Trees.
- At least as accurate as RF in a lot of applications
- Incorporates missing values seamlessly
- Variable importance (like RF, probably biased in favor of continuous and many-level nominal variables)
- Partial dependency plots
- GBM versus randomForest in R: handles MUCH larger datasets
What is the best out-of-the-box 2-class classifier for your application? [closed]
Gaussian Process classifier - it gives probabilistic predictions (which is useful when your operational relative class frequencies differ from those in your training set, or equivalently your false-positive/false-negative costs are unknown or variable). It also provides an indication of the uncertainty in model predictions due to the uncertainty in "estimating the model" from a finite dataset. The covariance function is equivalent to the kernel function in an SVM, so it can also operate directly on non-vectorial data (e.g. strings or graphs etc). The mathematical framework is also neat (but don't use the Laplace approximation). Automated model selection via maximising the marginal likelihood. Essentially it combines the good features of logistic regression and the SVM.
What is the best out-of-the-box 2-class classifier for your application? [closed]
L1-regularized logistic regression. It is computationally fast. It has an intuitive interpretation. It has only one easily understandable hyperparameter that can be automatically tuned by cross-validation, which often is a good way to go. Its coefficients are piecewise linear and their relation to the hyperparameter is instantly and easily visible in a simple plot. It is one of the less dubious methods for variable selection. Also it has a really cool name.
What is the best out-of-the-box 2-class classifier for your application? [closed]
kNN
What is the best out-of-the-box 2-class classifier for your application? [closed]
Naive Bayes and Random Naive Bayes
What is the best out-of-the-box 2-class classifier for your application? [closed]
K-means clustering for unsupervised learning.
A three dice roll question
Let $V =\max(X,Y)$ and note that $V$ takes values in $\{1,2,\ldots,6\}$. We see that $V=1$ if A rolls $(1,1)$, $V=2$ if A rolls $(1,2)$, $(2,1)$ or $(2,2)$, and so on. Because the dice are uniformly distributed, you can check that $P(V=1) = \frac{1}{36}$, $P(V=2) = \frac{3}{36}$, and so on, up to $P(V=6) = \frac{11}{36}$. To compute $P(V>Z)$, use conditional probability: $$ P(V>Z) = \sum_{z=1}^6 P(V>z, Z=z) = \sum_{z=1}^6 P(V>z\mid Z=z)P(Z=z) = \cdots = \frac{125}{216}. $$ Note: $P(V>z \mid Z=z) = P(V>z)$ for all $z$, since $V$ is independent of $Z$.
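The distribution of $V$ and the final probability can be confirmed by brute-force enumeration. The answer itself uses no code; this is just a quick check, sketched in Python:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 ordered pairs (X, Y) to build the distribution of V = max(X, Y).
counts = {v: 0 for v in range(1, 7)}
for x, y in product(range(1, 7), repeat=2):
    counts[max(x, y)] += 1

# P(V=1) = 1/36, P(V=2) = 3/36, ..., P(V=6) = 11/36.
p_v = {v: Fraction(c, 36) for v, c in counts.items()}

# Condition on Z: P(V > Z) = sum_z P(V > z) * P(Z = z), using independence of V and Z.
p_win = sum(
    sum(p_v[v] for v in range(z + 1, 7)) * Fraction(1, 6)
    for z in range(1, 7)
)
print(p_win)  # 125/216
```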
A three dice roll question
Because the point of an interview question is to demonstrate your thinking, I want to emphasize two things: finding a simple, clear analysis using minimal calculation and straightforward notation, and explicitly identifying the rules of probability that justify the solution. Thus, in the following, you will see repeated references to simplicity and to key concepts of probability: independence, additivity, and complementation. The hope is that following such a principled, efficient analysis, and explaining it well, will help get you the job in addition to merely finding the answer.

Focus first on the simplest part of the situation: B's throw. To win, A has to beat B's value, and A has two independent tries to do so. This framing of the question suggests we break down the analysis according to B's possibilities.

Again, in pursuit of the simplest approach, consider the chance that A loses after B throws some value $b.$ To lose, A must throw two values between $1$ and $b$ inclusive. Since each of those events has a chance of $b/6$ and because the events are independent, A loses with probability $(b/6)^2$ in such a case. Because the chance of throwing $b$ was $1/6$ and A's throws are independent of B's throw, the event "B throws $b$ and A loses" has a probability of $1/6\times (b/6)^2 = b^2/6^3.$ This is why computing the chance of a loss is simpler than the chance of a win: all we need do is multiply.

All that remains is to sum these probabilities, because these events exhaust all the possible outcomes and are non-overlapping. The answer therefore can be expressed as $$\Pr(\text{A loses}) = \frac{1}{6^3}\left(1^2+2^2+3^2+4^2+5^2+6^2\right) = \frac{91}{216}.$$ Do not lose sight of the question: it asks for the chance that A wins. That, of course, is the complement of the chance of losing, $$\Pr(\text{A wins}) = 1 - \frac{91}{216} \approx 57.87\%.$$
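The accounting in this answer (each term $b^2/6^3$, summed over $b$) is easy to verify numerically. A minimal Python sketch, not part of the original answer:

```python
from fractions import Fraction

# For each value b thrown by B, the event "B throws b and A loses"
# has probability (1/6) * (b/6)^2 = b^2 / 6^3.
p_lose = sum(Fraction(b * b, 6**3) for b in range(1, 7))
p_win = 1 - p_lose

print(p_lose)  # 91/216
print(p_win)   # 125/216
```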
A three dice roll question
Consider the probability of the complementary event $\{\max(X, Y) \leq Z\}$: \begin{align} P(\max(X, Y) > Z) = 1 - P(\max(X, Y) \leq Z) = 1 - P(X \leq Z, Y \leq Z). \tag{$*$} \end{align} The term $P(X \leq Z, Y \leq Z)$ can be evaluated by the law of total probability and the independence assumption: \begin{align} & P(X \leq Z, Y \leq Z) = \sum_{i = 1}^6P(X \leq Z, Y \leq Z, Z = i) \\ =& \sum_{i = 1}^6 P(X \leq i, Y \leq i, Z = i) \\ =& \sum_{i = 1}^6 P(X \leq i)P(Y \leq i)P(Z = i) \\ =& \sum_{i = 1}^6 \frac{i}{6} \times \frac{i}{6} \times \frac{1}{6} = \frac{1}{216} \times \frac{1}{6} \times 6 \times 7 \times 13 = \frac{91}{216}. \\ \end{align} Plugging this back into $(*)$ gives \begin{align} P(\max(X, Y) > Z) = 1 - \frac{91}{216} = \frac{125}{216}. \end{align}
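The CDF-based sum above can be checked directly; the following short Python sketch (an illustration, not part of the original answer) mirrors the derivation term by term:

```python
from fractions import Fraction

# P(X <= i) = P(Y <= i) = i/6 for a fair die; P(Z = i) = 1/6.
# Law of total probability over the value of Z:
p_complement = sum(
    Fraction(i, 6) * Fraction(i, 6) * Fraction(1, 6) for i in range(1, 7)
)
p = 1 - p_complement

print(p_complement)  # 91/216
print(p)             # 125/216
```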
A three dice roll question
I will take a less formal approach, in order to illustrate my thinking. My first instinct was to visualize the usual $6 \times 6$ array of outcomes $(X,Y)$ of $A$'s dice rolls, looking at when the larger of the two values is less than or equal to some value: $$\begin{array}{cccccc} (1,1) & (2,1) & (3,1) & (4,1) & (5,1) & (6,1) \\ (1,2) & (2,2) & (3,2) & (4,2) & (5,2) & (6,2) \\ (1,3) & (2,3) & (3,3) & (4,3) & (5,3) & (6,3) \\ (1,4) & (2,4) & (3,4) & (4,4) & (5,4) & (6,4) \\ (1,5) & (2,5) & (3,5) & (4,5) & (5,5) & (6,5) \\ (1,6) & (2,6) & (3,6) & (4,6) & (5,6) & (6,6) \\ \end{array}$$

It's intuitively clear from this diagram that the number of ordered pairs whose maximum is at most $k$ is $k^2$, for $k \in \{1, 2, 3, 4, 5, 6\}$. This is because, geometrically, the sets of such outcomes are arranged as a series of nested squares in the array.

So it follows that for each of the six equiprobable outcomes $z \in \{1, 2, 3, 4, 5, 6\}$ of $B$'s die roll $Z$, $A$ will lose with probability $z^2/6^2$; hence the total probability of $A$ losing to $B$ is simply $$\frac{1^2 + \cdots + 6^2}{6^2(6)} = \frac{6(7)(13)}{6} \cdot \frac{1}{6^3} = \frac{7(13)}{6^3}.$$ Hence $A$ wins with probability $1 - \frac{7(13)}{6^3} = \frac{125}{216}$. This line of reasoning is what I would use if I had no access to pencil or paper and had to answer the question mentally, reserving the computational part for the very end.
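The "nested squares" claim, that exactly $k^2$ ordered pairs have maximum at most $k$, is the crux of this answer, and can be checked mechanically. A small Python sketch (mine, not the answer author's):

```python
from fractions import Fraction
from itertools import product

# Count ordered pairs (x, y) with max(x, y) <= k: the nested-squares
# picture says this should be exactly k^2.
for k in range(1, 7):
    n_pairs = sum(
        1 for x, y in product(range(1, 7), repeat=2) if max(x, y) <= k
    )
    assert n_pairs == k * k

# Total losing probability: sum over z of z^2 / 6^3.
p_lose = sum(Fraction(z * z, 6**3) for z in range(1, 7))
print(1 - p_lose)  # 125/216
```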
A three dice roll question
We need to know two things to calculate Player A's overall chances of winning: 1) his odds of rolling each score, and 2) his odds of winning with that score.

If you visualize Player A's possible outcomes on a 6x6 grid, it is easy to see that they form a series of bands (for lack of a better word), and we can count them and divide by the total number of outcomes to get the probability of each. It's also fairly easy to see that the number of occurrences of each outcome $n$ can be represented as $2n-1$ (because it's two sides of a square of size $n$ with the overlapping corner counted once), and the total number of outcomes is 6 squared, so the probability of each of A's possible outcomes is $(2n-1)/36$.

Player B's outcomes are evenly distributed: 1/6 (or 16.7%) for each number 1 through 6. Since Player A has to BEAT Player B's roll, we can find his chance of winning by summing the probabilities of each of B's possible outcomes that are less than his outcome. That is, if he scores 1, none of B's outcomes are less, meaning that he can beat 0% of B's possible scores. A score of 2 can beat just one of B's possible scores, or 16.7%, and a score of 3 wins in two out of six scenarios (33.3%). In general, a score of $n$ wins with probability $(n-1)/6$.

We can multiply these probabilities together (A's chance of rolling a specific number times his chance of winning with that number) and sum over the scores to obtain his overall chance of winning: $$\sum_{n=1}^{6} \frac{2n-1}{36}\cdot\frac{n-1}{6} = \frac{125}{216} \approx 57.9\%.$$ Just a slightly different way to think about it, and the demonstration of it can quickly be set up in a spreadsheet in just a minute or two, if they were letting you use a computer, or sketched out just as quickly.
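The band-counting argument reduces to a single weighted sum, which a few lines of Python can confirm (a verification sketch, not part of the original answer):

```python
from fractions import Fraction

# P(A's score = n) = (2n - 1)/36 from the band counts;
# a score of n beats B with probability (n - 1)/6.
p_win = sum(
    Fraction(2 * n - 1, 36) * Fraction(n - 1, 6) for n in range(1, 7)
)
print(p_win)         # 125/216
print(float(p_win))  # about 0.579
```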
A three dice roll question
Let's first code up a simulation to see where we should head, and then let's come at this theoretically. In R...

sims <- replicate(1000000, {
  A <- max(sample(1:6, replace=TRUE, size=2))
  B <- sample(1:6, size=1)
  ifelse(A>B, 'A WINS', 'B WINS')
})
prop.table(table(sims))

sims
  A WINS   B WINS 
0.578675 0.421325 

We can be confident in these results up to the second decimal place, so A wins about 58% of the time. For an interview, I think this is a sufficiently good answer.

Let's analyze. Let $B$ be the roll from player B and let $A = \max(A_1, A_2)$ be the number rolled for player A, which is the maximum of the two rolls. Luckily, we don't have to do any math to get the distribution of $A$; we can just do some logical thinking. $A=1$ only when $(A_1 = 1, A_2=1)$. So out of a possible 36 outcomes, $A=1$ can happen only 1 way, so $P(A=1)=1/36$. Similarly, $A=2$ when either $(A_1=1, A_2=2)$, $(A_1=2, A_2=1)$, or $(A_1=2, A_2=2)$. That's 3 outcomes out of 36, so $P(A=2) = 3/36$. We can continue on like this to get the entire distribution. Using R for some help...

library(tidyverse)

crossing(A1 = 1:6, A2 = 1:6) %>%
  mutate(A = pmax(A1, A2)) %>%
  count(A) %>%
  mutate(p = n / sum(n))

# A tibble: 6 × 3
      A     n      p
  <int> <int>  <dbl>
1     1     1 0.0278
2     2     3 0.0833
3     3     5 0.139
4     4     7 0.194
5     5     9 0.25
6     6    11 0.306

Do you think you can take it from here? To compute $P(A>B)$ we need to know the joint distribution of $(A, B)$. Because the two random variables are independent, $P(A=a, B=b) = P(A=a)P(B=b)$. A can only win when their roll exceeds $B$'s, so we need to compute $P(\text{A Wins}) = \sum_{b=1}^{5} \sum_{a=b+1}^{6} P(A=a, B=b)$. The sum turns out to be approximately 0.579, so $P(\text{A Wins}) = 57.9\%$. If you wanted this as an exact answer, simply do the arithmetic.
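The same two-step approach (simulate, then compute exactly) translates directly out of R; here is a rough Python equivalent, offered as a sketch rather than the author's code:

```python
import random
from fractions import Fraction

# Monte Carlo version of the simulation: A rolls twice and keeps the max,
# B rolls once; count how often A's max strictly exceeds B's roll.
random.seed(0)
n_sims = 100_000
a_wins = sum(
    max(random.randint(1, 6), random.randint(1, 6)) > random.randint(1, 6)
    for _ in range(n_sims)
)
print(a_wins / n_sims)  # close to 0.579

# Exact version of the double sum: P(A = a) = (2a - 1)/36, P(B = b) = 1/6.
p_exact = sum(
    Fraction(2 * a - 1, 36) * Fraction(1, 6)
    for b in range(1, 6)
    for a in range(b + 1, 7)
)
print(p_exact)  # 125/216
```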
A three dice roll question
Here's a fairly simple way — based on a symmetry argument — to get the solution for dice with $n$ sides, for arbitrarily large $n$, without having to consider up to $n$ different cases. Instead, the number of cases we need to consider is only proportional to the number of dice involved (i.e. three, in your example).

First note that, if we knew (and thus conditioned the problem on) that no two dice were tied for the highest roll, then the answer would clearly be $\frac23$. This is because each of the three dice rolled is equally likely to be the one that rolls the highest number, and player A, with two dice to B's one, is twice as likely as B to own this highest-rolling die. However, to get the correct answer for your problem, we do need to consider the possibility of ties. So let's break up the problem into cases based on how many dice are tied for the highest roll:

- The probability that all three dice are tied for highest is clearly $p_3 = \frac1{n^2}$ (where, again, $n = 6$ is the number of sides on the dice).
- The probability that exactly two of the three dice are tied for highest is a bit harder to calculate, but a moment's thought shows that it must equal the probability that there is a two-way tie (but not a three-way one) divided by two (since half the time the two dice that are tied will roll lower than the third one that isn't). By applying the inclusion-exclusion principle we can see* that the probability of getting an exactly two-way tie is $\frac{3n - 3}{n^2}$, and thus the probability of a two-way tie for the highest roll is half of that, i.e. $p_2 = \frac{3n - 3}{2n^2}$.
- Finally, the probability that there is no tie for the highest roll, i.e. that a single die rolls the unique highest result, is of course $p_1 = 1 - p_2 - p_3 = 1 - \frac{3n - 3}{2n^2} - \frac1{n^2} = \frac{2n^2 - 3n + 1}{2n^2}$.

Now, by the rules of your game:

- If all three dice are tied, player B always wins.
- If two dice are tied for highest, player A wins if they have both of these dice, i.e. with probability $\frac13$.
- If there is a single highest die, player A wins if they have it, i.e. with probability $\frac23$.

Thus, the overall probability of player A winning your game is $$p = \frac13 p_2 + \frac23 p_1 = \frac{3n - 3}{6n^2} + \frac{4n^2 - 6n + 2}{6n^2} = \frac{4n^2 - 3n - 1}{6n^2}.$$ In particular, for $n = 6$, $p = \frac{4\cdot36 - 3\cdot6 - 1}{6\cdot36} = \frac{144 - 18 - 1}{216} = \frac{125}{216} \approx 0.5787.$

*) Specifically, label the dice as $a$, $b$ and $c$, and note that there are three pairs of dice — $(a,b)$, $(a,c)$ and $(b,c)$ — that can be tied. Let $T_{a,b}$, $T_{a,c}$ and $T_{b,c}$ be the events where each of these pairs is tied, and note that $$P(T_{a,b}) = P(T_{a,c}) = P(T_{b,c}) = \tfrac1n,$$ while $$P(T_{a,b} \cap T_{a,c}) = P(T_{a,b} \cap T_{b,c}) = P(T_{a,c} \cap T_{b,c}) = P(T_{a,b} \cap T_{a,c} \cap T_{b,c}) = \tfrac1{n^2},$$ as any two pairs being tied implies that all three are. Thus, by the inclusion-exclusion principle, the probability of getting at least a two-way tie is $$\begin{aligned} P(T_{a,b} \cup T_{a,c} \cup T_{b,c}) &= P(T_{a,b}) + P(T_{a,c}) + P(T_{b,c}) \\ &\qquad - P(T_{a,b} \cap T_{a,c}) - P(T_{a,b} \cap T_{b,c}) - P(T_{a,c} \cap T_{b,c}) \\ &\qquad + P(T_{a,b} \cap T_{a,c} \cap T_{b,c}) \\ &= 3 \tfrac1n - 3 \tfrac1{n^2} + \tfrac1{n^2} = \tfrac{3n-2}{n^2}. \end{aligned}$$ But the number we actually want is the probability of getting an exactly two-way tie, i.e. $$P(T_{a,b} \cup T_{a,c} \cup T_{b,c}) - P(T_{a,b} \cap T_{a,c} \cap T_{b,c}) = \tfrac{3n-3}{n^2}.$$
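Since this answer derives a closed form for general $n$, it is worth checking the formula against brute-force enumeration for several die sizes. A small Python sketch (mine, for verification only):

```python
from fractions import Fraction
from itertools import product

def p_win_formula(n):
    # Closed form from the symmetry/tie-counting argument: (4n^2 - 3n - 1) / (6n^2).
    return Fraction(4 * n * n - 3 * n - 1, 6 * n * n)

def p_win_brute(n):
    # Enumerate all (a1, a2, b) outcomes for n-sided dice; A wins on a strict beat.
    wins = sum(
        max(a1, a2) > b
        for a1, a2, b in product(range(1, n + 1), repeat=3)
    )
    return Fraction(wins, n**3)

for n in (2, 3, 6, 10, 20):
    assert p_win_formula(n) == p_win_brute(n)

print(p_win_formula(6))  # 125/216
```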
To add to the other answers, while this isn't what you were exactly asking for, this is a good place to highlight that this type of problem is the exact type where numerical simulation is a powerful tool. A challenging mathematical problem is reduced to a very simple numerical problem. Here is some python code which simulates this game 1000 times and estimates the probability of A winning as the proportion of games A wins out of these 1000.

    from numpy import random

    def play_game():
        dieA_1 = random.choice([1, 2, 3, 4, 5, 6])
        dieA_2 = random.choice([1, 2, 3, 4, 5, 6])
        A_num = max(dieA_1, dieA_2)
        B_num = random.choice([1, 2, 3, 4, 5, 6])
        A_wins = A_num > B_num
        return A_wins

    num_A_wins = 0
    for iter in range(1000):
        A_wins = play_game()
        num_A_wins += A_wins

    print(f"probability of A winning is {num_A_wins/1000}")

Output:

    probability of A winning is 0.582
Since this is an interview question, simple thinking and an approximate answer is best. Three dice are thrown, the biggest number wins. The probability to win is $1/3$ for each of the die. Player A has two dice, and so wins in $2/3$ of the cases. Done. This is a slight over-estimate since whenever player B throws a 6, player B automatically wins. Other answers give ideas on how to calculate more accurately. But somehow that's not so much fun, is it.
Why are the numbers on a ball in a lotto draw categorical nominal instead of categorical ordinal?
You could color-code the balls without fundamentally changing the game. Instead of 6-12-11, we get red-blue-pink.

You could go with letters without fundamentally changing the game. Instead of 6-12-11, we get Y-Q-X.

You could use animal drawings without fundamentally changing the game. Instead of 6-12-11, we get dog-fish-horse.

The 6-ball isn’t worth half as much as the 12-ball. It doesn’t even represent a lesser value. The number is just on the ball as a link to lottery tickets. It could be different if the number represented some kind of quantity, like rolling dice and advancing a game piece that many spots, but there’s nothing quantitative going on. The numbers on lottery balls just serve as links back to the tickets.

You probably can accept this for something like towns having zip codes or people having phone numbers. It’s the same idea.
The lines between the different types of variables are not as clear cut as we often define them. In many cases, the classification of a variable depends on how we use that variable instead of on the fundamental properties of the variable itself. Age in years and time measured in any fixed unit (days, hours, seconds) are good examples. These variables are fundamentally discrete (age in years falls in a countable set) but we often treat them as continuous in practice (e.g., regressing the probability of some disease on age as a continuous variable).

I'd argue in this case that the numbers on the balls are an ordinal variable, but if the order is not important in the lottery then we treat them as nominal. This makes the difference between numbered balls and coloured balls clearer. You could run the lottery differently so that the order of the balls matters (maybe you win if you match the highest number out of, say, four balls drawn at random). You can't do this with coloured balls (unless you impose an ordering like the wavelength of light).
Is standard deviation totally wrong? How can you calculate std for heights, counts and etc (positive numbers)?
If your numbers can only be positive, then modeling them as a normal distribution may not be desirable depending on your use case, because the normal distribution is supported on all real numbers. Perhaps you would want to model height as an exponential distribution, or maybe a truncated normal distribution? EDIT: After seeing your data, it really looks like it might fit an exponential distribution well! You could estimate the $ \lambda $ parameter by taking, for example, a maximum likelihood approach.
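For the exponential distribution the maximum-likelihood estimate has a simple closed form: $\hat\lambda = 1/\bar{x}$, the reciprocal of the sample mean. A minimal sketch (the data below are placeholder values, not the asker's actual sample):

```python
import numpy as np

# Placeholder positive-valued sample (hypothetical data)
data = np.array([0.05, 0.21, 0.17, 0.48, 0.09, 0.75, 0.33, 0.62, 0.12, 0.28])

# MLE of the exponential rate parameter: lambda_hat = 1 / sample mean
lam_hat = 1.0 / data.mean()
print(f"estimated rate: {lam_hat:.3f}")
```

With the rate in hand, the fitted density $\hat\lambda e^{-\hat\lambda x}$ can be overlaid on the histogram to judge the fit by eye.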
"What is the correct way to apply 68-95-99.7 to my case?" One should only expect that rule of thumb for the coverage to apply exactly only if you are (1) looking at the entire (infinite) population or theoretical probability distribution, and (2) the distribution is exactly normal. If you take a random sample of size 20, even from a genuinely normal distribution, you won't always find that 95% of the data (19 of the 20 items) lies within 2 (or 1.960) standard deviations of the mean. In fact it is neither guaranteed that 19 of the 20 items will lie within 1.960 population standard deviations of the population mean, nor that 19 of the 20 items lie within 1.960 sample standard deviations of the sample mean. If you take a sample of data from a distribution that is not quite normally distributed, then again one would not expect the 68-95-99.7 rule to apply exactly. But it may come reasonably close to doing so, particularly if the sample size is large (the "99.7% coverage" rule-of-thumb may not be especially meaningful with a sample size below 1000) and the distribution is reasonably close to normality. In theory lots of data such as height or weight could not come from a precisely normal distribution or that would imply a small, but non-zero, probability of them being negative. Nevertheless, for data with an approximately symmetrical and unimodal distribution, where middling values are more common and extremely high or low values drop off in probability, the model of a normal distribution may be adequate for practical purposes. Incidentally you may be interested in If my histogram shows a bell-shaped curve, can I say my data is normally distributed? If you want theoretically binding bounds that apply to any distribution, then see Chebyshev's inequality, which states that at most $1/k^2$ of the values can lie more than $k$ standard deviations from the mean. 
This guarantees that at least 75% of data lie within two standard deviations of the mean, and 89% within three standard deviations. But those figures are just the theoretically-guaranteed minimum. For many roughly bell-shaped distributions, you will find that the two-standard deviation coverage figure comes much closer to 95% than to 75%, and so the "rule of thumb" from the normal distribution is still useful. On the other hand, if your data come from a distribution that is nowhere near bell-shaped, you may be able to find an alternative model that describes the data better and has a different coverage rule. (One thing that is nice about the 68-95-99.7 rule is that it applies to any normal distribution, regardless of its parameters for mean or standard deviation. Similarly, Chebyshev's inequality applies regardless of the parameters, or even the distribution, though only gives lower bounds for coverage. But if you apply, for example, a truncated normal or skew normal model, then there isn't a simple equivalent of "68-95-99.7" coverage, because it would depend upon the parameters of the distribution.)
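To illustrate the gap between the Chebyshev guarantee and the normal rule of thumb, one can compare the empirical two-standard-deviation coverage of a large normal sample against the guaranteed 75% (a sketch; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)                 # a large sample from a true normal

m, s = x.mean(), x.std()
coverage = np.mean(np.abs(x - m) <= 2 * s)   # fraction within 2 sample sd

print(f"empirical 2-sd coverage: {coverage:.3f}")   # close to 0.954
print("Chebyshev lower bound:   0.750")
```

The Chebyshev bound holds for this sample, as it must, but the normal rule comes much closer to describing the actual coverage.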
"Can someone help me to understand if i am using this in correct way?"

Oh, that's easy. No, you're not using it correctly.

First off, you're using a rather small data set. Trying to tease out statistical behavior from a set of this size is certainly possible, but the confidence bounds are (ahem) rather large. For small data sets, deviations from the expected distributions are par for the course, and the smaller the set the greater the problem. Remember, "The Law of Averages not only permits the most outrageous coincidences, it requires them."

Worse, the particular data set you're using simply doesn't look much like a normal distribution. Think about it - with a mean of .498 you've got two samples below 0.1, and three more at .748 or above. Then you've got a cluster of 3 points between .17 and .22. Looking at this particular data set and arguing that it must be a normal distribution is a pretty good case of Procrustean argument. Does that look like a bell curve to you?

It's perfectly possible that the larger population does follow a normal, or modified normal, distribution, and a larger sample size would address the issue, but I wouldn't bet on it, particularly without knowing more about the population. I say modified normal, since as Kevin Li has pointed out, technically a normal distribution includes all real numbers. As was also pointed out in comments to his answer, this does not prevent applying such a distribution over a limited range and getting useful results. As the saying goes, "All models are wrong. Some are useful." But this particular data set simply doesn't look like inferring a normal distribution (even over a limited range) is a particularly good idea.

If your 10 data points looked like .275,.325,.375,.425,.475,.525,.575,.625,.675,.725 (mean of 0.500), would you assume a normal distribution?
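The small-sample point can be made concrete: even for genuinely normal data, the fraction of a 10-point sample lying within one sample standard deviation of the sample mean varies a lot from sample to sample (a sketch; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# For 1000 tiny samples (n = 10) from a true normal distribution, record
# the fraction of each sample within 1 sample sd of the sample mean
fracs = np.array([
    (np.abs(x - x.mean()) <= x.std()).mean()
    for x in (rng.normal(size=10) for _ in range(1000))
])

print(f"fraction within 1 sd: min {fracs.min():.1f}, "
      f"max {fracs.max():.1f}, average {fracs.mean():.2f}")
```

The average hovers in the rough vicinity of the textbook 68%, but individual 10-point samples scatter widely around it, which is why the rule of thumb says little about any one small sample.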
In one of the comments you say you used "random data" but you don't say from what distribution. If you are talking about heights of humans, they are roughly normally distributed, but your data are not remotely appropriate for human heights - yours are fractions of a cm! And your data are not remotely normal.

I'm guessing you used a uniform distribution with bounds of 0 and 1. And you generated a very small sample. Let's try with a bigger sample:

    set.seed(1234)  # Sets a seed
    x <- runif(10000, 0, 1)
    sd(x)  # 0.28

so, none of the data is beyond 2 sd from the mean, because that is beyond the bounds of the data. And the portion within 1 sd will be approximately 0.58 (the theoretical value for a uniform distribution is $1/\sqrt{3} \approx 0.577$).
Often, when you have a constraint that your samples must all be positive, it is worth looking at the logarithm of your data to see if your distribution can be approximated by a lognormal distribution.
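A minimal sketch of that check, using synthetic right-skewed data (numpy only; the skewness helper is hand-rolled here rather than a library call):

```python
import numpy as np

def skewness(a):
    """Sample skewness: mean of standardized values cubed."""
    z = (a - a.mean()) / a.std()
    return (z ** 3).mean()

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # strictly positive, skewed

print(f"skewness of x:      {skewness(x):+.2f}")          # strongly positive
print(f"skewness of log(x): {skewness(np.log(x)):+.2f}")  # near 0: log(x) ~ normal
```

If the log-transformed data look symmetric and bell-shaped, the familiar mean/standard-deviation machinery can be applied on the log scale and the results transformed back.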
A standard deviation calculation is relative to the mean. Can you apply standard deviation to numbers which are always positive? Absolutely. If you were to add 1000 to each of the values in your sample set, you would see the same standard deviation value, but you will have provided yourself with more breathing room above zero. $$\displaystyle s={\sqrt {\frac {\sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}}{N-1}}} = {\sqrt {\frac {\sum _{i=1}^{N}((x_{i}+k)-({\overline {x}}+k))^{2}}{N-1}}}$$ However, adding an arbitrary constant to your data is superficial. When using standard deviation for a data set so small, you will need to expect unrefined output. Consider the standard deviation like an auto-focus camera lens: the more time (data) you give it, the clearer the picture will be. If after you track 1000000 data points, your mean and standard deviation remain the same as with 10, then I may start to question the validity of your experiment.
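That identity is easy to verify numerically: adding any constant $k$ to every value shifts the mean by $k$ and leaves the sample standard deviation unchanged (a sketch with placeholder data):

```python
import numpy as np

x = np.array([0.04, 0.17, 0.19, 0.22, 0.46, 0.53, 0.62, 0.75, 0.80, 0.91])
k = 1000.0

s = x.std(ddof=1)               # sample standard deviation (N - 1 denominator)
s_shifted = (x + k).std(ddof=1)  # same data shifted by k

print(s, s_shifted)              # equal up to floating-point rounding
```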
Your histogram shows that the normal distribution is not a good fit. You could try lognormal or something else that is asymmetrical and strictly positive.
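As a rough sketch of what fitting a lognormal looks like in practice (Python, with made-up positive data): the maximum-likelihood fit is simply a normal fit to the logged values.

```python
import math
from statistics import mean, stdev

# Hypothetical positive-valued sample, standing in for the histogram's data.
data = [0.1, 0.2, 0.2, 0.4, 0.5, 0.8, 1.3, 2.1, 3.8, 9.5]

# Lognormal MLE: mu and sigma are the mean and sd of the logged data.
logs = [math.log(v) for v in data]
mu, sigma = mean(logs), stdev(logs)

# Unlike mean +/- k*sd on the raw scale, an interval like
# exp(mu +/- 2*sigma) can never extend below zero.
lo, hi = math.exp(mu - 2 * sigma), math.exp(mu + 2 * sigma)
assert lo > 0
print(mu, sigma, (lo, hi))
```

The interval endpoints stay strictly positive by construction, which avoids the "negative values" absurdity the question runs into.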
Is standard deviation totally wrong? How can you calculate std for heights, counts and etc (positive numbers)?
The main point is that a lot of us are lazy*, and the normal distribution is convenient to work with for us lazy people. It is easy to do calculations using the normal distribution and it has a nice mathematical foundation. As such it is a "model" for how to work on data. This model often works surprisingly well, and sometimes falls flat on its face. It is very obvious that your samples do not indicate a normal distribution in the data. So the solution to your dilemma is to choose a different "model", and work with a different distribution. Weibull distributions may be one direction; there are others. *lazy in not really getting to know the data and selecting better models when necessary.
Is standard deviation totally wrong? How can you calculate std for heights, counts and etc (positive numbers)?
Basically you are using Ratio data as opposed to Interval data. Geographers go through this all the time when calculating the S/D for annual rainfall at a specific location (100+ years of sample points at say L.A. Civic Center) or snowfall (100+ years of snowfall samples at Big Bear Lake). We can only have positive numbers, that's just the way it is.
Is standard deviation totally wrong? How can you calculate std for heights, counts and etc (positive numbers)?
In meteorology, distributions of wind speeds do look a lot like this. By definition wind speeds are also non-negative. So in your case, I would definitely look at the Weibull distribution.
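As an illustration (a hedged sketch; the shape and scale parameters here are made up, not fitted to any real wind data), Weibull draws can be generated by inverse-CDF sampling, and they are non-negative and right-skewed like wind speeds:

```python
import math
import random

random.seed(0)

# Weibull(shape=2, scale=6) via inverse-CDF sampling:
# if U ~ Uniform(0,1), then scale * (-ln(1-U))^(1/shape) is Weibull.
shape, scale = 2.0, 6.0
draws = [scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)
         for _ in range(100_000)]

# Weibull support is non-negative, unlike a normal model.
assert min(draws) >= 0.0

# Sanity check: the theoretical mean is scale * Gamma(1 + 1/shape).
empirical = sum(draws) / len(draws)
theory = scale * math.gamma(1.0 + 1.0 / shape)
assert abs(empirical - theory) < 0.1
print(empirical, theory)
```

In practice one would fit the shape and scale to the observed speeds (e.g. by maximum likelihood) rather than picking them by hand as done here.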
Is standard deviation totally wrong? How can you calculate std for heights, counts and etc (positive numbers)?
You start with "according to the normal distribution" when your data is clearly not normally distributed; that's the first problem. You say "It doesn't matter if it is normal distribution or not," which is absolute nonsense. You can't use statements about normally distributed data if your data is not normally distributed. And you misinterpret the statement "99.7% must be within three standard deviations". 99.7% of your data was indeed within three standard deviations. Even better, it was 100% within two standard deviations. So the statement is true.
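The "within k standard deviations" counts can be checked directly. In fact, Chebyshev's inequality guarantees at least $1 - 1/k^2$ of the data within k standard deviations for any distribution, normal or not. A small Python sketch with a made-up skewed sample:

```python
from statistics import mean, stdev

# A heavily right-skewed sample, similar in spirit to the question's data.
data = [0.5] * 90 + [1.0] * 7 + [8.0] * 3

m, s = mean(data), stdev(data)

def frac_within(k):
    """Fraction of observations within k standard deviations of the mean."""
    return sum(abs(v - m) <= k * s for v in data) / len(data)

# Chebyshev's bound: at least 1 - 1/k^2 lies within k sds, regardless
# of the distribution's shape.
assert frac_within(2) >= 1 - 1 / 2**2
assert frac_within(3) >= 1 - 1 / 3**2
print(frac_within(2), frac_within(3))
```

What fails for non-normal data is only the *normal-specific* 68/95/99.7 rule as an equality, not the weaker guarantees.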
Fast ways in R to get the first row of a data frame grouped by an identifier [closed]
Is your ID column really a factor? If it is in fact numeric, I think you can use the diff function to your advantage. You could also coerce it to numeric with as.numeric().

    dx <- data.frame(
        ID = sort(sample(1:7000, 400000, TRUE))
        , AGE = sample(18:65, 400000, TRUE)
        , FEM = sample(0:1, 400000, TRUE)
    )

    dx[ diff(c(0,dx$ID)) != 0, ]
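For comparison, the same idea — on data sorted by ID, a row starts a new group exactly where its ID differs from the previous row's ID — sketched in plain Python with hypothetical toy data:

```python
# IDs sorted ascending, as in the R example; rows stand in for data-frame rows.
ids = [1, 1, 1, 2, 2, 3, 3, 3, 3]
rows = list(enumerate(ids))  # (row_number, ID) pairs

# Keep a row if it is the first row, or its ID differs from the previous one.
first_rows = [row for i, row in enumerate(rows)
              if i == 0 or ids[i] != ids[i - 1]]

assert [r[1] for r in first_rows] == [1, 2, 3]
print(first_rows)
```

Like the diff trick, this is a single linear pass with no grouping machinery, which is why it is so fast.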
Fast ways in R to get the first row of a data frame grouped by an identifier [closed]
Following up on Steve's reply, there is a much faster way in data.table:

    > # Preamble
    > dx <- data.frame(
    +     ID = sort(sample(1:7000, 400000, TRUE))
    +   , AGE = sample(18:65, 400000, TRUE)
    +   , FEM = sample(0:1, 400000, TRUE)
    + )
    > dxt <- data.table(dx, key='ID')

    > # fast self join
    > system.time(ans2 <- dxt[J(unique(ID)), mult="first"])
       user  system elapsed
      0.048   0.016   0.064

    > # slower using .SD
    > system.time(ans1 <- dxt[, .SD[1], by=ID])
       user  system elapsed
     14.209   0.012  14.281

    > mapply(identical, ans1, ans2)   # ans1 is keyed but ans2 isn't, otherwise identical
      ID  AGE  FEM
    TRUE TRUE TRUE

If you merely need the first row of each group, it's much faster to join to that row directly. Why create the .SD object each time, only to use the first row of it?

Compare the 0.064 of data.table to "Matt Parker's alternative to Chase's solution" (which seemed to be the fastest so far):

    > system.time(ans3 <- dxt[c(TRUE, dxt$ID[-1] != dxt$ID[-length(dxt$ID)]), ])
       user  system elapsed
      0.284   0.028   0.310
    > identical(ans1, ans3)
    [1] TRUE

So ~5 times faster, but it's a tiny table at under 1 million rows. As size increases, so does the difference.
Fast ways in R to get the first row of a data frame grouped by an identifier [closed]
You don't need multiple merge() steps, just aggregate() both variables of interest:

    > aggregate(dx[, -1], by = list(ID = dx$ID), head, 1)
      ID AGE FEM
    1  1  30   1
    2  2  40   0
    3  3  35   1

    > system.time(replicate(1000, aggregate(dx[, -1], by = list(ID = dx$ID),
    +                                       head, 1)))
       user  system elapsed
      2.531   0.007   2.547

    > system.time(replicate(1000, {ag <- data.frame(ID=levels(dx$ID))
    + ag <- merge(ag, aggregate(AGE ~ ID, data=dx, function(x) x[1]), "ID")
    + ag <- merge(ag, aggregate(FEM ~ ID, data=dx, function(x) x[1]), "ID")
    + }))
       user  system elapsed
      9.264   0.009   9.301

Comparison timings:

1) Matt's solution:

    > system.time(replicate(1000, {
    + agg <- by(dx, dx$ID, FUN = function(x) x[1, ])
    + # Which returns a list that you can then convert into a data.frame thusly:
    + do.call(rbind, agg)
    + }))
       user  system elapsed
      3.759   0.007   3.785

2) Zach's reshape2 solution:

    > system.time(replicate(1000, {
    + dx <- melt(dx,id=c('ID','FEM'))
    + dcast(dx,ID+FEM~variable,fun.aggregate=mean)
    + }))
       user  system elapsed
     12.804   0.032  13.019

3) Steve's data.table solution:

    > system.time(replicate(1000, {
    + dxt <- data.table(dx, key='ID')
    + dxt[, .SD[1,], by=ID]
    + }))
       user  system elapsed
      5.484   0.020   5.608

    > dxt <- data.table(dx, key='ID')   ## one time step
    > system.time(replicate(1000, {
    + dxt[, .SD[1,], by=ID]             ## try this one line on own
    + }))
       user  system elapsed
      3.743   0.006   3.784

4) Chase's fast solution using numeric, not factor, ID:

    > dx2 <- within(dx, ID <- as.numeric(ID))
    > system.time(replicate(1000, {
    + dy <- dx[order(dx$ID),]
    + dy[ diff(c(0,dy$ID)) != 0, ]
    + }))
       user  system elapsed
      0.663   0.000   0.663

and 5) Matt Parker's alternative to Chase's solution, for character or factor ID, which is slightly faster than Chase's numeric ID one:

    > system.time(replicate(1000, {
    + dx[c(TRUE, dx$ID[-1] != dx$ID[-length(dx$ID)]), ]
    + }))
       user  system elapsed
      0.513   0.000   0.516
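Outside R, the same first-row-per-group operation on ID-sorted rows can be done in a single pass; a hypothetical Python sketch using itertools.groupby with toy data shaped like dx:

```python
from itertools import groupby
from operator import itemgetter

# Rows as (ID, AGE, FEM) tuples, already sorted by ID -- analogous to dx.
rows = [(1, 30, 1), (1, 25, 0), (2, 40, 0), (3, 35, 1), (3, 22, 0)]

# groupby on sorted data yields one group per ID; next() takes its first row.
firsts = [next(grp) for _, grp in groupby(rows, key=itemgetter(0))]

assert firsts == [(1, 30, 1), (2, 40, 0), (3, 35, 1)]
print(firsts)
```

As with the fastest R solutions above, the work is linear in the number of rows because the data is already sorted on the grouping key.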
Fast ways in R to get the first row of a data frame grouped by an identifier [closed]
You can try to use the data.table package. For your particular case, the upside is that it's (insanely) fast. The first time I was introduced to it, I was working on data.frame objects with hundreds of thousands of rows. "Normal" aggregate or ddply methods were taking ~1-2 mins to complete (this was before Hadley introduced the idata.frame mojo into ddply). Using data.table, the operation was literally done in a matter of seconds.

The downside is that it's so fast because it will resort your data.table (it's just like a data.frame) by "key columns" and use a smart searching strategy to find subsets of your data. This will result in a reordering of your data before you collect stats over it. Given that you will just want the first row of each group -- maybe the reordering will mess up which row is first, which is why it might not be appropriate in your situation.

Anyway, you'll have to judge whether or not data.table is appropriate here, but this is how you would use it with the data you've presented:

    install.packages('data.table')   ## if you don't have it already
    library(data.table)
    dxt <- data.table(dx, key='ID')
    dxt[, .SD[1,], by=ID]
         ID AGE FEM
    [1,]  1  30   1
    [2,]  2  40   0
    [3,]  3  35   1

Update: Matthew Dowle (the main developer of the data.table package) has provided a better/smarter/(extremely) more efficient way to use data.table to solve this problem as one of the answers here ... definitely check that out.
Fast ways in R to get the first row of a data frame grouped by an identifier [closed]
You could try

    agg <- by(dx, dx$ID, FUN = function(x) x[1, ])
    # Which returns a list that you can then convert into a data.frame thusly:
    do.call(rbind, agg)

I have no idea if this will be any faster than plyr, though.
Fast ways in R to get the first row of a data frame grouped by an identifier [closed]
Try reshape2

    library(reshape2)
    dx <- melt(dx,id=c('ID','FEM'))
    dcast(dx,ID+FEM~variable,fun.aggregate=mean)
What if a non-random sample is identical to a random sample?
A particularly biased / non-representative sample is unlikely if you sample randomly.

In an ideal world you'd have a non-random sample which perfectly accurately represents the population, such that the proportion of every demographic is the same in the sample as it is in the population as a whole. This is a pretty hard problem to solve in the real world though (to say the least), as you'd need to understand every demographic and how it affects your results. You might say "white, 24-year-old, college-educated women" is specific enough and you just need to make sure your sample has the right proportion of such people (and similarly for every other similar demographic), but they may be more or less likely to act in a certain way based on where they live, where they studied, where they grew up, their religion and many other factors. So you need to take all of that into account too. That'll be a whole lot of work, and in the process you'll probably answer your original query anyway without ever using the sample you generated. Basically doing that just doesn't make a whole lot of sense.

In the real world a random sample is a "good enough" attempt to obtain an accurate representation of the population. Now it is indeed possible to get a random sample that doesn't reflect what the population as a whole looks like particularly well (i.e. a "biased" sample). But the probability of getting any given sample when sampling randomly decreases significantly as the sample becomes more biased and a less accurate representation of the population as a whole. This applies especially when you have larger samples. This is acceptable since statistics is generally about having high confidence of being correct rather than having absolute certainty.

Think of it this way: if 70% of your population is women and you randomly pick one person, you have a 70% chance of picking a woman. So you would expect roughly 70% of your random sample to be women. The maths might not work out to exactly 70% in all cases, but that's the general idea. The sample proportions should roughly correspond to the proportions of the population as a whole. You should be rather surprised if your sample somehow ends up with 0% women.

There could also be issues depending on how you obtain a random sample. If you want to sample from everyone living in a country, you could, for example, get a random subset of registered voters or people with driver's licences. But then your sample would be heavily biased towards people who are registered to vote or have driver's licences. This may also lead to a partially random sample where you combine differently-sized random samples from different sources such that the end result is more representative of the population as a whole, although I'm not sure whether and how often this is done in practice. Finding a single data source for the entire population would be preferable. But that's a whole other question.
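To put a number on that surprise: if women are 70% of the population, the probability that a simple random sample of size n contains no women at all is $(0.3)^n$, which collapses quickly as n grows.

```python
# P(no women in a random sample of n) when 70% of the population is women.
p_woman = 0.7

for n in (5, 10, 20):
    p_no_women = (1 - p_woman) ** n
    print(n, p_no_women)

# Already at n = 10 the chance is below one in a hundred thousand.
assert (1 - p_woman) ** 10 < 1e-5
```

This is exactly the sense in which badly biased random samples are possible but increasingly improbable with larger samples.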
What if a non-random sample is identical to a random sample?
Play poker with your friend, bet a lot of money, and cheat to give yourself a royal flush (it beats every other hand). “That’s cheating!” “Nah, it’s one of the possible hands. Pay up.” Yes, it’s about the procedure. (Don’t actually do the poker trick, but I think it makes the point.)
What if a non-random sample is identical to a random sample?
The central issue that has not been explicitly addressed is that when sampling is correctly performed (randomness being one criterion), the resulting sample is a faithful representation of the underlying distribution of the population being sampled. This is what allows us to make a meaningful inference about the population from the sample. When a sample is not chosen at random, depending on how it is chosen, any resulting inference is distorted because the sample is no longer necessarily representative of the likelihoods of the outcomes that were observed. It is important to phrase it this way because non-random sampling does not imply that rare or unlikely outcomes are overly represented. You could, for instance, always select the mode of a binomial random variable -- this is clearly not random, and it still violates the notion that the sample represents the population.
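A quick Python sketch of the "always select the mode" example (the population model here is made up for illustration): the cherry-picked sample contains only perfectly typical values, yet it still misrepresents the population because it has zero spread.

```python
import random
from statistics import mean

random.seed(1)

# Population model: Binomial(n=10, p=0.5); its mode (and mean) is 5.
n, p = 10, 0.5
random_sample = [sum(random.random() < p for _ in range(n))
                 for _ in range(10_000)]

# Non-random "sample": always select the mode. Nothing rare or extreme
# is over-represented -- quite the opposite -- yet it is unrepresentative.
mode_sample = [5] * 10_000

assert abs(mean(random_sample) - 5) < 0.1   # random sample: right center
assert len(set(mode_sample)) == 1           # cherry-picked: no variability
print(mean(random_sample))
```

Both samples have roughly the right mean, but only the random one also reproduces the population's variability, which is what "the sample represents the population" requires.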
What if a non-random sample is identical to a random sample?
This illustrates the unidirectionality of conditional probabilities. Given a particular sample and a hypothesis with well-defined probabilities, we can say with confidence what the probability of seeing the sample is, given the hypothesis. But in frequentist statistics, we cannot say what the probability of the hypothesis is, given the sample.

That the sample is taken randomly is usually not explicitly stated as part of the null hypothesis, but it is always implicitly part of it. When we reject the null, we reject all of the null. And remember that the negation of a statement with "and" turns into a statement with "or". So if the null is "the sample is drawn from a distribution that is normal and the mean is $\mu$ and the standard deviation is $\sigma$ and the samples are independent of each other, and ...", then rejecting the null means that we believe that "the sample is not drawn from a distribution that is normal, or the mean is not $\mu$, or the standard deviation is not $\sigma$, or the samples are not independent of each other, or ...". It is only by eliminating the possibility that the sample was cherry-picked that we can definitively conclude that one of the other possibilities holds.

From a Bayesian perspective, this shows the importance of updating not only on your knowledge but also on your meta-knowledge: that is, not only what you know, but how you know it. Much of the controversy surrounding the Monty Hall problem comes from the ambiguous nature of that meta-knowledge. If the host always randomly picks one of the two unchosen doors and shows what's behind it, then switching doesn't help your odds. But if the host always picks a door with a goat and opens it, then switching does help your odds.

Another puzzle is: "Suppose you know a particular woman has two children, and you know that one of her children is a boy. What's the probability that she has two boys?" The answer depends on how you know that one of her children is a boy. If you asked whether her older child is a boy, and she said yes, then the probability is 1/2. But if you asked her whether any of her children are boys, and she said yes, then the probability is 1/3.
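The two-children puzzle can be checked by exact enumeration rather than simulation. A small Python sketch (the family space and the two conditioning functions are purely illustrative):

```python
from fractions import Fraction

# The four equally likely two-child families, as (older, younger).
families = [("B", "B"), ("B", "G"), ("G", "B"), ("G", "G")]

def p_two_boys(condition):
    """Exact P(two boys | condition) over the uniform family space."""
    kept = [f for f in families if condition(f)]
    hits = [f for f in kept if f == ("B", "B")]
    return Fraction(len(hits), len(kept))

# "Is your older child a boy?" conditions on the first coordinate only.
p_older = p_two_boys(lambda f: f[0] == "B")
# "Is at least one of your children a boy?" conditions on either coordinate.
p_any = p_two_boys(lambda f: "B" in f)

print(p_older)  # 1/2
print(p_any)    # 1/3
```

Same data-generating process, different meta-knowledge about how the answer was obtained, different posterior.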
What if a non-random sample is identical to a random sample?
"Sometimes, in political polls, pollsters take non-random samples from a given population." This is a bit ambiguous. Very often samples are not completely randomised and there are some selection biases. But still, the results from this non-random selection might be in some way random. The question is whether the selection effect and the related bias are negligible. A poll among your close friends is not a good representation. Neither is a poll on some website. However, a polling organisation that selects a representative mixture of the population will probably get close to the true answer. Whether the selection by the polling agency is random or not doesn't really matter.

Urn example

Say there are 100 urns labeled $i,j$ with $1\leq i\leq25$ and $1\leq j \leq 4$. The urns contain blue and red balls in fractions that are determined by a random process. The random process likely depends on $j$ but not so much on $i$. We want to know the fraction of red and blue balls in the total of all urns. Say that we can only sample twelve of those urns due to limitations of resources. We can randomize our samples in different ways: we could make a random selection out of the 100 urns, but we could also decide to fix our pick (non-randomly) to 3 urns out of each of the 4 $j$ categories. We could randomly select 3 $i$ out of each $j$, but we could also select some specific $i$ (because it might be more convenient).

All these non-random choices introduce potential bias. But that bias might be negligible if the intentional choices have only a small effect on it. Also note that in the end the sampling process still gives a random variable (only a biased one). We might have selected some urn labels $i$ non-randomly, but how the balls got inside the urns is still a random process, a random value. The issue with non-randomised sampling methods is not that the outcome variable is not random, but that it might be biased. E.g. that poll among your friends is still a random variable.
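To make the urn example concrete, here is a deterministic Python sketch with made-up numbers: the red-ball fraction depends strongly on $j$ and only weakly on $i$, so non-randomly fixing $i$ while covering every $j$ barely biases the estimate, whereas sampling only one $j$ category wrecks it.

```python
# Deterministic stand-in for the urn example: 100 urns labeled (i, j),
# red-ball fraction driven strongly by j and only weakly by i (made-up numbers).
base = {1: 0.2, 2: 0.4, 3: 0.6, 4: 0.8}
urns = {(i, j): base[j] + 0.001 * (i - 13)
        for i in range(1, 26) for j in range(1, 5)}

pop_mean = sum(urns.values()) / len(urns)

# Non-random but balanced: fix i in {1, 2, 3}, yet cover every j category.
balanced = [urns[(i, j)] for i in (1, 2, 3) for j in range(1, 5)]
balanced_mean = sum(balanced) / len(balanced)

# Non-random and unbalanced: all 12 urns from the j = 4 category only.
skewed = [urns[(i, 4)] for i in range(1, 13)]
skewed_mean = sum(skewed) / len(skewed)

print(round(pop_mean, 3))       # 0.5
print(round(balanced_mean, 3))  # close to 0.5 despite the non-random pick of i
print(round(skewed_mean, 3))    # far from 0.5: the selection effect dominates
```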
A robust (non-parametric) measure like Coefficient of Variation -- IQR/median, or alternative?
The question implies that the standard deviation (SD) is somehow normalized so it can be used to compare the variability of two different populations. Not so. As Peter and John said, this normalization is done when calculating the coefficient of variation (CV), which equals SD/Mean. The SD is in the same units as the original data. In contrast, the CV is a unitless ratio. Your choice 1 (IQR/Median) is analogous to the CV. Like the CV, it would only make sense when the data are ratio data. This means that zero is really zero: a weight of zero is no weight; a length of zero is no length. As a counter-example, it would not make sense for temperature in C or F, as zero degrees temperature (C or F) does not mean there is no temperature. Simply switching between the C and F scales would give you a different value for the CV or for the ratio IQR/Median, which makes both those ratios meaningless. I agree with Peter and John that your second idea (Range/IQR) would not be very robust to outliers, so it probably wouldn't be useful.
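The temperature argument is easy to demonstrate numerically. A quick Python sketch (the readings and weights are made up): the same data in C and F give different CVs, while a ratio-scale unit change (kg to lb) leaves the CV untouched.

```python
import statistics as st

temps_c = [10.0, 15.0, 20.0, 25.0, 30.0]       # interval scale: zero is arbitrary
temps_f = [c * 9 / 5 + 32 for c in temps_c]

def cv(xs):
    """Coefficient of variation: population SD over mean."""
    return st.pstdev(xs) / st.mean(xs)

# Same data, different (equally valid) units -> different CVs,
# which is why the CV is meaningless for interval-scale data.
print(round(cv(temps_c), 3))  # 0.354
print(round(cv(temps_f), 3))  # 0.187

weights_kg = [50.0, 60.0, 70.0]                # ratio scale: zero means none
weights_lb = [w * 2.20462 for w in weights_kg]
# A ratio-scale unit change rescales SD and mean together -> CV is invariant.
print(abs(cv(weights_kg) - cv(weights_lb)) < 1e-9)  # True
```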
A robust (non-parametric) measure like Coefficient of Variation -- IQR/median, or alternative?
It's important to realize the minimum and maximum are often not very good statistics to use (i.e., they can fluctuate greatly from sample to sample, and don't follow a normal distribution as, say, the mean might due to the Central Limit Theorem). As a result, the range is rarely a good choice for anything other than to state the range of this exact sample. For a simple, nonparametric statistic to represent variability, the inter-quartile range is much better. However, while I see the analogy between IQR/median and the coefficient of variation, I don't think this is likely to be the best option. You may want to look into the median absolute deviation from the median (MADM). That is: $$ \mathrm{MADM} = \operatorname{median}(|x_i-\operatorname{median}(\mathbf x)|) $$ I suspect a better nonparametric analogy to the coefficient of variation would be MADM/median, rather than IQR/median.
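For concreteness, a small Python implementation of the MADM and the MADM/median ratio (the sample values are made up):

```python
import statistics as st

def madm(xs):
    """Median absolute deviation from the median."""
    m = st.median(xs)
    return st.median(abs(x - m) for x in xs)

data = [1, 2, 2, 3, 3, 3, 4, 9]          # a small made-up right-skewed sample
print(madm(data))                         # 1.0
print(madm(data) / st.median(data))       # the MADM/median analogue of the CV

# Robustness check: make the largest point absurd; the MADM doesn't budge.
print(madm([1, 2, 2, 3, 3, 3, 4, 90]))   # still 1.0
```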
A robust (non-parametric) measure like Coefficient of Variation -- IQR/median, or alternative?
"Choice 1" is what you want if you are using non-parametrics for the common purpose of reducing the effect of outliers. Even if you're using it because of skew, skewed data commonly have extreme values in the tail that might be outliers, so the same point applies. Your "Choice 2" could be dramatically affected by outliers or any extreme values, while the components of your first equation are relatively robust against them. [This will be a little dependent upon what kind of IQR you select (see the R help on quantile).]
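A quick Python illustration of why Range/IQR is fragile (using the default "exclusive" quantile method of Python's statistics module): one wild value blows up the range but leaves the IQR alone.

```python
import statistics as st

clean = list(range(1, 101))        # 1..100, an even spread
dirty = clean[:-1] + [10_000]      # replace the maximum with one wild outlier

def iqr(xs):
    q1, _, q3 = st.quantiles(xs, n=4)   # default method="exclusive"
    return q3 - q1

def rng(xs):
    return max(xs) - min(xs)

print(rng(clean), rng(dirty))  # 99 vs 9999: the range explodes
print(iqr(clean), iqr(dirty))  # 50.5 vs 50.5: the IQR is unchanged
```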
A robust (non-parametric) measure like Coefficient of Variation -- IQR/median, or alternative?
I prefer not to compute measures like CV because I almost always have an arbitrary origin for the random variable. Concerning the choice of a robust dispersion measure it is difficult to beat Gini's mean difference, which is the mean of all possible absolute values of differences between two observations. For efficient computation see for example the R rms package GiniMd function. Under normality, Gini's mean difference is 0.98 as efficient as the SD for estimating dispersion.
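If you don't have R's GiniMd at hand, Gini's mean difference is easy to compute directly; a small Python sketch with the naive pairwise form and an equivalent sorted form (made-up sample values):

```python
from itertools import combinations

def gmd_naive(xs):
    """Mean of |x_i - x_j| over all unordered pairs: O(n^2)."""
    pairs = list(combinations(xs, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def gmd_fast(xs):
    """Same quantity via a sort: O(n log n)."""
    s, n = sorted(xs), len(xs)
    total = sum(x * (2 * k - n + 1) for k, x in enumerate(s))
    return 2 * total / (n * (n - 1))

data = [2, 4, 4, 4, 5, 5, 7, 9]   # a small made-up sample
print(gmd_naive(data))
print(gmd_fast(data))              # agrees with the naive version
```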
A robust (non-parametric) measure like Coefficient of Variation -- IQR/median, or alternative?
This paper presents two good robust alternatives for the coefficient of variation. One is the interquartile range divided by the median, that is: IQR/median = (Q3-Q1)/median. The other is the median absolute deviation divided by the median, that is: MAD/median. They compare the two and conclude that, generally speaking, the second is a little less variable and probably better for most applications.
A robust (non-parametric) measure like Coefficient of Variation -- IQR/median, or alternative?
Like @John, I have never heard of that definition of the coefficient of variation. I wouldn't call it that if I used it; it will confuse people. "Which is most useful?" will depend on what you want to use it for. Certainly choice 1 is more robust to outliers, if you are sure that is what you want. But what is the purpose of comparing the two distributions? What are you trying to do? One alternative is to standardize both measures and then look at summaries. Another is a QQ plot. There are many others as well.
Random Forest and Decision Tree Algorithm
No information is passed between trees. In a random forest, all of the trees are identically distributed, because trees are grown using the same randomization strategy for all trees. First, take a bootstrap sample of the data, and then grow the tree using splits from a randomly-chosen subset of features. This happens for each tree individually without attention to any other trees in the ensemble. However, the trees are correlated purely by virtue of each tree being trained on a sample from a common pool of training data; multiple samples from the same data set will tend to be similar, so the trees will encode some of that similarity. You might find it helpful to read an introduction to random forests from a high-quality text. One is "Random Forests" by Leo Breiman. There's also a chapter in Elements of Statistical Learning by Hastie et al. It's possible that you've confused random forests with boosting methods such as AdaBoost or gradient-boosted trees. Boosting methods are not the same, because they use information about misfit from previous boosting rounds to inform the next boosting round. See: Is random forest a boosting algorithm?
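The "correlated purely by virtue of a common pool" point can be shown directly: two independently drawn bootstrap samples of the same data still share a large fraction of their distinct rows. A small Python sketch (the 100-row pool is a made-up stand-in for a training set):

```python
import random

random.seed(0)
pool = list(range(100))   # stand-in for a training set of 100 rows

def bootstrap(xs):
    """Each tree trains on len(xs) rows drawn with replacement."""
    return [random.choice(xs) for _ in xs]

# Two independently drawn bootstrap samples still share many distinct rows,
# which is the only reason trees in a forest end up correlated.
s1, s2 = set(bootstrap(pool)), set(bootstrap(pool))
overlap = len(s1 & s2) / len(pool)
print(round(overlap, 2))   # a substantial shared fraction (about 0.4 in expectation)
```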
Random Forest and Decision Tree Algorithm
A random forest is a collection of decision trees which are trained independently of one another, so there is no notion of sequentially dependent training (which is the case in boosting algorithms). As a result, as mentioned in another answer, it is possible to train the trees in parallel.

You might like to know where the "random" in random forest comes from: randomness is injected into the process of learning the trees in two ways. First is the random selection of data points used for training each of the trees, and second is the random selection of features used in building each tree. As a single decision tree usually tends to overfit the data, injecting randomness in this way results in a bunch of trees, each of which has good accuracy (and possibly overfits) on a different subset of the available training data. Therefore, when we take the average of the predictions made by all the trees, we observe a reduction in overfitting (compared to training one single decision tree on all the available data).

To better understand this, here is a rough sketch of the training process, assuming all the data points are stored in a set denoted by $M$ and the number of trees in the forest is $N$:

1. $i = 1$
2. Take a bootstrap sample of $M$ (i.e. a sample drawn with replacement, of the same size as $M$), denoted by $S_i$.
3. Train the $i$-th tree, denoted $T_i$, using $S_i$ as input data. The training process is the same as for a single decision tree, except that at each node only a random selection of features is considered for the split at that node.
4. $i = i + 1$. If $i \leq N$, go to step 2; otherwise all the trees have been trained, so random forest training is finished.

Note that I described the algorithm as sequential, but since the trees do not depend on each other, you can also train them in parallel.

Now for the prediction step, first make a prediction with every tree (i.e. $T_1$, $T_2$, ..., $T_N$) in the forest and then:

- If it is used for a regression task, take the average of the predictions as the final prediction of the random forest.
- If it is used for a classification task, use a soft-voting strategy: take the average of the probabilities predicted by the trees for each class, then declare the class with the highest average probability as the final prediction of the random forest.

Further, it is worth mentioning that it is possible to train the trees in a sequentially dependent manner, and that's exactly what the gradient-boosted trees algorithm does; it is a totally different method from random forests.
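The steps above can be sketched in a deliberately tiny Python toy: the "trees" are one-node stumps, the random feature subset has size one, and prediction uses a hard majority vote instead of soft voting, purely to keep the code short. The dataset and all constants are made up.

```python
import random

random.seed(1)

# Toy problem (made up): two features in [0, 1), label 1 iff x0 + x1 > 1.
X = [(random.random(), random.random()) for _ in range(300)]
y = [int(a + b > 1) for a, b in X]
N = 25  # number of "trees"

def majority(labels):
    return int(sum(labels) * 2 >= len(labels)) if labels else 0

def train_stump(Xs, ys):
    """A one-node 'tree': pick one feature at random (the degenerate case of
    a random feature subset) and split it at its sample mean."""
    f = random.randrange(2)
    t = sum(x[f] for x in Xs) / len(Xs)
    left = [yy for x, yy in zip(Xs, ys) if x[f] <= t]
    right = [yy for x, yy in zip(Xs, ys) if x[f] > t]
    return f, t, majority(left), majority(right)

forest = []
for _ in range(N):
    idx = [random.randrange(len(X)) for _ in X]       # bootstrap sample S_i
    forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))

def predict(x):
    """Hard majority vote over the per-tree predictions."""
    return majority([(r if x[f] > t else l) for f, t, l, r in forest])

acc = sum(predict(x) == yy for x, yy in zip(X, y)) / len(X)
print(acc)  # comfortably above 0.5 on this toy problem
```

A real implementation grows full trees and uses soft voting as described above; this sketch only mirrors the bootstrap-plus-random-feature structure of the algorithm.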
Random Forest and Decision Tree Algorithm
Random forest is a bagging algorithm rather than a boosting algorithm. Random forest constructs each tree independently using a random sample of the data, so a parallel implementation is possible. You might like to check out gradient boosting, where trees are built sequentially and each new tree tries to correct the mistakes made previously.
Random Forest and Decision Tree Algorithm
So how does it work? A random forest is a collection of decision trees. The trees are constructed independently: each tree is trained on a subset of the features and on a sample of the data chosen with replacement. When predicting, say for classification, the input is given to each tree in the forest and each tree "votes" on the classification; the label with the most votes wins. Why use a random forest over a single decision tree? The bias/variance trade-off: random forests are built from much simpler trees than a single decision tree. Generally, random forests provide a big reduction of the error due to variance at the cost of a small increase in the error due to bias.
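The "votes" step for classification can be sketched in a few lines of standard-library Python (the individual tree predictions here are made up for illustration):

```python
from collections import Counter

# Hypothetical class labels predicted by the individual trees
# of a forest for one input sample.
tree_votes = ["spam", "ham", "spam", "spam", "ham"]

# Majority (hard) voting: the label with the most votes wins.
prediction, n_votes = Counter(tree_votes).most_common(1)[0]

print(prediction, n_votes)  # spam 3
```

This is "hard" voting on labels; the soft-voting variant averages predicted class probabilities instead.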
Random Forest and Decision Tree Algorithm
Yes, as the authors above said, the random forest algorithm is a bagging, not a boosting, algorithm. Bagging can reduce the variance of the classifier, because the base algorithms are fitted on different samples and their errors are mutually compensated in the voting. Bagging refers to averaging slightly different versions of the same model as a means to improve predictive power. To apply bagging, we simply construct B regression trees using B bootstrapped training sets and average the resulting predictions. A common and quite successful application of bagging is the random forest, but when building the decision trees in a random forest, each time a split in a tree is considered, a random sample of m predictors is chosen as split candidates from the full set of p predictors, and the split is allowed to use only one of those m predictors.
Random Forest and Decision Tree Algorithm
To be clear on what is independent and what is dependent: random forest builds each tree using the bootstrap method, by drawing observations INDEPENDENTLY. The trees in the forest are in fact DEPENDENT (correlated), since they are not built from independent data; a random subset of the features is used at each split to reduce the correlation between different trees.
Random Forest and Decision Tree Algorithm
Random forest is a bagging algorithm. Here, we train a number (an ensemble) of decision trees on bootstrap samples of the training set. Bootstrap sampling means drawing random samples from the training set with replacement. In a random forest all the trees are built independently; only the training sample of each tree is different. Since there is no flow of information between the trees, all the trees can be built in parallel.
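Bootstrap sampling as described above is essentially a one-liner; here is a minimal standard-library sketch (the toy training set is made up):

```python
import random

random.seed(0)  # for reproducibility

# A toy training set of 10 observations.
training_set = list(range(10))

# A bootstrap sample: same size as the original, drawn *with*
# replacement, so duplicates are expected and some observations
# are typically left out (the "out-of-bag" observations).
bootstrap_sample = random.choices(training_set, k=len(training_set))

assert len(bootstrap_sample) == len(training_set)
assert set(bootstrap_sample) <= set(training_set)
```

Each tree in the forest would be trained on its own independently drawn sample like this, which is why the trees can be built in parallel.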
How many 2-letter words can you get from aabcccddef
You have 6 different letters: a, b, c, d, e, f, out of which you can generate 6 x 5 = 30 words with two different letters. In addition, you can generate the 3 words aa, cc, dd with the same letter twice. So the total number of words is 30 + 3 = 33.
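This count is easy to verify by brute force in Python: enumerate all ordered pairs of distinct letter *positions* and count the distinct two-letter strings they form:

```python
from itertools import permutations

letters = "aabcccddef"

# permutations(letters, 2) yields ordered pairs of distinct
# positions, so "aa", "cc", "dd" are possible (those letters
# occur at least twice) while "bb", "ee", "ff" are not.
words = {a + b for a, b in permutations(letters, 2)}

print(len(words))  # 33
```

The set comprehension deduplicates the pairs, which is exactly the "distinct words" requirement.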
How many 2-letter words can you get from aabcccddef
An alternative to Zahava's method: there are $6^2=36$ ways of pairing two of the letters a-f. However, there aren't 2 b, e or f characters, so "bb", "ee" and "ff" aren't possible, making the number of words $36-3=33$. The way you've tried to approach the problem seems to ignore the fact that there aren't 10 distinct letters. If you had 10 distinct letters then your answer would be correct.
How many 2-letter words can you get from aabcccddef
If you can't reason it out in a "clever" way, it is often worth trying brute force. Imagine trying to write down an alphabetically ordered list of all the words you can make. How many can start with "A"? Well "A" can be followed by A, B, C, D, E or F, so that's six ways. How many can start with "B"? That can be followed by A, C, D, E or F, which is only five ways, since there isn't a second "B". How many can start with "C"? Since "C" appears three times in your list, it can be followed by itself, or by any of the other five letters, so just as with "A" there are six ways. Note that we don't get any "extra" ways just because "C" appears more times than "A"; anything beyond a second appearance is redundant. Hopefully it is now clear that each letter that appears only once in your list can appear at the start of five words, and letters that appear twice or more can appear at the start of six words. The letters that appear only once are "B", "E" and "F", each of which can be at the start of five words, so that makes 5 + 5 + 5 = 15 words. The letters that appear twice or more are "A", "C" and "D", each of which can be at the start of six words, so that makes 6 + 6 + 6 = 18 words. In total there are 15 + 18 = 33 words. This is more long-winded than the other methods, but by trying to think about the answer in this systematic sort of way you may have been able to "spot" one of the faster methods. Note that if this had been phrased as a probability question, your first inclination may have been to draw out a tree diagram. It would have started with six branches for the first letter, but for the second letter there would have been six branches coming out from "A", "C" and "D" (because they can be followed by any of the six letters) but only five branches coming out from "B", "E" and "F" (because they cannot be followed by themselves). This branching pattern is effectively the same as in my answer, but you may prefer to think of it more visually in a tree.
How many 2-letter words can you get from aabcccddef
A mathematical approach

From a mathematical point of view, the solution is the set of elements of the Cartesian product of the list with itself, once the diagonal has been removed. You can solve this problem using this algorithm:

1. calculate the Cartesian product of your list with itself
2. remove the diagonal
3. create a set from the array

A set is a well-defined collection of distinct objects, hence objects are not repeated.

Translating it into Python:

from itertools import product
import numpy as np

letters = list("aabcccddef")
cartesianproduct = np.array(["".join(i) for i in product(letters, letters)]).reshape(10, 10)
cartesianproduct

Out:
array([['aa', 'aa', 'ab', 'ac', 'ac', 'ac', 'ad', 'ad', 'ae', 'af'],
       ['aa', 'aa', 'ab', 'ac', 'ac', 'ac', 'ad', 'ad', 'ae', 'af'],
       ['ba', 'ba', 'bb', 'bc', 'bc', 'bc', 'bd', 'bd', 'be', 'bf'],
       ['ca', 'ca', 'cb', 'cc', 'cc', 'cc', 'cd', 'cd', 'ce', 'cf'],
       ['ca', 'ca', 'cb', 'cc', 'cc', 'cc', 'cd', 'cd', 'ce', 'cf'],
       ['ca', 'ca', 'cb', 'cc', 'cc', 'cc', 'cd', 'cd', 'ce', 'cf'],
       ['da', 'da', 'db', 'dc', 'dc', 'dc', 'dd', 'dd', 'de', 'df'],
       ['da', 'da', 'db', 'dc', 'dc', 'dc', 'dd', 'dd', 'de', 'df'],
       ['ea', 'ea', 'eb', 'ec', 'ec', 'ec', 'ed', 'ed', 'ee', 'ef'],
       ['fa', 'fa', 'fb', 'fc', 'fc', 'fc', 'fd', 'fd', 'fe', 'ff']], dtype='|S2')

We remove the diagonal (this deletes one same-position pair per row, which is exactly what rules out "bb", "ee" and "ff" while keeping "aa", "cc" and "dd"):

diagremv = np.array([np.delete(arr, index) for index, arr in enumerate(cartesianproduct)])
diagremv

array([['aa', 'ab', 'ac', 'ac', 'ac', 'ad', 'ad', 'ae', 'af'],
       ['aa', 'ab', 'ac', 'ac', 'ac', 'ad', 'ad', 'ae', 'af'],
       ['ba', 'ba', 'bc', 'bc', 'bc', 'bd', 'bd', 'be', 'bf'],
       ['ca', 'ca', 'cb', 'cc', 'cc', 'cd', 'cd', 'ce', 'cf'],
       ['ca', 'ca', 'cb', 'cc', 'cc', 'cd', 'cd', 'ce', 'cf'],
       ['ca', 'ca', 'cb', 'cc', 'cc', 'cd', 'cd', 'ce', 'cf'],
       ['da', 'da', 'db', 'dc', 'dc', 'dc', 'dd', 'de', 'df'],
       ['da', 'da', 'db', 'dc', 'dc', 'dc', 'dd', 'de', 'df'],
       ['ea', 'ea', 'eb', 'ec', 'ec', 'ec', 'ed', 'ed', 'ef'],
       ['fa', 'fa', 'fb', 'fc', 'fc', 'fc', 'fd', 'fd', 'fe']], dtype='|S2')

We compute the length of the set of elements:

len(set(diagremv.flatten()))
Out: 33
How many 2-letter words can you get from aabcccddef
I think the reason some think the question unclear is because it uses the term "2-letter words". Given the way everyone is approaching a solution, they're all interpreting "2-letter words" to mean something like "letter pairs". As an avid Scrabble player, I immediately took the question to mean, "How many legitimate 2-letter words can be made from these letters?" And that answer is -- 12! At least, according to the latest edition of the Official Scrabble Players Dictionary (OSPD5). The words are aa, ab, ad, ae, ba, be, da, de, ed, ef, fa, and fe. (Please bear in mind that the fact that you've never heard of many of these words does not negate their validity!) ;o) Just my "2 sense".
How many 2-letter words can you get from aabcccddef
Yet another way to count without brute force: If the first letter is a, c, or d there are 6 distinct remaining choices for the second letter. But if the first letter is b, e, or f there are only 5 distinct remaining choices for the second letter. So there are $3\cdot6 +3\cdot5 = 33$ distinct two letter words.
How many 2-letter words can you get from aabcccddef
There is a problem in the way you ask your question: what actions are actually allowed on the line "aabcccddef" to form a 2-letter word? Can we rearrange the letters, or only cross out the unnecessary ones? I've found two possible answers, depending on these conditions: 1) If we can rearrange the letters in any way, the answer is 33, as mentioned before: 30 pairs of different letters (6 * 5) and 3 pairs of identical letters. 2) If we can't switch the letters' places and can only cross out, we get a much smaller answer. Let's count from start to end: starting with "a" we have 6 letters that can come second; starting with "b" it's only 4; "c" also has 4, "d" has 3 and "e" has 1. That's 18 in total.
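The second interpretation (keep the original order, only cross letters out) is also easy to verify by brute force: itertools.combinations picks positions i < j, so each candidate word preserves the original left-to-right order:

```python
from itertools import combinations

letters = "aabcccddef"

# Ordered subsequences of length 2: positions i < j, so the
# two letters keep their original relative order in the string.
subseq_words = {a + b for a, b in combinations(letters, 2)}

print(len(subseq_words))  # 18
```

Note the contrast with permutations (which allows reordering and gives 33): the only difference between the two interpretations is whether position pairs may be reversed.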
How many 2-letter words can you get from aabcccddef
My answer to the question "How many 2-letter words can you get from aabcccddef": 1. aa; 2. ab; 3. ad; 4. ae; 5. ad; 6. ba; 7. be; 8. de; 9. fa; 10. fe. The point is that the question reads "words", not combinations of pairs. Using words, a letter would have to appear twice to use it in a word more than once; for example, there are two of the letter 'a' and two of the letter 'd', therefore it is possible to write 'ad' as a word twice.
What does it mean for a linear regression to be statistically significant but has very low r squared?
It means that you can explain only a small portion of the variance in the data. For instance, you can establish that a college degree impacts salaries, but at the same time it's just a small factor: there are many other factors that impact your salary, and the contribution of the college degree is very small, but detectable. In practical terms it could mean that on average a college degree increases the salary by \$500 per year, while the standard deviation of people's salaries is \$10K. So many college-educated people have lower salaries than non-educated ones, and the value of your model for prediction is low.
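The salary example can be made concrete with a quick simulation. Only the \$500 effect and \$10K standard deviation come from the answer above; the base salary, group sizes and seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50_000                  # arbitrary sample size per group
sd = 10_000                 # salary std. dev. from the example
no_degree = rng.normal(50_000, sd, n)       # hypothetical base salary
degree = rng.normal(50_000 + 500, sd, n)    # degree adds $500 on average

# The effect is clearly detectable in the group means ...
mean_gap = degree.mean() - no_degree.mean()

# ... yet nearly half of the degree holders still earn less than a
# randomly paired person without a degree, because the individual
# variation dwarfs the average effect.
share_lower = (degree < no_degree).mean()

print(f"estimated gap ~ ${mean_gap:,.0f}, "
      f"P(degree earns less) ~ {share_lower:.2f}")
```

With these numbers the detectable average gap coexists with almost complete overlap between the two salary distributions, which is exactly the "significant but low R squared" situation.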
What does it mean for a linear regression to be statistically significant but has very low r squared?
It means "the irreducible error is high", i.e., the best we can do (with a linear model) is limited. For example, take the following data set:

data = rbind(
  cbind(1, 1:400),
  cbind(2, 200:400),
  cbind(3, 300:400))
plot(data)

Note the trick in this data set: given one $x$ value, there are so many different $y$ values that we cannot make a prediction that satisfies all of them. At the same time, there is a "strong" linear correlation between $x$ and $y$. If we fit a linear model, we get significant coefficients but a low R squared.

fit = lm(data[,2] ~ data[,1])
summary(fit)
abline(fit)

Call:
lm(formula = data[, 2] ~ data[, 1])

Residuals:
     Min       1Q   Median       3Q      Max
-203.331  -59.647   -1.252   68.103  195.669

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  123.910      8.428   14.70   <2e-16 ***
data[, 1]     80.421      4.858   16.56   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 93.9 on 700 degrees of freedom
Multiple R-squared:  0.2814,    Adjusted R-squared:  0.2804
F-statistic: 274.1 on 1 and 700 DF,  p-value: < 2.2e-16
What does it mean for a linear regression to be statistically significant but has very low r squared?
Put simply (oversimplifying a bit): to show that something is significant, you need a strong effect and/or a lot of data. You may get a statistically significant linear regression even in the case of a small effect (small $R^2$) if you have enough data. This is not restricted to linear regression.
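This is easy to demonstrate by simulation; in the sketch below (effect size, noise level, sample size and seed are all arbitrary choices), a tiny true effect fitted on a large sample yields a highly significant slope together with a near-zero R squared:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)

n = 100_000          # lots of data ...
true_slope = 0.02    # ... but a tiny effect

x = rng.normal(size=n)
y = true_slope * x + rng.normal(size=n)  # the noise dwarfs the effect

result = linregress(x, y)

# Highly significant, yet the model explains almost nothing:
print(f"p-value = {result.pvalue:.2e}, R^2 = {result.rvalue**2:.5f}")
assert result.pvalue < 0.001
assert result.rvalue**2 < 0.01
```

Shrinking n toward a few hundred makes the same slope non-significant, which is the sense in which significance trades off effect size against sample size.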
What does it mean for a linear regression to be statistically significant but has very low r squared?
What does it mean for a linear regression to be statistically significant but have a very low R squared? It means that there is a linear relationship between the independent and dependent variables, but that this relationship might not be worth talking about. The meaningfulness of the relationship is very much contingent upon what you are examining, but generally you can take it to mean that statistical significance should not be confused with relevance. With a large enough sample size, even the most trivial of relationships can be found to be statistically significant.
What does it mean for a linear regression to be statistically significant but have a very low R squared?
Another way of phrasing this is that it means you can confidently predict a change at the population level but not at the individual level, i.e., there is high variance in individual data, but when a large enough sample is used, an underlying effect can be seen overall. It is one reason why some government health advice is unhelpful to the individual. Governments sometimes feel the need to act because they can see that more of some activity leads to more deaths overall in the population. They produce advice or a policy that 'saves' these lives. However, because of the high variance in individual responses, an individual may be very unlikely to personally see any benefit (or, worse, because of specific genetic conditions, their own health would actually have improved from obeying the opposite advice, but this is hidden in the population aggregation). If the individual derives benefit (e.g. pleasure) from the 'unhealthy' activity, following the advice may mean they forgo this definite pleasure throughout their lifetime, yet it does not actually change whether they personally would or would not have suffered from the condition.
What are the "hot algorithms" for machine learning?
Deep learning has received a lot of attention since 2006. It is basically an approach to training deep neural networks, and it is leading to really impressive results on very hard datasets (like document clustering or object recognition). Some people are talking about a second neural network renaissance (e.g., in this Google talk by Schmidhuber). If you want to be impressed, you should look at this Science paper: Reducing the Dimensionality of Data with Neural Networks, Hinton & Salakhutdinov. (There is so much work going on right now in this area that there are only two upcoming books I know of that will treat it: Large Scale Machine Learning, Langford et al., and Machine Learning: A Probabilistic Perspective by Kevin Murphy.) If you want to know more, check out what the main deep learning groups are doing: Stanford, Montreal and, most importantly, Toronto #1 and Toronto #2.
What are the "hot algorithms" for machine learning?
Most of the answers given so far refer to "Supervised Learning" (i.e. where you have labels for a portion of your dataset, that you can use to train algorithms). The question specifically mentioned clustering, which is an "Unsupervised" approach (i.e. no labels are known beforehand). In this scenario I'd suggest looking at: k-means and kernel k-means Agglomerative Clustering Non-negative Matrix Factorisation Latent Dirichlet Allocation Dirichlet Processes and Hierarchical Dirichlet Processes But actually you'll probably find that your similarity/distance measure is more important than the specific algorithm you use. If you have some labelled data, then "Semi-Supervised Learning" approaches are gaining popularity and can be very powerful. A good starting point for SSL is the LapSVM (Laplacian Support Vector Machine).
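As a concrete starting point for the unsupervised setting described above, here is a minimal k-means (Lloyd's algorithm) sketch in plain Python; the toy points and starting centroids are made up for illustration:

```python
# Minimal k-means: alternate nearest-centroid assignment and centroid updates.
def kmeans(points, centroids, iters=20):
    for _ in range(iters):
        # assignment step: each point goes to its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # update step: centroid becomes the mean of its cluster
        # (keep the old centroid if the cluster went empty)
        centroids = [
            tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids

pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),      # cluster near the origin
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]      # cluster near (5, 5)
centers = kmeans(pts, [(0.0, 0.0), (1.0, 1.0)])
print(centers)   # -> roughly (0.1, 0.1) and (5.03, 5.0)
```

As the answer notes, the distance measure matters more than the algorithm; swapping the squared Euclidean distance here for another metric changes the clustering completely.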
What are the "hot algorithms" for machine learning?
These are books that might be helpful: Introduction to Data Mining by Pang-Ning Tan, Michael Steinbach, Vipin Kumar. This was the suggested book during my Data Mining classes at university; I like its layout and theoretical approach. Data Mining: Practical Machine Learning Tools and Techniques by Ian H. Witten, Eibe Frank, Mark A. Hall. A very interesting book, which also covers many techniques implemented in the WEKA data mining framework. Machine Learning by Thomas Mitchell. It is a somewhat old book, but it can still be useful. Also remember that the free Machine Learning classes at Stanford have just started: www.ml-class.com. And for your particular problem, that is, SNP analysis, I would suggest having a look at the Di Camillo group at the University of Padova.
What are the "hot algorithms" for machine learning?
Here is a great article and book that explains the rationale, theory, and application of most of the most popular methods: Top 10 Algorithms in Data Mining It's especially neat because it's a "top 10" chosen by polling experts in the field. Also, for gene data in general, feature selection is hugely important because of the many features. For example, SVM recursive feature elimination (SVM-RFE) and related methods are very popular and being actively developed and applied in the context of gene data.
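To illustrate the elimination loop behind methods like SVM-RFE, here is a hedged Python sketch. A real SVM-RFE ranks features by the squared SVM weights at each round; a stand-in score (absolute correlation with the label) is used here so the example stays self-contained, and the SNP names and values are hypothetical:

```python
# Recursive feature elimination: repeatedly drop the lowest-scoring feature.
import math

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def rfe(features, labels, keep=1):
    eliminated = []                  # features dropped first rank worst
    remaining = dict(features)
    while len(remaining) > keep:
        scores = {name: abs(corr(col, labels)) for name, col in remaining.items()}
        worst = min(scores, key=scores.get)
        eliminated.append(worst)
        del remaining[worst]
    return list(remaining), eliminated

labels = [0, 0, 0, 1, 1, 1]
features = {                         # toy 0/1/2 genotype codes
    "snp1": [0, 2, 1, 1, 0, 2],      # uncorrelated with the label
    "snp2": [0, 0, 1, 1, 2, 2],      # strongly correlated
    "snp3": [0, 1, 1, 1, 1, 2],      # moderately correlated
}
kept, eliminated = rfe(features, labels)
print(kept, eliminated)   # -> ['snp2'] ['snp1', 'snp3']
```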
What are the "hot algorithms" for machine learning?
Boosted trees and some form of SVM win lots of competitions, but it always comes down to context. Manifold regularization is on the cutting edge as well.
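To make "boosted trees" concrete, here is a toy gradient-boosting sketch in Python (squared loss, depth-1 regression stumps, shrinkage). It is an illustration of the idea only, not how production GBM packages are implemented:

```python
# Gradient boosting for regression: fit each stump to the current residuals,
# then add a shrunken copy of it to the ensemble.
def fit_stump(xs, residuals):
    """Best single split on a 1-D feature minimizing squared error."""
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x <= thr else rm

def boost(xs, ys, rounds=50, lr=0.1):
    base = sum(ys) / len(ys)              # start from the mean
    pred = [base] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for x, p in zip(xs, pred)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [1, 1, 1, 1, 5, 5, 5, 5]             # a step function
model = boost(xs, ys)
print([round(model(x), 2) for x in xs])   # converges toward 1s and 5s
```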
What are the "hot algorithms" for machine learning?
I recommend "The Elements of Statistical Learning", by Hastie, Tibshirani, and Friedman. Don't just read it, play with some algorithms described by them (most of them are implemented in R, or you could even implement some yourself), and learn their weak and strong points.
What are the "hot algorithms" for machine learning?
I would recommend the following books: Machine Learning in Bioinformatics, and the Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods and Techniques.
What are the "hot algorithms" for machine learning?
Gaussian Processes for Machine Learning by Rasmussen and Williams (MIT Press) is a must. Gaussian processes are one of the hot algorithms for machine learning now that Expectation Propagation and variational inference algorithms are available. The book is very well written, has a free MATLAB toolbox (a good bit of kit), and can be downloaded for free.
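The "probabilistic prediction" point can be sketched in a few lines of Python: a tiny GP regression (RBF kernel, small noise term) returning both a posterior mean and a posterior variance at a test input. The training data are made up, and the linear solve is done by hand so the example needs no external libraries:

```python
# Tiny GP regression: mean = k*^T K^{-1} y, var = k(x*,x*) - k*^T K^{-1} k*.
import math

def rbf(a, b, length=1.0):
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

X = [-1.0, 0.0, 1.0]          # training inputs
y = [0.2, 0.9, 0.1]           # training targets
noise = 1e-3
K = [[rbf(xi, xj) + (noise if i == j else 0.0)
      for j, xj in enumerate(X)]
     for i, xi in enumerate(X)]

alpha = solve(K, y)
x_star = 0.5
k_star = [rbf(xi, x_star) for xi in X]
mean = sum(ks * a for ks, a in zip(k_star, alpha))
v = solve(K, k_star)
var = rbf(x_star, x_star) - sum(ks * vi for ks, vi in zip(k_star, v))
# x* lies close to the training inputs, so the posterior variance is small
print(round(mean, 3), round(var, 5))
```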
How can I model a proportion with BUGS/JAGS/STAN?
The beta regression approach is to reparameterize in terms of $\mu$ and $\phi$, where $\mu$ is the equivalent of the y_hat that you predict. In this parameterization you have $\alpha=\mu\times\phi$ and $\beta=(1-\mu) \times \phi$. Then you can model $\mu$ through a logit link on the linear combination. $\phi$ can either have its own prior (it must be greater than 0), or can be modeled on covariates as well (choose a link function that keeps it greater than 0, such as the exponential). Possibly something like:

    for(i in 1:n) {
      y[i] ~ dbeta(alpha[i], beta[i])
      alpha[i] <- mu[i] * phi
      beta[i] <- (1 - mu[i]) * phi
      logit(mu[i]) <- a + b * x[i]
    }
    phi ~ dgamma(.1, .1)
    a ~ dnorm(0, .001)
    b ~ dnorm(0, .001)
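A quick sanity check of this $(\mu, \phi)$ parameterization (a Python sketch, not part of the answer): with $\alpha=\mu\phi$ and $\beta=(1-\mu)\phi$, the Beta$(\alpha,\beta)$ distribution has mean exactly $\mu$ and variance $\mu(1-\mu)/(1+\phi)$, so $\phi$ acts as a precision parameter:

```python
# Check that alpha = mu*phi, beta = (1-mu)*phi gives mean mu and
# variance mu*(1-mu)/(1+phi) for the Beta distribution.
mu, phi = 0.3, 12.0        # example values, chosen arbitrarily
alpha = mu * phi
beta = (1 - mu) * phi

mean = alpha / (alpha + beta)
var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))

assert abs(mean - mu) < 1e-12
assert abs(var - mu * (1 - mu) / (1 + phi)) < 1e-12
print(alpha, beta, mean, var)
```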
How can I model a proportion with BUGS/JAGS/STAN?
Greg Snow gave a great answer. For completeness, here is the equivalent in Stan syntax. Although Stan has a beta distribution that you could use, it is faster to work out the logarithm of the beta density yourself, because the constants log(y) and log(1-y) can be calculated once at the outset (rather than every time y ~ beta(alpha, beta) is evaluated). By incrementing the reserved lp__ variable (see below), you can sum the logarithm of the beta density over the observations in your sample. I use the label "gamma" for the parameter vector in the linear predictor.

    data {
      int<lower=1> N;
      int<lower=1> K;
      real<lower=0,upper=1> y[N];
      matrix[N,K] X;
    }
    transformed data {
      real log_y[N];
      real log_1my[N];
      for (i in 1:N) {
        log_y[i] <- log(y[i]);
        log_1my[i] <- log1m(y[i]);
      }
    }
    parameters {
      vector[K] gamma;
      real<lower=0> phi;
    }
    model {
      vector[N] Xgamma;
      real mu;
      real alpha;
      real beta;
      Xgamma <- X * gamma;
      for (i in 1:N) {
        mu <- inv_logit(Xgamma[i]);
        alpha <- mu * phi;
        beta <- (1.0 - mu) * phi;
        lp__ <- lp__ - lbeta(alpha, beta)
                     + (alpha - 1.0) * log_y[i]
                     + (beta - 1.0) * log_1my[i];
      }
      // optional priors on gamma and phi here
    }
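The hand-rolled log density can be cross-checked outside Stan (a Python sketch, not part of the answer): log Beta(y | α, β) = −lbeta(α, β) + (α−1) log y + (β−1) log(1−y), with lbeta(a, b) = lgamma(a) + lgamma(b) − lgamma(a+b):

```python
# Verify the beta log density used in the Stan model above against
# cases with a closed form.
import math

def lbeta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_logpdf(y, alpha, beta):
    return -lbeta(alpha, beta) + (alpha - 1) * math.log(y) \
           + (beta - 1) * math.log(1 - y)

# mu/phi parameterization as in the regression: alpha = mu*phi, beta = (1-mu)*phi
mu, phi, y = 0.6, 10.0, 0.55          # arbitrary illustrative values
lp = beta_logpdf(y, mu * phi, (1 - mu) * phi)
print(round(lp, 4))

# Beta(1, 1) is Uniform(0, 1): log density 0 everywhere on (0, 1).
assert abs(beta_logpdf(0.37, 1.0, 1.0)) < 1e-12
# Beta(2, 1) has density 2y: at y = 0.5 the density is 1, log density 0.
assert abs(beta_logpdf(0.5, 2.0, 1.0)) < 1e-12
```

(Note that the `lp__ <- ...` idiom shown in the answer is old Stan syntax; the check above only concerns the density arithmetic, which is version-independent.)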
Draw multiple plots on one graph in R?
If you want to stick with something like the method you've been using, then you might want to learn the layout() command. A few other detail changes and you can get the graphs much closer together. You could also put the unique things that change between graphs in a list (like the data and margins) and then go through a loop. Also, you'll note I made the bottom axis with the direct axis() command so that you can control where the items go.

    layout(matrix(1:5, ncol = 1), widths = 1, heights = c(1, 5, 5, 5, 7), respect = FALSE)
    par(mar = c(0, 4, 0, 0))
    plot(1, type = 'n', axes = FALSE, bty = 'n', ylab = '')
    legend('left', legend = c("X", "Y"), bty = "n", horiz = TRUE, cex = 1.5,
           col = c("red1", "darkblue"), text.col = c("red1", "darkblue"),
           pch = c(1, 3), lty = c(2, 3), x.intersp = 0.4, adj = 0.2)
    par(mar = c(0, 4, 2, 1), bty = 'o')
    plot(a1, type = "b", ylim = c(0, 14.5), xlab = "Time (secs)", ylab = "",
         xaxt = 'n', cex.axis = 1.4, cex.lab = 1.3, cex = 1.2, lwd = 2.5,
         col = "red1", lty = 2, pch = 1, main = "A")
    lines(a2, type = "b", pch = 3, lty = 3, col = "darkblue", lwd = 2.5, cex = 1.2)
    par(xpd = TRUE)
    plot(b1, type = "b", ylim = c(0, 14.5), xlab = "Time (secs)", ylab = "",
         xaxt = 'n', cex.axis = 1.4, cex.lab = 1.3, cex = 1.2, lwd = 2.5,
         col = "red1", lty = 2, pch = 1, main = "B")
    lines(b2, type = "b", pch = 3, lty = 3, col = "darkblue", lwd = 2.5, cex = 1.2)
    plot(c1, type = "b", ylim = c(0, 14.5), xlab = "Time (secs)", ylab = "",
         xaxt = 'n', cex.axis = 1.4, cex.lab = 1.3, cex = 1.2, lwd = 2.5,
         col = "red1", lty = 2, pch = 1, main = "C")
    lines(c2, type = "b", pch = 3, lty = 3, col = "darkblue", lwd = 2.5, cex = 1.2)
    par(mar = c(4, 4, 2, 1))
    plot(d1/1000, type = "b", ylim = c(0, 14.5), xlab = "Time (secs)", ylab = "",
         xaxt = 'n', cex.axis = 1.4, cex.lab = 1.3, cex = 1.2, lwd = 2.5,
         col = "red1", lty = 2, pch = 1, main = "D")
    lines(d2, type = "b", pch = 3, lty = 3, col = "darkblue", lwd = 2.5, cex = 1.2)
    mtext("Price", side = 2, at = 40, line = 2.5, cex = 1.1)
    axis(1, 1:10, cex.axis = 1.4)

I should note that I really didn't put much effort into making this as nice as I could; instead of making that first dummy plot I could have just set enough space in the first frame. Unfortunately the mar() settings try to fill the frame, and the top margin affects how far above the graph the title sits, so I'd have to make all my labels with mtext() or text() instead of just using the main argument within plot(), and I didn't feel like doing that.
Draw multiple plots on one graph in R?
I would recommend learning the lattice graphics package. I can get close to what you want with a few lines. First, package up your data in a data frame, something like this:

    dat <- data.frame (x=rep (1:10, 8),
                       y=c(a1, a2, b1, b2, c1, c2, d1, d2),
                       var=factor (rep (c("X", "Y"), each=10)),
                       graph=factor (rep (c("A", "B", "C", "D"), each=20)))

which yields:

        x           y var graph
    1   1 0.556372979   X     A
    2   2 0.754257646   X     A
    3   3 0.815432905   X     A
    4   4 0.559513013   X     A
    5   5 0.763368168   X     A
    6   6 0.426415259   X     A
    7   7 0.597962532   X     A
    8   8 0.723780143   X     A
    9   9 0.228920116   X     A
    10 10 0.607378894   X     A
    11  1 0.865114425   Y     A
    12  2 0.919804947   Y     A
    13  3 0.437003794   Y     A
    14  4 0.203349303   Y     A
    15  5 0.620425977   Y     A
    16  6 0.703170299   Y     A
    17  7 0.174297656   Y     A
    18  8 0.698144659   Y     A
    19  9 0.732527016   Y     A
    20 10 0.778057398   Y     A
    21  1 0.355583032   X     B
    22  2 0.015765144   X     B
    23  3 0.315004753   X     B
    24  4 0.257723585   X     B
    25  5 0.506324279   X     B
    26  6 0.028634427   X     B
    27  7 0.475360443   X     B
    28  8 0.577119754   X     B
    29  9 0.709063777   X     B
    30 10 0.308695235   X     B
    31  1 0.852567748   Y     B
    32  2 0.938889121   Y     B
    33  3 0.080869739   Y     B
    34  4 0.732318482   Y     B
    35  5 0.325673156   Y     B
    36  6 0.378161864   Y     B
    37  7 0.830962248   Y     B
    38  8 0.990504039   Y     B
    39  9 0.331377188   Y     B
    40 10 0.448251682   Y     B
    41  1 0.967255983   X     C
    42  2 0.722894624   X     C
    43  3 0.039523960   X     C
    44  4 0.003774719   X     C
    45  5 0.218605160   X     C
    46  6 0.722304874   X     C
    47  7 0.576140686   X     C
    48  8 0.108219812   X     C
    49  9 0.258440127   X     C
    50 10 0.739656846   X     C
    51  1 0.528278201   Y     C
    52  2 0.104415716   Y     C
    53  3 0.966076056   Y     C
    54  4 0.504415150   Y     C
    55  5 0.655384900   Y     C
    56  6 0.247340395   Y     C
    57  7 0.193857228   Y     C
    58  8 0.019133583   Y     C
    59  9 0.799404908   Y     C
    60 10 0.159209090   Y     C
    61  1 0.422574508   X     D
    62  2 0.823192614   X     D
    63  3 0.808715876   X     D
    64  4 0.770499188   X     D
    65  5 0.049138399   X     D
    66  6 0.747017767   X     D
    67  7 0.239916970   X     D
    68  8 0.152777362   X     D
    69  9 0.052862276   X     D
    70 10 0.937605577   X     D
    71  1 0.850112019   Y     D
    72  2 0.675407232   Y     D
    73  3 0.273276166   Y     D
    74  4 0.455995477   Y     D
    75  5 0.695497498   Y     D
    76  6 0.688414035   Y     D
    77  7 0.454013633   Y     D
    78  8 0.874853452   Y     D
    79  9 0.568746031   Y     D

Then, use lattice's xyplot:

    library (lattice)
    xyplot (y ~ x | graph, groups=var, data=dat, type="o", layout=c(1, 4),
            as.table=T, xlab="Time (secs)", ylab="Price")

which yields a nice graph like:

EDIT: If you want to have different symbols and lines and have that show up in your legend, it gets complicated, because you literally build the legend yourself, and you have to know how to get the default lattice colors if you didn't override them yourself:

    my.text <- levels (dat$var)
    my.lty <- c(2, 3)
    my.pch <- c(1, 2)
    my.col <- trellis.par.get ("superpose.symbol")$col[1:2]
    xyplot (y ~ x | graph, groups=var, data=dat, type="o", pch=my.pch,
            lty=my.lty, main="Main Title", layout=c(1, 4), as.table=T,
            xlab="Time (secs)", ylab="Price",
            key=list (columns=2, text=list (my.text),
                      points=list (pch=my.pch, col=my.col)))

EDIT 2: You can simplify the code and the graph, if the two categories really are as simple as "X" and "Y":

    xyplot (y ~ x | graph, groups=var, data=dat, type="o", pch=c("X", "Y"),
            cex=1.25, lty=c(2, 3), layout=c(1, 4), as.table=T,
            xlab="Time (secs)", ylab="Price")

which will use "X" and "Y" as the point symbols. You don't need a legend at all, and can then devote even more space to the graphs themselves. (On the other hand, you might not like the look, or might find it harder to determine the exact center of the point, though that's not as much of an issue as it might be since the line goes through each point.)

EDIT 3: Actually, you should add strip=F, strip.left=T, to the plot, to put the A, B, C, D labels to the left of the graphs, which gives you more room on a long graph like this:

    xyplot (y ~ x | graph, groups=var, data=dat, type="o", pch=my.pch,
            lty=my.lty, main="Main Title", layout=c(1, 4), as.table=T,
            xlab="Time (secs)", ylab="Price", strip.left=T, strip=F,
            key=list (columns=2, text=list (my.text),
                      points=list (pch=my.pch, col=my.col),
                      lines=list (lty=my.lty, col=my.col)))
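The rep()-based construction above builds "long" format data: one row per observation, with var and graph as label columns. The same reshaping logic, sketched in plain Python with shortened stand-in series (a1..d2 here are made-up two-element lists, not the question's data):

```python
# Build long-format rows from four pairs of series, mirroring the R
# rep(..., each=...) construction used for the lattice data frame.
a1, a2 = [0.1, 0.2], [0.3, 0.4]
b1, b2 = [0.5, 0.6], [0.7, 0.8]
c1, c2 = [0.9, 1.0], [1.1, 1.2]
d1, d2 = [1.3, 1.4], [1.5, 1.6]

rows = []
for graph, (sx, sy) in zip("ABCD", [(a1, a2), (b1, b2), (c1, c2), (d1, d2)]):
    for var, series in (("X", sx), ("Y", sy)):
        for t, price in enumerate(series, start=1):
            rows.append({"x": t, "y": price, "var": var, "graph": graph})

print(len(rows), rows[0], rows[-1])
# -> 16 rows; first is (x=1, y=0.1, X, A), last is (x=2, y=1.6, Y, D)
```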
Draw multiple plots on one graph in R?
I would recommend learning the lattice graphics package. I can get close to what you want with a few lines. First, package up your data in a data frame, something like this: dat <- data.frame (x=rep (
Draw multiple plots on one graph in R?

I would recommend learning the lattice graphics package. I can get close to what you want with a few lines. First, package up your data in a data frame, something like this:

dat <- data.frame (x=rep (1:10, 8),
                   y=c(a1, a2, b1, b2, c1, c2, d1, d2),
                   var=factor (rep (c("X", "Y"), each=10)),
                   graph=factor (rep (c("A", "B", "C", "D"), each=20)))

which yields:

    x            y  var  graph
1   1  0.556372979  X  A
2   2  0.754257646  X  A
3   3  0.815432905  X  A
4   4  0.559513013  X  A
5   5  0.763368168  X  A
6   6  0.426415259  X  A
7   7  0.597962532  X  A
8   8  0.723780143  X  A
9   9  0.228920116  X  A
10 10  0.607378894  X  A
11  1  0.865114425  Y  A
12  2  0.919804947  Y  A
13  3  0.437003794  Y  A
14  4  0.203349303  Y  A
15  5  0.620425977  Y  A
16  6  0.703170299  Y  A
17  7  0.174297656  Y  A
18  8  0.698144659  Y  A
19  9  0.732527016  Y  A
20 10  0.778057398  Y  A
21  1  0.355583032  X  B
22  2  0.015765144  X  B
23  3  0.315004753  X  B
24  4  0.257723585  X  B
25  5  0.506324279  X  B
26  6  0.028634427  X  B
27  7  0.475360443  X  B
28  8  0.577119754  X  B
29  9  0.709063777  X  B
30 10  0.308695235  X  B
31  1  0.852567748  Y  B
32  2  0.938889121  Y  B
33  3  0.080869739  Y  B
34  4  0.732318482  Y  B
35  5  0.325673156  Y  B
36  6  0.378161864  Y  B
37  7  0.830962248  Y  B
38  8  0.990504039  Y  B
39  9  0.331377188  Y  B
40 10  0.448251682  Y  B
41  1  0.967255983  X  C
42  2  0.722894624  X  C
43  3  0.039523960  X  C
44  4  0.003774719  X  C
45  5  0.218605160  X  C
46  6  0.722304874  X  C
47  7  0.576140686  X  C
48  8  0.108219812  X  C
49  9  0.258440127  X  C
50 10  0.739656846  X  C
51  1  0.528278201  Y  C
52  2  0.104415716  Y  C
53  3  0.966076056  Y  C
54  4  0.504415150  Y  C
55  5  0.655384900  Y  C
56  6  0.247340395  Y  C
57  7  0.193857228  Y  C
58  8  0.019133583  Y  C
59  9  0.799404908  Y  C
60 10  0.159209090  Y  C
61  1  0.422574508  X  D
62  2  0.823192614  X  D
63  3  0.808715876  X  D
64  4  0.770499188  X  D
65  5  0.049138399  X  D
66  6  0.747017767  X  D
67  7  0.239916970  X  D
68  8  0.152777362  X  D
69  9  0.052862276  X  D
70 10  0.937605577  X  D
71  1  0.850112019  Y  D
72  2  0.675407232  Y  D
73  3  0.273276166  Y  D
74  4  0.455995477  Y  D
75  5  0.695497498  Y  D
76  6  0.688414035  Y  D
77  7  0.454013633  Y  D
78  8  0.874853452  Y  D
79  9  0.568746031  Y  D

Then, use lattice's xyplot:

library (lattice)
xyplot (y ~ x | graph, groups=var, data=dat, type="o",
        layout=c(1, 4), as.table=T,
        xlab="Time (secs)", ylab="Price")

which yields a nice graph like:

EDIT: If you want to have different symbols and lines and have that show up in your legend, it gets complicated, because you literally build the legend yourself, and you have to know how to get the default lattice colors if you didn't override them yourself:

my.text <- levels (dat$var)
my.lty <- c(2, 3)
my.pch <- c(1, 2)
my.col <- trellis.par.get ("superpose.symbol")$col[1:2]
xyplot (y ~ x | graph, groups=var, data=dat, type="o",
        pch=my.pch, lty=my.lty, main="Main Title",
        layout=c(1, 4), as.table=T,
        xlab="Time (secs)", ylab="Price",
        key=list (columns=2, text=list (my.text),
                  points=list (pch=my.pch, col=my.col)))

EDIT 2: You can simplify the code and the graph, if the two categories really are as simple as "X" and "Y":

xyplot (y ~ x | graph, groups=var, data=dat, type="o",
        pch=c("X", "Y"), cex=1.25, lty=c(2, 3),
        layout=c(1, 4), as.table=T,
        xlab="Time (secs)", ylab="Price")

which will use "X" and "Y" as the point symbols. You don't need a legend at all, and can then devote even more space to the graphs themselves. (On the other hand, you might not like the look, or might find it harder to determine the exact center of the point, though that's not as much of an issue as it might be since the line goes through each point.)

EDIT 3: Actually, you should add strip=F, strip.left=T, to the plot, to put the A, B, C, D labels to the left of the graphs, which gives you more room on a long graph like this:

xyplot (y ~ x | graph, groups=var, data=dat, type="o",
        pch=my.pch, lty=my.lty, main="Main Title",
        layout=c(1, 4), as.table=T,
        xlab="Time (secs)", ylab="Price",
        strip.left=T, strip=F,
        key=list (columns=2, text=list (my.text),
                  points=list (pch=my.pch, col=my.col),
                  lines=list (lty=my.lty, col=my.col)))
19,378
Draw multiple plots on one graph in R?
Here's a version of @Brandon's ggplot2 solution that incorporates the desired legend behavior:

dat <- data.frame (x=rep (1:10, 8),
                   y=runif(80),
                   var=factor (rep (c("X", "Y"), each=10)),
                   graph=factor (rep (c("A", "B", "C", "D"), each=20)))

ggplot(data = dat, aes(x = x, y = y)) +
    facet_wrap(~graph, nrow = 4) +
    geom_point(aes(shape = var)) +
    geom_line(aes(colour = var, group = var)) +
    labs(x = NULL, y = NULL, shape = "", colour = "") +
    theme_bw() +
    opts(legend.position = "top", legend.direction = "horizontal")

I find legends to be far easier in ggplot2, but YMMV.

EDIT: Addressing a few questions in the comments. To specify particular point or line types, you would use scale_aesthetic_manual where aesthetic is either shape, linetype, etc. For instance:

ggplot(data = dat, aes(x = x, y = y)) +
    facet_wrap(~graph, nrow = 4) +
    geom_point(aes(shape = var)) +
    geom_line(aes(colour = var, linetype = var, group = var)) +
    labs(x = NULL, y = NULL, shape = "", colour = "", linetype = "") +
    scale_shape_manual(values = 4:5) +
    theme_bw() +
    opts(legend.position = "top", legend.direction = "horizontal")

Changing the size of various axis labels is done by changing settings in the theme, usually using opts(). For instance:

ggplot(data = dat, aes(x = x, y = y)) +
    facet_wrap(~graph, nrow = 4) +
    geom_point(aes(shape = var)) +
    geom_line(aes(colour = var, linetype = var, group = var)) +
    labs(x = "X Label", y = "Y Label", shape = "", colour = "", linetype = "") +
    theme_bw() +
    opts(legend.position = "top", legend.direction = "horizontal",
         axis.text.x = theme_text(size = 15),
         axis.title.y = theme_text(size = 25, angle = 90))

You should really dive into the website and his book for more information.
19,379
Draw multiple plots on one graph in R?
Similar to Wayne's answer, I would use a different package, namely ggplot2:

library(ggplot2)
df <- data.frame(
    parameter=runif(300),
    Time=1:300,
    split=sample(c(1:4), 300, replace=T),
    split2=sample(c(1:2), 300, replace=T)
)
ggplot(df, aes(Time, parameter, colour=as.factor(split2))) +
    geom_line() +
    facet_wrap(~split, nrow=4)

Which gives us a chart like:
19,380
Binomial-binomial is binomial?
Write $Y = \sum_i B_i$ and $X = \sum_i A_i B_i$ where $A_i \sim \text{Bernoulli}(p)$ and $B_i \sim \text{Bernoulli}(q)$. Then $Y$ and $X$ have the joint distribution specified ($Y$ is obvious, and conditional on $Y$ we will have a sum of exactly $Y$ of the $B_i$'s), but clearly $A_i B_i \sim \text{Bernoulli}(pq)$ independently. Hence $X \sim \text{Binomial}(n, pq)$ marginally.
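As a numerical sanity check, here is a small Python simulation (with arbitrary illustrative values of $n$, $p$, $q$) of the two-stage process, comparing sample moments against those of $\text{Binomial}(n, pq)$:

```python
import random

random.seed(0)
n, p, q = 20, 0.6, 0.3
trials = 100_000

def two_stage():
    # Stage 1: Y ~ Binomial(n, q); Stage 2: X | Y ~ Binomial(Y, p)
    y = sum(random.random() < q for _ in range(n))
    return sum(random.random() < p for _ in range(y))

xs = [two_stage() for _ in range(trials)]
mean_x = sum(xs) / trials
var_x = sum((x - mean_x) ** 2 for x in xs) / trials

# Compare with the moments of Binomial(n, pq)
print(round(mean_x, 3), n * p * q)               # both near 3.6
print(round(var_x, 3), n * p * q * (1 - p * q))  # both near 2.952
```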
19,381
Binomial-binomial is binomial?
As Ben points out, you've made an algebraic error and the result is correct.

This process is called binomial thinning and, if you search for that expression, you'll find many mentions of it in the published literature. The process applies not just to binomial random variables, but also to multinomial, Poisson and negative binomial. Suppose that we have binomial, Poisson or negative binomial random variables:

$Y_1\sim {\rm Binomial}(n,q)$
$Y_2\sim {\rm Poisson}(\lambda)$
$Y_3\sim {\rm Negative\ Binomial}(\mu,\phi)$, i.e., with mean $\mu$ and variance $\mu+\phi\mu^2$

We can view each of these random variables as counting events from a random process. Suppose now that the individual events are not all observed but are randomly intercepted so that on average $p$ of them get through and are observed while the others are lost. In other words, we "thin out" the random processes by keeping each of the original events with probability $p$:

$X_1|Y_1 \sim {\rm Binomial}(Y_1,p)$
$X_2|Y_2 \sim {\rm Binomial}(Y_2,p)$
$X_3|Y_3 \sim {\rm Binomial}(Y_3,p)$

The resulting "thinned" distributions have the following marginal distributions:

$X_1 \sim {\rm Binomial}(n,pq)$
$X_2 \sim {\rm Poisson}(p\lambda)$
$X_3 \sim {\rm Negative\ Binomial}(p\mu,\phi)$

The effect is to scale down the expected value of the distribution by factor $p$ without otherwise changing the distributional form. An example of binomial thinning from my own use is the thinCounts function of the edgeR package ( https://rdrr.io/bioc/edgeR/man/thinCounts.html ) which can be used to generate RNA-seq read counts for reduced sequencing depths.
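The thinning relationships are straightforward to simulate. A rough Python sketch for the Poisson case (arbitrary illustrative parameters; the hand-rolled Poisson sampler is only there to keep the snippet self-contained): the thinned counts should have mean and variance both close to $p\lambda$:

```python
import math
import random

random.seed(1)
lam, p = 8.0, 0.4
trials = 100_000

def poisson(l):
    # Knuth's multiplicative method; fine for small means, illustrative only
    threshold, k, prod = math.exp(-l), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

def thinned():
    y = poisson(lam)                                   # Y ~ Poisson(lam)
    return sum(random.random() < p for _ in range(y))  # keep each event w.p. p

xs = [thinned() for _ in range(trials)]
mean_x = sum(xs) / trials
var_x = sum((x - mean_x) ** 2 for x in xs) / trials

# For a Poisson(p * lam) variable, mean and variance both equal p * lam = 3.2
print(round(mean_x, 2), round(var_x, 2))
```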
19,382
Binomial-binomial is binomial?
You have an algebraic error in your working --- since $\mathbb{E}(Y)=nq$ you should have: $$\begin{align} \mathbb{V}(X) &= \mathbb{E}(\mathbb{V}(X\mid Y)) + \mathbb{V}(\mathbb{E}(X\mid Y)) \\[6pt] &= \mathbb{E}(Yp(1-p)) + \mathbb{V}(pY) \\[6pt] &= p(1-p) \mathbb{E}(Y) + p^2 \mathbb{V}(Y) \\[6pt] &= n q p(1-p) + p^2 n q (1-q) \\[6pt] &= n p q [(1-p) + p (1-q)] \\[6pt] &= n p q (1 - p q), \\[6pt] \end{align}$$ which matches the marginal variance of the distribution $\text{Bin}(n,pq)$.
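Because all the distributions involved are finite and discrete, the identity can also be verified exactly by enumerating the compound pmf, as in this Python sketch (arbitrary illustrative values of $n$, $p$, $q$):

```python
from math import comb

n, p, q = 10, 0.35, 0.7  # arbitrary illustrative values

def binom_pmf(k, m, r):
    # math.comb returns 0 when k > m, so out-of-range terms vanish
    return comb(m, k) * r ** k * (1 - r) ** (m - k)

# Exact compound pmf: P(X = x) = sum_y P(Y = y) * P(Bin(y, p) = x)
pmf_x = [sum(binom_pmf(y, n, q) * binom_pmf(x, y, p) for y in range(n + 1))
         for x in range(n + 1)]

mean = sum(x * px for x, px in enumerate(pmf_x))
var = sum((x - mean) ** 2 * px for x, px in enumerate(pmf_x))

print(mean, n * p * q)               # both 2.45 (up to floating point)
print(var, n * p * q * (1 - p * q))  # matches n p q (1 - p q)
```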
19,383
Is Matlab/octave or R better suited for monte carlo simulation?
I use both. I often prototype functions & algorithms in Matlab because, as stated, it is easier to express an algorithm in something which is close to a pure mathematical language. R does have excellent libraries. I'm still learning it, but I'm starting to leave Matlab in the dust because once you know R, it's also fairly easy to prototype functions there. However, I find that if you want algorithms to function efficiently within a production environment, it is best to move to a compiled language like C++. I have experience wrapping C++ into both Matlab and R (and Excel for that matter), but I've had a better experience with R.

Disclaimer: Being a grad student, I haven't used a recent version of Matlab for my DLLs; I've been working almost exclusively in Matlab 7.1 (which is like 4 years old). Perhaps the newer versions work better, but I can think of two situations off the top of my head where a C++ DLL in the back of Matlab caused Windows XP to blue-screen because I walked inappropriately outside an array bounds -- a very hard problem to debug if your computer reboots every time you make that mistake...

Lastly, the R community appears to be growing much faster and with much more momentum than the Matlab community ever had. Further, as it's free you also don't have to deal with the Godforsaken flexlm license manager.

Note: Almost all of my development is in MCMC algorithms right now. I do about 90% in production in C++ with the visualization in R using ggplot2.

Update for parallel comments: A fair amount of my development time right now is spent on parallelizing MCMC routines (it's my PhD thesis). I have used Matlab's parallel toolbox and Star-P's solution (which I guess is now owned by Microsoft?? -- jeez, another one is gobbled up...). I found the parallel toolbox to be a configuration nightmare -- when I used it, it required root access to every single client node. I think they've fixed that little "bug" now, but it's still a mess. I found Star-P's solution to be elegant, but often difficult to profile. I have not used Jacket, but I've heard good things. I also have not used the more recent versions of the parallel toolbox which also support GPU computation. I have virtually no experience with the R parallel packages.

It's been my experience that parallelizing code must occur at the C++ level where you have a finer granularity of control for task decomposition and memory/resource allocation. I find that if you attempt to parallelize programs at a high level, you often only receive a minimal speedup unless your code is trivially decomposable (also called dummy-parallelism). That said, you can even get reasonable speedups using a single line at the C++ level using OpenMP:

#pragma omp parallel for

More complicated schemes have a learning curve, but I really like where GPGPU things are going. As of JSM this year, the few people I talked to about GPU development in R described it as being only "toes in the deep end" so to speak. But as stated, I have minimal experience -- subject to change in the near future.
19,384
Is Matlab/octave or R better suited for monte carlo simulation?
To be honest, I think any question you ask around here about R vs ... will be biased towards R. Remember that R is by far the most used tag!

What I do: My current working practice is to use R to prototype and use C when I need an extra boost of speed. It used to be that I would have to switch to C very quickly (again for my particular applications), but the R multicore libraries have helped delay that switch. Essentially, you make a for loop run in parallel with a trivial change. I should mention that my applications are very computationally intensive.

Recommendation: To be perfectly honest, it really depends on exactly what you want to do. So I'm basing my answer on this statement in your question:

"I want to construct static models with sensitivity analysis, later dynamic models. Need good libraries/algorithms that guide me"

I'd imagine that this problem would be ideally suited to prototyping in R and using C when needed (or some other compiled language). That said, typically Monte-Carlo/sensitivity analysis doesn't involve particularly advanced statistical routines - of course it may need other advanced functionality. So I think (without more information) that you could carry out your analysis in any language, but being completely biased, I would recommend R!
19,385
Is Matlab/octave or R better suited for monte carlo simulation?
Although I almost exclusively use R, I really admire the profiler in Matlab. When your program is kind of slow you normally want to know where the bottleneck is. Matlab's profiler is a great tool for achieving this as it tells you how much time is spent on each line of the code. At least to me, using Rprof is incomparably worse: I can't figure out which call is the bottleneck. Using Rprof you don't get the information on how much time is spent on each line, but how much time is spent on each primitive function (or so). However, a lot of the same primitive functions are called by a lot of different functions.

Although I recommend R (because it is just great: free, extremely powerful, ...), if you know you have to profile your code a lot, Matlab is way better. And to be fair, there are multicore and parallel computing toolboxes in Matlab (though, extremely pricey).
19,386
Is Matlab/octave or R better suited for monte carlo simulation?
If your simulations will involve relatively sophisticated techniques, then R is the way to go, because it is likely that routines you'll need will be available in R, but not necessarily in matlab.
19,387
Is Matlab/octave or R better suited for monte carlo simulation?
In my opinion, Matlab is an ugly language. Perhaps it's gotten default arguments and named arguments in its core by now, but many examples you find online do the old "If there are 6 arguments, this, else if there are 5 arguments this and that..." and named arguments are just vectors with alternating strings (names) and values. That's so 1970's that I simply can't use it. R may have its issues, and it is also old, but it was built on a foundation (Scheme/Lisp) that was forward-looking and has held up rather well in comparison. That said, Matlab is much faster if you like to code with loops, etc. And it has much better debugging facilities. And more interactive graphics. On the other hand, what passes for documenting your code/libraries is rather laughable compared to R and you pay a pretty penny to use Matlab. All IMO.
19,388
Can I use 'mean ± SD' for non-negative data when SD is higher than mean?
[edited based on helpful feedback in the comments] It bothers me immensely when people do this. The argument against it is simple: the standard deviation is typically shown to convey information about data distribution (and standard error for a parameter). It achieves this goal well in some situations but not others. If the standard deviation/error implies that negative values are reasonable when you know they are not, it is not helping you communicate accurately. Bimodal distributions are another situation in which mean ± SD/SE is likely to mislead. So what else can you do? If you're interested in the data distribution, just show the full distributions using density plots, violin plots, histograms, or their alternatives. If you're interested in the uncertainty of a parameter, you could show confidence intervals or the posterior distribution. Unlike standard deviation or standard error, these options can be asymmetric and will communicate the data distribution or uncertainty more accurately. If you must use a numerical summary for a data distribution without referring to a graph, you could use quartiles instead of mean ± SD.
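A small Python sketch with hypothetical skewed non-negative data illustrates the problem: the 'mean - SD' endpoint falls below zero even though no observation can, while quartiles stay inside the support of the data:

```python
import random
import statistics

random.seed(2)
# Hypothetical right-skewed, strictly non-negative data
data = [random.expovariate(0.5) ** 2 for _ in range(10_000)]

mean = statistics.fmean(data)
sd = statistics.stdev(data)
q1, med, q3 = statistics.quantiles(data, n=4)

# The 'mean - SD' endpoint is negative even though every value is >= 0,
# while the quartile summary respects the support of the data.
print(f"mean +/- SD : {mean:.2f} +/- {sd:.2f}  -> lower end {mean - sd:.2f}")
print(f"quartiles   : {q1:.2f} / {med:.2f} / {q3:.2f}")
```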
19,389
Can I use 'mean ± SD' for non-negative data when SD is higher than mean?
'Mean ± SD' is notation. Once you define it in a manner visible to the reader, you can use it in that manner regardless of the values.

When your statistics are skewed enough that they are positive with a standard deviation larger than the mean, the question is whether describing them in terms of mean and standard deviation is really sensible, because cumulants other than the mean and variance will be highly relevant for the distribution, making it significantly different from a normal distribution (for which the mean and variance are the only non-zero cumulants). Chances are that the logarithm of your positive random variable comes quite a bit closer to a normal distribution, and parameterising that makes more sense.
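A small sketch of the log-transform point, assuming roughly log-normal data (NumPy assumed; the simulated sample is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # positive, right-skewed

m, s = x.mean(), x.std(ddof=1)                     # SD exceeds mean here
lm, ls = np.log(x).mean(), np.log(x).std(ddof=1)   # close to (0, 1): well-behaved

print(f"raw : {m:.2f} ± {s:.2f}")   # mean ± SD implies negative values
print(f"log : {lm:.2f} ± {ls:.2f}") # symmetric and meaningful on the log scale
```

On the log scale, mean ± SD is a faithful two-number summary; back-transforming its endpoints gives an asymmetric, strictly positive interval for the original data.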
19,390
Can I use 'mean ± SD' for non-negative data when SD is higher than mean?
In cases like yours, I've reported the median and the quartiles, as mkt suggested. But, inspired by OverLordGoldDragon's answer and motivated by the wish to keep the idea of the mean and sd and, at the same time, not to deviate too much from established statistical practices, I propose an alternative. I don't know whether it's been used so far, so I'll call it the "decomposed standard deviation". It also allows you to report the results as three numbers, in the form $\overline x ~ (+sd_A; -sd_B)$.

Standard deviation is:
$$ sd = \sqrt{\frac{1}{N-1}\sum_i (x_i - \overline x)^2}. $$
The sum can be decomposed into the sum over the elements above and below $\overline x$:
$$ sd = \sqrt{\frac{1}{N-1} \left( \sum_{i:x_i \gt \overline x} (x_i - \overline x)^2 + \sum_{i:x_i \lt \overline x} (x_i - \overline x)^2 \right)} $$
(I've left out the summation over $i: x_i = \overline x$, as it evaluates to zero).

Define:
$$ \begin{align} sd_A &= \sqrt{\frac{1}{N_A + \frac{N_0-1}{2}} \sum_{i:x_i \gt \overline x} (x_i - \overline x)^2 }, \\ sd_B &= \sqrt{\frac{1}{N_B + \frac{N_0-1}{2}} \sum_{i:x_i \lt \overline x} (x_i - \overline x)^2 } \end{align} $$
with $N_A$, $N_B$, and $N_0$ being the number of elements "above", "below" and "equal to" the mean, respectively. Then, the standard deviation can be rewritten as:
$$ sd = \sqrt{\frac{(N_A + \frac{N_0-1}{2})sd_A^2 + (N_B + \frac{N_0-1}{2})sd_B^2} {N-1} }. $$

If no values are exactly equal to $\overline x$, which is very likely in practice, the formulae simplify to:
$$ \begin{align} sd_A &= \sqrt{\frac{1}{N_A - 0.5} \sum_{i:x_i \gt \overline x} (x_i - \overline x)^2 }, \\ sd_B &= \sqrt{\frac{1}{N_B - 0.5} \sum_{i:x_i \lt \overline x} (x_i - \overline x)^2 }, \\ sd &= \sqrt{\frac{(N_A -0.5)sd_A^2 + (N_B - 0.5)sd_B^2} {N-1} }. \end{align} $$

It is easy to show that for perfectly symmetric data, $sd$, $sd_A$, and $sd_B$ are exactly the same. For asymmetric data, they differ. Also, it is easy to see that for non-negative data, $\overline x - sd_B$ is always non-negative.

Below is a simple graphical example (a histogram with $sd_A$ and $sd_B$ marked on either side of the mean); you'd report the result as $1.56 ~ (+3.08; -0.93)$. This makes the asymmetry in the data explicit and, at the same time, avoids the implication that the data can be negative.

Below is the Python code to reproduce the figure and play with the data:

import matplotlib.pyplot as plt
import numpy as np

def decomposed_std(x):
    m = x.mean()
    xA = x[x > m]
    xB = x[x < m]
    nA = len(xA)
    nB = len(xB)
    n0 = len(x[x == m])
    sA = np.sqrt(np.sum((xA - m)**2) / (nA + (n0 - 1)/2))
    sB = np.sqrt(np.sum((xB - m)**2) / (nB + (n0 - 1)/2))
    # the two are equal:
    # np.sqrt((sA**2 * (nA + (n0-1)/2) + sB**2 * (nB + (n0-1)/2)) / (n-1))
    # x.std(ddof=1)
    return sA, sB

np.random.seed(0)
x = np.exp(np.random.normal(0, 1, 1000))
m = x.mean()
x = np.hstack([x, [m, m, m, m, m]])  # append some averages
s = x.std(ddof=1)
sA, sB = decomposed_std(x)

h = plt.hist(x, bins=20, fc='skyblue', ec='steelblue')
y_top = max(h[0])
x_right = max(h[1])
plt.vlines(x.mean(), 0, 1.1*y_top, colors='chocolate')
plt.plot([m - sB, m], [1.025*y_top]*2, '-', color='seagreen')
plt.plot([m + sA, m], [1.025*y_top]*2, '-', color='firebrick')
plt.grid(linestyle=':')
plt.text(0.8*m, 1.05*y_top, f'$sd_B = {sB:.2f}$', horizontalalignment='right')
plt.text(1.2*m, 1.05*y_top, f'$sd_A = {sA:.2f}$', horizontalalignment='left')
plt.text(x_right, 1.1*y_top,
         '$\\overline{x} = ' f'{m:.2f}$\n' '$sd = ' f'{s:.2f}$',
         horizontalalignment='right', verticalalignment='top')
plt.title('Histogram with decomposed standard deviation')
plt.xlim(-2, 1.05*x_right)
plt.show()
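A quick numerical check of the simplified formulae and the recombination identity, on tie-free skewed data (NumPy assumed; the exponential sample is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=500)  # continuous, so no ties with the mean

m = x.mean()
NA = np.sum(x > m)
NB = np.sum(x < m)
sdA = np.sqrt(np.sum((x[x > m] - m)**2) / (NA - 0.5))
sdB = np.sqrt(np.sum((x[x < m] - m)**2) / (NB - 0.5))

# recombine sd_A and sd_B back into the ordinary sample SD
sd_recombined = np.sqrt(((NA - 0.5)*sdA**2 + (NB - 0.5)*sdB**2) / (len(x) - 1))
print(sd_recombined, x.std(ddof=1))  # the two should agree
```

For skewed data, `sdA` and `sdB` differ, yet their weighted recombination reproduces the usual SD exactly.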
19,391
Can I use 'mean ± SD' for non-negative data when SD is higher than mean?
You use mean ± SD to summarize the distribution of your data and mean ± SE to indicate the uncertainty of your estimate of the mean. However, mean ± SD might provide a bad summary of the distribution, as seems to be the case for your data. Then you must look around for other descriptors to provide the shorthand summary. If space is not an issue, show the distribution with a histogram, density plot or whatever. It might be worth the effort to identify the distribution of your data (negative binomial, Poisson, or whatever) and provide the distribution parameters as a summary.
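A sketch of that last suggestion - fitting a candidate distribution and reporting its parameters (SciPy assumed; the gamma choice and the simulated data are just placeholders for whatever fits your data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.0, scale=3.0, size=2000)  # stand-in for the real data

# fit a candidate distribution; fix the location at 0 for non-negative data
shape, loc, scale = stats.gamma.fit(x, floc=0)
print(f"gamma fit: shape={shape:.2f}, scale={scale:.2f}")
```

The two fitted parameters then serve as the compact summary, with none of the implied-negative-values problem of mean ± SD.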
19,392
Can I use 'mean ± SD' for non-negative data when SD is higher than mean?
Sometimes for positive data it can make sense to report the mean and standard deviation of the log of the data rather than the data itself. This is arguably the best summary you can give if the data seems to follow an approximately log-normal distribution. The answer to this question probably gives a better discussion of this option than I can.
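One way to present the log-scale summary in the original units is the geometric mean with a multiplicative SD; a sketch under a log-normal assumption (NumPy assumed, the ×/÷ reporting style is one convention):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.lognormal(mean=1.0, sigma=0.5, size=5000)

gm = np.exp(np.log(x).mean())        # geometric mean
gsd = np.exp(np.log(x).std(ddof=1))  # multiplicative ("geometric") SD

# report as gm ×/÷ gsd: the resulting interval never crosses zero
lo, hi = gm / gsd, gm * gsd
print(f"{gm:.2f} ×/÷ {gsd:.2f}  -> ({lo:.2f}, {hi:.2f})")
```

Because the interval is multiplicative, it is asymmetric on the raw scale and strictly positive for positive data.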
19,393
Can I use 'mean ± SD' for non-negative data when SD is higher than mean?
If your intention is to summarize the spread of your data, then just the standard deviation, variance, range or coefficient of variation can be sufficient. In your case, you should also compute modes to check whether your data follows a multimodal distribution. However, I cannot see any reason why simply differencing the standard deviation from the mean would inform you about the dataset. I would use the median, mode and range values to summarize that dataset.

This operation is probably inspired by the construction of confidence intervals for the estimation of a population mean. However, such intervals are constructed randomly and utilize information about the distribution of the random variable in question to infer how often intervals calculated in the same manner would contain the true population parameter. In essence, this tests how stable your estimation of the mean is, which is often done to support hypothesis tests for population parameters. I see that distribution information is completely discarded here, which may be the reason that you obtain irrelevant, negative values.

Edit: I also cannot see any reason to prefer "standard deviations above and below the mean", as some answers have suggested, over computing the first and third quartiles or the interquartile range (in general, L-estimators); then again, only if your goal is to inform about the spread in your data.
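A minimal sketch of the quartile/IQR and coefficient-of-variation alternatives (NumPy assumed; the exponential sample is a placeholder for skewed non-negative data):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=4.0, size=1000)  # placeholder skewed data

q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1                      # L-estimator of spread
cv = x.std(ddof=1) / x.mean()      # coefficient of variation (scale-free)
print(f"median={np.median(x):.2f}, Q1={q1:.2f}, Q3={q3:.2f}, IQR={iqr:.2f}, CV={cv:.2f}")
```

Both summaries describe spread without implying that the data extends below zero; for an exponential distribution the CV is close to 1.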
19,394
Can I use 'mean ± SD' for non-negative data when SD is higher than mean?
It varies with context; I present one option - "directional standard deviation": compute the SD separately above and below the mean. If the goal is a measure of the spread of the data, this can work. The arithmetic mean isn't always best - one could try the median, another averaging metric, or, for sparse data, a "sparse mean" that I developed and applied on an audio task.

# x is assumed to be a 1D NumPy array
std = lambda x, x_mean: np.sqrt(1 / len(x) * np.sum((x - x_mean)**2))
x_mean = x.mean()
std_up = std(x[x >= x_mean], x_mean)
std_dn = std(x[x < x_mean], x_mean)

This was typed in a hurry and isn't polished; no consideration was given to handling x == x.mean() for equivalence with the usual SD via constant rescaling, or to whether < should be <=, but it can be done - refer to @IgorF.'s answer.

Clarification

This is simply feature engineering. It has nothing to do with statistical analysis or describing a distribution. SD (standard deviation) is a nonlinear alternative to mean absolute deviation, with a quadratic emphasis. I saw a paper compute SD from 3 samples. At first I regarded that as ludicrous. Then I figured it just functions as a spread measure, where another metric wouldn't be much better. Whether there are better ways to handle asymmetry is a separate topic. Sometimes SD is best for similar reasons it's normally best. I can imagine it being a thresholding feature in skewed non-negative data.

Connection to question

I read the question, going off of the title and most of the body, as: "I want to use SD but want to stay non-negative". Hence, a premise is that SD is desired - making any objections to SD itself irrelevant. Of course, the question can also be read as "alternatives to SD" (as it does in its last sentence), but I did say, "I present one option". More generally, any objections to my metric also hold for SD itself. There's one exception, but often it's an advantage rather than a disadvantage: each number in my metric has less confidence, per being derived from less data. This can be an advantage, since each side is then described only by the points belonging to that sub-distribution. Imagine SDD = "standard deviation, directional"; in a figure of example distributions (not reproduced here), for the right-most example, points to the right of the mean are only a detriment to describing points to the left, and the mismatch in distributions can be much worse than shown there (though it does assume "mean" is the right anchor, hence the importance of choosing it right).

Formalizing

@IgorF.'s answer shows exactly what I intended, minus the handling of x == x.mean(), which I'd not considered at the time; also, I favor 1/N over 1/(N-1). I build this section off of that answer. What I dislike about that mean handling is:

[-2, -1, -1, 0,     1, 1, 2] --> (1.31, 1.31), 1.31
[-2, -1, -1, 1e-15, 1, 1, 2] --> (1.41, 1.31), 1.31

showing data --> SDD, SD. That is, the sequences barely differ, yet their results differ significantly - that's an instability. SD itself has other such weaknesses, and it's fair to call this one a weakness of SDD; generally, caution is due with mean-based metrics.

If the relative spread of the two sub-distributions is desired, I propose an alternative:

1. Replace $\geq$ and $\leq$ with $\gtrapprox$ and $\lessapprox$, as in "points within the mean that won't change the pre-normalized SD much", where "pre-normalized" means without the square root and constant rescaling. Do this for each side separately.
2. Don't double-count - instead, points which qualify both for > mean and ~ mean are counted toward ~ mean alone, and halve the rescaling contribution of the ~ mean points (as in @IgorF.'s). This assures SDD = SD for symmetric distributions.

"Won't change much" becomes a heuristic, and there are many ways to do it - I simply go with abs(x - mean)**2 < current_sd / 50:

[-2, -1, -1, 0,     1, 1, 2] --> (1.31, 1.31), 1.31
[-2, -1, -1, 1e-15, 1, 1, 2] --> (1.31, 1.31), 1.31
[-2, -1, -1, 3e-1,  1, 1, 2] --> (1.35, 1.29), 1.31
[-2, -1, -1, 5e-1,  1, 1, 2] --> (1.48, 1.19), 1.32

It can be made ideal in the sense that we can include points based on not changing sd_up or sd_dn by some percentage, guaranteeing stability, but I've not explored how to do so compute-efficiently. I've not checked that this satisfies various SD properties exactly, so take it with a grain of salt.

Code

import numpy as np

def std_d(x, mean_fn=np.mean, div=50):
    # initial estimate
    mu = mean_fn(x)
    idxs0 = np.where(x < mu)[0]
    idxs1 = np.where(x > mu)[0]
    sA = np.sum((x[idxs0] - mu)**2)
    sB = np.sum((x[idxs1] - mu)**2)

    # account for points near mean
    idxs0n = np.where(abs(x - mu)**2 < sA/div)[0]
    idxs1n = np.where(abs(x - mu)**2 < sB/div)[0]
    nmatch0 = sum(1 for b in idxs0n for a in idxs0 if a == b)
    nmatch1 = sum(1 for b in idxs1n for a in idxs1 if a == b)
    NA = len(idxs0) - nmatch0
    NB = len(idxs1) - nmatch1
    N0A = len(idxs0n)
    N0B = len(idxs1n)
    sA += np.sum((x[idxs0n] - mu)**2)
    sB += np.sum((x[idxs1n] - mu)**2)

    # finalize
    kA = 1 / (NA + N0A/2)
    kB = 1 / (NB + N0B/2)
    sdA = np.sqrt(kA * sA)
    sdB = np.sqrt(kB * sB)
    return sdA, sdB

x_all = [
    [-2, -1, -1, 0,     1, 1, 2],
    [-2, -1, -1, 1e-15, 1, 1, 2],
    [-2, -1, -1, 3e-1,  1, 1, 2],
    [-2, -1, -1, 5e-1,  1, 1, 2],
]
x_all = [np.array(x) for x in x_all]
for x in x_all:
    print(std_d(x), x.std())
19,395
What is the probability of 4 person in group of 18 can have same birth month?
You can see your argument is not correct by applying it to the standard birthday problem, where we know the probability is 50% at 23 people. Your argument would give $\frac{{23\choose 2}{365\choose 1}}{365^{23}}$, which is very small.

The usual argument is to say that if we are going to avoid a coincidence, we have $365-(k-1)$ choices for the $k$th person's birthday, so the probability of no coincidence in $K$ people is $\prod_{k=1}^K \frac{365-k+1}{365}$.

Unfortunately, there is no such simple argument for more than two coincident birthdays. There is only one way (up to symmetry) for $k$ people to have no two-way coincidence, but there are many, many ways to have no four-way coincidence, so the computation as you add people is not straightforward. That's why R provides pbirthday() and why it is still only an approximation. I'd certainly hope this wasn't a class assignment.

The reason your argument is not correct is that it undercounts the number of ways you can get 4 matching months. For example, it's not just that you can choose any month of the 12 as the matching one. You can also relabel the other 11 months arbitrarily (giving you a factor of 11!). And your denominator of $12^{18}$ implies that the ordering of the people matters, so there are more than $18\choose 4$ orderings that have 4 matches.
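The no-coincidence product is easy to compute directly; a sketch in plain Python (the function name is mine):

```python
def p_shared(n, days=365):
    """P(at least two of n people share a birthday), via the no-coincidence product."""
    p_distinct = 1.0
    for k in range(1, n + 1):
        p_distinct *= (days - k + 1) / days
    return 1 - p_distinct

print(p_shared(23))          # just over 0.5: the classic threshold at 23 people
print(p_shared(4, days=12))  # same argument with 12 months instead of 365 days
```

Note this handles only the two-way coincidence; for four-way coincidences no such simple product exists, which is the point above.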
19,396
What is the probability of 4 person in group of 18 can have same birth month?
The correct way to solve the 2-coincident problem is to calculate the probability of 2 people not sharing the same birthday month. For this example, the second person has an 11/12 chance of not sharing the same month as the first. The third person has a 10/12 chance of not sharing the same month as 1 & 2. The fourth person has a 9/12 chance of not sharing the same month as 1, 2 & 3. Thus the chance of no one sharing the same month is $(11*10*9)/12^3$, which is about 57% - or a 43% chance of at least 2 sharing the same month. I can't provide advice on how to extend this manual calculation to the 3- or 4-coincident problem.

If you know R, there is the pbirthday() function to calculate this:

pbirthday(18, classes=12, coincident = 4)
[1] 0.5537405

So for 18 people there is a 55% chance that at least 4 people will share the same month.

Here is a good source for understanding the problem: https://www.math.ucdavis.edu/~tracy/courses/math135A/UsefullCourseMaterial/birthday.pdf

Edit: For completeness, here is a quick and dirty simulation in R:

four <- 0      # count for exactly 4
fourmore <- 0  # count for 4 or more
count <- 100000
for (i in 1:count) {
  # sample 12 objects, eighteen times
  m <- sample(1:12, 18, replace=TRUE)
  if (any(table(m) >= 4)) {fourmore <- fourmore + 1}
  if (any(table(m) == 4)) {four <- four + 1}
}
print(fourmore/count)
#[1] 0.57768
print(four/count)
#[1] 0.45192
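The same Monte Carlo check can be sketched in Python (NumPy assumed; the trial count and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 20_000
fourmore = 0
for _ in range(trials):
    months = rng.integers(1, 13, size=18)      # 18 birth months, coded 1..12
    counts = np.bincount(months, minlength=13)
    if counts.max() >= 4:                      # some month shared by >= 4 people
        fourmore += 1

p = fourmore / trials
print(p)  # should land near the R simulation's estimate above
```

As with the R run, the simulated probability sits a little above the pbirthday() approximation, consistent with pbirthday() being only approximate for multi-way coincidences.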
What is the probability that 4 people in a group of 18 have the same birth month?
There are $43$ partitions of $18$ into $12$ non-negative parts where the largest part is $4$, another $298$ partitions where the largest part is greater than $4$, and $25$ partitions where the largest part is less than $4$. For example, one partition is $$18=4+3+3+2+2+1+1+1+1+0+0+0\\= 1\times 4+2\times 3+2 \times2 + 4\times 1 + 3 \times 0$$ The probability of that particular partition pattern occurring among the birth months of your team is $\dfrac{\dfrac{18!}{4!^1 3!^2 2!^2 1!^4 0!^3} \times \dfrac{12!}{1!\, 2!\, 2!\, 4!\, 3!}}{12^{18}} \approx 0.05786545$. Add up the probabilities where the largest part of the partition is $4$ and you get about $0.4165314$; add them up where the largest part is $4$ or more and you get about $0.5771871$. These are the answers to your question. More specifically, the probabilities for the different frequencies of the most frequent month are as follows. $4$ turns out to be the most likely value and the median (the mean is about $3.76$).

Freq of most freq month   Probability
 1                        0
 2                        0.0138050
 3                        0.4090079
 4                        0.4165314
 5                        0.1297855
 6                        0.0262102
 7                        0.0040923
 8                        0.0005116
 9                        0.0000517
10                        0.00000423
11                        0.000000280
12                        0.0000000148
13                        0.000000000622
14                        0.0000000000202
15                        0.000000000000490
16                        0.00000000000000834
17                        0.0000000000000000892
18                        0.000000000000000000451
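For a concrete check, the probability of the example partition can be evaluated directly in R; this is a sketch that simply mirrors the formula above:

```r
# probability of the partition 18 = 4+3+3+2+2+1+1+1+1+0+0+0:
# ways to assign 18 people to that month pattern ...
arrangements_people <- factorial(18) /
  (factorial(4)^1 * factorial(3)^2 * factorial(2)^2 *
   factorial(1)^4 * factorial(0)^3)
# ... times ways to assign the part sizes {4, 3x2, 2x2, 1x4, 0x3} to 12 months
arrangements_months <- factorial(12) /
  (factorial(1) * factorial(2) * factorial(2) * factorial(4) * factorial(3))
p <- arrangements_people * arrangements_months / 12^18
p  # approximately 0.05786545
```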
What is the probability that 4 people in a group of 18 have the same birth month?
While Henry has already given a way to compute the number exactly by counting all the partitions, it might be interesting to know about two approximate methods. In addition, there is an alternative exact computation based on conditional Poisson distributed variables.

Computational simulation

You won't easily be able to compute all $12^{18}$ possibilities (and it won't be easy to scale up the problem), but you can have a computer randomly simulate a subset of the possible ways and obtain a distribution from those simulations.

# function to sample 18 birth months
# and get the maximum number of shared months
monthsample <- function() {
  x <- sample(1:12, 18, replace = TRUE)  # sample
  n <- max(table(x))                     # get the maximum
  return(n)
}

# sample a million times
y <- replicate(10^6, monthsample())

# obtain the frequency using a histogram
h <- hist(y, breaks = seq(-0.5, 18.5, 1))

Approximation with Poissonization

The frequency of the number of birthdays in a particular month is approximately Poisson/binomial distributed. Based on that we can compute the probability that the number of birthdays in a particular month won't exceed some value, and by taking the twelfth power we compute the probability that this happens for all twelve months. Note: here we neglect the fact that the numbers of birthdays are correlated, so this is obviously not exact.

# approximation with Poisson distribution
t <- 0:18
z <- ppois(t, 1.5)^12  # P(max <= t)
dz <- diff(z)          # P(max = t+1)

Computation with Bruce Levin's representation

In the comments whuber has pointed to the pmultinom package. This package is based on Bruce Levin's 1981 'A Representation for Multinomial Cumulative Distribution Functions' in Ann. Statist. Volume 9. The outcome of birth months (which is more precisely distributed according to a multinomial distribution) is represented as independent Poisson distributed variables. But unlike the naive computation mentioned before, the distribution of those Poisson distributed variables is regarded as conditional on the total sum being equal to $n=18$. So above we computed $$P(X_1, X_2, \ldots , X_{12} \leq 4) = P(X_1 \leq 4) \cdot P(X_2 \leq 4) \cdot \ldots \cdot P(X_{12} \leq 4)$$ but we should have computed the conditional probability of the Poisson distributed variables all being equal to or lower than the bound, $$P(X_1, X_2, \ldots, X_{12} \leq 4 \vert X_1+ X_2+ \ldots + X_{12} = 18)$$ which introduces an extra term based on Bayes' rule: $$P(\forall i:X_i \leq 4 \vert \sum X_i = 18) = P(\forall i:X_i \leq 4) \frac{P(\sum X_i = 18 \vert \forall i:X_i \leq 4 )}{P( \sum X_i = 18)} $$ This correction factor is the ratio of the probability that a sum of truncated Poisson distributed variables equals 18, $P(\sum X_i = 18 \vert \forall i:X_i \leq 4 )$, to the probability that a sum of regular Poisson distributed variables equals 18, $P( \sum X_i = 18)$. For a small number of birth months and people in the group this truncated distribution can be computed manually:

# correction factor by Bruce Levin
correction <- function(y) {
  Nptrunc(y)[19] / dpois(18, 18)
}

Nptrunc <- function(lim) {
  # truncated Poisson distribution
  ptrunc <- dpois(0:lim, 1.5) / sum(dpois(0:lim, 1.5))
  ## vector with probabilities
  outvec <- rep(0, lim*12 + 1)
  outvec[1] <- 1
  # convolve 12 times, once for each month
  for (i in 1:12) {
    newvec <- rep(0, lim*12 + 1)
    for (k in 1:(lim + 1)) {
      newvec <- newvec + ptrunc[k] * c(rep(0, k - 1), outvec[1:(lim*12 + 1 - (k - 1))])
    }
    outvec <- newvec
  }
  outvec
}

z2 <- ppois(t, 1.5)^12 * Vectorize(correction)(t)  # P(max <= t)
z2[1:2] <- c(0, 0)
dz2 <- diff(z2)  # P(max = t+1)

Results

These approaches give the following results:

> ### simulation
> sum(y >= 4)/10^6
[1] 0.577536
> ### computation
> 1 - z[4]
[1] 0.5572514
> ### computation exact
> 1 - z2[4]
[1] 0.5771871
What is the probability that 4 people in a group of 18 have the same birth month?
It so happened that 4 team members in my group of 18 happened to share same birth month. Let's say June. What are the chances that this could happen? I'm trying to present this as a probability problem in our team meeting.

There are several other good answers here on the mathematics of computing probabilities in these "birthday problems". One point to note is that birthdays are not uniformly distributed over calendar days, so the uniformity assumption used in most analyses slightly underestimates the true probability of clusters like this. However, setting that issue aside, I would like to get a bit "meta" on you here and encourage you to think about this problem a little differently, as one that involves a great deal of "confirmation bias". Confirmation bias occurs in this context because you are more likely to take note of an outcome and seek a probabilistic analysis of that outcome if it is unusual (i.e., low probability). To put it another way, think of all the previous times in your life where you were in a room with people, learned their birth months, and the results were not unusual. In those cases, I imagine that you did not bother to come on CV.SE and ask a question about it. So the fact that you are here asking this question is an important conditioning event, which would only happen if you observed something sufficiently unusual to warrant the question. In view of this, the conditional probability of the result you observed, conditional on your presence asking this question, is quite high --- much higher than the analysis in the other answers would suggest. To examine this situation more formally, consider the following events: $$\begin{matrix} \mathcal{A}(x,y) & & & \text{Seeing } x \text{ people with same birthday month out of } y \text{ random people}, \\[6pt] \mathcal{B} & & & \text{Deciding the observed outcome warrants probabilistic investigation}. \end{matrix}$$ Most of the answers here are telling you how to estimate $\mathbb{P}(\mathcal{A}(4,18))$, but the actual probability at play here is the conditional probability $\mathbb{P}(\mathcal{A}(4,18) \vert \mathcal{B})$, which is much, much higher (and cannot really be computed here).
What is the probability that 4 people in a group of 18 have the same birth month?
The maths is way beyond me. However, this sort of thing fascinates me, so I built a spreadsheet to replicate this for 10,000 groups of 18 people, each person with a birth month generated at random. I then counted how many of these groups had exactly four people with a shared birth month. For the purists, as the question didn't specify, I also included any instances of four people sharing a birth month alongside a separate four people sharing a different birth month, and I didn't rule out three or four groups of four sharing three or four different birth months respectively. I ran this spreadsheet 50 times; the lowest result I got was 43.95%, the highest was 46.16%, and the mean was 45.05%. I'll leave it to someone more experienced to do the maths to validate this approximate outcome!
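The spreadsheet experiment can be mirrored in a few lines of R; this is a sketch of the same "exactly four share a month" count, not the original spreadsheet:

```r
# fraction of 10,000 random groups of 18 in which some month
# is shared by exactly four people
set.seed(1)  # for reproducibility
exactly_four <- replicate(10000,
                          any(table(sample(1:12, 18, replace = TRUE)) == 4))
mean(exactly_four)  # roughly 0.45, in line with the spreadsheet runs
```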