Posterior very different to prior and likelihood
Yes, this situation can arise, and it is a feature of your modeling assumptions, specifically normality in the prior and the sampling model (likelihood). If instead you had chosen a Cauchy distribution for your prior, the posterior would look much different. prior = function(x) dcauchy(x, 1.5, 0.4) like = function(x) dnorm(x,6....
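The truncated R fragment above can be sketched on a grid in Python (a minimal sketch: the prior parameters 1.5 and 0.4 and the likelihood centre of 6 come from the snippet, while the likelihood spread of 0.4 is an assumed placeholder):

```python
import numpy as np
from scipy import stats

# Grid approximation of the posterior under a heavy-tailed Cauchy prior.
x = np.linspace(-5.0, 15.0, 4001)
dx = x[1] - x[0]
prior = stats.cauchy.pdf(x, loc=1.5, scale=0.4)
like = stats.norm.pdf(x, loc=6.0, scale=0.4)   # 0.4 is an assumed sd

unnorm = prior * like
posterior = unnorm / (unnorm.sum() * dx)       # normalise on the grid

# With the heavy-tailed prior, the posterior mass sits near the data
# rather than at a compromise point between prior and likelihood.
post_mean = (x * posterior).sum() * dx
```

Swapping the Cauchy prior for a normal one with the same location and scale moves the posterior back to a compromise between the two curves.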
Posterior very different to prior and likelihood
Tony O'Hagan wrote about this situation of "Bayesian surprise" in detail for the one-sample location problem, in 1990. Basically, it depends on whether the prior or likelihood has heavier tails. If the prior has heavier tails, the posterior is happy to be way out in the tail of the prior, near the data. If the likelih...
Posterior very different to prior and likelihood
I somewhat disagree with the answers given so far - there is nothing weird about this situation. The likelihood is asymptotically normal anyway, and a normal prior is not uncommon at all. If you put both together, with the fact that prior and likelihood don't give the same answer, we have the situation we are talking a...
Posterior very different to prior and likelihood
After thinking about this for a while, my conclusion is that with bad modelling assumptions, the posterior can be a result that accords with neither prior beliefs nor the likelihood. From this, the natural result is that the posterior is not, in general, the end of the analysis. If it is the case that the posterior should...
Posterior very different to prior and likelihood
I feel like the answer that I was looking for when it came to this question is best summarized by Lesaffre and Lawson in Bayesian Biostatistics: the posterior precision is the sum of the prior and the sample precision, i.e. $$ \frac{1}{\sigma^2} = w_{0} + w_{1} $$ This shows that the posterior is more peaked than the ...
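For the conjugate normal-normal case this precision sum is a two-line computation; a minimal numeric sketch (all values illustrative, not from the answer):

```python
# Conjugate normal-normal update: the posterior precision is the sum of
# the prior precision w0 and the sampling precision w1.
prior_mean, prior_sd = 1.5, 0.4
data_mean, data_sd, n = 6.0, 1.0, 10   # assumed sample summary

w0 = 1.0 / prior_sd**2                 # prior precision
w1 = n / data_sd**2                    # sampling precision
post_var = 1.0 / (w0 + w1)
post_mean = post_var * (w0 * prior_mean + w1 * data_mean)
```

`post_var` is smaller than both the prior variance and the variance of the sample mean, which is exactly the "more peaked" point made above.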
Posterior very different to prior and likelihood
If your model is correct, observing a likelihood function falling this far from the prior is extremely unlikely. Imagine the situation before seeing the data $X_1$. Based on correct inferences from past data $X_0$, you believe that $\mu \sim N(1.6, 0.4^2)$. Suppose that the future set of data is a single normal va...
Posterior very different to prior and likelihood
I think this is actually a really interesting question. Having slept on it, I think I have a stab at an answer. The key issue is as follows: You've treated the likelihood as a Gaussian pdf. But it's not a probability distribution - it's a likelihood! What's more, you've not labelled your axis clearly. These things co...
Posterior very different to prior and likelihood
Bayes theorem is $$ p(A|B) = \frac{ p(B|A) \, p(A) }{ p(B) } $$ Based on prior knowledge you can define a prior, it can be updated using the data, then via Bayesian updating you could use the posterior as a prior for a next update, etc. Think of the prior as a way of augmenting your data with artificial data. In such a...
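The "posterior becomes the next prior" loop can be sketched for a normal mean with known noise variance (a toy sketch; all numbers illustrative):

```python
def update(mean, var, x, noise_var):
    # One conjugate normal update of the belief (mean, var) given x.
    w0, w1 = 1.0 / var, 1.0 / noise_var
    new_var = 1.0 / (w0 + w1)
    return new_var * (w0 * mean + w1 * x), new_var

# Sequential updating: each posterior is the prior for the next point.
m, v = 0.0, 1.0
data = [2.0, 2.4, 1.8]
for x in data:
    m, v = update(m, v, x, noise_var=0.5)

# One-shot update with all three points at once gives the same answer.
w0, w1 = 1.0 / 1.0, len(data) / 0.5
m_batch = (w0 * 0.0 + sum(data) / 0.5) / (w0 + w1)
v_batch = 1.0 / (w0 + w1)
```

Updating point-by-point and updating with the whole batch land on the same posterior, which is the coherence property Bayesian updating is built on.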
How to deal with an SVM with categorical attributes
If you are sure the categorical attribute is actually ordinal, then just treat it as a numerical attribute. If not, use some coding trick to turn it into a numerical attribute. According to the suggestion by the author of libsvm, one can simply use 1-of-K coding. For instance, suppose a 1-dimensional category attribute tak...
Comparison of ranked lists
Summary I share my thoughts in Details section. I think they are useful in identifying what we really want to achieve. I think that the main problem here is that you haven't defined what a rank similarity means. Therefore, no one knows which method of measuring the difference between the ranks is better. Effectively, t...
Comparison of ranked lists
Warning: it's a great question and I don't know the answer, so this is really more of a "what I would do if I had to": In this problem there are lots of degrees of freedom and lots of comparisons one can do, but with limited data it's really a matter of aggregating data efficiently. If you don't know what test to run, ...
Comparison of ranked lists
This sounds like the 'Wilcoxon signed-rank test' (wikipedia link). Assuming that the values of your ranks are from the same set (i.e. [1, 25]), then this is a paired-difference test (with the null hypothesis being that these pairs were picked randomly). NB this is a dis-similarity score! There are both R and Python implementa...
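With SciPy the test is a single call (the two rank vectors below are illustrative):

```python
import numpy as np
from scipy import stats

# Ranks of the same five items under two rankers (made-up data).
ranks_a = np.array([1, 2, 3, 4, 5])
ranks_b = np.array([2, 1, 4, 3, 5])

# Wilcoxon signed-rank test on the paired differences; the default
# zero-method discards pairs with zero difference.
stat, p = stats.wilcoxon(ranks_a, ranks_b)
```

Here the non-zero differences are (-1, 1, -1, 1), so the positive and negative rank sums are equal and the test finds no systematic shift between the two lists.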
Comparison of ranked lists
In "Sequential rank agreement methods for comparison of ranked lists" Ekstrøm et al. discuss this in detail (including a survey of existing techniques circa 2015) while introducing a new measure called "sequential rank agreement". It's available on arxiv at: https://arxiv.org/pdf/1508.06803.pdf. The abstract says it ...
Is there a Random Forest implementation that works well with very sparse data?
No, there is no RF implementation for sparse data in R. Partially because RF does not fit very well on this type of problem -- bagging and suboptimal selection of splits may waste most of the model insight on zero-only areas. Try some kernel method or better think of converting your data into some more lush representat...
Is there a Random Forest implementation that works well with very sparse data?
Actually, yes there is. It's xgboost, which is made for eXtreme gradient boosting. This is currently the package of choice for running models with sparse matrices in R for a lot of folks, and as the link above explains, you can use it for Random Forest by tweaking the parameters!
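The random-forest mode of xgboost is mostly a matter of parameters: grow many parallel trees in a single boosting round with no shrinkage. A parameter sketch (values illustrative; data loading and the actual training call are omitted):

```python
# Random-forest-style settings for xgboost (illustrative values).
params = {
    "booster": "gbtree",
    "learning_rate": 1.0,        # no shrinkage between trees
    "num_parallel_tree": 100,    # 100 trees grown per round
    "subsample": 0.8,            # row bagging, as in a random forest
    "colsample_bynode": 0.8,     # feature subsampling at each split
    "objective": "binary:logistic",
}
num_boost_round = 1              # one round of 100 parallel trees = a forest

# Sparse input: xgboost accepts a scipy.sparse CSR matrix directly, e.g.
#   dtrain = xgb.DMatrix(X_csr, label=y)
#   bst = xgb.train(params, dtrain, num_boost_round=num_boost_round)
```

With `num_boost_round` greater than 1 the same settings become a boosted forest rather than a plain random forest.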
Is there a Random Forest implementation that works well with very sparse data?
The R package "Ranger" should do. https://cran.r-proj...
Is there a Random Forest implementation that works well with very sparse data?
There is a blog called Quick-R that should help you with the basics of R. R works with packages. Each package can do something different. There is a package called "randomForest" that should be just what you are asking for. Be aware that sparse data will give problems no matter what method you apply. To my knowl...
Have the reports of the death of the t-test been greatly exaggerated?
I wouldn't say the classic one sample (including paired) and two-sample equal variance t-tests are exactly obsolete, but there's a plethora of alternatives that have excellent properties and in many cases they should be used. Nor would I say the ability to rapidly perform Wilcoxon-Mann-Whitney tests on large samples – ...
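One widely recommended alternative (the truncated answer does not get to name its list) is Welch's unequal-variance t-test, which in SciPy is just a flag:

```python
import numpy as np
from scipy import stats

# Two samples with unequal spread (illustrative data).
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=30)
b = rng.normal(0.5, 3.0, size=30)

# Classic pooled-variance t-test vs Welch's t-test.
t_pooled, p_pooled = stats.ttest_ind(a, b, equal_var=True)
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)
```

Welch's version drops the equal-variance assumption and is often suggested as the default two-sample test.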
What is the problem with using R-squared in time series models?
Some aspects of the issue: If somebody gives us a vector of numbers $\mathbf y$ and a conformable matrix of numbers $\mathbf X$, we do not need to know what is the relation between them to execute some estimation algebra, treating $y$ as the dependent variable. The algebra will result, irrespective of whether these nu...
What is the problem with using R-squared in time series models?
Some extra comments to the post above. When dealing with time series, the R-squared (or adjusted R-squared) will always be greater if the explanatory variables are not differenced. However, when it comes to out-of-time fit, the error term will be significantly higher for the non-differenced time series. This happens because of trend...
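The differencing point can be illustrated numerically: two unrelated series that share a time trend give a high R-squared in levels and a near-zero R-squared in differences (a sketch with made-up data):

```python
import numpy as np

def r_squared(x, y):
    # OLS of y on an intercept and x, returning R^2.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(42)
t = np.arange(200.0)
x = 0.5 * t + rng.normal(0.0, 5.0, 200)      # unrelated series,
y = 0.3 * t + rng.normal(0.0, 5.0, 200)      # both trending

r2_levels = r_squared(x, y)                  # inflated by the shared trend
r2_diff = r_squared(np.diff(x), np.diff(y))  # trend removed by differencing
```

The levels regression reports an impressive fit although the two noise processes are independent; the differenced regression exposes this as spurious.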
Intraclass correlation (ICC) for an interaction?
The R model formula lmer(measurement ~ 1 + (1 | subject) + (1 | site), mydata) fits the model $$ Y_{ijk} = \beta_0 + \eta_{i} + \theta_{j} + \varepsilon_{ijk} $$ where $Y_{ijk}$ is the $k$'th measurement from subject $i$ at site $j$, $\eta_{i}$ is the subject $i$ random effect, $\theta_{j}$ is the site $j$ random ef...
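Given estimated variance components from such a fit, the ICCs are simple ratios; a numeric sketch with made-up component values:

```python
# Variance components from the crossed random-effects model above
# (illustrative values, e.g. as read off lmer's VarCorr output).
var_subject, var_site, var_resid = 4.0, 1.0, 2.0
total = var_subject + var_site + var_resid

# Correlation of two measurements on the same subject at different sites:
icc_subject = var_subject / total
# Correlation of two measurements on the same subject at the same site:
icc_subject_site = (var_subject + var_site) / total
```

Under this model the same-subject-same-site correlation is always at least as large as the same-subject correlation, since it adds the shared site component.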
Why does Q-Learning use epsilon-greedy during testing?
In the Nature paper they mention: The trained agents were evaluated by playing each game 30 times for up to 5 min each time with different initial random conditions (‘noop’; see Extended Data Table 1) and an e-greedy policy with epsilon 0.05. This procedure is adopted to minimize the possibility of overfitting ...
Why does Q-Learning use epsilon-greedy during testing?
The answer is there in the paper itself. They used $\epsilon = 0.05$ to avoid overfitting. This model is used as a baseline. And yobibyte mentioned in the comment they do random starts for the same reason. And then the algorithm is evaluated for performance against a human expert. The algorithm has no model of its opp...
Why does Q-Learning use epsilon-greedy during testing?
I think the purpose of testing is to get a sense of how the system responds in real-world situations. Option 1: They might actually put some noise in the real world play - making truly random moves. This could make $\epsilon$-policy switching perfectly reflective of actual play. Option 2: If they are worried about...
Why does Q-Learning use epsilon-greedy during testing?
The reason for using $\epsilon$-greedy during testing is that, unlike in supervised machine learning (for example image classification), in reinforcement learning there is no unseen, held-out data set available for the test phase. This means the algorithm is tested on the very same setup that it has been trained on. No...
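For reference, the evaluation policy itself is tiny; a sketch of epsilon-greedy selection over tabular Q-values (the Q-values and epsilon below are illustrative, not from the paper):

```python
import random

def epsilon_greedy(q_values, epsilon=0.05, rng=random):
    # With probability epsilon take a uniformly random action,
    # otherwise take the greedy (argmax) action.
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

random.seed(0)
action = epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.05)
```

Setting `epsilon=0` recovers the purely greedy test policy the question asks about; the small positive value injects the randomness the answers above discuss.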
Bridge penalty vs. Elastic Net regularization
How bridge regression and elastic net differ is a fascinating question, given their similar-looking penalties. Here's one possible approach. Suppose we solve the bridge regression problem. We can then ask how the elastic net solution would differ. Looking at the gradients of the two loss functions can tell us something...
Can the mean squared error be used for classification?
Many classifiers can predict continuous scores. Often, continuous scores are intermediate results that are only converted to class labels (usually by thresholding) as the very last step of the classification. In other cases, posterior probabilities for the class membership can be calculated (e.g. discriminant analysi...
Can the mean squared error be used for classification?
For probability estimates $\hat{\pi}$ you would want to compute not the MSE (the log likelihood of a Normal random variable) but instead the likelihood of a Bernoulli random variable, $L=\prod_i \hat{\pi}_i^{y_i} (1-\hat{\pi}_i)^{1-y_i}$. This likelihood is for a binary response, which is assumed to have a Bernoulli distribut...
Can the mean squared error be used for classification?
For probability estimates $\hat{\pi}$ you would want to compute not the MSE (the log likelihood of a Normal random variable) but instead the likelihood of a Bernoulli random variable, $L=\prod_i \hat{\pi}_i
Can the mean squared error be used for classification? For probability estimates $\hat{\pi}$ you would want to compute not the MSE (the log likelihood of a Normal random variable) but instead the likelihood of a Bernoulli random variable, $L=\prod_i \hat{\pi}_i^{y_i} (1-\hat{\pi}_i)^{1-y_i}$. This likelihood is for a binary r...
Can the mean squared error be used for classification? For probability estimates $\hat{\pi}$ you would want to compute not the MSE (the log likelihood of a Normal random variable) but instead the likelihood of a Bernoulli random variable, $L=\prod_i \hat{\pi}_i
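The Bernoulli likelihood above is typically evaluated on the log scale for numerical stability. A minimal sketch (the function name and toy inputs are illustrative, not from the answer):

```python
import math

def bernoulli_log_likelihood(y, pi_hat):
    """log L = sum_i [ y_i*log(pi_i) + (1 - y_i)*log(1 - pi_i) ]."""
    return sum(yi * math.log(p) + (1 - yi) * math.log(1 - p)
               for yi, p in zip(y, pi_hat))

# Better-calibrated probabilities yield a higher log-likelihood.
good = bernoulli_log_likelihood([1, 0, 1], [0.9, 0.1, 0.8])
poor = bernoulli_log_likelihood([1, 0, 1], [0.6, 0.4, 0.5])
```

Maximizing this quantity is equivalent to minimizing log-loss, which is why it is preferred over MSE when scoring probability estimates.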
10,829
Can the mean squared error be used for classification?
Technically you can, but the MSE function is non-convex for binary classification. Thus, if a binary classification model is trained with an MSE cost function, it is not guaranteed to minimize the cost function. Also, using MSE as a cost function assumes a Gaussian distribution, which is not the case for binary classific...
Can the mean squared error be used for classification?
Technically you can, but the MSE function is non-convex for binary classification. Thus, if a binary classification model is trained with an MSE cost function, it is not guaranteed to minimize the cost f
Can the mean squared error be used for classification? Technically you can, but the MSE function is non-convex for binary classification. Thus, if a binary classification model is trained with an MSE cost function, it is not guaranteed to minimize the cost function. Also, using MSE as a cost function assumes a Gaussian ...
Can the mean squared error be used for classification? Technically you can, but the MSE function is non-convex for binary classification. Thus, if a binary classification model is trained with an MSE cost function, it is not guaranteed to minimize the cost f
10,830
Can the mean squared error be used for classification?
I don't quite see how... successful classification is a binary variable (correct or not), so it is difficult to see what you would square. Generally classifications are measured on indicators such as percentage correct, when a classification that has been estimated from a training set, is applied to a testing set that ...
Can the mean squared error be used for classification?
I don't quite see how... successful classification is a binary variable (correct or not), so it is difficult to see what you would square. Generally classifications are measured on indicators such as
Can the mean squared error be used for classification? I don't quite see how... successful classification is a binary variable (correct or not), so it is difficult to see what you would square. Generally classifications are measured on indicators such as percentage correct, when a classification that has been estimated...
Can the mean squared error be used for classification? I don't quite see how... successful classification is a binary variable (correct or not), so it is difficult to see what you would square. Generally classifications are measured on indicators such as
10,831
What is the distribution of the ratio of two Poisson random variables?
I think you're going to have a problem with that. Because the variable Y will have zeros, X/Y will have some undefined values, so you won't get a distribution.
What is the distribution of the ratio of two Poisson random variables?
I think you're going to have a problem with that. Because the variable Y will have zeros, X/Y will have some undefined values, so you won't get a distribution.
What is the distribution of the ratio of two Poisson random variables? I think you're going to have a problem with that. Because the variable Y will have zeros, X/Y will have some undefined values, so you won't get a distribution.
What is the distribution of the ratio of two Poisson random variables? I think you're going to have a problem with that. Because the variable Y will have zeros, X/Y will have some undefined values, so you won't get a distribution.
10,832
What is the distribution of the ratio of two Poisson random variables?
Because the ratio $X/Y$ is not well defined when $Y = 0$, we redefine it through a properly measurable event: $$ \mathbb{P}\left[\frac{X}{Y} \leq r \right] := \mathbb{P}\left[X \leq r Y\right]\\ = \sum_{y = 0}^\infty \sum_{x=0}^{\left\lfloor ry \right\rfloor} \frac{\lambda_{2}^y }{y!}e^{-\lambda_2} \frac...
What is the distribution of the ratio of two Poisson random variables?
Because the ratio $X/Y$ is not well defined when $Y = 0$, we redefine it through a properly measurable event: $$ \mathbb{P}\left[\frac{X}{Y} \leq r \right] := \mathbb{P}\left[X \leq r Y
What is the distribution of the ratio of two Poisson random variables? Because the ratio $X/Y$ is not well defined when $Y = 0$, we redefine it through a properly measurable event: $$ \mathbb{P}\left[\frac{X}{Y} \leq r \right] := \mathbb{P}\left[X \leq r Y\right]\\ = \sum_{y = 0}^\infty \sum_{x=0}^{\left...
What is the distribution of the ratio of two Poisson random variables? Because the ratio $X/Y$ is not well defined when $Y = 0$, we redefine it through a properly measurable event: $$ \mathbb{P}\left[\frac{X}{Y} \leq r \right] := \mathbb{P}\left[X \leq r Y
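The double sum above can be evaluated numerically by truncating the outer sum once the Poisson tail of $Y$ is negligible. A sketch (the truncation point and rates are illustrative choices):

```python
import math

def poisson_pmf(k, lam):
    # stable evaluation via logs
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def ratio_cdf(r, lam1, lam2, y_max=60):
    """P[X <= r*Y] for X ~ Poisson(lam1), Y ~ Poisson(lam2),
    truncating the outer sum at y_max (fine when lam2 << y_max)."""
    total = 0.0
    for y in range(y_max + 1):
        inner = sum(poisson_pmf(x, lam1) for x in range(math.floor(r * y) + 1))
        total += poisson_pmf(y, lam2) * inner
    return total
```

Note that the $y = 0$ term contributes $P[Y=0]\,P[X=0]$, consistent with reading the event $\{X \le r \cdot 0\}$ as $\{X = 0\}$.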
10,833
t-SNE versus MDS
PCA selects influential dimensions by eigenanalysis of the N data points themselves, while MDS selects influential dimensions by eigenanalysis of the $N^2$ data points of a pairwise distance matrix. This has the effect of highlighting the deviations from uniformity in the distribution. Considering the distance matrix ...
t-SNE versus MDS
PCA selects influential dimensions by eigenanalysis of the N data points themselves, while MDS selects influential dimensions by eigenanalysis of the $N^2$ data points of a pairwise distance matrix. T
t-SNE versus MDS PCA selects influential dimensions by eigenanalysis of the N data points themselves, while MDS selects influential dimensions by eigenanalysis of the $N^2$ data points of a pairwise distance matrix. This has the effect of highlighting the deviations from uniformity in the distribution. Considering the...
t-SNE versus MDS PCA selects influential dimensions by eigenanalysis of the N data points themselves, while MDS selects influential dimensions by eigenanalysis of the $N^2$ data points of a pairwise distance matrix. T
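The contrast drawn above (PCA eigen-analyzes the data matrix; MDS eigen-analyzes a pairwise distance matrix) can be illustrated with classical (metric) MDS, which double-centers the squared distance matrix and keeps its top eigenpairs. A sketch; the function name and toy configuration are my own:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS via eigenanalysis of the double-centered
    squared distance matrix D (n x n, symmetric)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix recovered from distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

On exact Euclidean distances this recovers the original point configuration up to rotation and translation, so the reconstructed pairwise distances match the input.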
10,834
Estimating the most important features in a k-means cluster partition
One way to quantify the usefulness of each feature (= variable = dimension), from the book Burns, Robert P., and Richard Burns. Business research methods and statistics using SPSS. Sage, 2008. (mirror), usefulness being defined by the features' discriminative power to tell clusters apart. We usually examine the means...
Estimating the most important features in a k-means cluster partition
One way to quantify the usefulness of each feature (= variable = dimension), from the book Burns, Robert P., and Richard Burns. Business research methods and statistics using SPSS. Sage, 2008. (mirror
Estimating the most important features in a k-means cluster partition One way to quantify the usefulness of each feature (= variable = dimension), from the book Burns, Robert P., and Richard Burns. Business research methods and statistics using SPSS. Sage, 2008. (mirror), usefulness being defined by the features' discr...
Estimating the most important features in a k-means cluster partition One way to quantify the usefulness of each feature (= variable = dimension), from the book Burns, Robert P., and Richard Burns. Business research methods and statistics using SPSS. Sage, 2008. (mirror
10,835
Estimating the most important features in a k-means cluster partition
I can think of two other possibilities that focus more on which variables are important to which clusters. Multi-class classification. Consider the objects that belong to cluster x members of the same class (e.g., class 1) and the objects that belong to other clusters members of a second class (e.g., class 2). Train a...
Estimating the most important features in a k-means cluster partition
I can think of two other possibilities that focus more on which variables are important to which clusters. Multi-class classification. Consider the objects that belong to cluster x members of the sam
Estimating the most important features in a k-means cluster partition I can think of two other possibilities that focus more on which variables are important to which clusters. Multi-class classification. Consider the objects that belong to cluster x members of the same class (e.g., class 1) and the objects that belon...
Estimating the most important features in a k-means cluster partition I can think of two other possibilities that focus more on which variables are important to which clusters. Multi-class classification. Consider the objects that belong to cluster x members of the sam
10,836
Estimating the most important features in a k-means cluster partition
I faced this problem before and developed two possible methods to find the most important features responsible for each K-Means cluster sub-optimal solution. Focusing on each centroid’s position and the dimensions responsible for the highest Within-Cluster Sum of Squares minimization Converting the problem into class...
Estimating the most important features in a k-means cluster partition
I faced this problem before and developed two possible methods to find the most important features responsible for each K-Means cluster sub-optimal solution. Focusing on each centroid’s position and
Estimating the most important features in a k-means cluster partition I faced this problem before and developed two possible methods to find the most important features responsible for each K-Means cluster sub-optimal solution. Focusing on each centroid’s position and the dimensions responsible for the highest Within-...
Estimating the most important features in a k-means cluster partition I faced this problem before and developed two possible methods to find the most important features responsible for each K-Means cluster sub-optimal solution. Focusing on each centroid’s position and
10,837
Estimating the most important features in a k-means cluster partition
Here is a very simple method. Note that the squared Euclidean distance between two cluster centers is a sum of squared differences between individual features. We can then just use the squared difference as the weight for each feature.
Estimating the most important features in a k-means cluster partition
Here is a very simple method. Note that the squared Euclidean distance between two cluster centers is a sum of squared differences between individual features. We can then just use the squared difference as the
Estimating the most important features in a k-means cluster partition Here is a very simple method. Note that the squared Euclidean distance between two cluster centers is a sum of squared differences between individual features. We can then just use the squared difference as the weight for each feature.
Estimating the most important features in a k-means cluster partition Here is a very simple method. Note that the squared Euclidean distance between two cluster centers is a sum of squared differences between individual features. We can then just use the squared difference as the
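This weighting can be sketched in a few lines of NumPy (the function name and example centers are illustrative):

```python
import numpy as np

def feature_weights(center_a, center_b):
    """Per-feature squared differences between two cluster centers,
    normalized so the weights sum to 1. The squared Euclidean
    distance between the centers is the sum of these terms."""
    d2 = (np.asarray(center_a, float) - np.asarray(center_b, float)) ** 2
    return d2 / d2.sum()

w = feature_weights([0.0, 0.0, 0.0], [3.0, 4.0, 0.0])
```

Here the second feature gets the largest weight because it separates the two centers the most, and a feature that does not differ between centers gets weight zero.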
10,838
sum of noncentral Chi-square random variables
As Glen_b noted in the comments, if the variances are all the same you end up with a scaled noncentral chi-squared. If not, there is a concept of a generalized chi-squared distribution, i.e. $x^T A x$ for $x \sim N(\mu, \Sigma)$ and $A$ fixed. In this case, you have the special case of diagonal $\Sigma$ ($\Sigma_{ii} =...
sum of noncentral Chi-square random variables
As Glen_b noted in the comments, if the variances are all the same you end up with a scaled noncentral chi-squared. If not, there is a concept of a generalized chi-squared distribution, i.e. $x^T A x$
sum of noncentral Chi-square random variables As Glen_b noted in the comments, if the variances are all the same you end up with a scaled noncentral chi-squared. If not, there is a concept of a generalized chi-squared distribution, i.e. $x^T A x$ for $x \sim N(\mu, \Sigma)$ and $A$ fixed. In this case, you have the spe...
sum of noncentral Chi-square random variables As Glen_b noted in the comments, if the variances are all the same you end up with a scaled noncentral chi-squared. If not, there is a concept of a generalized chi-squared distribution, i.e. $x^T A x$
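A quick Monte Carlo sanity check of the diagonal case described above (a sum of squares of independent normals with unequal variances, i.e. $A = I$); the means, standard deviations, and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])
sd = np.array([1.0, 2.0, 0.5])          # unequal standard deviations

x = rng.normal(mu, sd, size=(200_000, 3))
q = (x ** 2).sum(axis=1)                # draws from the generalized chi-square

# E[q] = sum_i (sd_i^2 + mu_i^2)
expected = (sd ** 2 + mu ** 2).sum()    # = 10.5 here
```

With equal `sd` entries the simulated `q` would collapse to a scaled noncentral chi-squared, matching the comment quoted in the answer.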
10,839
How to calculate p-value for multivariate linear regression
t-test With a t-test you standardize the estimated parameters by dividing them by their standard error. If the variance is an estimate then this standardized value will be distributed according to the t-distribution (otherwise, if the variance of the distribution of the errors is known, then you have a z-distribution). Say yo...
How to calculate p-value for multivariate linear regression
t-test With a t-test you standardize the estimated parameters by dividing them by their standard error. If the variance is an estimate then this standardized value will be distributed according to the t-dis
How to calculate p-value for multivariate linear regression t-test With a t-test you standardize the estimated parameters by dividing them by their standard error. If the variance is an estimate then this standardized value will be distributed according to the t-distribution (otherwise, if the variance of the distribution of...
How to calculate p-value for multivariate linear regression t-test With a t-test you standardize the estimated parameters by dividing them by their standard error. If the variance is an estimate then this standardized value will be distributed according to the t-dis
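The recipe above (estimate, divide by the standard error, compare to a t-distribution with $n - p$ degrees of freedom) can be sketched as follows. The simulated data and seed are illustrative, and `scipy` is assumed to be available for the t-distribution tail probability:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)      # true slope 1.5

X = np.column_stack([np.ones(n), x])        # design matrix with intercept
beta = np.linalg.solve(X.T @ X, X.T @ y)    # OLS estimates
resid = y - X @ beta
df = n - X.shape[1]
s2 = resid @ resid / df                     # estimated error variance
se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())

t_stat = beta / se
p_values = 2 * stats.t.sf(np.abs(t_stat), df)   # two-sided p-values
```

The slope's p-value should be tiny here since the true effect is large relative to its standard error.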
10,840
Why isn't Akaike information criterion used more in machine learning?
AIC and BIC are used, e.g. in stepwise regression. They are actually part of a larger class of "heuristics", which are also used. For example the DIC (Deviance Information Criterion) is often used in Bayesian model selection. However, they are basically "heuristics". While it can be shown that both the AIC and BIC con...
Why isn't Akaike information criterion used more in machine learning?
AIC and BIC are used, e.g. in stepwise regression. They are actually part of a larger class of "heuristics", which are also used. For example the DIC (Deviance Information Criterion) is often used in
Why isn't Akaike information criterion used more in machine learning? AIC and BIC are used, e.g. in stepwise regression. They are actually part of a larger class of "heuristics", which are also used. For example the DIC (Deviance Information Criterion) is often used in Bayesian Model selection. However, they are basica...
Why isn't Akaike information criterion used more in machine learning? AIC and BIC are used, e.g. in stepwise regression. They are actually part of a larger class of "heuristics", which are also used. For example the DIC (Deviance Information Criterion) is often used in
10,841
Logistic Regression - Multicollinearity Concerns/Pitfalls
All of the same principles concerning multicollinearity apply to logistic regression as they do to OLS. The same diagnostics assessing multicollinearity can be used (e.g. VIF, condition number, auxiliary regressions.), and the same dimension reduction techniques can be used (such as combining variables via principal co...
Logistic Regression - Multicollinearity Concerns/Pitfalls
All of the same principles concerning multicollinearity apply to logistic regression as they do to OLS. The same diagnostics assessing multicollinearity can be used (e.g. VIF, condition number, auxili
Logistic Regression - Multicollinearity Concerns/Pitfalls All of the same principles concerning multicollinearity apply to logistic regression as they do to OLS. The same diagnostics assessing multicollinearity can be used (e.g. VIF, condition number, auxiliary regressions.), and the same dimension reduction techniques...
Logistic Regression - Multicollinearity Concerns/Pitfalls All of the same principles concerning multicollinearity apply to logistic regression as they do to OLS. The same diagnostics assessing multicollinearity can be used (e.g. VIF, condition number, auxili
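Of the diagnostics listed, the VIF is easy to compute by hand: regress each predictor on the others and take $1/(1 - R_j^2)$. A sketch (the function name and toy data are my own):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (no intercept column),
    via the R^2 of regressing each column on the remaining ones."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

Independent predictors give VIFs near 1, while near-duplicate predictors give very large VIFs; the same computation applies whether the outcome model is OLS or logistic, since VIF only involves the predictors.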
10,842
Is there a decision-tree-like algorithm for unsupervised clustering?
You may want to consider the following approach: Use any clustering algorithm that is adequate for your data Assume the resulting clusters are classes Train a decision tree on the clusters This will allow you to try different clustering algorithms, but you will get a decision tree approximation for each of them.
Is there a decision-tree-like algorithm for unsupervised clustering?
You may want to consider the following approach: Use any clustering algorithm that is adequate for your data Assume the resulting clusters are classes Train a decision tree on the clusters This will
Is there a decision-tree-like algorithm for unsupervised clustering? You may want to consider the following approach: Use any clustering algorithm that is adequate for your data Assume the resulting clusters are classes Train a decision tree on the clusters This will allow you to try different clustering algorithms, b...
Is there a decision-tree-like algorithm for unsupervised clustering? You may want to consider the following approach: Use any clustering algorithm that is adequate for your data Assume the resulting clusters are classes Train a decision tree on the clusters This will
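The three steps above can be sketched with scikit-learn (assumed available; the blob data and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# two well-separated blobs as stand-in data
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(5.0, 0.5, (100, 2))])

# 1) cluster, 2) treat cluster labels as classes, 3) fit a tree on them
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)

# how faithfully the tree reproduces the clustering
agreement = (tree.predict(X) == labels).mean()
```

The fitted tree then serves as an interpretable, rule-based approximation of whichever clustering you chose.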
10,843
Is there a decision-tree-like algorithm for unsupervised clustering?
The first paper that comes to mind is this: Clustering Via Decision Tree Construction https://pdfs.semanticscholar.org/8996/148e8f0b34308e2d22f78ff89bf1f038d1d6.pdf As another mentioned, "hierarchical" (top down) and "hierarchical agglomeration" (bottom up) are both well known techniques devised using trees to do clust...
Is there a decision-tree-like algorithm for unsupervised clustering?
The first paper that comes to mind is this: Clustering Via Decision Tree Construction https://pdfs.semanticscholar.org/8996/148e8f0b34308e2d22f78ff89bf1f038d1d6.pdf As another mentioned, "hierarchical
Is there a decision-tree-like algorithm for unsupervised clustering? The first paper that comes to mind is this: Clustering Via Decision Tree Construction https://pdfs.semanticscholar.org/8996/148e8f0b34308e2d22f78ff89bf1f038d1d6.pdf As another mentioned, "hierarchical" (top down) and "hierarchical agglomeration" (bott...
Is there a decision-tree-like algorithm for unsupervised clustering? The first paper that comes to mind is this: Clustering Via Decision Tree Construction https://pdfs.semanticscholar.org/8996/148e8f0b34308e2d22f78ff89bf1f038d1d6.pdf As another mentioned, "hierarchical
10,844
Is there a decision-tree-like algorithm for unsupervised clustering?
What you're looking for is a divisive clustering algorithm. Most common algorithms are agglomerative, which cluster the data in a bottom up manner - each observation starts as its own cluster and clusters get merged. Divisive clustering is top down - observations start in one cluster which is gradually divided. The ...
Is there a decision-tree-like algorithm for unsupervised clustering?
What you're looking for is a divisive clustering algorithm. Most common algorithms are agglomerative, which cluster the data in a bottom up manner - each observation starts as its own cluster and cl
Is there a decision-tree-like algorithm for unsupervised clustering? What you're looking for is a divisive clustering algorithm. Most common algorithms are agglomerative, which cluster the data in a bottom up manner - each observation starts as its own cluster and clusters get merged. Divisive clustering is top down...
Is there a decision-tree-like algorithm for unsupervised clustering? What you're looking for is a divisive clustering algorithm. Most common algorithms are agglomerative, which cluster the data in a bottom up manner - each observation starts as its own cluster and cl
10,845
Is there a decision-tree-like algorithm for unsupervised clustering?
One idea to consider: suppose you have k features and n points. You can build random trees using (k-1) features as predictors and 1 feature as the dependent variable Y. You can select a height h after which you will have data points in the leaves. You can take a vote across the different trees. Just a thought.
Is there a decision-tree-like algorithm for unsupervised clustering?
One idea to consider: suppose you have k features and n points. You can build random trees using (k-1) features as predictors and 1 feature as the dependent variable Y. You can select a height h after which you
Is there a decision-tree-like algorithm for unsupervised clustering? One idea to consider: suppose you have k features and n points. You can build random trees using (k-1) features as predictors and 1 feature as the dependent variable Y. You can select a height h after which you will have data points in the leaves. You can take a vote...
Is there a decision-tree-like algorithm for unsupervised clustering? One idea to consider: suppose you have k features and n points. You can build random trees using (k-1) features as predictors and 1 feature as the dependent variable Y. You can select a height h after which you
10,846
What's a good tool to create Sankey diagrams?
Have you seen this list? And there is also a function in R available. I personally would start with the path geom and size aesthetic in ggplot2 and see where that got me. I haven't tested any of these. If you find a preferred option perhaps you could let us all know as they are rather cool graphics.
What's a good tool to create Sankey diagrams?
Have you seen this list? And there is also a function in R available. I personally would start with the path geom and size aesthetic in ggplot2 and see where that got me. I haven't tested any of the
What's a good tool to create Sankey diagrams? Have you seen this list? And there is also a function in R available. I personally would start with the path geom and size aesthetic in ggplot2 and see where that got me. I haven't tested any of these. If you find a preferred option perhaps you could let us all know as t...
What's a good tool to create Sankey diagrams? Have you seen this list? And there is also a function in R available. I personally would start with the path geom and size aesthetic in ggplot2 and see where that got me. I haven't tested any of the
10,847
What's a good tool to create Sankey diagrams?
If you are looking for client side (JavaScript library) you can try: http://tamc.github.com/Sankey/ You can also see a related question on StackOverFlow: https://stackoverflow.com/q/4545254/179529
What's a good tool to create Sankey diagrams?
If you are looking for client side (JavaScript library) you can try: http://tamc.github.com/Sankey/ You can also see a related question on StackOverFlow: https://stackoverflow.com/q/4545254/179529
What's a good tool to create Sankey diagrams? If you are looking for client side (JavaScript library) you can try: http://tamc.github.com/Sankey/ You can also see a related question on StackOverFlow: https://stackoverflow.com/q/4545254/179529
What's a good tool to create Sankey diagrams? If you are looking for client side (JavaScript library) you can try: http://tamc.github.com/Sankey/ You can also see a related question on StackOverFlow: https://stackoverflow.com/q/4545254/179529
10,848
What's a good tool to create Sankey diagrams?
Check out my HTML5 D3 Sankey Diagram Generator - complete with self-loops and all :) http://sankey.csaladen.es
What's a good tool to create Sankey diagrams?
Check out my HTML5 D3 Sankey Diagram Generator - complete with self-loops and all :) http://sankey.csaladen.es
What's a good tool to create Sankey diagrams? Check out my HTML5 D3 Sankey Diagram Generator - complete with self-loops and all :) http://sankey.csaladen.es
What's a good tool to create Sankey diagrams? Check out my HTML5 D3 Sankey Diagram Generator - complete with self-loops and all :) http://sankey.csaladen.es
10,849
What's a good tool to create Sankey diagrams?
Are you looking for a library or a web app? If the latter, you may try this Sankey Builder. It has a visual interface and a drag & drop UI. It is just a beta; at this point it only saves data to its URL and is not optimised for mobile. http://wikibudgets.org/sankey/ disclaimer: I work for wikiBudgets
What's a good tool to create Sankey diagrams?
Are you looking for a library or a web app? If the latter, you may try this Sankey Builder. It has a visual interface and a drag & drop UI. It is just a beta; at this point it only saves data to its URL and is n
What's a good tool to create Sankey diagrams? Are you looking for a library or a web app? If the latter, you may try this Sankey Builder. It has a visual interface and a drag & drop UI. It is just a beta; at this point it only saves data to its URL and is not optimised for mobile. http://wikibudgets.org/sankey/ disclaimer: I wor...
What's a good tool to create Sankey diagrams? Are you looking for a library or a web app? If the latter, you may try this Sankey Builder. It has a visual interface and a drag & drop UI. It is just a beta; at this point it only saves data to its URL and is n
10,850
What's a good tool to create Sankey diagrams?
Our Sankey Diagram app is for Apple iOS. It uses the touch interface to provide intuitive creation of flow diagrams. Just search for "Sankey Diagram" in the Apple iTunes app store. Our web site is squishLogic Sankey Diagram
What's a good tool to create Sankey diagrams?
Our Sankey Diagram app is for Apple iOS. It uses the touch interface to provide intuitive creation of flow diagrams. Just search for "Sankey Diagram" in the Apple iTunes app store. Our web site is s
What's a good tool to create Sankey diagrams? Our Sankey Diagram app is for Apple iOS. It uses the touch interface to provide intuitive creation of flow diagrams. Just search for "Sankey Diagram" in the Apple iTunes app store. Our web site is squishLogic Sankey Diagram
What's a good tool to create Sankey diagrams? Our Sankey Diagram app is for Apple iOS. It uses the touch interface to provide intuitive creation of flow diagrams. Just search for "Sankey Diagram" in the Apple iTunes app store. Our web site is s
10,851
What's a good tool to create Sankey diagrams?
I just uploaded a brand new online Sankey Builder. You can upload data, configure via a wide range of tools and save. It allows you to drag and drop fields from your data to customize the Sankey Flow and dynamically add filters on any field in the diagram to squeeze the data. It has automatic highlights of bands across...
What's a good tool to create Sankey diagrams?
I just uploaded a brand new online Sankey Builder. You can upload data, configure via a wide range of tools and save. It allows you to drag and drop fields from your data to customize the Sankey Flow
What's a good tool to create Sankey diagrams? I just uploaded a brand new online Sankey Builder. You can upload data, configure via a wide range of tools and save. It allows you to drag and drop fields from your data to customize the Sankey Flow and dynamically add filters on any field in the diagram to squeeze the dat...
What's a good tool to create Sankey diagrams? I just uploaded a brand new online Sankey Builder. You can upload data, configure via a wide range of tools and save. It allows you to drag and drop fields from your data to customize the Sankey Flow
10,852
What's a good tool to create Sankey diagrams?
Draw Sankey diagrams directly in your browser with "Sankey Flow Show - Attractive flow diagrams made in minutes!" http://www.sankeyflowshow.com
What's a good tool to create Sankey diagrams?
Draw Sankey diagrams directly in your browser with "Sankey Flow Show - Attractive flow diagrams made in minutes!" http://www.sankeyflowshow.com
What's a good tool to create Sankey diagrams? Draw Sankey diagrams directly in your browser with "Sankey Flow Show - Attractive flow diagrams made in minutes!" http://www.sankeyflowshow.com
What's a good tool to create Sankey diagrams? Draw Sankey diagrams directly in your browser with "Sankey Flow Show - Attractive flow diagrams made in minutes!" http://www.sankeyflowshow.com
10,853
What's a good tool to create Sankey diagrams?
Like everything, LaTeX is the way! If you do not know LaTeX at all you might not want this solution, but if you know even just a little, you just have to change parameters according to what you want, and it will be more precise and flexible than many other possibilities. Note that you will have to enter the values...
What's a good tool to create Sankey diagrams?
Like everything, LaTeX is the way! If you do not know LaTeX at all you might not want this solution, but if you know even just a little, you just have to change parameters according to what you w
What's a good tool to create Sankey diagrams? Like everything, LaTeX is the way! If you do not know LaTeX at all you might not want this solution, but if you know even just a little, you just have to change parameters according to what you want, and it will be more precise and flexible than many other possibilitie...
What's a good tool to create Sankey diagrams? Like everything, LaTeX is the way! If you do not know LaTeX at all you might not want this solution, but if you know even just a little, you just have to change parameters according to what you w
10,854
What's a good tool to create Sankey diagrams?
We built on Denes's online (web app) Sankey Diagram Generator with some more functions and abilities to copy data in from CSV, Excel Pivot Tables, and JSON. Sankey Diagram Generator
What's a good tool to create Sankey diagrams?
We built on Denes's online (web app) Sankey Diagram Generator with some more functions and abilities to copy data in from CSV, Excel Pivot Tables, and JSON. Sankey Diagram Generator
What's a good tool to create Sankey diagrams? We built on Denes's online (web app) Sankey Diagram Generator with some more functions and abilities to copy data in from CSV, Excel Pivot Tables, and JSON. Sankey Diagram Generator
What's a good tool to create Sankey diagrams? We built on Denes's online (web app) Sankey Diagram Generator with some more functions and abilities to copy data in from CSV, Excel Pivot Tables, and JSON. Sankey Diagram Generator
10,855
Does the distribution $\log(1 + x^{-2}) / 2\pi$ have a name?
Indeed, even the first moment does not exist. The CDF of this distribution is given by $$F(x) = 1/2 + \left(\arctan(x) - x \log(\sin(\arctan(x)))\right)/\pi$$ for $x \ge 0$ and, by symmetry, $F(x) = 1 - F(|x|)$ for $x \lt 0$. Neither this nor any of the obvious transforms look familiar to me. (The fact that we can o...
10,856
Does the distribution $\log(1 + x^{-2}) / 2\pi$ have a name?
Perhaps not. I could not find it in this fairly extensive list of distributions: Leemis and McQuestion 2008 Univariate Distribution Relationships. American Statistician 62(1) 45:53
10,857
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
While your question is similar to a number of other questions on site, aspects of this question (such as your emphasis on consistency) make me think they're not sufficiently close to being duplicates. Why not choose some other objective function to minimize? Why not, indeed? If your objective is different from least s...
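To make the "other objective functions" point concrete, here is a small sketch of my own (the data values are hypothetical): when fitting a single constant, minimizing the sum of squared errors yields the mean, while minimizing the sum of absolute errors yields the median, and the two disagree sharply when an outlier is present.

```python
import statistics

# Hypothetical data with one outlier, just for illustration.
data = [1.0, 2.0, 2.5, 3.0, 50.0]

def sse(c):  # sum of squared errors for a constant fit c
    return sum((x - c) ** 2 for x in data)

def sae(c):  # sum of absolute errors for a constant fit c
    return sum(abs(x - c) for x in data)

# Grid-minimize each objective over [0, 60].
grid = [i / 100 for i in range(0, 6001)]
c_sse = min(grid, key=sse)
c_sae = min(grid, key=sae)

print(c_sse, statistics.mean(data))    # minimizing SSE recovers the mean (11.7)
print(c_sae, statistics.median(data))  # minimizing absolute error recovers the median (2.5)
```

The outlier drags the SSE minimizer far from the bulk of the data, while the absolute-error minimizer stays put; which behavior is "right" depends entirely on the objective you chose.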
10,858
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
You asked a statistics question, and I hope that my control system engineer answer is a stab at it from enough of a different direction to be enlightening. Here is a "canonical" information-flow form for control system engineering: The "r" is for reference value. It is summed with an "F" transform of the output "y" t...
10,859
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
I think that, when fitting models, we usually choose to minimize the sum of squared errors ($SSE$) due to the fact that $SSE$ has a direct (negative) relation with $R^2$, a major goodness-of-fit (GoF) statistic for a model, as follows ($SST$ is sum of squares total): $$ R^2 = 1 - \frac{SSE}{SST} $$ Omitting the discuss...
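A minimal numeric illustration of the identity above, with hypothetical observed and fitted values:

```python
# Hypothetical observed values and model predictions.
y     = [1.0, 2.0, 3.0, 4.0, 5.0]
y_hat = [1.1, 1.9, 3.2, 3.8, 5.1]

y_bar = sum(y) / len(y)
sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual sum of squares
sst = sum((yi - y_bar) ** 2 for yi in y)               # total sum of squares

r2 = 1 - sse / sst
print(sse, sst, r2)  # small SSE relative to SST gives R^2 near 1
```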
10,860
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
You might also look at minimizing the maximum error instead of least squares fitting. There is an ample literature on the subject. For a search word, try "Tchebechev" also spelled "Chebyshev" polynomials.
10,861
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
It looks like people use squares because it allows them to stay within the realm of linear algebra and avoid more complicated machinery such as convex optimization, which is more powerful but leads to solvers without nice closed-form solutions. Also, ideas from the mathematical field known as convex optimization have not spread a l...
10,862
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
On a side note: when factoring in uncertainty over the values of our target variable t, we can express the probability distribution of t as $$p(t|x,w,\beta) = \mathcal{N}(t|y(x,\textbf{w}),\beta^{-1})$$ assuming t follows a Gaussian conditioned on the polynomial y. Using training data $\{\textbf{x}, \textbf{t}\}$ the likel...
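The truncated derivation can be completed in one line (a sketch following the standard treatment, e.g. Bishop's PRML, under the Gaussian assumption above): the log-likelihood over $N$ i.i.d. training points is

```latex
\ln p(\mathbf{t}\mid\mathbf{x},\mathbf{w},\beta)
  = -\frac{\beta}{2}\sum_{n=1}^{N}\bigl(y(x_n,\mathbf{w})-t_n\bigr)^{2}
    + \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi)
```

so maximizing the likelihood with respect to $\mathbf{w}$ is equivalent to minimizing the sum of squared errors, since the last two terms do not involve $\mathbf{w}$.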
10,863
Standard deviation of binned observations
This reply presents two solutions: Sheppard's corrections and a maximum likelihood estimate. Both closely agree on an estimate of the standard deviation: $7.70$ for the first and $7.69$ for the second (when adjusted to be comparable to the usual "unbiased" estimator). Sheppard's corrections "Sheppard's corrections" a...
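Sheppard's correction itself is a one-line adjustment: subtract $h^2/12$ (with $h$ the bin width) from the grouped-data variance. A small sketch with hypothetical bins (the answer's actual data is not reproduced here):

```python
import math

# Hypothetical binned data: (bin midpoint, count), with common bin width h.
bins = [(5, 2), (15, 8), (25, 20), (35, 30), (45, 25), (55, 10), (65, 5)]
h = 10

n = sum(c for _, c in bins)
mean = sum(m * c for m, c in bins) / n
var_grouped = sum(c * (m - mean) ** 2 for m, c in bins) / n

# Sheppard's correction: subtract h^2/12 from the grouped variance.
var_sheppard = var_grouped - h ** 2 / 12

print(mean, math.sqrt(var_grouped), math.sqrt(var_sheppard))
```

The correction compensates for the variance inflation caused by replacing each observation with its bin midpoint.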
10,864
EM maximum likelihood estimation for Weibull distribution
I think the answer is yes, if I have understood the question correctly. Write $z_i = x_i^k$. Then an EM algorithm type of iteration, starting with for example $\hat k = 1$, is E step: ${\hat z}_i = x_i^{\hat k}$ M step: $\hat k = \frac{n}{\left[\sum({\hat z}_i - 1)\log x_i\right]}$ This is a special case (the case...
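A quick sketch of the iteration on synthetic data (my own check, assuming the scale parameter is fixed at $\lambda = 1$, which is the case the formulas above cover; the data are generated by inverse transform):

```python
import math
import random

random.seed(0)
k_true = 2.0
# Weibull(k_true, lambda = 1) samples via inverse transform: X = (-log U)^(1/k).
xs = [(-math.log(random.random())) ** (1 / k_true) for _ in range(2000)]

k = 1.0  # initial guess, as in the answer
for _ in range(100):
    zs = [x ** k for x in xs]                # "E step": z_i = x_i^k
    k = len(xs) / sum((z - 1) * math.log(x)  # "M step" from the answer
                      for z, x in zip(zs, xs))

print(k)  # should settle close to k_true = 2
```

Note that every term $(z_i - 1)\log x_i$ is non-negative (both factors share a sign), so the denominator stays positive and the fixed-point iteration is well defined.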
10,865
EM maximum likelihood estimation for Weibull distribution
The Weibull MLE is only numerically solvable: Let $$ f_{\lambda,\beta}(x) = \begin{cases} \frac{\beta}{\lambda}\left(\frac{x}{\lambda}\right)^{\beta-1}e^{-\left(\frac{x}{\lambda}\right)^{\beta}} & ,\,x\geq0 \\ 0 &,\, x<0 \end{cases} $$ with $\beta,\,\lambda>0$. 1) Likelihood function: $$ \mathcal{L}_{\hat{x}}(\lambda, ...
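In practice the standard numerical route reduces the problem to one dimension: solve the profile score equation for $\beta$ (by bisection below, relying on the profile score being monotone in $\beta$), then recover $\hat\lambda = \bigl(\tfrac{1}{n}\sum x_i^{\hat\beta}\bigr)^{1/\hat\beta}$. A hedged sketch on synthetic data:

```python
import math
import random

random.seed(1)
beta_true, lam_true = 2.0, 1.0
# Weibull samples via inverse transform: X = lambda * (-log U)^(1/beta).
xs = [lam_true * (-math.log(random.random())) ** (1 / beta_true) for _ in range(2000)]

mean_log = sum(math.log(x) for x in xs) / len(xs)

def g(beta):
    """Profile score for beta; its root is the MLE of beta."""
    s = sum(x ** beta for x in xs)
    sl = sum(x ** beta * math.log(x) for x in xs)
    return sl / s - mean_log - 1 / beta

lo, hi = 0.05, 20.0
for _ in range(60):  # bisection; g is increasing in beta
    mid = (lo + hi) / 2
    if g(mid) > 0:
        hi = mid
    else:
        lo = mid
beta_hat = (lo + hi) / 2
lam_hat = (sum(x ** beta_hat for x in xs) / len(xs)) ** (1 / beta_hat)

print(beta_hat, lam_hat)  # close to the true (2, 1)
```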
10,866
EM maximum likelihood estimation for Weibull distribution
Though this is an old question, it looks like there is an answer in a paper published here: http://home.iitk.ac.in/~kundu/interval-censoring-REVISED-2.pdf In this work the analysis of interval-censored data, with Weibull distribution as the underlying lifetime distribution has been considered. It is assumed that cen...
10,867
EM maximum likelihood estimation for Weibull distribution
In this case the MLE and EM estimators are equivalent, since the MLE estimator is actually just a special case of the EM estimator. (I am assuming a frequentist framework in my answer; this isn't true for EM in a Bayesian context in which we're talking about MAP's). Since there is no missing data (just an unknown param...
10,868
Why LKJcorr is a good prior for correlation matrix?
The LKJ distribution is an extension of the work of H. Joe (1). Joe proposed a procedure to generate correlation matrices uniformly over the space of all positive definite correlation matrices. The contribution of (2) is that it extends Joe's work to show that there is a more efficient manner of generating such samples...
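A useful special case for intuition (a known result from the LKJ paper, sketched by me rather than taken from the answer): for a $2\times2$ matrix, the LKJ($\eta$) density on the single correlation $r$ is proportional to $(1-r^2)^{\eta-1}$, i.e. a Beta($\eta,\eta$) variable rescaled to $(-1,1)$; $\eta=1$ gives the uniform distribution over correlations.

```python
import random
import statistics

random.seed(0)

def lkj_corr_2x2(eta):
    """Off-diagonal correlation of a 2x2 LKJ(eta) matrix:
    r = 2*Beta(eta, eta) - 1, density proportional to (1 - r^2)^(eta - 1)."""
    return 2 * random.betavariate(eta, eta) - 1

draws = [lkj_corr_2x2(1.0) for _ in range(50_000)]
m = statistics.mean(draws)
v = statistics.variance(draws)
print(m, v)  # eta = 1 should look Uniform(-1, 1): mean ~0, variance ~1/3
```

Larger $\eta$ concentrates mass near $r=0$ (near-identity matrices), which is what makes LKJ a convenient weakly-informative prior.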
10,869
What are the classical notations in statistics, linear algebra and machine learning? And what are the connections between these notations?
Perhaps a related question is, "What are words used in different languages, and what are the connections between these words?" Notation is in some sense like language: Some words have region specific meanings; some words are broadly understood. Like powerful nations spread their language, successful fields and influe...
10,870
Machine Learning to Predict Class Probabilities
SVM is closely related to logistic regression, and can be used to predict probabilities as well, based on the distance to the hyperplane (the score of each point). You do this by constructing a score -> probability mapping in some way, which is relatively easy as the problem is one-dimensional. One way is to fit an S-curve (e....
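A minimal sketch of that score-to-probability mapping (a Platt-style sigmoid fit; the scores and labels below are made up, and the fit uses plain gradient descent on the cross-entropy):

```python
import math

# Hypothetical (score, label) pairs, e.g. signed distances to an SVM hyperplane.
data = [(-2.0, 0), (-1.5, 0), (-0.5, 0), (-0.2, 1),
        (0.3, 0), (0.8, 1), (1.4, 1), (2.2, 1)]

def sigmoid(t):
    return 1 / (1 + math.exp(-t))

# Fit p(y=1 | s) = sigmoid(a*s + b) by gradient descent on the cross-entropy.
a, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    ga = gb = 0.0
    for s, y in data:
        p = sigmoid(a * s + b)
        ga += (p - y) * s  # gradient of cross-entropy w.r.t. a
        gb += (p - y)      # ... and w.r.t. b
    a -= lr * ga
    b -= lr * gb

print(sigmoid(a * 2.2 + b))   # high score -> probability near 1
print(sigmoid(a * -2.0 + b))  # low score  -> probability near 0
```

The one overlapping pair in the data keeps the MLE finite; with perfectly separable scores the fitted slope would diverge, which is why Platt's original procedure regularizes the targets.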
10,871
Machine Learning to Predict Class Probabilities
Another possibility is neural networks, if you use the cross-entropy as the cost function with sigmoidal output units. That will provide you with the estimates you are looking for. Neural networks, as well as logistic regression, are discriminative classifiers, meaning that they attempt to maximize the conditional d...
10,872
Machine Learning to Predict Class Probabilities
There are many - and what works best depends on the data. There are also many ways to cheat - for example, you can perform probability calibration on the outputs of any classifier that gives some semblance of a score (i.e.: a dot product between the weight vector and the input). The most common example of this is calle...
10,873
Does it make sense for a fixed effect to be nested within a random one, or how to code repeated measures in R (aov and lmer)?
In mixed models, the treatment of factors as either fixed or random, particularly in conjunction with whether they are crossed, partially crossed or nested, can lead to a lot of confusion. Also, there appear to be differences in terminology between what is meant by nesting in the anova/designed-experiments world and mix...
10,874
Does it make sense for a fixed effect to be nested within a random one, or how to code repeated measures in R (aov and lmer)?
Ooooops. Alert commenters have spotted that my post was full of nonsense. I was confusing nested designs and repeated measures designs. This site gives a useful breakdown of the difference between nested and repeated measures designs. Interestingly, the author shows expected mean squares for fixed within fixed, random...
10,875
How to use scikit-learn's cross validation functions on multi-label classifiers
Stratified sampling means that the class membership distribution is preserved in your KFold sampling. This doesn't make a lot of sense in the multilabel case where your target vector might have more than one label per observation. There are two possible interpretations of stratified in this sense. For $n$ labels where ...
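One pragmatic interpretation (my own sketch, not from the answer) is to stratify on the "label powerset": treat each row's exact label combination as a single class, group rows by it, and deal each group's indices round-robin across folds. The target matrix below is hypothetical:

```python
from collections import defaultdict

# Hypothetical multilabel targets: each row is a tuple of 0/1 label indicators.
Y = [(1, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 0),
     (0, 1, 1), (0, 1, 1), (0, 0, 1), (0, 0, 1)]

def labelset_kfold(Y, k):
    """Group rows by their exact label combination (the 'powerset' view),
    then deal each group's indices round-robin across the k folds."""
    groups = defaultdict(list)
    for i, y in enumerate(Y):
        groups[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in groups.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    return folds

folds = labelset_kfold(Y, 2)
print(folds)  # each labelset is split evenly across the two folds
```

This breaks down when most labelsets are unique (each group has a single member), which is exactly the situation that motivates the per-label interpretations discussed above.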
10,876
How to use scikit-learn's cross validation functions on multi-label classifiers
You might want to check: On the stratification of multi-label data. The authors first present the simple idea of sampling from unique labelsets and then introduce a new approach, iterative stratification, for multi-label datasets. The iterative stratification approach is greedy. For a quick overview, here is what t...
10,877
What is the use of the line produced by qqline() in R?
As you can see on the picture, obtained by > y <- rnorm(2000)*4-4 > qqnorm(y); qqline(y, col = 2,lwd=2,lty=2) the diagonal would not make sense because the first axis is scaled in terms of the theoretical quantiles of a $\mathcal{N}(0,1)$ distribution. I think using the first and third quartiles to set the line gives ...
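What qqline computes can be sketched directly: the line through the (theoretical, sample) first- and third-quartile pairs. Below is a Python approximation of the idea (using a deterministic stand-in for R's rnorm(2000)*4-4 and a crude order-statistic quantile rather than R's exact quantile rule):

```python
from statistics import NormalDist

# Deterministic stand-in for y <- rnorm(2000)*4 - 4: feed an evenly spaced
# probability grid through the N(-4, 4) quantile function.
n = 2000
probs = [(i + 0.5) / n for i in range(n)]
y = sorted(NormalDist(mu=-4, sigma=4).inv_cdf(p) for p in probs)

def quantile(sorted_xs, p):  # crude quantile: nearest order statistic
    return sorted_xs[min(int(p * len(sorted_xs)), len(sorted_xs) - 1)]

# qqline: line through the first/third quartile pairs of (theoretical, sample).
z1, z3 = NormalDist().inv_cdf(0.25), NormalDist().inv_cdf(0.75)
q1, q3 = quantile(y, 0.25), quantile(y, 0.75)
slope = (q3 - q1) / (z3 - z1)
intercept = q1 - slope * z1

print(slope, intercept)  # close to the true sigma = 4 and mu = -4
```

The slope and intercept recover the scale and location of the data, which is why the quartile-based line is a robust reference even when the tails misbehave.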
10,878
How does a causal tree optimize for heterogenous treatment effects?
Your understanding is correct: the core notion of the paper is that sample splitting is essential for empirical work and that it allows us to obtain an unbiased estimate of the treatment effect. To tackle your main question: The criteria of choice are $\hat{EMSE}_\tau$ and $\hat{EMSE}_\mu$. Both penalise variance and 
10,879
When to use Poisson vs. geometric vs. negative binomial GLMs for count data?
Both the Poisson distribution and the geometric distribution are special cases of the negative binomial (NB) distribution. One common notation is that the variance of the NB is $\mu + 1/\theta \cdot \mu^2$ where $\mu$ is the expectation and $\theta$ is responsible for the amount of (over-)dispersion. Sometimes $\alpha ...
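The variance formula can be checked by simulation via the Poisson-Gamma mixture representation of the NB (a sketch of mine; a small Knuth-style Poisson sampler is included because the Python standard library lacks one):

```python
import math
import random
import statistics

random.seed(0)

def poisson(rate):
    """Knuth's simple Poisson sampler (fine for modest rates)."""
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def neg_binomial(mu, theta):
    """NB draw via the Poisson-Gamma mixture: rate ~ Gamma(shape=theta, scale=mu/theta),
    giving mean mu and variance mu + mu^2/theta."""
    return poisson(random.gammavariate(theta, mu / theta))

mu, theta = 4.0, 2.0
draws = [neg_binomial(mu, theta) for _ in range(50_000)]
m = statistics.mean(draws)
v = statistics.variance(draws)
print(m, v)  # mean ~ mu = 4, variance ~ mu + mu^2/theta = 12
```

Letting theta grow recovers the Poisson limit (variance approaching the mean), while theta = 1 gives the geometric's variance mu + mu^2.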
10,880
Why is the posterior distribution in Bayesian Inference often intractable?
Why can one not simply calculate the posterior distribution as the numerator of the right-hand side and then infer this normalization constant by requiring that the integral over the posterior distribution has to be 1? This is precisely what is being done. The posterior distribution is $$P(\theta|D) = \dfrac{p(D|\thet...
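In one dimension the normalization really is this easy, which is worth seeing once. The sketch below (a toy model with made-up numbers) grid-approximates $p(D)$ and normalizes; the same idea fails in general because the grid grows exponentially with the number of parameters.

```python
import math

# Toy 1-D model: Cauchy(0, 1) prior on theta, Normal(theta, 1) likelihood, one datum.
x_obs = 2.0

def unnormalized_posterior(theta):
    prior = 1 / (math.pi * (1 + theta ** 2))
    like = math.exp(-0.5 * (x_obs - theta) ** 2) / math.sqrt(2 * math.pi)
    return like * prior

# Grid approximation of the evidence p(D) = integral of likelihood * prior.
lo, hi, n = -30.0, 30.0, 60_000
h = (hi - lo) / n
thetas = [lo + (i + 0.5) * h for i in range(n)]
evidence = sum(unnormalized_posterior(t) * h for t in thetas)

posterior = [unnormalized_posterior(t) / evidence for t in thetas]
total = sum(p * h for p in posterior)
post_mean = sum(t * p * h for t, p in zip(thetas, posterior))
print(total)      # ~1.0: the grid posterior is properly normalized
print(post_mean)  # shrunk from x_obs = 2 toward the Cauchy prior's center at 0
```

With d parameters and the same resolution per axis, the grid would need n^d points, which is the curse of dimensionality the answer alludes to.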
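In one dimension, the "compute the numerator and normalize" recipe really is straightforward; a minimal sketch (a hypothetical Bernoulli model with a flat prior, my own choice of data):

```python
import numpy as np

# Hypothetical 1-D example: Bernoulli likelihood, flat prior on theta.
data = [1, 0, 1, 1, 0, 1, 1, 1]                  # 6 heads, 2 tails
theta = np.linspace(1e-6, 1 - 1e-6, 10_001)

prior = np.ones_like(theta)                       # uniform prior
heads = sum(data)
like = theta**heads * (1 - theta)**(len(data) - heads)
unnorm = like * prior                             # numerator p(D|theta) p(theta)

# Infer the normalization constant by requiring the density to integrate to 1.
Z = np.trapz(unnorm, theta)
posterior = unnorm / Z

post_mean = np.trapz(theta * posterior, theta)
print(post_mean)                                  # ≈ 0.7, the Beta(7, 3) mean
```

The trouble starts when $\theta$ has many components, so that this integral can no longer be evaluated on a grid.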
10,881
Why is the posterior distribution in Bayesian Inference often intractable?
I had the same question. This great post explains it really well. In a nutshell: it is intractable because the denominator has to evaluate the probability for ALL possible values of 𝜃, and in most interesting cases ALL is a very large set, whereas the numerator is evaluated for a single realization of 𝜃. See Eqs. 4-8 in the post....
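To see why evaluating "ALL possible values" hurts, count the evaluations a brute-force grid normalization would need (the numbers below are illustrative):

```python
# Evaluations a brute-force grid normalization needs, at a modest
# 100 grid points per parameter:
points_per_dim = 100
for d in (1, 2, 5, 10):
    print(f"{d:2d} parameter(s) -> {points_per_dim ** d:.0e} evaluations")
```

Already at ten parameters the grid has $10^{20}$ points, which is why the denominator is usually approximated (MCMC, variational inference) rather than computed.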
10,882
Different probability density transformations due to Jacobian factor
I suggest you read the solution of Question 1.4, which provides a good intuition. In a nutshell, if you have an arbitrary function $ f(x) $ and two variables $x$ and $y$ that are related to each other by the function $x = g(y)$, then you can find the maximum of the function either by directly analyzing $f(x)$: $ \ha...
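A quick numeric illustration of the distinction (a lognormal example of my own choosing): the maximizer of a plain function transforms directly through $g$, but the mode of a density does not, because of the Jacobian factor $|dy/dx|$:

```python
import numpy as np

mu, sigma = 0.0, 1.0
y = np.linspace(-5.0, 5.0, 200_001)
p_y = np.exp(-(y - mu) ** 2 / (2 * sigma ** 2))   # unnormalized N(mu, sigma)
y_mode = y[np.argmax(p_y)]
print(y_mode)                                      # mode of Y is mu = 0

# Density of X = exp(Y) picks up the Jacobian |dy/dx| = 1/x:
x = np.linspace(1e-6, 5.0, 200_001)
p_x = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / x
x_mode = x[np.argmax(p_x)]
print(x_mode)                  # ≈ exp(mu - sigma**2) ≈ 0.368, NOT exp(y_mode) = 1
```

Without the `1 / x` Jacobian factor, the second maximization would (wrongly) put the mode of $X$ at $e^{0} = 1$.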
10,883
Bayesian thinking about overfitting
I might start by saying that a Bayesian model cannot systematically overfit (or underfit) data that are drawn from the prior predictive distribution, which is the basis for a procedure to validate that Bayesian software is working correctly before it is applied to data collected from the world. But it can overfit a sin...
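A minimal sketch of that validation idea, using a conjugate normal-normal model so the posterior is exact (the model and numbers are my own illustration): when the data really come from the prior predictive distribution, posterior intervals hit their nominal coverage, i.e. there is no systematic over- or underfitting:

```python
import numpy as np

rng = np.random.default_rng(7)

# Conjugate sketch: theta ~ N(0, 1), y_i | theta ~ N(theta, 1), n_obs draws.
n_rep, n_obs = 2000, 5
covered = 0
for _ in range(n_rep):
    theta = rng.normal(0.0, 1.0)                 # draw from the prior
    y = rng.normal(theta, 1.0, size=n_obs)       # prior predictive data
    post_var = 1.0 / (1.0 + n_obs)               # exact conjugate posterior
    post_mean = post_var * y.sum()
    lo = post_mean - 1.96 * post_var**0.5
    hi = post_mean + 1.96 * post_var**0.5
    covered += (lo <= theta <= hi)

print(covered / n_rep)    # ≈ 0.95: intervals hit nominal coverage
```

If the software (or the model's math) were wrong, the empirical coverage would drift away from 0.95, which is exactly what such validation procedures look for.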
10,884
Bayesian thinking about overfitting
Overfitting means the model works well on the training set but performs poorly on the test set. IMHO, it comes from two sources: the data and the model we use (or our subjectivity). Data is probably the more important factor. With whatever models/approaches we use, we implicitly assume that our data is representative enough...
10,885
Detecting patterns of cheating on a multi-question exam
Ad hoc approach I'd assume that $\beta_i$ is reasonably reliable because it was estimated on many students, most of whom did not cheat on question $i$. For each student $j$, sort the questions in order of increasing difficulty, compute $\beta_i + q_j$ (note that $q_j$ is just a constant offset) and threshold it at some ...
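A sketch of that ad hoc rule (the logistic link and the sign convention for $\beta_i$ as easiness are my assumptions, since the parameterization isn't fully specified above):

```python
import numpy as np

def suspicion_flags(beta, q_j, answers_j, p_thresh=0.5):
    """Flag questions answered correctly despite a low predicted success
    probability. `beta` is per-question easiness, `q_j` the student's
    constant offset, `answers_j` a 0/1 correctness vector."""
    p = 1.0 / (1.0 + np.exp(-(beta + q_j)))      # logistic link (assumed)
    order = np.argsort(-beta)                    # easiest question first
    hard_but_correct = (p < p_thresh) & (answers_j == 1)
    return order, hard_but_correct

beta = np.array([2.0, 0.5, -1.0, -3.0])          # last question is very hard
answers = np.array([1, 1, 0, 1])                 # ...yet the student got it right
order, flags = suspicion_flags(beta, 0.0, answers)
print(np.flatnonzero(flags))                     # -> [3]
```

The threshold and the link function are tuning choices; the point is only to surface "correct on a hard item, wrong on easier ones" patterns for manual review.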
10,886
Detecting patterns of cheating on a multi-question exam
If you want to get into some more complex approaches, you might look at item response theory models. You could then model the difficulty of each question. Students who got difficult items correct while missing easier ones would, I think, be more likely to be cheating than those who did the reverse. It's been more tha...
10,887
Restricted Boltzmann Machine : how is it used in machine learning?
It is possible to use RBMs to deal with typical problems that arise in collected data (data that could be used, for example, to train a machine learning model). Such problems include imbalanced data sets (in a classification problem) or datasets with missing values (the values of some features are unknown). In the first cas...
10,888
Restricted Boltzmann Machine : how is it used in machine learning?
RBM was one of the first practical ways of training/learning a deep network, having more than just one or two layers. And the deep belief network was proposed by Geoffrey Hinton, who is considered one of the 'fathers' of deep learning, I suppose, although Yann LeCun is the other main 'father' of deep learning, I think,...
10,889
Paradox in model selection (AIC, BIC, to explain or to predict?)
I will try to explain what's going on with some materials that I am referring to and what I have learned with personal correspondence with the author of the materials. Above is an example where we are trying to infer a 3rd degree polynomial plus noise. If you look at the bottom left quadrant, you will see that on a cu...
10,890
Paradox in model selection (AIC, BIC, to explain or to predict?)
They are not to be taken in the same context; points 1 and 2 have different contexts. For both AIC and BIC one first explores which combination of parameters in which number yield the best indices (Some authors have epileptic fits when I use the word index in this context. Ignore them, or look up index in the dictionar...
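To make "explore which number of parameters yields the best indices" concrete, a small sketch on simulated data (the AIC/BIC formulas are the standard Gaussian-likelihood ones; the data and candidate degrees are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
x = np.linspace(-1.0, 1.0, n)
y = 1 + 2 * x - x**3 + 0.3 * rng.standard_normal(n)   # cubic signal + noise

aic, bic = {}, {}
for k in range(1, 7):                                  # candidate degrees
    coef = np.polyfit(x, y, k)
    resid = y - np.polyval(coef, x)
    sigma2 = np.mean(resid**2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    p = k + 2                                          # k+1 coefficients + sigma
    aic[k] = 2 * p - 2 * loglik                        # AIC = 2p - 2 ln L
    bic[k] = p * np.log(n) - 2 * loglik                # BIC = p ln n - 2 ln L

print(min(aic, key=aic.get), min(bic, key=bic.get))    # degree each index prefers
```

Since $\ln n > 2$ whenever $n > e^2 \approx 7.4$, BIC penalizes each extra parameter harder than AIC, which is why it tends to select the more parsimonious model.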
10,891
Paradox in model selection (AIC, BIC, to explain or to predict?)
I read Shmueli's "To Explain or to Predict" (2010) a couple of years ago for the first time, and it was one of the most important readings for me. Several big doubts of mine were resolved by that reading. It seems to me that the contradictions you notice are less relevant than they appear. I try to reply to your two questi...
10,892
Can the Mantel test be extended to asymmetric matrices?
It doesn't need to be extended. The original Mantel test, as presented in Mantel's 1967 paper, allows for asymmetric matrices. Recall that this test compares two $n\times n$ distance matrices $X$ and $Y$. We may at this point anticipate a modification of our statistic which will simplify the statistical procedures t...
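A permutation-test sketch of Mantel's statistic that works unchanged for asymmetric matrices (the data here are synthetic; the key point is that rows and columns of $Y$ are permuted *together*, i.e. the objects are relabeled):

```python
import numpy as np

rng = np.random.default_rng(0)

def mantel_stat(X, Y):
    # Mantel's Z = sum over i != j of X_ij * Y_ij; no symmetry assumed.
    mask = ~np.eye(X.shape[0], dtype=bool)
    return float(np.sum(X[mask] * Y[mask]))

def mantel_perm_test(X, Y, n_perm=999):
    n = X.shape[0]
    z_obs = mantel_stat(X, Y)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        # Relabel the objects: permute rows and columns of Y jointly.
        if mantel_stat(X, Y[np.ix_(perm, perm)]) >= z_obs:
            count += 1
    return z_obs, (count + 1) / (n_perm + 1)

# Tiny asymmetric example: Y is a noisy copy of X, so Z should be extreme.
n = 8
X = rng.random((n, n))
Y = X + 0.05 * rng.random((n, n))
z, pval = mantel_perm_test(X, Y)
print(round(pval, 3))   # small p-value expected
```

Nothing in `mantel_stat` uses $X_{ij} = X_{ji}$, which matches Mantel's original formulation.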
10,893
Bootstrapping Generalized Least Squares
As long as your data set is not infinitely large, a bootstrapping approach would likely entail overlaying alternative realizations of biased parameter values and standard errors on top of one another. Shrinkage approaches which attempt to nudge off-diagonals in the upper(lower) triangular toward the diagonal may help,...
10,894
Properties of PCA for dependent observations
Presumably, you could add the time-component as an additional feature to your sampled points, and now they are i.i.d.? Basically, the original data points are conditional on time: $$ p(\mathbf{x}_i \mid t_i) \ne p(\mathbf{x}_i) $$ But, if we define $\mathbf{x}_i' = \{\mathbf{x}_i, t_i\}$, then we have: $$ p(\mathbf{x}'...
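A sketch of the suggestion on synthetic time-dependent data (whether and how to rescale the appended time column relative to the other features is a modeling choice I'm glossing over):

```python
import numpy as np

rng = np.random.default_rng(1)

# Points sampled along a time-dependent curve: not i.i.d. in x alone.
t = np.linspace(0.0, 1.0, 200)
x = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
x += 0.05 * rng.standard_normal((200, 2))

# Augment each point with its time stamp, as suggested above.
x_aug = np.column_stack([x, t])

# Plain PCA via SVD on the centered, augmented data.
xc = x_aug - x_aug.mean(axis=0)
U, s, Vt = np.linalg.svd(xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained.round(3))         # variance share of each component
```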
10,895
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
P(HHHHH) is the probability of having five heads in a row. But, P(H|HHHH) means having heads if the last four tosses were heads. In the former, you're at the beginning of the experiment and in the latter one you have already completed four tosses and know the results. Think about the following rewordings: P(HHHHH): If ...
10,896
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
P(HHHHH) There are 32 possible outcomes from flipping a coin 5 times. Here they are listed:
HHHHH THHHH HTHHH TTHHH HHTHH THTHH HTTHH TTTHH
HHHTH THHTH HTHTH TTHTH HHTTH THTTH HTTTH TTTTH
HHHHT THHHT HTHHT TTHHT HHTHT THTHT HTTHT TTTHT
HHHTT THHTT HTHTT TTHTT HHTTT THTTT HTTTT TTTTT
All of these outcomes are equally l...
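The enumeration above can be generated and checked mechanically:

```python
from itertools import product

# All 2**5 = 32 equally likely sequences of 5 flips.
outcomes = [''.join(seq) for seq in product('HT', repeat=5)]
print(len(outcomes))                                  # 32

p_all_heads = outcomes.count('HHHHH') / len(outcomes)
print(p_all_heads)                                    # 0.03125

# P(H | HHHH): among sequences starting HHHH, the fraction ending in H.
given = [o for o in outcomes if o.startswith('HHHH')]
p_cond = sum(o.endswith('H') for o in given) / len(given)
print(p_cond)                                         # 0.5
```

Conditioning shrinks the sample space to just {HHHHH, HHHHT}, and exactly one of those two ends in heads.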
10,897
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
Often it's helpful to think of conditions in terms of information: $$ \mathbb{P}[H | HHHH] $$ can be read as "The probability of getting Heads, given that I have 4 heads already", i.e., given the information that there are already 4 heads. Of course, we're told the coin tosses are independent, so this information is no...
10,898
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
The notation that does not index the coin throws and/or their outcomes (and does not even separate the outcomes by commas or signs of intersection) may be confusing. How do we know which coin throw each $H$ refers to in $P(H|HHHH)$ or $P(HHHHH)$? We can often guess, but this is needlessly ambiguous. Let us index the co...
10,899
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
I would suggest running a simulation and viewing the conditional distribution as applying a filter to the data. Specifically, you may simulate a large number of (say 5 million) flips of 5 fair coins, find the runs whose first 4 coins came up HHHH, select that subset of the data, and check th...
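The simulation-as-filter idea above, sketched in a few lines (1 million runs instead of 5 million, to keep it quick):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000                             # simulated 5-flip experiments

flips = rng.integers(0, 2, size=(n, 5))   # 1 = heads, 0 = tails

# "Apply a filter on the data": keep runs whose first 4 flips are all heads.
subset = flips[flips[:, :4].sum(axis=1) == 4]

print(len(subset) / n)                    # ≈ 1/16: chance of starting HHHH
print(subset[:, 4].mean())                # ≈ 0.5: the 5th flip is still fair
```

The first number illustrates the unconditional rarity of a long head run; the second shows that, within the filtered subset, the next flip behaves like any fair coin flip.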
10,900
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
Fair independent coin The probability of event a (5th case is heads $H_5$) given event b (already 4 heads $H_4H_3H_2H_1$) $$\underbrace{P(H_5|H_4H_3H_2H_1)}_{\text {P(a given b)}} = \frac{\overbrace{P(H_5 \& H_4H_3H_2H_1)}^{\text{P(a and b)}}}{\underbrace{P(H_5 \& H_4H_3H_2H_1) }_{\text {P(a and b)}}+\underbrace{P(T_5 ...