Columns:
idx — int64, 1 to 56k
question — string, 15 to 155 chars
answer — string, 2 to 29.2k chars
question_cut — string, 15 to 100 chars
answer_cut — string, 2 to 200 chars
conversation — string, 47 to 29.3k chars
conversation_cut — string, 47 to 301 chars
8,201
Imputation before or after splitting into train and test?
You should split before pre-processing or imputing. The division between training and test set is an attempt to replicate the situation where you have past information and are building a model which you will test on future as-yet unknown information: the training set takes the place of the past and the test set takes t...
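The split-then-impute workflow described above can be sketched with scikit-learn (assuming it is available; the data here is a made-up toy array): the imputer's statistics are learned from the training split only and then reused on the test split.

```python
# Fit the imputer on the training split only, then apply the same learned
# statistics to the test split, so no test information leaks into training.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer

X = np.array([[1.0], [2.0], [np.nan], [4.0], [5.0], [np.nan]])
y = np.array([0, 0, 1, 1, 0, 1])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

imputer = SimpleImputer(strategy="mean")
imputer.fit(X_train)                    # statistics come from the training set only
X_train_imp = imputer.transform(X_train)
X_test_imp = imputer.transform(X_test)  # test set reuses the training mean
```

Wrapping the imputer and model in a `Pipeline` makes this ordering automatic under cross-validation as well.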
8,202
Imputation before or after splitting into train and test?
I think you'd better split before you do imputation. For instance, you may want to impute missing values with the column mean. In that case, if you impute first with the train+valid data set and split next, then you have used the validation data set before you built your model, which is how a data leakage problem comes into pictu...
8,203
Imputation before or after splitting into train and test?
Just to add to the above, I would also favour splitting before imputing or any type of pre-processing. Nothing you do with the training data should be informed by the test data (the analogy is that the future should not affect the past). You can then remember what you did to your training set if your test set also needs ...
8,204
How to change data between wide and long formats in R? [closed]
There are several resources on Hadley Wickham's website for the package (now called reshape2), including a link to a paper on the package in the Journal of Statistical Software. Here is a brief example from the paper: > require(reshape2) Loading required package: reshape2 > data(smiths) > smiths subject time age w...
8,205
How to change data between wide and long formats in R? [closed]
Quick-R has a simple example of using the reshape package. See also ?reshape (LINK) for the base R way of moving between wide and long formats.
8,206
How to change data between wide and long formats in R? [closed]
You don't have to use melt and cast. Reshaping data can be done lots of ways. In your particular example on your site, using recast with aggregate was redundant because aggregate does the task fine all on its own. aggregate(cbind(LPMVTUZ, LPMVTVC, LPMVTXC) ~ year, dtm, sum) # or even briefer by first removing the co...
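The same wide-to-long reshape and group-wise aggregation can be sketched in pandas (a Python analogue of the R `melt`/`aggregate` calls above, not the answer's own code; the column names are made up for illustration):

```python
# Wide -> long with melt (like reshape2::melt), and a per-year sum
# (like aggregate(cbind(A, B) ~ year, data, sum)).
import pandas as pd

wide = pd.DataFrame({
    "year": [2000, 2000, 2001],
    "A": [1, 2, 3],
    "B": [4, 5, 6],
})

# wide -> long: one row per (year, series, value)
long = wide.melt(id_vars="year", var_name="series", value_name="value")

# sum each series by year, directly on the wide frame
totals = wide.groupby("year")[["A", "B"]].sum().reset_index()
```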
8,207
How to change data between wide and long formats in R? [closed]
See the reshape2 wiki. It provides more examples than you could expect.
8,208
How to change data between wide and long formats in R? [closed]
Just noticing there's no reference to the more efficient and extensive reshaping methods in data.table here, so I am posting without further comment the excellent answer by Zach/Arun on StackOverflow for a similar question: https://stackoverflow.com/questions/6902087/proper-fastest-way-to-reshape-a-data-table/6913151#6...
8,209
What is "baseline" in precision recall curve
The "baseline curve" in a PR curve plot is a horizontal line with height equal to the number of positive examples $P$ over the total number of training data $N$, i.e. the proportion of positive examples in our data ($\frac{P}{N}$). OK, why is this the case though? Let's assume we have a "junk classifier" $C_J$. $C_J$ ...
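The junk-classifier argument can be checked numerically: a classifier that ranks examples at random achieves precision close to $P/N$ at any cutoff (a small simulation sketch, with made-up class counts):

```python
# The PR baseline is the positive rate P/N; a classifier with uniformly
# random scores lands near that precision at any chosen cutoff k.
import random

random.seed(0)
labels = [1] * 100 + [0] * 900          # P = 100 positives, N = 1000 total
baseline = sum(labels) / len(labels)    # P / N = 0.1

# "junk classifier": assign random scores, predict the top-k as positive
scores = [(random.random(), y) for y in labels]
scores.sort(reverse=True)
k = 500
precision = sum(y for _, y in scores[:k]) / k
```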
8,210
What is "baseline" in precision recall curve
Great answer above. Here is my intuitive way of thinking about it. Imagine you have a bunch of balls, red = positive and yellow = negative, and you throw them randomly into a bucket = positive fraction. Then if you have the same number of red and yellow balls, when you calculate $\text{PREC} = \frac{tp}{tp + fp} = \frac{100}{100 + 100}$ from your bucke...
8,211
What is the difference between Maximum Likelihood Estimation & Gradient Descent?
Maximum likelihood estimation is a general approach to estimating parameters in statistical models by maximizing the likelihood function defined as $$ L(\theta|X) = f(X|\theta) $$ that is, the probability of obtaining data $X$ given some value of parameter $\theta$. Knowing the likelihood function for a given problem y...
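The distinction above is that MLE says *what* to maximize while gradient methods are one way *how*. As a sketch, for a normal model with known variance the MLE of the mean has a closed form (the sample mean), so a gradient-ascent answer can be checked against it (toy data, plain Python):

```python
# Maximize the N(mu, 1) log-likelihood by gradient ascent and compare
# with the closed-form MLE (the sample mean).
data = [2.1, 1.9, 3.0, 2.5, 2.4]

def dlogL(mu):
    # derivative of the log-likelihood for N(mu, 1): sum of (x - mu)
    return sum(x - mu for x in data)

mu = 0.0
lr = 0.01
for _ in range(5000):
    mu += lr * dlogL(mu)    # gradient *ascent* on the log-likelihood

closed_form = sum(data) / len(data)
```

Both routes target the same estimator; the iterative route is what remains when no closed form exists.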
8,212
What is the difference between Maximum Likelihood Estimation & Gradient Descent?
Usually, when we have a likelihood function $f = l(\theta)$, we solve the equation $\frac{df}{d\theta} = 0$ and get the value of $\theta$ that gives the max or min value of $f$, and we are done! But the likelihood function of logistic regression has no closed-form solution this way, so we have to use another method, such as grad...
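The logistic case above can be sketched directly: with no closed-form maximizer, we climb the log-likelihood iteratively (toy 1-D data, plain Python):

```python
# Fit logistic regression by gradient ascent on the log-likelihood;
# the gradient for each parameter is sum((y - sigmoid(w*x + b)) * x_term).
import math

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    gw = sum((y - sigmoid(w * x + b)) * x for x, y in zip(xs, ys))
    gb = sum((y - sigmoid(w * x + b)) for x, y in zip(xs, ys))
    w += lr * gw    # ascend, since we maximize the likelihood
    b += lr * gb
```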
8,213
Statistical classification of text
I recommend these books - they are highly rated on Amazon too: "Text Mining" by Weiss "Text Mining Application Programming", by Konchady For software, I recommend RapidMiner (with the text plugin), free and open-source. This is my "text mining process": collect the documents (usually a web crawl) [sample if too larg...
8,214
Statistical classification of text
A great introductory text covering the topics you mentioned is Introduction to Information Retrieval, which is available online in full text for free.
8,215
Statistical classification of text
Neural networks may be too slow for a large number of documents (and this approach is now pretty much obsolete). You may also check Random Forest among classifiers; it is quite fast, scales nicely and does not need complex tuning.
8,216
Statistical classification of text
Firstly I can recommend you the book Foundations of statistical natural language processing by Manning and Schütze. The methods I would use are word-frequency distributions and ngram language models. The first works very well when you want to classify on topic and your topics are specific and expert (having keywords). ...
8,217
Statistical classification of text
If you're coming from the programming side, one option is to use the Natural Language Toolkit (NLTK) for Python. There's an O'Reilly book, available freely, which might be a less dense and more practical introduction to building classifiers for documents among other things. If you're interested in beefing up on the ...
8,218
Statistical classification of text
Naive Bayes is usually the starting point for text classification; here's an article from Dr. Dobb's on how to implement one. It's also often the ending point for text classification because it's so efficient and parallelizes well; SpamAssassin and POPFile use it.
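A minimal naive Bayes text classifier in the spirit of the answers above can be sketched with scikit-learn (assuming it is available; the documents and labels are made up for illustration):

```python
# Bag-of-words counts fed into a multinomial naive Bayes classifier,
# the classic starting point for spam/ham-style text classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "cheap pills buy now", "win money now", "limited offer buy",
    "meeting at noon", "lunch with the team", "project status report",
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(docs, labels)
pred = clf.predict(["buy cheap pills", "team meeting report"])
```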
8,219
If I generate a random symmetric matrix, what's the chance it is positive definite?
If your matrices are drawn from standard-normal iid entries, the probability of being positive-definite is approximately $p_N\approx 3^{-N^2/4}$, so for example if $N=5$, the chance is 1/1000, and goes down quite fast after that. You can find an extended discussion of this question here. You can somewhat intuit this an...
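The rapid decay of $p_N$ can be checked by simulation: symmetrize a standard-normal draw and test whether the smallest eigenvalue is positive (a sketch; trial counts and the symmetrization convention are choices, not from the source):

```python
# Estimate the probability that a random symmetric matrix is positive
# definite by checking the sign of its smallest eigenvalue.
import numpy as np

rng = np.random.default_rng(0)

def frac_positive_definite(n, trials=2000):
    hits = 0
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        s = (a + a.T) / 2                    # symmetrize the draw
        if np.linalg.eigvalsh(s).min() > 0:  # PD iff all eigenvalues > 0
            hits += 1
    return hits / trials

# The fraction drops rapidly with dimension, roughly like 3**(-n**2 / 4).
```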
8,220
What are "residual connections" in RNNs?
Residual connections are the same thing as 'skip connections'. They are used to allow gradients to flow through a network directly, without passing through non-linear activation functions. Non-linear activation functions, by nature of being non-linear, cause the gradients to explode or vanish (depending on the weights)...
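The mechanism described above is just $y = x + F(x)$: the identity path bypasses the non-linearity, so gradients can flow through unchanged. A minimal numpy sketch (the block $F$ and its weights are illustrative):

```python
# A residual block adds its input to the block's output; the skip path
# is an identity, so if F contributes nothing, y equals x exactly.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1

def F(x):
    # a small non-linear transform: one ReLU layer with illustrative weights
    return np.maximum(W @ x, 0.0)

def residual_block(x):
    return x + F(x)   # identity (skip) path plus the learned residual

x = rng.standard_normal(4)
y = residual_block(x)
```

Because d y/d x = I + dF/dx, the identity term guarantees a direct gradient path even if dF/dx is tiny.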
8,221
What are "residual connections" in RNNs?
With respect to Deep Residual Learning for Image Recognition, I think it's correct to say that a ResNet contains both residual connections and skip connections, and that they are not the same thing. Here's a quotation from the paper: We hypothesize that it is easier to optimize the residual mapping than to optimize th...
8,222
What are "residual connections" in RNNs?
For a better and deeper understanding of the residual connection concept, you may also want to read this paper: Deep Residual Learning for Image Recognition. This is the same paper that is referenced by the "Attention Is All You Need" paper when explaining the encoder element in the Transformer architecture.
8,223
What are "residual connections" in RNNs?
In super-resolution there are many network architectures with residual connections. If you have a low-resolution picture x and you want to reconstruct a high-resolution picture y, a network has to learn not only to predict the missing pixels of y, it also has to learn the representation of x. Because x and y have a...
8,224
What are "residual connections" in RNNs?
Adding to the answers above, a residual connection is a mechanism which carries gradients from the initial layers to the later layers in a deep network, keeping its gradient from vanishing. It does not resolve the vanishing gradient problem but avoids it, much as shallow networks do. You can imagine this as a bunc...
8,225
When are confidence intervals useful?
I like to think of CIs as a way to escape the hypothesis testing (HT) framework, at least the binary decision framework following Neyman's approach, and to keep in line with the theory of measurement in some way. More precisely, I view them as closer to the reliability of an estimation (a difference of means, for insta...
8,226
When are confidence intervals useful?
An alternative approach relevant to your 2nd Q, "Are there ways of looking at confidence intervals, at least in some circumstances, which would be meaningful to users of statistics?": You should take a look at Bayesian inference and the resulting credible intervals. A 95% credible interval can be interpreted as an inte...
8,227
When are confidence intervals useful?
You are correct in saying that the 95% confidence intervals are things that result from using a method that works in 95% of cases, rather than any individual interval having a 95% likelihood of containing the expected value. "The logical basis and interpretation of confidence limits are, even now, a matter of controver...
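The "method that works in 95% of cases" reading can be made concrete by simulation: draw many samples, build the interval each time, and count how often it covers the true parameter (a sketch with a known-variance z-interval and made-up parameters):

```python
# "95%" describes the procedure, not any single interval: over repeated
# samples, about 95% of the constructed intervals cover the true mean.
import random
import statistics

random.seed(0)
TRUE_MU, SIGMA, N, Z = 10.0, 2.0, 50, 1.96

covered = 0
trials = 2000
for _ in range(trials):
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    m = statistics.fmean(sample)
    half = Z * SIGMA / N ** 0.5        # known-sigma z interval half-width
    if m - half <= TRUE_MU <= m + half:
        covered += 1
coverage = covered / trials
```

Any single realized interval either covers the true mean or it does not; the 95% is a property of the long-run procedure.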
8,228
When are confidence intervals useful?
I think the premise of this question is flawed because it denies the distinction between the uncertain and the known. Describing a coin flip provides a good analogy. Before the coin is flipped, the outcome is uncertain; afterwards, it is no longer "hypothetical." Confusing this fait accompli with the actual situation...
8,229
When are confidence intervals useful?
This is a great discussion. I feel that Bayesian credible intervals and likelihood support intervals are the way to go, as well as Bayesian posterior probabilities of events of interest (e.g., a drug is efficacious). But supplanting P-values with confidence intervals is a major gain. Virtually every issue of the fin...
8,230
When are confidence intervals useful?
To address your question directly: Suppose that you are contemplating the use of a machine to fill a cereal box with a certain amount of cereal. Obviously, you do not want to overfill/underfill the box. You want to assess the reliability of the machine. You perform a series of tests like so: (a) Use the machine to fill...
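The "method that works in 95% of cases" reading from the answers above can be checked by simulation. A minimal numpy sketch (numpy assumed available; a normal-approximation interval is used for simplicity, and all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 10.0, 2.0, 50, 2000
z = 1.96  # approximate 97.5th percentile of the standard normal

covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    # does this particular interval happen to contain the true mean?
    if m - z * se <= mu <= m + z * se:
        covered += 1

coverage = covered / reps  # long-run proportion, close to 0.95
```

Each individual interval either contains mu or it does not; only the long-run proportion `coverage` is (approximately) 0.95, which is exactly the distinction the answers above draw.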
8,231
Is hour of day a categorical variable?
Depending on what you want to model, hours (and many other attributes like seasons) are actually ordinal cyclic variables. In the case of seasons you can consider them to be more or less categorical, and in the case of hours you can model them as continuous as well. However, using hours in your model in a form that does not t...
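The cyclic nature described above is often handled with a sine/cosine encoding, which maps each hour onto the unit circle so that hour 23 and hour 0 end up adjacent. A minimal numpy sketch (numpy assumed available; `circ_dist` is an illustrative helper, not from the answer):

```python
import numpy as np

hours = np.arange(24)
# map each hour onto the unit circle so that 23 and 0 become neighbours
hour_sin = np.sin(2 * np.pi * hours / 24)
hour_cos = np.cos(2 * np.pi * hours / 24)

def circ_dist(h1, h2):
    """Euclidean distance in (sin, cos) space, which respects the cycle."""
    return np.hypot(hour_sin[h1] - hour_sin[h2], hour_cos[h1] - hour_cos[h2])
```

With this encoding the distance between 23:00 and 00:00 equals the distance between 00:00 and 01:00, unlike the raw integer representation.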
8,232
Is hour of day a categorical variable?
Hour of the day isn't best represented as a categorical variable, because there is a natural ordering of the values. Hair color, for example, is categorical, because the ordering of the categories has no meaning - {red, brown, blonde} is as valid as {blonde, brown, red}. Hour of the day, on the other hand, has a natura...
8,233
Is hour of day a categorical variable?
Theoretically, it depends on how you format the variable, i.e. it can be "continuous" (modeled with a single coefficient) or categorical (a coefficient per "hour" of day). You could also do a mix of both, e.g. piece-wise functions. Practically, because 0 and 23 are essentially the same "hour" of day, I would consider gr...
8,234
How to judge if a supervised machine learning model is overfitting or not?
In short: by validating your model. The main reason for validation is to check that no overfitting occurs and to estimate generalized model performance. Overfit: First, let us look at what overfitting actually is. Models are normally trained to fit a dataset by minimizing some loss function on a training set. There is however a ...
8,235
How to judge if a supervised machine learning model is overfitting or not?
Here's how you can estimate the extent of overfitting: Get an internal error estimate. Either resubstitution (= predict the training data), or, if you do an inner cross "validation" to optimize hyperparameters, that measure would also be of interest. Get an independent test set error estimate. Usually, resampling (iterated ...
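The internal-vs-independent error gap described above can be demonstrated with a small numpy-only sketch. Polynomial regression stands in for a generic model here, and all names and parameters are illustrative choices, not from the answer:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)
y = np.sin(np.pi * x) + rng.normal(0, 0.3, size=x.size)

# even/odd split into train and test halves
x_tr, y_tr = x[::2], y[::2]
x_te, y_te = x[1::2], y[1::2]

def errors(degree):
    """Internal (resubstitution) and independent test-set MSE for one model."""
    coefs = np.polyfit(x_tr, y_tr, degree)
    mse = lambda xs, ys: np.mean((np.polyval(coefs, xs) - ys) ** 2)
    return mse(x_tr, y_tr), mse(x_te, y_te)

tr3, te3 = errors(3)     # modest model: train and test errors are comparable
tr15, te15 = errors(15)  # flexible model: train error drops, test error does not follow
```

The widening gap between the internal estimate (`tr15`) and the independent estimate (`te15`) is precisely the signal of overfitting the answer describes.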
8,236
How to judge if a supervised machine learning model is overfitting or not?
Overfitting is simply the direct consequence of treating the statistical parameters, and therefore the results obtained, as useful information without checking that they were not obtained in a random way. Therefore, in order to estimate the presence of overfitting we have to use the algorithm on a database equi...
8,237
Intuitive explanation of how UMAP works, compared to t-SNE
You said that your understanding of t-SNE is based on https://www.youtube.com/watch?v=NEaUSP4YerM and you are looking for an explanation of UMAP on a similar level. I watched this video and it is pretty accurate in what it says (I have some minor nitpicks, but overall it is fine). Funny enough, it almost applies to UMA...
8,238
Intuitive explanation of how UMAP works, compared to t-SNE
The main difference between t-SNE and UMAP is the interpretation of the distance between objects or "clusters". I use the quotation marks since both algorithms are not meant for clustering - they are meant for visualization mostly. t-SNE preserves local structure in the data. UMAP claims to preserve both local and most...
8,239
Drawing from Dirichlet distribution
First, draw $K$ independent random samples $y_1, \ldots, y_K$ from Gamma distributions each with density $$ \textrm{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i-1} \; e^{-y_i}}{\Gamma (\alpha_i)},$$ and then set $$x_i = \frac{y_i}{\sum_{j=1}^K y_j}. $$ Now, $x_1,...,x_K$ will follow a Dirichlet distribution. The Wikipedia ...
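The Gamma construction above translates directly into numpy (numpy assumed available; `dirichlet_sample` is an illustrative helper name):

```python
import numpy as np

rng = np.random.default_rng(42)

def dirichlet_sample(alpha):
    """One draw from Dirichlet(alpha): independent Gamma(alpha_i, 1) draws,
    normalized to sum to 1."""
    y = rng.gamma(shape=np.asarray(alpha, dtype=float), scale=1.0)
    return y / y.sum()

x = dirichlet_sample([2.0, 3.0, 5.0])
```

The resulting vector lies on the probability simplex: all entries are positive and sum to 1, as the construction guarantees.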
8,240
Drawing from Dirichlet distribution
A simple (though not exact) method uses the fact that drawing from a Dirichlet distribution is equivalent to Polya's urn experiment: draw from a set of colored balls, and each time you draw a ball, put it back in the urn together with a second ball of the same color. Consider your Dirichlet parameters $\alpha_...
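The urn scheme above can be simulated directly. A rough numpy sketch (the helper name `polya_urn` and the draw count are arbitrary choices; the approximation improves as the number of draws grows):

```python
import numpy as np

rng = np.random.default_rng(7)

def polya_urn(alpha, n_draws=2000):
    """Approximate one Dirichlet(alpha) draw by simulating Polya's urn."""
    counts = np.asarray(alpha, dtype=float).copy()  # alpha_i 'balls' of each colour
    for _ in range(n_draws):
        probs = counts / counts.sum()
        i = rng.choice(len(counts), p=probs)
        counts[i] += 1.0  # return the ball plus one more of the same colour
    return counts / counts.sum()

p = polya_urn([1.0, 1.0, 1.0])
```

The final colour proportions form one approximate draw from the Dirichlet; repeating the whole experiment yields fresh draws.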
8,241
Why are rectified linear units considered non-linear?
RELUs are nonlinearities. To help your intuition, consider a very simple network with 1 input unit $x$, 2 hidden units $y_i$, and 1 output unit $z$. With this simple network we could implement an absolute value function, $$z = \max(0, x) + \max(0, -x),$$ or something that looks similar to the commonly used sigmoid func...
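The construction in the answer, $z = \max(0, x) + \max(0, -x)$, can be verified in a couple of lines of numpy (numpy assumed available):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# two ReLU hidden units combined linearly reproduce |x|,
# something no purely linear network can represent
x = np.linspace(-3, 3, 7)
z = relu(x) + relu(-x)
```

Since the absolute value is not a linear function of `x`, this tiny network already demonstrates that ReLU units introduce genuine nonlinearity.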
8,242
explain meaning and purpose of L2 normalization
"we scale the values so that if they were all squared and summed, the value would be 1" That's correct. "I'm not totally sure how that would be helpful for the model, though" Consider a simpler case, where we just count the number of times each word appears in each document. In this case, two documents might appear diff...
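The scaling described above, in which the squared entries sum to 1, can be sketched in numpy (numpy assumed available). It makes two documents with the same word proportions but different lengths identical, which is the point of the word-count example:

```python
import numpy as np

# two "documents" with the same word proportions but different lengths
short_doc = np.array([1.0, 2.0, 0.0])
long_doc = 10 * short_doc

def l2_normalize(v):
    """Scale v so that the sum of its squared entries equals 1."""
    return v / np.linalg.norm(v)

a, b = l2_normalize(short_doc), l2_normalize(long_doc)
```

After normalization the two vectors coincide, so the model sees only the relative word frequencies, not the raw document length.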
8,243
k-NN computational complexity
Assuming $k$ is fixed (as both of the linked lectures do), then your algorithmic choices will determine whether your computation takes $O(nd+kn)$ runtime or $O(ndk)$ runtime. First, let's consider a $O(nd+kn)$ runtime algorithm: Initialize $selected_i = 0$ for all observations $i$ in the training set. For each training...
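The $O(nd+kn)$ procedure above can be written out naively in Python. This uses pure-Python inner loops to mirror the complexity argument (compute all distances once in $O(nd)$, then make $k$ linear scans of $O(n)$ each); it is a pedagogical sketch, not an efficient implementation, and the names are illustrative:

```python
import numpy as np

def knn_indices(X_train, x_query, k):
    """Indices of the k nearest training points to x_query in O(n*d + k*n)."""
    dists = np.sum((X_train - x_query) ** 2, axis=1)  # O(n*d)
    selected = []
    taken = np.zeros(len(X_train), dtype=bool)
    for _ in range(k):                                # k passes, O(n) each
        best = None
        for i in range(len(X_train)):
            if not taken[i] and (best is None or dists[i] < dists[best]):
                best = i
        taken[best] = True
        selected.append(best)
    return selected

X = np.array([[0.0], [1.0], [2.0], [10.0]])
nn = knn_indices(X, np.array([0.4]), k=2)
```

Sorting all distances instead would cost $O(nd + n\log n)$, which is why the repeated-minimum extraction is stated separately in the lecture notes.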
8,244
Clustering a correlation matrix
Looks like a job for block modeling. Google for "block modeling" and the first few hits are helpful. Say we have a covariance matrix where N=100 and there are actually 5 clusters: What block modelling is trying to do is find an ordering of the rows, so that the clusters become apparent as 'blocks': Below is a code ex...
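One simple way to make the "blocks" described above visible is to reorder the variables by the leading eigenvector of the correlation matrix, so variables driven by the same latent factor become adjacent. A toy numpy sketch under assumed two-factor data (this spectral ordering is one of several possible orderings, not the specific code the answer refers to):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
f1, f2 = rng.normal(size=(2, n))

# variables 0, 2, 4 load on factor 1; variables 1, 3, 5 on factor 2
blocks = np.array([0, 1, 0, 1, 0, 1])
noise = np.where(blocks == 0, 0.3, 1.0)
X = np.column_stack([
    (f1 if b == 0 else f2) + s * rng.normal(size=n)
    for b, s in zip(blocks, noise)
])

C = np.corrcoef(X, rowvar=False)

# sort variables by the leading eigenvector of C: variables from the
# same block land next to each other, so blocks appear on the diagonal
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvecs[:, -1])
C_ordered = C[np.ix_(order, order)]
```

Plotting `C_ordered` as a heat map would show the two high-correlation blocks on the diagonal, which is exactly what the block-modeling reordering aims for.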
8,245
Clustering a correlation matrix
Have you looked at hierarchical clustering? It can work with similarities, not only distances. You can cut the dendrogram at a height where it splits into k clusters, but usually it is better to visually inspect the dendrogram and decide on a height to cut. Hierarchical clustering is also often used to produce a clever...
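A hand-rolled single-linkage sketch on the distance $1 - |\text{correlation}|$ illustrates the idea (in practice one would use scipy.cluster.hierarchy and inspect the dendrogram as the answer suggests; this toy pure-numpy version exists only to show the mechanics, and its names are invented):

```python
import numpy as np

def single_linkage_clusters(corr, k):
    """Greedy single-linkage agglomeration on distance 1 - |correlation|,
    stopped when k clusters remain (i.e., cutting the dendrogram at k)."""
    d = 1.0 - np.abs(corr)
    clusters = [{i} for i in range(len(corr))]
    while len(clusters) > k:
        best = (None, None, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                dist = min(d[i, j] for i in clusters[a] for j in clusters[b])
                if dist < best[2]:
                    best = (a, b, dist)
        a, b, _ = best
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

# toy correlation matrix with two obvious groups {0, 1} and {2, 3}
C = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.0, 0.1],
              [0.1, 0.0, 1.0, 0.8],
              [0.0, 0.1, 0.8, 1.0]])
groups = single_linkage_clusters(C, k=2)
```

Choosing `k` here corresponds to cutting the dendrogram at a height; as the answer notes, inspecting the dendrogram visually is usually a better way to pick that height.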
8,246
Clustering a correlation matrix
Have you looked into correlation clustering? This clustering algorithm uses the pair-wise positive/negative correlation information to automatically propose the optimal number of clusters with a well defined functional and a rigorous generative probabilistic interpretation.
8,247
Clustering a correlation matrix
I would filter at some meaningful (statistical-significance) threshold and then use the Dulmage-Mendelsohn decomposition to get the connected components. Before that, you could try to remove problems like transitive correlations (A highly correlated to B, B to C, C to D, so there is a component containing all of them...
8,248
Dropping unused levels in facets with ggplot2 [closed]
Your example data just doesn't have any unused levels to drop. Check the behavior in this example: dat <- data.frame(x = runif(12), y = runif(12), grp1 = factor(rep(letters[1:4],times = 3)), grp2 = factor(rep(LETTERS[1:2],times = 6))) levels(dat$grp2) <- LETTERS[1:...
8,249
Visualizing the intersections of many sets
When you have a large number of sets, I would try something that is more linear and shows the links directly (like a network graph). Flare and Protovis both have utilities to handle these visualizations. See this question for some examples like this:
8,250
Visualizing the intersections of many sets
This won't compete with @Shane's answer, because circular displays are really well suited for displaying complex relationships with high-dimensional datasets. For Venn diagrams, I've been using the venneuler R package. It has a simple yet intuitive interface and produces nifty diagrams with transparency, compared to the ...
8,251
Visualizing the intersections of many sets
We developed a matrix-based approach for set intersections called UpSet; you can check it out at http://vcg.github.io/upset/. Here is an example: The matrix on the left identifies the intersection a row represents; the last row here, for example, is the intersection of the "Action, Adventure, and Children" movie genre...
8,252
Three versions of discriminant analysis: differences and how to use them
"Fisher's Discriminant Analysis" is simply LDA in a situation of 2 classes. When there are only 2 classes, computations by hand are feasible and the analysis is directly related to Multiple Regression. LDA is the direct extension of Fisher's idea to the situation of any number of classes, and it uses matrix algebra devices (such...
8,253
Three versions of discriminant analysis: differences and how to use them
I find it hard to agree that FDA is LDA for two classes, as @ttnphns suggested. I recommend two very informative and beautiful lectures on this topic by Professor Ali Ghodsi: LDA & QDA. In addition, page 108 of the book The Elements of Statistical Learning (pdf) has a description of LDA consistent with the lecture. FDA...
8,254
How to generate a large full-rank random correlation matrix with some strong correlations present?
Other answers came up with nice tricks to solve my problem in various ways. However, I found a principled approach that I think has a large advantage of being conceptually very clear and easy to adjust. In this thread: How to efficiently generate random positive-semidefinite correlation matrices? -- I described and pro...
8,255
How to generate a large full-rank random correlation matrix with some strong correlations present?
A simple thing, but it may work for benchmark purposes: take your option 2 and inject some correlations into the starting matrix. The distribution is somewhat uniform, and by changing $a$ you can get concentration near 1 and -1 or near 0. import numpy as np from random import choice import matplotlib.pyplot as plt n = 100 a = 2 ...
8,256
How to generate a large full-rank random correlation matrix with some strong correlations present?
Hmm, after I've done an example in my MatMate language I see that there is already a Python answer, which might be preferable because Python is widely used. But because you still had questions I show you my approach using the MatMate matrix language; perhaps it is more self-commenting. Method 1 (Using MatMate): v=12 ...
8,257
How to generate a large full-rank random correlation matrix with some strong correlations present?
Interesting question (as always!). How about finding a set of example matrices that exhibit the properties you desire, and then take convex combinations thereof, since if $A$ and $B$ are positive definite, then so is $\lambda A + (1-\lambda)B$. As a bonus, no rescaling of the diagonals will be necessary, by the conve...
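The claim is easy to verify numerically: a convex combination of positive-definite correlation matrices is again positive definite, and the unit diagonal is preserved automatically. A short sketch (the matrices here are generated randomly just for the check):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_corr(n, k, rng):
    """A random full-rank correlation matrix via B B^T, rescaled to unit diagonal."""
    b = rng.standard_normal((n, k))
    s = b @ b.T
    d = np.sqrt(np.diag(s))
    return s / np.outer(d, d)

a = random_corr(20, 40, rng)
b = random_corr(20, 40, rng)

lam = 0.3
c = lam * a + (1 - lam) * b   # convex combination of PD matrices is PD

min_eig = np.linalg.eigvalsh(c).min()
```

The diagonal of `c` is $\lambda \cdot 1 + (1-\lambda) \cdot 1 = 1$, which is the "no rescaling necessary" point made above.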
8,258
How to generate a large full-rank random correlation matrix with some strong correlations present?
R has a package (clusterGeneration) that implements the method in: Joe, H. (2006) Generating Random Correlation Matrices Based on Partial Correlations. Journal of Multivariate Analysis, 97, 2177--2189. Example: > (cormat10 = clusterGeneration::rcorrmatrix(10, alphad = 1/100000000000000)) [,1] [,2] [,3] ...
8,259
The difference of kernels in SVM?
The linear kernel is what you would expect, a linear model. I believe that the polynomial kernel is similar, but the boundary is of some defined but arbitrary order (e.g. order 3: $ a= b_1 + b_2 \cdot X + b_3 \cdot X^2 + b_4 \cdot X^3$). RBF uses normal curves around the data points, and sums these so that the decisio...
8,260
The difference of kernels in SVM?
Relying on basic knowledge of the reader about kernels. Linear kernel: $K(X, Y) = X^T Y$. Polynomial kernel: $K(X, Y) = (\gamma \cdot X^T Y + r)^d,\ \gamma > 0$. Radial basis function (RBF) kernel: $K(X, Y) = \exp(-\|X-Y\|^2/(2\sigma^2))$, which in simple form can be written as $\exp(-\gamma \cdot \|X - Y\|^2),\ \gamma > 0$. Sigmoid kernel: $K(X, Y) = \t...
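These formulas translate directly into code; a small sketch (the parameter values `gamma`, `r`, `d` are arbitrary illustrations, and the last lines check that the two RBF parameterisations coincide when $\gamma = 1/(2\sigma^2)$):

```python
import numpy as np

def linear(x, y):
    return x @ y

def poly(x, y, gamma=1.0, r=1.0, d=3):
    return (gamma * (x @ y) + r) ** d

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sigmoid(x, y, gamma=0.1, r=0.0):
    return np.tanh(gamma * (x @ y) + r)

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])

sigma = 1.0
# gamma = 1 / (2 sigma^2) makes the two RBF forms agree
same = np.isclose(rbf(x, y, gamma=1 / (2 * sigma**2)),
                  np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2)))
```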
8,261
The difference of kernels in SVM?
This question can be answered from a theoretical and a practical point of view. From the theoretical side, the No-Free-Lunch theorem states that there are no guarantees for one kernel to work better than another; that is, a priori you can neither know nor find out which kernel will work better. From a practical point of vi...
8,262
The difference of kernels in SVM?
When reflecting on what a kernel is "good for" or when it should be used, there are no hard and fast rules. If your classifier/regressor is performing well with a given kernel, it is appropriate; if not, consider changing to another. Insight into how your kernel may perform, specifically if it is a classification ...
8,263
What did my neural network just learn? What features does it care about and why?
It is true that it's hard to understand what a neural network is learning but there has been a lot of work on that front. We definitely can get some idea of what our network is looking for. Let's consider the case of a convolutional neural net for images. We have the interpretation for our first layer that we are slidi...
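The first-layer interpretation can be illustrated without any deep-learning framework: sliding a small filter over an image and recording its response is plain cross-correlation. A toy sketch with a hand-made vertical-edge filter (the image and filter here are illustrative, not taken from any trained network):

```python
import numpy as np

def cross_correlate(img, filt):
    """'Valid' 2-D cross-correlation: slide filt over img, no padding."""
    fh, fw = filt.shape
    h = img.shape[0] - fh + 1
    w = img.shape[1] - fw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + fh, j:j + fw] * filt)
    return out

# A vertical-edge filter, similar to what trained first layers often learn.
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Image: dark left half, bright right half -> a vertical edge in the middle.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

response = cross_correlate(img, edge_filter)
# The response is large in magnitude where the window straddles the edge
# and zero in the flat regions -- the filter "fires" only on its pattern.
```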
8,264
What did my neural network just learn? What features does it care about and why?
Neural networks are black-box models that do not give "easy to understand" rules about what has been learned. Specifically, what has been learned are the parameters in the model, but the number of parameters can be large: hundreds of thousands of parameters is very normal. In addition, it is also not clear on the...
8,265
How should Feature Selection and Hyperparameter optimization be ordered in the machine learning pipeline?
Like you already observed yourself, your choice of features (feature selection) may have an impact on which hyperparameters for your algorithm are optimal, and which hyperparameters you select for your algorithm may have an impact on which choice of features would be optimal. So, yes, if you really really care about sq...
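One way to act on this observation is to tune the number of selected features and the model hyperparameter jointly on a validation set. A self-contained sketch (top-k correlation ranking and ridge regression are stand-in choices for whatever selector and model you actually use; the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * rng.standard_normal(n)  # 2 informative features

X_tr, y_tr = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

def top_k(Xt, yt, k):
    """Rank features by |correlation with the target|, keep the top k."""
    scores = np.abs([np.corrcoef(Xt[:, j], yt)[0, 1] for j in range(Xt.shape[1])])
    return np.argsort(scores)[::-1][:k]

def ridge_fit(Xt, yt, lam):
    return np.linalg.solve(Xt.T @ Xt + lam * np.eye(Xt.shape[1]), Xt.T @ yt)

best = (np.inf, None)
for k in (1, 2, 5, 10):             # feature-selection "hyperparameter"
    cols = top_k(X_tr, y_tr, k)
    for lam in (0.01, 1.0, 100.0):  # model hyperparameter
        w = ridge_fit(X_tr[:, cols], y_tr, lam)
        err = np.mean((X_val[:, cols] @ w - y_val) ** 2)
        if err < best[0]:
            best = (err, (k, lam))

best_err, (best_k, best_lam) = best
```

The point of the joint loop is exactly the interaction described above: the best `lam` can differ per `k`, and vice versa, so neither is fixed before searching the other.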
8,266
How should Feature Selection and Hyperparameter optimization be ordered in the machine learning pipeline?
No one has mentioned approaches that make hyper-parameter tuning and feature selection the same, so I will talk about them. For this case you should engineer all the features you want at the beginning and include them all. Research in the statistics community has tried to make feature selection a tuning criterion. Basi...
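The canonical example of this idea is the LASSO, where the regularisation weight λ is a tuning parameter that simultaneously performs feature selection by driving coefficients exactly to zero. A minimal coordinate-descent sketch (illustrative, not a production implementation):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate descent for (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    z = (X ** 2).sum(axis=0) / n            # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]  # residual excluding feature j
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, lam) / z[j]
    return w

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(n)  # features 2-4 irrelevant

w_sparse = lasso_cd(X, y, lam=0.5)  # larger lam -> more coefficients at exactly 0
w_ols = lasso_cd(X, y, lam=0.0)     # lam = 0 recovers ordinary least squares
```

Sweeping `lam` over a grid and picking the value with the best validation score then tunes the hyperparameter and selects the feature subset in one step.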
8,267
How should Feature Selection and Hyperparameter optimization be ordered in the machine learning pipeline?
@DennisSoemers has a great solution. I'll add two similar solutions that are a bit more explicit and based on Feature Engineering and Selection: A Practical Approach for Predictive Models by Max Kuhn and Kjell Johnson. Kuhn uses the term resample to describe a fold of a dataset, but the dominant term on StackExchang...
8,268
How should Feature Selection and Hyperparameter optimization be ordered in the machine learning pipeline?
I think you are overthinking this quite a bit. Feature selection, which is part of feature engineering, is usually helpful, but some redundant features are not very harmful in the early stage of a machine learning system. So the best practice is to generate all meaningful features first, then use them to select algorithm...
8,269
Why is it important to include a bias correction term for the Adam optimizer for Deep Learning?
The problem of NOT correcting the bias According to the paper In case of sparse gradients, for a reliable estimate of the second moment one needs to average over many gradients by choosing a small value of β2; however it is exactly this case of small β2 where a lack of initialisation bias correction would lead to i...
8,270
Why is it important to include a bias correction term for the Adam optimizer for Deep Learning?
An example, with some number crunching, might be intuitive and also help debunk the idea of using the initial gradient instead of $0$. Consider the 1D problem $f(x)=x$, where $f'(x)=1$. $\beta_1=0.9$ and $\beta_2=0.999$ as usual. The first few values of $m_t$ and $v_t$ (rounded to 4 places) are given below. \begin{arra...
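The table values are easy to reproduce; a few lines of Python computing $m_t$, $v_t$ and the corrected $\hat m_t$, $\hat v_t$ for the constant gradient $g_t = 1$:

```python
beta1, beta2 = 0.9, 0.999
m = v = 0.0
rows = []
for t in range(1, 6):
    g = 1.0                       # f(x) = x, so f'(x) = 1 everywhere
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)  # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    rows.append((t, m, v, m_hat, v_hat))

# At t = 1: m = 0.1 and v = 0.001 (far from the true moments),
# but m_hat = v_hat = 1.0 -- exactly the true first and second
# moments of the constant gradient.
```

Note also that the uncorrected step direction $m_1/\sqrt{v_1} = 0.1/\sqrt{0.001} \approx 3.16$ would be far larger than the corrected $\hat m_1/\sqrt{\hat v_1} = 1$, which is the "initial steps that are much larger" problem.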
8,271
Why is it important to include a bias correction term for the Adam optimizer for Deep Learning?
This correction term isn't really about de-biasing the exponentially-weighted moving average filter, it is just that the optimum EWMA filter should have a transient component -- this is well known within signal processing: see, e.g., Sophocles J. Orfanidis, Applied Optimum Signal Processing, ch 6. Consider the followi...
8,272
Why is it important to include a bias correction term for the Adam optimizer for Deep Learning?
All the above answers are helpful. Why not just visualize the claims? Here is an animation that I created to demonstrate the following statement from the paper lack of initialisation bias correction would lead to initial steps that are much larger. As we can observe, without a bias correction the learning rate becom...
8,273
Difference between standard and spherical k-means algorithms
The question is: What is the difference between classical k-means and spherical k-means? Classic K-means: In classic k-means, we seek to minimize a Euclidean distance between the cluster center and the members of the cluster. The intuition behind this is that the radial distance from the cluster-center to the elemen...
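In code, the practical difference is small: project all points onto the unit sphere and replace the Euclidean objective with cosine similarity. A bare-bones sketch (the data and the deliberately naive initialisation are illustrative):

```python
import numpy as np

def spherical_kmeans(X, centroids, n_iter=10):
    """X rows are normalised internally; assignments maximise cosine similarity."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    for _ in range(n_iter):
        sims = X @ centroids.T            # cosine similarity = dot product on the sphere
        labels = sims.argmax(axis=1)      # assign to the most similar centroid
        for c in range(centroids.shape[0]):
            mean = X[labels == c].mean(axis=0)
            centroids[c] = mean / np.linalg.norm(mean)  # re-project onto the sphere
    return labels, centroids

rng = np.random.default_rng(0)
# Two bundles of directions near (1, 0) and (0, 1), scaled by different
# lengths -- the lengths are irrelevant once everything is normalised.
a = np.array([1.0, 0.0]) + 0.05 * rng.standard_normal((20, 2))
b = np.array([0.0, 1.0]) + 0.05 * rng.standard_normal((20, 2))
X = np.vstack([a * 3.0, b * 0.5])

init = np.vstack([X[0] / np.linalg.norm(X[0]),
                  X[20] / np.linalg.norm(X[20])])
labels, centroids = spherical_kmeans(X, init, n_iter=10)
```

Classic k-means on the raw (unnormalised) `X` would be pulled apart by the 3.0 vs 0.5 scaling; spherical k-means clusters by direction only.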
8,274
If 'correlation doesn't imply causation', then if I find a statistically significant correlation, how can I prove the causality?
A very likely reason for 2 variables being correlated is that their changes are linked to a third variable. Other likely reasons are chance (if you test enough non-correlated variables for correlation, some will show correlation), or very complex mechanisms that involve multiple steps. See http://tylervigen.com/ for e...
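The "chance" part is easy to demonstrate: test enough pairs of genuinely independent variables and roughly 5% will look significantly correlated at the usual α = 0.05. A quick simulation (0.361 is the approximate two-sided 5% critical value of |r| for n = 30):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n = 2000, 30
r_crit = 0.361   # approx. two-sided 5% cutoff for |r| at n = 30

significant = 0
for _ in range(n_pairs):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)   # independent of x by construction
    r = np.corrcoef(x, y)[0, 1]
    if abs(r) > r_crit:
        significant += 1

frac = significant / n_pairs     # hovers around 0.05 despite zero true correlation
```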
8,275
If 'correlation doesn't imply causation', then if I find a statistically significant correlation, how can I prove the causality?
Regardless of whether the design is experimental or observational, an association between a variable A and an outcome Y reflects a causal relationship between A and Y if there are no open backdoor paths between A and Y. In an experimental design, this is most easily achieved by randomization of exposure or treatment as...
8,276
If 'correlation doesn't imply causation', then if I find a statistically significant correlation, how can I prove the causality?
If A and B are correlated, and after you have excluded coincidence, it is most likely that either A causes B, or B causes A, or some possibly unknown cause X causes both A and B. The first step would be to examine a possible mechanism. Could you think of how A could cause B, or vice versa, or what kind of other cause X coul...
8,277
If 'correlation doesn't imply causation', then if I find a statistically significant correlation, how can I prove the causality?
Consider an increase in divorce rate, correlated with an increase in lawyer income. Intuitively it seems obvious that these metrics should be correlated. More couples (demand) file for more divorces, so more lawyers (supply) raise their prices. It seems that an increase in divorce rate causes an increase in lawyer incom...
8,278
If 'correlation doesn't imply causation', then if I find a statistically significant correlation, how can I prove the causality?
Interventional (experimental) data as described by gnasher and Peter is the most straightforward way to make a good case for a causal relationship. However, only Ash's answer mentions the possibility of deducing a causal relationship via observational data. In addition to the backdoor method that he mentions, the fro...
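The backdoor idea can be checked with a toy simulation: a confounder X drives both exposure A and outcome Y; regressing Y on A alone is biased, while adjusting for X (closing the backdoor path A ← X → Y) recovers the true effect. The structural equations below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

x = rng.standard_normal(n)                       # confounder
a = x + rng.standard_normal(n)                   # exposure, driven by x
y = 2.0 * a + 3.0 * x + rng.standard_normal(n)   # true causal effect of A on Y is 2

# Naive: regress Y on A only -> biased, absorbs the backdoor path via X
# (expected slope: 2 + 3 * cov(X, A) / var(A) = 2 + 3 * 0.5 = 3.5).
naive = np.linalg.lstsq(np.c_[a, np.ones(n)], y, rcond=None)[0][0]

# Adjusted: include the confounder -> the backdoor path is blocked.
adjusted = np.linalg.lstsq(np.c_[a, x, np.ones(n)], y, rcond=None)[0][0]
```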
8,279
If 'correlation doesn't imply causation', then if I find a statistically significant correlation, how can I prove the causality?
To make a causal statement, you need to have both random sampling and random assignment. Random sampling: each individual has an equal probability of being selected for the study. Random assignment: each individual in the experiment has slightly different traits, so when selecting a treatment and a control group from t...
8,280
"Kernel density estimation" is a convolution of what?
Corresponding to any batch of data $X = (x_1, x_2, \ldots, x_n)$ is its "empirical density function" $$f_X(x) = \frac{1}{n}\sum_{i=1}^{n} \delta(x-x_i).$$ Here, $\delta$ is a "generalized function." Despite that name, it isn't a function at all: it's a new mathematical object that can be used only within integrals. I...
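Concretely, evaluating the convolution $(K_h * f_X)(x)$ reduces, by the sifting property of $\delta$, to averaging the kernel centred at each datum — which is exactly the usual KDE formula. A short numerical sketch:

```python
import numpy as np

def gaussian_kernel(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def kde(x, data, h):
    """Average of kernels centred at the data: (K_h * f_X)(x)."""
    return gaussian_kernel(x[:, None] - data[None, :], h).mean(axis=1)

rng = np.random.default_rng(0)
data = rng.standard_normal(100)

grid = np.linspace(-6, 6, 2001)
f_hat = kde(grid, data, h=0.4)

# Like any density, the estimate should integrate to ~1 (Riemann sum here).
area = np.sum(f_hat) * (grid[1] - grid[0])
```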
8,281
"Kernel density estimation" is a convolution of what?
Another convenient way of understanding this connection is from the modeling perspective. Recall that if $X$ and $Y$ are independent, then the PDF of $X+Y$ is the convolution of the marginal PDFs. In kernel density estimation (KDE), we may think about the problem via the following model $$ X = X' + \varepsilon,$$ wher...
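The model above can also be simulated: drawing $X'$ uniformly from the data and adding independent kernel noise $\varepsilon$ produces draws from the KDE, and the variances add exactly as the convolution predicts. A sketch (the bandwidth and sample sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=500)  # plays the role of the observed sample
h = 0.4                      # kernel bandwidth = std. dev. of eps

# X = X' + eps: X' ~ empirical distribution of the data, eps ~ N(0, h^2),
# drawn independently of X'.
x_prime = rng.choice(data, size=100_000, replace=True)
samples = x_prime + rng.normal(scale=h, size=x_prime.size)

# Independence means the variances add: Var(X) = Var(X') + h^2.
print(samples.var(), data.var() + h ** 2)
```

This is also how one samples from a fitted KDE in practice: resample the data, then jitter with kernel noise.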
8,282
What is the difference between confidence intervals and hypothesis testing?
You can use a confidence interval (CI) for hypothesis testing. In the typical case, if the CI for an effect does not span 0 then you can reject the null hypothesis. But a CI can be used for more, whereas reporting whether it has been passed is the limit of the usefulness of a test. The reason you're recommended to use...
8,283
What is the difference between confidence intervals and hypothesis testing?
There is an equivalence between hypothesis tests and confidence intervals. (see e.g. http://en.wikipedia.org/wiki/Confidence_interval#Statistical_hypothesis_testing) I'll give a very specific example. Suppose we have sample $x_1, x_2, \ldots, x_n$ from a normal distribution with mean $\mu$ and variance 1, which we'll...
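The equivalence in this specific example can be verified numerically. A sketch using only the standard normal CDF (known $\sigma = 1$, $\alpha = 0.05$; the sample means tried below are made up for illustration):

```python
import math

def z_test_p(xbar, n, mu0=0.0):
    # Two-sided p-value for H0: mu = mu0 when the variance is known to be 1.
    z = (xbar - mu0) * math.sqrt(n)
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return 2.0 * (1.0 - Phi(abs(z)))

def ci95(xbar, n):
    # 95% confidence interval: xbar -/+ z_{0.975} / sqrt(n).
    half = 1.959964 / math.sqrt(n)
    return (xbar - half, xbar + half)

# The test rejects mu0 = 0 at the 5% level exactly when 0 lies outside the CI.
n = 20
for xbar in (0.1, 0.5, 1.0):
    lo, hi = ci95(xbar, n)
    print(xbar, z_test_p(xbar, n) < 0.05, not (lo <= 0.0 <= hi))
```

The two booleans printed on each line always agree: rejecting at level $\alpha$ and falling outside the $1-\alpha$ interval are the same event.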
8,284
What is the difference between confidence intervals and hypothesis testing?
'Student' argued for confidence intervals on the grounds that they could show which effects were more important as well as which were more significant. For example, if you found two effects where the first had a confidence interval for its financial impact from £5 to £6, while the second had a confidence interval from ...
8,285
What math subjects would you suggest to prepare for data mining and machine learning?
The suggestions that @gung made are certainly worth following up. Having done the coursera course, I think your list is a good start. Some comments: linear algebra and matrix algebra are the same thing, so drop the latter. In calculus, be sure to include partial differentiation. This is calculus applied to functions of...
8,286
What math subjects would you suggest to prepare for data mining and machine learning?
There are a couple of excellent threads on this forum-- including THIS ONE that I have found particularly helpful for me in terms of developing a conceptual outline of the important skills for data science work. As mentioned above, there are many online courses available. For example Coursera now has a Data Science Spe...
8,287
What math subjects would you suggest to prepare for data mining and machine learning?
If you are looking to bulk up on machine learning/data mining I would strongly urge optimization/linear algebra/statistics and probability. Here is a list of books for probability. Hope that helps.
8,288
What math subjects would you suggest to prepare for data mining and machine learning?
As far as brushing up on very basic math skills, I'm using these books: Elements of Mathematics for Economics and Finance. Mavron, Vassilis C., Phillips, Timothy N. This book covers essential math skills (addition, subtraction) through partial differentiation, integration, matrices and determinants, and a small chapter on opt...
8,289
What math subjects would you suggest to prepare for data mining and machine learning?
There are quite a lot of relevant resources listed (and categorized) here, at the so-called "Open Source Data Science Masters". Specifically for mathematics they list: Linear Algebra & Programming Statistics Differential Equations & Calculus Pretty generic recommendations, although they do list some textbooks that yo...
8,290
What math subjects would you suggest to prepare for data mining and machine learning?
Probability and statistics are essential. Some keywords are hypothesis test, multivariate normal distribution, Bayesian inference (joint probability, conditional probability), mean, variance, covariance, Kullback-Leibler divergence, ... Basic linear algebra is essential for machine learning. Topics that you could learn...
8,291
What math subjects would you suggest to prepare for data mining and machine learning?
Linear Algebra, Stats, Calculus. I think you can learn them in tandem w/ ML - or even after the basics. The starter courses / books do a great job with math primer chapters, and you learn the math essentials while learning ML. I made a podcast episode on the math you need for machine learning, and the resources for lea...
8,292
What math subjects would you suggest to prepare for data mining and machine learning?
Before starting any machine learning course, go through the following mathematics topics. Don't try to digest it all in a single attempt: learn the basic concepts, then brush up your mathematics skills again, and repeat. The topics are: Linear Algebra Probability Basic Calculus Maxima and minima of functions
8,293
Understanding distance correlation computations
Distance covariance/correlation (= Brownian covariance/correlation) is computed in the following steps: Compute the matrix of Euclidean distances between the N cases on variable $X$, and another such matrix on variable $Y$. Either of the two quantitative features, $X$ or $Y$, may be multivariate, not just univariate. Perfo...
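The steps above can be sketched in a few lines of Python. This is my own minimal univariate version using the simple (biased, V-statistic) form of the estimator, not the author's code:

```python
import numpy as np

def dist_corr(x, y):
    """Distance correlation via the double-centering recipe (univariate)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Step 1: pairwise Euclidean distance matrices for X and for Y.
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    # Step 2: double centering -- subtract row means and column means,
    # add back the grand mean.
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    # Step 3: distance covariance and variances from the centered matrices.
    dcov2 = (A * B).mean()
    dvarx = (A * A).mean()
    dvary = (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvarx * dvary))

x = np.arange(10.0)
print(dist_corr(x, 2 * x + 1))            # exact linear dependence -> 1
print(dist_corr(x, np.cos(x)))            # nonlinear but dependent -> > 0
```

Unlike Pearson correlation, the nonlinear example gives a strictly positive value, which is the point of the statistic.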
8,294
Understanding distance correlation computations
I think both of your questions are deeply linked. While the original diagonals in the distance matrix are 0, what's used for the covariance (which determines the numerator of the correlation) is the doubly centered values of the distances--which, for a vector with any variation, means that the diagonals will be negativ...
8,295
Accommodating entrenched views of p-values
There is indeed an argument to be had not to include the disclaimer. Frankly, I'd find a brief treatise on the nature of p-values in a journal article to be a little off-putting, and for a moment would have to pause and try to figure out if you'd done something particularly...esoteric...to warrant devoting that space t...
8,296
Accommodating entrenched views of p-values
The use of inferential statistics can be justified not only based on a population model, but also based on a randomization model. The latter does not make any assumptions about the way the sample has been obtained. In fact, Fisher was the one that suggested that the randomization model should be the basis for statistic...
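Under the randomization model, the inference reduces to re-randomizing the group labels. A small permutation-test sketch (the two groups of numbers here are made up purely for illustration):

```python
import numpy as np

def perm_test(a, b, n_perm=10_000, seed=0):
    # Randomization-model p-value: how often does shuffling the group
    # labels give a mean difference at least as extreme as the one
    # observed? No sampling assumptions are needed -- only the random
    # assignment of units to groups.
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[: len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

treat = np.array([12.1, 11.4, 13.0, 12.7, 11.9])
ctrl = np.array([10.2, 10.9, 10.5, 11.0, 10.4])
print(perm_test(treat, ctrl))
```

This is exactly the justification Fisher had in mind: the p-value comes from the physical act of randomization, not from a hypothetical population.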
8,297
Accommodating entrenched views of p-values
I haven't had to do battle with any bad reviewers yet, so I wouldn't claim any knowledge of how to get out of a battle that's already begun. However, if their objections are a mere matter of obstructive ignorance, a little preemptive diversion might do the trick. If $p$ values are in fact necessary to report despite th...
8,298
What are attention mechanisms exactly?
Attention is a method for aggregating a set of vectors $v_i$ into just one vector, often via a lookup vector $u$. Usually, $v_i$ is either the inputs to the model or the hidden states of previous time-steps, or the hidden states one level down (in the case of stacked LSTMs). The result is often called the context vect...
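A minimal numpy sketch of this aggregation, using dot-product scores (one common scoring choice; additive/MLP scores are another — the vectors below are toy values):

```python
import numpy as np

def attention(u, V):
    # Aggregate the rows of V (the vectors v_i) into one context vector,
    # scored against the lookup vector u: dot-product attention.
    scores = V @ u                         # e_i = <u, v_i>
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over the scores
    return weights @ V, weights            # context = sum_i w_i * v_i

V = np.array([[1.0, 0.0],   # v_1
              [0.0, 1.0],   # v_2
              [1.0, 1.0]])  # v_3
u = np.array([2.0, 0.0])
context, w = attention(u, V)
print(w)        # most weight on the rows most aligned with u
print(context)
```

The weights sum to 1, so the context vector is a convex combination of the $v_i$: "soft" selection rather than a hard argmax lookup.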
8,299
What does interaction depth mean in GBM?
Both of the previous answers are wrong. The gbm package uses the interaction.depth parameter as the number of splits it has to perform on a tree (starting from a single node). As each split increases the total number of nodes by 3 and the number of terminal nodes by 2 (node $\to$ {left node, right node, NA node}), the total number of...
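The counting is simple enough to spell out (sketched in Python for illustration, though gbm itself is an R package): with k = interaction.depth splits, each split turns one terminal node into an internal one and adds a {left, right, NA} triple, giving 1 + 3k nodes in total and 1 + 2k terminal nodes.

```python
def gbm_tree_sizes(interaction_depth):
    # Each split adds 3 nodes (left, right, NA) and converts one terminal
    # node into an internal node, so terminals grow by 2 per split.
    k = interaction_depth
    total_nodes = 1 + 3 * k
    terminal_nodes = 1 + 2 * k
    return total_nodes, terminal_nodes

for k in (1, 2, 3):
    print(k, gbm_tree_sizes(k))
```

So interaction.depth = 1 gives a 3-terminal stump (left, right, NA), not the 2-terminal stump that a "depth-1 tree" might suggest.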
8,300
What does interaction depth mean in GBM?
I had a question on the interaction depth parameter in gbm in R. This may be a noob question, for which I apologize, but how does the parameter, which I believe denotes the number of terminal nodes in a tree, basically indicate X-way interaction among the predictors? Link between interaction.depth and the number of te...