idx | question | answer

12,201 | What is the "horseshoe effect" and/or the "arch effect" in PCA / correspondence analysis?

Q1
Ecologists talk of gradients all the time. There are lots of kinds of gradients, but it may be best to think of them as some combination of whatever variable(s) you want or are important for the response. So a gradient could be time, or space, or soil acidity, or nutrients, or something more complex such as a linear...

12,202 | Why do we divide by the standard deviation and not some other standardizing factor before doing PCA?

This is in partial answer to "it is not clear to me why dividing by the standard deviation would achieve such a goal". In particular, why it puts the transformed (standardized) data on the "same scale". The question hints at deeper issues (what else might have "worked", which is linked to what "worked" might even mean,...

12,203 | Why do we divide by the standard deviation and not some other standardizing factor before doing PCA?

"Why do we divide by the standard deviation; what's wrong with dividing by the variance?" As @Silverfish already pointed out in a comment, the standard deviation has the same unit as the measurements. Thus, dividing by the standard deviation as opposed to the variance, you end up with a plain number that tells you where your c...

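The point in the two answers above can be checked numerically: dividing each centered column by its standard deviation leaves every variable unit-free with variance 1 (the "same scale"), while dividing by the variance leaves scales that still depend on the original units. A minimal NumPy sketch with synthetic data (the spreads are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two variables measured on wildly different scales.
X = np.column_stack([rng.normal(0, 0.01, 500),    # tiny spread
                     rng.normal(0, 100.0, 500)])  # huge spread

# Z-score standardization: center, then divide by the standard deviation.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
print(Z.std(axis=0))  # both columns now have unit standard deviation

# Dividing by the variance instead leaves unequal, unit-dependent scales.
W = (X - X.mean(axis=0)) / X.var(axis=0)
print(W.std(axis=0))  # 1/std per column, so still wildly different
```

After the z-score step, both columns contribute comparably to the PCA covariance matrix; after the variance step they do not.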
12,204 | Why do we divide by the standard deviation and not some other standardizing factor before doing PCA?

This link answers your question clearly, I guess: http://sebastianraschka.com/Articles/2014_about_feature_scaling.html
I quote a small piece:

Z-score standardization or Min-Max scaling?
“Standardization or Min-Max scaling?” - There is no obvious answer to this question: it really depends on the application.
For exampl...

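The quoted passage contrasts the two scalings; a small sketch of both on a toy feature makes the difference concrete:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])  # toy feature with an outlier

# Z-score standardization: mean 0, standard deviation 1 (unbounded range).
z = (x - x.mean()) / x.std()

# Min-Max scaling: squeeze into the fixed interval [0, 1].
m = (x - x.min()) / (x.max() - x.min())

print(z.mean(), z.std())  # ~0 and 1
print(m.min(), m.max())   # 0.0 and 1.0
```

Which one is appropriate depends on the application, as the quote says; for PCA the z-score version is the one that equalizes variances.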
12,205 | What is the best book about generalized linear models for novices?

For a new practitioner, I like Gelman and Hill.
Data Analysis Using Regression and Multilevel/Hierarchical Models
Ostensibly the book is about hierarchical generalized linear models, a more advanced topic than GLMs; the first section, though, is a wonderful practitioner's guide to GLMs.
The book is light on theory, hea...

12,206 | What is the best book about generalized linear models for novices?

I am a big fan of Agresti's Categorical Data Analysis.
I have read Agresti's Intro book but found it missing key interpretations of how a generalized linear model is built and how it works. For example, you may not need to know how the binomial distribution and logit link work if you only want to fit a logistic regress...

12,207 | What is the best book about generalized linear models for novices?

As a complete beginner myself, I found Foundations of Linear and Generalized Linear Models, by the celebrated author of Categorical Data Analysis, Alan Agresti, to be helpful. The language is fluid, though some exposure to linear algebra is assumed.

12,208 | What is the best book about generalized linear models for novices?

I really liked Mixed Effects Models and Extensions in Ecology with R - Zuur et al. It's a follow-up to their older book Analysing Ecological Data (2007). They do a good job of motivating the models, along with plenty of visual examples to explain what GLMs look like. They also strike a good balance between theory, application an...

12,209 | Repeated measures ANOVA with lme/lmer in R for two within-subject factors

What you're fitting with aov is called a strip plot, and it's tricky to fit with lme because the subject:A and subject:B random effects are crossed.
Your first attempt is equivalent to aov(Y ~ A*B + Error(subject), data=d), which doesn't include all the random effects; your second attempt is the right idea, but the syn...

12,210 | Repeated measures ANOVA with lme/lmer in R for two within-subject factors

Your first attempt is the correct answer if that's all you're trying to do. nlme() works out the between and within components; you don't need to specify them.
The problem you're running into isn't because you don't know how to specify the model; it's because repeated measures ANOVA and mixed effects are not the same ...

12,211 | Relationship between Gram and covariance matrices

A Singular Value Decomposition (SVD) of $X$ expresses it as
$$X = U D V^\prime$$
where $U$ is an $n\times r$ matrix whose columns are mutually orthonormal, $V$ is a $p\times r$ matrix whose columns are mutually orthonormal, and $D$ is an $r\times r$ diagonal matrix with positive values (the "singular values" of $X$) o...

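The link between the Gram matrix $XX^\prime$ and the scatter matrix $X^\prime X$ (proportional to the covariance matrix when $X$ is centered) follows from this decomposition: both share the nonzero eigenvalues $d_i^2$. A NumPy check on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 3
X = rng.normal(size=(n, p))

U, d, Vt = np.linalg.svd(X, full_matrices=False)  # thin SVD: X = U diag(d) V'

# Scatter matrix X'X (p x p) and Gram matrix XX' (n x n) share the
# nonzero eigenvalues d**2; the Gram matrix just has n - p extra zeros.
eig_scatter = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]
eig_gram = np.sort(np.linalg.eigvalsh(X @ X.T))[::-1][:p]

print(np.allclose(eig_scatter, d**2))  # True
print(np.allclose(eig_gram, d**2))     # True
```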
12,212 | Jensen Shannon Divergence vs Kullback-Leibler Divergence?

I found a very mature answer on Quora and just put it here for people who look for it:

The Kullback-Leibler divergence has a few nice properties, one of them being that $KL[q;p]$ kind of abhors regions where $q(x)$ has non-null mass and $p(x)$ has null mass. This might look like a bug, but it's...

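Both properties mentioned in this thread can be checked numerically with SciPy (`scipy.stats.entropy(p, q)` computes $KL[p;q]$): KL is asymmetric and blows up where the second argument has null mass, while the JS divergence, built from KL against the mixture, is symmetric. The distributions here are arbitrary toy examples:

```python
import numpy as np
from scipy.stats import entropy  # entropy(p, q) = KL(p || q) in nats

p = np.array([0.36, 0.48, 0.16])
q = np.array([1/3, 1/3, 1/3])

def js(p, q):
    # Jensen-Shannon divergence: average KL of each distribution to the mixture.
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)

print(entropy(p, q), entropy(q, p))  # asymmetric: the two values differ
print(js(p, q), js(q, p))            # symmetric: identical values

# KL(p || r) diverges where r has null mass but p does not:
r = np.array([0.5, 0.5, 0.0])
print(entropy(p, r))  # inf
```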
12,213 | Jensen Shannon Divergence vs Kullback-Leibler Divergence?

I recently stumbled into a similar question.
To answer why an asymmetric divergence can be more favourable than a symmetric divergence, consider a scenario where you want to quantify the quality of a proposal distribution used in importance sampling (IS). If you are unfamiliar with IS, the key idea here is that to desi...

12,214 | Jensen Shannon Divergence vs Kullback-Leibler Divergence?

KL divergence has a clear information-theoretic interpretation and is well known, but this is the first time I have heard the symmetrization of KL divergence called JS divergence. The reason JS divergence is not used as often is probably that it is less well known and does not offer must-have properties.

12,215 | Multiple imputation and model selection

There are many things you could do to select variables from multiply imputed data, but not all yield appropriate estimates. See Wood et al. (2008), Stat Med, for a comparison of various possibilities.
I have found the following two-step procedure useful in practice.
Apply your preferred variable selection method independ...

12,216 | Multiple imputation and model selection

It is straightforward: you can apply standard MI combining rules - but effects of variables which are not supported throughout the imputed datasets will be less pronounced. For example, if a variable is not selected in a specific imputed dataset, its estimate (incl. variance) is zero and this has to be reflected in the esti...

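The combining rules referred to here are Rubin's rules: the pooled estimate is the mean of the per-imputation estimates, and the total variance is the within-imputation variance plus the between-imputation variance inflated by (1 + 1/m). A hand-rolled sketch with made-up numbers, where the variable was not selected (estimate and variance zero) in two of the m = 5 datasets:

```python
import numpy as np

# Per-imputation coefficient estimates and their variances for one variable
# across m = 5 imputed datasets; zeros mark datasets where it was dropped.
est = np.array([0.8, 0.9, 0.0, 0.7, 0.0])
var = np.array([0.04, 0.05, 0.0, 0.04, 0.0])
m = len(est)

pooled = est.mean()                       # Rubin's pooled point estimate
within = var.mean()                       # average within-imputation variance
between = est.var(ddof=1)                 # between-imputation variance
total_var = within + (1 + 1/m) * between  # Rubin's total variance

print(pooled, total_var)
```

The zeros pull the pooled estimate toward zero and inflate the between-imputation variance, which is exactly the "less pronounced effect" described above.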
12,217 | Multiple imputation and model selection

I was having the same problem.
My choice was the so-called "multiple imputation lasso". Basically, it combines all imputed datasets together and adopts the concept of the group lasso: every candidate variable would generate m dummy variables. Each dummy variable corresponds to an imputed dataset.
Then all the m dummy varia...

12,218 | Multiple imputation and model selection

I've been facing a similar problem -- I've got a dataset in which I knew from the start that I wanted to include all variables (I was interested in the coefficients more than the prediction), but I didn't know a priori what interactions should be specified.
My approach was to write out a set of candidate models, perf...

12,219 | Support vector regression for multivariate time series prediction

In the context of support vector regression, the fact that your data is a time series is mainly relevant from a methodological standpoint -- for example, you can't do a k-fold cross validation, and you need to take precautions when running backtests/simulations.
Basically, support vector regression is a discriminative ...

12,220 | Support vector regression for multivariate time series prediction

My personal answer to the question as asked is "yes". You may view it as a pro or a con that there are an infinite number of choices of features to describe the past. Try to pick features that correspond to how you might concisely describe to someone what the market has just done [e.g. "the price is at 1.4" tells you noth...

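A common way to set this up is to describe "the past" with a sliding window of lagged values and fit a regressor on those windows. A sketch with scikit-learn's SVR on a synthetic series; the window length, kernel, and C are arbitrary illustrative choices, not recommendations:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.normal(size=300)

lags = 5  # describe "the past" with the last 5 observations
X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
t = y[lags:]  # target: the next value after each window

split = 250  # train on the past only; never shuffle a time series
model = SVR(kernel="rbf", C=10.0).fit(X[:split], t[:split])
pred = model.predict(X[split:])
print(pred.shape)  # one one-step-ahead forecast per held-out window
```

The chronological train/test split reflects the methodological point from the previous answer: k-fold cross validation would leak future information.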
12,221 | Support vector regression for multivariate time series prediction

There's an example up on Quantum Financier for using an SVM to forecast financial series. It could easily be converted from a classification system (Long/Short) to a regression system.

12,222 | Linear regression what does the F statistic, R squared and residual standard error tell us?

The best way to understand these terms is to do a regression calculation by hand. I wrote two closely related answers (here and here); however, they may not fully help you understand your particular case. But read through them nonetheless. Maybe they will also help you conceptualize these terms better.
In a regress...

12,223 | Linear regression what does the F statistic, R squared and residual standard error tell us?

(2) You are understanding it correctly; you are just having a hard time with the concept.
The $R^2$ value represents how well the model accounts for all of the data. It can only take on values between 0 and 1. It is the percentage of the deviation of the points in the dataset that the model can explain.
The RSE is mor...

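Both quantities can be computed by hand from a fit, which makes their definitions concrete. A NumPy sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 50)

b1, b0 = np.polyfit(x, y, 1)        # slope, intercept of the fitted line
fitted = b0 + b1 * x
rss = np.sum((y - fitted) ** 2)     # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)   # total sum of squares

r2 = 1 - rss / tss                  # share of variation explained, in [0, 1]
rse = np.sqrt(rss / (len(y) - 2))   # residual standard error (n - 2 df)
print(round(r2, 3), round(rse, 3))
```

Note that $R^2$ is unit-free while the RSE is in the units of $y$, which is why the RSE tells you the typical size of a residual.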
12,224 | Linear regression what does the F statistic, R squared and residual standard error tell us?

Just to complement what Chris replied above:
The F-statistic is the model mean square divided by the residual mean square. Software like Stata, after fitting a regression model, also provides the p-value associated with the F-statistic. This allows you to test the null hypothesis that your model's coefficients ...

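That ratio and its p-value can be reproduced directly. A sketch for simple regression on simulated data (the degrees of freedom are 1 and n - 2 here, since there is one predictor):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(size=n)

b1, b0 = np.polyfit(x, y, 1)
fitted = b0 + b1 * x

ms_model = np.sum((fitted - y.mean()) ** 2) / 1  # model mean square (1 df)
ms_resid = np.sum((y - fitted) ** 2) / (n - 2)   # residual mean square
F = ms_model / ms_resid
p = stats.f.sf(F, 1, n - 2)  # p-value under H0: all slope coefficients are 0
print(F, p)
```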
12,225 | Linear regression what does the F statistic, R squared and residual standard error tell us?

As I point out in this other answer, $F$, $RSS$ and $R^2$ are all interrelated. Here's the relevant excerpt:
The F-statistic between two models, the null model (intercept only) $m_0$ and the alternative model $m_1$ ($m_0$ is nested within $m_1$) is:
$$F = \frac{\left( \frac{RSS_0-RSS_1}{p_1-p_0} \right)} {\left( \fra...

12,226 | What is the difference between random variable and random sample? | A random variable, $X:\Omega \rightarrow \mathbb R$, is a function from the sample space to the real line. This is a deterministic formula that can be as simple as writing down the number a die lands on in the random experiment of tossing a die. The experiment is random, in the way that we don't control many of the phy... | What is the difference between random variable and random sample? | A random variable, $X:\Omega \rightarrow \mathbb R$, is a function from the sample space to the real line. This is a deterministic formula that can be as simple as writing down the number a die lands | What is the difference between random variable and random sample?
A random variable, $X:\Omega \rightarrow \mathbb R$, is a function from the sample space to the real line. This is a deterministic formula that can be as simple as writing down the number a die lands on in the random experiment of tossing a die. The expe... | What is the difference between random variable and random sample?
A random variable, $X:\Omega \rightarrow \mathbb R$, is a function from the sample space to the real line. This is a deterministic formula that can be as simple as writing down the number a die lands |
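A toy rendering of that definition (the outcome names are illustrative): the random variable is an ordinary, deterministic mapping from outcomes to real numbers.

```python
# Sample space of a die toss, and X: Omega -> R written as a plain mapping.
omega = ["one", "two", "three", "four", "five", "six"]
X = {outcome: value for value, outcome in enumerate(omega, start=1)}

# X itself is deterministic; the randomness is only in which outcome occurs.
# Another random variable on the same sample space: the indicator of "even".
is_even = {outcome: X[outcome] % 2 == 0 for outcome in omega}
```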
12,227 | What is the difference between random variable and random sample? | In the OP's example, each random sample $X_i$ is an observation of the same random variable $X$. So random sample is an observation of a random variable. Random variable is a function that maps sample space to real numbers. | What is the difference between random variable and random sample? | In the OP's example, each random sample $X_i$ is an observation of the same random variable $X$. So random sample is an observation of a random variable. Random variable is a function that maps sample spa | What is the difference between random variable and random sample?
In the OP's example, each random sample $X_i$ is an observation of the same random variable $X$. So random sample is an observation of a random variable. Random variable is a function that maps sample space to real numbers. | What is the difference between random variable and random sample?
In the OP's example, each random sample $X_i$ is an observation of the same random variable $X$. So random sample is an observation of a random variable. Random variable is a function that maps sample spa |
12,228 | Linear regression, conditional expectations and expected values | In the probability model underlying linear regression, X and Y are random variables.
if so, as an example, if Y = obesity and X = age, if we take the conditional expectation E(Y|X=35) meaning, whats the expected value of being obese if the individual is 35 across the sample, would we just take the average(arithmetic m... | Linear regression, conditional expectations and expected values | In the probability model underlying linear regression, X and Y are random variables.
if so, as an example, if Y = obesity and X = age, if we take the conditional expectation E(Y|X=35) meaning, whats | Linear regression, conditional expectations and expected values
In the probability model underlying linear regression, X and Y are random variables.
if so, as an example, if Y = obesity and X = age, if we take the conditional expectation E(Y|X=35) meaning, whats the expected value of being obese if the individual is 3... | Linear regression, conditional expectations and expected values
In the probability model underlying linear regression, X and Y are random variables.
if so, as an example, if Y = obesity and X = age, if we take the conditional expectation E(Y|X=35) meaning, whats |
12,229 | Linear regression, conditional expectations and expected values | There will be a LOT of answers to this question, but I still want to add one since you made some interesting points. For simplicity I only consider the simple linear model.
It is my understanding that the linear regression model
is predicted via a conditional expectation E(Y|X)=b+Xb+e
The fundamental equation of... | Linear regression, conditional expectations and expected values | There will be a LOT of answers to this question, but I still want to add one since you made some interesting points. For simplicity I only consider the simple linear model.
It is my understanding t | Linear regression, conditional expectations and expected values
There will be a LOT of answers to this question, but I still want to add one since you made some interesting points. For simplicity I only consider the simple linear model.
It is my understanding that the linear regression model
is predicted via a co... | Linear regression, conditional expectations and expected values
There will be a LOT of answers to this question, but I still want to add one since you made some interesting points. For simplicity I only consider the simple linear model.
It is my understanding t |
12,230 | How many times must I roll a die to confidently assess its fairness? | TL;DR: if $p$ = 1/6 and you want to be 98% sure the die is fair (to within 2%), you need $n$ ≥ 766.
Let $n$ be the number of rolls and $X$ the number of rolls that land on some specified side. Then $X$ follows a Binomial(n,p) distribution where $p$ is the probability of ge... | How many times must I roll a die to confidently assess its fairness? | TL;DR: if $p$ = 1/6 and you want to be 98% sure the die is fair (to within 2%), you need $n$ ≥ 766.
Let $n$ be the number of rolls and $X$ the number of r | How many times must I roll a die to confidently assess its fairness?
TL;DR: if $p$ = 1/6 and you want to be 98% sure the die is fair (to within 2%), you need $n$ ≥ 766.
Let $n$ be the number of rolls and $X$ the number of rolls that land on some specified side. Then $X$ fol... | How many times must I roll a die to confidently assess its fairness?
TL;DR: if $p$ = 1/6 and you want to be 98% sure the die is fair (to within 2%), you need $n$ ≥ 766.
Let $n$ be the number of rolls and $X$ the number of r |
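The answer's derivation is cut off above; a generic normal-approximation sizing formula for a binomial proportion looks like the sketch below. Note that the $n$ it returns depends on exactly how "98% sure" and "within 2%" are formalized, so it need not reproduce the answer's 766.

```python
import math

def sample_size(p, margin, z):
    """Smallest n for which a z-score interval for a proportion has
    half-width <= margin, using the normal approximation Var = p(1-p)/n."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# p = 1/6 for one face, absolute margin 0.02, z ~ 2.326 for a two-sided 98% interval.
n = sample_size(1 / 6, 0.02, 2.326)
```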
12,231 | Advanced regression modeling examples | Regression Modeling Strategies and ISLR, which have already been mentioned by others, are two very good suggestions. I have a few others that you might want to consider.
Applied Predictive Modeling by Kuhn and Johnson contains a number of good case studies and is pretty hands-on.
Practical Data Science with R treats p... | Advanced regression modeling examples | Regression Modeling Strategies and ISLR, which have already been mentioned by others, are two very good suggestions. I have a few others that you might want to consider.
Applied Predictive Modeling by | Advanced regression modeling examples
Regression Modeling Strategies and ISLR, which have already been mentioned by others, are two very good suggestions. I have a few others that you might want to consider.
Applied Predictive Modeling by Kuhn and Johnson contains a number of good case studies and is pretty hands-on.
... | Advanced regression modeling examples
Regression Modeling Strategies and ISLR, which have already been mentioned by others, are two very good suggestions. I have a few others that you might want to consider.
Applied Predictive Modeling by |
12,232 | Advanced regression modeling examples | One of the best course materials that you can find on advanced, multiple, complex (including nonlinear) regression is based on the book Regression Modeling Strategies by Frank E. Harrell Jr.
The book is being discussed in the comments but not this material, which itself is a great resource. | Advanced regression modeling examples | One of the best course materials that you can find on advanced, multiple, complex (including nonlinear) regression is based on the book Regression Modeling Strategies by Frank E. Harrell Jr.
The book | Advanced regression modeling examples
One of the best course materials that you can find on advanced, multiple, complex (including nonlinear) regression is based on the book Regression Modeling Strategies by Frank E. Harrell Jr.
The book is being discussed in the comments but not this material, which itself is a great ... | Advanced regression modeling examples
One of the best course materials that you can find on advanced, multiple, complex (including nonlinear) regression is based on the book Regression Modeling Strategies by Frank E. Harrell Jr.
The book |
12,233 | Advanced regression modeling examples | I would recommend the book
Mostly Harmless Econometrics by Joshua D. Angrist and Jörn-Steffen Pischke
This is the most real-world, salt to the earth, text I own and it is super cheap, around $26.00 new. The book is written for the graduate statistician/economist so it is plenty advanced.
Now this book is not exac... | Advanced regression modeling examples | I would recommend the book
Mostly Harmless Econometrics by Joshua D. Angrist and Jörn-Steffen Pischke
This is the most real-world, salt to the earth, text I own and it is super cheap, around $26.0 | Advanced regression modeling examples
I would recommend the book
Mostly Harmless Econometrics by Joshua D. Angrist and Jörn-Steffen Pischke
This is the most real-world, salt to the earth, text I own and it is super cheap, around $26.00 new. The book is written for the graduate statistician/economist so it is plenty... | Advanced regression modeling examples
I would recommend the book
Mostly Harmless Econometrics by Joshua D. Angrist and Jörn-Steffen Pischke
This is the most real-world, salt to the earth, text I own and it is super cheap, around $26.0 |
12,234 | Advanced regression modeling examples | You can refer Introduction to Statistical Learning with R (ISLR), the book talks about splines and polynomial regression in detail with cases. | Advanced regression modeling examples | You can refer Introduction to Statistical Learning with R (ISLR), the book talks about splines and polynomial regression in detail with cases. | Advanced regression modeling examples
You can refer Introduction to Statistical Learning with R (ISLR), the book talks about splines and polynomial regression in detail with cases. | Advanced regression modeling examples
You can refer Introduction to Statistical Learning with R (ISLR), the book talks about splines and polynomial regression in detail with cases. |
12,235 | Advanced regression modeling examples | I'm not sure what is the objective of your question. I can recommend Greene's Econometric Analysis text. It has a ton of references to papers inside. Pretty much each example in the book references a published paper.
To give you a flavor, look at Example 7.6 "Interaction Effects in a Loglinear Model for Income" on p.19... | Advanced regression modeling examples | I'm not sure what is the objective of your question. I can recommend Greene's Econometric Analysis text. It has a ton of references to papers inside. Pretty much each example in the book references a | Advanced regression modeling examples
I'm not sure what is the objective of your question. I can recommend Greene's Econometric Analysis text. It has a ton of references to papers inside. Pretty much each example in the book references a published paper.
To give you a flavor, look at Example 7.6 "Interaction Effects in... | Advanced regression modeling examples
I'm not sure what is the objective of your question. I can recommend Greene's Econometric Analysis text. It has a ton of references to papers inside. Pretty much each example in the book references a |
12,236 | Advanced regression modeling examples | Have you looked into some of the Financial Time Series Analysis courses/books that Ruey Tsay (UChicago) writes?
http://faculty.chicagobooth.edu/ruey.tsay/teaching/
Ruey Tsay's classes and the textbook provide multiple real world examples in Finance of complex regressions of the type that are created for use in financi... | Advanced regression modeling examples | Have you looked into some of the Financial Time Series Analysis courses/books that Ruey Tsay (UChicago) writes?
http://faculty.chicagobooth.edu/ruey.tsay/teaching/
Ruey Tsays classes and the textboo | Advanced regression modeling examples
Have you looked into some of the Financial Time Series Analysis courses/books that Ruey Tsay (UChicago) writes?
http://faculty.chicagobooth.edu/ruey.tsay/teaching/
Ruey Tsays classes and the textbook provide multiple real world examples in Finance of complex regressions of the ty... | Advanced regression modeling examples
Have you looked into some of the Financial Time Series Analysis courses/books that Ruey Tsay (UChicago) writes?
http://faculty.chicagobooth.edu/ruey.tsay/teaching/
Ruey Tsay's classes and the textboo
12,237 | Overall rank from multiple ranked lists | I am not sure why you were looking at correlations and similar measures. There doesn't seem to be anything to correlate.
Instead, there are a number of options, none really better than the other, but depending on what you want:
Take the average rank and then rank the averages (but this treats the data as interval)
Take... | Overall rank from multiple ranked lists | I am not sure why you were looking at correlations and similar measures. There doesn't seem to be anything to correlate.
Instead, there are a number of options, none really better than the other, but | Overall rank from multiple ranked lists
I am not sure why you were looking at correlations and similar measures. There doesn't seem to be anything to correlate.
Instead, there are a number of options, none really better than the other, but depending on what you want:
Take the average rank and then rank the averages (bu... | Overall rank from multiple ranked lists
I am not sure why you were looking at correlations and similar measures. There doesn't seem to be anything to correlate.
Instead, there are a number of options, none really better than the other, but |
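The first option (take the average rank, then rank the averages) is easy to sketch; the item names and lists here are invented:

```python
def aggregate_ranks(rank_lists):
    """Average each item's rank across lists, then rank the averages.
    Ties in the averages are broken arbitrarily here."""
    items = rank_lists[0].keys()
    avg = {i: sum(r[i] for r in rank_lists) / len(rank_lists) for i in items}
    ordered = sorted(avg, key=avg.get)
    return {item: pos for pos, item in enumerate(ordered, start=1)}

lists = [
    {"a": 1, "b": 2, "c": 3},
    {"a": 2, "b": 1, "c": 3},
    {"a": 1, "b": 3, "c": 2},
]
final = aggregate_ranks(lists)
```

Because the averages are compared as numbers, this implicitly treats rank data as interval-scaled, which is the caveat the answer flags.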
12,238 | Overall rank from multiple ranked lists | As others have pointed out, there are a lot of options you might pursue. The method I recommend is based on average ranks, i.e., the first proposal of Peter.
In this case, the statistical importance of the final ranking can be examined by a two-step statistical test. This is a non-parametric procedure consisting of the... | Overall rank from multiple ranked lists | As others have pointed out, there are a lot of options you might pursue. The method I recommend is based on average ranks, i.e., the first proposal of Peter.
In this case, the statistical importance o | Overall rank from multiple ranked lists
As others have pointed out, there are a lot of options you might pursue. The method I recommend is based on average ranks, i.e., the first proposal of Peter.
In this case, the statistical importance of the final ranking can be examined by a two-step statistical test. This is a no... | Overall rank from multiple ranked lists
As others have pointed out, there are a lot of options you might pursue. The method I recommend is based on average ranks, i.e., the first proposal of Peter.
In this case, the statistical importance o |
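The description of the two-step procedure is truncated above; a common omnibus first step for comparing average ranks across multiple lists is the Friedman test, whose no-ties statistic is simple to compute. This is an assumed, generic choice, not necessarily the exact procedure the answer goes on to describe:

```python
def friedman_statistic(ranks):
    """Friedman chi-square for an n x k table of within-row ranks
    (n rankers/blocks, k items), no-ties formula:
    chi2 = 12/(n*k*(k+1)) * sum_j R_j**2 - 3*n*(k+1)."""
    n, k = len(ranks), len(ranks[0])
    col_sums = [sum(row[j] for row in ranks) for j in range(k)]
    return 12 / (n * k * (k + 1)) * sum(s ** 2 for s in col_sums) - 3 * n * (k + 1)

# Four rankers ranking the same three items (invented data).
ranks = [[1, 2, 3], [1, 3, 2], [1, 2, 3], [2, 1, 3]]
chi2 = friedman_statistic(ranks)  # compare to a chi-square with k - 1 df
```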
12,239 | Overall rank from multiple ranked lists | I (well, Google) found a paper that benchmarks methods for combining ranked lists:
Li, X., Wang, X. and Xiao, G., 2019. A comparative study of rank aggregation methods for partial and top ranked lists in genomic applications. Briefings in bioinformatics, 20(1), pp.178-189. https://doi.org/10.1093/bib/bbx101
They use tw... | Overall rank from multiple ranked lists | I (well, Google) found a paper that benchmarks methods for combining ranked lists:
Li, X., Wang, X. and Xiao, G., 2019. A comparative study of rank aggregation methods for partial and top ranked lists | Overall rank from multiple ranked lists
I (well, Google) found a paper that benchmarks methods for combining ranked lists:
Li, X., Wang, X. and Xiao, G., 2019. A comparative study of rank aggregation methods for partial and top ranked lists in genomic applications. Briefings in bioinformatics, 20(1), pp.178-189. https:... | Overall rank from multiple ranked lists
I (well, Google) found a paper that benchmarks methods for combining ranked lists:
Li, X., Wang, X. and Xiao, G., 2019. A comparative study of rank aggregation methods for partial and top ranked lists |
12,240 | Overall rank from multiple ranked lists | Use Tau-x (where the "x" refers to "eXtended" Tau-b). Tau-x is the correlation equivalent of the Kemeny-Snell distance metric -- proven to be the unique distance metric between lists of ranked items that satisfies all the requirements of a distance metric. See chapter 2 of "Mathematical Models in the Social Sciences" b... | Overall rank from multiple ranked lists | Use Tau-x (where the "x" refers to "eXtended" Tau-b). Tau-x is the correlation equivalent of the Kemeny-Snell distance metric -- proven to be the unique distance metric between lists of ranked items t | Overall rank from multiple ranked lists
Use Tau-x (where the "x" refers to "eXtended" Tau-b). Tau-x is the correlation equivalent of the Kemeny-Snell distance metric -- proven to be the unique distance metric between lists of ranked items that satisfies all the requirements of a distance metric. See chapter 2 of "Mathe... | Overall rank from multiple ranked lists
Use Tau-x (where the "x" refers to "eXtended" Tau-b). Tau-x is the correlation equivalent of the Kemeny-Snell distance metric -- proven to be the unique distance metric between lists of ranked items t |
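For intuition, the plain Kendall distance underlying the Kemeny-Snell metric simply counts item pairs the two rankings order differently (Tau-x extends this treatment to ties); a small sketch:

```python
from itertools import combinations

def kendall_distance(r1, r2):
    """Number of item pairs ordered differently by the two rankings
    (rankings given as item -> rank-position dicts, no ties)."""
    return sum(
        (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0
        for a, b in combinations(r1, 2)
    )

# Swapping a and b while leaving c fixed flips exactly one pair.
d = kendall_distance({"a": 1, "b": 2, "c": 3}, {"a": 2, "b": 1, "c": 3})
```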
12,241 | As a reviewer, can I justify requesting data and code be made available even if the journal does not? | As far as getting data as a reviewer goes, you're entitled to it if you need it to complete your review properly. More reviewers should be asking for data and assessing it. Lots of journals have policies that they may require the data and analysis code for review purposes.
Availability at the time of publication isn'... | As a reviewer, can I justify requesting data and code be made available even if the journal does not | As far as getting data as a reviewer goes, you're entitled to it if you need it to complete your review properly. More reviewers should be asking for data and assessing it. Lots of journals have pol | As a reviewer, can I justify requesting data and code be made available even if the journal does not?
As far as getting data as a reviewer goes, you're entitled to it if you need it to complete your review properly. More reviewers should be asking for data and assessing it. Lots of journals have policies that they ma... | As a reviewer, can I justify requesting data and code be made available even if the journal does not
As far as getting data as a reviewer goes, you're entitled to it if you need it to complete your review properly. More reviewers should be asking for data and assessing it. Lots of journals have pol |
12,242 | As a reviewer, can I justify requesting data and code be made available even if the journal does not? | Addressing the two situations separately:
As a reviewer: Yes, I think you'd have grounds to ask to see the data or the code. But if I were you, I'd prepare to see things like pared down code, or a subsample of the data. People implement future research not being reported in this paper in their code all the time, and yo... | As a reviewer, can I justify requesting data and code be made available even if the journal does not | Addressing the two situations separately:
As a reviewer: Yes, I think you'd have grounds to ask to see the data or the code. But if I were you, I'd prepare to see things like pared down code, or a sub | As a reviewer, can I justify requesting data and code be made available even if the journal does not?
Addressing the two situations separately:
As a reviewer: Yes, I think you'd have grounds to ask to see the data or the code. But if I were you, I'd prepare to see things like pared down code, or a subsample of the data... | As a reviewer, can I justify requesting data and code be made available even if the journal does not
Addressing the two situations separately:
As a reviewer: Yes, I think you'd have grounds to ask to see the data or the code. But if I were you, I'd prepare to see things like pared down code, or a sub |
12,243 | As a reviewer, can I justify requesting data and code be made available even if the journal does not? | As John says availability of data to reviewers should be a no-brainer; careful review should include replicating the analysis and as such necessitates access to the data.
With regards to public availability of the data following publication, I'd say that battle should be fought with the journal generally rather than wi... | As a reviewer, can I justify requesting data and code be made available even if the journal does not | As John says availability of data to reviewers should be a no-brainer; careful review should include replicating the analysis and as such necessitates access to the data.
With regards to public availa | As a reviewer, can I justify requesting data and code be made available even if the journal does not?
As John says availability of data to reviewers should be a no-brainer; careful review should include replicating the analysis and as such necessitates access to the data.
With regards to public availability of the data... | As a reviewer, can I justify requesting data and code be made available even if the journal does not
As John says availability of data to reviewers should be a no-brainer; careful review should include replicating the analysis and as such necessitates access to the data.
With regards to public availa |
12,244 | As a reviewer, can I justify requesting data and code be made available even if the journal does not? | I don't have any experience with this, but it seems to me that you might be able to insist on #1 as a part of your own due diligence in reviewing their results. I don't see how you can insist on #2, though. | As a reviewer, can I justify requesting data and code be made available even if the journal does not | I don't have any experience with this, but it seems to me that you might be able to insist on #1 as a part of your own due diligence in reviewing their results. I don't see how you can insist on #2, t | As a reviewer, can I justify requesting data and code be made available even if the journal does not?
I don't have any experience with this, but it seems to me that you might be able to insist on #1 as a part of your own due diligence in reviewing their results. I don't see how you can insist on #2, though. | As a reviewer, can I justify requesting data and code be made available even if the journal does not
I don't have any experience with this, but it seems to me that you might be able to insist on #1 as a part of your own due diligence in reviewing their results. I don't see how you can insist on #2, t |
12,245 | What is the essential difference between a neural network and nonlinear regression? | In theory, yes. In practice, things are more subtle.
First of all, let's clear the field from a doubt raised in the comments: neural networks can handle multiple outputs in a seamless fashion, so it doesn't really matter whether we consider multiple regression or not (see The Elements of Statistical Learning, paragraph... | What is the essential difference between a neural network and nonlinear regression? | In theory, yes. In practice, things are more subtle.
First of all, let's clear the field from a doubt raised in the comments: neural networks can handle multiple outputs in a seamless fashion, so it d | What is the essential difference between a neural network and nonlinear regression?
In theory, yes. In practice, things are more subtle.
First of all, let's clear the field from a doubt raised in the comments: neural networks can handle multiple outputs in a seamless fashion, so it doesn't really matter whether we cons... | What is the essential difference between a neural network and nonlinear regression?
In theory, yes. In practice, things are more subtle.
First of all, let's clear the field from a doubt raised in the comments: neural networks can handle multiple outputs in a seamless fashion, so it d |
12,246 | How are weights updated in the batch learning method in neural networks? | The average and the sum are equivalent, in the sense that there exist pairs of learning rates for which they produce the same update.
To confirm this, first recall the update rule:
$$\Delta w_{ij} = -\alpha \frac{\partial E}{\partial w_{ij}}$$
Then, let $\mu_E$ be the average error for a dataset of size $n$ over an epoch.... | How are weights updated in the batch learning method in neural networks? | The average and the sum are equivalent, in the sense that there exist pairs of learning rates for which they produce the same update.
To confirm this, first recall the update rule:
$$\Delta w_{ij} = -\al | How are weights updated in the batch learning method in neural networks?
The average and the sum are equivalent, in the sense that there exist pairs of learning rates for which they produce the same update.
To confirm this, first recall the update rule:
$$\Delta w_{ij} = -\alpha \frac{\partial E}{\partial w_{ij}}$$
Then, ... | How are weights updated in the batch learning method in neural networks?
The average and the sum are equivalent, in the sense that there exist pairs of learning rates for which they produce the same update.
To confirm this, first recall the update rule:
$$\Delta w_{ij} = -\al |
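A numeric check of the claimed equivalence (toy per-example gradients, single weight): a step on the summed error with rate $\alpha$ matches a step on the averaged error with rate $n\alpha$.

```python
# Per-example gradients dE_i/dw for one weight over a batch (invented values).
grads = [0.5, -1.0, 2.0, 0.25]
n, alpha, w = len(grads), 0.1, 3.0

w_after_sum = w - alpha * sum(grads)              # summed error, rate alpha
w_after_avg = w - (n * alpha) * (sum(grads) / n)  # averaged error, rate n * alpha
```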
12,247 | How are weights updated in the batch learning method in neural networks? | The two answers are equivalent. I personally would think of it as average error instead of the sum. But remember that gradient descent has a parameter called the learning rate, and that only a portion of the gradient of the error is subtracted. So whether the error is defined as total of average can be compensated b... | How are weights updated in the batch learning method in neural networks? | The two answers are equivalent. I personally would think of it as average error instead of the sum. But remember that gradient descent has a parameter called the learning rate, and that only a porti | How are weights updated in the batch learning method in neural networks?
The two answers are equivalent. I personally would think of it as average error instead of the sum. But remember that gradient descent has a parameter called the learning rate, and that only a portion of the gradient of the error is subtracted. ... | How are weights updated in the batch learning method in neural networks?
The two answers are equivalent. I personally would think of it as average error instead of the sum. But remember that gradient descent has a parameter called the learning rate, and that only a porti |
12,248 | How are weights updated in the batch learning method in neural networks? | Someone explained it like this:
The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
Think of a batch as a for-loop iterating over one or more samples and making predictions. At the end of the batch, the predictions are compared to the expected out... | How are weights updated in the batch learning method in neural networks? | Someone explained it like this:
The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
Think of a batch as a for-loop iterating ov | How are weights updated in the batch learning method in neural networks?
Someone explained it like this:
The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
Think of a batch as a for-loop iterating over one or more samples and making predictions. ... | How are weights updated in the batch learning method in neural networks?
Someone explained it like this:
The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
Think of a batch as a for-loop iterating ov |
12,249 | How to cluster time series? | A) Spend a lot of time on preprocessing the data. Preprocessing is 90% of your job.
B) Choose an appropriate similarity measure for the time series. For example, threshold crossing distance may be a good choice here. You probably won't desire dynamic time warping distance, unless you have different time zones. Threshol... | How to cluster time series? | A) Spend a lot of time on preprocessing the data. Preprocessing is 90% of your job.
B) Choose an appropriate similarity measure for the time series. For example, threshold crossing distance may be a g | How to cluster time series?
A) Spend a lot of time on preprocessing the data. Preprocessing is 90% of your job.
B) Choose an appropriate similarity measure for the time series. For example, threshold crossing distance may be a good choice here. You probably won't desire dynamic time warping distance, unless you have di... | How to cluster time series?
A) Spend a lot of time on preprocessing the data. Preprocessing is 90% of your job.
B) Choose an appropriate similarity measure for the time series. For example, threshold crossing distance may be a g |
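As a concrete instance of step A, one common preprocessing move before any distance-based clustering is z-normalizing each series so that clusters reflect shape rather than level or scale. This generic sketch pairs it with plain Euclidean distance, not the threshold-crossing measure the answer names:

```python
def z_normalize(series):
    """Rescale a series to zero mean and unit variance."""
    m = sum(series) / len(series)
    sd = (sum((x - m) ** 2 for x in series) / len(series)) ** 0.5
    return [(x - m) / sd for x in series]

def euclidean(a, b):
    """Pointwise distance between two equal-length (normalized) series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Two hypothetical load curves with the same shape at different scales:
z1 = z_normalize([10.0, 20.0, 30.0])
z2 = z_normalize([100.0, 200.0, 300.0])
```

After normalization the two curves are essentially identical, so they would land in the same cluster.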
12,250 | How to cluster time series? | You might want to look at Forecasting hourly time series with daily, weekly & annual periodicity for a discussion of hourly data involving daily data and holidays/regressors. You have 5 years of data while the other discussion involved 883 daily values. What I would suggest is that you could build an hourly forecast in... | How to cluster time series? | You might want to look at Forecasting hourly time series with daily, weekly & annual periodicity for a discussion of hourly data involving daily data and holidays/regressors. You have 5 years of data | How to cluster time series?
You might want to look at Forecasting hourly time series with daily, weekly & annual periodicity for a discussion of hourly data involving daily data and holidays/regressors. You have 5 years of data while the other discussion involved 883 daily values. What I would suggest is that you could... | How to cluster time series?
You might want to look at Forecasting hourly time series with daily, weekly & annual periodicity for a discussion of hourly data involving daily data and holidays/regressors. You have 5 years of data |
12,251 | Why use ANOVA at all instead of jumping straight into post-hoc or planned comparisons tests? | Indeed an omnibus test is not strictly needed in that particular scenario and multiple inference procedures like Bonferroni or Bonferroni-Holm are not limited to ANOVA/mean comparison settings. They are often presented as post-hoc tests in textbooks or associated with ANOVA in statistical software but if you look up... | Why use ANOVA at all instead of jumping straight into post-hoc or planned comparisons tests? | Indeed an omnibus test is not strictly needed in that particular scenario and multiple inference procedures like Bonferroni or Bonferroni-Holm are not limited to ANOVA/mean comparison settings. The | Why use ANOVA at all instead of jumping straight into post-hoc or planned comparisons tests?
Indeed an omnibus test is not strictly needed in that particular scenario and multiple inference procedures like Bonferroni or Bonferroni-Holm are not limited to ANOVA/mean comparison settings. They are often presented as post-hoc tests in textbooks or associated with ANOVA in statistical software but if you look up... | Why use ANOVA at all instead of jumping straight into post-hoc or planned comparisons tests?
Indeed an omnibus test is not strictly needed in that particular scenario and multiple inference procedures like Bonferroni or Bonferroni-Holm are not limited to ANOVA/mean comparison settings. The
12,252 | When is "Nearest Neighbor" meaningful, today? | I don't have a full answer to this question, but I can give a partial answer on some of the analytical aspects. Warning: I've been working on other problems since the first paper below, so it's very likely there is other good stuff out there I'm not aware of.
First I think it's worth noting that despite the title of th... | When is "Nearest Neighbor" meaningful, today? | I don't have a full answer to this question, but I can give a partial answer on some of the analytical aspects. Warning: I've been working on other problems since the first paper below, so it's very l | When is "Nearest Neighbor" meaningful, today?
I don't have a full answer to this question, but I can give a partial answer on some of the analytical aspects. Warning: I've been working on other problems since the first paper below, so it's very likely there is other good stuff out there I'm not aware of.
First I think ... | When is "Nearest Neighbor" meaningful, today?
I don't have a full answer to this question, but I can give a partial answer on some of the analytical aspects. Warning: I've been working on other problems since the first paper below, so it's very l |
12,253 | When is "Nearest Neighbor" meaningful, today? | You might as well be interested in neighbourhood components analysis by Goldberger et al.
Here, a linear transformation is learned to maximize the expected correctly classified points via a stochastic nearest neighbourhood selection.
As a side effect the (expected) number of neighbours is determined from the data. | When is "Nearest Neighbor" meaningful, today? | You might as well be interested in neighbourhood components analysis by Goldberger et al.
Here, a linear transformation is learned to maximize the expected correctly classified points via a stochastic | When is "Nearest Neighbor" meaningful, today?
You might as well be interested in neighbourhood components analysis by Goldberger et al.
Here, a linear transformation is learned to maximize the expected correctly classified points via a stochastic nearest neighbourhood selection.
As a side effect the (expected) number o... | When is "Nearest Neighbor" meaningful, today?
You might as well be interested in neighbourhood components analysis by Goldberger et al.
Here, a linear transformation is learned to maximize the expected correctly classified points via a stochastic |
12,254 | Clustering (k-means, or otherwise) with a minimum cluster size constraint | Use EM Clustering
In EM clustering, the algorithm iteratively refines an initial cluster model to fit the data and determines the probability that a data point exists in a cluster. The algorithm ends the process when the probabilistic model fits the data. The function used to determine the fit is the log-likelihood of ... | Clustering (k-means, or otherwise) with a minimum cluster size constraint | Use EM Clustering
In EM clustering, the algorithm iteratively refines an initial cluster model to fit the data and determines the probability that a data point exists in a cluster. The algorithm ends | Clustering (k-means, or otherwise) with a minimum cluster size constraint
Use EM Clustering
In EM clustering, the algorithm iteratively refines an initial cluster model to fit the data and determines the probability that a data point exists in a cluster. The algorithm ends the process when the probabilistic model fits ... | Clustering (k-means, or otherwise) with a minimum cluster size constraint
Use EM Clustering
In EM clustering, the algorithm iteratively refines an initial cluster model to fit the data and determines the probability that a data point exists in a cluster. The algorithm ends |
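The iterative refinement and log-likelihood fit described above can be sketched for a one-dimensional, two-component Gaussian mixture. This is a toy illustration of the EM idea, not the implementation the answer refers to; EM guarantees the log-likelihood never decreases across iterations:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two well-separated 1-D Gaussian clusters.
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(10.0, 1.0, 200)])

# Initial guesses for the component means, variances and mixing weights.
mu, var, pi = np.array([1.0, 9.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

def loglik(x, mu, var, pi):
    dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum()

history = [loglik(x, mu, var, pi)]
for _ in range(50):
    # E step: responsibility of each component for each point.
    dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M step: re-estimate the parameters from the responsibilities.
    n_k = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    pi = n_k / len(x)
    history.append(loglik(x, mu, var, pi))
```

The component weights `pi` give the (soft) cluster sizes, which is what a minimum-size constraint would be checked against.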
12,255 | Clustering (k-means, or otherwise) with a minimum cluster size constraint | This problem is addressed in this paper:
Bradley, P. S., K. P. Bennett, and Ayhan Demiriz. "Constrained k-means clustering." Microsoft Research, Redmond (2000): 1-8.
I have an implementation of the algorithm in python. | Clustering (k-means, or otherwise) with a minimum cluster size constraint | This problem is addressed in this paper:
Bradley, P. S., K. P. Bennett, and Ayhan Demiriz. "Constrained k-means clustering." Microsoft Research, Redmond (2000): 1-8.
I have an implementation of the al | Clustering (k-means, or otherwise) with a minimum cluster size constraint
This problem is addressed in this paper:
Bradley, P. S., K. P. Bennett, and Ayhan Demiriz. "Constrained k-means clustering." Microsoft Research, Redmond (2000): 1-8.
I have an implementation of the algorithm in python. | Clustering (k-means, or otherwise) with a minimum cluster size constraint
This problem is addressed in this paper:
Bradley, P. S., K. P. Bennett, and Ayhan Demiriz. "Constrained k-means clustering." Microsoft Research, Redmond (2000): 1-8.
I have an implementation of the al |
12,256 | Clustering (k-means, or otherwise) with a minimum cluster size constraint | I think it would just be a matter of running k-means as part of an if loop with a test for cluster sizes, i.e., count n in cluster k. Also remember that k-means will give different results for each run on the same data, so you should probably be running it as part of a loop anyway to extract the "best" result | Clustering (k-means, or otherwise) with a minimum cluster size constraint | I think it would just be a matter of running k-means as part of an if loop with a test for cluster sizes, i.e., count n in cluster k. Also remember that k-means will give different results for eac | Clustering (k-means, or otherwise) with a minimum cluster size constraint
I think it would just be a matter of running k-means as part of an if loop with a test for cluster sizes, i.e., count n in cluster k. Also remember that k-means will give different results for each run on the same data, so you should probably ... | Clustering (k-means, or otherwise) with a minimum cluster size constraint
I think it would just be a matter of running k-means as part of an if loop with a test for cluster sizes, i.e., count n in cluster k. Also remember that k-means will give different results for eac
12,257 | Clustering (k-means, or otherwise) with a minimum cluster size constraint | How large is your data set? Maybe you could try to run a hierarchical clustering and then decide which clusters to retain based on your dendrogram.
If your data set is huge, you could also combine both clustering methods: an initial non-hierarchical clustering and then a hierarchical clustering using the groups from the n... | Clustering (k-means, or otherwise) with a minimum cluster size constraint | How large is your data set? Maybe you could try to run a hierarchical clustering and then decide which clusters to retain based on your dendrogram.
If your data set is huge, you could also combine both c | Clustering (k-means, or otherwise) with a minimum cluster size constraint
How large is your data set? Maybe you could try to run a hierarchical clustering and then decide which clusters to retain based on your dendrogram.
If your data set is huge, you could also combine both clustering methods: an initial non-hierarchical... | Clustering (k-means, or otherwise) with a minimum cluster size constraint
How large is your data set? Maybe you could try to run a hierarchical clustering and then decide which clusters to retain based on your dendrogram.
If your data set is huge, you could also combine both c
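The cluster-then-cut workflow described above can be sketched with SciPy. This is illustrative only: the toy data and the cut level `t=2` are assumptions, and in practice you would inspect the dendrogram before choosing where to cut:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Two well-separated toy blobs.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])

Z = linkage(X, method="ward")                    # agglomerative merge tree
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
```

Small clusters can then be inspected (or merged) by counting the members of each label.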
12,258 | Clustering (k-means, or otherwise) with a minimum cluster size constraint | This can be achieved by modifying the cluster assignment step (E in EM) by formulating it as a Minimum Cost Flow (MCF) linear network optimisation problem.
I have written a python package which uses Google's Operations Research tools's SimpleMinCostFlow which is a fast C++ implementation. It has a standard scikit-lean... | Clustering (k-means, or otherwise) with a minimum cluster size constraint | This can be achieved by modifying the cluster assignment step (E in EM) by formulating it as a Minimum Cost Flow (MCF) linear network optimisation problem.
I have written a python package which uses G | Clustering (k-means, or otherwise) with a minimum cluster size constraint
This can be achieved by modifying the cluster assignment step (E in EM) by formulating it as a Minimum Cost Flow (MCF) linear network optimisation problem.
I have written a python package which uses Google's Operations Research tools's SimpleMinC... | Clustering (k-means, or otherwise) with a minimum cluster size constraint
This can be achieved by modifying the cluster assignment step (E in EM) by formulating it as a Minimum Cost Flow (MCF) linear network optimisation problem.
I have written a python package which uses G |
12,259 | RNN for irregular time intervals? | I just wrote a blog post on that topic!
In short, I write about different methods for dealing with the problem of sparse / irregular sequential data.
Here is a short outline of methods to try:
Lomb-Scargle Periodogram
This is a way of computing spectrograms on non-equidistant timestep series.
Data modeling with Inter... | RNN for irregular time intervals? | I just wrote a blog post on that topic!
In short, I write about different methods for dealing with the problem of sparse / irregular sequential data.
Here is a short outline of methods to try:
Lomb-S | RNN for irregular time intervals?
I just wrote a blog post on that topic!
In short, I write about different methods for dealing with the problem of sparse / irregular sequential data.
Here is a short outline of methods to try:
Lomb-Scargle Periodogram
This is a way of computing spectrograms on non-equidistant timestep... | RNN for irregular time intervals?
I just wrote a blog post on that topic!
In short, I write about different methods for dealing with the problem of sparse / irregular sequential data.
Here is a short outline of methods to try:
Lomb-S |
12,260 | RNN for irregular time intervals? | If you are feeding in some data vector $v_t$ at time $t$, the straightforward solution is to obtain a one-hot encoding of the day of week, $d_t$, and then simply feed into the network the concatenation of $v_t$ and $d_t$. The time/date encoding scheme can be more complicated if the time format is more complicated than ... | RNN for irregular time intervals? | If you are feeding in some data vector $v_t$ at time $t$, the straightforward solution is to obtain a one-hot encoding of the day of week, $d_t$, and then simply feed into the network the concatenatio | RNN for irregular time intervals?
If you are feeding in some data vector $v_t$ at time $t$, the straightforward solution is to obtain a one-hot encoding of the day of week, $d_t$, and then simply feed into the network the concatenation of $v_t$ and $d_t$. The time/date encoding scheme can be more complicated if the tim... | RNN for irregular time intervals?
If you are feeding in some data vector $v_t$ at time $t$, the straightforward solution is to obtain a one-hot encoding of the day of week, $d_t$, and then simply feed into the network the concatenatio |
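A minimal version of the concatenation just described (an illustrative helper, not code from the answer):

```python
import numpy as np

def with_day_of_week(v, day):
    """Concatenate a feature vector with a one-hot day-of-week
    encoding (0=Mon .. 6=Sun)."""
    d = np.zeros(7)
    d[day] = 1.0
    return np.concatenate([v, d])

# Two original features plus seven one-hot entries -> nine inputs per step.
x = with_day_of_week(np.array([0.5, -1.2]), day=2)
```

The same pattern extends to month, holiday flags, or any other calendar feature the network should condition on.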
12,261 | RNN for irregular time intervals? | I would try incorporating time interval explicitly into the model. For instance, a conventional time series models such as autoregressive AR(p) can be thought of as discretizations of continuous time model. For instance, AR(1) model:
$$y_t=c+\phi y_{t-1}+\varepsilon_t$$
can be thought of as a version of:
$$y_t=c\Delta ... | RNN for irregular time intervals? | I would try incorporating time interval explicitly into the model. For instance, a conventional time series models such as autoregressive AR(p) can be thought of as discretizations of continuous time | RNN for irregular time intervals?
I would try incorporating time interval explicitly into the model. For instance, a conventional time series models such as autoregressive AR(p) can be thought of as discretizations of continuous time model. For instance, AR(1) model:
$$y_t=c+\phi y_{t-1}+\varepsilon_t$$
can be thought ... | RNN for irregular time intervals?
I would try incorporating time interval explicitly into the model. For instance, a conventional time series models such as autoregressive AR(p) can be thought of as discretizations of continuous time |
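One common continuous-time reading of this idea (my sketch; the answer's own derivation is truncated above) is that the weight on the previous observation decays with the gap $\Delta t$ as $\phi^{\Delta t}$, so irregular gaps enter the model explicitly:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, c = 0.8, 0.0
# Irregular observation times with gaps of 1 to 4 units.
times = np.cumsum(rng.integers(1, 5, size=200))

y = [0.0]
for dt in np.diff(times):
    # Dependence on the previous value weakens as the gap dt grows.
    y.append(c + phi ** dt * y[-1] + rng.normal())
y = np.array(y)
```

Fitting such a model amounts to estimating `phi` and `c` with the gaps `dt` treated as known covariates.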
12,262 | RNN for irregular time intervals? | I think it depends on the data. For example, if you are processing counts and you just forgot to measure it on some days, then the best strategy is to impute the missing values (e.g., via interpolation or Gaussian processes) and then process the imputed time series with an RNN. By imputing, you would be embedding know... | RNN for irregular time intervals? | I think it depends on the data. For example, if you are processing counts and you just forgot to measure it on some days, then the best strategy is to impute the missing values (e.g., via interpolati | RNN for irregular time intervals?
I think it depends on the data. For example, if you are processing counts and you just forgot to measure it on some days, then the best strategy is to impute the missing values (e.g., via interpolation or Gaussian processes) and then process the imputed time series with an RNN. By imp... | RNN for irregular time intervals?
I think it depends on the data. For example, if you are processing counts and you just forgot to measure it on some days, then the best strategy is to impute the missing values (e.g., via interpolati |
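The imputation step described above can be as simple as linear interpolation onto a regular grid; the numbers below are a toy illustration:

```python
import numpy as np

# Observed on irregular days; impute onto a regular daily grid for the RNN.
t_obs = np.array([0, 1, 4, 6, 7])
y_obs = np.array([1.0, 2.0, 8.0, 12.0, 14.0])

t_grid = np.arange(t_obs.min(), t_obs.max() + 1)  # days 0..7
y_grid = np.interp(t_grid, t_obs, y_obs)          # linear interpolation
```

Gaussian-process imputation follows the same shape, but additionally yields an uncertainty for each imputed point.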
12,263 | High Recall - Low Precision for unbalanced dataset | does anyone have a clue why I’m getting way more false positives than false negatives (positive is the minority class)? Thanks in advance for your help!
Because positive is the minority class. There are a lot of negative examples that could become false positives. Conversely, there are fewer positive examples that c... | High Recall - Low Precision for unbalanced dataset | does anyone have a clue why I’m getting way more false positives than false negatives (positive is the minority class)? Thanks in advance for your help!
Because positive is the minority class. There | High Recall - Low Precision for unbalanced dataset
does anyone have a clue why I’m getting way more false positives than false negatives (positive is the minority class)? Thanks in advance for your help!
Because positive is the minority class. There are a lot of negative examples that could become false positives. C... | High Recall - Low Precision for unbalanced dataset
does anyone have a clue why I’m getting way more false positives than false negatives (positive is the minority class)? Thanks in advance for your help!
Because positive is the minority class. There |
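The point is easy to see with toy numbers (made up for illustration): even a small false-positive rate on a large negative class swamps the false negatives.

```python
# 1% positives, and a classifier with seemingly decent rates.
n_pos, n_neg = 100, 9900
tpr, fpr = 0.90, 0.08

fn = n_pos * (1 - tpr)  # ~10 missed positives
fp = n_neg * fpr        # ~792 false alarms
precision = (n_pos * tpr) / (n_pos * tpr + fp)  # low, despite high recall
```

Here recall is 0.90 but precision is only about 0.10, purely because of the class imbalance.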
12,264 | High Recall - Low Precision for unbalanced dataset | Methods to try out:
UnderSampling:
I suggest using under sampling techniques and then training your classifier.
Imbalanced Learning provides a scikit learn style api for imbalanced dataset and should be a good starting point for sampling and algorithms to try out.
Library: https://imbalanced-learn.readthedocs.io/en/st... | High Recall - Low Precision for unbalanced dataset | Methods to try out:
UnderSampling:
I suggest using under sampling techniques and then training your classifier.
Imbalanced Learning provides a scikit learn style api for imbalanced dataset and should | High Recall - Low Precision for unbalanced dataset
Methods to try out:
UnderSampling:
I suggest using under sampling techniques and then training your classifier.
Imbalanced Learning provides a scikit learn style api for imbalanced dataset and should be a good starting point for sampling and algorithms to try out.
Lib... | High Recall - Low Precision for unbalanced dataset
Methods to try out:
UnderSampling:
I suggest using under sampling techniques and then training your classifier.
Imbalanced Learning provides a scikit learn style api for imbalanced dataset and should |
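The core idea behind random undersampling, which the library above wraps, fits in a few lines of NumPy. This is an illustrative re-implementation, not imbalanced-learn's code:

```python
import numpy as np

def random_undersample(X, y, random_state=0):
    """Drop majority-class rows at random until all classes match the
    minority-class count (the idea behind RandomUnderSampler)."""
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
X_res, y_res = random_undersample(X, y)  # 2 rows per class remain
```

Always undersample only the training split, never the evaluation data.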
12,265 | High Recall - Low Precision for unbalanced dataset | The standard approach would be to weight your error based on class frequency. For example, if you were doing it in Python with sklearn:
model = sklearn.svm.SVC(C=1.0, kernel='linear', class_weight='balanced')
model.fit(X, y) | High Recall - Low Precision for unbalanced dataset | The standard approach would be to weight your error based on class frequency. For example, if you were doing it in Python with sklearn:
model = sklearn.svm.SVC(C=1.0, kernel='linear', class_weight='ba | High Recall - Low Precision for unbalanced dataset
The standard approach would be to weight your error based on class frequency. For example, if you were doing it in Python with sklearn:
model = sklearn.svm.SVC(C=1.0, kernel='linear', class_weight='balanced')
model.fit(X, y) | High Recall - Low Precision for unbalanced dataset
The standard approach would be to weight your error based on class frequency. For example, if you were doing it in Python with sklearn:
model = sklearn.svm.SVC(C=1.0, kernel='linear', class_weight='ba |
12,266 | number of feature maps in convolutional neural networks | 1) C1 in the layer 1 has 6 feature maps, does that mean there are six convolutional kernels? Each convolutional kernel is used to generate a feature map based on input.
There are 6 convolutional kernels and each is used to generate a feature map based on input. Another way to say this is that there are 6 filters or 3D... | number of feature maps in convolutional neural networks | 1) C1 in the layer 1 has 6 feature maps, does that mean there are six convolutional kernels? Each convolutional kernel is used to generate a feature map based on input.
There are 6 convolutional kern | number of feature maps in convolutional neural networks
1) C1 in the layer 1 has 6 feature maps, does that mean there are six convolutional kernels? Each convolutional kernel is used to generate a feature map based on input.
There are 6 convolutional kernels and each is used to generate a feature map based on input. A... | number of feature maps in convolutional neural networks
1) C1 in the layer 1 has 6 feature maps, does that mean there are six convolutional kernels? Each convolutional kernel is used to generate a feature map based on input.
There are 6 convolutional kern |
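For a LeNet-style C1 layer (assuming 5x5 kernels over a single-channel input, as LeNet-5 is usually described), the parameter count works out as:

```python
# 6 kernels of size 5x5 over a 1-channel input, one bias per feature map.
n_kernels, k, in_channels = 6, 5, 1
weights = n_kernels * in_channels * k * k  # 150 shared weights
params = weights + n_kernels               # 156 trainable parameters
```

The key point is that the 150 weights are shared across every spatial position of each feature map.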
12,267 | Bayesian Survival Analysis: please, write me a prior for Kaplan Meier! | Note that because your likelihood function is a product of $ \alpha_i $ functions - the data are telling you that there is no evidence for correlation between them. Note that the $ d_i $ variables are already scaling to account for time. Longer time period means more chance for events, generally meaning larger $ d_i ... | Bayesian Survival Analysis: please, write me a prior for Kaplan Meier! | Note that because your likelihood function is a product of $ \alpha_i $ functions - the data are telling you that there is no evidence for correlation between them. Note that the $ d_i $ variables ar | Bayesian Survival Analysis: please, write me a prior for Kaplan Meier!
Note that because your likelihood function is a product of $ \alpha_i $ functions - the data are telling you that there is no evidence for correlation between them. Note that the $ d_i $ variables are already scaling to account for time. Longer ti... | Bayesian Survival Analysis: please, write me a prior for Kaplan Meier!
Note that because your likelihood function is a product of $ \alpha_i $ functions - the data are telling you that there is no evidence for correlation between them. Note that the $ d_i $ variables ar |
12,268 | Bayesian Survival Analysis: please, write me a prior for Kaplan Meier! | For readers facing the problem of going to Bayesian for estimating survival functions accepting right censoring, I would recommend the nonparametric Bayesian approach developed by F Mangili, A Benavoli et al. The only prior specification is a (precision or strength) parameter. It avoids the need to specify the Dirichle... | Bayesian Survival Analysis: please, write me a prior for Kaplan Meier! | For readers facing the problem of going to Bayesian for estimating survival functions accepting right censoring, I would recommend the nonparametric Bayesian approach developed by F Mangili, A Benavol | Bayesian Survival Analysis: please, write me a prior for Kaplan Meier!
For readers facing the problem of going to Bayesian for estimating survival functions accepting right censoring, I would recommend the nonparametric Bayesian approach developed by F Mangili, A Benavoli et al. The only prior specification is a (preci... | Bayesian Survival Analysis: please, write me a prior for Kaplan Meier!
For readers facing the problem of going to Bayesian for estimating survival functions accepting right censoring, I would recommend the nonparametric Bayesian approach developed by F Mangili, A Benavol |
12,269 | Topic stability in topic models | For my own curiosity, I applied a clustering algorithm that I've been working on to this dataset.
I've temporarily put-up the results here (choose the essays dataset).
It seems like the problem is not the starting points or the algorithm, but the data. You can 'reasonably' (subjectively, in my limited experience) get g... | Topic stability in topic models | For my own curiosity, I applied a clustering algorithm that I've been working on to this dataset.
I've temporarily put-up the results here (choose the essays dataset).
It seems like the problem is not | Topic stability in topic models
For my own curiosity, I applied a clustering algorithm that I've been working on to this dataset.
I've temporarily put-up the results here (choose the essays dataset).
It seems like the problem is not the starting points or the algorithm, but the data. You can 'reasonably' (subjectively,... | Topic stability in topic models
For my own curiosity, I applied a clustering algorithm that I've been working on to this dataset.
I've temporarily put-up the results here (choose the essays dataset).
It seems like the problem is not |
12,270 | Topic stability in topic models | The notion of "topics" in so-called "topic models" is misleading. The model does not know or is not designed to know semantically coherent "topics" at all. The "topics" are just distributions over tokens (words). In other words, the model just captures the high-order co-occurrence of terms. Whether these structures mean... | Topic stability in topic models | The notion of "topics" in so-called "topic models" is misleading. The model does not know or is not designed to know semantically coherent "topics" at all. The "topics" are just distributions over tok | Topic stability in topic models
The notion of "topics" in so-called "topic models" is misleading. The model does not know or is not designed to know semantically coherent "topics" at all. The "topics" are just distributions over tokens (words). In other words, the model just captures the high-order co-occurrence of term... | Topic stability in topic models
The notion of "topics" in so-called "topic models" is misleading. The model does not know or is not designed to know semantically coherent "topics" at all. The "topics" are just distributions over tok |
12,271 | Should sampling for logistic regression reflect the real ratio of 1's and 0's? | If the goal of such a model is prediction, then you cannot use unweighted logistic regression to predict outcomes: you will overpredict risk. The strength of logistic models is that the odds ratio (OR)--the "slope" which measures association between a risk factor and a binary outcome in a logistic model--is invariant t... | Should sampling for logistic regression reflect the real ratio of 1's and 0's? | If the goal of such a model is prediction, then you cannot use unweighted logistic regression to predict outcomes: you will overpredict risk. The strength of logistic models is that the odds ratio (OR | Should sampling for logistic regression reflect the real ratio of 1's and 0's?
If the goal of such a model is prediction, then you cannot use unweighted logistic regression to predict outcomes: you will overpredict risk. The strength of logistic models is that the odds ratio (OR)--the "slope" which measures association... | Should sampling for logistic regression reflect the real ratio of 1's and 0's?
If the goal of such a model is prediction, then you cannot use unweighted logistic regression to predict outcomes: you will overpredict risk. The strength of logistic models is that the odds ratio (OR |
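One standard way to recover calibrated predictions from a model fitted on an artificially balanced sample is a prior-correction shift of the intercept on the logit scale. The sketch below is my illustration of that idea, not code from the answer:

```python
import numpy as np

def correct_probability(p_model, sample_rate, true_rate):
    """Shift model probabilities from the sampling prevalence back to the
    population prevalence via an intercept adjustment on the logit scale."""
    logit = np.log(p_model / (1 - p_model))
    shift = np.log((true_rate / (1 - true_rate)) *
                   ((1 - sample_rate) / sample_rate))
    return 1 / (1 + np.exp(-(logit + shift)))

# Trained on a 50/50 sample, but the real prevalence is 5%:
p = correct_probability(np.array([0.5]), sample_rate=0.5, true_rate=0.05)
```

A score of 0.5 under 50/50 sampling maps back to 0.05, the population base rate, while the slopes (odds ratios) are untouched.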
12,272 | Comparing two histograms using Chi-Square distance | @Silverfish asked for an expansion of the answer by PolatAlemdar, which was not given, so I will try to expand on it here.
Why the name chisquare distance? The chisquare test for contingency tables is based on
$$
\chi^2 = \sum_{\text{cells}} \frac{(O_i-E_i)^2}{E_i}
$$
so the idea is to keep this form and use it as ... | Comparing two histograms using Chi-Square distance | @Silverfish asked for an expansion of the answer by PolatAlemdar, which was not given, so I will try to expand on it here.
Why the name chisquare distance? The chisquare test for contingency tables i | Comparing two histograms using Chi-Square distance
@Silverfish asked for an expansion of the answer by PolatAlemdar, which was not given, so I will try to expand on it here.
Why the name chisquare distance? The chisquare test for contingency tables is based on
$$
\chi^2 = \sum_{\text{cells}} \frac{(O_i-E_i)^2}{E_i}... | Comparing two histograms using Chi-Square distance
@Silverfish asked for an expansion of the answer by PolatAlemdar, which was not given, so I will try to expand on it here.
Why the name chisquare distance? The chisquare test for contingency tables i |
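The symmetric histogram variant of this distance is straightforward to implement. This is an illustrative function; the small `eps` guarding empty bins is my addition:

```python
import numpy as np

def chi2_distance(h, g, eps=1e-10):
    """Symmetric chi-square distance between two histograms:
    0.5 * sum_i (h_i - g_i)^2 / (h_i + g_i)."""
    h, g = np.asarray(h, float), np.asarray(g, float)
    return 0.5 * np.sum((h - g) ** 2 / (h + g + eps))

d = chi2_distance([1, 2, 13, 5, 45, 23], [67, 90, 18, 79, 24, 98])
```

Unlike the asymmetric form used in contingency-table tests, this version treats the two histograms interchangeably.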
12,273 | Comparing two histograms using Chi-Square distance | I found this link to be quite useful: http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
I am not quite sure why, but OpenCV uses the 3rd formula you list for Chi-Square histogram comparison.
In terms of meaning, I am not sure any measurement algorithm is going t... | Comparing two histograms using Chi-Square distance | I found this link to be quite useful: http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
I am not quite sure why, but OpenCV uses the 3rd formu | Comparing two histograms using Chi-Square distance
I found this link to be quite useful: http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
I am not quite sure why, but OpenCV uses the 3rd formula you list for Chi-Square histogram comparison.
In terms of meaning,... | Comparing two histograms using Chi-Square distance
I found this link to be quite useful: http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
I am not quite sure why, but OpenCV uses the 3rd formu |
12,274 | Comparing two histograms using Chi-Square distance | In fact you can use whatever you believe is correct for your case. The last one is different. It is used in discrete probability distributions, as the last one will be symmetric if you swap $x$ and $y$.
The other two are used in calculating histogram similarities. | Comparing two histograms using Chi-Square distance | In fact you can use whatever you believe is correct for your case. The last one is different. It is used in discrete probability distributions, as the last one will be symmetric if you swap $x$ and $y | Comparing two histograms using Chi-Square distance
In fact you can use whatever you believe is correct for your case. The last one is different. It is used in discrete probability distributions, as the last one will be symmetric if you swap $x$ and $y$.
The other two are used in calculating histogram similarities. | Comparing two histograms using Chi-Square distance
In fact you can use whatever you believe is correct for your case. The last one is different. It is used in discrete probability distributions, as the last one will be symmetric if you swap $x$ and $y |
12,275 | Comparing two histograms using Chi-Square distance | As OP requested, the value in percentage (for equation 1):
$p = \frac{\chi * S * 100}{N}$
Where:
$p$ is the percentage of difference (0..100).
$\chi$ is the result of equation 1.
$N$ is the number of bins in histogram.
$S$ is the maximum possible value in the bin.
Complemented as requested:
Calculating ... | Comparing two histograms using Chi-Square distance | As OP requested, the value in percentage (for equation 1):
$p = \frac{\chi * S * 100}{N}$
Where:
$p$ is the percentage of difference (0..100).
$\chi$ is the result of equation 1.
$N$ is th | Comparing two histograms using Chi-Square distance
As OP requested, the value in percentage (for equation 1):
$p = \frac{\chi * S * 100}{N}$
Where:
$p$ is the percentage of difference (0..100).
$\chi$ is the result of equation 1.
$N$ is the number of bins in histogram.
$S$ is the maximum possible value ... | Comparing two histograms using Chi-Square distance
As OP requested, the value in percentage (for equation 1):
$p = \frac{\chi * S * 100}{N}$
Where:
$p$ is the percentage of difference (0..100).
$\chi$ is the result of equation 1.
$N$ is th |
12,276 | Has the reported state-of-the-art performance of using paragraph vectors for sentiment analysis been replicated? | Footnote at http://arxiv.org/abs/1412.5335 (one of the authors is Tomas Mikolov) says
In our experiments, to match the results from (Le & Mikolov, 2014), we followed the suggestion by Quoc Le to use hierarchical softmax instead of negative sampling. However, this produces the 92.6% accuracy result only when the traini... | Has the reported state-of-the-art performance of using paragraph vectors for sentiment analysis been | Footnote at http://arxiv.org/abs/1412.5335 (one of the authors is Tomas Mikolov) says
In our experiments, to match the results from (Le & Mikolov, 2014), we followed the suggestion by Quoc Le to use | Has the reported state-of-the-art performance of using paragraph vectors for sentiment analysis been replicated?
Footnote at http://arxiv.org/abs/1412.5335 (one of the authors is Tomas Mikolov) says
In our experiments, to match the results from (Le & Mikolov, 2014), we followed the suggestion by Quoc Le to use hierarc... | Has the reported state-of-the-art performance of using paragraph vectors for sentiment analysis been
Footnote at http://arxiv.org/abs/1412.5335 (one of the authors is Tomas Mikolov) says
In our experiments, to match the results from (Le & Mikolov, 2014), we followed the suggestion by Quoc Le to use |
12,277 | How to control the cost of misclassification in Random Forests? | Not really, short of manually building an RF clone that bags rpart models.
One option comes from the fact that the output of RF is actually a continuous score rather than a crisp decision, i.e. the fraction of trees that voted on some class. It can be extracted with predict(rf_model,type="prob") and used to make, ... | How to control the cost of misclassification in Random Forests? | Not really, short of manually building an RF clone that bags rpart models.
One option comes from the fact that the output of RF is actually a continuous score rather than a crisp decision, i.e. t | How to control the cost of misclassification in Random Forests?
Not really, short of manually building an RF clone that bags rpart models.
One option comes from the fact that the output of RF is actually a continuous score rather than a crisp decision, i.e. the fraction of trees that voted on some class. It can be... | How to control the cost of misclassification in Random Forests?
Not really, short of manually building an RF clone that bags rpart models.
One option comes from the fact that the output of RF is actually a continuous score rather than a crisp decision, i.e. t
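Given such continuous scores, a cost-sensitive cutoff can be chosen by brute force. The data and costs below are made up for illustration (a false negative costing 10x a false positive):

```python
import numpy as np

cost_fp, cost_fn = 1.0, 10.0

# Class-1 probability scores, e.g. from predict(rf, type="prob") in R.
scores = np.array([0.05, 0.2, 0.35, 0.6, 0.9])
y_true = np.array([0, 0, 1, 1, 1])

def total_cost(threshold):
    pred = (scores >= threshold).astype(int)
    fp = np.sum((pred == 1) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    return cost_fp * fp + cost_fn * fn

thresholds = np.linspace(0, 1, 101)
best = thresholds[np.argmin([total_cost(t) for t in thresholds])]
```

In practice the search should run on held-out scores, not on the training data.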
12,278 | How to control the cost of misclassification in Random Forests? | There are a number of ways of including costs.
(1) Over/under sampling for each bagged tree (stratified sampling) is the most common method of introducing costs. You intentionally imbalance the dataset.
(2) Weighting. Never works. I think this is emphasized in documentation. Some claim you just need to weight at all stage... | How to control the cost of misclassification in Random Forests? | There are a number of ways of including costs.
(1) Over/under sampling for each bagged tree (stratified sampling) is the most common method of introducing costs. you intentionally imbalance dataset.
| How to control the cost of misclassification in Random Forests?
There are a number of ways of including costs.
(1) Over/under sampling for each bagged tree (stratified sampling) is the most common method of introducing costs. you intentionally imbalance dataset.
(2) Weighting. Never works. I think this is emphasized i... | How to control the cost of misclassification in Random Forests?
There are a number of ways of including costs.
(1) Over/under sampling for each bagged tree (stratified sampling) is the most common method of introducing costs. you intentionally imbalance dataset.
|
12,279 | How to control the cost of misclassification in Random Forests? | It's recommended that if the variable you are trying to predict is not 50% for class 1 and 50% for class 2 (as in most cases), you adjust the cutoff parameter to represent the real OOB in the summary.
For example,
randomForest(formula, data = my_data, ntree = 501, cutoff = c(.96, .04))
In this case, the probability of havi...
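For intuition about what cutoff = c(.96, .04) does: randomForest's documentation describes it as a ratio rule, the winning class being the one with the highest ratio of vote fraction to cutoff. A small NumPy sketch of that rule, with made-up vote fractions (in R this happens inside predict.randomForest):

```python
import numpy as np

# Hypothetical per-sample vote fractions from a forest (cols: class 1, class 2).
votes = np.array([[0.90, 0.10],
                  [0.97, 0.03]])
cutoff = np.array([0.96, 0.04])        # the c(.96, .04) from the answer

# randomForest-style rule: the class with the highest votes/cutoff ratio wins.
pred = (votes / cutoff).argmax(axis=1)
print(pred)                            # class 2 wins the first sample with only 10% of votes
```

With this cutoff, class 2 is predicted whenever its vote share exceeds about 4%, which is exactly how the parameter encodes asymmetric costs.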
12,280 | How to control the cost of misclassification in Random Forests? | One can incorporate costMatrix in randomForest explicitly via the parms parameter:
library(randomForest)
costMatrix <- matrix(c(0, 10, 1, 0), nrow = 2)
mod_rf <- randomForest(outcome ~ ., data = train, ntree = 1000, parms = list(loss = costMatrix))
12,281 | How to control the cost of misclassification in Random Forests? | You can incorporate cost sensitivity using the sampsize argument in the randomForest package.
model1 <- randomForest(DependentVariable ~ ., data = my_data, sampsize = c(100, 20))
Vary the figures (100, 20) based on the data you have and the assumptions/business rules you are working with.
It takes a bit of a trial and error appr...
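What sampsize = c(100, 20) does per tree can be mimicked directly: draw a stratified bootstrap of fixed size from each class, which shifts the class mix every tree sees. A NumPy sketch with invented counts:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([0] * 900 + [1] * 100)      # imbalanced labels: 10% positives

idx0 = np.flatnonzero(y == 0)
idx1 = np.flatnonzero(y == 1)

# Per-tree stratified bootstrap: draw 100 negatives and 20 positives with replacement,
# the analogue of sampsize = c(100, 20).
boot = np.concatenate([rng.choice(idx0, size=100, replace=True),
                       rng.choice(idx1, size=20, replace=True)])
print((y[boot] == 1).mean())             # 1/6 positives instead of the raw 10%
```

Each tree then sees one-sixth positives rather than one-tenth, which is how the sampling scheme quietly encodes a cost ratio.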
12,282 | Difficulty of testing linearity in regression | I created a simulation that would answer to Breiman's description and found only the obvious: the result depends on the context and on what is meant by "extreme."
An awful lot could be said, but let me limit it to just one example, conducted by means of easily modified R code for interested readers to use in their own i...
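The R code the answer refers to is cut off, but the kind of simulation it describes is easy to sketch: simulate data from a mildly nonlinear truth, fit a linear model, and test whether adding a curvature term significantly reduces the residual sum of squares. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(-1, 1, n)
y = 1 + 2 * x + 0.5 * x**2 + rng.normal(0, 0.3, n)   # mildly nonlinear truth

def rss(X):
    """Residual sum of squares of an OLS fit of y on design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

X_lin = np.column_stack([np.ones(n), x])             # straight-line model
X_quad = np.column_stack([np.ones(n), x, x**2])      # adds one curvature term

# Partial F statistic for the single added quadratic term.
F = (rss(X_lin) - rss(X_quad)) / (rss(X_quad) / (n - 3))
print(round(F, 1))
```

How often such an F test flags the curvature depends on the sample size, the noise level, and how "extreme" the nonlinearity is, which is exactly the answer's point.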
12,283 | Difficulty of testing linearity in regression | Not sure it gives a final answer to the question, but I would take a look at this. Especially point 2. See also the discussion in appendix A2 of the paper.
12,284 | Residual diagnostics in MCMC-based regression models | I think the use of the term residual is not consistent with Bayesian regression. Remember, in frequentist probability models, it's the parameters which are considered fixed estimable quantities and the data generating mechanism has some random probability model associated with the observed data. For Bayesians, the paramete...
12,285 | Has anyone solved PTLOS exercise 4.1? | For the record, here is a somewhat more extensive proof. It also contains some background information. Maybe this is helpful for others studying the topic.
The main idea of the proof is to show that Jaynes' conditions 1 and 2 imply that
$$P(D_{m_k}|H_iX)=P(D_{m_k}|X),$$
for all but one data set $m_k=1,\ldots,m$. It th...
12,286 | Has anyone solved PTLOS exercise 4.1? | The reason we accepted eq. 4.28 (in the book, your condition 1) was that we assumed the probability of the data given a certain hypothesis $H_a$ and background information $X$ is independent, in other words for any $D_i$ and $D_j$ with $i\neq{j}$:
\begin{equation}P(D_i|D_jH_aX)=P(D_i|H_aX)\quad\quad{\rm (1)}\end{equation}
12,287 | Has anyone solved PTLOS exercise 4.1? | Okay, so rather than go and re-derive Saunder's equation (5), I will just state it here. Conditions 1 and 2 imply the following equality:
$$\prod_{j=1}^{m}\left(\sum_{k\neq i}h_{k}d_{jk}\right)=\left(\sum_{k\neq i}h_{k}\right)^{m-1}\left(\sum_{k\neq i}h_{k}\prod_{j=1}^{m}d_{jk}\right)$$
where
$$d_{jk}=P(D_{j}|H_{k},I),\;\ldots$$
12,288 | Has anyone solved PTLOS exercise 4.1? | Here's a visual example for some intuition (note: for simplification, I've omitted writing that all the probabilities below are conditioned on the background information $X$, as is done in the book, but you should assume that they are).
Left column: $H_i$
Middle column: $P(D_1)$ and $P(\overline{D_1})$
Right column: ...
12,289 | AIC & BIC number interpretation | $AIC$ for model $i$ of an a priori model set can be rescaled to $\mathsf{\Delta}_i=AIC_i-\min AIC$, where the best model of the model set will have $\mathsf{\Delta}=0$. We can use the $\mathsf{\Delta}_i$ values to estimate the strength of evidence ($w_i$) for all the models in the model set, where:
$$
w_i = \frac{e^{(-0.5\mathsf{\Delta}_i)}}{\sum_{r=1}^{R} e^{(-0.5\mathsf{\Delta}_r)}}
$$
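The rescaling and the Akaike-weight formula above are only a few lines in any language; here is a NumPy version with made-up AIC values for three candidate models:

```python
import numpy as np

aic = np.array([102.3, 100.1, 110.7])   # hypothetical AIC values for 3 models
delta = aic - aic.min()                 # rescale so the best model has delta = 0
w = np.exp(-0.5 * delta)
w /= w.sum()                            # Akaike weights sum to 1
print(w.round(3))
```

The weights can be read as the relative weight of evidence for each model within this candidate set; here the second model carries most of it.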
12,290 | AIC & BIC number interpretation | I don't think there is any simple interpretation of AIC or BIC like that. They are both quantities that take the log likelihood and apply a penalty to it for the number of parameters being estimated. The specific penalties are explained for AIC by Akaike in his papers starting in 1974. BIC was selected by Gideon Sch...
12,291 | AIC & BIC number interpretation | You probably use the BIC as a result of its approximation to the Bayes factor. Therefore you don't consider (more or less) a prior distribution. BIC at the model selection stage is useful when you compare the models. To fully understand BIC and the Bayes factor, I highly recommend reading an article (sec. 4): http://www.stat.washington....
12,292 | Calibrating a multi-class boosted classifier | This is a topic of practical interest to me as well so I did a little research. Here are two papers by an author that is often listed as a reference in these matters.
Transforming classifier scores into accurate multiclass probability estimates
Reducing multiclass to binary by coupling probability estimates
The gist...
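The second paper's theme, reducing a multiclass problem to binary problems and then coupling the probability estimates, can be sketched at its crudest: calibrate each one-vs-rest score separately, then renormalise. The scores below are invented, and real coupling schemes in that literature are more refined than simple renormalisation:

```python
import numpy as np

# Hypothetical calibrated one-vs-rest scores r_k ~ P(class k vs rest)
# for 2 samples and 3 classes; rows need not sum to 1.
r = np.array([[0.8, 0.3, 0.1],
              [0.2, 0.5, 0.6]])

# Crudest possible coupling: renormalise each row into a probability vector.
p = r / r.sum(axis=1, keepdims=True)
print(p.round(3))
```

After renormalising, each row is a proper multiclass probability distribution that can be compared or thresholded directly.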
12,293 | What causes sudden drops in training/test errors when training a neural network? | They changed the learning rate. Note the drop is at exactly 30 and 60 epochs, obviously set manually by someone.
12,294 | What causes sudden drops in training/test errors when training a neural network? | If you refer to the ResNet (Deep Residual Learning for Image Recognition) paper, it reads as follows: "The learning rate starts from 0.1 and is divided by 10 when the error plateaus". Hence, the reason for the drop is the update in the learning rate.
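The schedule both answers describe, start at 0.1 and divide by 10 at fixed milestones, is a one-liner to reproduce. The milestone epochs 30 and 60 are taken from the plot discussed above; the function name is just for illustration:

```python
def step_lr(epoch, base_lr=0.1, drop_epochs=(30, 60), factor=10):
    """Divide the learning rate by `factor` at each milestone epoch."""
    lr = base_lr
    for e in drop_epochs:
        if epoch >= e:
            lr /= factor
    return lr

print(step_lr(29), step_lr(30), step_lr(60))   # 0.1 0.01 0.001
```

Each milestone produces exactly the kind of sharp, simultaneous drop in training and test error seen in such curves.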
12,295 | Suggestions for improving a probability and statistics cheat sheet | Tom Short's R Reference Card is excellent.
12,296 | Suggestions for improving a probability and statistics cheat sheet | My favorite is the R Inferno by Patrick Burns.
12,297 | Are random variables correlated if and only if their ranks are correlated? | Neither correlation being zero necessarily tells you much about the other, since they 'weight' the data - especially extreme data - quite differently. I am just going to play with samples, but similar examples could be constructed with bivariate distributions / copulas.
1. Spearman correlation 0 doesn't imply Pearson c...
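The "different weighting" point is easy to see with a monotone but very nonlinear transform: the ranks (and hence Spearman) are untouched, while Pearson is dragged around by the extreme values. A small SciPy sketch with invented data:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.arange(1.0, 11.0)
y = np.exp(x)                    # monotone, so the ranks of y equal the ranks of x

rho_s, _ = spearmanr(x, y)       # exactly 1: identical rank orderings
r_p, _ = pearsonr(x, y)          # well below 1: dominated by the largest y values
print(round(rho_s, 3), round(r_p, 3))
```

One correlation is at its maximum while the other is noticeably smaller, purely because Pearson weights the extreme observations so heavily.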
12,298 | Logistic regression for time series | There are two methods to consider:
Only use the last $\mathrm{N}$ input samples. Assuming your input signal is of dimension $\mathrm{D}$, then you have $\mathrm{N} \times \mathrm{D}$ samples per ground truth label. This way you can train using any classifier you like, including logistic regression. This way, each outp...
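The first method, flattening the last N samples into one N x D feature vector per label, can be sketched with scikit-learn; the signal, labels, and window size below are all invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
T, D, N = 200, 2, 5                       # series length, signal dimension, window size
x = rng.normal(size=(T, D))               # toy multivariate input signal
y = (x[:, 0] > 0).astype(int)             # toy ground-truth labels

# One flat N*D feature vector per time step: the window of the last N samples.
X = np.stack([x[t - N + 1:t + 1].ravel() for t in range(N - 1, T)])
y_win = y[N - 1:]

clf = LogisticRegression(max_iter=1000).fit(X, y_win)
print(clf.score(X, y_win))
```

Each row of X is one window, so any off-the-shelf classifier applies, at the cost of ignoring dependence between overlapping windows.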
12,299 | What is the difference between PCA and asymptotic PCA? | There is absolutely no difference.
There is absolutely no difference between standard PCA and what C&K suggested and called "asymptotic PCA". It is quite ridiculous to give it a separate name.
Here is a short explanation of PCA. If centered data with samples in rows are stored in a data matrix $\mathbf X$, then PCA lo...
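The claimed equivalence is easy to verify numerically: $\mathbf X^\top \mathbf X$ (variable space) and $\mathbf X \mathbf X^\top$ (sample space) share the same nonzero eigenvalues, so it does not matter which Gram matrix you decompose. The dimensions below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 30))     # few samples (rows), many variables (columns)
X -= X.mean(axis=0)              # centre the columns, as in the answer's setup

# Top eigenvalues of the 30x30 and 5x5 Gram matrices coincide.
evals_feat = np.linalg.eigvalsh(X.T @ X)[::-1][:5]
evals_samp = np.linalg.eigvalsh(X @ X.T)[::-1][:5]
print(np.allclose(evals_feat, evals_samp))
```

When variables vastly outnumber samples, decomposing the small sample-space matrix is simply the cheaper route to the identical principal components.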
12,300 | What is the difference between PCA and asymptotic PCA? | Typically APCA gets used when there are lots of series but very few samples. I wouldn't describe APCA as better or worse than PCA, because of the equivalence you noted. They do, however, differ in when the tools are applicable. That is the insight of the paper: you can flip the dimension if it's more convenient! So in ...