Dataset schema (min/max are the value range for idx and character lengths for the string columns):

    column             type     min   max
    idx                int64    1     56k
    question           string   15    155
    answer             string   2     29.2k
    question_cut       string   15    100
    answer_cut         string   2     200
    conversation       string   47    29.3k
    conversation_cut   string   47    301
13,101
Non-transitivity of correlation: correlations between gender and brain size and between brain size and IQ, but no correlation between gender and IQ
This is a situation in which I like using path diagrams to illustrate direct effects and indirect effects, and how those two impact the overall correlations. Per the original description we have a correlation matrix below. Brain size has around a 0.3 correlation with IQ, female and IQ have a 0 correlation with each oth...
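As a quick numerical illustration of how such a pattern can be internally consistent, here is a minimal R sketch; the 0.35 gender-brain correlation is a made-up value (the original matrix is cut off above), and gender is treated as a numeric score for simplicity:

    library(MASS)
    # Assumed correlations: gender-brain = 0.35, brain-IQ = 0.30, gender-IQ = 0
    R <- matrix(c(1.00, 0.35, 0.00,
                  0.35, 1.00, 0.30,
                  0.00, 0.30, 1.00), nrow = 3,
                dimnames = list(c("female", "brain", "iq"),
                                c("female", "brain", "iq")))
    eigen(R)$values   # all positive: a valid correlation matrix despite non-transitivity
    x <- mvrnorm(1e5, mu = rep(0, 3), Sigma = R, empirical = TRUE)
    round(cor(x), 2)  # reproduces the non-transitive pattern exactly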
13,102
Non-transitivity of correlation: correlations between gender and brain size and between brain size and IQ, but no correlation between gender and IQ
To provide the purely abstract mathematical answer, denote by $v$ the brain volume and by $q$ the IQ index. Use $1$ to index men and $2$ to index women. Let's assume that the following are facts: $$E(v_1) > E(v_2) = \beta E(v_1),\;\; 0< \beta <1, \;\; \rho(v_1,q_1) >0, \;\; \rho(v_2,q_2)>0 \tag{1}$$ Note that while the quoted t...
13,103
Should I use a t-test on highly skewed data? Scientific proof, please?
I wouldn't call 'exponential' particularly highly skew. Its log is distinctly left-skew, for example, and its moment-skewness is only 2. 1) Using the t-test with exponential data and $n$ near 500 is fine: a) The numerator of the test statistic should be fine: If the data are independent exponential with common scale (a...
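A quick simulation along these lines (my own sketch, not part of the original answer) confirms that the test holds its level with exponential data at $n = 500$:

    set.seed(1)
    # 10,000 one-sample t-tests of H0: mu = 1 on Exponential(1) samples, n = 500
    pvals <- replicate(1e4, t.test(rexp(500, rate = 1), mu = 1)$p.value)
    mean(pvals < 0.05)   # rejection rate close to the nominal 0.05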
13,104
Why does the t-distribution become more normal as sample size increases?
I'll try to give an intuitive explanation. The t-statistic* has a numerator and a denominator. For example, the statistic in the one sample t-test is $$\frac{\bar{x}-\mu_0}{s/\sqrt{n}}$$ *(there are several, but this discussion should hopefully be general enough to cover the ones you are asking about) Under the assu...
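One way to see the effect directly is to overlay t densities on the standard normal (a sketch):

    curve(dnorm(x), -4, 4, lwd = 2, ylab = "density")
    for (df in c(2, 5, 30)) curve(dt(x, df), add = TRUE, lty = 2)
    legend("topright", c("N(0,1)", "t with df = 2, 5, 30"),
           lwd = c(2, 1), lty = c(1, 2))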
13,105
Why does the t-distribution become more normal as sample size increases?
@Glen_b gave you the intuition on why the t statistic looks more normal as the sample size increases. Now, I will give you a slightly more technical explanation for the case when you already have the distribution of the statistic. It is well known that the t-statistic is distributed as a Student t distribution with $n-1...
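For completeness, the convergence in question can be written out: the $t$ density with $k$ degrees of freedom tends pointwise to the standard normal density, $$f_k(x)=\frac{\Gamma\left(\frac{k+1}{2}\right)}{\sqrt{k\pi}\,\Gamma\left(\frac{k}{2}\right)}\left(1+\frac{x^2}{k}\right)^{-\frac{k+1}{2}}\;\longrightarrow\;\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\quad\text{as }k\to\infty,$$ since $(1+x^2/k)^{-(k+1)/2}\to e^{-x^2/2}$ and the normalizing constant tends to $1/\sqrt{2\pi}$ by Stirling's approximation.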
13,106
Why does the t-distribution become more normal as sample size increases?
I just wanted to share something that helped my intuition as a beginner (though it's less rigorous than the other answers). If $Z, Z_1, ..., Z_n$ are iid standard normal RVs then the following RV, $$\frac{Z}{\sqrt{\frac{Z_1^2+...+Z_n^2}{n}}}$$ has a t-distribution with $n$ degrees of freedom. As $n$ gets really big, u...
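A one-line check of that intuition (my own sketch): the denominator concentrates at 1 as $n$ grows, so the ratio behaves more and more like the plain $Z$.

    set.seed(1)
    denom <- function(n) sqrt(mean(rnorm(n)^2))   # sqrt((Z1^2 + ... + Zn^2)/n)
    sd(replicate(1e4, denom(5)))     # noticeable spread at n = 5
    sd(replicate(1e4, denom(500)))   # nearly constant at 1 for n = 500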
13,107
How to display error bars for cross-over (paired) experiments
You are totally correct in your assumption that error bars representing the standard error of the mean are inappropriate for within-subject designs. However, the question of overlapping error bars and significance is yet another topic, to which I will come back at the end of this commented reference list. There...
13,108
How to display error bars for cross-over (paired) experiments
The question does not seem to be about error bars so much as about the best ways of plotting paired data. In essence error bars here are at most a way of summarizing uncertainty: they do not, and they necessarily cannot, say much about any fine structure in the data. Parallel coordinate plots -- sometimes called prof...
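For instance, a minimal profile plot of made-up paired data in base R:

    set.seed(1)
    A <- rnorm(20, mean = 10); B <- A + rnorm(20, mean = 1, sd = 0.5)  # fake pairs
    matplot(rbind(A, B), type = "b", pch = 16, lty = 1, col = "grey40",
            xaxt = "n", xlab = "", ylab = "response")  # one line per subject
    axis(1, at = 1:2, labels = c("Treatment A", "Treatment B"))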
13,109
How to display error bars for cross-over (paired) experiments
Try a scatter plot of the individual (A,B) points. Most of them should lie on only one side of the diagonal (the line A = B). There are two analogs of error bars. The conventional one, equivalent to a CI for the mean difference, would be a confidence band for the mean difference. The band would be the region between t...
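A sketch of that display on simulated data, with the A = B diagonal and the paired-t interval for the mean difference:

    set.seed(1)
    A <- rnorm(30, 10); B <- A + rnorm(30, 1, 0.5)        # fake crossover data
    plot(A, B, pch = 16, asp = 1); abline(0, 1, lty = 2)  # points above the line: B > A
    t.test(B, A, paired = TRUE)$conf.int                  # CI for the mean difference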
13,110
How to display error bars for cross-over (paired) experiments
Preliminary summary: Masson/Loftus is very exhaustive, and not easy reading to give to my medical colleagues who would not accept something like an "interaction". They also have some suggestions for multiple comparisons, which show that pairwise confidence intervals are difficult to illustrate when one does not want...
13,111
How to display error bars for cross-over (paired) experiments
Why not just plot the difference* for each patient? You could then use a histogram, a box plot or a normal probability plot and overlay a 95% confidence interval for the difference. In some scenarios it might be the difference of the logarithms. See, for example, Patterson & Jones, "Bioequivalence and Statistics in ...
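In R that might look like this (a sketch, not from the original answer):

    set.seed(1)
    d <- rnorm(30, mean = 1, sd = 0.5)        # per-patient differences B - A (made up)
    hist(d, main = "Within-patient differences", xlab = "B - A")
    abline(v = t.test(d)$conf.int, lty = 2)   # overlay the 95% CI for the mean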
13,112
Crash course in robust mean estimation
Are you looking for the theory, or something practical? If you are looking for books, here are some that I found helpful: F.R. Hampel, E.M. Ronchetti, P.J. Rousseeuw, W.A. Stahel, Robust Statistics: The Approach Based on Influence Functions, John Wiley & Sons, 1986. P.J. Huber, Robust Statistics, John Wiley & Sons, 19...
13,113
Crash course in robust mean estimation
If you like something short and easy to digest, then have a look at the following paper from the psychological literature: Erceg-Hurn, D. M., & Mirosevich, V. M. (2008). Modern robust statistical methods: An easy way to maximize the accuracy and power of your research. American Psychologist, 63(7), 591–601. doi:10.1037...
13,114
Crash course in robust mean estimation
One book that combines theory with practice pretty well is Robust Statistical Methods with R, by Jurečková and Picek. I also like Robust Statistics, by Maronna et al. Both of these may have more math than you'd care for, however. For a more applied tutorial focused on R, this BelVenTutorial pdf may help.
13,115
Generate random numbers following a distribution within an interval
It sounds like you want to simulate from a truncated distribution, and in your specific example, a truncated normal. There are a variety of methods for doing so, some simple, some relatively efficient. I'll illustrate some approaches on your normal example. Here's one very simple method for generating one at a time (...
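The simple one-at-a-time method presumably looks something like this (a sketch; it slows down when the interval has low probability):

    # Rejection sampler for N(mean, sd) truncated to [a, b]
    rtnorm_reject <- function(n, mean = 0, sd = 1, a = -Inf, b = Inf) {
      out <- numeric(n)
      for (i in seq_len(n)) {
        repeat {
          x <- rnorm(1, mean, sd)
          if (x >= a && x <= b) break   # keep only draws inside [a, b]
        }
        out[i] <- x
      }
      out
    }
    range(rtnorm_reject(1000, a = -1, b = 2))   # all draws fall in [-1, 2]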
13,116
Generate random numbers following a distribution within an interval
None of the answers here give an efficient method of generating truncated normal variables that does not involve rejection of arbitrarily large numbers of generated values. If you want to generate values from a truncated normal distribution, with specified lower and upper bounds $a<b$, this can be done ---without rej...
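The answer is cut off above, but the standard rejection-free route is the inverse-CDF transform; a sketch:

    # Map uniforms on [Phi(a), Phi(b)] back through the normal quantile function
    rtnorm_inv <- function(n, mean = 0, sd = 1, a, b) {
      u <- runif(n, pnorm(a, mean, sd), pnorm(b, mean, sd))
      qnorm(u, mean, sd)
    }
    x <- rtnorm_inv(1e5, mean = 0, sd = 1, a = -1, b = 2)
    range(x)   # every draw lies inside [a, b], with no rejections

One caveat: very far out in the tails, pnorm saturates numerically, which is where specialized samplers earn their keep.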
13,117
Generate random numbers following a distribution within an interval
The quick-and-dirty approach is to use the 68-95-99.7 rule. In a normal distribution, 99.7% of values fall within 3 standard deviations of the mean. So, if you set your mean to the middle of your desired minimum and maximum values, and set your standard deviation to 1/3 of the distance from the mean to either limit, you get (mostly) values that f...
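Concretely (a sketch):

    lo <- 0; hi <- 10
    x <- rnorm(1e5, mean = (lo + hi) / 2, sd = (hi - lo) / 6)  # +/- 3 sd spans [lo, hi]
    mean(x < lo | x > hi)   # roughly 0.003 of the draws still escape the interval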
13,118
Generate random numbers following a distribution within an interval
Three ways have worked for me:

- using sample() with rnorm(): sample(x=min:max, replace = TRUE, rnorm(n, mean))
- using the msm package and the rtnorm function: rtnorm(n, mean, lower=min, upper=max)
- using rnorm() and specifying the lower and upper limits, as Hugh has posted above: sample <- rnorm(n, mean=mean); sample <...
13,119
Is there an R equivalent of SAS PROC FREQ?
I use table and prop.table, but CrossTable in the gmodels package might give you results even closer to SAS. See this link. Also, to generate "descriptive statistics for multiple variables at once," you would use the summary function; e.g., summary(mydata).
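For instance, on a built-in data set:

    with(mtcars, table(cyl, gear))                   # counts
    with(mtcars, prop.table(table(cyl, gear)))       # cell proportions
    # gmodels::CrossTable(mtcars$cyl, mtcars$gear)   # SAS-style crosstab
    summary(mtcars)                                  # descriptives for all columns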
13,120
Is there an R equivalent of SAS PROC FREQ?
Summarising data in base R is just a headache. This is one of the areas where SAS works quite well. For R, I recommend the plyr package. In SAS:

    /* tabulate by a and b, with summary stats for x and y in each cell */
    proc summary data=dat nway;
    class a b;
    var x y;
    output out=smry mean(x)=xmean mean(y)=ymean var(y)...
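The R code in the answer is cut off; a plyr sketch of the same summary, reusing the variable names from the SAS snippet, might be:

    library(plyr)
    # tabulate by a and b, with summary stats for x and y in each cell
    smry <- ddply(dat, .(a, b), summarise,
                  xmean = mean(x), ymean = mean(y), yvar = var(y))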
13,121
Is there an R equivalent of SAS PROC FREQ?
I don't use SAS, so I can't comment on whether the following replicate SAS PROC FREQ, but these are two quick strategies for describing variables in a data.frame that I often use:

- describe in Hmisc provides a useful summary of variables including numeric and non-numeric data
- describe in psych provides descriptive stat...
13,122
Is there an R equivalent of SAS PROC FREQ?
You can check out my summarytools package (CRAN link) which includes a codebook-like function, with markdown and html formatting options.

    install.packages("summarytools")
    library(summarytools)
    dfSummary(CO2, style = "grid", plain.ascii = TRUE)

    Dataframe Summary
    CO2
    +------------+---------------+-----------------------...
13,123
Is there an R equivalent of SAS PROC FREQ?
I use the codebook function from {EPICALC}, which gives summary statistics for a numeric variable and a frequency table with level labels and codes for factors. http://cran.r-project.org/doc/contrib/Epicalc_Book.pdf (see p.50) Moreover, this is very useful because it provides the sd for quantitative variables. Enjoy!
13,124
Is there an R equivalent of SAS PROC FREQ?
Thanks for all the suggestions, everyone. I ended up using either table or Rcmdr's numSummary function plus apply:

    apply(dataframe[,c('need_rbcs','need_platelets','need_ffp')], 2, table)

This works pretty well and is not too inconvenient. However, I will definitely give some of these other solutions a try!
13,125
R package for multilevel structural equation modeling?
It seems that OpenMx (based on Mx but it's now an R package) can do what you are looking for: "Multi Level Analysis"
13,126
R package for multilevel structural equation modeling?
You can do multilevel SEM in any package that supports multiple-group analysis, using Muthen's MUML method. You model two groups, the first with the within-covariance matrix and the second with the between-covariance matrix as data. Then you restrict the relevant parameters to be equal across groups (which depends on the...
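As a side note, recent versions of lavaan fit two-level SEMs by full maximum likelihood directly, without the two-group trick; a sketch on its bundled Demo.twolevel data:

    library(lavaan)
    model <- '
      level: 1
        fw =~ y1 + y2 + y3   # within-cluster factor
      level: 2
        fb =~ y1 + y2 + y3   # between-cluster factor
    '
    fit <- sem(model, data = Demo.twolevel, cluster = "cluster")
    summary(fit)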
13,127
R package for multilevel structural equation modeling?
If your model is complicated, I would recommend xxM, a package for R by Paras Mehta. http://xxm.times.uh.edu/ Mehta, P. D. (2013). n-level structural equation modeling. In Y. Petscher, C. Schatschneider & D. L. Compton (Eds.), Applied quantitative analysis in the social sciences (pp. 329-362). New York: Routledge.
13,128
R package for multilevel structural equation modeling?
Regarding the ability to pull this off in any SEM program: yes, you don't always need specialized SEM software, but you might have a hell of a data wrangling job if you don't use SEM software that is specialized for this task. FYI: I don't find OpenMx to be intuitive. Here's a reference for pulling this off in m...
13,129
R package for multilevel structural equation modeling?
Try searching for "structural equation modeling" on http://rseek.org. You'll find several helpful links, including links to several possible packages. You might also check out the Task View for the social sciences; there's a section for structural equation modeling maybe a third of the way down. See http://cran.r-pr...
13,130
R package for multilevel structural equation modeling?
This post is old, but I thought I'd link the question I posted with the solution. It provides a description of how OpenMx can be used for fitting multilevel SEM.
13,131
Two stage models: Difference between Heckman models (to deal with sample selection) and Instrumental variables (to deal with endogeneity)
To answer your first question, you are correct that sample selection is a specific form of endogeneity (see Antonakis et al. 2010 for a good basic review of endogeneity and common remedies); however, you are not correct in saying that the likelihood of being treated is the endogenous variable, as it is the treatment var...
13,132
Two stage models: Difference between Heckman models (to deal with sample selection) and Instrumental variables (to deal with endogeneity)
One should make a distinction between the specific Heckman sample selection model (where only one sample is observed) and Heckman-type corrections for self-selection, which can also work for the case where the two samples are observed. The latter is referred to as the control function approach, and amounts to including into ...
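In R, the sampleSelection package implements both routes; a sketch of the classic two-step on its bundled Mroz87 data (mirroring the package's documented example):

    library(sampleSelection)
    data(Mroz87)
    Mroz87$kids <- Mroz87$kids5 + Mroz87$kids618 > 0   # any children at home
    fit <- heckit(selection = lfp ~ age + I(age^2) + faminc + kids + educ,
                  outcome   = wage ~ exper + I(exper^2) + educ + city,
                  data = Mroz87)
    summary(fit)   # the inverse Mills ratio term indicates selection effects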
13,133
Two stage models: Difference between Heckman models (to deal with sample selection) and Instrumental variables (to deal with endogeneity)
From Heckman, Urzua and Vytlacil (2006): Example of selection bias: Consider the effects of a policy on the outcome of a country (e.g. GDP). If the countries that would have done well in terms of the unobservable even in the absence of the policy are the ones that adopt the policy, then the OLS estimates are biased. Tw...
13,134
hinge loss vs logistic loss advantages and disadvantages/limitations
Logarithmic loss minimization leads to well-behaved probabilistic outputs. Hinge loss leads to some (not guaranteed) sparsity on the dual, but it doesn't help at probability estimation. Instead, it punishes misclassifications (that's why it's so useful to determine margins): diminishing hinge-loss comes with diminishin...
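Plotting both losses against the margin makes the two points visible: hinge loss is exactly zero past the margin (hence the sparsity), while logistic loss never vanishes (which is what supports probability estimation). A sketch:

    z <- seq(-3, 3, length.out = 200)       # margin z = y * f(x)
    hinge    <- pmax(0, 1 - z)
    logistic <- log(1 + exp(-z)) / log(2)   # rescaled to pass through (0, 1)
    zero_one <- as.numeric(z < 0)
    matplot(z, cbind(zero_one, hinge, logistic), type = "l", lty = 1,
            xlab = "margin y * f(x)", ylab = "loss")
    legend("topright", c("0-1", "hinge", "logistic"), col = 1:3, lty = 1)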
13,135
hinge loss vs logistic loss advantages and disadvantages/limitations
@Firebug had a good answer (+1). In fact, I had a similar question here: What are the impacts of choosing different loss functions in classification to approximate 0-1 loss. I just want to add another big advantage of logistic loss: probabilistic interpretation. An example can be found here. Specifically, logis...
13,136
hinge loss vs logistic loss advantages and disadvantages/limitations
Since @hxd1011 added an advantage of cross entropy, I'll add one drawback of it. Cross-entropy error is one of many distance measures between probability distributions, but one drawback is that distributions with long tails can be modeled poorly, with too much weight given to the unlikely events.
13,137
What is the proper association measure of a variable with a PCA component (on a biplot / loading plot)?
Explanation of a loading plot in PCA or factor analysis. A loading plot shows variables as points in the space of principal components (or factors). The coordinates of the variables are, usually, the loadings. (If you properly combine a loading plot with the corresponding scatterplot of data cases in the same component space,...
13,138
Can the MIC algorithm for detecting non-linear correlations be explained intuitively?
Is it not telling that this was published in a non-statistical journal whose statistical peer review we are unsure of? This problem was solved by Hoeffding in 1948 (Annals of Mathematical Statistics 19:546), who developed a straightforward algorithm requiring neither binning nor multiple steps. Hoeffding's work was not eve...
13,139
Can the MIC algorithm for detecting non-linear correlations be explained intuitively?
The MIC method is based on mutual information (MI), which quantifies the dependence between the joint distribution of $X$ and $Y$ and what the joint distribution would be if $X$ and $Y$ were independent (see, e.g., the Wikipedia entry). Mathematically, MI is defined as $$MI=H(X)+H(Y)-H(X,Y)$$ where $$H(X)=-\sum_i p(z_i...
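The entropy formula is cut off above, but the definition is easy to turn into a plug-in estimate on binned data (a sketch with made-up nonlinear data):

    set.seed(1)
    x <- rnorm(5000); y <- x^2 + rnorm(5000, sd = 0.5)   # dependent, yet cor(x, y) near 0
    pxy <- prop.table(table(cut(x, 10), cut(y, 10)))     # binned joint distribution
    H <- function(p) { p <- p[p > 0]; -sum(p * log(p)) } # Shannon entropy
    H(rowSums(pxy)) + H(colSums(pxy)) - H(pxy)           # MI = H(X) + H(Y) - H(X,Y) > 0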
13,140
Can the MIC algorithm for detecting non-linear correlations be explained intuitively?
I found two good articles explaining the idea of MIC more clearly: the blog post "large-scale data exploration, MIC-style" and Gelman's blog post "Mr. Pearson, meet Mr. Mandelbrot: Detecting Novel Associations in Large Data Sets". What I understood from these reads is that you can zoom in to different comp...
13,141
Why uppercase for $X$ and lowercase for $y$?
The question about why $X$ and $y$ are popular choices in mathematical notation has been answered on the History of Science and Mathematics SE website: Why are X and Y commonly used as mathematical placeholders? (In short: because Descartes said so!) In terms of linear algebra, it is extremely common to use capital Latin ...
13,142
Why uppercase for $X$ and lowercase for $y$?
Before you collect any data values on the feature and target variables, these variables can be considered to be random variables provided a random mechanism will be used to select the subjects who will generate these values. In that case, the correct notation for these variables is Y and X (i.e., upper case letters for...
13,143
Why uppercase for $X$ and lowercase for $y$?
To understand when to use lowercase or uppercase, we need to know what is represented in X_train or X_test: a capital letter X represents a 2-D matrix, while for y_train and y_test a small letter y represents a 1-D vector. Mathematically, it is common notation in linear algebra to use uppercase Latin ...
13,144
Does this discrete distribution have a name?
You have a discretized version of the negative log distribution, that is, the distribution whose support is $[0, 1]$ and whose pdf is $f(t) = - \log t$. To see this, I'm going to redefine your random variable to take values in the set $\{ 0, 1/N, 2/N, \ldots, 1 \}$ instead of $\{0, 1, 2, \ldots, N \}$ and call the resu...
13,145
Does this discrete distribution have a name?
This appears to be related to the Whitworth distribution. (I don't believe it is the Whitworth distribution, since if I remember right, that's the distribution of a set of ordered values, but it seems to be connected to it, and relies on the same summation-scheme.) There's some discussion of the Whitworth (and numerous...
13,146
Is KNN a discriminative learning algorithm?
KNN is a discriminative algorithm since it models the conditional probability of a sample belonging to a given class. To see this just consider how one gets to the decision rule of kNNs. A class label corresponds to a set of points which belong to some region in the feature space $R$. If you draw sample points from the...
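That conditional-probability reading is visible in class::knn, whose vote fractions are exactly the $k_c/k$ estimates of $p(c \mid x)$ (a sketch):

    library(class)
    train_idx <- c(1:40, 51:90, 101:140)   # 40 flowers of each species
    pred <- knn(train = iris[train_idx, 1:4],
                test  = iris[-train_idx, 1:4],
                cl    = iris$Species[train_idx], k = 15, prob = TRUE)
    head(attr(pred, "prob"))   # fraction of the k neighbours backing the winner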
13,147
Is KNN a discriminative learning algorithm?
The answer by @jpmuc doesn't seem to be accurate. Generative models model the underlying distribution $P(x \mid C_i)$ and then use Bayes' theorem to find the posterior probabilities. That is exactly what is shown in that answer, which then concludes the exact opposite. :O For KNN to be a generative model, we should be able...
Is KNN a discriminative learning algorithm?
Answer by @jpmuc doesn't seem to be accurate. Generative models model the underlying distribution P(x/Ci) and then later use Bayes theorem to find the posterior probabilities. That is exactly what has
Is KNN a discriminative learning algorithm? Answer by @jpmuc doesn't seem to be accurate. Generative models model the underlying distribution P(x/Ci) and then later use Bayes theorem to find the posterior probabilities. That is exactly what has been shown in that answer and then concludes the exact opposite. :O For KNN...
Is KNN a discriminative learning algorithm? Answer by @jpmuc doesn't seem to be accurate. Generative models model the underlying distribution P(x/Ci) and then later use Bayes theorem to find the posterior probabilities. That is exactly what has
13,148
Is KNN a discriminative learning algorithm?
I have come across a book which says the opposite (i.e. a generative nonparametric classification model). This is the online link: Machine Learning: A Probabilistic Perspective by Murphy, Kevin P. (2012). Here is the excerpt from the book:
Is KNN a discriminative learning algorithm?
I have come across a book which says the opposite (i.e. a generative nonparametric classification model). This is the online link: Machine Learning: A Probabilistic Perspective by Murphy, Kevin P. (201
Is KNN a discriminative learning algorithm? I have come across a book which says the opposite (i.e. a generative nonparametric classification model). This is the online link: Machine Learning: A Probabilistic Perspective by Murphy, Kevin P. (2012). Here is the excerpt from the book:
Is KNN a discriminative learning algorithm? I have come across a book which says the opposite (i.e. a generative nonparametric classification model). This is the online link: Machine Learning: A Probabilistic Perspective by Murphy, Kevin P. (201
13,149
Is KNN a discriminative learning algorithm?
I agree that kNN is discriminative. The reason is that it does not explicitly store or try to learn a (probabilistic) model that explains the data (as opposed to, e.g., Naive Bayes). The answer by juampa confuses me since, to my understanding, a generative classifier is one that attempts to explain how the data is ge...
Is KNN a discriminative learning algorithm?
I agree that kNN is discriminative. The reason is that it does not explicitly store or try to learn a (probabilistic) model that explains the data (as opposed to, e.g., Naive Bayes). The answer by j
Is KNN a discriminative learning algorithm? I agree that kNN is discriminative. The reason is that it does not explicitly store or try to learn a (probabilistic) model that explains the data (as opposed to, e.g., Naive Bayes). The answer by juampa confuses me since, to my understanding, a generative classifier is one...
Is KNN a discriminative learning algorithm? I agree that kNN is discriminative. The reason is that it does not explicitly store or try to learn a (probabilistic) model that explains the data (as opposed to, e.g., Naive Bayes). The answer by j
13,150
Does adding more variables into a multivariable regression change coefficients of existing variables?
A parameter estimate in a regression model (e.g., $\hat\beta_i$) will change if a variable, $X_j$, is added to the model that is: correlated with that parameter's corresponding variable, $X_i$ (which was already in the model), and correlated with the response variable, $Y$ An estimated beta will not change when ...
Does adding more variables into a multivariable regression change coefficients of existing variables
A parameter estimate in a regression model (e.g., $\hat\beta_i$) will change if a variable, $X_j$, is added to the model that is: correlated with that parameter's corresponding variable, $X_i$ (whi
Does adding more variables into a multivariable regression change coefficients of existing variables? A parameter estimate in a regression model (e.g., $\hat\beta_i$) will change if a variable, $X_j$, is added to the model that is: correlated with that parameter's corresponding variable, $X_i$ (which was already in ...
Does adding more variables into a multivariable regression change coefficients of existing variables A parameter estimate in a regression model (e.g., $\hat\beta_i$) will change if a variable, $X_j$, is added to the model that is: correlated with that parameter's corresponding variable, $X_i$ (whi
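A small R illustration of both conditions (my own simulated-data sketch, not from the answer): adding a variable correlated with both $X_1$ and $Y$ shifts $\hat\beta_1$, while adding pure noise barely moves it.
set.seed(7)
n  <- 500
x1 <- rnorm(n)
x2 <- 0.6 * x1 + rnorm(n)            # correlated with x1 and (below) with y
z  <- rnorm(n)                       # unrelated to both x1 and y
y  <- 1 + 2 * x1 + 1.5 * x2 + rnorm(n)
coef(lm(y ~ x1))["x1"]               # omits x2: absorbs part of its effect
coef(lm(y ~ x1 + x2))["x1"]          # shifts toward the true value 2
coef(lm(y ~ x1 + z))["x1"]           # irrelevant variable: almost no change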
13,151
Does adding more variables into a multivariable regression change coefficients of existing variables?
It is mathematically possible that the coefficients will not change, but it is unlikely that there will be no change at all with real data, even if all the independent variables are independent of each other. But, when this is the case, the changes (other than in the intercept) will tend to 0: set.seed(129231) x1 <- rn...
Does adding more variables into a multivariable regression change coefficients of existing variables
It is mathematically possible that the coefficients will not change, but it is unlikely that there will be no change at all with real data, even if all the independent variables are independent of eac
Does adding more variables into a multivariable regression change coefficients of existing variables? It is mathematically possible that the coefficients will not change, but it is unlikely that there will be no change at all with real data, even if all the independent variables are independent of each other. But, when...
Does adding more variables into a multivariable regression change coefficients of existing variables It is mathematically possible that the coefficients will not change, but it is unlikely that there will be no change at all with real data, even if all the independent variables are independent of eac
13,152
Does adding more variables into a multivariable regression change coefficients of existing variables?
Generally speaking, yes, adding a variable changes the earlier coefficients, almost always. Indeed, this is essentially the cause of Simpson's paradox, where relations between effects can change, even reverse sign, because of omitted covariates. For that not to happen, we'd need that the new variables were orthogonal t...
Does adding more variables into a multivariable regression change coefficients of existing variables
Generally speaking, yes, adding a variable changes the earlier coefficients, almost always. Indeed, this is essentially the cause of Simpson's paradox, where relations between effects can change, even
Does adding more variables into a multivariable regression change coefficients of existing variables? Generally speaking, yes, adding a variable changes the earlier coefficients, almost always. Indeed, this is essentially the cause of Simpson's paradox, where relations between effects can change, even reverse sign, bec...
Does adding more variables into a multivariable regression change coefficients of existing variables Generally speaking, yes, adding a variable changes the earlier coefficients, almost always. Indeed, this is essentially the cause of Simpson's paradox, where relations between effects can change, even
13,153
How does a Poisson distribution work when modeling continuous data and does it result in information loss?
I've been estimating continuous positive outcome Poisson regressions with the Huber/White/Sandwich linearized estimator of variance fairly frequently. However, that's not a particularly good reason to do anything, so here are some actual references. From the theory side, $y$ does not need to be an integer for the e...
How does a Poisson distribution work when modeling continuous data and does it result in information
I've been estimating continuous positive outcome Poisson regressions with the Huber/White/Sandwich linearized estimator of variance fairly frequently. However, that's not a particularly good reason to
How does a Poisson distribution work when modeling continuous data and does it result in information loss? I've been estimating continuous positive outcome Poisson regressions with the Huber/White/Sandwich linearized estimator of variance fairly frequently. However, that's not a particularly good reason to do anything,...
How does a Poisson distribution work when modeling continuous data and does it result in information I've been estimating continuous positive outcome Poisson regressions with the Huber/White/Sandwich linearized estimator of variance fairly frequently. However, that's not a particularly good reason to
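A sketch of the kind of fit described (simulated continuous outcome; assumes the sandwich and lmtest packages are available): a log-link Poisson-type GLM with Huber/White standard errors.
library(sandwich)    # vcovHC(): sandwich variance estimator
library(lmtest)      # coeftest(): inference with a supplied vcov
set.seed(1)
n <- 300
x <- rnorm(n)
y <- rgamma(n, shape = 2, rate = 2 / exp(0.5 + 0.3 * x))   # continuous, positive
fit <- glm(y ~ x, family = quasipoisson(link = "log"))     # accepts non-integer y
coeftest(fit, vcov = vcovHC(fit, type = "HC0"))            # robust (sandwich) SEs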
13,154
How does a Poisson distribution work when modeling continuous data and does it result in information loss?
The Poisson distribution is for count data only; feeding it continuous data is nasty and I believe should not be done. One of the reasons is that you don't know how to scale your continuous variable. And the Poisson depends very much on the scale! I tried to explain it with a simple example here. So for this re...
How does a Poisson distribution work when modeling continuous data and does it result in information
The Poisson distribution is for count data only; feeding it continuous data is nasty and I believe should not be done. One of the reasons is that you don't know how to scale your continuous va
How does a Poisson distribution work when modeling continuous data and does it result in information loss? The Poisson distribution is for count data only; feeding it continuous data is nasty and I believe should not be done. One of the reasons is that you don't know how to scale your continuous variable. And t...
How does a Poisson distribution work when modeling continuous data and does it result in information The Poisson distribution is for count data only; feeding it continuous data is nasty and I believe should not be done. One of the reasons is that you don't know how to scale your continuous va
13,155
How does a Poisson distribution work when modeling continuous data and does it result in information loss?
Here's another great discussion of how to use the Poisson model to fit the log-regressions: http://blog.stata.com/2011/08/22/use-poisson-rather-than-regress-tell-a-friend/ (I am telling a friend, just as the blog entry suggests). The basic thrust is that we only use the part of the Poisson model that is the log link. T...
How does a Poisson distribution work when modeling continuous data and does it result in information
Here's another great discussion of how to use the Poisson model to fit the log-regressions: http://blog.stata.com/2011/08/22/use-poisson-rather-than-regress-tell-a-friend/ (I am telling a friend, just
How does a Poisson distribution work when modeling continuous data and does it result in information loss? Here's another great discussion of how to use the Poisson model to fit the log-regressions: http://blog.stata.com/2011/08/22/use-poisson-rather-than-regress-tell-a-friend/ (I am telling a friend, just as the blog ...
How does a Poisson distribution work when modeling continuous data and does it result in information Here's another great discussion of how to use the Poisson model to fit the log-regressions: http://blog.stata.com/2011/08/22/use-poisson-rather-than-regress-tell-a-friend/ (I am telling a friend, just
13,156
How does a Poisson distribution work when modeling continuous data and does it result in information loss?
If the problem is the variance scaling with the mean, but you have continuous data, have you thought about using continuous distributions that can accommodate the issues you're having? Perhaps a Gamma? The variance will have a quadratic relationship with the mean - much like a negative binomial, actually.
How does a Poisson distribution work when modeling continuous data and does it result in information
If the problem is the variance scaling with the mean, but you have continuous data, have you thought about using continuous distributions that can accommodate the issues you're having? Perhaps a Gamma
How does a Poisson distribution work when modeling continuous data and does it result in information loss? If the problem is the variance scaling with the mean, but you have continuous data, have you thought about using continuous distributions that can accommodate the issues you're having? Perhaps a Gamma? The varian...
How does a Poisson distribution work when modeling continuous data and does it result in information If the problem is the variance scaling with the mean, but you have continuous data, have you thought about using continuous distributions that can accommodate the issues you're having? Perhaps a Gamma
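A sketch of that suggestion (my own simulated data): a Gamma GLM with a log link, whose variance is quadratic in the mean.
set.seed(2)
n  <- 300
x  <- rnorm(n)
mu <- exp(1 + 0.5 * x)
y  <- rgamma(n, shape = 3, rate = 3 / mu)       # Var(y) = mu^2 / 3: quadratic in the mean
summary(glm(y ~ x, family = Gamma(link = "log")))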
13,157
How to test the statistical significance for categorical variable in linear regression?
You are correct that those $p$-values only tell you whether each level's mean is significantly different from the reference level's mean. Therefore, they only tell you about the pairwise differences between the levels. To test whether the categorical predictor, as a whole, is significant is equivalent to testing whethe...
How to test the statistical significance for categorical variable in linear regression?
You are correct that those $p$-values only tell you whether each level's mean is significantly different from the reference level's mean. Therefore, they only tell you about the pairwise differences b
How to test the statistical significance for categorical variable in linear regression? You are correct that those $p$-values only tell you whether each level's mean is significantly different from the reference level's mean. Therefore, they only tell you about the pairwise differences between the levels. To test wheth...
How to test the statistical significance for categorical variable in linear regression? You are correct that those $p$-values only tell you whether each level's mean is significantly different from the reference level's mean. Therefore, they only tell you about the pairwise differences b
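A minimal R sketch of the whole-factor test (simulated data, not from the answer): compare the models with and without the categorical predictor via an F-test.
set.seed(3)
g  <- factor(sample(c("a", "b", "c", "d"), 120, replace = TRUE))
mu <- c(a = 0, b = 0.5, c = 0.2, d = 1)
y  <- mu[as.character(g)] + rnorm(120)
anova(lm(y ~ 1), lm(y ~ g))     # F-test: are all level coefficients jointly zero?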
13,158
Narrow confidence interval -- higher accuracy?
The 95% is not numerically attached to how confident you are that you've covered the true effect in your experiment. Perhaps recognizing that 95% is attached to the procedure that produced the interval, and not the interval itself, would help. Part of the procedure is that you decide that the interval contains the true...
Narrow confidence interval -- higher accuracy?
The 95% is not numerically attached to how confident you are that you've covered the true effect in your experiment. Perhaps recognizing that 95% is attached to the procedure that produced the interva
Narrow confidence interval -- higher accuracy? The 95% is not numerically attached to how confident you are that you've covered the true effect in your experiment. Perhaps recognizing that 95% is attached to the procedure that produced the interval, and not the interval itself, would help. Part of the procedure is that...
Narrow confidence interval -- higher accuracy? The 95% is not numerically attached to how confident you are that you've covered the true effect in your experiment. Perhaps recognizing that 95% is attached to the procedure that produced the interva
13,159
Narrow confidence interval -- higher accuracy?
For a given dataset, increasing the confidence level of a confidence interval will only result in larger intervals (or at least not smaller). That's not about accuracy or precision but rather about how much risk you're willing to take about missing the true value. If you're comparing confidence intervals for the same ...
Narrow confidence interval -- higher accuracy?
For a given dataset, increasing the confidence level of a confidence interval will only result in larger intervals (or at least not smaller). That's not about accuracy or precision but rather about h
Narrow confidence interval -- higher accuracy? For a given dataset, increasing the confidence level of a confidence interval will only result in larger intervals (or at least not smaller). That's not about accuracy or precision but rather about how much risk you're willing to take about missing the true value. If you'...
Narrow confidence interval -- higher accuracy? For a given dataset, increasing the confidence level of a confidence interval will only result in larger intervals (or at least not smaller). That's not about accuracy or precision but rather about h
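A one-line illustration in R of the same-data comparison (simulated sample): raising the confidence level can only widen the interval.
set.seed(4)
x <- rnorm(50, mean = 10, sd = 2)
t.test(x, conf.level = 0.90)$conf.int   # narrower
t.test(x, conf.level = 0.99)$conf.int   # wider: same data, less risk of missing the mean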
13,160
Narrow confidence interval -- higher accuracy?
First of all, a CI for a given confidence percentage (e.g. 95%) means, for all practical purposes (though technically it is not correct), that you are confident that the true value is in the interval. If this interval is "narrow" (note that this can only be regarded in a relative fashion, so, for comparison with what ...
Narrow confidence interval -- higher accuracy?
First of all, a CI for a given confidence percentage (e.g. 95%) means, for all practical purposes (though technically it is not correct), that you are confident that the true value is in the interval. I
Narrow confidence interval -- higher accuracy? First of all, a CI for a given confidence percentage (e.g. 95%) means, for all practical purposes (though technically it is not correct), that you are confident that the true value is in the interval. If this interval is "narrow" (note that this can only be regarded in a ...
Narrow confidence interval -- higher accuracy? First of all, a CI for a given confidence percentage (e.g. 95%) means, for all practical purposes (though technically it is not correct), that you are confident that the true value is in the interval. I
13,161
Narrow confidence interval -- higher accuracy?
I am adding to some good answers here that I gave upvotes to. I think there is a little more that should be said to completely clear up the conclusion. I like the terms accurate and correct as Efron defines them. I gave a lengthy discussion on this very recently on a different question. The moderator whuber really like...
Narrow confidence interval -- higher accuracy?
I am adding to some good answers here that I gave upvotes to. I think there is a little more that should be said to completely clear up the conclusion. I like the terms accurate and correct as Efron d
Narrow confidence interval -- higher accuracy? I am adding to some good answers here that I gave upvotes to. I think there is a little more that should be said to completely clear up the conclusion. I like the terms accurate and correct as Efron defines them. I gave a lengthy discussion on this very recently on a diffe...
Narrow confidence interval -- higher accuracy? I am adding to some good answers here that I gave upvotes to. I think there is a little more that should be said to completely clear up the conclusion. I like the terms accurate and correct as Efron d
13,162
Why is step function not used in activation functions in machine learning?
There are two main reasons why we cannot use the Heaviside step function in (deep) neural nets: At the moment, one of the most efficient ways to train a multi-layer neural network is by using gradient descent with backpropagation. A requirement for the backpropagation algorithm is a differentiable activation function. Howe...
Why is step function not used in activation functions in machine learning?
There are two main reasons why we cannot use the Heaviside step function in (deep) neural nets: At the moment, one of the most efficient ways to train a multi-layer neural network is by using gradient
Why is step function not used in activation functions in machine learning? There are two main reasons why we cannot use the Heaviside step function in (deep) neural nets: At the moment, one of the most efficient ways to train a multi-layer neural network is by using gradient descent with backpropagation. A requirement ...
Why is step function not used in activation functions in machine learning? There are two main reasons why we cannot use the Heaviside step function in (deep) neural nets: At the moment, one of the most efficient ways to train a multi-layer neural network is by using gradient
13,163
Why is step function not used in activation functions in machine learning?
As answered by the others, the primary reason is that it would not work well during backpropagation. However, adding to what the others wrote, it is important to note that differentiability everywhere is not a necessary condition for backpropagation in neural networks, as one may use subderivatives as well. For example...
Why is step function not used in activation functions in machine learning?
As answered by the others, the primary reason is that it would not work well during backpropagation. However, adding to what the others wrote, it is important to note that differentiability everywhere
Why is step function not used in activation functions in machine learning? As answered by the others, the primary reason is that it would not work well during backpropagation. However, adding to what the others wrote, it is important to note that differentiability everywhere is not a necessary condition for backpropaga...
Why is step function not used in activation functions in machine learning? As answered by the others, the primary reason is that it would not work well during backpropagation. However, adding to what the others wrote, it is important to note that differentiability everywhere
13,164
Why is step function not used in activation functions in machine learning?
Why isn't step function used? What is bad about using a step function in an activation function for neural networks? I assume you mean the Heaviside step function $$ H(x)= \begin{cases} 1 & x \ge 0 \\ 0 & x< 0 \end{cases}. $$ The key feature of $H$ is not that the gradients are sometimes zero, it's that the gradients ...
Why is step function not used in activation functions in machine learning?
Why isn't step function used? What is bad about using a step function in an activation function for neural networks? I assume you mean the Heaviside step function $$ H(x)= \begin{cases} 1 & x \ge 0 \
Why is step function not used in activation functions in machine learning? Why isn't step function used? What is bad about using a step function in an activation function for neural networks? I assume you mean the Heaviside step function $$ H(x)= \begin{cases} 1 & x \ge 0 \\ 0 & x< 0 \end{cases}. $$ The key feature of...
Why is step function not used in activation functions in machine learning? Why isn't step function used? What is bad about using a step function in an activation function for neural networks? I assume you mean the Heaviside step function $$ H(x)= \begin{cases} 1 & x \ge 0 \
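A numeric sketch of why $H$ starves gradient descent (my own illustration, not from the answer): its numerical derivative is zero everywhere except at the single jump, while a sigmoid supplies a usable gradient everywhere.
heaviside <- function(x) as.numeric(x >= 0)
sigmoid   <- function(x) 1 / (1 + exp(-x))
x <- seq(-3, 3, by = 0.01)
d_step <- diff(heaviside(x)) / diff(x)   # zero except at the jump at x = 0
d_sig  <- diff(sigmoid(x))   / diff(x)   # smooth and strictly positive
range(d_step); range(d_sig)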
13,165
Why is step function not used in activation functions in machine learning?
Since we differentiate the activation function in the backpropagation process to find optimal weight values, we need an activation function that is suitable for differentiation. There are mainly 2 types of activation functions: linear functions and non-linear functions. Linear functions: 1. Identity function: f(x) = x, f'(x...
Why is step function not used in activation functions in machine learning?
Since we differentiate the activation function in the backpropagation process to find optimal weight values, we need an activation function that is suitable for differentiation. There are mainly 2 ty
Why is step function not used in activation functions in machine learning? Since we differentiate the activation function in the backpropagation process to find optimal weight values, we need an activation function that is suitable for differentiation. There are mainly 2 types of activation function...
Why is step function not used in activation functions in machine learning? Since we differentiate the activation function in the backpropagation process to find optimal weight values, we need an activation function that is suitable for differentiation. There are mainly 2 ty
13,166
If multiple comparisons are "planned", do you still need to correct for multiple comparisons?
This is IMHO a complex issue and I would like to make three comments about this situation. First and generally, I would focus more on whether you face a confirmatory study with a set of well-shaped hypotheses defined in an argumentative context, or an explanatory study in which many likely indicators are observed, than wh...
If multiple comparisons are "planned", do you still need to correct for multiple comparisons?
This is IMHO a complex issue and I would like to make three comments about this situation. First and generally, I would focus more on whether you face a confirmatory study with a set of well-shaped hy
If multiple comparisons are "planned", do you still need to correct for multiple comparisons? This is IMHO a complex issue and I would like to make three comments about this situation. First and generally, I would focus more on whether you face a confirmatory study with a set of well-shaped hypotheses defined in an argu...
If multiple comparisons are "planned", do you still need to correct for multiple comparisons? This is IMHO a complex issue and I would like to make three comments about this situation. First and generally, I would focus more on whether you face a confirmatory study with a set of well-shaped hy
13,167
If multiple comparisons are "planned", do you still need to correct for multiple comparisons?
If you substitute the word 'premeditated' for 'planned', this may help dispel the argument offered by the authors. Consider two different statistical analyses of the same data: A 'premeditated crime' in which every possible hypothesis test is laid out combinatorially in advance by a 'statistical criminal mastermind', ...
If multiple comparisons are "planned", do you still need to correct for multiple comparisons?
If you substitute the word 'premeditated' for 'planned', this may help dispel the argument offered by the authors. Consider two different statistical analyses of the same data: A 'premeditated crime'
If multiple comparisons are "planned", do you still need to correct for multiple comparisons? If you substitute the word 'premeditated' for 'planned', this may help dispel the argument offered by the authors. Consider two different statistical analyses of the same data: A 'premeditated crime' in which every possible h...
If multiple comparisons are "planned", do you still need to correct for multiple comparisons? If you substitute the word 'premeditated' for 'planned', this may help dispel the argument offered by the authors. Consider two different statistical analyses of the same data: A 'premeditated crime'
13,168
If multiple comparisons are "planned", do you still need to correct for multiple comparisons?
Given your update on the design I would suggest that they do some form of log-linear model to use all of the data at once. Doing the piecemeal analyses they have done seems (a) inefficient and (b) unscientific, as it tests 15 hypotheses where surely there are fewer real hypotheses. I am not a fan of correcting for multipli...
If multiple comparisons are "planned", do you still need to correct for multiple comparisons?
Given your update on the design I would suggest that they do some form of log-linear model to use all of the data at once. Doing the piecemeal analyses they have done seems (a) inefficient and (b) unscie
If multiple comparisons are "planned", do you still need to correct for multiple comparisons? Given your update on the design I would suggest that they do some form of log-linear model to use all of the data at once. Doing the piecemeal analyses they have done seems (a) inefficient and (b) unscientific, as it tests 15 hypo...
If multiple comparisons are "planned", do you still need to correct for multiple comparisons? Given your update on the design I would suggest that they do some form of log-linear model to use all of the data at once. Doing the piecemeal analyses they have done seems (a) inefficient and (b) unscie
13,169
If multiple comparisons are "planned", do you still need to correct for multiple comparisons?
This paper directly addresses your question: http://jrp.icaap.org/index.php/jrp/article/view/514/417 (Frane, A.V., "Planned Hypothesis Tests Are Not Necessarily Exempt From Multiplicity Adjustment", Journal of Research Practice, 2015)
If multiple comparisons are "planned", do you still need to correct for multiple comparisons?
This paper directly addresses your question: http://jrp.icaap.org/index.php/jrp/article/view/514/417 (Frane, A.V., "Planned Hypothesis Tests Are Not Necessarily Exempt From Multiplicity Adjustment", J
If multiple comparisons are "planned", do you still need to correct for multiple comparisons? This paper directly addresses your question: http://jrp.icaap.org/index.php/jrp/article/view/514/417 (Frane, A.V., "Planned Hypothesis Tests Are Not Necessarily Exempt From Multiplicity Adjustment", Journal of Research Practic...
If multiple comparisons are "planned", do you still need to correct for multiple comparisons? This paper directly addresses your question: http://jrp.icaap.org/index.php/jrp/article/view/514/417 (Frane, A.V., "Planned Hypothesis Tests Are Not Necessarily Exempt From Multiplicity Adjustment", J
13,170
From a statistical perspective: Fourier transform vs regression with Fourier basis
They're the same. Here's how... Doing a Regression Say you fit the model $$ y_t = \sum_{j=1}^n A_j \cos(2 \pi t [j/N] + \phi_j) $$ where $t=1,\ldots,N$ and $n = \text{floor}(N/2)$. This isn't suitable for linear regression, though, so instead you use some trigonometry ( $\cos(a + b) = \cos(a)\cos(b) - \sin(a)\sin(b)$)...
From a statistical perspective: Fourier transform vs regression with Fourier basis
They're the same. Here's how... Doing a Regression Say you fit the model $$ y_t = \sum_{j=1}^n A_j \cos(2 \pi t [j/N] + \phi_j) $$ where $t=1,\ldots,N$ and $n = \text{floor}(N/2)$. This isn't suitabl
From a statistical perspective: Fourier transform vs regression with Fourier basis They're the same. Here's how... Doing a Regression Say you fit the model $$ y_t = \sum_{j=1}^n A_j \cos(2 \pi t [j/N] + \phi_j) $$ where $t=1,\ldots,N$ and $n = \text{floor}(N/2)$. This isn't suitable for linear regression, though, so i...
From a statistical perspective: Fourier transform vs regression with Fourier basis They're the same. Here's how... Doing a Regression Say you fit the model $$ y_t = \sum_{j=1}^n A_j \cos(2 \pi t [j/N] + \phi_j) $$ where $t=1,\ldots,N$ and $n = \text{floor}(N/2)$. This isn't suitabl
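A sketch of the equivalence in R (my own simulated series; indexing per R's fft(), whose first element is the DC term): the OLS sine/cosine coefficients at frequency j/N match the DFT bin up to a 2/N scaling.
set.seed(5)
N  <- 32
tt <- 0:(N - 1)
j  <- 3                                        # frequency index to inspect
y  <- 2 * cos(2 * pi * tt * j / N + 1) + rnorm(N, sd = 0.1)
fit <- lm(y ~ cos(2 * pi * tt * j / N) + sin(2 * pi * tt * j / N))
coef(fit)[2:3]                                 # OLS cosine/sine amplitudes
ft <- fft(y)[j + 1]                            # DFT bin j
c(2 * Re(ft) / N, -2 * Im(ft) / N)             # same values, up to the 2/N scaling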
13,171
From a statistical perspective: Fourier transform vs regression with Fourier basis
They are strongly related. Your example is not reproducible because you didn't include your data, thus I'll make a new one. First of all, let's create a periodic function: T <- 10 omega <- 2*pi/T N <- 21 x <- seq(0, T, len = N) sum_sines_cosines <- function(x, omega){ sin(omega*x)+2*cos(2*omega*x)+3*sin(4*omega*x)+...
From a statistical perspective: Fourier transform vs regression with Fourier basis
They are strongly related. Your example is not reproducible because you didn't include your data, thus I'll make a new one. First of all, let's create a periodic function: T <- 10 omega <- 2*pi/T N <-
From a statistical perspective: Fourier transform vs regression with Fourier basis They are strongly related. Your example is not reproducible because you didn't include your data, thus I'll make a new one. First of all, let's create a periodic function: T <- 10 omega <- 2*pi/T N <- 21 x <- seq(0, T, len = N) sum_sines...
From a statistical perspective: Fourier transform vs regression with Fourier basis They are strongly related. Your example is not reproducible because you didn't include your data, thus I'll make a new one. First of all, let's create a periodic function: T <- 10 omega <- 2*pi/T N <-
13,172
From a statistical perspective: Fourier transform vs regression with Fourier basis
I want to say that the first answer has a division-by-zero problem, since the second formula for the DFT coefficients is infinite when j = N/2: at this value the argument of the sine is $\pi t$, which is zero for all integer $t$, so the denominator of $$\hat\beta_{2,j}=\frac{\sum_{t=1}^N y_t\sin(2\pi t[j/N])}{\sum_{t=1}^N \sin^2(2\pi t[j/N])}$$ vanishes. There are other DFT data regressions in published papers out th...
From a statistical perspective: Fourier transform vs regression with Fourier basis
I want to say that the first answer has a division-by-zero problem, since the second formula for the DFT coefficients is infinite when j = N/2: at this value the argument of the sine is $\pi t$, which is zero for
From a statistical perspective: Fourier transform vs regression with Fourier basis I want to say that the first answer has a division-by-zero problem, since the second formula for the DFT coefficients is infinite when j = N/2: at this value the argument of the sine is $\pi t$, which is zero for all integer $t$, so the denominator of $$\hat\beta_{2,j}=\frac{\sum...
From a statistical perspective: Fourier transform vs regression with Fourier basis I want to say that the first answer has a division-by-zero problem, since the second formula for the DFT coefficients is infinite when j = N/2: at this value the argument of the sine is $\pi t$, which is zero for
13,173
The sum of independent lognormal random variables appears lognormal?
This approximate lognormality of sums of lognormals is a well-known rule of thumb; it's mentioned in numerous papers -- and in a number of posts on site. A lognormal approximation for a sum of lognormals by matching the first two moments is sometimes called a Fenton-Wilkinson approximation. You may find this document ...
The sum of independent lognormal random variables appears lognormal?
This approximate lognormality of sums of lognormals is a well-known rule of thumb; it's mentioned in numerous papers -- and in a number of posts on site. A lognormal approximation for a sum of lognor
The sum of independent lognormal random variables appears lognormal? This approximate lognormality of sums of lognormals is a well-known rule of thumb; it's mentioned in numerous papers -- and in a number of posts on site. A lognormal approximation for a sum of lognormals by matching the first two moments is sometimes...
The sum of independent lognormal random variables appears lognormal? This approximate lognormality of sums of lognormals is a well-known rule of thumb; it's mentioned in numerous papers -- and in a number of posts on site. A lognormal approximation for a sum of lognor
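A Fenton-Wilkinson-style check in R (my own simulation sketch): match the first two moments of the sum with a single lognormal and compare the distributions on a Q-Q plot.
set.seed(6)
S  <- replicate(1e5, sum(rlnorm(10, meanlog = 0, sdlog = 0.5)))
m  <- mean(S); v <- var(S)
s2 <- log(1 + v / m^2)                 # matched lognormal parameters
m2 <- log(m) - s2 / 2
qqplot(S, rlnorm(1e5, m2, sqrt(s2)))   # points hug the identity line
abline(0, 1, col = "red")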
13,174
The sum of independent lognormal random variables appears lognormal?
It's probably too late, but I've found the following p...
The sum of independent lognormal random variables appears lognormal?
It's probably too late, but I've found the following p...
The sum of independent lognormal random variables appears lognormal? It's probably too late, but I've found the following p...
The sum of independent lognormal random variables appears lognormal? It's probably too late, but I've found the following p...
13,175
The sum of independent lognormal random variables appears lognormal?
The lognormal law is widely present in physical phenomena; sums of this kind of variable are needed, for instance, to study any scaling behavior of a system. I know this article (very long and very strong, but the beginning can be understood if you are not a specialist!), "Broad distribution effects in sums of lognorma...
The sum of independent lognormal random variables appears lognormal?
The lognormal law is widely present in physical phenomena; sums of this kind of variable are needed, for instance, to study any scaling behavior of a system. I know this article (very long and
The sum of independent lognormal random variables appears lognormal? The lognormal law is widely present in physical phenomena; sums of this kind of variable are needed, for instance, to study any scaling behavior of a system. I know this article (very long and very strong, but the beginning can be understood if you...
The sum of independent lognormal random variables appears lognormal? The lognormal law is widely present in physical phenomena; sums of this kind of variable are needed, for instance, to study any scaling behavior of a system. I know this article (very long and
13,176
The sum of independent lognormal random variables appears lognormal?
The Dufresne paper recommended above (2009) and this one from 2004, together with this useful paper, cover the history of approximations to the sum of lognormal distributions and give some mathematical results. The problem is that all the approximations cited there are derived by supposing from the start that you are in a ...
The sum of independent lognormal random variables appears lognormal?
The Dufresne paper recommended above (2009) and this one from 2004, together with this useful paper, cover the history of approximations to the sum of lognormal distributions and give some mathematical r
The sum of independent lognormal random variables appears lognormal? The Dufresne paper recommended above (2009) and this one from 2004, together with this useful paper, cover the history of approximations to the sum of lognormal distributions and give some mathematical results. The problem is that all the approximations ...
The sum of independent lognormal random variables appears lognormal? The Dufresne paper recommended above (2009) and this one from 2004, together with this useful paper, cover the history of approximations to the sum of lognormal distributions and give some mathematical r
13,177
Random forest vs regression
I don't know exactly what you did, so your source code would help me to guess less. Many random forests are essentially windows within which the average is assumed to represent the system. It is an over-glorified CART. Let's say you have a two-leaf CART. Your data will be split into two piles. The (constant...
Random forest vs regression
I don't know exactly what you did, so your source code would help me to guess less. Many random forests are essentially windows within which the average is assumed to represent the system. It is an o
Random forest vs regression I don't know exactly what you did, so your source code would help me to guess less. Many random forests are essentially windows within which the average is assumed to represent the system. It is an over-glorified CART. Let's say you have a two-leaf CART. Your data will be split in...
Random forest vs regression I don't know exactly what you did, so your source code would help me to guess less. Many random forests are essentially windows within which the average is assumed to represent the system. It is an o
13,178
Random forest vs regression
I notice that this is an old question, but I think more should be added. As @Manoel Galdino said in the comments, usually you are interested in predictions on unseen data. But this question is about performance on the training data and the question is why does random forest perform badly on the training data? The answe...
Random forest vs regression
I notice that this is an old question, but I think more should be added. As @Manoel Galdino said in the comments, usually you are interested in predictions on unseen data. But this question is about p
Random forest vs regression I notice that this is an old question, but I think more should be added. As @Manoel Galdino said in the comments, usually you are interested in predictions on unseen data. But this question is about performance on the training data and the question is why does random forest perform badly on ...
Random forest vs regression I notice that this is an old question, but I think more should be added. As @Manoel Galdino said in the comments, usually you are interested in predictions on unseen data. But this question is about p
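A sketch of the training-data-vs-honest-error point (assumes the randomForest package, where predict() without newdata returns the out-of-bag predictions):
library(randomForest)
set.seed(8)
x  <- matrix(rnorm(200 * 3), ncol = 3)
y  <- x[, 1] + 0.5 * x[, 2] + rnorm(200, sd = 0.3)
rf <- randomForest(x, y)
cor(predict(rf, x), y)^2   # re-predicting the training data: optimistic
cor(predict(rf), y)^2      # out-of-bag predictions: honest estimate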
13,179
Random forest vs regression
Random forest tries to find localities among lots of features and lots of data points. It splits the features and gives them to different trees; as you have a low number of features, the overall result is not as good as with logistic regression. Random forest can handle numeric and categorical variables but is not good at hand...
Random forest vs regression
Random forest tries to find localities among lots of features and lots of data points. It splits the features and gives them to different trees; as you have a low number of features, the overall result i
Random forest vs regression Random forest tries to find localities among lots of features and lots of data points. It splits the features and gives them to different trees; as you have a low number of features, the overall result is not as good as with logistic regression. Random forest can handle numeric and categorical varia...
Random forest vs regression Random forest tries to find localities among lots of features and lots of data points. It splits the features and gives them to different trees; as you have a low number of features, the overall result i
13,180
Random forest vs regression
I think that Random Forest (RF) is a good tool when the functional form of the relation between the Xs and y is complicated (because of nonlinear relations and interaction effects). RF categorizes the Xs based on the best cutpoint (in terms of minimum SSE) and doesn't use the researcher's information about the functional form of...
Random forest vs regression
I think that Random Forest (RF) is a good tool when the functional form of the relation between the Xs and y is complicated (because of nonlinear relations and interaction effects). RF categorizes the Xs based
Random forest vs regression I think that Random Forest (RF) is a good tool when the functional form of the relation between the Xs and y is complicated (because of nonlinear relations and interaction effects). RF categorizes the Xs based on the best cutpoint (in terms of minimum SSE) and doesn't use the researcher's information ...
Random forest vs regression I think that Random Forest (RF) is a good tool when the functional form of the relation between the Xs and y is complicated (because of nonlinear relations and interaction effects). RF categorizes the Xs based
13,181
Random forest vs regression
For the basics, regression performs well over continuous variables and random forest over discrete variables. You need to provide more details about the problem and about the nature of the variables in order to be more specific...
Random forest vs regression
For the basics, regression performs well over continuous variables and random forest over discrete variables. You need to provide more details about the problem and about the nature of the variables in
Random forest vs regression For the basics, regression performs well over continuous variables and random forest over discrete variables. You need to provide more details about the problem and about the nature of the variables in order to be more specific...
Random forest vs regression For the basics, regression performs well over continuous variables and random forest over discrete variables. You need to provide more details about the problem and about the nature of the variables in
13,182
Confounder - definition
Why must the confounder be causally related to the outcome? Would it be enough for the confounder to be associated with the outcome? No, it's not enough. Let's start with the case where you can have a variable which is both associated with the outcome and the treatment, but controlling for it would bias your estimat...
Confounder - definition
Why must the confounder be causally related to the outcome? Would it be enough for the confounder to be associated with the outcome? No, it's not enough. Let's start with the case where you can hav
Confounder - definition Why must the confounder be causally related to the outcome? Would it be enough for the confounder to be associated with the outcome? No, it's not enough. Let's start with the case where you can have a variable which is both associated with the outcome and the treatment, but controlling for it...
Confounder - definition Why must the confounder be causally related to the outcome? Would it be enough for the confounder to be associated with the outcome? No, it's not enough. Let's start with the case where you can hav
13,183
Are linear regression and least squares regression necessarily the same thing?
An explanation rather depends on what your background is. Suppose you have some so-called independent variables $x_1,x_2,\ldots, x_k$ (they do not have to be independent of each other) where each $x_i$ takes values $x_{i,1}, x_{i,2}\ldots, x_{i,n}$ and you want a regression for a dependent variable $y$ taking val...
Are linear regression and least squares regression necessarily the same thing?
An explanation rather depends on what your background is. Suppose you have some so-called independent variables $x_1,x_2,\ldots, x_k$ (they do not have to be independent of each other) where each $x_i
Are linear regression and least squares regression necessarily the same thing? An explanation rather depends on what your background is. Suppose you have some so-called independent variables $x_1,x_2,\ldots, x_k$ (they do not have to be independent of each other) where each $x_i$ takes values $x_{i,1}, x_{i,2}\ld...
Are linear regression and least squares regression necessarily the same thing? An explanation rather depends on what your background is. Suppose you have some so-called independent variables $x_1,x_2,\ldots, x_k$ (they do not have to be independent of each other) where each $x_i
13,184
Are linear regression and least squares regression necessarily the same thing?
Least squares is the process of minimizing the sum of squared errors from some model. Given a function $f$ which depends on parameters $\theta$, the least squares estimates of $\theta$ are $$ \hat{\theta} = \underset{\theta \in \mathbb{R}^p}{\mbox{argmin}} \left\{ \sum_i (y_i - f(x_i ; \theta))^2 \right\}$$ If you l...
Are linear regression and least squares regression necessarily the same thing?
Least squares is the process of minimizing the sum of squared errors from some model. Given a function $f$ which depends on parameters $\theta$, the least squares estimates of $\theta$ are $$ \hat{
Are linear regression and least squares regression necessarily the same thing? Least squares is the process of minimizing the sum of squared errors from some model. Given a function $f$ which depends on parameters $\theta$, the least squares estimates of $\theta$ are $$ \hat{\theta} = \underset{\theta \in \mathbb{R}...
Are linear regression and least squares regression necessarily the same thing? Least squares is the process of minimizing the sum of squared errors from some model. Given a function $f$ which depends on parameters $\theta$, the least squares estimates of $\theta$ are $$ \hat{
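A sketch of least squares without linearity (my own simulated data): minimize the SSE of a nonlinear model directly with optim().
set.seed(11)
x <- runif(100, 0, 5)
y <- 2 * exp(-0.7 * x) + rnorm(100, sd = 0.05)
sse <- function(th) sum((y - th[1] * exp(-th[2] * x))^2)   # squared-error objective
optim(c(1, 1), sse)$par    # near the true (2, 0.7); no linearity required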
13,185
Are linear regression and least squares regression necessarily the same thing?
Both "Linear Regression" and "Ordinary Least Squares" (OLS) regression are often used to refer to the same kind of statistical model, but for different reasons. We call the model "linear" because it assumes that the relationship between the independent and dependent variables can be described by a straight line. We cal...
Are linear regression and least squares regression necessarily the same thing?
Both "Linear Regression" and "Ordinary Least Squares" (OLS) regression are often used to refer to the same kind of statistical model, but for different reasons. We call the model "linear" because it a
Are linear regression and least squares regression necessarily the same thing? Both "Linear Regression" and "Ordinary Least Squares" (OLS) regression are often used to refer to the same kind of statistical model, but for different reasons. We call the model "linear" because it assumes that the relationship between the ...
Are linear regression and least squares regression necessarily the same thing? Both "Linear Regression" and "Ordinary Least Squares" (OLS) regression are often used to refer to the same kind of statistical model, but for different reasons. We call the model "linear" because it a
13,186
Unconfoundedness in Rubin's Causal Model- Layman's explanation
I think you are getting hung up on the difference between potential outcomes $(Y^0,Y^1)$ and the observed outcome $Y$. The latter is very much influenced by treatment, but we hope the former pair is not. Here's the intuition (putting aside conditioning on $X$ for simplicity) about the observed outcome. For each observa...
Unconfoundedness in Rubin's Causal Model- Layman's explanation
I think you are getting hung up on the difference between potential outcomes $(Y^0,Y^1)$ and the observed outcome $Y$. The latter is very much influenced by treatment, but we hope the former pair is n
Unconfoundedness in Rubin's Causal Model- Layman's explanation I think you are getting hung up on the difference between potential outcomes $(Y^0,Y^1)$ and the observed outcome $Y$. The latter is very much influenced by treatment, but we hope the former pair is not. Here's the intuition (putting aside conditioning on $...
Unconfoundedness in Rubin's Causal Model- Layman's explanation I think you are getting hung up on the difference between potential outcomes $(Y^0,Y^1)$ and the observed outcome $Y$. The latter is very much influenced by treatment, but we hope the former pair is n
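A small simulation sketch of ignorability given X (my own illustration, not from the answer): treatment depends on an observed x, so the naive contrast is biased, but conditioning on x recovers the effect.
set.seed(9)
n <- 1e4
x <- rnorm(n)                          # observed confounder
tr <- rbinom(n, 1, plogis(x))          # treatment more likely when x is high
y <- 1 * tr + 2 * x + rnorm(n)         # true treatment effect = 1
mean(y[tr == 1]) - mean(y[tr == 0])    # naive contrast: biased upward
coef(lm(y ~ tr + x))["tr"]             # adjusting for x recovers ~1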
13,187
Unconfoundedness in Rubin's Causal Model- Layman's explanation
How would you describe the unconfoundedness/ignorability assumption to somebody who has not studied the RCM? Regarding intuition to somebody not versed in causal inference, I think this is where you could use graphs. They are intuitive in the sense that they visually show "flow" and they will also make clear what ignor...
Unconfoundedness in Rubin's Causal Model- Layman's explanation
How would you describe the unconfoundedness/ignorability assumption to somebody who has not studied the RCM? Regarding intuition to somebody not versed in causal inference, I think this is where you c
Unconfoundedness in Rubin's Causal Model- Layman's explanation How would you describe the unconfoundedness/ignorability assumption to somebody who has not studied the RCM? Regarding intuition to somebody not versed in causal inference, I think this is where you could use graphs. They are intuitive in the sense that the...
Unconfoundedness in Rubin's Causal Model- Layman's explanation How would you describe the unconfoundedness/ignorability assumption to somebody who has not studied the RCM? Regarding intuition to somebody not versed in causal inference, I think this is where you c
13,188
Unconfoundedness in Rubin's Causal Model- Layman's explanation
I will add to the above answers by providing an intuitive and easily memorable interpretation of the unconfoundedness/ignorability assumption. It helps to memorize the unconfoundedness assumption by swapping the order of the definition and writing $$T \perp (Y(0),Y(1))|X$$ instead. In this way, we can read the definition as "...
Unconfoundedness in Rubin's Causal Model- Layman's explanation
I will add to the above answers by providing an intuitive and easily memorable interpretation of the unconfoundedness/ignorability assumption. It helps to memorize the unconfoundedness assumption by swappin
Unconfoundedness in Rubin's Causal Model- Layman's explanation I will add to the above answers by providing an intuitive and easily memorable interpretation of the unconfoundedness/ignorability assumption. It helps to memorize the unconfoundedness assumption by swapping the order of the definition and writing $$T \perp (Y(0)...
Unconfoundedness in Rubin's Causal Model- Layman's explanation I will add to the above answers by providing an intuitive and easily memorable interpretation of the unconfoundedness/ignorability assumption. It helps to memorize the unconfoundedness assumption by swappin
13,189
Nice example where a series without a unit root is non stationary?
Here is an example of a non-stationary series that not even a white noise test can detect (let alone a Dickey-Fuller type test): Yes, this might be surprising, but this is not white noise. Most non-stationary counterexamples are based on a violation of the first two conditions of stationarity: deterministic trends (non...
Nice example where a series without a unit root is non stationary?
Here is an example of a non-stationary series that not even a white noise test can detect (let alone a Dickey-Fuller type test): Yes, this might be surprising, but this is not white noise. Most non-
Nice example where a series without a unit root is non stationary? Here is an example of a non-stationary series that not even a white noise test can detect (let alone a Dickey-Fuller type test): Yes, this might be surprising, but this is not white noise. Most non-stationary counterexamples are based on a violation o...
Nice example where a series without a unit root is non stationary? Here is an example of a non-stationary series that not even a white noise test can detect (let alone a Dickey-Fuller type test): Yes, this might be surprising, but this is not white noise. Most non-
13,190
Nice example where a series without a unit root is non stationary?
Unit root testing is notoriously difficult. Using one test is usually not enough and you must be very careful about the exact assumptions the test is using. The way the ADF test is constructed makes it vulnerable to series that are simple non-linear trends with added white noise. Here is an example: library(dplyr) library(t...
Nice example where a series without a unit root is non stationary?
Unit root testing is notoriously difficult. Using one test is usually not enough and you must be very careful about the exact assumptions the test is using. The way the ADF test is constructed makes it vulner
Nice example where a series without a unit root is non stationary? Unit root testing is notoriously difficult. Using one test is usually not enough and you must be very careful about the exact assumptions the test is using. The way the ADF test is constructed makes it vulnerable to series that are simple non-linear trends w...
Nice example where a series without a unit root is non stationary? Unit root testing is notoriously difficult. Using one test is usually not enough and you must be very careful about the exact assumptions the test is using. The way the ADF test is constructed makes it vulner
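Since the example above is cut off, here is a separate self-contained sketch in the same spirit (assumes the tseries package; the ADF behavior described is typical, not guaranteed): a deterministic quadratic trend plus white noise has a drifting mean, yet ADF usually rejects the unit root.
library(tseries)                                 # for adf.test()
set.seed(10)
n <- 300
y <- 3 * ((1:n) / n)^2 + rnorm(n, sd = 0.5)      # quadratic trend + white noise
plot.ts(y)                                       # the mean clearly drifts: not stationary
adf.test(y)                                      # yet the unit root is typically rejected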
13,191
Nice example where a series without a unit root is non stationary?
Example 1 Unit-root processes with a strong negative MA component are known to lead to ADF tests with empirical size far higher than the nominal one (e.g., Schwert, JBES 1989). That is, if $$ Y_t=Y_{t-1}+\epsilon_t+\theta\epsilon_{t-1}, $$ with $\theta\approx-1$, the roots of the AR and MA part will almost cancel, so ...
Nice example where a series without a unit root is non stationary?
Example 1 Unit-root processes with a strong negative MA component are known to lead to ADF tests with empirical size far higher than the nominal one (e.g., Schwert, JBES 1989). That is, if $$ Y_t=Y_{
Nice example where a series without a unit root is non stationary? Example 1 Unit-root processes with a strong negative MA component are known to lead to ADF tests with empirical size far higher than the nominal one (e.g., Schwert, JBES 1989). That is, if $$ Y_t=Y_{t-1}+\epsilon_t+\theta\epsilon_{t-1}, $$ with $\theta...
Nice example where a series without a unit root is non stationary? Example 1 Unit-root processes with a strong negative MA component are known to lead to ADF tests with empirical size far higher than the nominal one (e.g., Schwert, JBES 1989). That is, if $$ Y_t=Y_{
13,192
Weird correlations in the SVD results of random data; do they have a mathematical explanation or is it a LAPACK bug?
This is not a bug. As we have explored (extensively) in the comments, there are two things happening. The first is that the columns of $U$ are constrained to meet the SVD requirements: each must have unit length and be orthogonal to all the others. Viewing $U$ as a random variable created from a random matrix $X$ via ...
13,193
Weird correlations in the SVD results of random data; do they have a mathematical explanation or is it a LAPACK bug?
This answer presents a replication of @whuber's results in Matlab, and also a direct demonstration that the correlations are an "artifact" of how the SVD implementation chooses the signs of the components. Given the long chain of potentially confusing comments, I want to stress for future readers that I fully agree with th...
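A sketch of the sign-flip idea in R rather than Matlab (sizes are arbitrary): if the arbitrary overall sign of the first singular vector is randomized in each replication, the cross-replication correlations should scatter around zero, whereas the implementation's deterministic sign convention can reintroduce them.

set.seed(1)
sims <- t(replicate(1000, {
  u1 <- svd(matrix(rnorm(20 * 2), ncol = 2))$u[, 1]  # first left singular vector
  u1 * sample(c(-1, 1), 1)                           # randomize its arbitrary sign
}))
C <- cor(sims)
range(C[upper.tri(C)])  # off-diagonal correlations scattered around 0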
13,194
Weird correlations in the SVD results of random data; do they have a mathematical explanation or is it a LAPACK bug?
Check the norm of your singular vectors U and V: it's 1 by definition. You don't need to go through the SVD to get exactly the same matrix you plot; simply generate two random variables $x$ and $y$ with the constraint that the sum of their squares is 1: $$x^2+y^2=1$$ Assume that the means are zero; then $$Cov[x,y]=Var[xy...
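A minimal sketch of this construction, with no SVD involved (the uniform angle below is an assumption made for simplicity):

set.seed(3)
theta <- runif(1e4, 0, 2 * pi)
x <- cos(theta); y <- sin(theta)  # x^2 + y^2 = 1 by construction
c(mean(x), mean(y))               # both near 0
cov(x, y)                         # near 0 as well, yet...
cor(x^2, y^2)                     # ...exactly -1, since y^2 = 1 - x^2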
13,195
What's the difference between standardization and studentization?
A short recap. Given a model $y=X\beta+\varepsilon$, where $X$ is $n\times p$, $\hat\beta=(X'X)^{-1}X'y$ and $\hat y=X\hat\beta=X(X'X)^{-1}X'y=Hy$, where $H=X(X'X)^{-1}X'$ is the "hat matrix". Residuals are $$e=y-\hat y=y-Hy=(I-H)y$$ The population variance $\sigma^2$ is unknown and can be estimated by $MSE$, the mean ...
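A sketch checking this recap numerically on a built-in dataset: the internally studentized residuals computed by hand from the hat matrix agree with R's rstandard().

fit <- lm(dist ~ speed, data = cars)
h   <- hatvalues(fit)                 # diagonal of H = X (X'X)^{-1} X'
e   <- residuals(fit)                 # e = (I - H) y
s2  <- sum(e^2) / fit$df.residual     # MSE estimate of sigma^2
r   <- e / sqrt(s2 * (1 - h))         # standardized (internally studentized) residuals
all.equal(unname(r), unname(rstandard(fit)))  # TRUE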
13,196
What's the difference between standardization and studentization?
In the social sciences it is typically said that Studentized scores use Student's/Gosset's calculation for estimating the population variance/standard deviation from the sample variance/standard deviation ($s$). In contrast, Standardized scores (a noun, a particular type of statistic, the Z score) are said to use the p...
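A tiny illustration of that terminological split (the IQ mean and standard deviation below are the conventional values, used here only as an example of known population parameters):

mu <- 100; sigma <- 15          # known population parameters (IQ convention)
x  <- c(85, 100, 115, 130)
(x - mu) / sigma                # standardized (Z) scores: sigma known
(x - mean(x)) / sd(x)           # "studentized" in this usage: s replaces sigma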
13,197
What's the difference between standardization and studentization?
I am very late in answering this question, but I couldn't find an answer in very simple language, so here is a humble attempt. Why do we standardize? Imagine you have two models: one predicts craziness from the amount of time spent studying statistics, while the other predicts log(craziness) from the amount of time on st...
13,198
What's the difference between standardization and studentization?
Wikipedia has a good overview at https://en.wikipedia.org/wiki/Normalization_(statistics):
Standard score $\frac{X - \mu}{\sigma}$: Normalizing errors when population parameters are known. Works well for populations that are normally distributed.
Student's t-statistic $\frac{X - \overline{X}}{s}$: Normalizing resi...
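A sketch contrasting a single estimated scale with the per-observation scale used by studentized residuals (the toy regression below is only an illustration):

fit <- lm(dist ~ speed, data = cars)
e   <- residuals(fit)
head(e / summary(fit)$sigma)  # one common scale s for every residual
head(rstandard(fit))          # studentized: each residual scaled by s * sqrt(1 - h_ii)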
13,199
Is Benjamini-Hochberg correction more conservative as the number of comparisons increases?
First, you need to understand that these two multiple testing procedures do not control the same thing. Using your example, we have two groups with 18,000 observed variables, and you make 18,000 tests in order to identify the variables that differ between the two groups. Bonferroni correction controls th...
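A minimal sketch of the two corrections side by side with R's p.adjust(); the p-values are simulated, and the split into 500 "signals" and 17,500 nulls is an arbitrary choice made only for illustration:

set.seed(10)
p <- c(runif(500, 0, 1e-4), runif(17500))        # 500 small "signal" p-values + 17,500 nulls
sum(p.adjust(p, method = "bonferroni") < 0.05)   # FWER control: only the very smallest survive
sum(p.adjust(p, method = "BH") < 0.05)           # FDR control: far more discoveries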
13,200
Computing the decision boundary of a linear SVM model
The Elements of Statistical Learning, by Hastie et al., has a complete chapter on support vector classifiers and SVMs (in your case, starting on page 418 of the 2nd edition). Another good tutorial is Support Vector Machines in R, by David Meyer. Unless I misunderstood your question, the decision boundary (or hyperplane) is...
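A sketch of recovering the hyperplane from a fitted linear SVM, assuming the e1071 package (the dataset and variables are arbitrary; in e1071's parametrization the boundary is $w'x - \rho = 0$):

library(e1071)

d <- subset(iris, Species != "virginica",
            select = c(Sepal.Length, Sepal.Width, Species))
d$Species <- droplevels(d$Species)
fit <- svm(Species ~ ., data = d, kernel = "linear", scale = FALSE)

w <- t(fit$coefs) %*% fit$SV  # weight vector from the support vectors
b <- -fit$rho                 # intercept, so the boundary is w %*% x + b = 0
# in two dimensions: x2 = -(b + w[1] * x1) / w[2]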