idx int64 1 56k | question stringlengths 15 155 | answer stringlengths 2 29.2k ⌀ |
|---|---|---|
43,901 | Is Convolutional Neural Network (CNN) faster than Recurrent Neural Network (RNN)? | I have done some projects on text classification and relation extraction using CNN and RNN (specifically, LSTM and GRU): CNNs tend to be much faster (~5 times faster) than RNNs.
It's hard to draw fair comparisons:
CNN and RNN have different hyperparameters (filter dimension, number of filters, hidden state dimension, ... |
43,902 | Is Convolutional Neural Network (CNN) faster than Recurrent Neural Network (RNN)? | As already said by Franck Dernoncourt, the answer depends on your model. CNN and RNN are different architectures, used differently, usually for different purposes. You can't really replace one with the other without changing other elements of the model to compare the performance.
However, CNNs are faster by design, since ... |
43,903 | Disagreement between the p-value and the confidence interval in a binomial test | The problem with tests of binomial proportion is that the tests used are generally approximate (since the exact "Clopper-Pearson" test is ridiculously conservative). Therefore, it's not clear that the procedure used to get the CI is the same as that used to test the hypothesis. Theoretically, either approach should lea... |
43,904 | Disagreement between the p-value and the confidence interval in a binomial test | While the test and confidence interval provided by binom.test() are both exact, the confidence interval is unfortunately not based on inverting the test, so they may lead to inconsistent results. See the paper
Fay, M.P. (2010). Two-sided Exact Tests and Matching Confidence Intervals for Discrete Data. R Journal, volume... |
43,905 | Disagreement between the p-value and the confidence interval in a binomial test | This is obviously a border case and the CI and test results are not derived in exactly the same manner (the CI is not an inversion of the test). You might want to look up binomial CIs and note that there are many ways they can be calculated, all with pluses and minuses. But none of that gets at your central question of ... |
43,906 | Disagreement between the p-value and the confidence interval in a binomial test | This is my first time answering a question, so I'm hoping that I am actually providing a useful answer.
When you run this in R:
x <- 31
n <- 50
p <- 0.75
binom.test(x, n, p = p)
... it returns the following results:
Exact binomial test
data: x and n
number of successes = 31, number of trials = 50, p-value = 0.04... |
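The two numbers that binom.test() reports are computed by different recipes, which is the root of the disagreement discussed in these answers. Below is a minimal pure-Python reconstruction (no packages; the helper names are mine, not R's): the p-value follows R's rule of summing all outcomes no more likely than the observed one, and the Clopper-Pearson interval is obtained by bisection on the binomial CDF.

```python
# Pure-Python sketch of what binom.test(31, 50, p = 0.75) computes.
# Helper names are mine, not R's.
from math import comb

def binom_pmf(i, n, p):
    return comb(n, i) * p**i * (1 - p)**(n - i)

def binom_cdf(k, n, p):  # P(X <= k)
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

def exact_two_sided_p(k, n, p0):
    # R's two-sided rule: sum probabilities of all outcomes no more
    # likely than the observed one (with R's small relative tolerance).
    pk = binom_pmf(k, n, p0)
    return sum(binom_pmf(i, n, p0) for i in range(n + 1)
               if binom_pmf(i, n, p0) <= pk * (1 + 1e-7))

def bisect_root(f, lo=0.0, hi=1.0):
    # f is monotone with opposite signs at lo and hi
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def clopper_pearson(k, n, alpha=0.05):
    lower = 0.0 if k == 0 else bisect_root(
        lambda p: (1 - binom_cdf(k - 1, n, p)) - alpha / 2)
    upper = 1.0 if k == n else bisect_root(
        lambda p: binom_cdf(k, n, p) - alpha / 2)
    return lower, upper

p_value = exact_two_sided_p(31, 50, 0.75)
lo, hi = clopper_pearson(31, 50)
print(p_value, (lo, hi))
# The test rejects p = 0.75 at the 5% level (p-value ~ 0.044), yet the
# 95% interval (~0.473 to ~0.752) still contains 0.75 -- the disagreement.
```

Both computations are "exact" in the sense of using the binomial distribution directly, yet they are not inversions of each other, so borderline cases like this one can come out on opposite sides of the 5% line.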
43,907 | How to judge if 5 point Likert scale data are normally distributed? | How to judge if 5 point likert scale data are normal distributed?
Values on 5-point ordinal scales are never normally distributed. But that's probably not the question you really need answered.
I have read that the t-test is used when the population is normally distributed.
It's an assumption of the test, but it's ... |
43,908 | How to judge if 5 point Likert scale data are normally distributed? | The normal distribution is a continuous distribution, while a 5-point Likert-type scale is an ordinal variable, so by definition it is not normally distributed. |
43,909 | How to judge if 5 point Likert scale data are normally distributed? | If you are considering t-tests on Likert items, I would primarily be worried about how many 1's and 5's there are, since those values might represent censoring of responses that could exceed 1 or 5 if it were permitted. This censoring is much more problematic than the fact that you would be treating a discrete distribution as i... |
43,910 | How to judge if 5 point Likert scale data are normally distributed? | I think you have two questions here:
how do you describe the distribution of a set of Likert scores? (by your question: is such a set normally distributed?)
how do you tell if two sets of Likert scores are 'different' (or one Likert score different from the one that is most 'normal')?
For the first one, only continuous d... |
43,911 | How to judge if 5 point Likert scale data are normally distributed? | @Tim is correct, Likert data cannot be normally distributed. Likert data are discrete and bounded; normal data go to infinity in both directions and can take any value in between.
The answer to your other question is that the standard deviation means pretty much the same thing whether your data are Likert-type, norm... |
43,912 | How to judge if 5 point Likert scale data are normally distributed? | I would suggest giving the data a hard look via a diagram. That sounds unscientific, but eyeballing is a really powerful tool for lots of problems.
Assessing normality of a Likert scale is one of them.
Look at the distribution and imagine you draw a normal distribution: would the data fit into the curve, or would the... |
43,913 | How to judge if 5 point Likert scale data are normally distributed? | If you have ordinal data, why would you be concerned about a normal distribution? The only reason I can think of is if you are thinking of a latent trait that is manifested categorically; in that case, one can make the assumption that the latent trait is normally distributed. If you consider it a latent trait, and are t... |
43,914 | What does it mean to integrate over the posterior? | Matthew's answer provides the correct technical explanation. For intuitive understanding, you may think of integrating over a distribution as just a fancy term for averaging over the distribution (or, taking the expectation over the distribution). In this answer, I consider the same model with discrete distributions (f... |
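The "averaging" reading above can be made concrete with a toy sketch (all numbers invented): if the posterior over a coin's bias theta is a discrete distribution with three support points, the posterior predictive probability of heads is just the posterior-weighted average.

```python
# Toy example with invented numbers: a discrete posterior over a coin's
# bias theta. "Integrating over the posterior" to get the predictive
# probability of heads reduces to a weighted average (an expectation).
posterior = {0.25: 0.1, 0.50: 0.3, 0.75: 0.6}   # theta -> P(theta | y)

# P(heads | y) = sum over theta of P(heads | theta) * P(theta | y)
p_heads = sum(theta * weight for theta, weight in posterior.items())
print(p_heads)  # 0.25*0.1 + 0.50*0.3 + 0.75*0.6 = 0.625
```

With a continuous posterior, the sum becomes an integral, but the operation is the same: weight each parameter value's prediction by how plausible that parameter value is given the data.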
43,915 | What does it mean to integrate over the posterior? | The goal here is to get the posterior predictive distribution. Suppose we are given previous data $y$ for learning parameters $\theta$, by which we attain the posterior $\pi(\theta \mid y)$. But now we want to understand the distribution of $y^*$, a new (set of) observation(s), given the data we already have. That would be... |
43,916 | Is visual inspection the only way to compare large datasets? | I suggest summarizing the difference with a general robust measure that does not depend on normality: the concordance probability that comes from the Wilcoxon-Mann-Whitney two-sample test. The concordance proportion estimates the probability that a randomly chosen value from group A exceeds a randomly chosen value fro... |
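The concordance probability this answer describes can be estimated directly from the two samples as the proportion of cross-group pairs in which the A value is larger (ties counted as 1/2), which is the Mann-Whitney U statistic divided by n1*n2. A brute-force Python sketch on simulated data (the 0.5-SD shift is made up for illustration):

```python
# Estimate P(value from A > value from B) by pairwise comparison
# (ties count 1/2) -- i.e. Mann-Whitney U / (n1 * n2).
# The simulated 0.5-SD shift is invented for illustration.
import random

random.seed(0)
a = [random.gauss(0.5, 1) for _ in range(1000)]  # group A, shifted up
b = [random.gauss(0.0, 1) for _ in range(1000)]  # group B

pairs = len(a) * len(b)
concordance = sum((x > y) + 0.5 * (x == y) for x in a for y in b) / pairs
print(round(concordance, 3))  # close to Phi(0.5 / sqrt(2)), about 0.64
```

A value of 0.5 means the two groups are indistinguishable by this measure; values near 0 or 1 mean one group almost always dominates, which is exactly the kind of effect-size summary that stays interpretable no matter how large N gets.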
43,917 | Is visual inspection the only way to compare large datasets? | Yes. This is one key problem with standard goodness-of-fit tests on large datasets.
I would prefer visual inspection, as well as measures of effect size. Even if there is a large overlap in distributions, a 15% improvement in some KPI may be very useful. I wouldn't care too much about specific distributions, depending ... |
43,918 | Is visual inspection the only way to compare large datasets? | There is nothing wrong with statistical summaries of large data sets. If a method would be appropriate with N = 100, then it is appropriate with N = 100,000 or 100,000,000.
There is, however, something wrong with how most people interpret p-values. The answer to your first question is "yes", but that answer is just anot... |
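The p-value point is easy to demonstrate: at very large N, an effect far too small to matter in practice still produces a "significant" p-value. A sketch with simulated data (the 0.02-SD effect and the sample sizes are invented; the test is a plain two-sample z-test, which is fine at this N):

```python
# With huge N, a trivially small true difference (0.02 sd here, invented
# for illustration) still yields a p-value well below 0.05.
import random
from math import erf, sqrt

random.seed(1)
n = 200_000
a = [random.gauss(0.00, 1) for _ in range(n)]
b = [random.gauss(0.02, 1) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def two_sample_z(xs, ys):
    mx, my = mean(xs), mean(ys)
    vx = sum((x - mx) ** 2 for x in xs) / (len(xs) - 1)
    vy = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
    z = (my - mx) / sqrt(vx / len(xs) + vy / len(ys))
    # two-sided p-value from the standard normal
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return my - mx, p

diff, p = two_sample_z(a, b)
print(diff, p)  # the difference is negligible, yet p is "significant"
```

The point is not that the test is wrong (the difference really is nonzero here), but that "statistically significant" and "practically meaningful" come apart as N grows, which is why the answers above recommend effect sizes and plots alongside any test.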
43,919 | Why is homogeneity of variance so important? | It is about a year since you asked this question, @variant, and I assume you hopefully passed whatever exam you were studying for or passed your stats course. Homogeneity of variance is a standard assumption of ANOVA and most statistical tests. It is usually touched on quickly in most stats classes. Most people ... |
43,920 | Why is homogeneity of variance so important? | When we conduct an ANOVA test, we examine the plausibility of a null hypothesis, a straw-man hypothesis that we may end up rejecting. Under this hypothesis we assume not only that all group means are equal, but that we have a certain data-generating process. This is a process in which 1) our observations come to us r... |
43,921 | Why is homogeneity of variance so important? | Within regression models, homogeneity of variance of the residuals relative to the estimates, referred to as homoskedasticity, is a key underlying assumption of linear regression. If such residuals are not deemed homoskedastic but heteroskedastic (variance changes over observations instead of remaining roughly constant... |
43,922 | Is cross-validation the most important measure of a predictive model's effectiveness? | A good reason not to do this is that the cross-validation estimator has a finite variance, so if you evaluate it on many choices of input variables you will end up with a set that explains the data you have well, but will generalise poorly, as it has effectively learned the noise that is particular to that dataset. The ... |
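The finite-variance argument can be made visible with a deliberately silly experiment (all settings invented): when the labels are pure noise, the best of many candidate "models" still looks well above chance on the very data used to pick it, even though every candidate's true accuracy is 0.5.

```python
# Selection bias in a nutshell: labels are pure noise, every candidate
# model guesses at random (true accuracy 0.5), yet the *selected* best
# score is well above 0.5 because we optimized over a noisy estimate.
import random

random.seed(2)
n_obs, n_models = 100, 500
labels = [random.randint(0, 1) for _ in range(n_obs)]

def score(preds):
    return sum(p == y for p, y in zip(preds, labels)) / n_obs

scores = [score([random.randint(0, 1) for _ in range(n_obs)])
          for _ in range(n_models)]
print(max(scores), sum(scores) / n_models)
# max is noticeably above 0.5; the average stays near 0.5
```

The same mechanism applies when the "candidates" are feature subsets scored by cross-validation: the more choices you compare on the same CV estimate, the more the winning score overstates true performance, which is why an untouched holdout (or nested CV) is needed for an honest final estimate.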
43,923 | Is cross-validation the most important measure of a predictive model's effectiveness? | I would personally favor cross-validated score evaluation because:
it is easily interpretable by the analyst, provided that the underlying score function (accuracy, f1-score, RMSE...) is interpretable too,
it gives an idea of the uncertainty by looking at the stdev of the score values across CV folds,
it gives a way to... |
43,924 | Is cross-validation the most important measure of a predictive model's effectiveness? | Two scenarios spring to mind where you wouldn't want to just run iterations of all possible models:
Your model is in a clinical setting. For example, a nurse takes some measurements and uses them to predict something. If you include every possible covariate, then you are more likely to get missing values. Especially if ... |
43,925 | Should the alternative hypothesis always be the research hypothesis? | I would say that the "alternative hypothesis" is usually NOT a "proposed hypothesis".
You do not define "proposed hypothesis" and it is not a common phrase. Presumably you mean that it is either a statistical hypothesis or a scientific hypothesis. They are usually quite different things.
A scientific hypothesis u... |
43,926 | Should the alternative hypothesis always be the research hypothesis? | The principle of statistical hypothesis tests, by definition, treats the null hypothesis H0 and the alternative H1 asymmetrically. This always needs to be taken into account. A test is able to tell you whether there is evidence against the null hypothesis in the direction of the alternative.
It will never tell you that...
43,927 | Should the alternative hypothesis always be the research hypothesis? | In this case there should be the equality as an alternative hypothesis and therefore the difference as a null hypothesis?
Hypothesis testing works well when a particular hypothesis makes a precise prediction, for example that the observed value is equal to, or above/below, some value. Hypothesis testing is about making predict...
43,928 | Should the alternative hypothesis always be the research hypothesis? | @Dave shed some light on the question and told me about the equivalence test, explained here.
The hypothesis test for equivalence can be written as follows:
H0: The difference between the two group means is outside the equivalence interval
H1: The difference between the two group means is inside the equivalence inte...
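A hand-rolled version of this equivalence test (TOST, two one-sided t-tests) might look as follows. The pooled-variance form and the equivalence bounds below are assumptions of this sketch, not the author's own code:

```python
import numpy as np
from scipy import stats

def tost_ind(x, y, low, upp):
    """Two one-sided t-tests (TOST): H0 says the mean difference lies
    outside [low, upp]; a small p-value supports equivalence (H1)."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # pooled standard error, assuming equal variances
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff - low) / se, df)   # one-sided test: diff > low
    p_upper = stats.t.cdf((diff - upp) / se, df)  # one-sided test: diff < upp
    return max(p_lower, p_upper)                  # TOST p-value

rng = np.random.default_rng(0)
p = tost_ind(rng.normal(0, 1, 1000), rng.normal(0, 1, 1000), -0.5, 0.5)
# p is small here, so we would declare the two means equivalent within +/- 0.5
```

Note the reversal relative to an ordinary t-test: rejecting H0 is what lets you claim equivalence.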
43,929 | Should the alternative hypothesis always be the research hypothesis? | We generally take the null hypothesis H0 to be an old, orthodox belief, assumed true even though we do not have sufficient proof of its truth,
and the alternative hypothesis H1 to be a new, radical belief that challenges our old system of belief.
So we need a great deal of effort to reject our old belief H0.
We will need a high deg...
43,930 | PDF does not integrate to 1 - where is my mistake? | As pointed out in comments, the range of integration in your integral does not match the listed support of the random variable (which is $\mu \leqslant x < \infty$). Start by correcting the expression for your density, with explicit statement of the support:
$$f(x) = \begin{cases}
\frac{2n}{\mu} \Big( \frac{\mu}{x} \B...
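Since the density expression above is cut off, the following only illustrates the general check: numerically integrating a candidate density over its stated support. The Pareto-type form and the parameter values below are assumptions for demonstration, not necessarily the exact density in the question:

```python
from scipy.integrate import quad

mu, n = 2.0, 3  # hypothetical parameter values
# assumed Pareto-type density with support [mu, infinity)
f = lambda x: (2 * n / mu) * (mu / x) ** (2 * n + 1)

# integrate over the stated support [mu, inf), not (0, inf):
total, _ = quad(f, mu, float("inf"))
# total is 1 to numerical precision; using the wrong lower limit is exactly
# the kind of support mismatch the answer points to
```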
43,931 | Why use bar chart with error whiskers instead of box plot? | Realistically, the reason people do most of the things they do is tradition / habit. 'Such-and-such is what I learned in graduate school, it's what I've always done, it's what everyone else in my field does, it's what reviewers / editors / readers will expect and understand readily.'
Having said that, we can ask if ...
43,932 | Why use bar chart with error whiskers instead of box plot? | Personally, I have never encountered a good use case for bar plots and think it's mostly inertia in some fields that leads to their continued use.
If you like Tufte's ideas about data-ink ratios, it's clear that they take up far too much plot area for the little information they convey. All the information in the bar can ...
43,933 | Why use bar chart with error whiskers instead of box plot? | Boxplots (and violin plots, which I prefer because they convey more information, and the raw observations themselves) visualize observations and summary information of the observations. Barplots, as used in these communities, visualize parameter estimates. Typically, as in the example you give, the estimate is simply t...
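To see concretely what each display conveys, here is a small matplotlib sketch with made-up data (headless backend so it runs anywhere); it is an illustration, not any answerer's code:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
data = [rng.normal(5, 1, 40), rng.lognormal(1.5, 0.6, 40)]  # second group is skewed

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
# bar + SEM whiskers: only the mean and its precision survive
means = [d.mean() for d in data]
sems = [d.std(ddof=1) / np.sqrt(len(d)) for d in data]
ax1.bar(["A", "B"], means, yerr=sems, capsize=4)
ax1.set_title("bar + SEM")
# box plot: median, quartiles, and the skew of group B are all visible
ax2.boxplot(data)
ax2.set_title("box plot")
fig.savefig("bar_vs_box.png")
```

The skew of the lognormal group is invisible in the left panel and obvious in the right one.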
43,934 | Determining if difference is significant | You can do the chi-squared test and get a p-value. (I haven't done the test, so I don't know what the p-value is.)
What you cannot do — without making a big assumption — is to claim that the feature increases the probability of conversion. It may be the case that users who were more likely to convert were also more lik...
43,935 | Determining if difference is significant | If all the customers are independent (e.g. you don't have data on the same customer across multiple encounters) then this is the classic 2 by 2 table that can be analyzed using a chi-squared test: https://en.wikipedia.org/wiki/Chi-squared_test.
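A sketch of that 2 by 2 analysis in Python; the counts below are made up for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: saw feature / did not; columns: converted / did not convert
table = np.array([[120, 880],
                  [ 90, 910]])

# chi2_contingency applies Yates' continuity correction by default for 2x2 tables
chi2, p, dof, expected = chi2_contingency(table)
# compare p to your significance level; dof is 1 for a 2x2 table
```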
43,936 | Determining if difference is significant | Normally, testing involving two samples requires more complicated analysis than testing with one sample. But in this case, the sizes are so unbalanced that it's a reasonable approximation to just do an analysis on the smaller sample. We can take a null hypothesis that the smaller sample is from a Poisson distribution w...
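The Poisson approximation this answer begins to describe can be sketched as follows; the counts and rates below are hypothetical, purely to illustrate the one-sample shortcut:

```python
from scipy.stats import poisson

# Hypothetical numbers: the large sample pins down a baseline conversion rate,
# and the small sample's count is compared to a Poisson null at that rate.
observed, n_small, base_rate = 10, 1500, 0.004
mu = n_small * base_rate                # expected conversions under the null (6.0)
p_value = poisson.sf(observed - 1, mu)  # P(X >= observed) under Poisson(mu)
```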
43,937 | Correlations - Pearson and Spearman | Pearson correlation depends on the values of the data; Spearman correlation depends only on their (marginal) ranks. Thus, the former is (far) more sensitive to outlying data.
What kind of outlying data? Those with high leverage. These are far to the left or right of the rest of the points in a plot, as in the left p...
43,938 | Correlations - Pearson and Spearman | I know that Pearson correlation is sensitive to outliers, unlike Spearman correlation.
There is a more striking difference between the two: Pearson assumes a linear relationship between the data, whereas Spearman checks whether it is simply monotonic (see the image below, taken from Wikipedia). Generating data via a...
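A quick numerical check of that distinction, using monotonic but nonlinear simulated data:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 200)
y = np.exp(x)  # monotonic but strongly nonlinear in x

r, _ = pearsonr(x, y)     # noticeably below 1: the relation is not linear
rho, _ = spearmanr(x, y)  # exactly 1: the relation is monotonic
```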
43,939 | Correlations - Pearson and Spearman | This is the basic idea. In this example Spearman's correlation is obviously 1, and Pearson's correlation is 0.65. You can generate "step data" that will look like almost a straight line, then add an outlier.
43,940 | Sample size calculation in COVID-19 study | A glib answer is that they probably just plugged their numbers into a power calculator. I've attached a screenshot re-creating this power analysis in G*Power 3.1, a freely available power calculator. Note that to match their result of 621 I had to go to "Options" and select "Maximize Alpha".
The paper says "We anticipated ...
43,941 | Sample size calculation in COVID-19 study | I know I am several months late, but just want to respond to the other answers. All answers use simulations and/or claim the exact Fisher calculation is too computationally intensive. If you code this efficiently, you can get an exact computation very quickly. Below is a comparison time of the sample code fisherpowe...
43,942 | Sample size calculation in COVID-19 study | They used Fisher's exact test, which relates to sampling without replacement.
But in reality this is not exactly like that, and it is more like binomially distributed data.
For that case you get the following:
For the null hypothesis it is sampling where you have equal probabilities that the people get covid-19, no mat...
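The binomial sampling view described here also gives a quick simulation-based power check for Fisher's exact test. The 10% and 5% incidences and the 621-per-arm figure are taken from the neighboring answers; treating them as the design assumptions is this sketch's own assumption:

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)
n_per_arm, p_control, p_treat = 621, 0.10, 0.05
reps = 300

rejections = 0
for _ in range(reps):
    a = rng.binomial(n_per_arm, p_treat)    # events in the treatment arm
    b = rng.binomial(n_per_arm, p_control)  # events in the control arm
    _, p = fisher_exact([[a, n_per_arm - a],
                         [b, n_per_arm - b]])
    rejections += p < 0.05

power = rejections / reps  # close to 1 for this effect size and sample size
```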
43,943 | Sample size calculation in COVID-19 study | You are missing a critical piece of information that the article cited immediately prior to your quote:
We anticipated that illness compatible with Covid-19 would develop in 10% of close contacts exposed to Covid-19.
This is the assumed incidence in the control group under the alternative hypothesis; i.e., $\pi_c = 0...
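With $\pi_c = 0.10$ and an assumed treatment incidence (say 5%, i.e. a 50% relative reduction, which is an assumption of this sketch), a normal-approximation sample-size calculation can be written in a few lines. The trial's own calculation used Fisher's exact test, so these numbers will not match theirs exactly:

```python
import numpy as np
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-proportion comparison,
    using Cohen's arcsine effect size h (normal approximation)."""
    h = abs(2 * np.arcsin(np.sqrt(p1)) - 2 * np.arcsin(np.sqrt(p2)))
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil((z / h) ** 2))

n = n_per_group(0.10, 0.05)  # 212 per group at 80% power
```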
43,944 | Mean vs. Trimmed mean in the normal distribution | For an exponential family like the Normal distribution, the sample average $\bar{x}$ is known to achieve the Cramér-Rao lower bound, that is, the minimal possible variance among all unbiased estimators of the mean. It is thus no surprise that another estimator such as the trimmed mean is found to be more variable than $\...
43,945 | Mean vs. Trimmed mean in the normal distribution | With a light-tailed distribution, the more distant points are most informative about location; with a heavier-tailed distribution their inclusion in an average may be anything from unhelpful to ruinous.
So when you use a suitably-trimmed mean with a heavy-tailed distribution, it will tend to have a lower variance than ...
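A small simulation (written for this answer, not taken from it) makes both halves of the claim concrete:

```python
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(42)
n, reps = 50, 5000

def sds(sampler):
    """SDs of the plain mean and the 20%-trimmed mean over repeated samples."""
    draws = [sampler() for _ in range(reps)]
    return (np.std([d.mean() for d in draws]),
            np.std([trim_mean(d, 0.2) for d in draws]))

sd_mean_norm, sd_trim_norm = sds(lambda: rng.normal(size=n))
sd_mean_t2, sd_trim_t2 = sds(lambda: rng.standard_t(df=2, size=n))

# light tails (normal): the plain mean is less variable (Cramer-Rao argument above)
# heavy tails (t with 2 df): trimming wins by a wide margin
```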
43,946 | Smarter example of biased but consistent estimator? | Here's a straightforward one.
Consider a uniform population with unknown upper bound
$$ X \sim U(0, \theta) $$
A simple estimator of $\theta$ is the sample maximum
$$ \hat \theta = \max(x_1, x_2, \ldots, x_n) $$
This is a biased estimator. With a little math you can show that
$$ E[\hat \theta] = \frac{n}{n+1} \theta $$ ...
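A quick simulation, written for illustration, shows the bias and how it shrinks with $n$:

```python
import numpy as np

rng = np.random.default_rng(7)
theta, reps = 2.0, 10000

def mean_of_max(n):
    # average of the sample maximum over many U(0, theta) samples of size n
    return rng.uniform(0, theta, size=(reps, n)).max(axis=1).mean()

m5 = mean_of_max(5)      # near (5/6) * theta: clearly biased downward
m500 = mean_of_max(500)  # near (500/501) * theta: bias almost gone
```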
43,947 | Smarter example of biased but consistent estimator? | A very commonly used consistent but biased estimator is the estimated standard deviation.
If we are looking at a simple situation in which our data are distributed as $x_i \sim N(\mu, \sigma^2)$, then sometimes the MLE estimate of $\sigma$ is used, i.e.
$\hat \sigma^2 = \frac{1}{n} \sum_{i = 1}^n (x_i - \bar x)^2$ ...
43,948 | Log transformation and correlation | Spearman correlation tests for monotonic association (tendency to increase together and decrease together); it's unaffected by monotonic-increasing transformation (like taking logs, square roots or squaring positive values).
To Spearman correlation, these are all perfectly correlated:
... since each variable increases...
43,949 | Log transformation and correlation | The reason you aren't seeing any difference is because you're calculating Spearman's rather than Pearson's correlation. The latter is a measure of linear association, but Spearman's correlation measures the strength of any monotone relationship, which should be invariant to monotone transformations.
The way we calcula...
43,950 | Log transformation and correlation | Spearman's correlation coefficient uses ranks rather than the actual data values. Using Spearman's correlation therefore already involves a transformation, as you are transforming the data values into ranks.
A log transformation will change the values of the variable, but it won't change the ranking of the values re...
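The rank argument above can be verified directly on simulated positive data:

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(3)
x = rng.lognormal(size=300)
y = x * rng.lognormal(sigma=0.5, size=300)  # positive, noisily monotone in x

rho_raw, _ = spearmanr(x, y)
rho_log, _ = spearmanr(np.log(x), np.log(y))
# log is monotone increasing, so the ranks, and hence Spearman, are unchanged

r_raw, _ = pearsonr(x, y)
r_log, _ = pearsonr(np.log(x), np.log(y))
# Pearson, by contrast, generally does change under the transformation
```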
43,951 | Machine learning methods which take time-to-event into account? | It is a mistake to assume that the Cox proportional hazards model makes simple assumptions such as linearity. All regression models have been extended for decades using regression splines, tensor interaction splines, and other approaches to allow for great flexibility in the low- to mid-dimensional case. As others ha...
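The building block these extensions share is the Cox partial likelihood. A minimal pure-NumPy version, assuming no tied event times and using simulated data, might look like this:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=(n, 1))
beta_true = np.array([0.8])

# exponential event times with hazard exp(x @ beta), plus independent censoring
t_event = rng.exponential(1 / np.exp(x @ beta_true))
t_cens = rng.exponential(2.0, size=n)
event = (t_event <= t_cens).astype(float)  # 1 = observed event, 0 = censored
time = np.minimum(t_event, t_cens)

order = np.argsort(time)  # ascending; the risk set at t_i is everyone with t_j >= t_i
x_s, e_s = x[order], event[order]

def neg_partial_loglik(beta):
    eta = x_s @ beta
    # suffix sums give the sum of exp(eta) over each risk set {j : t_j >= t_i}
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -np.sum(e_s * (eta - log_risk))

beta_hat = minimize(neg_partial_loglik, x0=np.zeros(1),
                    method="L-BFGS-B", bounds=[(-5.0, 5.0)]).x  # close to beta_true
```

The spline and penalization extensions discussed above change the design matrix or add a penalty term, but the partial likelihood itself stays the same.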
43,952 | Machine learning methods which take time-to-event into account? | You might be interested in Random Survival Forests and the corresponding R package randomForestSRC:
http://www.ccs.miami.edu/~hishwaran/papers/randomSurvivalForests.pdf
https://cran.r-project.org/web/packages/randomForestSRC/
I believe the main limitation of the approach is that it doesn't deal with time varying pr...
43,953 | Machine learning methods which takes time-to-event into account? | The majority of the linear models based on the likelihood function have extensions to Cox regression. For example, penalized regression models (lasso, ridge regression, elastic net) or partial least squares. On the other hand, there are extensions from classification trees to survival trees. This means all the ensem... | Machine learning methods which takes time-to-event into account? | The majority of the linear models based on the likelihood function have extensions to Cox regression. For example, penalized regression models (lasso, ridge regression, elastic net) or partial leas
The majority of the linear models based on the likelihood function have extensions to Cox regression. For example, penalized regression models (lasso, ridge regression, elastic net) or partial least squares. On the other hand, there are extensions from the... | Machine learning methods which takes time-to-event into account?
The majority of the linear models based on the likelihood function have extensions to Cox regression. For example, penalized regression models (lasso, ridge regression, elastic net) or partial leas
43,954 | Machine learning methods which takes time-to-event into account? | Any linear survival analysis method can be straightforwardly kernelised to generate a non-linear equivalent. I did something like this a while back for modelling the time-to-growth of microbial pathogens from spores in foods.
G. C. Cawley, N. L. C. Talbot, G. J. Janacek and M. W. Peck, Sparse Bayesian kernel survival ... | Machine learning methods which takes time-to-event into account? | Any linear survival analysis method can be straightforwardly kernelised to generate a non-linear equivalent. I did something like this a while back for modelling the time-to-growth of microbial patho | Machine learning methods which takes time-to-event into account?
Any linear survival analysis method can be straightforwardly kernelised to generate a non-linear equivalent. I did something like this a while back for modelling the time-to-growth of microbial pathogens from spores in foods.
G. C. Cawley, N. L. C. Talbo... | Machine learning methods which takes time-to-event into account?
Any linear survival analysis method can be straightforwardly kernelised to generate a non-linear equivalent. I did something like this a while back for modelling the time-to-growth of microbial patho |
43,955 | Standardizing a Standard normal Variable | If $X_i$ are iid Normal(0,1), then a sample from it won't have sample mean 0 or sample standard deviation 1 just due to random variation.
Now consider what happens when we do $Z=\frac{X-\overline{X}}{s_X}$
While we do now have sample mean 0 and sample standard deviation 1, what we don't have is $Z$ being normally distr... | Standardizing a Standard normal Variable | If $X_i$ are iid Normal(0,1), then a sample from it won't have sample mean 0 or sample standard deviation 1 just due to random variation.
Now consider what happens when we do $Z=\frac{X-\overline{X}}{ | Standardizing a Standard normal Variable
If $X_i$ are iid Normal(0,1), then a sample from it won't have sample mean 0 or sample standard deviation 1 just due to random variation.
Now consider what happens when we do $Z=\frac{X-\overline{X}}{s_X}$
While we do now have sample mean 0 and sample standard deviation 1, what ... | Standardizing a Standard normal Variable
If $X_i$ are iid Normal(0,1), then a sample from it won't have sample mean 0 or sample standard deviation 1 just due to random variation.
Now consider what happens when we do $Z=\frac{X-\overline{X}}{ |
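One way to see concretely that the standardized values cannot be standard normal (a pure-Python sketch, standard library only, seed arbitrary): with a sample of size $n=2$, the two standardized values are always $\pm 1/\sqrt{2} \approx \pm 0.7071$, no matter what was drawn.

```python
import random
import statistics

random.seed(7)
for _ in range(3):
    x = [random.gauss(0.0, 1.0) for _ in range(2)]   # sample of size n = 2
    m = statistics.mean(x)
    s = statistics.stdev(x)                          # sample sd with divisor n-1
    z = sorted(round((xi - m) / s, 4) for xi in x)
    print(z)  # always [-0.7071, 0.7071], whatever the draw was
```

A degenerate distribution on two points is about as far from Gaussian as it gets, even though its sample mean is 0 and sample sd is 1.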
43,956 | Standardizing a Standard normal Variable | We have that
$$X_i^* = \frac{X_i}{s} - \frac{\bar X}{s}$$
The sample variance from a normal sample follows an exact distribution,
$$(n-1)s^2/\sigma^2\sim\chi^2_{n-1} \implies s^2 \sim \frac{1}{n-1}\chi^2_{n-1} \implies s \sim \frac{1}{\sqrt{n-1}}\chi_{n-1}$$
i.e. $s$ follows the square root of a chi-square divided by ... | Standardizing a Standard normal Variable | We have that
$$X_i^* = \frac{X_i}{s} - \frac{\bar X}{s}$$
The sample variance from a normal sample follows an exact distribution,
$$(n-1)s^2/\sigma^2\sim\chi^2_{n-1} \implies s^2 \sim \frac{1}{n-1}\c | Standardizing a Standard normal Variable
We have that
$$X_i^* = \frac{X_i}{s} - \frac{\bar X}{s}$$
The sample variance from a normal sample follows an exact distribution,
$$(n-1)s^2/\sigma^2\sim\chi^2_{n-1} \implies s^2 \sim \frac{1}{n-1}\chi^2_{n-1} \implies s \sim \frac{1}{\sqrt{n-1}}\chi_{n-1}$$
i.e. $s$ follows th... | Standardizing a Standard normal Variable
We have that
$$X_i^* = \frac{X_i}{s} - \frac{\bar X}{s}$$
The sample variance from a normal sample follows an exact distribution,
$$(n-1)s^2/\sigma^2\sim\chi^2_{n-1} \implies s^2 \sim \frac{1}{n-1}\c |
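A quick empirical check of the distributional claim (Python standard library only; the seed, sample size, and replication count are arbitrary): simulated draws of $(n-1)s^2$ with $\sigma=1$ should have mean about $n-1$ and variance about $2(n-1)$, as a $\chi^2_{n-1}$ variable does.

```python
import random
import statistics

random.seed(0)
n, reps = 5, 20000
draws = []
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    draws.append((n - 1) * statistics.variance(x))  # (n-1)s^2 with sigma = 1

m = statistics.mean(draws)
v = statistics.variance(draws)
# chi-square with n-1 df has mean n-1 and variance 2(n-1)
print(abs(m - (n - 1)) < 0.2, abs(v - 2 * (n - 1)) < 0.5)
```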
43,957 | Standardizing a Standard normal Variable | The original standard normal variables have TRUE mean 0 (E(X) = 0) and are independent. By taking a set of them and dividing them by their standard deviation, you DO standardize them, but the result, ironically, isn't standard normal. They are dependent (because they share the denominator) and actually have t-distrib... | Standardizing a Standard normal Variable | The original standard normal variables have TRUE mean 0 (E(X) = 0) and are independent. By taking a set of them and dividing them by their standard deviation, you DO standardize them, but the result, | Standardizing a Standard normal Variable
The original standard normal variables have TRUE mean 0 (E(X) = 0) and are independent. By taking a set of them and dividing them by their standard deviation, you DO standardize them, but the result, ironically, isn't standard normal. They are dependent (because they share the... | Standardizing a Standard normal Variable
The original standard normal variables have TRUE mean 0 (E(X) = 0) and are independent. By taking a set of them and dividing them by their standard deviation, you DO standardize them, but the result, |
43,958 | Standardizing a Standard normal Variable | Just did some experiments. It seems that after scaling again, you get closer to data with $\mu=0$ and $\sigma=1$.
set.seed(123)
x <- rnorm(1000,0,1)
mean(x)
sd(x)
y<-scale(x)
mean(y)
sd(y)
Results:
> mean(x)
[1] 0.01612787
> sd(x)
[1] 0.991695
> y<-scale(x)
> mean(y)
[1] -8.235085e-18
> sd(y)
[1] 1 | Standardizing a Standard normal Variable | Just did some experiments. It seems that after scaling again, you get closer to data with $\mu=0$ and $\sigma=1$.
set.seed(123)
x <- rnorm(1000,0,1)
mean(x)
sd(x)
y<-scale(x)
mean(y)
sd(y)
Results | Standardizing a Standard normal Variable
Just did some experiments. It seems that after scaling again, you get closer to data with $\mu=0$ and $\sigma=1$.
set.seed(123)
x <- rnorm(1000,0,1)
mean(x)
sd(x)
y<-scale(x)
mean(y)
sd(y)
Results:
> mean(x)
[1] 0.01612787
> sd(x)
[1] 0.991695
> y<-scale(x)
> mean(y)
[1]... | Standardizing a Standard normal Variable
Just did some experiments. It seems that after scaling again, you get closer to data with $\mu=0$ and $\sigma=1$.
set.seed(123)
x <- rnorm(1000,0,1)
mean(x)
sd(x)
y<-scale(x)
mean(y)
sd(y)
Results |
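To see the same thing without R, here is a pure-Python sketch of the experiment (standard library only; the seed and sample size are arbitrary): the raw sample mean and sd are only close to 0 and 1, while rescaling makes them exact up to float rounding.

```python
import random
import statistics

random.seed(123)
x = [random.gauss(0.0, 1.0) for _ in range(1000)]

m = statistics.mean(x)    # close to, but not exactly, 0
s = statistics.stdev(x)   # close to, but not exactly, 1

# rescaling forces the sample mean to 0 and the sample sd to 1 (up to rounding)
y = [(xi - m) / s for xi in x]
mean_ok = abs(statistics.mean(y)) < 1e-12
sd_ok = abs(statistics.stdev(y) - 1.0) < 1e-12
print(mean_ok, sd_ok)
```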
43,959 | Standardizing a Standard normal Variable | Intuitive proof by counterexample
There are already some general answers that cover the question, but personally I find the following reasoning easiest to follow.
Suppose your sample size is 1.
Your definition of $X^*$ is as follows
$$X^*=\frac{x-\bar x}{sd(x)}$$
Because the sample size is 1, we have $\bar x = x$, so... | Standardizing a Standard normal Variable | Intuitive proof by counterexample
There are already some general answers that cover the question, but personally I find the following reasoning easiest to follow.
Suppose your sample size is 1.
Your | Standardizing a Standard normal Variable
Intuitive proof by counterexample
There are already some general answers that cover the question, but personally I find the following reasoning easiest to follow.
Suppose your sample size is 1.
Your definition of $X^*$ is as follows
$$X^*=\frac{x-\bar x}{sd(x)}$$
Because the s... | Standardizing a Standard normal Variable
Intuitive proof by counterexample
There are already some general answers that cover the question, but personally I find the following reasoning easiest to follow.
Suppose your sample size is 1.
Your |
43,960 | Proving Causality with t-test/regression | @John is correct, but, in addition you cannot prove causation with any experimental design: You can only have weaker or stronger evidence of causality.
In any study, but especially in an observational study, evidence for causality is increased by including relevant covariates, giving a scientifically plausible causal p... | Proving Causality with t-test/regression | @John is correct, but, in addition you cannot prove causation with any experimental design: You can only have weaker or stronger evidence of causality.
In any study, but especially in an observational | Proving Causality with t-test/regression
@John is correct, but, in addition you cannot prove causation with any experimental design: You can only have weaker or stronger evidence of causality.
In any study, but especially in an observational study, evidence for causality is increased by including relevant covariates, g... | Proving Causality with t-test/regression
@John is correct, but, in addition you cannot prove causation with any experimental design: You can only have weaker or stronger evidence of causality.
In any study, but especially in an observational |
43,961 | Proving Causality with t-test/regression | Causal relationships are established by experimental design, not a particular statistical test. You could use a correlation as your statistical test and demonstrate that the high quality true experiment you conducted strongly implies causation. You could perform a t-test as your statistic and show a relationship in yo... | Proving Causality with t-test/regression | Causal relationships are established by experimental design, not a particular statistical test. You could use a correlation as your statistical test and demonstrate that the high quality true experim | Proving Causality with t-test/regression
Causal relationships are established by experimental design, not a particular statistical test. You could use a correlation as your statistical test and demonstrate that the high quality true experiment you conducted strongly implies causation. You could perform a t-test as you... | Proving Causality with t-test/regression
Causal relationships are established by experimental design, not a particular statistical test. You could use a correlation as your statistical test and demonstrate that the high quality true experim |
43,962 | Proving Causality with t-test/regression | Like everyone else said, math alone cannot determine causality.
A solid way to find causality is to first develop your causal theory.
Once you have a causal theory you can group all the known variables. Having all the known variables will allow you to compare them all through multiple tests.
Then make a list of potenti... | Proving Causality with t-test/regression | Like everyone else said, math alone cannot determine causality.
A solid way to find causality is to first develop your causal theory.
Once you have a causal theory you can group all the known variable | Proving Causality with t-test/regression
Like everyone else said, math alone cannot determine causality.
A solid way to find causality is to first develop your causal theory.
Once you have a causal theory you can group all the known variables. Having all the known variables will allow you to compare them all through mu... | Proving Causality with t-test/regression
Like everyone else said, math alone cannot determine causality.
A solid way to find causality is to first develop your causal theory.
Once you have a causal theory you can group all the known variable |
43,963 | How to perform a non-equi-spaced histogram in R? | You will notice that there is an argument breaks as a part of the function hist(), with the default set to "Sturges". You can also set your own breakpoints and use them instead of the default sturges algorithm as follows:
breakpoints <- c(0, 1, 10, 11, 12)
hist(data, breaks=breakpoints)
If you read all the way down... | How to perform a non-equi-spaced histogram in R? | You will notice that there is an argument breaks as a part of the function hist(), with the default set to "Sturges". You can also set your own breakpoints and use them instead of the default sturges | How to perform a non-equi-spaced histogram in R?
You will notice that there is an argument breaks as a part of the function hist(), with the default set to "Sturges". You can also set your own breakpoints and use them instead of the default sturges algorithm as follows:
breakpoints <- c(0, 1, 10, 11, 12)
hist(data, ... | How to perform a non-equi-spaced histogram in R?
You will notice that there is an argument breaks as a part of the function hist(), with the default set to "Sturges". You can also set your own breakpoints and use them instead of the default sturges |
43,964 | How to perform a non-equi-spaced histogram in R? | Denby and Mallows 2009 (ungated link) provide a nice approach called the 'diagonally cut histogram', along with a function 'dhist' in their supplementary material (available at the above link).
Here is the abstract:
When constructing a histogram, it is common to make all bars the same
width. One could also choose to m... | How to perform a non-equi-spaced histogram in R? | Denby and Mallows 2009 (ungated link) provide a nice approach called the 'diagonally cut histogram', along with a function 'dhist' in their supplementary material (available at the above link).
Here is | How to perform a non-equi-spaced histogram in R?
Denby and Mallows 2009 (ungated link) provide a nice approach called the 'diagonally cut histogram', along with a function 'dhist' in their supplementary material (available at the above link).
Here is the abstract:
When constructing a histogram, it is common to make all ... | How to perform a non-equi-spaced histogram in R?
Denby and Mallows 2009 (ungated link) provide a nice approach called the 'diagonally cut histogram', along with a function 'dhist' in their supplementary material (available at the above link).
Here is |
43,965 | How to perform a non-equi-spaced histogram in R? | One easy solution would be to use quantiles as breaks:
x <- rnorm(100)
hist(x)
hist(x, breaks = quantile(x, 0:10 / 10)) | How to perform a non-equi-spaced histogram in R? | One easy solution would be to use quantiles as breaks:
x <- rnorm(100)
hist(x)
hist(x, breaks = quantile(x, 0:10 / 10)) | How to perform a non-equi-spaced histogram in R?
One easy solution would be to use quantiles as breaks:
x <- rnorm(100)
hist(x)
hist(x, breaks = quantile(x, 0:10 / 10)) | How to perform a non-equi-spaced histogram in R?
One easy solution would be to use quantiles as breaks:
x <- rnorm(100)
hist(x)
hist(x, breaks = quantile(x, 0:10 / 10)) |
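The same equal-count idea in stdlib Python (a sketch, not a drop-in replacement for R's hist): quantile-based edges give unequal-width bins that each hold roughly the same number of points.

```python
import random
import statistics

random.seed(42)
x = sorted(random.gauss(0.0, 1.0) for _ in range(100))

cuts = statistics.quantiles(x, n=10)   # 9 interior decile cut points
edges = [x[0]] + cuts + [x[-1]]        # 11 edges -> 10 unequal-width bins

counts = []
for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
    if i == len(edges) - 2:            # close the last bin on the right
        counts.append(sum(lo <= v <= hi for v in x))
    else:
        counts.append(sum(lo <= v < hi for v in x))

# each bin holds about 10 of the 100 points
print(sum(counts), all(8 <= c <= 12 for c in counts))
```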
43,966 | No valid coefficients for NegBin regression | Before jumping to a model that includes all interactions, you can try adding only the 2-way interactions:
model.nb.intr <- glm.nb(Response ~ (Pred1 + Pred2 + Pred3 + Pred4 + Pred5)^2 - 1, data=d) | No valid coefficients for NegBin regression | Before jumping to a model that includes all interactions, you can try adding only the 2-way interactions:
model.nb.intr <- glm.nb(Response ~ (Pred1 + Pred2 + Pred3 + Pred4 + Pred5)^2 - 1, data=d) | No valid coefficients for NegBin regression
Before jumping to a model that includes all interactions, you can try adding only the 2-way interactions:
model.nb.intr <- glm.nb(Response ~ (Pred1 + Pred2 + Pred3 + Pred4 + Pred5)^2 - 1, data=d) | No valid coefficients for NegBin regression
Before jumping to a model that includes all interactions, you can try adding only the 2-way interactions:
model.nb.intr <- glm.nb(Response ~ (Pred1 + Pred2 + Pred3 + Pred4 + Pred5)^2 - 1, data=d) |
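For readers unfamiliar with the formula operator: `(Pred1 + ... + Pred5)^2` expands to all main effects plus all pairwise interactions. A small Python sketch of the implied term count (predictor names taken from the formula above):

```python
from itertools import combinations

preds = ["Pred1", "Pred2", "Pred3", "Pred4", "Pred5"]
# (Pred1 + ... + Pred5)^2 expands to every main effect plus every pairwise interaction
terms = preds + [f"{a}:{b}" for a, b in combinations(preds, 2)]
print(len(terms))  # 5 main effects + C(5,2) = 10 interactions -> 15 terms
```

Fifteen coefficients is far more tractable than the 31 implied by a full five-way interaction model.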
43,967 | No valid coefficients for NegBin regression | Your model is too complex for the computer to work out some reasonable starting values that do not lead to infinite deviance when doing the glm.fit iterations.
Have you got enough data to estimate all these interactions? Do you think it is plausible for all predictors to interact with each other? If not, think about wh... | No valid coefficients for NegBin regression | Your model is too complex for the computer to work out some reasonable starting values that do not lead to infinite deviance when doing the glm.fit iterations.
Have you got enough data to estimate all | No valid coefficients for NegBin regression
Your model is too complex for the computer to work out some reasonable starting values that do not lead to infinite deviance when doing the glm.fit iterations.
Have you got enough data to estimate all these interactions? Do you think it is plausible for all predictors to inte... | No valid coefficients for NegBin regression
Your model is too complex for the computer to work out some reasonable starting values that do not lead to infinite deviance when doing the glm.fit iterations.
Have you got enough data to estimate all |
43,968 | No valid coefficients for NegBin regression | If you can't get satisfaction with R you can fit this model and more complicated
ones with AD Model Builder, which is free software available at http://admb-project.org. ADMB permits you to model the overdispersion in a variety of ways,
rather than being confined to the GLM paradigm. I can advise you if you are intere... | No valid coefficients for NegBin regression | If you can't get satisfaction with R you can fit this model and more complicated
ones with AD Model Builder which is free software available at http://admb-project.org. ADMB permits you to model the | No valid coefficients for NegBin regression
If you can't get satisfaction with R you can fit this model and more complicated
ones with AD Model Builder, which is free software available at http://admb-project.org. ADMB permits you to model the overdispersion in a variety of ways,
rather than being confined to the GLM ... | No valid coefficients for NegBin regression
If you can't get satisfaction with R you can fit this model and more complicated
ones with AD Model Builder which is free software available at http://admb-project.org. ADMB permits you to model the |
43,969 | Calculating False Acceptance Rate for a Gaussian Distribution of scores | Just to add to other responses, here is a brief recap' on terminology.
For any biometric or classification system, the main performance indicator is the receiver operating characteristic (ROC) curve, which is a plot of true acceptance rate (TAR=1-FRR, the false rejection rate) against false acceptance rate (FAR), which... | Calculating False Acceptance Rate for a Gaussian Distribution of scores | Just to add to other responses, here is a brief recap' on terminology.
For any biometric or classification system, the main performance indicator is the receiver operating characteristic (ROC) curve, | Calculating False Acceptance Rate for a Gaussian Distribution of scores
Just to add to other responses, here is a brief recap' on terminology.
For any biometric or classification system, the main performance indicator is the receiver operating characteristic (ROC) curve, which is a plot of true acceptance rate (TAR=1-F... | Calculating False Acceptance Rate for a Gaussian Distribution of scores
Just to add to other responses, here is a brief recap' on terminology.
For any biometric or classification system, the main performance indicator is the receiver operating characteristic (ROC) curve, |
43,970 | Calculating False Acceptance Rate for a Gaussian Distribution of scores | I'm not certain. I'm curious as to the other responses you get. However, I think you'll need to clarify a bit:
Does your Gaussian distribution represent the scores for a population of individuals which should be rejected by your biometric system?
If so, then I think you simply need to compute a cumulative probability... | Calculating False Acceptance Rate for a Gaussian Distribution of scores | I'm not certain. I'm curious as to the other responses you get. However, I think you'll need to clarify a bit:
Does your Gaussian distribution represent the scores for a population of individuals wh | Calculating False Acceptance Rate for a Gaussian Distribution of scores
I'm not certain. I'm curious as to the other responses you get. However, I think you'll need to clarify a bit:
Does your Gaussian distribution represent the scores for a population of individuals which should be rejected by your biometric system?... | Calculating False Acceptance Rate for a Gaussian Distribution of scores
I'm not certain. I'm curious as to the other responses you get. However, I think you'll need to clarify a bit:
Does your Gaussian distribution represent the scores for a population of individuals wh |
43,971 | Calculating False Acceptance Rate for a Gaussian Distribution of scores | It sounds as though the following simplified situation may capture the essence of your problem:
There are two populations of individuals: A = acceptable individuals and U = unacceptables. Associated
with each individual is a 'score' $X$. Suppose in each of the two populations, the scores have
Gaussian distributions, wher... | Calculating False Acceptance Rate for a Gaussian Distribution of scores | It sounds as though the following simplified situation may capture the essence of your problem:
There are two populations of individuals: A = acceptable individuals and U = unacceptables. Associated
with | Calculating False Acceptance Rate for a Gaussian Distribution of scores
It sounds as though the following simplified situation may capture the essence of your problem:
There are two populations of individuals: A = acceptable individuals and U = unacceptables. Associated
with each individual is a 'score' $X$. Suppose in ea... | Calculating False Acceptance Rate for a Gaussian Distribution of scores
It sounds as though the following simplified situation may capture the essence of your problem:
There are two populations of individuals: A = acceptable individuals and U = unacceptables. Associated
with |
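As a concrete illustration of the tail-probability computation (all numbers here are hypothetical, not from any real biometric system): if impostor scores are Gaussian, the FAR at a given threshold $t$ is the upper-tail probability $1-\Phi((t-\mu)/\sigma)$, computable with nothing more than the error function.

```python
import math

def normal_cdf(x, mu, sigma):
    # Phi((x - mu) / sigma) via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# hypothetical impostor-score distribution and decision threshold
mu_imp, sd_imp = 40.0, 10.0
threshold = 60.0

# FAR = P(impostor score exceeds the threshold) = upper tail of the Gaussian
far = 1.0 - normal_cdf(threshold, mu_imp, sd_imp)
print(round(far, 5))  # threshold is 2 sd above the impostor mean -> about 0.02275
```

Sweeping the threshold and plotting TAR against FAR traces out the ROC curve described above.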
43,972 | Machine Learning conferences? [closed] | ICML (International Conference on Machine Learning)
ICML 2010 | Machine Learning conferences? [closed] | ICML (International Conference on Machine Learning)
ICML 2010 | Machine Learning conferences? [closed]
ICML (International Conference on Machine Learning)
ICML 2010 | Machine Learning conferences? [closed]
ICML (International Conference on Machine Learning)
ICML 2010 |
43,973 | Machine Learning conferences? [closed] | NIPS (Neural Information Processing Systems). It's actually an intersection of machine learning, and application areas such as speech/language, vision, neuro-science, and other related areas. | Machine Learning conferences? [closed] | NIPS (Neural Information Processing Systems). It's actually an intersection of machine learning, and application areas such as speech/language, vision, neuro-science, and other related areas. | Machine Learning conferences? [closed]
NIPS (Neural Information Processing Systems). It's actually an intersection of machine learning, and application areas such as speech/language, vision, neuro-science, and other related areas. | Machine Learning conferences? [closed]
NIPS (Neural Information Processing Systems). It's actually an intersection of machine learning, and application areas such as speech/language, vision, neuro-science, and other related areas. |
43,974 | Machine Learning conferences? [closed] | AISTATS -- Conference on Artificial Intelligence and Statistics
Similar flavor of papers to NIPS, although papers may be of slightly lower quality. It is much smaller than ICML or NIPS, which allows people to have deeper interactions. | Machine Learning conferences? [closed] | AISTATS -- Conference on Artificial Intelligence and Statistics
Similar flavor of papers to NIPS, although papers may be of slightly lower quality. It is much smaller than ICML or NIPS, which allows p | Machine Learning conferences? [closed]
AISTATS -- Conference on Artificial Intelligence and Statistics
Similar flavor of papers to NIPS, although papers may be of slightly lower quality. It is much smaller than ICML or NIPS, which allows people to have deeper interactions. | Machine Learning conferences? [closed]
AISTATS -- Conference on Artificial Intelligence and Statistics
Similar flavor of papers to NIPS, although papers may be of slightly lower quality. It is much smaller than ICML or NIPS, which allows p |
43,975 | Machine Learning conferences? [closed] | AAAI (in Atlanta this year) | Machine Learning conferences? [closed] | AAAI (in Atlanta this year) | Machine Learning conferences? [closed]
AAAI (in Atlanta this year) | Machine Learning conferences? [closed]
AAAI (in Atlanta this year) |
43,976 | Machine Learning conferences? [closed] | Artificial Intelligence In Medicine (AIME), odd years starting from 1985. | Machine Learning conferences? [closed] | Artificial Intelligence In Medicine (AIME), odd years starting from 1985. | Machine Learning conferences? [closed]
Artificial Intelligence In Medicine (AIME), odd years starting from 1985. | Machine Learning conferences? [closed]
Artificial Intelligence In Medicine (AIME), odd years starting from 1985. |
43,977 | Machine Learning conferences? [closed] | European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
To see the type of papers presented at the conference, see the videos of the last conference on videolectures.net | Machine Learning conferences? [closed] | European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
To see the type of papers presented at the conference see the videos of the last co | Machine Learning conferences? [closed]
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
To see the type of papers presented at the conference, see the videos of the last conference on videolectures.net | Machine Learning conferences? [closed]
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
To see the type of papers presented at the conference see the videos of the last co |
43,978 | Machine Learning conferences? [closed] | One of the only machine learning conferences for those in Australia and New Zealand is:
23rd Australasian Joint Conference on Artificial Intelligence
It's held in Adelaide this year. | Machine Learning conferences? [closed] | One of the only machine learning conferences for those in Australia and New Zealand is:
23rd Australasian Joint Conference on Artificial Intelligence
It's held in Adelaide this year. | Machine Learning conferences? [closed]
One of the only machine learning conferences for those in Australia and New Zealand is:
23rd Australasian Joint Conference on Artificial Intelligence
It's held in Adelaide this year. | Machine Learning conferences? [closed]
One of the only machine learning conferences for those in Australia and New Zealand is:
23rd Australasian Joint Conference on Artificial Intelligence
It's held in Adelaide this year. |
43,979 | Machine Learning conferences? [closed] | European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning | Machine Learning conferences? [closed] | European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning | Machine Learning conferences? [closed]
European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning | Machine Learning conferences? [closed]
European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
43,980 | Machine Learning conferences? [closed] | IEEE World Congress on Computational Intelligence. Note that it is the link for 2010 conference. | Machine Learning conferences? [closed] | IEEE World Congress on Computational Intelligence. Note that it is the link for 2010 conference. | Machine Learning conferences? [closed]
IEEE World Congress on Computational Intelligence. Note that it is the link for 2010 conference. | Machine Learning conferences? [closed]
IEEE World Congress on Computational Intelligence. Note that it is the link for 2010 conference. |
43,981 | Machine Learning conferences? [closed] | International Conference on Artificial Neural Networks. Note that the link is for the 2010 conference. | Machine Learning conferences? [closed] | International Conference on Artificial Neural Networks. Note that the link is for the 2010 conference. | Machine Learning conferences? [closed]
International Conference on Artificial Neural Networks. Note that the link is for the 2010 conference. | Machine Learning conferences? [closed]
International Conference on Artificial Neural Networks. Note that the link is for the 2010 conference. |
43,982 | Machine Learning conferences? [closed] | International Conference on Robotics and Automation
ICRA2015
ICRA2016 | Machine Learning conferences? [closed] | International Conference on Robotics and Automation
ICRA2015
ICRA2016 | Machine Learning conferences? [closed]
International Conference on Robotics and Automation
ICRA2015
ICRA2016 | Machine Learning conferences? [closed]
International Conference on Robotics and Automation
ICRA2015
ICRA2016 |
43,983 | Machine Learning conferences? [closed] | IEEE/RSJ International Conference on Intelligent Robots and Systems
IROS2015
IROS2016 | Machine Learning conferences? [closed] | IEEE/RSJ International Conference on Intelligent Robots and Systems
IROS2015
IROS2016 | Machine Learning conferences? [closed]
IEEE/RSJ International Conference on Intelligent Robots and Systems
IROS2015
IROS2016 | Machine Learning conferences? [closed]
IEEE/RSJ International Conference on Intelligent Robots and Systems
IROS2015
IROS2016 |
43,984 | P-value adjustment for a single test with low sample size | No, for a single test you have a single p-value, thus no correction method is needed. Indeed, multiple comparisons problem arises when you perform many statistical tests or when you build many confidence intervals on the same data. Also, the small sample size issue is irrelevant to the multiplicity issue.
If you are wo... | P-value adjustment for a single test with low sample size | No, for a single test you have a single p-value, thus no correction method is needed. Indeed, multiple comparisons problem arises when you perform many statistical tests or when you build many confide | P-value adjustment for a single test with low sample size
No, for a single test you have a single p-value, thus no correction method is needed. Indeed, multiple comparisons problem arises when you perform many statistical tests or when you build many confidence intervals on the same data. Also, the small sample size is... | P-value adjustment for a single test with low sample size
No, for a single test you have a single p-value, thus no correction method is needed. Indeed, multiple comparisons problem arises when you perform many statistical tests or when you build many confide |
43,985 | P-value adjustment for a single test with low sample size | With a small sample size, there are legitimate concerns.
What kind of power do you have to reject a false null hypothesis?
If your data lack normality, do you have enough data for the t-test to be robust to the deviation from the assumed normality?
The latter feeds into the former, as deviations from normality tend t... | P-value adjustment for a single test with low sample size | With a small sample size, there are legitimate concerns.
What kind of power do you have to reject a false null hypothesis?
If your data lack normality, do you have enough data for the t-test to be r | P-value adjustment for a single test with low sample size
With a small sample size, there are legitimate concerns.
What kind of power do you have to reject a false null hypothesis?
If your data lack normality, do you have enough data for the t-test to be robust to the deviation to the assumed normality?
The latter ... | P-value adjustment for a single test with low sample size
With a small sample size, there are legitimate concerns.
What kind of power do you have to reject a false null hypothesis?
If your data lack normality, do you have enough data for the t-test to be r |
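The power concern can be made concrete with a quick simulation (a sketch; the 0.5 SD effect size, n = 10 per group, and the two-sample t-test are illustrative choices, not from the question):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, effect, reps = 10, 0.5, 4000

rejections = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)       # control group
    b = rng.normal(effect, 1.0, n)    # treatment group, true shift of 0.5 SD
    _, p = stats.ttest_ind(a, b)
    rejections += (p < 0.05)

power = rejections / reps
# With n = 10 per group and a 0.5 SD effect, power is only around 0.18,
# so a real effect of this size will usually go undetected.
print(power)
```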
43,986 | P-value adjustment for a single test with low sample size | (This answer ignores the issue with low sample size.)
I'd like to add a bit of nuance to the answers here, as it's tempting to read them and come away with this rule:
If a single p-value is observed, then correction is unnecessary; the type I error of the testing procedure is not inflated.
Type I error is inflated if...
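One common way a single reported p-value can still come from an inflated procedure is optional stopping: test the data, and collect more if the result is not yet significant. The rough sketch below (number of looks and sample sizes are made up) shows that peeking five times at a nominal 5% level pushes the realised type I error well above 5%, even though only one p-value is ultimately reported.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reps, looks, step = 3000, 5, 20

rejected = 0
for _ in range(reps):
    data = np.empty(0)
    for _ in range(looks):
        # H0 is true: the data are pure noise with mean 0
        data = np.append(data, rng.normal(0.0, 1.0, step))
        _, p = stats.ttest_1samp(data, 0.0)
        if p < 0.05:          # stop and report as soon as p dips below 0.05
            rejected += 1
            break

rate = rejected / reps
# Each look uses a nominal 5% level, but five looks give a realised
# type I error around 0.13-0.14 rather than 0.05.
print(rate)
```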
43,987 | Lasso Regression Assumptions | I will be a contrarian and say that most assumptions do not apply to LASSO regression.
In the classical linear model, those assumptions are used to show that the OLS estimator is the minimum-variance linear unbiased estimator (Gauss-Markov theorem) and to have correct t-stats, F-stats, and confidence intervals.
In LASS...
43,988 | Lasso Regression Assumptions | Normality is not an assumption of linear regression.
Yes, they do. Lasso regression is a linear regression with a penalty term on the magnitude of the coefficients; the penalty term in no way affects the structure of the underlying model (linearity, independence, homoskedasticity) and the assumptions are the same.
43,989 | Lasso Regression Assumptions | Yes, they are valid also for Lasso, Ridge Regression and Elastic Net. I think you are referring to classical linear model (CLM) assumptions for cross-sectional regression:
Linear in Parameters
Random Sampling
No perfect collinearity
Zero Conditional Mean
Homoskedasticity
where the first four are used to establish unb...
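The point that the L1 penalty changes the estimation, not the model, can be seen directly. A sketch with scikit-learn (the simulated data and `alpha=0.2` are illustrative): OLS and Lasso fit the same linear model, but Lasso shrinks the coefficients and typically zeroes out some of the irrelevant ones.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(3)
n, p = 200, 6
X = rng.normal(size=(n, p))
# only the first two predictors matter; the other four are pure noise
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0.0, 1.0, n)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.2).fit(X, y)

# Same linear model; the penalty only shrinks the coefficient vector,
# usually setting some of the noise coefficients exactly to zero.
print(np.round(ols.coef_, 3))
print(np.round(lasso.coef_, 3))
```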
43,990 | Using StandardScaler function of scikit-learn library | The StandardScaler function from the sklearn library actually does not convert a distribution into a Gaussian or Normal distribution. It is used when there are large variations among the distribution values. It simply is a Feature Scaling method used to standardize the distribution making the values lie in the same ran... | Using StandardScaler function of scikit-learn library | The StandardScaler function from the sklearn library actually does not convert a distribution into a Gaussian or Normal distribution. It is used when there are large variations among the distribution | Using StandardScaler function of scikit-learn library
43,991 | Using StandardScaler function of scikit-learn library | Not limited to scikit-learn, standardization does not convert features/variables into a normal distribution. It just subtracts the mean and divides by the standard deviation. The resulting feature will have a mean $0$ and a variance $1$. This has nothing to do with normal distribution.
In essence, the following affine ...
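Both answers are easy to verify: after `StandardScaler` the mean is 0 and the standard deviation is 1, but a skewed distribution stays skewed. A sketch (the exponential data are illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=(10000, 1))   # heavily right-skewed

# standardisation: z = (x - mean(x)) / std(x)
z = StandardScaler().fit_transform(x)

mean, std = z.mean(), z.std()
skew = np.mean((z - mean) ** 3) / std ** 3
# mean ~ 0 and std ~ 1, but the skewness (about 2 for an exponential)
# is untouched: the shape of the distribution does not change.
print(mean, std, skew)
```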
43,992 | Expectation over a max operation | If $\text{max}(\mathbb{E}[X], c) = c$, as $\text{max}(X,c) \geq c$, we have
\begin{align*}
\mathbb{E}[\text{max}(X,c)] &\geq c \\
&\geq \text{max}(\mathbb{E}[X],c)
\end{align*}
When $\text{max}(\mathbb{E}[X],c) = \mathbb{E}[X]$ then again as $\text{max}(X,c) \geq X$ we have
\begin{align*}
\mathbb{E}[\text{max}(X,c)] &\geq \mathbb{E}[X] \\
&\geq \text{max}(\mathbb{E}[X],c)
\end{align*}
43,993 | Expectation over a max operation | Similar to winperikle's answer, just tightening the arguments a bit:
$\max\{X, c\} \geq X$ and $\max\{X, c\} \geq c$. So, by taking expectation, $\text{E}\left(\max\{X, c\}\right) \geq \text{E} X$ and $\text{E}\left(\max\{X, c\}\right) \geq c$. Combining, we get $\text{E}\left(\max\{X, c\}\right) \geq \max \{\text{E} X, c\}$.
43,994 | Expectation over a max operation | The inequality you have asserted is false: A simple counter-example is $X \sim \text{Bin}(2,\tfrac{1}{2})$ and $c=1$, which gives you the expectation:
$$\mathbb{E}(\max(X,c)) = \frac{3}{4} \cdot 1 + \frac{1}{4} \cdot 2 = \frac{5}{4}.$$
For this counter-example we have:
$$\frac{5}{4} = \mathbb{E}(\max(X,c)) > \max(\mathbb{E}(X),c) = 1.$$
43,995 | Expectation over a max operation | Let $X$ be uniform in $(0, 5)$ and $c=2$. Here you have a counterexample, with the two sides of the inequality being $\mathbb{E}(\max(X,c)) = 2.9$ and $\max(\mathbb{E}(X),c) = 2.5$.
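The uniform counterexample can be checked numerically. For $X \sim U(0,5)$ and $c=2$, integrating gives $\mathbb{E}[\max(X,c)] = \tfrac{1}{5}\left(\int_0^2 2\,dx + \int_2^5 x\,dx\right) = 2.9$, while $\max(\mathbb{E}[X],c) = \max(2.5, 2) = 2.5$. A sketch using a dense grid over the support:

```python
import numpy as np

c = 2.0
# dense grid over the support of U(0, 5); the grid mean approximates
# an expectation under the uniform density
x = np.linspace(0.0, 5.0, 1_000_001)

e_max = np.maximum(x, c).mean()   # approximates E[max(X, c)] = 2.9
max_e = max(x.mean(), c)          # max(E[X], c) = max(2.5, 2) = 2.5

# E[max(X, c)] = 2.9 > 2.5 = max(E[X], c)
print(e_max, max_e)
```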
43,996 | Conditional distribution of $\exp(-|x|-|y|-a \cdot |x-y|)$ | Disclaimer: although there is nothing to complain about Ben's answer (!), except maybe that the normalising constant of the conditional is not of direct use, here is what I
wrote while being off-line, so I may as well post it!
The full conditional of $X$ given $Y$ has a density that is proportional to
\begin{align}
f(x\mid y) &\propto \exp(-|x|-a \cdot |x-y|)
\end{align}
43,997 | Conditional distribution of $\exp(-|x|-|y|-a \cdot |x-y|)$ | The conditional density kernels are:
$$\begin{equation} \begin{aligned}
f(x|y) &\propto \exp(-|x|-a \cdot |x-y|), \\[6pt]
f(y|x) &\propto \exp(-|y|-a \cdot |x-y|). \\[6pt]
\end{aligned} \end{equation}$$
The difficulty here is to derive the actual densities that go with these kernels, which takes a bit of algebra. Inte...
43,998 | Conditional distribution of $\exp(-|x|-|y|-a \cdot |x-y|)$ | An alternative to the painful simulation from the exact full conditional distributions is to resort to slice sampling, that is, to express the density in $(x,y)$ as the marginal of a density in $(x,y,u_1,u_2,u_3)$ as follows:
\begin{align*}f(x,y)&\propto\exp(-|x|-|y|-a \cdot |x-y|)\\&=\int_0^\infty\mathbb{I}_{u_1\le\exp(-|x|)}\,\text{d}u_1\int_0^\infty\mathbb{I}_{u_2\le\exp(-|y|)}\,\text{d}u_2\int_0^\infty\mathbb{I}_{u_3\le\exp(-a \cdot |x-y|)}\,\text{d}u_3\end{align*}
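Given the auxiliary uniforms, each of $x$ and $y$ is then uniform on an intersection of two intervals, so the Gibbs-on-the-slices sampler is straightforward to write down. A sketch (with $a=1$; the chain length is arbitrary, and the current point always lies in the intersection, so the intervals are never empty):

```python
import numpy as np

rng = np.random.default_rng(5)
a = 1.0
x, y = 0.0, 0.0
xs = np.empty(20000)

for i in range(xs.size):
    # slice variables: u_j ~ Uniform(0, exp(-term_j)) given the current (x, y)
    L1 = -np.log(rng.uniform(0.0, np.exp(-abs(x))))              # |x| <= L1
    L2 = -np.log(rng.uniform(0.0, np.exp(-abs(y))))              # |y| <= L2
    L3 = -np.log(rng.uniform(0.0, np.exp(-a * abs(x - y)))) / a  # |x-y| <= L3
    # x | u's, y is uniform on [-L1, L1] intersected with [y - L3, y + L3]
    x = rng.uniform(max(-L1, y - L3), min(L1, y + L3))
    # y | u's, x is uniform on [-L2, L2] intersected with [x - L3, x + L3]
    y = rng.uniform(max(-L2, x - L3), min(L2, x + L3))
    xs[i] = x

# the target density is symmetric in x -> -x, so the sample mean of the
# chain should settle near 0
print(xs.mean(), xs.std())
```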
43,999 | What does it mean, when, three standard deviations away from the mean, I land outside of the minimum or maximum value? | “Three st.dev.s include 99.7% of the data”
You need to add some caveats to such a statement.
The 99.7% thing is a fact about normal distributions -- 99.7% of the population values will be within three population standard deviations of the population mean.
In large samples* from a normal distribution, it will usuall...
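Both caveats are easy to demonstrate. For normal samples roughly 99.7% of values fall within three standard deviations of the mean, while for a skewed distribution the mean minus three standard deviations can fall far below the smallest possible value. A sketch (the exponential example is illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Normal data: about 99.7% of values within 3 standard deviations of the mean
z = rng.normal(0.0, 1.0, 200_000)
inside = np.mean(np.abs(z - z.mean()) <= 3 * z.std())

# Skewed data: an Exponential(1) sample has mean ~1 and sd ~1, so
# mean - 3*sd is about -2, far below the minimum possible value of 0
e = rng.exponential(1.0, 200_000)
lower = e.mean() - 3 * e.std()

print(inside, lower, e.min())
```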
44,000 | What does it mean, when, three standard deviations away from the mean, I land outside of the minimum or maximum value? | The short answer is that your sample has not precisely followed a normal distribution, which suggests you might need to re-examine your base assumptions, specifically the assumption that you can apply tools designed for working with a normally distributed population.
Just turn your question the other way round for enlightenm...